Commit Graph

1097 Commits

Author SHA1 Message Date
kk-89
38ed2da2dd Fix typo in lowvram patcher (#3209) 2024-04-05 12:02:13 -04:00
comfyanonymous
1088d1850f Support for CosXL models. 2024-04-05 10:53:41 -04:00
doctorpangloss
3e002b9f72 Fix string joining node, improve model downloading 2024-04-04 23:40:29 -07:00
comfyanonymous
41ed7e85ea Fix object_patches_backup not being the same object across clones. 2024-04-05 00:22:44 -04:00
comfyanonymous
0f5768e038 Fix missing arguments in cfg_function. 2024-04-04 23:38:57 -04:00
comfyanonymous
1f4fc9ea0c Fix issue with get_model_object on patched model. 2024-04-04 23:01:02 -04:00
comfyanonymous
1a0486bb96 Fix model needing to be loaded on GPU to generate the sigmas. 2024-04-04 22:08:49 -04:00
comfyanonymous
c6bd456c45 Make zero denoise a NOP. 2024-04-04 11:41:27 -04:00
comfyanonymous
fcfd2bdf8a Small cleanup. 2024-04-04 11:16:49 -04:00
comfyanonymous
0542088ef8 Refactor sampler code for more advanced sampler nodes part 2. 2024-04-04 01:26:41 -04:00
comfyanonymous
57753c964a Refactor sampling code for more advanced sampler nodes. 2024-04-03 22:09:51 -04:00
comfyanonymous
6c6a39251f Fix saving text encoder in fp8. 2024-04-02 11:46:34 -04:00
doctorpangloss
abb952ad77 Tweak headers to accept default when none are specified 2024-04-01 20:34:55 -07:00
comfyanonymous
e6482fbbfc Refactor calc_cond_uncond_batch into calc_cond_batch.
calc_cond_batch can take an arbitrary amount of cond inputs.

Added a calc_cond_uncond_batch wrapper with a warning so custom nodes
won't break.
2024-04-01 18:07:47 -04:00
comfyanonymous
575acb69e4 IP2P model loading support.
This is the code to load the model and inference it with only a text
prompt. This commit does not contain the nodes to properly use it with an
image input.

This supports both the original SD1 instructpix2pix model and the
diffusers SDXL one.
2024-03-31 03:10:28 -04:00
Benjamin Berman
5208618681 Merge pull request #4 from cjonesuk/patch-1
Allow iteration over folder paths
2024-03-30 14:25:25 -07:00
doctorpangloss
b0ab12bf05 Fix #5: TAESD node was using a bad variable name that shadowed a module in a relative import 2024-03-29 16:28:13 -07:00
doctorpangloss
bd87697fdf ComfyUI Manager now starts successfully, but needs more mitigations:
- /manager/reboot needs to use a different approach to restart the
  currently running Python process.
- runpy should be used for install.py invocations.
2024-03-29 16:25:29 -07:00
doctorpangloss
8f548d4d19 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2024-03-29 13:36:57 -07:00
comfyanonymous
94a5a67c32 Cleanup to support different types of inpaint models. 2024-03-29 14:44:13 -04:00
doctorpangloss
1f705ba9d9 Fix history retrieval bug when accessing a distributed frontend 2024-03-29 10:49:51 -07:00
doctorpangloss
c6f4301e88 Fix model downloader invoking symlink when it should not 2024-03-28 15:45:04 -07:00
comfyanonymous
5d8898c056 Fix some performance issues with weight loading and unloading.
Lower peak memory usage when changing model.

Fix case where model weights would be unloaded and reloaded.
2024-03-28 18:04:42 -04:00
comfyanonymous
327ca1313d Support SDXS 0.9 2024-03-27 23:58:58 -04:00
doctorpangloss
b0be335d59 Improved support for ControlNet workflows with depth
- ComfyUI can now load EXR files.
- There are new arithmetic nodes for floats and integers.
- EXR nodes can load depth maps and be remapped with
  ImageApplyColormap. This allows end users to use ground truth depth
  data from video game engines or 3D graphics tools and recolor it to
  the format expected by depth ControlNets: grayscale inverse depth
  maps and "inferno" colored inverse depth maps.
- Fixed license notes.
- Added an additional known ControlNet model.
- Because CV2 is now used to read OpenEXR files, an environment
  variable must be set early in the application, before CV2 is
  imported (see the sketch after this entry). This file, main_pre, is
  now imported early in more places.
2024-03-26 22:32:15 -07:00
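A minimal sketch of the CV2/EXR note above. The commit does not name the environment variable, so the variable, the helper, and the file path handling below are assumptions for illustration, not the project's actual main_pre code:

```python
# Illustrative only: OpenCV gates EXR reading behind an environment variable
# that must be set before the first `import cv2`, which is why an early
# pre-import module is needed. The variable name here is an assumption.
import os
os.environ.setdefault("OPENCV_IO_ENABLE_OPENEXR", "1")

import cv2
import numpy as np

def load_exr_depth(path: str) -> np.ndarray:
    """Load an EXR depth map as a float32 array (hypothetical helper)."""
    depth = cv2.imread(path, cv2.IMREAD_ANYCOLOR | cv2.IMREAD_ANYDEPTH)
    if depth is None:
        raise FileNotFoundError(path)
    if depth.ndim == 3:
        depth = depth[..., 0]  # keep a single channel for a depth ControlNet
    return depth.astype(np.float32)
```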
comfyanonymous
ae77590b4e dora_scale support for lora file. 2024-03-25 18:09:23 -04:00
comfyanonymous
c6de09b02e Optimize memory unload strategy for better performance. 2024-03-24 02:36:30 -04:00
doctorpangloss
d8846fcb39 Improved testing of API nodes
- dynamicPrompts now set to False by default; CLIPTextEncoder and
  related nodes now have it set to True.
- Fixed return values of API nodes.
2024-03-22 22:04:35 -07:00
doctorpangloss
4cd8f9d2ed Merge with upstream 2024-03-22 14:35:17 -07:00
doctorpangloss
feae8c679b Add nodes to support OpenAPI and similar backend workflows 2024-03-22 14:22:50 -07:00
Christopher Jones
f8120bbd72 Allow iteration over folder paths 2024-03-22 17:01:17 +00:00
doctorpangloss
0db040cc47 Improve API support
- Removed /api/v1/images because you should use your own CDN-style
  image host and /view for maximum compatibility.
- The /api/v1/prompts POST application/json response will now return
  the outputs dictionary (see the sketch after this entry).
- Caching has been removed.
- More tests.
- Subdirectory prefixes are now supported.
- Fixed an issue where a Linux frontend and Windows backend would have
  paths that could not interact with each other correctly.
2024-03-21 16:24:22 -07:00
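A minimal client sketch of the /api/v1/prompts behavior described above, assuming a default local server address; the URL constant, helper name, and prompt payload are placeholders rather than the project's own client code:

```python
# Illustrative client for the endpoint described above; the server address,
# helper name, and timeout are assumptions.
import requests

COMFYUI_URL = "http://127.0.0.1:8188"  # assumed default address

def run_prompt(prompt: dict) -> dict:
    """POST an API-format workflow as JSON and return the outputs dictionary."""
    resp = requests.post(f"{COMFYUI_URL}/api/v1/prompts", json=prompt, timeout=600)
    resp.raise_for_status()
    return resp.json()  # per the commit, the response body is the outputs dict
```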
doctorpangloss
d73b116446 Update OpenAPI spec 2024-03-21 15:16:52 -07:00
doctorpangloss
005e370254 Merge upstream 2024-03-21 13:15:36 -07:00
comfyanonymous
0624838237 Add inverse noise scaling function. 2024-03-21 14:49:11 -04:00
doctorpangloss
59cf9e5d93 Improve distributed testing 2024-03-20 20:43:21 -07:00
comfyanonymous
5d875d77fe Fix regression with lcm not working with batches. 2024-03-20 20:48:54 -04:00
comfyanonymous
4b9005e949 Fix regression with model merging. 2024-03-20 13:56:12 -04:00
comfyanonymous
c18a203a8a Don't unload model weights for non weight patches. 2024-03-20 02:27:58 -04:00
comfyanonymous
150a3e946f Make LCM sampler use the model noise scaling function. 2024-03-20 01:35:59 -04:00
doctorpangloss
3f4049c5f4 Fix Python 3.10 compatibility detail 2024-03-19 10:48:20 -07:00
comfyanonymous
40e124c6be SV3D support. 2024-03-18 16:54:13 -04:00
doctorpangloss
74a9c45395 Fix subpath model downloads 2024-03-18 10:10:34 -07:00
comfyanonymous
cacb022c4a Make saved SD1 checkpoints match the official ones more closely. 2024-03-18 00:26:23 -04:00
comfyanonymous
d7897fff2c Move cascade scale factor from stage_a to latent_formats.py 2024-03-16 14:49:35 -04:00
comfyanonymous
f2fe635c9f SamplerDPMAdaptative node to test the different options. 2024-03-15 22:36:10 -04:00
comfyanonymous
448d9263a2 Fix control loras breaking. 2024-03-14 09:30:21 -04:00
doctorpangloss
a892411cf8 Add known controlnet models and add --disable-known-models to prevent them from appearing or being downloaded 2024-03-13 18:11:16 -07:00
comfyanonymous
db8b59ecff Lower memory usage for loras in lowvram mode at the cost of perf. 2024-03-13 20:07:27 -04:00
doctorpangloss
341c9f2e90 Improvements to node loading, node API, folder paths and progress
- Improve node loading order. It now occurs "as late as possible".
  Configuration should be exposed as per the README.
- Added methods to specify custom folders and models used in examples
  more robustly for custom nodes.
- Downloading models can now be gracefully interrupted.
- Progress notifications are now sent over the network for distributed
  ComfyUI operations.
- Python objects have been moved around to reduce transitive package
  import issues.
2024-03-13 16:14:18 -07:00
doctorpangloss
3ccbda36da Adjust known models 2024-03-12 15:51:57 -07:00
doctorpangloss
e68f8885e3 Improve model downloading 2024-03-12 15:27:08 -07:00
doctorpangloss
93cdef65a4 Merge upstream 2024-03-12 09:49:47 -07:00
comfyanonymous
2a813c3b09 Switch some more prints to logging. 2024-03-11 16:34:58 -04:00
comfyanonymous
0ed72befe1 Change log levels.
Logging level now defaults to info. --verbose sets it to debug.
2024-03-11 13:54:56 -04:00
doctorpangloss
00728eb20f Merge upstream 2024-03-11 09:32:57 -07:00
Benjamin Berman
3c57ef831c Download known models from HuggingFace 2024-03-11 00:15:06 -07:00
comfyanonymous
65397ce601 Replace prints with logging and add --verbose argument. 2024-03-10 12:14:23 -04:00
doctorpangloss
175a50d7ba Improve vanilla node importing 2024-03-08 16:29:48 -08:00
doctorpangloss
c0d9bc0129 Merge with upstream 2024-03-08 15:17:20 -08:00
comfyanonymous
5f60ee246e Support loading the sr cascade controlnet. 2024-03-07 01:22:48 -05:00
comfyanonymous
03e6e81629 Set upscale algorithm to bilinear for stable cascade controlnet. 2024-03-06 02:59:40 -05:00
comfyanonymous
03e83bb5d0 Support stable cascade canny controlnet. 2024-03-06 02:25:42 -05:00
comfyanonymous
10860bcd28 Add compression_ratio to controlnet code. 2024-03-05 15:15:20 -05:00
comfyanonymous
478f71a249 Remove useless check. 2024-03-04 08:51:25 -05:00
comfyanonymous
12c1080ebc Simplify differential diffusion code. 2024-03-03 15:34:42 -05:00
Shiimizu
727021bdea Implement Differential Diffusion (#2876)
* Implement Differential Diffusion

* Cleanup.

* Fix.

* Masks should be applied at full strength.

* Fix colors.

* Register the node.

* Cleaner code.

* Fix issue with getting unipc sampler.

* Adjust thresholds.

* Switch to linear thresholds.

* Only calculate nearest_idx on valid thresholds.
2024-03-03 15:34:13 -05:00
comfyanonymous
1abf8374ec utils.set_attr can now be used to set any attribute.
The old set_attr has been renamed to set_attr_param.
2024-03-02 17:27:23 -05:00
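For illustration, a generic dotted-path setter in the spirit of the change above; this is a sketch, not the project's actual comfy.utils implementation:

```python
# A sketch of a dotted-path set_attr: walk every parent attribute, then set
# the leaf. Works for nested objects, including torch module hierarchies
# whose children are addressable via getattr.
def set_attr(obj, attr_path: str, value):
    """Set `value` at a dotted attribute path, e.g. "diffusion_model.out.0"."""
    *parents, leaf = attr_path.split(".")
    for name in parents:
        obj = getattr(obj, name)
    setattr(obj, leaf, value)
```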
comfyanonymous
dce3555339 Add some tesla pascal GPUs to the fp16 working but slower list. 2024-03-02 17:16:31 -05:00
comfyanonymous
51df846598 Let conditioning specify custom concat conds. 2024-03-02 11:44:06 -05:00
comfyanonymous
9f71e4b62d Let model patches patch sub objects. 2024-03-02 11:43:27 -05:00
comfyanonymous
00425563c0 Cleanup: Use sampling noise scaling function for inpainting. 2024-03-01 14:24:41 -05:00
comfyanonymous
c62e836167 Move noise scaling to object with sampling math. 2024-03-01 12:54:38 -05:00
doctorpangloss
148d57a772 Add extra_model_paths_config to the valid configuration for the comfyui-worker entry point 2024-03-01 08:14:21 -08:00
doctorpangloss
440f72d36f support differential diffusion nodes 2024-03-01 00:43:33 -08:00
doctorpangloss
915f2da874 Merge upstream 2024-02-29 20:48:27 -08:00
doctorpangloss
44882eab0c Improve typing and file path handling 2024-02-29 19:29:38 -08:00
Benjamin Berman
e6623a1359 Fix CLIPLoader node, fix CustomNode typing, improve digest 2024-02-29 15:54:42 -08:00
comfyanonymous
cb7c3a2921 Allow image_only_indicator to be None. 2024-02-29 13:11:30 -05:00
doctorpangloss
bae2068111 Fix issue when custom_nodes directory does not exist 2024-02-28 17:46:23 -08:00
doctorpangloss
9d3eb74796 Fix importing vanilla custom nodes 2024-02-28 14:11:34 -08:00
comfyanonymous
b3e97fc714 Koala 700M and 1B support.
Use the UNET Loader node to load the unet file to use them.
2024-02-28 12:10:11 -05:00
comfyanonymous
37a86e4618 Remove duplicate text_projection key from some saved models. 2024-02-28 03:57:41 -05:00
Benjamin Berman
e0e98a8783 Update distributed_prompt_worker.py 2024-02-27 23:15:40 -08:00
doctorpangloss
d3c13f8172 (Fixed) Move new_updated 2024-02-27 17:01:16 -08:00
doctorpangloss
272e3ee357 (broken) Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2024-02-27 16:58:34 -08:00
comfyanonymous
8daedc5bf2 Auto detect playground v2.5 model. 2024-02-27 18:03:03 -05:00
comfyanonymous
d46583ecec Playground V2.5 support with ModelSamplingContinuousEDM node.
Use ModelSamplingContinuousEDM with edm_playground_v2.5 selected.
2024-02-27 15:12:33 -05:00
comfyanonymous
1e0fcc9a65 Make XL checkpoints save in a more standard format. 2024-02-27 02:07:40 -05:00
comfyanonymous
b416be7d78 Make the text projection saved in the checkpoint the right format. 2024-02-27 01:52:23 -05:00
comfyanonymous
03c47fc0f2 Add a min_length property to tokenizer class. 2024-02-26 21:36:37 -05:00
doctorpangloss
bd5073caf2 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2024-02-26 08:51:07 -08:00
comfyanonymous
8ac69f62e5 Make return_projected_pooled settable from the __init__ 2024-02-25 14:49:13 -05:00
comfyanonymous
ca7c310a0e Support loading old CLIP models saved with CLIPSave. 2024-02-25 08:29:12 -05:00
comfyanonymous
c2cb8e889b Always return unprojected pooled output for gligen. 2024-02-25 07:33:13 -05:00
comfyanonymous
1cb3f6a83b Move text projection into the CLIP model code.
Fix issue with not loading the SSD1B clip correctly.
2024-02-25 01:41:08 -05:00
comfyanonymous
6533b172c1 Support text encoder text_projection in lora. 2024-02-24 23:50:46 -05:00
comfyanonymous
1e5f0f66be Support lora keys with lora_prior_unet_ and lora_prior_te_ 2024-02-23 12:21:20 -05:00
logtd
e1cb93c383 Fix model and cond transformer options merge 2024-02-23 01:19:43 -07:00
comfyanonymous
10847dfafe Cleanup uni_pc inpainting.
This causes some small changes to the uni pc inpainting behavior but it
seems to improve results slightly.
2024-02-23 02:39:35 -05:00
doctorpangloss
dd1f7b6183 Adapt to more custom nodes specifications 2024-02-22 20:05:48 -08:00
doctorpangloss
b95fd25380 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2024-02-22 12:06:41 -08:00
doctorpangloss
fca0d8a050 Plugins can add configuration 2024-02-22 11:58:40 -08:00
doctorpangloss
c941ee09fc Improve nodes handling 2024-02-21 23:44:16 -08:00
doctorpangloss
e3ee2418bf Fix OpenAPI schema 2024-02-21 14:14:59 -08:00
doctorpangloss
8549f4682f Fix OpenAPI schema validation 2024-02-21 14:13:12 -08:00
doctorpangloss
bf6d91fec0 Merge upstream 2024-02-21 13:33:05 -08:00
doctorpangloss
5cd06a727f Fix relative imports 2024-02-21 13:31:55 -08:00
doctorpangloss
f419c5760d Adding __init__.py 2024-02-20 14:24:51 -08:00
comfyanonymous
18c151b3e3 Add some latent2rgb matrices for previews. 2024-02-20 10:57:24 -05:00
comfyanonymous
0d0fbabd1d Pass pooled CLIP to stage b. 2024-02-20 04:24:45 -05:00
comfyanonymous
c6b7a157ed Align simple scheduling closer to official stable cascade scheduler. 2024-02-20 04:24:39 -05:00
doctorpangloss
7520691021 Merge with master 2024-02-19 10:55:22 -08:00
comfyanonymous
88f300401c Enable fp16 by default on mps. 2024-02-19 12:00:48 -05:00
comfyanonymous
e93cdd0ad0 Remove print. 2024-02-19 11:47:26 -05:00
comfyanonymous
3711b31dff Support Stable Cascade in checkpoint format. 2024-02-19 11:20:48 -05:00
comfyanonymous
d91f45ef28 Some cleanups to how the text encoders are loaded. 2024-02-19 10:46:30 -05:00
comfyanonymous
a7b5eaa7e3 Forgot to commit this. 2024-02-19 04:25:46 -05:00
comfyanonymous
3b2e579926 Support loading the Stable Cascade effnet and previewer as a VAE.
The effnet can be used to encode images for img2img with Stage C.
2024-02-19 04:10:01 -05:00
comfyanonymous
dccca1daa5 Fix gligen lowvram mode. 2024-02-18 02:20:23 -05:00
doctorpangloss
ab56eadcc8 Fix missing path arguments 2024-02-17 23:19:21 -08:00
comfyanonymous
8b60d33bb7 Add ModelSamplingStableCascade to control the shift sampling parameter.
shift is 2.0 by default on Stage C and 1.0 by default on Stage B.
2024-02-18 00:55:23 -05:00
comfyanonymous
6bcf57ff10 Fix attention masks properly for multiple batches. 2024-02-17 16:15:18 -05:00
comfyanonymous
11e3221f1f fp8 weight support for Stable Cascade. 2024-02-17 15:27:31 -05:00
comfyanonymous
f8706546f3 Fix attention mask batch size in some attention functions. 2024-02-17 15:22:21 -05:00
comfyanonymous
3b9969c1c5 Properly fix attention masks in CLIP with batches. 2024-02-17 12:13:13 -05:00
comfyanonymous
5b40e7a5ed Implement shift schedule for cascade stage C. 2024-02-17 11:38:47 -05:00
comfyanonymous
929e266f3e Manual cast for bf16 on older GPUs. 2024-02-17 09:01:17 -05:00
comfyanonymous
6c875d846b Fix clip attention mask issues on some hardware. 2024-02-17 07:53:52 -05:00
comfyanonymous
805c36ac9c Make Stable Cascade work on old pytorch 2.0 2024-02-17 00:42:30 -05:00
comfyanonymous
f2d1d16f4f Support Stable Cascade Stage B lite. 2024-02-16 23:41:23 -05:00
comfyanonymous
0b3c50480c Make --force-fp32 disable loading models in bf16. 2024-02-16 23:01:54 -05:00
comfyanonymous
97d03ae04a StableCascade CLIP model support. 2024-02-16 13:29:04 -05:00
comfyanonymous
667c92814e Stable Cascade Stage B. 2024-02-16 13:02:03 -05:00
doctorpangloss
955fcb8bb0 Fix API 2024-02-16 08:47:11 -08:00
comfyanonymous
f83109f09b Stable Cascade Stage C. 2024-02-16 10:55:08 -05:00
doctorpangloss
d29028dd3e Fix distributed prompting 2024-02-16 06:17:06 -08:00
comfyanonymous
5e06baf112 Stable Cascade Stage A. 2024-02-16 06:30:39 -05:00
doctorpangloss
0546b01080 Tweak RPC parameters 2024-02-15 23:43:14 -08:00
doctorpangloss
a62f2c00ed Fix RPC issue in frontend 2024-02-15 21:54:13 -08:00
doctorpangloss
cd54bf3b3d Tweak worker RPC creation 2024-02-15 20:28:11 -08:00
doctorpangloss
a5d51c1ae2 Tweak API 2024-02-15 19:08:29 -08:00
comfyanonymous
aeaeca10bd Small refactor of is_device_* functions. 2024-02-15 21:10:10 -05:00
doctorpangloss
06e74226df Add external address parameter 2024-02-15 17:39:15 -08:00
doctorpangloss
7c6b8ecb02 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2024-02-15 12:56:07 -08:00
comfyanonymous
38b7ac6e26 Don't init the CLIP model when the checkpoint has no CLIP weights. 2024-02-13 00:01:08 -05:00
doctorpangloss
8b4e5cee61 Fix _model_patcher typo 2024-02-12 15:12:49 -08:00
doctorpangloss
b4eda2d5a4 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2024-02-12 14:24:20 -08:00
comfyanonymous
7dd352cbd7 Merge branch 'feature_expose_discard_penultimate_sigma' of https://github.com/blepping/ComfyUI 2024-02-11 12:23:30 -05:00
Jedrzej Kosinski
f44225fd5f Fix infinite while loop being possible in ddim_scheduler 2024-02-09 17:11:34 -06:00
doctorpangloss
15ff903b35 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2024-02-09 12:19:00 -08:00
comfyanonymous
25a4805e51 Add a way to set different conditioning for the controlnet. 2024-02-09 14:13:31 -05:00
doctorpangloss
f195230e2a More tweaks to cli args 2024-02-09 01:40:27 -08:00
doctorpangloss
a3f9d007d4 Fix CLI args issues 2024-02-09 01:20:57 -08:00
doctorpangloss
bdc843ced1 Tweak this message 2024-02-08 22:56:06 -08:00
doctorpangloss
b3f1ce7ef0 Update comment 2024-02-08 21:15:58 -08:00
doctorpangloss
d5bcaa515c Specify worker,frontend as default roles 2024-02-08 20:37:16 -08:00
doctorpangloss
54d419d855 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2024-02-08 20:31:05 -08:00
doctorpangloss
80f8c40248 Distributed queueing with amqp-compatible servers like RabbitMQ.
- Binary previews are not yet supported.
- Use `--distributed-queue-connection-uri=amqp://guest:guest@rabbitmqserver/`.
- Roles supported: frontend, worker or both (see `--help`).
- Run `comfy-worker` for a lightweight worker you can wrap your head
  around.
- Workers and frontends must have the same directory structure (set
  with `--cwd`) and supported nodes. Frontends must still have access
  to inputs and outputs.
- Configuration notes (see the sketch after this entry):

  distributed_queue_connection_uri (Optional[str]): Servers and clients will connect to this AMQP URL to form a distributed queue and exchange prompt execution requests and progress updates.
  distributed_queue_roles (List[str]): Specifies one or more roles for the distributed queue. Acceptable values are "worker" or "frontend", or both by writing the flag twice with each role. Frontends will start the web UI and connect to the provided AMQP URL to submit prompts; workers will pull requests off the AMQP URL.
  distributed_queue_name (str): This name will be used by the frontends and workers to exchange prompt requests and replies. Progress updates will be prefixed by the queue name, followed by a '.', then the user ID.
2024-02-08 20:24:27 -08:00
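A minimal sketch of the configuration fields listed in the notes above, modeled as a dataclass. The field names and descriptions come from the commit message; the class itself, its defaults, and the example values are assumptions for illustration only:

```python
# Hypothetical mirror of the distributed-queue configuration described above.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class DistributedQueueConfig:
    # AMQP URL that frontends and workers connect to
    distributed_queue_connection_uri: Optional[str] = None
    # "frontend", "worker", or both (worker,frontend is the stated default)
    distributed_queue_roles: List[str] = field(default_factory=lambda: ["worker", "frontend"])
    # queue used for prompt requests; progress topics look like "<name>.<user_id>"
    distributed_queue_name: str = "comfyui"  # assumed default name

# Example matching the flag shown in the commit message:
config = DistributedQueueConfig(
    distributed_queue_connection_uri="amqp://guest:guest@rabbitmqserver/",
    distributed_queue_roles=["worker"],
)
```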
doctorpangloss
0673262940 Fix entrypoints, add comfyui-worker entrypoint 2024-02-08 19:08:42 -08:00
doctorpangloss
72e92514a4 Better compatibility with pre-existing prompt_worker method 2024-02-08 18:07:37 -08:00
doctorpangloss
92898b8c9d Improved support for distributed queues 2024-02-08 14:55:07 -08:00
doctorpangloss
3367362cec Fix directml again now that I understand what the command line is doing 2024-02-08 10:17:49 -08:00
doctorpangloss
09838ed604 Update readme, remove unused import 2024-02-08 10:09:47 -08:00
doctorpangloss
04ce040d28 Fix commonpath / using arg.cwd on Windows 2024-02-08 09:30:16 -08:00
Benjamin Berman
8508a5a853 Fix args.directml is not None error 2024-02-08 08:40:13 -08:00
Benjamin Berman
b8fc850b47 Correctly preserve your installed torch when installing via `pip install --no-build-isolation git+https://github.com/hiddenswitch/ComfyUI.git` 2024-02-08 08:36:05 -08:00
blepping
a352c021ec Allow custom samplers to request discard penultimate sigma 2024-02-08 02:24:23 -07:00
Benjamin Berman
e45433755e Include missing seed parameter in sample workflow 2024-02-07 22:18:46 -08:00
doctorpangloss
123c512a84 Fix compatibility with Python 3.9, 3.10, fix Configuration class declaration issue 2024-02-07 21:52:20 -08:00
comfyanonymous
c661a8b118 Don't use numpy for calculating sigmas. 2024-02-07 18:52:51 -05:00
doctorpangloss
25c28867d2 Update script examples 2024-02-07 15:52:26 -08:00
doctorpangloss
d9b4607c36 Add locks to model_management to prevent multiple copies of the models from being loaded at the same time 2024-02-07 15:18:13 -08:00
doctorpangloss
8e9052c843 Merge with upstream 2024-02-07 14:27:50 -08:00
doctorpangloss
1b2ea61345 Improved API support
- Run comfyui workflows directly inside other python applications using
  EmbeddedComfyClient.
- Optional telemetry in prompts and models using anonymity-preserving
  Plausible, self-hosted or hosted.
- Better OpenAPI schema.
- Basic support for distributed ComfyUI backends. Limitations: no
  progress reporting, no easy way to start your own distributed
  backend, requires RabbitMQ as a message broker.
2024-02-07 14:20:21 -08:00
comfyanonymous
236bda2683 Make minimum tile size the size of the overlap. 2024-02-05 01:29:26 -05:00
comfyanonymous
66e28ef45c Don't use is_bf16_supported to check for fp16 support. 2024-02-04 20:53:35 -05:00
comfyanonymous
24129d78e6 Speed up SDXL on 16xx series with fp16 weights and manual cast. 2024-02-04 13:23:43 -05:00
comfyanonymous
4b0239066d Always use fp16 for the text encoders. 2024-02-02 10:02:49 -05:00
comfyanonymous
da7a8df0d2 Put VAE key name in model config. 2024-01-30 02:24:38 -05:00
doctorpangloss
32d83e52ff Fix CheckpointLoader even though it is deprecated 2024-01-29 17:20:10 -08:00
doctorpangloss
0d2cc553bc Revert "Remove unused configs contents"
This reverts commit 65549c39f1.
2024-01-29 17:03:27 -08:00
doctorpangloss
2400da51e5 PyInstaller 2024-01-29 17:02:45 -08:00
doctorpangloss
82edb2ff0e Merge with latest upstream. 2024-01-29 15:06:31 -08:00
doctorpangloss
65549c39f1 Remove unused configs contents 2024-01-29 14:14:46 -08:00
comfyanonymous
89507f8adf Remove some unused imports. 2024-01-25 23:42:37 -05:00
Dr.Lt.Data
05cd00695a typo fix - calculate_sigmas_scheduler (#2619)
self.scheduler -> scheduler_name

Co-authored-by: Lt.Dr.Data <lt.dr.data@gmail.com>
2024-01-23 03:47:01 -05:00
comfyanonymous
4871a36458 Cleanup some unused imports. 2024-01-21 21:51:22 -05:00
comfyanonymous
78a70fda87 Remove useless import. 2024-01-19 15:38:05 -05:00
comfyanonymous
d76a04b6ea Add unfinished ImageOnlyCheckpointSave node to save an SVD checkpoint.
This node is unfinished; SVD checkpoints saved with it will
work with ComfyUI but not with anything else.
2024-01-17 19:46:21 -05:00
comfyanonymous
f9e55d8463 Only auto enable bf16 VAE on nvidia GPUs that actually support it. 2024-01-15 03:10:22 -05:00
comfyanonymous
2395ae740a Make unclip more deterministic.
Pass a seed argument; note that this might make old unclip images different.
2024-01-14 17:28:31 -05:00
comfyanonymous
53c8a99e6c Make server storage the default.
Remove --server-storage argument.
2024-01-11 17:21:40 -05:00
comfyanonymous
977eda19a6 Don't round noise mask. 2024-01-11 03:29:58 -05:00
comfyanonymous
10f2609fdd Add InpaintModelConditioning node.
This is an alternative to VAE Encode for inpaint that should work with
lower denoise.

This is a different take on #2501
2024-01-11 03:15:27 -05:00
comfyanonymous
1a57423d30 Fix issue when using multiple t2i adapters with batched images. 2024-01-10 04:00:49 -05:00
comfyanonymous
6a7bc35db8 Use basic attention implementation for small inputs on old pytorch. 2024-01-09 13:46:52 -05:00
pythongosssss
235727fed7 Store user settings/data on the server and multi user support (#2160)
* wip per user data

* Rename, hide menu

* better error
rework default user

* store pretty

* Add userdata endpoints
Change nodetemplates to userdata

* add multi user message

* make normal arg

* Fix tests

* Ignore user dir

* user tests

* Changed to default to browser storage and add server-storage arg

* fix crash on empty templates

* fix settings added before load

* ignore parse errors
2024-01-08 17:06:44 -05:00
comfyanonymous
c6951548cf Update optimized_attention_for_device function for new functions that
support masked attention.
2024-01-07 13:52:08 -05:00
comfyanonymous
aaa9017302 Add attention mask support to sub quad attention. 2024-01-07 04:13:58 -05:00
comfyanonymous
0c2c9fbdfa Support attention mask in split attention. 2024-01-06 13:16:48 -05:00
comfyanonymous
3ad0191bfb Implement attention mask on xformers. 2024-01-06 04:33:03 -05:00
doctorpangloss
42232f4d20 fix module ref 2024-01-03 20:12:33 -08:00
doctorpangloss
345825dfb5 Fix issues with missing __init__ in upscaler, move web/ directory to comfy/web so that the need for symbolic link support on windows is eliminated 2024-01-03 16:35:00 -08:00
doctorpangloss
d31298ac60 gligen is missing math import 2024-01-03 16:01:44 -08:00
doctorpangloss
369aeb598f Merge upstream, fix 3.12 compatibility, fix nightlies issue, fix broken node 2024-01-03 16:00:36 -08:00
comfyanonymous
8c6493578b Implement noise augmentation for SD 4X upscale model. 2024-01-03 14:27:11 -05:00
comfyanonymous
ef4f6037cb Fix model patches not working in custom sampling scheduler nodes. 2024-01-03 12:16:30 -05:00
comfyanonymous
a7874d1a8b Add support for the stable diffusion x4 upscaling model.
This is an old model.

Load the checkpoint like a regular one and use the new
SD_4XUpscale_Conditioning node.
2024-01-03 03:37:56 -05:00
comfyanonymous
2c4e92a98b Fix regression. 2024-01-02 14:41:33 -05:00
comfyanonymous
5eddfdd80c Refactor VAE code.
Replace constants with downscale_ratio and latent_channels.
2024-01-02 13:24:34 -05:00
comfyanonymous
a47f609f90 Auto detect out_channels from model. 2024-01-02 01:50:57 -05:00
comfyanonymous
79f73a4b33 Remove useless code. 2024-01-02 01:50:29 -05:00
comfyanonymous
1b103e0cb2 Add argument to run the VAE on the CPU. 2023-12-30 05:49:07 -05:00
comfyanonymous
12e822c6c8 Use function to calculate model size in model patcher. 2023-12-28 21:46:20 -05:00
comfyanonymous
e1e322cf69 Load weights that can't be lowvramed to target device. 2023-12-28 21:41:10 -05:00
comfyanonymous
c782144433 Fix clip vision lowvram mode not working. 2023-12-27 13:50:57 -05:00
comfyanonymous
f21bb41787 Fix taesd VAE in lowvram mode. 2023-12-26 12:52:21 -05:00
comfyanonymous
61b3f15f8f Fix lowvram mode not working with unCLIP and Revision code. 2023-12-26 05:02:02 -05:00
comfyanonymous
d0165d819a Fix SVD lowvram mode. 2023-12-24 07:13:18 -05:00
comfyanonymous
a252963f95 --disable-smart-memory now unloads everything like it did originally. 2023-12-23 04:25:06 -05:00
comfyanonymous
36a7953142 Greatly improve lowvram sampling speed by getting rid of accelerate.
Let me know if this breaks anything.
2023-12-22 14:38:45 -05:00
comfyanonymous
261bcbb0d9 A few missing comfy ops in the VAE. 2023-12-22 04:05:42 -05:00
comfyanonymous
9a7619b72d Fix regression with inpaint model. 2023-12-19 02:32:59 -05:00
comfyanonymous
571ea8cdcc Fix SAG not working with cfg 1.0 2023-12-18 17:03:32 -05:00
comfyanonymous
8cf1daa108 Fix SDXL area composition sometimes not using the right pooled output. 2023-12-18 12:54:23 -05:00
comfyanonymous
2258f85159 Support stable zero 123 model.
To use it use the ImageOnlyCheckpointLoader to load the checkpoint and
the new Stable_Zero123 node.
2023-12-18 03:48:04 -05:00
comfyanonymous
2f9d6a97ec Add --deterministic option to make pytorch use deterministic algorithms. 2023-12-17 16:59:21 -05:00
comfyanonymous
e45d920ae3 Don't resize clip vision image when the size is already good. 2023-12-16 03:06:10 -05:00
comfyanonymous
13e6d5366e Switch clip vision to manual cast.
Make it use the same dtype as the text encoder.
2023-12-16 02:47:26 -05:00
comfyanonymous
719fa0866f Set clip vision model in eval mode so it works without inference mode. 2023-12-15 18:53:08 -05:00
Hari
574363a8a6 Implement Perp-Neg 2023-12-16 00:28:16 +05:30
comfyanonymous
a5056cfb1f Remove useless code. 2023-12-15 01:28:16 -05:00
comfyanonymous
329c571993 Improve code legibility. 2023-12-14 11:41:49 -05:00
comfyanonymous
6c5990f7db Fix cfg being calculated more than once if sampler_cfg_function. 2023-12-13 20:28:04 -05:00
comfyanonymous
ba04a87d10 Refactor and improve the sag node.
Moved all the sag related code to comfy_extras/nodes_sag.py
2023-12-13 16:11:26 -05:00
Rafie Walker
6761233e9d Implement Self-Attention Guidance (#2201)
* First SAG test

* need to put extra options on the model instead of patcher

* no errors and results seem not-broken

* Use @ashen-uncensored formula, which works better!!!

* Fix a crash when using weird resolutions. Remove an unnecessary UNet call

* Improve comments, optimize memory in blur routine

* SAG works with sampler_cfg_function
2023-12-13 15:52:11 -05:00
comfyanonymous
b454a67bb9 Support segmind vega model. 2023-12-12 19:09:53 -05:00
comfyanonymous
824e4935f5 Add dtype parameter to VAE object. 2023-12-12 12:03:29 -05:00
comfyanonymous
32b7e7e769 Add manual cast to controlnet. 2023-12-12 11:32:42 -05:00
comfyanonymous
3152023fbc Use inference dtype for unet memory usage estimation. 2023-12-11 23:50:38 -05:00
comfyanonymous
77755ab8db Refactor comfy.ops
comfy.ops -> comfy.ops.disable_weight_init

This should make it more clear what they actually do.

Some unused code has also been removed.
2023-12-11 23:27:13 -05:00
comfyanonymous
b0aab1e4ea Add an option --fp16-unet to force using fp16 for the unet. 2023-12-11 18:36:29 -05:00
comfyanonymous
ba07cb748e Use faster manual cast for fp8 in unet. 2023-12-11 18:24:44 -05:00
comfyanonymous
57926635e8 Switch text encoder to manual cast.
Use fp16 text encoder weights for CPU inference to lower memory usage.
2023-12-10 23:00:54 -05:00
comfyanonymous
340177e6e8 Disable non blocking on mps. 2023-12-10 01:30:35 -05:00
comfyanonymous
614b7e731f Implement GLora. 2023-12-09 18:15:26 -05:00
comfyanonymous
cb63e230b4 Make lora code a bit cleaner. 2023-12-09 14:15:09 -05:00
comfyanonymous
174eba8e95 Use own clip vision model implementation. 2023-12-09 11:56:31 -05:00
comfyanonymous
97015b6b38 Cleanup. 2023-12-08 16:02:08 -05:00