Commit Graph

1292 Commits

Author SHA1 Message Date
doctorpangloss
f8fcfa6f08 Improve tracing to propagate to backend workers correctly when using the API. Fix distributed tests. 2024-05-07 13:44:34 -07:00
comfyanonymous
c61eadf69a Make the load checkpoint with config function call the regular one.
I was going to completely remove this function because it is unmaintainable
but I think this is the best compromise.

The clip skip and v_prediction parts of the configs should still work but
not the fp16 vs fp32.
2024-05-06 20:04:39 -04:00
doctorpangloss
75b63fce91 Remove redundant resume_download argument 2024-05-06 10:31:58 -07:00
doctorpangloss
eb7d466b95 Fix dependency on opentelemetry instrumentor; remove websocket based API example since it isn't appropriate for this fork. 2024-05-03 16:54:33 -07:00
doctorpangloss
fd81790f12 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2024-05-03 06:42:56 -07:00
doctorpangloss
330ecb10b2 Merge with upstream. Remove TLS flags, because a third party proxy will do this better 2024-05-02 21:57:20 -07:00
Simon Lui
a56d02efc7
Change torch.xpu to ipex.optimize, xpu device initialization and remove workaround for text node issue from older IPEX. (#3388) 2024-05-02 03:26:50 -04:00
doctorpangloss
4d060f0555 Improve support for extra models 2024-05-01 16:58:29 -07:00
comfyanonymous
f81a6fade8 Fix some edge cases with samplers and arrays with a single sigma. 2024-05-01 17:05:30 -04:00
comfyanonymous
2aed53c4ac Workaround xformers bug. 2024-04-30 21:23:40 -04:00
Garrett Sutula
bacce529fb
Add TLS Support (#3312)
* Add TLS Support

* Add to readme

* Add guidance for windows users on generating certificates

* Add guidance for windows users on generating certificates

* Fix typo
2024-04-30 20:17:02 -04:00
doctorpangloss
b94b90c1cc Improve model downloader coherence with packages like controlnext-aux 2024-04-30 14:28:44 -07:00
doctorpangloss
0862863bc0 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2024-04-29 13:37:03 -07:00
doctorpangloss
46a712b4d6 Fix missing extension addition from upstream 2024-04-29 13:32:46 -07:00
Jedrzej Kosinski
7990ae18c1
Fix error when more cond masks passed in than batch size (#3353) 2024-04-26 12:51:12 -04:00
doctorpangloss
f965fb2bc0 Merge upstream 2024-04-24 22:41:43 -07:00
comfyanonymous
8dc19e40d1 Don't init a VAE model when there are no VAE weights. 2024-04-24 09:20:31 -04:00
Jacob Segal
b3e547f22b Merge branch 'master' into execution_model_inversion 2024-04-21 21:31:58 -07:00
Jacob Segal
06f3ce9200 Raise exception for bad get_node calls. 2024-04-21 16:10:01 -07:00
Jacob Segal
ecbef304ed Remove superfluous function parameter 2024-04-20 23:07:18 -07:00
Jacob Segal
b5e4583583 Remove unused functionality 2024-04-20 23:01:34 -07:00
Jacob Segal
7dbee88485 Add docs on when ExecutionBlocker should be used 2024-04-20 22:54:38 -07:00
Jacob Segal
dd3bafb40b Display an error for dependency cycles
Previously, dependency cycles that were created during node expansion
would cause the application to quit (due to an uncaught exception). Now,
we'll throw a proper error to the UI. We also make an attempt to 'blame'
the most relevant node in the UI.
2024-04-20 22:40:38 -07:00
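A minimal sketch of the behavior this commit describes — detecting a dependency cycle and "blaming" a node on it instead of crashing on an uncaught exception. This is not the PR's actual code; it uses Python's stdlib `graphlib`, and `find_cycle_blame` is a hypothetical helper name.

```python
from graphlib import TopologicalSorter, CycleError

def find_cycle_blame(edges):
    """Return None if the node graph is acyclic, else a node to 'blame'.

    edges maps node_id -> iterable of upstream dependency node_ids.
    """
    try:
        # static_order() forces a full topological sort and raises on cycles
        list(TopologicalSorter(edges).static_order())
        return None
    except CycleError as exc:
        # exc.args[1] holds the detected cycle, e.g. ['a', 'b', 'a'];
        # report its first node to the UI rather than quitting the app
        return exc.args[1][0]

assert find_cycle_blame({"b": ["a"], "c": ["b"]}) is None   # acyclic
assert find_cycle_blame({"a": ["b"], "b": ["a"]}) in ("a", "b")  # cycle found
```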
Jacob Segal
5dc13651b0 Use custom exception types. 2024-04-20 18:12:42 -07:00
Jacob Segal
a0bf532558 Use fstrings instead of '%' formatting syntax 2024-04-20 17:52:23 -07:00
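A hypothetical before/after illustrating the kind of change this commit makes (the strings shown are invented examples, not code from the repository):

```python
value = 0.75
# old-style '%' formatting
old = "progress: %0.1f%%" % (value * 100)
# equivalent f-string
new = f"progress: {value * 100:0.1f}%"
assert old == new  # both render "progress: 75.0%"
```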
comfyanonymous
c59fe9f254 Support VAE without quant_conv. 2024-04-18 21:05:33 -04:00
doctorpangloss
2643730acc Tracing 2024-04-17 08:20:07 -07:00
comfyanonymous
719fb2c81d Add basic PAG node. 2024-04-14 23:49:50 -04:00
comfyanonymous
258dbc06c3 Fix some memory related issues. 2024-04-14 12:08:58 -04:00
comfyanonymous
58812ab8ca Support SDXS 512 model. 2024-04-12 22:12:35 -04:00
comfyanonymous
831511a1ee Fix issue with sampling_settings persisting across models. 2024-04-09 23:20:43 -04:00
doctorpangloss
e49c662c7f Enable previews by default and over distributed channels 2024-04-09 13:15:05 -07:00
doctorpangloss
37cca051b6 Enable real-time progress notifications. 2024-04-08 14:56:16 -07:00
doctorpangloss
dd6f7c4215 Fix retrieving history from distributed instance 2024-04-08 14:39:16 -07:00
doctorpangloss
034ffcea03 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2024-04-08 10:02:37 -07:00
comfyanonymous
30abc324c2 Support properly saving CosXL checkpoints. 2024-04-08 00:36:22 -04:00
comfyanonymous
0a03009808 Fix issue with controlnet models getting loaded multiple times. 2024-04-06 18:38:39 -04:00
kk-89
38ed2da2dd
Fix typo in lowvram patcher (#3209) 2024-04-05 12:02:13 -04:00
comfyanonymous
1088d1850f Support for CosXL models. 2024-04-05 10:53:41 -04:00
doctorpangloss
3e002b9f72 Fix string joining node, improve model downloading 2024-04-04 23:40:29 -07:00
comfyanonymous
41ed7e85ea Fix object_patches_backup not being the same object across clones. 2024-04-05 00:22:44 -04:00
comfyanonymous
0f5768e038 Fix missing arguments in cfg_function. 2024-04-04 23:38:57 -04:00
comfyanonymous
1f4fc9ea0c Fix issue with get_model_object on patched model. 2024-04-04 23:01:02 -04:00
comfyanonymous
1a0486bb96 Fix model needing to be loaded on GPU to generate the sigmas. 2024-04-04 22:08:49 -04:00
comfyanonymous
c6bd456c45 Make zero denoise a NOP. 2024-04-04 11:41:27 -04:00
comfyanonymous
fcfd2bdf8a Small cleanup. 2024-04-04 11:16:49 -04:00
comfyanonymous
0542088ef8 Refactor sampler code for more advanced sampler nodes part 2. 2024-04-04 01:26:41 -04:00
comfyanonymous
57753c964a Refactor sampling code for more advanced sampler nodes. 2024-04-03 22:09:51 -04:00
comfyanonymous
6c6a39251f Fix saving text encoder in fp8. 2024-04-02 11:46:34 -04:00
doctorpangloss
abb952ad77 Tweak headers to accept default when none are specified 2024-04-01 20:34:55 -07:00
comfyanonymous
e6482fbbfc Refactor calc_cond_uncond_batch into calc_cond_batch.
calc_cond_batch can take an arbitrary number of cond inputs.

Added a calc_cond_uncond_batch wrapper with a warning so custom nodes
won't break.
2024-04-01 18:07:47 -04:00
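The backward-compatibility pattern the commit describes can be sketched as follows. The two function names come from the commit message; the bodies here are placeholders, not the real sampling code.

```python
import warnings

def calc_cond_batch(model, conds, x, timestep):
    """New API: accepts an arbitrary list of cond inputs."""
    return [f"out({c})" for c in conds]  # placeholder for real batched sampling

def calc_cond_uncond_batch(model, cond, uncond, x, timestep):
    """Deprecated wrapper kept so existing custom nodes don't break."""
    warnings.warn(
        "calc_cond_uncond_batch() is deprecated, use calc_cond_batch()",
        DeprecationWarning,
    )
    out = calc_cond_batch(model, [cond, uncond], x, timestep)
    return out[0], out[1]

c, u = calc_cond_uncond_batch(None, "c", "u", None, None)
assert (c, u) == ("out(c)", "out(u)")
```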
comfyanonymous
575acb69e4 IP2P model loading support.
This is the code to load the model and inference it with only a text
prompt. This commit does not contain the nodes to properly use it with an
image input.

This supports both the original SD1 instructpix2pix model and the
diffusers SDXL one.
2024-03-31 03:10:28 -04:00
Benjamin Berman
5208618681
Merge pull request #4 from cjonesuk/patch-1
Allow iteration over folder paths
2024-03-30 14:25:25 -07:00
doctorpangloss
b0ab12bf05 Fix #5 TAESD node was using a bad variable name that shadowed a module in a relative import 2024-03-29 16:28:13 -07:00
doctorpangloss
bd87697fdf ComfyUI Manager now starts successfully, but needs more mitigations:
- /manager/reboot needs to use a different approach to restart the
   currently running Python process.
 - runpy should be used for install.py invocations
2024-03-29 16:25:29 -07:00
doctorpangloss
8f548d4d19 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2024-03-29 13:36:57 -07:00
comfyanonymous
94a5a67c32 Cleanup to support different types of inpaint models. 2024-03-29 14:44:13 -04:00
doctorpangloss
1f705ba9d9 Fix history retrieval bug when accessing a distributed frontend 2024-03-29 10:49:51 -07:00
doctorpangloss
c6f4301e88 Fix model downloader invoking symlink when it should not 2024-03-28 15:45:04 -07:00
comfyanonymous
5d8898c056 Fix some performance issues with weight loading and unloading.
Lower peak memory usage when changing model.

Fix case where model weights would be unloaded and reloaded.
2024-03-28 18:04:42 -04:00
comfyanonymous
327ca1313d Support SDXS 0.9 2024-03-27 23:58:58 -04:00
doctorpangloss
b0be335d59 Improved support for ControlNet workflows with depth
- ComfyUI can now load EXR files.
 - There are new arithmetic nodes for floats and integers.
 - EXR nodes can load depth maps and be remapped with
   ImageApplyColormap. This allows end users to use ground truth depth
   data from video game engines or 3D graphics tools and recolor it to
   the format expected by depth ControlNets: grayscale inverse depth
   maps and "inferno" colored inverse depth maps.
 - Fixed license notes.
 - Added an additional known ControlNet model.
 - Because CV2 is now used to read OpenEXR files, an environment
   variable must be set early on in the application, before CV2 is
   imported. This file, main_pre, is now imported early on in more
   places.
2024-03-26 22:32:15 -07:00
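The environment-variable constraint described in the last bullet can be sketched like this. `OPENCV_IO_ENABLE_OPENEXR` is OpenCV's real opt-in flag for its EXR codec; the wiring below is illustrative of why a `main_pre`-style module must run before anything imports `cv2`.

```python
import os

# OpenCV only enables its OpenEXR reader if this variable is set before
# the first `import cv2` anywhere in the process; set it as early as
# possible in application startup.
os.environ.setdefault("OPENCV_IO_ENABLE_OPENEXR", "1")

try:
    import cv2  # safe now: the flag above is already in the environment
except ImportError:
    cv2 = None  # OpenCV not installed; EXR loading unavailable
```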
comfyanonymous
ae77590b4e dora_scale support for lora file. 2024-03-25 18:09:23 -04:00
comfyanonymous
c6de09b02e Optimize memory unload strategy for better performance. 2024-03-24 02:36:30 -04:00
Jacob Segal
6b6a93cc5d Merge branch 'master' into execution_model_inversion 2024-03-23 16:31:14 -07:00
doctorpangloss
d8846fcb39 Improved testing of API nodes
- dynamicPrompts now set to False by default; CLIPTextEncoder and
   related nodes now have it set to True.
 - Fixed return values of API nodes.
2024-03-22 22:04:35 -07:00
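A sketch of what the `dynamicPrompts` default change looks like from a node's side. The node class here is a made-up example; the `INPUT_TYPES` / options-dict shape follows ComfyUI's node convention, with text-encode-style nodes opting back in to dynamic prompt expansion.

```python
class CLIPTextEncodeExample:
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {
            # dynamicPrompts now defaults to False; text-encode nodes
            # explicitly re-enable it in their input options
            "text": ("STRING", {"multiline": True, "dynamicPrompts": True}),
        }}

spec = CLIPTextEncodeExample.INPUT_TYPES()
assert spec["required"]["text"][1]["dynamicPrompts"] is True
```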
doctorpangloss
4cd8f9d2ed Merge with upstream 2024-03-22 14:35:17 -07:00
doctorpangloss
feae8c679b Add nodes to support OpenAPI and similar backend workflows 2024-03-22 14:22:50 -07:00
Christopher Jones
f8120bbd72
Allow iteration over folder paths 2024-03-22 17:01:17 +00:00
doctorpangloss
0db040cc47 Improve API support
- Removed /api/v1/images because you should use your own CDN style
   image host and /view for maximum compatibility
 - The /api/v1/prompts POST application/json response will now return
   the outputs dictionary
 - Caching has been removed
 - More tests
 - Subdirectory prefixes are now supported
 - Fixed an issue where a Linux frontend and Windows backend would have
   paths that could not interact with each other correctly
2024-03-21 16:24:22 -07:00
doctorpangloss
d73b116446 Update OpenAPI spec 2024-03-21 15:16:52 -07:00
doctorpangloss
005e370254 Merge upstream 2024-03-21 13:15:36 -07:00
comfyanonymous
0624838237 Add inverse noise scaling function. 2024-03-21 14:49:11 -04:00
doctorpangloss
59cf9e5d93 Improve distributed testing 2024-03-20 20:43:21 -07:00
comfyanonymous
5d875d77fe Fix regression with lcm not working with batches. 2024-03-20 20:48:54 -04:00
comfyanonymous
4b9005e949 Fix regression with model merging. 2024-03-20 13:56:12 -04:00
comfyanonymous
c18a203a8a Don't unload model weights for non weight patches. 2024-03-20 02:27:58 -04:00
comfyanonymous
150a3e946f Make LCM sampler use the model noise scaling function. 2024-03-20 01:35:59 -04:00
doctorpangloss
3f4049c5f4 Fix Python 3.10 compatibility detail 2024-03-19 10:48:20 -07:00
comfyanonymous
40e124c6be SV3D support. 2024-03-18 16:54:13 -04:00
doctorpangloss
74a9c45395 Fix subpath model downloads 2024-03-18 10:10:34 -07:00
comfyanonymous
cacb022c4a Make saved SD1 checkpoints match more closely the official one. 2024-03-18 00:26:23 -04:00
comfyanonymous
d7897fff2c Move cascade scale factor from stage_a to latent_formats.py 2024-03-16 14:49:35 -04:00
comfyanonymous
f2fe635c9f SamplerDPMAdaptative node to test the different options. 2024-03-15 22:36:10 -04:00
comfyanonymous
448d9263a2 Fix control loras breaking. 2024-03-14 09:30:21 -04:00
doctorpangloss
a892411cf8 Add known controlnet models and add --disable-known-models to prevent it from appearing or downloading 2024-03-13 18:11:16 -07:00
comfyanonymous
db8b59ecff Lower memory usage for loras in lowvram mode at the cost of perf. 2024-03-13 20:07:27 -04:00
doctorpangloss
341c9f2e90 Improvements to node loading, node API, folder paths and progress
- Improve node loading order. It now occurs "as late as possible".
   Configuration should be exposed as per the README.
 - Added methods to specify custom folders and models used in examples
   more robustly for custom nodes.
 - Downloading models can now be gracefully interrupted.
 - Progress notifications are now sent over the network for distributed
   ComfyUI operations.
 - Python objects have been moved around to reduce transitive
    package importing issues.
2024-03-13 16:14:18 -07:00
doctorpangloss
3ccbda36da Adjust known models 2024-03-12 15:51:57 -07:00
doctorpangloss
e68f8885e3 Improve model downloading 2024-03-12 15:27:08 -07:00
doctorpangloss
93cdef65a4 Merge upstream 2024-03-12 09:49:47 -07:00
comfyanonymous
2a813c3b09 Switch some more prints to logging. 2024-03-11 16:34:58 -04:00
comfyanonymous
0ed72befe1 Change log levels.
Logging level now defaults to info. --verbose sets it to debug.
2024-03-11 13:54:56 -04:00
doctorpangloss
00728eb20f Merge upstream 2024-03-11 09:32:57 -07:00
Benjamin Berman
3c57ef831c Download known models from HuggingFace 2024-03-11 00:15:06 -07:00
comfyanonymous
65397ce601 Replace prints with logging and add --verbose argument. 2024-03-10 12:14:23 -04:00
doctorpangloss
175a50d7ba Improve vanilla node importing 2024-03-08 16:29:48 -08:00
doctorpangloss
c0d9bc0129 Merge with upstream 2024-03-08 15:17:20 -08:00
comfyanonymous
5f60ee246e Support loading the sr cascade controlnet. 2024-03-07 01:22:48 -05:00
comfyanonymous
03e6e81629 Set upscale algorithm to bilinear for stable cascade controlnet. 2024-03-06 02:59:40 -05:00
comfyanonymous
03e83bb5d0 Support stable cascade canny controlnet. 2024-03-06 02:25:42 -05:00
comfyanonymous
10860bcd28 Add compression_ratio to controlnet code. 2024-03-05 15:15:20 -05:00
comfyanonymous
478f71a249 Remove useless check. 2024-03-04 08:51:25 -05:00
comfyanonymous
12c1080ebc Simplify differential diffusion code. 2024-03-03 15:34:42 -05:00
Shiimizu
727021bdea
Implement Differential Diffusion (#2876)
* Implement Differential Diffusion

* Cleanup.

* Fix.

* Masks should be applied at full strength.

* Fix colors.

* Register the node.

* Cleaner code.

* Fix issue with getting unipc sampler.

* Adjust thresholds.

* Switch to linear thresholds.

* Only calculate nearest_idx on valid thresholds.
2024-03-03 15:34:13 -05:00
comfyanonymous
1abf8374ec utils.set_attr can now be used to set any attribute.
The old set_attr has been renamed to set_attr_param.
2024-03-02 17:27:23 -05:00
comfyanonymous
dce3555339 Add some tesla pascal GPUs to the fp16 working but slower list. 2024-03-02 17:16:31 -05:00
comfyanonymous
51df846598 Let conditioning specify custom concat conds. 2024-03-02 11:44:06 -05:00
comfyanonymous
9f71e4b62d Let model patches patch sub objects. 2024-03-02 11:43:27 -05:00
comfyanonymous
00425563c0 Cleanup: Use sampling noise scaling function for inpainting. 2024-03-01 14:24:41 -05:00
comfyanonymous
c62e836167 Move noise scaling to object with sampling math. 2024-03-01 12:54:38 -05:00
doctorpangloss
148d57a772 Add extra_model_paths_config to the valid configuration for the comfyui-worker entry point 2024-03-01 08:14:21 -08:00
doctorpangloss
440f72d36f support differential diffusion nodes 2024-03-01 00:43:33 -08:00
doctorpangloss
915f2da874 Merge upstream 2024-02-29 20:48:27 -08:00
doctorpangloss
44882eab0c Improve typing and file path handling 2024-02-29 19:29:38 -08:00
Benjamin Berman
e6623a1359 Fix CLIPLoader node, fix CustomNode typing, improve digest 2024-02-29 15:54:42 -08:00
comfyanonymous
cb7c3a2921 Allow image_only_indicator to be None. 2024-02-29 13:11:30 -05:00
doctorpangloss
bae2068111 Fix issue when custom_nodes directory does not exist 2024-02-28 17:46:23 -08:00
doctorpangloss
9d3eb74796 Fix importing vanilla custom nodes 2024-02-28 14:11:34 -08:00
comfyanonymous
b3e97fc714 Koala 700M and 1B support.
Use the UNET Loader node to load the unet file to use them.
2024-02-28 12:10:11 -05:00
comfyanonymous
37a86e4618 Remove duplicate text_projection key from some saved models. 2024-02-28 03:57:41 -05:00
Benjamin Berman
e0e98a8783
Update distributed_prompt_worker.py 2024-02-27 23:15:40 -08:00
doctorpangloss
d3c13f8172 (Fixed) Move new_updated 2024-02-27 17:01:16 -08:00
doctorpangloss
272e3ee357 (broken) Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2024-02-27 16:58:34 -08:00
comfyanonymous
8daedc5bf2 Auto detect playground v2.5 model. 2024-02-27 18:03:03 -05:00
comfyanonymous
d46583ecec Playground V2.5 support with ModelSamplingContinuousEDM node.
Use ModelSamplingContinuousEDM with edm_playground_v2.5 selected.
2024-02-27 15:12:33 -05:00
comfyanonymous
1e0fcc9a65 Make XL checkpoints save in a more standard format. 2024-02-27 02:07:40 -05:00
comfyanonymous
b416be7d78 Make the text projection saved in the checkpoint the right format. 2024-02-27 01:52:23 -05:00
comfyanonymous
03c47fc0f2 Add a min_length property to tokenizer class. 2024-02-26 21:36:37 -05:00
doctorpangloss
bd5073caf2 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2024-02-26 08:51:07 -08:00
comfyanonymous
8ac69f62e5 Make return_projected_pooled settable from the __init__ 2024-02-25 14:49:13 -05:00
comfyanonymous
ca7c310a0e Support loading old CLIP models saved with CLIPSave. 2024-02-25 08:29:12 -05:00
comfyanonymous
c2cb8e889b Always return unprojected pooled output for gligen. 2024-02-25 07:33:13 -05:00
Jacob Segal
6d09dd70f8 Make custom VALIDATE_INPUTS skip normal validation
Additionally, if `VALIDATE_INPUTS` takes an argument named `input_types`,
that variable will be a dictionary of the socket type of all incoming
connections. If that argument exists, normal socket type validation will
not occur. This removes the last hurdle for enabling variant types
entirely from custom nodes, so I've removed that command-line option.

I've added appropriate unit tests for these changes.
2024-02-24 23:17:01 -08:00
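A sketch of the opt-out mechanism this commit describes: a custom node whose `VALIDATE_INPUTS` declares an `input_types` argument, which the executor detects by signature inspection and then skips its normal socket type validation. The node class and its validation logic are invented for illustration.

```python
import inspect

class FlexibleNode:
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {"value": ("*",)}}

    @classmethod
    def VALIDATE_INPUTS(cls, input_types):
        # input_types is a dict of the socket types of all incoming
        # connections; declaring this parameter disables normal
        # socket type validation, so the node checks types itself
        return input_types.get("value") in ("INT", "FLOAT", "STRING")

# the executor can detect the opt-in by inspecting the signature
params = inspect.signature(FlexibleNode.VALIDATE_INPUTS).parameters
assert "input_types" in params
```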
comfyanonymous
1cb3f6a83b Move text projection into the CLIP model code.
Fix issue with not loading the SSD1B clip correctly.
2024-02-25 01:41:08 -05:00
comfyanonymous
6533b172c1 Support text encoder text_projection in lora. 2024-02-24 23:50:46 -05:00
comfyanonymous
1e5f0f66be Support lora keys with lora_prior_unet_ and lora_prior_te_ 2024-02-23 12:21:20 -05:00
logtd
e1cb93c383 Fix model and cond transformer options merge 2024-02-23 01:19:43 -07:00
comfyanonymous
10847dfafe Cleanup uni_pc inpainting.
This causes some small changes to the uni pc inpainting behavior but it
seems to improve results slightly.
2024-02-23 02:39:35 -05:00
doctorpangloss
dd1f7b6183 Adapt to more custom nodes specifications 2024-02-22 20:05:48 -08:00
doctorpangloss
b95fd25380 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2024-02-22 12:06:41 -08:00
doctorpangloss
fca0d8a050 Plugins can add configuration 2024-02-22 11:58:40 -08:00
doctorpangloss
c941ee09fc Improve nodes handling 2024-02-21 23:44:16 -08:00
Jacob Segal
5ab1565418 Merge branch 'master' into execution_model_inversion 2024-02-21 19:41:09 -08:00
doctorpangloss
e3ee2418bf Fix OpenAPI schema 2024-02-21 14:14:59 -08:00
doctorpangloss
8549f4682f Fix OpenAPI schema validation 2024-02-21 14:13:12 -08:00
doctorpangloss
bf6d91fec0 Merge upstream 2024-02-21 13:33:05 -08:00
doctorpangloss
5cd06a727f Fix relative imports 2024-02-21 13:31:55 -08:00
doctorpangloss
f419c5760d Adding __init__.py 2024-02-20 14:24:51 -08:00
comfyanonymous
18c151b3e3 Add some latent2rgb matrices for previews. 2024-02-20 10:57:24 -05:00
comfyanonymous
0d0fbabd1d Pass pooled CLIP to stage b. 2024-02-20 04:24:45 -05:00
comfyanonymous
c6b7a157ed Align simple scheduling closer to official stable cascade scheduler. 2024-02-20 04:24:39 -05:00
doctorpangloss
7520691021 Merge with master 2024-02-19 10:55:22 -08:00
comfyanonymous
88f300401c Enable fp16 by default on mps. 2024-02-19 12:00:48 -05:00
comfyanonymous
e93cdd0ad0 Remove print. 2024-02-19 11:47:26 -05:00
comfyanonymous
3711b31dff Support Stable Cascade in checkpoint format. 2024-02-19 11:20:48 -05:00
comfyanonymous
d91f45ef28 Some cleanups to how the text encoders are loaded. 2024-02-19 10:46:30 -05:00
comfyanonymous
a7b5eaa7e3 Forgot to commit this. 2024-02-19 04:25:46 -05:00
comfyanonymous
3b2e579926 Support loading the Stable Cascade effnet and previewer as a VAE.
The effnet can be used to encode images for img2img with Stage C.
2024-02-19 04:10:01 -05:00
comfyanonymous
dccca1daa5 Fix gligen lowvram mode. 2024-02-18 02:20:23 -05:00
doctorpangloss
ab56eadcc8 Fix missing path arguments 2024-02-17 23:19:21 -08:00
Jacob Segal
508d286b8f Fix Pyright warnings 2024-02-17 21:56:46 -08:00
comfyanonymous
8b60d33bb7 Add ModelSamplingStableCascade to control the shift sampling parameter.
shift is 2.0 by default on Stage C and 1.0 by default on Stage B.
2024-02-18 00:55:23 -05:00
Jacob Segal
9c1e3f7b98 Fix an overly aggressive assertion.
This could happen when attempting to evaluate `IS_CHANGED` for a node
during the creation of the cache (in order to create the cache key).
2024-02-17 21:02:59 -08:00
comfyanonymous
6bcf57ff10 Fix attention masks properly for multiple batches. 2024-02-17 16:15:18 -05:00
comfyanonymous
11e3221f1f fp8 weight support for Stable Cascade. 2024-02-17 15:27:31 -05:00
comfyanonymous
f8706546f3 Fix attention mask batch size in some attention functions. 2024-02-17 15:22:21 -05:00
comfyanonymous
3b9969c1c5 Properly fix attention masks in CLIP with batches. 2024-02-17 12:13:13 -05:00
comfyanonymous
5b40e7a5ed Implement shift schedule for cascade stage C. 2024-02-17 11:38:47 -05:00
comfyanonymous
929e266f3e Manual cast for bf16 on older GPUs. 2024-02-17 09:01:17 -05:00
comfyanonymous
6c875d846b Fix clip attention mask issues on some hardware. 2024-02-17 07:53:52 -05:00
comfyanonymous
805c36ac9c Make Stable Cascade work on old pytorch 2.0 2024-02-17 00:42:30 -05:00
comfyanonymous
f2d1d16f4f Support Stable Cascade Stage B lite. 2024-02-16 23:41:23 -05:00
comfyanonymous
0b3c50480c Make --force-fp32 disable loading models in bf16. 2024-02-16 23:01:54 -05:00
comfyanonymous
97d03ae04a StableCascade CLIP model support. 2024-02-16 13:29:04 -05:00
comfyanonymous
667c92814e Stable Cascade Stage B. 2024-02-16 13:02:03 -05:00
doctorpangloss
955fcb8bb0 Fix API 2024-02-16 08:47:11 -08:00
comfyanonymous
f83109f09b Stable Cascade Stage C. 2024-02-16 10:55:08 -05:00
doctorpangloss
d29028dd3e Fix distributed prompting 2024-02-16 06:17:06 -08:00
comfyanonymous
5e06baf112 Stable Cascade Stage A. 2024-02-16 06:30:39 -05:00
doctorpangloss
0546b01080 Tweak RPC parameters 2024-02-15 23:43:14 -08:00
doctorpangloss
a62f2c00ed Fix RPC issue in frontend 2024-02-15 21:54:13 -08:00
doctorpangloss
cd54bf3b3d Tweak worker RPC creation 2024-02-15 20:28:11 -08:00
doctorpangloss
a5d51c1ae2 Tweak API 2024-02-15 19:08:29 -08:00
comfyanonymous
aeaeca10bd Small refactor of is_device_* functions. 2024-02-15 21:10:10 -05:00
doctorpangloss
06e74226df Add external address parameter 2024-02-15 17:39:15 -08:00
doctorpangloss
7c6b8ecb02 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2024-02-15 12:56:07 -08:00
Jacob Segal
12627ca75a Add a command-line argument to enable variants
This allows the use of nodes that have sockets of type '*' without
applying a patch to the code.
2024-02-14 21:06:53 -08:00
Jacob Segal
e4e20d79b2 Allow input_info to be of type None 2024-02-14 21:04:50 -08:00
comfyanonymous
38b7ac6e26 Don't init the CLIP model when the checkpoint has no CLIP weights. 2024-02-13 00:01:08 -05:00
doctorpangloss
8b4e5cee61 Fix _model_patcher typo 2024-02-12 15:12:49 -08:00
doctorpangloss
b4eda2d5a4 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2024-02-12 14:24:20 -08:00
comfyanonymous
7dd352cbd7 Merge branch 'feature_expose_discard_penultimate_sigma' of https://github.com/blepping/ComfyUI 2024-02-11 12:23:30 -05:00
Jedrzej Kosinski
f44225fd5f Fix infinite while loop being possible in ddim_scheduler 2024-02-09 17:11:34 -06:00
doctorpangloss
15ff903b35 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2024-02-09 12:19:00 -08:00
comfyanonymous
25a4805e51 Add a way to set different conditioning for the controlnet. 2024-02-09 14:13:31 -05:00
doctorpangloss
f195230e2a More tweaks to cli args 2024-02-09 01:40:27 -08:00
doctorpangloss
a3f9d007d4 Fix CLI args issues 2024-02-09 01:20:57 -08:00
doctorpangloss
bdc843ced1 Tweak this message 2024-02-08 22:56:06 -08:00
doctorpangloss
b3f1ce7ef0 Update comment 2024-02-08 21:15:58 -08:00
doctorpangloss
d5bcaa515c Specify worker,frontend as default roles 2024-02-08 20:37:16 -08:00
doctorpangloss
54d419d855 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2024-02-08 20:31:05 -08:00
doctorpangloss
80f8c40248 Distributed queueing with amqp-compatible servers like RabbitMQ.
- Binary previews are not yet supported
 - Use `--distributed-queue-connection-uri=amqp://guest:guest@rabbitmqserver/`
 - Roles supported: frontend, worker or both (see `--help`)
 - Run `comfy-worker` for a lightweight worker you can wrap your head
   around
 - Workers and frontends must have the same directory structure (set
   with `--cwd`) and supported nodes. Frontends must still have access
   to inputs and outputs.
 - Configuration notes:

   distributed_queue_connection_uri (Optional[str]): Servers and clients will connect to this AMQP URL to form a distributed queue and exchange prompt execution requests and progress updates.
   distributed_queue_roles (List[str]): Specifies one or more roles for the distributed queue. Acceptable values are "worker" or "frontend", or both by writing the flag twice with each role. Frontends will start the web UI and connect to the provided AMQP URL to submit prompts; workers will pull requests off the AMQP URL.
   distributed_queue_name (str): This name will be used by the frontends and workers to exchange prompt requests and replies. Progress updates will be prefixed by the queue name, followed by a '.', then the user ID.
2024-02-08 20:24:27 -08:00
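The naming scheme in the configuration notes above can be sketched as follows; `progress_topic` is a hypothetical helper name, and only the URI format and the "queue name, '.', user ID" prefix rule come from the commit.

```python
from urllib.parse import urlparse

def progress_topic(queue_name: str, user_id: str) -> str:
    """Progress updates use the queue name, then '.', then the user ID."""
    return f"{queue_name}.{user_id}"

# connection URIs follow the AMQP URL form from the commit notes
uri = urlparse("amqp://guest:guest@rabbitmqserver/")
assert uri.scheme == "amqp" and uri.hostname == "rabbitmqserver"
assert progress_topic("comfyui", "user-1") == "comfyui.user-1"
```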
doctorpangloss
0673262940 Fix entrypoints, add comfyui-worker entrypoint 2024-02-08 19:08:42 -08:00
doctorpangloss
72e92514a4 Better compatibility with pre-existing prompt_worker method 2024-02-08 18:07:37 -08:00
doctorpangloss
92898b8c9d Improved support for distributed queues 2024-02-08 14:55:07 -08:00
doctorpangloss
3367362cec Fix directml again now that I understand what the command line is doing 2024-02-08 10:17:49 -08:00
doctorpangloss
09838ed604 Update readme, remove unused import 2024-02-08 10:09:47 -08:00
doctorpangloss
04ce040d28 Fix commonpath / using arg.cwd on Windows 2024-02-08 09:30:16 -08:00
Benjamin Berman
8508a5a853 Fix args.directml is not None error 2024-02-08 08:40:13 -08:00
Benjamin Berman
b8fc850b47 Correctly preserves your installed torch when installed like pip install --no-build-isolation git+https://github.com/hiddenswitch/ComfyUI.git 2024-02-08 08:36:05 -08:00
blepping
a352c021ec Allow custom samplers to request discard penultimate sigma 2024-02-08 02:24:23 -07:00
Benjamin Berman
e45433755e Include missing seed parameter in sample workflow 2024-02-07 22:18:46 -08:00
doctorpangloss
123c512a84 Fix compatibility with Python 3.9, 3.10, fix Configuration class declaration issue 2024-02-07 21:52:20 -08:00
comfyanonymous
c661a8b118 Don't use numpy for calculating sigmas. 2024-02-07 18:52:51 -05:00
doctorpangloss
25c28867d2 Update script examples 2024-02-07 15:52:26 -08:00
doctorpangloss
d9b4607c36 Add locks to model_management to prevent multiple copies of the models from being loaded at the same time 2024-02-07 15:18:13 -08:00
doctorpangloss
8e9052c843 Merge with upstream 2024-02-07 14:27:50 -08:00
doctorpangloss
1b2ea61345 Improved API support
- Run comfyui workflows directly inside other python applications using
   EmbeddedComfyClient.
 - Optional telemetry in prompts and models using anonymity preserving
   Plausible self-hosted or hosted.
 - Better OpenAPI schema
 - Basic support for distributed ComfyUI backends. Limitations: no
   progress reporting, no easy way to start your own distributed
   backend, requires RabbitMQ as a message broker.
2024-02-07 14:20:21 -08:00
comfyanonymous
236bda2683 Make minimum tile size the size of the overlap. 2024-02-05 01:29:26 -05:00
comfyanonymous
66e28ef45c Don't use is_bf16_supported to check for fp16 support. 2024-02-04 20:53:35 -05:00
comfyanonymous
24129d78e6 Speed up SDXL on 16xx series with fp16 weights and manual cast. 2024-02-04 13:23:43 -05:00
comfyanonymous
4b0239066d Always use fp16 for the text encoders. 2024-02-02 10:02:49 -05:00
comfyanonymous
da7a8df0d2 Put VAE key name in model config. 2024-01-30 02:24:38 -05:00
doctorpangloss
32d83e52ff Fix CheckpointLoader even though it is deprecated 2024-01-29 17:20:10 -08:00
doctorpangloss
0d2cc553bc Revert "Remove unused configs contents"
This reverts commit 65549c39f1.
2024-01-29 17:03:27 -08:00
doctorpangloss
2400da51e5 PyInstaller 2024-01-29 17:02:45 -08:00
doctorpangloss
82edb2ff0e Merge with latest upstream. 2024-01-29 15:06:31 -08:00
doctorpangloss
65549c39f1 Remove unused configs contents 2024-01-29 14:14:46 -08:00
Jacob Segal
36b2214e30 Execution Model Inversion
This PR inverts the execution model -- from recursively calling nodes to
using a topological sort of the nodes. This change allows for
modification of the node graph during execution. This allows for two
major advantages:

    1. The implementation of lazy evaluation in nodes. For example, if a
    "Mix Images" node has a mix factor of exactly 0.0, the second image
    input doesn't even need to be evaluated (and vice versa if the mix
    factor is 1.0).

    2. Dynamic expansion of nodes. This allows for the creation of dynamic
    "node groups". Specifically, custom nodes can return subgraphs that
    replace the original node in the graph. This is an incredibly
    powerful concept. Using this functionality, it was easy to
    implement:
        a. Components (a.k.a. node groups)
        b. Flow control (i.e. while loops) via tail recursion
        c. All-in-one nodes that replicate the WebUI functionality
        d. and more
    All of those were able to be implemented entirely via custom nodes,
    so those features are *not* a part of this PR. (There are some
    front-end changes that should occur before that functionality is
    made widely available, particularly around variant sockets.)

The custom nodes associated with this PR can be found at:
https://github.com/BadCafeCode/execution-inversion-demo-comfyui

Note that some of them require that variant socket types ("*") be
enabled.
2024-01-28 20:48:42 -08:00
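The core inversion — driving execution from a topological sort of the graph instead of recursive node calls — can be sketched with stdlib `graphlib`. This is a toy stand-in, not the PR's implementation; the lazy-evaluation and subgraph-expansion features build on top of this data-driven loop.

```python
from graphlib import TopologicalSorter

def execute(graph, run_node):
    """Run nodes in dependency order rather than by recursion.

    graph maps node_id -> list of upstream node_ids; run_node
    executes one node given the results of its dependencies.
    """
    order = list(TopologicalSorter(graph).static_order())
    results = {}
    for node in order:
        deps = [results[d] for d in graph.get(node, [])]
        results[node] = run_node(node, deps)
    return results

# with execution data-driven, a node's inputs can be skipped or the
# graph rewritten between steps (the basis for lazy eval / expansion)
out = execute({"load": [], "mix": ["load"]},
              lambda n, deps: f"{n}({','.join(deps)})")
assert out["mix"] == "mix(load())"
```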
comfyanonymous
89507f8adf Remove some unused imports. 2024-01-25 23:42:37 -05:00
Dr.Lt.Data
05cd00695a
typo fix - calculate_sigmas_scheduler (#2619)
self.scheduler -> scheduler_name

Co-authored-by: Lt.Dr.Data <lt.dr.data@gmail.com>
2024-01-23 03:47:01 -05:00
comfyanonymous
4871a36458 Cleanup some unused imports. 2024-01-21 21:51:22 -05:00
comfyanonymous
78a70fda87 Remove useless import. 2024-01-19 15:38:05 -05:00
comfyanonymous
d76a04b6ea Add unfinished ImageOnlyCheckpointSave node to save a SVD checkpoint.
This node is unfinished, SVD checkpoints saved with this node will
work with ComfyUI but not with anything else.
2024-01-17 19:46:21 -05:00
comfyanonymous
f9e55d8463 Only auto enable bf16 VAE on nvidia GPUs that actually support it. 2024-01-15 03:10:22 -05:00
comfyanonymous
2395ae740a Make unclip more deterministic.
Pass a seed argument; note that this might make old unclip images different.
2024-01-14 17:28:31 -05:00
comfyanonymous
53c8a99e6c Make server storage the default.
Remove --server-storage argument.
2024-01-11 17:21:40 -05:00
comfyanonymous
977eda19a6 Don't round noise mask. 2024-01-11 03:29:58 -05:00
comfyanonymous
10f2609fdd Add InpaintModelConditioning node.
This is an alternative to VAE Encode for inpaint that should work with
lower denoise.

This is a different take on #2501
2024-01-11 03:15:27 -05:00
comfyanonymous
1a57423d30 Fix issue when using multiple t2i adapters with batched images. 2024-01-10 04:00:49 -05:00
comfyanonymous
6a7bc35db8 Use basic attention implementation for small inputs on old pytorch. 2024-01-09 13:46:52 -05:00
pythongosssss
235727fed7
Store user settings/data on the server and multi user support (#2160)
* wip per user data

* Rename, hide menu

* better error
rework default user

* store pretty

* Add userdata endpoints
Change nodetemplates to userdata

* add multi user message

* make normal arg

* Fix tests

* Ignore user dir

* user tests

* Changed to default to browser storage and add server-storage arg

* fix crash on empty templates

* fix settings added before load

* ignore parse errors
2024-01-08 17:06:44 -05:00
comfyanonymous
c6951548cf Update optimized_attention_for_device function for new functions that
support masked attention.
2024-01-07 13:52:08 -05:00
comfyanonymous
aaa9017302 Add attention mask support to sub quad attention. 2024-01-07 04:13:58 -05:00
comfyanonymous
0c2c9fbdfa Support attention mask in split attention. 2024-01-06 13:16:48 -05:00
comfyanonymous
3ad0191bfb Implement attention mask on xformers. 2024-01-06 04:33:03 -05:00
doctorpangloss
42232f4d20 fix module ref 2024-01-03 20:12:33 -08:00
doctorpangloss
345825dfb5 Fix issues with missing __init__ in upscaler, move web/ directory to comfy/web so that the need for symbolic link support on windows is eliminated 2024-01-03 16:35:00 -08:00
doctorpangloss
d31298ac60 gligen is missing math import 2024-01-03 16:01:44 -08:00
doctorpangloss
369aeb598f Merge upstream, fix 3.12 compatibility, fix nightlies issue, fix broken node 2024-01-03 16:00:36 -08:00
comfyanonymous
8c6493578b Implement noise augmentation for SD 4X upscale model. 2024-01-03 14:27:11 -05:00
comfyanonymous
ef4f6037cb Fix model patches not working in custom sampling scheduler nodes. 2024-01-03 12:16:30 -05:00
comfyanonymous
a7874d1a8b Add support for the stable diffusion x4 upscaling model.
This is an old model.

Load the checkpoint like a regular one and use the new
SD_4XUpscale_Conditioning node.
2024-01-03 03:37:56 -05:00
comfyanonymous
2c4e92a98b Fix regression. 2024-01-02 14:41:33 -05:00
comfyanonymous
5eddfdd80c Refactor VAE code.
Replace constants with downscale_ratio and latent_channels.
2024-01-02 13:24:34 -05:00
comfyanonymous
a47f609f90 Auto detect out_channels from model. 2024-01-02 01:50:57 -05:00
comfyanonymous
79f73a4b33 Remove useless code. 2024-01-02 01:50:29 -05:00
comfyanonymous
1b103e0cb2 Add argument to run the VAE on the CPU. 2023-12-30 05:49:07 -05:00
comfyanonymous
12e822c6c8 Use function to calculate model size in model patcher. 2023-12-28 21:46:20 -05:00
comfyanonymous
e1e322cf69 Load weights that can't be lowvramed to target device. 2023-12-28 21:41:10 -05:00
comfyanonymous
c782144433 Fix clip vision lowvram mode not working. 2023-12-27 13:50:57 -05:00
comfyanonymous
f21bb41787 Fix taesd VAE in lowvram mode. 2023-12-26 12:52:21 -05:00
comfyanonymous
61b3f15f8f Fix lowvram mode not working with unCLIP and Revision code. 2023-12-26 05:02:02 -05:00
comfyanonymous
d0165d819a Fix SVD lowvram mode. 2023-12-24 07:13:18 -05:00
comfyanonymous
a252963f95 --disable-smart-memory now unloads everything like it did originally. 2023-12-23 04:25:06 -05:00
comfyanonymous
36a7953142 Greatly improve lowvram sampling speed by getting rid of accelerate.
Let me know if this breaks anything.
2023-12-22 14:38:45 -05:00
comfyanonymous
261bcbb0d9 A few missing comfy ops in the VAE. 2023-12-22 04:05:42 -05:00
comfyanonymous
9a7619b72d Fix regression with inpaint model. 2023-12-19 02:32:59 -05:00
comfyanonymous
571ea8cdcc Fix SAG not working with cfg 1.0 2023-12-18 17:03:32 -05:00
comfyanonymous
8cf1daa108 Fix SDXL area composition sometimes not using the right pooled output. 2023-12-18 12:54:23 -05:00
comfyanonymous
2258f85159 Support stable zero 123 model.
To use it use the ImageOnlyCheckpointLoader to load the checkpoint and
the new Stable_Zero123 node.
2023-12-18 03:48:04 -05:00
comfyanonymous
2f9d6a97ec Add --deterministic option to make pytorch use deterministic algorithms. 2023-12-17 16:59:21 -05:00
comfyanonymous
e45d920ae3 Don't resize clip vision image when the size is already good. 2023-12-16 03:06:10 -05:00
comfyanonymous
13e6d5366e Switch clip vision to manual cast.
Make it use the same dtype as the text encoder.
2023-12-16 02:47:26 -05:00
comfyanonymous
719fa0866f Set clip vision model in eval mode so it works without inference mode. 2023-12-15 18:53:08 -05:00
Hari
574363a8a6 Implement Perp-Neg 2023-12-16 00:28:16 +05:30
comfyanonymous
a5056cfb1f Remove useless code. 2023-12-15 01:28:16 -05:00
comfyanonymous
329c571993 Improve code legibility. 2023-12-14 11:41:49 -05:00
comfyanonymous
6c5990f7db Fix cfg being calculated more than once if sampler_cfg_function. 2023-12-13 20:28:04 -05:00
comfyanonymous
ba04a87d10 Refactor and improve the sag node.
Moved all the sag related code to comfy_extras/nodes_sag.py
2023-12-13 16:11:26 -05:00
Rafie Walker
6761233e9d
Implement Self-Attention Guidance (#2201)
* First SAG test

* need to put extra options on the model instead of patcher

* no errors and results seem not-broken

* Use @ashen-uncensored formula, which works better!!!

* Fix a crash when using weird resolutions. Remove an unnecessary UNet call

* Improve comments, optimize memory in blur routine

* SAG works with sampler_cfg_function
2023-12-13 15:52:11 -05:00
comfyanonymous
b454a67bb9 Support segmind vega model. 2023-12-12 19:09:53 -05:00
comfyanonymous
824e4935f5 Add dtype parameter to VAE object. 2023-12-12 12:03:29 -05:00
comfyanonymous
32b7e7e769 Add manual cast to controlnet. 2023-12-12 11:32:42 -05:00
comfyanonymous
3152023fbc Use inference dtype for unet memory usage estimation. 2023-12-11 23:50:38 -05:00
comfyanonymous
77755ab8db Refactor comfy.ops
comfy.ops -> comfy.ops.disable_weight_init

This should make it clearer what they actually do.

Some unused code has also been removed.
2023-12-11 23:27:13 -05:00
comfyanonymous
b0aab1e4ea Add an option --fp16-unet to force using fp16 for the unet. 2023-12-11 18:36:29 -05:00
comfyanonymous
ba07cb748e Use faster manual cast for fp8 in unet. 2023-12-11 18:24:44 -05:00
comfyanonymous
57926635e8 Switch text encoder to manual cast.
Use fp16 text encoder weights for CPU inference to lower memory usage.
2023-12-10 23:00:54 -05:00
comfyanonymous
340177e6e8 Disable non blocking on mps. 2023-12-10 01:30:35 -05:00
comfyanonymous
614b7e731f Implement GLora. 2023-12-09 18:15:26 -05:00
comfyanonymous
cb63e230b4 Make lora code a bit cleaner. 2023-12-09 14:15:09 -05:00
comfyanonymous
174eba8e95 Use own clip vision model implementation. 2023-12-09 11:56:31 -05:00
comfyanonymous
97015b6b38 Cleanup. 2023-12-08 16:02:08 -05:00
comfyanonymous
a4ec54a40d Add linear_start and linear_end to model_config.sampling_settings 2023-12-08 02:49:30 -05:00
comfyanonymous
9ac0b487ac Make --gpu-only put intermediate values in GPU memory instead of cpu. 2023-12-08 02:35:45 -05:00
comfyanonymous
efb704c758 Support attention masking in CLIP implementation. 2023-12-07 02:51:02 -05:00
comfyanonymous
fbdb14d4c4 Cleaner CLIP text encoder implementation.
Use a simple CLIP model implementation instead of the one from
transformers.

This will allow some interesting things that would be too hackish to implement
using the transformers implementation.
2023-12-06 23:50:03 -05:00
doctorpangloss
3fd5de9784 fix nodes 2023-12-06 17:30:33 -08:00
comfyanonymous
2db86b4676 Slightly faster lora applying. 2023-12-06 05:13:14 -05:00
comfyanonymous
1bbd65ab30 Missed this one. 2023-12-05 12:48:41 -05:00
comfyanonymous
9b655d4fd7 Fix memory issue with control loras. 2023-12-04 21:55:19 -05:00
comfyanonymous
26b1c0a771 Fix control lora on fp8. 2023-12-04 13:47:41 -05:00
comfyanonymous
be3468ddd5 Less useless downcasting. 2023-12-04 12:53:46 -05:00
comfyanonymous
ca82ade765 Use .itemsize to get dtype size for fp8. 2023-12-04 11:52:06 -05:00
comfyanonymous
31b0f6f3d8 UNET weights can now be stored in fp8.
--fp8_e4m3fn-unet and --fp8_e5m2-unet are the two different formats
supported by pytorch.
2023-12-04 11:10:00 -05:00
comfyanonymous
af365e4dd1 All the unet ops with weights are now handled by comfy.ops 2023-12-04 03:12:18 -05:00
Benjamin Berman
01312a55a4 merge upstream 2023-12-03 20:41:13 -08:00
comfyanonymous
61a123a1e0 A different way of handling multiple images passed to SVD.
Previously when a list of 3 images [0, 1, 2] was used for a 6 frame video
they were concatenated like this:
[0, 1, 2, 0, 1, 2]

now they are concatenated like this:
[0, 0, 1, 1, 2, 2]
2023-12-03 03:31:47 -05:00
comfyanonymous
c97be4db91 Support SD2.1 turbo checkpoint. 2023-11-30 19:27:03 -05:00
comfyanonymous
983ebc5792 Use smart model management for VAE to decrease latency. 2023-11-28 04:58:51 -05:00
comfyanonymous
c45d1b9b67 Add a function to load a unet from a state dict. 2023-11-27 17:41:29 -05:00
comfyanonymous
f30b992b18 .sigma and .timestep now return tensors on the same device as the input. 2023-11-27 16:41:33 -05:00
comfyanonymous
13fdee6abf Try to free memory for both cond+uncond before inference. 2023-11-27 14:55:40 -05:00
comfyanonymous
be71bb5e13 Tweak memory inference calculations a bit. 2023-11-27 14:04:16 -05:00
comfyanonymous
39e75862b2 Fix regression from last commit. 2023-11-26 03:43:02 -05:00
comfyanonymous
50dc39d6ec Clean up the extra_options dict for the transformer patches.
Now everything in transformer_options gets put in extra_options.
2023-11-26 03:13:56 -05:00
comfyanonymous
5d6dfce548 Fix importing diffusers unets. 2023-11-24 20:35:29 -05:00
comfyanonymous
3e5ea74ad3 Make buggy xformers fall back on pytorch attention. 2023-11-24 03:55:35 -05:00
comfyanonymous
871cc20e13 Support SVD img2vid model. 2023-11-23 19:41:33 -05:00
comfyanonymous
410bf07771 Make VAE memory estimation take dtype into account. 2023-11-22 18:17:19 -05:00
comfyanonymous
32447f0c39 Add sampling_settings so models can specify specific sampling settings. 2023-11-22 17:24:00 -05:00
comfyanonymous
c3ae99a749 Allow controlling downscale and upscale methods in PatchModelAddDownscale. 2023-11-22 03:23:16 -05:00
comfyanonymous
72741105a6 Remove useless code. 2023-11-21 17:27:28 -05:00
comfyanonymous
6a491ebe27 Allow model config to preprocess the vae state dict on load. 2023-11-21 16:29:18 -05:00
comfyanonymous
cd4fc77d5f Add taesd and taesdxl to VAELoader node.
They will show up if both the taesd_encoder and taesd_decoder or taesdxl
model files are present in the models/vae_approx directory.
2023-11-21 12:54:19 -05:00
comfyanonymous
ce67dcbcda Make it easy for models to process the unet state dict on load. 2023-11-20 23:17:53 -05:00
comfyanonymous
d9d8702d8d percent_to_sigma now returns a float instead of a tensor. 2023-11-18 23:20:29 -05:00
comfyanonymous
0cf4e86939 Add some command line arguments to store text encoder weights in fp8.
Pytorch supports two variants of fp8:
--fp8_e4m3fn-text-enc (the one that seems to give better results)
--fp8_e5m2-text-enc
2023-11-17 02:56:59 -05:00
comfyanonymous
107e78b1cb Add support for loading SSD1B diffusers unet version.
Improve diffusers model detection.
2023-11-16 23:12:55 -05:00
comfyanonymous
7e3fe3ad28 Make deep shrink behave like it should. 2023-11-16 15:26:28 -05:00
comfyanonymous
9f00a18095 Fix potential issues. 2023-11-16 14:59:54 -05:00
comfyanonymous
7ea6bb038c Print warning when controlnet can't be applied instead of crashing. 2023-11-16 12:57:12 -05:00
comfyanonymous
dcec1047e6 Invert the start and end percentages in the code.
This doesn't affect how percentages behave in the frontend but breaks
things if you relied on them in the backend.

percent_to_sigma goes from 0 to 1.0 instead of 1.0 to 0 for less confusion.

Make percent 0 return an extremely large sigma and percent 1.0 return a
sigma of zero to fix imprecision.
2023-11-16 04:23:44 -05:00
comfyanonymous
57eea0efbb heunpp2 sampler. 2023-11-14 23:50:55 -05:00
comfyanonymous
728613bb3e Fix last pr. 2023-11-14 14:41:31 -05:00
comfyanonymous
ec3d0ab432 Merge branch 'master' of https://github.com/Jannchie/ComfyUI 2023-11-14 14:38:07 -05:00
comfyanonymous
c962884a5c Make bislerp work on GPU. 2023-11-14 11:38:36 -05:00
comfyanonymous
420beeeb05 Clean up and refactor sampler code.
This should make it much easier to write custom nodes with kdiffusion type
samplers.
2023-11-14 00:39:34 -05:00
Jianqi Pan
f2e49b1d57 fix: adaptation to older versions of pytorch 2023-11-14 14:32:05 +09:00
comfyanonymous
94cc718e9c Add a way to add patches to the input block. 2023-11-14 00:08:12 -05:00
comfyanonymous
7339479b10 Disable xformers when it can't load properly. 2023-11-13 12:31:10 -05:00
comfyanonymous
4781819a85 Make memory estimation aware of model dtype. 2023-11-12 04:28:26 -05:00
comfyanonymous
dd4ba68b6e Allow different models to estimate memory usage differently. 2023-11-12 04:03:52 -05:00
comfyanonymous
2c9dba8dc0 sampling_function now has the model object as the argument. 2023-11-12 03:45:10 -05:00
comfyanonymous
8d80584f6a Remove useless argument from uni_pc sampler. 2023-11-12 01:25:33 -05:00
comfyanonymous
248aa3e563 Fix bug. 2023-11-11 12:20:16 -05:00
comfyanonymous
4a8a839b40 Add option to use in place weight updating in ModelPatcher. 2023-11-11 01:11:12 -05:00
comfyanonymous
412d3ff57d Refactor. 2023-11-11 01:11:06 -05:00
comfyanonymous
58d5d71a93 Working RescaleCFG node.
This was broken because of recent changes so I fixed it and moved it from
the experiments repo.
2023-11-10 20:52:10 -05:00
comfyanonymous
3e0033ef30 Fix model merge bug.
Unload models before getting weights for model patching.
2023-11-10 03:19:05 -05:00
comfyanonymous
002aefa382 Support lcm models.
Use the "lcm" sampler to sample them; you also have to use the
ModelSamplingDiscrete node to set them as lcm models for them to work properly.
2023-11-09 18:30:22 -05:00
comfyanonymous
ec12000136 Add support for full diff lora keys. 2023-11-08 22:05:31 -05:00
comfyanonymous
064d7583eb Add a CONDConstant for passing non tensor conds to unet. 2023-11-08 01:59:09 -05:00
comfyanonymous
794dd2064d Fix typo. 2023-11-07 23:41:55 -05:00
comfyanonymous
0a6fd49a3e Print leftover keys when using the UNETLoader. 2023-11-07 22:15:55 -05:00
comfyanonymous
fe40109b57 Fix issue with object patches not being copied with patcher. 2023-11-07 22:15:15 -05:00
comfyanonymous
a527d0c795 Code refactor. 2023-11-07 19:33:40 -05:00
comfyanonymous
2a23ba0b8c Fix unet ops not entirely on GPU. 2023-11-07 04:30:37 -05:00
comfyanonymous
844dbf97a7 Add: advanced->model->ModelSamplingDiscrete node.
This allows changing the sampling parameters of the model (eps or vpred)
or set the model to use zsnr.
2023-11-07 03:28:53 -05:00
comfyanonymous
656c0b5d90 CLIP code refactor and improvements.
More generic clip model class that can be used on more types of text
encoders.

Don't apply weighting algorithm when weight is 1.0

Don't compute an empty token output when it's not needed.
2023-11-06 14:17:41 -05:00
comfyanonymous
b3fcd64c6c Make SDTokenizer class work with more types of tokenizers. 2023-11-06 01:09:18 -05:00
gameltb
7e455adc07 fix unet_wrapper_function name in ModelPatcher 2023-11-05 17:11:44 +08:00
comfyanonymous
1ffa8858e7 Move model sampling code to comfy/model_sampling.py 2023-11-04 01:32:23 -04:00
comfyanonymous
ae2acfc21b Don't convert Nan to zero.
Converting Nan to zero is a bad idea because it makes it hard to tell when
something went wrong.
2023-11-03 13:13:15 -04:00
comfyanonymous
d2e27b48f1 sampler_cfg_function now gets the noisy output as argument again.
This should make things that use sampler_cfg_function behave like before.

Added an input argument for those that want the denoised output.

This means you can calculate the x0 prediction of the model by doing:
(input - cond) for example.
2023-11-01 21:24:08 -04:00
comfyanonymous
2455aaed8a Allow model or clip to be None in load_lora_for_models. 2023-11-01 20:27:20 -04:00
comfyanonymous
ecb80abb58 Allow ModelSamplingDiscrete to be instantiated without a model config. 2023-11-01 19:13:03 -04:00
comfyanonymous
e73ec8c4da Not used anymore. 2023-11-01 00:01:30 -04:00
comfyanonymous
111f1b5255 Fix some issues with sampling precision. 2023-10-31 23:49:29 -04:00
comfyanonymous
7c0f255de1 Clean up percent start/end and make controlnets work with sigmas. 2023-10-31 22:14:32 -04:00
comfyanonymous
a268a574fa Remove a bunch of useless code.
DDIM is the same as euler with a small difference in the inpaint code.
DDIM uses randn_like but I set a fixed seed instead.

I'm keeping it in because I'm sure if I remove it people are going to
complain.
2023-10-31 18:11:29 -04:00
comfyanonymous
1777b54d02 Sampling code changes.
apply_model in model_base now returns the denoised output.

This means that sampling_function now computes things on the denoised
output instead of the model output. This should make things more consistent
across current and future models.
2023-10-31 17:33:43 -04:00
comfyanonymous
c837a173fa Fix some memory issues in sub quad attention. 2023-10-30 15:30:49 -04:00
comfyanonymous
125b03eead Fix some OOM issues with split attention. 2023-10-30 13:14:11 -04:00
comfyanonymous
a12cc05323 Add --max-upload-size argument, the default is 100MB. 2023-10-29 03:55:46 -04:00
comfyanonymous
2a134bfab9 Fix checkpoint loader with config. 2023-10-27 22:13:55 -04:00
comfyanonymous
e60ca6929a SD1 and SD2 clip and tokenizer code is now more similar to the SDXL one. 2023-10-27 15:54:04 -04:00
comfyanonymous
6ec3f12c6e Support SSD1B model and make it easier to support asymmetric unets. 2023-10-27 14:45:15 -04:00
comfyanonymous
434ce25ec0 Restrict loading embeddings from embedding folders. 2023-10-27 02:54:13 -04:00
comfyanonymous
723847f6b3 Faster clip image processing. 2023-10-26 01:53:01 -04:00
comfyanonymous
a373367b0c Fix some OOM issues with split and sub quad attention. 2023-10-25 20:17:28 -04:00
comfyanonymous
7fbb217d3a Fix uni_pc returning noisy image when steps <= 3 2023-10-25 16:08:30 -04:00
Jedrzej Kosinski
3783cb8bfd change 'c_adm' to 'y' in ControlNet.get_control 2023-10-25 08:24:32 -05:00
comfyanonymous
d1d2fea806 Pass extra conds directly to unet. 2023-10-25 00:07:53 -04:00
comfyanonymous
036f88c621 Refactor to make it easier to add custom conds to models. 2023-10-24 23:31:12 -04:00
comfyanonymous
3fce8881ca Sampling code refactor to make it easier to add more conds. 2023-10-24 03:38:41 -04:00
comfyanonymous
8594c8be4d Empty the cache when torch cache is more than 25% free mem. 2023-10-22 13:58:12 -04:00
comfyanonymous
8b65f5de54 attention_basic now works with hypertile. 2023-10-22 03:59:53 -04:00
comfyanonymous
e6bc42df46 Make sub_quad and split work with hypertile. 2023-10-22 03:51:29 -04:00
comfyanonymous
a0690f9df9 Fix t2i adapter issue. 2023-10-21 20:31:24 -04:00
comfyanonymous
9906e3efe3 Make xformers work with hypertile. 2023-10-21 13:23:03 -04:00
comfyanonymous
4185324a1d Fix uni_pc sampler math. This changes the images this sampler produces. 2023-10-20 04:16:53 -04:00
comfyanonymous
e6962120c6 Make sure cond_concat is on the right device. 2023-10-19 01:14:25 -04:00
comfyanonymous
45c972aba8 Refactor cond_concat into conditioning. 2023-10-18 20:36:58 -04:00
comfyanonymous
430a8334c5 Fix some potential issues. 2023-10-18 19:48:36 -04:00
comfyanonymous
782a24fce6 Refactor cond_concat into model object. 2023-10-18 16:48:37 -04:00
comfyanonymous
0d45a565da Fix memory issue related to control loras.
The cleanup function was not getting called.
2023-10-18 02:43:01 -04:00
doctorpangloss
04d0ecd0d4 merge upstream 2023-10-17 17:49:31 -07:00