Commit Graph

425 Commits

comfyanonymous
cac68ca813 Fix some more video tiled encode issues.
The downscale_ratio formula for the temporal dimension had issues with some frame counts.
2024-12-19 23:14:03 -05:00
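For context, a hedged sketch of the pixel-frame to latent-frame mapping such temporal downscale formulas typically implement (the exact in-repo formula may differ; the ratio value is only an illustration):

```python
def temporal_latent_frames(pixel_frames: int, temporal_ratio: int = 4) -> int:
    # Common video-VAE convention: the first frame maps to one latent frame and every
    # further `temporal_ratio` frames add one more, e.g. 1, 5, 9 pixel frames -> 1, 2, 3
    # latent frames when temporal_ratio == 4. Frame counts that are not 1 mod the ratio
    # are where off-by-one issues like the one fixed here tend to show up.
    return (pixel_frames - 1) // temporal_ratio + 1

print(temporal_latent_frames(49))  # 13
```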
comfyanonymous
c441048a4f Make VAE Encode tiled node work with video VAE. 2024-12-19 05:31:39 -05:00
doctorpangloss
86b15084d5 Fix issues with directories and running on macOS
- include detailed runtime instructions for Windows and macOS
- include instructions for agreeing to use Hugging Face repositories
- always create directories by default when run interactively
- model downloader now supports multiple folder names for known models
- improve logging in sd.py
2024-12-18 15:37:16 -08:00
comfyanonymous
d6656b0c0c Support llama hunyuan video text encoder in scaled fp8 format. 2024-12-17 04:19:22 -05:00
comfyanonymous
f4cdedea62 Fix regression with ltxv VAE. 2024-12-17 02:17:31 -05:00
comfyanonymous
39b1fc4ccc Adjust used dtypes for hunyuan video VAE and diffusion model. 2024-12-16 23:31:10 -05:00
comfyanonymous
bda1482a27 Basic Hunyuan Video model support. 2024-12-16 19:35:40 -05:00
Chenlei Hu
d9d7f3c619
Lint all unused variables (#5989)
* Enable F841

* Autofix

* Remove all unused variable assignments
2024-12-12 17:59:16 -05:00
doctorpangloss
d989e65fde Update ComfyUI and fix tests 2024-12-09 19:45:17 -08:00
doctorpangloss
2d1676c717 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2024-12-09 15:54:37 -08:00
Jedrzej Kosinski
0ee322ec5f
ModelPatcher Overhaul and Hook Support (#5583)
* Added hook_patches to ModelPatcher for weights (model)

* Initial changes to calc_cond_batch to eventually support hook_patches

* Added current_patcher property to BaseModel

* Consolidated add_hook_patches_as_diffs into add_hook_patches func, fixed fp8 support for model-as-lora feature

* Added call to initialize_timesteps on hooks in process_conds func, and added call prepare current keyframe on hooks in calc_cond_batch

* Added default_conds support in calc_cond_batch func

* Added initial set of hook-related nodes, added code to register hooks for loras/model-as-loras, small renaming/refactoring

* Made CLIP work with hook patches

* Added initial hook scheduling nodes, small renaming/refactoring

* Fixed MaxSpeed and default conds implementations

* Added support for adding weight hooks that aren't registered on the ModelPatcher at sampling time

* Made Set Clip Hooks node work with hooks from Create Hook nodes, began work on better Create Hook Model As LoRA node

* Initial work on adding 'model_as_lora' lora type to calculate_weight

* Continued work on simpler Create Hook Model As LoRA node, started to implement ModelPatcher callbacks, attachments, and additional_models

* Fix incorrect ref to create_hook_patches_clone after moving function

* Added injections support to ModelPatcher + necessary bookkeeping, added additional_models support in ModelPatcher, conds, and hooks

* Added wrappers to ModelPatcher to facilitate standardized function wrapping

* Started scaffolding for other hook types, refactored get_hooks_from_cond to organize hooks by type

* Fix skip_until_exit logic bug breaking injection after first run of model

* Updated clone_has_same_weights function to account for new ModelPatcher properties, improved AutoPatcherEjector usage in partially_load

* Added WrapperExecutor for non-classbound functions, added calc_cond_batch wrappers

* Refactored callbacks+wrappers to allow storing lists by id

* Added forward_timestep_embed_patch type, added helper functions on ModelPatcher for emb_patch and forward_timestep_embed_patch, added helper functions for removing callbacks/wrappers/additional_models by key, added custom_should_register prop to hooks

* Added get_attachment func on ModelPatcher

* Implement basic MemoryCounter system for determining whether cached weights due to hooks should be offloaded in hooks_backup

* Modified ControlNet/T2IAdapter get_control function to receive transformer_options as additional parameter, made the model_options stored in extra_args in inner_sample be a clone of the original model_options instead of same ref

* Added create_model_options_clone func, modified type annotations to use __future__ so that I can use the better type annotations

* Refactored WrapperExecutor code to remove need for WrapperClassExecutor (now gone), added sampler.sample wrapper (pending review, will likely keep but will see what hacks this could currently let me get rid of in ACN/ADE)

* Added Combine versions of Cond/Cond Pair Set Props nodes, renamed Pair Cond to Cond Pair, fixed default conds never applying hooks (due to hooks key typo)

* Renamed Create Hook Model As LoRA nodes to make the test node the main one (more changes pending)

* Added uuid to conds in CFGGuider and uuids to transformer_options to allow uniquely identifying conds in batches during sampling

* Fixed models not being unloaded properly due to current_patcher reference; the current ComfyUI model cleanup code requires that nothing else has a reference to the ModelPatcher instances

* Fixed default conds not respecting hook keyframes, made keyframes not reset cache when strength is unchanged, fixed Cond Set Default Combine throwing error, fixed model-as-lora throwing error during calculate_weight after a recent ComfyUI update, small refactoring/scaffolding changes for hooks

* Changed CreateHookModelAsLoraTest to be the new CreateHookModelAsLora, rename old ones as 'direct' and will be removed prior to merge

* Added initial support within CLIP Text Encode (Prompt) node for scheduling weight hook CLIP strength via clip_start_percent/clip_end_percent on conds, added schedule_clip toggle to Set CLIP Hooks node, small cleanup/fixes

* Fix range check in get_hooks_for_clip_schedule so that proper keyframes get assigned to corresponding ranges

* Optimized CLIP hook scheduling to treat same strength as same keyframe

* Less fragile memory management.

* Make encode_from_tokens_scheduled call cleaner, rollback change in model_patcher.py for hook_patches_backup dict

* Fix issue.

* Remove useless function.

* Prevent and detect some types of memory leaks.

* Run garbage collector when switching workflow if needed.

* Moved WrappersMP/CallbacksMP/WrapperExecutor to patcher_extension.py

* Refactored code to store wrappers and callbacks in transformer_options, added apply_model and diffusion_model.forward wrappers

* Fix issue.

* Refactored hooks in calc_cond_batch to be part of get_area_and_mult tuple, added extra_hooks to ControlBase to allow custom controlnets w/ hooks, small cleanup and renaming

* Fixed inconsistency of results when schedule_clip is set to False, small renaming/typo fixing, added initial support for ControlNet extra_hooks to work in tandem with normal cond hooks, initial work on calc_cond_batch merging all subdicts in returned transformer_options

* Modified callbacks and wrappers so that unregistered types can be used, allowing custom_nodes to have their own unique callbacks/wrappers if desired

* Updated different hook types to reflect actual progress of implementation, initial scaffolding for working WrapperHook functionality

* Fixed existing weight hook_patches (pre-registered) not working properly for CLIP

* Removed Register/Direct hook nodes since they were present only for testing, removed diff-related weight hook calculation as improved_memory removes unload_model_clones and using sample time registered hooks is less hacky

* Added clip scheduling support to all other native ComfyUI text encoding nodes (sdxl, flux, hunyuan, sd3)

* Made WrapperHook functional, added another wrapper/callback getter, added ON_DETACH callback to ModelPatcher

* Made opt_hooks append by default instead of replace, renamed comfy.hooks set functions to be more accurate

* Added apply_to_conds to Set CLIP Hooks, modified relevant code to allow text encoding to automatically apply hooks to output conds when apply_to_conds is set to True

* Fix cached_hook_patches not respecting target_device/memory_counter results

* Fixed issue with setting weights from hooks instead of copying them, added additional memory_counter check when caching hook patches

* Remove unnecessary torch.no_grad calls for hook patches

* Increased MemoryCounter minimum memory to leave free by *2 until a better way to get inference memory estimate of currently loaded models exists

* For encode_from_tokens_scheduled, allow start_percent and end_percent in add_dict to limit which scheduled conds get encoded for optimization purposes

* Removed a .to call on results of calculate_weight in patch_hook_weight_to_device that was screwing up the intermediate results for fp8 prior to being passed into stochastic_rounding call

* Made encode_from_tokens_scheduled work when no hooks are set on patcher

* Small cleanup of comments

* Turn off hook patch caching when only 1 hook present in sampling, replace some current_hook = None with calls to self.patch_hooks(None) instead to avoid a potential edge case

* On Cond/Cond Pair nodes, removed opt_ prefix from optional inputs

* Allow both FLOATS and FLOAT for floats_strength input

* Revert change, does not work

* Made patch_hook_weight_to_device respect set_func and convert_func

* Make discard_model_sampling True by default

* Add changes manually from 'master' so merge conflict resolution goes more smoothly

* Cleaned up text encode nodes with just a single clip.encode_from_tokens_scheduled call

* Make sure encode_from_tokens_scheduled will respect use_clip_schedule on clip

* Made nodes in nodes_hooks be marked as experimental (beta)

* Add get_nested_additional_models for cases where additional_models could have their own additional_models, and add robustness for circular additional_models references

* Made finalize_default_conds area math consistent with other sampling code

* Changed 'opt_hooks' input of Cond/Cond Pair Set Default Combine nodes to 'hooks'

* Remove a couple old TODO's and a no longer necessary workaround
2024-12-02 14:51:02 -05:00
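To illustrate the encode path this PR converges on (the single clip.encode_from_tokens_scheduled call mentioned in the bullets above), here is a minimal, hypothetical node sketch; exact signatures and return shapes are assumptions, not the shipped node:

```python
class CLIPTextEncodeScheduledSketch:
    # Hypothetical node following the pattern described in the PR above.
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {"clip": ("CLIP",), "text": ("STRING", {"multiline": True})}}

    RETURN_TYPES = ("CONDITIONING",)
    FUNCTION = "encode"
    CATEGORY = "conditioning"

    def encode(self, clip, text):
        tokens = clip.tokenize(text)
        # One call replaces the old encode_from_tokens path; when weight hooks are scheduled
        # on the CLIP object (Set CLIP Hooks with schedule_clip), this returns one cond per
        # keyframe range, each limited to its clip_start_percent/clip_end_percent window.
        return (clip.encode_from_tokens_scheduled(tokens),)
```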
doctorpangloss
4b77c4941c LTXV tests 2024-11-22 17:13:19 -08:00
doctorpangloss
f39b8dfebc Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2024-11-22 15:50:23 -08:00
comfyanonymous
6e8cdcd3cb Fix some tiled VAE decoding issues with LTX-Video. 2024-11-22 18:00:34 -05:00
comfyanonymous
5e16f1d24b Support Lightricks LTX-Video model. 2024-11-22 08:46:39 -05:00
comfyanonymous
8f0009aad0 Support new flux model variants. 2024-11-21 08:38:23 -05:00
doctorpangloss
c0f072ee0f Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2024-11-18 13:12:31 -08:00
comfyanonymous
2865f913f7 Free memory before doing tiled decode. 2024-11-07 04:01:24 -05:00
comfyanonymous
b49616f951 Make VAEDecodeTiled node work with video VAEs. 2024-11-07 03:47:12 -05:00
comfyanonymous
8afb97cd3f Fix unknown VAE being detected as the mochi VAE. 2024-11-05 03:43:27 -05:00
doctorpangloss
772e768fe8 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2024-11-04 10:17:26 -08:00
comfyanonymous
fabf449feb Mochi VAE encoder. 2024-11-01 17:33:09 -04:00
doctorpangloss
76a80a65ea Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2024-10-29 15:35:39 -07:00
comfyanonymous
c320801187 Remove useless line. 2024-10-28 17:41:12 -04:00
comfyanonymous
5cbb01bc2f Basic Genmo Mochi video model support.
To use:
"Load CLIP" node with t5xxl + type mochi
"Load Diffusion Model" node with the mochi dit file.
"Load VAE" with the mochi vae file.

EmptyMochiLatentVideo node for the latent.
euler + linear_quadratic in the KSampler node.
2024-10-26 06:54:00 -04:00
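The same setup, sketched in code rather than nodes; file paths are hypothetical, and CLIPType.MOCHI plus the exact loader signatures are assumptions based on the commit text:

```python
import comfy.sd
import comfy.utils

clip = comfy.sd.load_clip(["models/text_encoders/t5xxl_fp16.safetensors"],
                          clip_type=comfy.sd.CLIPType.MOCHI)
model = comfy.sd.load_diffusion_model("models/diffusion_models/mochi_dit.safetensors")
vae = comfy.sd.VAE(sd=comfy.utils.load_torch_file("models/vae/mochi_vae.safetensors"))
# Sampling then follows the node setup above: an empty Mochi latent, euler + linear_quadratic.
```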
comfyanonymous
0075c6d096 Mixed precision diffusion models with scaled fp8.
This change adds support for diffusion models where all the linear layers are
scaled fp8 while the other weights stay in their original precision.
2024-10-21 18:12:51 -04:00
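A hedged illustration of what "scaled fp8" means for a single linear weight; the storage layout and key names in real checkpoints are assumptions, only the idea is shown:

```python
import torch

w_fp8 = torch.randn(64, 64).to(torch.float8_e4m3fn)  # linear weight stored in fp8
scale = torch.tensor(0.03)                            # per-tensor scale stored alongside it
w = w_fp8.to(torch.float16) * scale                   # dequantized weight used at compute time
# Non-linear weights (norms, embeddings, etc.) stay in their original precision.
```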
comfyanonymous
83ca891118 Support scaled fp8 t5xxl model. 2024-10-20 22:27:00 -04:00
comfyanonymous
a68bbafddb Support diffusion models with scaled fp8 weights. 2024-10-19 23:47:42 -04:00
doctorpangloss
8512f361fe Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2024-10-14 15:26:27 -07:00
comfyanonymous
6632365e16 model_options consistency between functions.
weight_dtype -> dtype
2024-10-11 20:51:19 -04:00
comfyanonymous
1b80895285 Make clip loader nodes support loading sd3 t5xxl in lower precision.
Add attention mask support in the SD3 text encoder code.
2024-10-10 15:06:15 -04:00
Dr.Lt.Data
5f9d5a244b
Hotfix for the div zero occurrence when memory_used_encode is 0 (#5121)
https://github.com/comfyanonymous/ComfyUI/issues/5069#issuecomment-2382656368
2024-10-09 23:34:34 -04:00
comfyanonymous
e38c94228b Add a weight_dtype fp8_e4m3fn_fast to the Diffusion Model Loader node.
This is used to load weights in fp8 and use fp8 matrix multiplication.
2024-10-09 19:43:17 -04:00
comfyanonymous
7d2467e830 Some minor cleanups. 2024-10-05 13:22:39 -04:00
comfyanonymous
abcd006b8c Allow more permutations of clip/t5 in dual clip loader. 2024-10-03 09:26:11 -04:00
comfyanonymous
d985d1d7dc CLIP Loader node now supports clip_l and clip_g only for SD3. 2024-10-02 04:25:17 -04:00
comfyanonymous
d1cdf51e1b Refactor some of the TE detection code. 2024-10-01 07:08:41 -04:00
doctorpangloss
fa3176f96f Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2024-09-23 12:50:31 -07:00
comfyanonymous
ad66f7c7d8 Add model_options to load_controlnet function. 2024-09-19 08:23:35 -04:00
comfyanonymous
d514bb38ee Add some options to model_options for the text encoder.
load_device, offload_device and initial_device can now be set.
2024-09-17 03:49:54 -04:00
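A hedged example of passing those options when loading a text encoder in code; the path is hypothetical, the key names come from the commit message, and the rest of the call is assumed:

```python
import torch
import comfy.sd

clip = comfy.sd.load_clip(
    ["models/text_encoders/clip_l.safetensors"],  # hypothetical path
    model_options={
        "load_device": torch.device("cuda:0"),    # device used while encoding
        "offload_device": torch.device("cpu"),    # where weights rest between uses
        "initial_device": torch.device("cpu"),    # device the weights are first placed on
    },
)
```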
comfyanonymous
e813abbb2c Long CLIP L support for SDXL, SD3 and Flux.
Use the *CLIPLoader nodes.
2024-09-15 07:59:38 -04:00
doctorpangloss
5155a3e248 Merge WIP 2024-08-25 18:52:29 -07:00
comfyanonymous
d1a6bd6845 Support loading long clipl model with the CLIP loader node. 2024-08-20 10:46:36 -04:00
comfyanonymous
9eee470244 New load_text_encoder_state_dicts function.
Now you can load text encoders straight from a list of state dicts.
2024-08-19 17:36:35 -04:00
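A hedged sketch of the new entry point; file names are hypothetical and the keyword names are assumed from the surrounding API:

```python
import comfy.sd
import comfy.utils

sd_clip_l = comfy.utils.load_torch_file("models/text_encoders/clip_l.safetensors")
sd_t5 = comfy.utils.load_torch_file("models/text_encoders/t5xxl_fp16.safetensors")
clip = comfy.sd.load_text_encoder_state_dicts(
    [sd_clip_l, sd_t5],                    # text encoders passed straight as state dicts
    clip_type=comfy.sd.CLIPType.FLUX,
    embedding_directory="models/embeddings",
)
```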
comfyanonymous
4f7a3cb6fb unet -> diffusion_models. 2024-08-17 21:31:04 -04:00
comfyanonymous
fca42836f2 Add model_options for text encoder. 2024-08-17 11:17:20 -04:00
doctorpangloss
0549f35e85 Merge commit '39fb74c5bd13a1dccf4d7293a2f7a755d9f43cbd' of github.com:comfyanonymous/ComfyUI
- Improvements to tests
- Fixes model management
- Fixes issues with language nodes
2024-08-13 20:08:56 -07:00
comfyanonymous
74e124f4d7 Fix some issues with TE being in lowvram mode. 2024-08-12 23:42:21 -04:00
comfyanonymous
a562c17e8a load_unet -> load_diffusion_model with a model_options argument. 2024-08-12 23:20:57 -04:00
comfyanonymous
ad76574cb8 Fix some potential issues with the previous commits. 2024-08-12 00:23:29 -04:00
comfyanonymous
9acfe4df41 Support loading directly to vram with CLIPLoader node. 2024-08-12 00:06:01 -04:00
comfyanonymous
9829b013ea Fix mistake in last commit. 2024-08-12 00:00:17 -04:00
comfyanonymous
5c69cde037 Load TE model straight to vram if certain conditions are met. 2024-08-11 23:52:43 -04:00
comfyanonymous
e9589d6d92 Add a way to set model dtype and ops from load_checkpoint_guess_config. 2024-08-11 08:50:34 -04:00
comfyanonymous
0d82a798a5 Remove the ckpt_path from load_state_dict_guess_config. 2024-08-11 08:37:35 -04:00
ljleb
925fff26fd
alternative to load_checkpoint_guess_config that accepts a loaded state dict (#4249)
* make alternative fn

* add back ckpt path as 2nd argument?
2024-08-11 08:36:52 -04:00
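A hedged example of the state-dict variant next to the path-based one; the path is hypothetical and the return tuple is assumed to mirror load_checkpoint_guess_config:

```python
import comfy.sd
import comfy.utils

state_dict = comfy.utils.load_torch_file("models/checkpoints/sd_xl_base_1.0.safetensors")
model, clip, vae, clipvision = comfy.sd.load_state_dict_guess_config(state_dict)
```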
comfyanonymous
b334605a66 Fix OOMs happening in some cases.
A cloned model patcher sometimes reported a model was loaded on a device
when it wasn't.
2024-08-06 13:36:04 -04:00
doctorpangloss
39c6335331 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2024-08-05 16:13:20 -07:00
comfyanonymous
2ba5cc8b86 Fix some issues. 2024-08-03 15:06:40 -04:00
comfyanonymous
ba9095e5bd Automatically use fp8 for diffusion model weights if:
- The checkpoint contains weights in fp8.
- There isn't enough memory to load the diffusion model in GPU VRAM.
2024-08-03 13:45:19 -04:00
doctorpangloss
0a1ae64b0b Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2024-08-01 16:19:11 -07:00
comfyanonymous
d7430a1651 Add a way to load the diffusion model in fp8 with UNETLoader node. 2024-08-01 13:30:51 -04:00
comfyanonymous
5f98de7697 Load flux t5 in fp8 if weights are in fp8. 2024-08-01 11:05:56 -04:00
comfyanonymous
1589b58d3e Basic Flux Schnell and Flux Dev model implementation. 2024-08-01 09:49:29 -04:00
doctorpangloss
ce5fe01768 Improve performance and memory management of upscale models, improve messaging on models loaded and unloaded from the GPU 2024-07-30 17:05:53 -07:00
doctorpangloss
34522e0914 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2024-07-30 11:11:45 -07:00
comfyanonymous
4ba7fa0244 Refactor: Move sd2_clip.py to text_encoders folder. 2024-07-28 01:19:20 -04:00
comfyanonymous
a5f4292f9f
Basic hunyuan dit implementation. (#4102)
* Let tokenizers return weights to be stored in the saved checkpoint.

* Basic hunyuan dit implementation.

* Fix some resolutions not working.

* Support hydit checkpoint save.

* Init with right dtype.

* Switch to optimized attention in pooler.

* Fix black images on hunyuan dit.
2024-07-25 18:21:08 -04:00
comfyanonymous
f87810cd3e Let tokenizers return weights to be stored in the saved checkpoint. 2024-07-25 10:52:09 -04:00
comfyanonymous
10c919f4c7 Make it possible to load tokenizer data from checkpoints. 2024-07-24 16:43:53 -04:00
doctorpangloss
cc99d89ac6 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2024-07-18 16:31:21 -07:00
doctorpangloss
72baecad87 Improve logging and tracing for validation errors 2024-07-16 12:26:30 -07:00
comfyanonymous
1305fb294c Refactor: Move some code to the comfy/text_encoders folder. 2024-07-15 17:36:24 -04:00
doctorpangloss
3d1d833e6f Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2024-07-15 14:22:49 -07:00
comfyanonymous
a3dffc447a Support AuraFlow Lora and loading model weights in diffusers format.
You can load model weights in diffusers format using the UNETLoader node.
2024-07-13 13:51:40 -04:00
comfyanonymous
9f291d75b3 AuraFlow model implementation. 2024-07-11 16:52:26 -04:00
comfyanonymous
f45157e3ac Fix error message never being shown. 2024-07-11 11:46:51 -04:00
comfyanonymous
5e1fced639 Cleaner support for loading different diffusion model types. 2024-07-11 11:37:31 -04:00
comfyanonymous
ffe0bb0a33 Remove useless code. 2024-07-10 20:33:12 -04:00
comfyanonymous
391c1046cf More flexibility with text encoder return values.
Text encoders can now return values other than the cond and pooled output
to the CONDITIONING.
2024-07-10 20:06:50 -04:00
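For reference, a hedged illustration of the CONDITIONING structure this change generalizes: each entry is a [cond, extras] pair, and encoders can now attach values beyond the pooled output (the extra key shown is hypothetical):

```python
import torch

cond = torch.zeros(1, 77, 768)   # token-level embeddings
pooled = torch.zeros(1, 768)     # pooled output
conditioning = [[cond, {"pooled_output": pooled, "extra_value": 1.0}]]  # extra keys now allowed
```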
comfyanonymous
4040491149 Better T5xxl detection. 2024-07-06 00:53:33 -04:00
doctorpangloss
a13088ccec Merge upstream 2024-07-04 11:58:55 -07:00
comfyanonymous
d7484ef30c Support loading checkpoints with the UNETLoader node. 2024-07-03 11:34:32 -04:00
comfyanonymous
537f35c7bc Don't update dict if contiguous. 2024-07-02 20:21:51 -04:00
Alex "mcmonkey" Goodwin
3f46362d22
fix non-contiguous tensor saving (from channels-last) (#3932) 2024-07-02 20:16:33 -04:00
comfyanonymous
8ceb5a02a3 Support saving stable audio checkpoint that can be loaded back. 2024-06-27 11:06:52 -04:00
comfyanonymous
4ef1479dcd Multi dimension tiled scale function and tiled VAE audio encoding fallback. 2024-06-22 11:57:49 -04:00
comfyanonymous
1e2839f4d9 More proper tiled audio decoding. 2024-06-20 16:50:31 -04:00
comfyanonymous
0d6a57938e Support loading diffusers SD3 model format with UNETLoader node. 2024-06-19 22:21:18 -04:00
comfyanonymous
a45df69570 Basic tiled decoding for audio VAE. 2024-06-17 22:48:23 -04:00
doctorpangloss
8cdc246450 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2024-06-17 16:19:48 -07:00
comfyanonymous
6425252c4f Use fp16 as the default vae dtype for the audio VAE. 2024-06-16 13:12:54 -04:00
comfyanonymous
ca9d300a80 Better estimation for memory usage during audio VAE encoding/decoding. 2024-06-16 11:47:32 -04:00
comfyanonymous
746a0410d4 Fix VAEEncode with taesd3. 2024-06-16 03:10:04 -04:00
comfyanonymous
04e8798c37 Improvements to the TAESD3 implementation. 2024-06-16 02:04:24 -04:00
Dr.Lt.Data
df7db0e027
support TAESD3 (#3738) 2024-06-16 02:03:53 -04:00
comfyanonymous
bb1969cab7 Initial support for the stable audio open model. 2024-06-15 12:14:56 -04:00
doctorpangloss
cac6690481 Add known SD3 model files, merge branch 'master' of github.com:comfyanonymous/ComfyUI 2024-06-12 10:56:41 -07:00
comfyanonymous
69c8d6d8a6 Single and dual clip loader nodes support SD3.
For SD3 you can use the CLIPLoader with only the t5xxl, or the DualCLIPLoader
with only CLIP-L and CLIP-G.
2024-06-11 23:27:39 -04:00
comfyanonymous
0e49211a11 Load the SD3 T5xxl model in the same dtype stored in the checkpoint. 2024-06-11 17:03:26 -04:00
comfyanonymous
5889b7ca0a Support multiple text encoder configurations on SD3. 2024-06-11 13:14:43 -04:00
doctorpangloss
a5d828be77 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2024-06-10 13:21:36 -07:00
comfyanonymous
8c4a9befa7 SD3 Support. 2024-06-10 14:06:23 -04:00
comfyanonymous
0920e0e5fe Remove some unused imports. 2024-05-27 19:08:27 -04:00
doctorpangloss
3d98440fb7 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2024-05-16 14:28:49 -07:00
doctorpangloss
5a9055fe05 Tokenizers are now shallow cloned when CLIP is cloned. This allows nodes to add vocab to the tokenizer, as some checkpoints and LoRAs may require. 2024-05-16 12:39:19 -07:00
doctorpangloss
8741cb3ce8 LLM support in ComfyUI
- Currently uses `transformers`
- Supports model management and correctly loading and unloading models based on what your machine can support
- Includes a Text Diffusers 2 workflow to demonstrate text rendering in SD1.5
2024-05-14 17:30:23 -07:00
comfyanonymous
e1489ad257 Fix issue with lowvram mode breaking model saving. 2024-05-11 21:55:20 -04:00
doctorpangloss
c2fa74f625 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2024-05-09 13:45:38 -07:00
comfyanonymous
93e876a3be Remove warnings that confuse people. 2024-05-09 05:29:42 -04:00
doctorpangloss
3a64e04a93 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2024-05-07 13:57:53 -07:00
comfyanonymous
c61eadf69a Make the load checkpoint with config function call the regular one.
I was going to completely remove this function because it is unmaintainable
but I think this is the best compromise.

The clip skip and v_prediction parts of the configs should still work but
not the fp16 vs fp32.
2024-05-06 20:04:39 -04:00
doctorpangloss
f965fb2bc0 Merge upstream 2024-04-24 22:41:43 -07:00
comfyanonymous
8dc19e40d1 Don't init a VAE model when there are no VAE weights. 2024-04-24 09:20:31 -04:00
comfyanonymous
c59fe9f254 Support VAE without quant_conv. 2024-04-18 21:05:33 -04:00
doctorpangloss
034ffcea03 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2024-04-08 10:02:37 -07:00
comfyanonymous
30abc324c2 Support properly saving CosXL checkpoints. 2024-04-08 00:36:22 -04:00
doctorpangloss
93cdef65a4 Merge upstream 2024-03-12 09:49:47 -07:00
comfyanonymous
0ed72befe1 Change log levels.
Logging level now defaults to info. --verbose sets it to debug.
2024-03-11 13:54:56 -04:00
doctorpangloss
00728eb20f Merge upstream 2024-03-11 09:32:57 -07:00
comfyanonymous
65397ce601 Replace prints with logging and add --verbose argument. 2024-03-10 12:14:23 -04:00
doctorpangloss
bd5073caf2 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2024-02-26 08:51:07 -08:00
comfyanonymous
ca7c310a0e Support loading old CLIP models saved with CLIPSave. 2024-02-25 08:29:12 -05:00
comfyanonymous
c2cb8e889b Always return unprojected pooled output for gligen. 2024-02-25 07:33:13 -05:00
comfyanonymous
1cb3f6a83b Move text projection into the CLIP model code.
Fix issue with not loading the SSD1B clip correctly.
2024-02-25 01:41:08 -05:00
doctorpangloss
7520691021 Merge with master 2024-02-19 10:55:22 -08:00
comfyanonymous
d91f45ef28 Some cleanups to how the text encoders are loaded. 2024-02-19 10:46:30 -05:00
comfyanonymous
3b2e579926 Support loading the Stable Cascade effnet and previewer as a VAE.
The effnet can be used to encode images for img2img with Stage C.
2024-02-19 04:10:01 -05:00
comfyanonymous
97d03ae04a StableCascade CLIP model support. 2024-02-16 13:29:04 -05:00
comfyanonymous
f83109f09b Stable Cascade Stage C. 2024-02-16 10:55:08 -05:00
comfyanonymous
5e06baf112 Stable Cascade Stage A. 2024-02-16 06:30:39 -05:00
doctorpangloss
7c6b8ecb02 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2024-02-15 12:56:07 -08:00
comfyanonymous
38b7ac6e26 Don't init the CLIP model when the checkpoint has no CLIP weights. 2024-02-13 00:01:08 -05:00
doctorpangloss
8b4e5cee61 Fix _model_patcher typo 2024-02-12 15:12:49 -08:00
doctorpangloss
8e9052c843 Merge with upstream 2024-02-07 14:27:50 -08:00
comfyanonymous
da7a8df0d2 Put VAE key name in model config. 2024-01-30 02:24:38 -05:00
doctorpangloss
82edb2ff0e Merge with latest upstream. 2024-01-29 15:06:31 -08:00
comfyanonymous
4871a36458 Cleanup some unused imports. 2024-01-21 21:51:22 -05:00
comfyanonymous
d76a04b6ea Add unfinished ImageOnlyCheckpointSave node to save an SVD checkpoint.
This node is unfinished; SVD checkpoints saved with it will work with
ComfyUI but not with anything else.
2024-01-17 19:46:21 -05:00
doctorpangloss
369aeb598f Merge upstream, fix 3.12 compatibility, fix nightlies issue, fix broken node 2024-01-03 16:00:36 -08:00
comfyanonymous
a7874d1a8b Add support for the stable diffusion x4 upscaling model.
This is an old model.

Load the checkpoint like a regular one and use the new
SD_4XUpscale_Conditioning node.
2024-01-03 03:37:56 -05:00
comfyanonymous
5eddfdd80c Refactor VAE code.
Replace constants with downscale_ratio and latent_channels.
2024-01-02 13:24:34 -05:00
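A small illustration of what the refactor parametrizes: latent geometry now comes from downscale_ratio and latent_channels rather than hard-coded constants (the values shown are the usual SD image-VAE ones):

```python
downscale_ratio = 8    # SD-style image VAEs compress 8x spatially
latent_channels = 4    # 4 latent channels for SD1/SDXL-style VAEs

height, width = 512, 512
latent_shape = (1, latent_channels, height // downscale_ratio, width // downscale_ratio)
print(latent_shape)    # (1, 4, 64, 64)
```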
comfyanonymous
824e4935f5 Add dtype parameter to VAE object. 2023-12-12 12:03:29 -05:00
comfyanonymous
ba07cb748e Use faster manual cast for fp8 in unet. 2023-12-11 18:24:44 -05:00
comfyanonymous
9ac0b487ac Make --gpu-only put intermediate values in GPU memory instead of cpu. 2023-12-08 02:35:45 -05:00
Benjamin Berman
01312a55a4 merge upstream 2023-12-03 20:41:13 -08:00
comfyanonymous
983ebc5792 Use smart model management for VAE to decrease latency. 2023-11-28 04:58:51 -05:00
comfyanonymous
c45d1b9b67 Add a function to load a unet from a state dict. 2023-11-27 17:41:29 -05:00
comfyanonymous
871cc20e13 Support SVD img2vid model. 2023-11-23 19:41:33 -05:00
comfyanonymous
410bf07771 Make VAE memory estimation take dtype into account. 2023-11-22 18:17:19 -05:00