doctorpangloss
dd9a781654
Fix linting issues
2025-09-23 10:19:30 -07:00
doctorpangloss
a9a0f96408
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2025-09-22 14:29:50 -07:00
doctorpangloss
e6762bb82a
Use pylint dynamic member correctly
2025-09-18 13:42:05 -07:00
Kimbing Ng
e5e70636e7
Remove single quote pattern to avoid wrong matches (#9842)
2025-09-13 16:59:19 -04:00
comfyanonymous
85e34643f8
Support hunyuan image 2.1 regular model. (#9792)
2025-09-10 02:05:07 -04:00
contentis
97652d26b8
Add explicit casting in apply_rope for Qwen VL (#9759)
2025-09-08 15:08:18 -04:00
doctorpangloss
735a133ad4
Update to 0.3.51
2025-08-22 17:29:18 -07:00
doctorpangloss
dfc47e0611
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2025-08-22 13:24:52 -07:00
comfyanonymous
5a8f502db5
Disable prompt weights for qwen. (#9438)
2025-08-20 01:08:11 -04:00
comfyanonymous
dfa791eb4b
Rope fix for qwen vl. (#9435)
2025-08-19 20:47:42 -04:00
comfyanonymous
4977f203fa
P2 of qwen edit model. (#9412)
* P2 of qwen edit model.
* Typo.
* Fix normal qwen.
* Fix.
* Make the TextEncodeQwenImageEdit also set the ref latent.
If you don't want it to set the ref latent and want to use the
ReferenceLatent node with your custom latent instead, just disconnect the
VAE.
2025-08-18 22:38:34 -04:00
doctorpangloss
cd17b42664
Qwen Image with sage attention workarounds
2025-08-07 17:29:23 -07:00
doctorpangloss
b72e5ff448
Qwen Image with 4 bit and 8 bit quants
2025-08-07 13:40:05 -07:00
doctorpangloss
d8dbff9226
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2025-08-07 13:23:38 -07:00
comfyanonymous
c012400240
Initial support for qwen image model. (#9179)
2025-08-04 22:53:25 -04:00
doctorpangloss
04e411c32e
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2025-07-14 13:45:09 -07:00
comfyanonymous
938d3e8216
Remove windows line endings. (#8866)
2025-07-11 02:37:51 -04:00
doctorpangloss
b268296504
update upstream for flux fixes
2025-07-09 06:28:17 -07:00
comfyanonymous
170c7bb90c
Fix contiguous issue with pytorch nightly. (#8729)
2025-06-29 06:38:40 -04:00
doctorpangloss
a7aff3565b
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2025-06-26 16:57:25 -07:00
comfyanonymous
ec70ed6aea
Omnigen2 model implementation. (#8669)
2025-06-25 19:35:57 -04:00
doctorpangloss
d79d7a7e08
fix imports and other basic problems
2025-06-17 11:19:48 -07:00
doctorpangloss
82388d51a2
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2025-06-17 10:35:10 -07:00
comfyanonymous
f2289a1f59
Delete useless file. (#8327)
2025-05-29 08:29:37 -04:00
doctorpangloss
a3ad9bdb1a
fix legacy kwargs
2025-05-17 16:10:54 -07:00
comfyanonymous
5d3cc85e13
Make Japanese hiragana and katakana characters work with ACE. (#7997)
2025-05-08 03:32:36 -04:00
comfyanonymous
16417b40d9
Initial ACE-Step model implementation. (#7972)
2025-05-07 08:33:34 -04:00
comfyanonymous
08ff5fa08a
Cleanup chroma PR.
2025-04-30 20:57:30 -04:00
Silver
4ca3d84277
Support for Chroma - Flux1 Schnell distilled with CFG (#7355)
* Upload files for Chroma Implementation
* Remove trailing whitespace
* trim more trailing whitespace..oops
* remove unused imports
* Add supported_inference_dtypes
* Set min_length to 0 and remove attention_mask=True
* Set min_length to 1
* get_mdulations added from blepping and minor changes
* Add lora conversion if statement in lora.py
* Update supported_models.py
* update model_base.py
* add upstream commits
* set modelType.FLOW, will cause beta scheduler to work properly
* Adjust memory usage factor and remove unnecessary code
* fix mistake
* reduce code duplication
* remove unused imports
* refactor for upstream sync
* sync chroma-support with upstream via syncbranch patch
* Update sd.py
* Add Chroma as option for the OptimalStepsScheduler node
2025-04-30 20:57:00 -04:00
comfyanonymous
23e39f2ba7
Add a T5TokenizerOptions node to set options for the T5 tokenizer. (#7803)
2025-04-25 19:36:00 -04:00
doctorpangloss
17b14110ab
Update to latest ComfyUI
2025-04-21 14:11:56 -07:00
doctorpangloss
5823497d55
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2025-04-21 13:14:36 -07:00
comfyanonymous
fd27494441
Use empty t5 of size 128 for hidream, seems to give closer results.
2025-04-19 19:49:40 -04:00
power88
f43e1d7f41
Hidream: Allow loading hidream text encoders in CLIPLoader and DualCLIPLoader (#7676)
* Hidream: Allow partial loading text encoders
* reformat code for ruff check.
2025-04-19 19:47:30 -04:00
comfyanonymous
636d4bfb89
Fix hard crash when the spiece tokenizer path is bad.
2025-04-19 15:55:43 -04:00
comfyanonymous
9899d187b1
Limit T5 to 128 tokens for HiDream: #7620
2025-04-16 18:07:55 -04:00
comfyanonymous
9ad792f927
Basic support for hidream i1 model.
2025-04-15 17:35:05 -04:00
comfyanonymous
6fc5dbd52a
Cleanup.
2025-04-15 12:13:28 -04:00
comfyanonymous
3e8155f7a3
More flexible long clip support.
Add clip g long clip support.
Text encoder refactor.
Support llama models with different vocab sizes.
2025-04-15 10:32:21 -04:00
doctorpangloss
040a324346
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2025-03-29 15:57:24 -07:00
comfyanonymous
be4e760648
Add an image_interleave option to the Hunyuan image to video encode node.
See the tooltip for what it does.
2025-03-07 19:56:26 -05:00
doctorpangloss
0f85e7d2b0
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2025-03-06 07:41:41 -08:00
comfyanonymous
29a70ca101
Support HunyuanVideo image to video model.
2025-03-06 03:07:15 -05:00
doctorpangloss
3c82be86d1
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2025-03-05 14:38:50 -08:00
comfyanonymous
85ef295069
Make applying embeddings more efficient.
Adding new tokens no longer makes a whole copy of the embeddings weight
which can be massive on certain models.
2025-03-05 17:34:38 -05:00
comfyanonymous
65042f7d39
Make it easier to set a custom template for hunyuan video.
2025-03-04 09:26:05 -05:00
comfyanonymous
3ea3bc8546
Fix wan issues when prompt length is long.
2025-02-26 20:34:02 -05:00
comfyanonymous
63023011b9
WIP support for Wan t2v model.
2025-02-25 17:20:35 -05:00
comfyanonymous
f40076096e
Cleanup some lumina te code.
2025-02-25 04:10:26 -05:00
doctorpangloss
048746f58b
Update to 0.3.15 and improve models
- Cosmos now fully tested
- Preliminary support for essential Cosmos prompt "upsampler"
- Lumina tests
- Tweaks to language and image resizing nodes
- Fix for #31 all the samplers are now present again
2025-02-24 21:27:15 -08:00