doctorpangloss
d82261485f
Prompt upsampling, better torch.compile support for language models
2025-03-03 18:36:47 -08:00
doctorpangloss
c6111fae7d
Fix Pixtral 12b compatibility
2025-03-03 13:07:36 -08:00
doctorpangloss
048746f58b
Update to 0.3.15 and improve models
...
- Cosmos now fully tested
- Preliminary support for essential Cosmos prompt "upsampler"
- Lumina tests
- Tweaks to language and image resizing nodes
- Fix for #31: all the samplers are now present again
2025-02-24 21:27:15 -08:00
doctorpangloss
693038738a
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2025-02-24 09:39:26 -08:00
comfyanonymous
96d891cb94
Speedup on some models by not upcasting bfloat16 to float32 on mac.
2025-02-24 05:41:32 -05:00
comfyanonymous
ace899e71a
Prioritize fp16 compute when using allow_fp16_accumulation
2025-02-23 04:45:54 -05:00
comfyanonymous
aff16532d4
Remove some useless code.
2025-02-22 04:45:14 -05:00
comfyanonymous
072db3bea6
Assume the mac black image bug won't be fixed before v16.
2025-02-21 20:24:07 -05:00
comfyanonymous
a6deca6d9a
Latest mac still has the black image bug.
2025-02-21 20:14:30 -05:00
comfyanonymous
41c30e92e7
Let all model memory be offloaded on nvidia.
2025-02-21 06:32:21 -05:00
comfyanonymous
12da6ef581
Apparently directml supports fp16.
2025-02-20 09:30:24 -05:00
Silver
c5be423d6b
Fix link pointing to non-existing docs ( #6891 )
...
* Fix link pointing to non-existing docs
The current link points to a path that no longer exists.
I changed it to point to the correct path for custom node datatypes.
* Update node_typing.py
2025-02-20 07:07:07 -05:00
maedtb
5715be2ca9
Fix Hunyuan unet config detection for some models. ( #6877 )
...
The change to support 32 channel hunyuan models is missing the `key_prefix` on the key.
This addresses a complaint in the comments of acc152b674 .
2025-02-19 07:14:45 -05:00
Benjamin Berman
83ae94b96c
Fix absolute import
2025-02-18 20:09:56 -08:00
Benjamin Berman
1e74a4cf08
Fix absolute imports, fix linting issue with dataclass
2025-02-18 19:59:09 -08:00
Benjamin Berman
ffc6a7fd38
Use spawn multiprocessing context to fix Linux ProcessPool issues
2025-02-18 19:46:57 -08:00
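A minimal sketch of the spawn-context fix above, with hypothetical worker names: on Linux the default "fork" start method lets pool workers inherit CUDA contexts and held locks from the parent, which can hang the pool; creating the executor with an explicit "spawn" context starts each worker in a fresh interpreter.

```python
import multiprocessing
from concurrent.futures import ProcessPoolExecutor

def run_workflow(payload):
    # stand-in for executing a ComfyUI workflow in the child process
    return {"ok": True, "payload": payload}

if __name__ == "__main__":
    # "spawn" starts a fresh interpreter per worker, avoiding state
    # inherited via fork (CUDA contexts, locks) that can deadlock it
    ctx = multiprocessing.get_context("spawn")
    with ProcessPoolExecutor(max_workers=2, mp_context=ctx) as pool:
        print(pool.submit(run_workflow, {"prompt": "example"}).result())
```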
doctorpangloss
e65faca817
Distributed setup now defaults to panicking when out of memory, to facilitate graceful recovery
2025-02-18 15:07:02 -08:00
bymyself
afc85cdeb6
Add Load Image Output node ( #6790 )
...
* add LoadImageOutput node
* add route for input/output/temp files
* update node_typing.py
* use literal type for image_folder field
* mark node as beta
2025-02-18 17:53:01 -05:00
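The "literal type" bullet above amounts to constraining the folder argument to a fixed set of values. A generic sketch with typing.Literal; the function and field names here are hypothetical, not the node's actual API:

```python
from typing import Literal

# the three file routes the commit adds, encoded as a closed set of values
ImageFolder = Literal["input", "output", "temp"]

def load_image_output(filename: str, image_folder: ImageFolder = "output") -> str:
    # type checkers reject any folder value outside the literal set
    return f"{image_folder}/{filename}"
```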
doctorpangloss
3ddec8ae90
Better support for process pool executors
...
- --panics-when=torch.cuda.OutOfMemory will now correctly panic and
  exit the worker, giving it time to reply that the execution failed
  and handling irrecoverable out-of-memory errors more gracefully
- --executor-factory=ProcessPoolExecutor will use a process instead of
  a thread to execute ComfyUI workflows when using the worker. When
  this process panics and exits, it will be correctly replaced, making
  the worker more robust
2025-02-18 14:37:20 -08:00
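A sketch of the panic-on-OOM flow these flags describe: reply that the execution failed first, then exit with a nonzero code so the surrounding pool can replace the worker. Names are illustrative, and the replacement behavior is the fork's (a stock ProcessPoolExecutor would instead mark itself broken when a worker dies):

```python
import sys
import torch

def execute_with_panic(workflow, reply):
    """Run one workflow; on CUDA OOM, report the failure, then panic."""
    try:
        return workflow()
    except torch.cuda.OutOfMemoryError as exc:
        # reply first so the caller learns the execution failed...
        reply({"status": "failed", "error": str(exc)})
        # ...then exit: a fresh process is more trustworthy than a
        # possibly-corrupted CUDA context in a reused worker
        sys.exit(1)
```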
Jukka Seppänen
acc152b674
Support loading and using SkyReels-V1-Hunyuan-I2V ( #6862 )
...
* Support SkyReels-V1-Hunyuan-I2V
* VAE scaling
* Fix T2V (oops)
* Proper latent scaling
2025-02-18 17:06:54 -05:00
doctorpangloss
684d180446
Users can now configure their workers to panic on out-of-memory exceptions, which can occur due to complex failures in custom nodes
2025-02-18 10:57:23 -08:00
comfyanonymous
b07258cef2
Fix typo.
...
Let me know if this slows things down on 2000 series and below.
2025-02-18 07:28:33 -05:00
comfyanonymous
31e54b7052
Improve AMD arch detection.
2025-02-17 04:53:40 -05:00
comfyanonymous
8c0bae50c3
bf16 manual cast works on old AMD.
2025-02-17 04:42:40 -05:00
comfyanonymous
530412cb9d
Refactor torch version checks to be more future proof.
2025-02-17 04:36:45 -05:00
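One future-proof approach is to parse version strings instead of prefix-matching them. A sketch of the idea, not necessarily the refactor's exact helper:

```python
import torch
from packaging import version

def torch_version_at_least(minimum: str) -> bool:
    # strip local build tags like "+cu124" or "+rocm6.2" before parsing
    base = torch.__version__.split("+")[0]
    # note: per PEP 440, "2.7.0.dev20250214" sorts *before* "2.7",
    # which matters when gating features available only on nightlies
    return version.parse(base) >= version.parse(minimum)

if torch_version_at_least("2.6"):
    pass  # enable features that need newer PyTorch
```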
doctorpangloss
d04288ce8d
ImagePadForOutpaint now correctly returns a MaskBatch
2025-02-16 15:39:36 -08:00
comfyanonymous
e2919d38b4
Disable bf16 on AMD GPUs that don't support it.
2025-02-16 05:46:10 -05:00
doctorpangloss
0ca30c3c87
export_custom_nodes now handles abstract base classes better
2025-02-14 15:36:51 -08:00
doctorpangloss
f4e65590b8
Fix subfolder being None when images are viewed
2025-02-14 07:20:58 -08:00
comfyanonymous
1cd6cd6080
Disable pytorch attention in VAE for AMD.
2025-02-14 05:42:14 -05:00
comfyanonymous
d7b4bf21a2
Auto enable mem efficient attention on gfx1100 on pytorch nightly 2.7
...
I'm not sure which arches are supported yet. If you see improvements in
memory usage while using --use-pytorch-cross-attention on your AMD GPU let
me know and I will add it to the list.
2025-02-14 04:18:14 -05:00
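ROCm builds of PyTorch expose the GPU architecture string through the device properties, which is what arch detection like this can key off. A sketch; which arch prefixes qualify is the assumption here:

```python
import torch

def is_gfx110x(device: int = 0) -> bool:
    props = torch.cuda.get_device_properties(device)
    # gcnArchName exists on ROCm builds (e.g. "gfx1100"); CUDA builds
    # lack the attribute, hence the getattr default
    arch = getattr(props, "gcnArchName", "")
    return arch.startswith("gfx110")
```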
comfyanonymous
019c7029ea
Add a way to set a different compute dtype for the model at runtime.
...
Currently only works for diffusion models.
2025-02-13 20:34:03 -05:00
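The idea behind a separate compute dtype: weights stay stored in whatever dtype they were loaded in, while each op casts to the compute dtype for the actual math. A generic manual-cast sketch, not the model patcher's API:

```python
import torch

def linear_with_compute_dtype(x, weight, compute_dtype=torch.bfloat16):
    # run the matmul in compute_dtype, then return to the input's dtype,
    # so storage precision and compute precision are decoupled
    y = torch.nn.functional.linear(x.to(compute_dtype), weight.to(compute_dtype))
    return y.to(x.dtype)
```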
comfyanonymous
8773ccf74d
Better memory estimation for ROCm devices that support mem efficient attention.
...
There is no way to check if the card actually supports it, so it assumes
that it does if you use --use-pytorch-cross-attention.
2025-02-13 08:32:36 -05:00
doctorpangloss
31b6b53236
Quality of life improvements
...
- export_custom_nodes() finds all the classes that inherit from
  CustomNode and exports them so that custom node discovery can
  find them
- regular expressions
- additional string formatting and parsing nodes
2025-02-12 14:12:10 -08:00
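A sketch of how subclass discovery in the spirit of export_custom_nodes() can work, including the abstract-base-class handling mentioned in the Feb 14 commit above; the class and function names here are hypothetical:

```python
import inspect

class CustomNode:
    """Illustrative stand-in for the fork's CustomNode base class."""

def iter_concrete_subclasses(base=CustomNode, _seen=None):
    # __subclasses__ only returns direct children, so recurse; skip
    # abstract classes so only instantiable nodes get exported
    _seen = set() if _seen is None else _seen
    for cls in base.__subclasses__():
        if cls in _seen:
            continue
        _seen.add(cls)
        if not inspect.isabstract(cls):
            yield cls
        yield from iter_concrete_subclasses(cls, _seen)
```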
comfyanonymous
1d5d6586f3
Fix ruff.
2025-02-12 06:49:16 -05:00
zhoufan2956
35740259de
mix_ascend_bf16_infer_err ( #6794 )
2025-02-12 06:48:11 -05:00
comfyanonymous
ab888e1e0b
Add add_weight_wrapper function to model patcher.
...
Functions can now easily be added to wrap/modify model weights.
2025-02-12 05:55:35 -05:00
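Conceptually, a weight wrapper is a callable that receives a weight (plus enough context to identify it) and returns a modified tensor when the model is patched. A hedged sketch of the idea only; the patcher's actual signature may differ:

```python
import torch

def damp_attention_projections(key: str, weight: torch.Tensor) -> torch.Tensor:
    # example wrapper: scale attention output projections down by 5%
    if "attn" in key and key.endswith("proj.weight"):
        return weight * 0.95
    return weight
```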
comfyanonymous
d9f0fcdb0c
Cleanup.
2025-02-11 17:17:03 -05:00
HishamC
b124256817
Fix for running via DirectML ( #6542 )
...
* Fix for running via DirectML
Fix DirectML empty image generation issue with Flux1. Add CPU fallback for unsupported paths. Verified the model works on AMD GPUs.
* fix formatting
* update causal mask calculation
2025-02-11 17:11:32 -05:00
comfyanonymous
af4b7c91be
Make --force-fp16 actually force the diffusion model to be fp16.
2025-02-11 08:33:09 -05:00
comfyanonymous
4027466c80
Make lumina model work with any latent resolution.
2025-02-10 00:24:20 -05:00
comfyanonymous
095d867147
Remove useless function.
2025-02-09 07:02:57 -05:00
Pam
caeb27c3a5
res_multistep: Fix cfgpp and add ancestral samplers ( #6731 )
2025-02-08 19:39:58 -05:00
comfyanonymous
3d06e1c555
Make error more clear to user.
2025-02-08 18:57:24 -05:00
catboxanon
43a74c0de1
Allow FP16 accumulation with --fast ( #6453 )
...
Currently only applies to PyTorch nightly releases (>= 20250208).
2025-02-08 17:00:56 -05:00
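The nightly-only caveat suggests a guarded opt-in; a sketch of applying such a flag safely on older builds (the matmul backend attribute is the one recent nightlies expose, the hasattr probe is the assumption):

```python
import torch

def enable_fp16_accumulation() -> bool:
    # probe for the attribute rather than comparing version strings,
    # since it only exists on sufficiently new PyTorch builds
    matmul = torch.backends.cuda.matmul
    if hasattr(matmul, "allow_fp16_accumulation"):
        matmul.allow_fp16_accumulation = True
        return True
    return False
```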
comfyanonymous
079eccc92a
Don't compress http response by default.
...
Remove argument to disable it.
Add new --enable-compress-response-body argument to enable it.
2025-02-07 03:29:21 -05:00
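ComfyUI's server is aiohttp-based, so opt-in compression can live in a middleware. A sketch; the middleware shape is standard aiohttp, the flag wiring is illustrative:

```python
from aiohttp import web

def make_compress_middleware(enabled: bool):
    @web.middleware
    async def compress_middleware(request, handler):
        response = await handler(request)
        if enabled:
            # negotiates gzip/deflate with the client via Accept-Encoding
            response.enable_compression()
        return response
    return compress_middleware
```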
doctorpangloss
ef74b9fdda
More graceful health check handling when the connection is not yet ready
2025-02-06 11:08:09 -08:00
doctorpangloss
5b3eb2e51c
Fix torch.zeroes error
2025-02-06 09:00:10 -08:00
comfyanonymous
14880e6dba
Remove some useless code.
2025-02-06 05:00:37 -05:00
doctorpangloss
3f1f427ff4
Distinct Seed and Seed64 input specs; numpy only supports 32-bit seeds
2025-02-05 14:08:09 -08:00
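The constraint behind the split: numpy's legacy seeding only accepts values below 2**32, while torch accepts 64-bit seeds, so a single 64-bit seed has to be reduced before reaching numpy. A sketch; the masking strategy is illustrative:

```python
import numpy as np
import torch

def seed_everything(seed64: int) -> None:
    torch.manual_seed(seed64)            # torch accepts 64-bit seeds
    np.random.seed(seed64 & 0xFFFFFFFF)  # numpy raises above 2**32 - 1
```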