Benjamin Berman
0cfde0ad6d
Fix pylint issues
2025-02-18 20:23:09 -08:00
Benjamin Berman
83ae94b96c
Fix absolute import
2025-02-18 20:09:56 -08:00
Benjamin Berman
1e74a4cf08
Fix absolute imports, fix linting issue with dataclass
2025-02-18 19:59:09 -08:00
Benjamin Berman
ffc6a7fd38
Use spawn multiprocessing context to fix Linux ProcessPool issues
2025-02-18 19:46:57 -08:00
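A minimal sketch of the spawn-context pattern this commit points at, using only the standard library; run_workflow is a placeholder, not the worker's real entry point:

```python
# Minimal sketch (not the repository's actual worker code) of the
# spawn-context pattern. On Linux the default "fork" start method can
# inherit locks and CUDA state from the parent, which breaks
# ProcessPoolExecutor workers; "spawn" starts each child in a fresh
# interpreter.
import multiprocessing
from concurrent.futures import ProcessPoolExecutor

def run_workflow(payload):
    # placeholder for executing a ComfyUI workflow in the child process
    return len(payload)

if __name__ == "__main__":
    ctx = multiprocessing.get_context("spawn")
    with ProcessPoolExecutor(max_workers=2, mp_context=ctx) as pool:
        print(pool.submit(run_workflow, "example").result())
```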
doctorpangloss
e65faca817
Distributed setup now defaults to panicking when out of memory, to facilitate graceful recovery
2025-02-18 15:07:02 -08:00
bymyself
afc85cdeb6
Add Load Image Output node ( #6790 )
...
* add LoadImageOutput node
* add route for input/output/temp files
* update node_typing.py
* use literal type for image_folder field
* mark node as beta
2025-02-18 17:53:01 -05:00
doctorpangloss
3ddec8ae90
Better support for process pool executors
...
- --panics-when=torch.cuda.OutOfMemory will now correctly panic and
exit the worker, giving it time to report that the execution failed,
and better handles irrecoverable out-of-memory errors (the pattern is
sketched after this entry)
- --executor-factory=ProcessPoolExecutor will use a process instead of
a thread to execute comfyui workflows when using the worker. When
this process panics and exits, it will be correctly replaced, making
for a more robust worker
2025-02-18 14:37:20 -08:00
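A sketch of the panic-on-OOM behavior described above, under the assumption that the worker reports the failure first and then hard-exits so the process pool replaces the dead child; execute() stands in for the real workflow execution, and MemoryError stands in for torch.cuda.OutOfMemoryError so the example runs without torch:

```python
# Sketch only: report the failure, then exit the child so a
# ProcessPoolExecutor can observe the dead process and replace it.
import os
import sys

def execute_with_panic(execute, prompt):
    try:
        return execute(prompt)
    except MemoryError as exc:  # torch.cuda.OutOfMemoryError in the real worker
        print(f"execution failed: {exc}", file=sys.stderr, flush=True)
        # os._exit skips cleanup that may itself fail after OOM; the
        # pool sees the dead process and spawns a replacement.
        os._exit(1)
```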
Jukka Seppänen
acc152b674
Support loading and using SkyReels-V1-Hunyuan-I2V ( #6862 )
...
* Support SkyReels-V1-Hunyuan-I2V
* VAE scaling
* Fix T2V (oops)
* Proper latent scaling
2025-02-18 17:06:54 -05:00
doctorpangloss
684d180446
Users can now configure their workers to panic on out-of-memory exceptions, which can occur due to complex failures in custom nodes
2025-02-18 10:57:23 -08:00
comfyanonymous
b07258cef2
Fix typo.
...
Let me know if this slows things down on 2000 series and below.
2025-02-18 07:28:33 -05:00
comfyanonymous
31e54b7052
Improve AMD arch detection.
2025-02-17 04:53:40 -05:00
comfyanonymous
8c0bae50c3
bf16 manual cast works on old AMD.
2025-02-17 04:42:40 -05:00
comfyanonymous
530412cb9d
Refactor torch version checks to be more future proof.
2025-02-17 04:36:45 -05:00
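An illustrative version check, not the repository's code: comparing torch.__version__ as a string stops working once minor versions reach two digits ("2.10" sorts before "2.9" lexicographically), so comparing parsed integer components is the future-proof approach this commit aims for:

```python
# Parse the leading numeric components of a version string so
# comparisons stay correct past version 2.9.
def torch_version_tuple(version: str) -> tuple:
    # "2.7.0.dev20250208+rocm6.3" -> (2, 7)
    numeric = version.split("+")[0].split(".")[:2]
    return tuple(int("".join(c for c in part if c.isdigit()) or 0) for part in numeric)

assert torch_version_tuple("2.10.0") >= (2, 7)
assert not torch_version_tuple("2.6.1") >= (2, 7)
```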
doctorpangloss
d04288ce8d
ImagePadForOutpaint now correctly returns a MaskBatch
2025-02-16 15:39:36 -08:00
Zhong-Yu Li
61c8c70c6e
Support system prompt and CFG renorm in Lumina2 ( #6795 )
...
* support system prompt and cfg renorm in Lumina2
* fix issues with the ruff style check
2025-02-16 18:15:43 -05:00
Comfy Org PR Bot
d0399f4343
Update frontend to v1.9.18 ( #6828 )
...
Co-authored-by: huchenlei <20929282+huchenlei@users.noreply.github.com>
2025-02-16 11:45:47 -05:00
comfyanonymous
e2919d38b4
Disable bf16 on AMD GPUs that don't support it.
2025-02-16 05:46:10 -05:00
doctorpangloss
d404ab3185
Fix images None issue
2025-02-15 17:06:27 -08:00
doctorpangloss
7bf9a86fc1
Fix another case of None images here
2025-02-15 17:00:46 -08:00
doctorpangloss
3a3e31a0a5
Fix unexpected None type for images
2025-02-15 16:52:08 -08:00
Terry Jia
93c8607d51
remove light_intensity and fov from load3d ( #6742 )
2025-02-15 15:34:36 -05:00
Comfy Org PR Bot
b3d6ae15b3
Update frontend to v1.9.17 ( #6814 )
...
Co-authored-by: huchenlei <20929282+huchenlei@users.noreply.github.com>
2025-02-15 04:32:47 -05:00
comfyanonymous
2e21122aab
Add a node to set the model compute dtype for debugging.
2025-02-15 04:15:37 -05:00
doctorpangloss
d4218f3f19
Fix NOFLAG not present on python 3.10 (?)
2025-02-14 16:54:55 -08:00
doctorpangloss
87a4af84ae
Fix regexp match expand returning wrong type
2025-02-14 16:03:46 -08:00
doctorpangloss
0ca30c3c87
export_custom_nodes now handles abstract base classes better
2025-02-14 15:36:51 -08:00
doctorpangloss
f4e65590b8
Fix subfolder being None when images are viewed
2025-02-14 07:20:58 -08:00
comfyanonymous
1cd6cd6080
Disable pytorch attention in VAE for AMD.
2025-02-14 05:42:14 -05:00
comfyanonymous
d7b4bf21a2
Auto-enable mem efficient attention on gfx1100 on PyTorch nightly 2.7
...
I'm not sure which arches are supported yet. If you see improvements in
memory usage while using --use-pytorch-cross-attention on your AMD GPU, let
me know and I will add it to the list.
2025-02-14 04:18:14 -05:00
Robin Huang
042a905c37
Open yaml files with utf-8 encoding for extra_model_paths.yaml ( #6807 )
...
* Using utf-8 encoding for yaml files.
* Fix test assertion.
2025-02-13 20:39:04 -05:00
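A minimal sketch of the fix this PR describes: open the YAML file with an explicit encoding so configs containing non-ASCII paths parse the same on every platform (Windows otherwise falls back to a locale codec). The function name is illustrative; requires pyyaml:

```python
import yaml

def load_extra_model_paths(path="extra_model_paths.yaml"):
    # Explicit utf-8 avoids decode errors on Windows, whose default
    # open() encoding is the locale codec rather than UTF-8.
    with open(path, "r", encoding="utf-8") as f:
        return yaml.safe_load(f)
```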
comfyanonymous
019c7029ea
Add a way to set a different compute dtype for the model at runtime.
...
Currently only works for diffusion models.
2025-02-13 20:34:03 -05:00
comfyanonymous
8773ccf74d
Better memory estimation for ROCm devices that support mem efficient attention.
...
There is no way to check if the card actually supports it, so support is
assumed if you use --use-pytorch-cross-attention with yours.
2025-02-13 08:32:36 -05:00
doctorpangloss
31b6b53236
Quality of life improvements
...
- export_custom_nodes() finds all the classes that inherit from
CustomNode and exports them so that custom node discovery can find
them (see the sketch after this entry)
- regular expression nodes
- additional string formatting and parsing nodes
2025-02-12 14:12:10 -08:00
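A hedged sketch of the discovery behavior these commits describe; CustomNode and export_custom_nodes are names taken from the commit messages, but this body is an assumption, not the repository's code:

```python
# Walk the full subclass hierarchy of CustomNode and export every
# concrete class, skipping abstract base classes (per the later
# "handles abstract base classes better" commit).
import inspect

class CustomNode:  # stand-in for the real base class
    pass

def export_custom_nodes():
    found, stack = [], list(CustomNode.__subclasses__())
    while stack:
        cls = stack.pop()
        stack.extend(cls.__subclasses__())  # recurse into subclasses
        if not inspect.isabstract(cls):     # skip abstract base classes
            found.append(cls)
    return found
```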
doctorpangloss
cf08b11132
Colour package added to requirements
2025-02-12 12:48:51 -08:00
comfyanonymous
1d5d6586f3
Fix ruff.
2025-02-12 06:49:16 -05:00
zhoufan2956
35740259de
Fix Ascend bf16 inference error ( #6794 )
2025-02-12 06:48:11 -05:00
comfyanonymous
ab888e1e0b
Add add_weight_wrapper function to model patcher.
...
Functions can now easily be added to wrap/modify model weights.
2025-02-12 05:55:35 -05:00
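A hypothetical sketch of the weight-wrapper idea this commit describes; the real add_weight_wrapper in the model patcher may differ. The assumption here is that a wrapper is a callable taking a weight and returning a modified one, applied in registration order:

```python
# Hypothetical registry, not the actual ModelPatcher implementation.
class PatcherSketch:
    def __init__(self):
        self.weight_wrappers = {}

    def add_weight_wrapper(self, name, fn):
        self.weight_wrappers.setdefault(name, []).append(fn)

    def get_weight(self, name, weight):
        # apply every registered wrapper in registration order
        for fn in self.weight_wrappers.get(name, []):
            weight = fn(weight)
        return weight

patcher = PatcherSketch()
patcher.add_weight_wrapper("diffusion_model.w1", lambda w: w * 0.5)
assert patcher.get_weight("diffusion_model.w1", 2.0) == 1.0
```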
comfyanonymous
d9f0fcdb0c
Cleanup.
2025-02-11 17:17:03 -05:00
HishamC
b124256817
Fix for running via DirectML ( #6542 )
...
* Fix for running via DirectML
Fix DirectML empty image generation issue with Flux1. Add CPU fallback for unsupported paths. Verified the model works on AMD GPUs.
* Fix formatting
* Update causal mask calculation
2025-02-11 17:11:32 -05:00
comfyanonymous
af4b7c91be
Make --force-fp16 actually force the diffusion model to be fp16.
2025-02-11 08:33:09 -05:00
bananasss00
e57d2282d1
Fix incorrect Content-Type for WebP images ( #6752 )
2025-02-11 04:48:35 -05:00
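A sketch of the class of fix this PR makes: derive the Content-Type from the filename rather than hardcoding one, so .webp previews are served as image/webp. This is illustrative, not the PR's exact change:

```python
import mimetypes

def content_type_for(filename: str) -> str:
    # guess_type returns (type, encoding); fall back to a generic type
    return mimetypes.guess_type(filename)[0] or "application/octet-stream"

print(content_type_for("preview.webp"))  # image/webp on Python 3.11+
```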
comfyanonymous
4027466c80
Make lumina model work with any latent resolution.
2025-02-10 00:24:20 -05:00
comfyanonymous
095d867147
Remove useless function.
2025-02-09 07:02:57 -05:00
Pam
caeb27c3a5
res_multistep: Fix cfgpp and add ancestral samplers ( #6731 )
2025-02-08 19:39:58 -05:00
comfyanonymous
3d06e1c555
Make error more clear to user.
2025-02-08 18:57:24 -05:00
catboxanon
43a74c0de1
Allow FP16 accumulation with --fast ( #6453 )
...
Currently only applies to PyTorch nightly releases. (>=20250208)
2025-02-08 17:00:56 -05:00
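A hedged sketch of the PyTorch toggle behind this flag; the attribute only exists on recent nightlies, so it is guarded with hasattr rather than assumed:

```python
import torch

# Enable fp16 accumulation in matmuls where available (PyTorch
# nightly >= 20250208); a no-op guard on older releases.
if hasattr(torch.backends.cuda.matmul, "allow_fp16_accumulation"):
    torch.backends.cuda.matmul.allow_fp16_accumulation = True
```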
comfyanonymous
af93c8d1ee
Document which text encoder to use for lumina 2.
2025-02-08 06:57:25 -05:00
Raphael Walker
832e3f5ca3
Fix another small bug in attention_bias redux ( #6737 )
...
* fix a bug in the attn_masked redux code when using weight=1.0
* oh shit wait there was another bug
2025-02-07 14:44:43 -05:00
doctorpangloss
0a1b118cd4
Try to free disk space using a better script
2025-02-07 07:23:01 -08:00
doctorpangloss
713c3f06af
Use vanilla container
2025-02-07 07:05:48 -08:00