comfyanonymous
7d593baf91
Extra reserved VRAM on large cards on Windows. ( #9093 )
2025-07-29 04:07:45 -04:00
doctorpangloss
03e5430121
Improvements for Wan 2.2 support
...
- add xet support and add the xet cache to manageable directories
- xet is enabled by default
- fix logging to root in various places
- improve logging about model unloading and loading
- TorchCompileNode now supports the VAE
- a missing torchaudio now causes less noise in the logs
- feature flags are assumed to support everything in the distributed progress context
- fix progress notifications
2025-07-28 14:36:27 -07:00
doctorpangloss
3684cff31b
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2025-07-25 12:48:05 -07:00
comfyanonymous
69cb57b342
Print xpu device name. ( #9035 )
2025-07-24 15:06:25 -04:00
honglyua
0ccc88b03f
Support Iluvatar CoreX ( #8585 )
...
* Support Iluvatar CoreX
Co-authored-by: mingjiang.li <mingjiang.li@iluvatar.com>
2025-07-24 13:57:36 -04:00
comfyanonymous
d3504e1778
Enable pytorch attention by default for gfx1201 on torch 2.8 ( #9029 )
2025-07-23 19:21:29 -04:00
comfyanonymous
a86a58c308
Fix xpu function not implemented p2. ( #9027 )
2025-07-23 18:18:20 -04:00
comfyanonymous
39dda1d40d
Fix xpu function not implemented. ( #9026 )
2025-07-23 18:10:59 -04:00
comfyanonymous
5ad33787de
Add default device argument. ( #9023 )
2025-07-23 14:20:49 -04:00
Simon Lui
255f139863
Add xpu version for async offload and some other things. ( #9004 )
2025-07-22 15:20:09 -04:00
doctorpangloss
96b4e04315
packaging fixes
...
- enable user db
- fix main_pre order everywhere
- convert absolute imports to relative imports everywhere
- improve async support
2025-07-15 10:19:33 -07:00
doctorpangloss
a7aff3565b
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2025-06-26 16:57:25 -07:00
comfyanonymous
a96e65df18
Disable omnigen2 fp16 on older pytorch versions. ( #8672 )
2025-06-26 03:39:09 -04:00
doctorpangloss
1d901e32eb
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2025-06-17 13:20:31 -07:00
doctorpangloss
82388d51a2
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2025-06-17 10:35:10 -07:00
comfyanonymous
6e28a46454
Apple most likely is never fixing the fp16 attention bug. ( #8485 )
2025-06-10 13:06:24 -04:00
comfyanonymous
7f800d04fa
Enable AMD fp8 and pytorch attention on some GPUs. ( #8474 )
...
Information is from the pytorch source code.
2025-06-09 12:50:39 -04:00
comfyanonymous
97755eed46
Enable fp8 ops by default on gfx1201 ( #8464 )
2025-06-08 14:15:34 -04:00
comfyanonymous
daf9d25ee2
Cleaner torch version comparisons. ( #8453 )
2025-06-07 10:01:15 -04:00
comfyanonymous
704fc78854
Put ROCm version in tuple to make it easier to enable stuff based on it. ( #8348 )
2025-05-30 15:41:02 -04:00
comfyanonymous
89a84e32d2
Disable initial GPU load when novram is used. ( #8294 )
2025-05-26 16:39:27 -04:00
comfyanonymous
e5799c4899
Enable pytorch attention by default on AMD gfx1151 ( #8282 )
2025-05-26 04:29:25 -04:00
comfyanonymous
0b50d4c0db
Add argument to explicitly enable fp8 compute support. ( #8257 )
...
This can be used to test if your current GPU/pytorch version supports fp8 matrix mult in combination with --fast or the fp8_e4m3fn_fast dtype.
2025-05-23 17:43:50 -04:00
Benjamin Berman
b6d3f1fb08
Accept workflows from the command line
2025-05-07 14:53:39 -07:00
comfyanonymous
0a66d4b0af
Per device stream counters for async offload. ( #7873 )
2025-04-29 20:28:52 -04:00
comfyanonymous
5a50c3c7e5
Fix stream priority to support older pytorch. ( #7856 )
2025-04-28 13:07:21 -04:00
comfyanonymous
c8cd7ad795
Use stream for casting if enabled. ( #7833 )
2025-04-27 05:38:11 -04:00
comfyanonymous
0dcc75ca54
Add experimental --async-offload lowvram weight offloading. ( #7820 )
...
This should speed up the lowvram mode a bit. It is currently only enabled when --async-offload is used, but it will be enabled by default in the future if there are no problems.
2025-04-26 16:11:21 -04:00
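A minimal usage sketch for the flag above (assumptions: the standard `python main.py` entry point, with --lowvram shown only to illustrate the mode this is meant to speed up):

    # experimental: async weight offloading on top of lowvram mode
    python main.py --lowvram --async-offload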
comfyanonymous
2d6805ce57
Add option for using fp8_e8m0fnu for model weights. ( #7733 )
...
Seems to break every model I have tried but worth testing?
2025-04-22 06:17:38 -04:00
doctorpangloss
5823497d55
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2025-04-21 13:14:36 -07:00
doctorpangloss
ffc1912eff
Fix issues with tests
2025-04-04 08:27:33 -07:00
BiologicalExplosion
2222cf67fd
MLU memory optimization ( #7470 )
...
Co-authored-by: huzhan <huzhan@cambricon.com>
2025-04-02 19:24:04 -04:00
BVH
301e26b131
Add option to store TE in bf16 ( #7461 )
2025-04-01 13:48:53 -04:00
doctorpangloss
040a324346
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2025-03-29 15:57:24 -07:00
comfyanonymous
8edc1f44c1
Support more float8 types.
2025-03-25 05:23:49 -04:00
FeepingCreature
7aceb9f91c
Add --use-flash-attention flag. ( #7223 )
...
* Add --use-flash-attention flag.
This is useful on AMD systems, as FA builds are still 10% faster than Pytorch cross-attention.
2025-03-14 03:22:41 -04:00
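A usage sketch (assumptions: the standard `python main.py` entry point and an already-installed FlashAttention build that matches your torch/ROCm stack):

    # opt in to the FlashAttention backend instead of pytorch cross-attention
    python main.py --use-flash-attention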
comfyanonymous
35504e2f93
Fix.
2025-03-13 15:03:18 -04:00
comfyanonymous
299436cfed
Print mac version.
2025-03-13 10:05:40 -04:00
comfyanonymous
0952569493
Fix stable cascade VAE on some lowvram machines.
2025-03-08 20:24:04 -05:00
doctorpangloss
83948cafd1
WAN 2.1 support
2025-03-06 07:32:04 -08:00
doctorpangloss
3c82be86d1
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2025-03-05 14:38:50 -08:00
doctorpangloss
d82261485f
Prompt upsampling, better torch.compile support for language models
2025-03-03 18:36:47 -08:00
Chenlei Hu
4d55f16ae8
Use enum list for --fast options ( #7024 )
2025-03-01 02:37:35 -05:00
comfyanonymous
cf0b549d48
--fast now takes a number as an argument to indicate how fast you want it.
...
The idea is that you can indicate how much quality vs speed you want.
At the moment:
--fast 2 enables fp16 accumulation if your pytorch supports it.
--fast 5 enables fp8 matrix mult on fp8 models and the optimization above.
--fast without a number enables all optimizations.
2025-02-28 02:48:20 -05:00
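A usage sketch of the levels described above (assumption: the standard `python main.py` entry point):

    # fp16 accumulation only, if the installed pytorch supports it
    python main.py --fast 2
    # fp8 matrix mult on fp8 models plus the optimization above
    python main.py --fast 5
    # all optimizations
    python main.py --fast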
comfyanonymous
eb4543474b
Use fp16 for intermediate for fp8 weights with --fast if supported.
2025-02-28 02:17:50 -05:00
comfyanonymous
1804397952
Use fp16 if checkpoint weights are fp16 and the model supports it.
2025-02-27 16:39:57 -05:00
BiologicalExplosion
89253e9fe5
Support Cambricon MLU ( #6964 )
...
Co-authored-by: huzhan <huzhan@cambricon.com>
2025-02-26 20:45:13 -05:00
doctorpangloss
048746f58b
Update to 0.3.15 and improve models
...
- Cosmos now fully tested
- Preliminary support for essential Cosmos prompt "upsampler"
- Lumina tests
- Tweaks to language and image resizing nodes
- Fix for #31: all the samplers are now present again
2025-02-24 21:27:15 -08:00
doctorpangloss
693038738a
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2025-02-24 09:39:26 -08:00
comfyanonymous
96d891cb94
Speedup on some models by not upcasting bfloat16 to float32 on mac.
2025-02-24 05:41:32 -05:00
comfyanonymous
ace899e71a
Prioritize fp16 compute when using allow_fp16_accumulation
2025-02-23 04:45:54 -05:00
comfyanonymous
072db3bea6
Assume the mac black image bug won't be fixed before v16.
2025-02-21 20:24:07 -05:00
comfyanonymous
a6deca6d9a
Latest mac still has the black image bug.
2025-02-21 20:14:30 -05:00
comfyanonymous
41c30e92e7
Let all model memory be offloaded on nvidia.
2025-02-21 06:32:21 -05:00
comfyanonymous
12da6ef581
Apparently directml supports fp16.
2025-02-20 09:30:24 -05:00
comfyanonymous
b07258cef2
Fix typo.
...
Let me know if this slows things down on 2000 series and below.
2025-02-18 07:28:33 -05:00
comfyanonymous
31e54b7052
Improve AMD arch detection.
2025-02-17 04:53:40 -05:00
comfyanonymous
8c0bae50c3
bf16 manual cast works on old AMD.
2025-02-17 04:42:40 -05:00
comfyanonymous
530412cb9d
Refactor torch version checks to be more future proof.
2025-02-17 04:36:45 -05:00
comfyanonymous
e2919d38b4
Disable bf16 on AMD GPUs that don't support it.
2025-02-16 05:46:10 -05:00
comfyanonymous
1cd6cd6080
Disable pytorch attention in VAE for AMD.
2025-02-14 05:42:14 -05:00
comfyanonymous
d7b4bf21a2
Auto enable mem efficient attention on gfx1100 on pytorch nightly 2.7
...
I'm not sure which arches are supported yet. If you see improvements in
memory usage while using --use-pytorch-cross-attention on your AMD GPU, let
me know and I will add it to the list.
2025-02-14 04:18:14 -05:00
comfyanonymous
8773ccf74d
Better memory estimation for ROCm devices that support mem efficient attention.
...
There is no way to check if the card actually supports it, so it is assumed
to be supported when you use --use-pytorch-cross-attention.
2025-02-13 08:32:36 -05:00
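A usage sketch for the flag referenced in the two entries above (assumption: the standard `python main.py` entry point):

    # opt in to pytorch attention so the mem efficient path and its memory estimation apply
    python main.py --use-pytorch-cross-attention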
comfyanonymous
1d5d6586f3
Fix ruff.
2025-02-12 06:49:16 -05:00
zhoufan2956
35740259de
mix_ascend_bf16_infer_err ( #6794 )
2025-02-12 06:48:11 -05:00
HishamC
b124256817
Fix for running via DirectML ( #6542 )
...
* Fix for running via DirectML
Fix DirectML empty image generation issue with Flux1. Add CPU fallback for unsupported paths. Verified the model works on AMD GPUs.
* fix formatting
* update causal mask calculation
2025-02-11 17:11:32 -05:00
comfyanonymous
af4b7c91be
Make --force-fp16 actually force the diffusion model to be fp16.
2025-02-11 08:33:09 -05:00
catboxanon
43a74c0de1
Allow FP16 accumulation with --fast ( #6453 )
...
Currently only applies to PyTorch nightly releases. (>=20250208)
2025-02-08 17:00:56 -05:00
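A quick way to check whether an installed PyTorch build exposes the accumulation toggle this relies on (a sketch; the attribute name comes from the allow_fp16_accumulation commits further down this log):

    # prints True on builds new enough to support fp16 accumulation
    python -c "import torch; print(hasattr(torch.backends.cuda.matmul, 'allow_fp16_accumulation'))"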
doctorpangloss
6ab1aa1e8a
Improving MLLM/VLLM support and fixing bugs
...
- fix #29: str(model) no longer raises exceptions like with
HyVideoModelLoader
- don't try to format CUDA tensors because that can sometimes raise
exceptions
- cudaAllocAsync has been disabled for now due to 2.6.0 bugs
- improve florence2 support
- add support for paligemma 2. This requires the fix for transformers
that is currently staged in another repo; install with
`uv pip install --no-deps "transformers@git+https://github.com/zucchini-nlp/transformers.git#branch=paligemma-fix-kwargs"`
- triton has been updated
- fix missing __init__.py files
2025-02-05 14:02:28 -08:00
doctorpangloss
dcac115f68
Revert "Update logging when models are loaded"
...
This reverts commit 0d15a091c2 .
2025-02-04 15:18:00 -08:00
Rafał Leszko
0d15a091c2
Update logging when models are loaded
...
The "Loaded " log was printed even if no model was actually loaded into VRAM
2025-02-04 14:44:12 +01:00
doctorpangloss
a3452f6e6a
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2025-01-28 13:45:51 -08:00
comfyanonymous
255edf2246
Lower minimum ratio of loaded weights on Nvidia.
2025-01-27 05:26:51 -05:00
comfyanonymous
67feb05299
Remove redundant code.
2025-01-25 19:04:53 -05:00
doctorpangloss
631d9e44c6
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2025-01-16 09:58:02 -08:00
doctorpangloss
12082d877d
Fix linting issues
2025-01-04 14:07:19 -08:00
comfyanonymous
d45ebb63f6
Remove old unused function.
2025-01-04 07:20:54 -05:00
comfyanonymous
9e9c8a1c64
Clear cache as often on AMD as Nvidia.
...
I think the issue this was working around has been solved.
If you notice that this change slows things down or causes stutters on
your AMD GPU with ROCm on Linux please report it.
2025-01-02 08:44:16 -05:00
doctorpangloss
9d5a5dd533
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-12-28 14:24:27 -08:00
comfyanonymous
160ca08138
Use python 3.9 in launch test instead of 3.8
...
Fix ruff check.
2024-12-26 20:05:54 -05:00
Huazhong Ji
c4bfdba330
Support ascend npu ( #5436 )
...
* support ascend npu
Co-authored-by: YukMingLaw <lymmm2@163.com>
Co-authored-by: starmountain1997 <guozr1997@hotmail.com>
Co-authored-by: Ginray <ginray0215@gmail.com>
2024-12-26 19:36:50 -05:00
doctorpangloss
7655be873c
Updates to support Hunyuan Video
2024-12-25 22:39:12 -08:00
comfyanonymous
19a64d6291
Cleanup some mac related code.
2024-12-25 05:32:51 -05:00
comfyanonymous
b486885e08
Disable bfloat16 on older mac.
2024-12-25 05:18:50 -05:00
comfyanonymous
0229228f3f
Clean up the VAE dtypes code.
2024-12-25 04:50:34 -05:00
doctorpangloss
0fd407ae87
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-12-24 16:48:03 -08:00
comfyanonymous
15564688ed
Add a try except block so if torch version is weird it won't crash.
2024-12-23 03:22:48 -05:00
Simon Lui
c6b9c11ef6
Add oneAPI device selector for xpu and some other changes. ( #6112 )
...
* Add oneAPI device selector and some other minor changes.
* Fix device selector variable name.
* Flip minor version check sign.
* Undo changes to README.md.
2024-12-23 03:18:32 -05:00
comfyanonymous
e44d0ac7f7
Make --novram completely offload weights.
...
This flag is mainly used for testing weight offloading; it shouldn't
actually be used in practice.
Remove useless import.
2024-12-23 01:51:08 -05:00
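A usage sketch (assumption: the standard `python main.py` entry point; per the note above, this is for testing offloading rather than everyday use):

    # fully offload weights; intended for testing, not regular use
    python main.py --novram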
comfyanonymous
57f330caf9
Relax minimum ratio of weights loaded in memory on nvidia.
...
This should make it possible to do higher res images/longer videos by
further offloading weights to CPU memory.
Please report an issue if this slows things down on your system.
2024-12-22 03:06:37 -05:00
Chenlei Hu
d7969cb070
Replace print with logging ( #6138 )
...
* Replace print with logging
* nit
* nit
* nit
* nit
* nit
* nit
2024-12-20 16:24:55 -05:00
comfyanonymous
2dda7c11a3
More proper fix for the memory issue.
2024-12-19 16:21:56 -05:00
comfyanonymous
3ad3248ad7
Fix lowvram bug when using a model multiple times in a row.
...
The memory system would load an extra 64MB each time until either the
model was completely in memory or OOM.
2024-12-19 16:04:56 -05:00
comfyanonymous
37e5390f5f
Add: --use-sage-attention to enable SageAttention.
...
You need to have the library installed first.
2024-12-18 01:56:10 -05:00
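A usage sketch (assumptions: the standard `python main.py` entry point, and `sageattention` as the PyPI package name for the library that must be installed first):

    # install the library first (package name assumed), then enable the flag
    pip install sageattention
    python main.py --use-sage-attention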
Chenlei Hu
d9d7f3c619
Lint all unused variables ( #5989 )
...
* Enable F841
* Autofix
* Remove all unused variable assignment
2024-12-12 17:59:16 -05:00
comfyanonymous
fd5dfb812c
Set initial load devices for te and model to mps device on mac.
2024-12-12 06:00:31 -05:00
doctorpangloss
2d1676c717
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-12-09 15:54:37 -08:00
comfyanonymous
57e8bf6a9f
Fix case where a memory leak could cause crash.
...
Now the only symptom of code that incorrectly keeps references to a model
object will be endless prints in the log instead of the next workflow
crashing ComfyUI.
2024-12-02 19:49:49 -05:00
comfyanonymous
79d5ceae6e
Improved memory management. ( #5450 )
...
* Less fragile memory management.
* Fix issue.
* Remove useless function.
* Prevent and detect some types of memory leaks.
* Run garbage collector when switching workflow if needed.
* Fix issue.
2024-12-02 14:39:34 -05:00
comfyanonymous
61196d8857
Add option to run inference on the diffusion model in fp32 and fp64.
2024-11-25 05:00:23 -05:00