patientx
258da26c98
Merge branch 'comfyanonymous:master' into master
2025-09-25 15:08:16 +03:00
Guy Niv
c8d2117f02
Fix memory leak by properly detaching model finalizer ( #9979 )
When unloading models in load_models_gpu(), the model finalizer was not
being explicitly detached, causing a memory leak: memory consumption grew
linearly as models were repeatedly loaded and unloaded.
This change prevents orphaned finalizer references from accumulating
during model switching (sketched below).
2025-09-24 22:35:12 -04:00
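A minimal sketch of the pattern behind this fix, assuming the cleanup hook is a weakref.finalize object; the class and method names here are illustrative, not ComfyUI's actual code:

    import weakref

    class LoadedModel:
        def __init__(self, model):
            self.model = model
            # Run cleanup automatically if the model object is garbage collected.
            self._finalizer = weakref.finalize(model, self._cleanup)

        @staticmethod
        def _cleanup():
            pass  # e.g. release cached buffers

        def unload(self):
            # Without detach(), each finalizer lingers in weakref's internal
            # registry until its model happens to be collected, so repeated
            # load/unload cycles accumulate orphaned references.
            self._finalizer.detach()
            self.model = None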
patientx
c62e820d45
Merge branch 'comfyanonymous:master' into master
2025-09-20 01:51:06 +03:00
DELUXA
8d6653fca6
Enable fp8 ops by default on gfx1200 ( #9926 )
2025-09-18 19:50:37 -04:00
patientx
b46622ffa5
Merge branch 'comfyanonymous:master' into master
2025-09-08 11:14:04 +03:00
comfyanonymous
fb763d4333
Fix amd_min_version crash when using a CPU device. ( #9754 )
2025-09-07 21:16:29 -04:00
patientx
9417753a6c
Merge branch 'comfyanonymous:master' into master
2025-09-07 13:16:57 +03:00
comfyanonymous
bcbd7884e3
Don't enable pytorch attention on AMD if triton isn't available. ( #9747 )
2025-09-07 00:29:38 -04:00
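A hedged sketch of the guard this implies; the helper name is hypothetical:

    import importlib.util

    def triton_available() -> bool:
        # Only treat pytorch attention as usable on AMD when triton
        # can actually be imported.
        return importlib.util.find_spec("triton") is not None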
comfyanonymous
27a0fcccc3
Enable bf16 VAE on RDNA4. ( #9746 )
2025-09-06 23:25:22 -04:00
patientx
7ff01ded58
Merge branch 'comfyanonymous:master' into master
2025-08-21 09:24:26 +03:00
comfyanonymous
0963493a9c
Support for Qwen Diffsynth Controlnets canny and depth. ( #9465 )
These are not real controlnets but patches on the model, so they are
treated as such (illustrated below).
Put them in the models/model_patches/ folder.
Use the new ModelPatchLoader and QwenImageDiffsynthControlnet nodes.
2025-08-20 22:26:37 -04:00
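Loosely, "a patch on the model" means the weights are merged into the diffusion model itself rather than run as a separate controlnet network. A hypothetical illustration of that difference (not the actual node implementation):

    import torch

    def apply_model_patch(model: torch.nn.Module,
                          patch: dict[str, torch.Tensor],
                          strength: float = 1.0) -> None:
        # Hypothetical: add each patch tensor onto the matching base weight,
        # instead of running the patch as a standalone network.
        sd = model.state_dict()
        for name, delta in patch.items():
            if name in sd:
                sd[name] = sd[name] + strength * delta
        model.load_state_dict(sd)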
patientx
a927fbd99b
Merge branch 'comfyanonymous:master' into master
2025-08-14 12:16:50 +03:00
Simon Lui
c991a5da65
Fix XPU iGPU regressions ( #9322 )
* Change the bf16 check, switch non_blocking off by default with an option to force it back on to regain speed on certain classes of iGPUs, and refactor the xpu check.
* Turn non_blocking off by default for xpu (sketched below).
* Update README.md for Intel GPUs.
2025-08-13 19:13:35 -04:00
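In the spirit of the non_blocking change, a minimal sketch of gating transfers per device type; the helper name is hypothetical:

    import torch

    def device_supports_non_blocking(device: torch.device) -> bool:
        if device.type == "xpu":
            return False  # off by default: some Intel iGPUs regress with it
        return device.type == "cuda"

    def to_device(t: torch.Tensor, device: torch.device) -> torch.Tensor:
        return t.to(device, non_blocking=device_supports_non_blocking(device))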
patientx
c2686a3968
Merge branch 'comfyanonymous:master' into master
2025-08-10 12:09:19 +03:00
comfyanonymous
5828607ccf
Not sure if AMD actually supports fp16 accumulation, but it doesn't crash. ( #9258 )
2025-08-09 12:49:25 -04:00
patientx
89499c6fae
Merge branch 'comfyanonymous:master' into master
2025-08-08 11:40:07 +03:00
comfyanonymous
735bb4bdb1
Users report gfx1201 is buggy on flux with pytorch attention. ( #9244 )
2025-08-08 04:21:00 -04:00
patientx
d8ca8134c3
Merge branch 'comfyanonymous:master' into master
2025-07-29 11:56:59 +03:00
comfyanonymous
7d593baf91
Reserve extra VRAM on large cards on Windows. ( #9093 )
2025-07-29 04:07:45 -04:00
patientx
970b7fb84f
Merge branch 'comfyanonymous:master' into master
2025-07-24 22:30:55 +03:00
comfyanonymous
69cb57b342
Print xpu device name. ( #9035 )
2025-07-24 15:06:25 -04:00
honglyua
0ccc88b03f
Support Iluvatar CoreX ( #8585 )
* Support Iluvatar CoreX
Co-authored-by: mingjiang.li <mingjiang.li@iluvatar.com>
2025-07-24 13:57:36 -04:00
patientx
30539d0d13
Merge branch 'comfyanonymous:master' into master
2025-07-24 13:59:09 +03:00
comfyanonymous
d3504e1778
Enable pytorch attention by default for gfx1201 on torch 2.8 ( #9029 )
2025-07-23 19:21:29 -04:00
comfyanonymous
a86a58c308
Fix xpu function not implemented p2. ( #9027 )
2025-07-23 18:18:20 -04:00
comfyanonymous
39dda1d40d
Fix xpu function not implemented. ( #9026 )
2025-07-23 18:10:59 -04:00
patientx
58f3250106
Merge branch 'comfyanonymous:master' into master
2025-07-23 23:49:46 +03:00
comfyanonymous
5ad33787de
Add default device argument. ( #9023 )
2025-07-23 14:20:49 -04:00
patientx
bd33a5d382
Merge branch 'comfyanonymous:master' into master
2025-07-23 03:27:52 +03:00
Simon Lui
255f139863
Add xpu version for async offload and some other things. ( #9004 )
2025-07-22 15:20:09 -04:00
patientx
c1ce148f41
Merge branch 'comfyanonymous:master' into master
2025-06-26 11:34:14 +03:00
comfyanonymous
a96e65df18
Disable omnigen2 fp16 on older pytorch versions. ( #8672 )
2025-06-26 03:39:09 -04:00
patientx
8909c12ea9
Update model_management.py
2025-06-14 13:27:49 +03:00
patientx
06ac233007
Merge branch 'comfyanonymous:master' into master
2025-06-10 20:34:42 +03:00
comfyanonymous
6e28a46454
Apple will most likely never fix the fp16 attention bug. ( #8485 )
2025-06-10 13:06:24 -04:00
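A hedged sketch of the usual workaround for this class of bug, upcasting attention to fp32 on mps; illustrative, not the commit's exact change:

    import torch

    def attention_dtype(device: torch.device, dtype: torch.dtype) -> torch.dtype:
        # fp16 attention misbehaves on some Apple GPUs, so fall back to fp32.
        if device.type == "mps" and dtype == torch.float16:
            return torch.float32
        return dtype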
patientx
4bc3866c67
Merge branch 'comfyanonymous:master' into master
2025-06-09 21:10:00 +03:00
comfyanonymous
7f800d04fa
Enable AMD fp8 and pytorch attention on some GPUs. ( #8474 )
Information is from the pytorch source code.
2025-06-09 12:50:39 -04:00
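A sketch of arch-based gating like this commit describes; gcnArchName is a real property on ROCm builds of pytorch, but the arch list below is an example only:

    import torch

    FP8_ARCHS = ("gfx1201",)  # example list, not the commit's exact set

    def amd_arch(device_index: int = 0) -> str:
        if not torch.cuda.is_available():
            return ""
        props = torch.cuda.get_device_properties(device_index)
        return getattr(props, "gcnArchName", "")

    def supports_fp8(arch: str) -> bool:
        return any(arch.startswith(a) for a in FP8_ARCHS)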
patientx
b4d015f5f3
Merge branch 'comfyanonymous:master' into master
2025-06-08 21:21:41 +03:00
comfyanonymous
97755eed46
Enable fp8 ops by default on gfx1201 ( #8464 )
2025-06-08 14:15:34 -04:00
patientx
156aedd995
Merge branch 'comfyanonymous:master' into master
2025-06-07 19:30:45 +03:00
comfyanonymous
daf9d25ee2
Cleaner torch version comparisons. ( #8453 )
2025-06-07 10:01:15 -04:00
patientx
07b8d211e6
Merge branch 'comfyanonymous:master' into master
2025-05-30 23:48:15 +03:00
comfyanonymous
704fc78854
Put the ROCm version in a tuple to make it easier to enable features based on it. ( #8348 )
2025-05-30 15:41:02 -04:00
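A minimal sketch of the tuple idea, assuming torch.version.hip looks like "6.4.xxxxx" on ROCm builds (and is None elsewhere):

    import torch

    rocm_version = (0, 0)
    if getattr(torch.version, "hip", None):
        major, minor = torch.version.hip.split(".")[:2]
        rocm_version = (int(major), int(minor))

    # Tuple comparison turns version gating into a one-liner:
    if rocm_version >= (6, 4):
        pass  # enable features that need ROCm 6.4+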
patientx
8609a6dced
Merge branch 'comfyanonymous:master' into master
2025-05-27 01:03:35 +03:00
comfyanonymous
89a84e32d2
Disable initial GPU load when novram is used. ( #8294 )
2025-05-26 16:39:27 -04:00
patientx
bbcb33ea72
Merge branch 'comfyanonymous:master' into master
2025-05-26 16:26:39 +03:00
comfyanonymous
e5799c4899
Enable pytorch attention by default on AMD gfx1151 ( #8282 )
2025-05-26 04:29:25 -04:00
patientx
3b69a08c08
Merge branch 'comfyanonymous:master' into master
2025-05-24 04:06:28 +03:00
comfyanonymous
0b50d4c0db
Add argument to explicitly enable fp8 compute support. ( #8257 )
This can be used to test if your current GPU/pytorch version supports fp8 matrix mult in combination with --fast or the fp8_e4m3fn_fast dtype.
2025-05-23 17:43:50 -04:00
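A sketch of such an override flag; the flag name and wiring here are illustrative, so check the actual CLI help for the real argument:

    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument("--supports-fp8-compute", action="store_true",
                        help="Assume the GPU/pytorch combo supports fp8 matrix mult.")
    args = parser.parse_args()

    def supports_fp8_compute() -> bool:
        if args.supports_fp8_compute:
            return True   # explicit user override
        return False      # otherwise fall back to hardware detection (omitted)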
patientx
f49d26848b
Merge branch 'comfyanonymous:master' into master
2025-04-30 13:46:06 +03:00