Commit Graph

2246 Commits

Author SHA1 Message Date
patientx
4c5915d5cb
Merge branch 'comfyanonymous:master' into master 2025-09-12 09:29:27 +03:00
comfyanonymous
33bd9ed9cb
Implement hunyuan image refiner model. (#9817) 2025-09-12 00:43:20 -04:00
comfyanonymous
18de0b2830
Fast preview for hunyuan image. (#9814) 2025-09-11 19:33:02 -04:00
patientx
aae8c1486f
Merge pull request #297 from Rando717/Rando717-zluda.py
zluda.py "Expanded gfx identifier, lowercase gpu search, detect Triton version"
2025-09-11 20:35:35 +03:00
patientx
06fe8754d2
Merge branch 'comfyanonymous:master' into master 2025-09-11 13:46:42 +03:00
comfyanonymous
e01e99d075
Support hunyuan image distilled model. (#9807) 2025-09-10 23:17:34 -04:00
patientx
666b2e05fa
Merge branch 'comfyanonymous:master' into master 2025-09-10 10:47:09 +03:00
comfyanonymous
543888d3d8
Fix lowvram issue with hunyuan image vae. (#9794) 2025-09-10 02:15:34 -04:00
comfyanonymous
85e34643f8
Support hunyuan image 2.1 regular model. (#9792) 2025-09-10 02:05:07 -04:00
comfyanonymous
5c33872e2f
Fix issue on old torch. (#9791) 2025-09-10 00:23:47 -04:00
comfyanonymous
b288fb0db8
Small refactor of some vae code. (#9787) 2025-09-09 18:09:56 -04:00
Rando717
4057f2984c
Update zluda.py (MEM_BUS_WIDTH#3)
Lower-cases the lookup inside MEM_BUS_WIDTH, in case of inconsistent casing on Radeon Pro (PRO) GPUs.

Also fixes the "Triton device properties" lookup inside MEM_BUS_WIDTH by lower-casing it.
2025-09-09 20:04:20 +02:00
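The lower-cased lookup pattern is simple enough to sketch. A minimal illustration, assuming a dict keyed by lower-case GPU names; the table entries and helper name here are hypothetical, not the actual zluda.py code:

```python
# Hypothetical sketch of a case-insensitive MEM_BUS_WIDTH lookup;
# table entries and the helper name are illustrative, not zluda.py's real ones.
MEM_BUS_WIDTH = {
    "radeon rx 7900 xtx": 384,
    "radeon pro w7800": 256,
}

def get_mem_bus_width(gpu_name: str, default: int = 128) -> int:
    # Lower-case the incoming name so "Radeon PRO W7800" and
    # "Radeon Pro W7800" both hit the same table entry.
    return MEM_BUS_WIDTH.get(gpu_name.lower(), default)
```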
Rando717
13ba6a8a8d
Update zluda.py (cleanup print Triton version)
Compacted the check: no exception handling needed, and silent if no version string is found.
2025-09-09 19:30:54 +02:00
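A compact version check along those lines might look like the following sketch; this is an assumed shape for the change, not the exact commit:

```python
# Sketch: print the Triton version when one can be found, stay silent
# otherwise -- no try/except needed.
import importlib.util

if importlib.util.find_spec("triton") is not None:
    import triton
    version = getattr(triton, "__version__", None)
    if version:
        print(f"Triton version: {version}")
```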
Rando717
ce8900fa25
Update zluda.py (gpu_name_to_gfx)
-changed the function into a list of rules (sketched below)

-attached the correct gfx code to each GPU name

-addressed a potential incorrect designation for the RX 6000 S Series via sort priority
2025-09-09 18:51:41 +02:00
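A first-match rules list like the one described might look like this sketch. The specific (pattern, gfx) pairs are illustrative assumptions; the point is the sort priority, where more specific names such as the S Series must appear before the broader patterns they overlap:

```python
# Illustrative rules list: the first matching substring wins, so the more
# specific "rx 6800s" must be listed before the broader "rx 6800".
GFX_RULES = [
    ("rx 6800s", "gfx1032"),  # S Series first (sort priority)
    ("rx 6700s", "gfx1032"),
    ("rx 6800", "gfx1030"),
    ("rx 6700", "gfx1031"),
]

def gpu_name_to_gfx(gpu_name: str) -> str | None:
    name = gpu_name.lower()
    for pattern, gfx in GFX_RULES:
        if pattern in name:
            return gfx
    return None  # caller falls back to a default gfx target
```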
patientx
a531352603
Merge branch 'comfyanonymous:master' into master 2025-09-09 01:35:58 +03:00
comfyanonymous
103a12cb66
Support qwen inpaint controlnet. (#9772) 2025-09-08 17:30:26 -04:00
patientx
6f38e729cc
Merge branch 'comfyanonymous:master' into master 2025-09-08 22:15:28 +03:00
Rando717
e7d48450a3
Update zluda.py (removed previously added gfx90c)
The 'radeon graphics' check is not reliable enough,
considering 'radeon (tm) graphics' also exists on Vega.

Plus, gfx1036 Raphael (Ryzen 7000) is called 'radeon (tm) graphics', as is Granite Ridge (Ryzen 9000).
2025-09-08 21:10:20 +02:00
contentis
97652d26b8
Add explicit casting in apply_rope for Qwen VL (#9759) 2025-09-08 15:08:18 -04:00
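The explicit-casting pattern for rope is worth illustrating. A sketch of the common Flux-style apply_rope with the casts written out; shapes and names are assumptions, not the literal PR diff:

```python
import torch

def apply_rope(xq: torch.Tensor, xk: torch.Tensor, freqs_cis: torch.Tensor):
    # Rotate in float32 for precision, then cast explicitly back to the
    # query/key dtype so bf16/fp16 pipelines keep their original dtype.
    xq_ = xq.to(torch.float32).reshape(*xq.shape[:-1], -1, 1, 2)
    xk_ = xk.to(torch.float32).reshape(*xk.shape[:-1], -1, 1, 2)
    freqs = freqs_cis.to(torch.float32)
    xq_out = freqs[..., 0] * xq_[..., 0] + freqs[..., 1] * xq_[..., 1]
    xk_out = freqs[..., 0] * xk_[..., 0] + freqs[..., 1] * xk_[..., 1]
    return xq_out.reshape(xq.shape).to(xq.dtype), xk_out.reshape(xk.shape).to(xk.dtype)
```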
Rando717
590f46ab41
Update zluda.py (typo) 2025-09-08 20:31:49 +02:00
Rando717
675d6d8f4c
Update zluda.py (gfx gpu names)
-expanded GPU gfx names
-added RDNA4, RDNA3.5, ...
-added the missing Polaris cards to prevent the 'gfx1010' and 'gfx1030' fallback
-kept the gfx designations mostly the same, based on the available custom libs for hip57/62

might need some adjustments afterwards
2025-09-08 17:55:29 +02:00
Rando717
ddb1e3da47
Update zluda.py (typo) 2025-09-08 17:22:41 +02:00
Rando717
a7336ad630
Update zluda.py (MEM_BUS_WIDTH#2)
Added Vega10/20 cards.
Untested; unclear whether this has a real effect or is just a placebo.
2025-09-08 17:19:03 +02:00
Rando717
40199a5244
Update zluda.py (print Triton version)
Added a check that prints the Triton version string, if one exists.
Could be useful info for troubleshooting reports.
2025-09-08 17:00:40 +02:00
patientx
b46622ffa5
Merge branch 'comfyanonymous:master' into master 2025-09-08 11:14:04 +03:00
comfyanonymous
fb763d4333
Fix amd_min_version crash when cpu device. (#9754) 2025-09-07 21:16:29 -04:00
patientx
9417753a6c
Merge branch 'comfyanonymous:master' into master 2025-09-07 13:16:57 +03:00
comfyanonymous
bcbd7884e3
Don't enable pytorch attention on AMD if triton isn't available. (#9747) 2025-09-07 00:29:38 -04:00
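The guard described in that commit reduces to a small check. A minimal sketch, assuming a helper name and an is_amd flag that are not the actual model_management.py internals:

```python
import importlib.util

def can_enable_pytorch_attention(is_amd: bool) -> bool:
    # On AMD, the PyTorch attention path is only worth auto-enabling
    # when Triton is importable; otherwise keep the previous default.
    if is_amd and importlib.util.find_spec("triton") is None:
        return False
    return True
```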
comfyanonymous
27a0fcccc3
Enable bf16 VAE on RDNA4. (#9746) 2025-09-06 23:25:22 -04:00
patientx
afbcd5d57e
Merge branch 'comfyanonymous:master' into master 2025-09-06 11:51:33 +03:00
comfyanonymous
ea6cdd2631
Print all fast options in --help (#9737) 2025-09-06 01:05:05 -04:00
patientx
3ca065a755
fix 2025-09-05 23:11:57 +03:00
patientx
0488fe3748
rmsnorm patch second try 2025-09-05 23:10:27 +03:00
patientx
8966009181
added rmsnorm patch for torch versions older than 2.4 2025-09-05 22:43:39 +03:00
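torch.nn.functional.rms_norm only exists from torch 2.4 onward, which is the gap such a patch has to bridge. A minimal sketch of a version-aware fallback; the function name and eps default are assumptions, not the patch itself:

```python
import torch

def rms_norm(x: torch.Tensor, weight: torch.Tensor = None, eps: float = 1e-6):
    if hasattr(torch.nn.functional, "rms_norm"):  # torch >= 2.4
        shape = weight.shape if weight is not None else x.shape[-1:]
        return torch.nn.functional.rms_norm(x, shape, weight=weight, eps=eps)
    # torch < 2.4: normalize by the root mean square, computed in
    # float32 for stability, then rescale.
    rms = torch.rsqrt(x.float().pow(2).mean(dim=-1, keepdim=True) + eps)
    out = x * rms.to(x.dtype)
    return out * weight if weight is not None else out
```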
patientx
f9d7fcb696
Merge branch 'comfyanonymous:master' into master 2025-09-05 22:09:30 +03:00
comfyanonymous
2ee7879a0b
Fix lowvram issues with hunyuan3d 2.1 (#9735) 2025-09-05 14:57:35 -04:00
patientx
c7c7269f48
Merge branch 'comfyanonymous:master' into master 2025-09-05 17:11:07 +03:00
comfyanonymous
c9ebe70072
Some changes to the previous hunyuan PR. (#9725) 2025-09-04 20:39:02 -04:00
Yousef R. Gamaleldin
261421e218
Add Hunyuan 3D 2.1 Support (#8714) 2025-09-04 20:36:20 -04:00
patientx
d79e93a0a9
Merge branch 'comfyanonymous:master' into master 2025-09-04 12:41:48 +03:00
comfyanonymous
72855db715
Fix potential rope issue. (#9710) 2025-09-03 22:20:13 -04:00
patientx
991209d11d
Merge branch 'comfyanonymous:master' into master 2025-09-03 00:05:33 +03:00
comfyanonymous
e3018c2a5a
uso -> uxo/uno as requested. (#9688) 2025-09-02 16:12:07 -04:00
patientx
b30a38dca0
Merge branch 'comfyanonymous:master' into master 2025-09-02 22:46:44 +03:00
comfyanonymous
3412d53b1d
USO style reference. (#9677)
Load the projector.safetensors file with the ModelPatchLoader node and use
the siglip_vision_patch14_384.safetensors "clip vision" model and the
USOStyleReferenceNode.
2025-09-02 15:36:22 -04:00
patientx
47c6fb34c9
Merge branch 'comfyanonymous:master' into master 2025-09-02 09:46:42 +03:00
contentis
e2d1e5dad9
Enable Convolution AutoTuning (#9301) 2025-09-01 20:33:50 -04:00
comfyanonymous
27e067ce50
Implement the USO subject identity lora. (#9674)
Use the lora with FluxContextMultiReferenceLatentMethod node set to "uso"
and a ReferenceLatent node with the reference image.
2025-09-01 18:54:02 -04:00
patientx
9cb469282e
Merge branch 'comfyanonymous:master' into master 2025-08-31 11:24:57 +03:00
chaObserv
32a627bf1f
SEEDS: update noise decomposition and refactor (#9633)
- Update the decomposition to reflect interval dependency
- Extract phi computations into functions
- Use torch.lerp for interpolation
2025-08-31 00:01:45 -04:00
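The torch.lerp swap is a drop-in replacement for the written-out interpolation: torch.lerp(a, b, t) computes a + t * (b - a). A quick check of the equivalence:

```python
import torch

a, b = torch.randn(4), torch.randn(4)
t = 0.3
# torch.lerp(a, b, t) == a + t * (b - a), in a single fused op.
assert torch.allclose(torch.lerp(a, b, t), a + t * (b - a))
```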