patientx
08784dc90d
Update zluda.py
2025-06-12 13:19:59 +03:00
patientx
11af025690
Update zluda.py
2025-06-12 13:11:03 +03:00
patientx
828b7636d0
Update zluda.py
2025-06-12 13:10:40 +03:00
patientx
f53791d5d2
Merge branch 'comfyanonymous:master' into master
2025-06-12 00:32:55 +03:00
pythongosssss
50c605e957
Add support for sqlite database (#8444)
* Add support for sqlite database
* fix
2025-06-11 16:43:39 -04:00
comfyanonymous
8a4ff747bd
Fix mistake in last commit. (#8496)
* Move to right place.
2025-06-11 15:13:29 -04:00
comfyanonymous
af1eb58be8
Fix black images on some flux models in fp16. (#8495)
2025-06-11 15:09:11 -04:00
patientx
06ac233007
Merge branch 'comfyanonymous:master' into master
2025-06-10 20:34:42 +03:00
comfyanonymous
6e28a46454
Apple most likely is never fixing the fp16 attention bug. (#8485)
2025-06-10 13:06:24 -04:00
patientx
4bc3866c67
Merge branch 'comfyanonymous:master' into master
2025-06-09 21:10:00 +03:00
comfyanonymous
7f800d04fa
Enable AMD fp8 and pytorch attention on some GPUs. (#8474)
Information is from the pytorch source code.
2025-06-09 12:50:39 -04:00
patientx
b4d015f5f3
Merge branch 'comfyanonymous:master' into master
2025-06-08 21:21:41 +03:00
comfyanonymous
97755eed46
Enable fp8 ops by default on gfx1201 (#8464)
2025-06-08 14:15:34 -04:00
patientx
156aedd995
Merge branch 'comfyanonymous:master' into master
2025-06-07 19:30:45 +03:00
comfyanonymous
daf9d25ee2
Cleaner torch version comparisons. (#8453)
2025-06-07 10:01:15 -04:00
patientx
d28b4525b3
Merge branch 'comfyanonymous:master' into master
2025-06-06 17:10:34 +03:00
comfyanonymous
3b4b171e18
Alternate fix for #8435 (#8442)
2025-06-06 09:43:27 -04:00
patientx
67fc8e3325
Merge branch 'comfyanonymous:master' into master
2025-06-06 01:42:57 +03:00
comfyanonymous
4248b1618f
Let chroma TE work on regular flux. (#8429)
2025-06-05 10:07:17 -04:00
patientx
9aeff135b2
Update zluda.py
2025-06-02 02:55:19 +03:00
patientx
803f82189a
Merge branch 'comfyanonymous:master' into master
2025-06-01 17:44:48 +03:00
comfyanonymous
fb4754624d
Make the casting in lists the same as regular inputs. (#8373)
2025-06-01 05:39:54 -04:00
comfyanonymous
19e45e9b0e
Make it easier to pass lists of tensors to models. (#8358)
2025-05-31 20:00:20 -04:00
patientx
d74ffb792a
Merge branch 'comfyanonymous:master' into master
2025-05-31 01:55:42 +03:00
drhead
08b7cc7506
use fused multiply-add pointwise ops in chroma (#8279)
2025-05-30 18:09:54 -04:00
patientx
07b8d211e6
Merge branch 'comfyanonymous:master' into master
2025-05-30 23:48:15 +03:00
comfyanonymous
704fc78854
Put ROCm version in tuple to make it easier to enable stuff based on it. (#8348)
2025-05-30 15:41:02 -04:00
patientx
c74742444d
Merge branch 'comfyanonymous:master' into master
2025-05-29 18:29:06 +03:00
comfyanonymous
f2289a1f59
Delete useless file. (#8327)
2025-05-29 08:29:37 -04:00
patientx
46a997fb23
Merge branch 'comfyanonymous:master' into master
2025-05-29 10:56:01 +03:00
comfyanonymous
5e5e46d40c
Not really tested WAN Phantom Support. (#8321)
2025-05-28 23:46:15 -04:00
comfyanonymous
1c1687ab1c
Support HiDream SimpleTuner loras. (#8318)
2025-05-28 18:47:15 -04:00
patientx
5b5165371e
Merge branch 'comfyanonymous:master' into master
2025-05-28 01:06:44 +03:00
comfyanonymous
06c661004e
Memory estimation code can now take into account conds. (#8307)
2025-05-27 15:09:05 -04:00
patientx
8609a6dced
Merge branch 'comfyanonymous:master' into master
2025-05-27 01:03:35 +03:00
comfyanonymous
89a84e32d2
Disable initial GPU load when novram is used. (#8294)
2025-05-26 16:39:27 -04:00
patientx
bbcb33ea72
Merge branch 'comfyanonymous:master' into master
2025-05-26 16:26:39 +03:00
comfyanonymous
e5799c4899
Enable pytorch attention by default on AMD gfx1151 (#8282)
2025-05-26 04:29:25 -04:00
patientx
48bbdd0842
Merge branch 'comfyanonymous:master' into master
2025-05-25 15:38:51 +03:00
comfyanonymous
a0651359d7
Return proper error if diffusion model not detected properly. (#8272)
2025-05-25 05:28:11 -04:00
patientx
9790aaac7b
Merge branch 'comfyanonymous:master' into master
2025-05-24 14:00:54 +03:00
comfyanonymous
5a87757ef9
Better error if sageattention is installed but a dependency is missing. (#8264)
2025-05-24 06:43:12 -04:00
patientx
3b69a08c08
Merge branch 'comfyanonymous:master' into master
2025-05-24 04:06:28 +03:00
comfyanonymous
0b50d4c0db
Add argument to explicitly enable fp8 compute support. (#8257)
This can be used to test if your current GPU/pytorch version supports fp8 matrix mult in combination with --fast or the fp8_e4m3fn_fast dtype.
2025-05-23 17:43:50 -04:00
patientx
3e49c3e2ff
Merge branch 'comfyanonymous:master' into master
2025-05-24 00:01:56 +03:00
drhead
30b2eb8a93
create arange on-device (#8255)
2025-05-23 16:15:06 -04:00
patientx
c653935b37
Merge branch 'comfyanonymous:master' into master
2025-05-23 13:57:11 +03:00
LuXuxue
dc4958db54
add some architectures to utils.py
2025-05-23 13:54:03 +08:00
comfyanonymous
f85c08df06
Make VACE conditionings stackable. (#8240)
2025-05-22 19:22:26 -04:00
comfyanonymous
87f9130778
Revert "This doesn't seem to be needed on chroma. (#8209)" (#8210)
This reverts commit 7e84bf5373.
2025-05-20 05:39:55 -04:00
comfyanonymous
7e84bf5373
This doesn't seem to be needed on chroma. (#8209)
2025-05-20 05:29:23 -04:00
patientx
acf83b60f4
Merge branch 'comfyanonymous:master' into master
2025-05-18 11:37:17 +03:00
comfyanonymous
aee2908d03
Remove useless log. (#8166)
2025-05-17 06:27:34 -04:00
patientx
de4e3dd19a
Merge branch 'comfyanonymous:master' into master
2025-05-16 02:43:05 +03:00
comfyanonymous
1c2d45d2b5
Fix typo in last PR. (#8144)
More robust model detection for future proofing.
2025-05-15 19:02:19 -04:00
George0726
c820ef950d
Add Wan-FUN Camera Control models and Add WanCameraImageToVideo node (#8013)
* support wan camera models
* fix by ruff check
* change camera_condition type; make camera_condition optional
* support camera trajectory nodes
* fix camera direction
---------
Co-authored-by: Qirui Sun <sunqr0667@126.com>
2025-05-15 19:00:43 -04:00
patientx
1cf25c6980
Add files via upload
2025-05-15 13:55:15 +03:00
patientx
44cac886c4
Create quant_per_block.py
2025-05-15 13:54:47 +03:00
patientx
01aae8eddc
Create fwd_prefill.py
2025-05-15 13:52:35 +03:00
patientx
39279bda97
Merge branch 'comfyanonymous:master' into master
2025-05-14 14:15:49 +03:00
Christian Byrne
98ff01e148
Display progress and result URL directly on API nodes (#8102)
* [Luma] Print download URL of successful task result directly on nodes (#177)
[Veo] Print download URL of successful task result directly on nodes (#184)
[Recraft] Print download URL of successful task result directly on nodes (#183)
[Pixverse] Print download URL of successful task result directly on nodes (#182)
[Kling] Print download URL of successful task result directly on nodes (#181)
[MiniMax] Print progress text and download URL of successful task result directly on nodes (#179)
[Docs] Link to docs in `API_NODE` class property type annotation comment (#178)
[Ideogram] Print download URL of successful task result directly on nodes (#176)
Show output URL and progress text on Pika nodes (#168)
[BFL] Print download URL of successful task result directly on nodes (#175)
[OpenAI] Print download URL of successful task result directly on nodes (#174)
* fix ruff errors
* fix 3.10 syntax error
2025-05-14 00:33:18 -04:00
patientx
c5ecbe5b30
Merge branch 'comfyanonymous:master' into master
2025-05-14 00:08:44 +03:00
comfyanonymous
4a9014e201
Hunyuan Custom initial untested implementation. (#8101)
2025-05-13 15:53:47 -04:00
patientx
3609a0cf35
Update zluda.py
2025-05-13 18:57:09 +03:00
patientx
8ced886a3d
Update zluda.py
2025-05-13 16:23:24 +03:00
patientx
6fc773788f
updated how to handle comfyui package updates
2025-05-13 16:22:44 +03:00
patientx
f0127d6326
Merge branch 'comfyanonymous:master' into master
2025-05-13 16:00:32 +03:00
patientx
d934498f60
Update rmsnorm.py
2025-05-13 16:00:26 +03:00
comfyanonymous
a814f2e8cc
Fix issue with old pytorch RMSNorm. (#8095)
2025-05-13 07:54:28 -04:00
comfyanonymous
481732a0ed
Support official ACE Step loras. (#8094)
2025-05-13 07:32:16 -04:00
patientx
d15ce39530
Merge branch 'comfyanonymous:master' into master
2025-05-13 01:06:19 +03:00
patientx
05d6c876ad
Update zluda.py
2025-05-13 01:06:08 +03:00
patientx
6aa9077f3c
Update zluda.py
2025-05-13 01:05:49 +03:00
comfyanonymous
640c47e7de
Fix torch warning about deprecated function. (#8075)
Drop support for torch versions below 2.2 on the audio VAEs.
2025-05-12 14:32:01 -04:00
patientx
cd7eb9bd36
"Boolean value of Tensor with more than one value is ambiguous" fix
2025-05-11 20:39:42 +03:00
patientx
8abcc4ec4f
Update zluda.py
2025-05-11 16:51:43 +03:00
patientx
3f58f50df3
Update zluda.py
2025-05-11 16:51:06 +03:00
patientx
6098615cda
Merge branch 'comfyanonymous:master' into master
2025-05-11 15:10:21 +03:00
comfyanonymous
577de83ca9
ACE VAE works in fp16. (#8055)
2025-05-11 04:58:00 -04:00
patientx
515bf966b2
Merge branch 'comfyanonymous:master' into master
2025-05-10 14:57:16 +03:00
comfyanonymous
d42613686f
Fix issue with fp8 ops on some models. (#8045)
_scaled_mm errors when an input is non contiguous.
2025-05-10 07:52:56 -04:00
patientx
2862921cca
Merge branch 'comfyanonymous:master' into master
2025-05-10 14:17:24 +03:00
Pam
1b3bf0a5da
Fix res_multistep_ancestral sampler (#8030)
2025-05-09 20:14:13 -04:00
patientx
56461e2e90
Merge branch 'comfyanonymous:master' into master
2025-05-09 22:40:55 +03:00
blepping
42da274717
Use normal ComfyUI attention in ACE-Steps model (#8023)
* Use normal ComfyUI attention in ACE-Steps model
* Let optimized_attention handle output reshape for ACE
2025-05-09 13:51:02 -04:00
patientx
4672537a99
Merge branch 'comfyanonymous:master' into master
2025-05-09 12:39:13 +03:00
comfyanonymous
8ab15c863c
Add --mmap-torch-files to enable use of mmap when loading ckpt/pt (#8021)
2025-05-09 04:52:47 -04:00
patientx
8a3424f354
Update zluda.py
2025-05-09 00:17:15 +03:00
patientx
a59deda664
Update zluda.py
2025-05-09 00:16:37 +03:00
patientx
9444d18408
Update zluda.py
2025-05-08 23:44:22 +03:00
patientx
184c8521cb
Update zluda.py
2025-05-08 23:44:05 +03:00
patientx
6c2370a577
Update zluda.py
2025-05-08 20:18:26 +03:00
patientx
0436293d99
Update zluda.py
2025-05-08 20:17:58 +03:00
patientx
81a16eefbc
Update zluda.py
2025-05-08 19:57:59 +03:00
patientx
f9671afff0
Update zluda.py
2025-05-08 19:57:28 +03:00
patientx
a963fba415
Update utils.py
2025-05-08 15:33:45 +03:00
patientx
e973632f11
Merge branch 'comfyanonymous:master' into master
2025-05-08 14:52:54 +03:00
comfyanonymous
a692c3cca4
Make ACE VAE tiling work. (#8004)
2025-05-08 07:25:45 -04:00
comfyanonymous
5d3cc85e13
Make japanese hiragana and katakana characters work with ACE. (#7997)
2025-05-08 03:32:36 -04:00
comfyanonymous
c7c025b8d1
Adjust memory estimation code for ACE VAE. (#7990)
2025-05-08 01:22:23 -04:00