Author | Commit | Message | Date
patientx | 08784dc90d | Update zluda.py | 2025-06-12 13:19:59 +03:00
patientx | 11af025690 | Update zluda.py | 2025-06-12 13:11:03 +03:00
patientx | 828b7636d0 | Update zluda.py | 2025-06-12 13:10:40 +03:00
patientx | f53791d5d2 | Merge branch 'comfyanonymous:master' into master | 2025-06-12 00:32:55 +03:00
pythongosssss | 50c605e957 | Add support for sqlite database (#8444) | 2025-06-11 16:43:39 -04:00
    * Add support for sqlite database
    * fix
comfyanonymous | 8a4ff747bd | Fix mistake in last commit. (#8496) | 2025-06-11 15:13:29 -04:00
    * Move to right place.
comfyanonymous | af1eb58be8 | Fix black images on some flux models in fp16. (#8495) | 2025-06-11 15:09:11 -04:00
patientx | 06ac233007 | Merge branch 'comfyanonymous:master' into master | 2025-06-10 20:34:42 +03:00
comfyanonymous | 6e28a46454 | Apple most likely is never fixing the fp16 attention bug. (#8485) | 2025-06-10 13:06:24 -04:00
patientx | 4bc3866c67 | Merge branch 'comfyanonymous:master' into master | 2025-06-09 21:10:00 +03:00
comfyanonymous | 7f800d04fa | Enable AMD fp8 and pytorch attention on some GPUs. (#8474) | 2025-06-09 12:50:39 -04:00
    Information is from the pytorch source code.
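Commits 7f800d04fa, 97755eed46 and e5799c4899 in this log all gate features on specific AMD gfx architectures. A minimal sketch of that kind of arch-based gating, assuming a ROCm build of PyTorch (where device properties expose gcnArchName); the allow-lists below only echo the arch names that appear in this log and are not ComfyUI's actual policy:

```python
import torch

# Illustrative allow-lists only; the real gating policy comes from the
# pytorch source code the commit message cites, not from here.
FP8_ARCHES = ("gfx1201",)
PT_ATTENTION_ARCHES = ("gfx1151", "gfx1201")

def amd_arch(device_index: int = 0) -> str:
    # ROCm builds of PyTorch expose the gfx arch name on device properties;
    # the getattr default keeps this safe on CUDA/CPU builds.
    props = torch.cuda.get_device_properties(device_index)
    return getattr(props, "gcnArchName", "")

def amd_feature_flags(device_index: int = 0) -> dict:
    arch = amd_arch(device_index)
    return {
        "fp8_ops": arch.startswith(FP8_ARCHES),
        "pytorch_attention": arch.startswith(PT_ATTENTION_ARCHES),
    }
```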
patientx | b4d015f5f3 | Merge branch 'comfyanonymous:master' into master | 2025-06-08 21:21:41 +03:00
comfyanonymous | 97755eed46 | Enable fp8 ops by default on gfx1201 (#8464) | 2025-06-08 14:15:34 -04:00
patientx | 156aedd995 | Merge branch 'comfyanonymous:master' into master | 2025-06-07 19:30:45 +03:00
comfyanonymous | daf9d25ee2 | Cleaner torch version comparisons. (#8453) | 2025-06-07 10:01:15 -04:00
patientx | d28b4525b3 | Merge branch 'comfyanonymous:master' into master | 2025-06-06 17:10:34 +03:00
comfyanonymous | 3b4b171e18 | Alternate fix for #8435 (#8442) | 2025-06-06 09:43:27 -04:00
patientx | 67fc8e3325 | Merge branch 'comfyanonymous:master' into master | 2025-06-06 01:42:57 +03:00
comfyanonymous | 4248b1618f | Let chroma TE work on regular flux. (#8429) | 2025-06-05 10:07:17 -04:00
patientx | 9aeff135b2 | Update zluda.py | 2025-06-02 02:55:19 +03:00
patientx | 803f82189a | Merge branch 'comfyanonymous:master' into master | 2025-06-01 17:44:48 +03:00
comfyanonymous | fb4754624d | Make the casting in lists the same as regular inputs. (#8373) | 2025-06-01 05:39:54 -04:00
comfyanonymous | 19e45e9b0e | Make it easier to pass lists of tensors to models. (#8358) | 2025-05-31 20:00:20 -04:00
patientx | d74ffb792a | Merge branch 'comfyanonymous:master' into master | 2025-05-31 01:55:42 +03:00
drhead | 08b7cc7506 | use fused multiply-add pointwise ops in chroma (#8279) | 2025-05-30 18:09:54 -04:00
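Commit 08b7cc7506 replaces separate multiply and add pointwise ops with a fused one. A sketch of the general technique in PyTorch, independent of the chroma model's actual modulation code:

```python
import torch

x, scale, shift = (torch.randn(4, 8) for _ in range(3))

# Two pointwise kernels: one multiply, then one add, with an
# intermediate tensor allocated in between.
out_naive = x * scale + shift

# torch.addcmul(input, t1, t2) computes input + t1 * t2 as a single
# fused pointwise op: one kernel launch, no intermediate allocation.
out_fused = torch.addcmul(shift, x, scale)

assert torch.allclose(out_naive, out_fused)
```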
patientx | 07b8d211e6 | Merge branch 'comfyanonymous:master' into master | 2025-05-30 23:48:15 +03:00
comfyanonymous | 704fc78854 | Put ROCm version in tuple to make it easier to enable stuff based on it. (#8348) | 2025-05-30 15:41:02 -04:00
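Commit 704fc78854 states its own motivation: a version tuple makes feature gates trivial, because Python tuples compare element-wise. A minimal sketch of the idea; the parsing details here are an assumption, not the repo's exact code:

```python
import torch

def rocm_version() -> tuple:
    # torch.version.hip is a string like "6.1.40093-..." on ROCm builds
    # of PyTorch, and None on CUDA/CPU builds.
    hip = getattr(torch.version, "hip", None)
    if not hip:
        return ()
    parts = hip.replace("-", ".").split(".")[:2]
    return tuple(int(p) for p in parts if p.isdigit())

# Tuples compare element-wise, so gates read naturally and () is
# always "too old":
if rocm_version() >= (6, 0):
    pass  # enable something that needs ROCm 6.0 or newer
```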
patientx | c74742444d | Merge branch 'comfyanonymous:master' into master | 2025-05-29 18:29:06 +03:00
comfyanonymous | f2289a1f59 | Delete useless file. (#8327) | 2025-05-29 08:29:37 -04:00
patientx | 46a997fb23 | Merge branch 'comfyanonymous:master' into master | 2025-05-29 10:56:01 +03:00
comfyanonymous | 5e5e46d40c | Not really tested WAN Phantom Support. (#8321) | 2025-05-28 23:46:15 -04:00
comfyanonymous | 1c1687ab1c | Support HiDream SimpleTuner loras. (#8318) | 2025-05-28 18:47:15 -04:00
patientx | 5b5165371e | Merge branch 'comfyanonymous:master' into master | 2025-05-28 01:06:44 +03:00
comfyanonymous | 06c661004e | Memory estimation code can now take into account conds. (#8307) | 2025-05-27 15:09:05 -04:00
patientx | 8609a6dced | Merge branch 'comfyanonymous:master' into master | 2025-05-27 01:03:35 +03:00
comfyanonymous | 89a84e32d2 | Disable initial GPU load when novram is used. (#8294) | 2025-05-26 16:39:27 -04:00
patientx | bbcb33ea72 | Merge branch 'comfyanonymous:master' into master | 2025-05-26 16:26:39 +03:00
comfyanonymous | e5799c4899 | Enable pytorch attention by default on AMD gfx1151 (#8282) | 2025-05-26 04:29:25 -04:00
patientx | 48bbdd0842 | Merge branch 'comfyanonymous:master' into master | 2025-05-25 15:38:51 +03:00
comfyanonymous | a0651359d7 | Return proper error if diffusion model not detected properly. (#8272) | 2025-05-25 05:28:11 -04:00
patientx | 9790aaac7b | Merge branch 'comfyanonymous:master' into master | 2025-05-24 14:00:54 +03:00
comfyanonymous | 5a87757ef9 | Better error if sageattention is installed but a dependency is missing. (#8264) | 2025-05-24 06:43:12 -04:00
patientx | 3b69a08c08 | Merge branch 'comfyanonymous:master' into master | 2025-05-24 04:06:28 +03:00
comfyanonymous | 0b50d4c0db | Add argument to explicitly enable fp8 compute support. (#8257) | 2025-05-23 17:43:50 -04:00
    This can be used to test if your current GPU/pytorch version supports fp8 matrix mult in combination with --fast or the fp8_e4m3fn_fast dtype.
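Commit 0b50d4c0db describes probing whether the current GPU/PyTorch combination can do fp8 matrix multiplies. A best-effort sketch of such a probe, assuming a PyTorch build new enough to have fp8 dtypes; torch._scaled_mm is a private API whose signature has shifted across releases, so the try/except deliberately treats any failure as "unsupported":

```python
import torch

def fp8_matmul_supported(device: str = "cuda") -> bool:
    # PyTorch builds before ~2.1 lack fp8 dtypes entirely.
    if not hasattr(torch, "float8_e4m3fn"):
        return False
    try:
        a = torch.randn(16, 16, device=device).to(torch.float8_e4m3fn)
        # _scaled_mm wants its second operand column-major, hence .t().
        b = torch.randn(16, 16, device=device).to(torch.float8_e4m3fn).t()
        scale = torch.ones((), device=device)
        # Private API: any error here (unsupported GPU, older signature)
        # simply counts as "no fp8 matmul".
        torch._scaled_mm(a, b, scale_a=scale, scale_b=scale)
        return True
    except Exception:
        return False

print(fp8_matmul_supported())
```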
patientx | 3e49c3e2ff | Merge branch 'comfyanonymous:master' into master | 2025-05-24 00:01:56 +03:00
drhead | 30b2eb8a93 | create arange on-device (#8255) | 2025-05-23 16:15:06 -04:00
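Commit 30b2eb8a93's one-line message names a common PyTorch pitfall: building an index tensor on the CPU and then copying it to the GPU. A minimal illustration of the difference:

```python
import torch

n = 1024
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Creates the tensor on the CPU, then copies it over: an extra
# allocation plus a host-to-device transfer.
idx_copy = torch.arange(n).to(device)

# Creates the tensor directly on the target device, no CPU intermediate.
idx_direct = torch.arange(n, device=device)

assert torch.equal(idx_copy, idx_direct)
```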
patientx | c653935b37 | Merge branch 'comfyanonymous:master' into master | 2025-05-23 13:57:11 +03:00
LuXuxue | dc4958db54 | add some architectures to utils.py | 2025-05-23 13:54:03 +08:00
comfyanonymous | f85c08df06 | Make VACE conditionings stackable. (#8240) | 2025-05-22 19:22:26 -04:00
comfyanonymous | 87f9130778 | Revert "This doesn't seem to be needed on chroma. (#8209)" (#8210) | 2025-05-20 05:39:55 -04:00
    This reverts commit 7e84bf5373.