Commit Graph

1237 Commits

| Author | SHA1 | Message | Date |
|---|---|---|---|
| patientx | bb7e9ef812 | Merge branch 'comfyanonymous:master' into master | 2024-10-24 00:06:13 +03:00 |
| comfyanonymous | 66b0961a46 | Fix ControlLora issue with last commit. | 2024-10-23 17:02:40 -04:00 |
| patientx | ca700c7638 | Merge branch 'comfyanonymous:master' into master | 2024-10-23 23:58:26 +03:00 |
| comfyanonymous | 754597c8a9 | Clean up some controlnet code. Remove self.device, which was useless. | 2024-10-23 14:19:05 -04:00 |
| patientx | fd143ca944 | Merge branch 'comfyanonymous:master' into master | 2024-10-23 00:19:39 +03:00 |
| comfyanonymous | 915fdb5745 | Fix lowvram edge case. | 2024-10-22 16:34:50 -04:00 |
| contentis | 5a8a48931a | remove attention abstraction (#5324) | 2024-10-22 14:02:38 -04:00 |
| patientx | f0e8767deb | Merge branch 'comfyanonymous:master' into master | 2024-10-22 13:39:15 +03:00 |
| comfyanonymous | 8ce2a1052c | Optimizations to --fast and scaled fp8. | 2024-10-22 02:12:28 -04:00 |
| comfyanonymous | f82314fcfc | Fix duplicate sigmas on beta scheduler. | 2024-10-21 20:19:45 -04:00 |
| comfyanonymous | 0075c6d096 | Mixed precision diffusion models with scaled fp8. This change adds support for diffusion models where all the linears are scaled fp8 while the other weights are the original precision. | 2024-10-21 18:12:51 -04:00 |
| patientx | 9fd46200ab | Merge branch 'comfyanonymous:master' into master | 2024-10-21 12:23:49 +03:00 |
| comfyanonymous | 83ca891118 | Support scaled fp8 t5xxl model. | 2024-10-20 22:27:00 -04:00 |
| patientx | 5a425aeda1 | Merge branch 'comfyanonymous:master' into master | 2024-10-20 21:06:24 +03:00 |
| comfyanonymous | f9f9faface | Fixed model merging issue with scaled fp8. | 2024-10-20 06:24:31 -04:00 |
| patientx | 1678ea8f9c | Merge branch 'comfyanonymous:master' into master | 2024-10-20 10:20:06 +03:00 |
| comfyanonymous | 471cd3eace | fp8 casting is fast on GPUs that support fp8 compute. | 2024-10-20 00:54:47 -04:00 |
| comfyanonymous | a68bbafddb | Support diffusion models with scaled fp8 weights. | 2024-10-19 23:47:42 -04:00 |
| comfyanonymous | 73e3a9e676 | Clamp output when rounding weight to prevent NaN. | 2024-10-19 19:07:10 -04:00 |
| patientx | d4b509799f | Merge branch 'comfyanonymous:master' into master | 2024-10-18 11:14:20 +03:00 |
| comfyanonymous | 67158994a4 | Use the lowvram cast_to function for everything. | 2024-10-17 17:25:56 -04:00 |
| patientx | fc4acf26c3 | Merge branch 'comfyanonymous:master' into master | 2024-10-16 23:54:39 +03:00 |
| comfyanonymous | 0bedfb26af | Revert "Fix Transformers FutureWarning (#5140)". This reverts commit 95b7cf9bbe. | 2024-10-16 12:36:19 -04:00 |
| patientx | f143a803d6 | Merge branch 'comfyanonymous:master' into master | 2024-10-15 09:55:21 +03:00 |
| comfyanonymous | f584758271 | Cleanup some useless lines. | 2024-10-14 21:02:39 -04:00 |
| svdc | 95b7cf9bbe | Fix Transformers FutureWarning (#5140). Updates sd1_clip.py and fixes a comment. | 2024-10-14 20:12:20 -04:00 |
| patientx | a5e3eae103 | Merge branch 'comfyanonymous:master' into master | 2024-10-12 23:00:55 +03:00 |
| comfyanonymous | 3c60ecd7a8 | Fix fp8 ops staying enabled. | 2024-10-12 14:10:13 -04:00 |
| comfyanonymous | 7ae6626723 | Remove useless argument. | 2024-10-12 07:16:21 -04:00 |
| patientx | eae1c15ab1 | Merge branch 'comfyanonymous:master' into master | 2024-10-12 11:02:28 +03:00 |
| comfyanonymous | 6632365e16 | model_options consistency between functions (weight_dtype -> dtype). | 2024-10-11 20:51:19 -04:00 |
| Kadir Nar | ad07796777 | 🐛 Add device to variable c (#5210) | 2024-10-11 20:37:50 -04:00 |
| patientx | e4d24788f1 | Merge branch 'comfyanonymous:master' into master | 2024-10-11 00:22:45 +03:00 |
| comfyanonymous | 1b80895285 | Make clip loader nodes support loading sd3 t5xxl in lower precision. Add attention mask support in the SD3 text encoder code. | 2024-10-10 15:06:15 -04:00 |
| patientx | f9eab05f54 | Merge branch 'comfyanonymous:master' into master | 2024-10-10 10:30:17 +03:00 |
| Dr.Lt.Data | 5f9d5a244b | Hotfix for the div zero occurrence when memory_used_encode is 0 (#5121). See https://github.com/comfyanonymous/ComfyUI/issues/5069#issuecomment-2382656368 | 2024-10-09 23:34:34 -04:00 |
| Jonathan Avila | 4b2f0d9413 | Increase maximum macOS version to 15.0.1 when forcing upcast attention (#5191) | 2024-10-09 22:21:41 -04:00 |
| comfyanonymous | e38c94228b | Add a weight_dtype fp8_e4m3fn_fast to the Diffusion Model Loader node. This is used to load weights in fp8 and use fp8 matrix multiplication. | 2024-10-09 19:43:17 -04:00 |
| patientx | 21d9bef158 | Merge branch 'comfyanonymous:master' into master | 2024-10-09 11:22:05 +03:00 |
| comfyanonymous | 203942c8b2 | Fix flux doras with diffusers keys. | 2024-10-08 19:03:40 -04:00 |
| patientx | df174b7cbd | Merge branch 'comfyanonymous:master' into master | 2024-10-07 22:22:38 +03:00 |
| comfyanonymous | 8dfa0cc552 | Make SD3 fast previews a little better. | 2024-10-07 09:19:59 -04:00 |
| patientx | 445390eae2 | Merge branch 'comfyanonymous:master' into master | 2024-10-07 10:15:48 +03:00 |
| comfyanonymous | e5ecdfdd2d | Make fast previews for SDXL a little better by adding a bias. | 2024-10-06 19:27:04 -04:00 |
| comfyanonymous | 7d29fbf74b | Slightly improve the fast previews for flux by adding a bias. | 2024-10-06 17:55:46 -04:00 |
| patientx | dfa888ea5e | Merge branch 'comfyanonymous:master' into master | 2024-10-05 21:09:30 +03:00 |
| comfyanonymous | 7d2467e830 | Some minor cleanups. | 2024-10-05 13:22:39 -04:00 |
| patientx | 613a320875 | Merge branch 'comfyanonymous:master' into master | 2024-10-05 00:01:10 +03:00 |
| comfyanonymous | 6f021d8aa0 | Let --verbose have an argument for the log level. | 2024-10-04 10:05:34 -04:00 |
| patientx | 9ec6b86747 | Merge branch 'comfyanonymous:master' into master | 2024-10-03 22:16:09 +03:00 |