patientx
3b96468e0c
Update README.md
2024-10-24 00:05:42 +03:00
patientx
ca700c7638
Merge branch 'comfyanonymous:master' into master
2024-10-23 23:58:26 +03:00
comfyanonymous
754597c8a9
Clean up some controlnet code.
...
Remove self.device which was useless.
2024-10-23 14:19:05 -04:00
patientx
fd143ca944
Merge branch 'comfyanonymous:master' into master
2024-10-23 00:19:39 +03:00
comfyanonymous
915fdb5745
Fix lowvram edge case.
2024-10-22 16:34:50 -04:00
contentis
5a8a48931a
remove attention abstraction (#5324)
2024-10-22 14:02:38 -04:00
patientx
f0e8767deb
Merge branch 'comfyanonymous:master' into master
2024-10-22 13:39:15 +03:00
comfyanonymous
8ce2a1052c
Optimizations to --fast and scaled fp8.
2024-10-22 02:12:28 -04:00
comfyanonymous
f82314fcfc
Fix duplicate sigmas on beta scheduler.
2024-10-21 20:19:45 -04:00
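The commit message does not include the patch, but a minimal sketch of the kind of cleanup such a fix implies (assumed; dedupe_sigmas is an illustrative helper, not the actual code) would drop a repeated consecutive sigma so the sampler never sees a zero-length step:

    import torch

    def dedupe_sigmas(sigmas: torch.Tensor) -> torch.Tensor:
        # Keep only the first occurrence of each run of equal consecutive sigmas.
        keep = [0]
        for i in range(1, len(sigmas)):
            if sigmas[i] != sigmas[keep[-1]]:
                keep.append(i)
        return sigmas[keep]

    print(dedupe_sigmas(torch.tensor([14.6, 7.1, 7.1, 3.2, 0.0])))
    # tensor([14.6000,  7.1000,  3.2000,  0.0000])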
comfyanonymous
0075c6d096
Mixed precision diffusion models with scaled fp8.
...
This change adds support for diffusion models where all the linear layers are
scaled fp8 while the other weights are kept in their original precision.
2024-10-21 18:12:51 -04:00
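A minimal sketch of the "scaled fp8 linear" idea described above (illustrative only; ScaledFP8Linear is a hypothetical name and the real ComfyUI implementation differs): the weight is stored as float8_e4m3fn together with a per-tensor scale and dequantized to the activation dtype at runtime, while everything else keeps its original precision. Requires PyTorch 2.1+ for the float8 dtypes.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ScaledFP8Linear(nn.Module):
        def __init__(self, weight, bias=None):
            super().__init__()
            # Per-tensor scale so the weight fits the fp8 range.
            scale = weight.abs().max().clamp(min=1e-12) / torch.finfo(torch.float8_e4m3fn).max
            self.register_buffer("weight_fp8", (weight / scale).to(torch.float8_e4m3fn))
            self.register_buffer("scale", scale.to(torch.float32))
            self.bias = None if bias is None else nn.Parameter(bias.clone())

        def forward(self, x):
            # Dequantize on the fly; only the linear layers pay the fp8 storage cost.
            w = self.weight_fp8.to(x.dtype) * self.scale.to(x.dtype)
            return F.linear(x, w, self.bias)

    lin = ScaledFP8Linear(torch.randn(64, 32), torch.zeros(64))
    print(lin(torch.randn(4, 32)).shape)  # torch.Size([4, 64])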
patientx
9fd46200ab
Merge branch 'comfyanonymous:master' into master
2024-10-21 12:23:49 +03:00
comfyanonymous
83ca891118
Support scaled fp8 t5xxl model.
2024-10-20 22:27:00 -04:00
patientx
5a425aeda1
Merge branch 'comfyanonymous:master' into master
2024-10-20 21:06:24 +03:00
comfyanonymous
f9f9faface
Fixed model merging issue with scaled fp8.
2024-10-20 06:24:31 -04:00
patientx
1678ea8f9c
Merge branch 'comfyanonymous:master' into master
2024-10-20 10:20:06 +03:00
comfyanonymous
471cd3eace
fp8 casting is fast on GPUs that support fp8 compute.
2024-10-20 00:54:47 -04:00
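As an assumption for illustration (not ComfyUI's actual check), "GPUs that support fp8 compute" roughly means compute capability 8.9 or newer (Ada/Hopper); a sketch of such a capability check:

    import torch

    def gpu_supports_fp8(device_index: int = 0) -> bool:
        # Hypothetical helper: fp8 tensor-core support starts around compute capability 8.9.
        if not torch.cuda.is_available():
            return False
        return torch.cuda.get_device_capability(device_index) >= (8, 9)

    print(gpu_supports_fp8())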
comfyanonymous
a68bbafddb
Support diffusion models with scaled fp8 weights.
2024-10-19 23:47:42 -04:00
comfyanonymous
73e3a9e676
Clamp output when rounding weight to prevent NaN.
2024-10-19 19:07:10 -04:00
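A minimal sketch of the idea behind this fix (simplified; clamp_to_fp8 is illustrative, not the real function): clamp a tensor to the fp8 dtype's finite range before casting so values just outside it do not become NaN or inf.

    import torch

    def clamp_to_fp8(t: torch.Tensor, dtype=torch.float8_e4m3fn) -> torch.Tensor:
        finfo = torch.finfo(dtype)                  # e4m3fn range is roughly [-448, 448]
        return t.clamp(min=finfo.min, max=finfo.max).to(dtype)

    x = torch.tensor([1.0, 500.0, -1000.0])
    print(clamp_to_fp8(x).to(torch.float32))        # tensor([   1.,  448., -448.])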
patientx
f1948ac144
Merge branch 'comfyanonymous:master' into master
2024-10-18 13:25:35 +03:00
comfyanonymous
518c0dc2fe
Add tooltips to LoraSave node.
2024-10-18 06:01:09 -04:00
patientx
0c33561160
Update README.md
2024-10-18 11:14:57 +03:00
patientx
d4b509799f
Merge branch 'comfyanonymous:master' into master
2024-10-18 11:14:20 +03:00
patientx
93751fd1bd
Update README.md
2024-10-18 11:14:12 +03:00
comfyanonymous
ce0542e10b
Add a note to the README that Python 3.13 is not yet supported.
2024-10-17 19:27:37 -04:00
comfyanonymous
8473019d40
PyTorch can be shipped with numpy 2 now.
2024-10-17 19:15:17 -04:00
Xiaodong Xie
89f15894dd
Ignore more network-related errors during websocket communication. (#5269)
...
Intermittent network issues during websocket communication should not crash the ComfyUI process.
Co-authored-by: Xiaodong Xie <xie.xiaodong@frever.com>
2024-10-17 18:31:45 -04:00
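A hedged sketch of the pattern (names are illustrative and not the actual ComfyUI server code): swallow transient network errors when pushing a message over a websocket so one flaky client connection cannot take down the whole process.

    import logging

    async def try_send(ws, message) -> bool:
        try:
            await ws.send_json(message)              # assumes an aiohttp-style websocket object
            return True
        except (ConnectionError, TimeoutError, OSError) as exc:
            # Intermittent network issue: log it and keep serving other clients.
            logging.warning("websocket send failed, dropping message: %s", exc)
            return False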
comfyanonymous
67158994a4
Use the lowvram cast_to function for everything.
2024-10-17 17:25:56 -04:00
patientx
fc4acf26c3
Merge branch 'comfyanonymous:master' into master
2024-10-16 23:54:39 +03:00
comfyanonymous
7390ff3b1e
Add missing import.
2024-10-16 14:58:30 -04:00
comfyanonymous
0bedfb26af
Revert "Fix Transformers FutureWarning ( #5140 )"
...
This reverts commit 95b7cf9bbe.
2024-10-16 12:36:19 -04:00
comfyanonymous
f71cfd2687
Add an experimental node to sharpen latents.
...
Can be used with LatentApplyOperationCFG for interesting results.
2024-10-16 05:25:31 -04:00
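A rough sketch of latent sharpening as an unsharp mask (illustrative only; the experimental node's actual kernel and parameters may differ): blur the latent, then push it away from the blur to emphasize high-frequency detail.

    import torch
    import torch.nn.functional as F

    def sharpen_latent(latent: torch.Tensor, strength: float = 0.5, kernel_size: int = 3) -> torch.Tensor:
        pad = kernel_size // 2
        blurred = F.avg_pool2d(F.pad(latent, (pad,) * 4, mode="reflect"), kernel_size, stride=1)
        return latent + strength * (latent - blurred)   # unsharp mask in latent space

    x = torch.randn(1, 4, 64, 64)                       # [batch, channels, height, width]
    print(sharpen_latent(x).shape)                      # torch.Size([1, 4, 64, 64])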
Alex "mcmonkey" Goodwin
c695c4af7f
Frontend Manager: avoid redundant gh calls for static versions (#5152)
...
* Frontend Manager: avoid redundant gh calls for static versions
* actually, removing old tmpdir isn't needed
I tested - the downloader code already handles this case well
(also rmdir was the wrong function anyway; shutil.rmtree would be needed if the directory had content)
* add code comment
2024-10-16 03:35:37 -04:00
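A small generic illustration of the point in the commit body (not the Frontend Manager's code): os.rmdir only removes an empty directory, while shutil.rmtree removes the directory and its contents.

    import os
    import shutil
    import tempfile

    tmp = tempfile.mkdtemp()
    open(os.path.join(tmp, "file.txt"), "w").close()

    try:
        os.rmdir(tmp)                  # fails: the directory is not empty
    except OSError as exc:
        print("os.rmdir failed:", exc)

    shutil.rmtree(tmp)                 # removes the directory and everything in it
    print(os.path.exists(tmp))         # False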
patientx
a67cb80d33
Merge branch 'comfyanonymous:master' into master
2024-10-15 23:43:46 +03:00
comfyanonymous
0dbba9f751
Add some latent operation nodes.
...
This is a port of the ModelSamplerTonemapNoiseTest from the experiments
repo.
To replicate that node, use LatentOperationTonemapReinhard and
LatentApplyOperationCFG together.
2024-10-15 15:00:36 -04:00
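A hedged sketch of the Reinhard tonemap idea behind these nodes (simplified; the real LatentOperationTonemapReinhard normalization details may differ): compress the per-sample magnitude of the latent/noise tensor with x / (1 + x) so extreme values are tamed.

    import torch

    def tonemap_reinhard(latent: torch.Tensor, multiplier: float = 1.0, eps: float = 1e-8) -> torch.Tensor:
        flat = latent.reshape(latent.shape[0], -1)
        magnitude = flat.norm(dim=1, keepdim=True) + eps     # per-sample magnitude
        normalized = flat / magnitude
        tonemapped = magnitude * multiplier / (1.0 + magnitude * multiplier)
        return (normalized * tonemapped).reshape(latent.shape)

    x = torch.randn(2, 4, 64, 64) * 10
    print(tonemap_reinhard(x).abs().max() < x.abs().max())   # tensor(True)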
patientx
f143a803d6
Merge branch 'comfyanonymous:master' into master
2024-10-15 09:55:21 +03:00
comfyanonymous
f584758271
Cleanup some useless lines.
2024-10-14 21:02:39 -04:00
svdc
95b7cf9bbe
Fix Transformers FutureWarning (#5140)
...
* Update sd1_clip.py
Fix Transformers FutureWarning
* Update sd1_clip.py
Fix comment
2024-10-14 20:12:20 -04:00
patientx
b24af8b083
Merge branch 'comfyanonymous:master' into master
2024-10-13 23:35:06 +03:00
comfyanonymous
191a0d56b4
Switch default packaging workflows to Python 3.12
2024-10-13 06:59:31 -04:00
patientx
a5e3eae103
Merge branch 'comfyanonymous:master' into master
2024-10-12 23:00:55 +03:00
comfyanonymous
3c60ecd7a8
Fix fp8 ops staying enabled.
2024-10-12 14:10:13 -04:00
comfyanonymous
7ae6626723
Remove useless argument.
2024-10-12 07:16:21 -04:00
patientx
eae1c15ab1
Merge branch 'comfyanonymous:master' into master
2024-10-12 11:02:28 +03:00
comfyanonymous
6632365e16
model_options consistency between functions.
...
weight_dtype -> dtype
2024-10-11 20:51:19 -04:00
Kadir Nar
ad07796777
🐛 Add device to variable c (#5210)
2024-10-11 20:37:50 -04:00
patientx
e4d24788f1
Merge branch 'comfyanonymous:master' into master
2024-10-11 00:22:45 +03:00
comfyanonymous
1b80895285
Make clip loader nodes support loading sd3 t5xxl in lower precision.
...
Add attention mask support in the SD3 text encoder code.
2024-10-10 15:06:15 -04:00
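A generic sketch of the attention-mask part (illustrative, not the SD3 encoder code): padding positions receive a large negative additive bias so softmax gives them near-zero attention weight.

    import torch

    def additive_attention_mask(pad_mask: torch.Tensor, dtype=torch.float32) -> torch.Tensor:
        # pad_mask: [batch, seq_len] with 1 for real tokens and 0 for padding.
        bias = (1.0 - pad_mask.to(dtype)) * torch.finfo(dtype).min
        return bias[:, None, None, :]  # broadcastable over heads and query positions

    mask = torch.tensor([[1, 1, 1, 0, 0]])
    print(additive_attention_mask(mask).shape)  # torch.Size([1, 1, 1, 5])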
patientx
f9eab05f54
Merge branch 'comfyanonymous:master' into master
2024-10-10 10:30:17 +03:00
Dr.Lt.Data
5f9d5a244b
Hotfix for the div zero occurrence when memory_used_encode is 0 (#5121)
...
https://github.com/comfyanonymous/ComfyUI/issues/5069#issuecomment-2382656368
2024-10-09 23:34:34 -04:00
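A minimal sketch of the guard such a hotfix implies (illustrative; the real variable names and logic differ): fall back to a sane value instead of dividing by a zero memory estimate.

    def batches_that_fit(free_memory: float, memory_used_encode: float) -> int:
        # Hypothetical helper: avoid ZeroDivisionError when the estimate is 0.
        if memory_used_encode <= 0:
            return 1
        return max(1, int(free_memory // memory_used_encode))

    print(batches_that_fit(8e9, 0))       # 1
    print(batches_that_fit(8e9, 2e9))     # 4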
Chenlei Hu
14eba07acd
Update web content to release v1.3.11 (#5189)
...
* Update web content to release v1.3.11
* nit
2024-10-09 22:37:04 -04:00