doctorpangloss
87bed08124
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2025-08-01 16:05:47 -07:00
chaObserv
61b08d4ba6
Replace manual x * sigmoid(x) with torch silu in VAE nonlinearity (#9057)
2025-07-30 19:25:56 -04:00
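This change is small enough to show inline; a minimal before/after sketch (the `nonlinearity` name comes from the commit subject, everything else is illustrative):

```python
import torch
import torch.nn.functional as F

def nonlinearity(x: torch.Tensor) -> torch.Tensor:
    # Before: swish/SiLU written out by hand.
    #   return x * torch.sigmoid(x)
    # After: the built-in, mathematically identical but dispatched to a
    # fused kernel.
    return F.silu(x)
```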
doctorpangloss
03e5430121
Improvements for Wan 2.2 support
- add xet support and add the xet cache to the manageable directories
- xet is enabled by default
- fix places that logged to the root logger
- improve logging around model loading and unloading
- TorchCompileNode now supports the VAE
- a missing torchaudio now produces less noise in the logs
- feature flags are assumed to support everything in the distributed progress context
- fix progress notifications
2025-07-28 14:36:27 -07:00
doctorpangloss
693038738a
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2025-02-24 09:39:26 -08:00
comfyanonymous
1cd6cd6080
Disable pytorch attention in VAE for AMD.
2025-02-14 05:42:14 -05:00
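A sketch of what such a gate could look like; the function name and the exact condition are assumptions (ROCm builds of PyTorch report a version string in `torch.version.hip`):

```python
import torch

def vae_pytorch_attention_ok() -> bool:
    # Hypothetical gate: skip F.scaled_dot_product_attention in the VAE
    # on AMD/ROCm builds, where torch.version.hip is a version string
    # rather than None, and use the fallback attention path there.
    return not (torch.cuda.is_available() and torch.version.hip is not None)
```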
doctorpangloss
a3452f6e6a
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2025-01-28 13:45:51 -08:00
comfyanonymous
96e2a45193
Remove useless code.
2025-01-23 05:56:23 -05:00
doctorpangloss
cf3c96e593
Cosmos support
2025-01-16 12:39:05 -08:00
doctorpangloss
631d9e44c6
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2025-01-16 09:58:02 -08:00
comfyanonymous
008761166f
Optimize first attention block in cosmos VAE.
2025-01-15 21:48:46 -05:00
doctorpangloss
7655be873c
Updates to support Hunyuan Video
2024-12-25 22:39:12 -08:00
doctorpangloss
0fd407ae87
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-12-24 16:48:03 -08:00
comfyanonymous
4c5c4ddeda
Fix regression in VAE code on old pytorch versions.
2024-12-18 03:08:28 -05:00
comfyanonymous
bda1482a27
Basic Hunyuan Video model support.
2024-12-16 19:35:40 -05:00
Chenlei Hu
d9d7f3c619
Lint all unused variables (#5989)
* Enable F841
* Autofix
* Remove all unused variable assignment
2024-12-12 17:59:16 -05:00
doctorpangloss
2d1676c717
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-12-09 15:54:37 -08:00
Chenlei Hu
0fd4e6c778
Lint unused import (#5973)
* Lint unused import
* nit
* Remove unused imports
* revert fix_torch import
* nit
2024-12-09 15:24:39 -05:00
doctorpangloss
0549f35e85
Merge commit '39fb74c5bd13a1dccf4d7293a2f7a755d9f43cbd' of github.com:comfyanonymous/ComfyUI
- Improves tests
- Fixes model management
- Fixes issues with language nodes
2024-08-13 20:08:56 -07:00
Max Tretikov
bd59bae606
Fix compile_core in comfy.ldm.modules.diffusionmodules.mmdit
2024-06-14 14:43:55 -06:00
Max Tretikov
8b091f02de
Add xformers.ops imports
2024-06-14 14:09:46 -06:00
doctorpangloss
f69b6225c0
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-05-20 12:06:35 -07:00
comfyanonymous
98f828fad9
Remove unnecessary code.
2024-05-18 09:36:44 -04:00
doctorpangloss
93cdef65a4
Merge upstream
2024-03-12 09:49:47 -07:00
comfyanonymous
2a813c3b09
Switch some more prints to logging.
2024-03-11 16:34:58 -04:00
doctorpangloss
369aeb598f
Merge upstream, fix 3.12 compatibility, fix nightlies issue, fix broken node
2024-01-03 16:00:36 -08:00
comfyanonymous
261bcbb0d9
A few missing comfy ops in the VAE.
2023-12-22 04:05:42 -05:00
comfyanonymous
77755ab8db
Refactor comfy.ops
comfy.ops -> comfy.ops.disable_weight_init
This should make it clearer what these ops actually do.
Some unused code has also been removed.
2023-12-11 23:27:13 -05:00
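The new name points at a standard trick; a minimal sketch of the idea, assuming the ops skip initialization by overriding `reset_parameters` (not the actual `comfy.ops` source):

```python
import torch

class disable_weight_init:
    # A namespace whose name states what these layers do: skip the
    # random weight init that torch runs at construction time.
    class Linear(torch.nn.Linear):
        def reset_parameters(self):
            # No-op: weights are overwritten by checkpoint values right
            # after construction, so initializing them is wasted work.
            return None

    class Conv2d(torch.nn.Conv2d):
        def reset_parameters(self):
            return None
```

Building a large model from these classes avoids touching every parameter twice: once for the random init and once for the checkpoint load.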
doctorpangloss
04d0ecd0d4
merge upstream
2023-10-17 17:49:31 -07:00
Benjamin Berman
d21655b5a2
merge upstream
2023-10-17 14:47:59 -07:00
comfyanonymous
d44a2de49f
Make VAE code closer to sgm.
2023-10-17 15:18:51 -04:00
comfyanonymous
23680a9155
Refactor the attention code in the VAE.
2023-10-17 03:19:29 -04:00
comfyanonymous
88733c997f
pytorch_attention_enabled can now return True when xformers is enabled.
2023-10-11 21:30:57 -04:00
comfyanonymous
1a4bd9e9a6
Refactor the attention functions.
There's no reason for the whole CrossAttention object to be repeated when
only the operation in the middle changes.
2023-10-11 20:38:48 -04:00
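A sketch of that shape, with assumed names (`attention_pytorch`, `attn_fn`): the projections are defined once and only the core computation is swapped.

```python
import torch
import torch.nn.functional as F
from torch import nn

def attention_pytorch(q, k, v, heads):
    # One interchangeable "middle" op; an xformers or sliced variant
    # would share this signature. Assumes dim is divisible by heads.
    b, n, _ = q.shape
    q, k, v = (t.view(b, n, heads, -1).transpose(1, 2) for t in (q, k, v))
    out = F.scaled_dot_product_attention(q, k, v)
    return out.transpose(1, 2).reshape(b, n, -1)

class CrossAttention(nn.Module):
    # The projections are written once; only the attention op varies.
    def __init__(self, dim, heads, attn_fn=attention_pytorch):
        super().__init__()
        self.heads = heads
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(dim, dim, bias=False)
        self.to_v = nn.Linear(dim, dim, bias=False)
        self.to_out = nn.Linear(dim, dim)
        self.attn_fn = attn_fn

    def forward(self, x, context=None):
        context = x if context is None else context
        out = self.attn_fn(self.to_q(x), self.to_k(context),
                           self.to_v(context), self.heads)
        return self.to_out(out)
```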
doctorpangloss
e8b60dfc6e
merge upstream
2023-10-06 15:02:31 -07:00
comfyanonymous
1938f5c5fe
Add a force argument to soft_empty_cache to force a cache empty.
2023-09-04 00:58:18 -04:00
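A sketch of the signature change, not the real model_management code; "soft" here means the cache is only dropped when the backend is known to benefit, and force=True overrides that heuristic:

```python
import torch

def soft_empty_cache(force: bool = False):
    # Illustrative condition: empty the CUDA cache unconditionally when
    # forced, otherwise only on backends where it is known to help.
    if torch.cuda.is_available():
        if force or torch.version.cuda is not None:
            torch.cuda.empty_cache()
            torch.cuda.ipc_collect()
```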
doctorpangloss
db673f7728
merge upstream
2023-08-29 13:36:53 -07:00
comfyanonymous
bed116a1f9
Remove optimization that caused border.
2023-08-29 11:21:36 -04:00
comfyanonymous
1c794a2161
Fallback to slice attention if xformers doesn't support the operation.
2023-08-27 22:24:42 -04:00
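A sketch of the fallback pattern under assumed names: xformers raises NotImplementedError for inputs its kernels do not cover, and those inputs are retried on a slower chunked path.

```python
import torch
import xformers.ops  # assumes xformers is installed

def attention_sliced(q, k, v, slice_size=1024):
    # Hypothetical chunked path: attend over slice_size query rows at a
    # time so the full (n_q, n_k) score matrix never exists at once.
    out = torch.empty_like(q)
    scale = q.shape[-1] ** -0.5
    for i in range(0, q.shape[-2], slice_size):
        s = slice(i, i + slice_size)
        scores = (q[..., s, :] @ k.transpose(-2, -1)) * scale
        out[..., s, :] = scores.softmax(dim=-1) @ v
    return out

def attention_xformers_safe(q, k, v):
    # xformers raises NotImplementedError for shapes/dtypes its kernels
    # do not support; retry those inputs with the sliced path.
    try:
        return xformers.ops.memory_efficient_attention(q, k, v)
    except NotImplementedError:
        return attention_sliced(q, k, v)
```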
comfyanonymous
d935ba50c4
Make --bf16-vae work on torch 2.0
2023-08-27 21:33:53 -04:00
Benjamin Berman
d25d5e75f0
WIP: make package structure coherent
2023-08-22 11:35:20 -07:00
comfyanonymous
95d796fc85
Faster VAE loading.
2023-07-29 16:28:30 -04:00
comfyanonymous
fa28d7334b
Remove useless code.
2023-06-23 12:35:26 -04:00
comfyanonymous
b8636a44aa
Make scaled_dot_product switch to sliced attention on OOM.
2023-05-20 16:01:02 -04:00
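The same fallback idea applied to the fused PyTorch kernel; a sketch with assumed names, reusing the chunked path from the sliced-attention sketch above:

```python
import torch
import torch.nn.functional as F

def attention_sdp(q, k, v):
    # If the fused kernel runs out of memory, free the cache and redo
    # the work in slices, which bounds peak memory at the cost of speed.
    try:
        return F.scaled_dot_product_attention(q, k, v)
    except torch.cuda.OutOfMemoryError:
        torch.cuda.empty_cache()
        return attention_sliced(q, k, v)  # hypothetical chunked path, as above
```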
comfyanonymous
797c4e8d3b
Simplify and improve some vae attention code.
2023-05-20 15:07:21 -04:00
comfyanonymous
bae4fb4a9d
Fix imports.
2023-05-04 18:10:29 -04:00
comfyanonymous
73c3e11e83
Fix model_management import so it doesn't get executed twice.
2023-04-15 19:04:33 -04:00
comfyanonymous
e46b1c3034
Disable xformers in VAE when xformers == 0.0.18
2023-04-04 22:22:02 -04:00
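A version pin like this is a one-liner; a sketch where the flag name is an assumption:

```python
try:
    import xformers
    # Illustrative pin: one known-bad release is excluded from the VAE
    # attention path while other versions remain eligible.
    XFORMERS_ENABLED_VAE = xformers.__version__ != "0.0.18"
except ImportError:
    XFORMERS_ENABLED_VAE = False
```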
comfyanonymous
3ed4a4e4e6
Try again with vae tiled decoding if regular fails because of OOM.
2023-03-22 14:49:00 -04:00
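A sketch of the retry, with the method names assumed for illustration:

```python
import torch

def vae_decode_with_retry(vae, latent):
    # Try the full-frame decode first; on CUDA OOM, free the cache and
    # redo the decode tile by tile, trading speed and possible seams
    # for bounded memory.
    try:
        return vae.decode(latent)
    except torch.cuda.OutOfMemoryError:
        torch.cuda.empty_cache()
        return vae.decode_tiled(latent)
```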
comfyanonymous
c692509c2b
Try to improve VAEEncode memory usage a bit.
2023-03-22 02:45:18 -04:00
comfyanonymous
83f23f82b8
Add pytorch attention support to VAE.
2023-03-13 12:45:54 -04:00
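A sketch of the pattern (not the actual patch): VAE attention runs over spatial positions, so (B, C, H, W) feature maps are flattened to sequences of length H*W before the fused kernel is called.

```python
import torch
import torch.nn.functional as F

def vae_attention_pytorch(q, k, v):
    # Flatten each (B, C, H, W) map to (B, 1, H*W, C): one head, a
    # sequence per spatial position, head dim C.
    b, c, h, w = q.shape
    q, k, v = (t.reshape(b, 1, c, h * w).transpose(2, 3) for t in (q, k, v))
    out = F.scaled_dot_product_attention(q, k, v)  # (B, 1, H*W, C)
    return out.transpose(2, 3).reshape(b, c, h, w)
```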