Angel Tsvetkov
1e8a9cef85
Update model.py
Why print the same VAE message twice?
Before:
Using pytorch attention in VAE
Using pytorch attention in VAE
After:
Using pytorch attention in VAE encoder (1/2) - Model: ae.safetensors
Using pytorch attention in VAE decoder (2/2) - Model: ae.safetensors
2025-08-02 16:38:43 +03:00
chaObserv
61b08d4ba6
Replace manual x * sigmoid(x) with torch silu in VAE nonlinearity ( #9057 )
2025-07-30 19:25:56 -04:00
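The replacement in this commit is behavior-preserving: SiLU (a.k.a. swish) is defined as x * sigmoid(x), so `torch.nn.functional.silu` is a drop-in for the manual expression. A minimal check of the equivalence (not the actual VAE code):

```python
import torch
import torch.nn.functional as F

x = torch.randn(4, 8)
manual = x * torch.sigmoid(x)  # hand-written nonlinearity, as in the old code
fused = F.silu(x)              # built-in SiLU: numerically the same operation
assert torch.allclose(manual, fused, atol=1e-6)
```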
comfyanonymous
1cd6cd6080
Disable pytorch attention in VAE for AMD.
2025-02-14 05:42:14 -05:00
comfyanonymous
96e2a45193
Remove useless code.
2025-01-23 05:56:23 -05:00
comfyanonymous
008761166f
Optimize first attention block in cosmos VAE.
2025-01-15 21:48:46 -05:00
comfyanonymous
4c5c4ddeda
Fix regression in VAE code on old pytorch versions.
2024-12-18 03:08:28 -05:00
comfyanonymous
bda1482a27
Basic Hunyuan Video model support.
2024-12-16 19:35:40 -05:00
Chenlei Hu
d9d7f3c619
Lint all unused variables ( #5989 )
* Enable F841
* Autofix
* Remove all unused variable assignment
2024-12-12 17:59:16 -05:00
Chenlei Hu
0fd4e6c778
Lint unused import ( #5973 )
* Lint unused import
* nit
* Remove unused imports
* revert fix_torch import
* nit
2024-12-09 15:24:39 -05:00
comfyanonymous
98f828fad9
Remove unnecessary code.
2024-05-18 09:36:44 -04:00
comfyanonymous
2a813c3b09
Switch some more prints to logging.
2024-03-11 16:34:58 -04:00
comfyanonymous
261bcbb0d9
A few missing comfy ops in the VAE.
2023-12-22 04:05:42 -05:00
comfyanonymous
77755ab8db
Refactor comfy.ops
comfy.ops -> comfy.ops.disable_weight_init
This should make it more clear what they actually do.
Some unused code has also been removed.
2023-12-11 23:27:13 -05:00
comfyanonymous
d44a2de49f
Make VAE code closer to sgm.
2023-10-17 15:18:51 -04:00
comfyanonymous
23680a9155
Refactor the attention stuff in the VAE.
2023-10-17 03:19:29 -04:00
comfyanonymous
88733c997f
pytorch_attention_enabled can now return True when xformers is enabled.
2023-10-11 21:30:57 -04:00
comfyanonymous
1a4bd9e9a6
Refactor the attention functions.
There's no reason for the whole CrossAttention object to be repeated when
only the operation in the middle changes.
2023-10-11 20:38:48 -04:00
comfyanonymous
1938f5c5fe
Add a force argument to soft_empty_cache to force a cache empty.
2023-09-04 00:58:18 -04:00
comfyanonymous
bed116a1f9
Remove optimization that caused border.
2023-08-29 11:21:36 -04:00
comfyanonymous
1c794a2161
Fallback to slice attention if xformers doesn't support the operation.
2023-08-27 22:24:42 -04:00
comfyanonymous
d935ba50c4
Make --bf16-vae work on torch 2.0
2023-08-27 21:33:53 -04:00
comfyanonymous
95d796fc85
Faster VAE loading.
2023-07-29 16:28:30 -04:00
comfyanonymous
fa28d7334b
Remove useless code.
2023-06-23 12:35:26 -04:00
comfyanonymous
b8636a44aa
Make scaled_dot_product switch to sliced attention on OOM.
2023-05-20 16:01:02 -04:00
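The OOM fallback described above can be sketched as a wrapper that tries the fused kernel first and, on failure, redoes the work in query slices. The function name, slice count, and error handling here are illustrative assumptions, not ComfyUI's actual implementation:

```python
import torch
import torch.nn.functional as F

def attention_with_fallback(q, k, v):
    """Try fused scaled_dot_product_attention; on OOM, retry in query slices.

    Sketch only: slice count and error handling are assumptions; assumes
    q, k, v share the same head dimension.
    """
    try:
        return F.scaled_dot_product_attention(q, k, v)
    except RuntimeError:  # CUDA OOM is a RuntimeError (OutOfMemoryError subclass)
        out = torch.empty_like(q)
        step = max(1, q.shape[-2] // 4)  # process queries a slice at a time
        for i in range(0, q.shape[-2], step):
            out[..., i:i + step, :] = F.scaled_dot_product_attention(
                q[..., i:i + step, :], k, v)
        return out
```

Slicing along the query axis is exact, because each query row attends to all keys independently.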
comfyanonymous
797c4e8d3b
Simplify and improve some vae attention code.
2023-05-20 15:07:21 -04:00
comfyanonymous
bae4fb4a9d
Fix imports.
2023-05-04 18:10:29 -04:00
comfyanonymous
73c3e11e83
Fix model_management import so it doesn't get executed twice.
2023-04-15 19:04:33 -04:00
comfyanonymous
e46b1c3034
Disable xformers in VAE when xformers == 0.0.18
2023-04-04 22:22:02 -04:00
comfyanonymous
3ed4a4e4e6
Try again with vae tiled decoding if regular fails because of OOM.
2023-03-22 14:49:00 -04:00
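The retry-on-OOM pattern for decoding can be sketched as below; `vae.decode`, `vae.decode_tiled`, and the tile parameters are illustrative names, not ComfyUI's exact API:

```python
import torch

def decode_latent(vae, latent, tile=64, overlap=8):
    """Decode the whole latent at once; if CUDA runs out of memory, retry tiled.

    Sketch only: the `vae` methods and tiling parameters are assumptions.
    """
    try:
        return vae.decode(latent)
    except torch.cuda.OutOfMemoryError:
        torch.cuda.empty_cache()  # release cached blocks before the retry
        return vae.decode_tiled(latent, tile_x=tile, tile_y=tile, overlap=overlap)
```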
comfyanonymous
c692509c2b
Try to improve VAEEncode memory usage a bit.
2023-03-22 02:45:18 -04:00
comfyanonymous
83f23f82b8
Add pytorch attention support to VAE.
2023-03-13 12:45:54 -04:00
comfyanonymous
a256a2abde
--disable-xformers should not even try to import xformers.
2023-03-13 11:36:48 -04:00
comfyanonymous
0f3ba7482f
Xformers is now properly disabled when --cpu used.
Added --windows-standalone-build option; currently it only makes the code
open comfyui in the browser.
2023-03-12 15:44:16 -04:00
comfyanonymous
1de86851b1
Try to fix memory issue.
2023-03-11 15:15:13 -05:00
comfyanonymous
cc8baf1080
Make VAE use common function to get free memory.
2023-03-05 14:20:07 -05:00
comfyanonymous
509c7dfc6d
Use real softmax in split op to fix issue with some images.
2023-02-10 03:13:49 -05:00
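Splitting attention over query chunks stays exact only if each chunk applies the real softmax over the full key axis; since every query row's softmax is independent of the others, the concatenated slices match the unsplit result. A minimal sketch with illustrative names:

```python
import torch

def sliced_softmax_scores(q, k, step=4):
    """Attention scores computed in query slices with an exact softmax."""
    outs = []
    for i in range(0, q.shape[0], step):
        scores = q[i:i + step] @ k.t()
        # real softmax over all keys within each slice -> exact result
        outs.append(torch.softmax(scores, dim=-1))
    return torch.cat(outs, dim=0)
```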
comfyanonymous
773cdabfce
Same thing but for the other places where it's used.
2023-02-09 12:43:29 -05:00
comfyanonymous
e8c499ddd4
Split optimization for VAE attention block.
2023-02-08 22:04:20 -05:00
comfyanonymous
5b4e312749
Use inplace operations for less OOM issues.
2023-02-08 22:04:13 -05:00
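In-place variants write into a tensor's existing storage instead of allocating a fresh result, which trims peak memory in spots like residual additions. A minimal illustration (not the actual VAE code):

```python
import torch

x = torch.randn(2, 64, 32, 32)
h = torch.randn_like(x)

out_of_place = h + x  # allocates a brand-new tensor for the sum
h += x                # in-place (h.add_(x)): reuses h's storage, no temporary

assert torch.allclose(out_of_place, h)
```

Note that in-place ops can invalidate values needed by autograd, so they are only safe where gradients through the overwritten tensor aren't required.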
comfyanonymous
220afe3310
Initial commit.
2023-01-16 22:37:14 -05:00