doctorpangloss
8284ea2fca
WIP merge
2024-08-16 14:25:06 -07:00
comfyanonymous
33fb282d5c
Fix issue.
2024-08-14 02:51:47 -04:00
doctorpangloss
0549f35e85
Merge commit '39fb74c5bd13a1dccf4d7293a2f7a755d9f43cbd' of github.com:comfyanonymous/ComfyUI
- Improvements to tests
- Fixes model management
- Fixes issues with language nodes
2024-08-13 20:08:56 -07:00
doctorpangloss
8cdc246450
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-06-17 16:19:48 -07:00
comfyanonymous
bb1969cab7
Initial support for the stable audio open model.
2024-06-15 12:14:56 -04:00
Max Tretikov
8b091f02de
Add xformers.ops imports
2024-06-14 14:09:46 -06:00
Max Tretikov
6c53388619
Fix xformers import statements in comfy.ldm.modules.attention
2024-06-14 11:21:08 -06:00
doctorpangloss
cb557c960b
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-05-31 07:42:11 -07:00
comfyanonymous
0920e0e5fe
Remove some unused imports.
2024-05-27 19:08:27 -04:00
comfyanonymous
8508df2569
Work around black image bug on macOS 14.5 by forcing attention upcasting.
2024-05-21 16:56:33 -04:00
doctorpangloss
b241ecc56d
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-05-21 11:38:24 -07:00
comfyanonymous
83d969e397
Disable xformers when tracing model.
2024-05-21 13:55:49 -04:00
doctorpangloss
f69b6225c0
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-05-20 12:06:35 -07:00
comfyanonymous
1900e5119f
Fix potential issue.
2024-05-20 08:19:54 -04:00
comfyanonymous
0bdc2b15c7
Cleanup.
2024-05-18 10:11:44 -04:00
comfyanonymous
98f828fad9
Remove unnecessary code.
2024-05-18 09:36:44 -04:00
doctorpangloss
3d98440fb7
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-05-16 14:28:49 -07:00
comfyanonymous
46daf0a9a7
Add debug options to force on and off attention upcasting.
2024-05-16 04:09:41 -04:00
comfyanonymous
ec6f16adb6
Fix SAG.
2024-05-14 18:02:27 -04:00
comfyanonymous
bb4940d837
Only enable attention upcasting on models that actually need it.
2024-05-14 17:00:50 -04:00
comfyanonymous
b0ab31d06c
Refactor attention upcasting code part 1.
2024-05-14 12:47:31 -04:00
doctorpangloss
330ecb10b2
Merge with upstream. Remove TLS flags, because a third party proxy will do this better
2024-05-02 21:57:20 -07:00
comfyanonymous
2aed53c4ac
Workaround xformers bug.
2024-04-30 21:23:40 -04:00
doctorpangloss
93cdef65a4
Merge upstream
2024-03-12 09:49:47 -07:00
comfyanonymous
2a813c3b09
Switch some more prints to logging.
2024-03-11 16:34:58 -04:00
doctorpangloss
7520691021
Merge with master
2024-02-19 10:55:22 -08:00
comfyanonymous
6bcf57ff10
Fix attention masks properly for multiple batches.
2024-02-17 16:15:18 -05:00
comfyanonymous
f8706546f3
Fix attention mask batch size in some attention functions.
2024-02-17 15:22:21 -05:00
comfyanonymous
3b9969c1c5
Properly fix attention masks in CLIP with batches.
2024-02-17 12:13:13 -05:00
doctorpangloss
82edb2ff0e
Merge with latest upstream.
2024-01-29 15:06:31 -08:00
comfyanonymous
89507f8adf
Remove some unused imports.
2024-01-25 23:42:37 -05:00
comfyanonymous
6a7bc35db8
Use basic attention implementation for small inputs on old pytorch.
2024-01-09 13:46:52 -05:00
comfyanonymous
c6951548cf
Update optimized_attention_for_device function for new functions that support masked attention.
2024-01-07 13:52:08 -05:00
comfyanonymous
aaa9017302
Add attention mask support to sub quad attention.
2024-01-07 04:13:58 -05:00
comfyanonymous
0c2c9fbdfa
Support attention mask in split attention.
2024-01-06 13:16:48 -05:00
comfyanonymous
3ad0191bfb
Implement attention mask on xformers.
2024-01-06 04:33:03 -05:00
doctorpangloss
369aeb598f
Merge upstream, fix 3.12 compatibility, fix nightlies issue, fix broken node
2024-01-03 16:00:36 -08:00
comfyanonymous
a5056cfb1f
Remove useless code.
2023-12-15 01:28:16 -05:00
comfyanonymous
77755ab8db
Refactor comfy.ops
comfy.ops -> comfy.ops.disable_weight_init
This should make it more clear what they actually do.
Some unused code has also been removed.
2023-12-11 23:27:13 -05:00
comfyanonymous
fbdb14d4c4
Cleaner CLIP text encoder implementation.
Use a simple CLIP model implementation instead of the one from transformers.
This will allow some interesting things that would be too hackish to implement using the transformers implementation.
2023-12-06 23:50:03 -05:00
comfyanonymous
1bbd65ab30
Missed this one.
2023-12-05 12:48:41 -05:00
comfyanonymous
af365e4dd1
All the unet ops with weights are now handled by comfy.ops
2023-12-04 03:12:18 -05:00
Benjamin Berman
01312a55a4
merge upstream
2023-12-03 20:41:13 -08:00
comfyanonymous
39e75862b2
Fix regression from last commit.
2023-11-26 03:43:02 -05:00
comfyanonymous
50dc39d6ec
Clean up the extra_options dict for the transformer patches.
Now everything in transformer_options gets put in extra_options.
2023-11-26 03:13:56 -05:00
comfyanonymous
3e5ea74ad3
Make buggy xformers fall back on pytorch attention.
2023-11-24 03:55:35 -05:00
comfyanonymous
871cc20e13
Support SVD img2vid model.
2023-11-23 19:41:33 -05:00
comfyanonymous
c837a173fa
Fix some memory issues in sub quad attention.
2023-10-30 15:30:49 -04:00
comfyanonymous
125b03eead
Fix some OOM issues with split attention.
2023-10-30 13:14:11 -04:00
comfyanonymous
a373367b0c
Fix some OOM issues with split and sub quad attention.
2023-10-25 20:17:28 -04:00