doctorpangloss
a8d8bff548
Improve support for torch compilation and sage attention
2024-10-29 19:22:26 -07:00
doctorpangloss
76a80a65ea
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-10-29 15:35:39 -07:00
comfyanonymous
30c0c81351
Add a way to patch blocks in SD3.
2024-10-29 00:48:32 -04:00
comfyanonymous
13b0ff8a6f
Update SD3 code.
2024-10-28 21:58:52 -04:00
comfyanonymous
5cbb01bc2f
Basic Genmo Mochi video model support.
...
To use:
"Load CLIP" node with t5xxl + type mochi
"Load Diffusion Model" node with the mochi dit file.
"Load VAE" with the mochi vae file.
EmptyMochiLatentVideo node for the latent.
euler + linear_quadratic in the KSampler node.
2024-10-26 06:54:00 -04:00
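A minimal sketch of the workflow above in ComfyUI's API-format prompt, assuming the stock node class names (CLIPLoader for "Load CLIP", UNETLoader for "Load Diffusion Model", VAELoader, CLIPTextEncode, KSampler, VAEDecode); the file names, prompt text and sampler settings are placeholders, not values taken from this commit.

    import json

    prompt = {
        "1": {"class_type": "CLIPLoader",
              "inputs": {"clip_name": "t5xxl_fp16.safetensors", "type": "mochi"}},
        "2": {"class_type": "UNETLoader",  # "Load Diffusion Model"
              "inputs": {"unet_name": "mochi_dit.safetensors",
                         "weight_dtype": "default"}},
        "3": {"class_type": "VAELoader",
              "inputs": {"vae_name": "mochi_vae.safetensors"}},
        "4": {"class_type": "CLIPTextEncode",  # positive prompt
              "inputs": {"clip": ["1", 0], "text": "a timelapse of drifting clouds"}},
        "5": {"class_type": "CLIPTextEncode",  # negative prompt
              "inputs": {"clip": ["1", 0], "text": ""}},
        "6": {"class_type": "EmptyMochiLatentVideo",
              "inputs": {"width": 848, "height": 480, "length": 25, "batch_size": 1}},
        "7": {"class_type": "KSampler",
              "inputs": {"model": ["2", 0], "positive": ["4", 0], "negative": ["5", 0],
                         "latent_image": ["6", 0], "seed": 0, "steps": 30, "cfg": 4.5,
                         "sampler_name": "euler", "scheduler": "linear_quadratic",
                         "denoise": 1.0}},
        "8": {"class_type": "VAEDecode",
              "inputs": {"samples": ["7", 0], "vae": ["3", 0]}},
    }
    # POST as {"prompt": prompt} to the running server's /prompt endpoint.
    print(json.dumps(prompt, indent=2))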
contentis
5a8a48931a
remove attention abstraction (#5324)
2024-10-22 14:02:38 -04:00
doctorpangloss
99f0fa8b50
Enable sage attention autodetection
2024-10-09 09:27:05 -07:00
doctorpangloss
388dad67d5
Fix pylint errors in attention
2024-10-09 09:26:02 -07:00
doctorpangloss
bbe2ed330c
Memory management and compilation improvements
...
- Experimental support for sage attention on Linux
- Diffusers loader now supports model indices
- Transformers model management now aligns with updates to ComfyUI
- Flux layers correctly use unbind
- Add float8 support for model loading in more places
- Experimental quantization approaches from Quanto and torchao
- Model upscaling interacts with memory management better
This update also disables ROCm testing because it isn't reliable enough
on consumer hardware. ROCm is not really supported on the 7600.
2024-10-09 09:13:47 -07:00
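A minimal sketch of the sage attention autodetection mentioned above, not the repository's implementation: prefer the optional sageattention package when it imports cleanly, otherwise fall back to PyTorch's built-in SDPA. The exact sageattn signature varies between sageattention releases, so treat the call as an assumption.

    import torch
    import torch.nn.functional as F

    try:
        from sageattention import sageattn  # optional dependency
        _HAS_SAGE = True
    except ImportError:
        _HAS_SAGE = False

    def attention(q, k, v, is_causal=False):
        # q, k, v: (batch, heads, tokens, head_dim)
        if _HAS_SAGE:
            return sageattn(q, k, v, is_causal=is_causal)
        return F.scaled_dot_product_attention(q, k, v, is_causal=is_causal)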
doctorpangloss
db423f8013
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-09-05 09:23:00 -07:00
Jedrzej Kosinski
f04229b84d
Add emb_patch support to UNetModel forward (#4779)
2024-09-04 14:35:15 -04:00
doctorpangloss
3f88282b6a
Fix absolute imports
2024-08-29 18:38:58 -07:00
doctorpangloss
fd503d8a96
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-08-29 16:37:30 -07:00
comfyanonymous
d31e226650
Unify RMSNorm code.
2024-08-28 16:56:38 -04:00
doctorpangloss
8284ea2fca
WIP merge
2024-08-16 14:25:06 -07:00
comfyanonymous
33fb282d5c
Fix issue.
2024-08-14 02:51:47 -04:00
doctorpangloss
0549f35e85
Merge commit '39fb74c5bd13a1dccf4d7293a2f7a755d9f43cbd' of github.com:comfyanonymous/ComfyUI
...
- Improvements to tests
- Fixes model management
- Fixes issues with language nodes
2024-08-13 20:08:56 -07:00
doctorpangloss
39c6335331
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-08-05 16:13:20 -07:00
comfyanonymous
3b71f84b50
ONNX tracing fixes.
2024-08-04 15:45:43 -04:00
doctorpangloss
0a1ae64b0b
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-08-01 16:19:11 -07:00
doctorpangloss
34522e0914
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-07-30 11:11:45 -07:00
comfyanonymous
25853d0be8
Use common function for casting weights to input.
2024-07-30 10:49:14 -04:00
comfyanonymous
79040635da
Remove unnecessary code.
2024-07-30 05:01:34 -04:00
comfyanonymous
66d35c07ce
Reduce artifacts on hydit, auraflow and SD3 at specific resolutions.
...
This breaks seeds for resolutions whose pixel dimensions are not a
multiple of 16 by using circular padding instead of reflection padding,
but it should reduce the artifacts when doing img2img at those
resolutions.
2024-07-29 20:48:50 -04:00
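A toy illustration of the padding-mode difference the commit describes; the shapes and pad widths below are made up and not taken from the hydit/auraflow/SD3 code.

    import torch
    import torch.nn.functional as F

    x = torch.arange(1.0, 6.0).view(1, 1, 5)   # one row: 1 2 3 4 5
    # Reflection padding mirrors the border: ... 4 5 | 4 3 2
    print(F.pad(x, (0, 3), mode="reflect"))
    # Circular padding wraps around:         ... 4 5 | 1 2 3
    print(F.pad(x, (0, 3), mode="circular"))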
comfyanonymous
10b43ceea5
Remove duplicate code.
2024-07-24 01:12:59 -04:00
doctorpangloss
a13088ccec
Merge upstream
2024-07-04 11:58:55 -07:00
comfyanonymous
f8f7568d03
Basic SD3 controlnet implementation.
...
Still missing the node to properly use it.
2024-06-27 18:43:11 -04:00
doctorpangloss
8cdc246450
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-06-17 16:19:48 -07:00
comfyanonymous
bb1969cab7
Initial support for the stable audio open model.
2024-06-15 12:14:56 -04:00
comfyanonymous
1281f933c1
Small optimization.
2024-06-15 02:44:38 -04:00
Max Tretikov
bd59bae606
Fix compile_core in comfy.ldm.modules.diffusionmodules.mmdit
2024-06-14 14:43:55 -06:00
Max Tretikov
8b091f02de
Add xformers.ops imports
2024-06-14 14:09:46 -06:00
Max Tretikov
6c53388619
Fix xformers import statements in comfy.ldm.modules.attention
2024-06-14 11:21:08 -06:00
Max Tretikov
05f4c2a5bc
Fix no-member errors in comfy.ldm.modules.ema
2024-06-14 09:01:38 -06:00
Max Tretikov
c69d4cae0a
Fix all undefined variables
2024-06-13 14:49:00 -06:00
doctorpangloss
cac6690481
Add known SD3 model files, merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-06-12 10:56:41 -07:00
comfyanonymous
605e64f6d3
Fix lowvram issue.
2024-06-12 10:39:33 -04:00
Dango233
73ce178021
Remove redundancy in mmdit.py (#3685)
2024-06-11 06:30:25 -04:00
doctorpangloss
a5d828be77
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-06-10 13:21:36 -07:00
comfyanonymous
8c4a9befa7
SD3 Support.
2024-06-10 14:06:23 -04:00
doctorpangloss
cb557c960b
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-05-31 07:42:11 -07:00
comfyanonymous
0920e0e5fe
Remove some unused imports.
2024-05-27 19:08:27 -04:00
comfyanonymous
8508df2569
Work around black image bug on Mac 14.5 by forcing attention upcasting.
2024-05-21 16:56:33 -04:00
doctorpangloss
b241ecc56d
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-05-21 11:38:24 -07:00
comfyanonymous
83d969e397
Disable xformers when tracing model.
2024-05-21 13:55:49 -04:00
doctorpangloss
f69b6225c0
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-05-20 12:06:35 -07:00
comfyanonymous
1900e5119f
Fix potential issue.
2024-05-20 08:19:54 -04:00
comfyanonymous
0bdc2b15c7
Cleanup.
2024-05-18 10:11:44 -04:00
comfyanonymous
98f828fad9
Remove unnecessary code.
2024-05-18 09:36:44 -04:00
doctorpangloss
3d98440fb7
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-05-16 14:28:49 -07:00
comfyanonymous
46daf0a9a7
Add debug options to force on and off attention upcasting.
2024-05-16 04:09:41 -04:00
comfyanonymous
ec6f16adb6
Fix SAG.
2024-05-14 18:02:27 -04:00
comfyanonymous
bb4940d837
Only enable attention upcasting on models that actually need it.
2024-05-14 17:00:50 -04:00
comfyanonymous
b0ab31d06c
Refactor attention upcasting code part 1.
2024-05-14 12:47:31 -04:00
doctorpangloss
330ecb10b2
Merge with upstream. Remove TLS flags, because a third party proxy will do this better
2024-05-02 21:57:20 -07:00
comfyanonymous
2aed53c4ac
Workaround xformers bug.
2024-04-30 21:23:40 -04:00
doctorpangloss
93cdef65a4
Merge upstream
2024-03-12 09:49:47 -07:00
comfyanonymous
2a813c3b09
Switch some more prints to logging.
2024-03-11 16:34:58 -04:00
doctorpangloss
915f2da874
Merge upstream
2024-02-29 20:48:27 -08:00
comfyanonymous
cb7c3a2921
Allow image_only_indicator to be None.
2024-02-29 13:11:30 -05:00
comfyanonymous
b3e97fc714
Koala 700M and 1B support.
...
To use them, load the unet files with the UNET Loader node.
2024-02-28 12:10:11 -05:00
doctorpangloss
7520691021
Merge with master
2024-02-19 10:55:22 -08:00
comfyanonymous
6bcf57ff10
Fix attention masks properly for multiple batches.
2024-02-17 16:15:18 -05:00
comfyanonymous
f8706546f3
Fix attention mask batch size in some attention functions.
2024-02-17 15:22:21 -05:00
comfyanonymous
3b9969c1c5
Properly fix attention masks in CLIP with batches.
2024-02-17 12:13:13 -05:00
doctorpangloss
54d419d855
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-02-08 20:31:05 -08:00
comfyanonymous
c661a8b118
Don't use numpy for calculating sigmas.
2024-02-07 18:52:51 -05:00
doctorpangloss
82edb2ff0e
Merge with latest upstream.
2024-01-29 15:06:31 -08:00
comfyanonymous
89507f8adf
Remove some unused imports.
2024-01-25 23:42:37 -05:00
comfyanonymous
2395ae740a
Make unclip more deterministic.
...
Pass a seed argument; note that this might make old unclip images different.
2024-01-14 17:28:31 -05:00
comfyanonymous
6a7bc35db8
Use basic attention implementation for small inputs on old pytorch.
2024-01-09 13:46:52 -05:00
comfyanonymous
c6951548cf
Update optimized_attention_for_device function for new functions that support masked attention.
2024-01-07 13:52:08 -05:00
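A rough sketch of what a device-aware dispatcher that hands back a mask-capable attention function could look like; the real optimized_attention_for_device covers more implementations (split, sub quad, xformers), so the selection policy below is only a placeholder.

    import torch
    import torch.nn.functional as F

    def attention_basic(q, k, v, mask=None):
        # q, k, v: (batch, heads, tokens, head_dim); mask is an additive
        # float mask broadcastable to the attention score matrix.
        scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)
        if mask is not None:
            scores = scores + mask
        return scores.softmax(dim=-1) @ v

    def attention_pytorch(q, k, v, mask=None):
        return F.scaled_dot_product_attention(q, k, v, attn_mask=mask)

    def optimized_attention_for_device(device, mask=False, small_input=False):
        # Hypothetical policy: small inputs use the basic implementation,
        # everything else goes to PyTorch SDPA, which also accepts a mask.
        if small_input:
            return attention_basic
        return attention_pytorch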
comfyanonymous
aaa9017302
Add attention mask support to sub quad attention.
2024-01-07 04:13:58 -05:00
comfyanonymous
0c2c9fbdfa
Support attention mask in split attention.
2024-01-06 13:16:48 -05:00
comfyanonymous
3ad0191bfb
Implement attention mask on xformers.
2024-01-06 04:33:03 -05:00
doctorpangloss
369aeb598f
Merge upstream, fix 3.12 compatibility, fix nightlies issue, fix broken node
2024-01-03 16:00:36 -08:00
comfyanonymous
8c6493578b
Implement noise augmentation for SD 4X upscale model.
2024-01-03 14:27:11 -05:00
comfyanonymous
79f73a4b33
Remove useless code.
2024-01-02 01:50:29 -05:00
comfyanonymous
61b3f15f8f
Fix lowvram mode not working with unCLIP and Revision code.
2023-12-26 05:02:02 -05:00
comfyanonymous
d0165d819a
Fix SVD lowvram mode.
2023-12-24 07:13:18 -05:00
comfyanonymous
261bcbb0d9
A few missing comfy ops in the VAE.
2023-12-22 04:05:42 -05:00
comfyanonymous
a5056cfb1f
Remove useless code.
2023-12-15 01:28:16 -05:00
comfyanonymous
77755ab8db
Refactor comfy.ops
...
comfy.ops -> comfy.ops.disable_weight_init
This should make it more clear what they actually do.
Some unused code has also been removed.
2023-12-11 23:27:13 -05:00
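A condensed sketch of the pattern the new name points at, assuming the usual torch.nn layer types; the real comfy.ops classes add dtype and device handling on top of this.

    import torch

    class disable_weight_init:
        # Layer subclasses whose reset_parameters is a no-op, so building a
        # model skips random initialization; real weights arrive later from
        # the checkpoint's state dict.
        class Linear(torch.nn.Linear):
            def reset_parameters(self):
                return None

        class Conv2d(torch.nn.Conv2d):
            def reset_parameters(self):
                return None

    ops = disable_weight_init
    layer = ops.Linear(768, 768, bias=True)  # created without an init pass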
comfyanonymous
fbdb14d4c4
Cleaner CLIP text encoder implementation.
...
Use a simple CLIP model implementation instead of the one from
transformers.
This will allow some interesting things that would be too hackish to implement
using the transformers implementation.
2023-12-06 23:50:03 -05:00
comfyanonymous
1bbd65ab30
Missed this one.
2023-12-05 12:48:41 -05:00
comfyanonymous
31b0f6f3d8
UNET weights can now be stored in fp8.
...
--fp8_e4m3fn-unet and --fp8_e5m2-unet are the two different formats
supported by pytorch.
2023-12-04 11:10:00 -05:00
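A minimal sketch of what storing unet weights in fp8 amounts to, assuming a PyTorch build that ships the float8 dtypes (2.1 or newer); this only shows the dtype cast, not ComfyUI's actual loading path.

    import torch

    FP8_FORMATS = {
        "--fp8_e4m3fn-unet": torch.float8_e4m3fn,
        "--fp8_e5m2-unet": torch.float8_e5m2,
    }

    def cast_unet_state_dict(state_dict, fp8_dtype):
        # Store floating-point weights in fp8; leave everything else alone.
        # At compute time they are cast back up to a higher precision.
        return {k: v.to(fp8_dtype) if torch.is_floating_point(v) else v
                for k, v in state_dict.items()}

    sd = {"weight": torch.randn(4, 4, dtype=torch.float16)}
    sd_fp8 = cast_unet_state_dict(sd, FP8_FORMATS["--fp8_e4m3fn-unet"])
    print(sd_fp8["weight"].dtype)  # torch.float8_e4m3fn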
comfyanonymous
af365e4dd1
All the unet ops with weights are now handled by comfy.ops
2023-12-04 03:12:18 -05:00
Benjamin Berman
01312a55a4
merge upstream
2023-12-03 20:41:13 -08:00
comfyanonymous
39e75862b2
Fix regression from last commit.
2023-11-26 03:43:02 -05:00
comfyanonymous
50dc39d6ec
Clean up the extra_options dict for the transformer patches.
...
Now everything in transformer_options gets put in extra_options.
2023-11-26 03:13:56 -05:00
comfyanonymous
3e5ea74ad3
Make buggy xformers fall back on pytorch attention.
2023-11-24 03:55:35 -05:00
comfyanonymous
871cc20e13
Support SVD img2vid model.
2023-11-23 19:41:33 -05:00
comfyanonymous
72741105a6
Remove useless code.
2023-11-21 17:27:28 -05:00
comfyanonymous
7e3fe3ad28
Make deep shrink behave like it should.
2023-11-16 15:26:28 -05:00
comfyanonymous
7ea6bb038c
Print warning when controlnet can't be applied instead of crashing.
2023-11-16 12:57:12 -05:00
comfyanonymous
94cc718e9c
Add a way to add patches to the input block.
2023-11-14 00:08:12 -05:00
comfyanonymous
794dd2064d
Fix typo.
2023-11-07 23:41:55 -05:00
comfyanonymous
a527d0c795
Code refactor.
2023-11-07 19:33:40 -05:00
comfyanonymous
2a23ba0b8c
Fix unet ops not entirely on GPU.
2023-11-07 04:30:37 -05:00
comfyanonymous
c837a173fa
Fix some memory issues in sub quad attention.
2023-10-30 15:30:49 -04:00