a5d828be77 | doctorpangloss | 2024-06-10 13:21:36 -07:00 | Merge branch 'master' of github.com:comfyanonymous/ComfyUI
8c4a9befa7 | comfyanonymous | 2024-06-10 14:06:23 -04:00 | SD3 Support.
f69b6225c0 | doctorpangloss | 2024-05-20 12:06:35 -07:00 | Merge branch 'master' of github.com:comfyanonymous/ComfyUI
0bdc2b15c7 | comfyanonymous | 2024-05-18 10:11:44 -04:00 | Cleanup.
98f828fad9 | comfyanonymous | 2024-05-18 09:36:44 -04:00 | Remove unnecessary code.
3d98440fb7 | doctorpangloss | 2024-05-16 14:28:49 -07:00 | Merge branch 'master' of github.com:comfyanonymous/ComfyUI
bb4940d837 | comfyanonymous | 2024-05-14 17:00:50 -04:00 | Only enable attention upcasting on models that actually need it.
93cdef65a4 | doctorpangloss | 2024-03-12 09:49:47 -07:00 | Merge upstream
2a813c3b09 | comfyanonymous | 2024-03-11 16:34:58 -04:00 | Switch some more prints to logging.
915f2da874 | doctorpangloss | 2024-02-29 20:48:27 -08:00 | Merge upstream
cb7c3a2921 | comfyanonymous | 2024-02-29 13:11:30 -05:00 | Allow image_only_indicator to be None.
b3e97fc714 | comfyanonymous | 2024-02-28 12:10:11 -05:00 | Koala 700M and 1B support. Use the UNET Loader node to load the unet file to use them.
54d419d855 | doctorpangloss | 2024-02-08 20:31:05 -08:00 | Merge branch 'master' of github.com:comfyanonymous/ComfyUI
c661a8b118 | comfyanonymous | 2024-02-07 18:52:51 -05:00 | Don't use numpy for calculating sigmas.
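Dropping numpy here plausibly keeps the sigma schedule in torch dtypes end to end, avoiding numpy round-trips and implicit float64 conversions. A minimal pure-torch sketch of a Karras schedule (parameter values and the function name are illustrative, not this commit's actual code):

```python
import torch

def karras_sigmas(n: int, sigma_min: float = 0.0292, sigma_max: float = 14.61,
                  rho: float = 7.0) -> torch.Tensor:
    # Build the noise schedule entirely in torch: no numpy arrays and no
    # implicit float64 -> float32 conversion on the way back.
    ramp = torch.linspace(0, 1, n)
    min_inv_rho = sigma_min ** (1 / rho)
    max_inv_rho = sigma_max ** (1 / rho)
    return (max_inv_rho + ramp * (min_inv_rho - max_inv_rho)) ** rho
```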
82edb2ff0e | doctorpangloss | 2024-01-29 15:06:31 -08:00 | Merge with latest upstream.
89507f8adf | comfyanonymous | 2024-01-25 23:42:37 -05:00 | Remove some unused imports.
369aeb598f | doctorpangloss | 2024-01-03 16:00:36 -08:00 | Merge upstream, fix 3.12 compatibility, fix nightlies issue, fix broken node
8c6493578b | comfyanonymous | 2024-01-03 14:27:11 -05:00 | Implement noise augmentation for SD 4X upscale model.
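For the SD 4X upscaler, noise augmentation generally means mixing Gaussian noise into the low-resolution conditioning image at a chosen strength and conditioning the model on that strength. A hedged sketch of the idea only (names and signature are assumptions, not the commit's code):

```python
import torch

def noise_augment(cond_image: torch.Tensor, augmentation_level: float):
    # Mix Gaussian noise into the low-res conditioning image; the strength is
    # returned too, so the model can be conditioned on how noisy its input is.
    noisy = cond_image + augmentation_level * torch.randn_like(cond_image)
    return noisy, augmentation_level
```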
79f73a4b33 | comfyanonymous | 2024-01-02 01:50:29 -05:00 | Remove useless code.
61b3f15f8f | comfyanonymous | 2023-12-26 05:02:02 -05:00 | Fix lowvram mode not working with unCLIP and Revision code.
d0165d819a | comfyanonymous | 2023-12-24 07:13:18 -05:00 | Fix SVD lowvram mode.
261bcbb0d9 | comfyanonymous | 2023-12-22 04:05:42 -05:00 | A few missing comfy ops in the VAE.
77755ab8db | comfyanonymous | 2023-12-11 23:27:13 -05:00 | Refactor comfy.ops: comfy.ops -> comfy.ops.disable_weight_init. This should make it more clear what they actually do. Some unused code has also been removed.
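The new name describes the trick: these op subclasses skip weight initialization, so building a model is cheap when a checkpoint load is about to overwrite the weights anyway. A minimal sketch of the pattern, reduced to two ops for illustration:

```python
import torch

class disable_weight_init:
    # Layer variants whose reset_parameters() is a no-op: constructing the
    # model skips the costly random init that loading weights would discard.
    class Linear(torch.nn.Linear):
        def reset_parameters(self):
            return None

    class Conv2d(torch.nn.Conv2d):
        def reset_parameters(self):
            return None

# Usage: take layers from the ops namespace instead of torch.nn directly.
operations = disable_weight_init
layer = operations.Linear(320, 320)
```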
31b0f6f3d8 | comfyanonymous | 2023-12-04 11:10:00 -05:00 | UNET weights can now be stored in fp8. --fp8_e4m3fn-unet and --fp8_e5m2-unet are the two different formats supported by pytorch.
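The two flags map to PyTorch's two fp8 dtypes, torch.float8_e4m3fn and torch.float8_e5m2. Since few kernels can matmul fp8 directly, storing weights in fp8 usually means upcasting to the compute dtype at call time; the wrapper below is an illustrative sketch of that pattern, not ComfyUI's implementation:

```python
import torch

class Fp8Linear(torch.nn.Module):
    # Illustrative wrapper: hold the weight in fp8 to halve memory versus
    # fp16, and upcast per call because most backends can't matmul fp8.
    def __init__(self, linear: torch.nn.Linear, dtype=torch.float8_e4m3fn):
        super().__init__()
        self.weight = torch.nn.Parameter(linear.weight.data.to(dtype),
                                         requires_grad=False)
        self.bias = linear.bias

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.nn.functional.linear(x, self.weight.to(x.dtype), self.bias)

layer = Fp8Linear(torch.nn.Linear(320, 320))
print(layer(torch.randn(1, 320)).shape)  # torch.Size([1, 320])
```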
af365e4dd1 | comfyanonymous | 2023-12-04 03:12:18 -05:00 | All the unet ops with weights are now handled by comfy.ops
01312a55a4 | Benjamin Berman | 2023-12-03 20:41:13 -08:00 | merge upstream
50dc39d6ec | comfyanonymous | 2023-11-26 03:13:56 -05:00 | Clean up the extra_options dict for the transformer patches. Now everything in transformer_options gets put in extra_options.
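The cleanup implies patches now receive one merged dict rather than a hand-picked subset of keys. A sketch of that shape (the per-call field names are hypothetical):

```python
def build_extra_options(transformer_options: dict, block, block_index: int) -> dict:
    # Start from everything the caller put in transformer_options, then layer
    # the per-call fields on top; patches see a single flat dict.
    extra_options = transformer_options.copy()
    extra_options["block"] = block              # hypothetical per-call field
    extra_options["block_index"] = block_index  # hypothetical per-call field
    return extra_options
```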
871cc20e13 | comfyanonymous | 2023-11-23 19:41:33 -05:00 | Support SVD img2vid model.
72741105a6 | comfyanonymous | 2023-11-21 17:27:28 -05:00 | Remove useless code.
7e3fe3ad28 | comfyanonymous | 2023-11-16 15:26:28 -05:00 | Make deep shrink behave like it should.
7ea6bb038c | comfyanonymous | 2023-11-16 12:57:12 -05:00 | Print warning when controlnet can't be applied instead of crashing.
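This is the standard warn-and-continue pattern: catch the failure, log it, and keep the run alive without the controlnet. A sketch under assumed names (the apply call is hypothetical):

```python
import logging

def try_apply_controlnet(controlnet, model, cond):
    try:
        return controlnet.apply(model, cond)  # hypothetical API
    except Exception as exc:
        # Degrade gracefully: the workflow keeps running, just without
        # the controlnet, and the user sees why.
        logging.warning("ControlNet could not be applied, skipping it: %s", exc)
        return model
```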
94cc718e9c | comfyanonymous | 2023-11-14 00:08:12 -05:00 | Add a way to add patches to the input block.
794dd2064d | comfyanonymous | 2023-11-07 23:41:55 -05:00 | Fix typo.
a527d0c795 | comfyanonymous | 2023-11-07 19:33:40 -05:00 | Code refactor.
2a23ba0b8c | comfyanonymous | 2023-11-07 04:30:37 -05:00 | Fix unet ops not entirely on GPU.
6ec3f12c6e | comfyanonymous | 2023-10-27 14:45:15 -04:00 | Support SSD1B model and make it easier to support asymmetric unets.
04d0ecd0d4 | doctorpangloss | 2023-10-17 17:49:31 -07:00 | merge upstream
d21655b5a2 | Benjamin Berman | 2023-10-17 14:47:59 -07:00 | merge upstream
d44a2de49f | comfyanonymous | 2023-10-17 15:18:51 -04:00 | Make VAE code closer to sgm.
23680a9155 | comfyanonymous | 2023-10-17 03:19:29 -04:00 | Refactor the attention stuff in the VAE.
9a55dadb4c | comfyanonymous | 2023-10-13 14:41:17 -04:00 | Refactor code so model can be a dtype other than fp32 or fp16.
88733c997f | comfyanonymous | 2023-10-11 21:30:57 -04:00 | pytorch_attention_enabled can now return True when xformers is enabled.
1a4bd9e9a6 | comfyanonymous | 2023-10-11 20:38:48 -04:00 | Refactor the attention functions. There's no reason for the whole CrossAttention object to be repeated when only the operation in the middle changes.
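The message states the rationale: the q/k/v projections are identical across backends, so only the core attention computation should be swappable. A condensed sketch of that factoring (illustrative, not the actual ComfyUI code):

```python
import torch
import torch.nn.functional as F

def attention_pytorch(q, k, v):
    # One interchangeable "middle" operation; xformers, sliced, and split
    # variants would share this exact signature.
    return F.scaled_dot_product_attention(q, k, v)

class CrossAttention(torch.nn.Module):
    # The projections stay fixed; only attn_op varies per backend.
    def __init__(self, dim: int, heads: int = 8, attn_op=attention_pytorch):
        super().__init__()
        self.heads, self.attn_op = heads, attn_op
        self.to_q = torch.nn.Linear(dim, dim, bias=False)
        self.to_k = torch.nn.Linear(dim, dim, bias=False)
        self.to_v = torch.nn.Linear(dim, dim, bias=False)
        self.to_out = torch.nn.Linear(dim, dim)

    def forward(self, x, context=None):
        context = x if context is None else context
        b, n, d = x.shape
        split = lambda t: t.view(b, -1, self.heads, d // self.heads).transpose(1, 2)
        out = self.attn_op(split(self.to_q(x)), split(self.to_k(context)),
                           split(self.to_v(context)))
        return self.to_out(out.transpose(1, 2).reshape(b, n, d))
```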
e8b60dfc6e | doctorpangloss | 2023-10-06 15:02:31 -07:00 | merge upstream
afa2399f79 | comfyanonymous | 2023-09-22 20:26:47 -04:00 | Add a way to set output block patches to modify the h and hsp.
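An output block patch gets to rewrite both the block's hidden state h and the incoming skip connection hsp before the UNet merges them. A sketch of the dispatch loop (the key name mirrors how ComfyUI groups patches under transformer_options, but the details here are illustrative):

```python
def apply_output_block_patches(h, hsp, transformer_options: dict):
    # Give every registered patch a chance to modify the hidden state and
    # the skip connection before they are concatenated.
    patches = transformer_options.get("patches", {})
    for patch in patches.get("output_block_patch", []):
        h, hsp = patch(h, hsp, transformer_options)
    return h, hsp
```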
1938f5c5fe | comfyanonymous | 2023-09-04 00:58:18 -04:00 | Add a force argument to soft_empty_cache to force a cache empty.
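A force flag lets callers distinguish "empty the cache no matter what" from the default conditional path, since emptying the CUDA allocator cache stalls the GPU and is usually skipped. A hedged sketch only; the condition guarding the non-forced path is an assumption:

```python
import torch

ALLOW_SOFT_EMPTY = True  # illustrative policy knob, not ComfyUI's actual check

def soft_empty_cache(force: bool = False):
    # Emptying the allocator cache is expensive, so only do it when the
    # caller insists (force=True) or the default policy allows it.
    if torch.cuda.is_available() and (force or ALLOW_SOFT_EMPTY):
        torch.cuda.empty_cache()
        torch.cuda.ipc_collect()
```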
db673f7728 | doctorpangloss | 2023-08-29 13:36:53 -07:00 | merge upstream
bed116a1f9 | comfyanonymous | 2023-08-29 11:21:36 -04:00 | Remove optimization that caused border.
1c794a2161 | comfyanonymous | 2023-08-27 22:24:42 -04:00 | Fallback to slice attention if xformers doesn't support the operation.
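The fallback shape: attempt the xformers kernel, and when it is absent or rejects the given shapes/dtypes, recompute with query-sliced attention, which works everywhere at the cost of speed. A self-contained sketch (the exact exceptions xformers raises vary, so the handler below is simplified):

```python
import torch

def sliced_attention(q, k, v, slice_size: int = 1024):
    # softmax(q @ k^T / sqrt(d)) @ v computed in query chunks to cap peak memory.
    scale = q.shape[-1] ** -0.5
    out = torch.empty_like(q)
    for i in range(0, q.shape[-2], slice_size):
        attn = torch.softmax(
            q[..., i:i + slice_size, :] @ k.transpose(-1, -2) * scale, dim=-1)
        out[..., i:i + slice_size, :] = attn @ v
    return out

def attention(q, k, v):
    try:
        import xformers.ops
        return xformers.ops.memory_efficient_attention(q, k, v)
    except (ImportError, NotImplementedError, ValueError):
        # xformers missing or unsupported inputs: take the slow, safe path.
        return sliced_attention(q, k, v)
```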
d935ba50c4 | comfyanonymous | 2023-08-27 21:33:53 -04:00 | Make --bf16-vae work on torch 2.0