comfyanonymous
733ff9e5ab
Implement GLora.
2023-12-09 18:15:26 -05:00
comfyanonymous
4d9a8c68bd
Make lora code a bit cleaner.
2023-12-09 14:15:09 -05:00
comfyanonymous
242f0a57d9
Use own clip vision model implementation.
2023-12-09 11:56:31 -05:00
comfyanonymous
f54c2bf6fb
Cleanup.
2023-12-08 16:02:08 -05:00
comfyanonymous
d36e559dcf
Add linear_start and linear_end to model_config.sampling_settings
2023-12-08 02:49:30 -05:00
comfyanonymous
63349484b8
Make --gpu-only put intermediate values in GPU memory instead of CPU memory.
2023-12-08 02:35:45 -05:00
comfyanonymous
f75e370acd
Support attention masking in CLIP implementation.
2023-12-07 02:51:02 -05:00
comfyanonymous
782cf4f295
Cleaner CLIP text encoder implementation.
...
Use a simple CLIP model implementation instead of the one from
transformers.
This will allow some interesting things that would be too hackish to implement
using the transformers implementation.
2023-12-06 23:50:03 -05:00
comfyanonymous
1f1ef695bb
Slightly faster lora applying.
2023-12-06 05:13:14 -05:00
comfyanonymous
39bb52dc2c
Missed this one.
2023-12-05 12:48:41 -05:00
comfyanonymous
2620e3bc55
Fix memory issue with control loras.
2023-12-04 21:55:19 -05:00
comfyanonymous
e034f5709f
Fix control lora on fp8.
2023-12-04 13:47:41 -05:00
comfyanonymous
9002e58ce5
Less useless downcasting.
2023-12-04 12:53:46 -05:00
comfyanonymous
dfa7737afb
Use .itemsize to get dtype size for fp8.
2023-12-04 11:52:06 -05:00
comfyanonymous
080f5f4e84
UNET weights can now be stored in fp8.
...
--fp8_e4m3fn-unet and --fp8_e5m2-unet are the two different formats
supported by pytorch.
2023-12-04 11:10:00 -05:00
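The two flags named in the commit could be wired up roughly like this (an illustrative argparse sketch; only the flag names come from the commit message, the surrounding parser is hypothetical and not the actual ComfyUI argument handling):

```python
import argparse

parser = argparse.ArgumentParser()
# Flag names are taken from the commit message; everything else here is
# a hypothetical sketch, not the real ComfyUI parser.
group = parser.add_mutually_exclusive_group()
group.add_argument("--fp8_e4m3fn-unet", action="store_true",
                   help="Store UNET weights in fp8 e4m3fn format.")
group.add_argument("--fp8_e5m2-unet", action="store_true",
                   help="Store UNET weights in fp8 e5m2 format.")

args = parser.parse_args(["--fp8_e4m3fn-unet"])
print(args.fp8_e4m3fn_unet)  # → True
```

argparse converts the dashes in the flag names to underscores for the attribute names, so `--fp8_e4m3fn-unet` is read back as `args.fp8_e4m3fn_unet`.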
comfyanonymous
03e65605de
All the unet ops with weights are now handled by comfy.ops
2023-12-04 03:12:18 -05:00
comfyanonymous
31feea98b0
A different way of handling multiple images passed to SVD.
...
Previously when a list of 3 images [0, 1, 2] was used for a 6 frame video
they were concatenated like this:
[0, 1, 2, 0, 1, 2]
now they are concatenated like this:
[0, 0, 1, 1, 2, 2]
2023-12-03 03:31:47 -05:00
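The new ordering can be sketched in plain Python (an illustrative sketch, not the actual SVD code; `spread_images` is a hypothetical helper):

```python
def spread_images(images, num_frames):
    # Hypothetical helper illustrating the commit: each conditioning
    # image is now repeated in place rather than the list being cycled.
    per_image = num_frames // len(images)
    out = []
    for img in images:
        out.extend([img] * per_image)
    return out

# Old behaviour cycled the list: [0, 1, 2, 0, 1, 2]
print(spread_images([0, 1, 2], 6))  # → [0, 0, 1, 1, 2, 2]
```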
comfyanonymous
3fa2174377
Support SD2.1 turbo checkpoint.
2023-11-30 19:27:03 -05:00
comfyanonymous
dc15d6da73
Use smart model management for VAE to decrease latency.
2023-11-28 04:58:51 -05:00
comfyanonymous
5b12f320a8
Add a function to load a unet from a state dict.
2023-11-27 17:41:29 -05:00
comfyanonymous
c675a1ba9f
.sigma and .timestep now return tensors on the same device as the input.
2023-11-27 16:41:33 -05:00
comfyanonymous
dcd91ab458
Try to free memory for both cond+uncond before inference.
2023-11-27 14:55:40 -05:00
comfyanonymous
3253675a9c
Tweak memory inference calculations a bit.
2023-11-27 14:04:16 -05:00
comfyanonymous
60f4b792c8
Fix regression from last commit.
2023-11-26 03:43:02 -05:00
comfyanonymous
e4900be7f3
Clean up the extra_options dict for the transformer patches.
...
Now everything in transformer_options gets put in extra_options.
2023-11-26 03:13:56 -05:00
comfyanonymous
ac77dc020e
Fix importing diffusers unets.
2023-11-24 20:35:29 -05:00
comfyanonymous
eb17325740
Make buggy xformers fall back on pytorch attention.
2023-11-24 03:55:35 -05:00
comfyanonymous
5e4d60a231
Support SVD img2vid model.
2023-11-23 19:41:33 -05:00
comfyanonymous
d752c5f236
Make VAE memory estimation take dtype into account.
2023-11-22 18:17:19 -05:00
comfyanonymous
08b14daf24
Add sampling_settings so models can specify specific sampling settings.
2023-11-22 17:24:00 -05:00
comfyanonymous
2f876a465b
Allow controlling downscale and upscale methods in PatchModelAddDownscale.
2023-11-22 03:23:16 -05:00
comfyanonymous
cd49281bf8
Remove useless code.
2023-11-21 17:27:28 -05:00
comfyanonymous
212bb0e108
Allow model config to preprocess the vae state dict on load.
2023-11-21 16:29:18 -05:00
comfyanonymous
eeceef948d
Add taesd and taesdxl to VAELoader node.
...
They will show up if both the taesd_encoder and taesd_decoder model files
(or their taesdxl equivalents) are present in the models/vae_approx directory.
2023-11-21 12:54:19 -05:00
comfyanonymous
613c21b071
Make it easy for models to process the unet state dict on load.
2023-11-20 23:17:53 -05:00
comfyanonymous
476120a5a8
percent_to_sigma now returns a float instead of a tensor.
2023-11-18 23:20:29 -05:00
comfyanonymous
c013b8e94c
Add some command line arguments to store text encoder weights in fp8.
...
Pytorch supports two variants of fp8:
--fp8_e4m3fn-text-enc (the one that seems to give better results)
--fp8_e5m2-text-enc
2023-11-17 02:56:59 -05:00
comfyanonymous
848ff2f09d
Add support for loading SSD1B diffusers unet version.
...
Improve diffusers model detection.
2023-11-16 23:12:55 -05:00
comfyanonymous
7676c8f41a
Make deep shrink behave like it should.
2023-11-16 15:26:28 -05:00
comfyanonymous
3d07161f42
Fix potential issues.
2023-11-16 14:59:54 -05:00
comfyanonymous
87d2cb7b0b
Print warning when controlnet can't be applied instead of crashing.
2023-11-16 12:57:12 -05:00
comfyanonymous
5514c4b8ce
Invert the start and end percentages in the code.
...
This doesn't affect how percentages behave in the frontend but breaks
things if you relied on them in the backend.
percent_to_sigma goes from 0 to 1.0 instead of 1.0 to 0 for less confusion.
Make percent 0 return an extremely large sigma and percent 1.0 return a
sigma of zero to fix imprecision.
2023-11-16 04:23:44 -05:00
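The endpoint behaviour described above can be sketched as follows (a hedged illustration only; the large-sigma constant and the interior interpolation are placeholders, not the real sampling schedule):

```python
def percent_to_sigma(percent, sigma_max=1e9):
    # Sketch of the commit's contract: percent now runs 0.0 -> 1.0 over
    # sampling. The endpoints are pinned so percent 0 yields an
    # extremely large sigma and percent 1.0 yields exactly zero,
    # avoiding floating-point imprecision at the boundaries.
    if percent <= 0.0:
        return sigma_max          # placeholder "extremely large" sigma
    if percent >= 1.0:
        return 0.0
    # The real code maps percent through the model's sampling schedule;
    # a linear ramp stands in here purely for illustration.
    return sigma_max * (1.0 - percent)
```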
comfyanonymous
e5eeb33ac4
heunpp2 sampler.
2023-11-14 23:50:55 -05:00
comfyanonymous
35b7304ac5
Fix last pr.
2023-11-14 14:41:31 -05:00
comfyanonymous
6d974664f0
Merge branch 'master' of https://github.com/Jannchie/ComfyUI
2023-11-14 14:38:07 -05:00
comfyanonymous
95a0f5b46b
Make bislerp work on GPU.
2023-11-14 11:38:36 -05:00
comfyanonymous
702395d4ad
Clean up and refactor sampler code.
...
This should make it much easier to write custom nodes with kdiffusion type
samplers.
2023-11-14 00:39:34 -05:00
Jianqi Pan
58a23fc106
fix: adaptation to older versions of pytorch
2023-11-14 14:32:05 +09:00
comfyanonymous
f6333be536
Add a way to add patches to the input block.
2023-11-14 00:08:12 -05:00
comfyanonymous
9f546f0cb3
Disable xformers when it can't load properly.
2023-11-13 12:31:10 -05:00