comfyanonymous
1725369340
Implement modelspec metadata in CheckpointSave for SDXL and refiner.
2023-07-25 22:02:34 -04:00
comfyanonymous
60b9e237cf
Use bigger tiles when upscaling with a model and fall back on OOM.
2023-07-24 19:47:32 -04:00
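A hedged sketch of the fallback pattern this commit describes: try a large tile size first and retry with smaller tiles when CUDA runs out of memory. The upscale_fn callable and the tile sizes below are illustrative assumptions, not ComfyUI's actual code.

    import torch

    def tiled_upscale_with_fallback(upscale_fn, image, tile_sizes=(512, 256, 128), overlap=32):
        # upscale_fn(image, tile, overlap) is a hypothetical callable that runs the
        # actual tiled model upscale; only the retry-on-OOM pattern is shown here.
        for tile in tile_sizes:
            try:
                return upscale_fn(image, tile, overlap)
            except torch.cuda.OutOfMemoryError:
                torch.cuda.empty_cache()  # free partially allocated tiles before retrying
        raise RuntimeError("upscale failed even at the smallest tile size")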
comfyanonymous
daac253452
Add a node to merge CLIP models.
2023-07-14 02:41:18 -04:00
comfyanonymous
a85b525a8a
Add a canny preprocessor node.
2023-07-13 13:26:48 -04:00
comfyanonymous
c5779f04aa
Fix merging not working when model2 of the model merge node was itself a merge.
2023-07-08 22:31:10 -04:00
comfyanonymous
83818185f0
CLIPTextEncodeSDXL now works when prompts are of very different sizes.
2023-07-06 23:23:54 -04:00
comfyanonymous
35ef1e4992
The model merge blocks node now uses the longest match.
2023-07-04 00:51:17 -04:00
comfyanonymous
1a9658ee64
Add text encode nodes to control the extra parameters in SDXL.
2023-07-03 19:11:36 -04:00
comfyanonymous
351fc92d3f
Move model merging nodes to advanced and add them to the readme.
2023-06-30 15:21:55 -04:00
comfyanonymous
95008c22cd
Add CheckpointSave node to save checkpoints.
...
The created checkpoints contain workflow metadata that can be loaded by
dragging them on top of the UI or loading them with the "Load" button.
Checkpoints will be saved in fp16 or fp32 depending on the format ComfyUI
is using for inference on your hardware. To force fp32 use: --force-fp32
Anything that patches the model weights, such as merging or LoRAs, will be saved.
The output directory is currently set to output/checkpoints, but that might
change in the future.
2023-06-26 12:22:27 -04:00
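A minimal sketch of the idea behind CheckpointSave as described above: write the already patched weights to a safetensors file with the workflow JSON embedded as string metadata, so the file can later be dragged back onto the UI. The function and metadata key names are illustrative, not ComfyUI's internals.

    import json
    from safetensors.torch import save_file

    def save_checkpoint_with_workflow(state_dict, workflow, path):
        # safetensors metadata must map str -> str, so the workflow is serialized to JSON
        metadata = {"workflow": json.dumps(workflow)}
        save_file(state_dict, path, metadata=metadata)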
comfyanonymous
0db33017af
Add some more transformer hooks and move tomesd to comfy_extras.
...
Tomesd now uses q instead of x to decide which tokens to merge because
it seems to give better results.
2023-06-24 03:30:22 -04:00
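An illustrative sketch of the change mentioned above, scoring token similarity on the attention queries q rather than the raw hidden states x when choosing merge candidates; this is not the actual tomesd matching code.

    import torch

    def pick_merge_candidates(q, r):
        # q: [batch, tokens, dim] attention queries; r: number of tokens to merge
        q = torch.nn.functional.normalize(q, dim=-1)
        sim = q @ q.transpose(-1, -2)                        # cosine similarity between tokens
        sim.diagonal(dim1=-2, dim2=-1).fill_(-float("inf"))  # ignore self-similarity
        scores, partners = sim.max(dim=-1)                   # best partner for every token
        merge_idx = scores.topk(r, dim=-1).indices           # the r most redundant tokens
        return merge_idx, partners.gather(-1, merge_idx)     # tokens to merge and their targets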
comfyanonymous
cd8d0b73c5
Fix an issue with the text encoder LoRA caused by the last commits.
2023-06-20 19:44:39 -04:00
comfyanonymous
ae0bbb2264
Add some nodes for basic model merging.
2023-06-20 19:17:03 -04:00
comfyanonymous
873b08bd0f
Add a way to set patches that modify the attn2 output.
...
Change the transformer patches function format to be more future-proof.
2023-06-18 22:58:22 -04:00
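A hedged sketch of what an attn2 output patch could look like under the format change above; the registration call and the extra_options argument name are assumptions, not a confirmed API.

    def damp_attn2_output(out, extra_options):
        # receives the cross-attention (attn2) output tensor and must return a tensor
        return out * 0.9  # e.g. slightly reduce the cross-attention contribution

    # model.set_model_attn2_output_patch(damp_attn2_output)  # hypothetical registration call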
comfyanonymous
e34e147259
Round the mask values for bitwise operations.
2023-05-28 00:42:53 -04:00
space-nuko
712183e44e
Bitwise operations for masks
2023-05-27 21:48:49 -05:00
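A small sketch of why the rounding in the commit above matters: float masks coming out of resampling can hold values like 0.9999, which a plain integer cast would truncate to 0 before the bitwise op.

    import torch

    def mask_and(mask_a, mask_b):
        a = torch.round(mask_a).to(torch.uint8)  # round first so 0.9999 becomes 1, not 0
        b = torch.round(mask_b).to(torch.uint8)
        return torch.bitwise_and(a, b).to(mask_a.dtype)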
comfyanonymous
e94d69817b
Pull in latest upscale model code from chainner.
2023-05-23 22:26:50 -04:00
comfyanonymous
043e28bdc9
Fix padding in Blur.
2023-05-20 10:08:47 -04:00
BlenderNeko
389caf813c
improve sharpen and blur nodes
2023-05-20 15:23:28 +02:00
comfyanonymous
fbfda381d7
Enable safe loading for upscale models.
2023-05-14 15:10:40 -04:00
BlenderNeko
dd1be4e992
Make nodes map over input lists (#579)
...
* allow nodes to map over lists
* make work with IS_CHANGED and VALIDATE_INPUTS
* give list outputs distinct socket shape
* add rebatch node
* add batch index logic
* add repeat latent batch
* deal with noise mask edge cases in latentfrombatch
2023-05-13 11:15:45 -04:00
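A minimal sketch, not ComfyUI's executor, of the list-mapping behavior in the commit above: when a node built for single values receives list inputs, run it once per element and collect the outputs into a list.

    def map_node_over_lists(node_fn, **inputs):
        lengths = [len(v) for v in inputs.values() if isinstance(v, list)]
        if not lengths:
            return node_fn(**inputs)  # no list inputs: plain single execution
        results = []
        for i in range(max(lengths)):
            # shorter lists wrap around so every index gets a value
            call = {k: (v[i % len(v)] if isinstance(v, list) else v) for k, v in inputs.items()}
            results.append(node_fn(**call))
        return results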
comfyanonymous
87743e7748
Make MaskToImage support batched masks.
2023-05-10 10:03:30 -04:00
comfyanonymous
540fced680
Support softsign hypernetwork.
2023-05-05 00:16:57 -04:00
comfyanonymous
10ff210ffb
Refactor.
2023-05-03 17:48:35 -04:00
pythongosssss
eaeac55c0d
remove unused import
2023-05-03 18:21:23 +01:00
pythongosssss
d8017626fb
use comfy progress bar
2023-05-03 18:19:22 +01:00
pythongosssss
f6154607f9
Merge remote-tracking branch 'origin/master' into tiled-progress
2023-05-03 17:33:42 +01:00
pythongosssss
33b0ba6464
added progress to encode + upscale
2023-05-02 19:18:07 +01:00
comfyanonymous
00e088bbe7
Python 3.7 support.
2023-04-25 14:02:17 -04:00
comfyanonymous
df8cc5d9e3
Add hypernetwork example link to readme.
...
Move hypernetwork loader node to loaders.
2023-04-24 03:08:51 -04:00
comfyanonymous
5366170a23
Support all known hypernetworks.
2023-04-24 02:36:06 -04:00
comfyanonymous
e6771d0986
Implement Linear hypernetworks.
...
Add a HypernetworkLoader node to use hypernetworks.
2023-04-23 12:35:25 -04:00
comfyanonymous
4df70d0f62
Fix model_management import so it doesn't get executed twice.
2023-04-15 19:04:33 -04:00
comfyanonymous
25d89b66a3
Fix for older Python.
...
from: https://github.com/comfyanonymous/ComfyUI/discussions/476
2023-04-15 10:56:15 -04:00
comfyanonymous
fb39162c40
LatentCompositeMasked: negative x, y don't work.
2023-04-14 00:49:19 -04:00
comfyanonymous
3fe8074417
Refactor: move nodes_mask_convertion nodes to nodes_mask.
2023-04-14 00:21:01 -04:00
comfyanonymous
7f6c42d878
Merge branch 'image-to-mask' of https://github.com/missionfloyd/ComfyUI
...
# Conflicts:
# nodes.py
2023-04-14 00:15:48 -04:00
comfyanonymous
eb3fe3df91
Update comfy_extras/nodes_mask.py
...
Co-authored-by: missionfloyd <missionfloyd@users.noreply.github.com>
2023-04-14 00:12:15 -04:00
missionfloyd
cb25768a09
Move mask conversion to separate file
2023-04-13 03:11:17 -06:00
mligaintart
c80d164c5b
Adds masking to Latent Composite, and provides new masking utilities to allow better compositing.
2023-04-06 15:18:20 -04:00
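A sketch of the core masked-composite blend these utilities enable; the real nodes also handle offsets, cropping, and resizing, which are omitted here.

    import torch

    def composite_masked(destination, source, mask):
        # blend source over destination where mask is 1, keep destination where it is 0
        mask = mask.to(destination.dtype)
        return source * mask + destination * (1.0 - mask)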
comfyanonymous
b09e510414
Rename and reorganize post processing nodes.
2023-04-04 22:54:33 -04:00
comfyanonymous
1f0a236864
Convert line endings to unix.
2023-04-04 13:56:13 -04:00
EllangoK
2313227ae6
use common_upscale in blend
2023-04-04 10:57:34 -04:00
EllangoK
aad55ad912
blend supports any size, dither -> quantize
2023-04-03 09:52:04 -04:00
EllangoK
f9f6352506
adds Blend, Blur, Dither, Sharpen nodes
2023-04-02 18:44:27 -04:00
comfyanonymous
b55667284c
Add support for unCLIP SD2.x models.
...
See _for_testing/unclip in the UI for the new nodes.
unCLIPCheckpointLoader is used to load them.
unCLIPConditioning is used to add the image conditioning and takes as input the
output of CLIPVisionEncode, which has been moved to the conditioning section.
2023-04-01 23:19:15 -04:00
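A hedged sketch of what unCLIPConditioning conceptually does with the CLIPVisionEncode output described above: attach the image embedding, a strength, and a noise-augmentation level to each conditioning entry. The dictionary keys are illustrative and may not match ComfyUI's exact internals.

    def add_unclip_cond(conditioning, clip_vision_output, strength=1.0, noise_augmentation=0.0):
        out = []
        for embedding, options in conditioning:
            options = dict(options)  # copy so the original conditioning is untouched
            entry = {"clip_vision_output": clip_vision_output,
                     "strength": strength,
                     "noise_augmentation": noise_augmentation}
            options["unclip_conditioning"] = options.get("unclip_conditioning", []) + [entry]
            out.append([embedding, options])
        return out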
comfyanonymous
d5bf8038c8
Merge T2IAdapterLoader and ControlNetLoader.
...
Workflows will be auto-updated.
2023-03-17 18:17:59 -04:00
comfyanonymous
ee6cca1d0f
Add folder_paths so models can be in multiple paths.
2023-03-17 18:01:11 -04:00
comfyanonymous
a2136acb6c
Prevent model_management from being loaded twice.
2023-03-15 15:18:18 -04:00
comfyanonymous
9c4c183708
Put image upscaling nodes in the image/upscaling category.
2023-03-11 18:10:36 -05:00