missionfloyd
bc54b69c59
Change transpose to torch
2023-04-10 20:23:15 -06:00
missionfloyd
70564aebb6
Name split node outputs
2023-04-10 02:38:46 -06:00
missionfloyd
8c16b98008
Image composite node
2023-04-09 22:36:39 -06:00
missionfloyd
d414157701
Fix transpose with non-square images
2023-04-09 19:19:14 -06:00
missionfloyd
97c7a402d7
Merge channels node
2023-04-09 19:08:17 -06:00
missionfloyd
6f7abd9497
Image split, getchannel nodes
2023-04-09 18:48:05 -06:00
missionfloyd
d36ad5d958
Add transpose and rotate nodes
2023-04-08 20:43:43 -06:00
comfyanonymous
871a76b77b
Rename and reorganize post-processing nodes.
2023-04-04 22:54:33 -04:00
comfyanonymous
af291e6f69
Convert line endings to unix.
2023-04-04 13:56:13 -04:00
EllangoK
56196ab0f7
use common_upscale in blend
2023-04-04 10:57:34 -04:00
EllangoK
fa2febc062
Blend supports any image size; rename Dither -> Quantize
2023-04-03 09:52:04 -04:00
EllangoK
4c7a9dbcb6
adds Blend, Blur, Dither, Sharpen nodes
2023-04-02 18:44:27 -04:00
comfyanonymous
809bcc8ceb
Add support for unCLIP SD2.x models.
See _for_testing/unclip in the UI for the new nodes.
unCLIPCheckpointLoader is used to load them.
unCLIPConditioning is used to add the image conditioning; it takes as input a CLIPVisionEncode output (CLIPVisionEncode has been moved to the conditioning section).
2023-04-01 23:19:15 -04:00
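To make the wiring concrete, here is a rough sketch of such a graph in ComfyUI's API ("prompt") format, built as a plain Python dict. The node class names come from the commit message; the input field names, output slot indices, and the helper LoadImage/CLIPTextEncode nodes are assumptions for illustration only.

```python
# Illustrative only: a minimal unCLIP graph in ComfyUI's API ("prompt") format.
# Node class names come from the commit above; input names, output slot indices,
# and the helper LoadImage/CLIPTextEncode nodes are assumptions, not verified.
unclip_graph = {
    # Load an unCLIP SD2.x checkpoint; it also provides the bundled CLIP vision model.
    "1": {"class_type": "unCLIPCheckpointLoader",
          "inputs": {"ckpt_name": "sd21-unclip-h.ckpt"}},           # filename assumed
    # Reference image whose content should condition the generation.
    "2": {"class_type": "LoadImage", "inputs": {"image": "ref.png"}},
    # Encode the reference image with the CLIP vision model.
    "3": {"class_type": "CLIPVisionEncode",
          "inputs": {"clip_vision": ["1", 3], "image": ["2", 0]}},
    # Text prompt providing the base conditioning.
    "4": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["1", 1], "text": "a photo of a cat"}},
    # Add the image conditioning on top of the text conditioning.
    "5": {"class_type": "unCLIPConditioning",
          "inputs": {"conditioning": ["4", 0], "clip_vision_output": ["3", 0],
                     "strength": 1.0}},                             # parameter name assumed
}
```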
comfyanonymous
2e73367f45
Merge T2IAdapterLoader and ControlNetLoader.
Workflows will be auto updated.
2023-03-17 18:17:59 -04:00
comfyanonymous
e1a9e26968
Add folder_paths so models can be in multiple paths.
2023-03-17 18:01:11 -04:00
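The change means each model category maps to a list of directories that are searched in order, rather than a single hardcoded folder. Below is a minimal, self-contained sketch of that lookup pattern; the function names mirror ComfyUI's folder_paths API, but the bodies and the example paths are simplifications, not the actual implementation.

```python
import os

# Minimal sketch of the multi-path lookup idea behind folder_paths; the real module
# registers many more categories and supports user-configured extra paths.
folder_names_and_paths = {
    "checkpoints": ["models/checkpoints", "/mnt/shared/checkpoints"],  # example paths
    "style_models": ["models/style_models"],
}

def get_filename_list(folder_name):
    """Collect model filenames from every directory registered for a category."""
    files = []
    for base in folder_names_and_paths.get(folder_name, []):
        if os.path.isdir(base):
            files.extend(sorted(os.listdir(base)))
    return files

def get_full_path(folder_name, filename):
    """Return the first existing path for a filename across the registered dirs."""
    for base in folder_names_and_paths.get(folder_name, []):
        candidate = os.path.join(base, filename)
        if os.path.isfile(candidate):
            return candidate
    return None
```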
comfyanonymous
494cfe5444
Prevent model_management from being loaded twice.
2023-03-15 15:18:18 -04:00
comfyanonymous
c8f1acc4eb
Put image upscaling nodes in image/upscaling category.
2023-03-11 18:10:36 -05:00
comfyanonymous
9db2e97b47
Tiled upscaling with the upscale models.
2023-03-11 14:04:13 -05:00
comfyanonymous
905857edd8
Take some code from chainner to implement ESRGAN and other upscale models.
2023-03-11 13:09:28 -05:00
comfyanonymous
7ec1dd25a2
A tiny bit of reorganizing.
2023-03-06 01:30:17 -05:00
comfyanonymous
47acb3d73e
Implement support for t2i style model.
It needs the CLIPVision model, so I added CLIPVisionLoader and CLIPVisionEncode.
Put the clip vision model in models/clip_vision and the t2i style model in models/style_models.
StyleModelLoader loads it, StyleModelApply applies it, and ConditioningAppend appends the conditioning it outputs to a positive one.
2023-03-05 18:39:25 -05:00
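For orientation, a hedged sketch of what a loader node like StyleModelLoader looks like in ComfyUI's node API (INPUT_TYPES / RETURN_TYPES / NODE_CLASS_MAPPINGS). Only the node name and the models/style_models folder come from the commit; the field names, return type, and class body are assumptions, and the snippet expects to run inside a ComfyUI checkout so that folder_paths is importable.

```python
import folder_paths  # ComfyUI's model path registry (see the folder_paths commit above)

# Hedged skeleton in ComfyUI's custom-node style; the real StyleModelLoader lives in
# nodes.py and its exact inputs, return types, and loading code may differ.
class StyleModelLoaderSketch:
    @classmethod
    def INPUT_TYPES(cls):
        # Offer every file found under models/style_models as a dropdown choice.
        return {"required": {"style_model_name": (folder_paths.get_filename_list("style_models"),)}}

    RETURN_TYPES = ("STYLE_MODEL",)
    FUNCTION = "load_style_model"
    CATEGORY = "loaders"

    def load_style_model(self, style_model_name):
        # Resolve the filename against all configured style_models directories.
        path = folder_paths.get_full_path("style_models", style_model_name)
        # The real node delegates the actual loading to comfy.sd; returning the
        # resolved path keeps this sketch self-contained.
        return (path,)

# Register the node so ComfyUI picks it up.
NODE_CLASS_MAPPINGS = {"StyleModelLoaderSketch": StyleModelLoaderSketch}
```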