Rattus
3597b27515
models: Use CoreModelPatcher
...
Use CoreModelPatcher for all internal ModelPatcher implementations. This drives
conditional use of the aimdo feature while ensuring that custom node packs can
keep using ModelPatcher unchanged for the moment.
2026-01-13 19:58:06 +10:00
comfyanonymous
c881a1d689
Support the siglip 2 naflex model as a clip vision model. ( #11831 )
...
Not useful yet.
2026-01-12 17:05:54 -05:00
comfyanonymous
d622a61874
Refactor: move clip_preprocess to comfy.clip_model ( #11586 )
2025-12-31 17:38:36 -05:00
comfyanonymous
c9ebe70072
Some changes to the previous hunyuan PR. ( #9725 )
2025-09-04 20:39:02 -04:00
Yousef R. Gamaleldin
261421e218
Add Hunyuan 3D 2.1 Support ( #8714 )
2025-09-04 20:36:20 -04:00
comfyanonymous
3412d53b1d
USO style reference. ( #9677 )
...
Load the projector.safetensors file with the ModelPatchLoader node, and use
the siglip_vision_patch14_384.safetensors "clip vision" model together with the
USOStyleReferenceNode.
2025-09-02 15:36:22 -04:00
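The node names above are the actual loaders; the sketch below only illustrates, in plain PyTorch, the rough data flow the commit describes: SigLIP image features passed through the loaded projector to produce style-reference conditioning. The class name, dimensions, and token count are illustrative assumptions, not ComfyUI's implementation.
```python
# Conceptual sketch only; StyleProjector and the dimensions are assumptions.
import torch
import torch.nn as nn

class StyleProjector(nn.Module):
    """Stand-in for the weights in projector.safetensors: maps SigLIP vision
    features into the conditioning width the diffusion model consumes."""
    def __init__(self, vision_dim: int = 1152, cond_dim: int = 4096):
        super().__init__()
        self.proj = nn.Linear(vision_dim, cond_dim)

    def forward(self, image_features: torch.Tensor) -> torch.Tensor:
        return self.proj(image_features)

# image_features would come from the SigLIP "clip vision" encoder.
image_features = torch.randn(1, 729, 1152)       # [batch, tokens, vision_dim]
style_tokens = StyleProjector()(image_features)  # [batch, tokens, cond_dim]
```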
thot experiment
e2eed9eb9b
throw away alpha channel in clip vision preprocessor ( #7769 )
...
Saves users from having to explicitly discard the channel.
2025-04-23 21:28:36 -04:00
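A minimal sketch of the idea, assuming ComfyUI's channels-last IMAGE convention ([batch, height, width, channels] floats); the function name is illustrative.
```python
import torch

def strip_alpha(image: torch.Tensor) -> torch.Tensor:
    """Drop the alpha channel from a [B, H, W, C] image tensor if present."""
    if image.shape[-1] == 4:
        image = image[..., :3]
    return image

rgba = torch.rand(1, 224, 224, 4)
assert strip_alpha(rgba).shape[-1] == 3
```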
comfyanonymous
3bfe4e5276
Support 512 siglip model.
2025-04-05 07:01:01 -04:00
comfyanonymous
50614f1b79
Fix regression with clip vision.
2025-03-17 13:56:11 -04:00
comfyanonymous
6dc7b0bfe3
Add support for giant dinov2 image encoder.
2025-03-17 05:53:54 -04:00
comfyanonymous
0bef826a98
Support llava clip vision model.
2025-03-06 00:24:43 -05:00
comfyanonymous
26fb2c68e8
Add a way to disable cropping in the CLIPVisionEncode node.
2024-11-28 20:24:47 -05:00
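A sketch of what such a toggle can look like: with cropping, scale the short side and center-crop to the model's square input; without it, resize both dimensions directly. The function name and `crop` flag are illustrative, not the node's actual signature.
```python
import torch
import torch.nn.functional as F

def preprocess(image: torch.Tensor, size: int = 224, crop: bool = True) -> torch.Tensor:
    """image: [B, H, W, C] floats in 0..1. Returns [B, C, size, size]."""
    image = image.movedim(-1, 1)  # channels-last -> channels-first
    h, w = image.shape[-2:]
    if crop:
        # Scale the short side to `size`, then center-crop the long side.
        scale = size / min(h, w)
        nh, nw = round(h * scale), round(w * scale)
        image = F.interpolate(image, size=(nh, nw), mode="bicubic", antialias=True)
        top, left = (nh - size) // 2, (nw - size) // 2
        image = image[:, :, top:top + size, left:left + size]
    else:
        # Squash directly to the target resolution, no cropping.
        image = F.interpolate(image, size=(size, size), mode="bicubic", antialias=True)
    return image

img = torch.rand(1, 300, 500, 3)
assert preprocess(img, crop=True).shape == (1, 3, 224, 224)
assert preprocess(img, crop=False).shape == (1, 3, 224, 224)
```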
comfyanonymous
8f0009aad0
Support new flux model variants.
2024-11-21 08:38:23 -05:00
comfyanonymous
7d2467e830
Some minor cleanups.
2024-10-05 13:22:39 -04:00
喵哩个咪
855789403b
support clip-vit-large-patch14-336 ( #4042 )
...
* support clip-vit-large-patch14-336
2024-07-17 13:12:50 -04:00
comfyanonymous
6f7869f365
Get clip vision image size from config.
2024-07-17 13:05:38 -04:00
comfyanonymous
65397ce601
Replace prints with logging and add --verbose argument.
2024-03-10 12:14:23 -04:00
comfyanonymous
4871a36458
Cleanup some unused imports.
2024-01-21 21:51:22 -05:00
comfyanonymous
d76a04b6ea
Add unfinished ImageOnlyCheckpointSave node to save a SVD checkpoint.
...
This node is unfinished; SVD checkpoints saved with it will work with
ComfyUI but not with anything else.
2024-01-17 19:46:21 -05:00
comfyanonymous
e45d920ae3
Don't resize clip vision image when the size is already good.
2023-12-16 03:06:10 -05:00
comfyanonymous
13e6d5366e
Switch clip vision to manual cast.
...
Make it use the same dtype as the text encoder.
2023-12-16 02:47:26 -05:00
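A minimal sketch of the "manual cast" pattern (not the actual comfy code): weights stay in their storage dtype and are cast to the activation dtype at call time, so the vision model can follow whatever dtype the text encoder runs in.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ManualCastLinear(nn.Linear):
    """Linear layer that casts its weights to the input's dtype on the fly."""
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weight = self.weight.to(dtype=x.dtype, device=x.device)
        bias = self.bias.to(dtype=x.dtype, device=x.device) if self.bias is not None else None
        return F.linear(x, weight, bias)

layer = ManualCastLinear(8, 4)                        # weights stored in fp32
out = layer(torch.randn(2, 8, dtype=torch.bfloat16))  # computed in bf16
assert out.dtype == torch.bfloat16
```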
comfyanonymous
719fa0866f
Set clip vision model in eval mode so it works without inference mode.
2023-12-15 18:53:08 -05:00
comfyanonymous
77755ab8db
Refactor comfy.ops
...
comfy.ops -> comfy.ops.disable_weight_init
This should make it clearer what these ops actually do.
Some unused code has also been removed.
2023-12-11 23:27:13 -05:00
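The pattern behind the name, in simplified form (the real comfy.ops module is more involved): layer subclasses whose reset_parameters is a no-op, so constructing a model skips the random initialization that a loaded state dict would overwrite anyway.
```python
import torch.nn as nn

class disable_weight_init:
    """Namespace of layer classes that skip default parameter initialization."""

    class Linear(nn.Linear):
        def reset_parameters(self):
            return None  # no random init; weights come from the checkpoint

    class Conv2d(nn.Conv2d):
        def reset_parameters(self):
            return None

    class LayerNorm(nn.LayerNorm):
        def reset_parameters(self):
            return None

# Build layers with these classes, then load_state_dict() as usual.
layer = disable_weight_init.Linear(768, 768, bias=True)
```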
comfyanonymous
174eba8e95
Use own clip vision model implementation.
2023-12-09 11:56:31 -05:00
comfyanonymous
9ac0b487ac
Make --gpu-only put intermediate values in GPU memory instead of cpu.
2023-12-08 02:35:45 -05:00
comfyanonymous
723847f6b3
Faster clip image processing.
2023-10-26 01:53:01 -04:00
comfyanonymous
430a8334c5
Fix some potential issues.
2023-10-18 19:48:36 -04:00
comfyanonymous
ed58730658
Don't leave very large hidden states in the clip vision output.
2023-09-12 15:09:10 -04:00
comfyanonymous
e85be36bd2
Add a penultimate_hidden_states to the clip vision output.
2023-09-08 14:06:58 -04:00
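A sketch of what "penultimate hidden states" means here, using dummy activations; the helper function and dict layout are illustrative, only the output name comes from the commit.
```python
import torch

def collect_outputs(hidden_states: list[torch.Tensor]) -> dict[str, torch.Tensor]:
    """hidden_states: per-layer activations from the vision transformer,
    ordered from the embedding layer to the final layer."""
    return {
        "last_hidden_state": hidden_states[-1],
        # The layer just before the final one; several image-conditioning
        # methods consume this instead of the final layer.
        "penultimate_hidden_states": hidden_states[-2],
    }

# Example with dummy activations: 13 "layers" of [batch, tokens, dim].
dummy = [torch.zeros(1, 257, 1024) for _ in range(13)]
outs = collect_outputs(dummy)
assert outs["penultimate_hidden_states"].shape == (1, 257, 1024)
```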
comfyanonymous
4e89b2c25a
Put clip vision outputs on the CPU.
2023-08-28 16:26:11 -04:00
comfyanonymous
a094b45c93
Load clipvision model to GPU for faster performance.
2023-08-28 15:29:27 -04:00
comfyanonymous
76d53c4622
Add support for clip g vision model to CLIPVisionLoader.
2023-08-18 11:13:29 -04:00
comfyanonymous
58f0c616ed
Fix clip vision issue with old transformers versions.
2023-08-16 11:36:22 -04:00
comfyanonymous
ae270f79bc
Fix potential issue with batch size and clip vision.
2023-08-16 11:05:11 -04:00
comfyanonymous
9cc12c833d
CLIPVisionEncode can now encode multiple images.
2023-08-14 16:54:05 -04:00
comfyanonymous
9e37f4c7d5
Fix error with ClipVision loader node.
2023-06-23 01:08:05 -04:00
comfyanonymous
f87ec10a97
Support base SDXL and SDXL refiner models.
...
Large refactor of the model detection and loading code.
2023-06-22 13:03:50 -04:00
comfyanonymous
cd930d4e7f
pop clip vision keys after loading them.
2023-06-18 21:21:17 -04:00
comfyanonymous
bb1f45d6e8
Properly disable weight initialization in clip models.
2023-06-14 20:13:08 -04:00
comfyanonymous
fa2cca056c
Don't initialize CLIPVision weights to default values.
2023-06-14 12:57:02 -04:00
comfyanonymous
92eca60ec9
Fix for new transformers version.
2023-04-09 15:55:21 -04:00
comfyanonymous
809bcc8ceb
Add support for unCLIP SD2.x models.
...
See _for_testing/unclip in the UI for the new nodes.
unCLIPCheckpointLoader is used to load them.
unCLIPConditioning is used to add the image conditioning; it takes as input a
CLIPVisionEncode output, which has been moved to the conditioning section.
2023-04-01 23:19:15 -04:00
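A rough sketch of what attaching the image conditioning amounts to, assuming ComfyUI's (tensor, options-dict) conditioning entries; the exact dictionary keys are illustrative.
```python
import torch

def add_unclip_conditioning(conditioning, clip_vision_output, strength=1.0, noise_augmentation=0.0):
    """Attach a CLIPVisionEncode output to every conditioning entry."""
    out = []
    for cond_tensor, options in conditioning:
        options = options.copy()
        entry = {"clip_vision_output": clip_vision_output,
                 "strength": strength,
                 "noise_augmentation": noise_augmentation}
        options["unclip_conditioning"] = options.get("unclip_conditioning", []) + [entry]
        out.append([cond_tensor, options])
    return out

cond = [[torch.zeros(1, 77, 1024), {}]]
cond = add_unclip_conditioning(cond, clip_vision_output="<CLIPVisionEncode output>", strength=0.8)
```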