KarryCharon
3ee78c064b
fix missing mps import
2023-07-12 10:06:34 +08:00
comfyanonymous
4e9e417506
Move to litegraph.
2023-07-11 03:12:00 -04:00
comfyanonymous
644c223312
Merge branch 'hidpi-canvas' of https://github.com/EHfive/ComfyUI
2023-07-11 03:04:10 -04:00
comfyanonymous
870b4d6a25
Update litegraph to latest.
2023-07-11 03:00:52 -04:00
Huang-Huang Bao
81a48fa3f2
Scale graph canvas based on DPI factor
...
Similar to fixes in litegraph.js editor demo:
3ef215cf11/editor/js/code.js (L19-L28)
Also works around the viewport problem of litegraph.js in DPI scaling scenarios.
Fixes #161
2023-07-11 14:47:58 +08:00
Dr.Lt.Data
1211003e34
feat/startup-script: Feature to avoid package installation errors when installing custom nodes. ( #856 )
...
* Support a startup script so installation does not lock files on Windows
* Modified: instead of executing scripts from the startup-scripts directory, execute prestartup_script.py for each custom node (a sketch follows this entry).
2023-07-11 02:33:21 -04:00
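The per-node prestartup mechanism described above can be pictured roughly as follows. This is a minimal sketch, not the committed code: the file name prestartup_script.py comes from the commit message, while the directory layout, function name, and error handling are illustrative assumptions.

```python
import os
import importlib.util

def run_prestartup_scripts(custom_nodes_dir="custom_nodes"):
    # Look for a prestartup_script.py inside each custom node folder and
    # execute it before the rest of the application imports anything, so
    # package installation can happen without file-locking issues.
    for name in sorted(os.listdir(custom_nodes_dir)):
        script = os.path.join(custom_nodes_dir, name, "prestartup_script.py")
        if not os.path.isfile(script):
            continue
        try:
            spec = importlib.util.spec_from_file_location(f"{name}.prestartup", script)
            module = importlib.util.module_from_spec(spec)
            spec.loader.exec_module(module)  # runs the node's setup code early
        except Exception as e:
            print(f"prestartup script failed for {name}: {e}")
```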
comfyanonymous
6e99974161
Support SDXL embedding format with 2 CLIP.
2023-07-10 10:34:59 -04:00
comfyanonymous
c1c170f61a
Don't patch weights when multiplier is zero.
2023-07-09 17:46:56 -04:00
comfyanonymous
20e6d36cda
Fix annoyance with textbox unselecting in Chromium.
2023-07-09 15:41:19 -04:00
comfyanonymous
55d00ccefd
Add latent2rgb matrix for SDXL.
2023-07-09 13:59:09 -04:00
comfyanonymous
42805fd416
Empty cache after model unloading for normal vram and lower.
2023-07-09 09:56:03 -04:00
comfyanonymous
7d69d770e1
Support loading clip_g from diffusers in CLIP Loader nodes.
2023-07-09 09:33:53 -04:00
comfyanonymous
c5779f04aa
Fix merging not working when model2 of the model merge node was itself a merge.
2023-07-08 22:31:10 -04:00
comfyanonymous
f5530206ef
Merge branch 'bugfix/img-offset' of https://github.com/ltdrdata/ComfyUI
2023-07-08 03:45:37 -04:00
Dr.Lt.Data
49aacb2e98
fix: Image.ANTIALIAS is no longer available. ( #847 )
...
* Modify deprecated API call
* Prevent breaking old Pillow users
* Change LANCZOS to BILINEAR (a compatibility sketch follows this entry)
2023-07-08 02:36:48 -04:00
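Image.ANTIALIAS was deprecated in Pillow 9 and removed in Pillow 10, which is the breakage this commit addresses. A compatibility shim along these lines keeps both old and new Pillow versions working; this is a hedged sketch of the pattern, not the exact change in the commit.

```python
from PIL import Image

# Pillow >= 9.1 exposes filters under Image.Resampling; the old
# Image.ANTIALIAS alias is gone in Pillow 10. Fall back so older
# Pillow installs keep working.
if hasattr(Image, "Resampling"):
    RESAMPLE_FILTER = Image.Resampling.BILINEAR
else:
    RESAMPLE_FILTER = Image.BILINEAR

def downscale(img: Image.Image, size: tuple[int, int]) -> Image.Image:
    return img.resize(size, RESAMPLE_FILTER)
```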
Dr.Lt.Data
6709b86b19
bugfix: image widget was misaligned when node has a multiline widget
2023-07-08 01:42:33 +09:00
comfyanonymous
3cd7423901
Merge branch 'Yaruze66-patch-1' of https://github.com/Yaruze66/ComfyUI
2023-07-07 01:55:10 -04:00
comfyanonymous
4685d2b07f
Merge branch 'condmask-fix' of https://github.com/vmedea/ComfyUI
2023-07-07 01:52:25 -04:00
comfyanonymous
83818185f0
CLIPTextEncodeSDXL now works when prompts are of very different sizes.
2023-07-06 23:23:54 -04:00
comfyanonymous
9caaa09c71
Add arguments to run the VAE in fp16 or bf16 for testing.
2023-07-06 23:23:46 -04:00
comfyanonymous
60096aa89a
Fix 7z error when extracting package.
2023-07-06 04:18:36 -04:00
comfyanonymous
d3b3c94616
Fix bug with weights when prompt is long.
2023-07-06 02:43:40 -04:00
comfyanonymous
fa8010f038
Disable autocast in unet for increased speed.
2023-07-05 21:58:29 -04:00
comfyanonymous
c6391df3a5
Fix loras not working when loading checkpoint with config.
2023-07-05 19:42:24 -04:00
comfyanonymous
7e69827fcc
Add a conditioning concat node.
2023-07-05 17:40:22 -04:00
comfyanonymous
2ff6108df3
Support loading unet files in diffusers format.
2023-07-05 17:38:59 -04:00
comfyanonymous
56d999484b
Add GPU variations of the SDE samplers that are less deterministic
...
but faster (an illustration of the trade-off follows this entry).
2023-07-05 01:39:38 -04:00
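The determinism trade-off is the usual one for GPU-side random number generation: a given seed reproduces exactly on one device, but the values are not guaranteed to match the CPU sequence or other hardware. The snippet below is a generic PyTorch illustration of that trade-off, not the sampler code itself.

```python
import torch

seed = 1234

# CPU noise: bit-exact across machines for the same seed.
cpu_gen = torch.Generator(device="cpu").manual_seed(seed)
noise_cpu = torch.randn(4, 128, 128, generator=cpu_gen, device="cpu")

# GPU noise: faster (no host-to-device copy of the noise tensor), but the
# values do not match the CPU sequence, so sampling is less deterministic
# across different hardware.
if torch.cuda.is_available():
    gpu_gen = torch.Generator(device="cuda").manual_seed(seed)
    noise_gpu = torch.randn(4, 128, 128, generator=gpu_gen, device="cuda")
```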
comfyanonymous
7ffb8dbe56
Add logit scale parameter so it's present when saving the checkpoint.
2023-07-04 23:01:28 -04:00
comfyanonymous
60bdf7c00b
Properly support SDXL diffusers loras for unet.
2023-07-04 21:15:23 -04:00
mara
386e66bd7f
Fix size check for conditioning mask
...
The wrong dimensions were being checked: [1] and [2] are the image size,
not [2] and [3]. This results in an out-of-bounds error if one of them
actually matches. (A sketch of the corrected check follows this entry.)
2023-07-04 16:34:42 +02:00
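In other words, the mask's spatial size lives in dimensions 1 and 2, and those are what must be compared against the latent resolution. A hedged sketch of the corrected check, with made-up function and variable names and an assumed (batch, height, width) mask layout:

```python
import torch
import torch.nn.functional as F

def fit_mask_to_latent(mask: torch.Tensor, latent: torch.Tensor) -> torch.Tensor:
    """mask: (batch, height, width); latent: (batch, channels, height, width)."""
    # Dimensions [1] and [2] of the mask hold its height/width; checking
    # [2] and [3] compares the wrong axes and can lead to out-of-bounds
    # access later when the mismatched mask is applied.
    if mask.shape[1] != latent.shape[2] or mask.shape[2] != latent.shape[3]:
        mask = F.interpolate(mask.unsqueeze(1), size=latent.shape[2:],
                             mode="bilinear", align_corners=False).squeeze(1)
    return mask
```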
comfyanonymous
06ce99e525
Fix issue with OSX.
2023-07-04 02:09:02 -04:00
comfyanonymous
35ef1e4992
Now the model merge blocks node will use the longest match.
2023-07-04 00:51:17 -04:00
comfyanonymous
558a13f5b2
ConditioningAverage now also averages the pooled output.
2023-07-03 21:44:37 -04:00
comfyanonymous
1a9658ee64
Add text encode nodes to control the extra parameters in SDXL.
2023-07-03 19:11:36 -04:00
comfyanonymous
fcee7e88db
Pass device to CLIP model.
2023-07-03 16:09:37 -04:00
comfyanonymous
be891646cd
Allow passing custom path to clip-g and clip-h.
2023-07-03 15:45:04 -04:00
comfyanonymous
fd93e324e8
Improvements for OSX.
2023-07-03 00:08:30 -04:00
Yaruze66
664b5f4c7e
Update extra_model_paths.yaml.example: add RealESRGAN path
2023-07-02 22:59:55 +05:00
comfyanonymous
033dc1f52a
Cleanup.
2023-07-02 11:58:23 -04:00
comfyanonymous
45e705696f
Add taesd weights to standalones.
2023-07-02 11:47:30 -04:00
comfyanonymous
280a4e3544
Switch to fp16 on some cards when the model is too big.
2023-07-02 10:00:57 -04:00
comfyanonymous
dd4abf1345
Add a --force-fp16 argument to force fp16 for testing.
2023-07-01 22:42:35 -04:00
comfyanonymous
1e24a78d85
--gpu-only now keeps the VAE on the device.
2023-07-01 15:22:40 -04:00
comfyanonymous
5ace1146c5
Lower latency by batching some text encoder inputs.
2023-07-01 15:07:39 -04:00
comfyanonymous
2ee0aa317c
Leave text_encoder on the CPU when it can handle it.
2023-07-01 14:38:51 -04:00
comfyanonymous
d5a7abe10d
Try to keep text encoders loaded and patched to increase speed.
...
load_model_gpu() is now used with the text encoder models instead of just
the unet.
2023-07-01 13:28:07 -04:00
comfyanonymous
e946dca0e1
Make highvram and normalvram shift the text encoders to vram and back.
...
This is faster for big text encoder models than running them on the CPU (a sketch follows this entry).
2023-07-01 12:37:23 -04:00
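These last two commits revolve around the same pattern: move the text encoder into VRAM for the encode call and return it to system RAM afterwards, rather than running it on the CPU. Below is a generic PyTorch illustration of that shuttle with hypothetical names; it is not ComfyUI's load_model_gpu() implementation.

```python
import torch

def encode_on_gpu(text_encoder: torch.nn.Module, tokens: torch.Tensor,
                  device: str = "cuda") -> torch.Tensor:
    # Shift the (potentially large) text encoder into VRAM only for the
    # duration of the forward pass, then move it back to free memory for
    # the unet. For big encoders this round trip is still faster than
    # running the whole forward pass on the CPU.
    text_encoder.to(device)
    try:
        with torch.no_grad():
            out = text_encoder(tokens.to(device))
    finally:
        text_encoder.to("cpu")
    return out
```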
comfyanonymous
32c62574aa
Fix nightly packaging.
2023-07-01 01:31:03 -04:00
comfyanonymous
351fc92d3f
Move model merging nodes to advanced and add to readme.
2023-06-30 15:21:55 -04:00
comfyanonymous
7dff6c094c
LoraLoader node now caches the lora file between executions.
2023-06-29 23:40:51 -04:00