Commit Graph

1384 Commits

Author SHA1 Message Date
doctorpangloss
fd503d8a96 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2024-08-29 16:37:30 -07:00
comfyanonymous
ea3f39bd69 InstantX depth flux controlnet. 2024-08-29 02:14:19 -04:00
comfyanonymous
b33cd61070 InstantX canny controlnet. 2024-08-28 19:02:50 -04:00
doctorpangloss
ccdbd957ef Fix pylint issues 2024-08-28 15:48:47 -07:00
doctorpangloss
9e8bb0b297 Add image tracing to SVG support using vtrace and python skia. The Skia library can be used for additional drawing tasks. 2024-08-28 14:49:19 -07:00
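The commit above notes that Skia can also be used for general drawing tasks; a minimal sketch using the skia-python bindings (assuming skia-python is installed; the vtrace SVG-tracing step itself is not shown) looks like:

    import skia

    # Draw a simple anti-aliased shape and write it out as a PNG.
    surface = skia.Surface(256, 256)
    with surface as canvas:
        canvas.clear(skia.ColorWHITE)
        paint = skia.Paint(AntiAlias=True, Color=skia.ColorBLUE)
        canvas.drawCircle(128, 128, 64, paint)

    surface.makeImageSnapshot().save("circle.png", skia.kPNG)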
doctorpangloss
46ffaa2f0d Fix Flux controlnets 2024-08-28 14:48:42 -07:00
comfyanonymous
d31e226650 Unify RMSNorm code. 2024-08-28 16:56:38 -04:00
comfyanonymous
38c22e631a Fix case where model was not properly unloaded in merging workflows. 2024-08-27 19:03:51 -04:00
doctorpangloss
54740d99d6 Upstream the chat templates 2024-08-27 12:58:40 -07:00
Chenlei Hu
6bbdcd28ae Support weight padding on diff weight patch (#4576) 2024-08-27 13:55:37 -04:00
comfyanonymous
ab130001a8 Do RMSNorm in native type. 2024-08-27 02:41:56 -04:00
doctorpangloss
8615c86722 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2024-08-26 16:59:38 -07:00
doctorpangloss
27f4d70904 Fix pylint 2024-08-26 16:56:27 -07:00
doctorpangloss
f49bcd4f3c Upstream InstantX Union ControlNet support for Flux 2024-08-26 16:54:29 -07:00
comfyanonymous
2ca8f6e23d Make the stochastic fp8 rounding reproducible. 2024-08-26 15:12:06 -04:00
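The one-line commit above concerns reproducibility of stochastic rounding; the essence of the fix is to drive the rounding noise from a seeded generator. A simplified, hypothetical sketch (rounding to a uniform grid rather than to real fp8, which uses a per-value step):

    import torch

    def stochastic_round(x: torch.Tensor, step: float, seed: int) -> torch.Tensor:
        # Round down or up with probability proportional to the distance to each
        # neighbour; the seeded generator makes repeated calls bit-identical.
        gen = torch.Generator(device=x.device).manual_seed(seed)
        noise = torch.rand(x.shape, generator=gen, device=x.device, dtype=x.dtype)
        return torch.floor(x / step + noise) * step

    x = torch.randn(4, 4)
    assert torch.equal(stochastic_round(x, 0.25, seed=0), stochastic_round(x, 0.25, seed=0))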
comfyanonymous
7985ff88b9 Use less memory in float8 lora patching by doing calculations in fp16. 2024-08-26 14:45:58 -04:00
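The memory saving in the float8 LoRA commit comes from doing the weight patch arithmetic in fp16 rather than fp32; a hedged sketch of the idea (hypothetical helper, not the actual comfy.lora code):

    import torch

    def apply_lora_delta(weight_fp8: torch.Tensor, delta: torch.Tensor) -> torch.Tensor:
        # Upcast the fp8 weight to fp16 instead of fp32, add the LoRA delta,
        # then cast back; the temporary buffer is half the size of an fp32 one.
        w = weight_fp8.to(torch.float16)
        w.add_(delta.to(torch.float16))
        return w.to(weight_fp8.dtype)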
comfyanonymous
c6812947e9 Fix potential memory leak. 2024-08-26 02:07:32 -04:00
doctorpangloss
48ca1a4910 Include Kijai fp8 nodes. LoRAs are not supported by nf4 2024-08-25 22:41:10 -07:00
doctorpangloss
69e6d52301 Fix tests 2024-08-25 19:55:18 -07:00
doctorpangloss
c4fe16252b Fix imports 2024-08-25 18:56:47 -07:00
doctorpangloss
7100603016 Register moves 2024-08-25 18:53:50 -07:00
doctorpangloss
5155a3e248 Merge WIP 2024-08-25 18:52:29 -07:00
doctorpangloss
d7b65c9f55 Add flux controlnet to known controlnets 2024-08-25 15:24:46 -07:00
Benjamin Berman
ad9c4a7237 Upstream nf4 nodes 2024-08-25 15:23:14 -07:00
comfyanonymous
9230f65823 Fix some controlnets OOMing when loading. 2024-08-25 05:54:29 -04:00
comfyanonymous
8ae23d8e80 Fix onnx export. 2024-08-23 17:52:47 -04:00
comfyanonymous
7df42b9a23 Fix dora. 2024-08-23 04:58:59 -04:00
comfyanonymous
5d8bbb7281 Cleanup. 2024-08-23 04:06:27 -04:00
comfyanonymous
2c1d2375d6 Fix. 2024-08-23 04:04:55 -04:00
Simon Lui
64ccb3c7e3 Rework IPEX check for future inclusion of XPU into Pytorch upstream and do a bit more optimization of ipex.optimize(). (#4562) 2024-08-23 03:59:57 -04:00
Scorpinaus
9465b23432 Added SD15_Inpaint_Diffusers model support for unet_config_from_diffusers_unet function (#4565) 2024-08-23 03:57:08 -04:00
comfyanonymous
c0b0da264b Missing imports. 2024-08-22 17:20:51 -04:00
comfyanonymous
c26ca27207 Move calculate function to comfy.lora 2024-08-22 17:12:00 -04:00
comfyanonymous
7c6bb84016 Code cleanups. 2024-08-22 17:05:12 -04:00
comfyanonymous
c54d3ed5e6 Fix issue with models staying loaded in memory. 2024-08-22 15:58:20 -04:00
comfyanonymous
c7ee4b37a1 Try to fix some lora issues. 2024-08-22 15:32:18 -04:00
David
7b70b266d8 Generalize MacOS version check for force-upcast-attention (#4548)
This code automatically forces upcasting attention for MacOS versions 14.5 and 14.6. My computer returns the string "14.6.1" for `platform.mac_ver()[0]`, so this generalizes the comparison to catch more versions.

I am running MacOS Sonoma 14.6.1 (latest version) and was seeing black image generation on previously functional workflows after recent software updates. This PR solved the issue for me.

See comfyanonymous/ComfyUI#3521
2024-08-22 13:24:21 -04:00
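Since platform.mac_ver()[0] returns patch-level strings such as "14.6.1", the generalized check described above compares parsed version components rather than exact strings. A minimal sketch (hypothetical helper name, not the actual model management code):

    import platform

    def macos_needs_upcast_attention() -> bool:
        # mac_ver()[0] is e.g. "14.6.1"; compare the parsed (major, minor)
        # tuple so point releases like 14.6.1 are still caught.
        version = platform.mac_ver()[0]
        if not version:
            return False  # not running on macOS
        major_minor = tuple(int(p) for p in version.split(".")[:2])
        return major_minor >= (14, 5)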
comfyanonymous
8f60d093ba Fix issue. 2024-08-22 10:38:24 -04:00
comfyanonymous
843a7ff70c fp16 is actually faster than fp32 on a GTX 1080. 2024-08-21 23:23:50 -04:00
comfyanonymous
a60620dcea Fix slow performance on 10 series Nvidia GPUs. 2024-08-21 16:39:02 -04:00
comfyanonymous
015f73dc49 Try a different type of flux fp16 fix. 2024-08-21 16:17:15 -04:00
comfyanonymous
904bf58e7d Make --fast work on pytorch nightly. 2024-08-21 14:01:41 -04:00
Svein Ove Aas
5f50263088 Replace use of .view with .reshape (#4522)
When generating images with fp8_e4_m3 Flux and batch size >1, using --fast, ComfyUI throws a "view size is not compatible with input tensor's size and stride" error pointing at the first of these two calls to view.

As reshape is semantically equivalent to view except for working on a broader set of inputs, there should be no downside to changing this. The only difference is that it clones the underlying data in cases where .view would error out. I have confirmed that the output still looks as expected, but cannot confirm that no mutable use is made of the tensors anywhere.

Note that --fast is only marginally faster than the default.
2024-08-21 11:21:48 -04:00
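The error quoted above is the standard PyTorch failure mode for .view on a tensor whose strides no longer describe contiguous memory; a small standalone repro of the view/reshape difference (independent of Flux or --fast):

    import torch

    x = torch.randn(2, 3, 4).transpose(0, 1)  # transposing makes the tensor non-contiguous

    try:
        x.view(6, 4)  # fails: view size is not compatible with input tensor's size and stride
    except RuntimeError as err:
        print("view failed:", err)

    y = x.reshape(6, 4)  # reshape falls back to copying when a true view is impossible
    print(y.shape)       # torch.Size([6, 4])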
comfyanonymous
76369e991c Indentation. 2024-08-20 23:02:45 -07:00
Xrvk
bd18041d25 Add Flux model support for InstantX style controlnet residuals (#4444)
* Add Flux model support for InstantX style controlnet residuals

* Refactor Flux controlnet residual step to a separate method

* Rollback minor change

* New format for applying controlnet residuals: input->double_blocks, output->single_blocks

* Adjust XLabs Flux controlnet to fit new syntax of applying Flux controlnet residuals

* Remove unnecessary import and minor style change
2024-08-20 23:02:45 -07:00
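The bullets above describe the new convention for injecting controlnet residuals: the "input" list is added after the double-stream blocks and the "output" list after the single-stream blocks. A heavily simplified, hypothetical sketch of that pattern (names and indexing are illustrative, not the actual Flux model code):

    import torch

    def run_blocks_with_control(img, txt, vec, pe, double_blocks, single_blocks, control=None):
        control_input = (control or {}).get("input", [])
        control_output = (control or {}).get("output", [])

        for i, block in enumerate(double_blocks):
            img, txt = block(img=img, txt=txt, vec=vec, pe=pe)
            if i < len(control_input) and control_input[i] is not None:
                img = img + control_input[i]  # "input" residuals -> double-stream blocks

        x = torch.cat((txt, img), dim=1)
        for i, block in enumerate(single_blocks):
            x = block(x, vec=vec, pe=pe)
            if i < len(control_output) and control_output[i] is not None:
                # "output" residuals -> single-stream blocks (image tokens only)
                x[:, txt.shape[1]:] = x[:, txt.shape[1]:] + control_output[i]

        return x[:, txt.shape[1]:]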
doctorpangloss
3e54f9da36 Fix torch_dtype issues, missing DualCLIPLoader known model support 2024-08-20 23:00:12 -07:00
comfyanonymous
03ec517afb Remove useless line, adjust windows default reserved vram. 2024-08-21 00:47:19 -04:00
doctorpangloss
540c43fae7 Typings 2024-08-20 21:25:16 -07:00
comfyanonymous
510f3438c1 Speed up fp8 matrix mult by using better code. 2024-08-20 22:53:26 -04:00
comfyanonymous
ea63b1c092 Simpletrainer lycoris format. 2024-08-20 12:05:13 -04:00