From 4d4621f53fbd529ddbcf25d4aa058cafa50150d0 Mon Sep 17 00:00:00 2001
From: patientx
Date: Wed, 4 Jun 2025 22:30:12 +0300
Subject: [PATCH] Update README.md

---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 2e9173a35..3e1953abb 100644
--- a/README.md
+++ b/README.md
@@ -16,8 +16,8 @@ Windows-only version of ComfyUI which uses ZLUDA to get better performance with
 - [Credits](#credits)
 
 ## What's New?
-* Added "CFZ Cudnn Toggle" node, it is for some of the audio models, not working with cudnn -which is enabled by default on new install method- to use it just connect it before ksampler -latent_image input or any latent input- disable cudnn, THEN after the vae decoding -which most of these problems occur- to re-enable cudnn , add it after vae-decoding, select audio_output and connect it save audio node of course enable cudnn now.This way within that workflow you are disabling cudnn when working with models that are not compatible with it, so instead of completely disabling it in comfy we can do it locally like this.
-* Added an experiment of mine, "CFZ Checkpoint Loader", a very basic quantizer for models. It only works -reliably- with SDXL and variants aka noobai or illustrious etc. It only works on the unet aka main model so no clips or vae. BUT it gives around %23 to %29 less vram usage with sdxl models. The generation time slows around 5 to 10 percent at most. This is especially good for low vram folks, 6GB - 8GB it could even be helpful for 4GB I don't know. Feel free to copy- modify- improve it, and try it with nvidia gpu's as well. Of course this fork is AMD only but you can take it and try it anywhere. Just you know I am not actively working on it, and besides SDXL cannot guarantee any vram improvements let alone a working node :) NOTE: It doesn't need any special packages or hardware so it probably would work with any gpu. Again, don't ask me to add x etc.
+* Added "CFZ Cudnn Toggle" node. Some of the audio models do not work with cuDNN, which is enabled by default with the new install method. To use it, connect the node before the KSampler's latent_image input (or any latent input) and disable cuDNN; THEN, to re-enable cuDNN after VAE decoding (which is where most of these problems occur), add another toggle after VAE decoding, select audio_output, connect it to the Save Audio node, and enable cuDNN there. This way you disable cuDNN only within that workflow, for models that are not compatible with it, instead of disabling it globally in ComfyUI.
+* "CFZ Checkpoint Loader" was broken: it could corrupt models if you loaded with it and quit halfway through. I have completely redone it. It now works outside checkpoint loading, so it never touches the original file, and when it quantizes a model it makes a copy and quantizes that. Please delete "cfz_checkpoint_loader.py" and use the newly added "cfz_patcher.py"; it has three separate nodes and is much safer and better.
 * BOTH of these nodes are inside the "cfz" folder. To use them, copy them into custom_nodes; they will appear the next time you open ComfyUI. To find them, search for "cfz" and you will see both nodes.
 * Fixed the flash-attention download error, and added a sage-attention fix, especially for the VAE out-of-memory errors that occur a lot with sage-attention enabled. NOTE: this doesn't require any special packages or hardware as far as I know, so it could work with everything.
 * `install-n.bat` now not only installs everything needed for MIOPEN and Flash-Attention use, it also automates installing triton (only supported on Python 3.10.x and 3.11.x) and flash-attention. So if you have a 6000+ GPU in particular, and have HIP 6.2.4 and the libraries if necessary, try it. But beware: there are still many unsolved errors, so it is not yet the default install method.
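
For readers curious how a pass-through toggle node like "CFZ Cudnn Toggle" can work, here is a minimal sketch in the ComfyUI custom-node style. The class name, socket names, and the `_Backend` stand-in object are hypothetical illustrations, not the actual cfz code; the real node would flip `torch.backends.cudnn.enabled` instead of the stand-in flag used here to keep the sketch dependency-free:

```python
class _Backend:
    """Stand-in for torch.backends.cudnn, so this sketch runs without torch."""
    enabled = True

backend = _Backend()

class CudnnToggle:
    """Pass-through node: forwards its input unchanged, flipping a backend
    flag as a side effect, so it can sit inline on any latent connection."""

    @classmethod
    def INPUT_TYPES(cls):
        # ComfyUI nodes declare their input sockets via INPUT_TYPES.
        return {"required": {
            "value": ("LATENT",),
            "enable_cudnn": ("BOOLEAN", {"default": False}),
        }}

    RETURN_TYPES = ("LATENT",)
    FUNCTION = "toggle"

    def toggle(self, value, enable_cudnn):
        backend.enabled = enable_cudnn  # side effect: cudnn on/off
        return (value,)                 # input passes through untouched
```

Because the node returns its input untouched, wiring one instance (cuDNN off) before the KSampler and another (cuDNN on) after VAE decoding gives exactly the per-workflow scoping described above.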
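
The VRAM savings the quantizer nodes aim for come from storing weights in fewer bits. The cfz implementation's actual scheme is not documented here, but the general idea can be sketched with a toy per-tensor symmetric int8 quantizer (function names are illustrative): each float weight is mapped to an 8-bit code plus one shared scale factor, roughly halving storage versus fp16 at the cost of a small rounding error.

```python
def quantize_int8(weights):
    """Map a list of floats to int8 codes plus one shared scale factor."""
    # Symmetric scheme: the largest magnitude maps to +/-127.
    scale = (max(abs(w) for w in weights) / 127.0) or 1.0  # avoid 0 scale
    codes = [max(-127, min(127, round(w / scale))) for w in weights]
    return codes, scale

def dequantize_int8(codes, scale):
    """Recover approximate float weights from the int8 codes."""
    return [c * scale for c in codes]
```

The per-weight error is bounded by the scale (one quantization step), which is why such schemes work acceptably on large UNet weight tensors while leaving more error-sensitive parts like the VAE untouched.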