ComfyUI-ZLUDA

A Windows-only version of ComfyUI (the most powerful and modular diffusion model GUI, API, and backend with a graph/nodes interface) that uses ZLUDA to get better performance out of AMD GPUs.

What's New?

  • Changed how ZLUDA is patched into ComfyUI itself. The patching code now lives in an external file, which is much cleaner and easier to change if it becomes necessary.

  • Added a way to use any ZLUDA build you want (for example, to match the HIP version you use, such as 6.1 or 6.2). After installing, close the app and run patchzluda2.bat. It will ask for the URL of the ZLUDA build you want to use; you can pick one from the releases of lshqqytiger's ZLUDA Fork. Paste the link into the prompt via right-click (a correct link looks like this: https://github.com/lshqqytiger/ZLUDA/releases/download/rel.d60bddbc870827566b3d2d417e00e1d2d8acc026/ZLUDA-windows-rocm6-amd64.zip), press Enter, and it will patch that ZLUDA build into ComfyUI for you. (See the example after this list.)

  • Reverted the ZLUDA version back to 3.8.4. After updating, try running patchzluda.bat; if you still have problems, delete the venv folder and re-run install.bat.

  • (Superseded by the item above.) Updated the ZLUDA version to 3.8.5. If you had already installed comfyui-zluda, you could update ZLUDA by running patchzluda.bat once. As always, the first run of every type of model takes extra time.

  • Added a "small flux guide." This aims to use low vram and provides the very basic necessary files needed to get flux generation running. HERE

  • Added --reserve-vram 0.9 to the command-line options the app starts with. This greatly helps keep generations from using too much memory.

  • Renamed start.bat to comfyui.bat, because Windows already has a command named start, which caused problems. Also added fix-update.bat, which fixes the issue that prevented updating to the latest version.
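
For reference, using patchzluda2.bat from a cmd prompt inside the ComfyUI-Zluda folder looks roughly like this (a sketch; the exact prompt wording may differ, and the URL is just the example release linked above):

cd ComfyUI-Zluda
patchzluda2.bat
rem when asked for the ZLUDA build URL, right-click to paste it and press Enter, e.g.:
rem https://github.com/lshqqytiger/ZLUDA/releases/download/rel.d60bddbc870827566b3d2d417e00e1d2d8acc026/ZLUDA-windows-rocm6-amd64.zip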

Important

📢 REGARDING KEEPING THE APP UP TO DATE

Avoid using the update function from the manager; use git pull instead. The app already does this every time you open it via comfyui.bat, so it always stays up to date with whatever is on this GitHub page.

Only use ComfyUI Manager to update extensions (Manager -> Custom Nodes Manager -> set the filter to Installed -> click "Check Update" at the bottom of the window). Updating anything else through it breaks the base installation; if that happens, run install.bat once again.
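
If you ever want to pull the update yourself, this minimal sketch (run from a cmd prompt) does the same thing comfyui.bat does on startup:

cd ComfyUI-Zluda
git pull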

📢 REGARDING RX 480-580 AND SIMILAR GPUS

Over time we have to keep updating certain packages (torch in particular) to keep up with the original app and with the requirements of newer models. These torch updates hurt performance on GPUs older than RDNA. If you have one of those cards and are seeing memory problems or severe slowdowns, run fixforrx580.bat. This is not mandatory and it won't make your GPU faster than before, but it is less problematic than the latest torch we use with other GPUs.

📢 RANDOM TIPS & TRICKS FOR USING AMD GPUs

  • Generation speed is slower than on comparable NVIDIA cards; we all know and accept that. What is often worse is running out of memory at the end, during VAE decoding. To work around this, add "--cpu-vae" to commandline_args in comfyui.bat; decoding then uses your system memory (the more, the better) and your CPU. This may be slower, but at least it works (see the sketch below).
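
As a rough sketch, the arguments line in comfyui.bat could end up looking like this after adding the flag (the COMMANDLINE_ARGS variable name here is illustrative; check your own comfyui.bat for the exact name and flags it uses):

rem illustrative snippet from comfyui.bat: append --cpu-vae to the existing startup flags
set COMMANDLINE_ARGS=--reserve-vram 0.9 --cpu-vae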

Dependencies

If you are starting from the very beginning, you need:

  1. Git: Download from https://git-scm.com/download/win. During installation, don't forget to check the box for "Use Git from the Windows Command line and also from 3rd-party software" so Git is added to your system PATH.

  2. Python 3.10.11 (3.11 also works, but 3.10 is what most popular nodes currently use): install the latest 3.10 release from python.org. Do not use the Windows Store version; if you have it installed, uninstall it and install from python.org instead. During installation, remember to check the box for "Add Python to PATH" on the "Customize Python" screen.

  3. Visual C++ Runtime: Download vc_redist.x64.exe and install it.

  4. HIP SDK 5.7.1: install the correct version, "Windows 10 & 11 5.7.1 HIP SDK", from HERE.

    (Note: this app installs ZLUDA for HIP 5.7.1 by default. If you want to use HIP 6.1 or 6.2, get the matching ZLUDA release link from lshqqytiger's ZLUDA Fork, run patchzluda2.bat, paste the link via right-click (a correct link looks like this: https://github.com/lshqqytiger/ZLUDA/releases/download/rel.d60bddbc870827566b3d2d417e00e1d2d8acc026/ZLUDA-windows-rocm6-amd64.zip), and press Enter; it will patch that ZLUDA build into ComfyUI for you. This also means you have to change the paths below from "5.7" to "6.x" where needed.)

  5. Add the system variable HIP_PATH with the value C:\Program Files\AMD\ROCm\5.7\ (this is the default folder; change it if you installed the SDK to another drive). You can verify both variables later with the check commands shown after this list.

    1. Under System Variables (the lower pane of the Environment Variables window), check that there is a variable called HIP_PATH.
    2. Also under System Variables, find the variable called Path, double-click it, click "New", and add: C:\Program Files\AMD\ROCm\5.7\bin
  6. If you have an AMD GPU below the 6800 (6700, 6600, etc.), download the recommended library files for your GPU from the Brknsoul Repository.

    1. Go to the folder "C:\Program Files\AMD\ROCm\5.7\bin\rocblas"; there is a "library" folder inside. Back up its contents somewhere else.
    2. Open the downloaded optimized library archive and extract its contents into the library folder (overwriting if necessary): "C:\Program Files\AMD\ROCm\5.7\bin\rocblas\library"
  7. Reboot your system.
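
As a quick sanity check that step 5 worked, open a new cmd window after rebooting and run the following (a minimal sketch using standard Windows commands; adjust the path if you installed a 6.x SDK or used another drive):

rem should print C:\Program Files\AMD\ROCm\5.7\
echo %HIP_PATH%
rem prints your Path only if it contains a ROCm entry; no output means the bin folder was not added
echo %PATH% | findstr /I "ROCm"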

Setup (Windows-Only)

Open a cmd prompt. (PowerShell doesn't work; you have to use the command prompt.)

git clone https://github.com/patientx/ComfyUI-Zluda
cd ComfyUI-Zluda
install.bat

To start it later (or to create a shortcut to):

comfyui.bat

Also, for later, when you need to re-patch ZLUDA (after a torch update, etc.), you can use:

patchzluda.bat

  • The first generation will take around 10-15 minutes, and there won't be any progress indicator in the web UI or the cmd window; just wait. ZLUDA is creating a database for generation with your GPU.

Note

This long first-generation wait can happen again after torch changes, ZLUDA version changes, and/or GPU driver changes.

Troubleshooting

  • DO NOT use non-English characters in the folder path that ComfyUI-Zluda is installed under.
  • Wipe your pip cache at "C:\Users\USERNAME\AppData\Local\pip\cache". You can also do this while the venv is active with: pip cache purge
  • xformers isn't usable with ZLUDA, so any nodes/packages that require it don't work. Flash attention doesn't work either. Lastly, using CodeFormer for face restoration gives "Failed inference: CUDA driver error: unknown error"; use GFPGAN, GPEN, RestoreFormer, or other face restoration models instead.
  • Have the latest drivers installed for your AMD GPU. Also remove any NVIDIA drivers left over from previous NVIDIA GPUs.
  • If you see ZLUDA errors, make sure these three files are inside "ComfyUI-Zluda\venv\Lib\site-packages\torch\lib": cublas64_11.dll (231 KB), cusparse64_11.dll (199 KB), nvrtc64_112_0.dll (129 KB). If they are there but much bigger in size, run patchzluda.bat.
  • If you can't solve your problem with these steps and want to start from zero, delete the "venv" folder and re-run install.bat.
  • If you can't git pull to the latest version, run git fetch --all and then git reset --hard origin/master; after that, git pull will work.
  • Problems with caffe2_nvrtc.dll: if you are sure you installed HIP properly and can see it on your PATH, do NOT use Python from the Windows Store; use the link provided above or 3.11 from the official website. After uninstalling the Windows Store Python and installing the one from python.org, be sure to delete the venv folder and run install.bat once again.
  • rocBLAS error: if you have an AMD integrated GPU (e.g., AMD Radeon(TM) Graphics), you need to add HIP_VISIBLE_DEVICES=1 to your environment variables (see the sketch below); otherwise HIP defaults to the iGPU, which will most likely not work. This behavior is caused by a bug in the ROCm driver.
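
A minimal sketch of setting that variable from a cmd prompt (setx stores it in your user environment; open a new cmd window afterwards, or reboot, for it to take effect):

rem tell HIP to use GPU index 1 (the discrete card) instead of the integrated GPU
setx HIP_VISIBLE_DEVICES 1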

Example Workflows

Shortcuts

Keybind | Explanation
Ctrl + Enter | Queue up current graph for generation
Ctrl + Shift + Enter | Queue up current graph as first for generation
Ctrl + Alt + Enter | Cancel current generation
Ctrl + Z / Ctrl + Y | Undo/Redo
Ctrl + S | Save workflow
Ctrl + O | Load workflow
Ctrl + A | Select all nodes
Alt + C | Collapse/uncollapse selected nodes
Ctrl + M | Mute/unmute selected nodes
Ctrl + B | Bypass selected nodes (acts like the node was removed from the graph and the wires reconnected through)
Delete / Backspace | Delete selected nodes
Ctrl + Backspace | Delete the current graph
Space | Move the canvas around when held and moving the cursor
Ctrl/Shift + Click | Add clicked node to selection
Ctrl + C / Ctrl + V | Copy and paste selected nodes (without maintaining connections to outputs of unselected nodes)
Ctrl + C / Ctrl + Shift + V | Copy and paste selected nodes (maintaining connections from outputs of unselected nodes to inputs of pasted nodes)
Shift + Drag | Move multiple selected nodes at the same time
Ctrl + D | Load default graph
Alt + + | Canvas zoom in
Alt + - | Canvas zoom out
Ctrl + Shift + LMB + Vertical drag | Canvas zoom in/out
P | Pin/unpin selected nodes
Ctrl + G | Group selected nodes
Q | Toggle visibility of the queue
H | Toggle visibility of history
R | Refresh graph
F | Show/hide menu
. | Fit view to selection (whole graph when nothing is selected)
Double-Click LMB | Open node quick search palette
Shift + Drag | Move multiple wires at once
Ctrl + Alt + LMB | Disconnect all wires from clicked slot

Credits