Commit Graph

2426 Commits

Author SHA1 Message Date
doctorpangloss
a79ccd625f bf16 selection for AMD 2024-05-22 22:45:15 -07:00
doctorpangloss
35cf996b68 ROCm 6.0 seems to require get_device_name to be called before memory methods in order to return valid data 2024-05-22 22:09:07 -07:00
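A minimal sketch of the ordering quirk described in the commit above, assuming a ROCm build of PyTorch that exposes HIP devices through the `torch.cuda` API; the call order is the point, the rest is illustrative:

```python
import torch

device = torch.device("cuda", 0)  # ROCm builds expose HIP GPUs via the cuda API

# On ROCm 6.0, querying the device name first appears to be required for
# the memory APIs below to report valid numbers.
_ = torch.cuda.get_device_name(device)

free_vram, total_vram = torch.cuda.mem_get_info(device)
print(f"free={free_vram / 2**30:.2f} GiB, total={total_vram / 2**30:.2f} GiB")
```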
doctorpangloss
bb159a219e Fix pip 22 bug, prefer vendor index URLs over PyPI 2024-05-22 21:26:44 -07:00
doctorpangloss
c0fc1d1458 Add pytorch-triton-rocm to dependencies when targeting AMD because accelerate needs to find it in the rocm repo 2024-05-22 21:21:23 -07:00
doctorpangloss
0fcd07962f Add Intel, AMD Linux Dockerfiles, improve error messages on AMD 2024-05-22 21:16:34 -07:00
doctorpangloss
b241ecc56d Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2024-05-21 11:38:24 -07:00
doctorpangloss
5b9eadc165 WIP documentation work 2024-05-21 11:38:20 -07:00
comfyanonymous
83d969e397 Disable xformers when tracing model. 2024-05-21 13:55:49 -04:00
doctorpangloss
f69b6225c0 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2024-05-20 12:06:35 -07:00
comfyanonymous
1900e5119f Fix potential issue. 2024-05-20 08:19:54 -04:00
comfyanonymous
276f8fce9f Print error when node is missing. 2024-05-20 07:04:08 -04:00
Dr.Lt.Data
4bc1884478 Provide a better error message when attempting to execute the workflow with a missing node. (#3517) 2024-05-20 06:58:46 -04:00
comfyanonymous
09e069ae6c Log the pytorch version. 2024-05-20 06:22:29 -04:00
comfyanonymous
11a2ad5110 Fix controlnet not upcasting on models that have it enabled. 2024-05-19 17:58:03 -04:00
comfyanonymous
4ae1515f14 Slightly faster latent2rgb previews. 2024-05-19 17:42:35 -04:00
comfyanonymous
f37a47110b Make --preview-method auto default to the fast latent2rgb previews. 2024-05-19 11:45:36 -04:00
comfyanonymous
0bdc2b15c7 Cleanup. 2024-05-18 10:11:44 -04:00
comfyanonymous
98f828fad9 Remove unnecessary code. 2024-05-18 09:36:44 -04:00
doctorpangloss
519cddcefc Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2024-05-17 14:04:44 -07:00
doctorpangloss
3f69a6cf1e Skip test related to freezing time 2024-05-17 14:01:55 -07:00
comfyanonymous
1c4af5918a Better error message if the webcam node doesn't work. 2024-05-17 14:02:09 -04:00
pythongosssss
91590adf04 Add webcam node (#3497)
* Add webcam node
* unused import
2024-05-17 13:16:08 -04:00
doctorpangloss
acd98ceef3 Further fixes to unit tests when running in GitHub 2024-05-17 10:12:23 -07:00
doctorpangloss
cb45b86b63 Patch torch device code here 2024-05-17 07:19:15 -07:00
doctorpangloss
4eb66f8a0a Fix clip clone bug 2024-05-17 07:17:33 -07:00
comfyanonymous
19300655dd Don't automatically switch to lowvram mode on GPUs with low memory. 2024-05-17 00:31:32 -04:00
doctorpangloss
b318b4cc28 Update tests to support CPU in GitHub 2024-05-16 15:31:55 -07:00
doctorpangloss
87d1f30902 Some base nodes now have unit tests 2024-05-16 15:01:51 -07:00
doctorpangloss
3d98440fb7 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2024-05-16 14:28:49 -07:00
doctorpangloss
5a9055fe05 Tokenizers are now shallow cloned when CLIP is cloned. This allows nodes to add vocab to the tokenizer, as some checkpoints and LoRAs may require. 2024-05-16 12:39:19 -07:00
comfyanonymous
46daf0a9a7 Add debug options to force on and off attention upcasting. 2024-05-16 04:09:41 -04:00
comfyanonymous
58f8388020 More proper fix for #3484. 2024-05-16 00:11:01 -04:00
comfyanonymous
2d41642716 Fix lowvram dora issue. 2024-05-15 02:47:40 -04:00
doctorpangloss
8741cb3ce8 LLM support in ComfyUI
- Currently uses `transformers`
- Supports model management and correctly loading and unloading models based on what your machine can support
- Includes a Text Diffusers 2 workflow to demonstrate text rendering in SD1.5
2024-05-14 17:30:23 -07:00
comfyanonymous
ec6f16adb6 Fix SAG. 2024-05-14 18:02:27 -04:00
comfyanonymous
bb4940d837 Only enable attention upcasting on models that actually need it. 2024-05-14 17:00:50 -04:00
comfyanonymous
b0ab31d06c Refactor attention upcasting code part 1. 2024-05-14 12:47:31 -04:00
doctorpangloss
0ee2f3bf15 Move advanced samplers into a place where they will be found 2024-05-13 19:36:27 -07:00
comfyanonymous
2de3b69b30 Support saving some more modelspec types. 2024-05-13 21:54:11 -04:00
doctorpangloss
355f2aef3a Fix parameters and user agent for ImageRequestParameter. 2024-05-13 17:59:02 -07:00
doctorpangloss
78e340e2d8 Traces now include the arguments for executing a node, wherever it makes sense to do so. 2024-05-13 15:48:16 -07:00
doctorpangloss
d11aed87ba OpenAPI ImageRequestParameter node uses a Chrome user-agent to better handle external URLs 2024-05-13 15:03:34 -07:00
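Illustrative only (not the node's actual implementation): fetching an external image URL with a browser-like User-Agent header, since some hosts reject requests from default library user agents.

```python
import requests

# A Chrome-like User-Agent string makes external image URLs resolve more
# reliably on hosts that block unknown clients.
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
                  "(KHTML, like Gecko) Chrome/124.0 Safari/537.36"
}
response = requests.get("https://example.com/image.png", headers=headers, timeout=30)
response.raise_for_status()
image_bytes = response.content
```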
freakabcd
cf6e1efb69 Show message on error when loading a workflow from file (works on drag and drop) (#3466) 2024-05-13 15:22:22 -04:00
comfyanonymous
ece5acb8e8 Fix nightly package workflow. 2024-05-12 16:05:10 -04:00
comfyanonymous
794a357f7a Update the nightly workflow. 2024-05-12 07:24:12 -04:00
shawnington
22edd3add5 Fix to LoadImage Node for #3416 HDR images loading additional smaller… (#3454)
* Fix to LoadImage Node for #3416 HDR images loading additional smaller images.

Added a blocking if statement in the ImageSequence.Iterator loop that checks whether subsequent images match the dimensions of the first, and prevents them from being appended to output_images if they do not.

This does not fix or change current behavior for PIL 10.2.0, where the images are loaded at the same size, but it does for 10.3.0, where they are loaded at their correct smaller sizes.

* Added a list of excluded formats that should return 1 image

Added an explicit check for the image format so that additional formats with problematic behavior can be added to the list.
2024-05-12 07:07:38 -04:00
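A minimal sketch of the dimension check described in that commit, using plain Pillow rather than ComfyUI's actual LoadImage code:

```python
from PIL import Image, ImageSequence

def load_frames(path):
    img = Image.open(path)
    output_images = []
    first_size = None
    for frame in ImageSequence.Iterator(img):
        if first_size is None:
            first_size = frame.size
        elif frame.size != first_size:
            # On Pillow 10.3.0, secondary HDR thumbnails load at their true
            # (smaller) size; skip them instead of appending mismatched frames.
            continue
        output_images.append(frame.convert("RGB"))
    return output_images
```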
Simon Lui
f509c6fe21 Fix Intel GPU memory allocation accuracy and documentation update. (#3459)
* Change the calculation of total memory to be more accurate; allocated memory is actually smaller than reserved memory.
* Update README.md install documentation for Intel GPUs.
2024-05-12 06:36:30 -04:00
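For context on the allocated-versus-reserved distinction, a hedged sketch using PyTorch's caching-allocator statistics; the `torch.xpu` branch assumes an Intel build (e.g. via intel_extension_for_pytorch) that mirrors the CUDA memory API:

```python
import torch

def report_accelerator_memory():
    # The caching allocator reserves more memory from the device than it has
    # actually handed out to tensors, so "reserved" is the better estimate of
    # what the process is really occupying.
    if torch.cuda.is_available():  # CUDA and ROCm builds
        allocated, reserved = torch.cuda.memory_allocated(), torch.cuda.memory_reserved()
    elif hasattr(torch, "xpu") and torch.xpu.is_available():  # Intel GPUs
        allocated, reserved = torch.xpu.memory_allocated(), torch.xpu.memory_reserved()
    else:
        return
    print(f"allocated={allocated / 2**20:.1f} MiB, reserved={reserved / 2**20:.1f} MiB")
```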
comfyanonymous
fa6dd7e5bb Fix lowvram issue with saving checkpoints.
The previous fix didn't cover the case where the model was loaded in lowvram mode right before.
2024-05-12 06:13:45 -04:00
comfyanonymous
49c20cdc70 No longer necessary. 2024-05-12 05:34:43 -04:00
comfyanonymous
e1489ad257 Fix issue with lowvram mode breaking model saving. 2024-05-11 21:55:20 -04:00