Commit Graph

3758 Commits

Author SHA1 Message Date
doctorpangloss
fb4ea2dc6d 16 bit color support for TIFF and PNG, 16 and 32 bit floating point
support in EXR
2025-02-21 17:55:02 -08:00
doctorpangloss
42f75cadc0 Generators 2025-02-20 15:00:07 -08:00
Benjamin Berman
0cfde0ad6d Fix pylint issues 2025-02-18 20:23:09 -08:00
Benjamin Berman
83ae94b96c Fix absolute import 2025-02-18 20:09:56 -08:00
Benjamin Berman
1e74a4cf08 Fix absolute imports, fix linting issue with dataclass 2025-02-18 19:59:09 -08:00
Benjamin Berman
ffc6a7fd38 Use spawn multiprocessing context to fix Linux ProcessPool issues 2025-02-18 19:46:57 -08:00
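The spawn-context fix above relies on a standard-library capability; a minimal sketch of the pattern, with a hypothetical `run_workflow` standing in for the actual worker entry point:

```python
import multiprocessing
from concurrent.futures import ProcessPoolExecutor

def run_workflow(prompt: dict) -> str:
    # Hypothetical stand-in for executing a ComfyUI workflow in a child process.
    return f"executed workflow with {len(prompt)} nodes"

if __name__ == "__main__":
    # "spawn" starts a fresh interpreter per worker, avoiding the fork-related
    # CUDA/threading issues that "fork" (the Linux default) can trigger.
    ctx = multiprocessing.get_context("spawn")
    with ProcessPoolExecutor(max_workers=1, mp_context=ctx) as pool:
        print(pool.submit(run_workflow, {"a": 1, "b": 2}).result())
```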
doctorpangloss
e65faca817 Distributed setup now defaults to panicking when out of memory, to facilitate graceful recovery 2025-02-18 15:07:02 -08:00
doctorpangloss
3ddec8ae90 Better support for process pool executors
- --panics-when=torch.cuda.OutOfMemory will now correctly panic and exit the worker, giving it time to reply that the execution failed and handling irrecoverable out-of-memory errors better
- --executor-factory=ProcessPoolExecutor will use a process instead of a thread to execute comfyui workflows when using the worker. When this process panics and exits, it will be correctly replaced, making for a more robust worker (sketched after this entry)
2025-02-18 14:37:20 -08:00
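The flags above are this repository's own options; the sketch below only illustrates the general pattern they describe, under assumptions of my own (the `execute_prompt` body and job shapes are hypothetical): a child process that hard-exits on CUDA OOM and a parent that rebuilds a broken pool.

```python
import multiprocessing
import os
from concurrent.futures import ProcessPoolExecutor
from concurrent.futures.process import BrokenProcessPool

import torch

def execute_prompt(prompt: dict) -> str:
    # Hypothetical stand-in for running one ComfyUI workflow in a child process.
    try:
        device = "cuda" if torch.cuda.is_available() else "cpu"
        torch.zeros(prompt["size"], device=device)
        return f"ok: size={prompt['size']}"
    except torch.cuda.OutOfMemoryError:
        # "Panic": hard-exit so the parent sees the failure rather than a worker
        # limping along with a fragmented CUDA allocator.
        os._exit(1)

if __name__ == "__main__":
    jobs = [{"size": 8}, {"size": 8}]
    if torch.cuda.is_available():
        jobs.insert(1, {"size": 2 ** 40})  # a ~4 TiB request, to exercise the panic path
    ctx = multiprocessing.get_context("spawn")
    pool = ProcessPoolExecutor(max_workers=1, mp_context=ctx)
    for job in jobs:
        try:
            print(pool.submit(execute_prompt, job).result())
        except BrokenProcessPool:
            # A dead child breaks the whole pool; rebuild it so the worker keeps serving.
            pool = ProcessPoolExecutor(max_workers=1, mp_context=ctx)
            print("worker panicked, pool replaced")
        except Exception as exc:
            print(f"job failed without killing the worker: {exc}")
```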
doctorpangloss
684d180446 Users can now configure their workers to panic on out-of-memory exceptions, which can occur due to complex failures in custom nodes 2025-02-18 10:57:23 -08:00
doctorpangloss
d04288ce8d ImagePadForOutpaint now correctly returns a MaskBatch 2025-02-16 15:39:36 -08:00
doctorpangloss
d404ab3185 Fix images None issue 2025-02-15 17:06:27 -08:00
doctorpangloss
7bf9a86fc1 Fix another None images case here 2025-02-15 17:00:46 -08:00
doctorpangloss
3a3e31a0a5 Fix unexpected None type for images 2025-02-15 16:52:08 -08:00
doctorpangloss
d4218f3f19 Fix NOFLAG not present on python 3.10 (?) 2025-02-14 16:54:55 -08:00
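re.NOFLAG only exists from Python 3.11 on, so code that references it breaks on 3.10; a common guard (a sketch, not necessarily this commit's exact fix):

```python
import re

# re.NOFLAG was added in Python 3.11; on 3.10 fall back to the integer it equals.
NOFLAG = getattr(re, "NOFLAG", 0)

def compile_pattern(pattern: str, flags: int = NOFLAG) -> re.Pattern:
    return re.compile(pattern, flags)

print(compile_pattern(r"\d+").findall("a1b22"))  # ['1', '22']
```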
doctorpangloss
87a4af84ae Fix regexp match expand returning wrong type 2025-02-14 16:03:46 -08:00
doctorpangloss
0ca30c3c87 export_custom_nodes now handles abstract base classes better 2025-02-14 15:36:51 -08:00
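export_custom_nodes and CustomNode are names taken from the commit message; assuming discovery walks the subclass tree, a hedged sketch of skipping abstract base classes looks like this:

```python
import inspect
from abc import ABC, abstractmethod

class CustomNode(ABC):
    @abstractmethod
    def execute(self): ...

class _BaseImageNode(CustomNode, ABC):
    """Abstract helper base; should not be exported."""

class InvertImage(_BaseImageNode):
    def execute(self):
        return "inverted"

def export_custom_nodes() -> list[type]:
    # Walk the subclass tree and keep only concrete classes, so abstract
    # helper bases are never registered as usable nodes.
    found, stack = [], [CustomNode]
    while stack:
        cls = stack.pop()
        stack.extend(cls.__subclasses__())
        if cls is not CustomNode and not inspect.isabstract(cls):
            found.append(cls)
    return found

print([cls.__name__ for cls in export_custom_nodes()])  # ['InvertImage']
```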
doctorpangloss
f4e65590b8 Fix subfolder being None when images are viewed 2025-02-14 07:20:58 -08:00
doctorpangloss
31b6b53236 Quality of life improvements
- export_custom_nodes() finds all the classes that inherit from CustomNode and exports them correctly for custom node discovery to find
- regular expressions
- additional string formatting and parsing nodes
2025-02-12 14:12:10 -08:00
doctorpangloss
cf08b11132 Colour package added to requirements 2025-02-12 12:48:51 -08:00
doctorpangloss
0a1b118cd4 Try to free disk space using a better script 2025-02-07 07:23:01 -08:00
doctorpangloss
713c3f06af Use vanilla container 2025-02-07 07:05:48 -08:00
doctorpangloss
3fa9e98d02 Try to run locally, remove unused workflows, add compose file 2025-02-07 06:53:18 -08:00
doctorpangloss
ef74b9fdda More graceful health check handling when this connection is not ready 2025-02-06 11:08:09 -08:00
doctorpangloss
4c72ef5bac Try to free more disk space in github actions runner 2025-02-06 10:43:58 -08:00
doctorpangloss
dfa3a36f83 maximize build space 2025-02-06 10:05:43 -08:00
doctorpangloss
7e122202f6 set image name 2025-02-06 09:56:31 -08:00
doctorpangloss
49fcfaedce Fix github actions docker image workflow 2025-02-06 09:48:41 -08:00
doctorpangloss
f4647a03e7 Add Docker build action 2025-02-06 09:25:56 -08:00
doctorpangloss
f346e3c82e Optimize Dockerfile 2025-02-06 09:22:02 -08:00
doctorpangloss
5b3eb2e51c Fix torch.zeroes error 2025-02-06 09:00:10 -08:00
doctorpangloss
3f1f427ff4 Distinct Seed and Seed64 input specs. numpy only supports 32 bit seeds 2025-02-05 14:08:09 -08:00
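The Seed/Seed64 split reflects a real numpy constraint: the legacy numpy.random.seed rejects values of 2**32 or larger, while the newer Generator API accepts full 64-bit seeds. A small illustration (not the repo's input-spec code):

```python
import numpy as np

seed64 = 0xDEADBEEFCAFEF00D  # a 64-bit seed, as a workflow might supply

# Legacy global seeding rejects anything >= 2**32, so fold the seed down first.
np.random.seed(seed64 & 0xFFFFFFFF)

# The Generator API accepts the full 64-bit value directly.
rng = np.random.default_rng(seed64)
print(np.random.randint(0, 10), rng.integers(0, 10))
```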
doctorpangloss
6ab1aa1e8a Improving MLLM/VLLM support and fixing bugs
- fix #29: str(model) no longer raises exceptions, as it did with HyVideoModelLoader
- don't try to format CUDA tensors, because that can sometimes raise exceptions (see the sketch after this entry)
- cudaAllocAsync has been disabled for now due to 2.6.0 bugs
- improve florence2 support
- add support for paligemma 2. This requires the fix for transformers that is currently staged in another repo; install it with
  `uv pip install --no-deps "transformers@git+https://github.com/zucchini-nlp/transformers.git#branch=paligemma-fix-kwargs"`
- triton has been updated
- fix missing __init__.py files
2025-02-05 14:02:28 -08:00
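The bullet about not formatting CUDA tensors describes a general defensive logging pattern; a hedged sketch (not the project's exact helper) that reports only metadata:

```python
import torch

def describe(value) -> str:
    # Formatting a CUDA tensor forces a host copy / device sync and can itself
    # raise (e.g. after an earlier CUDA error), so report only metadata.
    if isinstance(value, torch.Tensor):
        return f"Tensor(shape={tuple(value.shape)}, dtype={value.dtype}, device={value.device})"
    return repr(value)

print(describe(torch.zeros(2, 3)))
print(describe("a plain string"))
```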
doctorpangloss
dcac115f68 Revert "Update logging when models are loaded"
This reverts commit 0d15a091c2.
2025-02-04 15:18:00 -08:00
doctorpangloss
80db9a8e25 Florence2 2025-02-04 15:17:14 -08:00
doctorpangloss
ce3583ad42 relax numpy requirements 2025-02-04 09:40:22 -08:00
doctorpangloss
1a24ceef79 Updates for torch 2.6.0, prepare Anthropic nodes, accept multiple logging levels 2025-02-04 09:27:18 -08:00
Benjamin Berman
fac670da89
Merge pull request #28 from leszko/patch-2
Update logging when models are loaded
2025-02-04 08:28:30 -08:00
Rafał Leszko
0d15a091c2
Update logging when models are loaded
The "Loaded " log was logged even if no model were actually loaded into VRAM
2025-02-04 14:44:12 +01:00
doctorpangloss
1488f2c59b Logger should check attributes on current sys.stdout, which may have been overwritten 2025-01-31 10:58:11 -08:00
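Because sys.stdout can be swapped out at runtime (by frontends, tests, or log capture), feature checks should inspect whatever object is currently bound rather than a reference captured at import time; a minimal sketch of that idea:

```python
import sys

def stdout_supports_color() -> bool:
    # Look up sys.stdout at call time: the original stream may have been
    # replaced, and the replacement may not implement isatty() at all.
    stream = sys.stdout
    isatty = getattr(stream, "isatty", None)
    return callable(isatty) and isatty()

print(stdout_supports_color())
```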
doctorpangloss
c1b173e62a Update to cu124 for torch 2.6.0 support 2025-01-31 09:13:47 -08:00
doctorpangloss
95a12f42e2 Fix pylint errors (they were real, as they usually are) 2025-01-28 17:16:15 -08:00
doctorpangloss
d24098cd9b Fix mask uploads 2025-01-28 16:34:59 -08:00
doctorpangloss
044dff6887 Updates and fixes
- Update to latest triton
- Fix huggingface hub automatic downloads
- Latest transformers may require updating huggingface llava models
- Compiling flux with fp8 weights is not supported
2025-01-28 16:22:09 -08:00
doctorpangloss
a3452f6e6a Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2025-01-28 13:45:51 -08:00
comfyanonymous
13fd4d6e45 More friendly error messages for corrupted safetensors files. 2025-01-28 09:41:09 -05:00
Bradley Reynolds
1210d094c7
Convert latents_ubyte to 8-bit unsigned int before converting to CPU (#6300)
* Convert latents_ubyte to 8-bit unsigned int before converting to CPU

* Only convert to uint8 if directml_enabled
2025-01-28 08:22:54 -05:00
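The PR above reorders the dtype cast and the CPU transfer; a rough sketch of the pattern (the scaling and the `latents_to_ubyte` name here are illustrative, not the project's exact preview code):

```python
import torch

def latents_to_ubyte(latents: torch.Tensor) -> torch.Tensor:
    # Cast to uint8 while still on the source device, then move to the CPU.
    # The PR gates this ordering on directml_enabled, where transferring the
    # float tensor first and converting afterwards misbehaved.
    latents_ubyte = (latents.clamp(0.0, 1.0) * 255.0).to(dtype=torch.uint8)
    return latents_ubyte.cpu()

preview = latents_to_ubyte(torch.rand(3, 64, 64))
print(preview.dtype, preview.shape)
```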
comfyanonymous
255edf2246 Lower minimum ratio of loaded weights on Nvidia. 2025-01-27 05:26:51 -05:00
comfyanonymous
4f011b9a00 Better CLIPTextEncode error when clip input is None. 2025-01-26 06:04:57 -05:00
comfyanonymous
67feb05299 Remove redundant code. 2025-01-25 19:04:53 -05:00
comfyanonymous
6d21740346 Print ComfyUI version. 2025-01-25 15:03:57 -05:00