doctorpangloss
f4e65590b8
Fix subfolder being None when images are viewed
2025-02-14 07:20:58 -08:00
doctorpangloss
31b6b53236
Quality of life improvements
- export_custom_nodes() finds every class that inherits from CustomNode and exports it so custom node discovery can register it (see the sketch after this entry)
- regular expressions
- additional string formatting and parsing nodes
2025-02-12 14:12:10 -08:00
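The discovery described in the first bullet might look roughly like the sketch below; `CustomNode`, the module scan, and the `NODE_CLASS_MAPPINGS` usage line are assumptions drawn from the commit message, not the project's actual implementation.

```python
import inspect
import sys


class CustomNode:
    """Stand-in for the project's CustomNode base class (assumption)."""


def export_custom_nodes(module_name: str) -> dict:
    """Collect every concrete CustomNode subclass defined in a module so
    node discovery can register it by class name."""
    module = sys.modules[module_name]
    exported = {}
    for name, obj in inspect.getmembers(module, inspect.isclass):
        # Only export subclasses that were actually declared in this module.
        if issubclass(obj, CustomNode) and obj is not CustomNode and obj.__module__ == module_name:
            exported[name] = obj
    return exported


# Hypothetical usage from inside a custom-node package:
# NODE_CLASS_MAPPINGS = export_custom_nodes(__name__)
```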
doctorpangloss
ef74b9fdda
More graceful health check handling of this connection not being ready
2025-02-06 11:08:09 -08:00
doctorpangloss
5b3eb2e51c
Fix torch.zeroes error
2025-02-06 09:00:10 -08:00
doctorpangloss
3f1f427ff4
Distinct Seed and Seed64 input specs; numpy only supports 32-bit seeds
2025-02-05 14:08:09 -08:00
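For context on the Seed64 split: NumPy's legacy `np.random.seed` rejects values at or above 2**32, so a 64-bit seed has to be reduced before use. A hypothetical sketch (`seed_numpy` is illustrative, not project code):

```python
import numpy as np


def seed_numpy(seed64: int) -> None:
    """Fold an arbitrary 64-bit seed into the 32-bit range numpy accepts."""
    np.random.seed(seed64 % (2 ** 32))


seed_numpy(0xDEADBEEFCAFEBABE)   # would raise ValueError without the modulo
print(np.random.randint(0, 10))  # reproducible for a given 64-bit seed
```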
doctorpangloss
6ab1aa1e8a
Improving MLLM/VLLM support and fixing bugs
- fix #29: str(model) no longer raises exceptions, as it did with HyVideoModelLoader
- don't try to format CUDA tensors, because that can sometimes raise exceptions (see the sketch after this entry)
- cudaAllocAsync has been disabled for now due to torch 2.6.0 bugs
- improve florence2 support
- add support for PaliGemma 2. This requires the transformers fix that is currently staged in another repo; install with
`uv pip install --no-deps "transformers@git+https://github.com/zucchini-nlp/transformers.git#branch=paligemma-fix-kwargs"`
- triton has been updated
- fix missing __init__.py files
2025-02-05 14:02:28 -08:00
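On the point about not formatting CUDA tensors: a defensive logging helper might look like the sketch below. `safe_describe` is hypothetical and only illustrates the pattern the bullets describe; stringifying a CUDA tensor forces a device-to-host copy and can itself fail.

```python
import torch


def safe_describe(value) -> str:
    """Summarize a value for logging without stringifying CUDA tensors."""
    if isinstance(value, torch.Tensor):
        # Report metadata only; never materialize the tensor's contents.
        return f"Tensor(shape={tuple(value.shape)}, dtype={value.dtype}, device={value.device})"
    try:
        return str(value)
    except Exception:
        # str(model) has raised in the wild (see #29), so fall back gracefully.
        return f"<unprintable {type(value).__name__}>"
```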
doctorpangloss
dcac115f68
Revert "Update logging when models are loaded"
This reverts commit 0d15a091c2.
2025-02-04 15:18:00 -08:00
doctorpangloss
80db9a8e25
Florence2
2025-02-04 15:17:14 -08:00
doctorpangloss
1a24ceef79
Updates for torch 2.6.0, prepare Anthropic nodes, accept multiple logging levels
2025-02-04 09:27:18 -08:00
Rafał Leszko
0d15a091c2
Update logging when models are loaded
The "Loaded " log was logged even if no model were actually loaded into VRAM
2025-02-04 14:44:12 +01:00
doctorpangloss
1488f2c59b
Logger should check attributes on the current sys.stdout, which may have been overwritten
2025-01-31 10:58:11 -08:00
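A minimal sketch of the pattern this commit describes, assuming a plain write-based logger; `log_line` is illustrative, not the project's logger.

```python
import sys


def log_line(message: str) -> None:
    """Write to whatever sys.stdout currently is; test runners and
    redirectors replace it, so never cache the stream at import time."""
    stream = sys.stdout  # looked up on every call
    # Probe optional capabilities on the *current* object only.
    if getattr(stream, "isatty", None) and stream.isatty():
        message = f"\033[36m{message}\033[0m"  # colorize only for a real TTY
    stream.write(message + "\n")


log_line("model loaded")
```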
doctorpangloss
95a12f42e2
Fix pylint errors (they were real, as they usually are)
2025-01-28 17:16:15 -08:00
doctorpangloss
d24098cd9b
Fix mask uploads
2025-01-28 16:34:59 -08:00
doctorpangloss
044dff6887
Updates and fixes
- Update to latest triton
- Fix huggingface hub automatic downloads
- Latest transformers may require updating huggingface llava models
- Compiling flux with fp8 weights is not supported
2025-01-28 16:22:09 -08:00
doctorpangloss
a3452f6e6a
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2025-01-28 13:45:51 -08:00
comfyanonymous
13fd4d6e45
More friendly error messages for corrupted safetensors files.
2025-01-28 09:41:09 -05:00
comfyanonymous
255edf2246
Lower minimum ratio of loaded weights on Nvidia.
2025-01-27 05:26:51 -05:00
comfyanonymous
67feb05299
Remove redundant code.
2025-01-25 19:04:53 -05:00
comfyanonymous
14ca5f5a10
Remove useless code.
2025-01-24 06:15:54 -05:00
comfyanonymous
96e2a45193
Remove useless code.
2025-01-23 05:56:23 -05:00
Chenlei Hu
dfa2b6d129
Remove unused function lcm in conds.py (#6572)
2025-01-23 05:54:09 -05:00
comfyanonymous
d6bbe8c40f
Remove support for python 3.8.
2025-01-22 17:04:30 -05:00
doctorpangloss
e5044799da
Fix forward versus backward slash in these params
2025-01-22 13:58:24 -08:00
doctorpangloss
2d4e579503
Fix pylint error
2025-01-22 10:41:23 -08:00
doctorpangloss
b1bcf082af
Native Ideogram support
2025-01-22 10:32:04 -08:00
chaObserv
e857dd48b8
Add gradient estimation sampler (#6554)
2025-01-22 05:29:40 -05:00
comfyanonymous
fb2ad645a3
Add FluxDisableGuidance node to disable using the guidance embed.
2025-01-20 14:50:24 -05:00
comfyanonymous
d8a7a32779
Cleanup old TODO.
2025-01-20 03:44:13 -05:00
Sergii Dymchenko
ebf038d4fa
Use torch.special.expm1 (#6388)
* Use `torch.special.expm1`. This function provides greater precision than `exp(x) - 1` for small values of `x` (illustrated after this entry). Found with TorchFix: https://github.com/pytorch-labs/torchfix/
* Use non-alias
2025-01-19 04:54:32 -05:00
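A quick illustration of the precision claim: for tiny `x`, `exp(x)` rounds to exactly 1.0 in float32, so `exp(x) - 1` loses the value entirely, while `expm1` keeps it.

```python
import torch

x = torch.tensor(1e-10, dtype=torch.float32)

naive = torch.exp(x) - 1          # exp(x) rounds to exactly 1.0 in float32
precise = torch.special.expm1(x)  # evaluated without forming exp(x) first

print(naive.item())    # 0.0 -- the small value is lost to cancellation
print(precise.item())  # ~1e-10
```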
catboxanon
b1a02131c9
Remove comfy.samplers self-import (#6506)
2025-01-18 17:49:51 -05:00
comfyanonymous
507199d9a8
Uni pc sampler now works with audio and video models.
2025-01-18 05:27:58 -05:00
comfyanonymous
2f3ab40b62
Add warning when using old pytorch versions.
2025-01-17 18:47:27 -05:00
comfyanonymous
0aa2368e46
Fix some cosmos fp8 issues.
2025-01-16 17:45:37 -05:00
doctorpangloss
a9347c6713
Fix pylint error
2025-01-16 14:09:37 -08:00
comfyanonymous
cca96a85ae
Fix cosmos VAE failing with videos longer than 121 frames.
2025-01-16 16:30:06 -05:00
doctorpangloss
005459ee77
Release version 0.3.11
2025-01-16 12:40:34 -08:00
doctorpangloss
cf3c96e593
Cosmos support
2025-01-16 12:39:05 -08:00
doctorpangloss
631d9e44c6
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2025-01-16 09:58:02 -08:00
comfyanonymous
31831e6ef1
Code refactor.
2025-01-16 07:23:54 -05:00
comfyanonymous
88ceb28e20
Tweak hunyuan memory usage factor.
2025-01-16 06:31:03 -05:00
comfyanonymous
23289a6a5c
Clean up some debug lines.
2025-01-16 04:24:39 -05:00
comfyanonymous
9d8b6c1f46
More accurate memory estimation for cosmos and hunyuan video.
2025-01-16 03:48:40 -05:00
comfyanonymous
6320d05696
Slightly lower hunyuan video memory usage.
2025-01-16 00:23:01 -05:00
comfyanonymous
25683b5b02
Lower cosmos diffusion model memory usage.
2025-01-15 23:46:42 -05:00
comfyanonymous
4758fb64b9
Lower cosmos VAE memory usage by a bit.
2025-01-15 22:57:52 -05:00
comfyanonymous
008761166f
Optimize first attention block in cosmos VAE.
2025-01-15 21:48:46 -05:00
comfyanonymous
cba58fff0b
Remove unsafe embedding load for very old pytorch.
2025-01-15 04:32:23 -05:00
comfyanonymous
2feb8d0b77
Force safe loading of files in torch format on pytorch 2.4+
If this breaks something for you, make an issue.
2025-01-15 03:50:27 -05:00
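For context, "safe loading" here refers to `torch.load`'s `weights_only=True` mode available on recent PyTorch releases, which restricts unpickling to tensors and plain containers. A minimal illustration; the checkpoint path is a placeholder.

```python
import torch

# weights_only=True prevents arbitrary code execution from a maliciously
# crafted pickle, at the cost of refusing non-tensor custom objects.
state_dict = torch.load("model.ckpt", map_location="cpu", weights_only=True)
```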
Pam
c78a45685d
Rewrite res_multistep sampler and implement res_multistep_cfg_pp sampler. (#6462)
2025-01-14 18:20:06 -05:00
comfyanonymous
3aaabb12d4
Implement Cosmos Image/Video to World (Video) diffusion models.
Use CosmosImageToVideoLatent to set the input image/video.
2025-01-14 05:14:10 -05:00