Commit Graph

2048 Commits

Author SHA1 Message Date
doctorpangloss  1d901e32eb  Merge branch 'master' of github.com:comfyanonymous/ComfyUI  2025-06-17 13:20:31 -07:00
doctorpangloss  fb7846db50  Merge branch 'master' of github.com:hiddenswitch/ComfyUI  2025-06-17 13:02:48 -07:00
doctorpangloss  7d5a9f17a4  Add more known models  2025-06-17 12:47:48 -07:00
doctorpangloss  adb68f5623  fix logging in model_downloader, remove nf4 flux support since it is broken and unused  2025-06-17 12:14:08 -07:00
doctorpangloss  3d0306b89f  more fixes from pylint  2025-06-17 11:36:41 -07:00
doctorpangloss  d79d7a7e08  fix imports and other basic problems  2025-06-17 11:19:48 -07:00
doctorpangloss  e8cf6489d2  modify hook breaker to work  2025-06-17 10:39:51 -07:00
doctorpangloss  cdcbdaecb0  move hook breaker  2025-06-17 10:39:27 -07:00
doctorpangloss  3bf0b01e47  del hook breaker  2025-06-17 10:39:09 -07:00
doctorpangloss  2b293de1b1  move comfyui_version to the right place so it's registered as a move  2025-06-17 10:36:20 -07:00
doctorpangloss  a5cd8fae7e  rm __init__.py which has the version  2025-06-17 10:35:45 -07:00
doctorpangloss  82388d51a2  Merge branch 'master' of github.com:comfyanonymous/ComfyUI  2025-06-17 10:35:10 -07:00
chaObserv  8e81c507d2  Multistep DPM++ SDE samplers for RF (#8541)  2025-06-16 14:47:10 -04:00
    Include alpha in sampling and minor refactoring
comfyanonymous  e1c6dc720e  Allow setting min_length with tokenizer_data. (#8547)  2025-06-16 13:43:52 -04:00
comfyanonymous  7ea79ebb9d  Add correct eps to ltxv rmsnorm. (#8542)  2025-06-15 12:21:25 -04:00
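For context on the eps fix above: in RMSNorm, eps is the small constant added under the square root so the normalization stays finite when activations are near zero. A minimal, generic sketch of the technique (not the ltxv implementation itself):

```python
import torch
import torch.nn as nn

class RMSNorm(nn.Module):
    """Generic RMSNorm sketch: scale by the reciprocal root-mean-square of the
    features, with eps added inside the sqrt for numerical stability."""

    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Mean of squares over the last dimension; eps keeps rsqrt finite.
        rms = torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + self.eps)
        return x * rms * self.weight
```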
comfyanonymous  d6a2137fc3  Support Cosmos predict2 image to video models. (#8535)  2025-06-14 21:37:07 -04:00
    Use the CosmosPredict2ImageToVideoLatent node.
chaObserv  53e8d8193c  Generalize SEEDS samplers (#8529)  2025-06-14 16:58:16 -04:00
    Restore VP algorithm for RF and refactor noise_coeffs and half-logSNR calculations
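As background on the half-logSNR mentioned in the commit above: for a schedule with signal scale alpha_t and noise scale sigma_t, the half-logSNR is lambda_t = log(alpha_t / sigma_t). A small sketch, assuming the usual rectified-flow convention alpha_t = 1 - t, sigma_t = t (not necessarily the exact convention used in the PR):

```python
import math

def half_log_snr_rf(t: float) -> float:
    """Half-logSNR lambda_t = log(alpha_t / sigma_t) for a rectified-flow
    schedule, assuming alpha_t = 1 - t and sigma_t = t with 0 < t < 1."""
    return math.log((1.0 - t) / t)

# At t = 0.5 the signal and noise scales match, so lambda is 0.
print(half_log_snr_rf(0.5))  # 0.0
```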
comfyanonymous  29596bd53f  Small cosmos attention code refactor. (#8530)  2025-06-14 05:02:05 -04:00
Kohaku-Blueleaf  520eb77b72  LoRA Trainer: LoRA training node in weight adapter scheme (#8446)  2025-06-13 19:25:59 -04:00
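For readers unfamiliar with the weight-adapter idea behind LoRA training: the frozen weight is augmented with a trainable low-rank update scaled by alpha / r. A minimal, generic sketch (not the trainer node added in the PR):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA adapter around a frozen linear layer:
    y = base(x) + (alpha / r) * B(A(x)), where A and B are low-rank."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # freeze the original weights
        self.lora_a = nn.Linear(base.in_features, r, bias=False)
        self.lora_b = nn.Linear(r, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)  # start as a no-op update
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))
```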
comfyanonymous  c69af655aa  Uncap cosmos predict2 res and fix mem estimation. (#8518)  2025-06-13 07:30:18 -04:00
comfyanonymous  251f54a2ad  Basic initial support for cosmos predict2 text to image 2B and 14B models. (#8517)  2025-06-13 07:05:23 -04:00
pythongosssss  50c605e957  Add support for sqlite database (#8444)  2025-06-11 16:43:39 -04:00
    * Add support for sqlite database
    * fix
comfyanonymous  8a4ff747bd  Fix mistake in last commit. (#8496)  2025-06-11 15:13:29 -04:00
    * Move to right place.
comfyanonymous  af1eb58be8  Fix black images on some flux models in fp16. (#8495)  2025-06-11 15:09:11 -04:00
Benjamin Berman  cc9a0935a4  fix issue when trying to normalize paths for comparison purposes and the node contains an integer list value  2025-06-11 07:23:57 -07:00
comfyanonymous  6e28a46454  Apple most likely is never fixing the fp16 attention bug. (#8485)  2025-06-10 13:06:24 -04:00
comfyanonymous  7f800d04fa  Enable AMD fp8 and pytorch attention on some GPUs. (#8474)  2025-06-09 12:50:39 -04:00
    Information is from the pytorch source code.
comfyanonymous  97755eed46  Enable fp8 ops by default on gfx1201 (#8464)  2025-06-08 14:15:34 -04:00
comfyanonymous  daf9d25ee2  Cleaner torch version comparisons. (#8453)  2025-06-07 10:01:15 -04:00
comfyanonymous  3b4b171e18  Alternate fix for #8435 (#8442)  2025-06-06 09:43:27 -04:00
Benjamin Berman  d94b0cce93  better logging  2025-06-05 20:59:36 -07:00
comfyanonymous  4248b1618f  Let chroma TE work on regular flux. (#8429)  2025-06-05 10:07:17 -04:00
comfyanonymous  fb4754624d  Make the casting in lists the same as regular inputs. (#8373)  2025-06-01 05:39:54 -04:00
comfyanonymous  19e45e9b0e  Make it easier to pass lists of tensors to models. (#8358)  2025-05-31 20:00:20 -04:00
drhead  08b7cc7506  use fused multiply-add pointwise ops in chroma (#8279)  2025-05-30 18:09:54 -04:00
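The fused multiply-add change above refers to collapsing a separate multiply and add into a single pointwise kernel; in PyTorch this is commonly expressed with torch.addcmul. A generic sketch of the pattern (not the chroma code itself):

```python
import torch

x = torch.randn(4, 8)
scale = torch.randn(4, 8)
shift = torch.randn(4, 8)

# Two pointwise kernels: a multiply followed by an add.
out_unfused = shift + x * scale

# One fused pointwise op: out = shift + x * scale via addcmul.
out_fused = torch.addcmul(shift, x, scale)

assert torch.allclose(out_unfused, out_fused)
```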
comfyanonymous  704fc78854  Put ROCm version in tuple to make it easier to enable stuff based on it. (#8348)  2025-05-30 15:41:02 -04:00
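Storing a version as a tuple of integers makes feature gating a plain comparison, since Python compares tuples element by element. A small sketch of the general pattern (hypothetical version string, not the actual detection code):

```python
def parse_version(version: str) -> tuple[int, ...]:
    """Turn a dotted version string like '6.4' into (6, 4), dropping any
    non-numeric suffix so comparisons stay purely numeric."""
    parts = []
    for piece in version.split("."):
        digits = "".join(ch for ch in piece if ch.isdigit())
        if not digits:
            break
        parts.append(int(digits))
    return tuple(parts)

rocm_version = parse_version("6.4")  # hypothetical example value
if rocm_version >= (6, 4):
    print("enable the newer code path")
```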
comfyanonymous  f2289a1f59  Delete useless file. (#8327)  2025-05-29 08:29:37 -04:00
comfyanonymous  5e5e46d40c  Not really tested WAN Phantom Support. (#8321)  2025-05-28 23:46:15 -04:00
comfyanonymous  1c1687ab1c  Support HiDream SimpleTuner loras. (#8318)  2025-05-28 18:47:15 -04:00
comfyanonymous  06c661004e  Memory estimation code can now take into account conds. (#8307)  2025-05-27 15:09:05 -04:00
comfyanonymous  89a84e32d2  Disable initial GPU load when novram is used. (#8294)  2025-05-26 16:39:27 -04:00
comfyanonymous  e5799c4899  Enable pytorch attention by default on AMD gfx1151 (#8282)  2025-05-26 04:29:25 -04:00
comfyanonymous  a0651359d7  Return proper error if diffusion model not detected properly. (#8272)  2025-05-25 05:28:11 -04:00
comfyanonymous  5a87757ef9  Better error if sageattention is installed but a dependency is missing. (#8264)  2025-05-24 06:43:12 -04:00
comfyanonymous  0b50d4c0db  Add argument to explicitly enable fp8 compute support. (#8257)  2025-05-23 17:43:50 -04:00
    This can be used to test if your current GPU/pytorch version supports fp8 matrix mult in combination with --fast or the fp8_e4m3fn_fast dtype.
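The commit body above describes using the new argument to test fp8 matrix multiplication support. One way such a probe can work is to attempt a tiny fp8 matmul and catch the failure; a hedged sketch of that idea using PyTorch's private torch._scaled_mm (its signature has varied across releases, so this is illustrative only and not ComfyUI's actual check):

```python
import torch

def supports_fp8_compute(device: str = "cuda") -> bool:
    """Try a small float8_e4m3fn matmul and report whether it runs.
    torch._scaled_mm is a private API; treat this probe as a sketch."""
    try:
        a = torch.randn(16, 16, device=device).to(torch.float8_e4m3fn)
        b = torch.randn(16, 16, device=device).to(torch.float8_e4m3fn).t()
        scale = torch.ones((), device=device)
        torch._scaled_mm(a, b, scale_a=scale, scale_b=scale,
                         out_dtype=torch.bfloat16)
        return True
    except Exception:
        return False

print(supports_fp8_compute())
```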
drhead  30b2eb8a93  create arange on-device (#8255)  2025-05-23 16:15:06 -04:00
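"On-device" in the commit above means passing device= to the factory function so the tensor is allocated directly on the GPU rather than built on the CPU and copied over. A generic sketch:

```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Creates the tensor on the CPU, then pays for a host-to-device copy.
positions_cpu = torch.arange(1024).to(device)

# Creates the tensor directly on the target device, skipping the copy.
positions = torch.arange(1024, device=device)

assert torch.equal(positions_cpu, positions)
```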
comfyanonymous  f85c08df06  Make VACE conditionings stackable. (#8240)  2025-05-22 19:22:26 -04:00
comfyanonymous  87f9130778  Revert "This doesn't seem to be needed on chroma. (#8209)" (#8210)  2025-05-20 05:39:55 -04:00
    This reverts commit 7e84bf5373.
comfyanonymous  7e84bf5373  This doesn't seem to be needed on chroma. (#8209)  2025-05-20 05:29:23 -04:00
doctorpangloss  6d8deff056  Switch to uv for packaging  2025-05-19 15:58:08 -07:00