Mirror of https://github.com/comfyanonymous/ComfyUI.git (synced 2025-12-16 01:37:04 +08:00)
Latest commit:

* Apply cond slice fix
* Add FreeNoise
* Update context_windows.py
* Add option to retain condition by indexes for each window. This allows, for example, Wan/HunyuanVideo image-to-video to "work" by reusing the initial start frame for each window; otherwise windows beyond the first would be pure T2V generations.
* Update context_windows.py
* Allow splitting multiple conds into different windows
* Add handling for audio_embed
* whitespace
* Allow FreeNoise to work on other dims, handle 4D batch timestep. Refactor the FreeNoise function and fix batch handling, as timesteps now seem to be expanded to batch size.
* Disable experimental options for now, so that FreeNoise and the bugfixes can be merged first.

Co-authored-by: Jedrzej Kosinski <kosinkadink1@gmail.com>
Co-authored-by: ozbayb <17261091+ozbayb@users.noreply.github.com>
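The two ideas the commit touches can be illustrated with a minimal sketch. This is not the code in comfy/context_windows.py; the names (`context_windows`, `freenoise_like`) and parameters (`window_len`, `overlap`) are hypothetical. It only shows the general mechanism: slice the temporal dimension into overlapping windows, and build FreeNoise-style noise whose frames beyond the first window are shuffled copies of the first window's noise, so overlapping windows see correlated noise.

```python
# Minimal sketch (hypothetical names, not ComfyUI's implementation):
# overlapping context windows over the temporal dim, plus FreeNoise-style
# noise that reuses shuffled blocks of the first window's noise.
import torch


def context_windows(num_frames: int, window_len: int, overlap: int):
    """Yield lists of frame indices covering num_frames with the given overlap."""
    stride = window_len - overlap
    start = 0
    while start < num_frames:
        end = min(start + window_len, num_frames)
        yield list(range(start, end))
        if end == num_frames:
            break
        start += stride


def freenoise_like(latent: torch.Tensor, window_len: int, overlap: int,
                   dim: int = 2,
                   generator: torch.Generator | None = None) -> torch.Tensor:
    """Sample noise for the whole latent, then overwrite everything past the
    first window with shuffled slices of that initial noise along `dim`."""
    noise = torch.randn(latent.shape, dtype=latent.dtype,
                        device=latent.device, generator=generator)
    num_frames = latent.shape[dim]
    stride = window_len - overlap
    for start in range(window_len, num_frames, stride):
        end = min(start + stride, num_frames)
        # pick (end - start) random frames from the first window and copy them in
        src = torch.randperm(window_len, generator=generator)[: end - start].to(noise.device)
        dst = torch.arange(start, end, device=noise.device)
        noise.index_copy_(dim, dst, noise.index_select(dim, src))
    return noise


# Example: an 81-frame [B, C, T, H, W] latent, 16-frame windows, 4-frame overlap.
latent = torch.zeros(1, 16, 81, 60, 104)
noise = freenoise_like(latent, window_len=16, overlap=4)
windows = list(context_windows(num_frames=81, window_len=16, overlap=4))
print(windows[0][:4], windows[-1][-4:])  # [0, 1, 2, 3] [77, 78, 79, 80]
```

In this sketch, retaining a condition by index per window would amount to always prepending index 0 (the start image's frame) to each window's index list, which matches the commit's description of keeping the initial start frame for every window.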
| Name |
|---|
| audio_encoders/ |
| cldm/ |
| comfy_types/ |
| extra_samplers/ |
| image_encoders/ |
| k_diffusion/ |
| ldm/ |
| sd1_tokenizer/ |
| t2i_adapter/ |
| taesd/ |
| text_encoders/ |
| weight_adapter/ |
| checkpoint_pickle.py |
| cli_args.py |
| clip_config_bigg.json |
| clip_model.py |
| clip_vision_config_g.json |
| clip_vision_config_h.json |
| clip_vision_config_vitl_336_llava.json |
| clip_vision_config_vitl_336.json |
| clip_vision_config_vitl.json |
| clip_vision_siglip_384.json |
| clip_vision_siglip_512.json |
| clip_vision.py |
| conds.py |
| context_windows.py |
| controlnet.py |
| diffusers_convert.py |
| diffusers_load.py |
| float.py |
| gligen.py |
| hooks.py |
| latent_formats.py |
| lora_convert.py |
| lora.py |
| model_base.py |
| model_detection.py |
| model_management.py |
| model_patcher.py |
| model_sampling.py |
| nested_tensor.py |
| ops.py |
| options.py |
| patcher_extension.py |
| pixel_space_convert.py |
| quant_ops.py |
| rmsnorm.py |
| sample.py |
| sampler_helpers.py |
| samplers.py |
| sd1_clip_config.json |
| sd1_clip.py |
| sd.py |
| sdxl_clip.py |
| supported_models_base.py |
| supported_models.py |
| utils.py |