ComfyUI/comfy
Raphael Walker 61b50720d0
Add support for attention masking in Flux (#5942)
* fix attention OOM in xformers

* allow passing attention mask in flux attention

* allow an attn_mask in flux

* attn masks can be done using replace patches instead of a separate dict

* fix return types

* fix return order

* enumerate

* patch the right keys

* arg names

* fix a silly bug

* fix xformers masks

* replace match with if, elif, else

* mask with image_ref_size

* remove unused import

* remove unused import 2

* fix pytorch/xformers attention

This corrects a weird inconsistency with skip_reshape.
It also allows masks of various shapes to be passed, which will be
automatically expanded (in a memory-efficient way) to a size that is
compatible with xformers or PyTorch SDPA respectively.

* fix mask shapes
2024-12-16 18:21:17 -05:00
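The mask-expansion idea described in the commit body above can be sketched as follows. This is a minimal illustration, not the actual ComfyUI implementation: it assumes PyTorch ≥ 2.0 is available, and the helper name `expand_attn_mask` is hypothetical. The key point is that `Tensor.expand` returns a broadcast view without copying memory, so a small 2-D mask can be presented to `torch.nn.functional.scaled_dot_product_attention` in its expected `(batch, heads, q_len, k_len)` layout at no extra memory cost.

```python
import torch
import torch.nn.functional as F

def expand_attn_mask(mask, batch, heads, q_len, k_len):
    # Accept masks shaped (q, k), (b, q, k), or (b, h, q, k) and
    # broadcast-expand them (a view, no copy) to (b, h, q, k),
    # the layout scaled_dot_product_attention expects.
    if mask.ndim == 2:
        mask = mask.unsqueeze(0).unsqueeze(0)   # -> (1, 1, q, k)
    elif mask.ndim == 3:
        mask = mask.unsqueeze(1)                # -> (b, 1, q, k)
    return mask.expand(batch, heads, q_len, k_len)

# Toy usage: batch=2, heads=8, seq=16, head_dim=64.
q = torch.randn(2, 8, 16, 64)
k = torch.randn(2, 8, 16, 64)
v = torch.randn(2, 8, 16, 64)
# A single 2-D boolean mask (True = attend) shared across batch and heads.
m = torch.ones(16, 16, dtype=torch.bool)
out = F.scaled_dot_product_attention(
    q, k, v, attn_mask=expand_attn_mask(m, 2, 8, 16, 16))
```

Because the expanded mask is only a view, passing one small mask for many heads costs no more memory than the original 2-D tensor; xformers' `memory_efficient_attention` accepts analogously broadcast bias tensors.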
cldm Lint all unused variables (#5989) 2024-12-12 17:59:16 -05:00
comfy_types [Developer Experience] Add node typing (#5676) 2024-12-04 15:01:00 -05:00
extra_samplers Lint all unused variables (#5989) 2024-12-12 17:59:16 -05:00
k_diffusion Lint all unused variables (#5989) 2024-12-12 17:59:16 -05:00
ldm Add support for attention masking in Flux (#5942) 2024-12-16 18:21:17 -05:00
sd1_tokenizer
t2i_adapter
taesd
text_encoders Lint all unused variables (#5989) 2024-12-12 17:59:16 -05:00
checkpoint_pickle.py
cli_args.py Add option to inference the diffusion model in fp32 and fp64. 2024-11-25 05:00:23 -05:00
clip_config_bigg.json
clip_model.py Support new flux model variants. 2024-11-21 08:38:23 -05:00
clip_vision_config_g.json
clip_vision_config_h.json
clip_vision_config_vitl_336.json
clip_vision_config_vitl.json
clip_vision_siglip_384.json Support new flux model variants. 2024-11-21 08:38:23 -05:00
clip_vision.py Add a way to disable cropping in the CLIPVisionEncode node. 2024-11-28 20:24:47 -05:00
conds.py
controlnet.py Enforce all pyflake lint rules (#6033) 2024-12-12 19:29:37 -05:00
diffusers_convert.py
diffusers_load.py load_unet -> load_diffusion_model with a model_options argument. 2024-08-12 23:20:57 -04:00
float.py Clamp output when rounding weight to prevent Nan. 2024-10-19 19:07:10 -04:00
gligen.py Lint and fix undefined names (1/N) (#6028) 2024-12-12 18:55:26 -05:00
hooks.py Lint all unused variables (#5989) 2024-12-12 17:59:16 -05:00
latent_formats.py Fast previews for ltxv. 2024-11-28 06:46:15 -05:00
lora_convert.py Support new flux model variants. 2024-11-21 08:38:23 -05:00
lora.py ModelPatcher Overhaul and Hook Support (#5583) 2024-12-02 14:51:02 -05:00
model_base.py Add support for attention masking in Flux (#5942) 2024-12-16 18:21:17 -05:00
model_detection.py Lint all unused variables (#5989) 2024-12-12 17:59:16 -05:00
model_management.py Lint all unused variables (#5989) 2024-12-12 17:59:16 -05:00
model_patcher.py Make ModelPatcher class clone function work with inheritance. 2024-12-03 13:57:57 -05:00
model_sampling.py Make timestep ranges more usable on rectified flow models. 2024-12-05 16:40:58 -05:00
ops.py clamp input (#5928) 2024-12-07 14:00:31 -05:00
options.py
patcher_extension.py Enforce all pyflake lint rules (#6033) 2024-12-12 19:29:37 -05:00
sample.py
sampler_helpers.py Lint all unused variables (#5989) 2024-12-12 17:59:16 -05:00
samplers.py Lint all unused variables (#5989) 2024-12-12 17:59:16 -05:00
sd1_clip_config.json
sd1_clip.py Lint all unused variables (#5989) 2024-12-12 17:59:16 -05:00
sd.py Lint all unused variables (#5989) 2024-12-12 17:59:16 -05:00
sdxl_clip.py Long CLIP L support for SDXL, SD3 and Flux. 2024-09-15 07:59:38 -04:00
supported_models_base.py Mixed precision diffusion models with scaled fp8. 2024-10-21 18:12:51 -04:00
supported_models.py Lint all unused variables (#5989) 2024-12-12 17:59:16 -05:00
utils.py Add support for attention masking in Flux (#5942) 2024-12-16 18:21:17 -05:00