ComfyUI/comfy/ldm
Rattus 32abdc4628 model: remove unused give_pre_end
According to git grep, this is not used now, and was not used in the
initial commit that introduced it (see below).

This semantic is difficult to implement for a temporal roll VAE (and doing
so would defeat the purpose). Rather than implement the complex
conditional, just delete the unused feature (see the sketch below).

(venv) rattus@rattus-box2:~/ComfyUI$ git log --oneline
220afe33 (HEAD) Initial commit.
(venv) rattus@rattus-box2:~/ComfyUI$ git grep give_pre
comfy/ldm/modules/diffusionmodules/model.py:                 resolution, z_channels, give_pre_end=False, tanh_out=False, use_linear_attn=False,
comfy/ldm/modules/diffusionmodules/model.py:        self.give_pre_end = give_pre_end
comfy/ldm/modules/diffusionmodules/model.py:        if self.give_pre_end:

(venv) rattus@rattus-box2:~/ComfyUI$ git co origin/master
Previous HEAD position was 220afe33 Initial commit.
HEAD is now at 9d8a8179 Enable async offloading by default on Nvidia. (#10953)
(venv) rattus@rattus-box2:~/ComfyUI$ git grep give_pre
comfy/ldm/modules/diffusionmodules/model.py:                 resolution, z_channels, give_pre_end=False, tanh_out=False, use_linear_attn=False,
comfy/ldm/modules/diffusionmodules/model.py:        self.give_pre_end = give_pre_end
comfy/ldm/modules/diffusionmodules/model.py:        if self.give_pre_end:
2025-11-30 09:04:49 +10:00
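
For context, a minimal sketch of what give_pre_end controlled, assuming the upstream LDM diffusionmodules/model.py Decoder layout; the DecoderTail class and the block_in/out_ch defaults below are hypothetical stand-ins for illustration, not the actual ComfyUI code or diff:

import torch
import torch.nn as nn
import torch.nn.functional as F


class DecoderTail(nn.Module):
    """Illustrative tail of the LDM Decoder forward pass."""

    def __init__(self, block_in=128, out_ch=3, tanh_out=False):
        super().__init__()
        self.norm_out = nn.GroupNorm(num_groups=32, num_channels=block_in, eps=1e-6)
        self.conv_out = nn.Conv2d(block_in, out_ch, kernel_size=3, stride=1, padding=1)
        self.tanh_out = tanh_out

    def forward(self, h):
        # Previously a `give_pre_end` flag could short-circuit here and return
        # the pre-end feature map `h` before the final norm/nonlinearity/conv:
        #   if self.give_pre_end:
        #       return h
        # With the flag removed, the decoder always produces the final output.
        h = self.norm_out(h)
        h = F.silu(h)  # the upstream code calls this `nonlinearity` (swish)
        h = self.conv_out(h)
        if self.tanh_out:
            h = torch.tanh(h)
        return h


# Usage sketch: decode a dummy pre-end feature map to an RGB-shaped tensor.
tail = DecoderTail()
out = tail(torch.randn(1, 128, 64, 64))
print(out.shape)  # torch.Size([1, 3, 64, 64])
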
ace Remove useless code. (#10223) 2025-10-05 15:41:19 -04:00
audio Enable Runtime Selection of Attention Functions (#9639) 2025-09-12 18:07:38 -04:00
aura Enable Runtime Selection of Attention Functions (#9639) 2025-09-12 18:07:38 -04:00
cascade Enable Runtime Selection of Attention Functions (#9639) 2025-09-12 18:07:38 -04:00
chroma block info (#10843) 2025-11-24 10:30:40 -08:00
chroma_radiance Use same code for chroma and flux blocks so that optimizations are shared. (#10746) 2025-11-14 01:28:05 -05:00
cosmos Enable Runtime Selection of Attention Functions (#9639) 2025-09-12 18:07:38 -04:00
flux block info (#10841) 2025-11-26 20:28:44 -08:00
genmo Enable Runtime Selection of Attention Functions (#9639) 2025-09-12 18:07:38 -04:00
hidream Enable Runtime Selection of Attention Functions (#9639) 2025-09-12 18:07:38 -04:00
hunyuan3d Enable Runtime Selection of Attention Functions (#9639) 2025-09-12 18:07:38 -04:00
hunyuan3dv2_1 Fix issue on old torch. (#9791) 2025-09-10 00:23:47 -04:00
hunyuan_video hunyuan upsampler: rework imports 2025-11-30 09:04:49 +10:00
hydit Change cosmos and hydit models to use the native RMSNorm. (#7934) 2025-05-04 06:26:20 -04:00
lightricks Lower ltxv mem usage to what it was before previous pr. (#10643) 2025-11-04 22:47:35 -05:00
lumina Make the ScaleRope node work on Z Image and Lumina. (#10994) 2025-11-29 18:00:55 -05:00
mmaudio/vae Implement the mmaudio VAE. (#10300) 2025-10-11 22:57:23 -04:00
models Flux 2 (#10879) 2025-11-25 10:50:19 -05:00
modules model: remove unused give_pre_end 2025-11-30 09:04:49 +10:00
omnigen Enable Runtime Selection of Attention Functions (#9639) 2025-09-12 18:07:38 -04:00
pixart Remove windows line endings. (#8866) 2025-07-11 02:37:51 -04:00
qwen_image block info (#10842) 2025-11-24 10:38:38 -08:00
wan Use single apply_rope function across models (#10547) 2025-11-04 20:10:11 -05:00
common_dit.py add RMSNorm to comfy.ops 2025-04-14 18:00:33 -04:00
util.py Fix and enforce new lines at the end of files. 2024-12-30 04:14:59 -05:00