Update fluxguide.md

- under models\unet folder : your main flux model; it can be dev, schnell, or any custom model based on those. I suggest: https://civitai.com/models/686814/jib-mix-flux?modelVersionId=1193229
- under models\clip folder : two necessary clip models, one small
(https://huggingface.co/zer0int/CLIP-GmP-ViT-L-14/blob/main/ViT-L-14-TEXT-detail-improved-hiT-GmP-TE-only-HF.safetensors)
and one big -this is the t5 model, which is what separates flux, sd3 and other newer models from earlier ones such as sd 1.5, sdxl etc.-
(https://huggingface.co/comfyanonymous/flux_text_encoders/blob/main/t5xxl_fp8_e4m3fn_scaled.safetensors)
Keep in mind these are smaller versions of said models (in clip_l's case the one I shared is actually an optimised clip_l). You can still use the full t5, but these smaller versions use less VRAM, which lets the main model stay in VRAM so generation is faster.
- under models\vae folder : the default flux vae (https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/ae.safetensors) -see the download sketch after this list-
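
The snippet below is a minimal sketch (not part of the original guide) of one way to pull the Hugging Face files listed above into the ComfyUI folder layout. It assumes the huggingface_hub package is installed and that COMFY_DIR points at your ComfyUI models folder; adjust the path to match your install. Note that the FLUX.1-dev repo is gated, so fetching ae.safetensors may require logging in with an access token first (huggingface-cli login). The main flux checkpoint (e.g. the jib-mix-flux model from civitai) still has to be downloaded manually and placed under models\unet.

```python
# Sketch: download the clip/t5/vae files from the guide into ComfyUI's folders.
# Assumption: ComfyUI lives in ./ComfyUI; change COMFY_DIR if yours is elsewhere.
from pathlib import Path
from huggingface_hub import hf_hub_download

COMFY_DIR = Path("ComfyUI/models")  # assumed ComfyUI models folder

downloads = [
    # (repo_id, filename, target subfolder)
    ("zer0int/CLIP-GmP-ViT-L-14",
     "ViT-L-14-TEXT-detail-improved-hiT-GmP-TE-only-HF.safetensors", "clip"),
    ("comfyanonymous/flux_text_encoders",
     "t5xxl_fp8_e4m3fn_scaled.safetensors", "clip"),
    # FLUX.1-dev is gated: accept the license and log in with a token first.
    ("black-forest-labs/FLUX.1-dev", "ae.safetensors", "vae"),
]

for repo_id, filename, subfolder in downloads:
    target = COMFY_DIR / subfolder
    target.mkdir(parents=True, exist_ok=True)
    hf_hub_download(repo_id=repo_id, filename=filename, local_dir=target)
    print(f"{filename} -> {target}")
```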