Huazhong Ji
c4bfdba330
Support ascend npu (#5436)
* support ascend npu
Co-authored-by: YukMingLaw <lymmm2@163.com>
Co-authored-by: starmountain1997 <guozr1997@hotmail.com>
Co-authored-by: Ginray <ginray0215@gmail.com>
2024-12-26 19:36:50 -05:00
patientx
1f9acbbfca
Merge branch 'comfyanonymous:master' into master
2024-12-26 23:22:48 +03:00
comfyanonymous
ee9547ba31
Improve temporal VAE Encode (Tiled) math.
2024-12-26 07:18:49 -05:00
patientx
49fa16cc7a
Merge branch 'comfyanonymous:master' into master
2024-12-25 14:05:18 +03:00
comfyanonymous
19a64d6291
Cleanup some mac related code.
2024-12-25 05:32:51 -05:00
comfyanonymous
b486885e08
Disable bfloat16 on older mac.
2024-12-25 05:18:50 -05:00
comfyanonymous
0229228f3f
Clean up the VAE dtypes code.
2024-12-25 04:50:34 -05:00
comfyanonymous
99a1fb6027
Make fast fp8 take a bit less peak memory.
2024-12-24 18:05:19 -05:00
patientx
077ebf7b17
Merge branch 'comfyanonymous:master' into master
2024-12-24 16:53:56 +03:00
comfyanonymous
73e04987f7
Prevent black images in VAE Decode (Tiled) node.
Overlap should be a minimum of 1 with a tiling of 2 for tiled temporal VAE decoding.
2024-12-24 07:36:30 -05:00
comfyanonymous
5388df784a
Add temporal tiling to VAE Encode (Tiled) node.
2024-12-24 07:10:09 -05:00
patientx
0f7b4f063d
Merge branch 'comfyanonymous:master' into master
2024-12-24 15:06:02 +03:00
comfyanonymous
bc6dac4327
Add temporal tiling to VAE Decode (Tiled) node.
You can now do tiled VAE decoding on the temporal direction for videos.
2024-12-23 20:03:37 -05:00
patientx
00afa8b34f
Merge branch 'comfyanonymous:master' into master
2024-12-23 11:36:49 +03:00
comfyanonymous
15564688ed
Add a try/except block so a weird torch version won't crash.
2024-12-23 03:22:48 -05:00
Simon Lui
c6b9c11ef6
Add oneAPI device selector for xpu and some other changes. (#6112)
* Add oneAPI device selector and some other minor changes.
* Fix device selector variable name.
* Flip minor version check sign.
* Undo changes to README.md.
2024-12-23 03:18:32 -05:00
patientx
403a081215
Merge branch 'comfyanonymous:master' into master
2024-12-23 10:33:06 +03:00
comfyanonymous
e44d0ac7f7
Make --novram completely offload weights.
This flag is mainly used for testing weight offloading; it shouldn't
actually be used in practice.
Remove useless import.
2024-12-23 01:51:08 -05:00
comfyanonymous
56bc64f351
Comment out some useless code.
2024-12-22 23:51:14 -05:00
zhangp365
f7d83b72e0
fixed a bug in ldm/pixart/blocks.py (#6158)
2024-12-22 23:44:20 -05:00
comfyanonymous
80f07952d2
Fix lowvram issue with ltxv vae.
2024-12-22 23:20:17 -05:00
patientx
757335d901
Update supported_models.py
2024-12-23 02:54:49 +03:00
patientx
713eca2176
Update supported_models.py
2024-12-23 02:32:50 +03:00
patientx
e9d8cad2f0
Merge branch 'comfyanonymous:master' into master
2024-12-22 16:56:29 +03:00
comfyanonymous
57f330caf9
Relax minimum ratio of weights loaded in memory on nvidia.
This should make it possible to do higher res images/longer videos by
further offloading weights to CPU memory.
Please report an issue if this slows down things on your system.
2024-12-22 03:06:37 -05:00
patientx
88ae56dcf9
Merge branch 'comfyanonymous:master' into master
2024-12-21 15:52:28 +03:00
comfyanonymous
da13b6b827
Get rid of meshgrid warning.
2024-12-20 18:02:12 -05:00
comfyanonymous
c86cd58573
Remove useless code.
2024-12-20 17:50:03 -05:00
comfyanonymous
b5fe39211a
Remove some useless code.
2024-12-20 17:43:50 -05:00
patientx
4d64ade41f
Merge branch 'comfyanonymous:master' into master
2024-12-21 01:30:32 +03:00
comfyanonymous
e946667216
Some fixes/cleanups to pixart code.
Commented out the masking-related code because it is never used in this
implementation.
2024-12-20 17:10:52 -05:00
patientx
37fc9a3ff2
Merge branch 'comfyanonymous:master' into master
2024-12-21 00:37:40 +03:00
Chenlei Hu
d7969cb070
Replace print with logging (#6138)
* Replace print with logging
* nit
* nit
* nit
* nit
* nit
* nit
2024-12-20 16:24:55 -05:00
City
bddb02660c
Add PixArt model support (#6055)
* PixArt initial version
* PixArt Diffusers convert logic
* pos_emb and interpolation logic
* Reduce duplicate code
* Formatting
* Use optimized attention
* Edit empty token logic
* Basic PixArt LoRA support
* Fix aspect ratio logic
* PixArtAlpha text encode with conds
* Use same detection key logic for PixArt diffusers
2024-12-20 15:25:00 -05:00
patientx
07ea41ecc1
Merge branch 'comfyanonymous:master' into master
2024-12-20 13:06:18 +03:00
comfyanonymous
418eb7062d
Support new LTXV VAE.
2024-12-20 04:38:29 -05:00
patientx
ebf13dfe56
Merge branch 'comfyanonymous:master' into master
2024-12-20 10:05:56 +03:00
comfyanonymous
cac68ca813
Fix some more video tiled encode issues.
The downscale_ratio formula for the temporal dimension had issues with
some frame counts.
2024-12-19 23:14:03 -05:00
comfyanonymous
52c1d933b2
Fix tiled hunyuan video VAE encode issue.
Some shapes like 1024x1024 with tile_size 256 and overlap 64 had issues.
2024-12-19 22:55:15 -05:00
comfyanonymous
2dda7c11a3
More proper fix for the memory issue.
2024-12-19 16:21:56 -05:00
comfyanonymous
3ad3248ad7
Fix lowvram bug when using a model multiple times in a row.
The memory system would load an extra 64MB each time until either the
model was completely in memory or OOM.
2024-12-19 16:04:56 -05:00
patientx
778005af3d
Merge branch 'comfyanonymous:master' into master
2024-12-19 14:51:33 +03:00
comfyanonymous
c441048a4f
Make VAE Encode tiled node work with video VAE.
2024-12-19 05:31:39 -05:00
comfyanonymous
9f4b181ab3
Add fast previews for hunyuan video.
2024-12-18 18:24:23 -05:00
comfyanonymous
cbbf077593
Small optimizations.
2024-12-18 18:23:28 -05:00
patientx
43a0204b07
Merge branch 'comfyanonymous:master' into master
2024-12-18 15:17:15 +03:00
comfyanonymous
ff2ff02168
Support old diffusion-pipe hunyuan video loras.
2024-12-18 06:23:54 -05:00
patientx
947aba46c3
Merge branch 'comfyanonymous:master' into master
2024-12-18 12:33:32 +03:00
comfyanonymous
4c5c4ddeda
Fix regression in VAE code on old pytorch versions.
2024-12-18 03:08:28 -05:00
patientx
c062723ca5
Merge branch 'comfyanonymous:master' into master
2024-12-18 10:16:43 +03:00
comfyanonymous
37e5390f5f
Add: --use-sage-attention to enable SageAttention.
You need to have the library installed first.
2024-12-18 01:56:10 -05:00
comfyanonymous
a4f59bc65e
Pick attention implementation based on device in llama code.
2024-12-18 01:30:20 -05:00
patientx
0e5fa013b2
Merge branch 'comfyanonymous:master' into master
2024-12-18 00:43:24 +03:00
comfyanonymous
ca457f7ba1
Properly tokenize the template for hunyuan video.
2024-12-17 16:22:02 -05:00
comfyanonymous
cd6f615038
Fix tiled vae not working with some shapes.
2024-12-17 16:22:02 -05:00
patientx
4ace4e9ecb
Merge branch 'comfyanonymous:master' into master
2024-12-17 19:55:56 +03:00
comfyanonymous
e4e1bff605
Support diffusion-pipe hunyuan video lora format.
2024-12-17 07:14:21 -05:00
patientx
dc574cdc47
Merge branch 'comfyanonymous:master' into master
2024-12-17 13:57:30 +03:00
comfyanonymous
d6656b0c0c
Support llama hunyuan video text encoder in scaled fp8 format.
2024-12-17 04:19:22 -05:00
comfyanonymous
f4cdedea62
Fix regression with ltxv VAE.
2024-12-17 02:17:31 -05:00
comfyanonymous
39b1fc4ccc
Adjust used dtypes for hunyuan video VAE and diffusion model.
2024-12-16 23:31:10 -05:00
comfyanonymous
bda1482a27
Basic Hunyuan Video model support.
2024-12-16 19:35:40 -05:00
comfyanonymous
19ee5d9d8b
Don't expand mask when not necessary.
Expanding seems to slow down inference.
2024-12-16 18:22:50 -05:00
Raphael Walker
61b50720d0
Add support for attention masking in Flux (#5942)
* fix attention OOM in xformers
* allow passing attention mask in flux attention
* allow an attn_mask in flux
* attn masks can be done using replace patches instead of a separate dict
* fix return types
* fix return order
* enumerate
* patch the right keys
* arg names
* fix a silly bug
* fix xformers masks
* replace match with if, elif, else
* mask with image_ref_size
* remove unused import
* remove unused import 2
* fix pytorch/xformers attention
This corrects a weird inconsistency with skip_reshape.
It also allows masks of various shapes to be passed, which will be
automatically expanded (in a memory-efficient way) to a size that is
compatible with xformers or pytorch sdpa respectively.
* fix mask shapes
2024-12-16 18:21:17 -05:00
patientx
9704e3e617
Merge branch 'comfyanonymous:master' into master
2024-12-14 14:24:39 +03:00
comfyanonymous
e83063bf24
Support conv3d in PatchEmbed.
2024-12-14 05:46:04 -05:00
patientx
fd2eeb5e30
Merge branch 'comfyanonymous:master' into master
2024-12-13 16:09:33 +03:00
comfyanonymous
4e14032c02
Make pad_to_patch_size function work on multi dim.
2024-12-13 07:22:05 -05:00
patientx
3218ed8559
Merge branch 'comfyanonymous:master' into master
2024-12-13 11:21:47 +03:00
Chenlei Hu
563291ee51
Enforce all pyflake lint rules (#6033)
* Enforce F821 undefined-name
* Enforce all pyflake lint rules
2024-12-12 19:29:37 -05:00
Chenlei Hu
2cddbf0821
Lint and fix undefined names (1/N) (#6028)
2024-12-12 18:55:26 -05:00
Chenlei Hu
60749f345d
Lint and fix undefined names (3/N) (#6030)
2024-12-12 18:49:40 -05:00
Chenlei Hu
d9d7f3c619
Lint all unused variables (#5989)
* Enable F841
* Autofix
* Remove all unused variable assignment
2024-12-12 17:59:16 -05:00
patientx
5d059779d3
Merge branch 'comfyanonymous:master' into master
2024-12-12 15:26:42 +03:00
comfyanonymous
fd5dfb812c
Set initial load devices for te and model to mps device on mac.
2024-12-12 06:00:31 -05:00
patientx
0759f96414
Merge branch 'comfyanonymous:master' into master
2024-12-11 23:05:47 +03:00
comfyanonymous
7a7efe8424
Support loading some checkpoint files with nested dicts.
2024-12-11 08:04:54 -05:00
patientx
3b1247c8af
Merge branch 'comfyanonymous:master' into master
2024-12-11 12:14:46 +03:00
comfyanonymous
44db978531
Fix a few things in text enc code for models with no eos token.
2024-12-10 23:07:26 -05:00
patientx
3254013ec2
Merge branch 'comfyanonymous:master' into master
2024-12-11 00:42:55 +03:00
comfyanonymous
1c8d11e48a
Support different types of tokenizers.
Support tokenizers without an eos token.
Pass full sentences to tokenizer for more efficient tokenizing.
2024-12-10 15:03:39 -05:00
patientx
be4b0b5515
Merge branch 'comfyanonymous:master' into master
2024-12-10 15:06:50 +03:00
catboxanon
23827ca312
Add cond_scale to sampler_post_cfg_function (#5985)
2024-12-09 20:13:18 -05:00
patientx
7788049e2e
Merge branch 'comfyanonymous:master' into master
2024-12-10 01:10:54 +03:00
Chenlei Hu
0fd4e6c778
Lint unused import (#5973)
* Lint unused import
* nit
* Remove unused imports
* revert fix_torch import
* nit
2024-12-09 15:24:39 -05:00
comfyanonymous
e2fafe0686
Make CLIP set last layer node work with t5 models.
2024-12-09 03:57:14 -05:00
Haoming
fbf68c4e52
clamp input (#5928)
2024-12-07 14:00:31 -05:00
patientx
025d9ed896
Merge branch 'comfyanonymous:master' into master
2024-12-07 00:34:25 +03:00
comfyanonymous
8af9a91e0c
A few improvements to #5937.
2024-12-06 05:49:15 -05:00
Michael Kupchick
005d2d3a13
ltxv: add noise to guidance image to ensure generated motion. (#5937)
2024-12-06 05:46:08 -05:00
comfyanonymous
1e21f4c14e
Make timestep ranges more usable on rectified flow models.
This breaks some old workflows but should make the nodes actually useful.
2024-12-05 16:40:58 -05:00
patientx
d35238f60a
Merge branch 'comfyanonymous:master' into master
2024-12-04 23:52:16 +03:00
Chenlei Hu
48272448ad
[Developer Experience] Add node typing (#5676)
* [Developer Experience] Add node typing
* Shim StrEnum
* nit
* nit
* nit
2024-12-04 15:01:00 -05:00
patientx
743d281f78
Merge branch 'comfyanonymous:master' into master
2024-12-03 23:53:57 +03:00
comfyanonymous
452179fe4f
Make ModelPatcher class clone function work with inheritance.
2024-12-03 13:57:57 -05:00
patientx
b826d3e8c2
Merge branch 'comfyanonymous:master' into master
2024-12-03 14:51:59 +03:00
comfyanonymous
c1b92b719d
Some optimizations to euler a.
2024-12-03 06:11:52 -05:00
comfyanonymous
57e8bf6a9f
Fix case where a memory leak could cause crash.
Now, if code mistakenly keeps references to a model object when it
should not, the only symptom will be endless prints in the log instead
of the next workflow crashing ComfyUI.
2024-12-02 19:49:49 -05:00
patientx
1d7cbcdcb2
Update model_management.py
2024-12-03 01:35:06 +03:00
patientx
9dea868c65
Reapply "Merge branch 'comfyanonymous:master' into master"
This reverts commit f3968d1611.
2024-12-03 00:45:31 +03:00