
ComfyUI_TGate

English | 简体中文

ComfyUI reference implementation for T-GATE.

T-GATE can bring a 10%-50% speed-up for different diffusion models, with only a slight reduction in the quality of the generated images while maintaining the original composition.
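The key observation behind T-GATE is that the text-conditioned cross-attention output converges early in the denoising schedule, so it can be computed for the first part of the run and then reused. The toy loop below only illustrates that gating idea; the functions and numbers are made up for the example and are not the plugin's actual code.

# Toy illustration of the T-GATE gating idea (not the plugin's real code):
# compute cross-attention normally before the gate step, reuse the cached
# result afterwards.

def run_cross_attention(latent, text_embedding):
    """Stand-in for the real cross-attention computation."""
    return [x + sum(text_embedding) for x in latent]

def denoise(latent, text_embedding, total_steps=20, start_at=0.5):
    gate_step = int(total_steps * start_at)  # e.g. 20 steps * 0.5 -> gate at step 10
    cached = None
    for step in range(total_steps):
        if step < gate_step or cached is None:
            cached = run_cross_attention(latent, text_embedding)  # full compute
        # After the gate step the cached output is reused, skipping the compute.
        latent = [0.9 * x + 0.1 * a for x, a in zip(latent, cached)]
    return latent

print(denoise([0.0, 1.0], [0.3, 0.7]))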

The current implementation uses some monkey patching. If any error occurs, make sure you have the latest version.

If my work helps you, consider giving it a star.

Here are some of my other projects that may help you.

🌟 Changelog

  • [2024.5.23]: Updated to the latest ComfyUI version.
  • [2024.5.15]:
    • Fixed an SDXL x,y batch error. If you still encounter errors, you can try the TGate Apply(Deprecated) node.
  • [2024.5.06]:
    • Added use_cpu_cache to reduce some GPU OOM problems.
    • Refactored the apply node and deprecated the old monkey-patching node. The legacy node will be removed after a few versions.
    • Added a simple node and an advanced node.
  • [2024.4.30] 🔧 Fixed a cond-only sampling bug that caused an AnimateDiff error. Thanks to pamparamm.
  • [2024.4.29] 🔧 TL;DR: Improved performance; T-GATE now only takes effect where it needs to.
    • Fixed a bug that caused TGateApply to affect other places where the model is used, even when TGateApply is turned off.
    • Fixed cross-attention results not being cached correctly, which made performance slightly lower than the git-patch version.
  • [2024.4.26] 🎉 Native version released (no git patch needed anymore!).
  • [2024.4.18] Initial repo.

📚 Example workflows

The examples directory contains example workflows. Images generated with and without T-GATE are in the assets folder.

(example workflow image)

| Origin result | T-GATE result |
| --- | --- |
| origin_result | tgate_result |

The T-GATE result image comes from the workflow embedded in the example image.

Comparison with AutomaticCFG

AutomaticCFG is another ComfyUI plugin. Its own description: "Your CFG won't be your CFG anymore. It is turned into a way to guide the CFG/final intensity/brightness/saturation", and it adds a 30% speed increase.

Test environment: T4-8G

|        | Origin | T-GATE 0.5 | AutomaticCFG | T-GATE 0.35 | AutomaticCFG fastest |
| ------ | ------ | ---------- | ------------ | ----------- | -------------------- |
| result | origin_result | tgate_result | auto_cfg_boost | tgate_0_35 | auto_cfg_fatest |
| speed  | 4.59it/s | 5.68it/s | 5.62it/s | 6.13it/s | 6.13it/s |

T-GATE performs best when you want to maintain the original composition. However, if you don't need to maintain the composition, AutomaticCFG's fastest mode brings about the same performance improvement.

📗 INSTALL

cd ComfyUI/custom_nodes  # ComfyUI custom nodes live in the custom_nodes directory
git clone https://github.com/JettHu/ComfyUI_TGate
# that's all!

📙 Major Features

  • Training-Free.
  • Friendly support for CNN-based U-Net, Transformer, and Consistency Models.
  • 10%-50% speed-up for different diffusion models.

📖 Nodes reference

TGate Apply

Inputs

  • model: a model loaded by Load Checkpoint or another MODEL loader.

Configuration parameters

  • start_at: a percentage of the sampling steps. Defines at what point in the generation T-GATE starts using its cache.
  • use_cpu_cache: if large batches (e.g. AnimateDiff) cause GPU OOM, you can set this to true; T-GATE's speed-up will decrease slightly (see the sketch below).
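As a rough illustration of what use_cpu_cache trades off (a hedged sketch that assumes the cache is a plain tensor; it is not the node's actual code): the cached attention output is parked in system RAM and copied back to the GPU whenever it is reused, which lowers VRAM pressure at the cost of extra transfers.

import torch  # PyTorch ships with every ComfyUI install

def store_cached_attention(attn_out: torch.Tensor, use_cpu_cache: bool) -> torch.Tensor:
    # With use_cpu_cache the cache lives in system RAM instead of VRAM,
    # which helps avoid GPU OOM for large batches (e.g. AnimateDiff).
    return attn_out.to("cpu") if use_cpu_cache else attn_out

def load_cached_attention(cached: torch.Tensor, device: torch.device) -> torch.Tensor:
    # Moving the cache back on every reuse is the source of the small slowdown.
    return cached.to(device)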

TGate Apply Advanced

Inputs

  • model: a model loaded by Load Checkpoint or another MODEL loader.

Configuration parameters

  • start_at: a percentage of the sampling steps. Defines at what point in the generation T-GATE starts using its cache.
  • only_cross_attention: [RECOMMENDED] defaults to True; only the output of cross-attention is cached (see the linked issues).
  • use_cpu_cache: if large batches (e.g. AnimateDiff) cause GPU OOM, you can set this to true; T-GATE's speed-up will decrease slightly.

Optional configuration

  • self_attn_start_at: only takes effect when only_cross_attention is false; also a percentage of steps. Defines at what point in the generation T-GATE starts using the cache for latent self-attention (see the sketch below).
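To make the two gates concrete, here is a hedged sketch of the caching policy (attn1/attn2 follow ComfyUI's transformer-block naming, where attn1 is self-attention and attn2 is cross-attention; the helper below is illustrative, not the actual patch):

def should_use_cache(kind: str, progress: float, start_at: float,
                     only_cross_attention: bool, self_attn_start_at: float) -> bool:
    """Return True if this attention output should be served from the T-GATE cache."""
    if kind == "attn2":  # cross-attention: gated by start_at
        return progress >= start_at
    if kind == "attn1":  # self-attention: only gated when only_cross_attention is False
        return (not only_cross_attention) and progress >= self_attn_start_at
    return False

# At 60% of the schedule with start_at=0.5 and self_attn_start_at=0.8:
print(should_use_cache("attn2", 0.6, 0.5, True, 0.8))   # True  (cross-attention cached)
print(should_use_cache("attn1", 0.6, 0.5, True, 0.8))   # False (self-attention untouched)
print(should_use_cache("attn1", 0.9, 0.5, False, 0.8))  # True  (past its own gate)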

TGate Apply(Deprecated)

This node is deprecated and will be removed after a few versions.

Inputs

  • model: a model loaded by Load Checkpoint or another MODEL loader.

Configuration parameters

  • start_at: a percentage of the sampling steps. Defines at what point in the generation T-GATE starts using its cache.
  • only_cross_attention: [RECOMMENDED] defaults to True; only the output of cross-attention is cached (see the linked issues).
  • use_cpu_cache: if large batches (e.g. AnimateDiff) cause GPU OOM, you can set this to true; T-GATE's speed-up will decrease slightly.

Optional configuration

  • self_attn_start_at: only takes effect when only_cross_attention is false; also a percentage of steps. Defines at what point in the generation T-GATE starts using the cache for latent self-attention.

🚀 Performance (from T-GATE)

| Model | MACs | Params | Latency | Zero-shot 10K-FID on MS-COCO |
| --- | --- | --- | --- | --- |
| SD-1.5 | 16.938T | 859.520M | 7.032s | 23.927 |
| SD-1.5 w/ TGATE | 9.875T | 815.557M | 4.313s | 20.789 |
| SD-2.1 | 38.041T | 865.785M | 16.121s | 22.609 |
| SD-2.1 w/ TGATE | 22.208T | 815.433M | 9.878s | 19.940 |
| SD-XL | 149.438T | 2.570B | 53.187s | 24.628 |
| SD-XL w/ TGATE | 84.438T | 2.024B | 27.932s | 22.738 |
| Pixart-Alpha | 107.031T | 611.350M | 61.502s | 38.669 |
| Pixart-Alpha w/ TGATE | 65.318T | 462.585M | 37.867s | 35.825 |
| DeepCache (SD-XL) | 57.888T | - | 19.931s | 23.755 |
| DeepCache w/ TGATE | 43.868T | - | 14.666s | 23.999 |
| LCM (SD-XL) | 11.955T | 2.570B | 3.805s | 25.044 |
| LCM w/ TGATE | 11.171T | 2.024B | 3.533s | 25.028 |
| LCM (Pixart-Alpha) | 8.563T | 611.350M | 4.733s | 36.086 |
| LCM w/ TGATE | 7.623T | 462.585M | 4.543s | 37.048 |

Latency was measured on a 1080 Ti consumer card.

The MACs and Params are calculated by calflops.

The FID is calculated by PytorchFID.

📝 TODO

  • Result image quality is inconsistent with the original. Currently only attn2 (cross-attention) is cached.
  • Implement a native version and no longer rely on the git patch.
  • Full compatibility with AnimateDiff. Currently both plugins hook comfy.samplers.sampling_function, so T-GATE does not work correctly; refer to the linked issue.
  • Compatibility with TiledDiffusion (issue #11).

🔍 Common problems

  • For Apple silicon users on the MPS backend, certain torch and macOS versions may cause problems; refer to the linked issue comment.

  • Fixed in the 2024.4.29 update: T-GATE effects could not be properly removed. The picture below shows the result of bypassing the node after it had been applied.

| 2024.4.26-29 | Updated on 2024.4.29 |
| --- | --- |
| before_fixed | after_fixed |


Reported issues

Error when working with SDXL model, no problem with SD15 model

Error occurred when executing KSamplerAdvanced:

File "O:\comfyui\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "O:\comfyui\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "O:\comfyui\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "O:\comfyui\ComfyUI\nodes.py", line 1378, in sample
return common_ksampler(model, noise_seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise, disable_noise=disable_noise, start_step=start_at_step, last_step=end_at_step, force_full_denoise=force_full_denoise)
File "O:\comfyui\ComfyUI\nodes.py", line 1314, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
File "O:\comfyui\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 9, in informative_sample
return original_sample(*args, **kwargs) # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations.
File "O:\comfyui\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 313, in motion_sample
return orig_comfy_sample(model, noise, *args, **kwargs)
File "O:\comfyui\ComfyUI\comfy\sample.py", line 37, in sample
samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "O:\comfyui\ComfyUI\comfy\samplers.py", line 761, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "O:\comfyui\ComfyUI\comfy\samplers.py", line 663, in sample
return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
File "O:\comfyui\ComfyUI\comfy\samplers.py", line 650, in sample
output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
File "O:\comfyui\ComfyUI\comfy\samplers.py", line 629, in inner_sample
samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
File "O:\comfyui\ComfyUI\comfy\samplers.py", line 534, in sample
samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
File "O:\comfyui\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "O:\comfyui\ComfyUI\comfy\k_diffusion\sampling.py", line 618, in sample_dpmpp_2m_sde
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "O:\comfyui\ComfyUI\comfy\samplers.py", line 272, in call
out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
File "O:\comfyui\ComfyUI\comfy\samplers.py", line 616, in call
return self.predict_noise(*args, **kwargs)
File "O:\comfyui\ComfyUI\comfy\samplers.py", line 619, in predict_noise
return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
File "O:\comfyui\ComfyUI\custom_nodes\ComfyUI_TGate\TGate.py", line 165, in wrapper
return fn(
File "O:\comfyui\ComfyUI\comfy\samplers.py", line 258, in sampling_function
out = calc_cond_batch(model, conds, x, timestep, model_options)
File "O:\comfyui\ComfyUI\comfy\samplers.py", line 216, in calc_cond_batch
output = model_options['model_function_wrapper'](model.apply_model, {"input": input_x, "timestep": timestep_, "c": c, "cond_or_uncond": cond_or_uncond}).chunk(batch_chunks)
File "O:\comfyui\ComfyUI\custom_nodes\ComfyUI_TGate\TGate.py", line 237, in call
outputs.append(apply_model(input_x_chunks[i], timestep_chunks[i], **c))
File "O:\comfyui\ComfyUI\comfy\model_base.py", line 97, in apply_model
model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
File "O:\comfyui\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "O:\comfyui\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "O:\comfyui\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 844, in forward
assert y.shape[0] == x.shape[0]

Incompatibility with cfg++

When used together with a cfg++ sampler (e.g. Euler A cfg++), the "start_at" argument no longer works properly (it actually starts much earlier), and during the T-GATEd steps the image gets very dark.

When using SDXL + AnimateDiff + T-Gate, the following error occurs:

Error occurred when executing SamplerCustom:

Allocation on device 0 would exceed allowed memory. (out of memory)
Currently allocated : 12.81 GiB
Requested : 480.00 MiB
Device limit : 14.75 GiB
Free (according to CUDA): 27.06 MiB
PyTorch limit (set by user-supplied memory fraction): 17179869184.00 GiB

File "/kaggle/working/ComfyUI/execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "/kaggle/working/ComfyUI/execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/kaggle/working/ComfyUI/execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/kaggle/working/ComfyUI/comfy_extras/nodes_custom_sampler.py", line 374, in sample
samples = comfy.sample.sample_custom(model, noise, cfg, sampler, sigmas, positive, negative, latent_image, noise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=noise_seed)
File "/kaggle/working/ComfyUI/custom_nodes/ComfyUI-Advanced-ControlNet/adv_control/control_reference.py", line 47, in refcn_sample
return orig_comfy_sample(model, *args, **kwargs)
File "/kaggle/working/ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/sampling.py", line 419, in motion_sample
latents = orig_comfy_sample(model, noise, *args, **kwargs)
File "/kaggle/working/ComfyUI/comfy/sample.py", line 42, in sample_custom
samples = comfy.samplers.sample(model, noise, positive, negative, cfg, model.load_device, sampler, sigmas, model_options=model.model_options, latent_image=latent_image, denoise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "/kaggle/working/ComfyUI/comfy/samplers.py", line 663, in sample
return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
File "/kaggle/working/ComfyUI/comfy/samplers.py", line 650, in sample
output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
File "/kaggle/working/ComfyUI/comfy/samplers.py", line 629, in inner_sample
samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
File "/kaggle/working/ComfyUI/comfy/samplers.py", line 534, in sample
samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
File "/kaggle/venv/venv/lib/python3.10/site-packages/torch/utils/contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/kaggle/working/ComfyUI/custom_nodes/ComfyUI-sampler-lcm-alternative/sampler_lcm_alt.py", line 32, in sample_lcm_cycle
return sample_lcm_backbone(model, x, sigmas, extra_args, callback, disable, noise_sampler, loop_control, ancestral)
File "/kaggle/venv/venv/lib/python3.10/site-packages/torch/utils/contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/kaggle/working/ComfyUI/custom_nodes/ComfyUI-sampler-lcm-alternative/sampler_lcm_alt.py", line 38, in sample_lcm_backbone
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "/kaggle/working/ComfyUI/comfy/samplers.py", line 272, in call
out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
File "/kaggle/working/ComfyUI/comfy/samplers.py", line 616, in call
return self.predict_noise(*args, **kwargs)
File "/kaggle/working/ComfyUI/comfy/samplers.py", line 619, in predict_noise
return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
File "/kaggle/working/ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/sampling.py", line 456, in evolved_sampling_function
cond_pred, uncond_pred = sliding_calc_conds_batch(model, [cond, uncond], x, timestep, model_options)
File "/kaggle/working/ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/sampling.py", line 593, in sliding_calc_conds_batch
sub_conds_out = calc_cond_uncond_batch_wrapper(model, sub_conds, sub_x, sub_timestep, model_options)
File "/kaggle/working/ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/sampling.py", line 653, in calc_cond_uncond_batch_wrapper
return comfy.samplers.calc_cond_batch(model, conds, x_in, timestep, model_options)
File "/kaggle/working/ComfyUI/comfy/samplers.py", line 218, in calc_cond_batch
output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
File "/kaggle/working/ComfyUI/comfy/model_base.py", line 97, in apply_model
model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
File "/kaggle/venv/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/kaggle/venv/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "/kaggle/working/ComfyUI/comfy/ldm/modules/diffusionmodules/openaimodel.py", line 885, in forward
h = forward_timestep_embed(module, h, emb, context, transformer_options, output_shape, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator)
File "/kaggle/working/ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/sampling.py", line 130, in forward_timestep_embed
x = layer(x, emb)
File "/kaggle/venv/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/kaggle/venv/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "/kaggle/working/ComfyUI/comfy/ldm/modules/diffusionmodules/openaimodel.py", line 230, in forward
return checkpoint(
File "/kaggle/working/ComfyUI/comfy/ldm/modules/diffusionmodules/util.py", line 191, in checkpoint
return func(*inputs)
File "/kaggle/working/ComfyUI/comfy/ldm/modules/diffusionmodules/openaimodel.py", line 243, in _forward
h = self.in_layers(x)
File "/kaggle/venv/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/kaggle/venv/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "/kaggle/venv/venv/lib/python3.10/site-packages/torch/nn/modules/container.py", line 217, in forward
input = module(input)
File "/kaggle/venv/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/kaggle/venv/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "/kaggle/working/ComfyUI/comfy/ops.py", line 94, in forward
return super().forward(*args, **kwargs)
File "/kaggle/working/ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/sampling.py", line 171, in groupnorm_mm_forward
input = group_norm(input, self.num_groups, weight, bias, self.eps)
File "/kaggle/venv/venv/lib/python3.10/site-packages/torch/nn/functional.py", line 2561, in group_norm
return torch.group_norm(input, num_groups, weight, bias, eps, torch.backends.cudnn.enabled)

Error

(screenshot: QQ图片20240418235352)

What should I do? Please help.

Failure when used with Ultimate SD Upscale

Depending on tile size, I get this with Ultimate SD Upscale + T-Gate:

!!! Exception during processing!!! The size of tensor a (4608) must match the size of tensor b (4352) at non-singleton dimension 1
Traceback (most recent call last):
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/custom_nodes/ComfyUI_UltimateSDUpscale/nodes.py", line 125, in upscale
    processed = script.run(p=sdprocessing, _=None, tile_width=tile_width, tile_height=tile_height,
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/custom_nodes/ComfyUI_UltimateSDUpscale/repositories/ultimate_sd_upscale/scripts/ultimate-upscale.py", line 553, in run
    upscaler.process()
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/custom_nodes/ComfyUI_UltimateSDUpscale/repositories/ultimate_sd_upscale/scripts/ultimate-upscale.py", line 136, in process
    self.image = self.redraw.start(self.p, self.image, self.rows, self.cols)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/custom_nodes/ComfyUI_UltimateSDUpscale/repositories/ultimate_sd_upscale/scripts/ultimate-upscale.py", line 243, in start
    return self.linear_process(p, image, rows, cols)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/custom_nodes/ComfyUI_UltimateSDUpscale/repositories/ultimate_sd_upscale/scripts/ultimate-upscale.py", line 178, in linear_process
    processed = processing.process_images(p)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/custom_nodes/ComfyUI_UltimateSDUpscale/modules/processing.py", line 122, in process_images
    (samples,) = common_ksampler(p.model, p.seed, p.steps, p.cfg, p.sampler_name,
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/nodes.py", line 1314, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/sampling.py", line 313, in motion_sample
    return orig_comfy_sample(model, noise, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/custom_nodes/ComfyUI-Impact-Pack/modules/impact/sample_error_enhancer.py", line 22, in informative_sample
    raise e
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/custom_nodes/ComfyUI-Impact-Pack/modules/impact/sample_error_enhancer.py", line 9, in informative_sample
    return original_sample(*args, **kwargs)  # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations.
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/custom_nodes/ComfyUI-Advanced-ControlNet/adv_control/control_reference.py", line 47, in refcn_sample
    return orig_comfy_sample(model, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/comfy/sample.py", line 37, in sample
    samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/custom_nodes/ComfyUI_smZNodes/smZNodes.py", line 1441, in KSampler_sample
    return _KSampler_sample(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/comfy/samplers.py", line 761, in sample
    return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/custom_nodes/ComfyUI_smZNodes/smZNodes.py", line 1464, in sample
    return _sample(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/comfy/samplers.py", line 663, in sample
    return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/comfy/samplers.py", line 650, in sample
    output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/comfy/samplers.py", line 629, in inner_sample
    samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/comfy/samplers.py", line 534, in sample
    samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jeff/.virtualenvs/comfy312/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/comfy/k_diffusion/sampling.py", line 154, in sample_euler_ancestral
    denoised = model(x, sigmas[i] * s_in, **extra_args)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/comfy/samplers.py", line 272, in __call__
    out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/custom_nodes/ComfyUI_smZNodes/smZNodes.py", line 987, in __call__
    return self.predict_noise(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/custom_nodes/ComfyUI_smZNodes/smZNodes.py", line 1037, in predict_noise
    out = super().predict_noise(*args, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/comfy/samplers.py", line 619, in predict_noise
    return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/custom_nodes/ComfyUI_TGate/TGate.py", line 148, in wrapper
    return fn(
           ^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/comfy/samplers.py", line 258, in sampling_function
    out = calc_cond_batch(model, conds, x, timestep, model_options)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/custom_nodes/ComfyUI-TiledDiffusion/.patches.py", line 89, in calc_cond_batch
    output = model_options['model_function_wrapper'](model.apply_model, {"input": input_x, "timestep": timestep_, "c": c, "cond_or_uncond": cond_or_uncond}).chunk(batch_chunks)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jeff/.virtualenvs/comfy312/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/custom_nodes/ComfyUI-TiledDiffusion/tiled_diffusion.py", line 553, in __call__
    x_tile_out = model_function(x_tile, t_tile, **c_tile)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/comfy/model_base.py", line 97, in apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jeff/.virtualenvs/comfy312/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jeff/.virtualenvs/comfy312/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/comfy/ldm/modules/diffusionmodules/openaimodel.py", line 850, in forward
    h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/comfy/ldm/modules/diffusionmodules/openaimodel.py", line 44, in forward_timestep_embed
    x = layer(x, context, transformer_options)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jeff/.virtualenvs/comfy312/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jeff/.virtualenvs/comfy312/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/comfy/ldm/modules/attention.py", line 633, in forward
    x = block(x, context=context[i], transformer_options=transformer_options)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jeff/.virtualenvs/comfy312/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jeff/.virtualenvs/comfy312/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/custom_nodes/ComfyUI-layerdiffusion/lib_layerdiffusion/attention_sharing.py", line 253, in forward
    return func(self, x, context, transformer_options)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/comfy/ldm/modules/attention.py", line 460, in forward
    return checkpoint(self._forward, (x, context, transformer_options), self.parameters(), self.checkpoint)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/comfy/ldm/modules/diffusionmodules/util.py", line 191, in checkpoint
    return func(*inputs)
           ^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/custom_nodes/ComfyUI_TGate/TGate.py", line 126, in tgate_forward
    x += n
RuntimeError: The size of tensor a (4608) must match the size of tensor b (4352) at non-singleton dimension 1

I was able to complete one run with tile size 1024x1024, but not a second run. The first tile processes and then that error occurs. With a bunch of other tile sizes I tried, that error occurs on the first tile.

TGateApply node won't work: 'ModelPatcher' object has no attribute 'set_model_transformer_function'

I tested this node and nothing worked. It is simply a default T2I workflow with an additional TGateApply node. Everything is updated to the latest versions.

(screenshots attached: Snipaste_2024-04-18_18-38-11, Snipaste_2024-04-18_18-37-51, Snipaste_2024-04-18_18-36-27)

Exception says:

Error occurred when executing TGateApply: 'ModelPatcher' object has no attribute 'set_model_transformer_function'

File "/root/ComfyUI/execution.py", line 151, in recursive_execute output_data, output_ui = get_output_data(obj, input_data_all)
File "/root/ComfyUI/execution.py", line 81, in get_output_data return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/root/ComfyUI/execution.py", line 74, in map_node_over_list results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/root/ComfyUI/custom_nodes/ComfyUI_TGate/TGate.py", line 180, in apply_tgate model_clone.set_model_transformer_function(

Error reported when using TGate

Traceback (most recent call last):
File "C:\Users\80677\Desktop\ComfyUI-aki-v1.3\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "C:\Users\80677\Desktop\ComfyUI-aki-v1.3\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "C:\Users\80677\Desktop\ComfyUI-aki-v1.3\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "C:\Users\80677\Desktop\ComfyUI-aki-v1.3\custom_nodes\efficiency-nodes-comfyui\efficiency_nodes.py", line 713, in sample
samples, images, gifs, preview = process_latent_image(model, seed, steps, cfg, sampler_name, scheduler,
File "C:\Users\80677\Desktop\ComfyUI-aki-v1.3\custom_nodes\efficiency-nodes-comfyui\efficiency_nodes.py", line 533, in process_latent_image
samples = KSampler().sample(model, seed, steps, cfg, sampler_name, scheduler, positive, negative,
File "C:\Users\80677\Desktop\ComfyUI-aki-v1.3\nodes.py", line 1344, in sample
return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
File "C:\Users\80677\Desktop\ComfyUI-aki-v1.3\nodes.py", line 1314, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
File "C:\Users\80677\Desktop\ComfyUI-aki-v1.3\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 9, in informative_sample
return original_sample(*args, **kwargs) # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations.
File "C:\Users\80677\Desktop\ComfyUI-aki-v1.3\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 248, in motion_sample
return orig_comfy_sample(model, noise, *args, **kwargs)
File "C:\Users\80677\Desktop\ComfyUI-aki-v1.3\comfy\sample.py", line 37, in sample
samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "C:\Users\80677\Desktop\ComfyUI-aki-v1.3\comfy\samplers.py", line 755, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "C:\Users\80677\Desktop\ComfyUI-aki-v1.3\comfy\samplers.py", line 657, in sample
return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
File "C:\Users\80677\Desktop\ComfyUI-aki-v1.3\comfy\samplers.py", line 644, in sample
output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
File "C:\Users\80677\Desktop\ComfyUI-aki-v1.3\comfy\samplers.py", line 623, in inner_sample
samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
File "C:\Users\80677\Desktop\ComfyUI-aki-v1.3\comfy\samplers.py", line 534, in sample
samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
File "C:\Users\80677\Desktop\ComfyUI-aki-v1.3\python\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\Users\80677\Desktop\ComfyUI-aki-v1.3\comfy\k_diffusion\sampling.py", line 137, in sample_euler
denoised = model(x, sigma_hat * s_in, **extra_args)
File "C:\Users\80677\Desktop\ComfyUI-aki-v1.3\comfy\samplers.py", line 272, in call
out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
File "C:\Users\80677\Desktop\ComfyUI-aki-v1.3\comfy\samplers.py", line 610, in call
return self.predict_noise(*args, **kwargs)
File "C:\Users\80677\Desktop\ComfyUI-aki-v1.3\comfy\samplers.py", line 613, in predict_noise
return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
File "C:\Users\80677\Desktop\ComfyUI-aki-v1.3\custom_nodes\ComfyUI_TGate\TGate.py", line 145, in wrapper
return fn(
File "C:\Users\80677\Desktop\ComfyUI-aki-v1.3\comfy\samplers.py", line 258, in sampling_function
out = calc_cond_batch(model, conds, x, timestep, model_options)
File "C:\Users\80677\Desktop\ComfyUI-aki-v1.3\comfy\samplers.py", line 218, in calc_cond_batch
output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
File "C:\Users\80677\Desktop\ComfyUI-aki-v1.3\comfy\model_base.py", line 97, in apply_model
model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
File "C:\Users\80677\Desktop\ComfyUI-aki-v1.3\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\Users\80677\Desktop\ComfyUI-aki-v1.3\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\80677\Desktop\ComfyUI-aki-v1.3\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 850, in forward
h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator)
File "C:\Users\80677\Desktop\ComfyUI-aki-v1.3\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 44, in forward_timestep_embed
x = layer(x, context, transformer_options)
File "C:\Users\80677\Desktop\ComfyUI-aki-v1.3\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\Users\80677\Desktop\ComfyUI-aki-v1.3\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\80677\Desktop\ComfyUI-aki-v1.3\comfy\ldm\modules\attention.py", line 633, in forward
x = block(x, context=context[i], transformer_options=transformer_options)
File "C:\Users\80677\Desktop\ComfyUI-aki-v1.3\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\Users\80677\Desktop\ComfyUI-aki-v1.3\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\80677\Desktop\ComfyUI-aki-v1.3\comfy\ldm\modules\attention.py", line 460, in forward
return checkpoint(self._forward, (x, context, transformer_options), self.parameters(), self.checkpoint)
File "C:\Users\80677\Desktop\ComfyUI-aki-v1.3\comfy\ldm\modules\diffusionmodules\util.py", line 191, in checkpoint
return func(*inputs)
File "C:\Users\80677\Desktop\ComfyUI-aki-v1.3\custom_nodes\ComfyUI_TGate\TGate.py", line 87, in tgate_forward
uncond, cond = n.chunk(2)
ValueError: not enough values to unpack (expected 2, got 1)

After updating, there is an error when using animatediff: “The size of tensor a (16) must match the size of tensor b (8) at non-singleton dimension 0”

Traceback (most recent call last):
File "/kaggle/working/ComfyUI/execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "/kaggle/working/ComfyUI/execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/kaggle/working/ComfyUI/execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/kaggle/working/ComfyUI/nodes.py", line 1378, in sample
return common_ksampler(model, noise_seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise, disable_noise=disable_noise, start_step=start_at_step, last_step=end_at_step, force_full_denoise=force_full_denoise)
File "/kaggle/working/ComfyUI/nodes.py", line 1314, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
File "/kaggle/working/ComfyUI/custom_nodes/ComfyUI-Advanced-ControlNet/adv_control/control_reference.py", line 47, in refcn_sample
return orig_comfy_sample(model, *args, **kwargs)
File "/kaggle/working/ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/sampling.py", line 383, in motion_sample
latents = orig_comfy_sample(model, noise, *args, **kwargs)
File "/kaggle/working/ComfyUI/comfy/sample.py", line 37, in sample
samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "/kaggle/working/ComfyUI/comfy/samplers.py", line 755, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "/kaggle/working/ComfyUI/comfy/samplers.py", line 657, in sample
return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
File "/kaggle/working/ComfyUI/comfy/samplers.py", line 644, in sample
output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
File "/kaggle/working/ComfyUI/comfy/samplers.py", line 623, in inner_sample
samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
File "/kaggle/working/ComfyUI/comfy/samplers.py", line 534, in sample
samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
File "/kaggle/venv/venv/lib/python3.10/site-packages/torch/utils/contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/kaggle/working/ComfyUI/comfy/k_diffusion/sampling.py", line 745, in sample_lcm
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "/kaggle/working/ComfyUI/comfy/samplers.py", line 272, in call
out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
File "/kaggle/working/ComfyUI/comfy/samplers.py", line 610, in call
return self.predict_noise(*args, **kwargs)
File "/kaggle/working/ComfyUI/comfy/samplers.py", line 613, in predict_noise
return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
File "/kaggle/working/ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/sampling.py", line 424, in evolved_sampling_function
cond_pred, uncond_pred = comfy.samplers.calc_cond_batch(model, [cond, uncond], x, timestep, model_options)
File "/kaggle/working/ComfyUI/comfy/samplers.py", line 218, in calc_cond_batch
output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
File "/kaggle/working/ComfyUI/comfy/model_base.py", line 97, in apply_model
model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
File "/kaggle/venv/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/kaggle/venv/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "/kaggle/working/ComfyUI/comfy/ldm/modules/diffusionmodules/openaimodel.py", line 850, in forward
h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator)
File "/kaggle/working/ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/sampling.py", line 109, in forward_timestep_embed
x = layer(x, context, transformer_options)
File "/kaggle/venv/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/kaggle/venv/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "/kaggle/working/ComfyUI/comfy/ldm/modules/attention.py", line 633, in forward
x = block(x, context=context[i], transformer_options=transformer_options)
File "/kaggle/venv/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/kaggle/venv/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "/kaggle/working/ComfyUI/comfy/ldm/modules/attention.py", line 460, in forward
return checkpoint(self._forward, (x, context, transformer_options), self.parameters(), self.checkpoint)
File "/kaggle/working/ComfyUI/comfy/ldm/modules/diffusionmodules/util.py", line 191, in checkpoint
return func(*inputs)
File "/kaggle/working/ComfyUI/custom_nodes/ComfyUI_TGate/TGate.py", line 124, in tgate_forward
x += n

Error when running git apply

git apply tgate.patch
error: comfy/ldm/modules/attention.py: No such file or directory
error: comfy/model_patcher.py: No such file or directory
error: comfy/samplers.py: No such file or directory

New error

(screenshot: QQ图片20240419121855)

I used the default workflow; during the sampling process the image gradually turns black until it is completely black. What is the problem?
