
comfyui_ella's Introduction

Note

Officially implemented here: https://github.com/TencentQQGYLab/ComfyUI-ELLA

ComfyUI_ELLA

ComfyUI Implementation of ELLA: Equip Diffusion Models with LLM for Enhanced Semantic Alignment


Note

As per the ELLA developers/researchers, only the SD 1.5 checkpoint has been released.

Installation

To install, navigate to the custom_nodes directory and, inside it, run git clone https://github.com/ExponentialML/ComfyUI_ELLA.git

Models

These models must be placed in the corresponding directories under models.

Example: ComfyUI/models/ella/model_file

The directories should be created automatically. If they are not, please create them manually.

  1. Place the ELLA model under a new folder ella: https://huggingface.co/QQGYLab/ELLA/blob/main/ella-sd1.5-tsc-t5xl.safetensors

  2. Create a folder called t5_model. Navigate into that folder (you must be in that directory), and run git clone https://huggingface.co/google/flan-t5-xl to download the T5 model.

  3. If you don't wish to use git, you can download each file individually by creating a folder t5_model/flan-t5-xl and downloading every file from the same repository into it, although I recommend git as it's easier. A scripted alternative is sketched after the directory summary below.

In summary, you should have the following model directory structure:

  • ComfyUI/models/ella/ella-sd1.5-tsc-t5xl.safetensors
  • ComfyUI/models/t5_model/flan-t5-xl/all_downloaded_t5_models
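
If you prefer a script to git, here is a download sketch using the huggingface_hub package (an assumption on my part, not part of this repo's instructions; pip install huggingface_hub first, and adjust the paths to your ComfyUI root):

    from huggingface_hub import hf_hub_download, snapshot_download

    # the single ELLA checkpoint file
    hf_hub_download("QQGYLab/ELLA", "ella-sd1.5-tsc-t5xl.safetensors",
                    local_dir="ComfyUI/models/ella")
    # the full flan-t5-xl repository
    snapshot_download("google/flan-t5-xl",
                      local_dir="ComfyUI/models/t5_model/flan-t5-xl")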

Usage

Workflow

To get started quickly, a workflow is provided in the workflow directory.

ELLA Loader


  • ella_model: The path to the ella checkpoint file.
  • t5_model: The path to the t5 model folder.

ELLA Text Encode


  • ella: The model loaded by the ELLA Loader.
  • text: Conditioning prompt. Prompt weighting behaves 1:1 with the standard conditioning nodes.
  • sigma: The required sigma for the prompt. It must match the KSampler settings. Without the provided workflow, this is initially a float widget; right click the node, convert sigma to an input, then connect the Get Sigma node (a sketch of what it computes follows below).
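
For reference, a minimal sketch of what a Get Sigma style helper computes, pieced together from the calc_sigma traceback further down this page; treat the exact signatures as assumptions, since they shift between ComfyUI versions:

    import comfy.samplers

    def get_sigma(model, sampler_name, scheduler, steps, step=0):
        # build the same sigma schedule the KSampler will use, then take the
        # sigma at the step the prompt is encoded for (step 0 by default)
        sampler = comfy.samplers.KSampler(
            model, steps=steps, device="cpu", sampler=sampler_name,
            scheduler=scheduler, denoise=1.0, model_options=model.model_options)
        # return a numpy array, which is what ELLA Text Encode expects
        return sampler.sigmas[step:step + 1].cpu().numpy()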

Support

All conditioning nodes should be supported, as well as prompt weighting and ControlNet.


Attribution

Thanks to the respective authors for open sourcing their work. Please follow their respective licensing.


comfyui_ella's Issues

Reference of models

It seems this node doesn't work when the ella and t5_model folders are referenced in extra model paths (extra_model_paths.yaml). This may be causing issues with certain installations of ComfyUI.
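
One possible workaround, assuming the node resolves its folders through ComfyUI's folder_paths registry (a hedged, untested sketch; add_model_folder_path exists in ComfyUI, but this node may look paths up differently):

    # register the extra locations before the node queries them
    import folder_paths
    folder_paths.add_model_folder_path("ella", "/extra/models/ella")
    folder_paths.add_model_folder_path("t5_model", "/extra/models/t5_model")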

Whatever I do, ELLA doesn't recognise the SentencePiece library

"Error occurred when executing LoadElla:

T5Tokenizer requires the SentencePiece library but it was not found in your environment. Checkout the instructions on the
installation page of its repo: https://github.com/google/sentencepiece#installation and follow the ones
that match your environment. Please note that you may need to restart your runtime after installation."

I have installed the SentencePiece library and really don't understand what the issue is. I am using ComfyUI portable. I followed another thread about issues with SentencePiece, but adding a requirements.txt and installing it that way didn't make a difference.

How can I make sure SentencePiece is actually recognised in Comfy's environment?
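
A quick way to check is to ask the exact interpreter ComfyUI runs under, e.g. by dropping these lines into a small custom node or startup script (standard library only):

    import sys
    import importlib.util

    print(sys.executable)                             # the Python ComfyUI actually uses
    print(importlib.util.find_spec("sentencepiece"))  # None means it is not installed there

With the portable build, packages must go into the embedded interpreter, e.g. python_embeded\python.exe -m pip install sentencepiece, not into a system-wide Python.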

"addmm_impl_cpu_" not implemented for 'Half'

I get the following error. Does anyone have an idea what could be causing this?
I cloned Kijai/flan-t5-xl-encoder-only-bf16 and installed the few requirements that were initially not found.

Error occurred when executing ELLATextEncode:

"addmm_impl_cpu_" not implemented for 'Half'

  File "/home/user/StabilityMatrix/Data/Packages/ComfyUI/execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "/home/user/StabilityMatrix/Data/Packages/ComfyUI/execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "/home/user/StabilityMatrix/Data/Packages/ComfyUI/execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "/home/user/StabilityMatrix/Data/Packages/ComfyUI/custom_nodes/ComfyUI_ELLA/ella.py", line 99, in encode
    cond = t5(text)
  File "/home/user/StabilityMatrix/Data/Packages/ComfyUI/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/user/StabilityMatrix/Data/Packages/ComfyUI/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/user/StabilityMatrix/Data/Packages/ComfyUI/custom_nodes/ComfyUI_ELLA/ella_model/model.py", line 268, in forward
    outputs = self.model(text_input_ids, attention_mask=attention_mask)
  File "/home/user/StabilityMatrix/Data/Packages/ComfyUI/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/user/StabilityMatrix/Data/Packages/ComfyUI/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/user/StabilityMatrix/Data/Packages/ComfyUI/venv/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 1980, in forward
    encoder_outputs = self.encoder(
  File "/home/user/StabilityMatrix/Data/Packages/ComfyUI/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/user/StabilityMatrix/Data/Packages/ComfyUI/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/user/StabilityMatrix/Data/Packages/ComfyUI/venv/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 1115, in forward
    layer_outputs = layer_module(
  File "/home/user/StabilityMatrix/Data/Packages/ComfyUI/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/user/StabilityMatrix/Data/Packages/ComfyUI/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/user/StabilityMatrix/Data/Packages/ComfyUI/venv/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 695, in forward
    self_attention_outputs = self.layer[0](
  File "/home/user/StabilityMatrix/Data/Packages/ComfyUI/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/user/StabilityMatrix/Data/Packages/ComfyUI/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/user/StabilityMatrix/Data/Packages/ComfyUI/venv/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 602, in forward
    attention_output = self.SelfAttention(
  File "/home/user/StabilityMatrix/Data/Packages/ComfyUI/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/user/StabilityMatrix/Data/Packages/ComfyUI/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/user/StabilityMatrix/Data/Packages/ComfyUI/venv/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 521, in forward
    query_states = shape(self.q(hidden_states))  # (batch_size, n_heads, seq_length, dim_per_head)
  File "/home/user/StabilityMatrix/Data/Packages/ComfyUI/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/user/StabilityMatrix/Data/Packages/ComfyUI/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/user/StabilityMatrix/Data/Packages/ComfyUI/venv/lib/python3.10/site-packages/torch/nn/modules/linear.py", line 114, in forward
    return F.linear(input, self.weight, self.bias)
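
This error (like the "LayerNormKernelImpl" one further down) typically means fp16 tensors are being run on the CPU, where PyTorch implements neither addmm nor layer_norm for Half. A minimal sketch of the usual workaround, assuming the loader's device/dtype selection can be changed (the T5TextEmbedder call is copied from the load_ella tracebacks on this page):

    import torch

    # use fp16 only when CUDA will actually execute the model; CPU kernels
    # such as addmm and layer_norm have no Half implementation
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    dtype = torch.float16 if device.type == "cuda" else torch.float32

    # hypothetical application inside LoadElla:
    # t5_model = T5TextEmbedder(t5_path).to(device, dtype)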

Semantic improvement effects are unstable

This plugin's semantic understanding is unstable. For example, with the prompt "There's a red cube on top of a blue sphere." it is hard to get a correct image, while another ELLA plugin on GitHub, ComfyUI-ELLA-wrapper, understands the prompt and generates the correct picture reliably. I don't know whether this problem can be solved.

Why I can't get sigma

Error occurred when executing ELLATextEncode:

expected np.ndarray (got float)

File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_ELLA\ella.py", line 100, in encode
cond_ella = ella(cond, timesteps=torch.from_numpy(sigma))
There is no option to connect the Get Sigma node's output.
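
The "expected np.ndarray (got float)" and "(got int)" reports on this page share one cause: ella.py hands sigma to torch.from_numpy(), which only accepts numpy arrays, while an unconverted widget delivers a plain Python float or int. Connecting the Get Sigma node is the intended fix; a defensive conversion would also work (a hypothetical patch, not the repo's code):

    import numpy as np
    import torch

    # accept a numpy array (from Get Sigma) or a bare float/int widget value
    sigma = np.asarray(sigma, dtype=np.float32).reshape(-1)
    cond_ella = ella(cond, timesteps=torch.from_numpy(sigma))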

RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'

got prompt
[rgthree] Using rgthree's optimized recursive execution.
model_type EPS
Using split attention in VAE
Using split attention in VAE
clip missing: ['clip_l.logit_scale', 'clip_l.transformer.text_projection.weight']
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:05<00:00, 1.82s/it]
Some weights of the model checkpoint at /Users/jasonhomme/Desktop/SDwebui/comfyui/ComfyUI/models/t5_model/flan-t5-xl were not used when initializing T5EncoderModel: ['decoder.block.2.layer.2.layer_norm.weight', 'decoder.block.6.layer.2.DenseReluDense.wi_0.weight', ..., 'decoder.embed_tokens.weight', 'decoder.final_layer_norm.weight', 'lm_head.weight'] (every decoder weight and the LM head; full list elided)

  • This IS expected if you are initializing T5EncoderModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
  • This IS NOT expected if you are initializing T5EncoderModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
!!! Exception during processing !!!
Traceback (most recent call last):
  File "/Users/jasonhomme/Desktop/SDwebui/comfyui/ComfyUI/execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "/Users/jasonhomme/Desktop/SDwebui/comfyui/ComfyUI/execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "/Users/jasonhomme/Desktop/SDwebui/comfyui/ComfyUI/execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "/Users/jasonhomme/Desktop/SDwebui/comfyui/ComfyUI/custom_nodes/ComfyUI_ELLA/ella.py", line 100, in encode
    cond_ella = ella(cond, timesteps=torch.from_numpy(sigma))
  File "/opt/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/opt/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/jasonhomme/Desktop/SDwebui/comfyui/ComfyUI/custom_nodes/ComfyUI_ELLA/ella_model/model.py", line 321, in forward
    encoder_hidden_states = self.connector(
  File "/opt/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/opt/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/jasonhomme/Desktop/SDwebui/comfyui/ComfyUI/custom_nodes/ComfyUI_ELLA/ella_model/model.py", line 229, in forward
    latents = p_block(x, latents, timestep_embedding=timestep_embedding)
  File "/opt/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/opt/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/jasonhomme/Desktop/SDwebui/comfyui/ComfyUI/custom_nodes/ComfyUI_ELLA/ella_model/model.py", line 176, in forward
    normed_latents = self.ln_1(latents, timestep_embedding)
  File "/opt/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/opt/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/jasonhomme/Desktop/SDwebui/comfyui/ComfyUI/custom_nodes/ComfyUI_ELLA/ella_model/model.py", line 136, in forward
    x = self.norm(x) * (1 + scale) + shift
  File "/opt/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/opt/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/opt/miniconda3/lib/python3.10/site-packages/torch/nn/modules/normalization.py", line 201, in forward
    return F.layer_norm(
  File "/opt/miniconda3/lib/python3.10/site-packages/torch/nn/functional.py", line 2546, in layer_norm
    return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'

Prompt executed in 80.20 seconds
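
This is the same fp16-on-CPU problem noted under the addmm issue above; on machines without CUDA (such as the Mac in this log), one hedged workaround is to keep the ELLA module in float32:

    import torch

    # hypothetical: fall back to float32 when no CUDA device is present
    if not torch.cuda.is_available():
        ella = ella.float()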

Tensors must have same number of dimensions: got 3 and 2

I'm getting this error when executing KSampler.

Error occurred when executing KSampler:

Tensors must have same number of dimensions: got 3 and 2

File "K:\ComfyUI\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI\ComfyUI\nodes.py", line 1344, in sample
return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI\ComfyUI\nodes.py", line 1314, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 22, in informative_sample
raise e
File "K:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 9, in informative_sample
return original_sample(*args, **kwargs) # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-AnimateLCM\animatediff\sampling.py", line 241, in motion_sample
return orig_comfy_sample(model, noise, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 278, in motion_sample
return orig_comfy_sample(model, noise, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\control_reference.py", line 47, in refcn_sample
return orig_comfy_sample(model, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI\ComfyUI\comfy\sample.py", line 37, in sample
samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI\ComfyUI\custom_nodes\ComfyUI_smZNodes\smZNodes.py", line 1427, in KSampler_sample
return _KSampler_sample(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI\ComfyUI\comfy\samplers.py", line 755, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI\ComfyUI\custom_nodes\ComfyUI_smZNodes\smZNodes.py", line 1450, in sample
return _sample(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI\ComfyUI\comfy\samplers.py", line 657, in sample
return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI\ComfyUI\comfy\samplers.py", line 644, in sample
output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI\ComfyUI\comfy\samplers.py", line 619, in inner_sample
self.conds = process_conds(self.inner_model, noise, self.conds, device, latent_image, denoise_mask, seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI\ComfyUI\comfy\samplers.py", line 571, in process_conds
conds[k] = encode_model_conds(model.extra_conds, conds[k], noise, device, k, latent_image=latent_image, denoise_mask=denoise_mask, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI\ComfyUI\comfy\samplers.py", line 489, in encode_model_conds
out = model_function(**params)
^^^^^^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI\ComfyUI\comfy\model_base.py", line 151, in extra_conds
adm = self.encode_adm(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "K:\ComfyUI\ComfyUI\comfy\model_base.py", line 334, in encode_adm
return torch.cat((clip_pooled.to(flat.device), flat), dim=1)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Error occurred when executing LoadElla: not a string

I am using the ELLA_Workflow.json file provided with the node.
Initially I got an error that "sentencepiece" was not installed, so I installed it; now I get:

Error occurred when executing LoadElla:

not a string

File "C:\Programs\sd\ComfyUI_nightly\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Programs\sd\ComfyUI_nightly\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Programs\sd\ComfyUI_nightly\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Programs\sd\ComfyUI_nightly\ComfyUI\custom_nodes\ComfyUI_ELLA\ella.py", line 68, in load_ella
t5_model = T5TextEmbedder(t5_path).to(self.device, self.dtype)
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Programs\sd\ComfyUI_nightly\ComfyUI\custom_nodes\ComfyUI_ELLA\ella_model\model.py", line 241, in init
self.tokenizer = T5Tokenizer.from_pretrained(pretrained_path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Programs\sd\ComfyUI_nightly\python_embeded\Lib\site-packages\transformers\tokenization_utils_base.py", line 2028, in from_pretrained
return cls.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^
File "C:\Programs\sd\ComfyUI_nightly\python_embeded\Lib\site-packages\transformers\tokenization_utils_base.py", line 2260, in from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Programs\sd\ComfyUI_nightly\python_embeded\Lib\site-packages\transformers\models\t5\tokenization_t5.py", line 166, in init
self.sp_model.Load(vocab_file)
File "C:\Programs\sd\ComfyUI_nightly\python_embeded\Lib\site-packages\sentencepiece_init
.py", line 961, in Load
return self.LoadFromFile(model_file)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Programs\sd\ComfyUI_nightly\python_embeded\Lib\site-packages\sentencepiece_init
.py", line 316, in LoadFromFile
return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

TypeError: expected np.ndarray (got float)

Using the provided workflow, but I am using Kijai's pruned model.

I am running the latest ComfyUI.

I get the following error:

model_type EPS
Using pytorch attention in VAE
Using pytorch attention in VAE
clip missing: ['clip_l.transformer.text_projection.weight']
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
!!! Exception during processing !!!
Traceback (most recent call last):
  File "C:\SD\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\SD\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\SD\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_ezXY\autoCastPatch.py", line 299, in map_node_over_list
    return _map_node_over_list(obj, input_data_all, func, allow_interrupt)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\SD\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\SD\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_ELLA\ella.py", line 100, in encode
    cond_ella = ella(cond, timesteps=torch.from_numpy(sigma))
                                     ^^^^^^^^^^^^^^^^^^^^^^^
TypeError: expected np.ndarray (got float)

expected np.ndarray (got int)

Hi, I have the latest version of ComfyUI, and I updated transformers and other dependencies to the latest versions as well, but it didn't help; I still get "expected np.ndarray (got int)" in the Encode node.

Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████| 2/2 [00:01<00:00, 1.09it/s]
!!! Exception during processing !!!
Traceback (most recent call last):
File "E:\SD\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\SD\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\SD\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\SD\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_ELLA\ella.py", line 100, in encode
cond_ella = ella(cond, timesteps=torch.from_numpy(sigma))
^^^^^^^^^^^^^^^^^^^^^^^
TypeError: expected np.ndarray (got int)

Error appears while processing ELLATextEncode

Hi!

After updating to the latest ComfyUI commit I get the following error message while the ELLATextEncode node is processing.

Is it an issue with the latest ComfyUI commit, or could it be an issue with my workflow?

Kind regards!

!!! Exception during processing !!!
Traceback (most recent call last):
  File "e:\StableDiffusion\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "e:\StableDiffusion\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "e:\StableDiffusion\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\StableDiffusion\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_ELLA\ella.py", line 100, in encode
    cond_ella = ella(cond, timesteps=torch.from_numpy(sigma))
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "e:\StableDiffusion\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "e:\StableDiffusion\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\StableDiffusion\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_ELLA\ella_model\model.py", line 321, in forward
    encoder_hidden_states = self.connector(
                            ^^^^^^^^^^^^^^^
  File "e:\StableDiffusion\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "e:\StableDiffusion\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\StableDiffusion\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_ELLA\ella_model\model.py", line 229, in forward
    latents = p_block(x, latents, timestep_embedding=timestep_embedding)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "e:\StableDiffusion\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "e:\StableDiffusion\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\StableDiffusion\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_ELLA\ella_model\model.py", line 176, in forward
    normed_latents = self.ln_1(latents, timestep_embedding)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "e:\StableDiffusion\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "e:\StableDiffusion\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\StableDiffusion\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_ELLA\ella_model\model.py", line 136, in forward
    x = self.norm(x) * (1 + scale) + shift
        ^^^^^^^^^^^^
  File "e:\StableDiffusion\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "e:\StableDiffusion\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "e:\StableDiffusion\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\normalization.py", line 201, in forward
    return F.layer_norm(
           ^^^^^^^^^^^^^
  File "e:\StableDiffusion\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\functional.py", line 2546, in layer_norm
    return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'

Prompt executed in 114.42 seconds

Error occurred when executing LoadElla: Error while deserializing header: HeaderTooLarge

Error occurred when executing LoadElla:

Error while deserializing header: HeaderTooLarge

File "E:\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "E:\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "E:\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "E:\ComfyUI\custom_nodes\ComfyUI_ELLA\ella.py", line 71, in load_ella
ella_state_dict = comfy.utils.load_torch_file(ella_path)
File "E:\ComfyUI\comfy\utils.py", line 14, in load_torch_file
sd = safetensors.torch.load_file(ckpt, device=device.type)
File "E:\ComfyUI\comfyui_venv\lib\site-packages\safetensors\torch.py", line 308, in load_file
with safe_open(filename, framework="pt", device=device) as f:


expected np.ndarray (got float)

When trying the simplest example:

Error occurred when executing ELLATextEncode:

expected np.ndarray (got float)

File "/mnt/e/Projekty/_AI/ComfyUI/execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "/mnt/e/Projekty/_AI/ComfyUI/execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/mnt/e/Projekty/_AI/ComfyUI/execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/mnt/e/Projekty/_AI/ComfyUI/custom_nodes/ComfyUI_ELLA/ella.py", line 100, in encode
cond_ella = ella(cond, timesteps=torch.from_numpy(sigma))

Facing issue: Error while deserializing header: HeaderTooLarge

!!! Exception during processing !!!
Traceback (most recent call last):
File "/teamspace/studios/this_studio/ComfyUI/execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "/teamspace/studios/this_studio/ComfyUI/execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/teamspace/studios/this_studio/ComfyUI/execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/teamspace/studios/this_studio/ComfyUI/custom_nodes/ComfyUI_ELLA/ella.py", line 68, in load_ella
t5_model = T5TextEmbedder(t5_path).to(self.device, self.dtype)
File "/teamspace/studios/this_studio/ComfyUI/custom_nodes/ComfyUI_ELLA/ella_model/model.py", line 240, in init
self.model = T5EncoderModel.from_pretrained(pretrained_path)
File "/home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3706, in from_pretrained
) = cls._load_pretrained_model(
File "/home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4091, in _load_pretrained_model
state_dict = load_state_dict(shard_file)
File "/home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/transformers/modeling_utils.py", line 503, in load_state_dict
with safe_open(checkpoint_file, framework="pt") as f:
safetensors_rust.SafetensorError: Error while deserializing header: HeaderTooLarge
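
HeaderTooLarge almost always means the .safetensors file on disk is not really a safetensors file, e.g. an HTML error page or a git-lfs pointer saved under the model's name; re-downloading usually fixes it. A small sanity check based on the published safetensors layout (an 8-byte little-endian header length followed by a JSON header):

    import json
    import struct

    def check_safetensors(path):
        with open(path, "rb") as f:
            (header_len,) = struct.unpack("<Q", f.read(8))
            header = json.loads(f.read(header_len))  # raises if not safetensors
        print(f"OK: {len(header)} entries, header is {header_len} bytes")

    check_safetensors("ComfyUI/models/ella/ella-sd1.5-tsc-t5xl.safetensors")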

Getting this error from the Get Sigma node

I can't figure out why I'm getting this error when I queue up.

Error occurred when executing GetSigma:

'ModelPatcher' object has no attribute 'model_sampling'

File "E:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_ELLA\ella.py", line 43, in calc_sigma
sampler = comfy.samplers.KSampler(model, steps=steps, device=device, sampler=sampler_name, scheduler=scheduler, denoise=1.0, model_options=model.model_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 657, in init
self.set_steps(steps, denoise)
File "E:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 678, in set_steps
self.sigmas = self.calculate_sigmas(steps).to(self.device)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 669, in calculate_sigmas
sigmas = calculate_sigmas_scheduler(self.model, self.scheduler, steps)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 623, in calculate_sigmas_scheduler
sigmas = simple_scheduler(model, steps)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 292, in simple_scheduler
s = model.model_sampling
^^^^^^^^^^^^^^^^^^^^
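
This looks like API drift between the node and newer ComfyUI: simple_scheduler reads model_sampling, which lives on the inner diffusion model, not on the ModelPatcher the node passes in. A hedged sketch of the kind of fix this would need (an assumption, not the repo's actual patch):

    import comfy.samplers

    # hypothetical: hand the KSampler the inner model, which carries
    # model_sampling, instead of the ModelPatcher wrapper
    sampler = comfy.samplers.KSampler(
        model.model, steps=steps, device=device, sampler=sampler_name,
        scheduler=scheduler, denoise=1.0, model_options=model.model_options)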

No requirements.txt file

There is no requirements.txt file, so the required dependencies are not installed and I get the error:

Error occurred when executing LoadElla:

T5Tokenizer requires the SentencePiece library but it was not found in your environment. Checkout the instructions on the
installation page of its repo: https://github.com/google/sentencepiece#installation and follow the ones
that match your environment. Please note that you may need to restart your runtime after installation.

Please add a requirements.txt file so that requirements are installed automatically into the venv. Thank you.
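
For reference, a minimal requirements.txt covering the third-party imports visible in the tracebacks above might contain (an assumption, not an official file; torch and safetensors ship with ComfyUI itself):

    transformers
    sentencepiece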
