
comfyui_fizznodes's People

Contributors

bradenm, brainsucker, fizzledorf, karrycharon, maxtran96, paero, wmsouza, xingren23


comfyui_fizznodes's Issues

No idea of the fix

Terminal error:
ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
File "F:\Stable-Diffusion\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "F:\Stable-Diffusion\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "F:\Stable-Diffusion\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "F:\Stable-Diffusion\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_FizzNodes\ScheduledNodes.py", line 77, in animate
animation_prompts = json.loads(inputText.strip())
File "json_init_.py", line 346, in loads
File "json\decoder.py", line 340, in decode
json.decoder.JSONDecodeError: Extra data: line 1 column 35 (char 34)
"
[screenshot attachment]
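The "Extra data" decode error means the schedule text stops being valid JSON partway through (at char 34 here), typically a stray quote or brace. A minimal sketch of the parsing step, assuming the node simply wraps the widget text in braces before json.loads (as the ScheduledNodes.py frame in the traceback suggests); the schedule text is a hypothetical example:

import json

# Hypothetical schedule text as typed into the node: one "frame": "prompt"
# pair per line, comma-separated, no comma after the last line.
text = '''"0": "a castle on a hill",
"12": "a castle on a hill at night"'''

try:
    # Assumption: the node wraps the widget text in braces before parsing.
    animation_prompts = json.loads("{" + text.strip() + "}")
    print(animation_prompts)
except json.JSONDecodeError as e:
    # "Extra data: line 1 column 35 (char 34)" style errors point at the
    # exact character where the text stops being valid JSON.
    print(f"malformed schedule: {e}")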

Batch Prompt Schedule doesn't work

Hi FizzleDorf, Batch Prompt Schedule doesn't work.
print output:
got prompt

Max Frames: 61
frame index: 0
Current Prompt: High detail, a girl, yellow dress
Next Prompt: High detail, a girl, red dress
Strength : 1.0

Max Frames: 61
frame index: 1
Current Prompt: High detail, a girl, yellow dress
Next Prompt: High detail, a girl, red dress
Strength : 0.9166666666666666
......
Max Frames: 61
frame index: 11
Current Prompt: High detail, a girl, yellow dress
Next Prompt: High detail, a girl, red dress
Strength : 0.08333333333333337

Max Frames: 61
frame index: 12
Current Prompt: High detail, a girl, red dress
Next Prompt: High detail, a girl, red dress
Strength : 1.0

Max Frames: 61
frame index: 13
Current Prompt: High detail, a girl, red dress
Next Prompt: High detail, a girl, red dress
Strength : 0.9795918367346939

Max Frames: 61
frame index: 14
Current Prompt: High detail, a girl, red dress
Next Prompt: High detail, a girl, red dress
Strength : 0.9591836734693877

My prompt:
[image attachment]

The result:
[image attachment]

No girl in a yellow dress appears; every frame shows the red dress.
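For reference, the Strength values in that log are consistent with plain linear interpolation between keyframes; a small sketch, assuming keyframes at 0 and 12 and the final segment running to max_frames = 61:

# Weight of the current prompt: 1.0 at its keyframe, falling linearly
# toward 0 at the next keyframe (hypothetical reconstruction of the log).
def strength(frame, cur_kf, next_kf):
    return (next_kf - frame) / (next_kf - cur_kf)

print(strength(1, 0, 12))    # 0.9166666666666666 (frame index 1)
print(strength(11, 0, 12))   # 0.0833...          (frame index 11)
print(strength(13, 12, 61))  # 0.9795918367346939 (frame index 13)

Since the weights do fall from 1.0 toward 0.0 as expected, the schedule math looks right; the transition apparently isn't reflected in the blended conditioning downstream.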

BatchFuncs.py error when enabling 'print_output'

services-comfy-1  | ERROR:root:Traceback (most recent call last):
services-comfy-1  |   File "/app/execution.py", line 154, in recursive_execute
services-comfy-1  |     output_data, output_ui = get_output_data(obj, input_data_all)
services-comfy-1  |   File "/app/execution.py", line 84, in get_output_data
services-comfy-1  |     return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
services-comfy-1  |   File "/app/execution.py", line 77, in map_node_over_list
services-comfy-1  |     results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
services-comfy-1  |   File "/app/custom_nodes/ComfyUI_FizzNodes/ScheduledNodes.py", line 300, in animate
services-comfy-1  |     return (BatchInterpolatePromptsSDXL(animation_promptsG, animation_promptsL, max_frames, clip,  app_text_G, app_text_L, pre_text_G, pre_text_L, pw_a, pw_b, pw_c, pw_d, width, height, crop_w, crop_h, target_width, target_height, print_output,),)
services-comfy-1  |   File "/app/custom_nodes/ComfyUI_FizzNodes/BatchFuncs.py", line 438, in BatchInterpolatePromptsSDXL
services-comfy-1  |     for i in range(len(cur_prompt_series)):
services-comfy-1  | NameError: name 'cur_prompt_series' is not defined. Did you mean: 'cur_prompt_series_G'?

Typo in ComfyUI_FizzNodes/BatchFuncs.py line 438?
https://github.com/FizzleDorf/ComfyUI_FizzNodes/blame/5458adafb7cb42a6b194d1ac47a07a7af98f1957/BatchFuncs.py#L438
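For reference, a minimal sketch of what the one-line fix presumably looks like, assuming the SDXL batch function was meant to iterate over the G prompt series, as the NameError's own "Did you mean" hint suggests:

# Stand-in for the real per-frame series built earlier in the function.
cur_prompt_series_G = ["prompt for frame %d" % f for f in range(4)]

# was: for i in range(len(cur_prompt_series)):   # NameError: not defined
for i in range(len(cur_prompt_series_G)):
    print(i, cur_prompt_series_G[i])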

After updating the node to the latest version, I get the following error. How can I solve it?

Error occurred when executing KSampler:

sliding_sampling_function() got multiple values for argument 'model_options'

File "H:\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "H:\ComfyUI_windows_portable\ComfyUI\execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "H:\ComfyUI_windows_portable\ComfyUI\execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
...

How is BatchValueSchedule supposed to work?

I have this simple workflow for video-to-video generation. Basically I just want to be able to define start_at_step for each frame of the video individually. It complains that I'm giving start_at_step a list instead of an int: '<' not supported between instances of 'list' and 'int'. How is BatchValueSchedule meant to be used?

[screenshot attachment]
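A sketch of the mismatch, under the assumption that BatchValueSchedule emits one value per frame (a list) while KSampler's start_at_step widget takes a single int; the schedule values below are hypothetical:

schedule = [10.0, 10.0, 12.0, 14.0]   # hypothetical per-frame values

# Handing the whole list to an int widget trips comparisons such as
# start_at_step < steps, hence "'<' not supported between 'list' and 'int'".
# Per-frame use means extracting one value per iteration:
for frame, value in enumerate(schedule):
    start_at_step = int(value)        # one int per frame
    print(frame, start_at_step)

In graph terms, a widget that accepts only a single int generally wants the non-batch ValueSchedule (one value per queue run) rather than the batch variant.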

BatchPromptSchedule "RuntimeError: Sizes of tensors must match except in dimension 0. Expected size 77 but got size 154 for tensor number 3 in the list."

I am using the latest commit, and the error in the title is raised at this line.

It seems the introduction of group_size = 4 in this commit is causing this bug.

Here's the input to illustrate the bug:

"0" :"a girl",

"15" :"female, Russian Gopnik, sport jacket, tracksuit, background symbol graffiti, ruined city, concrete, portrait, long red hair, human thoughts art, elegant fantasy, intricate, crisp quality, Bela Magyar, 35mm film, 35mm photography, 8k uhd, hdr, ultra-detailed, Wet black and orange color inks line art dreamy female portrait with lot of lace filigrees on black canvas illustration described in the perfect fractal (style of Vassily Kandinsky)",

"35" :"a girl"

Before keyframe 15, CLIP returns torch.Size([1, 77, 768]); after it, torch.Size([1, 154, 768]), which triggers the bug. I wonder why this commit is necessary. It says "VRAM usage improved", but this "optimization" doesn't reduce the total number of tensors stored in VRAM, which is still equal to max_frame. Should we remove it to fix the bug?
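One plausible fix, sketched below, assuming the per-frame conditionings are concatenated along dim 0: pad every tensor to the longest token length before torch.cat instead of concatenating unequal sizes (with the caveat about zero-padding raised in the addWeighted issue further down):

import torch

conds = [torch.randn(1, 77, 768), torch.randn(1, 154, 768)]  # hypothetical

max_tokens = max(c.shape[1] for c in conds)
padded = []
for c in conds:
    pad = c.new_zeros(c.shape[0], max_tokens - c.shape[1], c.shape[2])
    padded.append(torch.cat([c, pad], dim=1))

batch = torch.cat(padded, dim=0)   # [2, 154, 768] -- sizes now match
print(batch.shape)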

When I connect the Batch Value Schedule to the Load LoRA node, I get an error.

When I connected the Batch Value Schedule to the Load LoRA node's strength, I got the error below. Could I be missing something?

[screenshot attachment]

missing {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}
left over keys: dict_keys(['cond_stage_model.clip_l.transformer.text_model.embeddings.position_ids'])
ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
File "C:\Comfy\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Comfy\ComfyUI_windows_portable\ComfyUI\execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Comfy\ComfyUI_windows_portable\ComfyUI\execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Comfy\ComfyUI_windows_portable\ComfyUI\nodes.py", line 555, in load_lora
if strength_model == 0 and strength_clip == 0:
^^^^^^^^^^^^^^^^^^^
File "C:\Comfy\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pandas\core\generic.py", line 1519, in nonzero
raise ValueError(
ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().

BatchPromptSchedule tensor match error

I'm getting this error when using the BatchPromptSchedule node and the prompts are different lengths:

"Error occurred when executing BatchPromptSchedule:
Sizes of tensors must match except in dimension 0. Expected size 154 but got size 77 for tensor number 18 in the list."

[screenshot attachment: 231017-131143-chrome]

Do the prompts need to contain the same number of 77-token chunks, or is this a bug?

dtype: float64 has forbidden control characters.

Error occurred when executing BatchPromptScheduleLatentInput:

Expression 0 1.25
1 1.25
2 1.25
3 1.25
4 1.24
5 1.24
6 1.24
7 1.24
8 1.24
9 1.24
10 1.24
11 1.24
12 1.24
13 1.24
14 1.24
15 1.24
16 1.24
17 1.24
18 1.24
19 1.24
20 1.24
21 1.25
22 1.25
23 1.25
24 1.25
25 1.25
26 1.25
27 1.25
28 1.25
29 1.25
30 1.25
31 1.25
dtype: float64 has forbidden control characters.

I am using the latest commit, but the error still shows up. It should have been fixed in that commit; I don't know why it is back.
Here is the previous issue on this error: #60
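A sketch of what the message implies, assuming a whole pandas Series ended up where numexpr (which FizzNodes uses for expressions) expected a single expression string: str(Series) contains newlines, which numexpr rejects as forbidden control characters.

import numexpr
import pandas as pd

series = pd.Series([1.25] * 32)        # hypothetical scheduled values

try:
    numexpr.evaluate(str(series))      # "0    1.25\n1    1.25\n...dtype: float64"
except ValueError as e:
    print(e)                           # ... has forbidden control characters.

# Evaluating one scalar entry at a time parses fine:
print(numexpr.evaluate(str(series.iloc[0])))   # 1.25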

Passing the batch scheduler through Python breaks

I've been working with the websocket script in Python for a few months now and have had great success generating through workflows via scripting. After digging more into animation workflows, I'm running into an issue and I'm hoping someone has a solution.

When passing prompt data, each entry is written as a frame number and the prompt that occurs at that frame, followed by a comma. That continues, one entry per line (return character), until all desired frame numbers are entered; the last line does not need a comma.

For reference:
"0" :"graveyard ",
"16" :"dark cold road, dead trees"

[screenshot attachment]

When the data from this node is passed using the websocket Python example, the console shows errors that the JSON load failed on unexpected characters. I'm assuming it does not like all the extra characters ( " , ; ) and that they break the injected load.

I guess my question is: does ComfyUI support a different method of listing "frame: prompt," data, possibly with characters that won't break the rules of Python? Included are some errors pulled from the generation, and an escaping sketch follows the attachments below.

data.json

[screenshot attachment]
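A sketch of one escaping approach, assuming the workflow was exported in API format and that the schedule widget on the BatchPromptSchedule node is named "text" (the node id below is hypothetical): let json.dumps do the quoting instead of splicing the raw schedule into a format string.

import json

schedule_text = '"0" :"graveyard ",\n"16" :"dark cold road, dead trees"'

with open("workflow_api.json") as f:               # hypothetical exported workflow
    workflow = json.load(f)

workflow["12"]["inputs"]["text"] = schedule_text   # "12": hypothetical node id

payload = json.dumps({"prompt": workflow})         # quotes/newlines escaped here
print(payload[:120])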

FizzNode's addWeighted() zero-padding seems wrong

These two lines pad with zeros for the token embedding when the second prompt is longer.

However, the embeddings CLIP outputs for padding tokens are non-zero, as shown by the sum over the 768 dimensions being around -86 (the following output was obtained by turning the debugger on at the lines above):

t0.shape
torch.Size([1, 154, 768])

t0.sum(2)   # only the first 3 tokens of t0 are actual text; everything else is padding.
tensor([[-80.5575, -84.9922, -86.5110, -89.2761, -88.5500, -87.9479, -87.4741,
         -87.3303, -87.5545, -87.9854, -88.1384, -88.2550, -88.3028, -88.2968,
         -88.2977, -88.2566, -88.2179, -88.1763, -88.1114, -88.0569, -87.9825,
         -87.9123, -87.8530, -87.7786, -87.7417, -87.6993, -87.6719, -87.6595,
         -87.6190, -87.5953, -87.5536, -87.5174, -87.5034, -87.4509, -87.4293,
         -87.4009, -87.3871, -87.4080, -87.3753, -87.3691, -87.3749, -87.3128,
         -87.2906, -87.2220, -87.1898, -87.1448, -87.1336, -87.1117, -87.1122,
         -87.1306, -87.1080, -87.1134, -87.0603, -87.0418, -86.9933, -86.9530,
         -86.9266, -86.8840, -86.8813, -86.8558, -86.8131, -86.7775, -86.7166,
         -86.7124, -86.6881, -86.6805, -86.6832, -86.6497, -86.6852, -86.6795,
         -86.7144, -86.6964, -86.6483, -86.6575, -86.5954, -86.6392, -86.5924,
           0.0000,   0.0000,   0.0000,   0.0000,   0.0000,   0.0000,   0.0000,
           0.0000,   0.0000,   0.0000,   0.0000,   0.0000,   0.0000,   0.0000,
           0.0000,   0.0000,   0.0000,   0.0000,   0.0000,   0.0000,   0.0000,
           0.0000,   0.0000,   0.0000,   0.0000,   0.0000,   0.0000,   0.0000,
           0.0000,   0.0000,   0.0000,   0.0000,   0.0000,   0.0000,   0.0000,
           0.0000,   0.0000,   0.0000,   0.0000,   0.0000,   0.0000,   0.0000,
           0.0000,   0.0000,   0.0000,   0.0000,   0.0000,   0.0000,   0.0000,
           0.0000,   0.0000,   0.0000,   0.0000,   0.0000,   0.0000,   0.0000,
           0.0000,   0.0000,   0.0000,   0.0000,   0.0000,   0.0000,   0.0000,
           0.0000,   0.0000,   0.0000,   0.0000,   0.0000,   0.0000,   0.0000,
           0.0000,   0.0000,   0.0000,   0.0000,   0.0000,   0.0000,   0.0000]])

Therefore this zero-padding seems incorrect. I think the correct way would be to pad the original string with the padding token CLIP itself uses, but I am not sure how to do that. Thoughts?
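One way to realize that suggestion, sketched under the assumption that clip is ComfyUI's CLIP object (clip.tokenize / clip.encode_from_tokens): pad with the embedding CLIP produces for an empty prompt, whose trailing rows are the padding-token embeddings, instead of zero rows.

import torch

def pad_like_clip(t_short, t_long, clip):
    # Embedding of an empty prompt: one 77-token block of start/end/pad tokens.
    empty_cond, _ = clip.encode_from_tokens(clip.tokenize(""), return_pooled=True)
    missing = t_long.shape[1] - t_short.shape[1]
    reps = -(-missing // empty_cond.shape[1])          # ceil division
    pad = empty_cond.repeat(1, reps, 1)[:, :missing, :]
    return torch.cat([t_short, pad], dim=1)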

local variable 'value' referenced before assignment

The "local variable 'value' referenced before assignment" error pops up.

It says it couldn't get the value in ScheduledNodes.py:
key_frame_series[i] = numexpr.evaluate(value) if not is_single_string else self.sanitize_value(value)

[screenshot attachment]

I'm at a loss :(

Impact's "ConditionalStopIteration" & Max/Current Frame "noodles" not connecting to other integer nodes

Hello! Love these nodes btw!

Would you mind throwing a mention in the docs that the "ComfyUI Impact Pack" custom nodes by Dr.Lt.Data can be used to stop iterations after the max frame threshold? I ran into a looping problem while trying to set up a workflow and assume others might too. (Example in Ref)

Also, the Max and Current Frame integer outputs don't seem to be able to connect to other inputs, although I was able to use a Value Scheduler with "t" and "max_f" as a workaround (see the sketch after the screenshot).

Ref:
[screenshot attachment]
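A sketch of that workaround, using FizzNodes' expression variables (t is the current frame, max_f the max frames), so a ValueSchedule can stand in for the Current/Max Frame outputs; the schedule strings are hypothetical examples typed into a ValueSchedule node:

current_frame_schedule = "0: (t)"         # emits the frame index itself
normalized_schedule = "0: (t / max_f)"    # sweeps 0.0 -> 1.0 over the run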

OMG I have discovered the Holy Grail!!!!!

Not an issue, but I just wanted to thank you for your nodes, particularly the prompt scheduler. I am currently developing a custom workflow along with "ComfyRoll." It is a variation on the Deforum Hybrid Video function. We were stymied as to how to do the prompt scheduling that we needed, until Suzie stumbled across your nodes! This is exactly what we needed and more! May much good fortune rain down upon you! Thanks for being awesome!

PromptSchedule crashing on invalid index to scalar variable

I read that you planned on releasing a full workflow for the example in the README.md. Is that coming soon? The partial example you do show does not work when I add the missing pieces, or at least what I am guessing are the missing pieces. I just get a PromptSchedule error no matter what I put in the Value_Schedule. I have tried a single entry of 0:(0.95) and 120 entries from the Keyframe string generator.

Error occurred when executing PromptSchedule:

invalid index to scalar variable.

ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
  File "/home/jw/store/src/ComfyUI/execution.py", line 154, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "/home/jw/store/src/ComfyUI/execution.py", line 84, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "/home/jw/store/src/ComfyUI/execution.py", line 77, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "/home/jw/store/src/ComfyUI/custom_nodes/ComfyUI_FizzNodes/ScheduledNodes.py", line 81, in animate
    pos_cur_prompt, pos_nxt_prompt, weight = interpolate_prompt_series(pos, max_frames, start_frame, pre_text, app_text, pw_a,
  File "/home/jw/store/src/ComfyUI/custom_nodes/ComfyUI_FizzNodes/BatchFuncs.py", line 157, in interpolate_prompt_series
    cur_prompt_series[i] = prepare_batch_prompt(cur_prompt_series[i], max_frames, i, prompt_weight_1[i],
IndexError: invalid index to scalar variable.

In your examples, you use both 'pw_a' and `pw_a` (single quote vs backtick). What is the correct usage, or does it even matter?

The error begins around line 161 in BatchFuncs.py, but the value of prompt_weight_1[i] is being read correctly, so the problem is inside PromptSchedule, or so it appears.
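A sketch of the failure mode the traceback points at, assuming a one-entry value schedule leaves prompt_weight_1 as a bare scalar, so prompt_weight_1[i] indexes a scalar; broadcasting it to max_frames first would avoid the crash:

import numpy as np

max_frames = 120
prompt_weight_1 = np.float64(0.95)    # scalar from a one-entry schedule

# prompt_weight_1[13] here raises: IndexError: invalid index to scalar variable.
if np.ndim(prompt_weight_1) == 0:
    prompt_weight_1 = np.full(max_frames, float(prompt_weight_1))

print(prompt_weight_1[13])            # 0.95 -- safe after broadcasting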

I tried so many times and I don't know what went wrong: IndexError: invalid index to scalar variable.

D:\SD\Comfyui\ComfyUI_portable\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_FizzNodes\BatchFuncs.py:129: FutureWarning: Setting an item of incompatible dtype is deprecated and will raise in a future error of pandas. Value '' has dtype incompatible with float64, please explicitly cast to a compatible dtype first.
nxt_prompt_series[f] = ''
ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
File "D:\SD\Comfyui\ComfyUI_portable\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "D:\SD\Comfyui\ComfyUI_portable\ComfyUI_windows_portable\ComfyUI\execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "D:\SD\Comfyui\ComfyUI_portable\ComfyUI_windows_portable\ComfyUI\execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "D:\SD\Comfyui\ComfyUI_portable\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_FizzNodes\ScheduledNodes.py", line 124, in animate
pos_cur_prompt, pos_nxt_prompt, weight = interpolate_prompt_series(pos, max_frames, start_frame, pre_text, app_text, pw_a, pw_b, pw_c, pw_d, print_output)
File "D:\SD\Comfyui\ComfyUI_portable\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_FizzNodes\BatchFuncs.py", line 167, in interpolate_prompt_series
cur_prompt_series[index_offset] = prepare_batch_prompt(cur_prompt_series[i], max_frames, i, prompt_weight_1[i],
IndexError: invalid index to scalar variable.

ComfyUI Revision: 1699 [0cf4e869] | Released on '2023-11-17'

[screenshot attachment]

Prompt Schedule SDXL node missing 1 required positional argument: 'print_output'

Hi, I think the "Prompt Schedule SDXL" node broke somehow; the function seems to expect "print_output", but there's no corresponding toggle in the node UI.
The regular "Prompt Schedule" node has a toggle for print_output and works just fine.

My Error log:

Error occurred when executing PromptScheduleEncodeSDXL:

interpolate_prompts_SDXL() missing 1 required positional argument: 'print_output'

File "E:\ComfyUI\ComfyUI\execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "E:\ComfyUI\ComfyUI\execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "E:\ComfyUI\ComfyUI\execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "E:\ComfyUI\ComfyUI\custom_nodes\ComfyUI_FizzNodes\ScheduledNodes.py", line 374, in animate
return (interpolate_prompts_SDXL(animation_promptsG, animation_promptsL, max_frames, current_frame, clip, app_text_G, app_text_L, pre_text_G, pre_text_L, pw_a, pw_b, pw_c, pw_d, width, height, crop_w, crop_h, target_width, target_height, ),)

FizzNodes not working since last ComfyUI update.

Since updating today, I get the following error with ComfyUI_FizzNodes:

  File "D:\apps\SD-WebUI\ComfyUI\custom_nodes\ComfyUI_FizzNodes\__init__.py", line 57, in <module>
    from .ScheduledNodes import ValueSchedule, PromptSchedule, PromptScheduleNodeFlow, PromptScheduleNodeFlowEnd, PromptScheduleEncodeSDXL #, PromptScheduleGLIGEN
  File "D:\apps\SD-WebUI\ComfyUI\custom_nodes\ComfyUI_FizzNodes\ScheduledNodes.py", line 11, in <module>
    from .ScheduleFuncs import check_is_number, interpolate_prompts, interpolate_prompts_SDXL, PoolAnimConditioning, PoolAnimConditioningGligen
ImportError: cannot import name 'PoolAnimConditioningGligen' from 'ComfyUI_FizzNodes.ScheduleFuncs' (D:\apps\SD-WebUI\ComfyUI\custom_nodes\ComfyUI_FizzNodes\ScheduleFuncs.py)

Cannot import D:\apps\SD-WebUI\ComfyUI\custom_nodes\ComfyUI_FizzNodes module for custom nodes: cannot import name 'PoolAnimConditioningGligen' from 'ComfyUI_FizzNodes.ScheduleFuncs' (D:\apps\SD-WebUI\ComfyUI\custom_nodes\ComfyUI_FizzNodes\ScheduleFuncs.py)

Getting an INT error when connected to a float...

Hi there, I've been trying to get your value scheduler to work for the better part of the evening. I am attempting to drive the float input of another node (ComfyRoll LoRA Stacker) to control the model strength, which is a float. When it initializes it gives this error, which seems curious since I'm not using the INT output. Have you any clue why this isn't working for me? BTW, I have been using your main scheduler in several workflows, and it is simply amazing!

Error occurred when executing ValueSchedule:

'int' object is not callable

File "E:\StableSwarm\StableSwarmUI\dlbackend\comfy\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\StableSwarm\StableSwarmUI\dlbackend\comfy\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\StableSwarm\StableSwarmUI\dlbackend\comfy\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\StableSwarm\StableSwarmUI\dlbackend\comfy\ComfyUI\custom_nodes\ComfyUI_FizzNodes\ScheduledNodes.py", line 232, in animate
t = self.get_inbetweens(self.parse_key_frames(text, max_frames), max_frames)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\StableSwarm\StableSwarmUI\dlbackend\comfy\ComfyUI\custom_nodes\ComfyUI_FizzNodes\ScheduledNodes.py", line 279, in parse_key_frames
frame = int(self.sanitize_value(frameParam[0])) if check_is_number(self.sanitize_value(frameParam[0].strip())) else int(numexpr.evaluate(frameParam[0].strip().replace("'","",1).replace('"',"",1)[::-1].replace("'","",1).replace('"',"",1)[::-1]))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\StableSwarm\StableSwarmUI\dlbackend\comfy\python_embeded\Lib\site-packages\numexpr\necompiler.py", line 943, in evaluate
raise e
File "E:\StableSwarm\StableSwarmUI\dlbackend\comfy\python_embeded\Lib\site-packages\numexpr\necompiler.py", line 851, in validate
_names_cache[expr_key] = getExprNames(ex, context)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\StableSwarm\StableSwarmUI\dlbackend\comfy\python_embeded\Lib\site-packages\numexpr\necompiler.py", line 714, in getExprNames
ex = stringToExpression(text, {}, context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\StableSwarm\StableSwarmUI\dlbackend\comfy\python_embeded\Lib\site-packages\numexpr\necompiler.py", line 300, in stringToExpression
ex = eval(c, names)
^^^^^^^^^^^^^^
File "", line 1, in

The strength doesn't change when using the SDXL batch prompt schedule

[screenshot attachment]
print output: Max Frames: 36
Current Prompt G: (masterpiece,best quality:1.2),(energy flow,magic:1.3),mysterious,bird,shot of peahen, super-detailed intricate, taken by David LaChapelle, ligtning, bioluminescent, reflecting,smoke, particles,from side,a peacock spreads its tail feathers (ice:1.7)
Current Prompt L: (masterpiece,best quality:1.2),(energy flow,magic:1.3),mysterious,bird,shot of peahen, super-detailed intricate, taken by David LaChapelle, ligtning, bioluminescent, reflecting,smoke, particles,from side,a peacock spreads its tail feathers (ice:1.7)
Next Prompt G: (masterpiece,best quality:1.2),(energy flow,magic:1.3),mysterious,bird,shot of peahen, super-detailed intricate, taken by David LaChapelle, ligtning, bioluminescent, reflecting,smoke, particles,from side,a peacock spreads its tail feathers (lightning:1.7)
Next Prompt L : (masterpiece,best quality:1.2),(energy flow,magic:1.3),mysterious,bird,shot of peahen, super-detailed intricate, taken by David LaChapelle, ligtning, bioluminescent, reflecting,smoke, particles,from side,a peacock spreads its tail feathers (lightning:1.7)

This repeats 36 times, and I couldn't find any difference between the frames.

'float' object is not subscriptable

No matter what I do I get the following error. I am using a workflow that I executed successfully 4 weeks ago.

Error occurred when executing BatchPromptSchedule:

'float' object is not subscriptable

File "F:\AI_Art\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "F:\AI_Art\ComfyUI_windows_portable\ComfyUI\execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "F:\AI_Art\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_ezXY\autoCastPatch.py", line 292, in map_node_over_list
return _map_node_over_list(obj, input_data_all, func, allow_interrupt)
File "F:\AI_Art\ComfyUI_windows_portable\ComfyUI\execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "F:\AI_Art\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_FizzNodes\ScheduledNodes.py", line 107, in animate
cur_prompt, nxt_prompt, weight = interpolate_prompt_series(animation_prompts, max_frames, pre_text,
File "F:\AI_Art\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_FizzNodes\BatchFuncs.py", line 126, in interpolate_prompt_series
cur_prompt_series[i] = prepare_batch_prompt(cur_prompt_series[i], max_frames, i, prompt_weight_1[i],

Values lower than 1.00 denoise sometimes produce an error in KSampler?

I believe that when I use a denoise value lower than 1.0 in the KSampler of a video2video workflow I downloaded, it sometimes spits out this error. It's the only thing I change. Any ideas?

Error occurred when executing BatchPromptSchedule:

Expecting property name enclosed in double quotes: line 1 column 2 (char 1)

File "C:\Users\NewPC\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "C:\Users\NewPC\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "C:\Users\NewPC\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "C:\Users\NewPC\Documents\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_FizzNodes\ScheduledNodes.py", line 120, in animate
animation_prompts = json.loads(inputText.strip())
File "json_init_.py", line 346, in loads
File "json\decoder.py", line 337, in decode
File "json\decoder.py", line 353, in raw_decode

Issue with "Expecting property name enclosed in double quotes" Error

Hi - I hope this post finds you well!

I'm writing to seek assistance with a challenging error that I've encountered while working on my AnimateDiffusion ComfyUI project. Any help or guidance would be greatly appreciated.

Error occurred when executing BatchPromptSchedule: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)

File "C:\Users\home\ComfyUI\ComfyUI\execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "C:\Users\home\ComfyUI\ComfyUI\execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "C:\Users\home\ComfyUI\ComfyUI\execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "C:\Users\home\ComfyUI\ComfyUI\custom_nodes\ComfyUI_FizzNodes\ScheduledNodes.py", line 122, in animate
animation_prompts = json.loads(inputText.strip())
File "json\__init__.py", line 346, in loads
File "json\decoder.py", line 337, in decode
File "json\decoder.py", line 353, in raw_decode

Thanks so much!!

BUG: Text Node inputs fail

When I convert the text, app_text, or pre_text widgets to inputs, the connected text nodes throw an error about length or something. I've been using the TTN Text node. I tried converting to STRING in between, but it still fails.

When I convert text-to-string (WAS), the generation begins but the end result is a black box/image.

Even WAS Text Multiline > WAS String gives an error:

RuntimeError: Sizes of tensors must match except in dimension 0. Expected size 154 but got size 77 for tensor number 5 in the list.

Loopback wave

Hi, sorry, not really an issue, but I was searching for a way to get the loopback wave script for ComfyUI and was pointed in this direction. I just wanted to ask: can I achieve batch image outputs similar to the loopback wave script from auto1111 with this ComfyUI extension? I'm sure people will make tutorials soon; it's all very new and exciting with SDXL.

Cannot import E:\comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_FizzNodes module for custom nodes: No module named 'pandas._libs.pandas_parser'

Traceback (most recent call last):
File "E:\comfyui\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1735, in load_custom_node
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "E:\comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_FizzNodes\__init__.py", line 57, in <module>
from .ScheduledNodes import (
File "E:\comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_FizzNodes\ScheduledNodes.py", line 7, in <module>
import pandas as pd
File "E:\comfyui\ComfyUI_windows_portable\python_embeded\lib\site-packages\pandas\__init__.py", line 59, in <module>
from pandas.core.api import (
File "E:\comfyui\ComfyUI_windows_portable\python_embeded\lib\site-packages\pandas\core\api.py", line 1, in <module>
from pandas._libs import (
File "E:\comfyui\ComfyUI_windows_portable\python_embeded\lib\site-packages\pandas\_libs\__init__.py", line 16, in <module>
import pandas._libs.pandas_parser # noqa: E501 # isort: skip # type: ignore[reportUnusedImport]
ModuleNotFoundError: No module named 'pandas._libs.pandas_parser'

Cannot import E:\comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_FizzNodes module for custom nodes: No module named 'pandas._libs.pandas_parser'

Using text box on pre_text leads to an error

Error:

Error occurred when executing BatchPromptSchedule:
Sizes of tensors must match except in dimension 0. Expected size 77 but got size 154 for tensor number 15 in the list.

Description:
I am using Derfuu_ComfyUI_ModdedNodes to write pre_text into the Batch Prompt Schedule node, but it gives me an error.

When I get the error:
- If I use the Batch Prompt Schedule node and write text into pre_text using a text box.
- If I use the same setup on a simple Prompt Schedule node, it works fine, but gives a less good result.
- If I write directly into the Batch Prompt Schedule node's pre_text box, it also gives an error.
- If I am not using pre_text, everything works fine.
- If I write it in the main prompt text box and copy it to all frames, it gives an error if the prompt is too long.
- It also works fine in all cases if I keep the pre_text to about 4 words. Example: "long hair, red hair" works, but "1girl, solo, long grey hair, grey eyes, black sweater," gives an error.

BatchPromptSchedule has no effect

Kubuntu system, ComfyUI shares a venv with Automatic1111, RTX 3060 12GB, 32GB RAM, xformers enabled

BatchPromptSchedule has no influence on the sampler; it seems as if only the last prompt is taken into account. I have tried it with different diffusion models; motion comes out as expected, but there is no change in the generated images. I removed and reinstalled FizzNodes, which had no effect. There are no errors in the console output.

Any help would be appreciated.

I attached two examples and the outputs from my system
AnimateDiff_00062_
AnimateDiff_00065_
BatchPromptTest01
BatchPromptTest02

Is it possible to create a LoRA weight batch scheduling node?

I'd really like to do the LoRA weight batch scheduling and I think a lot of people would too, because there are some Slider LoRAs that would work well with this function.

If this is already possible, I'd like to ask for help, as I've spent a lot of time researching and haven't found an answer.

Thanks for the great work.

Max Frames 9999

Hi! I'm working on a large render output with the batch scheduler but I'm running into a 9999 max frame error. Is there a way to exceed this?
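For context, a sketch of where a 9999 cap typically lives in a ComfyUI custom node, assuming FizzNodes declares its widget the standard way: INT inputs carry a "max" bound enforced by the UI, so raising it in the node definition (the class below is hypothetical) lifts the limit.

class ExampleScheduleNode:          # hypothetical node definition
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {
            # raising "max" here lets the widget accept larger frame counts
            "max_frames": ("INT", {"default": 120, "min": 1, "max": 999999}),
        }}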

Dropping prompt data in BatchPromptSchedule

This has happened to me previously, but I hadn't been able to replicate it. For some reason the BPS node runs properly the first time; after that, the BPS works with garbage scheduling data, and then we load ADE. I'll be posting this issue in both repos, as I am not sure which node might be causing this interaction. I've included a pretty lengthy snippet, but I didn't want to leave anything out.

got prompt
model_type EPS
adm 0
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
missing {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}
left over keys: dict_keys(['alphas_cumprod', 'alphas_cumprod_prev', 'betas', 'log_one_minus_alphas_cumprod', 'model_ema.decay', 'model_ema.diffusion_modelinput_blocks00bias', 'model_ema.diffusion_modelinput_blocks00weight', ... several hundred further 'model_ema.diffusion_model*' keys, truncated in the original log ...
'model_ema.diffusion_modeloutput_blocks41transformer_blocks0ffnet2weight', 'model_ema.diffusion_modeloutput_blocks41transformer_blocks0norm1bias', 'model_ema.diffusion_modeloutput_blocks41transformer_blocks0norm1weight', 'model_ema.diffusion_modeloutput_blocks41transformer_blocks0norm2bias', 'model_ema.diffusion_modeloutput_blocks41transformer_blocks0norm2weight', 'model_ema.diffusion_modeloutput_blocks41transformer_blocks0norm3bias', 'model_ema.diffusion_modeloutput_blocks41transformer_blocks0norm3weight', 'model_ema.diffusion_modeloutput_blocks50emb_layers1bias', 'model_ema.diffusion_modeloutput_blocks50emb_layers1weight', 'model_ema.diffusion_modeloutput_blocks50in_layers0bias', 'model_ema.diffusion_modeloutput_blocks50in_layers0weight', 'model_ema.diffusion_modeloutput_blocks50in_layers2bias', 'model_ema.diffusion_modeloutput_blocks50in_layers2weight', 'model_ema.diffusion_modeloutput_blocks50out_layers0bias', 'model_ema.diffusion_modeloutput_blocks50out_layers0weight', 'model_ema.diffusion_modeloutput_blocks50out_layers3bias', 'model_ema.diffusion_modeloutput_blocks50out_layers3weight', 'model_ema.diffusion_modeloutput_blocks50skip_connectionbias', 'model_ema.diffusion_modeloutput_blocks50skip_connectionweight', 'model_ema.diffusion_modeloutput_blocks51normbias', 'model_ema.diffusion_modeloutput_blocks51normweight', 'model_ema.diffusion_modeloutput_blocks51proj_inbias', 'model_ema.diffusion_modeloutput_blocks51proj_inweight', 'model_ema.diffusion_modeloutput_blocks51proj_outbias', 'model_ema.diffusion_modeloutput_blocks51proj_outweight', 'model_ema.diffusion_modeloutput_blocks51transformer_blocks0attn1to_kweight', 'model_ema.diffusion_modeloutput_blocks51transformer_blocks0attn1to_out0bias', 'model_ema.diffusion_modeloutput_blocks51transformer_blocks0attn1to_out0weight', 'model_ema.diffusion_modeloutput_blocks51transformer_blocks0attn1to_qweight', 'model_ema.diffusion_modeloutput_blocks51transformer_blocks0attn1to_vweight', 'model_ema.diffusion_modeloutput_blocks51transformer_blocks0attn2to_kweight', 'model_ema.diffusion_modeloutput_blocks51transformer_blocks0attn2to_out0bias', 'model_ema.diffusion_modeloutput_blocks51transformer_blocks0attn2to_out0weight', 'model_ema.diffusion_modeloutput_blocks51transformer_blocks0attn2to_qweight', 'model_ema.diffusion_modeloutput_blocks51transformer_blocks0attn2to_vweight', 'model_ema.diffusion_modeloutput_blocks51transformer_blocks0ffnet0projbias', 'model_ema.diffusion_modeloutput_blocks51transformer_blocks0ffnet0projweight', 'model_ema.diffusion_modeloutput_blocks51transformer_blocks0ffnet2bias', 'model_ema.diffusion_modeloutput_blocks51transformer_blocks0ffnet2weight', 'model_ema.diffusion_modeloutput_blocks51transformer_blocks0norm1bias', 'model_ema.diffusion_modeloutput_blocks51transformer_blocks0norm1weight', 'model_ema.diffusion_modeloutput_blocks51transformer_blocks0norm2bias', 'model_ema.diffusion_modeloutput_blocks51transformer_blocks0norm2weight', 'model_ema.diffusion_modeloutput_blocks51transformer_blocks0norm3bias', 'model_ema.diffusion_modeloutput_blocks51transformer_blocks0norm3weight', 'model_ema.diffusion_modeloutput_blocks52convbias', 'model_ema.diffusion_modeloutput_blocks52convweight', 'model_ema.diffusion_modeloutput_blocks60emb_layers1bias', 'model_ema.diffusion_modeloutput_blocks60emb_layers1weight', 'model_ema.diffusion_modeloutput_blocks60in_layers0bias', 'model_ema.diffusion_modeloutput_blocks60in_layers0weight', 'model_ema.diffusion_modeloutput_blocks60in_layers2bias', 
'model_ema.diffusion_modeloutput_blocks60in_layers2weight', 'model_ema.diffusion_modeloutput_blocks60out_layers0bias', 'model_ema.diffusion_modeloutput_blocks60out_layers0weight', 'model_ema.diffusion_modeloutput_blocks60out_layers3bias', 'model_ema.diffusion_modeloutput_blocks60out_layers3weight', 'model_ema.diffusion_modeloutput_blocks60skip_connectionbias', 'model_ema.diffusion_modeloutput_blocks60skip_connectionweight', 'model_ema.diffusion_modeloutput_blocks61normbias', 'model_ema.diffusion_modeloutput_blocks61normweight', 'model_ema.diffusion_modeloutput_blocks61proj_inbias', 'model_ema.diffusion_modeloutput_blocks61proj_inweight', 'model_ema.diffusion_modeloutput_blocks61proj_outbias', 'model_ema.diffusion_modeloutput_blocks61proj_outweight', 'model_ema.diffusion_modeloutput_blocks61transformer_blocks0attn1to_kweight', 'model_ema.diffusion_modeloutput_blocks61transformer_blocks0attn1to_out0bias', 'model_ema.diffusion_modeloutput_blocks61transformer_blocks0attn1to_out0weight', 'model_ema.diffusion_modeloutput_blocks61transformer_blocks0attn1to_qweight', 'model_ema.diffusion_modeloutput_blocks61transformer_blocks0attn1to_vweight', 'model_ema.diffusion_modeloutput_blocks61transformer_blocks0attn2to_kweight', 'model_ema.diffusion_modeloutput_blocks61transformer_blocks0attn2to_out0bias', 'model_ema.diffusion_modeloutput_blocks61transformer_blocks0attn2to_out0weight', 'model_ema.diffusion_modeloutput_blocks61transformer_blocks0attn2to_qweight', 'model_ema.diffusion_modeloutput_blocks61transformer_blocks0attn2to_vweight', 'model_ema.diffusion_modeloutput_blocks61transformer_blocks0ffnet0projbias', 'model_ema.diffusion_modeloutput_blocks61transformer_blocks0ffnet0projweight', 'model_ema.diffusion_modeloutput_blocks61transformer_blocks0ffnet2bias', 'model_ema.diffusion_modeloutput_blocks61transformer_blocks0ffnet2weight', 'model_ema.diffusion_modeloutput_blocks61transformer_blocks0norm1bias', 'model_ema.diffusion_modeloutput_blocks61transformer_blocks0norm1weight', 'model_ema.diffusion_modeloutput_blocks61transformer_blocks0norm2bias', 'model_ema.diffusion_modeloutput_blocks61transformer_blocks0norm2weight', 'model_ema.diffusion_modeloutput_blocks61transformer_blocks0norm3bias', 'model_ema.diffusion_modeloutput_blocks61transformer_blocks0norm3weight', 'model_ema.diffusion_modeloutput_blocks70emb_layers1bias', 'model_ema.diffusion_modeloutput_blocks70emb_layers1weight', 'model_ema.diffusion_modeloutput_blocks70in_layers0bias', 'model_ema.diffusion_modeloutput_blocks70in_layers0weight', 'model_ema.diffusion_modeloutput_blocks70in_layers2bias', 'model_ema.diffusion_modeloutput_blocks70in_layers2weight', 'model_ema.diffusion_modeloutput_blocks70out_layers0bias', 'model_ema.diffusion_modeloutput_blocks70out_layers0weight', 'model_ema.diffusion_modeloutput_blocks70out_layers3bias', 'model_ema.diffusion_modeloutput_blocks70out_layers3weight', 'model_ema.diffusion_modeloutput_blocks70skip_connectionbias', 'model_ema.diffusion_modeloutput_blocks70skip_connectionweight', 'model_ema.diffusion_modeloutput_blocks71normbias', 'model_ema.diffusion_modeloutput_blocks71normweight', 'model_ema.diffusion_modeloutput_blocks71proj_inbias', 'model_ema.diffusion_modeloutput_blocks71proj_inweight', 'model_ema.diffusion_modeloutput_blocks71proj_outbias', 'model_ema.diffusion_modeloutput_blocks71proj_outweight', 'model_ema.diffusion_modeloutput_blocks71transformer_blocks0attn1to_kweight', 'model_ema.diffusion_modeloutput_blocks71transformer_blocks0attn1to_out0bias', 
'model_ema.diffusion_modeloutput_blocks71transformer_blocks0attn1to_out0weight', 'model_ema.diffusion_modeloutput_blocks71transformer_blocks0attn1to_qweight', 'model_ema.diffusion_modeloutput_blocks71transformer_blocks0attn1to_vweight', 'model_ema.diffusion_modeloutput_blocks71transformer_blocks0attn2to_kweight', 'model_ema.diffusion_modeloutput_blocks71transformer_blocks0attn2to_out0bias', 'model_ema.diffusion_modeloutput_blocks71transformer_blocks0attn2to_out0weight', 'model_ema.diffusion_modeloutput_blocks71transformer_blocks0attn2to_qweight', 'model_ema.diffusion_modeloutput_blocks71transformer_blocks0attn2to_vweight', 'model_ema.diffusion_modeloutput_blocks71transformer_blocks0ffnet0projbias', 'model_ema.diffusion_modeloutput_blocks71transformer_blocks0ffnet0projweight', 'model_ema.diffusion_modeloutput_blocks71transformer_blocks0ffnet2bias', 'model_ema.diffusion_modeloutput_blocks71transformer_blocks0ffnet2weight', 'model_ema.diffusion_modeloutput_blocks71transformer_blocks0norm1bias', 'model_ema.diffusion_modeloutput_blocks71transformer_blocks0norm1weight', 'model_ema.diffusion_modeloutput_blocks71transformer_blocks0norm2bias', 'model_ema.diffusion_modeloutput_blocks71transformer_blocks0norm2weight', 'model_ema.diffusion_modeloutput_blocks71transformer_blocks0norm3bias', 'model_ema.diffusion_modeloutput_blocks71transformer_blocks0norm3weight', 'model_ema.diffusion_modeloutput_blocks80emb_layers1bias', 'model_ema.diffusion_modeloutput_blocks80emb_layers1weight', 'model_ema.diffusion_modeloutput_blocks80in_layers0bias', 'model_ema.diffusion_modeloutput_blocks80in_layers0weight', 'model_ema.diffusion_modeloutput_blocks80in_layers2bias', 'model_ema.diffusion_modeloutput_blocks80in_layers2weight', 'model_ema.diffusion_modeloutput_blocks80out_layers0bias', 'model_ema.diffusion_modeloutput_blocks80out_layers0weight', 'model_ema.diffusion_modeloutput_blocks80out_layers3bias', 'model_ema.diffusion_modeloutput_blocks80out_layers3weight', 'model_ema.diffusion_modeloutput_blocks80skip_connectionbias', 'model_ema.diffusion_modeloutput_blocks80skip_connectionweight', 'model_ema.diffusion_modeloutput_blocks81normbias', 'model_ema.diffusion_modeloutput_blocks81normweight', 'model_ema.diffusion_modeloutput_blocks81proj_inbias', 'model_ema.diffusion_modeloutput_blocks81proj_inweight', 'model_ema.diffusion_modeloutput_blocks81proj_outbias', 'model_ema.diffusion_modeloutput_blocks81proj_outweight', 'model_ema.diffusion_modeloutput_blocks81transformer_blocks0attn1to_kweight', 'model_ema.diffusion_modeloutput_blocks81transformer_blocks0attn1to_out0bias', 'model_ema.diffusion_modeloutput_blocks81transformer_blocks0attn1to_out0weight', 'model_ema.diffusion_modeloutput_blocks81transformer_blocks0attn1to_qweight', 'model_ema.diffusion_modeloutput_blocks81transformer_blocks0attn1to_vweight', 'model_ema.diffusion_modeloutput_blocks81transformer_blocks0attn2to_kweight', 'model_ema.diffusion_modeloutput_blocks81transformer_blocks0attn2to_out0bias', 'model_ema.diffusion_modeloutput_blocks81transformer_blocks0attn2to_out0weight', 'model_ema.diffusion_modeloutput_blocks81transformer_blocks0attn2to_qweight', 'model_ema.diffusion_modeloutput_blocks81transformer_blocks0attn2to_vweight', 'model_ema.diffusion_modeloutput_blocks81transformer_blocks0ffnet0projbias', 'model_ema.diffusion_modeloutput_blocks81transformer_blocks0ffnet0projweight', 'model_ema.diffusion_modeloutput_blocks81transformer_blocks0ffnet2bias', 'model_ema.diffusion_modeloutput_blocks81transformer_blocks0ffnet2weight', 
'model_ema.diffusion_modeloutput_blocks81transformer_blocks0norm1bias', 'model_ema.diffusion_modeloutput_blocks81transformer_blocks0norm1weight', 'model_ema.diffusion_modeloutput_blocks81transformer_blocks0norm2bias', 'model_ema.diffusion_modeloutput_blocks81transformer_blocks0norm2weight', 'model_ema.diffusion_modeloutput_blocks81transformer_blocks0norm3bias', 'model_ema.diffusion_modeloutput_blocks81transformer_blocks0norm3weight', 'model_ema.diffusion_modeloutput_blocks82convbias', 'model_ema.diffusion_modeloutput_blocks82convweight', 'model_ema.diffusion_modeloutput_blocks90emb_layers1bias', 'model_ema.diffusion_modeloutput_blocks90emb_layers1weight', 'model_ema.diffusion_modeloutput_blocks90in_layers0bias', 'model_ema.diffusion_modeloutput_blocks90in_layers0weight', 'model_ema.diffusion_modeloutput_blocks90in_layers2bias', 'model_ema.diffusion_modeloutput_blocks90in_layers2weight', 'model_ema.diffusion_modeloutput_blocks90out_layers0bias', 'model_ema.diffusion_modeloutput_blocks90out_layers0weight', 'model_ema.diffusion_modeloutput_blocks90out_layers3bias', 'model_ema.diffusion_modeloutput_blocks90out_layers3weight', 'model_ema.diffusion_modeloutput_blocks90skip_connectionbias', 'model_ema.diffusion_modeloutput_blocks90skip_connectionweight', 'model_ema.diffusion_modeloutput_blocks91normbias', 'model_ema.diffusion_modeloutput_blocks91normweight', 'model_ema.diffusion_modeloutput_blocks91proj_inbias', 'model_ema.diffusion_modeloutput_blocks91proj_inweight', 'model_ema.diffusion_modeloutput_blocks91proj_outbias', 'model_ema.diffusion_modeloutput_blocks91proj_outweight', 'model_ema.diffusion_modeloutput_blocks91transformer_blocks0attn1to_kweight', 'model_ema.diffusion_modeloutput_blocks91transformer_blocks0attn1to_out0bias', 'model_ema.diffusion_modeloutput_blocks91transformer_blocks0attn1to_out0weight', 'model_ema.diffusion_modeloutput_blocks91transformer_blocks0attn1to_qweight', 'model_ema.diffusion_modeloutput_blocks91transformer_blocks0attn1to_vweight', 'model_ema.diffusion_modeloutput_blocks91transformer_blocks0attn2to_kweight', 'model_ema.diffusion_modeloutput_blocks91transformer_blocks0attn2to_out0bias', 'model_ema.diffusion_modeloutput_blocks91transformer_blocks0attn2to_out0weight', 'model_ema.diffusion_modeloutput_blocks91transformer_blocks0attn2to_qweight', 'model_ema.diffusion_modeloutput_blocks91transformer_blocks0attn2to_vweight', 'model_ema.diffusion_modeloutput_blocks91transformer_blocks0ffnet0projbias', 'model_ema.diffusion_modeloutput_blocks91transformer_blocks0ffnet0projweight', 'model_ema.diffusion_modeloutput_blocks91transformer_blocks0ffnet2bias', 'model_ema.diffusion_modeloutput_blocks91transformer_blocks0ffnet2weight', 'model_ema.diffusion_modeloutput_blocks91transformer_blocks0norm1bias', 'model_ema.diffusion_modeloutput_blocks91transformer_blocks0norm1weight', 'model_ema.diffusion_modeloutput_blocks91transformer_blocks0norm2bias', 'model_ema.diffusion_modeloutput_blocks91transformer_blocks0norm2weight', 'model_ema.diffusion_modeloutput_blocks91transformer_blocks0norm3bias', 'model_ema.diffusion_modeloutput_blocks91transformer_blocks0norm3weight', 'model_ema.diffusion_modeltime_embed0bias', 'model_ema.diffusion_modeltime_embed0weight', 'model_ema.diffusion_modeltime_embed2bias', 'model_ema.diffusion_modeltime_embed2weight', 'model_ema.num_updates', 'posterior_log_variance_clipped', 'posterior_mean_coef1', 'posterior_mean_coef2', 'posterior_variance', 'sqrt_alphas_cumprod', 'sqrt_one_minus_alphas_cumprod', 'sqrt_recip_alphas_cumprod', 'sqrt_recipm1_alphas_cumprod', 
'cond_stage_model.clip_l.transformer.text_model.embeddings.position_ids'])
[AnimateDiffEvo] - INFO - Loading motion module improvedHumansMotion_refinedHumanMovement.ckpt
D:\AI_STUFF\NewComfy\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_FizzNodes\BatchFuncs.py:128: FutureWarning: Setting an item of incompatible dtype is deprecated and will raise in a future error of pandas. Value '' has dtype incompatible with float64, please explicitly cast to a compatible dtype first.
  cur_prompt_series[f] = ''
D:\AI_STUFF\NewComfy\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_FizzNodes\BatchFuncs.py:129: FutureWarning: Setting an item of incompatible dtype is deprecated and will raise in a future error of pandas. Value '' has dtype incompatible with float64, please explicitly cast to a compatible dtype first.
  nxt_prompt_series[f] = ''

 Max Frames:  4
 frame index:  0
 Current Prompt:  i am a test, here I start
 Next Prompt:  I should end here, see me?
 Strength :  1.0


 Max Frames:  4
 frame index:  1
 Current Prompt:  i am a test, here I start
 Next Prompt:  I should end here, see me?
 Strength :  0.75


 Max Frames:  4
 frame index:  2
 Current Prompt:  i am a test, here I start
 Next Prompt:  I should end here, see me?
 Strength :  0.5


 Max Frames:  4
 frame index:  3
 Current Prompt:  i am a test, here I start
 Next Prompt:  I should end here, see me?
 Strength :  0.25

Requested to load SD1ClipModel
Loading 1 new model

 Max Frames:  4
 frame index:  0
 Current Prompt:
 Next Prompt:
 Strength :  1.0


 Max Frames:  4
 frame index:  1
 Current Prompt:
 Next Prompt:
 Strength :  0.75


 Max Frames:  4
 frame index:  2
 Current Prompt:
 Next Prompt:
 Strength :  0.5


 Max Frames:  4
 frame index:  3
 Current Prompt:
 Next Prompt:
 Strength :  0.25

[AnimateDiffEvo] - INFO - Regular AnimateDiff activated - latents passed in (4) less or equal to context_length 16.
[AnimateDiffEvo] - INFO - Using motion module improvedHumansMotion_refinedHumanMovement.ckpt version v2.
Requested to load BaseModel
Requested to load AnimateDiffModel
Loading 2 new models
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:21<00:00,  1.06s/it]
Global Step: 840001
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
Leftover VAE keys ['model_ema.decay', 'model_ema.num_updates']
Requested to load AutoencoderKL
Loading 1 new model
Prompt executed in 136.99 seconds

Prompt travel not working, error points to incompatible dtype.

I am receiving this error:

ComfyUI_FizzNodes\BatchFuncs.py:128: FutureWarning: Setting an item of incompatible dtype is deprecated and will raise in a future error of pandas. Value '' has dtype incompatible with float64, please explicitly cast to a compatible dtype first.
cur_prompt_series[f] = ''

I receive this on line 129 as well. If I set those variables to None, it resolves the error but then the same error occurs on lines 131 and 132.

I have both upgraded and downgraded my pandas version as well, to no avail.

Thank you for all you do.
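
For what it's worth, here is a minimal sketch of the kind of change that silences this warning, assuming the series built in BatchFuncs.py starts out as a float64 pandas Series before strings are written into it (the variable name follows the warning above):

```python
import numpy as np
import pandas as pd

# A float64 series like the one the scheduler builds; writing a string
# into it triggers the FutureWarning on pandas >= 2.1.
cur_prompt_series = pd.Series(np.full(4, np.nan))

# Casting to object dtype first makes string assignment legal,
# so neither '' nor a prompt string raises the warning.
cur_prompt_series = cur_prompt_series.astype(object)
cur_prompt_series[0] = ''
```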

Issues with Batch Prompt Schedule

I'm getting an error with Batch Prompt Schedule: RuntimeError: Sizes of tensors must match except in dimension 0. Expected size 154 but got size 77 for tensor number 12 in the list. I'm not sure if it's because there is a limit for the number of characters in the prompt or something else. I can use my prompts up to a point, but if I add another prompt schedule line I get the error above.

Probably unrelated, I'm also getting this warning: FutureWarning: Setting an item of incompatible dtype is deprecated and will raise an error in a future version of pandas. Value 'photo realistic girl high quality, detailed, high resolution, 4k' has dtype incompatible with float64, please explicitly cast to a compatible dtype first. Do I need to change the wording?
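
This error usually isn't about raw character count but about CLIP's 77-token window: prompts are encoded in 77-token chunks, so a keyframe that tokenizes past one chunk produces a 154-wide conditioning tensor, which then can't be concatenated with the 77-wide tensors from shorter keyframes. A quick way to see which keyframe crosses the boundary is sketched below; it uses the Hugging Face tokenizer as a stand-in, which is an assumption, not what FizzNodes calls internally, and the chunk math is approximate since ComfyUI reserves start/end tokens per chunk:

```python
from transformers import CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")

prompts = {
    "0": "photo realistic girl high quality, detailed, high resolution, 4k",
    # ... paste the rest of your schedule here
}

for frame, text in prompts.items():
    n = len(tokenizer(text)["input_ids"])
    chunks = -(-n // 77)  # ceiling division: number of 77-token chunks
    print(f"frame {frame}: {n} tokens -> roughly {chunks * 77}-wide conditioning")
```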

Error when using long prompt and short prompt together

I am getting an error when trying to use a long prompt and a short one together. This is the error I got:
image

This is the prompt I am using to test, which produces the error:
"0": "portrait, 1girl",
"4": "maximum details, cinematic, (abstract art:1.1), deep shadow,1 girl, adult russian woman, grey eyes, blonde sleek hair, Style-GravityMagic, looking away, solo, upper body, detailed background, detailed face, (sovietpunkai, soviet communist theme:1.1), eyes ablaze with dark power, blood sorcerer, scarlet clothes, cape, hood, dynamic movement, evil smile, red glyphs swirling in the air, floating particles, blood droplets, scarlet color scheme, blood dripping, sacrificial altar, in background, glowing red power, nighttime, symmetrical composition, cinematic atmosphere,",
"8": "portrait, 1girl"

BatchPromptSchedule missing

Hi,

I have redownloaded it several times, both with ComfyUI Manager and manually, but it keeps showing this error. I have also reinstalled and checked everything from requirements.txt.

Need Help!

Thanks,

Screenshot 2023-12-08 120135

IndexError: invalid index to scalar variable when running the Simple Animation Workflow example

When I run the workflow from the Simple Animation Workflow example, I get this issue. How can I solve it and prevent it from happening again?
https://github.com/FizzleDorf/ComfyUI_FizzNodes/assets/46942135/8899f25e-fbc8-423c-bef2-e7c5a91fb7f4

image

ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
File "E:\1.ext_tools\ComfyUI\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "E:\1.ext_tools\ComfyUI\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "E:\1.ext_tools\ComfyUI\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "E:\1.ext_tools\ComfyUI\custom_nodes\ComfyUI_FizzNodes\ScheduledNodes.py", line 81, in animate
pos_cur_prompt, pos_nxt_prompt, weight = interpolate_prompt_series(pos, max_frames, start_frame, pre_text, app_text, pw_a,
File "E:\1.ext_tools\ComfyUI\custom_nodes\ComfyUI_FizzNodes\BatchFuncs.py", line 153, in interpolate_prompt_series
cur_prompt_series[i] = prepare_batch_prompt(cur_prompt_series[i], max_frames, i, float(prompt_weight_1[0]),
IndexError: invalid index to scalar variable.

Thank you.
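
The traceback points at `float(prompt_weight_1[0])` failing because `prompt_weight_1` arrived as a bare scalar rather than a sequence. Below is a defensive sketch of a pattern that avoids this; it is a hypothetical workaround, not the actual upstream fix:

```python
import numpy as np

def first_weight(prompt_weight):
    # np.atleast_1d turns scalars into 1-element arrays, so [0] is
    # always a valid index whether a float or a list was passed in.
    return float(np.atleast_1d(prompt_weight)[0])

first_weight(0.5)         # scalar input -> 0.5
first_weight([0.5, 1.0])  # sequence input -> 0.5
```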

prompt node style proposal

prompts

Wouldn't it be more intuitive to represent sequences through nodes, leveraging the strengths of ComfyUI, rather than simply porting deforum? By expressing them in this way, it seems like it would be useful even when changing the order of the sequence.

black screen in output

Yesterday everything was fine, but today I generated exactly the same thing and it only produced a black screen, even though I didn't change anything. This error also appeared in the terminal, although it wasn't there yesterday:
D:\comfuUI\ComfyUI\custom_nodes\ComfyUI_FizzNodes\BatchFuncs.py:54: FutureWarning: Setting an item of incompatible dtype is deprecated and will raise in a future error of pandas. Value '(Masterpiece, best quality:1.2), closeup, a girl dancing through forest spring day, cherryblossoms ' has dtype incompatible with float64, please explicitly cast to a compatible dtype first.
cur_prompt_series[i] = str(pre_text) + " " + str(current_prompt) + " " + str(app_text)
D:\comfuUI\ComfyUI\custom_nodes\ComfyUI_FizzNodes\BatchFuncs.py:55: FutureWarning: Setting an item of incompatible dtype is deprecated and will raise in a future error of pandas. Value '(Masterpiece, best quality:1.2), closeup, a girl dancing through forest spring day, cherryblossoms ' has dtype incompatible with float64, please explicitly cast to a compatible dtype first.
nxt_prompt_series[i] = str(pre_text) + " " + str(current_prompt) + " " + str(app_text)

The nodes don't take effect

I downloaded it into my custom_nodes folder, but it doesn't activate; when I open ComfyUI there are no related nodes either.

ComfyUI_FizzNodes module for custom nodes: No module named 'numexpr.interpreter'

Traceback (most recent call last):
File "D:\Program Files\ComfyUI_10\ComfyUI\nodes.py", line 1734, in load_custom_node
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "D:\Program Files\ComfyUI_10\ComfyUI\custom_nodes\ComfyUI_FizzNodes\__init__.py", line 57, in <module>
from .ScheduledNodes import ValueSchedule, PromptSchedule, PromptScheduleNodeFlow, PromptScheduleNodeFlowEnd, PromptScheduleEncodeSDXL, StringSchedule, BatchPromptSchedule, BatchValueSchedule, BatchPromptScheduleEncodeSDXL #, BatchGLIGENSchedule
File "D:\Program Files\ComfyUI_10\ComfyUI\custom_nodes\ComfyUI_FizzNodes\ScheduledNodes.py", line 4, in <module>
import numexpr
File "D:\Program Files\ComfyUI_10\python_embeded\lib\site-packages\numexpr\__init__.py", line 24, in <module>
from numexpr.interpreter import MAX_THREADS, use_vml, __BLOCK_SIZE1__
ModuleNotFoundError: No module named 'numexpr.interpreter'

Cannot import D:\Program Files\ComfyUI_10\ComfyUI\custom_nodes\ComfyUI_FizzNodes module for custom nodes: No module named 'numexpr.interpreter'
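
This usually means the numexpr wheel in the embedded Python is broken or was built for a different interpreter. A sketch of the usual remedy follows, run with the portable build's own interpreter so pip targets the right site-packages; the interpreter path is an assumption based on the traceback above:

```python
# Run with the embedded interpreter, e.g.:
#   "D:\Program Files\ComfyUI_10\python_embeded\python.exe" reinstall_numexpr.py
import subprocess
import sys

# Force-reinstall numexpr into whichever Python is running this script.
subprocess.run(
    [sys.executable, "-m", "pip", "install", "--force-reinstall", "numexpr"],
    check=True,
)

import numexpr  # should import cleanly after the reinstall
print(numexpr.__version__)
```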

Error occurred when executing BatchPromptSchedule: Sizes of tensors must match except in dimension 0.

Error occurred when executing BatchPromptSchedule:
Sizes of tensors must match except in dimension 0. Expected size 77 but got size 154 for tensor number 12 in the list.

Hello, I get this error when I use the prompt below. I tried reducing and increasing the number of characters in keyframe 12, and it gives the same error but with tensor number 48.

After adjusting the entire prompt it works, but I don't know what causes the error: a length inconsistency, exceeding a certain number of characters relative to the first prompt, or falling below a minimum number of characters relative to the previous prompt?

This is the prompt

"1": "with a sleek, short bob haircut, characteristic of the early 1990s. She's wearing a bright neon windbreaker, large hoop earrings, and bold, geometric-patterned leggings. The background is an urban setting with graffiti art, reflecting the vibrant street culture of the era.",

"12": "embodying the grunge fashion of the mid-1990s. She has long, unkempt hair, dark eyeliner, and a flannel shirt tied around her waist. Her outfit includes a band T-shirt and ripped jeans, complemented by a pair of well-worn combat boots. The setting is a dimly lit, retro coffee shop.",

"24": "showcasing the minimalist fashion trend. She has a sleek, straight hairstyle, middle-parted. She's dressed in a simple, elegant slip dress in a pastel color, paired with a delicate choker necklace and strappy sandals. The background is a minimalist, chic urban apartment",

"48": "Early 2000s style, featuring a woman with chunky highlights in her hair and a glossy lip. She's wearing a bright, Juicy Couture tracksuit, paired with a tank top with a glittery graphic. Accessories include a large, metallic handbag and platform sneakers. The setting is a bustling shopping mall.",

"60": "Mid-2000s fashion, with a woman sporting a side-swept bangs hairstyle and smoky eye makeup. Her outfit consists of a layered tank top over a lace-trimmed camisole, low-rise bootcut jeans, and a wide studded belt. She's wearing pointy-toe stiletto heels. The background is a trendy nightclub scene",

"72": "Late 2000s elegance, featuring a woman with a soft, wavy hairstyle and natural makeup. She's dressed in a sophisticated empire waist dress with floral prints, paired with a cropped cardigan and ballet flats. The setting is a serene, sunlit garden with blooming flowers",

"84": "Early 2010s hipster fashion, with a woman sporting ombre hair and thick-rimmed glasses. She's wearing a vintage graphic tee, high-waisted shorts, and a plaid shirt tied around her waist. Her look is completed with a pair of classic Converse sneakers. The background is an artsy, urban café.",

"96": "Mid-2010s style, featuring a woman with a lob (long bob) haircut and bold, contoured makeup. She's dressed in a chic, off-shoulder top, ripped skinny jeans, and ankle boots. Accessories include a statement necklace and a clutch. The setting is a modern, upscale bar.",

"108": "Late 2010s trend, with a woman flaunting a beachy waves hairstyle and glowing, dewy makeup. Her outfit is a boho-chic maxi dress, layered with a denim jacket, and accessorized with a wide-brimmed hat and ankle-strap sandals. The background is a picturesque beach at sunset.",

"120": "Contemporary 2020s fashion, showcasing a woman with a sleek, high ponytail and minimalist makeup. She's wearing a monochrome, tailored pantsuit with a turtleneck sweater, complemented by minimalist jewelry and sleek, pointed-toe heels. The setting is a modern, architecturally striking office space."
Can you resolve the prompt-length limitation?
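
The mismatch comes from keyframes whose encoded conditioning spans a different number of 77-token CLIP chunks (see the token-counting sketch in the earlier issue above). Below is a minimal sketch of one way to make such a batch concatenable, padding every conditioning tensor along the token axis to the longest one; this is an illustrative workaround under that assumption, not the fix FizzNodes ships:

```python
import torch

def pad_conds_to_longest(conds):
    """Pad [1, tokens, dim] conditioning tensors so torch.cat works."""
    max_len = max(c.shape[1] for c in conds)
    padded = []
    for c in conds:
        if c.shape[1] < max_len:
            # Repeat the final token embedding to fill the gap; crude,
            # but it keeps shapes compatible (77 -> 154, and so on).
            fill = c[:, -1:, :].repeat(1, max_len - c.shape[1], 1)
            c = torch.cat([c, fill], dim=1)
        padded.append(c)
    return torch.cat(padded, dim=0)

# e.g. pad_conds_to_longest([torch.randn(1, 77, 768), torch.randn(1, 154, 768)])
```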
