
comfyui_noise's Introduction

ComfyUI Noise

This repo contains 6 nodes for ComfyUI that allow for more control and flexibility over noise. This enables, for example, workflows that create small variations on a generation, or that find the noise corresponding to a given input image and prompt.

Nodes

Noisy Latent Image:

This node lets you generate noise. You can find this node under latent>noise and it has the following settings (a minimal sketch of the idea follows the list):

  • source: where to generate the noise, currently supports GPU and CPU.
  • seed: the noise seed.
  • width: image width.
  • height: image height.
  • batch_size: batch size.
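
For intuition, this kind of seeded latent noise boils down to a seeded torch.randn call. A minimal sketch, assuming SD-style 4-channel latents at 1/8 resolution (the function name and exact behaviour are illustrative, not the node's actual code):

import torch

def noisy_latent(seed, width, height, batch_size=1, source="CPU"):
    device = "cuda" if source == "GPU" else "cpu"
    gen = torch.Generator(device=device).manual_seed(seed)
    # SD-style latents: 4 channels at 1/8 of the pixel resolution
    return torch.randn((batch_size, 4, height // 8, width // 8),
                       generator=gen, device=device)

noise = noisy_latent(seed=42, width=512, height=512, batch_size=4)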

Duplicate Batch Index:

The functionality of this node has been moved to core; please use Latent>Batch>Repeat Latent Batch and Latent>Batch>Latent From Batch instead.

This node lets you duplicate a certain sample in the batch; this can be used to duplicate e.g. encoded images, but also noise generated from the node listed above. You can find this node under latent and it has the following settings, with a rough sketch of the underlying operation after the list:

  • latents: the latents.
  • batch_index: which sample in the latents to duplicate.
  • batch_size: the new batch size (i.e. how many times to duplicate the sample).
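
The underlying tensor operation is simple; roughly (an illustrative sketch, not the core nodes' code):

import torch

def duplicate_batch_index(latents, batch_index, batch_size):
    sample = latents[batch_index:batch_index + 1]  # keep the batch dimension
    return sample.repeat(batch_size, 1, 1, 1)      # a new batch made of copies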

Slerp Latents:

This node lets you mix two latents together. Both input latents must share the same dimensions, or the node will ignore the mix factor and simply output the top slot. As for other things attached to the latents, such as masks, only those of the top slot are passed on. You can find this node under latent and it comes with the following inputs (a generic slerp sketch follows the list):

  • latents1: first batch of latents.
  • latents2: second batch of latents. This input is optional.
  • mask: determines where in the latents to slerp. This input is optional.
  • factor: how much of the second batch of latents should be slerped into the first.
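
For reference, a generic spherical linear interpolation between two noise tensors looks like the sketch below; slerping (rather than a plain lerp) keeps the mixed result statistically close to Gaussian noise. The node's actual implementation may handle masks and edge cases differently:

import torch

def slerp(a, b, factor):
    # interpolate along the great circle between the two flattened tensors
    a_flat, b_flat = a.flatten(1), b.flatten(1)
    a_unit = a_flat / a_flat.norm(dim=1, keepdim=True)
    b_unit = b_flat / b_flat.norm(dim=1, keepdim=True)
    omega = torch.acos((a_unit * b_unit).sum(dim=1).clamp(-1.0, 1.0))
    so = torch.sin(omega)
    w_a = (torch.sin((1.0 - factor) * omega) / so).unsqueeze(1)
    w_b = (torch.sin(factor * omega) / so).unsqueeze(1)
    return (w_a * a_flat + w_b * b_flat).reshape(a.shape)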

Get Sigma:

This node can be used to calculate the amount of noise a sampler expects when it starts denoising. You can find this node under latent>noise and it comes with the following inputs and settings:

  • model: The model for which to calculate the sigma.
  • sampler_name: the name of the sampler for which to calculate the sigma.
  • scheduler: the type of schedule used in the sampler.
  • steps: the total number of steps in the schedule.
  • start_at_step: the start step of the sampler, i.e. how much noise it expects in the input image.
  • end_at_step: the current end step of the previous sampler, i.e. how much noise is already in the image.

Most of the time you'd simply want to keep start_at_step at zero and end_at_step at steps, but if you want to re-inject some noise between two samplers, e.g. one sampler that denoises from step 0 to 15 and a second that denoises from step 10 to 20, you'd use a start_at_step of 10 and an end_at_step of 15. That way the image we get at step 15 can be noised back down to step 10, so the second sampler can bring it to 20. Take note that the Advanced KSampler has settings for add_noise and return_with_leftover_noise; when working with these nodes we want both of them disabled.
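
As a toy illustration of that arithmetic, assume a Karras-style schedule (the helper below is only a stand-in; the real node derives the schedule from the chosen model, sampler and scheduler):

import torch

def karras_sigmas(steps, sigma_min=0.03, sigma_max=14.6, rho=7.0):
    ramp = torch.linspace(0, 1, steps + 1)
    min_inv, max_inv = sigma_min ** (1 / rho), sigma_max ** (1 / rho)
    return (max_inv + ramp * (min_inv - max_inv)) ** rho  # descending: noisy -> clean

sigmas = karras_sigmas(steps=20)
start_at_step, end_at_step = 10, 15
# roughly the amount of noise needed to take a step-15 latent back to step 10
strength = sigmas[start_at_step] - sigmas[end_at_step]
print(float(strength))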

Inject Noise:

This node lets you actually inject the noise into an image latent. You can find this node under latent>noise and it comes with the following inputs, followed by a conceptual sketch:

  • latents: The latents to inject the noise into.
  • noise: The noise. This input is optional.
  • mask: determines where to inject noise. This input is optional.
  • strength: The strength of the noise. Note that we can use the node above to calculate an appropriate strength value for us.
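
Conceptually the injection is just a scaled addition. A minimal sketch, assuming the optional mask simply gates where the noise lands (the node's real code may broadcast or resize things differently):

import torch

def inject_noise(latents, noise, strength, mask=None):
    scaled = noise * strength        # strength can come from the Get Sigma node
    if mask is not None:
        scaled = scaled * mask       # only inject where the mask allows it
    return latents + scaled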

Unsampler:

This node does the reverse of a sampler. It calculates the noise that would generate the image given the model and the prompt. You can find this node under sampling and it takes the following inputs and settings:

  • model: The model to target.
  • steps: number of steps to noise.
  • end_step: which step to travel back to.
  • cfg: classifier free guidance scale.
  • sampler_name: The name of the sampling technique to use.
  • scheduler: The type of schedule to use.
  • normalize: whether to normalize the noise before output. Useful when passing it on to an Inject Noise node, which expects normalized noise.
  • positive: Positive prompt.
  • negative: Negative prompt.
  • latent_image: The image to renoise.

When trying to reconstruct the target image as faithfully as possible, this works best if both the Unsampler and the sampler use a cfg scale close to 1.0 and a similar number of steps. But it is fun and worthwhile to play around with these settings to get a better intuition for the results. This node lets you do things similar to what the A1111 img2img alternative script does.
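
To make the idea concrete, inversion can be pictured as a Euler-style update run toward increasing noise levels. The sketch below is purely conceptual: denoise_fn stands in for the model's denoiser, sigmas is a plain list of floats from low to high, and the real node instead drives ComfyUI's sampler machinery with a reversed schedule:

import torch

@torch.no_grad()
def euler_unsample(x0, denoise_fn, sigmas):
    x = x0
    for sigma, sigma_next in zip(sigmas[:-1], sigmas[1:]):
        denoised = denoise_fn(x, sigma)
        d = (x - denoised) / max(sigma, 1e-4)  # estimated noise direction
        x = x + d * (sigma_next - sigma)       # Euler step toward more noise
    return x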

Examples

Here are some examples that show how to use the nodes above. Workflows for these examples can be found in the example_workflow folder.

generating variations

[Screenshot: workflow that demos generating small variations on a given seed]

To create small variations on a given generation we can do the following: we generate the noise of the seed we're interested in using a Noisy Latent Image node, then create an entire batch of these with a Duplicate Batch Index node. Note that if we were doing this for img2img, we could use this same node to duplicate the image latents. Next we generate some more noise, but this time as a batch of noise rather than a single sample. We then slerp this newly created noise into the other one with a Slerp Latents node. To figure out the required strength for injecting this noise we use a Get Sigma node. And finally we inject the slerped noise into a batch of empty latents with an Inject Noise node. Take note that we use an Advanced KSampler with the add_noise setting disabled.
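
In terms of the illustrative helpers sketched earlier in this README (noisy_latent, slerp, inject_noise), the recipe roughly reduces to the following; the names and values are only for illustration:

import torch

base = noisy_latent(seed=1234, width=512, height=512)                  # noise of the seed we like
base_batch = base.repeat(8, 1, 1, 1)                                   # duplicate it across the batch
fresh = noisy_latent(seed=5678, width=512, height=512, batch_size=8)   # a batch of new noise
varied = slerp(base_batch, fresh, factor=0.15)                         # small variation strength
empty = torch.zeros((8, 4, 64, 64))                                    # empty latents for 512x512
strength = 14.6                                                        # in practice, from a Get Sigma node
latents_in = inject_noise(empty, varied, strength)
# latents_in then goes into an Advanced KSampler with add_noise disabled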

"unsampling"

[Screenshot: workflow that demos "unsampling" an image]

To get the noise that recreates a certain image, we first load an image. Then we use the Unsampler node with a low cfg value. To check that this is working, we take the resulting noise and feed it back into an Advanced KSampler with the add_noise setting disabled and a cfg of 1.0.

comfyui_noise's People

Contributors

blenderneko, constantego, ltdrdata


comfyui_noise's Issues

Suggestion: Create a "batch unsampler" -- Gist link included

I'm working on support for DemoFusion (an upscaling technique), which requires getting the full sequence of progressively noised latents. I added batching to the unsampler in ComfyUI_Noise to create the "Batch Unsampler" node. It's exactly the same as the Unsampler, but outputs all the intermediate steps. The intermediate steps are taken in the callback function during sampling.

https://gist.github.com/ttulttul/2b09f0f14bb35639ada7ed37b1f0428d

I'd love your feedback, @BlenderNeko. One thing I observe is that the initial latents in the batch seem a little de-contrasted. I wonder if that's because they need to be normalized in order to look "right". But of course in the present case, we don't want to normalize anything.
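
For anyone curious, collecting the intermediate latents in a callback might look roughly like this; the (step, x0, x, total_steps) signature is what ComfyUI's samplers appear to pass to callbacks, so treat it as an assumption rather than a documented API:

import torch

intermediate_latents = []

def collect_callback(step, x0, x, total_steps):
    # keep a CPU copy of the latent at every sampling step
    intermediate_latents.append(x.detach().clone().cpu())

# after sampler.sample(..., callback=collect_callback) finishes:
# batch = torch.cat(intermediate_latents, dim=0)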

BNK_unsampler broken in latest update

Error occurred when executing BNK_Unsampler:

module 'comfy.sample' has no attribute 'convert_cond'

File "D:\AI\ComfyUI\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI\ComfyUI\custom_nodes\ComfyUI_Noise\nodes.py", line 221, in unsampler
positive = comfy.sample.convert_cond(positive)

Unsample node result

Hello BlenderNeko,

I encountered a problem with the unsample node. I followed the example you gave in example_unsample.png, but I didn't get the same image after unsampling and sampling.

Here's

We spoke before. I was using your nodes to work on a video workflow and inject diffusion partly from frame -1 (unsampled) into frame 1 (for consistency improvement). And you told me there was a problem with the unsampler; maybe it's the same thing?

Thank you !

Unsampler broken by comfyui update

Unsampler was broken in the following comfyui commit by the removal of the batch_area_memory method
comfyanonymous/ComfyUI@dd4ba68

The removed method is called here

comfy.model_management.load_models_gpu([model] + models, comfy.model_management.batch_area_memory(noise.shape[0] * noise.shape[2] * noise.shape[3]) + inference_memory)

Switching out the old call for the new one seems to work for me:

comfy.model_management.load_models_gpu([model] + models, model.memory_required(noise.shape) + inference_memory)

Unsampler + LCM

Any suggestions for getting an Unsampler workflow working with the LCM sampler?
Unfortunately none of my attempts were successful.
[screenshot]
(It would be great to have some kind of speed boost to the Unsampler workflow)

Unsampler & SDXL ?

Hello
Can the unsampler work with SDXL? I can only make it work with SD1.5. Many thanks for these fantastic tools.

Empty Latent vs Noisy Latent with Advanced KSampler

Hello @BlenderNeko,

I tried injecting noise created by your Noisy Latent Image node into a KSampler (Advanced) node and (naively?) assumed that by setting the add_noise parameter to disabled I would be able to obtain the same effect as by using an Empty Latent Image with add_noise enabled (using the same seed). This is at least what logic suggests to me. To my surprise, when add_noise is disabled, I always get the same image result, whatever Latent I use as input to the KSampler. The result seems to have no relation with what is usually generated when add_noise is enabled and remains the same if I use different seeds in the Noisy Latent Image. I checked the generated noise by decoding and previewing the Latent and it obviously varies.

The example you provide is making use of Inject Noise and KSampler (Advanced) in a manner similar to what I'm trying to achieve. Can you explain why it seems to work in one case and not the other?

Submenu

With the ever-growing number of custom nodes being released, it gets quite hard to find specific nodes when they are just added to an existing menu entry. It would be awesome if you could add a menu entry called "BlenderNeko" or similar and put your nodes in there, so they are easier to find.

Unsampler leads to black image

Thanks for making such a useful tool, but for some reason it doesn't work for me. I followed the workflow exactly, even looking at YouTube videos, but using the Unsampler always leads to a black image for me. I'd be very thankful if anyone can help.

Sigma Corrected Noise

Hello,

I was wondering if it would be possible to add an option to your Unsampler node similar to what the Auto1111 script has with its "sigma corrected noise".

This substantially improves the output and looking over the code it shouldn't be that difficult to implement (hopefully).

Error occurred when executing BNK_Unsampler:

Error occurred when executing BNK_Unsampler:

module 'comfy.sample' has no attribute 'convert_cond'

File "D:\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_Noise\nodes.py", line 221, in unsampler
positive = comfy.sample.convert_cond(positive)
^^^^^^^^^^^^^^^^^^^^^^^^^

Unsampler Invalid Buffer Size error on Mac M2

I seem to be getting this error when trying to use the Unsampler; sometimes I get something similar at the KSampler after the Unsampler step.
Could it be because I'm using WebP as the image source?
Here is the workflow screenshot:
[screenshot]

ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
  File "ComfyUi/ComfyUI/execution.py", line 153, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "ComfyUi/ComfyUI/execution.py", line 83, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "ComfyUi/ComfyUI/execution.py", line 76, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "ComfyUi/ComfyUI/custom_nodes/ComfyUI_Noise/nodes.py", line 236, in unsampler
    samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, force_full_denoise=False, denoise_mask=noise_mask, sigmas=sigmas, start_step=0, last_step=end_at_step, callback=callback)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "ComfyUi/ComfyUI/comfy/samplers.py", line 716, in sample
    return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "ComfyUi/ComfyUI/comfy/samplers.py", line 622, in sample
    samples = sampler.sample(model_wrap, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "ComfyUi/ComfyUI/comfy/samplers.py", line 561, in sample
    samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "python/miniconda3/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "ComfyUi/ComfyUI/comfy/k_diffusion/sampling.py", line 137, in sample_euler
    denoised = model(x, sigma_hat * s_in, **extra_args)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "python/miniconda3/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "python/miniconda3/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "ComfyUi/ComfyUI/comfy/samplers.py", line 285, in forward
    out = self.inner_model(x, sigma, cond=cond, uncond=uncond, cond_scale=cond_scale, model_options=model_options, seed=seed)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "python/miniconda3/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "python/miniconda3/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "ComfyUi/ComfyUI/comfy/samplers.py", line 275, in forward
    return self.apply_model(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "ComfyUi/ComfyUI/comfy/samplers.py", line 272, in apply_model
    out = sampling_function(self.inner_model, x, timestep, uncond, cond, cond_scale, model_options=model_options, seed=seed)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "ComfyUi/ComfyUI/comfy/samplers.py", line 252, in sampling_function
    cond_pred, uncond_pred = calc_cond_uncond_batch(model, cond, uncond_, x, timestep, model_options)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "ComfyUi/ComfyUI/comfy/samplers.py", line 226, in calc_cond_uncond_batch
    output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "ComfyUi/ComfyUI/comfy/model_base.py", line 85, in apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "python/miniconda3/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "python/miniconda3/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "ComfyUi/ComfyUI/comfy/ldm/modules/diffusionmodules/openaimodel.py", line 854, in forward
    h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "ComfyUi/ComfyUI/comfy/ldm/modules/diffusionmodules/openaimodel.py", line 46, in forward_timestep_embed
    x = layer(x, context, transformer_options)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "python/miniconda3/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "python/miniconda3/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "ComfyUi/ComfyUI/comfy/ldm/modules/attention.py", line 604, in forward
    x = block(x, context=context[i], transformer_options=transformer_options)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "python/miniconda3/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "python/miniconda3/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "ComfyUi/ComfyUI/comfy/ldm/modules/attention.py", line 431, in forward
    return checkpoint(self._forward, (x, context, transformer_options), self.parameters(), self.checkpoint)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "ComfyUi/ComfyUI/comfy/ldm/modules/diffusionmodules/util.py", line 189, in checkpoint
    return func(*inputs)
           ^^^^^^^^^^^^^
  File "ComfyUi/ComfyUI/comfy/ldm/modules/attention.py", line 491, in _forward
    n = self.attn1(n, context=context_attn1, value=value_attn1)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "python/miniconda3/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "python/miniconda3/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "ComfyUi/ComfyUI/comfy/ldm/modules/attention.py", line 383, in forward
    out = optimized_attention(q, k, v, self.heads)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "ComfyUi/ComfyUI/comfy/ldm/modules/attention.py", line 318, in attention_pytorch
    out = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=mask, dropout_p=0.0, is_causal=False)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Invalid buffer size: 14.58 GB

Get Sigma is broken after recent update

The Get Sigma node is broken since one of the recent updates to ComfyUI (along with many other extensions):
comfyanonymous/ComfyUI@57753c9

Error occurred when executing BNK_GetSigma:

'BaseModel' object has no attribute 'get_model_object'

File "C:\AI\ComfyUI\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\AI\ComfyUI\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\AI\ComfyUI\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\AI\ComfyUI\ComfyUI\custom_nodes\ComfyUI_Noise\nodes.py", line 142, in calc_sigma
sampler = comfy.samplers.KSampler(real_model, steps=steps, device=device, sampler=sampler_name, scheduler=scheduler, denoise=1.0, model_options=model.model_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\AI\ComfyUI\ComfyUI\comfy\samplers.py", line 705, in __init__
self.set_steps(steps, denoise)
File "C:\AI\ComfyUI\ComfyUI\comfy\samplers.py", line 726, in set_steps
self.sigmas = self.calculate_sigmas(steps).to(self.device)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\AI\ComfyUI\ComfyUI\comfy\samplers.py", line 717, in calculate_sigmas
sigmas = calculate_sigmas(self.model.get_model_object("model_sampling"), self.scheduler, steps)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\AI\ComfyUI\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1688, in __getattr__
raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")

(There may be other nodes affected, but this is the first one I noticed.)

KSamplerVariationsWithNoise+ is broken

Similar to a bunch of other issues:

'SDXL' object has no attribute 'get_model_object'

It seems that there is a PR with a fix ready - any plans to merge it?

Unsampler not working correctly

I get patchy results with it (sometimes it works, sometimes it doesn't). It doesn't seem to work at all with SDXL models, and only with certain samplers using 1.5. When I run it I get this output:


RuntimeWarning: invalid value encountered in cast
img = Image.fromarray(np.clip(i, 0, 255).astype(np.uint8))

Any ideas what this might be?

BNK Unsampler exceed allowed memory. (out of memory)

I'm getting this error when loading 150 frames... seems kind of excessive. Any ideas on how to fix this?

Error occurred when executing BNK_Unsampler:

Allocation on device 0 would exceed allowed memory. (out of memory)
Currently allocated : 9.49 GiB
Requested : 9.89 GiB
Device limit : 23.99 GiB
Free (according to CUDA): 0 bytes
PyTorch limit (set by user-supplied memory fraction): 17179869184.00 GiB

File "/stable-diffusion/execution.py", line 154, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "/stable-diffusion/execution.py", line 84, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/stable-diffusion/execution.py", line 77, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/stable-diffusion/custom_nodes/ComfyUI_Noise/nodes.py", line 236, in unsampler
samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, force_full_denoise=False, denoise_mask=noise_mask, sigmas=sigmas, start_step=0, last_step=end_at_step, callback=callback)
File "/stable-diffusion/comfy/samplers.py", line 716, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "/stable-diffusion/comfy/samplers.py", line 622, in sample
samples = sampler.sample(model_wrap, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
File "/stable-diffusion/comfy/samplers.py", line 561, in sample
samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
File "/opt/conda/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/stable-diffusion/comfy/k_diffusion/sampling.py", line 580, in sample_dpmpp_2m
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/stable-diffusion/comfy/samplers.py", line 285, in forward
out = self.inner_model(x, sigma, cond=cond, uncond=uncond, cond_scale=cond_scale, model_options=model_options, seed=seed)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/stable-diffusion/comfy/samplers.py", line 275, in forward
return self.apply_model(*args, **kwargs)
File "/stable-diffusion/comfy/samplers.py", line 272, in apply_model
out = sampling_function(self.inner_model, x, timestep, uncond, cond, cond_scale, model_options=model_options, seed=seed)
File "/stable-diffusion/comfy/samplers.py", line 252, in sampling_function
cond_pred, uncond_pred = calc_cond_uncond_batch(model, cond, uncond_, x, timestep, model_options)
File "/stable-diffusion/comfy/samplers.py", line 226, in calc_cond_uncond_batch
output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
File "/stable-diffusion/comfy/model_base.py", line 85, in apply_model
model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/stable-diffusion/custom_nodes/SeargeSDXL/modules/custom_sdxl_ksampler.py", line 70, in new_unet_forward
x0 = old_unet_forward(self, x, timesteps, context, y, control, transformer_options, **kwargs)
File "/stable-diffusion/comfy/ldm/modules/diffusionmodules/openaimodel.py", line 854, in forward
h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator)
File "/stable-diffusion/comfy/ldm/modules/diffusionmodules/openaimodel.py", line 46, in forward_timestep_embed
x = layer(x, context, transformer_options)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/stable-diffusion/comfy/ldm/modules/attention.py", line 604, in forward
x = block(x, context=context[i], transformer_options=transformer_options)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/stable-diffusion/comfy/ldm/modules/attention.py", line 431, in forward
return checkpoint(self._forward, (x, context, transformer_options), self.parameters(), self.checkpoint)
File "/stable-diffusion/comfy/ldm/modules/diffusionmodules/util.py", line 189, in checkpoint
return func(*inputs)
File "/stable-diffusion/comfy/ldm/modules/attention.py", line 541, in _forward
x = self.ff(self.norm3(x))
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/stable-diffusion/comfy/ldm/modules/attention.py", line 85, in forward
return self.net(x)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/container.py", line 215, in forward
input = module(input)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/stable-diffusion/comfy/ldm/modules/attention.py", line 64, in forward
x, gate = self.proj(x).chunk(2, dim=-1)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/stable-diffusion/comfy/ops.py", line 28, in forward
return super().forward(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/linear.py", line 114, in forward
return F.linear(input, self.weight, self.bias)

broken unsampler due to comfy breaking change.

Hi, thanks for the good work done here.
I was testing the unsampler and noticed that the latest comfy broke it.

The following function

def batch_area_memory(area):
    if xformers_enabled() or pytorch_attention_flash_attention():
        #TODO: these formulas are copied from maximum_batch_area below
        return (area / 20) * (1024 * 1024)
    else:
        return (((area * 0.6) / 0.9) + 1024) * (1024 * 1024)

seems to have been removed/renamed in comfy with commit dd4ba68b6e93a562d9499eff34e50dbbbc8714e7,

but it's referenced at

nodes.py line 225
comfy.model_management.load_models_gpu([model] + models, comfy.model_management.batch_area_memory(noise.shape[0] * noise.shape[2] * noise.shape[3]) + inference_memory)

Putting it back in comfy fixes it.
Maybe there is a different function that does the same.

Regards

combine "generating variations" and "unsampling"

I use your nodes a lot. There are many applications for both of your example workflows, "generating variations" and "unsampling". I would really like to combine the two: start with an image, unsample it to get noise that relates to the original image (1x), duplicate the output of the unsampler (8x), generate different latent noise vectors (8x), slerp the two together, run them through the KSampler, get 8 different images that all relate to the original image with some variety, and pick the best-looking one.

If I put the slerp factor to 1.00 (discarding the output of the unsampler), it correctly generates 8 images that have no relation to the original image:
[screenshot]

Any other slerp factor that gives some positive weight to the output of the unsampler returns 8 identical noisy outputs:
[screenshot]

If I remove the latent noise / slerp / noise inject, the workflow functions as expected:
[screenshot]

The reason I want to do this is to get a lot more control over inpainting, which is in my experience a weakness of ComfyUI. I would expect this to work. Is this idea even possible? Is the execution wrong?

(IMPORT FAILED) ComfyUI Noise

I got this error when trying to install the nodes:

Traceback (most recent call last):
  File "C:\Apps\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1888, in load_custom_node
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 940, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "C:\Apps\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_Noise\__init__.py", line 1, in <module>
    from .nodes import NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS
  File "C:\Apps\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_Noise\nodes.py", line 10, in <module>
    import comfy.sampler_helpers
ModuleNotFoundError: No module named 'comfy.sampler_helpers'

Cannot import C:\Apps\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_Noise module for custom nodes: No module named 'comfy.sampler_helpers'

To be fair, I've been having similar issues trying to install other nodes recently.
