mikubill / sd-webui-controlnet

WebUI extension for ControlNet
License: GNU General Public License v3.0
It's a shame that high-resolution images can't be generated at the moment. If you take an already-generated image into img2img or highres fix, the pose easily gets distorted or turns strange.
Hi, great tool, but there is one problem.
When the source image is a 2D image and I use a realistic model to generate the target image, it copies everything from that 2D image, which makes the output no longer realistic. It looks more like a 3D rendering.
Even when I choose openpose as ControlNet's model, the issue persists. The character's eyes are too big, the mouth size is unrealistic, the hair is huge. All those 2D features carry over into the result, even with openpose.
So, is there a way to add a parameter to control its strength?
Thanks
What does Weight do? Adjusting it doesn't seem to change anything.
Right now, if you're trying a bunch of prompts on the same starting image, you have to wait for the preprocessor each time or manually upload the preprocessed image. The result could be cached and reused.
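The requested caching could be sketched as a thin wrapper keyed on the preprocessor name and a digest of the input image. `cached_detect`, `detect_fn`, and the module-level dict below are hypothetical names for illustration, not the extension's actual API:

```python
import hashlib

import numpy as np

# Hypothetical cache: (preprocessor name, image digest) -> detected map.
_detect_cache = {}

def cached_detect(preprocessor_name, detect_fn, image: np.ndarray):
    """Run detect_fn(image) once per unique input, reusing the stored result.

    detect_fn stands in for whatever preprocessor callable the extension
    dispatches to (canny, openpose, ...); only the caching idea is sketched.
    """
    key = (preprocessor_name, hashlib.sha1(image.tobytes()).hexdigest())
    if key not in _detect_cache:
        _detect_cache[key] = detect_fn(image)
    return _detect_cache[key]
```

With this in place, re-running the same starting image with a different prompt would hit the cache instead of re-running the detector.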
Hello,
Just got back from work and have been hearing the craze over this. I installed the extension, updated my WebUI, got everything set up, applied the highres fix, but whenever I generate an image with ControlNet enabled, I get hit with this error.
Error running process: C:\Users\Joseph\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py
Traceback (most recent call last):
File "C:\Users\Joseph\stable-diffusion-webui\modules\scripts.py", line 386, in process
script.process(p, *script_args)
File "C:\Users\Joseph\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py", line 231, in process
restore_networks()
File "C:\Users\Joseph\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py", line 217, in restore_networks
self.latest_network.restore(unet)
File "C:\Users\Joseph\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\cldm.py", line 111, in restore
model.forward = model._original_forward
File "C:\Users\Joseph\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1269, in __getattr__
raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'UNetModel' object has no attribute '_original_forward'
Tried to figure out what's going on, but this is something far beyond my knowledge. Any help would be appreciated. Thanks.
When I try to update the repo, it gives this:
File "D:\AI_WORKPLACE\AUTOMATIC1111\current\stable-diffusion-webui\modules\scripts.py", line 270, in initialize_scripts
script = script_class()
File "D:\AI_WORKPLACE\AUTOMATIC1111\current\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py", line 118, in __init__
"normal_map": midas_normal,
NameError: name 'midas_normal' is not defined
If I change width or height to something other than 512 I get:
RuntimeError: Sizes of tensors must match except in dimension 1.
Also, canvas width and height are currently reversed in your script. Increasing the canvas width actually increases the height of the canvas.
using mask as input
0%| | 0/20 [00:00<?, ?it/s]
Error completing request
Arguments: ('task(p823x0yzj0q7ebe)', '1girl black hair, ', '(worst quality:1.2), (low quality:1.2) , (monochrome:0.7)', [], 20, 0, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 456, 344, False, 0.7, 2, 'Latent', 0, 0, 0, [], 0, False, False, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'Refresh models', True, 'openpose', 'out(b46e25f5)', 1, {'image': array([...], dtype=uint8), 'mask': array([...], dtype=uint8)}, False, False, False, 'positive', 'comma', 0, False, False, '', 1, '', 0, '', 0, '', True, False, False, False, 0) {}
(uniform all-white image array and solid black mask array elided)
Traceback (most recent call last):
File "B:\AIimages\stable-diffusion-webui\modules\call_queue.py", line 56, in f
res = list(func(*args, **kwargs))
File "B:\AIimages\stable-diffusion-webui\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "B:\AIimages\stable-diffusion-webui\modules\txt2img.py", line 56, in txt2img
processed = process_images(p)
File "B:\AIimages\stable-diffusion-webui\modules\processing.py", line 486, in process_images
res = process_images_inner(p)
File "B:\AIimages\stable-diffusion-webui\modules\processing.py", line 628, in process_images_inner
samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
File "B:\AIimages\stable-diffusion-webui\modules\processing.py", line 828, in sample
samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
File "B:\AIimages\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 327, in sample
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "B:\AIimages\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 225, in launch_sampling
return func()
File "B:\AIimages\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 327, in <lambda>
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "B:\AIimages\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "B:\AIimages\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "B:\AIimages\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "B:\AIimages\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 123, in forward
x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond={"c_crossattn": [cond_in[a:b]], "c_concat": [image_cond_in[a:b]]})
File "B:\AIimages\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "B:\AIimages\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
File "B:\AIimages\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
return self.inner_model.apply_model(*args, **kwargs)
File "B:\AIimages\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "B:\AIimages\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
return self.__orig_func(*args, **kwargs)
File "B:\AIimages\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
x_recon = self.model(x_noisy, t, **cond)
File "B:\AIimages\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1212, in _call_impl
result = forward_call(*input, **kwargs)
File "B:\AIimages\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1329, in forward
out = self.diffusion_model(x, t, context=cc)
File "B:\AIimages\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "B:\AIimages\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\cldm.py", line 77, in forward
h = torch.cat(
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 16 but got size 15 for tensor number 1 in the list.
Here is the apple image (835 x 1000, a 1:1.2 ratio), but the canvas size is 512 x 768 (a 1:1.5 ratio).
In this case, the current resize mode works like this.
Could you add a Crop and Resize mode? Crop to the canvas ratio first, then resize.
If you look at WebUI's img2img, there is a "crop and resize" option, which is the same as the method I'm suggesting.
Currently, the preprocessed output is resized in a fixed way.
sd-webui-controlnet/scripts/controlnet.py
Lines 265 to 266 in 4fdf3e5
I propose adding two more options, via a dropdown / radio button:
control = Resize(h if h<w else w, interpolation=InterpolationMode.BICUBIC)(control)
control = CenterCrop((h, w))(control)
control = Resize((h,w), interpolation=InterpolationMode.BICUBIC)(control)
This should solve some cropping issues with non-1:1 aspect-ratio inputs.
Is it possible to include this model to generate images with good hands?
I think it is already part of ControlNet, but it does not produce a hand pose estimation in my experiments with this extension.
If anyone has more info, please comment.
https://github.com/CMU-Perceptual-Computing-Lab/openpose
https://github.com/Hzzone/pytorch-openpose#hand-pose-estimation
Is there a way to make a custom pose for the openpose preprocessor? Something like a pose maker such as https://webapp.magicposer.com/, with the possibility to extract the pose as a "skeleton" similar to what the extension produces with openpose, and feed it to the extension?
It would be nice to be able to process poses and depth images in batches! This Blender addon can export openpose images from character bones. Using this, we would be able to create animations too!
https://twitter.com/toni_nimono/status/1625411494602055685
Hello,
I get this error when I try to process multiple images with the img2img batch function:
Traceback (most recent call last):
File "C:\StableDiffusion2023\stable-diffusion-webui-master(3)\stable-diffusion-webui-master\modules\call_queue.py", line 56, in f
res = list(func(*args, **kwargs))
File "C:\StableDiffusion2023\stable-diffusion-webui-master(3)\stable-diffusion-webui-master\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "C:\StableDiffusion2023\stable-diffusion-webui-master(3)\stable-diffusion-webui-master\modules\img2img.py", line 163, in img2img
process_batch(p, img2img_batch_input_dir, img2img_batch_output_dir, img2img_batch_inpaint_mask_dir, args)
File "C:\StableDiffusion2023\stable-diffusion-webui-master(3)\stable-diffusion-webui-master\modules\img2img.py", line 76, in process_batch
processed_image.save(os.path.join(output_dir, filename))
AttributeError: 'numpy.ndarray' object has no attribute 'save'
I tried different preprocessors and models, but every time I get the same error. When I use img2img with just a single image there is no error. I'm using Windows 10 with an NVIDIA RTX 3090 Ti. I set the input directory and the output directory correctly.
When I use higher resolutions, the black-and-white mask image gets blurry and the details don't come out as well.
Could it be upscaled in the background with SwinIR or ESRGAN when choosing a resolution above 512x512? The details would then stay reasonably crisp.
Batch
Can you provide some prompt examples?
Is this intended? When generating an image using the canny model, it always leaves a white line (the line that ControlNet drew on the canvas).
An error occurs in img2img when using the ControlNet extension. There is no problem if I remove the ControlNet extension.
Error running process: C:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py
Traceback (most recent call last):
File "C:\stable-diffusion-webui\modules\scripts.py", line 386, in process
script.process(p, *script_args)
File "C:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py", line 237, in process
raise RuntimeError(f"model not found: {model}")
RuntimeError: model not found: 0.5
~
Error running process: C:\stable-diffusion-webui\extensions\stable-diffusion-webui-daam\scripts\daam_script.py
Traceback (most recent call last):
File "C:\stable-diffusion-webui\modules\scripts.py", line 386, in process
script.process(p, *script_args)
File "C:\stable-diffusion-webui\extensions\stable-diffusion-webui-daam\scripts\daam_script.py", line 99, in process
self.attentions = [s.strip() for s in attention_texts.split(",") if s.strip()]
AttributeError: 'bool' object has no attribute 'split'
0%| | 0/16 [00:00<?, ?it/s]
ssii_intermediate_type, ssii_every_n, ssii_start_at_n, ssii_stop_at_n, ssii_video, ssii_video_format, ssii_mp4_parms, ssii_video_fps, ssii_add_first_frames, ssii_add_last_frames, ssii_smooth, ssii_seconds, ssii_lores, ssii_hires, ssii_debug: 0.0, 0.0, False, 0.0, True, True, False, , False, False, False, False, Auto, 0.5, 1
Step, abs_step, hr, hr_active: 0, 0, False, False
0%| | 0/16 [00:00<?, ?it/s]
Error completing request
Arguments: ('task(dew37470n9yuaad)', 0, 'masterpiece, best quality, 1girl, solo, white swimsut, beach, blonde hair, looking at viewer, jumping, blue sky, white cloud, sun, lensflare, bubbles', '(worst quality, low quality:1.4), cap, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, mutation, missing arms, missing legs, extra arms, extra legs, blurry, mutated hands, fused fingers, too many fingers, extra fingers, extra others, futanari, fused penis, missing penis, extra penis, mutated penis, username, mask, furry, odd eyes, artist name', [], <PIL.Image.Image image mode=RGBA size=512x768 at 0x17C388D59F0>, None, None, None, None, None, None, 20, 15, 4, 0, 0, False, False, 1, 1, 8.5, 1.5, 0.75, 3991550646.0, -1.0, 0, 0, 0, False, 768, 512, 0, 0, 32, 0, '', '', '', [], ...) {}
(remaining extension arguments and embedded UI tooltip strings elided)

This one may be iffy, but based on what I read, it seems like it might be possible to stack more than one ControlNet onto Stable Diffusion at the same time. If that's possible, it would be really interesting to use. Being able to define both depth and normals when generating images of a building, or depth + pose for characters, would allow a lot of control.
It seems that some have the same names as the models. Not sure which are compatible or how they interact.
I trained some LoRAs on 2.1; I wonder if there is an opportunity for this extension to work with 2.1?
So one thing I'm noticing is that, when I change away from a depth-based model (or just disable the ControlNet stuff for a few gens) and then switch back, it tries to load MiDaS again and fails with a CUDA out-of-memory error. I think it tries to reload the model without having first properly unloaded the previous one. I have to restart the entire program each time this happens, even with 16 GB RAM and 10 GB VRAM.
Help! What is wrong here...
Loaded state_dict from [E:\AI\stable-diffusion-webui\extensions\sd-webui-controlnet\models\Anything3.ckpt]
Error running process: E:\AI\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py
Traceback (most recent call last):
File "E:\AI\stable-diffusion-webui\modules\scripts.py", line 386, in process
script.process(p, *script_args)
File "E:\AI\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py", line 247, in process
network = PlugableControlModel(model_path, os.path.join(cn_models_dir, "cldm_v15.yaml"), weight, lowvram=lowvram)
File "E:\AI\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\cldm.py", line 49, in __init__
self.control_model.load_state_dict(state_dict)
File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1671, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for ControlNet:
How do we load the same image into the extension as the image loaded into img2img, so we don't have to manually load the image twice each time?
Errors are reported when the new version is used:
Error completing request
Arguments: ('task(88y7i4r4jcugm8w)', '1girl', 'lowres, bad anatomy, bad hands, (text:1.6), error, missing fingers, clothe pull, extra digit, fewer digits, (cropped:1.2), (censored:1.2), (low quality, worst quality:1.4), fat, (dress, ribbon:1.2), pubic hair, jpeg artifacts, (signature, watermark, username:1.3), (blurry:1.2), mutated, mutation, out of focus, mutated, extra limb, poorly drawn hands and fingers, missing limb, floating limbs, disconnected limbs, malformed hands and fingers, (motion lines:1.2)', [], 30, 15, False, False, 1, 1, 9, -1.0, -1.0, 0, 0, 0, False, 672, 480, False, 0.35, 2, '4x_foolhardy_Remacri', 20, 0, 0, [], 0, False, False, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'Refresh models', True, 'openpose', 'control_any3_openpose(95070f75)', 1, {'image': array([...], dtype=uint8), 'mask': array([...], dtype=uint8)}, False, 'Scale to Fit (Inner Fit)', True, False, False, 'positive', 'comma', 0, False, False, '', 1, '', 0, '', 0, '', True, False, False, False, 0) {}
(image pixel array and solid black mask array elided)
Traceback (most recent call last):
File "C:\Users\60552\Documents\AI folder\stable-diffusion-webui\modules\call_queue.py", line 56, in f
res = list(func(*args, **kwargs))
File "C:\Users\60552\Documents\AI folder\stable-diffusion-webui\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "C:\Users\60552\Documents\AI folder\stable-diffusion-webui\modules\txt2img.py", line 56, in txt2img
processed = process_images(p)
File "C:\Users\60552\Documents\AI folder\stable-diffusion-webui\modules\processing.py", line 486, in process_images
res = process_images_inner(p)
File "C:\Users\60552\Documents\AI folder\stable-diffusion-webui\modules\processing.py", line 628, in process_images_inner
samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
File "C:\Users\60552\Documents\AI folder\stable-diffusion-webui\modules\processing.py", line 828, in sample
samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
File "C:\Users\60552\Documents\AI folder\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 323, in sample
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "C:\Users\60552\Documents\AI folder\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 221, in launch_sampling
return func()
File "C:\Users\60552\Documents\AI folder\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 323, in <lambda>
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "C:\Users\60552\Documents\AI folder\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "C:\Users\60552\Documents\AI folder\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "C:\Users\60552\Documents\AI folder\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\60552\Documents\AI folder\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 135, in forward
x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond={"c_crossattn": c_crossattn, "c_concat": [image_cond_in[a:b]]})
File "C:\Users\60552\Documents\AI folder\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\60552\Documents\AI folder\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
File "C:\Users\60552\Documents\AI folder\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
return self.inner_model.apply_model(*args, **kwargs)
File "C:\Users\60552\Documents\AI folder\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "C:\Users\60552\Documents\AI folder\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
return self.__orig_func(*args, **kwargs)
File "C:\Users\60552\Documents\AI folder\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
x_recon = self.model(x_noisy, t, **cond)
File "C:\Users\60552\Documents\AI folder\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1212, in _call_impl
result = forward_call(*input, **kwargs)
File "C:\Users\60552\Documents\AI folder\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1329, in forward
out = self.diffusion_model(x, t, context=cc)
File "C:\Users\60552\Documents\AI folder\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\60552\Documents\AI folder\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\cldm.py", line 97, in forward2
return forward(*args, **kwargs)
File "C:\Users\60552\Documents\AI folder\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\cldm.py", line 70, in forward
emb = self.time_embed(t_emb)
File "C:\Users\60552\Documents\AI folder\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\60552\Documents\AI folder\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\container.py", line 204, in forward
input = module(input)
File "C:\Users\60552\Documents\AI folder\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\60552\Documents\AI folder\stable-diffusion-webui\extensions-builtin\Lora\lora.py", line 178, in lora_Linear_forward
return lora_forward(self, input, torch.nn.Linear_forward_before_lora(self, input))
File "C:\Users\60552\Documents\AI folder\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\linear.py", line 114, in forward
return F.linear(input, self.weight, self.bias)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_addmm)
To reproduce, put a new model into the models directory.
Any plans to get this supporting inpainting models?
Hey mikubill, I'm trying to understand why a1111 requires so little GPU memory. The model itself is 5.7 GB, and simply loading it is already OOM. How is it optimized to fit in 4 GB?
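Part of the answer is likely precision: a checkpoint saved in fp32 shrinks by half when the weights are kept in fp16, before any layer-by-layer CPU offloading (the webui's --lowvram/--medvram modes) is applied. This is a rough illustration of the fp16 saving only, not the webui's actual loading code:

```python
import torch

# A stand-in layer; real checkpoints are just many such parameter blocks.
layer = torch.nn.Linear(1024, 1024)

# Bytes occupied by the parameters at fp32.
fp32_bytes = sum(p.numel() * p.element_size() for p in layer.parameters())

# Converting to half precision halves every parameter's footprint.
layer.half()
fp16_bytes = sum(p.numel() * p.element_size() for p in layer.parameters())

assert fp16_bytes * 2 == fp32_bytes
```

Offloading then keeps only the module currently executing on the GPU, which is why the budget can drop well below the full model size at the cost of speed.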
Would it be possible to combine ControlNet with this extension?
https://github.com/Coyote-A/ultimate-upscale-for-automatic1111 (or would that developer have to integrate the functionality?)
Currently it copies the feature image for every tile used in the upscale and thus copies the image over itself. The improved detail coherency from canny or HED might work wonders for regaining details (especially thinking about hands etc.).
EDIT:
Just found the thread talking about generally upscaling the mask images. I do think the combination with Ultimate Upscale would be preferable to general upscaling, though, because the individual tile sizes stay in the 512-768 px range and thus should work better with ControlNet's trained models, hopefully.
When trying the segmentation preprocessor, I had a Python error trying to import from the prettytable package.
I don't know if this error was specific to my system, but it was easily corrected with:
venv\Scripts\activate.bat
pip install prettytable
To be safe, the package should be added to the requirements for auto-install.
Currently, when using an SD2 model, an error occurs. The error appears on both a normal, working SD v2 model and a v-parameterization model (SD v2.1 768).
RuntimeError: mat1 and mat2 shapes cannot be multiplied (154x1024 and 768x320)
Full error log attached.
error.log
This would be a useful feature, for example, when starting from a 3D model you rendered, which means you can also render better maps than the auto-generated ones, and they could be fed to the process for more accurate results.
Pretty sure there is an extension for the 2.0 depth model for exactly this purpose.
Thanks for listening.
According to lllyasviel's comment, models like Anything V3 require clip skip = 2 and 3x longer tokens. I wonder if this extension requires such extra work.
lllyasviel/ControlNet#12
Examples:
https://github.com/thygate/stable-diffusion-webui-depthmap-script - uses MiDaS and LeReS directly from webui\models\, so there's no need to download them again; they can be shared by multiple extensions
https://github.com/Extraltodeus/depthmap2mask - for midas as well
https://github.com/arenatemp/stable-diffusion-webui-model-toolkit - don't remember if it downloads something to models, but it extracts components from checkpoints to a folder there
https://github.com/Klace/stable-diffusion-webui-instruct-pix2pix - was already doing that before being integrated into main
There are some other extensions that either share or create their own folders in models.
Also, IMO a ControlNet folder in the default models path would be better suited.
Anyway, outstanding job, really well done, thanks.
EDIT:
dpt_hybrid-midas-501f0c75 - specifically, I already have it in stable-diffusion-webui\models\midas for depth extensions; also:
dpt_beit_large_384.pt
dpt_beit_large_512.pt
dpt_large-midas-2f21e586.pt
midas_v21_small-70d6b9c8.pt
midas_v21-f6b98070.pt
I attempted to resolve this myself by modifying the slider values in the .py from 1024 to 4096; however, it is still limited to 1024.
Almost every image I work with has a side well beyond 1024, normally more than 3x that. However, even when the max for the sliders is raised to 4096, the sliders themselves stay locked at a 1024 max value.
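One possible explanation, assuming this behaves like other WebUI sliders: the UI caches component settings in ui-config.json at the webui root, and those cached values override the defaults in the script, so editing the .py alone has no effect until the corresponding entries are raised (or deleted so they regenerate). The exact key names below are guesses based on the usual customscript/<file>/<tab>/<label>/<property> pattern, not verified against this extension:

```json
{
  "customscript/controlnet.py/txt2img/Canvas Width/maximum": 4096,
  "customscript/controlnet.py/txt2img/Canvas Height/maximum": 4096
}
```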
The error occurs when Low VRAM (8 GB or below) is enabled. However, image generation itself still completes normally.
ERROR: Exception in ASGI application | 2/20 [00:00<00:05, 3.38it/s]
Traceback (most recent call last):
File "C:\stable-diffusion-webui\venv\lib\site-packages\anyio\streams\memory.py", line 94, in receive
return self.receive_nowait()
File "C:\stable-diffusion-webui\venv\lib\site-packages\anyio\streams\memory.py", line 89, in receive_nowait
raise WouldBlock
anyio.WouldBlock
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 77, in call_next
message = await recv_stream.receive()
File "C:\stable-diffusion-webui\venv\lib\site-packages\anyio\streams\memory.py", line 114, in receive
raise EndOfStream
anyio.EndOfStream
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\stable-diffusion-webui\venv\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 407, in run_asgi
result = await app( # type: ignore[func-returns-value]
File "C:\stable-diffusion-webui\venv\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 78, in __call__
return await self.app(scope, receive, send)
File "C:\stable-diffusion-webui\venv\lib\site-packages\fastapi\applications.py", line 271, in __call__
await super().__call__(scope, receive, send)
File "C:\stable-diffusion-webui\venv\lib\site-packages\starlette\applications.py", line 125, in __call__
await self.middleware_stack(scope, receive, send)
File "C:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\errors.py", line 184, in __call__
raise exc
File "C:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\errors.py", line 162, in __call__
await self.app(scope, receive, _send)
File "C:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 104, in __call__
response = await self.dispatch_func(request, call_next)
File "C:\stable-diffusion-webui\modules\api\api.py", line 96, in log_and_time
res: Response = await call_next(req)
File "C:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 80, in call_next
raise app_exc
File "C:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 69, in coro
await self.app(scope, receive_or_disconnect, send_no_error)
File "C:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\gzip.py", line 24, in __call__
await responder(scope, receive, send)
File "C:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\gzip.py", line 44, in __call__
await self.app(scope, receive, self.send_with_gzip)
File "C:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\exceptions.py", line 79, in __call__
raise exc
File "C:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\exceptions.py", line 68, in __call__
await self.app(scope, receive, sender)
File "C:\stable-diffusion-webui\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 21, in __call__
raise e
File "C:\stable-diffusion-webui\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 18, in __call__
await self.app(scope, receive, send)
File "C:\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 706, in __call__
await route.handle(scope, receive, send)
File "C:\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 276, in handle
await self.app(scope, receive, send)
File "C:\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 66, in app
response = await func(request)
File "C:\stable-diffusion-webui\venv\lib\site-packages\fastapi\routing.py", line 237, in app
raw_response = await run_endpoint_function(
File "C:\stable-diffusion-webui\venv\lib\site-packages\fastapi\routing.py", line 165, in run_endpoint_function
return await run_in_threadpool(dependant.call, **values)
File "C:\stable-diffusion-webui\venv\lib\site-packages\starlette\concurrency.py", line 41, in run_in_threadpool
return await anyio.to_thread.run_sync(func, *args)
File "C:\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "C:\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "C:\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
result = context.run(func, *args)
File "C:\stable-diffusion-webui\modules\progress.py", line 85, in progressapi
shared.state.set_current_image()
File "C:\stable-diffusion-webui\modules\shared.py", line 243, in set_current_image
self.do_set_current_image()
File "C:\stable-diffusion-webui\modules\shared.py", line 251, in do_set_current_image
self.assign_current_image(modules.sd_samplers.samples_to_image_grid(self.current_latent))
File "C:\stable-diffusion-webui\modules\sd_samplers_common.py", line 50, in samples_to_image_grid
return images.image_grid([single_sample_to_image(sample, approximation) for sample in samples])
File "C:\stable-diffusion-webui\modules\sd_samplers_common.py", line 50, in <listcomp>
return images.image_grid([single_sample_to_image(sample, approximation) for sample in samples])
File "C:\stable-diffusion-webui\modules\sd_samplers_common.py", line 37, in single_sample_to_image
x_sample = processing.decode_first_stage(shared.sd_model, sample.unsqueeze(0))[0]
File "C:\stable-diffusion-webui\modules\processing.py", line 423, in decode_first_stage
x = model.decode_first_stage(x)
File "C:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "C:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in call
return self.__orig_func(*args, **kwargs)
File "C:\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "C:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 826, in decode_first_stage
return self.first_stage_model.decode(z)
File "C:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\autoencoder.py", line 89, in decode
z = self.post_quant_conv(z)
File "C:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "C:\stable-diffusion-webui\extensions-builtin\Lora\lora.py", line 182, in lora_Conv2d_forward
return lora_forward(self, input, torch.nn.Conv2d_forward_before_lora(self, input))
File "C:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py", line 463, in forward
return self._conv_forward(input, self.weight, self.bias)
File "C:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py", line 459, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same
There is an issue with openpose. When the source image and the target image are not in the same style (for example, the source is a 2D cartoon and the target is a realistic photo), the facial keypoints from openpose always make the face look terrible, even after adjusting the Weight option.
So when the source and target styles differ, the best solution would be an option to leave out the facial keypoints of openpose.
They are: the eye, nose, and ear keypoints.
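One possible way to implement this request: drop the facial keypoints before rendering the pose map. The sketch below is a hypothetical helper, not code from this extension, and it assumes the 18-point COCO-style layout commonly used with OpenPose (index 0 = nose, 14/15 = eyes, 16/17 = ears):

```python
# Hypothetical helper: remove the facial keypoints from an 18-point
# COCO-style OpenPose skeleton so only body limbs get rendered.
# The index layout (0 = nose, 14/15 = eyes, 16/17 = ears) is an
# assumption about the pose format, not taken from the extension.
FACIAL_KEYPOINT_INDICES = {0, 14, 15, 16, 17}

def strip_facial_keypoints(keypoints):
    """Replace facial keypoints with None so a renderer would skip them.

    `keypoints` is a list of 18 (x, y) tuples, with None standing in for
    points the detector did not find.
    """
    return [None if i in FACIAL_KEYPOINT_INDICES else kp
            for i, kp in enumerate(keypoints)]

# Dummy pose with every point "detected" at the same spot:
pose = [(100, 50)] * 18
body_only = strip_facial_keypoints(pose)
```

With this, the body skeleton still constrains the pose while the face is left entirely to the prompt and the checkpoint's own style.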
This tool is a blast! Especially the pose one. Absolutely brilliant. The only thing it lacks is some kind of depth detection for characters (like the depth script does): determining which are closer, which are farther, and which limbs or body parts are behind the body or other objects. For example, if the model puts her hands behind her back, it assumes they are simply not visible and renders the hidden parts randomly in front rather than behind. The same goes for characters standing behind other characters: it tries to draw them in front instead. I can't fully suggest how to achieve this, but it would be awesome!
P.S. I tried using ControlNet together with the depth script, and for some reason it pulls details from the original image even when denoising strength is 1.
I don't see a ControlNet section on the img2img tab, but as far as I know it should be something the model can support. I think it would allow some extremely fine control over the output.
Thanks for this, I hope I can get it to work soon!
I tried the Scribble model, but got this error when I ran it:
RuntimeError: mat1 and mat2 shapes cannot be multiplied (616x1024 and 768x320)
I resized the input image to 512 × 704 pixels and set the normal Width and Height accordingly. I tried running it without any preprocessor and with the fake_scribble preprocessor, and got the same error message both times. Weight was left at 1, and Scribble Mode was tried both on and off.
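The two widths in that error are the tell: SD 2.x checkpoints produce 1024-dim text-encoder embeddings, while the released ControlNet models were trained against SD 1.5, whose cross-attention expects a 768-dim context. A shape mismatch like `616x1024` vs `768x320` therefore usually means an SD 2.x base model is being combined with SD 1.5 ControlNet weights. A small diagnostic sketch (the key name follows the LDM UNet layout, and the tiny dicts below are illustrative stand-ins for real state dicts):

```python
# Sketch: read the text-encoder context width out of a checkpoint's
# cross-attention "to_k" weight shape, to catch SD 1.x / SD 2.x mixing
# before the sampler hits a matmul error. The key name follows the LDM
# UNet layout; the dict "state dicts" below are illustrative stand-ins.
ATTN2_TO_K = "input_blocks.1.1.transformer_blocks.0.attn2.to_k.weight"

def context_dim(state_dict):
    """Return the context width: 768 for SD 1.x, 1024 for SD 2.x."""
    inner_dim, ctx_dim = state_dict[ATTN2_TO_K]  # weight shape (out, in)
    return ctx_dim

sd15_controlnet = {ATTN2_TO_K: (320, 768)}    # trained against SD 1.x
sd2_base_model = {ATTN2_TO_K: (320, 1024)}    # an SD 2.x checkpoint

compatible = context_dim(sd15_controlnet) == context_dim(sd2_base_model)
```

If the two widths disagree, switching to an SD 1.x base checkpoint (or a matching ControlNet) is the fix, rather than any change to image size or preprocessor.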
This is a minor request, but changing the preprocessor text of the midas depth model from "midas" to "depth" would allow it to match the model's actual name, like with the other ControlNet preprocessors. This could help prevent some potential confusion for users who download the depth model but don't immediately see a "depth" option. Personally, even though I'm aware that midas = depth, it's still an extra reminder I have to give myself to select the right preprocessor each time. As a bonus, the sorting for the preprocessors and the models will also match better.
ERROR: Exception in ASGI application (interleaved with the generation progress bar at 15/30 [00:13<00:14, 1.02it/s])
Traceback (most recent call last):
File "G:\stable-webui\venv\lib\site-packages\anyio\streams\memory.py", line 94, in receive
return self.receive_nowait()
File "G:\stable-webui\venv\lib\site-packages\anyio\streams\memory.py", line 89, in receive_nowait
raise WouldBlock
anyio.WouldBlock
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "G:\stable-webui\venv\lib\site-packages\starlette\middleware\base.py", line 77, in call_next
message = await recv_stream.receive()
File "G:\stable-webui\venv\lib\site-packages\anyio\streams\memory.py", line 114, in receive
raise EndOfStream
anyio.EndOfStream
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "G:\stable-webui\venv\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 407, in run_asgi
result = await app( # type: ignore[func-returns-value]
File "G:\stable-webui\venv\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 78, in call
return await self.app(scope, receive, send)
File "G:\stable-webui\venv\lib\site-packages\fastapi\applications.py", line 271, in call
await super().call(scope, receive, send)
File "G:\stable-webui\venv\lib\site-packages\starlette\applications.py", line 125, in call
await self.middleware_stack(scope, receive, send)
File "G:\stable-webui\venv\lib\site-packages\starlette\middleware\errors.py", line 184, in call
raise exc
File "G:\stable-webui\venv\lib\site-packages\starlette\middleware\errors.py", line 162, in call
await self.app(scope, receive, _send)
File "G:\stable-webui\venv\lib\site-packages\starlette\middleware\base.py", line 106, in call
response = await self.dispatch_func(request, call_next)
File "G:\stable-webui\modules\api\api.py", line 96, in log_and_time
res: Response = await call_next(req)
File "G:\stable-webui\venv\lib\site-packages\starlette\middleware\base.py", line 80, in call_next
raise app_exc
File "G:\stable-webui\venv\lib\site-packages\starlette\middleware\base.py", line 69, in coro
await self.app(scope, receive_or_disconnect, send_no_error)
File "G:\stable-webui\venv\lib\site-packages\starlette\middleware\gzip.py", line 24, in call
await responder(scope, receive, send)
File "G:\stable-webui\venv\lib\site-packages\starlette\middleware\gzip.py", line 44, in call
await self.app(scope, receive, self.send_with_gzip)
File "G:\stable-webui\venv\lib\site-packages\starlette\middleware\exceptions.py", line 79, in call
raise exc
File "G:\stable-webui\venv\lib\site-packages\starlette\middleware\exceptions.py", line 68, in call
await self.app(scope, receive, sender)
File "G:\stable-webui\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 21, in call
raise e
File "G:\stable-webui\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 18, in call
await self.app(scope, receive, send)
File "G:\stable-webui\venv\lib\site-packages\starlette\routing.py", line 706, in call
await route.handle(scope, receive, send)
File "G:\stable-webui\venv\lib\site-packages\starlette\routing.py", line 276, in handle
await self.app(scope, receive, send)
File "G:\stable-webui\venv\lib\site-packages\starlette\routing.py", line 66, in app
response = await func(request)
File "G:\stable-webui\venv\lib\site-packages\fastapi\routing.py", line 237, in app
raw_response = await run_endpoint_function(
File "G:\stable-webui\venv\lib\site-packages\fastapi\routing.py", line 165, in run_endpoint_function
return await run_in_threadpool(dependant.call, **values)
File "G:\stable-webui\venv\lib\site-packages\starlette\concurrency.py", line 41, in run_in_threadpool
return await anyio.to_thread.run_sync(func, *args)
File "G:\stable-webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "G:\stable-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "G:\stable-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
result = context.run(func, *args)
File "G:\stable-webui\modules\progress.py", line 85, in progressapi
shared.state.set_current_image()
File "G:\stable-webui\modules\shared.py", line 243, in set_current_image
self.do_set_current_image()
File "G:\stable-webui\modules\shared.py", line 251, in do_set_current_image
self.assign_current_image(modules.sd_samplers.samples_to_image_grid(self.current_latent))
File "G:\stable-webui\modules\sd_samplers_common.py", line 50, in samples_to_image_grid
return images.image_grid([single_sample_to_image(sample, approximation) for sample in samples])
File "G:\stable-webui\modules\sd_samplers_common.py", line 50, in <listcomp>
return images.image_grid([single_sample_to_image(sample, approximation) for sample in samples])
File "G:\stable-webui\modules\sd_samplers_common.py", line 37, in single_sample_to_image
x_sample = processing.decode_first_stage(shared.sd_model, sample.unsqueeze(0))[0]
File "G:\stable-webui\modules\processing.py", line 423, in decode_first_stage
x = model.decode_first_stage(x)
File "G:\stable-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "G:\stable-webui\modules\sd_hijack_utils.py", line 28, in call
return self.__orig_func(*args, **kwargs)
File "G:\stable-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "G:\stable-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 826, in decode_first_stage
return self.first_stage_model.decode(z)
File "G:\stable-webui\repositories\stable-diffusion-stability-ai\ldm\models\autoencoder.py", line 89, in decode
z = self.post_quant_conv(z)
File "G:\stable-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "G:\stable-webui\extensions-builtin\Lora\lora.py", line 182, in lora_Conv2d_forward
return lora_forward(self, input, torch.nn.Conv2d_forward_before_lora(self, input))
File "G:\stable-webui\venv\lib\site-packages\torch\nn\modules\conv.py", line 463, in forward
return self._conv_forward(input, self.weight, self.bias)
File "G:\stable-webui\venv\lib\site-packages\torch\nn\modules\conv.py", line 459, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Input type (torch.cuda.HalfTensor) and weight type (torch.HalfTensor) should be the same
Please add a more detailed tutorial on how to transfer ControlNet to your own model.
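For reference, the transfer trick the ControlNet authors describe amounts to adding the weight difference between your custom checkpoint and the SD 1.5 base onto the ControlNet weights, key by key. A minimal sketch of that idea, with plain floats standing in for tensors (real code would operate on torch state dicts, and the key names here are illustrative):

```python
# Sketch of the difference-transfer idea:
#   transferred = controlnet + (custom_model - base_sd15)
# Plain floats stand in for tensors; a real implementation would add
# torch tensors from the three checkpoints' state dicts.
def transfer_control(controlnet, base, custom):
    out = {}
    for key, value in controlnet.items():
        if key in base and key in custom:
            out[key] = value + (custom[key] - base[key])
        else:
            # ControlNet-only keys (zero convs, hint block) are copied as-is.
            out[key] = value
    return out

controlnet = {"input_blocks.0.0.weight": 1.0, "zero_convs.0.0.weight": 0.0}
base = {"input_blocks.0.0.weight": 0.5}
custom = {"input_blocks.0.0.weight": 0.8}
merged = transfer_control(controlnet, base, custom)
```

The merged weights then behave as if the ControlNet had been trained against the custom model rather than base SD 1.5, to the extent the two checkpoints share a fine-tuning lineage.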
It would be amazing if I could take existing openpose pose data and feed it directly into the txt2img or img2img process, rather than having it first generate the pose estimation from a given image (which it may or may not get right).
I got this error; how do I fix it?
Loading preprocessor: canny, model: body_pose_model(f14788ab)
Loaded state_dict from [/content/stable-diffusion-webui/extensions/sd-webui-controlnet/models/body_pose_model.pth]
Error running process: /content/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/controlnet.py
Traceback (most recent call last):
File "/content/stable-diffusion-webui/modules/scripts.py", line 386, in process
script.process(p, *script_args)
File "/content/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/controlnet.py", line 252, in process
network = PlugableControlModel(model_path, os.path.join(cn_models_dir, "cldm_v15.yaml"), weight, lowvram=lowvram)
File "/content/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/cldm.py", line 57, in __init__
self.control_model.load_state_dict(state_dict)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1671, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for ControlNet: Missing key(s) in state_dict: "time_embed.0.weight", "time_embed.0.bias", "time_embed.2.weight", "time_embed.2.bias", "input_blocks.0.0.weight", "input_blocks.0.0.bias", "input_blocks.1.0.in_layers.0.weight", "input_blocks.1.0.in_layers.0.bias", "input_blocks.1.0.in_layers.2.weight", "input_blocks.1.0.in_layers.2.bias", "input_blocks.1.0.emb_layers.1.weight", "input_blocks.1.0.emb_layers.1.bias", "input_blocks.1.0.out_layers.0.weight", "input_blocks.1.0.out_layers.0.bias", "input_blocks.1.0.out_layers.3.weight", "input_blocks.1.0.out_layers.3.bias", "input_blocks.1.1.norm.weight", "input_blocks.1.1.norm.bias", "input_blocks.1.1.proj_in.weight", "input_blocks.1.1.proj_in.bias", "input_blocks.1.1.transformer_blocks.0.attn1.to_q.weight", "input_blocks.1.1.transformer_blocks.0.attn1.to_k.weight", "input_blocks.1.1.transformer_blocks.0.attn1.to_v.weight", "input_blocks.1.1.transformer_blocks.0.attn1.to_out.0.weight", 
"input_blocks.1.1.transformer_blocks.0.attn1.to_out.0.bias", "input_blocks.1.1.transformer_blocks.0.ff.net.0.proj.weight", "input_blocks.1.1.transformer_blocks.0.ff.net.0.proj.bias", "input_blocks.1.1.transformer_blocks.0.ff.net.2.weight", "input_blocks.1.1.transformer_blocks.0.ff.net.2.bias", "input_blocks.1.1.transformer_blocks.0.attn2.to_q.weight", "input_blocks.1.1.transformer_blocks.0.attn2.to_k.weight", "input_blocks.1.1.transformer_blocks.0.attn2.to_v.weight", "input_blocks.1.1.transformer_blocks.0.attn2.to_out.0.weight", "input_blocks.1.1.transformer_blocks.0.attn2.to_out.0.bias", "input_blocks.1.1.transformer_blocks.0.norm1.weight", "input_blocks.1.1.transformer_blocks.0.norm1.bias", "input_blocks.1.1.transformer_blocks.0.norm2.weight", "input_blocks.1.1.transformer_blocks.0.norm2.bias", "input_blocks.1.1.transformer_blocks.0.norm3.weight", "input_blocks.1.1.transformer_blocks.0.norm3.bias", "input_blocks.1.1.proj_out.weight", "input_blocks.1.1.proj_out.bias", "input_blocks.2.0.in_layers.0.weight", "input_blocks.2.0.in_layers.0.bias", "input_blocks.2.0.in_layers.2.weight", "input_blocks.2.0.in_layers.2.bias", "input_blocks.2.0.emb_layers.1.weight", "input_blocks.2.0.emb_layers.1.bias", "input_blocks.2.0.out_layers.0.weight", "input_blocks.2.0.out_layers.0.bias", "input_blocks.2.0.out_layers.3.weight", "input_blocks.2.0.out_layers.3.bias", "input_blocks.2.1.norm.weight", "input_blocks.2.1.norm.bias", "input_blocks.2.1.proj_in.weight", "input_blocks.2.1.proj_in.bias", "input_blocks.2.1.transformer_blocks.0.attn1.to_q.weight", "input_blocks.2.1.transformer_blocks.0.attn1.to_k.weight", "input_blocks.2.1.transformer_blocks.0.attn1.to_v.weight", "input_blocks.2.1.transformer_blocks.0.attn1.to_out.0.weight", "input_blocks.2.1.transformer_blocks.0.attn1.to_out.0.bias", "input_blocks.2.1.transformer_blocks.0.ff.net.0.proj.weight", "input_blocks.2.1.transformer_blocks.0.ff.net.0.proj.bias", "input_blocks.2.1.transformer_blocks.0.ff.net.2.weight", 
"input_blocks.2.1.transformer_blocks.0.ff.net.2.bias", "input_blocks.2.1.transformer_blocks.0.attn2.to_q.weight", "input_blocks.2.1.transformer_blocks.0.attn2.to_k.weight", "input_blocks.2.1.transformer_blocks.0.attn2.to_v.weight", "input_blocks.2.1.transformer_blocks.0.attn2.to_out.0.weight", "input_blocks.2.1.transformer_blocks.0.attn2.to_out.0.bias", "input_blocks.2.1.transformer_blocks.0.norm1.weight", "input_blocks.2.1.transformer_blocks.0.norm1.bias", "input_blocks.2.1.transformer_blocks.0.norm2.weight", "input_blocks.2.1.transformer_blocks.0.norm2.bias", "input_blocks.2.1.transformer_blocks.0.norm3.weight", "input_blocks.2.1.transformer_blocks.0.norm3.bias", "input_blocks.2.1.proj_out.weight", "input_blocks.2.1.proj_out.bias", "input_blocks.3.0.op.weight", "input_blocks.3.0.op.bias", "input_blocks.4.0.in_layers.0.weight", "input_blocks.4.0.in_layers.0.bias", "input_blocks.4.0.in_layers.2.weight", "input_blocks.4.0.in_layers.2.bias", "input_blocks.4.0.emb_layers.1.weight", "input_blocks.4.0.emb_layers.1.bias", "input_blocks.4.0.out_layers.0.weight", "input_blocks.4.0.out_layers.0.bias", "input_blocks.4.0.out_layers.3.weight", "input_blocks.4.0.out_layers.3.bias", "input_blocks.4.0.skip_connection.weight", "input_blocks.4.0.skip_connection.bias", "input_blocks.4.1.norm.weight", "input_blocks.4.1.norm.bias", "input_blocks.4.1.proj_in.weight", "input_blocks.4.1.proj_in.bias", "input_blocks.4.1.transformer_blocks.0.attn1.to_q.weight", "input_blocks.4.1.transformer_blocks.0.attn1.to_k.weight", "input_blocks.4.1.transformer_blocks.0.attn1.to_v.weight", "input_blocks.4.1.transformer_blocks.0.attn1.to_out.0.weight", "input_blocks.4.1.transformer_blocks.0.attn1.to_out.0.bias", "input_blocks.4.1.transformer_blocks.0.ff.net.0.proj.weight", "input_blocks.4.1.transformer_blocks.0.ff.net.0.proj.bias", "input_blocks.4.1.transformer_blocks.0.ff.net.2.weight", "input_blocks.4.1.transformer_blocks.0.ff.net.2.bias", "input_blocks.4.1.transformer_blocks.0.attn2.to_q.weight", 
"input_blocks.4.1.transformer_blocks.0.attn2.to_k.weight", "input_blocks.4.1.transformer_blocks.0.attn2.to_v.weight", "input_blocks.4.1.transformer_blocks.0.attn2.to_out.0.weight", "input_blocks.4.1.transformer_blocks.0.attn2.to_out.0.bias", "input_blocks.4.1.transformer_blocks.0.norm1.weight", "input_blocks.4.1.transformer_blocks.0.norm1.bias", "input_blocks.4.1.transformer_blocks.0.norm2.weight", "input_blocks.4.1.transformer_blocks.0.norm2.bias", "input_blocks.4.1.transformer_blocks.0.norm3.weight", "input_blocks.4.1.transformer_blocks.0.norm3.bias", "input_blocks.4.1.proj_out.weight", "input_blocks.4.1.proj_out.bias", "input_blocks.5.0.in_layers.0.weight", "input_blocks.5.0.in_layers.0.bias", "input_blocks.5.0.in_layers.2.weight", "input_blocks.5.0.in_layers.2.bias", "input_blocks.5.0.emb_layers.1.weight", "input_blocks.5.0.emb_layers.1.bias", "input_blocks.5.0.out_layers.0.weight", "input_blocks.5.0.out_layers.0.bias", "input_blocks.5.0.out_layers.3.weight", "input_blocks.5.0.out_layers.3.bias", "input_blocks.5.1.norm.weight", "input_blocks.5.1.norm.bias", "input_blocks.5.1.proj_in.weight", "input_blocks.5.1.proj_in.bias", "input_blocks.5.1.transformer_blocks.0.attn1.to_q.weight", "input_blocks.5.1.transformer_blocks.0.attn1.to_k.weight", "input_blocks.5.1.transformer_blocks.0.attn1.to_v.weight", "input_blocks.5.1.transformer_blocks.0.attn1.to_out.0.weight", "input_blocks.5.1.transformer_blocks.0.attn1.to_out.0.bias", "input_blocks.5.1.transformer_blocks.0.ff.net.0.proj.weight", "input_blocks.5.1.transformer_blocks.0.ff.net.0.proj.bias", "input_blocks.5.1.transformer_blocks.0.ff.net.2.weight", "input_blocks.5.1.transformer_blocks.0.ff.net.2.bias", "input_blocks.5.1.transformer_blocks.0.attn2.to_q.weight", "input_blocks.5.1.transformer_blocks.0.attn2.to_k.weight", "input_blocks.5.1.transformer_blocks.0.attn2.to_v.weight", "input_blocks.5.1.transformer_blocks.0.attn2.to_out.0.weight", "input_blocks.5.1.transformer_blocks.0.attn2.to_out.0.bias", 
"input_blocks.5.1.transformer_blocks.0.norm1.weight", "input_blocks.5.1.transformer_blocks.0.norm1.bias", "input_blocks.5.1.transformer_blocks.0.norm2.weight", "input_blocks.5.1.transformer_blocks.0.norm2.bias", "input_blocks.5.1.transformer_blocks.0.norm3.weight", "input_blocks.5.1.transformer_blocks.0.norm3.bias", "input_blocks.5.1.proj_out.weight", "input_blocks.5.1.proj_out.bias", "input_blocks.6.0.op.weight", "input_blocks.6.0.op.bias", "input_blocks.7.0.in_layers.0.weight", "input_blocks.7.0.in_layers.0.bias", "input_blocks.7.0.in_layers.2.weight", "input_blocks.7.0.in_layers.2.bias", "input_blocks.7.0.emb_layers.1.weight", "input_blocks.7.0.emb_layers.1.bias", "input_blocks.7.0.out_layers.0.weight", "input_blocks.7.0.out_layers.0.bias", "input_blocks.7.0.out_layers.3.weight", "input_blocks.7.0.out_layers.3.bias", "input_blocks.7.0.skip_connection.weight", "input_blocks.7.0.skip_connection.bias", "input_blocks.7.1.norm.weight", "input_blocks.7.1.norm.bias", "input_blocks.7.1.proj_in.weight", "input_blocks.7.1.proj_in.bias", "input_blocks.7.1.transformer_blocks.0.attn1.to_q.weight", "input_blocks.7.1.transformer_blocks.0.attn1.to_k.weight", "input_blocks.7.1.transformer_blocks.0.attn1.to_v.weight", "input_blocks.7.1.transformer_blocks.0.attn1.to_out.0.weight", "input_blocks.7.1.transformer_blocks.0.attn1.to_out.0.bias", "input_blocks.7.1.transformer_blocks.0.ff.net.0.proj.weight", "input_blocks.7.1.transformer_blocks.0.ff.net.0.proj.bias", "input_blocks.7.1.transformer_blocks.0.ff.net.2.weight", "input_blocks.7.1.transformer_blocks.0.ff.net.2.bias", "input_blocks.7.1.transformer_blocks.0.attn2.to_q.weight", "input_blocks.7.1.transformer_blocks.0.attn2.to_k.weight", "input_blocks.7.1.transformer_blocks.0.attn2.to_v.weight", "input_blocks.7.1.transformer_blocks.0.attn2.to_out.0.weight", "input_blocks.7.1.transformer_blocks.0.attn2.to_out.0.bias", "input_blocks.7.1.transformer_blocks.0.norm1.weight", "input_blocks.7.1.transformer_blocks.0.norm1.bias", 
"input_blocks.7.1.transformer_blocks.0.norm2.weight", "input_blocks.7.1.transformer_blocks.0.norm2.bias", "input_blocks.7.1.transformer_blocks.0.norm3.weight", "input_blocks.7.1.transformer_blocks.0.norm3.bias", "input_blocks.7.1.proj_out.weight", "input_blocks.7.1.proj_out.bias", "input_blocks.8.0.in_layers.0.weight", "input_blocks.8.0.in_layers.0.bias", "input_blocks.8.0.in_layers.2.weight", "input_blocks.8.0.in_layers.2.bias", "input_blocks.8.0.emb_layers.1.weight", "input_blocks.8.0.emb_layers.1.bias", "input_blocks.8.0.out_layers.0.weight", "input_blocks.8.0.out_layers.0.bias", "input_blocks.8.0.out_layers.3.weight", "input_blocks.8.0.out_layers.3.bias", "input_blocks.8.1.norm.weight", "input_blocks.8.1.norm.bias", "input_blocks.8.1.proj_in.weight", "input_blocks.8.1.proj_in.bias", "input_blocks.8.1.transformer_blocks.0.attn1.to_q.weight", "input_blocks.8.1.transformer_blocks.0.attn1.to_k.weight", "input_blocks.8.1.transformer_blocks.0.attn1.to_v.weight", "input_blocks.8.1.transformer_blocks.0.attn1.to_out.0.weight", "input_blocks.8.1.transformer_blocks.0.attn1.to_out.0.bias", "input_blocks.8.1.transformer_blocks.0.ff.net.0.proj.weight", "input_blocks.8.1.transformer_blocks.0.ff.net.0.proj.bias", "input_blocks.8.1.transformer_blocks.0.ff.net.2.weight", "input_blocks.8.1.transformer_blocks.0.ff.net.2.bias", "input_blocks.8.1.transformer_blocks.0.attn2.to_q.weight", "input_blocks.8.1.transformer_blocks.0.attn2.to_k.weight", "input_blocks.8.1.transformer_blocks.0.attn2.to_v.weight", "input_blocks.8.1.transformer_blocks.0.attn2.to_out.0.weight", "input_blocks.8.1.transformer_blocks.0.attn2.to_out.0.bias", "input_blocks.8.1.transformer_blocks.0.norm1.weight", "input_blocks.8.1.transformer_blocks.0.norm1.bias", "input_blocks.8.1.transformer_blocks.0.norm2.weight", "input_blocks.8.1.transformer_blocks.0.norm2.bias", "input_blocks.8.1.transformer_blocks.0.norm3.weight", "input_blocks.8.1.transformer_blocks.0.norm3.bias", "input_blocks.8.1.proj_out.weight", 
"input_blocks.8.1.proj_out.bias", "input_blocks.9.0.op.weight", "input_blocks.9.0.op.bias", "input_blocks.10.0.in_layers.0.weight", "input_blocks.10.0.in_layers.0.bias", "input_blocks.10.0.in_layers.2.weight", "input_blocks.10.0.in_layers.2.bias", "input_blocks.10.0.emb_layers.1.weight", "input_blocks.10.0.emb_layers.1.bias", "input_blocks.10.0.out_layers.0.weight", "input_blocks.10.0.out_layers.0.bias", "input_blocks.10.0.out_layers.3.weight", "input_blocks.10.0.out_layers.3.bias", "input_blocks.11.0.in_layers.0.weight", "input_blocks.11.0.in_layers.0.bias", "input_blocks.11.0.in_layers.2.weight", "input_blocks.11.0.in_layers.2.bias", "input_blocks.11.0.emb_layers.1.weight", "input_blocks.11.0.emb_layers.1.bias", "input_blocks.11.0.out_layers.0.weight", "input_blocks.11.0.out_layers.0.bias", "input_blocks.11.0.out_layers.3.weight", "input_blocks.11.0.out_layers.3.bias", "zero_convs.0.0.weight", "zero_convs.0.0.bias", "zero_convs.1.0.weight", "zero_convs.1.0.bias", "zero_convs.2.0.weight", "zero_convs.2.0.bias", "zero_convs.3.0.weight", "zero_convs.3.0.bias", "zero_convs.4.0.weight", "zero_convs.4.0.bias", "zero_convs.5.0.weight", "zero_convs.5.0.bias", "zero_convs.6.0.weight", "zero_convs.6.0.bias", "zero_convs.7.0.weight", "zero_convs.7.0.bias", "zero_convs.8.0.weight", "zero_convs.8.0.bias", "zero_convs.9.0.weight", "zero_convs.9.0.bias", "zero_convs.10.0.weight", "zero_convs.10.0.bias", "zero_convs.11.0.weight", "zero_convs.11.0.bias", "input_hint_block.0.weight", "input_hint_block.0.bias", "input_hint_block.2.weight", "input_hint_block.2.bias", "input_hint_block.4.weight", "input_hint_block.4.bias", "input_hint_block.6.weight", "input_hint_block.6.bias", "input_hint_block.8.weight", "input_hint_block.8.bias", "input_hint_block.10.weight", "input_hint_block.10.bias", "input_hint_block.12.weight", "input_hint_block.12.bias", "input_hint_block.14.weight", "input_hint_block.14.bias", "middle_block.0.in_layers.0.weight", "middle_block.0.in_layers.0.bias", 
"middle_block.0.in_layers.2.weight", "middle_block.0.in_layers.2.bias", "middle_block.0.emb_layers.1.weight", "middle_block.0.emb_layers.1.bias", "middle_block.0.out_layers.0.weight", "middle_block.0.out_layers.0.bias", "middle_block.0.out_layers.3.weight", "middle_block.0.out_layers.3.bias", "middle_block.1.norm.weight", "middle_block.1.norm.bias", "middle_block.1.proj_in.weight", "middle_block.1.proj_in.bias", "middle_block.1.transformer_blocks.0.attn1.to_q.weight", "middle_block.1.transformer_blocks.0.attn1.to_k.weight", "middle_block.1.transformer_blocks.0.attn1.to_v.weight", "middle_block.1.transformer_blocks.0.attn1.to_out.0.weight", "middle_block.1.transformer_blocks.0.attn1.to_out.0.bias", "middle_block.1.transformer_blocks.0.ff.net.0.proj.weight", "middle_block.1.transformer_blocks.0.ff.net.0.proj.bias", "middle_block.1.transformer_blocks.0.ff.net.2.weight", "middle_block.1.transformer_blocks.0.ff.net.2.bias", "middle_block.1.transformer_blocks.0.attn2.to_q.weight", "middle_block.1.transformer_blocks.0.attn2.to_k.weight", "middle_block.1.transformer_blocks.0.attn2.to_v.weight", "middle_block.1.transformer_blocks.0.attn2.to_out.0.weight", "middle_block.1.transformer_blocks.0.attn2.to_out.0.bias", "middle_block.1.transformer_blocks.0.norm1.weight", "middle_block.1.transformer_blocks.0.norm1.bias", "middle_block.1.transformer_blocks.0.norm2.weight", "middle_block.1.transformer_blocks.0.norm2.bias", "middle_block.1.transformer_blocks.0.norm3.weight", "middle_block.1.transformer_blocks.0.norm3.bias", "middle_block.1.proj_out.weight", "middle_block.1.proj_out.bias", "middle_block.2.in_layers.0.weight", "middle_block.2.in_layers.0.bias", "middle_block.2.in_layers.2.weight", "middle_block.2.in_layers.2.bias", "middle_block.2.emb_layers.1.weight", "middle_block.2.emb_layers.1.bias", "middle_block.2.out_layers.0.weight", "middle_block.2.out_layers.0.bias", "middle_block.2.out_layers.3.weight", "middle_block.2.out_layers.3.bias", "middle_block_out.0.weight", 
"middle_block_out.0.bias". Unexpected key(s) in state_dict: "conv3_3.bias", "Mconv5_stage2_L2.bias", "conv5_3_CPM_L2.bias", "conv5_1_CPM_L2.bias", "Mconv3_stage3_L1.bias", "Mconv1_stage2_L1.weight", "Mconv2_stage5_L1.weight", "Mconv7_stage2_L1.bias", "Mconv3_stage2_L2.weight", "Mconv6_stage5_L1.weight", "conv3_1.weight", "Mconv1_stage2_L2.bias", "Mconv3_stage5_L1.weight", "Mconv1_stage3_L1.weight", "Mconv7_stage2_L2.weight", "Mconv5_stage3_L1.bias", "Mconv6_stage3_L2.bias", "Mconv4_stage6_L1.bias", "conv3_3.weight", "Mconv1_stage5_L2.weight", "Mconv2_stage4_L2.bias", "Mconv6_stage2_L2.weight", "Mconv2_stage2_L1.bias", "Mconv5_stage4_L1.bias", "Mconv5_stage2_L1.weight", "conv3_2.weight", "Mconv6_stage3_L1.bias", "conv5_2_CPM_L2.bias", "conv5_1_CPM_L2.weight", "Mconv1_stage5_L1.bias", "Mconv5_stage6_L2.bias", "Mconv2_stage4_L1.bias", "Mconv5_stage3_L2.bias", "conv5_5_CPM_L1.weight", "Mconv4_stage4_L1.weight", "Mconv5_stage4_L2.bias", "Mconv4_stage5_L1.bias", "Mconv3_stage5_L1.bias", "Mconv4_stage2_L1.bias", "Mconv1_stage5_L2.bias", "Mconv6_stage6_L2.bias", "Mconv5_stage6_L1.bias", "Mconv6_stage6_L1.weight", "Mconv7_stage3_L1.bias", "Mconv7_stage6_L1.bias", "Mconv6_stage6_L2.weight", "Mconv7_stage2_L1.weight", "Mconv6_stage3_L1.weight", "Mconv6_stage2_L1.bias", "Mconv6_stage2_L1.weight", "conv5_1_CPM_L1.weight", "Mconv5_stage6_L1.weight", "Mconv4_stage3_L1.bias", "conv5_2_CPM_L1.weight", "Mconv1_stage4_L2.weight", "Mconv2_stage2_L1.weight", "Mconv4_stage3_L2.weight", "conv4_2.weight", "conv2_2.bias", "Mconv6_stage3_L2.weight", "Mconv2_stage6_L1.bias", "conv1_2.bias", "Mconv3_stage2_L2.bias", "Mconv3_stage5_L2.weight", "Mconv7_stage5_L1.weight", "Mconv1_stage4_L2.bias", "Mconv3_stage3_L1.weight", "conv1_1.bias", "Mconv1_stage5_L1.weight", "Mconv4_stage2_L1.weight", "conv4_3_CPM.weight", "conv4_1.bias", "Mconv1_stage2_L1.bias", "conv2_2.weight", "conv4_3_CPM.bias", "Mconv2_stage3_L1.bias", "conv5_2_CPM_L2.weight", "conv5_5_CPM_L2.weight", "Mconv7_stage5_L2.bias", 
"Mconv3_stage3_L2.weight", "Mconv5_stage3_L1.weight", "Mconv2_stage5_L1.bias", "Mconv3_stage6_L2.bias", "Mconv1_stage3_L2.weight", "conv4_2.bias", "conv5_1_CPM_L1.bias", "Mconv6_stage4_L2.bias", "conv5_5_CPM_L1.bias", "Mconv5_stage4_L1.weight", "conv5_4_CPM_L2.bias", "Mconv6_stage2_L2.bias", "Mconv2_stage3_L1.weight", "Mconv6_stage4_L1.weight", "Mconv5_stage6_L2.weight", "Mconv3_stage4_L2.weight", "Mconv3_stage4_L2.bias", "Mconv3_stage6_L1.bias", "conv5_5_CPM_L2.bias", "Mconv7_stage6_L2.weight", "Mconv7_stage3_L1.weight", "Mconv6_stage5_L2.weight", "Mconv4_stage6_L2.bias", "Mconv7_stage5_L1.bias", "Mconv3_stage6_L2.weight", "Mconv1_stage6_L1.weight", "Mconv4_stage6_L2.weight", "Mconv5_stage2_L1.bias", "Mconv3_stage2_L1.weight", "Mconv3_stage3_L2.bias", "Mconv2_stage4_L1.weight", "Mconv6_stage6_L1.bias", "Mconv5_stage2_L2.weight", "Mconv4_stage3_L1.weight", "Mconv7_stage4_L2.weight", "Mconv4_stage4_L1.bias", "Mconv4_stage5_L2.bias", "conv4_4_CPM.weight", "Mconv2_stage4_L2.weight", "Mconv2_stage5_L2.weight", "Mconv7_stage6_L1.weight", "conv4_1.weight", "Mconv2_stage6_L2.weight", "conv2_1.bias", "Mconv6_stage5_L2.bias", "Mconv4_stage5_L1.weight", "Mconv2_stage3_L2.weight", "conv3_2.bias", "conv4_4_CPM.bias", "Mconv5_stage3_L2.weight", "Mconv3_stage4_L1.bias", "conv5_3_CPM_L1.bias", "Mconv5_stage5_L1.weight", "conv1_2.weight", "conv5_3_CPM_L1.weight", "Mconv4_stage4_L2.weight", "Mconv3_stage2_L1.bias", "Mconv3_stage4_L1.weight", "Mconv3_stage6_L1.weight", "conv5_2_CPM_L1.bias", "Mconv1_stage6_L1.bias", "Mconv2_stage3_L2.bias", "Mconv2_stage6_L1.weight", "Mconv7_stage4_L2.bias", "Mconv4_stage3_L2.bias", "conv3_1.bias", "Mconv2_stage2_L2.bias", "Mconv3_stage5_L2.bias", "Mconv4_stage2_L2.bias", "Mconv1_stage4_L1.bias", "Mconv4_stage6_L1.weight", "Mconv5_stage5_L2.weight", "Mconv6_stage5_L1.bias", "Mconv2_stage2_L2.weight", "Mconv4_stage2_L2.weight", "Mconv7_stage6_L2.bias", "Mconv1_stage6_L2.weight", "Mconv1_stage2_L2.weight", "Mconv1_stage4_L1.weight", 
"Mconv1_stage3_L1.bias", "conv5_4_CPM_L1.weight", "Mconv7_stage4_L1.bias", "Mconv6_stage4_L1.bias", "Mconv2_stage5_L2.bias", "conv3_4.weight", "conv3_4.bias", "Mconv5_stage5_L1.bias", "Mconv7_stage3_L2.weight", "Mconv1_stage6_L2.bias", "conv5_3_CPM_L2.weight", "Mconv5_stage4_L2.weight", "Mconv4_stage4_L2.bias", "Mconv7_stage4_L1.weight", "conv5_4_CPM_L1.bias", "conv5_4_CPM_L2.weight", "conv1_1.weight", "Mconv7_stage2_L2.bias", "Mconv7_stage3_L2.bias", "conv2_1.weight", "Mconv1_stage3_L2.bias", "Mconv2_stage6_L2.bias", "Mconv4_stage5_L2.weight", "Mconv5_stage5_L2.bias", "Mconv7_stage5_L2.weight", "Mconv6_stage4_L2.weight".
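Reading the key names suggests what went wrong: the missing keys (`middle_block.*`, `middle_block_out.*`) are the UNet-copy layers a ControlNet checkpoint is expected to contain, while the unexpected keys (`conv1_1.*`, `Mconv*_stage*_L1/L2.*`) are the layer names of a CMU-style OpenPose body-pose estimator. In other words, an annotator/preprocessor model file (e.g. `body_pose_model.pth`) appears to have been selected as the ControlNet model. A minimal sketch of this diagnosis — `guess_checkpoint_kind` is a hypothetical helper, not part of the extension, and it only checks the prefixes visible in the error above:

```python
def guess_checkpoint_kind(keys):
    """Heuristically classify a state_dict by its key-name prefixes.

    Hypothetical helper for illustration only: it recognizes the two
    layer-naming schemes that appear in the error message above.
    """
    # ControlNet checkpoints copy the Stable Diffusion UNet block layout.
    if any(k.startswith(("middle_block", "input_blocks")) for k in keys):
        return "controlnet"
    # CMU OpenPose body models use conv*/Mconv*_stage*_L* layer names.
    if any(k.startswith(("Mconv", "conv1_1")) for k in keys):
        return "openpose body estimator"
    return "unknown"

# The unexpected keys in the error come from an OpenPose body model:
print(guess_checkpoint_kind(["conv1_1.weight", "Mconv5_stage2_L2.bias"]))
# → openpose body estimator
# whereas a real ControlNet checkpoint carries UNet-style names:
print(guess_checkpoint_kind(["middle_block.0.in_layers.2.weight"]))
# → controlnet
```

If this reading is right, the fix is simply to pick a ControlNet weight file (e.g. `control_sd15_openpose.pth`) in the Model dropdown and keep the pose-estimator `.pth` files in the annotator directory, where the preprocessor loads them.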
╭─ Traceback (most recent call last) ────────────────────────────────╮
│ D:\stable-diffusion-webui\launch.py:361 in <module> │
│ │
│ 358 │
│ 359 if __name__ == "__main__": │
│ 360 │ prepare_environment() │
│ ❱ 361 │ start() │
│ 362 │
│ │
│ D:\stable-diffusion-webui\launch.py:356 in start │
│ │
│ 353 │ if '--nowebui' in sys.argv: │
│ 354 │ │ webui.api_only() │
│ 355 │ else: │
│ ❱ 356 │ │ webui.webui() │
│ 357 │
│ 358 │
│ 359 if __name__ == "__main__": │
│ │
│ D:\stable-diffusion-webui\webui.py:205 in webui │
│ │
│ 202 │ │ │
│ 203 │ │ modules.script_callbacks.before_ui_callback() │
│ 204 │ │ │
│ ❱ 205 │ │ shared.demo = modules.ui.create_ui() │
│ 206 │ │ │
│ 207 │ │ if cmd_opts.gradio_queue: │
│ 208 │ │ │ shared.demo.queue(64) │
│ │
│ D:\stable-diffusion-webui\modules\ui.py:458 in create_ui │
│ │
│ 455 │ parameters_copypaste.reset() │
│ 456 │ │
│ 457 │ modules.scripts.scripts_current = modules.scripts.scripts_txt2img │
│ ❱ 458 │ modules.scripts.scripts_txt2img.initialize_scripts(is_img2img=False) │
│ 459 │ │
│ 460 │ with gr.Blocks(analytics_enabled=False) as txt2img_interface: │
│ 461 │ │ txt2img_prompt, txt2img_prompt_styles, txt2img_negative_prompt, submit, _, _, tx │
│ │
│ D:\stable-diffusion-webui\modules\scripts.py:270 in initialize_scripts │
│ │
│ 267 │ │ auto_processing_scripts = scripts_auto_postprocessing.create_auto_preprocessing │
│ 268 │ │ │
│ 269 │ │ for script_class, path, basedir, script_module in auto_processing_scripts + scri │
│ ❱ 270 │ │ │ script = script_class() │
│ 271 │ │ │ script.filename = path │
│ 272 │ │ │ script.is_txt2img = not is_img2img │
│ 273 │ │ │ script.is_img2img = is_img2img │
│ │
│ D:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py:115 in __init__ │
│ │
│ 112 │ │ │ "depth": midas, │
│ 113 │ │ │ "hed": hed, │
│ 114 │ │ │ "mlsd": mlsd, │
│ ❱ 115 │ │ │ "normal_map": midas_normal, │
│ 116 │ │ │ "openpose": openpose, │
│ 117 │ │ │ "openpose_hand": openpose_hand, │
│ 118 │ │ │ "fake_scribble": fake_scribble, │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
NameError: name 'midas_normal' is not defined
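The `NameError` comes from the preprocessor mapping in the extension's `__init__`: a Python dict literal evaluates every value eagerly, so if one annotator name (`midas_normal`) was never defined — typically because the extension files are out of sync with the WebUI version, or an annotator import silently failed — the whole script class fails to construct and the UI cannot start. A minimal sketch of the failure mode and one defensive alternative; the names below are stand-ins, not the extension's actual code:

```python
# Stand-ins for real preprocessor functions; note that midas_normal is
# deliberately NOT defined, mimicking a failed/partial extension update.
midas = lambda img: img
openpose = lambda img: img

# Mapping preprocessor names to the symbols they should resolve to:
candidates = {
    "depth": "midas",
    "openpose": "openpose",
    "normal_map": "midas_normal",  # <- undefined in this session
}

# Looking symbols up via globals().get() drops missing entries instead
# of raising NameError while the registry is being built:
preprocessors = {
    name: fn
    for name, fn in ((n, globals().get(sym)) for n, sym in candidates.items())
    if fn is not None
}
print(sorted(preprocessors))  # → ['depth', 'openpose']
```

In practice, users resolved this by updating or re-cloning the sd-webui-controlnet extension so that `midas_normal` (and the other annotators) are defined again; the snippet only illustrates why a single missing name breaks the entire mapping.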