pkuliyi2015 / multidiffusion-upscaler-for-automatic1111
Tiled Diffusion and VAE optimize, licensed under CC BY-NC-SA 4.0
License: Other
It works fine until it gets to the upscale; then SD stops working and I get this on the CMD panel:
Error completing request
Arguments: ('task(d58rlok9n0wo2vx)', 'masterpiece,best quality,loli,\n\n1girl,japanese clothes,white hair,long hair,geta,calm,(looking at viewer:1.2),rim lighting,happy,light particals,shining eyes,(outdoors:1.5),dancing,geta,hair buns,forehead mark, light smile,yellow eyes,\n\n(on an vast ocean:1.2),sunbeam,scenery,(ripplings under feet:1.1),clouds,(sunrise:1.2),waterscape,(a lot of light sparkles:1.2),(sunset glow:1.3),god light,ripplings,countless laterns floating on sea surface,burning sky,sea wave,water splash,torri,japanese temple on sea surface at background,\n\n(serene:1.1), (pure:1.1), (graceful:1.1), (spiritual:1.1), (mystical:1.1), (tranquil:1.1), (holy:1.1), (sacred:1.1), (devotion:0.8),\n\nsurrounded by flames,fire,\n\nyys,\nlora:buzhihuo_320:1:ALL,\nlora:yys_323:0.7:ARTSTYLE\nlora:LoconLoraOffsetNoise_locon0501:1,', '[EasyNegative:0.5], bad_prompt_version2, (worst quality, low quality:1.5), bad anatomy, fewer digits, text, old, signature, watermark, username, artist name, bad proportions,lowres, polar lowres, bad anatomy, bad face, bad hands, bad body, bad shose, bad feet, bad proportions, {bad leg}, {{more legs}}, worst quality, low quality, normal quality, gross proportions, blurry, poorly drawn asymmetric eyes, text,error, missing fingers, missing arms, missing legs, short legs, extra digit,vivid color,bag,bad-hands-5,', [], 30, 15, False, False, 1, 1, 7.5, -1.0, -1.0, 0, 0, 0, False, 512, 768, True, 0.45, 2, 'R-ESRGAN 4x+', 10, 0, 0, [], 0, True, 'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 1, 'None', 2, False, False, 1, False, 1, 0.4, 0.4, 0.2, 0.2, '', '', False, 1, 0.4, 0.4, 0.2, 0.2, '', '', False, 1, 0.4, 0.4, 0.2, 0.2, '', '', False, 1, 0.4, 0.4, 0.2, 0.2, '', '', False, 1, 0.4, 0.4, 0.2, 0.2, '', '', False, 1, 0.4, 0.4, 0.2, 0.2, '', '', False, 1, 0.4, 0.4, 0.2, 0.2, '', '', False, 1, 0.4, 0.4, 0.2, 0.2, '', '', False, False, True, True, 0, 3072, 192, False, '', 0, <scripts.external_code.ControlNetUnit object at 0x0000027880A14CA0>, <scripts.external_code.ControlNetUnit object at 0x0000027880A143A0>, <scripts.external_code.ControlNetUnit object at 0x0000027880A17400>, <scripts.external_code.ControlNetUnit object at 0x00000278808CE1A0>, False, '', 0.5, True, False, '', 'Lerp', False, False, 1, 0.15, False, 'OUT', ['OUT'], 5, 0, 'Bilinear', False, 'Pooling Max', False, 'Lerp', '', '', False, 'NONE:0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0\nALL:1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1\nINS:1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0\nIND:1,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0\nINALL:1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0\nMIDD:1,0,0,0,1,1,1,1,1,1,1,1,0,0,0,0,0\nOUTD:1,0,0,0,0,0,0,0,1,1,1,1,0,0,0,0,0\nOUTS:1,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1\nOUTALL:1,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1\nALL0.5:0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5\nKEEPFACE_STRONG:1,1,1,1,1,1,1,1,0,0,0,1,1,1,1,1,1\nKEEPFACE_WEAK:1,1,1,1,1,1,0.2,1,0.2,0,0,0.8,1,1,1,1,1\nCHANGEFACE_STRONG:1,0,0,0,0,0,0,0,1,1,1,0,0,0,0,0,0\nCHANGEFACE_WEAK:1,0,0,0,0,0,0,0,0.8,1,1,0.2,0,0,0,0,0\nCLOTHES:1,1,1,1,1,0,0.2,0,0.8,1,1,0.2,0,0,0,0,0\nPOSES:1,0,0,0,0,0,0.2,1,1,1,0,0,0,0,0,0,0\nARTSTYLE:1,0,0,0,0,0,0,0,0,0,0,0.8,1,1,1,1,1\nCHARACTER_DESTYLE:1,1,1,1,1,0,0,0,1,1,1,1,1,1,1,0,0\nBACKGROUND_DESTYLE:1,1,1,1,1,1,0.2,1,0.2,0,0,0.8,1,1,1,0,0\nTEST:1,0,0,0,0,0.15,0.25,0,1,1,1,1,1,1,1,1,1\n', False, 0, 'values', '0,0.25,0.5,0.75,1', 'Block ID', 'IN05-OUT05', 'none', '', '0.5,1', 'BASE,IN00,IN01,IN02,IN03,IN04,IN05,IN06,IN07,IN08,IN09,IN10,IN11,M00,OUT00,OUT01,OUT02,OUT03,OUT04,OUT05,OUT06,OUT07,OUT08,OUT09,OUT10,OUT11', 
'black', '20', False, False, False, False, False, '1:1,1:2,1:2', '0:0,0:0,0:1', '0.2,0.8,0.8', 20, False, False, 'positive', 'comma', 0, False, False, '', 1, '', 0, '', 0, '', True, False, False, False, 0, None, False, None, False, None, False, None, False, 50) {}
Traceback (most recent call last):
File "F:\AI_SD\modules\call_queue.py", line 56, in f
res = list(func(*args, **kwargs))
File "F:\AI_SD\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "F:\AI_SD\modules\txt2img.py", line 56, in txt2img
processed = process_images(p)
File "F:\AI_SD\modules\processing.py", line 486, in process_images
res = process_images_inner(p)
File "F:\AI_SD\modules\processing.py", line 636, in process_images_inner
samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
File "F:\AI_SD\modules\processing.py", line 908, in sample
samples = self.sampler.sample_img2img(self, samples, noise, conditioning, unconditional_conditioning, steps=self.hr_second_pass_steps or self.steps, image_conditioning=image_conditioning)
File "F:\AI_SD\modules\sd_samplers_kdiffusion.py", line 324, in sample_img2img
samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "F:\AI_SD\modules\sd_samplers_kdiffusion.py", line 227, in launch_sampling
return func()
File "F:\AI_SD\modules\sd_samplers_kdiffusion.py", line 324, in
samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "F:\AI_SD\python\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "F:\AI_SD\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "F:\AI_SD\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "F:\AI_SD\modules\sd_samplers_kdiffusion.py", line 138, in forward
x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond={"c_crossattn": c_crossattn, "c_concat": [image_cond_in[a:b]]})
File "F:\AI_SD\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "F:\AI_SD\python\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "F:\AI_SD\extensions\multidiffusion-upscaler-for-automatic1111\methods\multidiffusion.py", line 78, in kdiff_repeat
return self.compute_x_tile(x_in, repeat_func, custom_func)
File "F:\AI_SD\extensions\multidiffusion-upscaler-for-automatic1111\methods\multidiffusion.py", line 117, in compute_x_tile
assert H == self.h and W == self.w
AssertionError
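For what it's worth, here is a hypothetical reduction of the failing check (my assumption about the cause, not a confirmed diagnosis: the extension caches the latent size when it hooks the sampler, and the hires-fix second pass then feeds a latent of a different size):

```python
# Names mirror methods/multidiffusion.py from the traceback; the logic is illustrative only.
class TiledMethod:
    def __init__(self, h, w):
        self.h, self.w = h, w                # latent size cached when the sampler is hooked

    def compute_x_tile(self, x_in):
        B, C, H, W = x_in.shape              # latent handed over by the sampler
        assert H == self.h and W == self.w   # a hires pass with a larger latent fails here
```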
This extension is amazingly fast and makes upscaling possible on low-end machines, but there are some issues. I don't know whether it is the model I use or the parameters, but I could not get the desired result.
Firstly, it seems the VAE is not working as intended (my VAE is not named after the model I use): the upscaled version looks much flatter, like when the VAE is not working. And I don't know whether it is because of the low contrast, but I think the upscaled version looks less detailed than the original one. Is this normal? Can anybody tell me what I am doing wrong? Here is the before-and-after comparison and all my settings.
models and prompts
The original
https://imgsli.com/MTYzMzk4
As the title says.
I recently just installed the plugin, so I apologize if I did anything wrong.
Essentially, when I try to use MultiDiffusion alongside ControlNet, I receive an error. MultiDiffusion's settings were unchanged. ControlNet was activated with the first slot using t2i keypose model and the second slot using the openpose model. Their preprocessors were set to "none" since I provided a custom pose for both of them.
I first tried MultiDiffusion with ControlNet together, and I received an error. I then disabled MultiDiffusion and the image generated correctly, with both ControlNets working. Lastly, I enabled only MultiDiffusion and disabled both ControlNets, and the image generated fine as well. After a few more runs, it seems like both work independently but aren't working when used together.
I'll post the console log below:
ControlNet found. MultiDiffusion ControlNet support is enabled.
Loading model from cache: t2iadapter_keypose_sd14v1 [ba1d909a]
Loading preprocessor: none
Loading model from cache: control_openpose-fp16 [9ca67cc5]
Loading preprocessor: none
MultiDiffusion hooked into DPM++ SDE Karras sampler. Tile size: 64 x 64 Tile batches: 4 Batch size: 1
0%| | 0/20 [00:00<?, ?it/s]
Error completing request
Arguments: ('task(p85frjnq8udmjuy)', 'a person standing', '', [], 20, 16, False, False, 1, 1, 10, -1.0, -1.0, 0, 0, 0, False, 640, 640, False, 1, 2, '4x-UltraSharp', 0, 0, 0, [], 0, True, False, 1024, 1024, True, 64, 64, 32, 1, 'None', 2, False, False, True, True, 0, 2048, 128, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', False, True, True, 'none', 't2iadapter_keypose_sd14v1 [ba1d909a]', 1, {'image': array([[[0, 0, 0],
[1, 1, 1],
[1, 1, 1],
...,
[1, 1, 1],
[0, 0, 0],
[0, 0, 0]],
[[1, 1, 1],
[1, 1, 1],
[0, 0, 0],
...,
[1, 1, 1],
[1, 1, 1],
[1, 1, 1]],
[[0, 0, 0],
[1, 1, 1],
[1, 1, 1],
...,
[1, 1, 1],
[1, 1, 1],
[0, 0, 0]],
...,
[[1, 1, 1],
[1, 1, 1],
[0, 0, 0],
...,
[0, 0, 0],
[0, 0, 0],
[1, 1, 1]],
[[1, 1, 1],
[0, 0, 0],
[1, 1, 1],
...,
[0, 0, 0],
[1, 1, 1],
[0, 0, 0]],
[[0, 0, 0],
[0, 0, 0],
[0, 0, 0],
...,
[0, 0, 0],
[0, 0, 0],
[0, 0, 0]]], dtype=uint8), 'mask': array([[[ 0, 0, 0, 255],
[ 0, 0, 0, 255],
[ 0, 0, 0, 255],
...,
[ 0, 0, 0, 255],
[ 0, 0, 0, 255],
[ 0, 0, 0, 255]],
[[ 0, 0, 0, 255],
[ 0, 0, 0, 255],
[ 0, 0, 0, 255],
...,
[ 0, 0, 0, 255],
[ 0, 0, 0, 255],
[ 0, 0, 0, 255]],
[[ 0, 0, 0, 255],
[ 0, 0, 0, 255],
[ 0, 0, 0, 255],
...,
[ 0, 0, 0, 255],
[ 0, 0, 0, 255],
[ 0, 0, 0, 255]],
...,
[[ 0, 0, 0, 255],
[ 0, 0, 0, 255],
[ 0, 0, 0, 255],
...,
[ 0, 0, 0, 255],
[ 0, 0, 0, 255],
[ 0, 0, 0, 255]],
[[ 0, 0, 0, 255],
[ 0, 0, 0, 255],
[ 0, 0, 0, 255],
...,
[ 0, 0, 0, 255],
[ 0, 0, 0, 255],
[ 0, 0, 0, 255]],
[[ 0, 0, 0, 255],
[ 0, 0, 0, 255],
[ 0, 0, 0, 255],
...,
[ 0, 0, 0, 255],
[ 0, 0, 0, 255],
[ 0, 0, 0, 255]]], dtype=uint8)}, False, 'Scale to Fit (Inner Fit)', False, False, 64, 64, 64, 0, 1, False, True, 'none', 'control_openpose-fp16 [9ca67cc5]', 1, {'image': array([[[0, 0, 0],
[1, 1, 1],
[1, 1, 1],
...,
[1, 1, 1],
[0, 0, 0],
[0, 0, 0]],
[[1, 1, 1],
[1, 1, 1],
[0, 0, 0],
...,
[1, 1, 1],
[1, 1, 1],
[1, 1, 1]],
[[0, 0, 0],
[1, 1, 1],
[1, 1, 1],
...,
[1, 1, 1],
[1, 1, 1],
[0, 0, 0]],
...,
[[1, 1, 1],
[1, 1, 1],
[0, 0, 0],
...,
[0, 0, 0],
[0, 0, 0],
[1, 1, 1]],
[[1, 1, 1],
[0, 0, 0],
[1, 1, 1],
...,
[0, 0, 0],
[1, 1, 1],
[0, 0, 0]],
[[0, 0, 0],
[0, 0, 0],
[0, 0, 0],
...,
[0, 0, 0],
[0, 0, 0],
[0, 0, 0]]], dtype=uint8), 'mask': array([[[ 0, 0, 0, 255],
[ 0, 0, 0, 255],
[ 0, 0, 0, 255],
...,
[ 0, 0, 0, 255],
[ 0, 0, 0, 255],
[ 0, 0, 0, 255]],
[[ 0, 0, 0, 255],
[ 0, 0, 0, 255],
[ 0, 0, 0, 255],
...,
[ 0, 0, 0, 255],
[ 0, 0, 0, 255],
[ 0, 0, 0, 255]],
[[ 0, 0, 0, 255],
[ 0, 0, 0, 255],
[ 0, 0, 0, 255],
...,
[ 0, 0, 0, 255],
[ 0, 0, 0, 255],
[ 0, 0, 0, 255]],
...,
[[ 0, 0, 0, 255],
[ 0, 0, 0, 255],
[ 0, 0, 0, 255],
...,
[ 0, 0, 0, 255],
[ 0, 0, 0, 255],
[ 0, 0, 0, 255]],
[[ 0, 0, 0, 255],
[ 0, 0, 0, 255],
[ 0, 0, 0, 255],
...,
[ 0, 0, 0, 255],
[ 0, 0, 0, 255],
[ 0, 0, 0, 255]],
[[ 0, 0, 0, 255],
[ 0, 0, 0, 255],
[ 0, 0, 0, 255],
...,
[ 0, 0, 0, 255],
[ 0, 0, 0, 255],
[ 0, 0, 0, 255]]], dtype=uint8)}, False, 'Scale to Fit (Inner Fit)', False, False, 64, 64, 64, 0, 1, False, False, '1:1,1:2,1:2', '0:0,0:0,0:1', '0.2,0.8,0.8', 20, '', False, False, 'positive', 'comma', 0, False, False, '', 1, '', 0, '', 0, '', True, False, False, False, 0, None, None, 50) {}
Traceback (most recent call last):
File "C:\Users\silve\Documents\AI Art__Automatic1111\stable-diffusion-webui\modules\call_queue.py", line 56, in f
res = list(func(*args, **kwargs))
File "C:\Users\silve\Documents\AI Art__Automatic1111\stable-diffusion-webui\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "C:\Users\silve\Documents\AI Art__Automatic1111\stable-diffusion-webui\modules\txt2img.py", line 56, in txt2img
processed = process_images(p)
File "C:\Users\silve\Documents\AI Art__Automatic1111\stable-diffusion-webui\modules\processing.py", line 486, in process_images
res = process_images_inner(p)
File "C:\Users\silve\Documents\AI Art__Automatic1111\stable-diffusion-webui\modules\processing.py", line 632, in process_images_inner
samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
File "C:\Users\silve\Documents\AI Art__Automatic1111\stable-diffusion-webui\modules\processing.py", line 832, in sample
samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
File "C:\Users\silve\Documents\AI Art__Automatic1111\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 349, in sample
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "C:\Users\silve\Documents\AI Art__Automatic1111\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 225, in launch_sampling
return func()
File "C:\Users\silve\Documents\AI Art__Automatic1111\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 349, in
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "C:\Users\silve\Documents\AI Art__Automatic1111\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "C:\Users\silve\Documents\AI Art__Automatic1111\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 553, in sample_dpmpp_sde
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "C:\Users\silve\Documents\AI Art__Automatic1111\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\silve\Documents\AI Art__Automatic1111\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 117, in forward
x_out = self.inner_model(x_in, sigma_in, cond={"c_crossattn": [cond_in], "c_concat": [image_cond_in]})
File "C:\Users\silve\Documents\AI Art__Automatic1111\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\silve\Documents\AI Art__Automatic1111\stable-diffusion-webui\extensions\multidiffusion-upscaler-for-automatic1111\scripts\multidiffusion.py", line 190, in kdiff_repeat
return self.compute_x_tile(x_in, func)
File "C:\Users\silve\Documents\AI Art__Automatic1111\stable-diffusion-webui\extensions\multidiffusion-upscaler-for-automatic1111\scripts\multidiffusion.py", line 254, in compute_x_tile
x_tile_out = func(x_tile, bboxes)
File "C:\Users\silve\Documents\AI Art__Automatic1111\stable-diffusion-webui\extensions\multidiffusion-upscaler-for-automatic1111\scripts\multidiffusion.py", line 188, in func
x_tile_out = self.sampler_func(x_tile, sigma_in_tile, cond=new_cond)
File "C:\Users\silve\Documents\AI Art__Automatic1111\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
File "C:\Users\silve\Documents\AI Art__Automatic1111\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
return self.inner_model.apply_model(*args, **kwargs)
File "C:\Users\silve\Documents\AI Art__Automatic1111\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "C:\Users\silve\Documents\AI Art__Automatic1111\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in call
return self.__orig_func(*args, **kwargs)
File "C:\Users\silve\Documents\AI Art__Automatic1111\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
x_recon = self.model(x_noisy, t, **cond)
File "C:\Users\silve\Documents\AI Art__Automatic1111\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\silve\Documents\AI Art__Automatic1111\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1329, in forward
out = self.diffusion_model(x, t, context=cc)
File "C:\Users\silve\Documents\AI Art__Automatic1111\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\silve\Documents\AI Art__Automatic1111\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 233, in forward2
return forward(*args, **kwargs)
File "C:\Users\silve\Documents\AI Art__Automatic1111\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 176, in forward
control = param.control_model(x=x_in, hint=param.hint_cond, timesteps=timesteps, context=context)
File "C:\Users\silve\Documents\AI Art__Automatic1111\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\silve\Documents\AI Art__Automatic1111\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\adapter.py", line 105, in forward
self.control = self.control_model(hint_in)
File "C:\Users\silve\Documents\AI Art__Automatic1111\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\silve\Documents\AI Art__Automatic1111\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\adapter.py", line 260, in forward
x = self.conv_in(x)
File "C:\Users\silve\Documents\AI Art__Automatic1111\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\silve\Documents\AI Art__Automatic1111\stable-diffusion-webui\extensions-builtin\Lora\lora.py", line 182, in lora_Conv2d_forward
return lora_forward(self, input, torch.nn.Conv2d_forward_before_lora(self, input))
File "C:\Users\silve\Documents\AI Art__Automatic1111\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py", line 463, in forward
return self._conv_forward(input, self.weight, self.bias)
File "C:\Users\silve\Documents\AI Art__Automatic1111\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py", line 459, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Expected 3D (unbatched) or 4D (batched) input to conv2d, but got input of size: [1, 2, 192, 64, 64]
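The shape in that message suggests an extra tile/batch dimension reaching the T2I adapter's first convolution. A minimal sketch of the failure as I read it (the channel counts are illustrative assumptions):

```python
import torch
import torch.nn as nn

conv_in = nn.Conv2d(192, 320, kernel_size=3, padding=1)  # adapter-style first conv
hint = torch.randn(1, 2, 192, 64, 64)  # an extra tile dimension in front of NCHW
try:
    conv_in(hint)                       # conv2d only accepts 3D/4D input
except RuntimeError as e:
    print(e)                            # "Expected 3D (unbatched) or 4D (batched) input..."
conv_in(hint.flatten(0, 1))             # folding tiles into the batch ([2, 192, 64, 64]) works
```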
Just like this: the wooden floor turned into barnacles, and the stockings became cracked walls.
I've tried several settings, but none of them work well:
Steps: 50, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 3929348709, Size: 2048x2048, Model hash: 5493a0ec49, Denoising strength: 0.4, Clip skip: 2, ENSD: 31337, Mask blur: 4, Tiled Diffusion upscaler: ESRGAN_4x, Tiled Diffusion scale factor: 2, Tiled Diffusion tile width: 96, Tiled Diffusion tile height: 96, Tiled Diffusion overlap: 48
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 750716271, Size: 2048x2048, Model hash: 5493a0ec49, Denoising strength: 0.4, Clip skip: 2, ENSD: 31337, Mask blur: 4, Tiled Diffusion upscaler: Nearest, Tiled Diffusion scale factor: 2, Tiled Diffusion tile width: 96, Tiled Diffusion tile height: 96, Tiled Diffusion overlap: 48
Steps: 25, Sampler: DPM++ 2M Karras, CFG scale: 14, Seed: 2795935609, Size: 2048x2048, Model hash: 5493a0ec49, Denoising strength: 0.4, Clip skip: 2, ENSD: 31337, Mask blur: 4, Tiled Diffusion upscaler: Nearest, Tiled Diffusion scale factor: 2, Tiled Diffusion tile width: 96, Tiled Diffusion tile height: 96, Tiled Diffusion overlap: 48
Adjusting the Steps, the Tiled Diffusion upscaler, or the CFG Scale does not curb this trend either.
But when I use hires.fix, it is not so obvious:
Steps: 50, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 3505390930, Size: 1024x1024, Model hash: 5493a0ec49, Denoising strength: 0.5, Clip skip: 2, ENSD: 31337, Tiled Diffusion tile width: 96, Tiled Diffusion tile height: 96, Tiled Diffusion overlap: 48, ControlNet Enabled: True, ControlNet Module: depth, ControlNet Model: control_depth [bda98948], ControlNet Weight: 1, ControlNet Guidance Start: 0, ControlNet Guidance End: 1, Hires upscale: 1.5, Hires steps: 20, Hires upscaler: Latent (nearest-exact)
(I can only upscale to 1536x1536 when using hires.fix, due to the limited memory of my graphics card.)
Is there a way to reduce the memory footprint of hires.fix without lowering the output resolution, and avoid "RuntimeError: Not enough memory, use lower resolution"?
Or could you add Latent (nearest-exact) to Tiled Diffusion so that it can use the upscaling settings recommended by the OrangeMixs developers?
"Upscaler:
Detailed illust → Latent (nearest-exact)
Denoise strength: 0.5 (0.5~0.6)"
Or something else to the same effect would do.
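For reference, hires.fix's "Latent (nearest-exact)" option resizes the latent itself before the second denoising pass. A minimal sketch of that operation (the tensor sizes are made-up examples):

```python
import torch
import torch.nn.functional as F

z = torch.randn(1, 4, 128, 128)  # latent of a 1024x1024 image (1/8 scale)
# "Latent (nearest-exact)" amounts to interpolating the latent, not the pixels:
z_hi = F.interpolate(z, scale_factor=1.5, mode="nearest-exact")  # -> [1, 4, 192, 192]
# the enlarged latent is then re-denoised at strength ~0.5, per the OrangeMixs advice
```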
In img2img I dragged the image into the main frame, then into the frame under "Region Prompt Control" (the "Import from img2img" button above it does nothing).
I drew a red box around the badly drawn hand in the image and entered "hand" as the prompt, but the generated result changed the entire image.
I tried again: I dragged the image into the inpaint main frame and into the frame under "Region Prompt Control" below, circled the hand with a red box, entered "hand" as the prompt, clicked Generate, and after a moment it errored:
denoised = latent * mask + nmask * denoised
RuntimeError: The size of tensor a (14) must match the size of tensor b (103) at non-singleton dimension 3
I don't know why; it feels like I'm using it the wrong way.
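A minimal reproduction of that shape clash (my assumption about the cause: the inpaint mask is kept at the full-canvas latent size while the region being denoised is a smaller tile):

```python
import torch

latent = torch.randn(1, 4, 16, 103)   # full-canvas latent (width 103 in latent space)
mask   = torch.rand(1, 4, 16, 103)    # inpaint mask at the same full-canvas size
nmask  = 1.0 - mask
denoised = torch.randn(1, 4, 16, 14)  # region tile (width 14 in latent space)
# the blend line from the traceback cannot broadcast 14 against 103:
denoised = latent * mask + nmask * denoised  # RuntimeError at dimension 3
```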
I'm trying to 2x upscale an 896x384 picture. It used to work with Encoder Tile Size 1024 and Decoder Tile Size 64. Since the "remove bbox limit and add tips" commit I can only use Encoder Tile Size 768, and on the latest "fix neg prompt errors" commit it doesn't work at all, even with the lowest Encoder Tile Size of 256.
Upscaling 512x768 and 768x512 pictures still works with Encoder Tile Size 1024. Changing the latent tile size doesn't help either.
I'm using an AMD RX 5700 with 8GB VRAM, launching in Docker with --no-half --medvram --no-half-vae.
Here's the error log:
[Tiled Diffusion] upscaling image with 4x_NMKD-Superscale-SP_178000_G...
[Tiled VAE]: input_size: torch.Size([1, 3, 768, 1792]), tile_size: 256, padding: 32
[Tiled VAE]: split to 3x7 = 21 tiles. Optimal tile size 256x256, original tile size 256x256
[Tiled VAE]: Executing Encoder Task Queue: 100%|███████████████████████████████| 1911/1911 [00:05<00:00, 339.96it/s]
[Tiled VAE]: Done in 5.831s, max VRAM alloc 611.158 MB
0%| | 0/20 [00:00<?, ?it/s]
Error completing request
Arguments: ('task(lxdnubrzsapm7fm)', 0, 'crow', '(deformed, distorted, disfigured:1.3), poorly drawn, bad anatomy, wrong anatomy, extra limb, missing limb, floating limbs, (mutated hands and fingers:1.4), disconnected limbs, mutation, mutated, ugly, disgusting, blurry, amputation, (poorly drawn eyes, deformed iris and pupils:1.1), (anime, painting, 3d render:1.1), (ugly eyes and mouth, missing teeth:1.2)', ['upscale'], <PIL.Image.Image image mode=RGBA size=896x384 at 0x7FFA0844A040>, None, None, None, None, None, None, 20, 5, 4, 0, 1, False, False, 1, 1, 7, 1.5, 0.4, -1.0, -1.0, 0, 0, 0, False, 512, 512, 0, 1, 32, 0, 'outputs/txt2img', '', '', [], 0, True, 'Mixture of Diffusers', False, True, 1024, 1024, 64, 96, 16, 1, '4x_NMKD-Superscale-SP_178000_G', 2, False, False, 1, False, 1, 0.4, 0.4, 0.2, 0.2, '', '', False, 1, 0.4, 0.4, 0.2, 0.2, '', '', False, 1, 0.4, 0.4, 0.2, 0.2, '', '', False, 1, 0.4, 0.4, 0.2, 0.2, '', '', False, 1, 0.4, 0.4, 0.2, 0.2, '', '', False, 1, 0.4, 0.4, 0.2, 0.2, '', '', False, 1, 0.4, 0.4, 0.2, 0.2, '', '', False, 1, 0.4, 0.4, 0.2, 0.2, '', '', True, True, True, False, 0, 256, 64, False, '1:1,1:2,1:2', '0:0,0:0,0:1', '0.2,0.8,0.8', 20, '<ul>\n<li><code>CFG Scale</code> should be 2 or lower.</li>\n</ul>\n', True, True, '', '', True, 50, True, 1, 0, False, 4, 1, 'None', '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', 0, '', 0, '', True, False, False, False, 0, '<p style="margin-bottom:0.75em">Will upscale the image depending on the selected target size type</p>', 512, 0, 8, 128, 64, 0.35, 128, 5, True, 0, False, 8, 0, 2, 2048, 2048, 2) {}
Traceback (most recent call last):
File "/sd/modules/call_queue.py", line 56, in f
res = list(func(*args, **kwargs))
File "/sd/modules/call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "/sd/modules/img2img.py", line 171, in img2img
processed = process_images(p)
File "/sd/modules/processing.py", line 486, in process_images
res = process_images_inner(p)
File "/sd/modules/processing.py", line 636, in process_images_inner
samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
File "/sd/modules/processing.py", line 1054, in sample
samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, unconditional_conditioning, image_conditioning=self.image_conditioning)
File "/sd/modules/sd_samplers_kdiffusion.py", line 324, in sample_img2img
samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "/sd/modules/sd_samplers_kdiffusion.py", line 227, in launch_sampling
return func()
File "/sd/modules/sd_samplers_kdiffusion.py", line 324, in <lambda>
samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "/sd/venv/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/sd/repositories/k-diffusion/k_diffusion/sampling.py", line 553, in sample_dpmpp_sde
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "/sd/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/sd/modules/sd_samplers_kdiffusion.py", line 138, in forward
x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond={"c_crossattn": c_crossattn, "c_concat": [image_cond_in[a:b]]})
File "/sd/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/sd/repositories/k-diffusion/k_diffusion/external.py", line 112, in forward
eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
File "/sd/repositories/k-diffusion/k_diffusion/external.py", line 138, in get_eps
return self.inner_model.apply_model(*args, **kwargs)
File "/sd/modules/sd_hijack_utils.py", line 17, in <lambda>
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "/sd/modules/sd_hijack_utils.py", line 28, in __call__
return self.__orig_func(*args, **kwargs)
File "/sd/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 858, in apply_model
x_recon = self.model(x_noisy, t, **cond)
File "/sd/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1212, in _call_impl
result = forward_call(*input, **kwargs)
File "/sd/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 1329, in forward
out = self.diffusion_model(x, t, context=cc)
File "/sd/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/sd/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/openaimodel.py", line 776, in forward
h = module(h, emb, context)
File "/sd/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/sd/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/openaimodel.py", line 84, in forward
x = layer(x, context)
File "/sd/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/sd/repositories/stable-diffusion-stability-ai/ldm/modules/attention.py", line 324, in forward
x = block(x, context=context[i])
File "/sd/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/sd/repositories/stable-diffusion-stability-ai/ldm/modules/attention.py", line 259, in forward
return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
File "/sd/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/util.py", line 114, in checkpoint
return CheckpointFunction.apply(func, len(inputs), *args)
File "/sd/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/util.py", line 129, in forward
output_tensors = ctx.run_function(*ctx.input_tensors)
File "/sd/repositories/stable-diffusion-stability-ai/ldm/modules/attention.py", line 262, in _forward
x = self.attn1(self.norm1(x), context=context if self.disable_self_attn else None) + x
File "/sd/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/sd/modules/sd_hijack_optimizations.py", line 129, in split_cross_attention_forward
s2 = s1.softmax(dim=-1, dtype=q.dtype)
torch.cuda.OutOfMemoryError: HIP out of memory. Tried to allocate 1.72 GiB (GPU 0; 7.98 GiB total capacity; 5.18 GiB already allocated; 1018.00 MiB free; 6.93 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_HIP_ALLOC_CONF
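As the message itself suggests, allocator fragmentation can sometimes be mitigated with max_split_size_mb. One way to set it (the value 128 is only an example) is via the environment before torch makes its first allocation, e.g. at the top of the launch script:

```python
import os

# must run before any tensor is allocated on the GPU
os.environ["PYTORCH_HIP_ALLOC_CONF"] = "max_split_size_mb:128"    # ROCm builds
# os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128" # CUDA builds
```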
Hello, I'm an MS student at Peking University and very interested in your repo.
Could I have your e-mail or WeChat for further communication?
My e-mail is [email protected]. Thanks!
It keeps telling me to enable the setting even though the extension is not enabled.
Since upscaling puts restrictions on the prompt, and in ComfyUI the upscaling prompt is set separately anyway, this should be a very good fit.
Bro, this extension of yours is seriously impressive.
It has helped me no less than ControlNet!
Right... for someone like me with 6GB of VRAM, it helps enormously.
I found this bug when I tried to upscale a 512x768 image by x1.5 and compared different parameters. To do this, I enabled the built-in X/Y/Z plot script in the img2img tab of WebUI, then set X to 2 CFG Scale values and Y to 2 Denoising values.
I expected to get 2x2=4 images, all at 768x1152 pixels, i.e. the input upscaled by x1.5.
However, I got images whose pixel dimensions were upscaled over and over again. I checked the img2img outputs folder and found things like:
The above bug is confirmed with the latest Multidiffusion extension commit (23ccdc7) and WebUI commit (a9fed7c).
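A hypothetical illustration of the compounding behavior. This assumes (my guess, not confirmed) that the extension overwrites p.width/p.height in place, so each X/Y/Z cell starts from the previous cell's already-scaled size:

```python
class P:                   # stand-in for webui's StableDiffusionProcessing
    width, height = 512, 768

p, scale = P(), 1.5
for cell in range(4):      # a 2x2 X/Y/Z grid reuses the same `p` four times
    p.width, p.height = int(p.width * scale), int(p.height * scale)
    print(p.width, p.height)   # 768 1152, 1152 1728, 1728 2592, 2592 3888
```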
It seems inpainting does not work with the extension.
This is the setting I used.
The error log is down below.
Loading weights [7107c05c1c] from G:\Collection\BaiduDownload\stablediffusion\stable-diffusion\stable-diffusion-webui\models\Stable-diffusion\dalcefoPainting_3rd.safetensors
Creating model from config: G:\Collection\BaiduDownload\stablediffusion\stable-diffusion\stable-diffusion-webui\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Loading VAE weights specified in settings: G:\Collection\BaiduDownload\stablediffusion\stable-diffusion\stable-diffusion-webui\models\VAE\vae-ft-mse-840000-ema-pruned.ckpt
Applying xformers cross attention optimization.
Model loaded in 1.7s (create model: 0.2s, apply weights to model: 0.4s, apply half(): 0.3s, load VAE: 0.1s, move model to device: 0.6s).
Upscaling image with ESRGAN_4x...
ControlNet found. MultiDiffusion ControlNet support is enabled.
MultiDiffusion hooked into DPM++ 2M Karras sampler. Tile size: 64 x 64 Tile batches: 42 Batch size: 1
Loading weights [3498c900ec] from G:\Collection\BaiduDownload\stablediffusion\stable-diffusion\stable-diffusion-webui\models\Stable-diffusion\RPG_V4_inpainting.inpainting.safetensors
Creating model from config: G:\Collection\BaiduDownload\stablediffusion\stable-diffusion\stable-diffusion-webui\configs\v1-inpainting-inference.yaml
LatentInpaintDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.54 M params.
Loading VAE weights specified in settings: G:\Collection\BaiduDownload\stablediffusion\stable-diffusion\stable-diffusion-webui\models\VAE\vae-ft-mse-840000-ema-pruned.ckpt
Applying xformers cross attention optimization.
Model loaded in 1.6s (create model: 0.2s, apply weights to model: 0.5s, apply half(): 0.3s, load VAE: 0.2s, move model to device: 0.4s).
Error completing request
Arguments: ('task(sthhbj23vlzsi46)', 2, '<(masterpiece, realistic:1.3), (extremely intricate:1.2)>, portrait of a girl, (face:1.2), floting hair, wind, cloud, sunlight, cinematic light, bangs, <hypernet:dalcefo_nocopy_v2:0.8>', '(worst quality, low quality:1.4), (depth of field, blurry:1.2), (greyscale, monochrome:1.1), 3D face, cropped, lowres, text, jpeg artifacts, signature, watermark, username, blurry, artist name, trademark, watermark, title, multiple view, Reference sheet, curvy, plump, fat, muscular female, strabismus,', [], <PIL.Image.Image image mode=RGBA size=776x1016 at 0x1EF230B5900>, None, {'image': <PIL.Image.Image image mode=RGBA size=776x1016 at 0x1EF230B4CA0>, 'mask': <PIL.PngImagePlugin.PngImageFile image mode=RGBA size=776x1016 at 0x1EF230B5810>}, None, None, None, None, 26, 15, 1, 0, 1, False, False, 1, 1, 8, 1.5, 0.4, -1.0, -1.0, 0, 0, 0, False, 680, 520, 0, 1, 16, 0, '', '', '', ['Model hash: 7107c05c1c'], 0, 0, 2, 512, 512, True, 'None', 'None', 0, 0, 0, 0, 0, 0.25, 0.9, 5, '0.0001', False, 'None', '', 0.1, False, False, True, False, 0, -1, True, False, 1024, 1024, True, 64, 64, 32, 1, 'ESRGAN_4x', 2, False, True, False, True, False, 0, 3072, 192, False, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', False, 'x264', 'mci', 10, 0, False, True, True, True, 'intermediate', 'animation', False, False, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'Refresh models', True, True, False, 'none', 'None', 1, None, False, 'Scale to Fit (Inner Fit)', False, False, 64, 64, 64, 0, 1, False, False, 'none', 'None', 1, None, False, 'Scale to Fit (Inner Fit)', False, False, 64, 64, 64, 0, 1, False, False, 'none', 'None', 1, None, False, 'Scale to Fit (Inner Fit)', False, False, 64, 64, 64, 0, 1, False, False, 'none', 'None', 1, None, False, 'Scale to Fit (Inner Fit)', False, False, 64, 64, 64, 0, 1, False, False, 'none', 'None', 1, None, False, 'Scale to Fit (Inner Fit)', False, False, 64, 64, 64, 0, 1, False, False, '', 0.5, True, False, '', 'Lerp', False, False, 1, 0.3, False, 'OUT', ['OUT'], 5, 0, 'Nearest', False, 'Nearest', False, 'Lerp', '422', '156', False, False, False, 3, 0, False, False, 0, False, 0, '<ul>\n<li><code>CFG Scale</code> should be 2 or lower.</li>\n</ul>\n', True, True, '', '', True, 50, True, 1, 0, False, 4, 1, 'first', -1, 1, 0, 1, 4, 1, 'None', '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', 0, '', 0, '', True, False, False, False, 0, False, 'Blur First V1', 0.25, 10, 10, 10, 10, 1, False, '', '', 0.5, 1, False, 'Not set', True, True, '', '', '', '', '', 1.3, 'Not set', 'Not set', 1.3, 'Not set', 1.3, 'Not set', 1.3, 1.3, 'Not set', 1.3, 'Not set', 1.3, 'Not set', 1.3, 'Not set', 1.3, 'Not set', 1.3, 'Not set', False, 'None', '<p style="margin-bottom:0.75em">Recommended settings: Use from inpaint tab, inpaint at full res ON, denoise <0.5</p>', 'None', 30, 4, 0, 0, False, 'None', '<br>', 'None', 30, 4, 0, 0, 4, 0.4, True, 32, None, True, None, None, False, False, 0, True, 384, 384, False, 2, True, True, False, False, True, True, True, None, None, None, None, None, 50, False, 4.0, '', 10.0, 'Linear', 3, False, True, 30.0, True, False, False, 0, 0.0, 'Lanczos', 1, True, 10.0, True, 30.0, True, 0.0, 'Lanczos', 1, 0, 0, 512, 512, False, False, True, True, True, False, False, 1, False, False, 2.5, 4, 0, False, 0, 1, False, False, 'u2net', False, False, False, False, '{inspiration}', None, 'linear', 'lerp', 'token', 'random', '30', 'fixed', 1, '8', None, 'Lanczos', 2, 0, 0, 'mp4', 10.0, 0, '', True, False, False, '', '', False, '', 127, False, 30, 9999, 1, 10, 0.25, True, False, '1', '', True, '', '', '', '', '', '', '', '', '', '', '', 'None', 0.3, 60) {}
Traceback (most recent call last):
File "G:\Collection\BaiduDownload\stablediffusion\stable-diffusion\stable-diffusion-webui\modules\call_queue.py", line 56, in f
res = list(func(*args, **kwargs))
File "G:\Collection\BaiduDownload\stablediffusion\stable-diffusion\stable-diffusion-webui\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "G:\Collection\BaiduDownload\stablediffusion\stable-diffusion\stable-diffusion-webui\modules\img2img.py", line 171, in img2img
processed = process_images(p)
File "G:\Collection\BaiduDownload\stablediffusion\stable-diffusion\stable-diffusion-webui\modules\processing.py", line 486, in process_images
res = process_images_inner(p)
File "G:\Collection\BaiduDownload\stablediffusion\stable-diffusion\stable-diffusion-webui\modules\processing.py", line 577, in process_images_inner
p.init(p.all_prompts, p.all_seeds, p.all_subseeds)
File "G:\Collection\BaiduDownload\stablediffusion\stable-diffusion\stable-diffusion-webui\modules\processing.py", line 984, in init
image_masked.paste(image.convert("RGBA").convert("RGBa"), mask=ImageOps.invert(self.mask_for_overlay.convert('L')))
File "G:\Collection\BaiduDownload\stablediffusion\stable-diffusion\stable-diffusion-webui\python\lib\site-packages\PIL\Image.py", line 1731, in paste
self.im.paste(im, box, mask.im)
ValueError: images do not match
This is the full error log.
GPU: GTX 970M (6GB). Launcher: 秋叶 (aki).
Error:
RuntimeError: CUDA out of memory. Tried to allocate 15.64 GiB (GPU 0; 6.00 GiB total capacity; 1.81 GiB already allocated; 2.77 GiB free; 1.85 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Maybe I did something wrong somewhere? Hoping someone can explain.
@pkuliyi2015 this is a great extension, especially for controlling depth maps of equirectangular/HDRI-like scenes.
They need a lot of resolution; 15-32k is best.
Btw, how did you get into the extension-index.md of the Auto1111 wiki? I tried to put my Panorama-Extension there, but my post is one of thousands in the discussion section.
https://github.com/GeorgLegato/sd-webui-panorama-viever
My workflow so far, without your extension:
https://www.reddit.com/r/StableDiffusion/comments/11jnnb7/workflow_sharks_sd1111controldepth_lighting/
First trial to create a 16k x 8k image for equirectangular view:
30597 steps on my 3080 with 12GiB... °!°
The ControlNet input is a native 16k x 8k HDRI image preprocessed to Depth.
cya
If you set the batch count greater than 1, all images from the second one onward are generated with an incorrect ratio. This is particularly noticeable when using ControlNet. It looks like the extension is not hooked for the second generation.
Loading model from cache: control_canny [e3fe7712]
Loading preprocessor: canny
MultiDiffusion hooked into DPM++ 2M Karras sampler. Tile size: 96x96, Tile batches: 2, Batch size: 1
0% 0/15 [00:00<?, ?it/s]
MultiDiffusion Sampling: 0% 0/30 [00:00<?, ?it/s]
MultiDiffusion Sampling: 3% 1/30 [00:00<00:11, 2.49it/s]
7% 1/15 [00:01<00:16, 1.19s/it]
13% 2/15 [00:02<00:14, 1.15s/it]
MultiDiffusion Sampling: 7% 2/30 [00:03<00:50, 1.79s/it]
20% 3/15 [00:03<00:13, 1.13s/it]
MultiDiffusion Sampling: 10% 3/30 [00:04<00:40, 1.49s/it]
27% 4/15 [00:04<00:12, 1.13s/it]
MultiDiffusion Sampling: 13% 4/30 [00:05<00:34, 1.34s/it]
33% 5/15 [00:05<00:11, 1.13s/it]
MultiDiffusion Sampling: 17% 5/30 [00:06<00:31, 1.27s/it]
40% 6/15 [00:06<00:10, 1.13s/it]
MultiDiffusion Sampling: 20% 6/30 [00:07<00:29, 1.21s/it]
47% 7/15 [00:07<00:08, 1.12s/it]
MultiDiffusion Sampling: 23% 7/30 [00:08<00:27, 1.19s/it]
53% 8/15 [00:09<00:07, 1.12s/it]
MultiDiffusion Sampling: 27% 8/30 [00:09<00:25, 1.16s/it]
60% 9/15 [00:10<00:06, 1.12s/it]
MultiDiffusion Sampling: 30% 9/30 [00:11<00:24, 1.15s/it]
67% 10/15 [00:11<00:05, 1.12s/it]
MultiDiffusion Sampling: 33% 10/30 [00:12<00:22, 1.14s/it]
73% 11/15 [00:12<00:04, 1.12s/it]
MultiDiffusion Sampling: 37% 11/30 [00:13<00:21, 1.14s/it]
80% 12/15 [00:13<00:03, 1.12s/it]
MultiDiffusion Sampling: 40% 12/30 [00:14<00:20, 1.13s/it]
87% 13/15 [00:14<00:02, 1.12s/it]
MultiDiffusion Sampling: 43% 13/30 [00:15<00:19, 1.13s/it]
93% 14/15 [00:15<00:01, 1.12s/it]
MultiDiffusion Sampling: 47% 14/30 [00:16<00:18, 1.13s/it]
100% 15/15 [00:16<00:00, 1.13s/it]
0% 0/15 [00:00<?, ?it/s]
7% 1/15 [00:00<00:11, 1.24it/s]
13% 2/15 [00:01<00:10, 1.25it/s]
20% 3/15 [00:02<00:09, 1.24it/s]
27% 4/15 [00:03<00:08, 1.24it/s]
33% 5/15 [00:04<00:08, 1.24it/s]
40% 6/15 [00:04<00:07, 1.24it/s]
47% 7/15 [00:05<00:06, 1.24it/s]
53% 8/15 [00:06<00:05, 1.23it/s]
60% 9/15 [00:07<00:04, 1.24it/s]
67% 10/15 [00:08<00:04, 1.23it/s]
73% 11/15 [00:08<00:03, 1.24it/s]
80% 12/15 [00:09<00:02, 1.23it/s]
87% 13/15 [00:10<00:01, 1.24it/s]
93% 14/15 [00:11<00:00, 1.23it/s]
100% 15/15 [00:12<00:00, 1.24it/s]
MultiDiffusion Sampling: 47% 14/30 [00:40<00:46, 2.88s/it]
Total progress: 100% 30/30 [00:39<00:00, 1.31s/it]
There is also an exception if you set "Latent tile batch size" greater than 1 with batch count > 1.
Loading model from cache: control_canny [e3fe7712]
Loading preprocessor: canny
MultiDiffusion hooked into DPM++ 2M Karras sampler. Tile size: 96x96, Tile batches: 1, Batch size: 2
0% 0/15 [00:00<?, ?it/s]
MultiDiffusion Sampling: 0% 0/15 [00:00<?, ?it/s]
7% 1/15 [00:01<00:15, 1.14s/it]
13% 2/15 [00:02<00:14, 1.11s/it]
20% 3/15 [00:03<00:13, 1.09s/it]
27% 4/15 [00:04<00:11, 1.09s/it]
33% 5/15 [00:05<00:10, 1.09s/it]
40% 6/15 [00:06<00:09, 1.08s/it]
47% 7/15 [00:07<00:08, 1.08s/it]
53% 8/15 [00:08<00:07, 1.08s/it]
60% 9/15 [00:09<00:06, 1.08s/it]
67% 10/15 [00:10<00:05, 1.08s/it]
73% 11/15 [00:11<00:04, 1.09s/it]
80% 12/15 [00:13<00:03, 1.09s/it]
87% 13/15 [00:14<00:02, 1.09s/it]
93% 14/15 [00:15<00:01, 1.08s/it]
100% 15/15 [00:16<00:00, 1.09s/it]
0% 0/15 [00:00<?, ?it/s]
Error completing request
Arguments: ('task(xxxxxxxxxxxxxxx)', 'a car, road, mountains', '', [], 15, 8, False, False, 2, 1, 8, -1.0, -1.0, 0, 640, 640, False, 960, 768, False, 0.7, 2, 'Latent', 0, 0, 0, [], 0, True, 'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 2, 'None', 2, False, False, 1, False, 1, 7, 0.4, 0.4, 0.2, 0.2, '', '', False, 1, 7, 0.4, 0.4, 0.2, 0.2, '', '', False, 1, 7, 0.4, 0.4, 0.2, 0.2, '', '', False, 1, 7, 0.4, 0.4, 0.2, 0.2, '', '', False, 1, 7, 0.4, 0.4, 0.2, 0.2, '', '', False, 1, 7, 0.4, 0.4, 0.2, 0.2, '', '', False, 1, 7, 0.4, 0.4, 0.2, 0.2, '', '', False, 1, 7, 0.4, 0.4, 0.2, 0.2, '', '', False, False, True, True, True, 2048, 128, True, False, 1, False, False, False, 1.1, 1.5, 70, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', False, 6, 95, 'Cosine Up', 4, 'Cosine Up', 4, 0, False, False, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, None, 'Refresh models', <scripts.external_code.ControlNetUnit object at 0x>, <scripts.external_code.ControlNetUnit object at 0x>, <scripts.external_code.ControlNetUnit object at 0x>, <scripts.external_code.ControlNetUnit object at 0x>, <scripts.external_code.ControlNetUnit object at 0x>, False, False, 'positive', 'comma', 0, False, False, '', 1, '', 0, '', 0, '', True, True, True, False, 0, None, False, None, False, None, False, None, False, None, False, 50) {}
Traceback (most recent call last):
File "/path-to-auto1111-webui/modules/call_queue.py", line 56, in f
res = list(func(*args, **kwargs))
File "/path-to-auto1111-webui/modules/call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "/path-to-auto1111-webui/modules/txt2img.py", line 56, in txt2img
processed = process_images(p)
File "/path-to-auto1111-webui/modules/processing.py", line 486, in process_images
res = process_images_inner(p)
File "/path-to-auto1111-webui/modules/processing.py", line 636, in process_images_inner
samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
File "/path-to-auto1111-webui/modules/processing.py", line 836, in sample
samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
File "/path-to-auto1111-webui/modules/sd_samplers_kdiffusion.py", line 351, in sample
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "/path-to-auto1111-webui/modules/sd_samplers_kdiffusion.py", line 227, in launch_sampling
return func()
File "/path-to-auto1111-webui/modules/sd_samplers_kdiffusion.py", line 351, in <lambda>
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "/usr/local/lib/python3.9/dist-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/path-to-auto1111-webui/repositories/k-diffusion/k_diffusion/sampling.py", line 594, in sample_dpmpp_2m
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/path-to-auto1111-webui/modules/sd_samplers_kdiffusion.py", line 119, in forward
x_out = self.inner_model(x_in, sigma_in, cond={"c_crossattn": [cond_in], "c_concat": [image_cond_in]})
File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/path-to-auto1111-webui/repositories/k-diffusion/k_diffusion/external.py", line 112, in forward
eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
File "/path-to-auto1111-webui/repositories/k-diffusion/k_diffusion/external.py", line 138, in get_eps
return self.inner_model.apply_model(*args, **kwargs)
File "/path-to-auto1111-webui/modules/sd_hijack_utils.py", line 17, in <lambda>
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "/path-to-auto1111-webui/modules/sd_hijack_utils.py", line 28, in __call__
return self.__orig_func(*args, **kwargs)
File "/path-to-auto1111-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 858, in apply_model
x_recon = self.model(x_noisy, t, **cond)
File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/path-to-auto1111-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 1329, in forward
out = self.diffusion_model(x, t, context=cc)
File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/path-to-auto1111-webui/extensions/sd-webui-controlnet/scripts/hook.py", line 233, in forward2
return forward(*args, **kwargs)
File "/path-to-auto1111-webui/extensions/sd-webui-controlnet/scripts/hook.py", line 176, in forward
control = param.control_model(x=x_in, hint=param.hint_cond, timesteps=timesteps, context=context)
File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/path-to-auto1111-webui/extensions/sd-webui-controlnet/scripts/cldm.py", line 115, in forward
return self.control_model(*args, **kwargs)
File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/path-to-auto1111-webui/extensions/sd-webui-controlnet/scripts/cldm.py", line 380, in forward
h += guided_hint
RuntimeError: The size of tensor a (2) must match the size of tensor b (4) at non-singleton dimension 0
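My reading of that shape clash, sketched below (an assumption, not confirmed: ControlNet prepares its hint for the original cond/uncond batch while MultiDiffusion feeds the UNet a different tile batch; the sizes are illustrative):

```python
import torch

h = torch.randn(2, 320, 12, 12)            # UNet features for the current tile batch
guided_hint = torch.randn(4, 320, 12, 12)  # hint prepared for a batch of 4
h += guided_hint  # RuntimeError: sizes 2 and 4 cannot broadcast at dimension 0
```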
Sorry, but when I try them in t2i or i2i (default settings), it always reports "RuntimeError: Cannot set version_counter for inference tensor". Is there any mistake I have made?
I haven't enabled ControlNet when I use MultiDiffusion and Tiled VAE.
ERROR REPORT:
Traceback (most recent call last):
File "H:\stable-diffusion-webui-directml\modules\call_queue.py", line 56, in f
res = list(func(*args, **kwargs))
File "H:\stable-diffusion-webui-directml\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "H:\stable-diffusion-webui-directml\modules\img2img.py", line 171, in img2img
processed = process_images(p)
File "H:\stable-diffusion-webui-directml\modules\processing.py", line 486, in process_images
res = process_images_inner(p)
File "H:\stable-diffusion-webui-directml\modules\processing.py", line 577, in process_images_inner
p.init(p.all_prompts, p.all_seeds, p.all_subseeds)
File "H:\stable-diffusion-webui-directml\modules\processing.py", line 1017, in init
self.init_latent = self.sd_model.get_first_stage_encoding(self.sd_model.encode_first_stage(image))
File "H:\stable-diffusion-webui-directml\modules\sd_hijack_utils.py", line 17, in
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "H:\stable-diffusion-webui-directml\modules\sd_hijack_utils.py", line 28, in call
return self.__orig_func(*args, **kwargs)
File "H:\stable-diffusion-webui-directml\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "H:\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 830, in encode_first_stage
return self.first_stage_model.encode(x)
File "H:\stable-diffusion-webui-directml\modules\lowvram.py", line 48, in first_stage_model_encode_wrap
return first_stage_model_encode(x)
File "H:\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\models\autoencoder.py", line 83, in encode
h = self.encoder(x)
File "H:\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in call_impl
return forward_call(*input, **kwargs)
File "H:\stable-diffusion-webui-directml\extensions\multidiffusion-upscaler-for-automatic1111\scripts\vae_optimize.py", line 481, in call
return self.vae_tile_forward(x)
File "H:\stable-diffusion-webui-directml\extensions\multidiffusion-upscaler-for-automatic1111\scripts\vae_optimize.py", line 369, in wrapper
ret = fn(*args, **kwargs)
File "H:\stable-diffusion-webui-directml\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "H:\stable-diffusion-webui-directml\extensions\multidiffusion-upscaler-for-automatic1111\scripts\vae_optimize.py", line 627, in vae_tile_forward
tile = z[:, :, input_bbox[2]:input_bbox[3],
RuntimeError: Cannot set version_counter for inference tensor
Great extension. With Tiled VAE I can process 2048x2048 and even bigger images with 8GB VRAM. This is sorta game-changing.
Thank you for your hard work.
I created an issue because there is no Discussions section.
It happens when it finishes an image (right after 100%) or if I interrupt it before it's done.
Traceback (most recent call last):
File "D:\stable-diffusion-webui\modules\call_queue.py", line 56, in f
res = list(func(*args, **kwargs))
File "D:\stable-diffusion-webui\modules\call_queue.py", line 33, in f
shared.state.begin()
File "D:\stable-diffusion-webui\modules\shared.py", line 231, in begin
devices.torch_gc()
File "D:\stable-diffusion-webui\modules\devices.py", line 59, in torch_gc
torch.cuda.empty_cache()
File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\cuda\memory.py", line 125, in empty_cache
torch._C._cuda_emptyCache()
RuntimeError: CUDA error: the launch timed out and was terminated
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
MultiDiffusion works normally, but when Tiled VAE is turned on, an error is reported:
Input type (float) and bias type (c10::Half) should be the same
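A minimal sketch of that kind of dtype clash (assuming a float32 tile reaches half-precision VAE weights somewhere in the tiled path):

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 8, 3).half()   # fp16 weights/bias, as with a half-precision VAE
tile = torch.randn(1, 3, 64, 64)   # fp32 tile produced by the tiled path
try:
    conv(tile)                     # raises a dtype-mismatch RuntimeError like the one above
except RuntimeError as e:
    print(e)
# casting the tile to the module's dtype first (tile.half()) is the usual fix
```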
I have an old laptop with a 2GB-VRAM NVIDIA card; previously it could only run about 600*400 resolution at most.
With Tiled VAE it has gone as high as 1200*800. Occasionally I get black blocks, or upscaling blows the VRAM, but most of the time it succeeds.
To put that in perspective:
My own desktop has an 8GB AMD card. Since it can't use this extension, the largest I can generate directly is 400-500 thousand pixels, and even upscaling from a low-resolution image I've only ever reached about 900 thousand pixels.
That is nearly what the NVIDIA card achieves with Tiled VAE: with this extension, an NVIDIA card with a quarter of the VRAM produces images of almost the same pixel count, which shows how powerful this extension is.
Could the author let AMD cards enjoy Tiled VAE as well?
My GPU has only 8GB of VRAM, and I don't know what ranges the various parameters should be set in to get a decent result. There don't seem to be any videos or tutorials introducing this extension, so right now I don't know how to configure it!
Why does this error occur?
Can this be used for upscaling instead of Hires. fix, or can upscaling only be done in img2img?
error: output type 'tensor<1x32x4x512x544xf16>' and mean type 'tensor<1x32x1x1x1xf32>' are not broadcast compatible
On a Mac, Tiled VAE does not work, while MultiDiffusion does.
Traceback (most recent call last):
File "C:\Users\asdf\ai\stable-diffusion-webui\modules\call_queue.py", line 56, in f
res = list(func(*args, **kwargs))
File "C:\Users\asdf\ai\stable-diffusion-webui\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "C:\Users\asdf\ai\stable-diffusion-webui\modules\img2img.py", line 171, in img2img
processed = process_images(p)
File "C:\Users\asdf\ai\stable-diffusion-webui\modules\processing.py", line 486, in process_images
res = process_images_inner(p)
File "C:\Users\asdf\ai\stable-diffusion-webui\modules\processing.py", line 636, in process_images_inner
samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
File "C:\Users\asdf\ai\stable-diffusion-webui\modules\processing.py", line 1054, in sample
samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, unconditional_conditioning, image_conditioning=self.image_conditioning)
File "C:\Users\asdf\ai\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 324, in sample_img2img
samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "C:\Users\asdf\ai\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 227, in launch_sampling
return func()
File "C:\Users\asdf\ai\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 324, in <lambda>
samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "C:\Users\asdf\miniconda3\envs\auto\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "C:\Users\asdf\ai\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "C:\Users\asdf\miniconda3\envs\auto\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\asdf\ai\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 140, in forward
x_out[-uncond.shape[0]:] = self.inner_model(x_in[-uncond.shape[0]:], sigma_in[-uncond.shape[0]:], cond={"c_crossattn": [uncond], "c_concat": [image_cond_in[-uncond.shape[0]:]]})
RuntimeError: The expanded size of the tensor (1) must match the existing size (2) at non-singleton dimension 0. Target sizes: [1, 4, 384, 256]. Tensor sizes: [2, 4, 384, 256]
OS: Windows 11
GPU: Nvidia 4090
It looks independent of my settings. I got it to work on other images so it seems like some images break the code.
Hi, thanks for this amazing project. I saw the wide image generation, which I think is amazing. In the instructions I found this:
"Wide Image Generation (txt2img)
txt2img panorama generation, as mentioned in MultiDiffusion.
All tiles share the same prompt currently.
Please use simple positive prompts to get good results, otherwise the result will be poor.
We are urgently working on the rectangular & fine-grained prompt control.
Example - masterpiece, best quality, highres, city skyline, night.
"
But can you describe step by step how I can generate these wide images in AUTOMATIC1111? Is it in the MultiDiffusion section?
Br,
Pablo.
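For background (not official usage steps): MultiDiffusion produces wide images by denoising overlapping latent tiles and fusing them with a weighted average, which is also why every tile currently shares the same prompt. A minimal sketch of that fusion idea, with hypothetical names (denoise_fn, bboxes) standing in for the real implementation:

import torch

def fuse_tiles(latent: torch.Tensor, bboxes, denoise_fn) -> torch.Tensor:
    # latent: (B, C, H, W); bboxes: list of (x0, y0, x1, y1) tile boxes.
    out = torch.zeros_like(latent)
    weight = torch.zeros_like(latent)
    for x0, y0, x1, y1 in bboxes:
        out[:, :, y0:y1, x0:x1] += denoise_fn(latent[:, :, y0:y1, x0:x1])
        weight[:, :, y0:y1, x0:x1] += 1.0
    return out / weight  # average wherever tiles overlap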
As the title says: using Tiled VAE alone significantly increases the probability of body-structure errors (extra hands and legs, limbs joined incorrectly). ControlNet cannot correct these errors.
Test parameters:
GPU: 1080Ti 11Gb VRAM
Checkpoint: chilloutMix / kiwiMix / perfectWorld
VAE: VAE-ft-mse-840000-ema-pruned.safetensors
Resolution: 1024*1536
Sampler: DPM++ SDE Karras
CFG: 6
Clip Skip: 2
Encoder/Decoder Tile Size: 256/128
Correction: never mind. When using Tiled VAE alone, you still have to enable Hires. fix. That said, I previously could only upscale 512*768 by 2x before running out of VRAM, and now I can upscale 3x. Leaving this here for others' reference. Praise to the author.
Edit: for those who use Tiled VAE alone in txt2img, you still need Hires. fix, but you can push the upscale multiplier higher than before.
I tried to upscale a 576*1024 image using R-ESRGAN 4x+, but it gives me an out-of-memory error.
Log:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 1.12 GiB (GPU 0; 8.00 GiB total capacity; 6.08 GiB already allocated; 0 bytes free; 6.17 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
My GPU is an RTX 2060 Super (8 GB VRAM), but it can't handle that.
Maybe I'm doing something wrong.
Settings that I used:
Any help or explanation is greatly appreciated!!!
UPD: I tried with and without "Overwrite image size" checked
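One mitigation the PyTorch message itself suggests: set max_split_size_mb through the PYTORCH_CUDA_ALLOC_CONF environment variable to reduce fragmentation. A minimal sketch (the value 128 is just an illustrative starting point; the variable must be set before CUDA is initialized, e.g. via set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128 in webui-user.bat):

import os
# Must run before torch allocates any CUDA memory.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"
import torch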
Thank you for your hard work.
I fully understand your desire to promote the new extension, but I don't think it's right to make the UI inconvenient.
multidiffusion.py, line 293. Change:
with gr.Accordion('MultiDiffusion', open=True):
to:
with gr.Accordion('MultiDiffusion', open=False):
File "D:\sd\sdwebui\py310\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "D:\sd\sdwebui\extensions\sd-webui-controlnet\scripts\hook.py", line 233, in forward2
return forward(*args, **kwargs)
File "D:\sd\sdwebui\extensions\sd-webui-controlnet\scripts\hook.py", line 176, in forward
control = param.control_model(x=x_in, hint=param.hint_cond, timesteps=timesteps, context=context)
File "D:\sd\sdwebui\py310\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "D:\sd\sdwebui\extensions\sd-webui-controlnet\scripts\cldm.py", line 115, in forward
return self.control_model(*args, **kwargs)
File "D:\sd\sdwebui\py310\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "D:\sd\sdwebui\extensions\sd-webui-controlnet\scripts\cldm.py", line 380, in forward
h += guided_hint
RuntimeError: The size of tensor a (8) must match the size of tensor b (16) at non-singleton dimension 0
This error occurred when using ControlNet.
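The mismatch (8 vs 16 at dimension 0) suggests the ControlNet hint was not repeated to match the tiled latent batch before h += guided_hint. A minimal sketch of the kind of adjustment, assuming guided_hint and h are the tensors from cldm.py's forward (match_hint_batch is a hypothetical helper, not the extension's actual code):

import torch

def match_hint_batch(guided_hint: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
    # Repeat the hint along the batch axis so it adds cleanly onto the
    # tiled UNet activations (e.g. batch 8 -> 16 for cond + uncond).
    if guided_hint.shape[0] != h.shape[0]:
        repeats = h.shape[0] // guided_hint.shape[0]
        guided_hint = guided_hint.repeat(repeats, 1, 1, 1)
    return guided_hint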
Nothing fancy; I guess making the extension collapsed by default is much more common.
https://github.com/pkuliyi2015/multidiffusion-upscaler-for-automatic1111/blob/main/scripts/vae_optimize.py#L111
There's not enough info in the readme to actually use this extension unless you already know exactly how it's supposed to work. Should I be on the txt2img tab or the img2img tab? Do I use both MultiDiffusion and Tiled VAE at the same time? I've gotten it to do 'something', but all my results are failures, so I'm missing something important somewhere.
Basically, can we get some real directions on how to use this?
When I use the Stable Diffusion x4 upscaler to process images larger than 256x256, OOM errors happen frequently.
I divide the latents into many patches following the unfold/fold operations in latent-diffusion (https://github.com/CompVis/latent-diffusion/blob/main/ldm/models/diffusion/ddpm.py#L601, https://github.com/CompVis/latent-diffusion/blob/main/ldm/models/diffusion/ddpm.py#L715). This substantially reduces memory when decoding the latent output. Is it similar to this method?
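For comparison, a minimal sketch of the unfold/fold patching described above: split a tensor into overlapping patches, process each patch independently, and re-assemble with F.fold, dividing by the per-pixel overlap count. This assumes the per-patch function preserves spatial size and that (H - ks) divides evenly by the stride; the real latent-diffusion code additionally applies smoothing weights and accounts for the decoder's 8x upsampling:

import torch
import torch.nn.functional as F

def tiled_apply(x: torch.Tensor, fn, ks: int = 64, stride: int = 32) -> torch.Tensor:
    # x: (B, C, H, W). Split into overlapping ks x ks patches.
    b, c, h, w = x.shape
    patches = F.unfold(x, kernel_size=ks, stride=stride)       # (B, C*ks*ks, L)
    L = patches.shape[-1]
    patches = patches.view(b, c, ks, ks, L)
    outs = torch.stack([fn(patches[..., i]) for i in range(L)], dim=-1)
    outs = outs.reshape(b, c * ks * ks, L)
    summed = F.fold(outs, output_size=(h, w), kernel_size=ks, stride=stride)
    counts = F.fold(torch.ones_like(outs), output_size=(h, w),
                    kernel_size=ks, stride=stride)             # overlap counts
    return summed / counts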
I cannot get good results with this in i2i upscaling. Outputs are decent, but seemingly no better than Ultimate SD upscale. Can you give some example settings for upscaling, and annotate the settings to explain what they do?
I also have no idea how to properly apply this in t2i.
A RuntimeError occurs when using Region Prompt Control; everything works normally when Region Prompt Control is unchecked.
The extension has already been updated to the latest version.
[Tiled Diffusion] upscaling image with Lanczos...
[Tiled Diffusion] ControlNet found, MultiDiffusion-ControlNet support is enabled.
MultiDiffusion hooked into Euler a sampler. Tile size: 96x96, Tile batches: 8, Batch size: 1
MultiDiffusion Sampling: 6%|████▊ | 8/144 [01:20<22:51, 10.09s/it]
[Tiled VAE]: input_size: torch.Size([1, 3, 1728, 1152]), tile_size: 960, padding: 32
[Tiled VAE]: split to 2x2 = 4 tiles. Optimal tile size 544x832, original tile size 960x960
[Tiled VAE]: Fast mode enabled, estimating group norm parameters on 640 x 960 image
[Tiled VAE]: Executing Encoder Task Queue: 100%|█████████████████████████████████████████████████████████████████| 364/364 [00:01<00:00, 326.79it/s]
[Tiled VAE]: Done in 4.151s, max VRAM alloc 1123.737 MB
0%| | 0/7 [00:03<?, ?it/s]
Error completing request 13%|███████████ | 8/63 [00:02<00:17, 3.11it/s]
Arguments: ('task(wpz0s212znmet5m)', 0, 'best quality, extremely clear', '(worst quality, low quality:1.4), (blurry:1.3),(EasyNeagtive:1.1), watermark, bad_prompt_version2', [], <PIL.Image.Image image mode=RGBA size=768x1152 at 0x1863C75F4F0>, None, {'image': <PIL.Image.Image image mode=RGBA size=768x1152 at 0x1863C75D4B0>, 'mask': <PIL.PngImagePlugin.PngImageFile image mode=RGBA size=768x1152 at 0x1863C75DD80>}, None, None, None, None, 20, 0, 4, 0, 0, False, False, 1, 1, 7, 1.5, 0.3, -1.0, -1.0, 0, 0, 0, False, 768, 512, 0, 0, 32, 0, '', '', '', '', 0, '<span>(No stats yet, run benchmark in VRAM Estimator tab)</span>', True, 'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 1, 'Lanczos', 1.5, False, True, 1, True, 1, 0.02, 0.19, 0.2, 0.2, 'holding a book', 'bad hands', False, 1, 0.4, 0.4, 0.2, 0.2, '', '', False, 1, 0.4, 0.4, 0.2, 0.2, '', '', False, 1, 0.4, 0.4, 0.2, 0.2, '', '', False, 1, 0.4, 0.4, 0.2, 0.2, '', '', False, 1, 0.4, 0.4, 0.2, 0.2, '', '', False, 1, 0.4, 0.4, 0.2, 0.2, '', '', False, 1, 0.4, 0.4, 0.2, 0.2, '', '', True, True, True, True, 0, 960, 64, False, False, '#000000', False, None, None, <scripts.external_code.ControlNetUnit object at 0x00000186242DB4C0>, <scripts.external_code.ControlNetUnit object at 0x00000186242E8520>, None, '', 'Get Tags', False, 1, 0.15, False, 'OUT', ['OUT'], 5, 0, 'Bilinear', False, 'Pooling Max', False, 'Lerp', '', '', False, False, False, False, False, '1:1,1:2,1:2', '0:0,0:0,0:1', '0.2,0.8,0.8', 20, True, -1.0, '\n <h3><strong>Combinations</strong></h3>\n Choose a number of terms from a list, in this case we choose two artists\n <code>{2$$artist1|artist2|artist3}</code>\n If $$ is not provided, then 1$$ is assumed.\n <br>\n A range can be provided:\n <code>{1-3$$artist1|artist2|artist3}</code>\n In this case, a random number of artists between 1 and 3 is chosen.\n <br/><br/>\n\n <h3><strong>Wildcards</strong></h3>\n <p>Available wildcards</p>\n <ul>\n <li>__artist__</li><li>__background__</li><li>__clothes__</li><li>__clotheslow__</li><li>__clothesup__</li><li>__color__</li><li>__expression__</li><li>__hair style__</li><li>__hairornament__</li><li>__hat__</li><li>__ornament__</li><li>__pose__</li><li>__shoes__</li><li>__sock__</li><li>__style__</li></ul>\n <br/>\n <code>WILDCARD_DIR: scripts/wildcards</code><br/>\n <small>You can add more wildcards by creating a text file with one term per line and name is mywildcards.txt. Place it in scripts/wildcards. <code>__mywildcards__</code> will then become available.</small>\n ', '<ul>\n<li><code>CFG Scale</code> should be 2 or lower.</li>\n</ul>\n', True, True, '', '', True, 50, True, 1, 0, False, 4, 1, 'None', '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', 0, '', 0, '', True, False, False, False, 0, None, False, None, False, 50, False, 4.0, '', 10.0, 'Linear', 3, False, 30.0, True, False, False, 0, 0.0, 'Lanczos', 1, True, 0, 0, 0.0001, 75, 0.0, False, 10.0, 30.0, True, 0.0, 'Lanczos', 1, 0, 0, 75, 0.0001, 0.0, False, False, False, 'Euler a', 0.95, 0.75, '0.75:0.95:5', '0.2:0.8:5', 'zero', 'pos', 'linear', 0.2, 0.0, 0.75, None, 'Lanczos', 1, 0, 0) {}
Traceback (most recent call last):
File "D:\Downloads\stable-diffusion-webui\modules\call_queue.py", line 56, in f
res = list(func(*args, **kwargs))
File "D:\Downloads\stable-diffusion-webui\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "D:\Downloads\stable-diffusion-webui\modules\img2img.py", line 171, in img2img
processed = process_images(p)
File "D:\Downloads\stable-diffusion-webui\modules\processing.py", line 486, in process_images
res = process_images_inner(p)
File "D:\Downloads\stable-diffusion-webui\modules\processing.py", line 636, in process_images_inner
samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
File "D:\Downloads\stable-diffusion-webui\modules\processing.py", line 1054, in sample
samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, unconditional_conditioning, image_conditioning=self.image_conditioning)
File "D:\Downloads\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 324, in sample_img2img
samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "D:\Downloads\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 227, in launch_sampling
return func()
File "D:\Downloads\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 324, in <lambda>
samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "D:\Downloads\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\Downloads\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "D:\Downloads\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1533, in _call_impl
return forward_call(*args, **kwargs)
File "D:\Downloads\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 125, in forward
x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond={"c_crossattn": [cond_in[a:b]], "c_concat": [image_cond_in[a:b]]})
File "D:\Downloads\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1533, in _call_impl
return forward_call(*args, **kwargs)
File "D:\Downloads\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\Downloads\stable-diffusion-webui\extensions\multidiffusion-upscaler-for-automatic1111\methods\multidiffusion.py", line 71, in kdiff_repeat
return self.compute_x_tile(x_in, repeat_func, custom_func)
File "D:\Downloads\stable-diffusion-webui\extensions\multidiffusion-upscaler-for-automatic1111\methods\multidiffusion.py", line 153, in compute_x_tile
x_tile_out = custom_func(x_tile, cond, uncond, index, bbox)
File "D:\Downloads\stable-diffusion-webui\extensions\multidiffusion-upscaler-for-automatic1111\methods\multidiffusion.py", line 67, in custom_func
def custom_func(x, custom_cond, uncond, bbox_id, bbox): return self.kdiff_custom_forward(
File "D:\Downloads\stable-diffusion-webui\extensions\multidiffusion-upscaler-for-automatic1111\methods\abstractdiffusion.py", line 219, in kdiff_custom_forward
uncond_out = forward_func(x_tile[cond_size:cond_size+uncond_size], sigma_in[cond_size:cond_size+uncond_size], cond={"c_crossattn": [
File "D:\Downloads\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 125, in forward
eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
File "D:\Downloads\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 151, in get_eps
return self.inner_model.apply_model(*args, **kwargs)
File "D:\Downloads\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "D:\Downloads\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
return self.__orig_func(*args, **kwargs)
File "D:\Downloads\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
x_recon = self.model(x_noisy, t, **cond)
File "D:\Downloads\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1570, in _call_impl
result = forward_call(*args, **kwargs)
File "D:\Downloads\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1329, in forward
out = self.diffusion_model(x, t, context=cc)
File "D:\Downloads\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1533, in _call_impl
return forward_call(*args, **kwargs)
File "D:\Downloads\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 776, in forward
h = module(h, emb, context)
File "D:\Downloads\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1533, in _call_impl
return forward_call(*args, **kwargs)
File "D:\Downloads\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 84, in forward
x = layer(x, context)
File "D:\Downloads\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1533, in _call_impl
return forward_call(*args, **kwargs)
File "D:\Downloads\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 324, in forward
x = block(x, context=context[i])
File "D:\Downloads\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1533, in _call_impl
return forward_call(*args, **kwargs)
File "D:\Downloads\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 259, in forward
return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
File "D:\Downloads\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 114, in checkpoint
return CheckpointFunction.apply(func, len(inputs), *args)
File "D:\Downloads\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\function.py", line 506, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
File "D:\Downloads\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 129, in forward
output_tensors = ctx.run_function(*ctx.input_tensors)
File "D:\Downloads\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 262, in _forward
x = self.attn1(self.norm1(x), context=context if self.disable_self_attn else None) + x
File "D:\Downloads\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1533, in _call_impl
return forward_call(*args, **kwargs)
File "D:\Downloads\stable-diffusion-webui\modules\sd_hijack_optimizations.py", line 342, in xformers_attention_forward
out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None, op=get_xformers_flash_attention_op(q, k, v))
File "D:\Downloads\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 196, in memory_efficient_attention
return _memory_efficient_attention(
File "D:\Downloads\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 292, in _memory_efficient_attention
return _memory_efficient_attention_forward(
File "D:\Downloads\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 312, in _memory_efficient_attention_forward
out, *_ = op.apply(inp, needs_gradient=False)
File "D:\Downloads\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\fmha\flash.py", line 235, in apply
out, softmax_lse = cls.OPERATOR(
File "D:\Downloads\stable-diffusion-webui\venv\lib\site-packages\torch\_ops.py", line 504, in __call__
return self._op(*args, **kwargs or {})
File "D:\Downloads\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\fmha\flash.py", line 59, in _flash_fwd
lse = _C_flashattention.fwd(
RuntimeError: Expected batch_size > 0 to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
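As an aside, the [Tiled VAE] lines earlier in this log show how the split is planned: tile counts come from ceiling division by the requested tile size, and the "optimal" tile size then shrinks to just cover the image. A rough reconstruction inferred from the logged numbers (1728x1152 at tile size 960 with padding 32 gives 2x2 tiles of 832x544), not the exact code in vae_optimize.py:

import math

def plan_tiles(height: int, width: int, tile_size: int, padding: int = 32):
    rows = math.ceil(height / tile_size)     # e.g. ceil(1728 / 960) = 2
    cols = math.ceil(width / tile_size)      # e.g. ceil(1152 / 960) = 2
    # Optimal tile = dim / tiles minus the overlap padding, rounded up to 32
    # (formula inferred from the log output, not taken from the source).
    tile_h = math.ceil((height // rows - padding) / 32) * 32   # -> 832
    tile_w = math.ceil((width // cols - padding) / 32) * 32    # -> 544
    return rows, cols, tile_h, tile_w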
Facing a weird issue.
Traceback (most recent call last):
File "D:\AI6\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 337, in run_predict
output = await app.get_blocks().process_api(
File "D:\AI6\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1013, in process_api
inputs = self.preprocess_data(fn_index, inputs, state)
File "D:\AI6\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 911, in preprocess_data
processed_input.append(block.preprocess(inputs[i]))
File "D:\AI6\stable-diffusion-webui\venv\lib\site-packages\gradio\components.py", line 1800, in preprocess
x["name"],
TypeError: 'int' object is not subscriptable
Any ideas?
Seems like a bug in the Gradio app.
Launching Web UI with arguments: --api --xformers
Nothing runs and no images are produced; only the traceback is printed.
Hey, you guys are awesome!
I would like to ask for a little help: how do I use ControlNet?
At any denoising strength it ignores the input image, as if ControlNet doesn't work at all. I've tried every combination of everything. :)
Steps: 40, Sampler: DDIM (tried all), CFG scale: 3.5 (tried all), Seed: 410819813, Size: 1024x1024, Model hash: 710fc74d4c (Protogen), Denoising strength: 0.87 (tried all), Mask blur: 4, ControlNet-0 Enabled: True, ControlNet-0 Module: canny, ControlNet-0 Model: control_sd15_canny [fef5e48e] (tried the same input image and a different one), ControlNet-0 Weight: 1 (tried 2), ControlNet-0 Guidance Start: 0, ControlNet-0 Guidance End: 1, Tiled Diffusion upscaler: ESRGAN_4x, Tiled Diffusion scale factor: 2, Tiled Diffusion tile width: 96, Tiled Diffusion tile height: 96, Tiled Diffusion overlap: 48
ON: xformers
OFF: medvram, always-batch-cond-uncond, no-half, no-half-vae
Thanks
Mira
Hello!
I'm getting this error message. Can you please take a look into it?
Restarting UI...
Error loading script: tilediffusion.py
Traceback (most recent call last):
File "D:\stable-diffusion-webui\modules\scripts.py", line 248, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "D:\stable-diffusion-webui\modules\script_loading.py", line 11, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "D:\stable-diffusion-webui\extensions\multidiffusion-upscaler-for-automatic1111\scripts\tilediffusion.py", line 64, in <module>
from methods import MultiDiffusion, MixtureOfDiffusers, splitable, BlendMode
ImportError: cannot import name 'BlendMode' from 'methods' (D:\stable-diffusion-webui\extensions\multidiffusion-upscaler-for-automatic1111\methods\__init__.py)
Loading Unprompted v7.9.1 by Therefore Games
(SETUP) Initializing Unprompted object...
(SETUP) Loading configuration files...
(SETUP) Debug mode is False
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 9.0s (list extensions: 2.7s, load scripts: 1.0s, create ui: 4.9s, gradio launch: 0.2s).
Please forgive my poor English. I use MultiDiffusion and Tiled VAE with the default settings, but the rendered image still repeats two people even with (1girl) written in the prompt; it's still no use. Only when I turn MultiDiffusion off do I get a single person. How can I solve this? Thank you very much.
Hello, thanks for the awesome extension. I am trying to use it as a script via the API and I keep getting this error.
Error from my web app's Chrome developer tools network console:
{"message":"Cannot read properties of undefined (reading '0')"}
The same message happened with "Ultimate SD upscale" as well, but at some point it printed the error below, and I just added the proper "script_args" to get it to work.
Is this the same issue? I tried to add args matching the "process" function in "multidiffusion.py" and it didn't work. Any ideas? Thank you!
error for Ultimate SD gave this:
TypeError: Script.run() missing 18 required positional arguments: '_', 'tile_width', 'tile_height', 'mask_blur', 'padding', 'seams_fix_width', 'seams_fix_denoise', 'seams_fix_padding', 'upscaler_index', 'save_upscaled_image', 'redraw_mode', 'save_seams_fix_image', 'seams_fix_mask_blur', 'seams_fix_type', 'target_size_type', 'custom_width', 'custom_height', and 'custom_scale'
Then I added these args to script_args and it worked:
script_name: 'MultiDiffusion',
script_args: [
imageBase64, 608, 608, 8, 32, 64, 0.35, 16,
3, 'True', 1, "False", 4,
0, "From img2img2 settings", 1512, 1512, 2
],
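For reference, the WebUI API takes script_name and script_args at the top level of the img2img payload, and script_args must line up positionally with the script's entry-point parameters. A minimal sketch in Python (the payload values are illustrative placeholders, not Tiled Diffusion's actual argument list, which you would read off the script's own signature):

import base64
import requests

with open("input.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

payload = {
    "init_images": [image_b64],
    "prompt": "masterpiece, best quality",
    "denoising_strength": 0.4,
    "script_name": "MultiDiffusion",
    # Placeholder: fill positionally from the script's run()/process signature.
    "script_args": [],
}
r = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)
r.raise_for_status()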
Upscaling image with BSRGAN...
LoRA weight_unet: 0.3, weight_tenc: 0.3, model: ligneClaireStyleCogecha_v10(21e170a9ba68)
dimension: 64, alpha: 64.0, multiplier_unet: 0.3, multiplier_tenc: 0.3
create LoRA for Text Encoder: 72 modules.
create LoRA for U-Net: 192 modules.
original forward/weights is backed up.
enable LoRA for text encoder
enable LoRA for U-Net
shapes for 0 weights are converted.
LoRA model ligneClaireStyleCogecha_v10(21e170a9ba68) loaded:
LoRA weight_unet: 1, weight_tenc: 1, model: makotoShinkaiSubstyles_offset(ca429a1460f9)
dimension: 128, alpha: 128.0, multiplier_unet: 1, multiplier_tenc: 1
create LoRA for Text Encoder: 72 modules.
create LoRA for U-Net: 192 modules.
enable LoRA for text encoder
enable LoRA for U-Net
shapes for 0 weights are converted.
LoRA model makotoShinkaiSubstyles_offset(ca429a1460f9) loaded:
setting (or sd model) changed. new networks created.
MultiDiffusion hooked into DPM++ 2M Karras sampler. Tile size: 64 x 64 Tile batches: 28 Batch size: 1
[Tiled VAE]: input_size: torch.Size([1, 3, 1056, 1920]), tile_size: 960, padding: 32
[Tiled VAE]: split to 2x2 = 4 tiles. Optimal tile size 928x512, original tile size 960x960
[Tiled VAE]: Fast mode enabled, estimating group norm parameters on 960 x 528 image
[Tiled VAE]: Executing Encoder Task Queue: 100%|█████████████████████████████████████| 364/364 [00:07<00:00, 46.05it/s]
[Tiled VAE]: Done in 11.656s, max VRAM alloc 4130.609 MB
0%| | 0/13 [00:00<?, ?it/s]
Error completing request
Arguments: ('task(908z6j5sctakndi)', 0, '1girl,upper body, (close up:0.4),cool colors', '(worst quality, low quality:1.4), watermark, logo,easynegative,', [], <PIL.Image.Image image mode=RGBA size=480x264 at 0x2627F5622C0>, None, None, None, None, None, None, 20, 15, 4, 0, 1, False, False, 1, 1, 6, 1.5, 0.6, 3987989009.0, -1.0, 0, 0, 0, False, 270, 480, 0, 0, 32, 0, '', '', '', [], 0, True, False, 1024, 1024, True, 64, 64, 32, 1, 'BSRGAN', 4, False, True, True, True, True, 0, 960, 64, True, False, 'LoRA', 'ligneClaireStyleCogecha_v10(21e170a9ba68)', 0.3, 0.3, 'LoRA', 'None', 0, 0, 'LoRA', 'makotoShinkaiSubstyles_offset(ca429a1460f9)', 1, 1, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'Refresh models', '
CFG Scale
should be 2 or lower.Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8
', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', 'Will upscale the image by the selected scale factor; use width and height sliders to set tile size
', 64, 0, 2, 1, '', 0, '', 0, '', True, False, False, False, 0) {}