vladmandic / automatic

SD.Next: Advanced Implementation of Stable Diffusion and other Diffusion-based generative image models

Home Page: https://github.com/vladmandic/automatic

License: GNU Affero General Public License v3.0

JavaScript 2.06% Python 83.87% CSS 3.10% HTML 0.50% Shell 0.03% Batchfile 0.03% PowerShell 0.03% Jupyter Notebook 9.10% C++ 0.51% Cuda 0.76%
stable-diffusion generative-art stable-diffusion-webui img2img txt2img sdnext diffusers a1111-webui automatic1111 ai-art

automatic's Introduction

SD.Next

Stable Diffusion implementation with advanced features


Wiki | Discord | Changelog


Notable features

Not all individual features are listed here; check the ChangeLog for the full list of changes

  • Multiple backends!
    Diffusers | Original
  • Multiple diffusion models!
    Stable Diffusion 1.5/2.1 | SD-XL | LCM | Segmind | Kandinsky | Pixart-α | Stable Cascade | Würstchen | aMUSEd | DeepFloyd IF | UniDiffusion | SD-Distilled | BLiP Diffusion | KOALA | etc.
  • Built-in Control for Text, Image, Batch and video processing!
    ControlNet | ControlNet XS | Control LLLite | T2I Adapters | IP Adapters
  • Multiplatform!
    Windows | Linux | MacOS with CPU | nVidia | AMD | IntelArc | DirectML | OpenVINO | ONNX+Olive | ZLUDA
  • Platform-specific autodetection and tuning performed on install
  • Optimized processing using the latest torch developments, with built-in support for torch.compile
    and multiple compile backends: Triton, ZLUDA, StableFast, DeepCache, OpenVINO, NNCF and IPEX
  • Improved prompt parser
  • Enhanced Lora/LoCon/Lyco code supporting the latest trends in training
  • Built-in queue management
  • Enterprise-level logging and hardened API
  • Built-in installer with automatic updates and dependency management
  • Modernized UI with theme support and a number of built-in themes (dark and light)

Main text2image interface:
Screenshot-Dark

For screenshots and information on other available themes, see the Themes Wiki


Backend support

SD.Next supports two main backends: Diffusers and Original:

  • Diffusers: Based on the new Huggingface Diffusers implementation
    Supports all models listed below
    This backend is set as the default for new installations
    See the wiki article for more information
  • Original: Based on the LDM reference implementation and significantly expanded on by A1111
    This backend is fully compatible with most existing functionality and extensions written for A1111 SDWebUI
    Supports SD 1.x and SD 2.x models
    All other model types such as SD-XL, LCM, PixArt, Segmind, Kandinsky, etc. require the Diffusers backend
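As a concrete example, the backend can be forced at launch with the --backend flag (a minimal sketch; the exact launcher script depends on your platform):

  # force the Diffusers backend (default for new installations)
  ./webui.sh --backend diffusers

  # force the Original (A1111-compatible) backend for SD 1.x / SD 2.x workflows
  ./webui.sh --backend original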

Model support

Additional models will be added as they become available and there is public interest in them

Also supported are modifiers such as:

  • LCM and Turbo (adversarial diffusion distillation) networks
  • All LoRA types such as LoCon, LyCORIS, HADA, IA3, Lokr, OFT
  • IP-Adapters for SD 1.5 and SD-XL
  • InstantID, FaceSwap, FaceID, PhotoMerge
  • AnimateDiff for SD 1.5

Examples

IP Adapters: Screenshot-IPAdapter

Color grading:
Screenshot-Color

InstantID:
Screenshot-InstantID

Important

  • Loading any model other than standard SD 1.x / SD 2.x requires use of the Diffusers backend
  • Loading any other models using the Original backend is not supported
  • Loading manually downloaded model .safetensors files is supported for specified models only (typically SD 1.x / SD 2.x / SD-XL models only)
  • For all other model types, use the Diffusers backend and either use the built-in model downloader or
    select a model from the Networks -> Models -> Reference list, in which case it will be auto-downloaded and loaded
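For instance, a manually downloaded SD-XL checkpoint can be loaded directly at startup (a sketch; the checkpoint path below is only an illustration and depends on where you store your models):

  # load a locally downloaded SD-XL checkpoint using the Diffusers backend
  ./webui.sh --backend diffusers --ckpt models/Stable-diffusion/sd_xl_base_1.0.safetensors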

Platform support

  • nVidia GPUs using CUDA libraries on both Windows and Linux
  • AMD GPUs using ROCm libraries on Linux
    Support will be extended to Windows once AMD releases ROCm for Windows
  • Intel Arc GPUs using OneAPI with IPEX XPU libraries on both Windows and Linux
  • Any GPU compatible with DirectX on Windows using DirectML libraries
    This includes support for AMD GPUs that are not supported by native ROCm libraries
  • Any GPU or device compatible with OpenVINO libraries on both Windows and Linux
  • Apple M1/M2 on OSX using built-in support in Torch with MPS optimizations
  • ONNX/Olive

Install

Tip

  • The server can run with or without a virtual environment;
    using a VENV is recommended to avoid library version conflicts with other applications
  • nVidia/CUDA, AMD/ROCm and Intel/OneAPI are auto-detected if present and available;
    for any other use case such as DirectML, ONNX/Olive or OpenVINO, specify the required parameter explicitly,
    or the wrong packages may be installed as the installer will assume a CPU-only environment
  • The full startup sequence is logged in sdnext.log,
    so if you encounter any issues, please check it first
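For example, on hardware the installer cannot auto-detect, the relevant setup flag can be passed explicitly (a sketch; pick the flag that matches your platform from the setup options listed further below):

  # AMD GPU on Windows via DirectML
  .\webui.bat --use-directml

  # CPU or GPU via Intel OpenVINO
  ./webui.sh --use-openvino

  # enable verbose installer logging; the full startup sequence is also written to sdnext.log
  ./webui.sh --debug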

Run

Once SD.Next is installed, simply run webui.ps1 or webui.bat (Windows) or webui.sh (Linux or MacOS)

List of available parameters; run webui --help for the full and up-to-date list:

Server options:
  --config CONFIG                                    Use specific server configuration file, default: config.json
  --ui-config UI_CONFIG                              Use specific UI configuration file, default: ui-config.json
  --medvram                                          Split model stages and keep only active part in VRAM, default: False
  --lowvram                                          Split model components and keep only active part in VRAM, default: False
  --ckpt CKPT                                        Path to model checkpoint to load immediately, default: None
  --vae VAE                                          Path to VAE checkpoint to load immediately, default: None
  --data-dir DATA_DIR                                Base path where all user data is stored, default:
  --models-dir MODELS_DIR                            Base path where all models are stored, default: models
  --allow-code                                       Allow custom script execution, default: False
  --share                                            Enable UI accessible through Gradio site, default: False
  --insecure                                         Enable extensions tab regardless of other options, default: False
  --use-cpu USE_CPU [USE_CPU ...]                    Force use CPU for specified modules, default: []
  --listen                                           Launch web server using public IP address, default: False
  --port PORT                                        Launch web server with given server port, default: 7860
  --freeze                                           Disable editing settings
  --auth AUTH                                        Set access authentication like "user:pwd,user:pwd"
  --auth-file AUTH_FILE                              Set access authentication using file, default: None
  --autolaunch                                       Open the UI URL in the system's default browser upon launch
  --docs                                             Mount API docs, default: False
  --api-only                                         Run in API only mode without starting UI
  --api-log                                          Enable logging of all API requests, default: False
  --device-id DEVICE_ID                              Select the default CUDA device to use, default: None
  --cors-origins CORS_ORIGINS                        Allowed CORS origins as comma-separated list, default: None
  --cors-regex CORS_REGEX                            Allowed CORS origins as regular expression, default: None
  --tls-keyfile TLS_KEYFILE                          Enable TLS and specify key file, default: None
  --tls-certfile TLS_CERTFILE                        Enable TLS and specify cert file, default: None
  --tls-selfsign                                     Enable TLS with self-signed certificates, default: False
  --server-name SERVER_NAME                          Sets hostname of server, default: None
  --no-hashing                                       Disable hashing of checkpoints, default: False
  --no-metadata                                      Disable reading of metadata from models, default: False
  --disable-queue                                    Disable queues, default: False
  --subpath SUBPATH                                  Customize the URL subpath for usage with reverse proxy
  --backend {original,diffusers}                     Force model pipeline type
  --allowed-paths ALLOWED_PATHS [ALLOWED_PATHS ...]  Add additional paths to paths allowed for web access

Setup options:
  --reset                                            Reset main repository to latest version, default: False
  --upgrade                                          Upgrade main repository to latest version, default: False
  --requirements                                     Force re-check of requirements, default: False
  --quick                                            Bypass version checks, default: False
  --use-directml                                     Use DirectML if no compatible GPU is detected, default: False
  --use-openvino                                     Use Intel OpenVINO backend, default: False
  --use-ipex                                         Force use Intel OneAPI XPU backend, default: False
  --use-cuda                                         Force use nVidia CUDA backend, default: False
  --use-rocm                                         Force use AMD ROCm backend, default: False
  --use-zluda                                        Force use ZLUDA, AMD GPUs only, default: False
  --use-xformers                                     Force use xFormers cross-optimization, default: False
  --skip-requirements                                Skips checking and installing requirements, default: False
  --skip-extensions                                  Skips running individual extension installers, default: False
  --skip-git                                         Skips running all GIT operations, default: False
  --skip-torch                                       Skips running Torch checks, default: False
  --skip-all                                         Skips running all checks, default: False
  --skip-env                                         Skips setting of env variables during startup, default: False
  --experimental                                     Allow unsupported versions of libraries, default: False
  --reinstall                                        Force reinstallation of all requirements, default: False
  --test                                             Run test only and exit
  --version                                          Print version information
  --ignore                                           Ignore any errors and attempt to continue
  --safe                                             Run in safe mode with no user extensions

Logging options:
  --log LOG                                          Set log file, default: None
  --debug                                            Run installer with debug logging, default: False
  --profile                                          Run profiler, default: False
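
A typical invocation combining a few of the server options above might look like this (a sketch only; values shown are placeholders, not recommendations):

  # listen on all interfaces on a custom port, with reduced VRAM usage and basic auth (placeholder credentials)
  ./webui.sh --listen --port 7861 --medvram --auth "admin:secret"

  # run headless in API-only mode with API docs mounted and request logging enabled
  ./webui.sh --api-only --docs --api-log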

Notes

Control

SD.Next comes with built-in control for all types of text2image, image2image, video2video and batch processing

Control interface:
Screenshot-Control

Control processors:
Screenshot-Process

Masking: Screenshot-Mask

Extensions

SD.Next comes with several extensions pre-installed:

Collab

  • We'd love to have additional maintainers (which comes with full repo rights). If you're interested, ping us!
  • In addition to general cross-platform code, the desire is to have a lead for each of the main platforms
    The project should be fully cross-platform, but we'd really love additional contributors and/or maintainers to join and help lead the efforts on different platforms

Credits

Evolution

(star history chart)

Docs

If you're unsure how to use a feature, the best place to start is the Wiki; if it's not covered there,
check the ChangeLog for when the feature was first introduced, as it will always have a short note on how to use it

Sponsors

Allan Grant, Brent Ozar, Matthew Runo, Salad Technologies, a.v.mantzaris

automatic's People

Contributors

36db, ai-casanova, aptronymist, aria1th, automatic1111, batvbs, brkirch, c43h66n12o12s2, d8ahazard, dfaker, discus0434, disty0, dtlnor, ellangok, gegell, guaneec, hameerabbasi, lshqqytiger, mezotaken, midcoastal, orionaskatu, papuspartan, r-n, random-thoughtss, space-nuko, timntorres, trojaner, vladmandic, w-e-w, yfszzx


automatic's Issues

[Feature Request]: Is there any way to run it on Windows natively?

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

What would your feature do ?

I would like to run your fork on Windows without WSL. Is it possible, basically?
Thank you!

Proposed workflow

  1. Run some cmd-file.
  2. Get WebUI, generation, etc. working.

Additional information

No response

[Issue]: Memory_efficient error

Issue Description

NotImplementedError: No operator found for memory_efficient_attention_forward with inputs:
    query : shape=(1, 4096, 1, 512) (torch.float16)
    key : shape=(1, 4096, 1, 512) (torch.float16)
    value : shape=(1, 4096, 1, 512) (torch.float16)
    attn_bias : <class 'NoneType'>
    p : 0.0
cutlassF is not supported because:
    xFormers wasn't build with CUDA support
flshattF is not supported because:
    xFormers wasn't build with CUDA support
    max(query.shape[-1] != value.shape[-1]) > 128
tritonflashattF is not supported because:
    xFormers wasn't build with CUDA support
    max(query.shape[-1] != value.shape[-1]) > 128
    triton is not available
    requires A100 GPU
smallkF is not supported because:
    xFormers wasn't build with CUDA support
    dtype=torch.float16 (supported: {torch.float32})
    max(query.shape[-1] != value.shape[-1]) > 32
    unsupported embed per head: 512

It runs but doesn't show any image, just that error.

Platform Description

Win 10. Python 3.10.9. Torch 2 (Installed from wiki guide). No warnings at launch.py start.

[Feature Request]: Can we have radio buttons for samplers back?

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

What would your feature do ?

Give the choice to have samplers in dropdown or radio buttons.

Proposed workflow

  1. Go to Settings
  2. Choose option

Additional information

Was there a particular reason this was removed? I feel I'm way faster clicking them via radio buttons.

If not, I can add a PR for that.

[Issue]: Textual inversions not showing up on the Extra Network UI

Issue Description

When looking at the cmd box, it seems to load all the TI, but when trying to look for them in the extra network tab, they seem to be missing

You can see it properly pathed here
chrome_yRImhUbR4A

No images loading
chrome_irFp3gPRHd

my directory to show
explorer_6Jkzblvw8f

the cmd does load them
cmd_HgkZfjaDON

Platform Description

Windows, Chrome

[Issue]: Generate Forever Not Working

Issue Description

image

image

The image above is how you Generate Forever until you cancel it yourself. Currently Generating Forever is not working. Nothing happens when you click it.

Platform Description

Windows 10
Ryzen 7 2700
RTX 3060 Ti

[Bug]: Torch is not able to use GPU

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

What happened?

After installing the requirements, I run webui.bat and got the error

Steps to reproduce the problem

  1. Install requirements
  2. Run webui.bat
  3. error

What should have happened?

It should go through; I encountered no such error when using the vanilla webui

Commit where the problem happens

ffc54d0

What platforms do you use to access the UI ?

Windows

What browsers do you use to access the UI ?

Mozilla Firefox

Command Line Arguments

No

List of extensions

No

Console logs

venv "D:\Stable_diffusion\automatic\venv\Scripts\Python.exe"
Traceback (most recent call last):
  File "D:\Stable_diffusion\automatic\launch.py", line 310, in <module>
    prepare_environment()
  File "D:\Stable_diffusion\automatic\launch.py", line 247, in prepare_environment
    run_python("import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check'")
  File "D:\Stable_diffusion\automatic\launch.py", line 108, in run_python
    return run(f'"{python}" -c "{code}"', desc, errdesc)
  File "D:\Stable_diffusion\automatic\launch.py", line 84, in run
    raise RuntimeError(message)
RuntimeError: Error running command.
Command: "D:\Stable_diffusion\automatic\venv\Scripts\python.exe" -c "import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check'"
Error code: 1
stdout: <empty>
stderr: Traceback (most recent call last):
  File "<string>", line 1, in <module>
AssertionError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check
Press any key to continue . . .

Additional information

No response

[Feature] Handle extensions looking up removed command line flags

Issue Description

This extension simply refuses to initialize, probably because wildcard directory handling changed a bit in this fork. The UmiAI extension, which also relies on sort-of wildcards, seems to be working fine, though.
I'll check in the following run whether UmiAI, despite throwing no errors during startup, functions properly.

Error loading script: tag_autocomplete_helper.py
╭─────────────────────────────────────── Traceback (most recent call last) ────────────────────────────────────────╮
│ X:\AI\automatic\modules\scripts.py:255 in load_scripts                                                           │
│                                                                                                                  │
│   253 │   │   │   current_basedir = scriptfile.basedir                                                           │
│   254 │   │   │                                                                                                  │
│ ❱ 255 │   │   │   script_module = script_loading.load_module(scriptfile.path)                                    │
│   256 │   │   │   register_scripts_from_module(script_module)                                                    │
│   257                                                                                                            │
│                                                                                                                  │
│ X:\AI\automatic\modules\script_loading.py:11 in load_module                                                      │
│                                                                                                                  │
│    9 │   module_spec = importlib.util.spec_from_file_location(os.path.basename(path), path)                      │
│   10 │   module = importlib.util.module_from_spec(module_spec)                                                   │
│ ❱ 11 │   module_spec.loader.exec_module(module)                                                                  │
│   12 │                                                                                                           │
│   13 │   return module                                                                                           │
│ in exec_module:883                                                                                               │
│ in _call_with_frames_removed:241                                                                                 │
│                                                                                                                  │
│ X:\AI\automatic\extensions\a1111-sd-webui-tagcomplete\scripts\tag_autocomplete_helper.py:27 in <module>          │
│                                                                                                                  │
│    25 # The path to the folder containing the wildcards and embeddings                                           │
│    26 WILDCARD_PATH = FILE_DIR.joinpath('scripts/wildcards')                                                     │
│ ❱  27 EMB_PATH = Path(shared.cmd_opts.embeddings_dir)                                                            │
│    28 HYP_PATH = Path(shared.cmd_opts.hypernetwork_dir)                                                          │
│    29                                                                                                            │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
AttributeError: 'Namespace' object has no attribute 'embeddings_dir'

I have an "embeddings" directory with some models inside, and it's being used properly within the UI.

Platform Description

Windows 10 22H2, RTX 3090, MS Edge.

[Issue]: Issues with latest Gradio

Issue Description

Not sure what got reverted, but suddenly it's not opening for me; it just crashes at the end.
Here's the whole crash (which may contain unnecessary/duplicate known issues):

14:32:46-320390 INFO Available models: D:\stable-diffusion\stable-diffusion-webui\models\Stable-diffusion 153
loading script: toolkit_gui.py: AttributeError
┌───────────────────────────────────────── Traceback (most recent call last) ─────────────────────────────────────────┐
│ D:\stable-diffusion\automatic\modules\scripts.py:255 in load_scripts │
│ │
│ 254 │ │ │ │
│ > 255 │ │ │ script_module = script_loading.load_module(scriptfile.path) │
│ 256 │ │ │ register_scripts_from_module(script_module) │
│ │
│ D:\stable-diffusion\automatic\modules\script_loading.py:12 in load_module │
│ │
│ 11 │ module = importlib.util.module_from_spec(module_spec) │
│ > 12 │ module_spec.loader.exec_module(module) │
│ 13 │
│ │
│ ... 1 frames hidden ... │
│ in _call_with_frames_removed:241 │
│ │
│ D:\stable-diffusion\automatic\extensions\stable-diffusion-webui-model-toolkit\scripts\toolkit_gui.py:17 in │
│ │
│ 16 │
│ > 17 MODEL_SAVE_PATH = shared.cmd_opts.ckpt_dir or os.path.join(models_path, "Stable-diffusio │
│ 18 AUTOPRUNE_PATH = os.path.join(models_path, "Autoprune") │
└─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
AttributeError: 'Namespace' object has no attribute 'ckpt_dir'
loading script: tagger.py: ModuleNotFoundError
┌───────────────────────────────────────── Traceback (most recent call last) ─────────────────────────────────────────┐
│ D:\stable-diffusion\automatic\modules\scripts.py:255 in load_scripts │
│ │
│ 254 │ │ │ │
│ > 255 │ │ │ script_module = script_loading.load_module(scriptfile.path) │
│ 256 │ │ │ register_scripts_from_module(script_module) │
│ │
│ D:\stable-diffusion\automatic\modules\script_loading.py:12 in load_module │
│ │
│ 11 │ module = importlib.util.module_from_spec(module_spec) │
│ > 12 │ module_spec.loader.exec_module(module) │
│ 13 │
│ │
│ ... 3 frames hidden ... │
│ │
│ D:\stable-diffusion\automatic\extensions\stable-diffusion-webui-wd14-tagger\tagger\api.py:6 in │
│ │
│ 5 from modules import shared │
│ > 6 from modules.api.api import decode_base64_to_image │
│ 7 from modules.call_queue import queue_lock │
│ │
│ D:\stable-diffusion\automatic\modules\api\api.py:9 in │
│ │
│ 8 from io import BytesIO │
│ > 9 from gradio_client.utils import decode_base64_to_file │
│ 10 from fastapi import APIRouter, Depends, FastAPI, Request, Response │
└─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
ModuleNotFoundError: No module named 'gradio_client'
loading script: api.py: ModuleNotFoundError
┌───────────────────────────────────────── Traceback (most recent call last) ─────────────────────────────────────────┐
│ D:\stable-diffusion\automatic\modules\scripts.py:255 in load_scripts │
│ │
│ 254 │ │ │ │
│ > 255 │ │ │ script_module = script_loading.load_module(scriptfile.path) │
│ 256 │ │ │ register_scripts_from_module(script_module) │
│ │
│ D:\stable-diffusion\automatic\modules\script_loading.py:12 in load_module │
│ │
│ 11 │ module = importlib.util.module_from_spec(module_spec) │
│ > 12 │ module_spec.loader.exec_module(module) │
│ 13 │
│ │
│ ... 2 frames hidden ... │
│ │
│ D:\stable-diffusion\automatic\extensions-builtin\sd-webui-controlnet\scripts\api.py:13 in │
│ │
│ 12 from modules.api.models import * │
│ > 13 from modules.api import api │
│ 14 │
│ │
│ D:\stable-diffusion\automatic\modules\api\api.py:9 in │
│ │
│ 8 from io import BytesIO │
│ > 9 from gradio_client.utils import decode_base64_to_file │
│ 10 from fastapi import APIRouter, Depends, FastAPI, Request, Response │
└─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
ModuleNotFoundError: No module named 'gradio_client'
loading script: controlnet.py: ModuleNotFoundError
┌───────────────────────────────────────── Traceback (most recent call last) ─────────────────────────────────────────┐
│ D:\stable-diffusion\automatic\modules\scripts.py:255 in load_scripts │
│ │
│ 254 │ │ │ │
│ > 255 │ │ │ script_module = script_loading.load_module(scriptfile.path) │
│ 256 │ │ │ register_scripts_from_module(script_module) │
│ │
│ D:\stable-diffusion\automatic\modules\script_loading.py:12 in load_module │
│ │
│ 11 │ module = importlib.util.module_from_spec(module_spec) │
│ > 12 │ module_spec.loader.exec_module(module) │
│ 13 │
│ │
│ ... 3 frames hidden ... │
│ │
│ D:\stable-diffusion\automatic\extensions-builtin\sd-webui-controlnet\scripts\external_code.py:7 in │
│ │
│ 6 │
│ > 7 from modules.api import api │
│ 8 │
│ │
│ D:\stable-diffusion\automatic\modules\api\api.py:9 in │
│ │
│ 8 from io import BytesIO │
│ > 9 from gradio_client.utils import decode_base64_to_file │
│ 10 from fastapi import APIRouter, Depends, FastAPI, Request, Response │
└─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
ModuleNotFoundError: No module named 'gradio_client'
loading script: external_code.py: ModuleNotFoundError
┌───────────────────────────────────────── Traceback (most recent call last) ─────────────────────────────────────────┐
│ D:\stable-diffusion\automatic\modules\scripts.py:255 in load_scripts │
│ │
│ 254 │ │ │ │
│ > 255 │ │ │ script_module = script_loading.load_module(scriptfile.path) │
│ 256 │ │ │ register_scripts_from_module(script_module) │
│ │
│ D:\stable-diffusion\automatic\modules\script_loading.py:12 in load_module │
│ │
│ 11 │ module = importlib.util.module_from_spec(module_spec) │
│ > 12 │ module_spec.loader.exec_module(module) │
│ 13 │
│ │
│ ... 2 frames hidden ... │
│ │
│ D:\stable-diffusion\automatic\extensions-builtin\sd-webui-controlnet\scripts\external_code.py:7 in │
│ │
│ 6 │
│ > 7 from modules.api import api │
│ 8 │
│ │
│ D:\stable-diffusion\automatic\modules\api\api.py:9 in │
│ │
│ 8 from io import BytesIO │
│ > 9 from gradio_client.utils import decode_base64_to_file │
│ 10 from fastapi import APIRouter, Depends, FastAPI, Request, Response │
└─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
ModuleNotFoundError: No module named 'gradio_client'
Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch().
┌───────────────────────────────────────── Traceback (most recent call last) ─────────────────────────────────────────┐
│ D:\stable-diffusion\automatic\launch.py:93 in │
│ │
│ 92 import webui │
│ > 93 webui.webui() │
│ 94 │
│ │
│ D:\stable-diffusion\automatic\webui.py:207 in webui │
│ │
│ 206 modules.progress.setup_progress_api(app) │
│ > 207 create_api(app) │
│ 208 ui_extra_networks.add_pages_to_demo(app) │
│ │
│ D:\stable-diffusion\automatic\webui.py:157 in create_api │
│ │
│ 156 def create_api(app): │
│ > 157 from modules.api.api import Api │
│ 158 api = Api(app, queue_lock) │
│ │
│ D:\stable-diffusion\automatic\modules\api\api.py:9 in │
│ │
│ 8 from io import BytesIO │
│ > 9 from gradio_client.utils import decode_base64_to_file │
│ 10 from fastapi import APIRouter, Depends, FastAPI, Request, Response │
└─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
ModuleNotFoundError: No module named 'gradio_client'
Press any key to continue . . .

Platform Description

win11, chrome
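One plausible local workaround (an assumption, not a confirmed fix) is to install the missing package directly into the server's virtual environment; the venv path below is taken from the log above:

  # install the missing gradio_client package into the existing venv
  D:\stable-diffusion\automatic\venv\Scripts\pip install gradio_client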

[Bug]: Could not open requirements file: [Errno 2] No such file or directory: 'requirements_versions.txt'

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

What happened?

Using webui.bat to run on Windows 11.
Got an error.

Steps to reproduce the problem

  1. Run the webui.bat on Windows 11

What should have happened?

Web UI run

Commit where the problem happens

ee410df

What platforms do you use to access the UI ?

Windows

What browsers do you use to access the UI ?

Mozilla Firefox, Google Chrome

Command Line Arguments

none

List of extensions

none

Console logs

Installing requirements for Web UI
Traceback (most recent call last):
  File "E:\sd\automatic\launch.py", line 361, in <module>
    prepare_environment()
  File "E:\sd\automatic\launch.py", line 310, in prepare_environment
    run_pip(f"install -r {requirements_file}", "requirements for Web UI")
  File "E:\sd\automatic\launch.py", line 137, in run_pip
    return run(f'"{python}" -m pip {args} --prefer-binary{index_url_line}', desc=f"Installing {desc}", errdesc=f"Couldn't install {desc}")
  File "E:\sd\automatic\launch.py", line 105, in run
    raise RuntimeError(message)
RuntimeError: Couldn't install requirements for Web UI.
Command: "E:\sd\automatic\venv\Scripts\python.exe" -m pip install -r requirements_versions.txt --prefer-binary
Error code: 1
stdout: <empty>
stderr: ERROR: Could not open requirements file: [Errno 2] No such file or directory: 'requirements_versions.txt'

Additional information

No response

[Bug]: Invalid requirement: 'clip_interrogator=0.6.0' (from line 2 of requirements_versions.txt)

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

What happened?

Got that error during the installation. It's because of a typo in the requirements_versions.txt file.

clip_interrogator=0.6.0 should be changed to clip_interrogator==0.6.0

Cheers!
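Until the file is fixed upstream, a one-line local workaround (assuming GNU sed, run from the repo root) is:

  # correct the invalid '=' pin to '==' in requirements_versions.txt
  sed -i 's/clip_interrogator=0.6.0/clip_interrogator==0.6.0/' requirements_versions.txt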

Steps to reproduce the problem

Run installation or update via automatic.sh.

What should have happened?

It had to install all requirements without errors.

Commit where the problem happens

f275e43

What platforms do you use to access the UI ?

Windows, Linux

What browsers do you use to access the UI ?

Google Chrome

Command Line Arguments

nope

List of extensions

nope

Console logs

ERROR: Invalid requirement: 'clip_interrogator=0.6.0' (from line 2 of requirements_versions.txt)
Hint: = is not a valid operator. Did you mean == ?

Installing requirements for Web UI
Traceback (most recent call last):
  File "/home/uuser/ai/automatic/launch.py", line 305, in <module>
    prepare_environment()
  File "/home/uuser/ai/automatic/launch.py", line 270, in prepare_environment
    run_pip(f"install -r \"{requirements_file}\"", "requirements for Web UI")
  File "/home/uuser/ai/automatic/launch.py", line 132, in run_pip
    return run(f'"{python}" -m pip {args} --prefer-binary{index_url_line}', desc=f"Installing {desc}", errdesc=f"Couldn't install {desc}")
  File "/home/uuser/ai/automatic/launch.py", line 100, in run
    raise RuntimeError(message)
RuntimeError: Couldn't install requirements for Web UI.
Command: "/home/uuser/ai/automatic/venv/bin/python" -m pip install -r "requirements_versions.txt" --prefer-binary
Error code: 1
stdout: <empty>
stderr: ERROR: Invalid requirement: 'clip_interrogator=0.6.0' (from line 2 of requirements_versions.txt)
Hint: = is not a valid operator. Did you mean == ?

Additional information

none

[Issue]: Extension Kohya ss Additional Networks

Issue Description

Runtime Error:

activating extra network lora with arguments [<modules.extra_networks.ExtraNetworkParams object at 0x000001D01FC65840>]: RuntimeError
┌───────────────────────────────────────── Traceback (most recent call last) ─────────────────────────────────────────┐
│ I:\Super SD 2.0\automatic\modules\extra_networks.py:75 in activate │
│ │
│ 74 │ │ try: │
│ > 75 │ │ │ extra_network.activate(p, extra_network_args) │
│ 76 │ │ except Exception as e: │
│ │
│ I:\Super SD 2.0\automatic\extensions-builtin\Lora\extra_networks_lora.py:23 in activate │
│ │
│ 22 │ │ │
│ > 23 │ │ lora.load_loras(names, multipliers) │
│ 24 │
│ │
│ I:\Super SD 2.0\automatic\extensions-builtin\Lora\lora.py:214 in load_loras │
│ │
│ 213 │ │ │ if lora is None or os.path.getmtime(lora_on_disk.filename) > lora.mtime: │
│ > 214 │ │ │ │ lora = load_lora(name, lora_on_disk.filename) │
│ 215 │
│ │
│ I:\Super SD 2.0\automatic\extensions-builtin\Lora\lora.py:176 in load_lora │
│ │
│ 175 │ │ with torch.no_grad(): │
│ > 176 │ │ │ module.weight.copy_(weight) │
│ 177 │
└─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
RuntimeError: output with shape [128, 320] doesn't match the broadcast shape [128, 320, 128, 320]

The dropdown where the LoRAs are listed is not populated, neither in txt2img nor in the Additional Networks tab.

Platform Description

Win10, Firefox 106

[Bug]: Torch not installed, aborting in WSL

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

What happened?

On WSL Ubuntu for Windows, after successfully installing Torch 2.0.0 compiled with CUDA 11.8 and running ./automatic.sh install, I got this error:
./automatic.sh: line 31: : command not found Torch not installed, aborting
Not sure what the problem is; I made sure to install CUDA 11.8, and Python is already version 3.10.6

Steps to reproduce the problem

  1. Open WSL2 Ubuntu on Windows
  2. Install Python and Torch as instructed
  3. Run the command: ./automatic.sh install

What should have happened?

It should install all requirements and launch

Commit where the problem happens

ee412dd

What platforms do you use to access the UI ?

Windows, Linux

What browsers do you use to access the UI ?

Google Chrome

Command Line Arguments

No

List of extensions

No

Console logs

./automatic.sh: line 31: : command not found
Torch not installed, aborting

Additional information

No response

[Issue]: Error With Tag Autocomplete Extension

Issue Description

Additional Network extension not installed, Only hijack built-in lora
LoCon Extension hijack built-in lora successfully
loading script: tag_autocomplete_helper.py: AttributeError
┌───────────────────────────────────────── Traceback (most recent call last) ─────────────────────────────────────────┐
│ C:\Users\zerok\Downloads\Test\automatic2\modules\scripts.py:255 in load_scripts │
│ │
│ 254 │ │ │ │
│ > 255 │ │ │ script_module = script_loading.load_module(scriptfile.path) │
│ 256 │ │ │ register_scripts_from_module(script_module) │
│ │
│ C:\Users\zerok\Downloads\Test\automatic2\modules\script_loading.py:12 in load_module │
│ │
│ 11 │ module = importlib.util.module_from_spec(module_spec) │
│ > 12 │ module_spec.loader.exec_module(module) │
│ 13 │
│ │
│ ... 1 frames hidden ... │
│ in _call_with_frames_removed:241 │
│ │
│ C:\Users\zerok\Downloads\Test\automatic2\extensions\a1111-sd-webui-tagcomplete\scripts\tag_autocomplete_helper.py:2 │
│ 7 in │
│ │
│ 26 WILDCARD_PATH = FILE_DIR.joinpath('scripts/wildcards') │
│ > 27 EMB_PATH = Path(shared.cmd_opts.embeddings_dir) │
│ 28 HYP_PATH = Path(shared.cmd_opts.hypernetwork_dir) │
└─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
AttributeError: 'Namespace' object has no attribute 'embeddings_dir'
Loading booru2prompt settings
[AddNet] Updating model hashes...
0it [00:00, ?it/s]
[AddNet] Updating model hashes...
0it [00:00, ?it/s]
[VRAMEstimator] No stats available, run benchmark first
Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch().
Creating model from config: C:\Users\zerok\Downloads\Test\automatic2\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.

Model loaded in 47.9s (load=3.0s create=0.7s apply=30.8s vae=1.5s move=1.2s hijack=9.8s embeddings=1.0s)
00:23:15-413464 INFO Startup time: 186.0s (torch=7.7s libraries=2.9s models=0.2s codeformer=0.3s gfpgan=0.1s
scripts=54.7s ui=71.2s gradio=0.8s scripts app_started_callback=0.1s checkpoint=48.1s)
API error: GET: http://127.0.0.1:7860/file=tmp/tagAutocompletePath.txt?1681489614021 {'error': 'RuntimeError', 'detail': '', 'body': '', 'errors': 'File at path C:\Users\zerok\Downloads\Test\automatic2\tmp\tagAutocompletePath.txt does not exist.'}
http api: RuntimeError
┌───────────────────────────────────────── Traceback (most recent call last) ─────────────────────────────────────────┐
│ C:\Users\zerok\Downloads\Test\automatic2\venv\lib\site-packages\anyio\streams\memory.py:94 in receive │
│ │
│ C:\Users\zerok\Downloads\Test\automatic2\venv\lib\site-packages\anyio\streams\memory.py:89 in receive_nowait │
└─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
WouldBlock

During handling of the above exception, another exception occurred:

┌───────────────────────────────────────── Traceback (most recent call last) ─────────────────────────────────────────┐
│ C:\Users\zerok\Downloads\Test\automatic2\venv\lib\site-packages\starlette\middleware\base.py:78 in call_next │
│ │
│ C:\Users\zerok\Downloads\Test\automatic2\venv\lib\site-packages\anyio\streams\memory.py:114 in receive │
└─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
EndOfStream

During handling of the above exception, another exception occurred:

┌───────────────────────────────────────── Traceback (most recent call last) ─────────────────────────────────────────┐
│ C:\Users\zerok\Downloads\Test\automatic2\modules\api\api.py:132 in exception_handling │
│ │
│ 131 │ │ try: │
│ > 132 │ │ │ return await call_next(request) │
│ 133 │ │ except Exception as e: │
│ │
│ C:\Users\zerok\Downloads\Test\automatic2\venv\lib\site-packages\starlette\middleware\base.py:84 in call_next │
│ │
│ ... 13 frames hidden ... │
│ │
│ C:\Users\zerok\Downloads\Test\automatic2\venv\lib\site-packages\starlette\routing.py:69 in app │
│ │
│ C:\Users\zerok\Downloads\Test\automatic2\venv\lib\site-packages\starlette\responses.py:338 in call
└─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
RuntimeError: File at path C:\Users\zerok\Downloads\Test\automatic2\tmp\tagAutocompletePath.txt does not exist.

As the title says: when starting with the extension, it will always produce this error and the extension won't work in the WebUI.

Platform Description

Windows 10
Ryzen 7 2700
RTX 3060 Ti

[Bug]: UI not loading properly

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

What happened?

After installing and setting up the venv, the UI just fails to load properly. This occurs in Chrome, Firefox, and Brave.
automaticWebUI

Steps to reproduce the problem

I just installed it, launched it, and accessed it through the browser.

What should have happened?

The UI should appear as it does in the screenshots. Graphical elements should load properly.

Commit where the problem happens

ffc54d0

What platforms do you use to access the UI ?

Windows

What browsers do you use to access the UI ?

Mozilla Firefox, Google Chrome

Command Line Arguments

No

List of extensions

No

Console logs

venv "D:\automatic\venv\Scripts\Python.exe"
Installing requirements for Web UI
Launching Web UI with arguments:
No module 'xformers'. Proceeding without it.
Torch 2.0.0+cu118 CUDA 11.8 cuDNN 8700
GPU NVIDIA GeForce RTX 3070 Ti VRAM 8192 Arch (8, 6) Cores 48
Available models: 1
Image Browser: send2trash is not installed. recycle bin cannot be used.
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Checkpoint sd-v15-runwayml.ckpt  not found; loading fallback v1-5-pruned-emaonly.safetensors [6ce0161689]
Loading weights: D:\automatic\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors -------------- 0.0/4.3 GB -:--:--
Creating model from config: D:\automatic\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Couldn't find VAE named vae-ft-mse-840000-ema-pruned.ckpt; using None instead
Applying scaled dot product cross attention optimization.
Textual inversion embeddings loaded(0):
Model loaded in 3.1s (load=0.1s create=0.3s apply=0.7s vae=0.5s move=0.7s embeddings=0.8s)
Startup time: 35.8s (torch=3.7s libraries=2.9s codeformer=0.1s scripts=24.0s ui=1.6s gradio=0.3s checkpoint=3.4s)

Additional information

I am able to generate images just fine, only the UI seems to be messed up.

[Bug]: Web Styling is Broken (CSS?)

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

What happened?

The WebUI styling is very broken.

Steps to reproduce the problem

Go to the UI.

What should have happened?

A normal UI should be shown.

Commit where the problem happens

99142d6

What platforms do you use to access the UI ?

MacOS

What browsers do you use to access the UI ?

Google Chrome, Apple Safari

Command Line Arguments

./automatic.sh

List of extensions

The defaults

Console logs

$ ./automatic.sh
SD server: optimized
Version: 99142d64 Sun Feb 26 11:05:53 2023 -0500
Repository: https://github.com/vladmandic/automatic
Last Merge: Sun Feb 26 10:35:20 2023 -0500 Merge pull request #38 from AUTOMATIC1111/master
System
- Platform: Ubuntu 22.10 5.19.0-31-generic x86_64
- nVIDIA: NVIDIA GeForce RTX 4090, 525.60.13
- Python: 3.10.7 Torch: 2.0.0.dev20230226+cu118 CUDA: 11.8 cuDNN: 8700 GPU: NVIDIA GeForce RTX 4090 Arch: (8, 9)
The following values were not passed to `accelerate launch` and had defaults used instead:
	`--num_processes` was set to a value of `1`
	`--num_machines` was set to a value of `1`
	`--mixed_precision` was set to a value of `'no'`
	`--dynamo_backend` was set to a value of `'no'`
To avoid this warning pass in values for each of the problematic parameters or run `accelerate config`.
Launching Web UI with arguments: --api --xformers --disable-console-progressbars --gradio-queue --skip-version-check --disable-nan-check --disable-safe-unpickle --cors-allow-origins=http://127.0.0.1:7860
WARNING:root:Pytorch pre-release version 2.0.0.dev20230226+cu118 - assuming intent to test it
Image Browser: send2trash is not installed. recycle bin cannot be used.
Checkpoint sd-v15-runwayml.ckpt [cc6cb27103] not found; loading fallback v1-5-pruned-emaonly.safetensors [6ce0161689]
Loading weights [6ce0161689] from /bits/automatic/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors
Creating model from config: /bits/automatic/configs/v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Couldn't find VAE named vae-ft-mse-840000-ema-pruned.ckpt; using None instead
Applying xformers cross attention optimization.
Textual inversion embeddings loaded(0):
Model loaded in 1.4s (create model: 0.2s, apply weights to model: 0.3s, apply half: 0.2s, load VAE: 0.5s, move model to device: 0.1s).
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.


Additional information

(screenshot: CleanShot 2023-02-26 at 19 32 08@2x)

[Bug]: Unable to install, unable to launch

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

What happened?

PS J:\automatic> ./automatic.sh install
PS J:\automatic>
PS J:\automatic>
[main 2023-04-10T09:53:27.360Z] [SharedProcess] using utility process
[main 2023-04-10T09:53:27.590Z] update#setState idle
[main 2023-04-10T09:53:29.077Z] [UtilityProcess id: 1, type: extensionHost, pid: <none>]: creating new...
[main 2023-04-10T09:53:29.097Z] [UtilityProcess id: 1, type: extensionHost, pid: 2324]: successfully created
[main 2023-04-10T09:53:29.160Z] [UtilityProcess type: shared-process, pid: <none>]: creating new...
[main 2023-04-10T09:53:29.164Z] [UtilityProcess id: 1, type: fileWatcher, pid: <none>]: creating new...
[main 2023-04-10T09:53:29.178Z] [UtilityProcess type: shared-process, pid: 17428]: successfully created
[main 2023-04-10T09:53:29.198Z] [UtilityProcess id: 1, type: fileWatcher, pid: 1224]: successfully created
[17152:0410/115330.154:ERROR:jump_list.cc(301)] Failed to append custom category 'Recent Folders' to Jump List due to system privacy settings.
[main 2023-04-10T09:53:30.176Z] updateWindowsJumpList#setJumpList unexpected result: customCategoryAccessDeniedError
[main 2023-04-10T09:53:57.600Z] update#setState checking for updates
[main 2023-04-10T09:53:57.646Z] update#setState idle

Stays there...

Steps to reproduce the problem

git clone https://github.com/vladmandic/automatic
cd automatic
./automatic.sh install

(cmd and power shell, both with admin privileges)

What should have happened?

automatic installation

Commit where the problem happens

22bcc7b

What platforms do you use to access the UI ?

Windows

What browsers do you use to access the UI ?

No response

Command Line Arguments

git clone https://github.com/vladmandic/automatic
cd automatic
./automatic.sh install


List of extensions

none

Console logs

PS J:\automatic> ./automatic.sh install
PS J:\automatic>
PS J:\automatic>
[main 2023-04-10T09:53:27.360Z] [SharedProcess] using utility process
[main 2023-04-10T09:53:27.590Z] update#setState idle
[main 2023-04-10T09:53:29.077Z] [UtilityProcess id: 1, type: extensionHost, pid: <none>]: creating new...
[main 2023-04-10T09:53:29.097Z] [UtilityProcess id: 1, type: extensionHost, pid: 2324]: successfully created
[main 2023-04-10T09:53:29.160Z] [UtilityProcess type: shared-process, pid: <none>]: creating new...
[main 2023-04-10T09:53:29.164Z] [UtilityProcess id: 1, type: fileWatcher, pid: <none>]: creating new...
[main 2023-04-10T09:53:29.178Z] [UtilityProcess type: shared-process, pid: 17428]: successfully created
[main 2023-04-10T09:53:29.198Z] [UtilityProcess id: 1, type: fileWatcher, pid: 1224]: successfully created
[17152:0410/115330.154:ERROR:jump_list.cc(301)] Failed to append custom category 'Recent Folders' to Jump List due to system privacy settings.
[main 2023-04-10T09:53:30.176Z] updateWindowsJumpList#setJumpList unexpected result: customCategoryAccessDeniedError
[main 2023-04-10T09:53:57.600Z] update#setState checking for updates
[main 2023-04-10T09:53:57.646Z] update#setState idle



Additional information

It also opens Visual Studio Code  :S 

[Issue]: No setting for Lora location in system paths

Issue Description

Under system paths in settings tab you can set the location of almost every resource (a fantastic change btw), however there is no setting here for Lora locations, so only the default path is available.

Platform Description

Linux, Chrome

[Issue]: I want this to work so badly but...

Issue Description

I just cannot get this to work. I played with it a bunch before today's update for which I started completely fresh, and it's just one thing after another.

My webUI-user.bat that I customized for auto's def doesn't work, tried a base webui.bat, tried manually activating venv and launching. :| errors everywhere. I probably did something stupid, probably my fault, grrr.

image
image

Platform Description

Windows 10, 3090 TI, 64gb RAM, 2tb SSD

Just snapped this from Auto.
image

[Feature]: Support external CSS skins without conflicts

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

What happened?

After an image is generated, the Stop and Skip buttons are not hidden so no new generation is possible.

Steps to reproduce the problem

  1. Generate an image

What should have happened?

The Stop and Skip buttons should be hidden, revealing the Generate button.

Commit where the problem happens

AUTOMATIC1111/stable-diffusion-webui@ffc54d0

What platforms do you use to access the UI ?

Windows

What browsers do you use to access the UI ?

Mozilla Firefox

Command Line Arguments

--disable-sdp-attention --xformers --api --administrator --opt-split-attention --theme=dark

List of extensions

extensions_automatic

Console logs

none

Additional information

No response

[Issue]: ModuleNotFoundError: No module named 'rich'

Issue Description

After starting the webui.sh a ModuleNotFoundError occurs.

~$ cd automatic/
~/automatic$ ./webui.sh

################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye)
################################################################

################################################################
Running on xxxxxx user
################################################################

################################################################
Repo already cloned, using it as install directory
################################################################

################################################################
Create and activate python venv
################################################################

################################################################
Launching launch.py...
################################################################
Traceback (most recent call last):
  File "/home/tom/automatic/launch.py", line 12, in <module>
    from rich import print
ModuleNotFoundError: No module named 'rich'

Platform Description

I am using Ubuntu 22.04.2 LTS
Processor AMD 5800x
Memory 32 GB
Graphics AMD rx 6700 xt / ROCm v5.4.3 is installed

System is working with automatic1111

[Issue]: Generate button doesn't reactivate

Issue Description

After generating an image the Generate button doesn't reactivate. Stop and Skip still show, but don't respond.

2023-04-13_23-21-17.mp4

No errors on console. Javascript console has:
image

Platform Description

Windows 10, Chrome

[Feature Request]: Upscaler 2

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

What would your feature do ?

I noticed that automatic's upscaler lacks the 2nd upscaler; could this be implemented? I would appreciate it. Blending upscalers sometimes produces better results for specific tasks.

Example
chrome_bBCkusW60T

Currently
chrome_s5igXqZPpa

Proposed workflow

As it is in A1111. Just another option under scale 1, with the visibility bar

Additional information

No response

[Feature]: Support multiple model locations

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

What would your feature do ?

Load extra models from another path.
I think we don't want to copy more models given limited disk space.

Even extensions...

Proposed workflow

Load extra models from another path

Additional information

No response

[Issue]: Launching server on Windows 11 returns an error

Issue Description

Greetings,

I just tried to launch the server via webui.bat on Windows 11 after seeing the last commit and, sadly, it doesn't seem to launch. The version from the main repo is working as intended.

I've done a git clone on a new folder to start clean and just tried to launch webui.bat and here is the log I'm getting:

Creating venv in directory G:\StableDiffusion\WebUI2\venv using python "C:\Users[redacted]\AppData\Local\Programs\Python\Python310\python.exe"
venv "G:\StableDiffusion\WebUI2\venv\Scripts\Python.exe"
Python 3.10.6 on Windows
Running setup
Installing requirements
Traceback (most recent call last):
  File "G:\StableDiffusion\WebUI2\launch.py", line 87, in <module>
    setup.run_setup(False)
  File "G:\StableDiffusion\WebUI2\setup.py", line 334, in run_setup
    install_requirements()
  File "G:\StableDiffusion\WebUI2\setup.py", line 273, in install_requirements
    install(line)
  File "G:\StableDiffusion\WebUI2\setup.py", line 86, in install
    if not installed(package):
  File "G:\StableDiffusion\WebUI2\setup.py", line 59, in installed
    version = pkg_resources.get_distribution(p[0]).version
  File "G:\StableDiffusion\WebUI2\venv\lib\site-packages\pkg_resources\__init__.py", line 478, in get_distribution
    dist = get_provider(dist)
  File "G:\StableDiffusion\WebUI2\venv\lib\site-packages\pkg_resources\__init__.py", line 354, in get_provider
    return working_set.find(moduleOrReq) or require(str(moduleOrReq))[0]
  File "G:\StableDiffusion\WebUI2\venv\lib\site-packages\pkg_resources\__init__.py", line 909, in require
    needed = self.resolve(parse_requirements(requirements))
  File "G:\StableDiffusion\WebUI2\venv\lib\site-packages\pkg_resources\__init__.py", line 795, in resolve
    raise DistributionNotFound(req, requirers)
pkg_resources.DistributionNotFound: The 'addict' distribution was not found and is required by the application
Press any key to continue . . .

It's my very first issue report on GitHub, so if something is missing I will gladly provide more information or do some more tests.

See my specs below.

Platform Description

Windows 11 22H2 build 22621
Python 3.10.6
Git 2.40.0

[Bug]: Sliders in Firefox are broken

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

What happened?

Something is wrong with the CSS in Firefox.

image

Steps to reproduce the problem

  1. Start automatic
  2. Open firefox and chrome
  3. Go to "http://127.0.0.1:7860/"
  4. Check the stylesheet

What should have happened?

The UI should be the same

Commit where the problem happens

a7a2b30

What platforms do you use to access the UI ?

Windows

What browsers do you use to access the UI ?

Mozilla Firefox, Google Chrome

Command Line Arguments

.\webui.bat --skip-torch-cuda-test --autolaunch

List of extensions

No

Console logs

Creating model from config: C:\vlad-automatic-512\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Couldn't find VAE named vae-ft-mse-840000-ema-pruned.ckpt; using None instead
Applying scaled dot product cross attention optimization.
Textual inversion embeddings loaded(0):
Model loaded in 2.2s (load=0.3s create=0.4s apply=0.9s vae=0.6s)
Startup time: 10.8s (torch=4.3s libraries=2.3s codeformer=0.1s scripts=1.0s ui=0.3s gradio=0.2s scripts
app_started_callback=0.1s checkpoint=2.6s)

Additional information

Fresh installation

[Bug]: Could not find a version that satisfies the requirement for triton

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

What happened?

Running webui.bat (Windows) hits an error: Could not find a version that satisfies the requirement triton (from versions: none)

Steps to reproduce the problem

  1. .\webui.bat

What should have happened?

Installation worked

Commit where the problem happens

latest

What platforms do you use to access the UI ?

Windows

What browsers do you use to access the UI ?

Google Chrome

Command Line Arguments

No

List of extensions

No

Console logs

PS D:\StableDiffusion\automatic> .\webui.bat
venv "D:\StableDiffusion\automatic\venv\Scripts\Python.exe"
Installing torch and torchvision
Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/cu118
Collecting torch
  Using cached https://download.pytorch.org/whl/cu118/torch-2.0.0%2Bcu118-cp310-cp310-win_amd64.whl (2611.3 MB)
Collecting torchaudio
  Using cached https://download.pytorch.org/whl/cu118/torchaudio-2.0.1%2Bcu118-cp310-cp310-win_amd64.whl (2.5 MB)
Collecting torchvision
  Using cached https://download.pytorch.org/whl/cu118/torchvision-0.15.1%2Bcu118-cp310-cp310-win_amd64.whl (4.9 MB)
ERROR: Could not find a version that satisfies the requirement triton (from versions: none)
ERROR: No matching distribution found for triton
Traceback (most recent call last):
  File "D:\StableDiffusion\automatic\launch.py", line 335, in <module>
    prepare_environment()
  File "D:\StableDiffusion\automatic\launch.py", line 238, in prepare_environment
    run(f'"{python}" -m {torch_command}', "Installing torch and torchvision", "Couldn't install torch", live=True)
  File "D:\StableDiffusion\automatic\launch.py", line 67, in run
    raise RuntimeError(f"""{errdesc or 'Error running command'}.
RuntimeError: Couldn't install torch.
Command: "D:\StableDiffusion\automatic\venv\Scripts\python.exe" -m pip install torch torchaudio torchvision triton --force --extra-index-url https://download.pytorch.org/whl/cu118
Error code: 1
Press any key to continue . . .

Additional information

I haven't looked too deeply into this yet; it could well be user error. I'm mostly posting in case there's an obvious solution.
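
The root cause is that triton does not publish Windows wheels, so any torch install command that unconditionally includes it fails there with "from versions: none". A hedged sketch of how an installer could build the command per platform (this is an assumption about a fix, not the repo's exact logic):

```python
import subprocess
import sys

# triton has no Windows wheels, so only request it on Linux.
packages = ["torch", "torchaudio", "torchvision"]
if sys.platform == "linux":
    packages.append("triton")

cmd = [sys.executable, "-m", "pip", "install", *packages,
       "--extra-index-url", "https://download.pytorch.org/whl/cu118"]
subprocess.run(cmd, check=True)
```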

[Issue]: An issue with clip-interrogator-ext

Issue Description

I noticed that this extension, now being built in (thanks!), throws errors while trying to generate a prompt based on an image. I deliberately deleted the CLIP model from the BLIP folder to check:

Loading CLIP Interrogator 0.5.4...
Downloading: "https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base_caption_capfilt_large.pth" to X:\AI\automatic\models\BLIP\model_base_caption_capfilt_large.pth

100%|███████████████████████████████████████████████████████████████████████████| 855M/855M [00:20<00:00, 43.7MB/s]
load checkpoint from X:\AI\automatic\models\BLIP\model_base_caption_capfilt_large.pth
Loading CLIP model...
Loaded CLIP model and data in 8.24 seconds.
The size of tensor a (8) must match the size of tensor b (64) at non-singleton dimension 0
The size of tensor a (8) must match the size of tensor b (64) at non-singleton dimension 0

image

The analysis section works flawlessly:
image

I know this extension also has issues with the vanilla WebUI (I couldn't make it work at all after the latest updates).

Platform Description

Windows 10 22H2, RTX 3090

[Bug]: Autolaunch not working on windows

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

What happened?

The web UI did not open.

Steps to reproduce the problem

  1. Place "set COMMANDLINE_ARGS=--autolaunch" in webui-user.bat
  2. Run it

What should have happened?

The web UI should open in the browser.

Commit where the problem happens

ffc54d0

What platforms do you use to access the UI ?

Windows

What browsers do you use to access the UI ?

Microsoft Edge

Command Line Arguments

--autolaunch --disable-sdp-attention --xformers --api --no-half-vae --upcast-sampling --opt-channelslast --medvram  --opt-channelslast --opt-sub-quad-attention

List of extensions

none

Console logs

not needed

Additional information

No response
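
For reference, a minimal sketch of what --autolaunch is expected to do once the server is listening: open the UI in the default browser. The helper below is hypothetical, not the project's actual implementation:

```python
import webbrowser


def maybe_autolaunch(url: str = "http://127.0.0.1:7860", autolaunch: bool = False) -> None:
    # Open the UI in the default browser only when --autolaunch was passed.
    if autolaunch:
        webbrowser.open(url, new=1)
```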

PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'setup.log'

Issue Description

venv "E:\automatic\venv\Scripts\Python.exe"
22:19:58-490185 INFO Python 3.10.6 on Windows
22:19:58-490185 INFO Running setup
22:19:58-537052 INFO Installing requirements
22:19:58-724528 INFO Version: f46fa09 Wed Apr 12 14:55:46 2023 -0400
22:19:59-052647 WARNING Latest available version: 2023-04-12T18:55:46Z
22:19:59-052647 INFO Updating Wiki
22:20:00-302652 INFO Installing packages
22:20:00-333899 ERROR Cannot install xformers package: {e}
22:20:00-333899 INFO Installing repositories
22:20:00-490181 INFO Installing submodules
22:20:01-177673 INFO Updating submodules
22:20:10-882716 INFO Built-in extensions: ['clip-interrogator-ext', 'LDSR', 'Lora', 'prompt-bracket-checker',
'ScuNET', 'sd-dynamic-thresholding', 'sd-extension-aesthetic-scorer',
'sd-extension-steps-animation', 'sd-extension-system-info', 'sd-webui-controlnet',
'sd-webui-model-converter', 'seed_travel', 'stable-diffusion-webui-images-browser',
'stable-diffusion-webui-rembg', 'SwinIR']
22:20:11-554556 ERROR Error running extension installer:
Traceback (most recent call last):
  File "E:\automatic\extensions-builtin\clip-interrogator-ext\install.py", line 1, in <module>
    import launch
  File "E:\automatic\launch.py", line 5, in <module>
    import setup
  File "E:\automatic\setup.py", line 22, in <module>
    os.remove('setup.log')
PermissionError: [WinError 32] The process cannot access the file because it is being used by
another process: 'setup.log'

Platform Description

Windows 11 22H2

This error repeats ~10 times for various directories under "E:\automatic\extensions-builtin\".
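
The crash happens because setup.py removes setup.log at import time, and each extension installer imports launch (and therefore setup) while the parent setup process still holds the log open on Windows. A hedged sketch of a more forgiving removal (remove_stale_log is a hypothetical helper, not the actual fix):

```python
import os


def remove_stale_log(path: str = "setup.log") -> None:
    # On Windows the parent setup process may still hold the log file open
    # when an extension installer re-imports this module, so treat a locked
    # file as non-fatal instead of raising PermissionError.
    try:
        if os.path.exists(path):
            os.remove(path)
    except OSError:
        pass
```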

[Bug]: FileNotFoundError: [Errno 2] No such file or directory: 'T:\systemp\sd_temp_files\tmpe4svot1d'

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

What happened?

Unfortunately, this bug still exists in this fork.
A temp folder defined in the WebUI settings is not created if it does not exist, which causes errors (and even crashes) for some actions and extensions that rely on this folder.
Please fix this ASAP; it has been an annoyance for almost half a year now.

Steps to reproduce the problem

  1. Go to ....
  2. Press ....
  3. ...

What should have happened?

The temp folder specified in the settings should be created if it does not exist.

Commit where the problem happens

AUTOMATIC1111/stable-diffusion-webui@ffc54d0

What platforms do you use to access the UI ?

Windows

What browsers do you use to access the UI ?

Mozilla Firefox

Command Line Arguments

--disable-sdp-attention --xformers --api --administrator --opt-split-attention --theme=dark

List of extensions

extensions_automatic

Console logs

FileNotFoundError: [Errno 2] No such file or directory: 'T:\systemp\sd_temp_files\tmpe4svot1d'

Additional information

No response
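
A minimal sketch of the expected behaviour: create the configured temp folder on demand instead of assuming it exists. get_temp_dir is a hypothetical helper, not the project's actual code:

```python
import os
import tempfile


def get_temp_dir(configured: str = "") -> str:
    # Fall back to the system temp dir when nothing is configured,
    # and create the directory if it does not exist yet.
    path = configured or tempfile.gettempdir()
    os.makedirs(path, exist_ok=True)
    return path
```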

[Issue]: ControlNet error with exception handler

Issue Description

Error running preload() for F:\Programs\automatic\extensions-builtin\sd-webui-controlnet\preload.py
╭───────────────────────────────────────── Traceback (most recent call last) ──────────────────────────────────────────╮
│ F:\Programs\automatic\modules\script_loading.py:28 in preload_extensions │
│ │
│ 27 if hasattr(module, 'preload'): │
│ ❱ 28 module.preload(parser) │
│ 29 │
│ │
│ F:\Programs\automatic\extensions-builtin\sd-webui-controlnet\preload.py:2 in preload │
│ │
│ 1 def preload(parser): │
│ ❱ 2 parser.add_argument("--controlnet-dir", type=str, help="Path to directory with Contr │
│ 3 parser.add_argument("--no-half-controlnet", action='store_true', help="do not switch │
│ │
│ ... 4 frames hidden ... │
│ │
│ C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\argparse.py:1592 in _check_conflict │
│ │
│ 1591 conflict_handler = self._get_handler() │
│ ❱ 1592 conflict_handler(action, confl_optionals) │
│ 1593 │
│ │
│ C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\argparse.py:1601 in _handle_conflict_error │
│ │
│ 1600 in conflicting_actions]) │
│ ❱ 1601 raise ArgumentError(action, message % conflict_string) │
│ 1602 │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
ArgumentError: argument --controlnet-dir: conflicting option string: --controlnet-dir

During handling of the above exception, another exception occurred:

╭───────────────────────────────────────── Traceback (most recent call last) ──────────────────────────────────────────╮
│ F:\Programs\automatic\launch.py:91 in │
│ │
│ 90 setup.log.info(f"Server Arguments: {sys.argv[1:]}") │
│ ❱ 91 import webui │
│ 92 webui.webui() │
│ │
│ F:\Programs\automatic\webui.py:13 in │
│ │
│ 12 │
│ ❱ 13 from modules import paths, timer, import_hook, errors │
│ 14 │
│ │
│ ... 1 frames hidden ... │
│ │
│ F:\Programs\automatic\modules\shared.py:34 in │
│ │
│ 33 script_loading.preload_extensions(extensions_dir, parser) │
│ ❱ 34 script_loading.preload_extensions(extensions_builtin_dir, parser) │
│ 35 │
│ │
│ F:\Programs\automatic\modules\script_loading.py:32 in preload_extensions │
│ │
│ 31 print(f"Error running preload() for {preload_script}", file=sys.stderr) │
│ ❱ 32 shared.exception() │
│ 33 │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
AttributeError: partially initialized module 'modules.shared' has no attribute 'exception' (most likely due to a
circular import)
Press any key to continue . . .

Platform Description

Using windows 11, RTX 4090.
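
The conflict arises because the same ControlNet extension is loaded from both extensions and extensions-builtin, so its preload() registers --controlnet-dir twice on the same parser. A hedged sketch of a duplicate-safe registration (add_argument_once is hypothetical and peeks at argparse internals; constructing the parser with conflict_handler='resolve' is another option):

```python
import argparse


def add_argument_once(parser: argparse.ArgumentParser, *flags, **kwargs) -> None:
    # Skip registration if an earlier preload() call already added the option,
    # e.g. when the same extension exists in extensions/ and extensions-builtin/.
    # Note: parser._actions is an argparse implementation detail.
    existing = {opt for action in parser._actions for opt in action.option_strings}
    if not any(flag in existing for flag in flags):
        parser.add_argument(*flags, **kwargs)
```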

Windows and Mac Issues - READ FIRST

Issue Description

Currently, the automatic.sh installer and launcher script is Linux-only.
The old webui.bat / webui.sh scripts exist, but they lack the full functionality to install all submodules or even all libraries.

The actual WebUI server does work on Windows/Mac, but it requires advanced, manual installation.

A new launcher/installer is on the way and I'll post an update here as soon as possible.
In the meantime, please do not create issues for Windows or Mac startup problems.

Update

Major update posted; see #99 for details.

[Issue]: error on start webui.bat with win10

Issue Description

venv "F:\stable-diffusion-webui\venv\Scripts\Python.exe"
Installing requirements for Web UI
Running extension installer: F:\stable-diffusion-webui\extensions\clip-interrogator-ext\install.py
A Installing requirements for CLIP Interrogator

Running extension installer: F:\stable-diffusion-webui\extensions\deforum-for-automatic1111-webui\install.py
Running extension installer: F:\stable-diffusion-webui\extensions\ebsynth_utility\install.py
A current transparent-background 1.2.3

Running extension installer: F:\stable-diffusion-webui\extensions\PBRemTools\install.py
A Installing None
Installing onnxruntime-gpu...
Installing None
Installing opencv-python...
Installing None
Installing Pillow...
Installing None
Installing segmentation-refinement...
Installing None
Installing scikit-learn...

Running extension installer: F:\stable-diffusion-webui\extensions\sd-webui-3d-open-pose-editor\install.py
Running extension installer: F:\stable-diffusion-webui\extensions\sd-webui-controlnet\install.py
Running extension installer: F:\stable-diffusion-webui\extensions\sd-webui-modelscope-text2video\install.py
Running extension installer: F:\stable-diffusion-webui\extensions\sd-webui-nulienhance\install.py
A Installed.

Running extension installer: F:\stable-diffusion-webui\extensions\sd-webui-segment-anything\install.py
Running extension installer: F:\stable-diffusion-webui\extensions\sd_dreambooth_extension\install.py
A Stop Motion CN - Running Preload
Set Gradio Queue: True
If submitting an issue on github, please provide the full startup log for debugging purposes.

Initializing Dreambooth
Dreambooth revision: 926ae204ef5de17efca2059c334b6098492a0641
Successfully installed fastapi-0.94.1 google-auth-oauthlib-0.4.6 tensorboard-2.12.0 transformers-4.26.1
[+] xformers version 0.0.17.dev464 installed.
[+] torch version 2.0.0+cu118 installed.
[+] torchvision version 0.15.1+cu118 installed.
[+] accelerate version 0.18.0 installed.
[+] diffusers version 0.14.0 installed.
[+] transformers version 4.26.1 installed.
[+] bitsandbytes version 0.35.4 installed.

Running extension installer: F:\stable-diffusion-webui\extensions\stable-diffusion-webui-blip2-captioner\install.py
Running extension installer: F:\stable-diffusion-webui\extensions\stable-diffusion-webui-rembg\install.py
Running extension installer: F:\stable-diffusion-webui\extensions\unprompted\install.py
A Installing requirements for Unprompted - img2pez
Installing requirements for Unprompted - pix2pix_zero

Running extension installer: F:\stable-diffusion-webui\extensions\video2video\install.py
A Installing video2video requirement: sk-video

Launching Web UI with arguments:
F:\stable-diffusion-webui\venv\lib\site-packages\pkg_resources\__init__.py:123: PkgResourcesDeprecationWarning: otobuf is an invalid version and will not be supported in a future release
warnings.warn(
Stop Motion CN - Running Preload
Set Gradio Queue: True
Error running preload() for F:\stable-diffusion-webui\extensions-builtin\sd-webui-controlnet\preload.py
┌───────────────────────────────────────── Traceback (most recent call last) ─────────────────────────────────────────┐
│ F:\stable-diffusion-webui\modules\script_loading.py:28 in preload_extensions │
│ │
│ 27 if hasattr(module, 'preload'): │
│ > 28 module.preload(parser) │
│ 29 │
│ │
│ F:\stable-diffusion-webui\extensions-builtin\sd-webui-controlnet\preload.py:2 in preload │
│ │
│ 1 def preload(parser): │
│ > 2 parser.add_argument("--controlnet-dir", type=str, help="Path to directory with Contr │
│ 3 parser.add_argument("--no-half-controlnet", action='store_true', help="do not switch │
│ │
│ ... 4 frames hidden ... │
│ │
│ C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\argparse.py:1592 in _check_conflict │
│ │
│ 1591 conflict_handler = self._get_handler() │
│ > 1592 conflict_handler(action, confl_optionals) │
│ 1593 │
│ │
│ C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\argparse.py:1601 in _handle_conflict_error │
│ │
│ 1600 in conflicting_actions]) │
│ > 1601 raise ArgumentError(action, message % conflict_string) │
│ 1602 │
└─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
ArgumentError: argument --controlnet-dir: conflicting option string: --controlnet-dir

During handling of the above exception, another exception occurred:

┌───────────────────────────────────────── Traceback (most recent call last) ─────────────────────────────────────────┐
│ F:\stable-diffusion-webui\launch.py:311 in │
│ │
│ 310 prepare_environment() │
│ > 311 start() │
│ 312 │
│ │
│ F:\stable-diffusion-webui\launch.py:302 in start │
│ │
│ 301 print(f"Launching {'API server' if '--nowebui' in sys.argv else 'Web UI'} with argum │
│ > 302 import webui │
│ 303 if '--nowebui' in sys.argv: │
│ │
│ ... 2 frames hidden ... │
│ │
│ F:\stable-diffusion-webui\modules\shared.py:34 in │
│ │
│ 33 script_loading.preload_extensions(extensions_dir, parser) │
│ > 34 script_loading.preload_extensions(extensions_builtin_dir, parser) │
│ 35 │
│ │
│ F:\stable-diffusion-webui\modules\script_loading.py:32 in preload_extensions │
│ │
│ 31 print(f"Error running preload() for {preload_script}", file=sys.stderr) │
│ > 32 shared.exception() │
│ 33 │
└─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
AttributeError: partially initialized module 'modules.shared' has no attribute 'exception' (most likely due to a
circular import)
Press any key to continue . . .

Platform Description

Please fill this form with as much information as possible

[Bug]: ModuleNotFoundError: No module named 'Kornia'

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

What happened?

I suppose I need to go to venv\Scripts, open cmd, run .\activate, and then pip install kornia.

Steps to reproduce the problem

Install problem: it works only with the nightly version of torch.
After the install, start webui.bat.

What should have happened?

ModuleNotFoundError: No module named 'Kornia'

Commit where the problem happens

ffc54d0

What platforms do you use to access the UI ?

Windows

What browsers do you use to access the UI ?

Google Chrome

Command Line Arguments

No

List of extensions

No

Console logs

venv "D:\0-AI\Imagerie\automatic\venv\Scripts\Python.exe"
Installing torch and torchvision
Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/cu118
Collecting torch
  Using cached https://download.pytorch.org/whl/cu118/torch-2.0.0%2Bcu118-cp310-cp310-win_amd64.whl (2611.3 MB)
Collecting torchaudio
  Using cached https://download.pytorch.org/whl/cu118/torchaudio-2.0.1%2Bcu118-cp310-cp310-win_amd64.whl (2.5 MB)
Collecting torchvision
  Using cached https://download.pytorch.org/whl/cu118/torchvision-0.15.1%2Bcu118-cp310-cp310-win_amd64.whl (4.9 MB)
Collecting typing-extensions
  Using cached typing_extensions-4.5.0-py3-none-any.whl (27 kB)
Collecting filelock
  Using cached filelock-3.11.0-py3-none-any.whl (10.0 kB)
Collecting networkx
  Using cached networkx-3.1-py3-none-any.whl (2.1 MB)
Collecting sympy
  Using cached https://download.pytorch.org/whl/sympy-1.11.1-py3-none-any.whl (6.5 MB)
Collecting jinja2
  Downloading https://download.pytorch.org/whl/Jinja2-3.1.2-py3-none-any.whl (133 kB)
     ---------------------------------------- 133.1/133.1 kB 342.2 kB/s eta 0:00:00
Collecting requests
  Using cached requests-2.28.2-py3-none-any.whl (62 kB)
Collecting numpy
  Using cached numpy-1.24.2-cp310-cp310-win_amd64.whl (14.8 MB)
Collecting pillow!=8.3.*,>=5.3.0
  Using cached Pillow-9.5.0-cp310-cp310-win_amd64.whl (2.5 MB)
Collecting MarkupSafe>=2.0
  Downloading https://download.pytorch.org/whl/MarkupSafe-2.1.2-cp310-cp310-win_amd64.whl (16 kB)
Collecting idna<4,>=2.5
  Using cached https://download.pytorch.org/whl/idna-3.4-py3-none-any.whl (61 kB)
Collecting certifi>=2017.4.17
  Using cached https://download.pytorch.org/whl/certifi-2022.12.7-py3-none-any.whl (155 kB)
Collecting charset-normalizer<4,>=2
  Using cached charset_normalizer-3.1.0-cp310-cp310-win_amd64.whl (97 kB)
Collecting urllib3<1.27,>=1.21.1
  Using cached urllib3-1.26.15-py2.py3-none-any.whl (140 kB)
Collecting mpmath>=0.19
  Using cached mpmath-1.3.0-py3-none-any.whl (536 kB)
Installing collected packages: mpmath, urllib3, typing-extensions, sympy, pillow, numpy, networkx, MarkupSafe, idna, filelock, charset-normalizer, certifi, requests, jinja2, torch, torchvision, torchaudio
Successfully installed MarkupSafe-2.1.2 certifi-2022.12.7 charset-normalizer-3.1.0 filelock-3.11.0 idna-3.4 jinja2-3.1.2 mpmath-1.3.0 networkx-3.1 numpy-1.24.2 pillow-9.5.0 requests-2.28.2 sympy-1.11.1 torch-2.0.0+cu118 torchaudio-2.0.1+cu118 torchvision-0.15.1+cu118 typing-extensions-4.5.0 urllib3-1.26.15

[notice] A new release of pip available: 22.3.1 -> 23.0.1
[notice] To update, run: D:\0-AI\Imagerie\automatic\venv\Scripts\python.exe -m pip install --upgrade pip
Installing gfpgan
Installing clip
Installing open_clip
Installing requirements for CodeFormer
Installing requirements for Web UI
Launching Web UI with arguments:
┌─────────────────────────────── Traceback (most recent call last) ────────────────────────────────┐
│ D:\0-AI\Imagerie\automatic\launch.py:311 in <module>                                             │
│                                                                                                  │
│   308                                                                                            │
│   309 if __name__ == "__main__":                                                                 │
│   310 │   prepare_environment()                                                                  │
│ > 311 │   start()                                                                                │
│   312                                                                                            │
│                                                                                                  │
│ D:\0-AI\Imagerie\automatic\launch.py:302 in start                                                │
│                                                                                                  │
│   299                                                                                            │
│   300 def start():                                                                               │
│   301 │   print(f"Launching {'API server' if '--nowebui' in sys.argv else 'Web UI'} with argum   │
│ > 302 │   import webui                                                                           │
│   303 │   if '--nowebui' in sys.argv:                                                            │
│   304 │   │   webui.api_only()                                                                   │
│   305 │   else:                                                                                  │
│                                                                                                  │
│ D:\0-AI\Imagerie\automatic\webui.py:29 in <module>                                               │
│                                                                                                  │
│    26 startup_timer.record("torch")                                                              │
│    27                                                                                            │
│    28 import gradio                                                                              │
│ >  29 import ldm.modules.encoders.modules                                                        │
│    30                                                                                            │
│    31 from modules import extra_networks, ui_extra_networks_checkpoints                          │
│    32 from modules import extra_networks_hypernet, ui_extra_networks_hypernets, ui_extra_netwo   │
│                                                                                                  │
│ D:\0-AI\Imagerie\automatic\repositories\stable-diffusion-stability-ai\ldm\modules\encoders\modul │
│ es.py:3 in <module>                                                                              │
│                                                                                                  │
│     1 import torch                                                                               │
│     2 import torch.nn as nn                                                                      │
│ >   3 import kornia                                                                              │
│     4 from torch.utils.checkpoint import checkpoint                                              │
│     5                                                                                            │
│     6 from transformers import T5Tokenizer, T5EncoderModel, CLIPTokenizer, CLIPTextModel         │
└──────────────────────────────────────────────────────────────────────────────────────────────────┘
ModuleNotFoundError: No module named 'kornia'
Press any key to continue...

Additional information

No response

[Issue]: VAE file isn't recognized

Issue Description

I am using the --vae-dir and --ckpt-dir flags pointing to my A1111 folders. Both of them contain the appropriately named file 'vae-ft-mse-840000-ema-pruned.ckpt'. However, when I run the bat file I still receive the following message:
Couldn't find VAE named vae-ft-mse-840000-ema-pruned.ckpt; using None instead.

Platform Description

Using Windows 10.

[Bug]: Process (upscale) endless loop on windows

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

What happened?

Using upscale causes an endless loop.

Steps to reproduce the problem

  1. Go to the "Process" tab and load any generated image
  2. Select any upscaler, e.g. "4x-UltraSharp", set the ratio to 1.6 and press Generate
  3. Processing never ends and the image is not saved; you have to stop the terminal

What should have happened?

Image gets upscaled and saved to disk

Commit where the problem happens

ffc54d0

What platforms do you use to access the UI ?

Windows

What browsers do you use to access the UI ?

Microsoft Edge

Command Line Arguments

--autolaunch --disable-sdp-attention --xformers --api --no-half-vae --upcast-sampling --opt-channelslast --medvram  --opt-channelslast --opt-sub-quad-attention

List of extensions

none

Console logs

100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 27/27 [00:26<00:00,  1.01it/s]

Additional information

No response

[Issue]: SDP does not work with Tiled VAE

Issue Description

…then, when I try to generate something with (for example) foolhardy's upscaler (with a small tile size like 256 rather than 3072, so Tiled VAE kicks in), I get this error:

100%|██████████████████████████████████████████████████████████████████████████████| 30/30 [00:04<00:00,  6.51it/s]
[Tiled VAE]: the input size is tiny and unnecessary to tile.
[Tiled VAE]: input_size: torch.Size([1, 3, 1472, 1030]), tile_size: 256, padding: 32
[Tiled VAE]: split to 6x4 = 24 tiles. Optimal tile size 256x256, original tile size 256x256
[Tiled VAE]: Fast mode enabled, estimating group norm parameters on 179 x 256 image
gradio call: NameError
╭─────────────────────────────────────── Traceback (most recent call last) ────────────────────────────────────────╮
│ X:\AI\automatic\modules\call_queue.py:57 in f                                                                    │
│                                                                                                                  │
│    56 │   │   │   │   pr.enable()                                                                                │
│ ❱  57 │   │   │   res = list(func(*args, **kwargs))                                                              │
│    58 │   │   │   if shared.cmd_opts.profile:                                                                    │
│                                                                                                                  │
│ X:\AI\automatic\modules\call_queue.py:36 in f                                                                    │
│                                                                                                                  │
│    35 │   │   │   try:                                                                                           │
│ ❱  36 │   │   │   │   res = func(*args, **kwargs)                                                                │
│    37 │   │   │   finally:                                                                                       │
│                                                                                                                  │
│                                             ... 17 frames hidden ...                                             │
│                                                                                                                  │
│ X:\AI\automatic\extensions\multidiffusion-upscaler-for-automatic1111\scripts\vae_optimize.py:196 in <lambda>     │
│                                                                                                                  │
│   195 │   │   task_queue.append(                                                                                 │
│ ❱ 196 │   │   │   ('attn', lambda x, net=net: xformer_attn_forward(net, x)))                                     │
│   197 │   │   task_queue.append(['add_res', None])                                                               │
│                                                                                                                  │
│ X:\AI\automatic\extensions\multidiffusion-upscaler-for-automatic1111\scripts\vae_optimize.py:172 in              │
│ xformer_attn_forward                                                                                             │
│                                                                                                                  │
│   171 │   )                                                                                                      │
│ ❱ 172 │   out = xformers.ops.memory_efficient_attention(                                                         │
│   173 │   │   q, k, v, attn_bias=None, op=self.attention_op)                                                     │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
NameError: name 'xformers' is not defined
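
The NameError comes from the extension's VAE tiling path calling xformers.ops.memory_efficient_attention even when xformers was never imported (for example when SDP is the selected cross-attention optimization). A hedged sketch of a defensive guard with an SDP fallback; this is an assumption about a fix, not the extension's actual code, and a real integration would also have to transpose tensors (xformers expects (batch, seq, heads, dim) while SDP expects (batch, heads, seq, dim)):

```python
import torch

# Guard the optional dependency once at import time.
try:
    import xformers.ops
    HAS_XFORMERS = True
except ImportError:
    HAS_XFORMERS = False


def memory_efficient_attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    # Prefer xformers when it is installed; otherwise fall back to PyTorch's
    # built-in scaled dot product attention so tiling never hits a NameError.
    if HAS_XFORMERS:
        return xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None)
    return torch.nn.functional.scaled_dot_product_attention(q, k, v)
```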

At this point, I'll have to switch to xformers and restart the UI (because otherwise xformers doesn't kick in), I believe. But then, after doing so and generating an image with xformers, I can switch to SDP, re-load a model (I need to, I think, for the chosen cross-attention-optimization to kick in), and generate an image with the newly chosen SDP AND with Tiled VAE on.

Interesting.

After full UI restart, I have xformers ON:
image

...and the generation of the image will go as planned (with 4x-foolhardy-remacri upscaler hires fix, so VRAM usage is higher to actually "invite" Tiled VAE for some work at 1024x1472 resolution):
image

Platform Description

Windows 10 22H2, RTX 3090

[Issue]: CSS problem: reuse seed button hidden

Issue Description

The reuse seed button (CSS id "txt2img_reuse_seed") somehow stays hidden (display: none). It is set to hidden in user.css but is evidently never shown again via script.

image

Platform Description

Win10, Firefox 106

[Bug]: Crash at startup after updating: JSONDecodeError

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

What happened?

After updating to the latest commit, launching the program dies with a JSONDecodeError:

Steps to reproduce the problem

  1. Run automatic.sh with update and install parameters
  2. Try to launch the UI
  3. Death

What should have happened?

Not death

Commit where the problem happens

7eb805c

What platforms do you use to access the UI ?

Linux

What browsers do you use to access the UI ?

Mozilla Firefox

Command Line Arguments

I'm launching with the defaults.

List of extensions

No; I've disabled some of the built-in ones I don't use, like CLIP Interrogator, Aesthetic Scorer, etc.

Console logs

SD server: optimized
Version: 7eb805ca Mon Mar 20 13:56:22 2023 -0400
Repository: https://github.com/vladmandic/automatic
Last Merge: Tue Mar 14 07:58:58 2023 -0400 Merge pull request #53 from AUTOMATIC1111/master
System
- Platform: Linux Mint 21.1 5.15.0-67-generic x86_64
- nVIDIA: NVIDIA GeForce RTX 3060, 525.85.05
Expecting property name enclosed in double quotes: line 177 column 1 (char 6136)
Launching server with arguments: --cors-allow-origins=http://127.0.0.1:7860
/usr/lib/python3/dist-packages/pkg_resources/__init__.py:116: PkgResourcesDeprecationWarning: 0.1.43ubuntu1 is an invalid version and will not be supported in a future release
  warnings.warn(
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /home/user/automatic/launch.py:306 in <module>                                                   │
│                                                                                                  │
│   303                                                                                            │
│   304 if __name__ == "__main__":                                                                 │
│   305 │   prepare_environment()                                                                  │
│ ❱ 306 │   start()                                                                                │
│   307                                                                                            │
│                                                                                                  │
│ /home/user/automatic/launch.py:297 in start                                                      │
│                                                                                                  │
│   294 │   │   import traceback                                                                   │
│   295 │   │   pass # if rich is not installed do nothing                                         │
│   296 │                                                                                          │
│ ❱ 297 │   import webui                                                                           │
│   298 │   if '--nowebui' in sys.argv:                                                            │
│   299 │   │   webui.api_only()                                                                   │
│   300 │   else:                                                                                  │
│                                                                                                  │
│ /home/user/automatic/webui.py:29 in <module>                                                     │
│                                                                                                  │
│    26 import gradio                                                                              │
│    27 import ldm.modules.encoders.modules                                                        │
│    28                                                                                            │
│ ❱  29 from modules import extra_networks, ui_extra_networks_checkpoints                          │
│    30 from modules import extra_networks_hypernet, ui_extra_networks_hypernets, ui_extra_netwo   │
│    31 from modules.call_queue import wrap_queued_call, queue_lock, wrap_gradio_gpu_call          │
│    32                                                                                            │
│                                                                                                  │
│ /home/user/automatic/modules/ui_extra_networks_checkpoints.py:5 in <module>                      │
│                                                                                                  │
│    2 import json                                                                                 │
│    3 import os                                                                                   │
│    4                                                                                             │
│ ❱  5 from modules import shared, ui_extra_networks, sd_models                                    │
│    6                                                                                             │
│    7                                                                                             │
│    8 class ExtraNetworksPageCheckpoints(ui_extra_networks.ExtraNetworksPage):                    │
│                                                                                                  │
│ /home/user/automatic/modules/shared.py:666 in <module>                                           │
│                                                                                                  │
│   663                                                                                            │
│   664 opts = Options()                                                                           │
│   665 if os.path.exists(config_filename):                                                        │
│ ❱ 666 │   opts.load(config_filename)                                                             │
│   667                                                                                            │
│   668 settings_components = None                                                                 │
│   669 """assinged from ui.py, a mapping on setting anmes to gradio components repsponsible for   │
│                                                                                                  │
│ /home/user/automatic/modules/shared.py:603 in load                                               │
│                                                                                                  │
│   600 │                                                                                          │
│   601 │   def load(self, filename):                                                              │
│   602 │   │   with open(filename, "r", encoding="utf8") as file:                                 │
│ ❱ 603 │   │   │   self.data = json.load(file)                                                    │
│   604 │   │                                                                                      │
│   605 │   │   bad_settings = 0                                                                   │
│   606 │   │   for k, v in self.data.items():                                                     │
│                                                                                                  │
│ /usr/lib/python3.10/json/__init__.py:293 in load                                                 │
│                                                                                                  │
│   290 │   To use a custom ``JSONDecoder`` subclass, specify it with the ``cls``
│   291 │   kwarg; otherwise ``JSONDecoder`` is used.                                              │
│   292 │   """
│ ❱ 293 │   return loads(fp.read(),                                                                │
│   294 │   │   cls=cls, object_hook=object_hook,                                                  │
│   295 │   │   parse_float=parse_float, parse_int=parse_int,                                      │
│   296 │   │   parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)          │
│                                                                                                  │
│ /usr/lib/python3.10/json/__init__.py:346 in loads                                                │
│                                                                                                  │
│   343 │   if (cls is None and object_hook is None and                                            │
│   344 │   │   │   parse_int is None and parse_float is None and                                  │
│   345 │   │   │   parse_constant is None and object_pairs_hook is None and not kw):              │
│ ❱ 346 │   │   return _default_decoder.decode(s)                                                  │
│   347 │   if cls is None:                                                                        │
│   348 │   │   cls = JSONDecoder                                                                  │
│   349 │   if object_hook is not None:                                                            │
│                                                                                                  │
│ /usr/lib/python3.10/json/decoder.py:337 in decode                                                │
│                                                                                                  │
│   334 │   │   containing a JSON document).                                                       │
│   335 │   │                                                                                      │
│   336 │   │   """
│ ❱ 337 │   │   obj, end = self.raw_decode(s, idx=_w(s, 0).end())                                  │
│   338 │   │   end = _w(s, end).end()                                                             │
│   339 │   │   if end != len(s):                                                                  │
│   340 │   │   │   raise JSONDecodeError("Extra data", s, end)                                    │
│                                                                                                  │
│ /usr/lib/python3.10/json/decoder.py:353 in raw_decode                                            │
│                                                                                                  │
│   350 │   │                                                                                      │
│   351 │   │   """
│   352 │   │   try:                                                                               │
│ ❱ 353 │   │   │   obj, end = self.scan_once(s, idx)                                              │
│   354 │   │   except StopIteration as err:                                                       │
│   355 │   │   │   raise JSONDecodeError("Expecting value", s, err.value) from None               │
│   356 │   │   return obj, end                                                                    │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
JSONDecodeError: Expecting property name enclosed in double quotes: line 177 column 1 (char 6136)

Additional information

No response
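
The startup crash is simply json.load failing on a malformed config.json; an "Expecting property name enclosed in double quotes" error typically means a trailing comma. A hedged sketch of a more forgiving loader that reports the problem and falls back to defaults instead of aborting startup (load_settings is a hypothetical helper):

```python
import json
import logging

log = logging.getLogger(__name__)


def load_settings(filename: str) -> dict:
    # Report a malformed settings file (e.g. a trailing comma) and fall back
    # to empty/default settings rather than crashing the whole startup.
    try:
        with open(filename, "r", encoding="utf8") as file:
            return json.load(file)
    except json.JSONDecodeError as err:
        log.error("Settings file %s is not valid JSON: %s", filename, err)
        return {}
```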

[Feature Request]: Add installer output

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

What would your feature do ?

Installing on Fedora 37

I had to restart this installer 3 times before I finally thought to dig into the script and tail the log to figure out why it appeared to hang on "Installing requirements" for 10+ minutes.

I learned it was actually doing all of the requirements installs, but without any output I had no idea what was happening, so I just assumed it was hung. It turned out it was just downloading a large package.

It would be preferable to have console output during the install process, especially in a fresh venv where a lot of large packages need to be downloaded and installed. At the very least, I think a --verbose option would be a good idea.

Proposed workflow

A) Script outputs logs while installing
B) Optional --verbose flag enables A

Additional information

No response
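
One possible shape for this (a sketch under the assumption of a hypothetical pip_install helper, not the project's installer code): stream pip's own output when verbose mode is requested, and stay quiet otherwise.

```python
import subprocess
import sys


def pip_install(package: str, verbose: bool = False) -> None:
    # Stream pip's output when --verbose is requested so long downloads
    # are not mistaken for a hang; stay quiet by default.
    cmd = [sys.executable, "-m", "pip", "install", package]
    if verbose:
        subprocess.run(cmd, check=True)
    else:
        subprocess.run(cmd, check=True, capture_output=True)
```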

[Issue]: I just want to inform something about the User Name problem

Issue Description

WindowsTerminal_9lyyPxyX3x

I use Windows, and I've noticed that weird errors happen to me using SD in general, in both this fork and the main one. My user profile contains a space between names ("Futon Gama"), and the same goes for the user folder, which is the problem: some things cannot read the path. I changed my username to "futongama" and now my problems are over. I was going to open a topic here about the problem, but I decided to test changing the username first to confirm, and I saw that I was right. I don't know if you can "fix" this, but at the very least it would be worth documenting that profiles with a space in the name will cause problems. I don't understand programming, but from what I know Linux handles spaces differently, and maybe that's why this happens; you tell me. I hope there's a way to fix things like this. You can close the topic if you want, since it's already fixed on my side, but I wanted to let you know because I think this could be the problem for many people.

Platform Description

Windows 11, using ExplorerPatcher to resemble the Windows 10 interface.
Firefox Browser

Intel i9 10900
RTX 2080 Super 8 GB
32 GB RAM
5 TB HDD, 1 TB NVMe SSD
Primary monitor: 1080p 240 Hz; secondary monitor: 4K 60 Hz

[Issue]: Issue with updating extensions inside the UI

Issue Description

Gradio sure is a nightmare to manage sometimes. 🙂
This error happens either when I try to check for extension updates or when I try to install a new one. I'm on commit 149329c (the latest at the time of writing).

gradio call: AttributeError
╭─────────────────────────────────────── Traceback (most recent call last) ────────────────────────────────────────╮
│ X:\AI\automatic\modules\call_queue.py:57 in f                                                                    │
│                                                                                                                  │
│    56 │   │   │   │   pr.enable()                                                                                │
│ ❱  57 │   │   │   res = list(func(*args, **kwargs))                                                              │
│    58 │   │   │   if shared.cmd_opts.profile:                                                                    │
│                                                                                                                  │
│ X:\AI\automatic\modules\call_queue.py:36 in f                                                                    │
│                                                                                                                  │
│    35 │   │   │   try:                                                                                           │
│ ❱  36 │   │   │   │   res = func(*args, **kwargs)                                                                │
│    37 │   │   │   finally:                                                                                       │
│                                                                                                                  │
│ X:\AI\automatic\modules\ui_extensions.py:52 in check_updates                                                     │
│                                                                                                                  │
│    51 def check_updates(id_task, disable_list):                                                                  │
│ ❱  52 │   check_access()                                                                                         │
│    53                                                                                                            │
│                                                                                                                  │
│ X:\AI\automatic\modules\ui_extensions.py:20 in check_access                                                      │
│                                                                                                                  │
│    19 def check_access():                                                                                        │
│ ❱  20 │   assert not shared.cmd_opts.disable_extension_access, "extension access disabled beca                   │
│    21                                                                                                            │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
AttributeError: 'Namespace' object has no attribute 'disable_extension_access'
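
The AttributeError means the parsed argument namespace never gained a disable_extension_access flag. A hedged sketch of a defensive check that treats a missing flag as "not disabled" (extension_access_allowed is a hypothetical helper, not the repo's actual fix):

```python
from argparse import Namespace


def extension_access_allowed(cmd_opts: Namespace) -> bool:
    # Treat a namespace that never defined the flag (e.g. an older or
    # partially parsed argument set) as "not disabled" instead of raising
    # AttributeError.
    return not getattr(cmd_opts, "disable_extension_access", False)


assert extension_access_allowed(Namespace()) is True
```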

Platform Description

Windows 10 22H2, RTX 3090, I have WSL, not used though (probably should uninstall it).

[Bug]: New progress bars in console are always zeroed for models loading

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

What happened?

The new progress bars in the console always stay at zero while models are loading. There are no errors, the models load, and generation works fine.

image

Steps to reproduce the problem

  1. Run the UI.
  2. Wait for model load.
    OR
  3. Change the model in drop-down.
  4. Wait for model load.

What should have happened?

Progress bars should reflect the progress.

Commit where the problem happens

426268d

What platforms do you use to access the UI ?

Windows

What browsers do you use to access the UI ?

Google Chrome

Command Line Arguments

--ckpt-dir="E:\.ai\models" --vae-dir="E:\.ai\vae" --autolaunch --opt-channelslast --upcast-sampling --api --gradio-queue --update-all-extensions --theme dark --cors-allow-origins="http://127.0.0.1:7860"

List of extensions

None.

Console logs

Nothing to log.

Additional information

No response

[Issue]: LoRA not working

Issue Description

After loading LoRAs from the 'extra networks' menu I receive an error: RuntimeError: output with shape [128, 320] doesn't match the broadcast shape [128, 320, 128, 320]
The error occurs with different LoRA files, all in .safetensors format, downloaded from Civitai, for both 2.1 and 1.5 models.

Here's an image for more details:
image

Platform Description

Win10 on commit f256fb8b6ace49f201ea351edf9cf835dbb0fd62

[Bug]: ModuleNotFoundError: No module named 'rich'

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

What happened?

webui.bat exits with ModuleNotFoundError: No module named 'rich'

Steps to reproduce the problem

  1. python -m pip install -r .\requirements.txt
  2. run webui.bat

What should have happened?

webui opens

Commit where the problem happens

ffc54d0

What platforms do you use to access the UI ?

Windows

What browsers do you use to access the UI ?

Google Chrome

Command Line Arguments

No

List of extensions

No

Console logs

PS C:\Users\alect\Downloads\automatic> pip show rich
Name: rich
Version: 13.3.3
Summary: Render rich text, tables, progress bars, syntax highlighting, markdown and more to the terminal
Home-page: https://github.com/Textualize/rich
Author: Will McGugan
Author-email: [email protected]
License: MIT
Location: c:\users\alect\appdata\local\programs\python\python310\lib\site-packages
Requires: markdown-it-py, pygments
Required-by:
PS C:\Users\alect\Downloads\automatic> .\webui.bat
venv "C:\Users\alect\Downloads\automatic\venv\Scripts\Python.exe"
Traceback (most recent call last):
  File "C:\Users\alect\Downloads\automatic\launch.py", line 12, in <module>
    from rich import print
ModuleNotFoundError: No module named 'rich'
Press any key to continue . . .

Additional information

No response
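
The pip show output points at the system Python's site-packages, but webui.bat runs launch.py inside the venv, where rich is missing. A hedged sketch of a bootstrap check that installs a dependency into whatever interpreter is actually running (ensure is a hypothetical helper, not the repo's launcher code):

```python
import importlib.util
import subprocess
import sys


def ensure(package: str) -> None:
    # The module must be importable by the interpreter launch.py actually
    # runs under (the venv), not by the system Python where `pip show`
    # happened to find it.
    if importlib.util.find_spec(package) is None:
        subprocess.run([sys.executable, "-m", "pip", "install", package], check=True)


ensure("rich")
from rich import print  # noqa: E402
```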

[Bug]: MAC M1 not working

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

What happened?

Neither install nor run works.

Steps to reproduce the problem

Git clone this repo.
./automatic.sh install

What should have happened?

Install

Commit where the problem happens

ffc54d0

What platforms do you use to access the UI ?

No response

What browsers do you use to access the UI ?

No response

Command Line Arguments

install

List of extensions

none

Console logs

First tells:
zsh: ./automatic.sh: bad interpreter: /bin/env: no such file or directory

after changing the hashbang line to /bin/bash, the script runs, but then I get this error:
Installing general requirements
ERROR: Could not find a version that satisfies the requirement mediapipe (from versions: none)
ERROR: No matching distribution found for mediapipe
Installing versioned requirements
ERROR: Could not find a version that satisfies the requirement tensorflow==2.12.0 (from versions: none)
ERROR: No matching distribution found for tensorflow==2.12.0

Additional information

No response
