
stable-diffusion-webui's Introduction


Stable Diffusion WebUI Docker Image

Run Automatic1111 WebUI in a docker container locally or in the cloud.

Note

These images do not bundle models or third-party configurations. You should use a provisioning script to automatically configure your container. You can find examples in config/provisioning.
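A minimal sketch, assuming Docker CLI usage and the base image's PROVISIONING_SCRIPT variable (the script URL below is a placeholder):

docker run -d --gpus all \
  -p 7860:7860 \
  -e PROVISIONING_SCRIPT="https://example.com/my-provisioning.sh" \
  ghcr.io/ai-dock/stable-diffusion-webui:latest-cuda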

Documentation

All AI-Dock containers share a common base which is designed to make running on cloud services such as vast.ai and runpod.io as straightforward and user friendly as possible.

Common features and options are documented in the base wiki but any additional features unique to this image will be detailed below.

Version Tags

The :latest tag points to :latest-cuda

Tags follow these patterns:

CUDA
  • :cuda-[x.x.x]-[base|runtime]-[ubuntu-version]

  • :latest-cuda → :cuda-11.8.0-base-22.04

ROCm
  • :rocm-[x.x.x]-runtime-[ubuntu-version]

  • :latest-rocm → :rocm-5.7-runtime-22.04

CPU
  • :cpu-ubuntu-[ubuntu-version]

  • :latest-cpu → :cpu-22.04

Browse here for an image suitable for your target environment.
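For example, to pull the default CUDA build (substitute a pinned tag from the patterns above to lock a specific build):

docker pull ghcr.io/ai-dock/stable-diffusion-webui:latest-cuda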

Supported Python versions: 3.10

Supported Platforms: NVIDIA CUDA, AMD ROCm, CPU

Additional Environment Variables

Variable          Description
AUTO_UPDATE       Update A1111 Web UI on startup (default false)
WEBUI_BRANCH      WebUI branch or commit hash for auto-update (default master)
WEBUI_FLAGS       Startup flags, e.g. --no-half
WEBUI_PORT_HOST   Web UI port (default 7860)
WEBUI_URL         Override $DIRECT_ADDRESS:port with a URL for the Web UI
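For example, a hedged docker run sketch combining the variables above (values are illustrative, not recommendations):

docker run -d --gpus all \
  -p 7860:7860 \
  -e AUTO_UPDATE=true \
  -e WEBUI_FLAGS="--no-half" \
  -e WEBUI_PORT_HOST=7860 \
  ghcr.io/ai-dock/stable-diffusion-webui:latest-cuda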

See the base environment variables here for more configuration options.

Additional Micromamba Environments

Environment   Packages
webui         AUTOMATIC1111 WebUI and dependencies

This micromamba environment will be activated on shell login.

See the base micromamba environments here.

Additional Services

The following services will be launched alongside the default services provided by the base image.

Stable Diffusion WebUI

The service will launch on port 7860 unless you have specified an override with WEBUI_PORT_HOST.

When AUTO_UPDATE is enabled, WebUI will be updated to the latest version on container start. You can pin the version to a branch or commit hash by setting the WEBUI_BRANCH variable.

You can set startup flags by using variable WEBUI_FLAGS.

To manage this service you can use supervisorctl [start|stop|restart] webui.

Note

All services are password protected by default. See the security and environment variables documentation for more information.

Pre-Configured Templates

Vast.ai


Runpod.io


The author (@robballantyne) may be compensated if you sign up to services linked in this document. Testing multiple variants of GPU images in many different environments is both costly and time-consuming; this helps to offset the costs.


stable-diffusion-webui's Issues

Can't load newly created textual embedding .pt file

As the title states, whenever I create a new textual inversion embedding the file is created, but when I attempt to load the embedding in my training tab it gives me an error. I have tried disabling extra extensions and that did not solve the issue. I can train/use embeddings just fine if I use --disable-safe-unpickle, but this is a band-aid fix. ChatGPT isn't very helpful on the subject, and I am led to think this is either a setting that is incorrectly set or an underlying bug in the installed version of PyTorch.

I am renting an RTX 4090 from Vast.ai. I am using Python 3.10.13 with PyTorch 2.1.1. My local machine runs the same versions and pickles/unpickles safely without issue.

*** Error verifying pickled file from /opt/stable-diffusion-webui/embeddings/[Redacted]
*** The file may be malicious, so the program is not going to read it.
*** You can skip this check with --disable-safe-unpickle commandline argument.
***
    Traceback (most recent call last):
      File "/opt/stable-diffusion-webui/modules/safe.py", line 137, in load_with_extra
        check_pt(filename, extra_handler)
      File "/opt/stable-diffusion-webui/modules/safe.py", line 84, in check_pt
        check_zip_filenames(filename, z.namelist())
      File "/opt/stable-diffusion-webui/modules/safe.py", line 76, in check_zip_filenames
        raise Exception(f"bad file inside {filename}: {name}")
    Exception: bad file inside /opt/stable-diffusion-webui/embeddings/[Redacted]: [Redacted]/byteorder
---
*** Error loading embedding [Redacted]
    Traceback (most recent call last):
      File "/opt/stable-diffusion-webui/modules/textual_inversion/textual_inversion.py", line 203, in load_from_dir
        self.load_from_file(fullfn, fn)
      File "/opt/stable-diffusion-webui/modules/textual_inversion/textual_inversion.py", line 184, in load_from_file
        embedding = create_embedding_from_data(data, name, filename=filename, filepath=path)
      File "/opt/stable-diffusion-webui/modules/textual_inversion/textual_inversion.py", line 284, in create_embedding_from_data
        if 'string_to_param' in data:  # textual inversion embeddings
    TypeError: argument of type 'NoneType' is not iterable
---

Error running on Vultr? Or what am I doing wrong?

The build goes fine, but when I try to docker-compose up I get the following error. Torch is installed, and nvidia-smi shows that my drivers are good. The Web UI runs for about 5 seconds, shuts down, and loops like this until I stop it. I can reach port 1111 but cannot connect to the Web UI. Any suggestions? Thanks!

RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check

Can't run more than one container on different ports

When I start 3 containers with different ports but one shared workspace, all containers get stuck in a loading state. Apparently the problem is in Syncthing. Should I make different configs for Syncthing, or something else?

Logs:

Applying attention optimization: Doggettx... done.
Model loaded in 5.8s (load weights from disk: 1.0s, create model: 1.4s, apply weights to model: 3.0s, calculate empty prompt: 0.2s).

Waiting for syncthing server...
[start] 2024/04/18 11:51:21 INFO: syncthing v1.27.4 "Gold Grasshopper" (go1.22.0 linux-amd64) [email protected] 2024-02-27 12:05:19 UTC
[start] 2024/04/18 11:51:21 WARNING: Error opening database: resource temporarily unavailable (is another instance of Syncthing running?)
[monitor] 2024/04/18 11:51:21 INFO: Syncthing exited: exit status 1
Waiting for syncthing server...
[start] 2024/04/18 11:51:22 INFO: syncthing v1.27.4 "Gold Grasshopper" (go1.22.0 linux-amd64) [email protected] 2024-02-27 12:05:19 UTC
[start] 2024/04/18 11:51:22 WARNING: Error opening database: resource temporarily unavailable (is another instance of Syncthing running?)
[monitor] 2024/04/18 11:51:22 INFO: Syncthing exited: exit status 1
Waiting for syncthing server...
[start] 2024/04/18 11:51:23 INFO: syncthing v1.27.4 "Gold Grasshopper" (go1.22.0 linux-amd64) [email protected] 2024-02-27 12:05:19 UTC
[start] 2024/04/18 11:51:23 WARNING: Error opening database: resource temporarily unavailable (is another instance of Syncthing running?)
[monitor] 2024/04/18 11:51:23 INFO: Syncthing exited: exit status 1
Waiting for syncthing server...
[start] 2024/04/18 11:51:24 INFO: syncthing v1.27.4 "Gold Grasshopper" (go1.22.0 linux-amd64) [email protected] 2024-02-27 12:05:19 UTC
[start] 2024/04/18 11:51:24 WARNING: Error opening database: resource temporarily unavailable (is another instance of Syncthing running?)
[monitor] 2024/04/18 11:51:24 INFO: Syncthing exited: exit status 1
Waiting for syncthing server...
[monitor] 2024/04/18 11:51:25 WARNING: 4 restarts in 4.144379812s; not retrying further


My docker-compose:

version: "3.8"


x-settings: &common_settings
  platform: linux/amd64
  build:
    context: ./build
    args:
      IMAGE_BASE: ${IMAGE_BASE:-ghcr.io/ai-dock/jupyter-pytorch:2.2.1-py3.10-cuda-11.8.0-runtime-22.04}
    tags:
      - "ghcr.io/ai-dock/stable-diffusion-webui:${IMAGE_TAG:-jupyter-pytorch-2.2.1-py3.10-cuda-11.8.0-runtime-22.04}"
  image: ghcr.io/ai-dock/stable-diffusion-webui:${IMAGE_TAG:-jupyter-pytorch-2.2.1-py3.10-cuda-11.8.0-runtime-22.04}
  devices:
    - "/dev/dri:/dev/dri"
  volumes:
    - ./workspace:${WORKSPACE:-/workspace/}:rshared
    - ./config/authorized_keys:/root/.ssh/authorized_keys_mount
    - ./config/provisioning/default.sh:/opt/ai-dock/bin/provisioning.sh
  restart: always
  environment:
    - DIRECT_ADDRESS=${DIRECT_ADDRESS:-127.0.0.1}
    - DIRECT_ADDRESS_GET_WAN=${DIRECT_ADDRESS_GET_WAN:-false}
    - WORKSPACE=${WORKSPACE:-/workspace}
    - WORKSPACE_SYNC=${WORKSPACE_SYNC:-false}
    - CF_TUNNEL_TOKEN=${CF_TUNNEL_TOKEN:-}
    - CF_QUICK_TUNNELS=${CF_QUICK_TUNNELS:-true}
    - WEB_ENABLE_AUTH=${WEB_ENABLE_AUTH:-false}
    - WEB_USER=${WEB_USER:-user}
    - WEB_PASSWORD=${WEB_PASSWORD:-password}
    - SSH_PORT_HOST=${SSH_PORT_HOST:-2222}
    - SSH_PORT_LOCAL=${SSH_PORT_LOCAL:-22}
    - SERVICEPORTAL_PORT_HOST=${SERVICEPORTAL_PORT_HOST:-1111}
    - SERVICEPORTAL_METRICS_PORT=${SERVICEPORTAL_METRICS_PORT:-21111}
    - SERVICEPORTAL_URL=${SERVICEPORTAL_URL:-}
    - WEBUI_BRANCH=${WEBUI_BRANCH:-}
    - WEBUI_FLAGS=${WEBUI_FLAGS:-}
    - WEBUI_PORT_HOST=${WEBUI_PORT_HOST:-9090}
    - WEBUI_PORT_LOCAL=${WEBUI_PORT_LOCAL:-17860}
    - WEBUI_METRICS_PORT=${WEBUI_METRICS_PORT:-27860}
    - WEBUI_URL=${WEBUI_URL:-}
    - JUPYTER_PORT_HOST=${JUPYTER_PORT_HOST:-8888}
    - JUPYTER_METRICS_PORT=${JUPYTER_METRICS_PORT:-28888}
    - JUPYTER_URL=${JUPYTER_URL:-}
    - SERVERLESS=${SERVERLESS:-false}
    - SYNCTHING_UI_PORT_HOST=${SYNCTHING_UI_PORT_HOST:-8384}
    - SYNCTHING_TRANSPORT_PORT_HOST=${SYNCTHING_TRANSPORT_PORT_HOST:-22999}
    - SYNCTHING_URL=${SYNCTHING_URL:-}

  
services:
  workspace1:
    <<: *common_settings
    ports:
      - "2222:22" # SSH
      - "1111:1111" # Service portal
      - "9090:9090" # WEBUI web interface
      - "8888:8888" # Jupyter server
      - "8384:8384" # Syncthing UI
      - "22999:22999" # Syncthing Transport

  workspace2:
    <<: *common_settings
    ports:
      - "2223:22" # SSH
      - "1112:1112" # Service portal
      - "9091:9090" # WEBUI web interface
      - "8889:8888" # Jupyter server
      - "8385:8384" # Syncthing UI
      - "23000:22999" # Syncthing Transport

  workspace3:
    <<: *common_settings
    ports:
      - "2224:22" # SSH
      - "1113:1113" # Service portal
      - "9092:9090" # WEBUI web interface
      - "8890:8888" # Jupyter server
      - "8386:8384" # Syncthing UI
      - "23001:22999" # Syncthing Transport

Can't customize embeddings and other things

I looked at the provisioning script and saw no place to define a custom embedding for when the template boots up on vast.ai.

Preferably (and alternatively), I want to boot the instance and then sync my models in the cloud to the new instance. I have a copy of the /stable-diffusion-webui/ folder on my Google Drive with the exact same structure as the template, including a custom embedding and a custom model. When I try syncing, only the model (epic realism) comes over, while the embeddings folder is still empty (it should contain "bad dream" after syncing).


Please help! Thank you so much

Error Running Locally : Torch is not able to use GPU

I cloned the repo, added the env file, set up my provisioning script, and ran docker compose up.

After installing and building the container, I received this error when it was trying to start the webui:
RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check
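A common cause locally is that Docker can't pass the GPU through to the container. A quick sanity check, assuming the NVIDIA Container Toolkit should be installed on the host:

# should print the GPU table; if it fails, fix the host's nvidia-container-toolkit setup
docker run --rm --gpus all nvidia/cuda:11.8.0-base-ubuntu22.04 nvidia-smi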

WebUI crashes on ROCm with default provisioning script (probably due to bitsandbytes)

I'm trying to run this locally using a simple docker compose up, but on ROCm instead of CUDA. Sadly, the webui keeps crashing on me with the default provisioning script. The error appears to be related to bitsandbytes trying in vain to find CUDA, which is understandable, as to my knowledge there is no official ROCm support in bitsandbytes to date.

AMD does appear to maintain their own fork of bitsandbytes here however: https://github.com/ROCm/bitsandbytes/tree/rocm_enabled

So I think there are two questions then.

  1. Shouldn't one somehow try to provide the ROCm images built with AMD's fork of bitsandbytes?

  2. Or maybe one should identify all the things that include bitsandbytes as a dependency and remove them? Or at least provide some documentation on which things don't work on ROCm at the moment? Sadly, I don't know what that would entail; the provisioning scripts are quite sizable.

Any ideas, comments?

Thanks for any help in advance!


I am running this on NixOS, kernel 6.1.77, AMD Radeon Pro W6800 GPU

This is my docker-compose.yml:

version: "3.8"
# Compose file build variables set in .env
services:
  supervisor:
    platform: linux/amd64
    build:
      context: ./build
      args:
        IMAGE_BASE: ${IMAGE_BASE:-ghcr.io/ai-dock/jupyter-pytorch:2.2.0-py3.10-rocm-5.7-runtime-22.04}
      tags:
        - "ghcr.io/ai-dock/stable-diffusion-webui:${IMAGE_TAG:-jupyter-pytorch-2.2.0-py3.10-rocm-5.7-runtime-22.04}"
        
    image: ghcr.io/ai-dock/stable-diffusion-webui:${IMAGE_TAG:-jupyter-pytorch-2.2.0-py3.10-rocm-5.7-runtime-22.04}

    devices:
      - "/dev/dri:/dev/dri"
      # For AMD GPU
      - "/dev/kfd:/dev/kfd"
    
    volumes:
      # Workspace
      - ./workspace:${WORKSPACE:-/workspace/}:rshared
      # You can share /workspace/storage with other non-WEBUI containers. See README
      #- /path/to/common_storage:${WORKSPACE:-/workspace/}storage/:rshared
      # Will echo to root-owned authorized_keys file;
      # Avoids changing local file owner
      - ./config/authorized_keys:/root/.ssh/authorized_keys_mount
      - ./config/provisioning/default.sh:/opt/ai-dock/bin/provisioning.sh
    
    ports:
        # SSH available on host machine port 2222 to avoid conflict. Change to suit
        - ${SSH_PORT_HOST:-2222}:${SSH_PORT_LOCAL:-22}
        # Caddy port for service portal
        - ${SERVICEPORTAL_PORT_HOST:-1111}:${SERVICEPORTAL_PORT_HOST:-1111}
        # WEBUI web interface
        - ${WEBUI_PORT_HOST:-7860}:${WEBUI_PORT_HOST:-7860}
        # Jupyter server
        - ${JUPYTER_PORT_HOST:-8888}:${JUPYTER_PORT_HOST:-8888}
        # Syncthing
        - ${SYNCTHING_UI_PORT_HOST:-8384}:${SYNCTHING_UI_PORT_HOST:-8384}
        - ${SYNCTHING_TRANSPORT_PORT_HOST:-22999}:${SYNCTHING_TRANSPORT_PORT_HOST:-22999}
   
    environment:
        # Don't enclose values in quotes
        - DIRECT_ADDRESS=${DIRECT_ADDRESS:-127.0.0.1}
        - DIRECT_ADDRESS_GET_WAN=${DIRECT_ADDRESS_GET_WAN:-false}
        - WORKSPACE=${WORKSPACE:-/workspace}
        - WORKSPACE_SYNC=${WORKSPACE_SYNC:-false}
        - CF_TUNNEL_TOKEN=${CF_TUNNEL_TOKEN:-}
        - CF_QUICK_TUNNELS=${CF_QUICK_TUNNELS:-true}
        - WEB_ENABLE_AUTH=${WEB_ENABLE_AUTH:-true}
        - WEB_USER=${WEB_USER:-user}
        - WEB_PASSWORD=${WEB_PASSWORD:-password}
        - SSH_PORT_HOST=${SSH_PORT_HOST:-2222}
        - SSH_PORT_LOCAL=${SSH_PORT_LOCAL:-22}
        - SERVICEPORTAL_PORT_HOST=${SERVICEPORTAL_PORT_HOST:-1111}
        - SERVICEPORTAL_METRICS_PORT=${SERVICEPORTAL_METRICS_PORT:-21111}
        - WEBUI_BRANCH=${WEBUI_BRANCH:-}
        - WEBUI_FLAGS=${WEBUI_FLAGS:-}
        - WEBUI_PORT_HOST=${WEBUI_PORT_HOST:-7860}
        - WEBUI_PORT_LOCAL=${WEBUI_PORT_LOCAL:-17860}
        - WEBUI_METRICS_PORT=${WEBUI_METRICS_PORT:-27860}
        - JUPYTER_PORT_HOST=${JUPYTER_PORT_HOST:-8888}
        - JUPYTER_METRICS_PORT=${JUPYTER_METRICS_PORT:-28888}
        - SERVERLESS=${SERVERLESS:-false}
        - SYNCTHING_UI_PORT_HOST=${SYNCTHING_UI_PORT_HOST:-8384}
        - SYNCTHING_TRANSPORT_PORT_HOST=${SYNCTHING_TRANSPORT_PORT_HOST:-22999}
        #- PROVISIONING_SCRIPT=${PROVISIONING_SCRIPT:-}

And here is my error message for completeness' sake, in case I am interpreting this the wrong way.

supervisor-1  | ==> /var/log/supervisor/webui.log <==
supervisor-1  | Starting A1111 SD Web UI...
supervisor-1  | Starting A1111 SD Web UI...
supervisor-1  | Python 3.10.13 | packaged by conda-forge | (main, Dec 23 2023, 15:36:39) [GCC 12.3.0]
supervisor-1  | Version: v1.8.0
supervisor-1  | Commit hash: bef51aed032c0aaa5cfd80445bc4cf0d85b408b5
supervisor-1  | 2024-03-19 17:32:20,668 INFO success: webui entered RUNNING state, process has stayed up for > than 5 seconds (startsecs)
supervisor-1  | Installing requirements
supervisor-1  |
supervisor-1  | ==> /var/log/supervisor/supervisor.log <==
supervisor-1  | 2024-03-19 17:32:20,668 INFO success: webui entered RUNNING state, process has stayed up for > than 5 seconds (startsecs)
supervisor-1  |
supervisor-1  | ==> /var/log/supervisor/webui.log <==
supervisor-1  | False
supervisor-1  | 'CUDASetup' object has no attribute 'cuda_available'
supervisor-1  | no module 'xformers'. Processing without...
supervisor-1  | no module 'xformers'. Processing without...
supervisor-1  | No module 'xformers'. Proceeding without it.
supervisor-1  | If submitting an issue on github, please provide the full startup log for debugging purposes.
supervisor-1  |
supervisor-1  | Initializing Dreambooth
supervisor-1  | Dreambooth revision: 45a12fe5950bf93205b6ef2b7511eb94052a241f
supervisor-1  | Checking xformers...
supervisor-1  | Checking bitsandbytes...
supervisor-1  | Checking bitsandbytes (ALL!)
supervisor-1  | Checking Dreambooth requirements...
supervisor-1  | Installed version of bitsandbytes: 0.43.0
supervisor-1  | [Dreambooth] bitsandbytes v0.43.0 is already installed.
supervisor-1  | Installed version of accelerate: 0.21.0
supervisor-1  | [Dreambooth] accelerate v0.21.0 is already installed.
supervisor-1  | Installed version of dadaptation: 3.2
supervisor-1  | [Dreambooth] dadaptation v3.2 is already installed.
supervisor-1  | Installed version of diffusers: 0.27.1
supervisor-1  | [Dreambooth] diffusers v0.25.0 is already installed.
supervisor-1  | Installed version of discord-webhook: 1.3.0
supervisor-1  | [Dreambooth] discord-webhook v1.3.0 is already installed.
supervisor-1  | Installed version of fastapi: 0.94.0
supervisor-1  | [Dreambooth] fastapi is already installed.
supervisor-1  | Installed version of gitpython: 3.1.32
supervisor-1  | [Dreambooth] gitpython v3.1.40 is not installed.
supervisor-1  | Successfully installed gitpython-3.1.42
supervisor-1  | Installed version of pytorch_optimizer: 2.12.0
supervisor-1  | [Dreambooth] pytorch_optimizer v2.12.0 is already installed.
supervisor-1  | Installed version of Pillow: 9.5.0
supervisor-1  | [Dreambooth] Pillow is already installed.
supervisor-1  | Installed version of tqdm: 4.66.2
supervisor-1  | [Dreambooth] tqdm is already installed.
supervisor-1  | Installed version of tomesd: 0.1.3
supervisor-1  | [Dreambooth] tomesd v0.1.2 is already installed.
supervisor-1  | Installed version of tensorboard: 2.13.0
supervisor-1  | [Dreambooth] tensorboard v2.13.0 is already installed.
supervisor-1  | [+] torch version 2.2.0+rocm5.7 installed.
supervisor-1  | [+] torchvision version 0.17.0+rocm5.7 installed.
supervisor-1  | [+] accelerate version 0.21.0 installed.
supervisor-1  | [+] diffusers version 0.27.1 installed.
supervisor-1  | [+] bitsandbytes version 0.43.0 installed.
supervisor-1  | [!] xformers NOT installed.
supervisor-1  | False
supervisor-1  | 'CUDASetup' object has no attribute 'cuda_available'
supervisor-1  | no module 'xformers'. Processing without...
supervisor-1  | no module 'xformers'. Processing without...
supervisor-1  | No module 'xformers'. Proceeding without it.
supervisor-1  | Installing requirements for Face Editor
supervisor-1  | CUDA None
supervisor-1  | Launching Web UI with arguments: --port 17860
supervisor-1  | Traceback (most recent call last):
supervisor-1  |   File "/workspace/stable-diffusion-webui/launch.py", line 48, in <module>
supervisor-1  |     main()
supervisor-1  |   File "/workspace/stable-diffusion-webui/launch.py", line 44, in main
supervisor-1  |     start()
supervisor-1  |   File "/workspace/stable-diffusion-webui/modules/launch_utils.py", line 465, in start
supervisor-1  |     import webui
supervisor-1  |   File "/workspace/stable-diffusion-webui/webui.py", line 13, in <module>
supervisor-1  |     initialize.imports()
supervisor-1  |   File "/workspace/stable-diffusion-webui/modules/initialize.py", line 26, in imports
supervisor-1  |     from modules import paths, timer, import_hook, errors  # noqa: F401
supervisor-1  |   File "/workspace/stable-diffusion-webui/modules/paths.py", line 60, in <module>
supervisor-1  |     import sgm  # noqa: F401
supervisor-1  |   File "/workspace/stable-diffusion-webui/repositories/generative-models/sgm/__init__.py", line 1, in <module>
supervisor-1  |     from .models import AutoencodingEngine, DiffusionEngine
supervisor-1  |   File "/workspace/stable-diffusion-webui/repositories/generative-models/sgm/models/__init__.py", line 1, in <module>
supervisor-1  |     from .autoencoder import AutoencodingEngine
supervisor-1  |   File "/workspace/stable-diffusion-webui/repositories/generative-models/sgm/models/autoencoder.py", line 12, in <module>
supervisor-1  |     from ..modules.diffusionmodules.model import Decoder, Encoder
supervisor-1  |   File "/workspace/stable-diffusion-webui/repositories/generative-models/sgm/modules/__init__.py", line 1, in <module>
supervisor-1  |     from .encoders.modules import GeneralConditioner
supervisor-1  |   File "/workspace/stable-diffusion-webui/repositories/generative-models/sgm/modules/encoders/modules.py", line 5, in <module>
supervisor-1  |     import kornia
supervisor-1  |   File "/opt/micromamba/envs/webui/lib/python3.10/site-packages/kornia/__init__.py", line 11, in <module>
supervisor-1  |     from . import augmentation, color, contrib, core, enhance, feature, io, losses, metrics, morphology, tracking, utils, x
supervisor-1  |   File "/opt/micromamba/envs/webui/lib/python3.10/site-packages/kornia/x/__init__.py", line 2, in <module>
supervisor-1  |     from .trainer import Trainer
supervisor-1  |   File "/opt/micromamba/envs/webui/lib/python3.10/site-packages/kornia/x/trainer.py", line 11, in <module>
supervisor-1  |     from accelerate import Accelerator
supervisor-1  |   File "/opt/micromamba/envs/webui/lib/python3.10/site-packages/accelerate/__init__.py", line 3, in <module>
supervisor-1  |     from .accelerator import Accelerator
supervisor-1  |   File "/opt/micromamba/envs/webui/lib/python3.10/site-packages/accelerate/accelerator.py", line 35, in <module>
supervisor-1  |     from .checkpointing import load_accelerator_state, load_custom_state, save_accelerator_state, save_custom_state
supervisor-1  |   File "/opt/micromamba/envs/webui/lib/python3.10/site-packages/accelerate/checkpointing.py", line 24, in <module>
supervisor-1  |     from .utils import (
supervisor-1  |   File "/opt/micromamba/envs/webui/lib/python3.10/site-packages/accelerate/utils/__init__.py", line 131, in <module>
supervisor-1  |     from .bnb import has_4bit_bnb_layers, load_and_quantize_model
supervisor-1  |   File "/opt/micromamba/envs/webui/lib/python3.10/site-packages/accelerate/utils/bnb.py", line 42, in <module>
supervisor-1  |     import bitsandbytes as bnb
supervisor-1  |   File "/opt/micromamba/envs/webui/lib/python3.10/site-packages/bitsandbytes/__init__.py", line 6, in <module>
supervisor-1  |     from . import cuda_setup, research, utils
supervisor-1  |   File "/opt/micromamba/envs/webui/lib/python3.10/site-packages/bitsandbytes/research/__init__.py", line 2, in <module>
supervisor-1  |     from .autograd._functions import (
supervisor-1  |   File "/opt/micromamba/envs/webui/lib/python3.10/site-packages/bitsandbytes/research/autograd/_functions.py", line 8, in <module>
supervisor-1  |     from bitsandbytes.autograd._functions import GlobalOutlierPooler, MatmulLtState
supervisor-1  |   File "/opt/micromamba/envs/webui/lib/python3.10/site-packages/bitsandbytes/autograd/__init__.py", line 1, in <module>
supervisor-1  |     from ._functions import get_inverse_transform_indices, undo_layout
supervisor-1  |   File "/opt/micromamba/envs/webui/lib/python3.10/site-packages/bitsandbytes/autograd/_functions.py", line 10, in <module>
supervisor-1  |     import bitsandbytes.functional as F
supervisor-1  |   File "/opt/micromamba/envs/webui/lib/python3.10/site-packages/bitsandbytes/functional.py", line 17, in <module>
supervisor-1  |     from .cextension import COMPILED_WITH_CUDA, lib
supervisor-1  |   File "/opt/micromamba/envs/webui/lib/python3.10/site-packages/bitsandbytes/cextension.py", line 10, in <module>
supervisor-1  |     setup.run_cuda_setup()
supervisor-1  |   File "/opt/micromamba/envs/webui/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py", line 137, in run_cuda_setup
supervisor-1  |     binary_name, cudart_path, cc, cuda_version_string = evaluate_cuda_setup()
supervisor-1  |   File "/opt/micromamba/envs/webui/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py", line 367, in evaluate_cuda_setup
supervisor-1  |     cuda_version_string = get_cuda_version()
supervisor-1  |   File "/opt/micromamba/envs/webui/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py", line 335, in get_cuda_version
supervisor-1  |     major, minor = map(int, torch.version.cuda.split("."))
supervisor-1  | AttributeError: 'NoneType' object has no attribute 'split'
supervisor-1  | 2024-03-19 17:32:37,349 INFO exited: webui (exit status 1; not expected)
supervisor-1  |
supervisor-1  | ==> /var/log/supervisor/supervisor.log <==
supervisor-1  | 2024-03-19 17:32:37,349 INFO exited: webui (exit status 1; not expected)
supervisor-1  | 2024-03-19 17:32:37,350 INFO spawned: 'webui' with pid 1746
supervisor-1  | 2024-03-19 17:32:37,350 INFO spawned: 'webui' with pid 1746
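One possible workaround, untested here: replace the bundled bitsandbytes with AMD's rocm_enabled fork linked above, inside the image's webui environment:

# assumption: the fork builds cleanly in this environment; a compiler toolchain may be required
micromamba run -n webui pip uninstall -y bitsandbytes
micromamba run -n webui pip install git+https://github.com/ROCm/bitsandbytes.git@rocm_enabled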

Automatic 1111 WEBUI API returns Error 401 Unauthorized

When setting --api in the webui startup arguments, the /docs endpoint can be reached, but when trying to send a POST request to /sdapi/v1/txt2img, I always get Error 401 Unauthorized. I have a username and password set for web authentication but do not know in which part of the POST request to send them along (probably in the header, but what is the exact structure)?

API Endpoint: POST http://127.0.0.1:7860/sdapi/v1/txt2img
Header:
{
    "Content-Type": "application/json"
}
Payload:
{
    "prompt": "Astronauts on the moon",
    "guidance_scale": 7,
    "seed": -1,
    "width": 512,
    "height": 512,
    "steps": 20,
    "samples": 1,
    "send_images": true,
    "sampler_index": "Euler"
}

Response: 
401 Unauthorized
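A likely fix, assuming the container's web authentication is HTTP Basic auth in front of the service (the WEB_USER/WEB_PASSWORD pair): send the credentials with the request itself, e.g.:

curl -u "user:password" \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Astronauts on the moon", "steps": 20, "width": 512, "height": 512}' \
  http://127.0.0.1:7860/sdapi/v1/txt2img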

How to use Automatic1111 API with this container?

I extended the webui flags with --api --listen to enable the webui API by setting the environment variable WEBUI_FLAGS="--api --listen". In the logs I see that the command-line flag is used. Usually the API runs on the same port as the GUI.

In the past I ran stable diffusion in a Paperspace notebook without a container, relying on a custom notebook. It just generated a gradio URL, and the API worked with that URL. Recently I transitioned to vast.ai and use the Automatic1111 Jupyter template provided there, which uses this docker container. I tried both the Cloudflare quick-tunnel and direct-access URLs, but it didn't work with either.

I'm no API/networking expert; I had just used a project that generates images if you give it an SD API URL. But for debugging I tested the URL in Python:

import requests

requests.post(
    url="<Same URL as WebUI GUI>",
    json={"prompt": "maltese puppy", "steps": 5},
).content.decode()

And got '{"detail":"Method Not Allowed"}' back. Is Cloudflare maybe blocking API access? Am I missing something in the container configuration, e.g. should I forward/allow some more ports?

Btw, I thought maybe the HTML auth was messing with the UI, so I tried WEB_ENABLE_AUTH=false. Though I think I could just use user:pw@url for the API request.

Not sure if this is the right place to ask but I just can't figure that out...
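For what it's worth, "Method Not Allowed" usually means the POST reached a route that only serves GET, i.e. the GUI page itself rather than an API endpoint. A hedged check is to POST to the full txt2img path on the same URL (placeholder host below), with credentials if auth is enabled:

curl -u "user:password" \
  -H "Content-Type: application/json" \
  -d '{"prompt": "maltese puppy", "steps": 5}' \
  "https://<your-tunnel-or-direct-url>/sdapi/v1/txt2img"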

Provisioning script's wget should have the --content-disposition flag

For checkpoints, wget needs the --content-disposition flag added; otherwise the files won't be saved correctly.

Many Civitai download links don't have the filename in the URL, so this flag is required.

Example:

CHECKPOINT_MODELS=(\
    "https://civitai.com/api/download/models/181248?type=Model&format=SafeTensor&size=pruned&fp=fp16" 
)

currently the file will be saved in the folder like this:

$ ls
'181248?type=Model&format=SafeTensor&size=pruned&fp=fp16'

but if instead you wget with the flag:

cd /workspace/stable-diffusion-webui/models/Stable-diffusion
wget 'https://civitai.com/api/download/models/181248?type=Model&format=SafeTensor&size=pruned&fp=fp16' --content-disposition

then it will be saved correctly as: virileReality_v30BETA3.safetensors
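For reference, a hypothetical download loop with the flag applied (the array name and target path follow the example above):

cd /workspace/stable-diffusion-webui/models/Stable-diffusion
for url in "${CHECKPOINT_MODELS[@]}"; do
    wget --content-disposition "$url"
done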

How do I disable Syncthing and Jupyter Notebook on startup or remove them completely from build?

Hello @robballantyne! Thanks a lot for maintaining the docker container for the A1111 webui; it's been very helpful and handy for me. The Service Portal is great for monitoring and observing the container, and the Caddy proxy is excellent.

Currently my aim is to create a minimal container that provides Stable Diffusion services via API, and I was wondering whether there is a possibility to remove the Jupyter Notebook and Syncthing services from the docker container, or at least disable them on container startup to reduce overhead and bloat in the logs.

I've skimmed through the Dockerfile, init.sh, and the other .sh scripts and couldn't find where those services are booted. In README.md and the base image documentation there is only enough information to come up with a script for disabling the services after startup via supervisorctl (README.md#Additional services + 1.0-Included-Software).

Apparently there is a way to use the base container without Jupyter, as described in 5.0-Building-the-Image, but there is scarcely any information about installing or disabling the Syncthing server.

Can you give me a hint on how to disable those services on startup, or remove them from the build completely? Or at least where I should look to achieve what I want?

I believe that the possibility to disable unnecessary services would be handy for other people too.

Hope to hear from you soon

Ilya
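Not a maintainer answer, but a hedged stopgap using the supervisorctl interface documented above: stop the services after startup. The program names below are assumptions; confirm them first.

# list the supervised programs, then stop the ones you don't need
supervisorctl status
supervisorctl stop syncthing jupyter   # assumed service names; adjust to the status output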

Fails to start on Paperspace Gradient with GPU machine

Since a couple of days ago, the container fails to start when running on a GPU instance in Paperspace Gradient. For some reason it works fine on a CPU-only instance, but Paperspace has refused to explain why or to explore the issue.

It seems to be related to the service portal and port 1111:

{"what":"agent","message":"Loading settings configuration...","level":"info","timestamp":"2023-11-29T09:26:39.278Z"}
{"port":8889,"level":"info","message":"Started on","timestamp":"2023-11-29T09:26:39.279Z"}
{"message":"Finished initializing","level":"info","timestamp":"2023-11-29T09:26:39.280Z"}
/opt/ai-dock/bin/init.sh: line 335: =: command not found
Environment unsuitable for rclone mount...
rclone remains available via CLI
Looking for config.sh...
Not found
2023-11-29 09:26:40,703 INFO Included extra file "/etc/supervisor/supervisord/conf.d/caddy.conf" during parsing
2023-11-29 09:26:40,703 INFO Included extra file "/etc/supervisor/supervisord/conf.d/cloudflared.conf" during parsing
2023-11-29 09:26:40,703 INFO Included extra file "/etc/supervisor/supervisord/conf.d/logtail.conf" during parsing
2023-11-29 09:26:40,703 INFO Included extra file "/etc/supervisor/supervisord/conf.d/quicktunnel.conf" during parsing
2023-11-29 09:26:40,703 INFO Included extra file "/etc/supervisor/supervisord/conf.d/rclone_mount.conf" during parsing
2023-11-29 09:26:40,703 INFO Included extra file "/etc/supervisor/supervisord/conf.d/serverless.conf" during parsing
2023-11-29 09:26:40,703 INFO Included extra file "/etc/supervisor/supervisord/conf.d/serviceportal.conf" during parsing
2023-11-29 09:26:40,703 INFO Included extra file "/etc/supervisor/supervisord/conf.d/sshd.conf" during parsing
2023-11-29 09:26:40,704 INFO Set uid to user 0 succeeded
2023-11-29 09:26:40,711 INFO RPC interface 'supervisor' initialized
2023-11-29 09:26:40,711 CRIT Server 'inet_http_server' running without any HTTP authentication checking
2023-11-29 09:26:40,712 INFO RPC interface 'supervisor' initialized
2023-11-29 09:26:40,712 INFO supervisord started with pid 92
2023-11-29 09:26:41,715 INFO spawned: 'logtail' with pid 98
2023-11-29 09:26:41,718 INFO spawned: 'serverless' with pid 99
2023-11-29 09:26:41,720 INFO spawned: 'serviceportal' with pid 100
2023-11-29 09:26:41,722 INFO spawned: 'sshd' with pid 102
2023-11-29 09:26:41,723 INFO spawned: 'caddy' with pid 103
Starting logtail service...
2023-11-29 09:26:43,723 INFO success: serverless entered RUNNING state, process has stayed up for > than 2 seconds (startsecs)
Gathering logs...==> /var/log/config.log <==

==> /var/log/sync.log <==
Skipping workspace sync: Mamba environments remain in /opt

==> /var/log/preflight.log <==
Looking for preflight.sh...
Empty preflight.sh...

==> /var/log/debug.log <==

==> /var/log/provisioning.log <==
Looking for provisioning.sh...
Not found

==> /var/log/supervisor/caddy.log <==
{"level":"info","ts":1701250003.848939,"msg":"using provided configuration","config_file":"/opt/caddy/etc/Caddyfile","config_adapter":""}
{"level":"warn","ts":1701250003.8504975,"logger":"http.auto_https","msg":"automatic HTTPS is completely disabled for server","server_name":"srv0"}
{"level":"warn","ts":1701250003.8505342,"logger":"http.auto_https","msg":"automatic HTTPS is completely disabled for server","server_name":"srv1"}
Error: loading initial config: loading new config: http app module: start: listening on :11111: listen tcp :11111: bind: address already in use

==> /var/log/supervisor/serverless.log <==
Refusing to start serverless worker without $SERVERLESS=true

==> /var/log/supervisor/serviceportal.log <==
Starting Service Portal...
INFO:     Started server process [100]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://127.0.0.1:11111 (Press CTRL+C to quit)

==> /var/log/supervisor/sshd.log <==
/root/.ssh/authorized_keys is not a public key file.
Skipping SSH server: No public key

==> /var/log/supervisor/supervisor.log <==
2023-11-29 09:26:40,703 INFO Included extra file "/etc/supervisor/supervisord/conf.d/caddy.conf" during parsing
2023-11-29 09:26:40,703 INFO Included extra file "/etc/supervisor/supervisord/conf.d/cloudflared.conf" during parsing
2023-11-29 09:26:40,703 INFO Included extra file "/etc/supervisor/supervisord/conf.d/logtail.conf" during parsing
2023-11-29 09:26:40,703 INFO Included extra file "/etc/supervisor/supervisord/conf.d/quicktunnel.conf" during parsing
2023-11-29 09:26:40,703 INFO Included extra file "/etc/supervisor/supervisord/conf.d/rclone_mount.conf" during parsing
2023-11-29 09:26:40,703 INFO Included extra file "/etc/supervisor/supervisord/conf.d/serverless.conf" during parsing
2023-11-29 09:26:40,703 INFO Included extra file "/etc/supervisor/supervisord/conf.d/serviceportal.conf" during parsing
2023-11-29 09:26:40,703 INFO Included extra file "/etc/supervisor/supervisord/conf.d/sshd.conf" during parsing
2023-11-29 09:26:40,704 INFO Set uid to user 0 succeeded
2023-11-29 09:26:40,711 INFO RPC interface 'supervisor' initialized
2023-11-29 09:26:40,711 CRIT Server 'inet_http_server' running without any HTTP authentication checking
2023-11-29 09:26:40,712 INFO RPC interface 'supervisor' initialized
2023-11-29 09:26:40,712 INFO supervisord started with pid 92
2023-11-29 09:26:41,715 INFO spawned: 'logtail' with pid 98
2023-11-29 09:26:41,718 INFO spawned: 'serverless' with pid 99
2023-11-29 09:26:41,720 INFO spawned: 'serviceportal' with pid 100
2023-11-29 09:26:41,722 INFO spawned: 'sshd' with pid 102
2023-11-29 09:26:41,723 INFO spawned: 'caddy' with pid 103
2023-11-29 09:26:43,723 INFO success: serverless entered RUNNING state, process has stayed up for > than 2 seconds (startsecs)
2023-11-29 09:26:44,727 INFO exited: caddy (exit status 1; not expected)
2023-11-29 09:26:44,727 INFO exited: caddy (exit status 1; not expected)
2023-11-29 09:26:45,728 INFO spawned: 'caddy' with pid 141
2023-11-29 09:26:45,728 INFO spawned: 'caddy' with pid 141
2023-11-29 09:26:46,728 INFO success: logtail entered RUNNING state, process has stayed up for > than 5 seconds (startsecs)
2023-11-29 09:26:46,728 INFO success: serviceportal entered RUNNING state, process has stayed up for > than 5 seconds (startsecs)
2023-11-29 09:26:46,728 INFO success: sshd entered RUNNING state, process has stayed up for > than 5 seconds (startsecs)
2023-11-29 09:26:46,728 INFO success: logtail entered RUNNING state, process has stayed up for > than 5 seconds (startsecs)
2023-11-29 09:26:46,728 INFO success: serviceportal entered RUNNING state, process has stayed up for > than 5 seconds (startsecs)
2023-11-29 09:26:46,728 INFO success: sshd entered RUNNING state, process has stayed up for > than 5 seconds (startsecs)
2023-11-29 09:26:47,861 INFO exited: caddy (exit status 1; not expected)

==> /var/log/supervisor/caddy.log <==
{"level":"info","ts":1701250007.8570542,"msg":"using provided configuration","config_file":"/opt/caddy/etc/Caddyfile","config_adapter":""}
{"level":"warn","ts":1701250007.8587267,"logger":"http.auto_https","msg":"automatic HTTPS is completely disabled for server","server_name":"srv0"}
{"level":"warn","ts":1701250007.8587873,"logger":"http.auto_https","msg":"automatic HTTPS is completely disabled for server","server_name":"srv1"}
Error: loading initial config: loading new config: http app module: start: listening on :11111: listen tcp :11111: bind: address already in use

==> /var/log/supervisor/supervisor.log <==
2023-11-29 09:26:47,861 INFO exited: caddy (exit status 1; not expected)
2023-11-29 09:26:50,732 INFO spawned: 'caddy' with pid 174
2023-11-29 09:26:51,723 INFO exited: serverless (exit status 0; expected)
2023-11-29 09:26:50,732 INFO spawned: 'caddy' with pid 174
2023-11-29 09:26:51,723 INFO exited: serverless (exit status 0; expected)
2023-11-29 09:26:52,732 INFO exited: sshd (exit status 0; expected)

==> /var/log/supervisor/caddy.log <==
{"level":"info","ts":1701250012.8670478,"msg":"using provided configuration","config_file":"/opt/caddy/etc/Caddyfile","config_adapter":""}
{"level":"warn","ts":1701250012.8683994,"logger":"http.auto_https","msg":"automatic HTTPS is completely disabled for server","server_name":"srv0"}
{"level":"warn","ts":1701250012.8684745,"logger":"http.auto_https","msg":"automatic HTTPS is completely disabled for server","server_name":"srv1"}
Error: loading initial config: loading new config: http app module: start: listening on :11111: listen tcp :11111: bind: address already in use

==> /var/log/supervisor/supervisor.log <==
2023-11-29 09:26:52,732 INFO exited: sshd (exit status 0; expected)
2023-11-29 09:26:53,729 INFO exited: caddy (exit status 1; not expected)
2023-11-29 09:26:53,729 INFO exited: caddy (exit status 1; not expected)
2023-11-29 09:26:56,732 INFO spawned: 'caddy' with pid 204
2023-11-29 09:26:56,732 INFO spawned: 'caddy' with pid 204

==> /var/log/supervisor/caddy.log <==
{"level":"info","ts":1701250018.8984635,"msg":"using provided configuration","config_file":"/opt/caddy/etc/Caddyfile","config_adapter":""}
{"level":"warn","ts":1701250018.8995972,"logger":"http.auto_https","msg":"automatic HTTPS is completely disabled for server","server_name":"srv0"}
{"level":"warn","ts":1701250018.8996172,"logger":"http.auto_https","msg":"automatic HTTPS is completely disabled for server","server_name":"srv1"}
Error: loading initial config: loading new config: http app module: start: listening on :11111: listen tcp :11111: bind: address already in use
2023-11-29 09:26:59,730 INFO exited: caddy (exit status 1; not expected)

==> /var/log/supervisor/supervisor.log <==
2023-11-29 09:26:59,730 INFO exited: caddy (exit status 1; not expected)
2023-11-29 09:27:00,730 INFO gave up: caddy entered FATAL state, too many start retries too quickly
2023-11-29 09:27:00,730 INFO gave up: caddy entered FATAL state, too many start retries too quickly
{"message":"Received stop signal SIGTERM, shutting down...","level":"info","timestamp":"2023-11-29T09:28:09.025Z"}
{"message":"All mounts released. Exiting...","level":"info","timestamp":"2023-11-29T09:28:09.030Z"}
[INFO  tini (1)] Spawned child process './integrations-sidecar' with pid '7'
[INFO  tini (1)] Main child exited normally (with status '0')

I'm not familiar with the workings of the container, but would it be possible to disable the service portal using an environment variable or the provisioning script?
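A speculative workaround, untested: since the bind conflict is on the portal's internal listener, moving the portal with the documented SERVICEPORTAL_PORT_HOST variable might sidestep whatever Paperspace already runs on that port (assuming the internal port is derived from the host port):

docker run -d \
  -e SERVICEPORTAL_PORT_HOST=1112 \
  -p 1112:1112 \
  ghcr.io/ai-dock/stable-diffusion-webui:latest-cuda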

Cloud templates failing to start with latest images

  1. Select any appropriate GPU on Runpod/Vast.ai.
  2. Select the template from Readme, leave the default args.
  3. Wait for provisioning to finish.
Cloning assets into /opt/stable-diffusion-webui/repositories/stable-diffusion-webui-assets...
fatal: could not create work tree dir '/opt/stable-diffusion-webui/repositories/stable-diffusion-webui-assets': Permission denied
Traceback (most recent call last):
  File "/opt/stable-diffusion-webui/launch.py", line 48, in
    main()
  File "/opt/stable-diffusion-webui/launch.py", line 39, in main
    prepare_environment()
  File "/opt/stable-diffusion-webui/modules/launch_utils.py", line 410, in prepare_environment
    git_clone(assets_repo, repo_dir('stable-diffusion-webui-assets'), "assets", assets_commit_hash)
  File "/opt/stable-diffusion-webui/modules/launch_utils.py", line 191, in git_clone
    run(f'"{git}" clone --config core.filemode=false "{url}" "{dir}"', f"Cloning {name} into {dir}...", f"Couldn't clone {name}", live=True)
  File "/opt/stable-diffusion-webui/modules/launch_utils.py", line 115, in run
    raise RuntimeError("\n".join(error_bits))
RuntimeError: Couldn't clone assets.
Command: "git" clone --config core.filemode=false "https://github.com/AUTOMATIC1111/stable-diffusion-webui-assets.git" "/opt/stable-diffusion-webui/repositories/stable-diffusion-webui-assets"
Error code: 128

Running Locally : Torch is not able to use GPU

Pulled the repo, edited the env, added my own provisioning script, and ran docker compose up. Received this error after the build process, as the webui was starting:

RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check

A111 Extension: Use Reactor in place of ROOP

The ROOP extension that you’re using right now is the original implementation of ROOP: https://github.com/s0md3v/sd-webui-roop

But it has limitations. Specifically, it has NSFW detection, which is not an issue in and of itself, except that all of these libraries can give false positives. The issue with false positives is obvious: you render an image that's SFW, but it prevents you from rendering regardless. In some ways, for web applications such as Replicate (which uses a different Python library), the false positives may be required (though I could also show you hilarious examples of just how ridiculous some of these false positives are). Since this is an end-user image, I feel that having these restrictions is unnecessary.

There’s a fork of that removes this limitation: https://github.com/Gourieff/sd-webui-reactor — originally, that was the only implementation, but now that fork author has added new features to it and so now it’s far superior than the original.

I tried to use it directly in the provisioning script but couldn't get it to run. Maybe you could? I know this is not supposed to be an "opinionated" docker image, but the false-positive issue makes the current extension unsuitable for production use. I hope you will consider changing it.

Thanks for all the hard work as always!

WebUI Forge?

I've been using your images for a long time and enjoy them, but I'd also love to see WebUI Forge images from you. This shouldn't be a huge issue or hard to maintain, since the two are mostly compatible.

CUDA out of Memory Issue

Even with a 24 GB Nvidia RTX 4090, I constantly get a CUDA out-of-memory error when trying to run an XL model. Any thoughts on exactly how to fix this?

OutOfMemoryError: CUDA out of memory. Tried to allocate 1008.00 MiB. GPU 0 has a total capacty of 23.65 GiB of which 927.19 MiB is free. Process 4050195 has 22.73 GiB memory in use. Of the allocated memory 21.74 GiB is allocated by PyTorch, and 537.60 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
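Two hedged mitigations, one taken from the error's own hint and one from A1111's low-VRAM options (passed through this image's WEBUI_FLAGS variable):

# allocator tuning suggested by the error message itself
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512
# and/or A1111's SDXL low-VRAM mode, set before container start:
# WEBUI_FLAGS="--medvram-sdxl"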

[Question/Suggestion] Where to set default settings for A1111?

Hello there!

Thank you a lot for providing such a great container for stable-diffusion-webui, with such thorough fine-tuning capabilities in terms of settings.

I've looked through the documentation and saw the init.sh file in layer1, but I found no information about where A1111 default settings are set. I may be mistaken and not have read thoroughly enough, but I don't see where to do that. And I'm partly unsure whether it should be done during the build or in the workspace afterwards.

To be exact, I'm setting up a pipeline to build a container with the controlnet extension and the "Multi-ControlNet: ControlNet unit number (requires restart)" setting set to 4 instead of the default 3 (it is also known as control_net_unit_count). I'd like it set beforehand, without needing to use the UI, since I am only going to use A1111's HTTP API. It would also be helpful if you could pinpoint the current settings file in the workspace directory, because I'm unable to locate it; here's what I tried.

Result of grep:

grep: ./extensions/stable-diffusion-webui-images-browser/scripts/__pycache__/image_browser.cpython-310.pyc: binary file matches
./extensions/stable-diffusion-webui-images-browser/scripts/image_browser.py:                                if "control_net_unit_count" in opts.data:
./extensions/stable-diffusion-webui-images-browser/scripts/image_browser.py:                                    controlnet_max = opts.data.get("control_net_unit_count", 3)
./extensions/sd-webui-controlnet/tests/web_api/txt2img_test.py:                shared.opts.data.get("control_net_unit_count", 3),
./extensions/sd-webui-controlnet/tests/external_code_api/external_code_test.py:        self.initial_max_models = shared.opts.data.get("control_net_unit_count", 3)
./extensions/sd-webui-controlnet/tests/external_code_api/external_code_test.py:        shared.opts.data.update(control_net_unit_count=self.max_models)
./extensions/sd-webui-controlnet/tests/external_code_api/external_code_test.py:        shared.opts.data.update(control_net_unit_count=self.initial_max_models)
grep: ./extensions/sd-webui-controlnet/scripts/controlnet_ui/__pycache__/controlnet_ui_group.cpython-310.pyc: binary file matches
./extensions/sd-webui-controlnet/scripts/controlnet_ui/controlnet_ui_group.py:        unit_count = shared.opts.data.get("control_net_unit_count", 3)
grep: ./extensions/sd-webui-controlnet/scripts/__pycache__/api.cpython-310.pyc: binary file matches
grep: ./extensions/sd-webui-controlnet/scripts/__pycache__/movie2movie.cpython-310.pyc: binary file matches
grep: ./extensions/sd-webui-controlnet/scripts/__pycache__/controlnet.cpython-310.pyc: binary file matches
./extensions/sd-webui-controlnet/scripts/controlnet.py:        max_models = shared.opts.data.get("control_net_unit_count", 3)
./extensions/sd-webui-controlnet/scripts/controlnet.py:    shared.opts.add_option("control_net_unit_count", shared.OptionInfo(
./extensions/sd-webui-controlnet/scripts/movie2movie.py:        max_models = opts.data.get("control_net_unit_count", 3)
./extensions/sd-webui-controlnet/scripts/movie2movie.py:        contents_num = opts.data.get("control_net_unit_count", 3)
./extensions/sd-webui-controlnet/scripts/api.py:        return {"control_net_unit_count": max_models_num}
grep: ./extensions/sd-webui-controlnet/internal_controlnet/__pycache__/external_code.cpython-310.pyc: binary file matches
./extensions/sd-webui-controlnet/internal_controlnet/external_code.py:    max_models_num = shared.opts.data.get("control_net_unit_count", 3)
./extensions/sd-webui-controlnet/internal_controlnet/external_code.py:    max_models = shared.opts.data.get("control_net_unit_count", 3)
./extensions/sd-webui-controlnet/internal_controlnet/external_code.py:    max_models = shared.opts.data.get("control_net_unit_count", 3)

I think it would be a great addition to README.md to explain how to set those beforehand, if it is possible, or how to override them.

Many thanks.
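For what it's worth: A1111 persists UI options in config.json in the webui root, which in this image is likely /workspace/stable-diffusion-webui/config.json (an assumption based on the paths in other issues here). A hedged way to preset the option before the UI first starts:

# assumption: default workspace layout; run before first launch of the UI
python3 - <<'EOF'
import json, os
path = "/workspace/stable-diffusion-webui/config.json"
data = json.load(open(path)) if os.path.exists(path) else {}
data["control_net_unit_count"] = 4
with open(path, "w") as f:
    json.dump(data, f, indent=4)
EOF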

Provisioning script - it doesn't download links containing spaces

Enclosing the link in apostrophes, I thought the download manager would handle spaces, but it doesn't.
So I changed the spaces to %20 escapes. That didn't help either.

Example Lora links that won't be downloaded:

'https://huggingface.co/datasets/AddictiveFuture/sdxl-pony-models-backup/resolve/main/LORA/Line Art Style LoRA XL.safetensors'
'https://huggingface.co/datasets/AddictiveFuture/sdxl-pony-models-backup/resolve/main/LORA/Line%20Art%20Style%20LoRA%20XL.safetensors'
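A hedged workaround while the downloader mishandles spaces: fetch the file manually with an explicit output name (the Lora directory below is the standard A1111 location, assumed here):

wget -O "/workspace/stable-diffusion-webui/models/Lora/Line Art Style LoRA XL.safetensors" \
  "https://huggingface.co/datasets/AddictiveFuture/sdxl-pony-models-backup/resolve/main/LORA/Line%20Art%20Style%20LoRA%20XL.safetensors"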

Vast pre-configured template fails to start

I provide machines on vast.ai for renting. I noticed that any time a user tries to use the pre-configured template from this repository (as-is): https://link.ai-dock.org/template-vast-sd-webui
it fails with the following error (docker container inspect):

failed to create task for container: failed to create shim task: OCI runtime create failed: 
runc create failed: unable to start container process: 
exec: \"env | grep _ >> /etc/environment; /opt/ai-dock/bin/init.sh;\": stat env | grep _ >> /etc/environment; /opt/ai-dock/bin/init.sh;: 
no such file or directory: unknown

To make it work, the template needs to be changed ("Launch Mode" needs to be set to "Run a jupyter-python notebook" instead of "Docker Run: use docker ENTRYPOINT." in the "Template Editor"). Any chance the pre-configured template can be modified to use this mode by default?

Or is there some workaround for this issue? I am running Ubuntu 22.04 with this Docker version:

Client: Docker Engine - Community
 Version:           25.0.1
 API version:       1.44
 Go version:        go1.21.6
 Git commit:        29cf629
 Built:             Tue Jan 23 23:09:23 2024
 OS/Arch:           linux/amd64
 Context:           default

Server: Docker Engine - Community
 Engine:
  Version:          25.0.1
  API version:      1.44 (minimum version 1.24)
  Go version:       go1.21.6
  Git commit:       71fa3ab
  Built:            Tue Jan 23 23:09:23 2024
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.27
  GitCommit:        a1496014c916f9e62104b33d1bb5bd03b0858e59
 runc:
  Version:          1.1.11
  GitCommit:        v1.1.11-0-g4bccb38
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0 

TensorRT support?

I tried to add the TensorRT GitHub repo to the provisioning script on vast.ai, but the install was not successful. I also tried to install it from the Extensions tab, but this broke the whole WebUI.

Has anyone figured out how to install TensorRT? Or is there a known issue why this does not work with this docker image?

Thanks in advance!

Update: after installing everything, this is the error message I always get:
{"level":"error","ts":1712759896.8992972,"logger":"http.log.error","msg":"dial tcp 127.0.0.1:17860: connect: connection refused","request":{"remote_ip":"127.0.0.1","remote_port":"57394","client_ip":"127.0.0.1","proto":"HTTP/1.1","method":"GET","host":"confident-backup-parameters-software.trycloudflare.com","uri":"/","headers":{"Accept":["text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7"],"Accept-Encoding":["gzip"],"Cf-Ray":["87237acb601e5bb9-VIE"],"Cf-Warp-Tag-Id":["9f39b7a1-932f-4a6b-bd34-b6d7aa242faf"],"Connection":["keep-alive"],"Cdn-Loop":["cloudflare; subreqs=1"],"Cf-Ew-Via":["15"],"Cf-Worker":["trycloudflare.com"],"Sec-Ch-Ua-Mobile":["?0"],"X-Forwarded-For":["176.63.202.78"],"Upgrade-Insecure-Requests":["1"],"Accept-Language":["en-GB,en-US;q=0.9,en;q=0.8"],"Cookie":[],"Referer":["http://213.181.123.100:28045/"],"Sec-Ch-Ua":["\"Google Chrome\";v=\"123\", \"Not:A-Brand\";v=\"8\", \"Chromium\";v=\"123\""],"Sec-Fetch-Mode":["navigate"],"Sec-Fetch-Site":["cross-site"],"User-Agent":["Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36"],"Sec-Fetch-Dest":["document"],"X-Forwarded-Proto":["https"],"Cache-Control":["max-age=0"],"Cf-Connecting-Ip":["176.63.202.78"],"Cf-Ipcountry":["HU"],"Cf-Visitor":["{\"scheme\":\"https\"}"],"Sec-Ch-Ua-Platform":["\"macOS\""],"Sec-Fetch-User":["?1"]}},"duration":0.000448417,"status":502,"err_id":"b6mptumeq","err_trace":"reverseproxy.statusError (reverseproxy.go:1267)"}

I have a small problem to download wildcards and config.json

Sorry, it's a beginner question.
I am using vast.ai and am a noob about code.

I modified the provisioning script like this to download wildcard txt files and a config.json file for the webui settings:
https://gist.githubusercontent.com/Ineman/eb6e35efedda1dcf1ec49af89d4ff117/raw

Then, after the install completes, the file names are mangled:
HAIR.txt becomes Wildcards%2FHAIR.txt, ui-config.json becomes config%2Fui-config.json, and config.json becomes config%2Fconfig.json.

But the checkpoint and lora file names located in 'workspace/storage/stable_diffusion/models/ckpt/' are not mangled.

I need help. Why is this happening?

