

stable-diffusion's People

Contributors

alexgoldsmith, apolinario, chrisacrobat, cpacker, hlky, jamesmoore, lex-drl, oc013, owenvincent, percimar, pesser, pkiage, rromb, shinkonet, tavrin


stable-diffusion's Issues

Readme suggestion

A few suggestions for the quick install steps that may make them more accessible to a less technical user:

  1. Add a link to download Miniconda3, if possible.
  2. Explicitly mention the IP address/port to connect to once the server is running.
  3. Clearly state how to relaunch the server. I know it's in the text, but it's a bit hidden; perhaps add a new heading about running it after install.
  4. Explicitly mention how to quit the server.

Linux installation script issues (Module named 'frontend' not found)

I just followed the Linux installation guide and hit this error at step 8:

Traceback (most recent call last):
  File "scripts/webui.py", line 3, in <module>
    from frontend.frontend import draw_gradio_ui
ModuleNotFoundError: No module named 'frontend'
Relauncher: Process is ending. Relaunching in 0.5s...
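For what it's worth, this error usually means Python can't see the repository root on its module search path (for example, the script was launched from a different working directory, so the `frontend` package next to `scripts/` isn't importable). A minimal sketch of a guard that prepends the repo root to `sys.path`, assuming `webui.py` lives in a `scripts/` folder one level below the root; the function name is hypothetical:

```python
import os
import sys


def add_repo_root_to_path(script_path):
    """Prepend the repository root (the parent of scripts/) to sys.path.

    Assumes the layout  <repo_root>/scripts/webui.py  and
    <repo_root>/frontend/ , so `from frontend.frontend import ...`
    resolves regardless of the current working directory.
    """
    repo_root = os.path.dirname(os.path.dirname(os.path.abspath(script_path)))
    if repo_root not in sys.path:
        sys.path.insert(0, repo_root)
    return repo_root
```

Running the script from the repo root (rather than from inside `scripts/`) would likely have the same effect without any code change.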

Conda can't install pip dependencies on Windows 10

Maybe something is wrong with my conda setup (I'm more of a pipenv user), but in my case any attempt to use environment.yaml as-is fails with an error.

However, everything goes just fine if I:

  • manually remove all the pip lines from environment.yaml;
  • create the conda env from this modified file;
  • manually do conda activate ldXXX;
  • ... and do pip install ... (packages specified manually or with requirements.txt; works perfectly either way)
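The first bullet of the workaround above can even be automated. A small text-based sketch (no YAML parser needed) that strips the `- pip:` block from an environment.yaml; the function name, and the assumption that the pip block is a single indented list under a `- pip:` line, are mine:

```python
def strip_pip_section(env_yaml_text):
    """Return environment.yaml text with the '- pip:' block removed.

    Assumption: the pip dependencies are the contiguous lines indented
    more deeply than the '- pip:' line itself.
    """
    out, skipping, pip_indent = [], False, 0
    for line in env_yaml_text.splitlines():
        stripped = line.lstrip()
        indent = len(line) - len(stripped)
        if stripped.startswith("- pip:"):
            skipping, pip_indent = True, indent
            continue
        if skipping and (not stripped or indent > pip_indent):
            continue  # still inside the pip block
        skipping = False
        out.append(line)
    return "\n".join(out)
```

The conda-only dependencies survive untouched, and the removed packages can then be installed manually with pip inside the activated env.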

Here's the full error report when I just do conda env create -n ldZZZ -f environment.yaml.

My Miniconda installation is at: C:\Python\Miniconda3
The local repo is at: P:\1-Scripts\_Python\_neuralNets\StableDiffusion
I have CONDA_ENVS_PATH env var pointing to P:\1-Scripts\_Python\_envs
Command prompt is launched via Miniconda's "Anaconda prompt" shortcut in start menu (%windir%\System32\cmd.exe "/K" C:\Python\Miniconda3\Scripts\activate.bat C:\Python\Miniconda3)

(base) P:\1-Scripts\_Python\_neuralNets\StableDiffusion>call conda env create -n ldZZZ -f environment.yaml
Collecting package metadata (repodata.json): done
Solving environment: done
Preparing transaction: done
Verifying transaction: done
Executing transaction: done
Installing pip dependencies: / Ran pip subprocess with arguments:
['P:\\1-Scripts\\_Python\\_envs\\ldZZZ\\python.exe', '-m', 'pip', 'install', '-U', '-r', 'P:\\1-Scripts\\_Python\\_neuralNets\\StableDiffusion\\condaenv.2w25smqe.requirements.txt']
Pip subprocess output:
Obtaining taming-transformers from git+https://github.com/CompVis/taming-transformers#egg=taming-transformers (from -r P:\1-Scripts\_Python\_neuralNets\StableDiffusion\condaenv.2w25smqe.requirements.txt (line 21))
  Updating p:\1-scripts\_python\_neuralnets\stablediffusion\src\taming-transformers clone
Obtaining clip from git+https://github.com/openai/CLIP#egg=clip (from -r P:\1-Scripts\_Python\_neuralNets\StableDiffusion\condaenv.2w25smqe.requirements.txt (line 22))
  Updating p:\1-scripts\_python\_neuralnets\stablediffusion\src\clip clone
Obtaining GFPGAN from git+https://github.com/TencentARC/GFPGAN#egg=GFPGAN (from -r P:\1-Scripts\_Python\_neuralNets\StableDiffusion\condaenv.2w25smqe.requirements.txt (line 23))
  Updating p:\1-scripts\_python\_neuralnets\stablediffusion\src\gfpgan clone
Obtaining realesrgan from git+https://github.com/xinntao/Real-ESRGAN#egg=realesrgan (from -r P:\1-Scripts\_Python\_neuralNets\StableDiffusion\condaenv.2w25smqe.requirements.txt (line 24))
  Updating p:\1-scripts\_python\_neuralnets\stablediffusion\src\realesrgan clone
Obtaining k_diffusion from git+https://github.com/hlky/k-diffusion-sd#egg=k_diffusion (from -r P:\1-Scripts\_Python\_neuralNets\StableDiffusion\condaenv.2w25smqe.requirements.txt (line 25))
  Updating p:\1-scripts\_python\_neuralnets\stablediffusion\src\k-diffusion clone
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
  Installing backend dependencies: started
  Installing backend dependencies: finished with status 'done'
    Preparing wheel metadata: started
    Preparing wheel metadata: finished with status 'done'
Obtaining file:///P:/1-Scripts/_Python/_neuralNets/StableDiffusion (from -r P:\1-Scripts\_Python\_neuralNets\StableDiffusion\condaenv.2w25smqe.requirements.txt (line 26))
Requirement already satisfied: torch in e:\p-projects\1-scripts\_python\_envs\ldzzz\lib\site-packages (from clip->-r P:\1-Scripts\_Python\_neuralNets\StableDiffusion\condaenv.2w25smqe.requirements.txt (line 22)) (1.11.0)
Requirement already satisfied: torchvision in e:\p-projects\1-scripts\_python\_envs\ldzzz\lib\site-packages (from clip->-r P:\1-Scripts\_Python\_neuralNets\StableDiffusion\condaenv.2w25smqe.requirements.txt (line 22)) (0.12.0)
Requirement already satisfied: numpy<1.21 in e:\p-projects\1-scripts\_python\_envs\ldzzz\lib\site-packages (from GFPGAN->-r P:\1-Scripts\_Python\_neuralNets\StableDiffusion\condaenv.2w25smqe.requirements.txt (line 23)) (1.19.2)
Requirement already satisfied: Pillow in e:\p-projects\1-scripts\_python\_envs\ldzzz\lib\site-packages (from k_diffusion->-r P:\1-Scripts\_Python\_neuralNets\StableDiffusion\condaenv.2w25smqe.requirements.txt (line 25)) (9.2.0)
Collecting accelerate==0.12.0
  Using cached accelerate-0.12.0-py3-none-any.whl (143 kB)
Collecting albumentations==0.4.3
  Using cached albumentations-0.4.3-py3-none-any.whl
Collecting einops==0.3.0
  Using cached einops-0.3.0-py2.py3-none-any.whl (25 kB)
Collecting gradio==3.1.6
  Using cached gradio-3.1.6-py3-none-any.whl (6.1 MB)
Requirement already satisfied: requests in e:\p-projects\1-scripts\_python\_envs\ldzzz\lib\site-packages (from gradio==3.1.6->-r P:\1-Scripts\_Python\_neuralNets\StableDiffusion\condaenv.2w25smqe.requirements.txt (line 16)) (2.28.1)
Collecting imageio==2.9.0
  Using cached imageio-2.9.0-py3-none-any.whl (3.3 MB)
Collecting imageio-ffmpeg==0.4.2
  Using cached imageio_ffmpeg-0.4.2-py3-none-win_amd64.whl (22.6 MB)
Collecting kornia==0.6
  Using cached kornia-0.6.0-py2.py3-none-any.whl (367 kB)
Collecting omegaconf==2.1.1
  Using cached omegaconf-2.1.1-py3-none-any.whl (74 kB)
Collecting opencv-python==4.1.2.30
  Using cached opencv_python-4.1.2.30-cp38-cp38-win_amd64.whl (33.0 MB)
Collecting opencv-python-headless==4.1.2.30
  Using cached opencv_python_headless-4.1.2.30-cp38-cp38-win_amd64.whl (33.0 MB)
Collecting pudb==2019.2
  Using cached pudb-2019.2-py3-none-any.whl
Collecting pynvml==11.4.1
  Using cached pynvml-11.4.1-py3-none-any.whl (46 kB)
Collecting pytorch-lightning==1.4.2
  Using cached pytorch_lightning-1.4.2-py3-none-any.whl (916 kB)
Requirement already satisfied: typing-extensions in e:\p-projects\1-scripts\_python\_envs\ldzzz\lib\site-packages (from pytorch-lightning==1.4.2->-r P:\1-Scripts\_Python\_neuralNets\StableDiffusion\condaenv.2w25smqe.requirements.txt (line 7)) (4.3.0)
Collecting torch-fidelity==0.3.0
  Using cached torch_fidelity-0.3.0-py3-none-any.whl (37 kB)
Collecting torchmetrics==0.6.0
  Using cached torchmetrics-0.6.0-py3-none-any.whl (329 kB)
Collecting transformers==4.19.2
  Using cached transformers-4.19.2-py3-none-any.whl (4.2 MB)
Collecting antlr4-python3-runtime==4.8
  Using cached antlr4_python3_runtime-4.8-py3-none-any.whl
Collecting pyDeprecate==0.3.1
  Using cached pyDeprecate-0.3.1-py3-none-any.whl (10 kB)
Collecting basicsr>=1.3.4.0
  Using cached basicsr-1.4.1-py3-none-any.whl
Collecting facexlib>=0.2.3
  Using cached facexlib-0.2.4-py3-none-any.whl (59 kB)
Collecting streamlit>=0.73.1
  Using cached streamlit-1.12.2-py2.py3-none-any.whl (9.1 MB)
Collecting test-tube>=0.7.5
  Using cached test_tube-0.7.5-py3-none-any.whl
Collecting altair>=3.2.0
  Using cached altair-4.2.0-py3-none-any.whl (812 kB)
Collecting blinker>=1.0.0
  Using cached blinker-1.5-py2.py3-none-any.whl (12 kB)
Collecting cachetools>=4.0
  Using cached cachetools-5.2.0-py3-none-any.whl (9.3 kB)
Collecting click>=7.0
  Using cached click-8.1.3-py3-none-any.whl (96 kB)
Collecting fsspec
  Using cached fsspec-2022.7.1-py3-none-any.whl (141 kB)
Collecting future>=0.17.1
  Using cached future-0.18.2-py3-none-any.whl
Collecting gitpython!=3.1.19
  Using cached GitPython-3.1.27-py3-none-any.whl (181 kB)
Collecting gitdb<5,>=4.0.1
  Using cached gitdb-4.0.9-py3-none-any.whl (63 kB)
Collecting h11<0.13,>=0.11
  Using cached h11-0.12.0-py3-none-any.whl (54 kB)
Collecting huggingface-hub<1.0,>=0.1.0
  Using cached huggingface_hub-0.9.1-py3-none-any.whl (120 kB)
Collecting imgaug<0.2.7,>=0.2.5
  Using cached imgaug-0.2.6-py3-none-any.whl
Requirement already satisfied: six in e:\p-projects\1-scripts\_python\_envs\ldzzz\lib\site-packages (from imgaug<0.2.7,>=0.2.5->albumentations==0.4.3->-r P:\1-Scripts\_Python\_neuralNets\StableDiffusion\condaenv.2w25smqe.requirements.txt (line 1)) (1.16.0)
Collecting importlib-metadata>=1.4
  Using cached importlib_metadata-4.12.0-py3-none-any.whl (21 kB)
Collecting jsonschema>=3.0
  Using cached jsonschema-4.14.0-py3-none-any.whl (82 kB)
Collecting attrs>=17.4.0
  Using cached attrs-22.1.0-py2.py3-none-any.whl (58 kB)
Collecting importlib-resources>=1.4.0
  Using cached importlib_resources-5.9.0-py3-none-any.whl (33 kB)
Collecting packaging>=20.0
  Using cached packaging-21.3-py3-none-any.whl (40 kB)
Collecting pandas
  Using cached pandas-1.4.3-cp38-cp38-win_amd64.whl (10.6 MB)
Collecting pkgutil-resolve-name>=1.3.10
  Using cached pkgutil_resolve_name-1.3.10-py3-none-any.whl (4.7 kB)
Collecting protobuf<4,>=3.12
  Using cached protobuf-3.20.1-cp38-cp38-win_amd64.whl (904 kB)
Collecting pyarrow>=4.0
  Using cached pyarrow-9.0.0-cp38-cp38-win_amd64.whl (19.6 MB)
Collecting pydeck>=0.1.dev5
  Using cached pydeck-0.8.0b1-py2.py3-none-any.whl (4.7 MB)
Collecting Jinja2
  Using cached Jinja2-3.1.2-py3-none-any.whl (133 kB)
Collecting MarkupSafe>=2.0
  Using cached MarkupSafe-2.1.1-cp38-cp38-win_amd64.whl (17 kB)
Collecting pygments>=1.0
  Using cached Pygments-2.13.0-py3-none-any.whl (1.1 MB)
Collecting pympler>=0.9
  Using cached Pympler-1.0.1-py3-none-any.whl (164 kB)
Collecting pyparsing!=3.0.5,>=2.0.2
  Using cached pyparsing-3.0.9-py3-none-any.whl (98 kB)
Collecting pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0
  Using cached pyrsistent-0.18.1-cp38-cp38-win_amd64.whl (61 kB)
Collecting python-dateutil
  Using cached python_dateutil-2.8.2-py2.py3-none-any.whl (247 kB)
Collecting pytz>=2020.1
  Using cached pytz-2022.2.1-py2.py3-none-any.whl (500 kB)
Collecting pyyaml
  Using cached PyYAML-6.0-cp38-cp38-win_amd64.whl (155 kB)
Collecting regex
  Using cached regex-2022.8.17-cp38-cp38-win_amd64.whl (263 kB)
Requirement already satisfied: charset-normalizer<3,>=2 in e:\p-projects\1-scripts\_python\_envs\ldzzz\lib\site-packages (from requests->gradio==3.1.6->-r P:\1-Scripts\_Python\_neuralNets\StableDiffusion\condaenv.2w25smqe.requirements.txt (line 16)) (2.0.4)
Requirement already satisfied: idna<4,>=2.5 in e:\p-projects\1-scripts\_python\_envs\ldzzz\lib\site-packages (from requests->gradio==3.1.6->-r P:\1-Scripts\_Python\_neuralNets\StableDiffusion\condaenv.2w25smqe.requirements.txt (line 16)) (3.3)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in e:\p-projects\1-scripts\_python\_envs\ldzzz\lib\site-packages (from requests->gradio==3.1.6->-r P:\1-Scripts\_Python\_neuralNets\StableDiffusion\condaenv.2w25smqe.requirements.txt (line 16)) (1.26.11)
Requirement already satisfied: certifi>=2017.4.17 in e:\p-projects\1-scripts\_python\_envs\ldzzz\lib\site-packages (from requests->gradio==3.1.6->-r P:\1-Scripts\_Python\_neuralNets\StableDiffusion\condaenv.2w25smqe.requirements.txt (line 16)) (2022.6.15)
Collecting rich>=10.11.0
  Using cached rich-12.5.1-py3-none-any.whl (235 kB)
Collecting commonmark<0.10.0,>=0.9.0
  Using cached commonmark-0.9.1-py2.py3-none-any.whl (51 kB)
Collecting scikit-image
  Using cached scikit_image-0.19.3-cp38-cp38-win_amd64.whl (12.2 MB)
Collecting networkx>=2.2
  Using cached networkx-2.8.6-py3-none-any.whl (2.0 MB)
Collecting PyWavelets>=1.1.1
  Using cached PyWavelets-1.3.0-cp38-cp38-win_amd64.whl (4.2 MB)
Collecting scipy
  Using cached scipy-1.9.1-cp38-cp38-win_amd64.whl (38.6 MB)
Collecting smmap<6,>=3.0.1
  Using cached smmap-5.0.0-py3-none-any.whl (24 kB)
Collecting tensorboard>=2.2.0
  Using cached tensorboard-2.10.0-py3-none-any.whl (5.9 MB)
Requirement already satisfied: setuptools>=41.0.0 in e:\p-projects\1-scripts\_python\_envs\ldzzz\lib\site-packages (from tensorboard>=2.2.0->pytorch-lightning==1.4.2->-r P:\1-Scripts\_Python\_neuralNets\StableDiffusion\condaenv.2w25smqe.requirements.txt (line 7)) (63.4.1)
Requirement already satisfied: wheel>=0.26 in e:\p-projects\1-scripts\_python\_envs\ldzzz\lib\site-packages (from tensorboard>=2.2.0->pytorch-lightning==1.4.2->-r P:\1-Scripts\_Python\_neuralNets\StableDiffusion\condaenv.2w25smqe.requirements.txt (line 7)) (0.37.1)
Collecting absl-py>=0.4
  Using cached absl_py-1.2.0-py3-none-any.whl (123 kB)
Collecting google-auth<3,>=1.6.3
  Using cached google_auth-2.11.0-py2.py3-none-any.whl (167 kB)
Collecting google-auth-oauthlib<0.5,>=0.4.1
  Using cached google_auth_oauthlib-0.4.6-py2.py3-none-any.whl (18 kB)
Collecting grpcio>=1.24.3
  Using cached grpcio-1.47.0-cp38-cp38-win_amd64.whl (3.6 MB)
Collecting markdown>=2.6.8
  Using cached Markdown-3.4.1-py3-none-any.whl (93 kB)
Collecting protobuf<4,>=3.12
  Using cached protobuf-3.19.4-cp38-cp38-win_amd64.whl (895 kB)
Collecting pyasn1-modules>=0.2.1
  Using cached pyasn1_modules-0.2.8-py2.py3-none-any.whl (155 kB)
Collecting pyasn1<0.5.0,>=0.4.6
  Using cached pyasn1-0.4.8-py2.py3-none-any.whl (77 kB)
Collecting requests-oauthlib>=0.7.0
  Using cached requests_oauthlib-1.3.1-py2.py3-none-any.whl (23 kB)
Collecting oauthlib>=3.0.0
  Using cached oauthlib-3.2.0-py3-none-any.whl (151 kB)
Collecting rsa<5,>=3.1.4
  Using cached rsa-4.9-py3-none-any.whl (34 kB)
Collecting tensorboard-data-server<0.7.0,>=0.6.0
  Using cached tensorboard_data_server-0.6.1-py3-none-any.whl (2.4 kB)
Collecting tensorboard-plugin-wit>=1.6.0
  Using cached tensorboard_plugin_wit-1.8.1-py3-none-any.whl (781 kB)
Collecting tifffile>=2019.7.26
  Using cached tifffile-2022.8.12-py3-none-any.whl (208 kB)
Collecting tokenizers!=0.11.3,<0.13,>=0.11.1
  Using cached tokenizers-0.12.1-cp38-cp38-win_amd64.whl (3.3 MB)
Collecting tornado>=5.0
  Using cached tornado-6.2-cp37-abi3-win_amd64.whl (425 kB)
Collecting tqdm
  Using cached tqdm-4.64.0-py2.py3-none-any.whl (78 kB)
Collecting tzlocal>=1.1
  Using cached tzlocal-4.2-py3-none-any.whl (19 kB)
Collecting urwid>=1.1.1
  Using cached urwid-2.1.2-py3-none-any.whl
Collecting validators>=0.2
  Using cached validators-0.20.0-py3-none-any.whl
Collecting decorator>=3.4.0
  Using cached decorator-5.1.1-py3-none-any.whl (9.1 kB)
Collecting werkzeug>=1.0.1
  Using cached Werkzeug-2.2.2-py3-none-any.whl (232 kB)
Collecting zipp>=0.5
  Using cached zipp-3.8.1-py3-none-any.whl (5.6 kB)
Collecting addict
  Using cached addict-2.4.0-py3-none-any.whl (3.8 kB)
Collecting aiohttp
  Using cached aiohttp-3.8.1-cp38-cp38-win_amd64.whl (555 kB)
Collecting aiosignal>=1.1.2
  Using cached aiosignal-1.2.0-py3-none-any.whl (8.2 kB)
Collecting async-timeout<5.0,>=4.0.0a3
  Using cached async_timeout-4.0.2-py3-none-any.whl (5.8 kB)
Collecting frozenlist>=1.1.1
  Using cached frozenlist-1.3.1-cp38-cp38-win_amd64.whl (34 kB)
Collecting multidict<7.0,>=4.5
  Using cached multidict-6.0.2-cp38-cp38-win_amd64.whl (28 kB)
Collecting yarl<2.0,>=1.0
  Using cached yarl-1.8.1-cp38-cp38-win_amd64.whl (56 kB)
Collecting analytics-python
  Using cached analytics_python-1.4.0-py2.py3-none-any.whl (15 kB)
Collecting backoff==1.10.0
  Using cached backoff-1.10.0-py2.py3-none-any.whl (31 kB)
Collecting monotonic>=1.5
  Using cached monotonic-1.6-py2.py3-none-any.whl (8.2 kB)
Collecting backports.zoneinfo
  Using cached backports.zoneinfo-0.2.1-cp38-cp38-win_amd64.whl (38 kB)
Collecting clean-fid
  Using cached clean_fid-0.1.28-py3-none-any.whl (23 kB)
Collecting requests
  Using cached requests-2.25.1-py2.py3-none-any.whl (61 kB)
Collecting chardet<5,>=3.0.2
  Using cached chardet-4.0.0-py2.py3-none-any.whl (178 kB)
Collecting idna<4,>=2.5
  Using cached idna-2.10-py2.py3-none-any.whl (58 kB)
Collecting colorama
  Using cached colorama-0.4.5-py2.py3-none-any.whl (16 kB)
Collecting entrypoints
  Using cached entrypoints-0.4-py3-none-any.whl (5.3 kB)
Collecting fastapi
  Using cached fastapi-0.81.0-py3-none-any.whl (54 kB)
Collecting starlette==0.19.1
  Using cached starlette-0.19.1-py3-none-any.whl (63 kB)
Collecting anyio<5,>=3.4.0
  Using cached anyio-3.6.1-py3-none-any.whl (80 kB)
Collecting pydantic
  Using cached pydantic-1.9.2-cp38-cp38-win_amd64.whl (2.1 MB)
Collecting sniffio>=1.1
  Using cached sniffio-1.2.0-py3-none-any.whl (10 kB)
Collecting ffmpy
  Using cached ffmpy-0.3.0-py3-none-any.whl
Collecting filelock
  Using cached filelock-3.8.0-py3-none-any.whl (10 kB)
Collecting filterpy
  Using cached filterpy-1.4.5-py3-none-any.whl
Collecting ftfy
  Using cached ftfy-6.1.1-py3-none-any.whl (53 kB)
Collecting wcwidth>=0.2.5
  Using cached wcwidth-0.2.5-py2.py3-none-any.whl (30 kB)
Collecting httpx
  Using cached httpx-0.23.0-py3-none-any.whl (84 kB)
Collecting httpcore<0.16.0,>=0.15.0
  Using cached httpcore-0.15.0-py3-none-any.whl (68 kB)
Collecting rfc3986[idna2008]<2,>=1.3
  Using cached rfc3986-1.5.0-py2.py3-none-any.whl (31 kB)
Collecting jsonmerge
  Using cached jsonmerge-1.8.0-py3-none-any.whl
Collecting lmdb
  Using cached lmdb-1.3.0-cp38-cp38-win_amd64.whl (106 kB)
Collecting markdown-it-py[linkify,plugins]
  Using cached markdown_it_py-2.1.0-py3-none-any.whl (84 kB)
Collecting linkify-it-py~=1.0
  Using cached linkify_it_py-1.0.3-py3-none-any.whl (19 kB)
Collecting mdurl~=0.1
  Using cached mdurl-0.1.2-py3-none-any.whl (10.0 kB)
Collecting matplotlib
  Using cached matplotlib-3.5.3-cp38-cp38-win_amd64.whl (7.2 MB)
Collecting cycler>=0.10
  Using cached cycler-0.11.0-py3-none-any.whl (6.4 kB)
Collecting fonttools>=4.22.0
  Using cached fonttools-4.37.1-py3-none-any.whl (957 kB)
Collecting kiwisolver>=1.0.1
  Using cached kiwisolver-1.4.4-cp38-cp38-win_amd64.whl (55 kB)
Collecting mdit-py-plugins
  Using cached mdit_py_plugins-0.3.0-py3-none-any.whl (43 kB)
Collecting numba
  Using cached numba-0.56.0-cp38-cp38-win_amd64.whl (2.5 MB)
Collecting llvmlite<0.40,>=0.39.0dev0
  Using cached llvmlite-0.39.0-cp38-cp38-win_amd64.whl (23.2 MB)
Collecting orjson
  Using cached orjson-3.8.0-cp38-none-win_amd64.whl (197 kB)
Collecting paramiko
  Using cached paramiko-2.11.0-py2.py3-none-any.whl (212 kB)
Requirement already satisfied: cryptography>=2.5 in e:\p-projects\1-scripts\_python\_envs\ldzzz\lib\site-packages (from paramiko->gradio==3.1.6->-r P:\1-Scripts\_Python\_neuralNets\StableDiffusion\condaenv.2w25smqe.requirements.txt (line 16)) (37.0.1)
Collecting bcrypt>=3.1.3
  Using cached bcrypt-4.0.0-cp36-abi3-win_amd64.whl (153 kB)
Requirement already satisfied: cffi>=1.12 in e:\p-projects\1-scripts\_python\_envs\ldzzz\lib\site-packages (from cryptography>=2.5->paramiko->gradio==3.1.6->-r P:\1-Scripts\_Python\_neuralNets\StableDiffusion\condaenv.2w25smqe.requirements.txt (line 16)) (1.15.1)
Requirement already satisfied: pycparser in e:\p-projects\1-scripts\_python\_envs\ldzzz\lib\site-packages (from cffi>=1.12->cryptography>=2.5->paramiko->gradio==3.1.6->-r P:\1-Scripts\_Python\_neuralNets\StableDiffusion\condaenv.2w25smqe.requirements.txt (line 16)) (2.21)
Collecting pynacl>=1.0.1
  Using cached PyNaCl-1.5.0-cp36-abi3-win_amd64.whl (212 kB)
Collecting psutil
  Using cached psutil-5.9.1-cp38-cp38-win_amd64.whl (246 kB)
Collecting pycryptodome
  Using cached pycryptodome-3.15.0-cp35-abi3-win_amd64.whl (1.9 MB)
Collecting pydub
  Using cached pydub-0.25.1-py2.py3-none-any.whl (32 kB)
Collecting python-multipart
  Using cached python_multipart-0.0.5-py3-none-any.whl
Collecting pytz-deprecation-shim
  Using cached pytz_deprecation_shim-0.1.0.post0-py2.py3-none-any.whl (15 kB)
Collecting resize-right
  Using cached resize_right-0.0.2-py3-none-any.whl (8.9 kB)
Collecting semver
  Using cached semver-2.13.0-py2.py3-none-any.whl (12 kB)
Collecting tb-nightly
  Using cached tb_nightly-2.11.0a20220827-py3-none-any.whl (5.9 MB)
Collecting toml
  Using cached toml-0.10.2-py2.py3-none-any.whl (16 kB)
Collecting toolz
  Using cached toolz-0.12.0-py3-none-any.whl (55 kB)
Collecting torchdiffeq
  Using cached torchdiffeq-0.2.3-py3-none-any.whl (31 kB)
Collecting tzdata
  Using cached tzdata-2022.2-py2.py3-none-any.whl (336 kB)
Collecting uc-micro-py
  Using cached uc_micro_py-1.0.1-py3-none-any.whl (6.2 kB)
Collecting uvicorn
  Using cached uvicorn-0.18.3-py3-none-any.whl (57 kB)
Collecting wandb
  Using cached wandb-0.13.2-py2.py3-none-any.whl (1.8 MB)
Collecting docker-pycreds>=0.4.0
  Using cached docker_pycreds-0.4.0-py2.py3-none-any.whl (9.0 kB)
Collecting promise<3,>=2.0
  Using cached promise-2.3-py3-none-any.whl
Collecting sentry-sdk>=1.0.0
  Using cached sentry_sdk-1.9.5-py2.py3-none-any.whl (157 kB)
Collecting shortuuid>=0.5.0
  Using cached shortuuid-1.0.9-py3-none-any.whl (9.4 kB)
Collecting pathtools
  Using cached pathtools-0.1.2-py3-none-any.whl
Collecting setproctitle
  Using cached setproctitle-1.3.2-cp38-cp38-win_amd64.whl (11 kB)
Collecting watchdog
  Using cached watchdog-2.1.9-py3-none-win_amd64.whl (78 kB)
Collecting websockets
  Using cached websockets-10.3-cp38-cp38-win_amd64.whl (98 kB)
Collecting yapf
  Using cached yapf-0.32.0-py2.py3-none-any.whl (190 kB)
Installing collected packages: pyasn1, idna, chardet, zipp, rsa, requests, pyparsing, pyasn1-modules, oauthlib, cachetools, requests-oauthlib, python-dateutil, packaging, MarkupSafe, kiwisolver, importlib-metadata, google-auth, fonttools, cycler, werkzeug, tifffile, tensorboard-plugin-wit, tensorboard-data-server, sniffio, smmap, scipy, PyWavelets, protobuf, networkx, multidict, mdurl, matplotlib, markdown, llvmlite, imageio, grpcio, google-auth-oauthlib, frozenlist, colorama, absl-py, yarl, yapf, uc-micro-py, tzdata, tqdm, tb-nightly, scikit-image, rfc3986, pyyaml, pytz, pyrsistent, pkgutil-resolve-name, opencv-python, numba, markdown-it-py, lmdb, importlib-resources, h11, gitdb, future, filterpy, backports.zoneinfo, attrs, async-timeout, anyio, aiosignal, addict, wcwidth, toolz, starlette, shortuuid, setproctitle, sentry-sdk, pytz-deprecation-shim, pynacl, pygments, pydantic, psutil, promise, pathtools, pandas, monotonic, mdit-py-plugins, linkify-it-py, jsonschema, Jinja2, httpcore, gitpython, fsspec, filelock, facexlib, entrypoints, docker-pycreds, decorator, commonmark, click, bcrypt, basicsr, backoff, aiohttp, websockets, watchdog, wandb, validators, uvicorn, urwid, tzlocal, tornado, torchmetrics, torchdiffeq, toml, tokenizers, tensorboard, semver, rich, resize-right, regex, python-multipart, pympler, pydub, pyDeprecate, pydeck, pycryptodome, pyarrow, paramiko, orjson, opencv-python-headless, kornia, jsonmerge, imgaug, huggingface-hub, httpx, GFPGAN, ftfy, ffmpy, fastapi, einops, clean-fid, blinker, antlr4-python3-runtime, analytics-python, altair, accelerate, transformers, torch-fidelity, test-tube, taming-transformers, streamlit, realesrgan, pytorch-lightning, pynvml, pudb, omegaconf, latent-diffusion, k-diffusion, imageio-ffmpeg, gradio, clip, albumentations
  Attempting uninstall: idna
    Found existing installation: idna 3.3
    Uninstalling idna-3.3:
      Successfully uninstalled idna-3.3
  Attempting uninstall: requests
    Found existing installation: requests 2.28.1
    Uninstalling requests-2.28.1:
      Successfully uninstalled requests-2.28.1
  Running setup.py develop for GFPGAN
  Running setup.py develop for taming-transformers
  Running setup.py develop for realesrgan
  Attempting uninstall: latent-diffusion
    Found existing installation: latent-diffusion 0.0.1
    Can't uninstall 'latent-diffusion'. No files were found to uninstall.
  Running setup.py develop for latent-diffusion
  Running setup.py develop for k-diffusion
  Running setup.py develop for clip
Successfully installed GFPGAN Jinja2-3.1.2 MarkupSafe-2.1.1 PyWavelets-1.3.0 absl-py-1.2.0 accelerate-0.12.0 addict-2.4.0 aiohttp-3.8.1 aiosignal-1.2.0 albumentations-0.4.3 altair-4.2.0 analytics-python-1.4.0 antlr4-python3-runtime-4.8 anyio-3.6.1 async-timeout-4.0.2 attrs-22.1.0 backoff-1.10.0 backports.zoneinfo-0.2.1 basicsr-1.4.1 bcrypt-4.0.0 blinker-1.5 cachetools-5.2.0 chardet-4.0.0 clean-fid-0.1.28 click-8.1.3 clip colorama-0.4.5 commonmark-0.9.1 cycler-0.11.0 decorator-5.1.1 docker-pycreds-0.4.0 einops-0.3.0 entrypoints-0.4 facexlib-0.2.4 fastapi-0.81.0 ffmpy-0.3.0 filelock-3.8.0 filterpy-1.4.5 fonttools-4.37.1 frozenlist-1.3.1 fsspec-2022.7.1 ftfy-6.1.1 future-0.18.2 gitdb-4.0.9 gitpython-3.1.27 google-auth-2.11.0 google-auth-oauthlib-0.4.6 gradio-3.1.6 grpcio-1.47.0 h11-0.12.0 httpcore-0.15.0 httpx-0.23.0 huggingface-hub-0.9.1 idna-2.10 imageio-2.9.0 imageio-ffmpeg-0.4.2 imgaug-0.2.6 importlib-metadata-4.12.0 importlib-resources-5.9.0 jsonmerge-1.8.0 jsonschema-4.14.0 k-diffusion kiwisolver-1.4.4 kornia-0.6.0 latent-diffusion linkify-it-py-1.0.3 llvmlite-0.39.0 lmdb-1.3.0 markdown-3.4.1 markdown-it-py-2.1.0 matplotlib-3.5.3 mdit-py-plugins-0.3.0 mdurl-0.1.2 monotonic-1.6 multidict-6.0.2 networkx-2.8.6 numba-0.56.0 oauthlib-3.2.0 omegaconf-2.1.1 opencv-python-4.1.2.30 opencv-python-headless-4.1.2.30 orjson-3.8.0 packaging-21.3 pandas-1.4.3 paramiko-2.11.0 pathtools-0.1.2 pkgutil-resolve-name-1.3.10 promise-2.3 protobuf-3.19.4 psutil-5.9.1 pudb-2019.2 pyDeprecate-0.3.1 pyarrow-9.0.0 pyasn1-0.4.8 pyasn1-modules-0.2.8 pycryptodome-3.15.0 pydantic-1.9.2 pydeck-0.8.0b1 pydub-0.25.1 pygments-2.13.0 pympler-1.0.1 pynacl-1.5.0 pynvml-11.4.1 pyparsing-3.0.9 pyrsistent-0.18.1 python-dateutil-2.8.2 python-multipart-0.0.5 pytorch-lightning-1.4.2 pytz-2022.2.1 pytz-deprecation-shim-0.1.0.post0 pyyaml-6.0 realesrgan regex-2022.8.17 requests-2.25.1 requests-oauthlib-1.3.1 resize-right-0.0.2 rfc3986-1.5.0 rich-12.5.1 rsa-4.9 scikit-image-0.19.3 scipy-1.9.1 semver-2.13.0 
sentry-sdk-1.9.5 setproctitle-1.3.2 shortuuid-1.0.9 smmap-5.0.0 sniffio-1.2.0 starlette-0.19.1 streamlit-1.12.2 taming-transformers tb-nightly-2.11.0a20220827 tensorboard-2.10.0 tensorboard-data-server-0.6.1 tensorboard-plugin-wit-1.8.1 test-tube-0.7.5 tifffile-2022.8.12 tokenizers-0.12.1 toml-0.10.2 toolz-0.12.0 torch-fidelity-0.3.0 torchdiffeq-0.2.3 torchmetrics-0.6.0 tornado-6.2 tqdm-4.64.0 transformers-4.19.2 tzdata-2022.2 tzlocal-4.2 uc-micro-py-1.0.1 urwid-2.1.2 uvicorn-0.18.3 validators-0.20.0 wandb-0.13.2 watchdog-2.1.9 wcwidth-0.2.5 websockets-10.3 werkzeug-2.2.2 yapf-0.32.0 yarl-1.8.1 zipp-3.8.1

Pip subprocess error:
Usage: chcp infile outfile
Usage: chcp infile outfile
Usage: chcp infile outfile

failed

CondaEnvException: Pip failed

Update wiki

I tried to push an update myself, but got a read-only response.
I tried to change from:

Windows Setup (WIP)

Make sure that Git is not converting newlines into Windows format differently from the repository

git config --global core.autocrlf input

Please comment if you got everything running successfully and any special steps needed.

To:

Windows Setup (WIP*)

Make sure that Git is not converting newlines into Windows format differently from the repository

git config core.autocrlf false
git reset --hard HEAD

* Please comment if you got everything running successfully and any special steps needed.

With comment:

* Changed `git config --global` to local.
* Highlighted the WIP comment.

Update release notes

Suggest adding a section to README.md, or a changelog, to highlight major changes in master. This repo has taken off, and the changes are coming so fast that it can be difficult to keep up with what each new release is actually doing.

Add @altryne to collaborators on main repo

Since this repo uses the webui work I'm doing over on the dev branch, it would be helpful to be a collaborator here as well.
I can help with maintenance, the wiki, and issues regarding the Colab and webui.

your bs broke my chcp

Delete this whole GitHub repo until you can figure out how to make a program that doesn't change paths, break chcp, and ruin entire systems for the inexperienced. Thanks for the coming hours of troubleshooting.

gradio share url does not work, only on localhost

As the title says. Adding share=True at
self.demo.launch(show_error=True, server_name='0.0.0.0', share=True)
produces a shareable public link as expected; however, the link does not actually work. It shows the web UI, but that is about it: generation functionality (i.e. the backend) is lost. Localhost, however, works flawlessly.

Unfortunately, I couldn't find the solution myself, so I'll leave this here as an issue.

[BUG] facexlib version conflict with gfpgan

The repo located here: https://github.com/hlky/facexlib is on version 0.2.4, while the latest gfpgan has a dependency requirement of facexlib>=0.2.5.

The following error occurs when trying to build the environment:

INFO: pip is looking at multiple versions of facexlib to determine which version is compatible with other requirements. This could take a while.
INFO: pip is looking at multiple versions of clip to determine which version is compatible with other requirements. This could take a while.

The conflict is caused by:
The user requested facexlib 0.2.4 (from git+https://github.com/hlky/facexlib#egg=facexlib)
gfpgan 1.3.4 depends on facexlib>=0.2.5
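Pip refuses the combination because 0.2.4 simply cannot satisfy the `>=0.2.5` specifier, no matter how long the resolver backtracks. A toy illustration of that version comparison (the `satisfies` helper is hypothetical and handles only `>=` and `==`; real pip applies the full PEP 440 rules):

```python
def satisfies(version, spec):
    """Toy version-specifier check for '>=X.Y.Z' and '==X.Y.Z'.

    Illustration only: real pip uses the `packaging` library's
    complete PEP 440 specifier semantics.
    """
    op, required = spec[:2], spec[2:]
    v = tuple(int(p) for p in version.split("."))
    r = tuple(int(p) for p in required.split("."))
    return v >= r if op == ">=" else v == r
```

So the pinned facexlib 0.2.4 can never coexist with gfpgan 1.3.4's `facexlib>=0.2.5` requirement; one of the two pins has to move.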

Resize mode crashes on img2img

I get this error if I change resize mode to "crop and resize" or "resize and fill" on img2img.

!!Runtime error (img2img)!!
Given groups=1, weight of size [128, 3, 3, 3], expected input[1, 4, 512, 512] to have 3 channels, but got 4 channels instead
exiting...calling os._exit(0)
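The shape mismatch (an `input[1, 4, 512, 512]` against a 3-channel conv weight) suggests the resized or padded image reaches the model as RGBA instead of RGB, plausibly because the crop/fill step introduces an alpha channel. A minimal sketch of the missing conversion on a plain nested-list pixel array; in the real code this would more likely be a PIL `image.convert('RGB')` or a tensor slice, and the helper name is mine:

```python
def drop_alpha(pixels):
    """Drop the alpha channel from an H x W x 4 pixel array (nested lists),
    yielding H x W x 3 -- the 3-channel input the first conv layer expects."""
    return [[px[:3] for px in row] for row in pixels]
```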

Negative Classifier Free Guidance Scale

Let the slider for the Classifier-Free Guidance Scale go to negative numbers; it works and yields very nice results by removing or substituting the things associated with the prompt.

An example image was attached to the original issue.
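The suggestion is grounded in how classifier-free guidance is computed: the guided noise prediction blends the unconditional and conditional predictions, so a negative scale steers the sample away from the prompt direction rather than toward it. A scalar sketch (the function name is mine; in the real sampler these operands are tensors):

```python
def cfg_combine(eps_uncond, eps_cond, scale):
    """Classifier-free guidance: eps_uncond + scale * (eps_cond - eps_uncond).

    scale > 1 amplifies the prompt direction; scale < 0 pushes the
    prediction away from it, which is what the negative slider exploits.
    """
    return eps_uncond + scale * (eps_cond - eps_uncond)
```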

Multi-GPU

Thank you for this lovely project! Are there any plans to support more than one GPU? A few transformer projects (like aitextgen) let you combine two or more GPUs to perform work. Thank you in advance.

FileNotFoundError: [Errno 2] resulting in no Yaml nor image being pushed to webui output

I've been getting this error with both img2img and txt2img, but not every time, and I'm not quite sure what is happening. Often when this error happens with img2img I get no results at all; when it happens with txt2img, I get the image in my output folder but no YAML, and the result isn't shown in the webui output. I just updated this morning from the stable branch.

Traceback (most recent call last):
  File "C:\Users\theha\.conda\envs\ldo\lib\site-packages\gradio\routes.py", line 247, in run_predict
    output = await app.blocks.process_api(
  File "C:\Users\theha\.conda\envs\ldo\lib\site-packages\gradio\blocks.py", line 641, in process_api
    predictions, duration = await self.call_function(fn_index, processed_input)
  File "C:\Users\theha\.conda\envs\ldo\lib\site-packages\gradio\blocks.py", line 556, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\Users\theha\.conda\envs\ldo\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\Users\theha\.conda\envs\ldo\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "C:\Users\theha\.conda\envs\ldo\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "scripts/webui.py", line 926, in txt2img
    output_images, seed, info, stats = process_images(
  File "scripts/webui.py", line 826, in process_images
    save_sample(image, sample_path_i, filename, jpg_sample, prompts, seeds, width, height, steps, cfg_scale,
  File "scripts/webui.py", line 553, in save_sample
    with open(f"{filename_i}.yaml", "w", encoding="utf8") as f:
FileNotFoundError: [Errno 2] No such file or directory: 'outputs/txt2img-samples\\samples\\a_beautiful_furry_fox-goddess_standing_in_front_of_a_portal_to_the_astral_plane.__by_James_Gurney_and_Greg_Rutowski.__Trending_o\\00001-50_k_euler_a_2544649136.yaml'
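Two likely culprits, both guesses: the prompt-derived sample sub-folder not existing at the moment the YAML is written, or the very long folder name pushing the full path past Windows' default 260-character MAX_PATH limit. A sketch of a guard against the first case (the helper name is mine; it would wrap the `open(...)` call in `save_sample`):

```python
import os


def safe_open_write(path):
    """Create any missing parent directories before opening for write.

    A guard against FileNotFoundError when the per-prompt sample
    sub-folder has not been created yet.
    """
    os.makedirs(os.path.dirname(path) or ".", exist_ok=True)
    return open(path, "w", encoding="utf8")
```

If the cause is instead the MAX_PATH limit, truncating the prompt-derived folder name (or enabling long-path support in Windows) would be the fix; this guard would not help there.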

Masking Issues

Finally, I have updated, but the same problems I opened an issue about before remain. Foremost, the cursor is offset from the 'mask cursor' palette circle that you paint with. I am not sure why this happens; sometimes resizing the window fixes it a bit, but the cursor often still trails behind the mask palette.

Also, why is there no slider to adjust the palette size for the mask? This is an essential efficiency feature that should come before the more experimental upgrades; working with a tiny pixel brush makes the masking process too tedious. A size slider would make the workflow for masking images much smoother.

Launch() Share=true location

Hello, in previous versions there was a Launch() at the bottom of the webui script. Where would we put it now? I attempted to put it in the only Launch() that was tied to the relaunch script, and it errored out.

Suggestion - Add a changelog

It may already be present, but I can't find it.

Thank you for your effort in producing this; as a suggestion, I'd request a changelog to better understand what's changing between commits/releases.

This project is changing so fast that it's very hard to tell immediately whether 10 commits are refinements or a big new feature without checking the specific commit contents.

facexlib version conflict: gfpgan 1.3.4 depends on facexlib>=0.2.5

Summary

I've followed the installation instructions carefully, the server successfully starts, and I can use both Stable Diffusion and GFPGAN. However, when I run webui.cmd, I see a conflict message:

The conflict is caused by:
    The user requested facexlib 0.2.4 (from git+https://github.com/hlky/facexlib#egg=facexlib)
    gfpgan 1.3.4 depends on facexlib>=0.2.5

To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip attempt to solve the dependency conflict


Pip subprocess error:
ERROR: Cannot install -r D:\GitHub\stable-diffusion\condaenv.0cqgpxpp.requirements.txt (line 23) and facexlib 0.2.4 (from git+https://github.com/hlky/facexlib#egg=facexlib) because these package versions have conflicting dependencies.
ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/user_guide/#fixing-conflicting-dependencies

Analysis

I'm a Python novice, but I think I can see why this is happening. This repo's environment.yaml refers to https://github.com/hlky/facexlib :

https://github.com/hlky/stable-diffusion/blob/fa977b3d6f9d0b264035c949fd70415476f00036/environment.yaml#L33

And https://github.com/hlky/facexlib/blob/master/VERSION currently identifies itself as 0.2.4.

environment.yaml also refers to https://github.com/TencentARC/GFPGAN :

https://github.com/hlky/stable-diffusion/blob/fa977b3d6f9d0b264035c949fd70415476f00036/environment.yaml#L36

But https://github.com/TencentARC/GFPGAN/blob/master/requirements.txt#L2 currently requires facexlib>=0.2.5. This appeared in a commit literally one hour ago as I'm typing this, TencentARC/GFPGAN@3e27784 , so I may be the first person who's seen this.

It appears that TencentARC/GFPGAN is expecting the upstream repo https://github.com/xinntao/facexlib where https://github.com/xinntao/facexlib/blob/master/VERSION currently identifies itself as 0.2.5, due to a commit eight hours ago, xinntao/facexlib@7655b7c .

Possible fixes

My novice guess is that hlky/facexlib should take an update from upstream xinntao/facexlib. Alternatively, perhaps the version of TencentARC/GFPGAN used by this repo needs to be "pinned".
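If the maintainers wanted to guard against this class of breakage in general, one option (purely a sketch; `<commit-or-tag>` below is a placeholder, not a real ref) would be to pin the git dependencies in environment.yaml to a fixed ref instead of tracking master:

```yaml
# Hypothetical sketch for environment.yaml: pin the git dependencies to a
# fixed ref instead of tracking master. <commit-or-tag> is a placeholder.
- pip:
    - git+https://github.com/TencentARC/GFPGAN@<commit-or-tag>#egg=GFPGAN
    - git+https://github.com/hlky/facexlib@<commit-or-tag>#egg=facexlib
```

With pins like this, an upstream commit such as the one that bumped GFPGAN's facexlib requirement couldn't break fresh installs.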

Full webui.cmd output

If it helps, here is the exact text I'm seeing:

Verbatim output from webui.cmd:
D:\GitHub\stable-diffusion>webui
anaconda3/miniconda3 detected in C:\Users\stl\miniconda3

CondaValueError: prefix already exists: C:\Users\stl\miniconda3\envs\ldo

Collecting package metadata (repodata.json): done
Solving environment: done
Installing pip dependencies: \ Ran pip subprocess with arguments:
['C:\\Users\\stl\\miniconda3\\envs\\ldo\\python.exe', '-m', 'pip', 'install', '-U', '-r', 'D:\\GitHub\\stable-diffusion\\condaenv.0cqgpxpp.requirements.txt']
Pip subprocess output:
Requirement already satisfied: albumentations==0.4.3 in c:\users\stl\miniconda3\envs\ldo\lib\site-packages (from -r D:\GitHub\stable-diffusion\condaenv.0cqgpxpp.requirements.txt (line 1)) (0.4.3)
Requirement already satisfied: opencv-python==4.1.2.30 in c:\users\stl\miniconda3\envs\ldo\lib\site-packages (from -r D:\GitHub\stable-diffusion\condaenv.0cqgpxpp.requirements.txt (line 2)) (4.1.2.30)
Requirement already satisfied: opencv-python-headless==4.1.2.30 in c:\users\stl\miniconda3\envs\ldo\lib\site-packages (from -r D:\GitHub\stable-diffusion\condaenv.0cqgpxpp.requirements.txt (line 3)) (4.1.2.30)
Requirement already satisfied: pudb==2019.2 in c:\users\stl\miniconda3\envs\ldo\lib\site-packages (from -r D:\GitHub\stable-diffusion\condaenv.0cqgpxpp.requirements.txt (line 4)) (2019.2)
Requirement already satisfied: imageio==2.9.0 in c:\users\stl\miniconda3\envs\ldo\lib\site-packages (from -r D:\GitHub\stable-diffusion\condaenv.0cqgpxpp.requirements.txt (line 5)) (2.9.0)
Requirement already satisfied: imageio-ffmpeg==0.4.2 in c:\users\stl\miniconda3\envs\ldo\lib\site-packages (from -r D:\GitHub\stable-diffusion\condaenv.0cqgpxpp.requirements.txt (line 6)) (0.4.2)
Requirement already satisfied: pytorch-lightning==1.4.2 in c:\users\stl\miniconda3\envs\ldo\lib\site-packages (from -r D:\GitHub\stable-diffusion\condaenv.0cqgpxpp.requirements.txt (line 7)) (1.4.2)
Requirement already satisfied: omegaconf==2.1.1 in c:\users\stl\miniconda3\envs\ldo\lib\site-packages (from -r D:\GitHub\stable-diffusion\condaenv.0cqgpxpp.requirements.txt (line 8)) (2.1.1)
Requirement already satisfied: test-tube>=0.7.5 in c:\users\stl\miniconda3\envs\ldo\lib\site-packages (from -r D:\GitHub\stable-diffusion\condaenv.0cqgpxpp.requirements.txt (line 9)) (0.7.5)
Requirement already satisfied: streamlit>=0.73.1 in c:\users\stl\miniconda3\envs\ldo\lib\site-packages (from -r D:\GitHub\stable-diffusion\condaenv.0cqgpxpp.requirements.txt (line 10)) (1.12.2)
Requirement already satisfied: einops==0.3.0 in c:\users\stl\miniconda3\envs\ldo\lib\site-packages (from -r D:\GitHub\stable-diffusion\condaenv.0cqgpxpp.requirements.txt (line 11)) (0.3.0)
Requirement already satisfied: torch-fidelity==0.3.0 in c:\users\stl\miniconda3\envs\ldo\lib\site-packages (from -r D:\GitHub\stable-diffusion\condaenv.0cqgpxpp.requirements.txt (line 12)) (0.3.0)
Requirement already satisfied: transformers==4.19.2 in c:\users\stl\miniconda3\envs\ldo\lib\site-packages (from -r D:\GitHub\stable-diffusion\condaenv.0cqgpxpp.requirements.txt (line 13)) (4.19.2)
Requirement already satisfied: torchmetrics==0.6.0 in c:\users\stl\miniconda3\envs\ldo\lib\site-packages (from -r D:\GitHub\stable-diffusion\condaenv.0cqgpxpp.requirements.txt (line 14)) (0.6.0)
Requirement already satisfied: kornia==0.6 in c:\users\stl\miniconda3\envs\ldo\lib\site-packages (from -r D:\GitHub\stable-diffusion\condaenv.0cqgpxpp.requirements.txt (line 15)) (0.6.0)
Requirement already satisfied: gradio==3.1.6 in c:\users\stl\miniconda3\envs\ldo\lib\site-packages (from -r D:\GitHub\stable-diffusion\condaenv.0cqgpxpp.requirements.txt (line 16)) (3.1.6)
Requirement already satisfied: accelerate==0.12.0 in c:\users\stl\miniconda3\envs\ldo\lib\site-packages (from -r D:\GitHub\stable-diffusion\condaenv.0cqgpxpp.requirements.txt (line 17)) (0.12.0)
Requirement already satisfied: pynvml==11.4.1 in c:\users\stl\miniconda3\envs\ldo\lib\site-packages (from -r D:\GitHub\stable-diffusion\condaenv.0cqgpxpp.requirements.txt (line 18)) (11.4.1)
Requirement already satisfied: basicsr>=1.3.4.0 in c:\users\stl\miniconda3\envs\ldo\lib\site-packages (from -r D:\GitHub\stable-diffusion\condaenv.0cqgpxpp.requirements.txt (line 19)) (1.4.2)
Obtaining facexlib from git+https://github.com/hlky/facexlib#egg=facexlib (from -r D:\GitHub\stable-diffusion\condaenv.0cqgpxpp.requirements.txt (line 20))
  Updating d:\github\stable-diffusion\src\facexlib clone
Obtaining taming-transformers from git+https://github.com/CompVis/taming-transformers#egg=taming-transformers (from -r D:\GitHub\stable-diffusion\condaenv.0cqgpxpp.requirements.txt (line 21))
  Updating d:\github\stable-diffusion\src\taming-transformers clone
Obtaining clip from git+https://github.com/openai/CLIP#egg=clip (from -r D:\GitHub\stable-diffusion\condaenv.0cqgpxpp.requirements.txt (line 22))
  Updating d:\github\stable-diffusion\src\clip clone
Obtaining GFPGAN from git+https://github.com/TencentARC/GFPGAN#egg=GFPGAN (from -r D:\GitHub\stable-diffusion\condaenv.0cqgpxpp.requirements.txt (line 23))
  Updating d:\github\stable-diffusion\src\gfpgan clone
Obtaining realesrgan from git+https://github.com/xinntao/Real-ESRGAN#egg=realesrgan (from -r D:\GitHub\stable-diffusion\condaenv.0cqgpxpp.requirements.txt (line 24))
  Updating d:\github\stable-diffusion\src\realesrgan clone
Obtaining k_diffusion from git+https://github.com/hlky/k-diffusion-sd#egg=k_diffusion (from -r D:\GitHub\stable-diffusion\condaenv.0cqgpxpp.requirements.txt (line 25))
  Updating d:\github\stable-diffusion\src\k-diffusion clone
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
  Installing backend dependencies: started
  Installing backend dependencies: finished with status 'done'
    Preparing wheel metadata: started
    Preparing wheel metadata: finished with status 'done'
Obtaining file:///D:/GitHub/stable-diffusion (from -r D:\GitHub\stable-diffusion\condaenv.0cqgpxpp.requirements.txt (line 26))
Requirement already satisfied: ftfy in c:\users\stl\miniconda3\envs\ldo\lib\site-packages (from clip->-r D:\GitHub\stable-diffusion\condaenv.0cqgpxpp.requirements.txt (line 22)) (6.1.1)
Requirement already satisfied: regex in c:\users\stl\miniconda3\envs\ldo\lib\site-packages (from clip->-r D:\GitHub\stable-diffusion\condaenv.0cqgpxpp.requirements.txt (line 22)) (2022.8.17)
Requirement already satisfied: tqdm in c:\users\stl\miniconda3\envs\ldo\lib\site-packages (from clip->-r D:\GitHub\stable-diffusion\condaenv.0cqgpxpp.requirements.txt (line 22)) (4.64.0)
Requirement already satisfied: torch in c:\users\stl\miniconda3\envs\ldo\lib\site-packages (from clip->-r D:\GitHub\stable-diffusion\condaenv.0cqgpxpp.requirements.txt (line 22)) (1.11.0)
Requirement already satisfied: torchvision in c:\users\stl\miniconda3\envs\ldo\lib\site-packages (from clip->-r D:\GitHub\stable-diffusion\condaenv.0cqgpxpp.requirements.txt (line 22)) (0.12.0)
Requirement already satisfied: filterpy in c:\users\stl\miniconda3\envs\ldo\lib\site-packages (from facexlib->-r D:\GitHub\stable-diffusion\condaenv.0cqgpxpp.requirements.txt (line 20)) (1.4.5)
Requirement already satisfied: numba in c:\users\stl\miniconda3\envs\ldo\lib\site-packages (from facexlib->-r D:\GitHub\stable-diffusion\condaenv.0cqgpxpp.requirements.txt (line 20)) (0.56.0)
Requirement already satisfied: numpy in c:\users\stl\miniconda3\envs\ldo\lib\site-packages (from facexlib->-r D:\GitHub\stable-diffusion\condaenv.0cqgpxpp.requirements.txt (line 20)) (1.19.2)
Requirement already satisfied: Pillow in c:\users\stl\miniconda3\envs\ldo\lib\site-packages (from facexlib->-r D:\GitHub\stable-diffusion\condaenv.0cqgpxpp.requirements.txt (line 20)) (9.2.0)
Requirement already satisfied: scipy in c:\users\stl\miniconda3\envs\ldo\lib\site-packages (from facexlib->-r D:\GitHub\stable-diffusion\condaenv.0cqgpxpp.requirements.txt (line 20)) (1.9.1)
INFO: pip is looking at multiple versions of facexlib to determine which version is compatible with other requirements. This could take a while.
INFO: pip is looking at multiple versions of clip to determine which version is compatible with other requirements. This could take a while.

The conflict is caused by:
    The user requested facexlib 0.2.4 (from git+https://github.com/hlky/facexlib#egg=facexlib)
    gfpgan 1.3.4 depends on facexlib>=0.2.5

To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip attempt to solve the dependency conflict


Pip subprocess error:
ERROR: Cannot install -r D:\GitHub\stable-diffusion\condaenv.0cqgpxpp.requirements.txt (line 23) and facexlib 0.2.4 (from git+https://github.com/hlky/facexlib#egg=facexlib) because these package versions have conflicting dependencies.
ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/user_guide/#fixing-conflicting-dependencies

failed

CondaEnvException: Pip failed

Relauncher: Launching...
Loaded GFPGAN
Loaded RealESRGAN with model RealESRGAN_x4plus
Loading model from models/ldm/stable-diffusion-v1/model.ckpt
Global Step: 470000
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla' with 512 in_channels
Running on local URL:  http://localhost:7860/

To create a public link, set `share=True` in `launch()`.

Thanks!

(Thanks for your amazing repo! Getting started was very easy, and it runs much faster than other forks I've tried. 😸 Although this version conflict doesn't seem to interfere with usability, I'm reporting it to help improve the experience.)

The colab launches the webui from the wrong folder

In the colab, the final launch command is:
!python /content/stable-diffusion/scripts/webui.py \
  --ckpt '{models_path}/sd-v1-4.ckpt' \
  --outdir '{output_path}' \
  --share {vars}

This throws an error because webui.py tries to find optimizedSD/v1-inference.yaml in the /content directory.

An easy solution is:
%cd /content/stable-diffusion
!python scripts/webui.py \
  --ckpt '{models_path}/sd-v1-4.ckpt' \
  --outdir '{output_path}' \
  --share {vars}

img2img broken due to goBIG error (not involving optimized mode)

Started getting this error after updating a few minutes ago, currently running on Linux. I even deleted my setup and rebuilt it, didn't enable optimized mode, and this gets triggered when trying to do anything with img2img:

Traceback (most recent call last):
  File "/home/JKimsey/anaconda3/envs/lsd/lib/python3.8/site-packages/gradio/routes.py", line 247, in run_predict
    output = await app.blocks.process_api(
  File "/home/JKimsey/anaconda3/envs/lsd/lib/python3.8/site-packages/gradio/blocks.py", line 641, in process_api
    predictions, duration = await self.call_function(fn_index, processed_input)
  File "/home/JKimsey/anaconda3/envs/lsd/lib/python3.8/site-packages/gradio/blocks.py", line 556, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/home/JKimsey/anaconda3/envs/lsd/lib/python3.8/site-packages/anyio/to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "/home/JKimsey/anaconda3/envs/lsd/lib/python3.8/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "/home/JKimsey/anaconda3/envs/lsd/lib/python3.8/site-packages/anyio/_backends/_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "scripts/webui.py", line 1428, in img2img
    output_images, seed, info, stats = process_images(
TypeError: process_images() missing 2 required positional arguments: 'gobig_strength' and 'gobig_steps'
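The traceback suggests the img2img call site wasn't updated when process_images grew two new goBIG parameters. A minimal sketch of one way to keep older call sites working; all names and default values here are hypothetical stand-ins, not the repo's actual signature:

```python
# Hypothetical sketch: give the new goBIG parameters defaults so older call
# sites that don't pass them (like img2img here) keep working. Names and
# default values are illustrative, not the repo's actual signature.
def process_images(prompt, gobig_strength=0.3, gobig_steps=150):
    # Stand-in body; the real function renders images.
    return {"prompt": prompt,
            "gobig_strength": gobig_strength,
            "gobig_steps": gobig_steps}

# An old-style call that omits the goBIG arguments no longer raises TypeError:
result = process_images("a castle on a hill")
```

The alternative fix, of course, is to update the img2img call site to pass both new arguments explicitly.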

[Potential BUG] Variation seeds should be stored in generated image yaml files

I'm gonna put this here as it seems like it's a potential bug, or at least an issue that needs resolving.

I noticed when using the Variations feature that the seeds and stats used for those images aren't saved in their respective YAML files. This makes it incredibly hard to go back and figure out exactly which parameters were used for a specific image. Given the other output data already stored in those files, the variations data should definitely be stored there as well.

RealESRGAN and GFPGAN tabs batch option

Just wondering if it's possible to allow more than one image at a time in the upscaling and face-fixing tabs? I have a large number of images I generated from before this was an option, so I'd like to feed them through the upscaler in a large batch of 500+.

Right now it's a bit slow, as I need to load an image, convert it, save the output file, close the loaded image, load another, and so on.

Prompt longer than 123 characters crashes

Traceback (most recent call last):
  File "C:\Users\My Username\.conda\envs\ldo\lib\site-packages\gradio\routes.py", line 247, in run_predict
    output = await app.blocks.process_api(
  File "C:\Users\My Username\.conda\envs\ldo\lib\site-packages\gradio\blocks.py", line 641, in process_api
    predictions, duration = await self.call_function(fn_index, processed_input)
  File "C:\Users\My Username\.conda\envs\ldo\lib\site-packages\gradio\blocks.py", line 556, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\Users\My Username\.conda\envs\ldo\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\Users\My Username\.conda\envs\ldo\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "C:\Users\My Username\.conda\envs\ldo\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "scripts/webui.py", line 926, in txt2img
    output_images, seed, info, stats = process_images(
  File "scripts/webui.py", line 826, in process_images
    save_sample(image, sample_path_i, filename, jpg_sample, prompts, seeds, width, height, steps, cfg_scale,
  File "scripts/webui.py", line 515, in save_sample
    image.save(f"{filename_i}.png")
  File "C:\Users\My Username\.conda\envs\ldo\lib\site-packages\PIL\Image.py", line 2317, in save
    fp = builtins.open(filename, "w+b")
FileNotFoundError: [Errno 2] No such file or directory: 'outputs/txt2img-samples\samples\A_baby_gorilla_sits_on_a_large_flat_rock_surface,_holds_a_bamboo_branch_in_one_hand,_and_hugs_a_larger_gorilla’s_leg_with_the_ot\00000-17_k_lms_1344767371.png'

This didn't happen in older versions, where folders were not created for each prompt.

Tested in Windows 11.
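A possible mitigation sketch: since the webui appears to build the output folder name from the full prompt, capping the sanitized prompt at a fixed length would keep paths under Windows' roughly 260-character limit. The helper name and cap below are hypothetical, not repo code:

```python
import re

MAX_COMPONENT = 128  # hypothetical cap; full Windows paths break around 260 chars

def prompt_to_filename(prompt: str, limit: int = MAX_COMPONENT) -> str:
    """Sanitize a prompt into a safe folder/file name component."""
    # Replace characters that are illegal in Windows filenames, plus whitespace.
    safe = re.sub(r'[\\/:*?"<>|\s]+', "_", prompt).strip("_")
    # Truncate so the full output path stays under the OS limit.
    return safe[:limit]

name = prompt_to_filename("A baby gorilla sits on a large flat rock surface, " * 5)
```

The same cap would need to be applied wherever the per-prompt folder name is built, so the .png and .yaml paths stay consistent.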

KeyError: 'state_dict'

I tried to copy the weights from "stable-diffusion-v1-4/unet/diffusion_pytorch_model.bin" to models/ldm/stable-diffusion-v1/model.ckpt and run webuildm.cmd on Windows, but I'm getting the following error:

Loaded GFPGAN
Loaded RealESRGAN with model RealESRGAN_x4plus
Loading model from models/ldm/stable-diffusion-v1/model.ckpt
Traceback (most recent call last):
  File "scripts/webui.py", line 344, in <module>
    model = load_model_from_config(config, "models/ldm/stable-diffusion-v1/model.ckpt")
  File "scripts/webui.py", line 123, in load_model_from_config
    sd = pl_sd["state_dict"]
KeyError: 'state_dict'
Relauncher: Process is ending. Relaunching in 0.5s...

Is that even the correct file?
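Most likely not: that diffusers-style .bin lives in the unet/ subfolder, so it only holds the UNet weights and is stored as a bare state dict, whereas the full sd-v1-4.ckpt wraps everything under a "state_dict" key (plus metadata like global_step), which is what the loader indexes into. A hedged sketch of the structural difference the loader trips over; extract_state_dict is a hypothetical helper, not repo code:

```python
# Hedged sketch: the full .ckpt wraps weights under "state_dict" (plus
# metadata such as global_step), while a bare diffusers .bin *is* the
# state dict itself. extract_state_dict is a hypothetical helper.
def extract_state_dict(pl_sd):
    return pl_sd["state_dict"] if "state_dict" in pl_sd else pl_sd

full_ckpt = {"state_dict": {"model.weight": 1.0}, "global_step": 470000}
bare_bin = {"model.weight": 1.0}
```

Even with a tolerant loader like this, the UNet-only .bin wouldn't contain the VAE or text-encoder weights the webui expects, so downloading the actual sd-v1-4.ckpt is the real fix.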

Crop tool in img2img not working properly on larger images

Running this locally, everything is working great except the cropping function in img2img on some photos I took with my phone.

I can crop and properly run prompts on the demo image, and I can load and run my own images without cropping. Once I upload an image and click the pencil edit icon to crop, I can move the crop boundary lines properly, but once I finish cropping, enter my prompt, and hit the 'Generate' button, the results field displays "loading" as expected, while in the terminal I can see the model is not running. After this, if I upload another image, I cannot run the model even without cropping.
If I load an image and click the pencil button to edit but do not move the image boundaries, I can still run the model. But once I move a boundary, it will not run.

Refreshing the tab resets everything and I can run img2img again, but once I try to crop I again get locked out from running anything.

After further testing, the tool was working with some images downloaded from the web. I reduced the size of my problematic photo to half (from 3072x4080, 2.93 MB to 1536x2040, 1.64 MB) and the tool works. So it seems it may be something related to the size of the image.

Oddly enough, the mask tool seems to work fine; it's just the crop tool (both the basic and advanced editor) that has this problem.

Runtime error (img2img)

Ran img2img changed some settings to run again and got this which crashed the docker.

!!Runtime error (img2img)!! 
 Given groups=1, weight of size [128, 3, 3, 3], expected input[1, 4, 832, 384] to have 3 channels, but got 4 channels instead
exiting...calling os._exit(0)
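The shape [1, 4, 832, 384] suggests a 4-channel (RGBA) init image reached a convolution that expects 3-channel RGB. A workaround sketch on the user side is to flatten the image to RGB before feeding it in; ensure_rgb is a hypothetical helper, not part of the repo:

```python
from PIL import Image

def ensure_rgb(img: Image.Image) -> Image.Image:
    """Drop the alpha channel so downstream convs see 3 channels, not 4."""
    return img.convert("RGB") if img.mode != "RGB" else img

# An RGBA image (e.g. a PNG with transparency) becomes plain RGB:
rgba = Image.new("RGBA", (8, 8), (255, 0, 0, 128))
rgb = ensure_rgb(rgba)
```

The more robust fix would be for the webui itself to call convert("RGB") on uploaded init images before encoding them.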

docker-compose up error: No module named 'k_diffusion'

Ran docker-compose up and got an error: `ModuleNotFoundError: No module named 'k_diffusion'`. Full output here:

$ docker-compose up
[+] Running 1/0
 ⠿ Container sd  Created                                                                                                                         0.0s
Attaching to sd
sd  | Validating model files...
sd  | checking model.ckpt...
sd  | model.ckpt is valid!
sd  | 
sd  | checking GFPGANv1.3.pth...
sd  | GFPGANv1.3.pth is valid!
sd  | 
sd  | checking RealESRGAN_x4plus.pth...
sd  | RealESRGAN_x4plus.pth is valid!
sd  | 
sd  | checking RealESRGAN_x4plus_anime_6B.pth...
sd  | RealESRGAN_x4plus_anime_6B.pth is valid!
sd  | 
sd  | Relauncher: Launching...
sd  | Traceback (most recent call last):
sd  |   File "scripts/webui.py", line 36, in <module>
sd  |     import k_diffusion as K
sd  | ModuleNotFoundError: No module named 'k_diffusion'
sd  | Relauncher: Process is ending. Relaunching in 0.5s...
sd  | Relauncher: Launching...
sd  |   Relaunch count: 1
sd  | Traceback (most recent call last):
sd  |   File "scripts/webui.py", line 36, in <module>
sd  |     import k_diffusion as K
sd  | ModuleNotFoundError: No module named 'k_diffusion'
sd  | Relauncher: Process is ending. Relaunching in 0.5s...

img2img_k.py - variable typo: Line 282: x_samples should be: x_samples_ddim

Simple typo in img2img_k.py as per below.
line 282: x_samples should be: x_samples_ddim

                    samples_ddim = K.sampling.sample_lms(model_wrap_cfg, xi, sigma_sched, extra_args=extra_args, disable=not accelerator.is_main_process)
                    x_samples_ddim = model.decode_first_stage(samples_ddim)
                    x_samples_ddim = torch.clamp((x_samples_ddim + 1.0) / 2.0, min=0.0, max=1.0)
                    x_samples_ddim = accelerator.gather(x_samples_ddim)

                    if accelerator.is_main_process and not opt.skip_save:
                        for x_sample in x_samples:  # BUG: x_samples is undefined; should be x_samples_ddim
                            x_sample = 255. * rearrange(x_sample.cpu().numpy(), 'c h w -> h w c')
                            Image.fromarray(x_sample.astype(np.uint8)).save(
                                os.path.join(sample_path, f"{base_count:05}.png"))
                            base_count += 1
                            
                    if accelerator.is_main_process and not opt.skip_grid:
                        all_samples.append(x_samples_ddim)


[BUG] Output not displayed when generating a 3 or more batches after a previous generation

Describe the bug
I am currently using altryne's colab for the WebUI, and any generations with 3 or more batches of images don't output onto Gradio. However, they save to Google Drive perfectly fine. I'll put much more detail to (hopefully) help debug in "additional context".

To Reproduce
Steps to reproduce the behavior:

  1. Run Stable Diffusion on the colab (run all -> run after -> open gradio link)
  2. Set batch count to 3 or more, keeping all other settings as default (I've tested with 3, 4, and 5 batches)
  3. After generation is complete, generate another set of images with the same settings

Expected behavior
Images would display on gallery component in the WebUI over multiple generations, not just the first one.

Screenshots
(screenshot attached: Stable Diffusion WebUI, 2022-08-29 20:37)

Desktop (please complete the following information):

  • OS: Arch Linux (though I am currently using the Colab notebook linked in the README)
  • Browser: Firefox (also tested running the WebUI in Chromium with Colab in Firefox)
  • Version: 104.0

Additional context
I monitored the console and network tabs in Firefox DevTools, and found that the /predict POST request actually does complete, but a DOMException is thrown. Looking at the response JSON, I found that there was typically one image that was sent corrupted and doesn't display in some image viewers. Additionally, pngcheck returns zlib: inflate error = -3 (data error). Since the images save to Google Drive uncorrupted, the images are actually generated successfully; they just aren't sent properly to the browser client. I decoded the base64 PNGs for some generations, both uncorrupted and corrupted, and placed them with their uncorrupted Google Drive outputs and the JSON response data for the generation:
4 batches generated twice with the same seed - 4-batch_sameseed.zip
generating images, increasing the number by 1 every time - n-batch.zip
The corrupted images have some amount of "correct" data at the start of the base64 string, but it eventually cuts off and is replaced with improper data whose source I'm not sure of. My file manager's preview thumbnail shows other images in the data, but GIMP shows it as cutting off, and my image viewer, nomacs, is unable to open the file at all.
(screenshots of the file-manager thumbnail and of GIMP attached)
What's perhaps even more baffling is that Chromium throws net::ERR_HTTP2_PROTOCOL_ERROR 200 in the exact same circumstances as the DOMException: generating one or two images works fine, but three or more throws an error. In Chromium I am unable to access the JSON output. I have nothing else to add, but I hope this helps.

Dockerfile Missing pynvml

I had to include this in the latest Dockerfile:

RUN pip install pynvml

This was required to allow launch (30/08/22); it works fine once added.

Trying to use GFPGAN gives a RuntimeError - Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same

After the last update, whenever I try to use GFPGAN it ends up giving the following error, both in img2img and txt2img.
I use the following code in relauncher.py
os.system("python scripts/webui.py --precision full --no-half --gfpgan-cpu --esrgan-cpu --optimized")
I've tried reinstalling it, removing the environment and even reinstalling miniconda entirely, but nothing has helped me with this issue.
GPU is GTX 1660 Ti 6GB.

ValueError whenever using a k_ sampler with img2img

I get this error whenever running img2img with any sampler but DDIM:

  File "scripts/webui.py", line 1188, in img2img
    output_images, seed, info, stats = process_images(
  File "scripts/webui.py", line 757, in process_images
    samples_ddim = func_sample(init_data=init_data, x=x, conditioning=c, unconditional_conditioning=uc, sampler_name=sampler_name)
  File "scripts/webui.py", line 1112, in sample
    samples_ddim, _ = K.sampling.__dict__[f'sample_{sampler.get_sampler_name()}'](model_wrap_cfg, xi, sigma_sched, extra_args={'cond': conditioning, 'uncond': unconditional_conditioning, 'cond_scale': cfg_scale}, disable=False)
ValueError: not enough values to unpack (expected 2, got 1)
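For context, this unpacking error appears when the right-hand side yields a single value: the k-diffusion sample_* functions seem to return just the samples tensor, while the call site unpacks two values DDIM-style. A minimal stdlib repro of the Python behavior; the function names below are illustrative stand-ins:

```python
# Stand-ins for the two sampler return conventions; bodies are illustrative.
def k_diffusion_sampler():
    # k-diffusion sample_* appears to return just the samples "tensor"
    # (here a 1-element list, mimicking a batch of one).
    return ["batch0"]

def ddim_sampler():
    # DDIM-style samplers return a (samples, intermediates) pair.
    return ["batch0"], {"intermediates": []}

samples, _ = ddim_sampler()  # two values: unpacking succeeds

try:
    samples, _ = k_diffusion_sampler()  # iterates the 1-element return value
    raised = False
except ValueError:  # not enough values to unpack (expected 2, got 1)
    raised = True

# Matching the call site to the single return value avoids the error:
samples = k_diffusion_sampler()
```

So a likely fix is dropping the `, _` at the webui call site (or wrapping the k-diffusion return in a tuple), depending on which convention the maintainers want to standardize on.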

img2img crashes every time

Traceback (most recent call last):
  File "C:\ProgramData\Miniconda3\envs\ldo\lib\site-packages\gradio\routes.py", line 247, in run_predict
    output = await app.blocks.process_api(
  File "C:\ProgramData\Miniconda3\envs\ldo\lib\site-packages\gradio\blocks.py", line 641, in process_api
    predictions, duration = await self.call_function(fn_index, processed_input)
  File "C:\ProgramData\Miniconda3\envs\ldo\lib\site-packages\gradio\blocks.py", line 556, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\ProgramData\Miniconda3\envs\ldo\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\ProgramData\Miniconda3\envs\ldo\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "C:\ProgramData\Miniconda3\envs\ldo\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "scripts/webui.py", line 1131, in img2img
    output_images, seed, info, stats = process_images(
  File "scripts/webui.py", line 658, in process_images
    assert prompt is not None
AssertionError

webui.py crashes in Docker on a Windows (WSL) machine

📢 Discussion from #71 continues here.

It crashes somewhere between the log lines `sd | LatentDiffusion: Running in eps-prediction mode` and `sd | DiffusionWrapper has 859.52 M params.`.
I have tried a clean Docker install by deleting all containers, images and volumes and then run:

docker system prune -a
git clone https://github.com/hlky/stable-diffusion.git
cd stable-diffusion
docker-compose up

But nothing has worked so far, and it gets stuck at:

[+] Running 1/0
 - Container sd  Created                                                                                           0.0s
Attaching to sd
sd  |      active environment : ldm
sd  |     active env location : /opt/conda/envs/ldm
sd  | Validating model files...
sd  | checking model.ckpt...
sd  | model.ckpt is valid!
sd  |
sd  | checking GFPGANv1.3.pth...
sd  | GFPGANv1.3.pth is valid!
sd  |
sd  | checking RealESRGAN_x4plus.pth...
sd  | RealESRGAN_x4plus.pth is valid!
sd  |
sd  | checking RealESRGAN_x4plus_anime_6B.pth...
sd  | RealESRGAN_x4plus_anime_6B.pth is valid!
sd  |
sd  | entrypoint.sh: Launching...'
sd  | Loaded GFPGAN
sd  | Loaded RealESRGAN with model RealESRGAN_x4plus
sd  | Loading model from models/ldm/stable-diffusion-v1/model.ckpt
sd  | Global Step: 470000
sd  | LatentDiffusion: Running in eps-prediction mode
sd  | entrypoint.sh: Process is ending. Relaunching in 0.5s...
sd  | /sd/entrypoint.sh: line 89:    29 Killed                  python -u scripts/webui.py
sd  | entrypoint.sh: Launching...'
sd  | Relaunch count: 1

And then it continues relaunching until I close the program.

Is there anyone running Windows that have got the Docker solution to work?

The "Save individual images" doesn't seem to work.

I just updated today and have found that selecting the "Save individual images" check box doesn't seem to do anything now. I am using GFPGAN while creating the batch but always end up with the two images, whether the "Save individual images" is selected or not.

ModuleNotFoundError: No module named 'frontend'

Traceback (most recent call last):
  File "C:\Users\beatt\Desktop\Offline AI Generation\stable-diffusion-main\stable-diffusion-main\scripts\webui.py", line 3, in <module>
    from frontend.frontend import draw_gradio_ui
ModuleNotFoundError: No module named 'frontend'
Relauncher: Process is ending. Relaunching in 0.5s...
Relauncher: Launching...
Relaunch count: 3

No idea what's going on here...

Tensor needs to be moved from one gpu to the other.

Great feature to be able to run ESRGAN and GFPGAN on another GPU; however, the tensor needs to be moved:
Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1! (when checking argument for argument weight in method wrapper__cudnn_convolution)

mask mode selection toggle not showing up when mask mode is selected

The selection toggle that allows the user to select between "keep masked area" and "regenerate only masked area" no longer shows up in the UI when "Image editor mode" is set to "mask".

Took a quick look at the code and it looks like it was introduced in commit c0c2a7c.
Looks like a parameter was removed from the img2img_image_editor_mode.change call in frontend.py but the corresponding value in change_image_editor_mode (ui_functions.py) was not removed.

I didn't look further to see if there were any other areas that are affected, but when I make the change locally, it brought back the selection toggle.
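The failure mode described here, an event handler whose signature no longer matches what the .change() call passes, can be reproduced in plain Python. The names below mirror the report, but the bodies are illustrative stand-ins, not the actual repo code:

```python
# Illustrative stand-ins: the handler in ui_functions.py still expects two
# inputs, but the updated .change() call in frontend.py passes only one.
def change_image_editor_mode(mode, mask_toggle_visible):
    return f"{mode}:{mask_toggle_visible}"

inputs = ["mask"]  # frontend.py now supplies a single value

try:
    change_image_editor_mode(*inputs)
    mismatch = False
except TypeError:  # missing 1 required positional argument
    mismatch = True
```

Either removing the extra parameter from the handler or restoring it in the .change() call resolves the mismatch, which matches what the local fix described above does.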

Edit: looks like PR #118 already includes a fix for this as part of the work that was done.
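
For context, a gradio .change(fn, inputs, outputs) hookup misbehaves when fn returns a different number of values than there are components in its outputs list, which is what the mismatch above produces. A stripped-down illustration with hypothetical names (not the actual ui_functions.py code):

```python
def change_image_editor_mode(mode):
    # Hypothetical handler: one return value per wired-up output component.
    show_mask_toggle = (mode == "mask")
    status = f"editor mode set to {mode}"
    return mode, show_mask_toggle, status

# Three output components registered on the event, so the handler must
# return exactly three values; removing an entry from only one side
# leaves components like the mask-mode toggle unmanaged in the UI.
outputs = ["editor_mode_state", "mask_mode_toggle", "status_text"]
values = change_image_editor_mode("mask")
assert len(values) == len(outputs)
```

The fix is simply to keep the outputs list and the handler's return tuple in lockstep when either one changes.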

Specifying a GPU results in a crash when generating an image

Started webui.py with the argument --gpu 2, and after clicking Generate received this message:

Expected all tensors to be on the same device, but found at least two devices, cuda:2 and cuda:0! (when checking argument for argument index in method wrapper__index_select)

I found #114 with a similar error message. This is happening with default settings, just putting in a prompt, so gfpgan and esrgan are off.
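
Until the --gpu flag places every tensor correctly, a common workaround (an assumption on my part, not a documented fix for this repo) is to hide the other cards with CUDA_VISIBLE_DEVICES, so the chosen GPU appears as cuda:0 to every code path:

```python
import os

# Must be set before torch is imported anywhere in the process. Physical
# GPU 2 is then remapped to cuda:0, so nothing can land on a second device.
os.environ["CUDA_VISIBLE_DEVICES"] = "2"
# import torch  # only after the variable is set
```

Equivalently, run `set CUDA_VISIBLE_DEVICES=2` in the shell before launching webui.cmd on Windows, or prefix the command with `CUDA_VISIBLE_DEVICES=2` on Linux.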

Can't complete a fresh new install

I just deleted the stable-diffusion-main folder and removed the recently created envs (ldm and ldo) with miniconda.

I then tried a fresh install from the start. I tried both webuildm and webui, and both "ldm" and "ldo" as the env name,

but it gets stuck.

(It worked fine 2-3 days ago.)

Each time I get:

Installing pip dependencies: - Ran pip subprocess with arguments:
['C:\Users\11\.conda\envs\ldm\python.exe', '-m', 'pip', 'install', '-U', '-r', 'C:\stable-diffusion-main\condaenv.vu0qb86m.requirements.txt']
Pip subprocess output:
Obtaining taming-transformers from git+https://github.com/CompVis/taming-transformers#egg=taming-transformers (from -r C:\stable-diffusion-main\condaenv.vu0qb86m.requirements.txt (line 21))
Cloning https://github.com/CompVis/taming-transformers to c:\stable-diffusion-main\src\taming-transformers

Pip subprocess error:
ERROR: Command errored out with exit status 128: git clone -q https://github.com/CompVis/taming-transformers 'C:\stable-diffusion-main\src\taming-transformers' Check the logs for full command output.

failed

CondaEnvException: Pip failed

And after that:

Traceback (most recent call last):
File "scripts/webui.py", line 31, in <module>
import gradio as gr
ModuleNotFoundError: No module named 'gradio'
Relauncher: Process is ending. Relaunching in 0.5s...
Relauncher: Launching...
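
Exit status 128 from git clone during the pip step usually means git is not on PATH in that shell, or the clone target (here src\taming-transformers) is left over from an earlier failed install. A small pre-flight check along those lines (a sketch; the paths above are this reporter's, not fixed by the repo):

```python
import shutil
import subprocess

# git must be reachable from the same shell that runs the conda env setup.
git_path = shutil.which("git")
assert git_path is not None, "git is not on PATH - install Git and reopen the shell"

result = subprocess.run(["git", "--version"], capture_output=True, text=True)
print(result.stdout.strip())
```

Deleting the stale src directory before re-running the environment setup also lets pip re-clone cleanly.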

Unable to pass parameters to webui.cmd

If I pass parameters to webui.cmd, they seem to be ignored.

Examples I've been trying to use are:

./webui.cmd --share
./webui.cmd --share-password

The webui.cmd script still runs, but the additional parameters don't appear to be passed on.
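
A batch wrapper only passes flags through if it forwards them explicitly (in cmd syntax, via %*). The same forwarding pattern, sketched in Python for a hypothetical launcher (this is not the repo's actual webui.cmd logic):

```python
import sys

def build_launch_command(extra_args):
    # Forward whatever flags the user supplied straight to webui.py,
    # mirroring what `%*` does inside a .cmd wrapper.
    return [sys.executable, "scripts/webui.py", *extra_args]

cmd = build_launch_command(["--share"])
print(cmd)
```

In the meantime, activating the conda env and running `python scripts/webui.py --share` directly sidesteps the wrapper entirely.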
