
kohya_ss's Introduction

Kohya's GUI

This repository primarily provides a Gradio GUI for Kohya's Stable Diffusion trainers. Linux support is also offered through community contributions. macOS support is not optimal at the moment but may work if conditions are favorable.

The GUI allows you to set the training parameters and generate and run the required CLI commands to train the model.


🦒 Colab

This Colab notebook was not created or maintained by me; however, it appears to function effectively. The source can be found at: https://github.com/camenduru/kohya_ss-colab.

I would like to express my gratitude to camenduru for their valuable contribution. If you encounter any issues with the Colab notebook, please report them on their repository.

Colab: kohya_ss_gui_colab (Open In Colab)

Installation

Windows

Windows Pre-requirements

To install the necessary dependencies on a Windows system, follow these steps:

  1. Install Python 3.10.11.

    • During the installation process, ensure that you select the option to add Python to the 'PATH' environment variable.
  2. Install CUDA 11.8 toolkit.

  3. Install Git.

  4. Install the Visual Studio 2015, 2017, 2019, and 2022 redistributable.
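
If you prefer to install these prerequisites from a terminal, a minimal PowerShell sketch using winget is shown below. The package IDs and the CUDA version string are assumptions and may differ from what winget currently publishes, so verify them with winget search first.

    # Hypothetical winget package IDs; confirm each one with `winget search` before running
    winget install --id Python.Python.3.10
    winget install --id Git.Git
    winget install --id Nvidia.CUDA --version 11.8
    winget install --id Microsoft.VCRedist.2015+.x64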

Setup Windows

To set up the project, follow these steps:

  1. Open a terminal and navigate to the desired installation directory.

  2. Clone the repository by running the following command:

    git clone --recursive https://github.com/bmaltais/kohya_ss.git
  3. Change into the kohya_ss directory:

    cd kohya_ss
  4. Run one of the following setup scripts:

    For systems with only Python 3.10.11 installed:

    .\setup.bat

    For systems with more than one Python release installed:

    .\setup-3.10.bat

    During the accelerate config step, accept the default values proposed by the configuration unless you know your hardware requires otherwise. The amount of VRAM on your GPU does not affect the values used.

Optional: CUDNN 8.9.6.50

The following steps are optional but will improve training speed for owners of NVIDIA 30X0/40X0 GPUs by enabling larger training batch sizes and faster training.

  1. Run .\setup.bat and select 2. (Optional) Install cudnn files (if you want to use the latest supported cudnn version).

Linux and macOS

Linux Pre-requirements

To install the necessary dependencies on a Linux system, ensure that you fulfill the following requirements:

  • Ensure that venv support is pre-installed. You can install it on Ubuntu 22.04 using the command:

    apt install python3.10-venv
  • Install the CUDA 11.8 Toolkit by following the instructions provided in this link.

  • Make sure you have Python version 3.10.9 or higher (but lower than 3.11.0) installed on your system.
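
As a quick reference, a minimal sketch covering the points above on Ubuntu 22.04 (assuming apt and the distribution's Python 3.10 packages) looks like this:

    # Install venv support for the system Python 3.10
    sudo apt update
    sudo apt install -y python3.10-venv
    # Confirm the interpreter is >= 3.10.9 and < 3.11.0
    python3.10 --version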

Setup Linux

To set up the project on Linux or macOS, perform the following steps:

  1. Open a terminal and navigate to the desired installation directory.

  2. Clone the repository by running the following command:

    git clone --recursive https://github.com/bmaltais/kohya_ss.git
  3. Change into the kohya_ss directory:

    cd kohya_ss
  4. If you encounter permission issues, make the setup.sh script executable by running the following command:

    chmod +x ./setup.sh
  5. Run the setup script by executing the following command:

    ./setup.sh

    Note: If you need additional options or information about the runpod environment, you can use setup.sh -h or setup.sh --help to display the help message.

Install Location

The default installation location on Linux is the directory where the script is located. If a previous installation is detected in that location, the setup will proceed there. Otherwise, the installation will fall back to /opt/kohya_ss. If /opt is not writable, the fallback location will be $HOME/kohya_ss. Finally, if none of the previous options are viable, the installation will be performed in the current directory.

For macOS and other non-Linux systems, the installation process will attempt to detect the previous installation directory based on where the script is run. If a previous installation is not found, the default location will be $HOME/kohya_ss. You can override this behavior by specifying a custom installation directory using the -d or --dir option when running the setup script.
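
For example, to force the installation into a specific directory (the path below is only a placeholder), pass the --dir option described above to the setup script:

    # Install kohya_ss into a custom location instead of the auto-detected one
    ./setup.sh --dir "$HOME/apps/kohya_ss"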

If you choose to use the interactive mode, the default values for the accelerate configuration screen will be "This machine," "None," and "No" for the remaining questions. These default answers are the same as the Windows installation.

Runpod

Manual installation

To install the necessary components for Runpod and run kohya_ss, follow these steps:

  1. Select the Runpod pytorch 2.0.1 template. This is important. Other templates may not work.

  2. SSH into the Runpod.

  3. Clone the repository by running the following command:

    cd /workspace
    git clone --recursive https://github.com/bmaltais/kohya_ss.git
  4. Run the setup script:

    cd kohya_ss
    ./setup-runpod.sh
  5. Run the GUI with:

    ./gui.sh --share --headless

    or with this if you expose 7860 directly via the runpod configuration:

    ./gui.sh --listen=0.0.0.0 --headless
  6. Connect to the public URL displayed after the installation process is completed.

Pre-built Runpod template

To run from a pre-built Runpod template, you can:

  1. Open the Runpod template by clicking on https://runpod.io/gsc?template=ya6013lj5a&ref=w18gds2n.

  2. Deploy the template on the desired host.

  3. Once deployed, connect to the Runpod on HTTP 3010 to access the kohya_ss GUI. You can also connect to auto1111 on HTTP 3000.

Docker

Get your Docker ready for GPU support

Windows

Once you have installed Docker Desktop, the CUDA Toolkit, and the NVIDIA Windows Driver, and ensured that Docker is running with WSL2, you are ready to go.

Here is the official documentation for further reference:
https://docs.nvidia.com/cuda/wsl-user-guide/index.html#nvidia-compute-software-support-on-wsl-2
https://docs.docker.com/desktop/wsl/use-wsl/#gpu-support
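
A quick way to confirm that Docker on WSL2 can see your GPU is to run nvidia-smi inside a CUDA base container. The image tag below is an assumption; any recent nvidia/cuda tag should work:

    # Should print the same GPU table as running nvidia-smi on the host
    docker run --rm --gpus all nvidia/cuda:11.8.0-base-ubuntu22.04 nvidia-smi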

Linux, OSX

Install an NVIDIA GPU Driver if you do not already have one installed.
https://docs.nvidia.com/datacenter/tesla/tesla-installation-notes/index.html

Install the NVIDIA Container Toolkit with this guide.
https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html
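
After installing the toolkit, the guide above has you register the NVIDIA runtime with Docker; on most distributions this boils down to the following two commands:

    # Register the NVIDIA container runtime with Docker and restart the daemon
    sudo nvidia-ctk runtime configure --runtime=docker
    sudo systemctl restart docker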

Design of our Dockerfile

  • It is required that all training data is stored in the dataset subdirectory, which is mounted into the container at /dataset.
  • Please note that the file picker functionality is not available. Instead, you will need to manually input the folder path and configuration file path.
  • TensorBoard has been separated from the project.
    • TensorBoard is not included in the Docker image.
    • The "Start TensorBoard" button has been hidden.
    • TensorBoard is launched from a distinct container as shown here.
  • The browser won't be launched automatically. You will need to manually open the browser and navigate to http://localhost:7860/ and http://localhost:6006/
  • This Dockerfile has been designed to be easily disposable. You can discard the container at any time and restart it with the new code version.
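
The repository links to an example of running TensorBoard separately. As a rough stand-in, a sketch for a Linux/macOS shell is shown below; the log directory and the tensorflow/tensorflow image are assumptions, so adjust the volume mount to wherever your training logs are written:

    # Serve TensorBoard from its own container on port 6006
    docker run --rm -p 6006:6006 -v "$(pwd)/logs:/logs" tensorflow/tensorflow \
        tensorboard --logdir /logs --host 0.0.0.0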

Use the pre-built Docker image

git clone --recursive https://github.com/bmaltais/kohya_ss.git
cd kohya_ss
docker compose up -d

To update the system, do docker compose down && docker compose up -d --pull always

Local docker build

Important

Clone the Git repository recursively to include submodules:
git clone --recursive https://github.com/bmaltais/kohya_ss.git

git clone --recursive https://github.com/bmaltais/kohya_ss.git
cd kohya_ss
docker compose up -d --build

Note

Building the image may take up to 20 minutes to complete.

To update the system, check out the new code version and rebuild using docker compose down && docker compose up -d --build --pull always

If you are running on Linux, an alternative Docker container port with fewer limitations is available here.

ashleykleynhans runpod docker builds

You may want to use the following repositories when running on runpod:

Upgrading

To upgrade your installation to a new version, follow the instructions below.

Windows Upgrade

If a new release becomes available, you can upgrade your repository by running the following commands from the root directory of the project:

  1. Pull the latest changes from the repository:

    git pull
  2. Run the setup script:

    .\setup.bat

Linux and macOS Upgrade

To upgrade your installation on Linux or macOS, follow these steps:

  1. Open a terminal and navigate to the root directory of the project.

  2. Pull the latest changes from the repository:

    git pull
  3. Refresh and update everything:

    ./setup.sh

Starting GUI Service

To launch the GUI service, you can use the provided scripts or run the kohya_gui.py script directly. Use the command line arguments listed below to configure the underlying service.

--listen: Specify the IP address to listen on for connections to Gradio.
--username: Set a username for authentication.
--password: Set a password for authentication.
--server_port: Define the port to run the server listener on.
--inbrowser: Open the Gradio UI in a web browser.
--share: Share the Gradio UI.
--language: Set a custom language.
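
For example, to require a login and bind the GUI to the local interface only, the arguments above can be combined as follows (the username and password are placeholders):

    ./gui.sh --listen 127.0.0.1 --server_port 7860 --username myuser --password mypassword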

Launching the GUI on Windows

On Windows, you can use either the gui.ps1 or gui.bat script located in the root directory. Choose the script that suits your preference and run it in a terminal, providing the desired command line arguments. Here's an example:

gui.ps1 --listen 127.0.0.1 --server_port 7860 --inbrowser --share

or

gui.bat --listen 127.0.0.1 --server_port 7860 --inbrowser --share

Launching the GUI on Linux and macOS

To launch the GUI on Linux or macOS, run the gui.sh script located in the root directory. Provide the desired command line arguments as follows:

gui.sh --listen 127.0.0.1 --server_port 7860 --inbrowser --share

Custom Path Defaults

The repository provides an example configuration file named config example.toml. This file is a template that you can copy and customize to suit your needs.

To use the default configuration file, follow these steps:

  1. Copy the config example.toml file from the root directory of the repository to config.toml.
  2. Open the config.toml file in a text editor.
  3. Modify the paths and settings as per your requirements.

This approach allows you to easily adjust the configuration so that the GUI opens your preferred default folders for each type of folder/file input it supports.

You can specify the path to your config.toml (or any other name you like) when running the GUI. For instance: ./gui.bat --config c:\my_config.toml
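
Putting the steps above together on Linux or macOS, a minimal sequence (using the file names given above) would be:

    # Create an editable copy of the template, then point the GUI at it
    cp "config example.toml" config.toml
    ./gui.sh --config ./config.toml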

LoRA

To train a LoRA, you can currently use the train_network.py code. You can create a LoRA network by using the all-in-one GUI.
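
If you prefer to skip the GUI, a command-line sketch for train_network.py is shown below. Every path and hyperparameter is a placeholder; the GUI generates an equivalent command for you, so treat this only as an illustration of the shape of the call:

    accelerate launch train_network.py \
        --pretrained_model_name_or_path=/path/to/base_model.safetensors \
        --train_data_dir=/path/to/img \
        --output_dir=/path/to/output \
        --output_name=my_lora \
        --resolution=512,512 \
        --network_module=networks.lora \
        --network_dim=128 --network_alpha=1 \
        --learning_rate=1e-4 \
        --train_batch_size=1 --max_train_steps=2000 \
        --mixed_precision=fp16 --save_precision=fp16 \
        --save_model_as=safetensors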

Once you have created the LoRA network, you can generate images using auto1111 by installing this extension.

Sample image generation during training

A prompt file might look like this, for example:

# prompt 1
masterpiece, best quality, (1girl), in white shirts, upper body, looking at viewer, simple background --n low quality, worst quality, bad anatomy, bad composition, poor, low effort --w 768 --h 768 --d 1 --l 7.5 --s 28

# prompt 2
masterpiece, best quality, 1boy, in business suit, standing at street, looking back --n (low quality, worst quality), bad anatomy, bad composition, poor, low effort --w 576 --h 832 --d 2 --l 5.5 --s 40

Lines beginning with # are comments. You can specify options for the generated image, such as --n, after the prompt. The following options can be used:

  • --n: Negative prompt up to the next option.
  • --w: Specifies the width of the generated image.
  • --h: Specifies the height of the generated image.
  • --d: Specifies the seed of the generated image.
  • --l: Specifies the CFG scale of the generated image.
  • --s: Specifies the number of steps in the generation.

Prompt weighting with ( ) and [ ] is supported.
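
To have the trainer read a prompt file like the one above during training, pass it through the sampling arguments of the training scripts. The argument names below are to the best of my knowledge and worth double-checking against train_network.py --help:

    accelerate launch train_network.py \
        ...your usual training arguments... \
        --sample_prompts=/path/to/prompts.txt \
        --sample_every_n_steps=200 \
        --sample_sampler=euler_a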

Troubleshooting

If you encounter any issues, refer to the troubleshooting steps below.

Page File Limit

If you encounter an error related to the page file, you may need to increase the page file size limit in Windows.

No module called tkinter

If you encounter an error indicating that the module tkinter is not found, try reinstalling Python 3.10 on your system.

LORA Training on TESLA V100 - GPU Utilization Issue

Issue Summary

When training LORA on a TESLA V100, users reported low GPU utilization. Additionally, there was difficulty in specifying GPUs other than the default for training.

Potential Solutions

  • GPU Selection: Users can specify GPU IDs in the setup configuration to select the desired GPUs for training.
  • Improving GPU Load: Using the AdamW8bit optimizer and increasing the batch size can help achieve 70-80% GPU utilization without exceeding GPU memory limits.
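
For example, to pin training to a specific GPU you can either restrict device visibility before launching the GUI or hand the GPU ids to accelerate when launching the trainer directly. The ids below are placeholders:

    # Option 1: only expose GPU 1 to everything started from this shell
    CUDA_VISIBLE_DEVICES=1 ./gui.sh --listen 127.0.0.1 --server_port 7860
    # Option 2: tell accelerate which GPU(s) to use when running the trainer yourself
    accelerate launch --num_processes=1 --gpu_ids=1 train_network.py ...your training arguments...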

SDXL training

The documentation in this section will be moved to a separate document later.

Masked loss

Masked loss is supported in each training script. To enable it, specify the --masked_loss option.

The feature is not fully tested, so there may be bugs. If you find any issues, please open an Issue.

The ControlNet dataset is used to specify the mask. The mask images should be RGB images. A pixel value of 255 in the R channel is treated as masked (the loss is calculated only for those pixels), and 0 is treated as unmasked. Pixel values 0-255 are converted to 0-1 (i.e., a pixel value of 128 gives half the loss weight). See the LLLite documentation for details of the dataset specification.
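
As an illustration of the mask format described above, the following Python sketch (using Pillow and NumPy; the file names and the rectangular region are hypothetical) writes an RGB mask whose R channel is 255 where the loss should be applied and 0 elsewhere:

    import numpy as np
    from PIL import Image

    # Match the mask size to the training image (hypothetical file name)
    width, height = Image.open("img/0001.png").size

    # R channel: 255 = apply the loss, 0 = ignore; intermediate values scale the loss weight
    mask = np.zeros((height, width, 3), dtype=np.uint8)
    mask[height // 4 : 3 * height // 4, width // 4 : 3 * width // 4, 0] = 255

    Image.fromarray(mask).save("mask/0001.png")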

Change History

See release information.

kohya_ss's People

Contributors

ai-casanova, akx, bmaltais, breakcore2, ddpn08, dependabot[bot], devnegative-asm, disty0, fannovel16, furkangozukara, hinablue, isotr0py, jim60105, jstayco, ki-wimon, kohakublueleaf, kohya-ss, linaqruf, mgz-dev, p1atdev, rockerboo, sdbds, shirayu, space-nuko, tingtingin, tomj2ee, trojaner, tsukimiya, u-haru, wkpark


kohya_ss's Issues

Dreambooth folder preparation does not work properly.

I specified the reg image directory, but it doesn't work. Is there something I'm missing?

Copy D:\Desktop\test\a to D:\Desktop\test\c\img/1000_ba-shiroko girl... Regularization images directory or repeats is missing... not copying regularisation images... Done creating kohya_ss training folder structure at D:\Desktop\test\c...


"slow_conv2d_cpu" not implemented for 'Half' AND returned non-zero exit status 1.

Traceback (most recent call last):
File "I:\kohya_ss\train_network.py", line 462, in
train(args)
File "I:\kohya_ss\train_network.py", line 94, in train
train_dataset.cache_latents(vae)
File "I:\kohya_ss\library\train_util.py", line 285, in cache_latents
info.latents = vae.encode(img_tensor).latent_dist.sample().squeeze(0).to("cpu")
File "I:\kohya_ss\venv\lib\site-packages\diffusers\models\vae.py", line 566, in encode
h = self.encoder(x)
File "I:\kohya_ss\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "I:\kohya_ss\venv\lib\site-packages\diffusers\models\vae.py", line 130, in forward
sample = self.conv_in(sample)
File "I:\kohya_ss\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "I:\kohya_ss\venv\lib\site-packages\torch\nn\modules\conv.py", line 457, in forward
return self._conv_forward(input, self.weight, self.bias)
File "I:\kohya_ss\venv\lib\site-packages\torch\nn\modules\conv.py", line 453, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
RuntimeError: "slow_conv2d_cpu" not implemented for 'Half'
Traceback (most recent call last):
File "C:\Users\Naught\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Users\Naught\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in run_code
exec(code, run_globals)
File "I:\kohya_ss\venv\Scripts\accelerate.exe_main
.py", line 7, in
File "I:\kohya_ss\venv\lib\site-packages\accelerate\commands\accelerate_cli.py", line 45, in main
args.func(args)
File "I:\kohya_ss\venv\lib\site-packages\accelerate\commands\launch.py", line 1104, in launch_command
simple_launcher(args)
File "I:\kohya_ss\venv\lib\site-packages\accelerate\commands\launch.py", line 567, in simple_launcher
raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['I:\kohya_ss\venv\Scripts\python.exe', 'train_network.py', '--v_parameterization', '--enable_bucket', '--pretrained_model_name_or_path=I:\stable-diffusion-webui\models\Stable-diffusion\anything-v4.5-pruned.ckpt', '--train_data_dir=H:\SD-GUI-1.5.0\abmayo_MIKU\LoRA\pre\img', '--reg_data_dir=H:\SD-GUI-1.5.0\abmayo_MIKU\LoRA\pre\reg', '--resolution=512,512', '--output_dir=H:\SD-GUI-1.5.0\abmayo_MIKU\LoRA\pre\model', '--logging_dir=H:\SD-GUI-1.5.0\abmayo_MIKU\LoRA\pre\log', '--network_alpha=1', '--save_model_as=ckpt', '--network_module=networks.lora', '--text_encoder_lr=1.5e-5', '--unet_lr=1.5e-4', '--network_dim=128', '--output_name=abmk', '--learning_rate=1e-5', '--lr_scheduler=constant_with_warmup', '--lr_warmup_steps=200', '--train_batch_size=1', '--max_train_steps=2000', '--save_every_n_epochs=5', '--mixed_precision=fp16', '--save_precision=fp16', '--seed=1234', '--cache_latents', '--clip_skip=2', '--shuffle_caption', '--xformers', '--use_8bit_adam']' returned non-zero exit status 1.

TI errors.

Traceback (most recent call last):
File "D:\kohya_ss\train_textual_inversion.py", line 498, in
train(args)
File "D:\kohya_ss\train_textual_inversion.py", line 260, in train
train_util.patch_accelerator_for_fp16_training(accelerator)
File "D:\kohya_ss\library\train_util.py", line 1345, in patch_accelerator_for_fp16_training
org_unscale_grads = accelerator.scaler.unscale_grads
AttributeError: 'NoneType' object has no attribute 'unscale_grads'

I am getting a few errors when trying to do a TI training and this is just one of them. Another is about - File "D:\kohya_ss\train_textual_inversion.py", line 356, in train
unwrap_model(text_encoder).get_input_embeddings().weight[index_no_updates] = orig_embeds_params[index_no_updates]
RuntimeError: Index put requires the source and destination dtypes match, got Half for the destination and Float for the source.

File "D:\kohya_ss\train_textual_inversion.py", line 498, in
train(args)
File "D:\kohya_ss\train_textual_inversion.py", line 260, in train
train_util.patch_accelerator_for_fp16_training(accelerator)
File "D:\kohya_ss\library\train_util.py", line 1345, in patch_accelerator_for_fp16_training
org_unscale_grads = accelerator.scaler.unscale_grads
AttributeError: 'NoneType' object has no attribute 'unscale_grads'

No data found. Please verify arguments

C:\Programs\bmaltais GUI\kohya_ss\venv\lib\site-packages\numpy\core\_methods.py:192: RuntimeWarning: invalid value encountered in scalar divide
ret = ret.dtype.type(ret / rcount)
mean ar error (without repeats): nan
No data found. Please verify arguments / 画像がありません。引数指定を確認してください

Cannot find libcudart.so

When I was trying to train Dreambooth LoRA and Dreambooth, an error occurred. I have tried reinstalling CUDA but it doesn't work.

===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
For effortless bug reporting copy-paste your error into this form: https://docs.google.com/forms/d/e/1FAIpQLScPB8emS3Thkp66nvqwmjTEgxp8Y9ufuWTzFyr9kJ5AoI47dQ/viewform?usp=sf_link
================================================================================
CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching /usr/local/cuda/lib64...
C:\Users\moti9\AppData\Local\Programs\Python\Python310\lib\site-packages\bitsandbytes\cuda_setup\paths.py:27: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {WindowsPath('/usr/local/cuda/lib64')}
warn(
WARNING: No libcudart.so found! Install CUDA or the cudatoolkit package (anaconda)!
CUDA SETUP: Loading binary C:\Users\moti9\AppData\Local\Programs\Python\Python310\lib\site-packages\bitsandbytes\libbitsandbytes_cpu.so...
Traceback (most recent call last):
File "E:\kohya\kohya_ss\train_network.py", line 427, in
train(args)
File "E:\kohya\kohya_ss\train_network.py", line 114, in train
import bitsandbytes as bnb
File "C:\Users\moti9\AppData\Local\Programs\Python\Python310\lib\site-packages\bitsandbytes_init_.py", line 6, in
from .autograd._functions import (
File "C:\Users\moti9\AppData\Local\Programs\Python\Python310\lib\site-packages\bitsandbytes\autograd_functions.py", line 5, in
import bitsandbytes.functional as F
File "C:\Users\moti9\AppData\Local\Programs\Python\Python310\lib\site-packages\bitsandbytes\functional.py", line 13, in
from .cextension import COMPILED_WITH_CUDA, lib
File "C:\Users\moti9\AppData\Local\Programs\Python\Python310\lib\site-packages\bitsandbytes\cextension.py", line 41, in
lib = CUDALibrary_Singleton.get_instance().lib
File "C:\Users\moti9\AppData\Local\Programs\Python\Python310\lib\site-packages\bitsandbytes\cextension.py", line 37, in get_instance
cls.instance.initialize()
File "C:\Users\moti9\AppData\Local\Programs\Python\Python310\lib\site-packages\bitsandbytes\cextension.py", line 31, in initialize
self.lib = ct.cdll.LoadLibrary(binary_path)
File "C:\Users\moti9\AppData\Local\Programs\Python\Python310\lib\ctypes_init
.py", line 452, in LoadLibrary
return self.dlltype(name)
File "C:\Users\moti9\AppData\Local\Programs\Python\Python310\lib\ctypes_init
.py", line 364, in init
if '/' in name or '\' in name:
TypeError: argument of type 'WindowsPath' is not iterable
Traceback (most recent call last):
File "C:\Users\moti9\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Users\moti9\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in run_code
exec(code, run_globals)
File "C:\Users\moti9\AppData\Local\Programs\Python\Python310\Scripts\accelerate.exe_main
.py", line 7, in
File "C:\Users\moti9\AppData\Local\Programs\Python\Python310\lib\site-packages\accelerate\commands\accelerate_cli.py", line 45, in main
args.func(args)
File "C:\Users\moti9\AppData\Local\Programs\Python\Python310\lib\site-packages\accelerate\commands\launch.py", line 1104, in launch_command
simple_launcher(args)
File "C:\Users\moti9\AppData\Local\Programs\Python\Python310\lib\site-packages\accelerate\commands\launch.py", line 567, in simple_launcher
raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['C:\Users\moti9\AppData\Local\Programs\Python\Python310\python.exe', 'train_network.py', '--enable_bucket', '--pretrained_model_name_or_path=E:/dreambooth/stable-diffusion-webui/models/Stable-diffusion/anything-v4.0.ckpt', '--train_data_dir=E:\dataset\aaa', '--resolution=512,512', '--output_dir=E:\dataset\aaa', '--logging_dir=', '--network_module=networks.lora', '--text_encoder_lr=1.5e-5', '--unet_lr=1.5e-4', '--network_dim=128', '--output_name=layla', '--learning_rate=1.5e-5', '--lr_scheduler=constant_with_warmup', '--lr_warmup_steps=163', '--train_batch_size=1', '--max_train_steps=3255', '--save_every_n_epochs=5', '--mixed_precision=fp16', '--save_precision=fp16', '--seed=1234', '--cache_latents', '--clip_skip=2', '--shuffle_caption', '--xformers', '--use_8bit_adam']' returned non-zero exit status 1.

No module named 'tkinter'

Microsoft Windows [Version 10.0.19045.2486]
(c) Microsoft Corporation. All rights reserved.

C:\Users\Administrator\SDLoRA>git clone https://github.com/bmaltais/kohya_ss.git
Cloning into 'kohya_ss'...
remote: Enumerating objects: 825, done.
remote: Counting objects: 100% (421/421), done.
remote: Compressing objects: 100% (196/196), done.
remote: Total 825 (delta 293), reused 330 (delta 221), pack-reused 404
Receiving objects: 100% (825/825), 2.46 MiB | 10.61 MiB/s, done.
Resolving deltas: 100% (480/480), done.

C:\Users\Administrator\SDLoRA>cd kohya_ss

C:\Users\Administrator\SDLoRA\kohya_ss>python -m venv --system-site-packages venv

C:\Users\Administrator\SDLoRA\kohya_ss>.\venv\Scripts\activate

(venv) C:\Users\Administrator\SDLoRA\kohya_ss>pip install torch==1.12.1+cu116 torchvision==0.13.1+cu116 --extra-index-url https://download.pytorch.org/whl/cu116
Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/cu116
Collecting torch==1.12.1+cu116
  Using cached https://download.pytorch.org/whl/cu116/torch-1.12.1%2Bcu116-cp310-cp310-win_amd64.whl (2388.4 MB)
Collecting torchvision==0.13.1+cu116
  Using cached https://download.pytorch.org/whl/cu116/torchvision-0.13.1%2Bcu116-cp310-cp310-win_amd64.whl (2.6 MB)
Collecting typing-extensions
  Using cached typing_extensions-4.4.0-py3-none-any.whl (26 kB)
Collecting pillow!=8.3.*,>=5.3.0
  Using cached Pillow-9.4.0-cp310-cp310-win_amd64.whl (2.5 MB)
Collecting numpy
  Using cached numpy-1.24.1-cp310-cp310-win_amd64.whl (14.8 MB)
Collecting requests
  Using cached requests-2.28.2-py3-none-any.whl (62 kB)
Collecting urllib3<1.27,>=1.21.1
  Using cached urllib3-1.26.14-py2.py3-none-any.whl (140 kB)
Collecting certifi>=2017.4.17
  Using cached certifi-2022.12.7-py3-none-any.whl (155 kB)
Collecting idna<4,>=2.5
  Using cached idna-3.4-py3-none-any.whl (61 kB)
Collecting charset-normalizer<4,>=2
  Using cached charset_normalizer-3.0.1-cp310-cp310-win_amd64.whl (96 kB)
Installing collected packages: charset-normalizer, urllib3, typing-extensions, pillow, numpy, idna, certifi, torch, requests, torchvision
Successfully installed certifi-2022.12.7 charset-normalizer-3.0.1 idna-3.4 numpy-1.24.1 pillow-9.4.0 requests-2.28.2 torch-1.12.1+cu116 torchvision-0.13.1+cu116 typing-extensions-4.4.0 urllib3-1.26.14

[notice] A new release of pip available: 22.2.1 -> 22.3.1
[notice] To update, run: python.exe -m pip install --upgrade pip

(venv) C:\Users\Administrator\SDLoRA\kohya_ss>pip install --upgrade -r requirements.txt
Processing c:\users\administrator\sdlora\kohya_ss
  Preparing metadata (setup.py) ... done
Collecting accelerate==0.15.0
  Using cached accelerate-0.15.0-py3-none-any.whl (191 kB)
Collecting transformers==4.25.1
  Using cached transformers-4.25.1-py3-none-any.whl (5.8 MB)
Collecting ftfy
  Using cached ftfy-6.1.1-py3-none-any.whl (53 kB)
Collecting albumentations
  Using cached albumentations-1.3.0-py3-none-any.whl (123 kB)
Collecting opencv-python
  Using cached opencv_python-4.7.0.68-cp37-abi3-win_amd64.whl (38.2 MB)
Collecting einops
  Using cached einops-0.6.0-py3-none-any.whl (41 kB)
Collecting diffusers[torch]==0.10.2
  Using cached diffusers-0.10.2-py3-none-any.whl (503 kB)
Collecting pytorch_lightning
  Using cached pytorch_lightning-1.8.6-py3-none-any.whl (800 kB)
Collecting bitsandbytes==0.35.0
  Using cached bitsandbytes-0.35.0-py3-none-any.whl (62.5 MB)
Collecting tensorboard
  Using cached tensorboard-2.11.2-py3-none-any.whl (6.0 MB)
Collecting safetensors==0.2.6
  Using cached safetensors-0.2.6-cp310-cp310-win_amd64.whl (268 kB)
Collecting gradio==3.15.0
  Using cached gradio-3.15.0-py3-none-any.whl (13.8 MB)
Collecting altair
  Using cached altair-4.2.0-py3-none-any.whl (812 kB)
Collecting easygui
  Using cached easygui-0.98.3-py2.py3-none-any.whl (92 kB)
Requirement already satisfied: requests in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from -r requirements.txt (line 16)) (2.28.2)
Collecting timm
  Using cached timm-0.6.12-py3-none-any.whl (549 kB)
Collecting fairscale
  Using cached fairscale-0.4.13-py3-none-any.whl
Collecting tensorflow<2.11
  Using cached tensorflow-2.10.1-cp310-cp310-win_amd64.whl (455.9 MB)
Collecting huggingface-hub
  Using cached huggingface_hub-0.11.1-py3-none-any.whl (182 kB)
Requirement already satisfied: numpy>=1.17 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from accelerate==0.15.0->-r requirements.txt (line 1)) (1.24.1)
Collecting psutil
  Using cached psutil-5.9.4-cp36-abi3-win_amd64.whl (252 kB)
Collecting pyyaml
  Using cached PyYAML-6.0-cp310-cp310-win_amd64.whl (151 kB)
Collecting packaging>=20.0
  Using cached packaging-23.0-py3-none-any.whl (42 kB)
Requirement already satisfied: torch>=1.4.0 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from accelerate==0.15.0->-r requirements.txt (line 1)) (1.12.1+cu116)
Collecting tqdm>=4.27
  Using cached tqdm-4.64.1-py2.py3-none-any.whl (78 kB)
Collecting filelock
  Using cached filelock-3.9.0-py3-none-any.whl (9.7 kB)
Collecting regex!=2019.12.17
  Using cached regex-2022.10.31-cp310-cp310-win_amd64.whl (267 kB)
Collecting tokenizers!=0.11.3,<0.14,>=0.11.1
  Using cached tokenizers-0.13.2-cp310-cp310-win_amd64.whl (3.3 MB)
Collecting importlib-metadata
  Using cached importlib_metadata-6.0.0-py3-none-any.whl (21 kB)
Requirement already satisfied: Pillow in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from diffusers[torch]==0.10.2->-r requirements.txt (line 7)) (9.4.0)
Collecting websockets>=10.0
  Using cached websockets-10.4-cp310-cp310-win_amd64.whl (101 kB)
Collecting fsspec
  Using cached fsspec-2022.11.0-py3-none-any.whl (139 kB)
Collecting pydantic
  Using cached pydantic-1.10.4-cp310-cp310-win_amd64.whl (2.1 MB)
Collecting pydub
  Using cached pydub-0.25.1-py2.py3-none-any.whl (32 kB)
Collecting pycryptodome
  Using cached pycryptodome-3.16.0-cp35-abi3-win_amd64.whl (1.7 MB)
Collecting markdown-it-py[linkify,plugins]
  Using cached markdown_it_py-2.1.0-py3-none-any.whl (84 kB)
Collecting matplotlib
  Using cached matplotlib-3.6.3-cp310-cp310-win_amd64.whl (7.2 MB)
Collecting markupsafe
  Using cached MarkupSafe-2.1.1-cp310-cp310-win_amd64.whl (17 kB)
Collecting uvicorn
  Using cached uvicorn-0.20.0-py3-none-any.whl (56 kB)
Collecting fastapi
  Using cached fastapi-0.89.1-py3-none-any.whl (55 kB)
Collecting pandas
  Using cached pandas-1.5.2-cp310-cp310-win_amd64.whl (10.4 MB)
Collecting ffmpy
  Using cached ffmpy-0.3.0-py3-none-any.whl
Collecting httpx
  Using cached httpx-0.23.3-py3-none-any.whl (71 kB)
Collecting python-multipart
  Using cached python-multipart-0.0.5.tar.gz (32 kB)
  Preparing metadata (setup.py) ... done
Collecting aiohttp
  Using cached aiohttp-3.8.3-cp310-cp310-win_amd64.whl (319 kB)
Collecting orjson
  Using cached orjson-3.8.5-cp310-none-win_amd64.whl (202 kB)
Collecting jinja2
  Using cached Jinja2-3.1.2-py3-none-any.whl (133 kB)
Collecting wcwidth>=0.2.5
  Using cached wcwidth-0.2.6-py2.py3-none-any.whl (29 kB)
Collecting opencv-python-headless>=4.1.1
  Using cached opencv_python_headless-4.7.0.68-cp37-abi3-win_amd64.whl (38.1 MB)
Collecting scipy
  Using cached scipy-1.10.0-cp310-cp310-win_amd64.whl (42.5 MB)
Collecting qudida>=0.0.4
  Using cached qudida-0.0.4-py3-none-any.whl (3.5 kB)
Collecting scikit-image>=0.16.1
  Using cached scikit_image-0.19.3-cp310-cp310-win_amd64.whl (12.0 MB)
Collecting tensorboardX>=2.2
  Using cached tensorboardX-2.5.1-py2.py3-none-any.whl (125 kB)
Collecting lightning-utilities!=0.4.0,>=0.3.0
  Using cached lightning_utilities-0.5.0-py3-none-any.whl (18 kB)
Collecting torchmetrics>=0.7.0
  Using cached torchmetrics-0.11.0-py3-none-any.whl (512 kB)
Requirement already satisfied: typing-extensions>=4.0.0 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from pytorch_lightning->-r requirements.txt (line 8)) (4.4.0)
Collecting wheel>=0.26
  Using cached wheel-0.38.4-py3-none-any.whl (36 kB)
Collecting markdown>=2.6.8
  Using cached Markdown-3.4.1-py3-none-any.whl (93 kB)
Collecting protobuf<4,>=3.9.2
  Using cached protobuf-3.20.3-cp310-cp310-win_amd64.whl (904 kB)
Collecting tensorboard-data-server<0.7.0,>=0.6.0
  Using cached tensorboard_data_server-0.6.1-py3-none-any.whl (2.4 kB)
Requirement already satisfied: setuptools>=41.0.0 in c:\users\administrator\sdlora\python\tools\lib\site-packages (from tensorboard->-r requirements.txt (line 10)) (63.2.0)
Collecting werkzeug>=1.0.1
  Using cached Werkzeug-2.2.2-py3-none-any.whl (232 kB)
Collecting google-auth<3,>=1.6.3
  Using cached google_auth-2.16.0-py2.py3-none-any.whl (177 kB)
Collecting google-auth-oauthlib<0.5,>=0.4.1
  Using cached google_auth_oauthlib-0.4.6-py2.py3-none-any.whl (18 kB)
Collecting grpcio>=1.24.3
  Using cached grpcio-1.51.1-cp310-cp310-win_amd64.whl (3.7 MB)
Collecting absl-py>=0.4
  Using cached absl_py-1.4.0-py3-none-any.whl (126 kB)
Collecting tensorboard-plugin-wit>=1.6.0
  Using cached tensorboard_plugin_wit-1.8.1-py3-none-any.whl (781 kB)
Collecting entrypoints
  Using cached entrypoints-0.4-py3-none-any.whl (5.3 kB)
Collecting toolz
  Using cached toolz-0.12.0-py3-none-any.whl (55 kB)
Collecting jsonschema>=3.0
  Using cached jsonschema-4.17.3-py3-none-any.whl (90 kB)
Requirement already satisfied: idna<4,>=2.5 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from requests->-r requirements.txt (line 16)) (3.4)
Requirement already satisfied: charset-normalizer<4,>=2 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from requests->-r requirements.txt (line 16)) (3.0.1)
Requirement already satisfied: certifi>=2017.4.17 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from requests->-r requirements.txt (line 16)) (2022.12.7)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from requests->-r requirements.txt (line 16)) (1.26.14)
Requirement already satisfied: torchvision in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from timm->-r requirements.txt (line 17)) (0.13.1+cu116)
Collecting keras<2.11,>=2.10.0
  Using cached keras-2.10.0-py2.py3-none-any.whl (1.7 MB)
Collecting google-pasta>=0.1.1
  Using cached google_pasta-0.2.0-py3-none-any.whl (57 kB)
Collecting protobuf<4,>=3.9.2
  Using cached protobuf-3.19.6-cp310-cp310-win_amd64.whl (895 kB)
Collecting gast<=0.4.0,>=0.2.1
  Using cached gast-0.4.0-py3-none-any.whl (9.8 kB)
Collecting opt-einsum>=2.3.2
  Using cached opt_einsum-3.3.0-py3-none-any.whl (65 kB)
Collecting wrapt>=1.11.0
  Using cached wrapt-1.14.1-cp310-cp310-win_amd64.whl (35 kB)
Collecting tensorflow-estimator<2.11,>=2.10.0
  Using cached tensorflow_estimator-2.10.0-py2.py3-none-any.whl (438 kB)
Collecting keras-preprocessing>=1.1.1
  Using cached Keras_Preprocessing-1.1.2-py2.py3-none-any.whl (42 kB)
Collecting tensorflow-io-gcs-filesystem>=0.23.1
  Using cached tensorflow_io_gcs_filesystem-0.29.0-cp310-cp310-win_amd64.whl (1.5 MB)
Collecting flatbuffers>=2.0
  Using cached flatbuffers-23.1.4-py2.py3-none-any.whl (26 kB)
Collecting termcolor>=1.1.0
  Using cached termcolor-2.2.0-py3-none-any.whl (6.6 kB)
Collecting astunparse>=1.6.0
  Using cached astunparse-1.6.3-py2.py3-none-any.whl (12 kB)
Collecting tensorboard
  Using cached tensorboard-2.10.1-py3-none-any.whl (5.9 MB)
Collecting libclang>=13.0.0
  Using cached libclang-15.0.6.1-py2.py3-none-win_amd64.whl (23.2 MB)
Collecting h5py>=2.9.0
  Using cached h5py-3.7.0-cp310-cp310-win_amd64.whl (2.6 MB)
Collecting six>=1.12.0
  Using cached six-1.16.0-py2.py3-none-any.whl (11 kB)
Collecting frozenlist>=1.1.1
  Using cached frozenlist-1.3.3-cp310-cp310-win_amd64.whl (33 kB)
Collecting attrs>=17.3.0
  Using cached attrs-22.2.0-py3-none-any.whl (60 kB)
Collecting multidict<7.0,>=4.5
  Using cached multidict-6.0.4-cp310-cp310-win_amd64.whl (28 kB)
Collecting yarl<2.0,>=1.0
  Using cached yarl-1.8.2-cp310-cp310-win_amd64.whl (56 kB)
Collecting async-timeout<5.0,>=4.0.0a3
  Using cached async_timeout-4.0.2-py3-none-any.whl (5.8 kB)
Collecting aiosignal>=1.1.2
  Using cached aiosignal-1.3.1-py3-none-any.whl (7.6 kB)
Collecting charset-normalizer<4,>=2
  Using cached charset_normalizer-2.1.1-py3-none-any.whl (39 kB)
Collecting pyasn1-modules>=0.2.1
  Using cached pyasn1_modules-0.2.8-py2.py3-none-any.whl (155 kB)
Collecting rsa<5,>=3.1.4
  Using cached rsa-4.9-py3-none-any.whl (34 kB)
Collecting cachetools<6.0,>=2.0.0
  Using cached cachetools-5.2.1-py3-none-any.whl (9.3 kB)
Collecting requests-oauthlib>=0.7.0
  Using cached requests_oauthlib-1.3.1-py2.py3-none-any.whl (23 kB)
Collecting pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0
  Using cached pyrsistent-0.19.3-cp310-cp310-win_amd64.whl (62 kB)
Collecting pytz>=2020.1
  Using cached pytz-2022.7.1-py2.py3-none-any.whl (499 kB)
Collecting python-dateutil>=2.8.1
  Using cached python_dateutil-2.8.2-py2.py3-none-any.whl (247 kB)
Collecting scikit-learn>=0.19.1
  Using cached scikit_learn-1.2.0-cp310-cp310-win_amd64.whl (8.2 MB)
Collecting PyWavelets>=1.1.1
  Using cached PyWavelets-1.4.1-cp310-cp310-win_amd64.whl (4.2 MB)
Collecting imageio>=2.4.1
  Using cached imageio-2.24.0-py3-none-any.whl (3.4 MB)
Collecting networkx>=2.2
  Using cached networkx-3.0-py3-none-any.whl (2.0 MB)
Collecting tifffile>=2019.7.26
  Using cached tifffile-2022.10.10-py3-none-any.whl (210 kB)
Collecting colorama
  Using cached colorama-0.4.6-py2.py3-none-any.whl (25 kB)
Collecting starlette==0.22.0
  Using cached starlette-0.22.0-py3-none-any.whl (64 kB)
Collecting anyio<5,>=3.4.0
  Using cached anyio-3.6.2-py3-none-any.whl (80 kB)
Collecting rfc3986[idna2008]<2,>=1.3
  Using cached rfc3986-1.5.0-py2.py3-none-any.whl (31 kB)
Collecting sniffio
  Using cached sniffio-1.3.0-py3-none-any.whl (10 kB)
Collecting httpcore<0.17.0,>=0.15.0
  Using cached httpcore-0.16.3-py3-none-any.whl (69 kB)
Collecting zipp>=0.5
  Using cached zipp-3.11.0-py3-none-any.whl (6.6 kB)
Collecting mdurl~=0.1
  Using cached mdurl-0.1.2-py3-none-any.whl (10.0 kB)
Collecting linkify-it-py~=1.0
  Using cached linkify_it_py-1.0.3-py3-none-any.whl (19 kB)
Collecting mdit-py-plugins
  Using cached mdit_py_plugins-0.3.3-py3-none-any.whl (50 kB)
Collecting cycler>=0.10
  Using cached cycler-0.11.0-py3-none-any.whl (6.4 kB)
Collecting contourpy>=1.0.1
  Using cached contourpy-1.0.7-cp310-cp310-win_amd64.whl (162 kB)
Collecting pyparsing>=2.2.1
  Using cached pyparsing-3.0.9-py3-none-any.whl (98 kB)
Collecting kiwisolver>=1.0.1
  Using cached kiwisolver-1.4.4-cp310-cp310-win_amd64.whl (55 kB)
Collecting fonttools>=4.22.0
  Using cached fonttools-4.38.0-py3-none-any.whl (965 kB)
Collecting click>=7.0
  Using cached click-8.1.3-py3-none-any.whl (96 kB)
Collecting h11>=0.8
  Using cached h11-0.14.0-py3-none-any.whl (58 kB)
Collecting uc-micro-py
  Using cached uc_micro_py-1.0.1-py3-none-any.whl (6.2 kB)
Collecting pyasn1<0.5.0,>=0.4.6
  Using cached pyasn1-0.4.8-py2.py3-none-any.whl (77 kB)
Collecting oauthlib>=3.0.0
  Using cached oauthlib-3.2.2-py3-none-any.whl (151 kB)
Collecting threadpoolctl>=2.0.0
  Using cached threadpoolctl-3.1.0-py3-none-any.whl (14 kB)
Collecting joblib>=1.1.1
  Using cached joblib-1.2.0-py3-none-any.whl (297 kB)
Using legacy 'setup.py install' for library, since package 'wheel' is not installed.
Using legacy 'setup.py install' for python-multipart, since package 'wheel' is not installed.
Installing collected packages: wcwidth, tokenizers, tensorboard-plugin-wit, safetensors, rfc3986, pytz, pydub, pyasn1, library, libclang, keras, flatbuffers, ffmpy, easygui, bitsandbytes, zipp, wrapt, wheel, websockets, uc-micro-py, toolz, tifffile, threadpoolctl, termcolor, tensorflow-io-gcs-filesystem, tensorflow-estimator, tensorboard-data-server, sniffio, six, scipy, rsa, regex, pyyaml, PyWavelets, pyrsistent, pyparsing, pydantic, pycryptodome, pyasn1-modules, psutil, protobuf, packaging, orjson, opt-einsum, opencv-python-headless, opencv-python, oauthlib, networkx, multidict, mdurl, markupsafe, markdown, kiwisolver, joblib, imageio, h5py, h11, grpcio, gast, ftfy, fsspec, frozenlist, fonttools, filelock, entrypoints, einops, cycler, contourpy, colorama, charset-normalizer, cachetools, attrs, async-timeout, absl-py, yarl, werkzeug, tqdm, torchmetrics, tensorboardX, scikit-learn, scikit-image, python-multipart, python-dateutil, markdown-it-py, linkify-it-py, lightning-utilities, keras-preprocessing, jsonschema, jinja2, importlib-metadata, google-pasta, google-auth, fairscale, click, astunparse, anyio, aiosignal, accelerate, uvicorn, starlette, requests-oauthlib, qudida, pandas, mdit-py-plugins, matplotlib, huggingface-hub, httpcore, aiohttp, transformers, timm, httpx, google-auth-oauthlib, fastapi, diffusers, altair, albumentations, tensorboard, pytorch_lightning, gradio, tensorflow
  Running setup.py install for library ... done
  Attempting uninstall: charset-normalizer
    Found existing installation: charset-normalizer 3.0.1
    Uninstalling charset-normalizer-3.0.1:
      Successfully uninstalled charset-normalizer-3.0.1
  Running setup.py install for python-multipart ... done
Successfully installed PyWavelets-1.4.1 absl-py-1.4.0 accelerate-0.15.0 aiohttp-3.8.3 aiosignal-1.3.1 albumentations-1.3.0 altair-4.2.0 anyio-3.6.2 astunparse-1.6.3 async-timeout-4.0.2 attrs-22.2.0 bitsandbytes-0.35.0 cachetools-5.2.1 charset-normalizer-2.1.1 click-8.1.3 colorama-0.4.6 contourpy-1.0.7 cycler-0.11.0 diffusers-0.10.2 easygui-0.98.3 einops-0.6.0 entrypoints-0.4 fairscale-0.4.13 fastapi-0.89.1 ffmpy-0.3.0 filelock-3.9.0 flatbuffers-23.1.4 fonttools-4.38.0 frozenlist-1.3.3 fsspec-2022.11.0 ftfy-6.1.1 gast-0.4.0 google-auth-2.16.0 google-auth-oauthlib-0.4.6 google-pasta-0.2.0 gradio-3.15.0 grpcio-1.51.1 h11-0.14.0 h5py-3.7.0 httpcore-0.16.3 httpx-0.23.3 huggingface-hub-0.11.1 imageio-2.24.0 importlib-metadata-6.0.0 jinja2-3.1.2 joblib-1.2.0 jsonschema-4.17.3 keras-2.10.0 keras-preprocessing-1.1.2 kiwisolver-1.4.4 libclang-15.0.6.1 library-1.0.1 lightning-utilities-0.5.0 linkify-it-py-1.0.3 markdown-3.4.1 markdown-it-py-2.1.0 markupsafe-2.1.1 matplotlib-3.6.3 mdit-py-plugins-0.3.3 mdurl-0.1.2 multidict-6.0.4 networkx-3.0 oauthlib-3.2.2 opencv-python-4.7.0.68 opencv-python-headless-4.7.0.68 opt-einsum-3.3.0 orjson-3.8.5 packaging-23.0 pandas-1.5.2 protobuf-3.19.6 psutil-5.9.4 pyasn1-0.4.8 pyasn1-modules-0.2.8 pycryptodome-3.16.0 pydantic-1.10.4 pydub-0.25.1 pyparsing-3.0.9 pyrsistent-0.19.3 python-dateutil-2.8.2 python-multipart-0.0.5 pytorch_lightning-1.8.6 pytz-2022.7.1 pyyaml-6.0 qudida-0.0.4 regex-2022.10.31 requests-oauthlib-1.3.1 rfc3986-1.5.0 rsa-4.9 safetensors-0.2.6 scikit-image-0.19.3 scikit-learn-1.2.0 scipy-1.10.0 six-1.16.0 sniffio-1.3.0 starlette-0.22.0 tensorboard-2.10.1 tensorboard-data-server-0.6.1 tensorboard-plugin-wit-1.8.1 tensorboardX-2.5.1 tensorflow-2.10.1 tensorflow-estimator-2.10.0 tensorflow-io-gcs-filesystem-0.29.0 termcolor-2.2.0 threadpoolctl-3.1.0 tifffile-2022.10.10 timm-0.6.12 tokenizers-0.13.2 toolz-0.12.0 torchmetrics-0.11.0 tqdm-4.64.1 transformers-4.25.1 uc-micro-py-1.0.1 uvicorn-0.20.0 wcwidth-0.2.6 websockets-10.4 werkzeug-2.2.2 wheel-0.38.4 wrapt-1.14.1 yarl-1.8.2 zipp-3.11.0

[notice] A new release of pip available: 22.2.1 -> 22.3.1
[notice] To update, run: python.exe -m pip install --upgrade pip

(venv) C:\Users\Administrator\SDLoRA\kohya_ss>pip install --upgrade -r requirements.txt
Processing c:\users\administrator\sdlora\kohya_ss
  Preparing metadata (setup.py) ... done
Requirement already satisfied: accelerate==0.15.0 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from -r requirements.txt (line 1)) (0.15.0)
Requirement already satisfied: transformers==4.25.1 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from -r requirements.txt (line 2)) (4.25.1)
Requirement already satisfied: ftfy in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from -r requirements.txt (line 3)) (6.1.1)
Requirement already satisfied: albumentations in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from -r requirements.txt (line 4)) (1.3.0)
Requirement already satisfied: opencv-python in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from -r requirements.txt (line 5)) (4.7.0.68)
Requirement already satisfied: einops in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from -r requirements.txt (line 6)) (0.6.0)
Requirement already satisfied: diffusers[torch]==0.10.2 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from -r requirements.txt (line 7)) (0.10.2)
Requirement already satisfied: pytorch_lightning in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from -r requirements.txt (line 8)) (1.8.6)
Requirement already satisfied: bitsandbytes==0.35.0 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from -r requirements.txt (line 9)) (0.35.0)
Requirement already satisfied: tensorboard in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from -r requirements.txt (line 10)) (2.10.1)
Collecting tensorboard
  Using cached tensorboard-2.11.2-py3-none-any.whl (6.0 MB)
Requirement already satisfied: safetensors==0.2.6 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from -r requirements.txt (line 11)) (0.2.6)
Requirement already satisfied: gradio==3.15.0 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from -r requirements.txt (line 12)) (3.15.0)
Requirement already satisfied: altair in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from -r requirements.txt (line 13)) (4.2.0)
Requirement already satisfied: easygui in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from -r requirements.txt (line 14)) (0.98.3)
Requirement already satisfied: requests in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from -r requirements.txt (line 16)) (2.28.2)
Requirement already satisfied: timm in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from -r requirements.txt (line 17)) (0.6.12)
Requirement already satisfied: fairscale in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from -r requirements.txt (line 18)) (0.4.13)
Requirement already satisfied: tensorflow<2.11 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from -r requirements.txt (line 20)) (2.10.1)
Requirement already satisfied: huggingface-hub in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from -r requirements.txt (line 21)) (0.11.1)
Requirement already satisfied: numpy>=1.17 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from accelerate==0.15.0->-r requirements.txt (line 1)) (1.24.1)
Requirement already satisfied: pyyaml in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from accelerate==0.15.0->-r requirements.txt (line 1)) (6.0)
Requirement already satisfied: packaging>=20.0 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from accelerate==0.15.0->-r requirements.txt (line 1)) (23.0)
Requirement already satisfied: torch>=1.4.0 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from accelerate==0.15.0->-r requirements.txt (line 1)) (1.12.1+cu116)
Requirement already satisfied: psutil in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from accelerate==0.15.0->-r requirements.txt (line 1)) (5.9.4)
Requirement already satisfied: filelock in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from transformers==4.25.1->-r requirements.txt (line 2)) (3.9.0)
Requirement already satisfied: tokenizers!=0.11.3,<0.14,>=0.11.1 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from transformers==4.25.1->-r requirements.txt (line 2)) (0.13.2)
Requirement already satisfied: tqdm>=4.27 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from transformers==4.25.1->-r requirements.txt (line 2)) (4.64.1)
Requirement already satisfied: regex!=2019.12.17 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from transformers==4.25.1->-r requirements.txt (line 2)) (2022.10.31)
Requirement already satisfied: importlib-metadata in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from diffusers[torch]==0.10.2->-r requirements.txt (line 7)) (6.0.0)
Requirement already satisfied: Pillow in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from diffusers[torch]==0.10.2->-r requirements.txt (line 7)) (9.4.0)
Requirement already satisfied: jinja2 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from gradio==3.15.0->-r requirements.txt (line 12)) (3.1.2)
Requirement already satisfied: python-multipart in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from gradio==3.15.0->-r requirements.txt (line 12)) (0.0.5)
Requirement already satisfied: uvicorn in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from gradio==3.15.0->-r requirements.txt (line 12)) (0.20.0)
Requirement already satisfied: pydantic in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from gradio==3.15.0->-r requirements.txt (line 12)) (1.10.4)
Requirement already satisfied: fsspec in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from gradio==3.15.0->-r requirements.txt (line 12)) (2022.11.0)
Requirement already satisfied: matplotlib in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from gradio==3.15.0->-r requirements.txt (line 12)) (3.6.3)
Requirement already satisfied: pydub in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from gradio==3.15.0->-r requirements.txt (line 12)) (0.25.1)
Requirement already satisfied: fastapi in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from gradio==3.15.0->-r requirements.txt (line 12)) (0.89.1)
Requirement already satisfied: ffmpy in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from gradio==3.15.0->-r requirements.txt (line 12)) (0.3.0)
Requirement already satisfied: httpx in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from gradio==3.15.0->-r requirements.txt (line 12)) (0.23.3)
Requirement already satisfied: markdown-it-py[linkify,plugins] in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from gradio==3.15.0->-r requirements.txt (line 12)) (2.1.0)
Requirement already satisfied: markupsafe in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from gradio==3.15.0->-r requirements.txt (line 12)) (2.1.1)
Requirement already satisfied: aiohttp in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from gradio==3.15.0->-r requirements.txt (line 12)) (3.8.3)
Requirement already satisfied: pandas in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from gradio==3.15.0->-r requirements.txt (line 12)) (1.5.2)
Requirement already satisfied: pycryptodome in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from gradio==3.15.0->-r requirements.txt (line 12)) (3.16.0)
Requirement already satisfied: websockets>=10.0 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from gradio==3.15.0->-r requirements.txt (line 12)) (10.4)
Requirement already satisfied: orjson in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from gradio==3.15.0->-r requirements.txt (line 12)) (3.8.5)
Requirement already satisfied: wcwidth>=0.2.5 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from ftfy->-r requirements.txt (line 3)) (0.2.6)
Requirement already satisfied: scikit-image>=0.16.1 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from albumentations->-r requirements.txt (line 4)) (0.19.3)
Requirement already satisfied: opencv-python-headless>=4.1.1 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from albumentations->-r requirements.txt (line 4)) (4.7.0.68)
Requirement already satisfied: qudida>=0.0.4 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from albumentations->-r requirements.txt (line 4)) (0.0.4)
Requirement already satisfied: scipy in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from albumentations->-r requirements.txt (line 4)) (1.10.0)
Requirement already satisfied: torchmetrics>=0.7.0 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from pytorch_lightning->-r requirements.txt (line 8)) (0.11.0)
Requirement already satisfied: tensorboardX>=2.2 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from pytorch_lightning->-r requirements.txt (line 8)) (2.5.1)
Requirement already satisfied: typing-extensions>=4.0.0 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from pytorch_lightning->-r requirements.txt (line 8)) (4.4.0)
Requirement already satisfied: lightning-utilities!=0.4.0,>=0.3.0 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from pytorch_lightning->-r requirements.txt (line 8)) (0.5.0)
Requirement already satisfied: tensorboard-data-server<0.7.0,>=0.6.0 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from tensorboard->-r requirements.txt (line 10)) (0.6.1)
Requirement already satisfied: absl-py>=0.4 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from tensorboard->-r requirements.txt (line 10)) (1.4.0)
Requirement already satisfied: wheel>=0.26 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from tensorboard->-r requirements.txt (line 10)) (0.38.4)
Requirement already satisfied: grpcio>=1.24.3 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from tensorboard->-r requirements.txt (line 10)) (1.51.1)
Requirement already satisfied: protobuf<4,>=3.9.2 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from tensorboard->-r requirements.txt (line 10)) (3.19.6)
Requirement already satisfied: werkzeug>=1.0.1 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from tensorboard->-r requirements.txt (line 10)) (2.2.2)
Requirement already satisfied: google-auth<3,>=1.6.3 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from tensorboard->-r requirements.txt (line 10)) (2.16.0)
Requirement already satisfied: setuptools>=41.0.0 in c:\users\administrator\sdlora\python\tools\lib\site-packages (from tensorboard->-r requirements.txt (line 10)) (63.2.0)
Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from tensorboard->-r requirements.txt (line 10)) (0.4.6)
Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from tensorboard->-r requirements.txt (line 10)) (1.8.1)
Requirement already satisfied: markdown>=2.6.8 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from tensorboard->-r requirements.txt (line 10)) (3.4.1)
Requirement already satisfied: entrypoints in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from altair->-r requirements.txt (line 13)) (0.4)
Requirement already satisfied: toolz in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from altair->-r requirements.txt (line 13)) (0.12.0)
Requirement already satisfied: jsonschema>=3.0 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from altair->-r requirements.txt (line 13)) (4.17.3)
Requirement already satisfied: idna<4,>=2.5 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from requests->-r requirements.txt (line 16)) (3.4)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from requests->-r requirements.txt (line 16)) (1.26.14)
Requirement already satisfied: charset-normalizer<4,>=2 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from requests->-r requirements.txt (line 16)) (2.1.1)
Requirement already satisfied: certifi>=2017.4.17 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from requests->-r requirements.txt (line 16)) (2022.12.7)
Requirement already satisfied: torchvision in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from timm->-r requirements.txt (line 17)) (0.13.1+cu116)
Requirement already satisfied: tensorflow-io-gcs-filesystem>=0.23.1 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from tensorflow<2.11->-r requirements.txt (line 20)) (0.29.0)
Requirement already satisfied: google-pasta>=0.1.1 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from tensorflow<2.11->-r requirements.txt (line 20)) (0.2.0)
Requirement already satisfied: keras<2.11,>=2.10.0 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from tensorflow<2.11->-r requirements.txt (line 20)) (2.10.0)
Requirement already satisfied: libclang>=13.0.0 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from tensorflow<2.11->-r requirements.txt (line 20)) (15.0.6.1)
Requirement already satisfied: h5py>=2.9.0 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from tensorflow<2.11->-r requirements.txt (line 20)) (3.7.0)
Requirement already satisfied: wrapt>=1.11.0 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from tensorflow<2.11->-r requirements.txt (line 20)) (1.14.1)
Requirement already satisfied: six>=1.12.0 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from tensorflow<2.11->-r requirements.txt (line 20)) (1.16.0)
Requirement already satisfied: tensorflow-estimator<2.11,>=2.10.0 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from tensorflow<2.11->-r requirements.txt (line 20)) (2.10.0)
Requirement already satisfied: gast<=0.4.0,>=0.2.1 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from tensorflow<2.11->-r requirements.txt (line 20)) (0.4.0)
Requirement already satisfied: astunparse>=1.6.0 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from tensorflow<2.11->-r requirements.txt (line 20)) (1.6.3)
Requirement already satisfied: termcolor>=1.1.0 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from tensorflow<2.11->-r requirements.txt (line 20)) (2.2.0)
Requirement already satisfied: opt-einsum>=2.3.2 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from tensorflow<2.11->-r requirements.txt (line 20)) (3.3.0)
Requirement already satisfied: flatbuffers>=2.0 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from tensorflow<2.11->-r requirements.txt (line 20)) (23.1.4)
Requirement already satisfied: keras-preprocessing>=1.1.1 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from tensorflow<2.11->-r requirements.txt (line 20)) (1.1.2)
Requirement already satisfied: yarl<2.0,>=1.0 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from aiohttp->gradio==3.15.0->-r requirements.txt (line 12)) (1.8.2)
Requirement already satisfied: async-timeout<5.0,>=4.0.0a3 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from aiohttp->gradio==3.15.0->-r requirements.txt (line 12)) (4.0.2)
Requirement already satisfied: multidict<7.0,>=4.5 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from aiohttp->gradio==3.15.0->-r requirements.txt (line 12)) (6.0.4)
Requirement already satisfied: frozenlist>=1.1.1 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from aiohttp->gradio==3.15.0->-r requirements.txt (line 12)) (1.3.3)
Requirement already satisfied: attrs>=17.3.0 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from aiohttp->gradio==3.15.0->-r requirements.txt (line 12)) (22.2.0)
Requirement already satisfied: aiosignal>=1.1.2 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from aiohttp->gradio==3.15.0->-r requirements.txt (line 12)) (1.3.1)
Requirement already satisfied: rsa<5,>=3.1.4 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from google-auth<3,>=1.6.3->tensorboard->-r requirements.txt (line 10)) (4.9)
Requirement already satisfied: cachetools<6.0,>=2.0.0 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from google-auth<3,>=1.6.3->tensorboard->-r requirements.txt (line 10)) (5.2.1)
Requirement already satisfied: pyasn1-modules>=0.2.1 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from google-auth<3,>=1.6.3->tensorboard->-r requirements.txt (line 10)) (0.2.8)
Requirement already satisfied: requests-oauthlib>=0.7.0 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard->-r requirements.txt (line 10)) (1.3.1)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from jsonschema>=3.0->altair->-r requirements.txt (line 13)) (0.19.3)
Requirement already satisfied: pytz>=2020.1 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from pandas->gradio==3.15.0->-r requirements.txt (line 12)) (2022.7.1)
Requirement already satisfied: python-dateutil>=2.8.1 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from pandas->gradio==3.15.0->-r requirements.txt (line 12)) (2.8.2)
Requirement already satisfied: scikit-learn>=0.19.1 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from qudida>=0.0.4->albumentations->-r requirements.txt (line 4)) (1.2.0)
Requirement already satisfied: networkx>=2.2 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from scikit-image>=0.16.1->albumentations->-r requirements.txt (line 4)) (3.0)
Requirement already satisfied: tifffile>=2019.7.26 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from scikit-image>=0.16.1->albumentations->-r requirements.txt (line 4)) (2022.10.10)
Requirement already satisfied: imageio>=2.4.1 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from scikit-image>=0.16.1->albumentations->-r requirements.txt (line 4)) (2.24.0)
Requirement already satisfied: PyWavelets>=1.1.1 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from scikit-image>=0.16.1->albumentations->-r requirements.txt (line 4)) (1.4.1)
Requirement already satisfied: colorama in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from tqdm>=4.27->transformers==4.25.1->-r requirements.txt (line 2)) (0.4.6)
Requirement already satisfied: starlette==0.22.0 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from fastapi->gradio==3.15.0->-r requirements.txt (line 12)) (0.22.0)
Requirement already satisfied: anyio<5,>=3.4.0 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from starlette==0.22.0->fastapi->gradio==3.15.0->-r requirements.txt (line 12)) (3.6.2)
Requirement already satisfied: sniffio in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from httpx->gradio==3.15.0->-r requirements.txt (line 12)) (1.3.0)
Requirement already satisfied: httpcore<0.17.0,>=0.15.0 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from httpx->gradio==3.15.0->-r requirements.txt (line 12)) (0.16.3)
Requirement already satisfied: rfc3986[idna2008]<2,>=1.3 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from httpx->gradio==3.15.0->-r requirements.txt (line 12)) (1.5.0)
Requirement already satisfied: zipp>=0.5 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from importlib-metadata->diffusers[torch]==0.10.2->-r requirements.txt (line 7)) (3.11.0)
Requirement already satisfied: mdurl~=0.1 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from markdown-it-py[linkify,plugins]->gradio==3.15.0->-r requirements.txt (line 12)) (0.1.2)
Requirement already satisfied: linkify-it-py~=1.0 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from markdown-it-py[linkify,plugins]->gradio==3.15.0->-r requirements.txt (line 12)) (1.0.3)
Requirement already satisfied: mdit-py-plugins in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from markdown-it-py[linkify,plugins]->gradio==3.15.0->-r requirements.txt (line 12)) (0.3.3)
Requirement already satisfied: fonttools>=4.22.0 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from matplotlib->gradio==3.15.0->-r requirements.txt (line 12)) (4.38.0)
Requirement already satisfied: pyparsing>=2.2.1 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from matplotlib->gradio==3.15.0->-r requirements.txt (line 12)) (3.0.9)
Requirement already satisfied: cycler>=0.10 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from matplotlib->gradio==3.15.0->-r requirements.txt (line 12)) (0.11.0)
Requirement already satisfied: contourpy>=1.0.1 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from matplotlib->gradio==3.15.0->-r requirements.txt (line 12)) (1.0.7)
Requirement already satisfied: kiwisolver>=1.0.1 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from matplotlib->gradio==3.15.0->-r requirements.txt (line 12)) (1.4.4)
Requirement already satisfied: h11>=0.8 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from uvicorn->gradio==3.15.0->-r requirements.txt (line 12)) (0.14.0)
Requirement already satisfied: click>=7.0 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from uvicorn->gradio==3.15.0->-r requirements.txt (line 12)) (8.1.3)
Requirement already satisfied: uc-micro-py in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from linkify-it-py~=1.0->markdown-it-py[linkify,plugins]->gradio==3.15.0->-r requirements.txt (line 12)) (1.0.1)
Requirement already satisfied: pyasn1<0.5.0,>=0.4.6 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from pyasn1-modules>=0.2.1->google-auth<3,>=1.6.3->tensorboard->-r requirements.txt (line 10)) (0.4.8)
Requirement already satisfied: oauthlib>=3.0.0 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard->-r requirements.txt (line 10)) (3.2.2)
Requirement already satisfied: joblib>=1.1.1 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from scikit-learn>=0.19.1->qudida>=0.0.4->albumentations->-r requirements.txt (line 4)) (1.2.0)
Requirement already satisfied: threadpoolctl>=2.0.0 in c:\users\administrator\sdlora\kohya_ss\venv\lib\site-packages (from scikit-learn>=0.19.1->qudida>=0.0.4->albumentations->-r requirements.txt (line 4)) (3.1.0)
Building wheels for collected packages: library
  Building wheel for library (setup.py) ... done
  Created wheel for library: filename=library-1.0.1-py3-none-any.whl size=49942 sha256=c39f4db9457b138e5fa6bc03935a04263ae9f8d7761a64111669d3cac03f48e9
  Stored in directory: C:\Users\Administrator\AppData\Local\Temp\pip-ephem-wheel-cache-6ukkav5d\wheels\bd\3d\4a\e85bfd69e3ee0b672b426bbc5c24e9566f2365dbe498991f9d
Successfully built library
Installing collected packages: library
  Attempting uninstall: library
    Found existing installation: library 1.0.1
    Uninstalling library-1.0.1:
      Successfully uninstalled library-1.0.1
Successfully installed library-1.0.1

[notice] A new release of pip available: 22.2.1 -> 22.3.1
[notice] To update, run: python.exe -m pip install --upgrade pip

(venv) C:\Users\Administrator\SDLoRA\kohya_ss>
(venv) C:\Users\Administrator\SDLoRA\kohya_ss>
(venv) C:\Users\Administrator\SDLoRA\kohya_ss>
(venv) C:\Users\Administrator\SDLoRA\kohya_ss>pip install -U -I --no-deps https://github.com/C43H66N12O12S2/stable-diffusion-webui/releases/download/f/xformers-0.0.14.dev0-cp310-cp310-win_amd64.whl
Collecting xformers==0.0.14.dev0
  Downloading https://github.com/C43H66N12O12S2/stable-diffusion-webui/releases/download/f/xformers-0.0.14.dev0-cp310-cp310-win_amd64.whl (184.3 MB)
     ---------------------------------------- 184.3/184.3 MB 2.4 MB/s eta 0:00:00
Installing collected packages: xformers
Successfully installed xformers-0.0.14.dev0

[notice] A new release of pip available: 22.2.1 -> 22.3.1
[notice] To update, run: python.exe -m pip install --upgrade pip

(venv) C:\Users\Administrator\SDLoRA\kohya_ss>copy .\bitsandbytes_windows\*.dll .\venv\Lib\site-packages\bitsandbytes\
.\bitsandbytes_windows\libbitsandbytes_cpu.dll
.\bitsandbytes_windows\libbitsandbytes_cuda116.dll
        2 file(s) copied.

(venv) C:\Users\Administrator\SDLoRA\kohya_ss>copy .\bitsandbytes_windows\cextension.py .\venv\Lib\site-packages\bitsandbytes\cextension.py
Overwrite .\venv\Lib\site-packages\bitsandbytes\cextension.py? (Yes/No/All): a
        1 file(s) copied.

(venv) C:\Users\Administrator\SDLoRA\kohya_ss>copy .\bitsandbytes_windows\main.py .\venv\Lib\site-packages\bitsandbytes\cuda_setup\main.py

(venv) C:\Users\Administrator\SDLoRA\kohya_ss>accelerate config
------------------------------------------------------------------------------------------------------------------------
In which compute environment are you running?
This machine
------------------------------------------------------------------------------------------------------------------------
Which type of machine are you using?
multi-GPU
How many different machines will you use (use more than 1 for multi-node training)? [1]:
Do you wish to optimize your script with torch dynamo?[yes/NO]:
Do you want to use DeepSpeed? [yes/NO]:
Do you want to use FullyShardedDataParallel? [yes/NO]:
Do you want to use Megatron-LM ? [yes/NO]:
How many GPU(s) should be used for distributed training? [1]:
What GPU(s) (by id) should be used for training on this machine as a comma-seperated list? [all]:
------------------------------------------------------------------------------------------------------------------------
Do you wish to use FP16 or BF16 (mixed precision)?
fp16
accelerate configuration saved at C:\Users\Administrator/.cache\huggingface\accelerate\default_config.yaml

(venv) C:\Users\Administrator\SDLoRA\kohya_ss>.\venv\Scripts\activate

(venv) C:\Users\Administrator\SDLoRA\kohya_ss>python lora_gui.py
Traceback (most recent call last):
  File "C:\Users\Administrator\SDLoRA\kohya_ss\lora_gui.py", line 13, in <module>
    from library.common_gui import (
  File "C:\Users\Administrator\SDLoRA\kohya_ss\library\common_gui.py", line 1, in <module>
    from tkinter import filedialog, Tk
ModuleNotFoundError: No module named 'tkinter'

(venv) C:\Users\Administrator\SDLoRA\kohya_ss>python -V
Python 3.10.6

When using xformers on Ubuntu, it couldn't find the forward function replacement (removing --xformers works)

libs installed:

xformers 0.0.14.dev0
torch 1.13.0

Stack trace:

Traceback (most recent call last):
  File "/home/ian/repositories/kohya_ss/train_db_fixed_v7.py", line 1609, in <module>
    train(args)
  File "/home/ian/repositories/kohya_ss/train_db_fixed_v7.py", line 1248, in train
    noise_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample
  File "/home/ian/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/ian/miniconda3/lib/python3.9/site-packages/accelerate/utils/operations.py", line 507, in __call__
    return convert_to_fp32(self.model_forward(*args, **kwargs))
  File "/home/ian/miniconda3/lib/python3.9/site-packages/torch/amp/autocast_mode.py", line 14, in decorate_autocast
    return func(*args, **kwargs)
  File "/home/ian/miniconda3/lib/python3.9/site-packages/diffusers/models/unet_2d_condition.py", line 296, in forward
    sample, res_samples = downsample_block(
  File "/home/ian/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/ian/miniconda3/lib/python3.9/site-packages/diffusers/models/unet_2d_blocks.py", line 563, in forward
    hidden_states = attn(hidden_states, context=encoder_hidden_states)
  File "/home/ian/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/ian/miniconda3/lib/python3.9/site-packages/diffusers/models/attention.py", line 169, in forward
    hidden_states = block(hidden_states, context=context)
  File "/home/ian/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/ian/miniconda3/lib/python3.9/site-packages/diffusers/models/attention.py", line 217, in forward
    hidden_states = self.attn1(self.norm1(hidden_states)) + hidden_states
  File "/home/ian/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/ian/repositories/kohya_ss/train_db_fixed_v7.py", line 1552, in forward_xformers
    return self.to_out(out)
  File "/home/ian/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/ian/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 244, in _forward_unimplemented
    raise NotImplementedError(f"Module [{type(self).__name__}] is missing the required \"forward\" function")
NotImplementedError: Module [ModuleList] is missing the required "forward" function
steps:   0%|          | 0/2200 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/home/ian/miniconda3/bin/accelerate", line 8, in <module>
    sys.exit(main())
  File "/home/ian/miniconda3/lib/python3.9/site-packages/accelerate/commands/accelerate_cli.py", line 43, in main
    args.func(args)
  File "/home/ian/miniconda3/lib/python3.9/site-packages/accelerate/commands/launch.py", line 837, in launch_command
    simple_launcher(args)
  File "/home/ian/miniconda3/lib/python3.9/site-packages/accelerate/commands/launch.py", line 354, in simple_launcher
    raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)

LoRA Recommendations / Guide

I am very excited about the concept of LoRA because I can quickly train on lots of family and friends and not take up several hundred more GB on my system. Also, you can apply the trained LoRA to other models on the fly.

I have trained about 15 or so models so far and have "decent" results. Faces look accurate if there are no other words in the prompt, but they start to look less and less like the person the longer the prompt gets.

Also, they seem to zero in on one outfit from one image in the bunch and use that over and over.

I would love some tricks or recommended settings for LoRA people training to get the best results.

How many images?
How many repeats?
Learning rate etc...

I have been using the SD 1.5 model with about 10 cropped faces at 512x512, 160 repeats, and everything else left at the defaults.
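For a rough sense of how those numbers translate into training steps, the trainers count images × repeats per epoch (the same arithmetic that shows up in the logs further down, e.g. 5 image files with 5 repeats giving 25 steps). A minimal back-of-the-envelope sketch using the numbers from this report, assuming batch size 1 and a single epoch:

# Rough step count for the setup described above; the numbers are the report's, not a recommendation.
images, repeats, batch_size, epochs = 10, 160, 1, 1
steps_per_epoch = images * repeats // batch_size   # 1600
max_train_steps = steps_per_epoch * epochs
print(max_train_steps)  # 1600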

Unclear instructions for installing CUDNN & no indication of failure

The readme section for installing CUDNN currently says:

To install simply unzip the directory and place the cudnn_windows folder in the root of the kohya_diffusers_fine_tuning repo.

It's unclear what is meant by kohya_diffusers_fine_tuning repo. The code looks for the cudnn_windows folder inside the root of the kohya_ss folder though, so this should be changed in the README.

Additionally, if the install script can't find the cudnn_windows folder in the repo root, it just silently exits with no indication of failure.
Some failure handling should be added to the installer(s) to give the user more useful feedback, so they won't think cudnn is working when it's not. Even something simple like:

import os

if os.path.exists(cudnn_src):
    # ... copy the cudnn files into place ...
else:
    print(f"Installation Failed: \"{cudnn_src}\" could not be found.")

Venv issues installing?

Trying to get the GUI installed and working: the install asks for Python 3.10 (which I have) and a pip upgrade (which I did), but it doesn't seem to 'see' them.

I've been told it's probably a venv issue, but to be frank, I have no real idea how to fix this. I'm new to Python in general (most of the time I use it in Maya for scripting), and new to GitHub and all of the backend stuff, so please assume I know nothing.

Auto1111 works fine, with gradio and extensions and all that. I know there was a question about this being an Auto1111 extension, which I'd love if possible; then everything would be in the same space.

This is what I get when I run the install

[screenshot]

I can do that pip upgrade it asks for
[screenshot]

And if I run the initialization script again, i get the same errors.

[screenshot]

So it's got to be that the venv I'm in (?) is the wrong one? A venv is a virtual environment, like a little dummy machine running virtually, right? How do I a) tell which venv I'm in, and b) tell this script to use the right venv?
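For part (a), a quick way to check (plain Python, nothing project-specific) is to ask the interpreter itself which environment it is running from:

import sys

# A venv is active when the interpreter's prefix differs from the base installation's prefix.
print("active interpreter:", sys.executable)
print("inside a venv:", sys.prefix != sys.base_prefix)

Run this with the same python command the setup script uses; if the printed path is not the kohya_ss venv's python.exe, the script is picking up a different Python installation.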

I love all the LoRAs I see people making and would like to make some myself.

Thank you so much for any help!

ModuleNotFoundError: No module named 'xformers'

If --use_8bit_adam and --xformers are used, a ModuleNotFoundError is raised. If I activate the venv and run pip list, xformers 0.0.14.dev0 and bitsandbytes 0.35.0 are both in the list. Am I missing something? The Python version is 3.10.6. The rest works just fine.

ModuleNotFoundError: No module named 'xformers'

ModuleNotFoundError: No module named 'bitsandbytes'

RecursionError: maximum recursion depth exceeded while calling a Python object

Traceback (most recent call last):
  File "C:\Programs\bmaltais GUI\kohya_ss\venv\lib\site-packages\gradio\routes.py", line 321, in run_predict
    output = await app.blocks.process_api(
  File "C:\Programs\bmaltais GUI\kohya_ss\venv\lib\site-packages\gradio\blocks.py", line 1015, in process_api
    result = await self.call_function(fn_index, inputs, iterator, request)
  File "C:\Programs\bmaltais GUI\kohya_ss\venv\lib\site-packages\gradio\blocks.py", line 856, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\Programs\bmaltais GUI\kohya_ss\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\Programs\bmaltais GUI\kohya_ss\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "C:\Programs\bmaltais GUI\kohya_ss\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "C:\Programs\bmaltais GUI\kohya_ss\library\dreambooth_folder_creation_gui.py", line 69, in dreambooth_folder_preparation
    shutil.copytree(util_training_images_dir_input, training_dir)
  File "C:\Programs\Python\Python310\lib\shutil.py", line 558, in copytree
    return _copytree(entries=entries, src=src, dst=dst, symlinks=symlinks,
  File "C:\Programs\Python\Python310\lib\shutil.py", line 494, in _copytree
    copytree(srcobj, dstname, symlinks, ignore, copy_function,
  File "C:\Programs\Python\Python310\lib\shutil.py", line 558, in copytree
    return _copytree(entries=entries, src=src, dst=dst, symlinks=symlinks,
  File "C:\Programs\Python\Python310\lib\shutil.py", line 457, in _copytree
    os.makedirs(dst, exist_ok=dirs_exist_ok)
  File "C:\Programs\Python\Python310\lib\os.py", line 210, in makedirs
    head, tail = path.split(name)
  File "C:\Programs\Python\Python310\lib\ntpath.py", line 212, in split
    seps = _get_bothseps(p)
  File "C:\Programs\Python\Python310\lib\ntpath.py", line 36, in _get_bothseps
    if isinstance(path, bytes):
RecursionError: maximum recursion depth exceeded while calling a Python object
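One common way to end up in this particular loop with shutil.copytree is pointing the destination folder at (or inside) the source folder, so the copy keeps descending into its own output. That is a guess based on the call pattern above, not a confirmed diagnosis; a minimal guard sketch with hypothetical variable names:

import os
import shutil

def safe_copytree(src: str, dst: str) -> None:
    # Refuse to copy a tree into itself, which otherwise recurses until Python gives up.
    src_abs, dst_abs = os.path.abspath(src), os.path.abspath(dst)
    if dst_abs == src_abs or dst_abs.startswith(src_abs + os.sep):
        raise ValueError(f"destination {dst!r} is inside source {src!r}")
    shutil.copytree(src, dst, dirs_exist_ok=True)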

Suggestion - /log and /model auto-creation?

Just ran it for the first time and got a "FileNotFoundError: [Errno 2] No such file or directory" error on last.yaml, and worked out that the /log and /model subdirectories weren't auto-created when I set things up via Utilities.

I know it's not technically part of copying data to the relevant directories, and I'll now know to make sure they exist for any new models I finetune, but might it be a useful QOL addition to auto-create those directories as part of the "Copy Info to Directories Tab" action?
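A minimal sketch of what such an auto-creation step could look like (folder names are taken from the report above; this is not the project's actual code):

import os

def ensure_training_dirs(base_dir: str) -> None:
    # Create the log/ and model/ subfolders up front so training doesn't fail on a missing path.
    for sub in ("log", "model"):
        os.makedirs(os.path.join(base_dir, sub), exist_ok=True)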

Thanks
Al

kohya gui finetune: [WinError 2] The system cannot find the file specified

The GUI fails to create the caption and latent metadata; it fails at merge_captions_to_metadata.py.
Traceback:

Traceback (most recent call last):
  File "W:\kohya\venv\lib\site-packages\gradio\routes.py", line 321, in run_predict
    output = await app.blocks.process_api(
  File "W:\kohya\venv\lib\site-packages\gradio\blocks.py", line 1015, in process_api
    result = await self.call_function(fn_index, inputs, iterator, request)
  File "W:\kohya\venv\lib\site-packages\gradio\blocks.py", line 856, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "W:\kohya\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "W:\kohya\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "W:\kohya\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "W:\kohya\2nd\finetune_gui.py", line 268, in train_model
    subprocess.run(run_cmd)
  File "W:\kohya\python\Lib\subprocess.py", line 501, in run
    with Popen(*popenargs, **kwargs) as process:
  File "W:\kohya\python\Lib\subprocess.py", line 966, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
  File "W:\kohya\python\Lib\subprocess.py", line 1435, in _execute_child
    hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
FileNotFoundError: [WinError 2] The system cannot find the file specified

AssertionError: resolution is required / resolution（解像度）指定は必須です

I'm testing training using DreamBooth LoRA.
When I start the training I get this error.
I checked the issues and couldn't find a previous similar one.

Folder 1: 7 steps
max_train_steps = 7
stop_text_encoder_training = 0
lr_warmup_steps = 0
accelerate launch --num_cpu_threads_per_process=2 "train_network.py" --pretrained_model_name_or_path="D:/martin AUTO1111/models/Stable-diffusion/v1-5-pruned-emaonly.ckpt" --train_data_dir="D:\jason512newBG\" --resolution=512,512 --output_dir="D:\jason512newBG\" --logging_dir="" --network_alpha="128" --save_model_as=ckpt --network_module=networks.lora --text_encoder_lr=5e-5 --unet_lr=0.0001 --network_dim=128 --output_name="last" --learning_rate="0.0001" --lr_scheduler="constant" --train_batch_size="2" --max_train_steps="7" --save_every_n_epochs="1" --mixed_precision="fp16" --save_precision="fp16" --seed="1234" --caption_extension=".txt" --cache_latents --max_data_loader_n_workers="1" --clip_skip=2 --xformers --use_8bit_adam
prepare tokenizer
Use DreamBooth method.
Traceback (most recent call last):
  File "D:\Kohya\kohya_ss\train_network.py", line 465, in <module>
    train(args)
  File "D:\Kohya\kohya_ss\train_network.py", line 55, in train
    train_dataset = DreamBoothDataset(args.train_batch_size, args.train_data_dir, args.reg_data_dir,
  File "D:\Kohya\kohya_ss\library\train_util.py", line 484, in __init__
    assert resolution is not None, f"resolution is required / resolution（解像度）指定は必須です"
AssertionError: resolution is required / resolution（解像度）指定は必須です
Traceback (most recent call last):
  File "C:\Users\person\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\person\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "D:\Kohya\kohya_ss\venv\Scripts\accelerate.exe\__main__.py", line 7, in <module>
  File "D:\Kohya\kohya_ss\venv\lib\site-packages\accelerate\commands\accelerate_cli.py", line 45, in main
    args.func(args)
  File "D:\Kohya\kohya_ss\venv\lib\site-packages\accelerate\commands\launch.py", line 1104, in launch_command
    simple_launcher(args)
  File "D:\Kohya\kohya_ss\venv\lib\site-packages\accelerate\commands\launch.py", line 567, in simple_launcher
    raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['D:\\Kohya\\kohya_ss\\venv\\Scripts\\python.exe', 'train_network.py', '--pretrained_model_name_or_path=D:/martin AUTO1111/models/Stable-diffusion/v1-5-pruned-emaonly.ckpt', '--train_data_dir=D:\\jason512newBG" --resolution=512,512 --output_dir=D:\\jason512newBG"', '--logging_dir=', '--network_alpha=128', '--save_model_as=ckpt', '--network_module=networks.lora', '--text_encoder_lr=5e-5', '--unet_lr=0.0001', '--network_dim=128', '--output_name=last', '--learning_rate=0.0001', '--lr_scheduler=constant', '--train_batch_size=2', '--max_train_steps=7', '--save_every_n_epochs=1', '--mixed_precision=fp16', '--save_precision=fp16', '--seed=1234', '--caption_extension=.txt', '--cache_latents', '--max_data_loader_n_workers=1', '--clip_skip=2', '--xformers', '--use_8bit_adam']' returned non-zero exit status 1.
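Worth noting from the CalledProcessError above: the trailing backslash in --train_data_dir="D:\jason512newBG\" escapes the closing quote on Windows, so --resolution and --output_dir end up glued into the same argument and the trainer never sees a resolution. A small sketch of the kind of clean-up that avoids this (variable names are illustrative, not the GUI's actual code):

# Strip trailing path separators before the command string is assembled, so a
# trailing backslash cannot escape the closing quote of the argument.
train_data_dir = "D:\\jason512newBG\\"
clean_dir = train_data_dir.rstrip("\\/")
run_cmd = f'--train_data_dir="{clean_dir}" --resolution=512,512'
print(run_cmd)  # --train_data_dir="D:\jason512newBG" --resolution=512,512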

returned non-zero exit status 1.

Traceback (most recent call last):
  File "C:\Programs\bmaltais GUI\kohya_ss\train_db.py", line 337, in <module>
    train(args)
  File "C:\Programs\bmaltais GUI\kohya_ss\train_db.py", line 178, in train
    num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch)
ZeroDivisionError: division by zero
Traceback (most recent call last):
  File "C:\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "C:\Programs\bmaltais GUI\kohya_ss\venv\Scripts\accelerate.exe\__main__.py", line 7, in <module>
  File "C:\Programs\bmaltais GUI\kohya_ss\venv\lib\site-packages\accelerate\commands\accelerate_cli.py", line 45, in main
    args.func(args)
  File "C:\Programs\bmaltais GUI\kohya_ss\venv\lib\site-packages\accelerate\commands\launch.py", line 1104, in launch_command
    simple_launcher(args)
  File "C:\Programs\bmaltais GUI\kohya_ss\venv\lib\site-packages\accelerate\commands\launch.py", line 567, in simple_launcher
    raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['C:\Programs\bmaltais GUI\kohya_ss\venv\Scripts\python.exe', 'train_db.py', '--enable_bucket', '--use_8bit_adam', '--xformers', '--pretrained_model_name_or_path=C:/Programs/stable-diffusion-webui/models/Stable-diffusion/sd-v1-4.ckpt', '--train_data_dir=C:/Users/pdept/Desktop/AI pics/training/train_data_dir/zdjecia', '--resolution=512,512', '--output_dir=C:/Users/pdept/Desktop/AI pics/training/train_data_dir/model', '--logging_dir=', '--output_name=last', '--max_data_loader_n_workers=1', '--learning_rate=1e-5', '--lr_scheduler=cosine', '--train_batch_size=1', '--save_every_n_epochs=1', '--mixed_precision=fp16', '--save_precision=fp16', '--seed=1234', '--cache_latents', '--max_data_loader_n_workers=1', '--gradient_checkpointing', '--xformers', '--use_8bit_adam']' returned non-zero exit status 1.

Save every n epochs does not match epoch

When I used to set epochs to 4 and save every 1 epoch, the result would be 4 models, but now it only generates 2. I'm not sure what I'm doing differently or whether something changed, but I can't find anything that would have caused this.

[screenshot]

[screenshot]
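One thing worth checking (a guess, not a confirmed diagnosis): the trainer derives the epoch count from the step budget, as in the num_train_epochs = math.ceil(max_train_steps / steps_per_epoch) line quoted in the next report. If max_train_steps only covers two passes over the data, only two checkpoints get written even with save_every_n_epochs set to 1. A sketch with hypothetical numbers:

import math

# Hypothetical numbers showing how a small step budget caps the number of saved checkpoints.
images, repeats, batch_size = 20, 40, 1
steps_per_epoch = images * repeats // batch_size   # 800
max_train_steps = 1600
print(math.ceil(max_train_steps / steps_per_epoch))  # 2 epochs -> 2 saved models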

Should we install Deepspeed or not?

For anyone who is trying to install DeepSpeed: should we install it or not?

While installing, when we execute accelerate config we are asked whether to use DeepSpeed. If you select yes, you are asked to run the pip command below; that is when I get this error.

Luckily I opened https://github.com/kohya-ss/sd-scripts

where the developer answered this too: we are asked not to install DeepSpeed.

Steps to reproduce the problem

  1. In the VENV run accelerate config.
  2. When asked to install deepspeed select YES.
  3. Run pip install deepspeed

What should have happened?

DeepSpeed should be installed. But instead we get a CUDA error.

What platforms do you use to access the UI?

Windows

What browsers do you use to access the UI?

Mozilla Firefox

Command Line Arguments

pip install deepspeed

Additional information, context and logs

(venv) PS D:\kohya_ss> pip install deepspeed
Collecting deepspeed
  Using cached deepspeed-0.7.7.tar.gz (712 kB)
  Preparing metadata (setup.py) ... error
  error: subprocess-exited-with-error

  × python setup.py egg_info did not run successfully.
  │ exit code: 1
  ╰─> [8 lines of output]
      Traceback (most recent call last):
        File "<string>", line 2, in <module>
        File "<pip-setuptools-caller>", line 34, in <module>
        File "C:\Users\jan\AppData\Local\Temp\pip-install-bh_2b2bj\deepspeed_337b9c20b5434938ad45f7a28ad99337\setup.py", line 78, in <module>
          cupy = f"cupy-cuda{''.join(map(str,installed_cuda_version()))}"
        File "C:\Users\jan\AppData\Local\Temp\pip-install-bh_2b2bj\deepspeed_337b9c20b5434938ad45f7a28ad99337\op_builder\builder.py", line 41, in installed_cuda_version
          assert cuda_home is not None, "CUDA_HOME does not exist, unable to compile CUDA op(s)"
      AssertionError: CUDA_HOME does not exist, unable to compile CUDA op(s)
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
(venv) PS D:\kohya_ss>
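For what it's worth, the assertion in that log is just checking the CUDA_HOME environment variable, which the DeepSpeed build uses to locate the CUDA toolkit. A quick way to see what it currently points at (a diagnostic only, not a suggestion to install DeepSpeed, which the developer advises against above):

import os

# Prints the CUDA toolkit location the DeepSpeed build would use, or a notice if it is unset.
print(os.environ.get("CUDA_HOME", "CUDA_HOME is not set"))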

ModuleNotFoundError: No module named 'albumentations'

Traceback (most recent call last):
  File "C:\Users\Siddhesh\Desktop\kohya_ss\train_network.py", line 21, in <module>
    import albumentations as albu
ModuleNotFoundError: No module named 'albumentations'
Traceback (most recent call last):
  File "C:\Users\Siddhesh\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\Siddhesh\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "C:\Users\Siddhesh\AppData\Local\Programs\Python\Python310\Scripts\accelerate.exe\__main__.py", line 7, in <module>
  File "C:\Users\Siddhesh\AppData\Local\Programs\Python\Python310\lib\site-packages\accelerate\commands\accelerate_cli.py", line 45, in main
    args.func(args)
  File "C:\Users\Siddhesh\AppData\Local\Programs\Python\Python310\lib\site-packages\accelerate\commands\launch.py", line 1104, in launch_command
    simple_launcher(args)
  File "C:\Users\Siddhesh\AppData\Local\Programs\Python\Python310\lib\site-packages\accelerate\commands\launch.py", line 567, in simple_launcher
    raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['C:\Users\Siddhesh\AppData\Local\Programs\Python\Python310\python.exe', 'train_network.py', '--cache_latents', '--enable_bucket', '--use_8bit_adam', '--xformers', '--pretrained_model_name_or_path=C:/Users/Siddhesh/Desktop/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned.ckpt', '--train_data_dir=C:/Users/Siddhesh/Desktop/test\img', '--resolution=512,512', '--output_dir=C:/Users/Siddhesh/Desktop/test\model', '--train_batch_size=1', '--lr_scheduler=constant', '--lr_warmup_steps=0', '--max_train_steps=800', '--use_8bit_adam', '--xformers', '--mixed_precision=fp16', '--save_every_n_epochs=1', '--seed=1234', '--save_precision=fp16', '--logging_dir=C:/Users/Siddhesh/Desktop/test\log', '--network_module=networks.lora', '--text_encoder_lr=1e-06', '--unet_lr=0.0001', '--network_dim=4']' returned non-zero exit status 1.

Cudnn initialized error

I followed the steps exactly as shown in the YouTube video, but this is where I stopped making progress.

bitsandbytes package generating an OSError: [WinError 193] %1 is not a valid Win32 application

Replace CrossAttention.forward to use xformers
caching latents.
0it [00:00, ?it/s]
prepare optimizer, data loader etc.

Traceback (most recent call last):
  File "C:\Programs\bmaltais GUI\kohya_ss\train_db.py", line 332, in <module>
    train(args)
  File "C:\Programs\bmaltais GUI\kohya_ss\train_db.py", line 119, in train
    import bitsandbytes as bnb
  File "C:\Programs\Python\Python310\lib\site-packages\bitsandbytes\__init__.py", line 5, in <module>
    from .optim import adam
  File "C:\Programs\Python\Python310\lib\site-packages\bitsandbytes\optim\__init__.py", line 5, in <module>
    from .adam import Adam, Adam8bit, Adam32bit
  File "C:\Programs\Python\Python310\lib\site-packages\bitsandbytes\optim\adam.py", line 11, in <module>
    from bitsandbytes.optim.optimizer import Optimizer2State
  File "C:\Programs\Python\Python310\lib\site-packages\bitsandbytes\optim\optimizer.py", line 6, in <module>
    import bitsandbytes.functional as F
  File "C:\Programs\Python\Python310\lib\site-packages\bitsandbytes\functional.py", line 13, in <module>
    lib = ct.cdll.LoadLibrary(os.path.dirname(__file__) + '/libbitsandbytes.so')
  File "C:\Programs\Python\Python310\lib\ctypes\__init__.py", line 452, in LoadLibrary
    return self._dlltype(name)
  File "C:\Programs\Python\Python310\lib\ctypes\__init__.py", line 374, in __init__
    self._handle = _dlopen(self._name, mode)

OSError: [WinError 193] %1 is not a valid Win32 application

Traceback (most recent call last):
  File "C:\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "C:\Programs\Python\Python310\Scripts\accelerate.exe\__main__.py", line 7, in <module>
  File "C:\Programs\Python\Python310\lib\site-packages\accelerate\commands\accelerate_cli.py", line 45, in main
    args.func(args)
  File "C:\Programs\Python\Python310\lib\site-packages\accelerate\commands\launch.py", line 1104, in launch_command
    simple_launcher(args)
  File "C:\Programs\Python\Python310\lib\site-packages\accelerate\commands\launch.py", line 567, in simple_launcher
    raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)

subprocess.CalledProcessError: Command '['C:\Programs\Python\Python310\python.exe', 'train_db.py', '--cache_latents', '--enable_bucket', '--use_8bit_adam', '--xformers', '--pretrained_model_name_or_path=C:/Programs/stable-diffusion-webui/models/Stable-diffusion/sd-v1-4.ckpt', '--train_data_dir=C:/Users/pdept/Desktop/AI pics/training/wrathshp/edited', '--resolution=512,512', '--output_dir=C:/Programs/stable-diffusion-webui/models/Stable-diffusion', '--train_batch_size=1', '--learning_rate=1e-06', '--lr_scheduler=constant', '--lr_warmup_steps=0', '--max_train_steps=0', '--use_8bit_adam', '--xformers', '--mixed_precision=fp16', '--save_every_n_epochs=1', '--seed=1234', '--save_precision=fp16', '--logging_dir=', '--output_name=wny']' returned non-zero exit status 1.
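A detail visible in the traceback above: bitsandbytes is being imported from the system Python's site-packages (C:\Programs\Python\Python310\...) rather than from the kohya_ss venv, and it is trying to load a Linux .so on Windows, which suggests the patched Windows DLLs copied into the venv are never used. A quick way to confirm which installation actually gets imported (plain Python, nothing project-specific):

import importlib.util

# Shows the file the current interpreter would actually import for bitsandbytes.
spec = importlib.util.find_spec("bitsandbytes")
print(spec.origin if spec else "bitsandbytes is not installed for this interpreter")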

RuntimeWarning: Mean of empty slice. invalid value encountered in scalar divide

C:\Programs\bmaltais GUI\kohya_ss\venv\lib\site-packages\numpy\core\fromnumeric.py:3464: RuntimeWarning: Mean of empty slice.
  return _methods._mean(a, axis=axis, dtype=dtype,
C:\Programs\bmaltais GUI\kohya_ss\venv\lib\site-packages\numpy\core\_methods.py:192: RuntimeWarning: invalid value encountered in scalar divide
  ret = ret.dtype.type(ret / rcount)
mean ar error (without repeats): nan
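This warning is what numpy prints when asked for the mean of an empty array; in the trainer that typically means no training images were found, so the aspect-ratio error list is empty (that is an inference from the nan, not something the log states). A two-line reproduction:

import numpy as np

# Reproduces the "Mean of empty slice" warning and the resulting nan.
print(np.mean(np.array([])))  # nan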

Out of memory

  • 1070, 8 GB
  • xencoders work fine in an isolated environment (A1111 and Stable Horde setup)
  • also tried setting PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.9,max_split_size_mb:464

Microsoft Windows [Version 10.0.19045.2486]
(c) Microsoft Corporation. All rights reserved.

C:\WINDOWS\system32>cd\

C:\>cd kohya_ss

C:\kohya_ss>cd kohya_ss

C:\kohya_ss\kohya_ss>.\venv\Scripts\activate

(venv) C:\kohya_ss\kohya_ss>python lora_gui.py
Load CSS...
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Copy C:/delete/RM_TRY1 to C:/delete/4\img/5_sks man...
Regularization images directory is missing... not copying regularisation images...
Done creating kohya_ss training folder structure at C:/delete/4...
Folder 5_sks man: 25 steps
max_train_steps = 25
stop_text_encoder_training = 0
lr_warmup_steps = 2
accelerate launch --num_cpu_threads_per_process=8 "train_network.py" --enable_bucket --pretrained_model_name_or_path="C:/_MODELS/f222.ckpt" --train_data_dir="C:/delete/4\img" --resolution=512,512 --output_dir="C:/delete/4\model" --use_8bit_adam --xformers --logging_dir="C:/delete/4\log" --network_module=networks.lora --text_encoder_lr=5e-5 --unet_lr=1e-3 --network_dim=8 --output_name="last" --learning_rate="1e-5" --lr_scheduler="cosine" --lr_warmup_steps="2" --train_batch_size="1" --max_train_steps="25" --save_every_n_epochs="1" --mixed_precision="fp16" --save_precision="fp16" --seed="1234" --cache_latents --xformers --use_8bit_adam
prepare tokenizer
Use DreamBooth method.
prepare train images.
found directory 5_sks man contains 5 image files
25 train images with repeating.
loading image sizes.
100%|██████████| 5/5 [00:00<00:00, 320.00it/s]
make buckets
number of images (including repeats) / 各bucketの画像枚数（繰り返し回数を含む）
bucket 0: resolution (256, 832), count: 0
bucket 1: resolution (256, 896), count: 0
bucket 2: resolution (256, 960), count: 0
bucket 3: resolution (256, 1024), count: 0
bucket 4: resolution (320, 704), count: 0
bucket 5: resolution (320, 768), count: 0
bucket 6: resolution (384, 640), count: 0
bucket 7: resolution (448, 576), count: 0
bucket 8: resolution (512, 512), count: 25
bucket 9: resolution (576, 448), count: 0
bucket 10: resolution (640, 384), count: 0
bucket 11: resolution (704, 320), count: 0
bucket 12: resolution (768, 320), count: 0
bucket 13: resolution (832, 256), count: 0
bucket 14: resolution (896, 256), count: 0
bucket 15: resolution (960, 256), count: 0
bucket 16: resolution (1024, 256), count: 0
mean ar error (without repeats): 0.0
prepare accelerator
Using accelerator 0.15.0 or above.
load StableDiffusion checkpoint
loading u-net: <All keys matched successfully>
loading vae: <All keys matched successfully>
Some weights of the model checkpoint at openai/clip-vit-large-patch14 were not used when initializing CLIPTextModel: ['vision_model.encoder.layers.21.self_attn.q_proj.weight', 'vision_model.encoder.layers.20.layer_norm2.bias', 'vision_model.encoder.layers.23.layer_norm2.bias', 'vision_model.encoder.layers.21.mlp.fc2.weight', 'vision_model.encoder.layers.16.self_attn.v_proj.weight', 'vision_model.encoder.layers.0.self_attn.v_proj.bias', 'vision_model.encoder.layers.10.self_attn.v_proj.bias', 'vision_model.encoder.layers.17.mlp.fc1.bias', 'vision_model.encoder.layers.19.self_attn.out_proj.bias', 'vision_model.encoder.layers.20.layer_norm1.bias', 'vision_model.encoder.layers.8.mlp.fc1.bias', 'vision_model.encoder.layers.7.self_attn.v_proj.bias', 'vision_model.embeddings.position_ids', 'vision_model.encoder.layers.9.self_attn.v_proj.weight', 'vision_model.encoder.layers.9.layer_norm1.weight', 'vision_model.encoder.layers.12.self_attn.k_proj.weight', 'vision_model.encoder.layers.18.layer_norm1.bias', 'vision_model.encoder.layers.16.self_attn.k_proj.weight', 'vision_model.encoder.layers.13.self_attn.k_proj.bias', 'vision_model.encoder.layers.13.layer_norm2.bias', 'vision_model.encoder.layers.20.mlp.fc2.bias', 'vision_model.encoder.layers.21.layer_norm2.weight', 'vision_model.encoder.layers.13.self_attn.q_proj.weight', 'vision_model.encoder.layers.15.self_attn.out_proj.bias', 'vision_model.encoder.layers.4.mlp.fc2.bias', 'vision_model.encoder.layers.23.self_attn.out_proj.weight', 'vision_model.encoder.layers.1.self_attn.out_proj.weight', 'vision_model.encoder.layers.0.mlp.fc1.bias', 'vision_model.encoder.layers.18.self_attn.out_proj.bias', 'vision_model.encoder.layers.20.mlp.fc1.weight', 'vision_model.encoder.layers.12.self_attn.out_proj.weight', 'vision_model.encoder.layers.19.layer_norm2.weight', 'vision_model.encoder.layers.2.self_attn.out_proj.weight', 'vision_model.encoder.layers.15.self_attn.v_proj.bias', 'vision_model.encoder.layers.10.layer_norm2.bias', 'vision_model.encoder.layers.11.mlp.fc1.weight', 'vision_model.encoder.layers.23.mlp.fc1.bias', 'vision_model.encoder.layers.5.mlp.fc2.weight', 'vision_model.encoder.layers.6.self_attn.v_proj.bias', 'vision_model.encoder.layers.2.layer_norm1.weight', 'vision_model.encoder.layers.13.self_attn.q_proj.bias', 'vision_model.encoder.layers.23.self_attn.v_proj.weight', 'vision_model.encoder.layers.16.mlp.fc2.weight', 'vision_model.encoder.layers.0.layer_norm2.weight', 'vision_model.encoder.layers.23.self_attn.k_proj.bias', 'vision_model.encoder.layers.7.self_attn.out_proj.weight', 'vision_model.encoder.layers.20.self_attn.q_proj.bias', 'vision_model.encoder.layers.8.layer_norm1.weight', 'vision_model.encoder.layers.15.self_attn.v_proj.weight', 'vision_model.encoder.layers.16.self_attn.q_proj.bias', 'vision_model.encoder.layers.15.layer_norm1.bias', 'vision_model.encoder.layers.20.self_attn.q_proj.weight', 'vision_model.encoder.layers.5.self_attn.k_proj.bias', 'vision_model.encoder.layers.8.layer_norm2.weight', 'vision_model.encoder.layers.1.mlp.fc2.bias', 'vision_model.encoder.layers.19.self_attn.k_proj.bias', 'vision_model.encoder.layers.8.self_attn.q_proj.bias', 'vision_model.encoder.layers.1.self_attn.k_proj.weight', 'vision_model.encoder.layers.22.self_attn.k_proj.weight', 'vision_model.encoder.layers.17.layer_norm1.bias', 'vision_model.pre_layrnorm.weight', 'vision_model.encoder.layers.15.self_attn.q_proj.bias', 'vision_model.encoder.layers.11.self_attn.q_proj.bias', 'vision_model.encoder.layers.8.self_attn.v_proj.bias', 
'vision_model.encoder.layers.18.mlp.fc2.weight', 'vision_model.encoder.layers.2.self_attn.k_proj.bias', 'vision_model.encoder.layers.8.mlp.fc1.weight', 'vision_model.encoder.layers.20.self_attn.v_proj.weight', 'vision_model.encoder.layers.9.self_attn.q_proj.bias', 'vision_model.encoder.layers.3.layer_norm1.weight', 'vision_model.encoder.layers.12.mlp.fc1.bias', 'vision_model.encoder.layers.16.layer_norm2.bias', 'vision_model.encoder.layers.1.layer_norm2.bias', 'vision_model.encoder.layers.8.layer_norm2.bias', 'vision_model.encoder.layers.7.self_attn.v_proj.weight', 'vision_model.encoder.layers.12.mlp.fc2.bias', 'vision_model.encoder.layers.12.self_attn.q_proj.weight', 'vision_model.encoder.layers.6.mlp.fc1.bias', 'vision_model.encoder.layers.12.self_attn.out_proj.bias', 'vision_model.encoder.layers.19.self_attn.v_proj.bias', 'vision_model.encoder.layers.18.mlp.fc1.bias', 'vision_model.encoder.layers.21.layer_norm2.bias', 'vision_model.encoder.layers.23.self_attn.out_proj.bias', 'vision_model.encoder.layers.6.mlp.fc2.bias', 'vision_model.encoder.layers.10.self_attn.k_proj.bias', 'vision_model.encoder.layers.12.layer_norm2.weight', 'vision_model.encoder.layers.10.self_attn.out_proj.bias', 'vision_model.encoder.layers.8.self_attn.k_proj.weight', 'vision_model.encoder.layers.10.layer_norm1.weight', 'vision_model.encoder.layers.2.mlp.fc1.bias', 'vision_model.encoder.layers.16.self_attn.k_proj.bias', 'vision_model.encoder.layers.22.self_attn.out_proj.weight', 'vision_model.encoder.layers.17.mlp.fc2.weight', 'vision_model.encoder.layers.8.self_attn.k_proj.bias', 'vision_model.encoder.layers.4.mlp.fc2.weight', 'vision_model.encoder.layers.22.self_attn.q_proj.bias', 'vision_model.encoder.layers.9.layer_norm1.bias', 'logit_scale', 'vision_model.encoder.layers.9.mlp.fc2.weight', 'vision_model.encoder.layers.9.self_attn.v_proj.bias', 'vision_model.encoder.layers.6.self_attn.out_proj.bias', 'vision_model.encoder.layers.3.layer_norm2.weight', 'vision_model.encoder.layers.12.mlp.fc2.weight', 'vision_model.encoder.layers.9.mlp.fc1.weight', 'vision_model.encoder.layers.23.mlp.fc1.weight', 'vision_model.encoder.layers.4.self_attn.k_proj.weight', 'vision_model.encoder.layers.14.self_attn.v_proj.weight', 'vision_model.encoder.layers.15.self_attn.out_proj.weight', 'vision_model.encoder.layers.14.self_attn.q_proj.weight', 'vision_model.encoder.layers.2.mlp.fc2.weight', 'vision_model.encoder.layers.13.layer_norm2.weight', 'vision_model.encoder.layers.1.layer_norm2.weight', 'vision_model.encoder.layers.18.layer_norm2.bias', 'vision_model.encoder.layers.21.mlp.fc1.bias', 'vision_model.encoder.layers.14.mlp.fc2.bias', 'vision_model.encoder.layers.0.layer_norm1.bias', 'vision_model.encoder.layers.2.self_attn.v_proj.bias', 'vision_model.encoder.layers.0.mlp.fc1.weight', 'vision_model.encoder.layers.13.layer_norm1.weight', 'vision_model.encoder.layers.6.self_attn.k_proj.weight', 'vision_model.encoder.layers.7.mlp.fc1.bias', 'vision_model.encoder.layers.20.layer_norm1.weight', 'vision_model.encoder.layers.14.layer_norm1.bias', 'vision_model.encoder.layers.14.layer_norm2.bias', 'vision_model.encoder.layers.19.self_attn.q_proj.weight', 'vision_model.encoder.layers.21.self_attn.k_proj.bias', 'vision_model.encoder.layers.7.layer_norm1.weight', 'vision_model.encoder.layers.0.self_attn.q_proj.bias', 'vision_model.encoder.layers.13.mlp.fc2.bias', 'vision_model.encoder.layers.19.self_attn.k_proj.weight', 'vision_model.encoder.layers.18.layer_norm2.weight', 'vision_model.encoder.layers.13.self_attn.out_proj.weight', 
'vision_model.encoder.layers.22.mlp.fc1.bias', 'vision_model.encoder.layers.13.mlp.fc1.weight', 'vision_model.encoder.layers.8.self_attn.out_proj.bias', 'vision_model.encoder.layers.4.self_attn.v_proj.weight', 'vision_model.encoder.layers.20.self_attn.out_proj.bias', 'visual_projection.weight', 'vision_model.encoder.layers.19.layer_norm2.bias', 'vision_model.encoder.layers.6.self_attn.k_proj.bias', 'vision_model.encoder.layers.12.self_attn.v_proj.weight', 'vision_model.encoder.layers.5.layer_norm1.bias', 'vision_model.encoder.layers.15.mlp.fc1.bias', 'vision_model.encoder.layers.15.self_attn.k_proj.bias', 'vision_model.encoder.layers.12.self_attn.k_proj.bias', 'vision_model.encoder.layers.3.self_attn.out_proj.bias', 'vision_model.encoder.layers.10.self_attn.v_proj.weight', 'vision_model.encoder.layers.17.self_attn.k_proj.bias', 'vision_model.encoder.layers.18.self_attn.q_proj.weight', 'vision_model.encoder.layers.5.self_attn.out_proj.bias', 'vision_model.encoder.layers.11.self_attn.out_proj.weight', 'vision_model.encoder.layers.21.layer_norm1.bias', 'vision_model.encoder.layers.13.self_attn.v_proj.bias', 'vision_model.encoder.layers.8.mlp.fc2.bias', 'vision_model.encoder.layers.11.self_attn.k_proj.weight', 'vision_model.encoder.layers.11.self_attn.v_proj.weight', 'vision_model.encoder.layers.11.mlp.fc2.weight', 'vision_model.encoder.layers.21.self_attn.k_proj.weight', 'vision_model.encoder.layers.17.self_attn.k_proj.weight', 'vision_model.encoder.layers.21.mlp.fc1.weight', 'vision_model.encoder.layers.1.layer_norm1.weight', 'vision_model.encoder.layers.1.layer_norm1.bias', 'vision_model.encoder.layers.22.self_attn.k_proj.bias', 'vision_model.encoder.layers.7.layer_norm2.bias', 'vision_model.encoder.layers.18.mlp.fc2.bias', 'vision_model.encoder.layers.3.layer_norm2.bias', 'vision_model.encoder.layers.0.mlp.fc2.weight', 'vision_model.encoder.layers.11.self_attn.v_proj.bias', 'vision_model.encoder.layers.12.layer_norm1.weight', 'vision_model.encoder.layers.0.self_attn.k_proj.weight', 'vision_model.encoder.layers.19.self_attn.out_proj.weight', 'vision_model.encoder.layers.5.mlp.fc1.weight', 'vision_model.encoder.layers.14.mlp.fc1.bias', 'vision_model.encoder.layers.10.self_attn.q_proj.bias', 'vision_model.encoder.layers.8.self_attn.v_proj.weight', 'vision_model.encoder.layers.22.layer_norm1.bias', 'vision_model.encoder.layers.6.layer_norm1.bias', 'vision_model.encoder.layers.7.self_attn.q_proj.bias', 'vision_model.encoder.layers.16.self_attn.v_proj.bias', 'vision_model.encoder.layers.20.mlp.fc2.weight', 'vision_model.encoder.layers.0.self_attn.q_proj.weight', 'vision_model.encoder.layers.22.mlp.fc2.weight', 'vision_model.post_layernorm.weight', 'vision_model.encoder.layers.7.self_attn.k_proj.weight', 'vision_model.encoder.layers.10.layer_norm2.weight', 'vision_model.encoder.layers.10.mlp.fc2.weight', 'vision_model.encoder.layers.0.mlp.fc2.bias', 'vision_model.encoder.layers.18.self_attn.v_proj.weight', 'vision_model.encoder.layers.5.self_attn.q_proj.weight', 'vision_model.encoder.layers.6.mlp.fc2.weight', 'vision_model.encoder.layers.11.layer_norm2.bias', 'vision_model.encoder.layers.17.layer_norm2.weight', 'vision_model.encoder.layers.12.layer_norm1.bias', 'vision_model.encoder.layers.15.layer_norm1.weight', 'vision_model.encoder.layers.15.self_attn.q_proj.weight', 'vision_model.encoder.layers.17.self_attn.q_proj.bias', 'vision_model.encoder.layers.3.self_attn.k_proj.bias', 'vision_model.encoder.layers.1.mlp.fc2.weight', 'vision_model.encoder.layers.22.mlp.fc1.weight', 
'vision_model.encoder.layers.7.self_attn.out_proj.bias', 'vision_model.encoder.layers.19.mlp.fc1.bias', 'vision_model.pre_layrnorm.bias', 'vision_model.encoder.layers.19.layer_norm1.bias', 'vision_model.encoder.layers.19.self_attn.q_proj.bias', 'vision_model.encoder.layers.7.mlp.fc1.weight', 'vision_model.encoder.layers.21.self_attn.out_proj.weight', 'vision_model.post_layernorm.bias', 'vision_model.encoder.layers.23.layer_norm1.weight', 'vision_model.embeddings.patch_embedding.weight', 'vision_model.encoder.layers.3.mlp.fc1.bias', 'vision_model.encoder.layers.16.layer_norm2.weight', 'vision_model.encoder.layers.5.self_attn.out_proj.weight', 'vision_model.encoder.layers.20.layer_norm2.weight', 'vision_model.encoder.layers.7.layer_norm1.bias', 'vision_model.encoder.layers.16.self_attn.out_proj.bias', 'vision_model.encoder.layers.10.self_attn.out_proj.weight', 'vision_model.encoder.layers.21.mlp.fc2.bias', 'text_projection.weight', 'vision_model.encoder.layers.2.self_attn.v_proj.weight', 'vision_model.encoder.layers.5.mlp.fc2.bias', 'vision_model.encoder.layers.17.self_attn.out_proj.bias', 'vision_model.encoder.layers.10.self_attn.k_proj.weight', 'vision_model.encoder.layers.18.layer_norm1.weight', 'vision_model.encoder.layers.19.layer_norm1.weight', 'vision_model.encoder.layers.20.self_attn.v_proj.bias', 'vision_model.encoder.layers.19.mlp.fc2.bias', 'vision_model.encoder.layers.1.mlp.fc1.bias', 'vision_model.encoder.layers.2.self_attn.q_proj.weight', 'vision_model.encoder.layers.7.mlp.fc2.bias', 'vision_model.encoder.layers.19.mlp.fc2.weight', 'vision_model.encoder.layers.22.self_attn.v_proj.bias', 'vision_model.encoder.layers.9.self_attn.out_proj.bias', 'vision_model.encoder.layers.14.self_attn.k_proj.weight', 'vision_model.encoder.layers.2.self_attn.q_proj.bias', 'vision_model.encoder.layers.9.layer_norm2.weight', 'vision_model.encoder.layers.6.self_attn.out_proj.weight', 'vision_model.encoder.layers.12.layer_norm2.bias', 'vision_model.encoder.layers.1.self_attn.out_proj.bias', 'vision_model.encoder.layers.17.mlp.fc2.bias', 'vision_model.encoder.layers.1.self_attn.v_proj.weight', 'vision_model.encoder.layers.18.mlp.fc1.weight', 'vision_model.encoder.layers.23.self_attn.v_proj.bias', 'vision_model.encoder.layers.6.self_attn.v_proj.weight', 'vision_model.encoder.layers.23.self_attn.k_proj.weight', 'vision_model.encoder.layers.3.self_attn.k_proj.weight', 'vision_model.encoder.layers.17.mlp.fc1.weight', 'vision_model.encoder.layers.22.self_attn.v_proj.weight', 'vision_model.encoder.layers.18.self_attn.q_proj.bias', 'vision_model.encoder.layers.16.self_attn.out_proj.weight', 'vision_model.encoder.layers.7.self_attn.q_proj.weight', 'vision_model.encoder.layers.1.self_attn.q_proj.bias', 'vision_model.encoder.layers.0.layer_norm2.bias', 'vision_model.encoder.layers.13.mlp.fc1.bias', 'vision_model.encoder.layers.21.self_attn.q_proj.bias', 'vision_model.encoder.layers.19.self_attn.v_proj.weight', 'vision_model.encoder.layers.20.self_attn.out_proj.weight', 'vision_model.encoder.layers.14.self_attn.out_proj.bias', 'vision_model.encoder.layers.0.self_attn.out_proj.weight', 'vision_model.encoder.layers.4.layer_norm2.weight', 'vision_model.encoder.layers.16.mlp.fc1.weight', 'vision_model.encoder.layers.0.layer_norm1.weight', 'vision_model.encoder.layers.19.mlp.fc1.weight', 'vision_model.encoder.layers.15.mlp.fc1.weight', 'vision_model.encoder.layers.3.self_attn.v_proj.weight', 'vision_model.encoder.layers.6.layer_norm2.weight', 'vision_model.encoder.layers.14.mlp.fc1.weight', 
'vision_model.encoder.layers.17.self_attn.v_proj.weight', 'vision_model.encoder.layers.4.mlp.fc1.bias', 'vision_model.encoder.layers.5.layer_norm2.bias', 'vision_model.encoder.layers.8.self_attn.q_proj.weight', 'vision_model.encoder.layers.12.self_attn.v_proj.bias', 'vision_model.encoder.layers.14.mlp.fc2.weight', 'vision_model.encoder.layers.6.layer_norm2.bias', 'vision_model.encoder.layers.11.self_attn.out_proj.bias', 'vision_model.encoder.layers.16.mlp.fc1.bias', 'vision_model.encoder.layers.13.self_attn.out_proj.bias', 'vision_model.encoder.layers.3.self_attn.q_proj.bias', 'vision_model.encoder.layers.14.layer_norm2.weight', 'vision_model.encoder.layers.10.layer_norm1.bias', 'vision_model.encoder.layers.9.self_attn.q_proj.weight', 'vision_model.encoder.layers.2.layer_norm1.bias', 'vision_model.encoder.layers.5.self_attn.v_proj.bias', 'vision_model.encoder.layers.7.layer_norm2.weight', 'vision_model.encoder.layers.6.layer_norm1.weight', 'vision_model.encoder.layers.18.self_attn.out_proj.weight', 'vision_model.encoder.layers.22.layer_norm2.weight', 'vision_model.encoder.layers.16.layer_norm1.bias', 'vision_model.encoder.layers.20.self_attn.k_proj.weight', 'vision_model.encoder.layers.13.self_attn.v_proj.weight', 'vision_model.encoder.layers.11.layer_norm2.weight', 'vision_model.encoder.layers.10.self_attn.q_proj.weight', 'vision_model.encoder.layers.15.mlp.fc2.bias', 'vision_model.encoder.layers.4.self_attn.k_proj.bias', 'vision_model.encoder.layers.1.mlp.fc1.weight', 'vision_model.encoder.layers.4.layer_norm2.bias', 'vision_model.embeddings.position_embedding.weight', 'vision_model.encoder.layers.13.self_attn.k_proj.weight', 'vision_model.encoder.layers.23.mlp.fc2.weight', 'vision_model.encoder.layers.8.mlp.fc2.weight', 'vision_model.encoder.layers.5.layer_norm2.weight', 'vision_model.encoder.layers.23.self_attn.q_proj.bias', 'vision_model.encoder.layers.14.self_attn.v_proj.bias', 'vision_model.encoder.layers.3.mlp.fc1.weight', 'vision_model.encoder.layers.12.self_attn.q_proj.bias', 'vision_model.encoder.layers.2.layer_norm2.bias', 'vision_model.encoder.layers.15.mlp.fc2.weight', 'vision_model.encoder.layers.3.mlp.fc2.weight', 'vision_model.encoder.layers.6.self_attn.q_proj.weight', 'vision_model.encoder.layers.9.self_attn.k_proj.bias', 'vision_model.encoder.layers.8.self_attn.out_proj.weight', 'vision_model.encoder.layers.22.self_attn.q_proj.weight', 'vision_model.encoder.layers.9.self_attn.k_proj.weight', 'vision_model.encoder.layers.11.layer_norm1.weight', 'vision_model.encoder.layers.16.mlp.fc2.bias', 'vision_model.encoder.layers.23.mlp.fc2.bias', 'vision_model.encoder.layers.3.self_attn.out_proj.weight', 'vision_model.encoder.layers.14.layer_norm1.weight', 'vision_model.encoder.layers.6.mlp.fc1.weight', 'vision_model.encoder.layers.11.layer_norm1.bias', 'vision_model.encoder.layers.4.layer_norm1.weight', 'vision_model.encoder.layers.10.mlp.fc1.weight', 'vision_model.encoder.layers.3.self_attn.v_proj.bias', 'vision_model.encoder.layers.16.layer_norm1.weight', 'vision_model.encoder.layers.5.self_attn.q_proj.bias', 'vision_model.encoder.layers.10.mlp.fc1.bias', 'vision_model.encoder.layers.2.self_attn.out_proj.bias', 'vision_model.encoder.layers.15.layer_norm2.bias', 'vision_model.encoder.layers.17.layer_norm2.bias', 'vision_model.encoder.layers.9.mlp.fc1.bias', 'vision_model.encoder.layers.3.layer_norm1.bias', 'vision_model.encoder.layers.5.self_attn.v_proj.weight', 'vision_model.encoder.layers.18.self_attn.v_proj.bias', 'vision_model.encoder.layers.5.self_attn.k_proj.weight', 
'vision_model.encoder.layers.23.layer_norm1.bias', 'vision_model.encoder.layers.22.layer_norm1.weight', 'vision_model.encoder.layers.23.layer_norm2.weight', 'vision_model.encoder.layers.22.layer_norm2.bias', 'vision_model.encoder.layers.10.mlp.fc2.bias', 'vision_model.encoder.layers.11.mlp.fc2.bias', 'vision_model.encoder.layers.2.mlp.fc1.weight', 'vision_model.encoder.layers.9.self_attn.out_proj.weight', 'vision_model.encoder.layers.3.mlp.fc2.bias', 'vision_model.encoder.layers.3.self_attn.q_proj.weight', 'vision_model.encoder.layers.0.self_attn.v_proj.weight', 'vision_model.encoder.layers.4.self_attn.q_proj.bias', 'vision_model.encoder.layers.5.layer_norm1.weight', 'vision_model.encoder.layers.15.self_attn.k_proj.weight', 'vision_model.encoder.layers.21.self_attn.v_proj.weight', 'vision_model.encoder.layers.14.self_attn.q_proj.bias', 'vision_model.encoder.layers.4.self_attn.out_proj.weight', 'vision_model.encoder.layers.17.self_attn.out_proj.weight', 'vision_model.encoder.layers.4.self_attn.v_proj.bias', 'vision_model.encoder.layers.16.self_attn.q_proj.weight', 'vision_model.encoder.layers.13.layer_norm1.bias', 'vision_model.encoder.layers.2.mlp.fc2.bias', 'vision_model.encoder.layers.21.layer_norm1.weight', 'vision_model.encoder.layers.14.self_attn.k_proj.bias', 'vision_model.encoder.layers.4.mlp.fc1.weight', 'vision_model.encoder.layers.7.self_attn.k_proj.bias', 'vision_model.encoder.layers.8.layer_norm1.bias', 'vision_model.encoder.layers.20.self_attn.k_proj.bias', 'vision_model.encoder.layers.2.self_attn.k_proj.weight', 'vision_model.encoder.layers.11.self_attn.k_proj.bias', 'vision_model.encoder.layers.22.mlp.fc2.bias', 'vision_model.encoder.layers.18.self_attn.k_proj.bias', 'vision_model.encoder.layers.6.self_attn.q_proj.bias', 'vision_model.encoder.layers.9.layer_norm2.bias', 'vision_model.encoder.layers.0.self_attn.out_proj.bias', 'vision_model.encoder.layers.7.mlp.fc2.weight', 'vision_model.encoder.layers.1.self_attn.k_proj.bias', 'vision_model.encoder.layers.17.self_attn.v_proj.bias', 'vision_model.encoder.layers.13.mlp.fc2.weight', 'vision_model.encoder.layers.4.self_attn.q_proj.weight', 'vision_model.encoder.layers.23.self_attn.q_proj.weight', 'vision_model.encoder.layers.9.mlp.fc2.bias', 'vision_model.encoder.layers.4.self_attn.out_proj.bias', 'vision_model.encoder.layers.21.self_attn.v_proj.bias', 'vision_model.encoder.layers.11.self_attn.q_proj.weight', 'vision_model.encoder.layers.14.self_attn.out_proj.weight', 'vision_model.encoder.layers.1.self_attn.q_proj.weight', 'vision_model.encoder.layers.4.layer_norm1.bias', 'vision_model.encoder.layers.17.layer_norm1.weight', 'vision_model.encoder.layers.12.mlp.fc1.weight', 'vision_model.encoder.layers.21.self_attn.out_proj.bias', 'vision_model.encoder.layers.2.layer_norm2.weight', 'vision_model.encoder.layers.11.mlp.fc1.bias', 'vision_model.encoder.layers.15.layer_norm2.weight', 'vision_model.encoder.layers.5.mlp.fc1.bias', 'vision_model.embeddings.class_embedding', 'vision_model.encoder.layers.17.self_attn.q_proj.weight', 'vision_model.encoder.layers.0.self_attn.k_proj.bias', 'vision_model.encoder.layers.22.self_attn.out_proj.bias', 'vision_model.encoder.layers.18.self_attn.k_proj.weight', 'vision_model.encoder.layers.20.mlp.fc1.bias', 'vision_model.encoder.layers.1.self_attn.v_proj.bias']
- This IS expected if you are initializing CLIPTextModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing CLIPTextModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
loading text encoder: <All keys matched successfully>
Replace CrossAttention.forward to use xformers
caching latents.
100%|██████████| 5/5 [00:10<00:00,  2.02s/it]
import network module: networks.lora
create LoRA for Text Encoder: 72 modules.
create LoRA for U-Net: 192 modules.
enable LoRA for text encoder
enable LoRA for U-Net
prepare optimizer, data loader etc.

===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
For effortless bug reporting copy-paste your error into this form: https://docs.google.com/forms/d/e/1FAIpQLScPB8emS3Thkp66nvqwmjTEgxp8Y9ufuWTzFyr9kJ5AoI47dQ/viewform?usp=sf_link
================================================================================
CUDA SETUP: Loading binary C:\kohya_ss\kohya_ss\venv\lib\site-packages\bitsandbytes\libbitsandbytes_cuda116.dll...
use 8-bit Adam optimizer
running training
  num train images * repeats: 25
  num reg images: 0
  num batches per epoch: 25
  num epochs: 1
  batch size per device: 1
  total train batch size (with parallel & distributed & accumulation): 1
  gradient accumulation steps = 1
  total optimization steps: 25
steps:   0%|                                                                                                                  | 0/25 [00:00<?, ?it/s]epoch 1/1
Traceback (most recent call last):
  File "C:\kohya_ss\kohya_ss\train_network.py", line 427, in <module>
    train(args)
  File "C:\kohya_ss\kohya_ss\train_network.py", line 305, in train
    noise_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample
  File "C:\kohya_ss\kohya_ss\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\kohya_ss\kohya_ss\venv\lib\site-packages\accelerate\utils\operations.py", line 490, in __call__
    return convert_to_fp32(self.model_forward(*args, **kwargs))
  File "C:\kohya_ss\kohya_ss\venv\lib\site-packages\torch\amp\autocast_mode.py", line 12, in decorate_autocast
    return func(*args, **kwargs)
  File "C:\kohya_ss\kohya_ss\venv\lib\site-packages\diffusers\models\unet_2d_condition.py", line 381, in forward
    sample, res_samples = downsample_block(
  File "C:\kohya_ss\kohya_ss\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\kohya_ss\kohya_ss\venv\lib\site-packages\diffusers\models\unet_2d_blocks.py", line 612, in forward
    hidden_states = attn(hidden_states, encoder_hidden_states=encoder_hidden_states).sample
  File "C:\kohya_ss\kohya_ss\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\kohya_ss\kohya_ss\venv\lib\site-packages\diffusers\models\attention.py", line 216, in forward
    hidden_states = block(hidden_states, context=encoder_hidden_states, timestep=timestep)
  File "C:\kohya_ss\kohya_ss\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\kohya_ss\kohya_ss\venv\lib\site-packages\diffusers\models\attention.py", line 484, in forward
    hidden_states = self.attn1(norm_hidden_states) + hidden_states
  File "C:\kohya_ss\kohya_ss\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\kohya_ss\kohya_ss\library\train_util.py", line 997, in forward_xformers
    out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None)        # picks the best available implementation for us
  File "C:\kohya_ss\kohya_ss\venv\lib\site-packages\xformers\ops.py", line 865, in memory_efficient_attention
    return op.apply(query, key, value, attn_bias, p).reshape(output_shape)
  File "C:\kohya_ss\kohya_ss\venv\lib\site-packages\xformers\ops.py", line 319, in forward
    out, lse = cls.FORWARD_OPERATOR(
  File "C:\kohya_ss\kohya_ss\venv\lib\site-packages\torch\_ops.py", line 143, in __call__
    return self._op(*args, **kwargs or {})
RuntimeError: CUDA error: out of memory
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
steps:   0%|                                                                                                                  | 0/25 [00:59<?, ?it/s]
Traceback (most recent call last):
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "C:\kohya_ss\kohya_ss\venv\Scripts\accelerate.exe\__main__.py", line 7, in <module>
  File "C:\kohya_ss\kohya_ss\venv\lib\site-packages\accelerate\commands\accelerate_cli.py", line 45, in main
    args.func(args)
  File "C:\kohya_ss\kohya_ss\venv\lib\site-packages\accelerate\commands\launch.py", line 1104, in launch_command
    simple_launcher(args)
  File "C:\kohya_ss\kohya_ss\venv\lib\site-packages\accelerate\commands\launch.py", line 567, in simple_launcher
    raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['C:\\kohya_ss\\kohya_ss\\venv\\Scripts\\python.exe', 'train_network.py', '--enable_bucket', '--pretrained_model_name_or_path=C:/_MODELS/f222.ckpt', '--train_data_dir=C:/delete/4\\img', '--resolution=512,512', '--output_dir=C:/delete/4\\model', '--use_8bit_adam', '--xformers', '--logging_dir=C:/delete/4\\log', '--network_module=networks.lora', '--text_encoder_lr=5e-5', '--unet_lr=1e-3', '--network_dim=8', '--output_name=last', '--learning_rate=1e-5', '--lr_scheduler=cosine', '--lr_warmup_steps=2', '--train_batch_size=1', '--max_train_steps=25', '--save_every_n_epochs=1', '--mixed_precision=fp16', '--save_precision=fp16', '--seed=1234', '--cache_latents', '--xformers', '--use_8bit_adam']' returned non-zero exit status 1.
Keyboard interruption in main thread... closing server.

(venv) C:\kohya_ss\kohya_ss>

Defaults in GUI different from those suggested by Kohya, any reason?

Hey. I love this GUI. I have gotten results with training a person, but it makes me wonder whether the chosen GUI defaults should be kept that way or changed. Perhaps with the new changes in cloneofsimo's original LoRA repo, things might have changed?

If anyone can point me to resources where I can learn more about the chosen defaults (and why they were chosen) and how I can adjust them to better suit my specs and training concept(s), I would really appreciate it!

Also, it would be great if 'Discussions' could be enabled on this repo.

state_dict for UNet2DConditionModel

I got this:
File "D:\LORA\kohya_ss\venv\lib\site-packages\torch\nn\modules\module.py", line 1604, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for UNet2DConditionModel:
size mismatch for down_blocks.0.attentions.0.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([320, 768]) from checkpoint, the shape in current model is torch.Size([320, 1024]).
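
(A hedged aside, not from the issue author: 768 is the text-conditioning width of SD 1.x checkpoints, while 1024 is the width SD 2.x models expect, so this size mismatch usually means the v2 option does not match the checkpoint being loaded. A minimal diagnostic sketch, assuming a standard .ckpt key layout and using a placeholder path:)

import torch

# Hypothetical check: read the cross-attention key width of a Stable Diffusion
# checkpoint to tell SD 1.x (768) from SD 2.x (1024) before picking the v2 option.
ckpt_path = r"D:\models\some_model.ckpt"  # placeholder path, not from the issue

sd = torch.load(ckpt_path, map_location="cpu")
sd = sd.get("state_dict", sd)  # many .ckpt files nest weights under "state_dict"

key = "model.diffusion_model.input_blocks.1.1.transformer_blocks.0.attn2.to_k.weight"
print("context dim:", sd[key].shape[1])  # 768 -> SD 1.x, 1024 -> SD 2.x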

Please update readme file pip install --upgrade -r requirements.txt

Hi dear developer. I am getting the alert below while installing. Because of the high failure rate of getting DreamBooth working, alerts like this make us doubt the install and reinstall Kohya_ss again and again. Please look into this and, if the fix works, please update the readme.

DEPRECATION: library is being installed using the legacy 'setup.py install' method, because it does not have
a 'pyproject.toml' and the 'wheel' package is not installed. pip 23.1 will enforce this behaviour change.
A possible replacement is to enable the '--use-pep517' option. Discussion can be found at
https://github.com/pypa/pip/issues/8559

DEPRECATION: ffmpy is being installed using the legacy 'setup.py install' method, because it does not have
a 'pyproject.toml' and the 'wheel' package is not installed. pip 23.1 will enforce this behaviour change. 
A possible replacement is to enable the '--use-pep517' option. Discussion can be found at
https://github.com/pypa/pip/issues/8559

DEPRECATION: python-multipart is being installed using the legacy 'setup.py install' method, because it does
not have a 'pyproject.toml' and the 'wheel' package is not installed. pip 23.1 will enforce this behaviour
change. A possible replacement is to enable the '--use-pep517' option. Discussion can be found at
https://github.com/pypa/pip/issues/8559

Possible solution:
pip install --use-pep517 --upgrade -r requirements.txt

Please update the readme if this is found to help and suppresses those alerts.

Accelerate config step during installation won't work

Basically when I try to run accelerate config it will throw a bunch of errors:

(venv) PS G:\!AI\SD\kohya_ss> accelerate config
------------------------------------------------------------------------------------------------------------------------
In which compute environment are you running?
Please select a choice using the arrow or number keys, and selecting with enter
Traceback (most recent call last):
  File "C:\Python310\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Python310\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "G:\!AI\SD\kohya_ss\venv\Scripts\accelerate.exe\__main__.py", line 7, in <module>
  File "G:\!AI\SD\kohya_ss\venv\lib\site-packages\accelerate\commands\accelerate_cli.py", line 45, in main
    args.func(args)
  File "G:\!AI\SD\kohya_ss\venv\lib\site-packages\accelerate\commands\config\config.py", line 67, in config_command
    config = get_user_input()
  File "G:\!AI\SD\kohya_ss\venv\lib\site-packages\accelerate\commands\config\config.py", line 32, in get_user_input
    compute_environment = _ask_options(
  File "G:\!AI\SD\kohya_ss\venv\lib\site-packages\accelerate\commands\config\config_utils.py", line 58, in _ask_options
    result = menu.run(default_choice=default)
  File "G:\!AI\SD\kohya_ss\venv\lib\site-packages\accelerate\commands\menu\selection_menu.py", line 118, in run
    choice = self.handle_input()
  File "G:\!AI\SD\kohya_ss\venv\lib\site-packages\accelerate\commands\menu\input.py", line 73, in handle_input
    char = get_character()
  File "G:\!AI\SD\kohya_ss\venv\lib\site-packages\accelerate\commands\menu\keymap.py", line 114, in get_character
    char = get_raw_chars()
  File "G:\!AI\SD\kohya_ss\venv\lib\site-packages\accelerate\commands\menu\keymap.py", line 77, in get_raw_chars
    if ch.encode(encoding) in (b"\x00", b"\xe0"):
UnicodeEncodeError: 'mbcs' codec can't encode characters in position 0--1: invalid character

I'm running the Polish version of Windows with the 852 (OEM - Latin II) codepage for cmd / powershell - but I tried switching to Unicode a few days ago and it was failing the same way.

Edit: it's pretty much impossible to select options using the arrows (it throws the error immediately), and the number keys don't work correctly either. For now I left the default options, which should work.

Dataset Balancing issues

It seems like there is a small bug with the 1.8.5 dataset balancing; it only gets it partially correct.

It then comes up with this error:

Traceback (most recent call last):
  File "C:\Users\person\OneDrive\Documents\GitHub\kohya_ss\venv\lib\site-packages\gradio\routes.py", line 321, in run_predict
    output = await app.blocks.process_api(
  File "C:\Users\person\OneDrive\Documents\GitHub\kohya_ss\venv\lib\site-packages\gradio\blocks.py", line 1006, in process_api
    result = await self.call_function(fn_index, inputs, iterator, request)
  File "C:\Users\person\OneDrive\Documents\GitHub\kohya_ss\venv\lib\site-packages\gradio\blocks.py", line 847, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\Users\person\OneDrive\Documents\GitHub\kohya_ss\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\Users\person\OneDrive\Documents\GitHub\kohya_ss\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "C:\Users\person\OneDrive\Documents\GitHub\kohya_ss\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "C:\Users\person\OneDrive\Documents\GitHub\kohya_ss\library\dataset_balancing_gui.py", line 58, in dataset_balancing
    repeats = max(1, round(concept_repeats / images))
ZeroDivisionError: division by zero

(The same traceback is printed three times.)

Request: Can we get this on Colab?

TheLastBen made a webui notebook on Colab for automatic1111, so I know it can be done; I use it to train on the Colab free tier. Could you port this over, as I know I certainly can't?

interrupt function for gui (feature request)

It seems training is incredibly slow. I have been testing different batches and datasets, and it's running at 1.29 it/s compared to automatic1111's GUI DreamBooth training.

So far I'm enjoying the flexibility of your script, but I feel it needs the ability to specify save points at a number of steps on top of the epochs, and it would be nice if there was an interrupt command to tell it to save right at the current step and stop training.

Would be useful.

Generate an image preview of all concepts every X steps (allows the user to decide when the model is fully trained and prevents over-training)
Allow interrupting / saving at the current step through the GUI / Python script
Allow save state to support saving at X number of steps (this could be set to the same amount as the generate preview)

Error no kernel image is available for execution on the device

Error no kernel image is available for execution on the device at line 89 in file D:\ai\tool\bitsandbytes\csrc\ops.cu
Traceback (most recent call last):
  File "C:\Users\Siddhesh\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\Siddhesh\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "C:\Users\Siddhesh\Desktop\kohya_ss\venv\Scripts\accelerate.exe\__main__.py", line 7, in <module>
  File "C:\Users\Siddhesh\Desktop\kohya_ss\venv\lib\site-packages\accelerate\commands\accelerate_cli.py", line 45, in main
    args.func(args)
  File "C:\Users\Siddhesh\Desktop\kohya_ss\venv\lib\site-packages\accelerate\commands\launch.py", line 1104, in launch_command
    simple_launcher(args)
  File "C:\Users\Siddhesh\Desktop\kohya_ss\venv\lib\site-packages\accelerate\commands\launch.py", line 567, in simple_launcher
    raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['C:\Users\Siddhesh\Desktop\kohya_ss\venv\Scripts\python.exe', 'train_network.py', '--cache_latents', '--enable_bucket', '--use_8bit_adam', '--xformers', '--pretrained_model_name_or_path=C:/Users/Siddhesh/Desktop/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned.ckpt', '--train_data_dir=C:/Users/Siddhesh/Desktop/test\img', '--resolution=512,512', '--output_dir=C:/Users/Siddhesh/Desktop/test\model', '--train_batch_size=1', '--lr_scheduler=constant', '--lr_warmup_steps=0', '--max_train_steps=400', '--use_8bit_adam', '--xformers', '--mixed_precision=fp16', '--save_every_n_epochs=1', '--seed=1234', '--save_precision=fp16', '--logging_dir=C:/Users/Siddhesh/Desktop/test\log', '--network_module=networks.lora', '--text_encoder_lr=1e-06', '--unet_lr=0.0001', '--network_dim=4']' returned non-zero exit status 1.

(venv) PS C:\Users\Siddhesh\Desktop\kohya_ss> python
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.

import torch
import sys
print('A', sys.version)
A 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
print('B', torch.__version__)
B 1.12.1+cu116
print('C', torch.cuda.is_available())
C True
print('D', torch.backends.cudnn.enabled)
D True
device = torch.device('cuda')
print('E', torch.cuda.get_device_properties(device))
E _CudaDeviceProperties(name='NVIDIA GeForce GTX 1060 6GB', major=6, minor=1, total_memory=6143MB, multi_processor_count=10)
print('F', torch.tensor([1.0, 2.0]).cuda())
F tensor([1., 2.], device='cuda:0')

EDIT: Error no kernel image is available for execution on the device at line 89 in file D:\ai\tool\bitsandbytes\csrc\ops.cu

^^ The part in bold (the D:\ path) must be where the error comes from, because I have no D:\ drive on my system!
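
(A hedged note from my side: "no kernel image is available" generally means the compiled CUDA kernels, here the prebuilt bitsandbytes DLL that was built on some other machine's D:\ drive, were not compiled for this GPU's architecture; a GTX 1060 is compute capability 6.1. A quick way to confirm what architecture your card reports:)

import torch

# Print the compute capability of the first CUDA device. "no kernel image is
# available" usually means the compiled kernels do not target this architecture.
major, minor = torch.cuda.get_device_capability(0)
print(f"compute capability: sm_{major}{minor}")  # a GTX 1060 reports sm_61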

cuda: Out of memory issue on rtx3090 (24GB vram)

Hi,
I successfully managed to run your repo based on the SD 1.5 model.
Now I'm trying to run the SD 2.0 768 model, but I get a CUDA out-of-memory error.

I have 23 training images (768x768) in a 20_person folder under the train_person folder.
I tried lowering the batch size and setting cache latents to 0 (disabled).
Here are the settings I put into PowerShell with the venv (virtual environment):

variable values

$pretrained_model_name_or_path = "D:\stable-diffusion-webui\models\Stable-diffusion\768-v-ema.ckpt"
$data_dir = "D:\kohya_ss\zwx_person_db\train_person"
$logging_dir = "D:\kohya_ss\log"
$output_dir = "D:\kohya_ss\output"
$resolution = "768,768"
$lr_scheduler="polynomial"
$cache_latents = 0 # 1 = true, 0 = false

$image_num = Get-ChildItem $data_dir -Recurse -File -Include *.png, *.jpg, *.webp | Measure-Object | %{$_.Count}

Write-Output "image_num: $image_num"

$dataset_repeats = 2000
$learning_rate = 1e-6
$train_batch_size = 1
$epoch = 1
$save_every_n_epochs=1
$mixed_precision="bf16"
$num_cpu_threads_per_process=6

You should not have to change values past this point

if ($cache_latents -eq 1) {
$cache_latents_value="--cache_latents"
}
else {
$cache_latents_value=""
}

$repeats = $image_num * $dataset_repeats
$mts = [Math]::Ceiling($repeats / $train_batch_size * $epoch)

Write-Output "Repeats: $repeats"

cd D:\kohya_ss
.\venv\Scripts\activate

accelerate launch --num_cpu_threads_per_process $num_cpu_threads_per_process train_db_fixed.py --v2
--v_parameterization --pretrained_model_name_or_path=$pretrained_model_name_or_path
--train_data_dir=$data_dir --output_dir=$output_dir
--resolution=$resolution --train_batch_size=$train_batch_size
--learning_rate=$learning_rate --max_train_steps=$mts
--use_8bit_adam --xformers
--mixed_precision=$mixed_precision $cache_latents_value
--save_every_n_epochs=$save_every_n_epochs --logging_dir=$logging_dir
--save_precision="fp16" --seed=494481440
--lr_scheduler=$lr_scheduler

Add the inference 768v yaml file along with the model for proper loading. It needs to have the same name as the model... most likely "last.yaml" in our case.

cp v2_inference\v2-inference-v.yaml $output_dir"\last.yaml"
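
(For reference, a quick sanity check of the step count those variables produce; a hypothetical Python mirror of the PowerShell arithmetic above, matching the 0/46000 total in the log below:)

import math

image_num = 23          # images in the 20_person folder
dataset_repeats = 2000
train_batch_size = 1
epoch = 1

repeats = image_num * dataset_repeats
max_train_steps = math.ceil(repeats / train_batch_size * epoch)
print(repeats, max_train_steps)  # 46000 46000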

And here's the error message I got:

steps: 0%| | 0/46000 [00:00<?, ?it/s]epoch 1/100
Traceback (most recent call last):
  File "D:\kohya_ss\train_db_fixed.py", line 2098, in <module>
    train(args)
  File "D:\kohya_ss\train_db_fixed.py", line 1948, in train
    optimizer.step()
  File "D:\kohya_ss\venv\lib\site-packages\accelerate\optimizer.py", line 134, in step
    self.scaler.step(self.optimizer, closure)
  File "C:\Users\Donny\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\cuda\amp\grad_scaler.py", line 338, in step
    retval = self._maybe_opt_step(optimizer, optimizer_state, *args, **kwargs)
  File "C:\Users\Donny\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\cuda\amp\grad_scaler.py", line 285, in _maybe_opt_step
    retval = optimizer.step(*args, **kwargs)
  File "C:\Users\Donny\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\optim\lr_scheduler.py", line 65, in wrapper
    return wrapped(*args, **kwargs)
  File "C:\Users\Donny\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\optim\optimizer.py", line 113, in wrapper
    return func(*args, **kwargs)
  File "C:\Users\Donny\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "D:\kohya_ss\venv\lib\site-packages\bitsandbytes\optim\optimizer.py", line 263, in step
    self.init_state(group, p, gindex, pindex)
  File "C:\Users\Donny\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "D:\kohya_ss\venv\lib\site-packages\bitsandbytes\optim\optimizer.py", line 401, in init_state
    state["state2"] = torch.zeros_like(
RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 24.00 GiB total capacity; 10.13 GiB already allocated; 8.91 GiB free; 10.26 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
steps: 0%| | 0/46000 [00:30<?, ?it/s]
Traceback (most recent call last):
  File "C:\Users\Donny\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\Donny\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "D:\kohya_ss\venv\Scripts\accelerate.exe\__main__.py", line 7, in <module>
  File "D:\kohya_ss\venv\lib\site-packages\accelerate\commands\accelerate_cli.py", line 45, in main
    args.func(args)
  File "D:\kohya_ss\venv\lib\site-packages\accelerate\commands\launch.py", line 1069, in launch_command
    simple_launcher(args)
  File "D:\kohya_ss\venv\lib\site-packages\accelerate\commands\launch.py", line 551, in simple_launcher
    raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['D:\kohya_ss\venv\Scripts\python.exe', 'train_db_fixed.py', '--v2', '--v_parameterization', '--pretrained_model_name_or_path=D:\stable-diffusion-webui\models\Stable-diffusion\768-v-ema.ckpt', '--train_data_dir=D:\kohya_ss\zwx_person_db\train_person', '--output_dir=D:\kohya_ss\output', '--resolution=768,768', '--train_batch_size=1', '--learning_rate=1E-06', '--max_train_steps=46000', '--use_8bit_adam', '--xformers', '--mixed_precision=bf16', '--save_every_n_epochs=1', '--logging_dir=D:\kohya_ss\log', '--save_precision=fp16', '--seed=494481440', '--lr_scheduler=polynomial']' returned non-zero exit status 1.

"MemoryError" (not CUDA) crashes training (dreambooth)

Specs

  • 3090
  • 32GB RAM
  • i7-5820K
  • Win10

Notes

  • Everything was working minus the epoch pausing #36
  • Installed Cudnn
  • Running CLI only

Error

Traceback (most recent call last):
  File "C:\Development\kohya_ss\train_db.py", line 337, in <module>
    train(args)
  File "C:\Development\kohya_ss\train_db.py", line 211, in train
    for step, batch in enumerate(train_dataloader):
  File "C:\Development\kohya_ss\venv\lib\site-packages\accelerate\data_loader.py", line 383, in __iter__
    next_batch = next(dataloader_iter)
  File "C:\Development\kohya_ss\venv\lib\site-packages\torch\utils\data\dataloader.py", line 681, in __next__
    data = self._next_data()
  File "C:\Development\kohya_ss\venv\lib\site-packages\torch\utils\data\dataloader.py", line 1356, in _next_data
    return self._process_data(data)
  File "C:\Development\kohya_ss\venv\lib\site-packages\torch\utils\data\dataloader.py", line 1402, in _process_data
    data.reraise()
  File "C:\Development\kohya_ss\venv\lib\site-packages\torch\_utils.py", line 461, in reraise
    raise exception
MemoryError: Caught MemoryError in DataLoader worker process 4.
Original Traceback (most recent call last):
  File "C:\Development\kohya_ss\venv\lib\site-packages\torch\utils\data\_utils\worker.py", line 302, in _worker_loop    data = fetcher.fetch(index)
  File "C:\Development\kohya_ss\venv\lib\site-packages\torch\utils\data\_utils\fetch.py", line 49, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "C:\Development\kohya_ss\venv\lib\site-packages\torch\utils\data\_utils\fetch.py", line 49, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "C:\Development\kohya_ss\library\train_util.py", line 419, in __getitem__
    img, face_cx, face_cy, face_w, face_h = self.load_image_with_face_info(image_info.absolute_path)
  File "C:\Development\kohya_ss\library\train_util.py", line 323, in load_image_with_face_info
    img = self.load_image(image_path)
  File "C:\Development\kohya_ss\library\train_util.py", line 270, in load_image
    img = np.array(image, np.uint8)
  File "C:\Development\kohya_ss\venv\lib\site-packages\PIL\Image.py", line 701, in __array_interface__
    new["data"] = self.tobytes()
  File "C:\Development\kohya_ss\venv\lib\site-packages\PIL\Image.py", line 779, in tobytes
    return b"".join(data)
MemoryError

This occurs when running the following command for DreamBooth:

accelerate launch --num_cpu_threads_per_process=12 "train_db.py" --enable_bucket --gradient_checkpointing --no_token_padding --xformers --save_state --pretrained_model_name_or_path="C:\Development\stable-diffusion-webui\models\Stable-diffusion\animefull-final-pruned.ckpt" --train_data_dir="C:\Development\kohya_ss\db_models\xyz\img" --resolution=768,768 --output_dir="C:\Development\kohya_ss\db_models\xyz\model" --train_batch_size=4 --learning_rate=4e-06 --lr_scheduler=constant --lr_warmup_steps=0 --max_train_steps=1150 --xformers --mixed_precision=bf16 --save_every_n_epochs=100 --seed=69 --save_precision=float --logging_dir="C:\Development\kohya_ss\db_models\xyz\log" --caption_extension=.txt --stop_text_encoder_training=1150
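
(A hedged diagnostic sketch, not part of the repo: the MemoryError comes from the PIL-to-numpy conversion inside library/train_util.py's load_image(), running in several DataLoader worker processes at once (the traceback shows worker process 4). Reproducing that conversion image by image can point to oversized source files; the path below is just the train_data_dir from the command above.)

from pathlib import Path

import numpy as np
from PIL import Image

train_dir = Path(r"C:\Development\kohya_ss\db_models\xyz\img")

for p in sorted(train_dir.rglob("*")):
    if p.suffix.lower() not in {".png", ".jpg", ".jpeg", ".webp"}:
        continue
    try:
        img = Image.open(p)
        arr = np.array(img, np.uint8)  # same conversion that raised MemoryError
        print(p.name, img.size, f"{arr.nbytes / 1024**2:.1f} MiB")
    except MemoryError:
        print("MemoryError while loading", p)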

unable to detect GPU (ValueError: fp16 mixed precision requires a GPU)

I am totally new to DreamBooth, so I am following this video; based on that, I have chosen 350 repeats for 16 images.

After pressing the Train button, I am getting the errors below. Please look into this and guide me through the resolution.

ValueError: fp16 mixed precision requires a GPU

Save...
Folder 350_sonar123 woman: 5600 steps
Regularisation images are used... Will double the number of steps required...
max_train_steps = 11200
stop_text_encoder_training = 0
lr_warmup_steps = 0
accelerate launch --num_cpu_threads_per_process=14 "train_db.py" --cache_latents --enable_bucket --use_8bit_adam --xformers --pretrained_model_name_or_path="D:/models/dreamlike-diffusion-1.0.ckpt" --train_data_dir="E:/0train/sonar123\img" --reg_data_dir="E:/0train/sonar123\reg" --resolution=512,512 --output_dir="E:/0train/sonar123\model" --train_batch_size=1 --learning_rate=1e-06 --lr_scheduler=constant --lr_warmup_steps=0 --max_train_steps=11200 --use_8bit_adam --xformers --mixed_precision=fp16 --save_every_n_epochs=1 --seed=1234 --save_precision=fp16 --logging_dir="E:/0train/sonar123\log" --save_model_as=ckpt --output_name="sonar123"
prepare tokenizer
prepare train images.
found directory 350_sonar123 woman contains 16 image files
5600 train images with repeating.
prepare reg images.
found directory 1_woman contains 1516 image files
1516 reg images.
loading image sizes.
100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 1532/1532 [00:00<00:00, 12652.87it/s]
make buckets
number of images (including repeats) / ๅ„bucketใฎ็”ปๅƒๆžšๆ•ฐ๏ผˆ็นฐใ‚Š่ฟ”ใ—ๅ›žๆ•ฐใ‚’ๅซใ‚€๏ผ‰
bucket 0: resolution (256, 832), count: 0
bucket 1: resolution (256, 896), count: 0
bucket 2: resolution (256, 960), count: 0
bucket 3: resolution (256, 1024), count: 0
bucket 4: resolution (320, 704), count: 0
bucket 5: resolution (320, 768), count: 0
bucket 6: resolution (384, 640), count: 0
bucket 7: resolution (448, 576), count: 0
bucket 8: resolution (512, 512), count: 11200
bucket 9: resolution (576, 448), count: 0
bucket 10: resolution (640, 384), count: 0
bucket 11: resolution (704, 320), count: 0
bucket 12: resolution (768, 320), count: 0
bucket 13: resolution (832, 256), count: 0
bucket 14: resolution (896, 256), count: 0
bucket 15: resolution (960, 256), count: 0
bucket 16: resolution (1024, 256), count: 0
mean ar error (without repeats): 0.0
prepare accelerator
Traceback (most recent call last):
  File "D:\kohya_ss\train_db.py", line 332, in <module>
    train(args)
  File "D:\kohya_ss\train_db.py", line 56, in train
    accelerator, unwrap_model = train_util.prepare_accelerator(args)
  File "D:\kohya_ss\library\train_util.py", line 1162, in prepare_accelerator
    accelerator = Accelerator(gradient_accumulation_steps=args.gradient_accumulation_steps, mixed_precision=args.mixed_precision,
  File "D:\kohya_ss\venv\lib\site-packages\accelerate\accelerator.py", line 355, in __init__
    raise ValueError(err.format(mode="fp16", requirement="a GPU"))
ValueError: fp16 mixed precision requires a GPU
Traceback (most recent call last):
  File "C:\Users\jan\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\jan\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "D:\kohya_ss\venv\Scripts\accelerate.exe\__main__.py", line 7, in <module>
  File "D:\kohya_ss\venv\lib\site-packages\accelerate\commands\accelerate_cli.py", line 45, in main
    args.func(args)
  File "D:\kohya_ss\venv\lib\site-packages\accelerate\commands\launch.py", line 1104, in launch_command
    simple_launcher(args)
  File "D:\kohya_ss\venv\lib\site-packages\accelerate\commands\launch.py", line 567, in simple_launcher
    raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['D:\\kohya_ss\\venv\\Scripts\\python.exe', 'train_db.py', '--cache_latents', '--enable_bucket', '--use_8bit_adam', '--xformers', '--pretrained_model_name_or_path=D:/models/dreamlike-diffusion-1.0.ckpt', '--train_data_dir=E:/0train/sonar123\\img', '--reg_data_dir=E:/0train/sonar123\\reg', '--resolution=512,512', '--output_dir=E:/0train/sonar123\\model', '--train_batch_size=1', '--learning_rate=1e-06', '--lr_scheduler=constant', '--lr_warmup_steps=0', '--max_train_steps=11200', '--use_8bit_adam', '--xformers', '--mixed_precision=fp16', '--save_every_n_epochs=1', '--seed=1234', '--save_precision=fp16', '--logging_dir=E:/0train/sonar123\\log', '--save_model_as=ckpt', '--output_name=sonar123']' returned non-zero exit status 1.
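
(A hedged note, not from the issue author: this error means accelerate could not see a CUDA device inside the venv, either because the installed torch build is CPU-only or because the accelerate config answers selected CPU. A minimal check to run inside the venv:)

import torch

# If this prints False, the venv's torch build cannot see a CUDA GPU, and
# accelerate will refuse --mixed_precision=fp16 with exactly this ValueError.
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))

If that prints False, reinstalling the CUDA-enabled torch wheel (or re-running accelerate config and not choosing the CPU-only option) is usually the next thing to try.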

[Errno 2] No such file or directory: '\\kohya_ss\\venv\\Scripts\\accelerate.exe\\__main__.py'

Traceback (most recent call last):
  C:\Users\satya\AppData\Local\Programs\Python\Python310\lib\runpy.py:196 in _run_module_as_main
    return _run_code(code, main_globals, None, "__main__", mod_spec)
  C:\Users\satya\AppData\Local\Programs\Python\Python310\lib\runpy.py:86 in _run_code
    exec(code, run_globals)
  G:\AI\kohya_ss\venv\Scripts\accelerate.exe\__main__.py:7 in <module>
    [Errno 2] No such file or directory: 'G:\AI\kohya_ss\venv\Scripts\accelerate.exe\__main__.py'
  G:\AI\kohya_ss\venv\lib\site-packages\accelerate\commands\accelerate_cli.py:45 in main
    args.func(args)
  G:\AI\kohya_ss\venv\lib\site-packages\accelerate\commands\launch.py:1104 in launch_command
    simple_launcher(args)
  G:\AI\kohya_ss\venv\lib\site-packages\accelerate\commands\launch.py:567 in simple_launcher
    raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
CalledProcessError: Command '['G:\AI\kohya_ss\venv\Scripts\python.exe', './fine_tune.py', '--v2',
'--train_text_encoder', '--use_8bit_adam', '--xformers',
'--pretrained_model_name_or_path=G:/AI/stable-diffusion-webui/models/Stable-diffusion/animefull-final-pruned.ckpt',
'--in_json=G:/AI/stable-diffusion-webui/Train image/Yor - processed/meta_lat.json',
'--train_data_dir=G:/AI/stable-diffusion-webui/Train image/Yor - processed/10_YorForger',
'--output_dir=G:/AI/stable-diffusion-webui/Train image/Yor - processed/OUT',
'--logging_dir=G:/AI/stable-diffusion-webui/Train image/Yor - processed/LOG', '--train_batch_size=1',
'--dataset_repeats=40', '--learning_rate=1e-05', '--lr_scheduler=cosine', '--lr_warmup_steps=2320',
'--max_train_steps=11600', '--mixed_precision=fp16', '--save_every_n_epochs=1', '--seed=1234', '--save_precision=fp16',
'--output_name=last']' returned non-zero exit status 1.
Loading config...

I get an error message that LORA cannot run because of a missing paging file, even though the required operating environment should be in place.

I have an RTX3060 with 12GB of VRAM, and LORA is giving me an error that the paging file is too small, so I can't train.
Am I doing something wrong?
I would appreciate it if you could let me know.

โ—What we have already done

Reinstalled venv and pip install
Clean install
Change from multiple monitors to just one
Turned off chrome running at the same time and changed it to LORA only.
update

โ—Result

None of those things worked and I keep getting the same error.

โ—Situation

When I checked the task manager, 12GB of VRAM is being used 100% at the time of loading image sizes.
Then, the moment the console says training start, the VRAM usage goes to 0% and the following error message is displayed.

Operating Environment
Edition Windows 11 Home version 22H2
Processor 12th generation Intel(R) Core(TM) i5-12400F 2.50 GHz
RAM 64.0 GB (63.8 GB available)
System type 64-bit OS, x64-based processor
Graphics board is RTX3060 VRAM 12GB.
Due to the use of RAM disks, actual usable RAM is 53 GB.

Console executed: powershell
python version: 3.10.6
Commit hash: 785c4d8

error message

PS E:\dreambooth0k\sd-scripts> accelerate launch --num_cpu_threads_per_process 8 train_network.py --pretrained_model_name_or_path="E:\train\dream\koyadream1\motomodel\model.safetensors" --train_data_dir="E:\train\dream\2\img" --reg_data_dir="E:\train\dream\2\reg" --output_dir="E:\train\dream\2\model" --network_module=networks.lora --prior_loss_weight=1.0 --resolution=512 --train_batch_size=1 --learning_rate=1e-6 --max_train_steps=1600 --use_8bit_adam --xformers --mixed_precision="bf16" --cache_latents --gradient_checkpointing
prepare tokenizer
Use DreamBooth method.
prepare train images.
found directory 40_asd 1girl contains 17 image files
680 train images with repeating.
prepare reg images.
found directory 1_1girl contains 1000 image files
1000 reg images.
some of reg images are not used / there are more regularization images than needed, so some will not be used
loading image sizes.
100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 697/697 [00:00<00:00, 14869.26it/s]
prepare dataset
prepare accelerator
Using accelerator 0.15.0 or above.
load StableDiffusion checkpoint
loading u-net: <All keys matched successfully>
loading vae: <All keys matched successfully>
Some weights of the model checkpoint at openai/clip-vit-large-patch14 were not used when initializing CLIPTextModel: ['vision_model.encoder.layers.9.self_attn.v_proj.bias', 'vision_model.encoder.layers.12.self_attn.v_proj.weight', 'vision_model.encoder.layers.23.layer_norm2.bias', 'vision_model.encoder.layers.16.self_attn.out_proj.weight', 'vision_model.encoder.layers.8.mlp.fc1.bias', 'vision_model.encoder.layers.7.self_attn.out_proj.weight', 'vision_model.pre_layrnorm.bias', 'vision_model.encoder.layers.23.self_attn.k_proj.bias', 'vision_model.encoder.layers.21.self_attn.k_proj.bias', 'vision_model.encoder.layers.0.self_attn.q_proj.weight', 'vision_model.encoder.layers.2.mlp.fc1.bias', 'vision_model.encoder.layers.15.self_attn.v_proj.weight', 'vision_model.encoder.layers.23.self_attn.v_proj.weight', 'vision_model.encoder.layers.0.mlp.fc1.bias', 'vision_model.encoder.layers.16.self_attn.out_proj.bias', 'vision_model.encoder.layers.16.mlp.fc1.bias', 'vision_model.encoder.layers.23.layer_norm1.bias', 'vision_model.encoder.layers.15.self_attn.out_proj.weight', 'vision_model.encoder.layers.9.self_attn.q_proj.weight', 'vision_model.encoder.layers.21.mlp.fc1.bias', 'vision_model.encoder.layers.20.self_attn.q_proj.weight', 'vision_model.encoder.layers.10.self_attn.k_proj.bias', 'vision_model.encoder.layers.17.self_attn.k_proj.bias', 'vision_model.encoder.layers.1.self_attn.q_proj.weight', 'vision_model.encoder.layers.9.self_attn.q_proj.bias', 'vision_model.embeddings.patch_embedding.weight', 'vision_model.encoder.layers.15.mlp.fc1.bias', 'vision_model.encoder.layers.22.self_attn.q_proj.weight', 'vision_model.encoder.layers.5.mlp.fc2.weight', 'vision_model.encoder.layers.4.layer_norm1.weight', 'vision_model.encoder.layers.9.self_attn.k_proj.weight', 'vision_model.encoder.layers.23.self_attn.k_proj.weight', 'vision_model.encoder.layers.4.self_attn.out_proj.bias', 'vision_model.encoder.layers.13.layer_norm1.weight', 'vision_model.encoder.layers.14.layer_norm2.bias', 'vision_model.encoder.layers.10.layer_norm2.weight', 'vision_model.encoder.layers.8.layer_norm2.bias', 'vision_model.encoder.layers.23.self_attn.q_proj.weight', 'vision_model.encoder.layers.17.mlp.fc2.bias', 'vision_model.encoder.layers.1.self_attn.k_proj.weight', 'vision_model.encoder.layers.1.mlp.fc1.bias', 'vision_model.encoder.layers.7.layer_norm2.weight', 'vision_model.encoder.layers.22.self_attn.q_proj.bias', 'vision_model.encoder.layers.6.self_attn.q_proj.bias', 'vision_model.encoder.layers.5.layer_norm1.bias', 'vision_model.encoder.layers.10.layer_norm1.bias', 'vision_model.encoder.layers.18.self_attn.v_proj.weight', 'vision_model.encoder.layers.14.self_attn.out_proj.weight', 'vision_model.encoder.layers.0.layer_norm1.weight', 'vision_model.encoder.layers.1.self_attn.k_proj.bias', 'vision_model.encoder.layers.8.mlp.fc2.bias', 'vision_model.encoder.layers.22.mlp.fc2.weight', 'vision_model.encoder.layers.20.mlp.fc2.weight', 'vision_model.encoder.layers.1.self_attn.v_proj.weight', 'vision_model.encoder.layers.4.mlp.fc1.bias', 'vision_model.post_layernorm.weight', 'vision_model.encoder.layers.23.layer_norm2.weight', 'vision_model.encoder.layers.16.mlp.fc2.bias', 'vision_model.encoder.layers.19.mlp.fc1.bias', 'vision_model.encoder.layers.11.mlp.fc1.bias', 'vision_model.encoder.layers.3.self_attn.out_proj.weight', 'vision_model.post_layernorm.bias', 'vision_model.encoder.layers.3.layer_norm1.bias', 'vision_model.encoder.layers.15.layer_norm2.weight', 'vision_model.encoder.layers.16.self_attn.v_proj.bias', 
...remaining unused keys truncated for readability: the entire CLIP vision tower (every 'vision_model.*' weight and bias, including the embeddings, pre-layernorm, and all 24 encoder layers), plus 'logit_scale', 'text_projection.weight', and 'visual_projection.weight']
- This IS expected if you are initializing CLIPTextModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing CLIPTextModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
loading text encoder: <All keys matched successfully>
Replace CrossAttention.forward to use xformers
caching latents.
100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 697/697 [01:13<00:00,  9.50it/s]
import network module: networks.lora
create LoRA for Text Encoder: 72 modules.
create LoRA for U-Net: 192 modules.
enable LoRA for text encoder
enable LoRA for U-Net
prepare optimizer, data loader etc.

===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
For effortless bug reporting copy-paste your error into this form: https://docs.google.com/forms/d/e/1FAIpQLScPB8emS3Thkp66nvqwmjTEgxp8Y9ufuWTzFyr9kJ5AoI47dQ/viewform?usp=sf_link
================================================================================
CUDA SETUP: Loading binary E:\dreambooth0k\sd-scripts\venv\lib\site-packages\bitsandbytes\libbitsandbytes_cuda116.dll...
use 8-bit Adam optimizer
running training / ๅญฆ็ฟ’้–‹ๅง‹
  num train images * repeats / ๅญฆ็ฟ’็”ปๅƒใฎๆ•ฐร—็นฐใ‚Š่ฟ”ใ—ๅ›žๆ•ฐ: 680
  num reg images / ๆญฃๅ‰‡ๅŒ–็”ปๅƒใฎๆ•ฐ: 1000
  num batches per epoch / 1epochใฎใƒใƒƒใƒๆ•ฐ: 1360
  num epochs / epochๆ•ฐ: 2
  batch size per device / ใƒใƒƒใƒใ‚ตใ‚คใ‚บ: 1
  total train batch size (with parallel & distributed & accumulation) / ็ทใƒใƒƒใƒใ‚ตใ‚คใ‚บ๏ผˆไธฆๅˆ—ๅญฆ็ฟ’ใ€ๅ‹พ้…ๅˆ่จˆๅซใ‚€๏ผ‰: 1
  gradient accumulation steps / ๅ‹พ้…ใ‚’ๅˆ่จˆใ™ใ‚‹ใ‚นใƒ†ใƒƒใƒ—ๆ•ฐ = 1
  total optimization steps / ๅญฆ็ฟ’ใ‚นใƒ†ใƒƒใƒ—ๆ•ฐ: 1600
steps:   0%|                                                                                  | 0/1600 [00:00<?, ?it/s]epoch 1/2
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Users\%username%\AppData\Local\Programs\Python\Python310\lib\multiprocessing\spawn.py", line 116, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "C:\Users\%username%\AppData\Local\Programs\Python\Python310\lib\multiprocessing\spawn.py", line 125, in _main
    prepare(preparation_data)
  File "C:\Users\%username%\AppData\Local\Programs\Python\Python310\lib\multiprocessing\spawn.py", line 236, in prepare
    _fixup_main_from_path(data['init_main_from_path'])
  File "C:\Users\%username%\AppData\Local\Programs\Python\Python310\lib\multiprocessing\spawn.py", line 287, in _fixup_main_from_path
    main_content = runpy.run_path(main_path,
  File "C:\Users\%username%\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 289, in run_path
    return _run_module_code(code, init_globals, run_name,
  File "C:\Users\%username%\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 96, in _run_module_code
    _run_code(code, mod_globals, init_globals,
  File "C:\Users\%username%\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "E:\dreambooth0k\sd-scripts\train_network.py", line 8, in <module>
    import torch
  File "E:\dreambooth0k\sd-scripts\venv\lib\site-packages\torch\__init__.py", line 129, in <module>
    raise err
OSError: [WinError 1455] The paging file is too small for this operation to complete. Error loading "E:\dreambooth0k\sd-scripts\venv\lib\site-packages\torch\lib\cudnn_cnn_infer64_8.dll" or one of its dependencies.

List of packages installed in the virtual environment:

Package                      Version
---------------------------- ------------
absl-py                      1.4.0
accelerate                   0.15.0
aiofiles                     22.1.0
aiohttp                      3.8.3
aiosignal                    1.3.1
albumentations               1.3.0
altair                       4.2.0
anyio                        3.6.2
astunparse                   1.6.3
async-timeout                4.0.2
attrs                        22.2.0
bitsandbytes                 0.35.0
cachetools                   5.2.1
certifi                      2022.12.7
charset-normalizer           2.1.1
click                        8.1.3
colorama                     0.4.6
contourpy                    1.0.7
cycler                       0.11.0
diffusers                    0.10.2
easygui                      0.98.3
einops                       0.6.0
entrypoints                  0.4
fairscale                    0.4.4
fastapi                      0.89.1
ffmpy                        0.3.0
filelock                     3.9.0
flatbuffers                  23.1.4
fonttools                    4.38.0
frozenlist                   1.3.3
fsspec                       2022.11.0
ftfy                         6.1.1
gast                         0.4.0
google-auth                  2.16.0
google-auth-oauthlib         0.4.6
google-pasta                 0.2.0
gradio                       3.16.2
grpcio                       1.51.1
h11                          0.14.0
h5py                         3.7.0
httpcore                     0.16.3
httpx                        0.23.3
huggingface-hub              0.11.1
idna                         3.4
imageio                      2.24.0
importlib-metadata           6.0.0
Jinja2                       3.1.2
joblib                       1.2.0
jsonschema                   4.17.3
keras                        2.10.0
Keras-Preprocessing          1.1.2
kiwisolver                   1.4.4
libclang                     15.0.6.1
library                      0.0.0
lightning-utilities          0.5.0
linkify-it-py                1.0.3
Markdown                     3.4.1
markdown-it-py               2.1.0
MarkupSafe                   2.1.1
matplotlib                   3.6.3
mdit-py-plugins              0.3.3
mdurl                        0.1.2
multidict                    6.0.4
networkx                     3.0
numpy                        1.24.1
oauthlib                     3.2.2
opencv-python                4.7.0.68
opencv-python-headless       4.7.0.68
opt-einsum                   3.3.0
orjson                       3.8.5
packaging                    23.0
pandas                       1.5.2
Pillow                       9.4.0
pip                          22.2.2
protobuf                     3.19.6
psutil                       5.9.4
pyasn1                       0.4.8
pyasn1-modules               0.2.8
pycryptodome                 3.16.0
pydantic                     1.10.4
pydub                        0.25.1
pyparsing                    3.0.9
pyrsistent                   0.19.3
python-dateutil              2.8.2
python-multipart             0.0.5
pytorch-lightning            1.9.0
pytz                         2022.7.1
PyWavelets                   1.4.1
PyYAML                       6.0
qudida                       0.0.4
regex                        2022.10.31
requests                     2.28.2
requests-oauthlib            1.3.1
rfc3986                      1.5.0
rsa                          4.9
safetensors                  0.2.6
scikit-image                 0.19.3
scikit-learn                 1.2.0
scipy                        1.10.0
setuptools                   63.2.0
six                          1.16.0
sniffio                      1.3.0
starlette                    0.22.0
tensorboard                  2.10.1
tensorboard-data-server      0.6.1
tensorboard-plugin-wit       1.8.1
tensorboardX                 2.5.1
tensorflow                   2.10.1
tensorflow-estimator         2.10.0
tensorflow-io-gcs-filesystem 0.29.0
termcolor                    2.2.0
threadpoolctl                3.1.0
tifffile                     2022.10.10
timm                         0.4.12
tokenizers                   0.13.2
toolz                        0.12.0
torch                        1.12.1+cu116
torchmetrics                 0.11.0
torchvision                  0.13.1+cu116
tqdm                         4.64.1
transformers                 4.25.1
typing_extensions            4.4.0
uc-micro-py                  1.0.1
urllib3                      1.26.14
uvicorn                      0.20.0
wcwidth                      0.2.6
websockets                   10.4
Werkzeug                     2.2.2
wheel                        0.38.4
wrapt                        1.14.1
xformers                     0.0.14.dev0
yarl                         1.8.2
zipp                         3.11.0

I don't know much about programming.
I would appreciate it if you could let me know what other information I need to provide in order to solve this problem.
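
For context, [WinError 1455] means Windows could not commit enough paging-file space while memory-mapping the CUDA/cuDNN DLLs; the usual remedy is to enlarge the page file (System Properties > Advanced > Performance > Settings > Advanced > Virtual memory). A minimal diagnostic sketch, using psutil from the package list above, to check the current RAM and page-file headroom before changing anything:

# Hedged diagnostic sketch: report RAM and page-file (swap) sizes so you can
# judge how much the virtual-memory limit needs to grow. WinError 1455 is
# raised when the commit limit (RAM + page file) is exhausted.
import psutil

vm = psutil.virtual_memory()
sw = psutil.swap_memory()
print(f"RAM total:       {vm.total / 2**30:.1f} GiB")
print(f"RAM available:   {vm.available / 2**30:.1f} GiB")
print(f"Page file total: {sw.total / 2**30:.1f} GiB")
print(f"Page file free:  {sw.free / 2**30:.1f} GiB")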

ERROR: xformers-some numbers is not a supported wheel on this platform

I am on the CUDNN 8.6 step and it won't install for some reason. I can't seem to figure out why. Any help is appreciated.

NVIDIA 3060
AMD Ryzen 7 5800X
16GB RAM

(venv) PS D:\ai\kohya_ss> python .\tools\cudann_1.8_install.py
[!] xformers NOT installed.
Installing xformers with: pip install -U -I --no-deps https://github.com/C43H66N12O12S2/stable-diffusion-webui/releases/download/f/xformers-0.0.14.dev0-cp310-cp310-win_amd64.whl
Installing xformers
Traceback (most recent call last):
  File "D:\ai\kohya_ss\tools\cudann_1.8_install.py", line 82, in <module>
    check_versions()
  File "D:\ai\kohya_ss\tools\cudann_1.8_install.py", line 73, in check_versions
    run(f"pip install {x_cmd}", desc="Installing xformers")
  File "D:\ai\kohya_ss\tools\cudann_1.8_install.py", line 30, in run
    raise RuntimeError(message)
RuntimeError: Error running command.
Command: pip install -U -I --no-deps https://github.com/C43H66N12O12S2/stable-diffusion-webui/releases/download/f/xformers-0.0.14.dev0-cp310-cp310-win_amd64.whl
Error code: 1
stdout: <empty>
stderr: ERROR: xformers-0.0.14.dev0-cp310-cp310-win_amd64.whl is not a supported wheel on this platform.
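
The wheel's filename encodes its requirements: cp310-cp310-win_amd64 means 64-bit CPython 3.10 on Windows, and pip rejects it when the interpreter running pip does not match. A small illustrative check (not part of the installer script), run with the same python the venv uses:

# Sanity check that this interpreter can accept a cp310-cp310-win_amd64 wheel.
import platform
import struct
import sys

print("version:", sys.version_info.major, sys.version_info.minor)   # expect 3 10
print("implementation:", platform.python_implementation())          # expect CPython
print("system:", platform.system())                                 # expect Windows
print("pointer size:", struct.calcsize("P") * 8, "bit")              # expect 64 bit

If any of these differ (for example a 32-bit Python install or Python 3.11), pip will refuse the wheel exactly as shown above.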

Long Pauses Between Epochs

I noticed that there are long pauses between epochs, around 35-40 seconds. What is the reasoning behind this and is there a way to lower it significantly or disable it entirely?

I looked through the code and was not able to find anything specific to pausing between epochs. The other DreamBooth extension, the one for Auto's webui, has an option to pause between epochs (default 60 s) that can be lowered to 1 s if desired.

This also applies to native training.

Edit: Looks like it hangs on line 206 in train_db.py every epoch.

Thanks.
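
One common cause of per-epoch pauses on Windows is that the DataLoader re-spawns its worker processes at the start of every epoch; whether that is what happens at line 206 of train_db.py is only an assumption, but the effect and its mitigation look like the following sketch (illustrative, not code from the repository):

# Illustrative only: with num_workers > 0 on Windows, workers are normally
# re-created each epoch; persistent_workers=True (PyTorch >= 1.7) keeps them
# alive and removes most of the per-epoch startup delay.
import torch
from torch.utils.data import DataLoader, TensorDataset

if __name__ == "__main__":  # spawn-safe entry point, required on Windows
    dataset = TensorDataset(torch.zeros(16, 3, 64, 64))  # dummy data
    loader = DataLoader(
        dataset,
        batch_size=1,
        shuffle=True,
        num_workers=2,
        persistent_workers=True,  # keep workers alive across epochs
    )
    for epoch in range(2):
        for (batch,) in loader:
            pass  # training step would go here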

permission denied when extracting LORA

I tried to extract a LoRA file. After loading both models and calculating the SVD, it reaches 100% and, while loading the extracted LoRA weights,
I get the error PermissionError: [Errno 13] Permission denied

I have tried different folders; all give the same error.
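
A PermissionError at that point usually means the output file cannot be written: the save path points at a directory, at a file that is open in another program, or at a protected location. A small sketch (the output path below is a made-up example; replace it with the one you pass to the extraction script) to check the destination before retrying:

# Hedged sketch with a hypothetical output path: verify the save destination
# is a writable file location.
import os

out_path = r"D:\lora\extracted.safetensors"  # hypothetical example path

parent = os.path.dirname(out_path)
print("parent directory exists:  ", os.path.isdir(parent))
print("parent directory writable:", os.access(parent, os.W_OK))
print("target is a directory:    ", os.path.isdir(out_path))
print("target already exists:    ", os.path.isfile(out_path))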

Why "python -m venv --system-site-packages venv"

If the virtualenv has access to my system's site-packages, then torchvision, safetensors, and other dependencies that are already installed system-wide will cause version conflicts.
Can I use "python -m venv venv" to get a cleaner environment and avoid running the following commands?
cp .\bitsandbytes_windows\*.dll .\venv\Lib\site-packages\bitsandbytes
cp .\bitsandbytes_windows\cextension.py .\venv\Lib\site-packages\bitsandbytes\cextension.py
cp .\bitsandbytes_windows\main.py .\venv\Lib\site-packages\bitsandbytes\cuda_setup\main.py
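
For what it's worth, a fully isolated environment does work as long as every dependency is then installed inside it; the sketch below is just the programmatic equivalent of "python -m venv venv" without system site-packages. The bitsandbytes file copies above are still needed afterwards, because they supply the Windows DLL and patches that the pip package does not ship, into whichever venv bitsandbytes was installed in.

# Equivalent of "python -m venv venv" (no --system-site-packages):
# an isolated environment with its own pip.
import venv

venv.EnvBuilder(system_site_packages=False, with_pip=True).create("venv")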

webui extension?

Would it make sense to have this as an integrated extension to automatic's webui, or is there some reason the two are incompatible? A few things I can see improved:

  • automatically setting the output folder; right now the output is overwritten and you have to manually update the folder for each new LoRA. It would be very convenient to have a layout similar to the one for the webui's textual_inversion
  • automatic renaming of each epoch output, instead of just epoch-X.pt/last.pt
  • interrupt feature during training
  • progress visible in the UI instead of just the console
  • preview images after each epoch; this can currently be accomplished by running txt2img in a separate webui instance with the intermediate epoch outputs, but it would be nice to have it done automatically
  • dropdowns for the SD/LoRA models in the standard webui paths; this is also very important for reproducibility, to be able to restore the LoRA config from the infotext in the future
  • single application to run, instead of two separate ones
  • automatic setup of dependencies instead of the manual install instructions?

Of course, most of these could probably be implemented without it being an extension; I was just curious.
