
sd-webui-easyphoto's Introduction

📷 EasyPhoto | Your Smart AI Photo Generator.

🦜 EasyPhoto is a WebUI plugin for generating AI portraits; it trains a digital doppelgänger from your own photos.

🦜 🦜 Welcome!

Hugging Face Spaces

English | 简体中文

Table of Contents

Introduction

EasyPhoto is a WebUI plugin for generating AI portraits; it trains a digital doppelgänger from your own photos. We recommend training with 5 to 20 portrait images, preferably half-body photos without glasses (a few images with glasses are fine). After training is done, you can generate images in the Inference section, using either the preset template images or your own uploaded images.

Please read our Contributor Covenant (English | Simplified Chinese).

If you encounter any problems during training, please refer to the VQA.

We now support quick launches from different platforms; refer to Quick Start.

You can now quickly try the EasyPhoto demo on ModelScope: demo.

What's New:

  • Support LCM-LoRA based sampling acceleration: image and video generation now need only 12 steps (vs. 50), and Scene LoRA training and inference are provided for both text2image and text2video. [🔥 🔥 🔥 🔥 2023.12.09]
  • Support Concept-Sliders based attribute editing and virtual try-on; please refer to the sliders-wiki and tryon-wiki for more details. [🔥 🔥 🔥 🔥 2023.12.08]
  • Thanks to lanrui-ai, which offers an SD WebUI image with EasyPhoto built in and promises bi-weekly updates. Personally tested: it can pull resources in 2 minutes and complete startup within 5 minutes. [ 2023.11.20 ]
  • We now support video inference without any extra training! Specific details can be found here! [🔥 🔥 🔥 🔥 2023.11.10]
  • SDXL training and inference support. Specific details can be found here! [🔥 🔥 🔥 🔥 2023.11.10]
  • ComfyUI support at this repo; thanks to THtianhao for the great work! [🔥 🔥 🔥 2023.10.17]
  • EasyPhoto arXiv paper released. [🔥 🔥 🔥 2023.10.10]
  • Support SDXL for generating high-resolution templates; no template upload is needed in this (SDXL) mode, which requires 16 GB of GPU memory! Specific details can be found here. [ 2023.09.26 ]
  • We also support the Diffusers edition. [ 2023.09.25 ]
  • Support fine-tuning the background and calculating a similarity score between the generated image and the user. [ 2023.09.15 ]
  • Support different base models for training and inference. [ 2023.09.08 ]
  • Support multi-person generation! Added a cache option to speed up inference and log refreshing in the UI. [ 2023.09.06 ]
  • Initial code release! Windows and Linux are now supported. [ 2023.09.02 ]

These are our generated results: results_1

Video Part:

Example 1 2 3

Photo Part: results_2 results_3

Our UI interface is as follows: train part: train_ui inference part: infer_ui

TODO List

  • Support Chinese UI.
  • Support changing the template's background.
  • Support high resolution.

Quick Start

1. Cloud usage: AliyunDSW/AutoDL/lanrui-ai/Docker

a. From AliyunDSW

DSW offers free GPU time; each user can apply once, and it is valid for 3 months after applying.

Aliyun provides free GPU time in its Free Tier; claim it and use it in Aliyun PAI-DSW to start EasyPhoto within 3 minutes!

DSW Notebook

b. From AutoDL/lanrui-ai

lanrui-ai

The official full-plugin image from lanrui-ai comes with EasyPhoto built in; they promise bi-weekly testing and updates. Personally tested and found effective: it can be launched within 5 minutes. Thanks for their support and contributions to the community.

AutoDL

If you are using lanrui-ai or AutoDL, you can quickly launch the Stable Diffusion WebUI using the image we provide.

You can select the desired image by filling in the following information under Community Images, or use the official image provided by lanrui-ai.

aigc-apps/sd-webui-EasyPhoto/sd-webui-EasyPhoto

c. From docker

If you are using Docker, please make sure the graphics card driver and CUDA environment are correctly installed on your machine.

Then execute the following commands:

# pull the image
docker pull registry.cn-beijing.aliyuncs.com/mybigpai/sd-webui-easyphoto:0.0.3

# start a container (note: with --network host, the -p 7860:7860 mapping is ignored; keep whichever fits your setup)
docker run -it -p 7860:7860 --network host --gpus all registry.cn-beijing.aliyuncs.com/mybigpai/sd-webui-easyphoto:0.0.3

# launch the webui inside the container
python3 launch.py --port 7860

The Docker image may lag slightly behind the sd-webui-EasyPhoto GitHub repository, so you can go to extensions/sd-webui-EasyPhoto and do a git pull first:

cd extensions/sd-webui-EasyPhoto/
git pull
cd /workspace

2. Local install: Environment Check/Downloading/Installation

a. Environment Check

We have verified that EasyPhoto runs in the following environments. If the WebUI process is killed by OOM, please refer to ISSUE21: try setting some num_threads values to 0, and report any other fixes to us, thanks.

Details for Windows 10:

  • OS: Windows10
  • python: py3.10
  • pytorch: torch2.0.1
  • tensorflow-cpu: 2.13.0
  • CUDA: 11.7
  • CUDNN: 8+
  • GPU: Nvidia-3060 12G

Details for Linux:

  • OS: Ubuntu 20.04, CentOS
  • python: py3.10 & py3.11
  • pytorch: torch2.0.1
  • tensorflow-cpu: 2.13.0
  • CUDA: 11.7
  • CUDNN: 8+
  • GPU: Nvidia-A10 24G & Nvidia-V100 16G & Nvidia-A100 40G

We need about 60 GB of free disk space (for saving weights and processed datasets), please check!

b. Relevant Repositories & Weights Downloading

i. Controlnet

We need ControlNet for inference. The related repo is Mikubill/sd-webui-controlnet, and you need to install that extension before using EasyPhoto.

In addition, we need at least three ControlNet units for inference, so you need to set Multi ControlNet: Max models amount (requires restart) in Settings. controlnet_num

ii. Other Dependencies.

We are compatible with the existing stable-diffusion-webui environment; the relevant packages are installed when stable-diffusion-webui starts.

The weights we need are downloaded automatically the first time you start training.

c. Plug-in Installation

We now support installing EasyPhoto from git. The URL of our repository is https://github.com/aigc-apps/sd-webui-EasyPhoto.

We will support installing EasyPhoto from the Available tab in the future.

install

How to use

1. Model Training

The EasyPhoto training interface is as follows:

  • On the left are the training images. Simply click Upload Photos to upload images, and click Clear Photos to delete the uploaded images;
  • On the right are the training parameters, which need not be adjusted for the first training run.

After clicking Upload Photos, we can start uploading images. It is best to upload 5 to 20 images covering different angles and lighting conditions, and to include some images without glasses; if every image includes glasses, the generated results are likely to include glasses as well. train_1

Then click Start Training below. At this point, you need to fill in the User ID above (for example, the user's name) to start training. train_2

After the model starts training, the webui automatically refreshes the training log. If it does not refresh, click the Refresh Log button. train_3

If you want to adjust the parameters, each parameter's meaning is as follows:

  • Resolution: the size of the images fed into the network during training; default 512.
  • Validation & save steps: the number of steps between validating with the template image and saving intermediate weights; default 100 (validate and save weights every 100 steps).
  • Max train steps: the maximum number of training steps; default 800.
  • Max steps per photos: the maximum number of training steps contributed by each image; default 200.
  • Train batch size: the training batch size; default 1.
  • Gradient accumulation steps: the number of gradient accumulation steps; default 4. Combined with the train batch size, each optimizer step is equivalent to feeding four images.
  • Dataloader num workers: the number of data-loading workers; has no effect on Windows (setting it raises an error) but works normally on Linux.
  • Learning rate: the learning rate for LoRA training; default 1e-4.
  • Rank: the feature dimension of the LoRA weights; default 128.
  • Network alpha: the regularization parameter for LoRA training, usually half of the rank; default 64.
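To make the interplay of these defaults concrete, here is a small arithmetic sketch. This is only our reading of the parameter descriptions above, not code from the plugin:

```python
# Illustrative arithmetic for the default training parameters listed above.
train_batch_size = 1
gradient_accumulation_steps = 4
max_train_steps = 800
validation_save_steps = 100

# Each optimizer step effectively sees batch * accumulation images.
effective_batch = train_batch_size * gradient_accumulation_steps
print(effective_batch)  # -> 4

# Validation images and intermediate weights are produced every 100 steps,
# so a default-length run yields this many save points.
checkpoints = max_train_steps // validation_save_steps
print(checkpoints)  # -> 8
```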

2. Inference

a. Single person

  • Step 1: Click the refresh button to query the model corresponding to the trained user ID.
  • Step 2: Select the user ID.
  • Step 3: Select the template that needs to be generated.
  • Step 4: Click the Generate button to generate the results.

single_people

b. Multiple people

  • Step 1: Go to the EasyPhoto settings page and set num_of_faceid to a value greater than 1.
  • Step 2: Apply the settings.
  • Step 3: Restart the webui.
  • Step 4: Return to EasyPhoto and upload a two-person template.
  • Step 5: Select the user IDs of the two people.
  • Step 6: Click the Generate button to perform image generation.

single_people single_people

Algorithm Details

  • arXiv paper: EasyPhoto arXiv
  • More detailed principles and specifics can be found in the BLOG

1. Architectural Overview

overview

In the field of AI portraits, we want model-generated images to be both realistic and faithful to the user, whereas traditional approaches (such as face fusion or Roop) introduce unrealistic lighting. To address this, we introduce the image-to-image capability of the stable diffusion model. Generating a good personal portrait must account for both the desired generation scene and the user's digital doppelgänger. We use a pre-prepared template as the desired generation scene, and a face LoRA model, a popular stable diffusion fine-tuning method trained online on a small number of user images, as the user's digital doppelgänger. During inference, we generate the personal portrait from the face LoRA model and the desired generation scene.

2. Training Details

overview

First, we perform face detection on the input user images; after locating the face, we crop the input image at a fixed ratio. We then use a saliency detection model and a skin beautification model to obtain clean face training images that consist essentially of faces only. Next, we label each image with a fixed caption; no captioning model is needed here, and the results are still good. Finally, we fine-tune the stable diffusion model to obtain the user's digital doppelgänger.

During training, we use template images for real-time validation, and at the end of training we compute the face-ID gap between the validation images and the user's images to perform LoRA fusion, which ensures that the LoRA is a faithful digital doppelgänger of the user.

In addition, we choose the validation image most similar to the user as the face_id image, which is used during inference.
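The "most similar" selection above can be pictured as a cosine-similarity comparison over face-ID embeddings. The sketch below is purely illustrative (pure Python, toy 3-dimensional vectors); the plugin's actual embedding model and scoring code are not shown here:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def pick_face_id_image(user_embedding, validation_embeddings):
    """Return the index of the validation image closest to the user's embedding."""
    scores = [cosine_similarity(user_embedding, e) for e in validation_embeddings]
    return max(range(len(scores)), key=scores.__getitem__)

# Toy example: candidate 1 points almost the same way as the user vector.
user = [1.0, 0.0, 0.0]
candidates = [[0.0, 1.0, 0.0], [0.9, 0.1, 0.0], [0.5, 0.5, 0.0]]
print(pick_face_id_image(user, candidates))  # -> 1
```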

3. Inference Details

a. First Diffusion:

First, we perform face detection on the incoming template image to determine the mask that stable diffusion needs to inpaint. We then perform face fusion between the template image and the optimal user image; once face fusion is complete, we inpaint the fused image (fusion_image) using the mask above. In addition, we paste the optimal face_id image obtained during training onto the template image via an affine transformation (replaced_image). We then apply ControlNets: canny and color to extract features from fusion_image, and openpose on replaced_image, to ensure the similarity and stability of the images. Finally, we use stable diffusion combined with the user's digital doppelgänger for generation.
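The ordering of the first-diffusion stage can be sketched structurally as below. Every helper name here is a hypothetical stand-in, not the plugin's real API; each stub only records its step so the overall pipeline ordering is visible:

```python
# Structural sketch of the "First Diffusion" stage; all helpers are illustrative stubs.
steps = []

def detect_face_mask(template):
    steps.append("face mask")
    return "mask"

def face_fuse(template, best_user_image):
    steps.append("face fusion")
    return "fusion_image"

def affine_paste(face_id_image, template):
    steps.append("affine paste")
    return "replaced_image"

def controlnet_conditions(fusion_image, replaced_image):
    # canny + color features from fusion_image, openpose from replaced_image
    steps.append("controlnet")
    return ["canny", "color", "openpose"]

def sd_inpaint(template, mask, conditions, lora):
    steps.append("inpaint")
    return "first_diffusion_output"

mask = detect_face_mask("template")
fusion_image = face_fuse("template", "best_user_image")
replaced_image = affine_paste("face_id_image", "template")
conditions = controlnet_conditions(fusion_image, replaced_image)
result = sd_inpaint("template", mask, conditions, lora="user_lora")
print(steps)  # -> ['face mask', 'face fusion', 'affine paste', 'controlnet', 'inpaint']
```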

b. Second Diffusion:

After obtaining the result of the first diffusion, we fuse it with the optimal user image for face fusion, then run stable diffusion again with the user's digital doppelgänger. This second generation uses a higher resolution.

Special thanks

Special thanks to DevelopmentZheng, qiuyanxin, rainlee, jhuang1207, bubbliiiing, wuziheng, yjjinjie, hkunzhe, yunkchen for their code contributions (in no particular order).


Related Project

We've also listed some great open-source projects, as well as extensions you might be interested in.

License

This project is licensed under the Apache License (Version 2.0).

Contact Us

  1. Use DingTalk to search for group 2 (54095000124), or scan to join.
  2. Since the WeChat group is full, scan the image on the right to add this member as a friend first, and they will then invite you into the WeChat group.

Contributors ✨

Thanks go to these wonderful people:

This project follows the all-contributors specification. Contributions of any kind are welcome!



sd-webui-easyphoto's Issues

A100 40 GB runs out of VRAM: uploading images in the webui and starting with default parameters exhausts GPU memory

Exception in thread Thread-6 (preprocess_images):
Traceback (most recent call last):
File "/root/miniconda3/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
self.run()
File "/root/miniconda3/lib/python3.10/threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "/root/data/weiui/stable-diffusion-webui/extensions/sd-webui-EasyPhoto/scripts/preprocess.py", line 125, in preprocess_images
sub_image = Image.fromarray(cv2.cvtColor(portrait_enhancement(sub_image)[OutputKeys.OUTPUT_IMG], cv2.COLOR_BGR2RGB))
File "/root/miniconda3/lib/python3.10/site-packages/modelscope/pipelines/base.py", line 219, in call
output = self._process_single(input, *args, **kwargs)
File "/root/miniconda3/lib/python3.10/site-packages/modelscope/pipelines/base.py", line 247, in _process_single
out = self.preprocess(input, **preprocess_params)
File "/root/miniconda3/lib/python3.10/site-packages/modelscope/pipelines/cv/image_portrait_enhancement_pipeline.py", line 178, in preprocess
img_sr = self.sr_process(img)
File "/root/miniconda3/lib/python3.10/site-packages/modelscope/pipelines/cv/image_portrait_enhancement_pipeline.py", line 161, in sr_process
output = self.sr_model(img)
File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/root/miniconda3/lib/python3.10/site-packages/modelscope/models/cv/super_resolution/rrdbnet_arch.py", line 129, in forward
self.conv_up2(F.interpolate(feat, scale_factor=2, mode='nearest')))
File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/root/data/weiui/stable-diffusion-webui/extensions-builtin/Lora/networks.py", line 376, in network_Conv2d_forward
return torch.nn.Conv2d_forward_before_network(self, input)
File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 463, in forward
return self._conv_forward(input, self.weight, self.bias)
File "/root/miniconda3/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 459, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 14.73 GiB (GPU 0; 39.42 GiB total capacity; 23.85 GiB already allocated; 12.58 GiB free; 26.15 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
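As the error message itself suggests, allocator fragmentation can sometimes be mitigated by setting PYTORCH_CUDA_ALLOC_CONF before PyTorch makes its first CUDA allocation. The 512 below is only an example value, not a recommendation from the plugin authors:

```python
import os

# Must be set before the first CUDA allocation, i.e. before launching the webui
# (setting it in an already-running process that has used CUDA has no effect).
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:512"
print(os.environ["PYTORCH_CUDA_ALLOC_CONF"])  # -> max_split_size_mb:512
```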


[s3 error printed on Windows] C:\arrow\cpp\src\arrow\filesystem\s3fs.cc:2829: once the default LoRA has finished training and the UI is restarted, this error appears; it never did before. Why?

activating extra network lora: TypeError
Traceback (most recent call last):
File "C:\Users\nsg\stable-diffusion-webui\modules\extra_networks.py", line 145, in activate
extra_network.activate(p, [])
File "C:\Users\nsg\stable-diffusion-webui\extensions-builtin\Lora\extra_networks_lora.py", line 18, in activate
p.all_prompts = [x + f"lora:{additional}:{shared.opts.extra_networks_default_multiplier}" for x in p.all_prompts]
TypeError: 'NoneType' object is not iterable

NameError: name 'logging' is not defined

File "C:\Project\stable-diffusion-webui\extensions\sd-webui-EasyPhoto\scripts\easyphoto_train.py", line 226, in process_rotate_image
logging.info(f'Check rotate failed: has not exif. Return original img.')
NameError: name 'logging' is not defined

You guys forgot to import logging in easyphoto_train.py
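A minimal illustration of the fix, assuming the module only needs the standard-library logger:

```python
# At the top of easyphoto_train.py, alongside the other imports:
import logging

# The failing call from the traceback then works as intended:
logging.info("Check rotate failed: has no exif. Return original img.")
```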

invalid syntax

Traceback (most recent call last):
File "/home/h3c/Documents/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/routes.py", line 488, in run_predict
output = await app.get_blocks().process_api(
File "/home/h3c/Documents/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1431, in process_api
result = await self.call_function(
File "/home/h3c/Documents/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1103, in call_function
prediction = await anyio.to_thread.run_sync(
File "/home/h3c/Documents/stable-diffusion-webui/venv/lib/python3.10/site-packages/anyio/to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "/home/h3c/Documents/stable-diffusion-webui/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "/home/h3c/Documents/stable-diffusion-webui/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 807, in run
result = context.run(func, *args)
File "/home/h3c/Documents/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/utils.py", line 707, in wrapper
response = f(*args, **kwargs)
File "/home/h3c/Documents/stable-diffusion-webui/extensions/sd-webui-EasyPhoto/scripts/easyphoto_infer.py", line 251, in easyphoto_infer_forward
template_images = eval(selected_template_images)
File "", line 0

SyntaxError: invalid syntax

Failed to obtain preprocessed images; please check the training process

I'm a beginner who doesn't know code. Where did this go wrong?

image

Start Downloading weights
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: D:\sdwebui\stable-diffusion-webui\extensions\EasyPhoto-sd-webui\models\buffalo_l\2d106det.onnx landmark_2d_106 ['None', 3, 192, 192] 0.0 1.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: D:\sdwebui\stable-diffusion-webui\extensions\EasyPhoto-sd-webui\models\buffalo_l\det_10g.onnx detection [1, 3, '?', '?'] 127.5 128.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: D:\sdwebui\stable-diffusion-webui\extensions\EasyPhoto-sd-webui\models\buffalo_l\w600k_r50.onnx recognition ['None', 3, 112, 112] 127.5 127.5
set det-size: (640, 640)
2023-09-04 18:08:40,207 - modelscope - INFO - Model revision not specified, use the latest revision: v2.0.2
2023-09-04 18:08:42,077 - modelscope - INFO - initiate model from C:\Users\fei.cache\modelscope\hub\damo\cv_resnet50_face-detection_retinaface
2023-09-04 18:08:42,078 - modelscope - INFO - initiate model from location C:\Users\fei.cache\modelscope\hub\damo\cv_resnet50_face-detection_retinaface.
2023-09-04 18:08:42,084 - modelscope - WARNING - No preprocessor field found in cfg.
2023-09-04 18:08:42,084 - modelscope - WARNING - No val key and type key found in preprocessor domain of configuration.json file.
2023-09-04 18:08:42,084 - modelscope - WARNING - Cannot find available config to build preprocessor at mode inference, current config: {'model_dir': 'C:\Users\fei\.cache\modelscope\hub\damo\cv_resnet50_face-detection_retinaface'}. trying to build by task and model information.
2023-09-04 18:08:42,085 - modelscope - WARNING - Find task: face-detection, model type: None. Insufficient information to build preprocessor, skip building preprocessor
2023-09-04 18:08:42,087 - modelscope - INFO - loading model from C:\Users\fei.cache\modelscope\hub\damo\cv_resnet50_face-detection_retinaface\pytorch_model.pt
2023-09-04 18:08:42,549 - modelscope - INFO - load model done
2023-09-04 18:08:43,092 - modelscope - INFO - Model revision not specified, use the latest revision: v1.0.0
2023-09-04 18:08:43,472 - modelscope - INFO - initiate model from C:\Users\fei.cache\modelscope\hub\damo\cv_u2net_salient-detection
2023-09-04 18:08:43,473 - modelscope - INFO - initiate model from location C:\Users\fei.cache\modelscope\hub\damo\cv_u2net_salient-detection.
2023-09-04 18:08:43,475 - modelscope - INFO - initialize model from C:\Users\fei.cache\modelscope\hub\damo\cv_u2net_salient-detection
2023-09-04 18:08:43,924 - modelscope - WARNING - No preprocessor field found in cfg.
2023-09-04 18:08:43,925 - modelscope - WARNING - No val key and type key found in preprocessor domain of configuration.json file.
2023-09-04 18:08:43,925 - modelscope - WARNING - Cannot find available config to build preprocessor at mode inference, current config: {'model_dir': 'C:\Users\fei\.cache\modelscope\hub\damo\cv_u2net_salient-detection'}. trying to build by task and model information.
2023-09-04 18:08:43,925 - modelscope - WARNING - No preprocessor key ('detection', 'semantic-segmentation') found in PREPROCESSOR_MAP, skip building preprocessor.
2023-09-04 18:08:44,337 - modelscope - INFO - Use user-specified model revision: v1.0.1
2023-09-04 18:08:44,622 - modelscope - WARNING - ('PIPELINES', 'skin-retouching-torch', 'skin-retouching-torch') not found in ast index file
2023-09-04 18:08:44,622 - modelscope - INFO - initiate model from C:\Users\fei.cache\modelscope\hub\damo\cv_unet_skin_retouching_torch
2023-09-04 18:08:44,623 - modelscope - INFO - initiate model from location C:\Users\fei.cache\modelscope\hub\damo\cv_unet_skin_retouching_torch.
2023-09-04 18:08:44,628 - modelscope - WARNING - No preprocessor field found in cfg.
2023-09-04 18:08:44,629 - modelscope - WARNING - No val key and type key found in preprocessor domain of configuration.json file.
2023-09-04 18:08:44,629 - modelscope - WARNING - Cannot find available config to build preprocessor at mode inference, current config: {'model_dir': 'C:\Users\fei\.cache\modelscope\hub\damo\cv_unet_skin_retouching_torch'}. trying to build by task and model information.
2023-09-04 18:08:44,629 - modelscope - WARNING - Find task: skin-retouching-torch, model type: None. Insufficient information to build preprocessor, skip building preprocessor
2023-09-04 18:08:45,379 - modelscope - INFO - Model revision not specified, use the latest revision: v2.0.2
2023-09-04 18:08:47,710 - modelscope - INFO - initiate model from C:\Users\fei.cache\modelscope\hub\damo\cv_resnet50_face-detection_retinaface
2023-09-04 18:08:47,711 - modelscope - INFO - initiate model from location C:\Users\fei.cache\modelscope\hub\damo\cv_resnet50_face-detection_retinaface.
2023-09-04 18:08:47,717 - modelscope - WARNING - No preprocessor field found in cfg.
2023-09-04 18:08:47,717 - modelscope - WARNING - No val key and type key found in preprocessor domain of configuration.json file.
2023-09-04 18:08:47,717 - modelscope - WARNING - Cannot find available config to build preprocessor at mode inference, current config: {'model_dir': 'C:\Users\fei\.cache\modelscope\hub\damo\cv_resnet50_face-detection_retinaface'}. trying to build by task and model information.
2023-09-04 18:08:47,717 - modelscope - WARNING - Find task: face-detection, model type: None. Insufficient information to build preprocessor, skip building preprocessor
2023-09-04 18:08:47,720 - modelscope - INFO - loading model from C:\Users\fei.cache\modelscope\hub\damo\cv_resnet50_face-detection_retinaface\pytorch_model.pt
2023-09-04 18:08:48,198 - modelscope - INFO - load model done
Exception in thread Thread-68 (preprocess_images):
Traceback (most recent call last):
File "D:\sdwebui\stable-diffusion-webui\venv\lib\site-packages\modelscope\utils\registry.py", line 212, in build_from_cfg
return obj_cls(**args)
File "C:\Users\fei.cache\modelscope\modelscope_modules\cv_unet_skin_retouching_torch\ms_wrapper.py", line 76, in init
self.sess, self.input_node_name, self.out_node_name = self.load_onnx_model(
File "C:\Users\fei.cache\modelscope\modelscope_modules\cv_unet_skin_retouching_torch\ms_wrapper.py", line 93, in load_onnx_model
sess = onnxruntime.InferenceSession(onnx_path)
File "D:\sdwebui\stable-diffusion-webui\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 396, in init
raise e
File "D:\sdwebui\stable-diffusion-webui\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 383, in init
self._create_inference_session(providers, provider_options, disabled_optimizers)
File "D:\sdwebui\stable-diffusion-webui\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 415, in _create_inference_session
raise ValueError(
ValueError: This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'], ...)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\fei\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
self.run()
File "C:\Users\fei\AppData\Local\Programs\Python\Python310\lib\threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "D:\sdwebui\stable-diffusion-webui\extensions\EasyPhoto-sd-webui\scripts\preprocess.py", line 46, in preprocess_images
skin_retouching = pipeline('skin-retouching-torch', model='damo/cv_unet_skin_retouching_torch', model_revision='v1.0.1')
File "D:\sdwebui\stable-diffusion-webui\venv\lib\site-packages\modelscope\pipelines\builder.py", line 147, in pipeline
return build_pipeline(cfg, task_name=task)
File "D:\sdwebui\stable-diffusion-webui\venv\lib\site-packages\modelscope\pipelines\builder.py", line 59, in build_pipeline
return build_from_cfg(
File "D:\sdwebui\stable-diffusion-webui\venv\lib\site-packages\modelscope\utils\registry.py", line 215, in build_from_cfg
raise type(e)(f'{obj_cls.name}: {e}')
ValueError: SkinRetouchingTorchPipeline: This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'], ...)
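The error message spells out the fix: pass the providers argument explicitly when creating the InferenceSession. Below is a sketch of a provider-selection helper; the helper name and preference order are our own illustrative choices, not the plugin's code:

```python
def pick_providers(available):
    """Order execution providers GPU-first, with CPU as the guaranteed fallback."""
    preferred = ["TensorrtExecutionProvider", "CUDAExecutionProvider", "CPUExecutionProvider"]
    return [p for p in preferred if p in available]

# In ms_wrapper.py's load_onnx_model, the session creation would then read roughly:
#   sess = onnxruntime.InferenceSession(
#       onnx_path,
#       providers=pick_providers(onnxruntime.get_available_providers()),
#   )
print(pick_providers(["CPUExecutionProvider"]))  # -> ['CPUExecutionProvider']
```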

[Some face-model errors require first checking whether the training images clearly contain a single face] Error after training finishes

jpg: 7.jpg face_id_scores 0.6320564312897047
jpg: 19.jpg face_id_scores 0.5878771045231846
jpg: 14.jpg face_id_scores 0.1949093026624526
jpg: 5.jpg face_id_scores 0.5932805170146064
jpg: 18.jpg face_id_scores 0.5313497371334172
jpg: 17.jpg face_id_scores 0.5227915373389229
jpg: 8.jpg face_id_scores 0.5211492366305912
jpg: 15.jpg face_id_scores 0.5717502106011998
jpg: 6.jpg face_id_scores 0.5081543266131069
jpg: 12.jpg face_id_scores 0.16740420898392058
jpg: 10.jpg face_id_scores 0.4865440232408242
jpg: 11.jpg face_id_scores 0.012049722207594385
jpg: 9.jpg face_id_scores 0.5337397954804464
jpg: 13.jpg face_id_scores 0.1811154251123568
jpg: 16.jpg face_id_scores 0.473920905871453
0it [00:00, ?it/s]2023-09-08 11:32:55,260 - modelscope - WARNING - task skin-retouching-torch input definition is missing
2023-09-08 11:32:57,398 - modelscope - WARNING - task skin-retouching-torch output keys are missing
12it [00:34, 2.86s/it]
Exception in thread Thread-63 (preprocess_images):
Traceback (most recent call last):
File "C:\Users\sunhu\anaconda3\envs\sdwebui\lib\threading.py", line 1016, in _bootstrap_inner
self.run()
File "C:\Users\sunhu\anaconda3\envs\sdwebui\lib\threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\sunhu\aigc\stable-diffusion-webui\extensions\EasyPhoto-sd-webui\scripts\preprocess.py", line 149, in preprocess_images
sub_box = sub_boxes[0]
IndexError: index 0 is out of bounds for axis 0 with size 0

Not sure where the problem occurred.
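The traceback shows `sub_boxes[0]` being taken from an empty detection result, i.e. no face was found in that image, which matches the issue title's advice to check the training photos. A defensive sketch in the spirit of the failing line in preprocess.py (the function name is hypothetical):

```python
def first_face_box(sub_boxes):
    """Return the first detected face box, or None when detection found nothing."""
    if len(sub_boxes) == 0:
        # No face detected in this training image: skip it instead of crashing.
        return None
    return sub_boxes[0]

print(first_face_box([]))                  # -> None
print(first_face_box([[10, 20, 50, 60]]))  # -> [10, 20, 50, 60]
```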

OSError: No space left on device

This error occurs even though the disk still has plenty of free space.
Checking with df -h shows the following:
Screenshot 2023-09-15 10:19:46 PM
If /dev/shm is full, this error appears.
See the link below
https://stackoverflow.com/questions/6998083/python-causing-ioerror-errno-28-no-space-left-on-device-results-32766-h
or an AI explanation:

/dev/shm is a temporary, memory-backed filesystem that uses system RAM to hold data shared between processes.

It typically holds the following kinds of data:

Shared memory segments
Processes can allocate shared memory via APIs such as shmget() for fast inter-process data exchange; these segments appear as files under /dev/shm.

Semaphores
Semaphores used for process synchronization are also stored as files under /dev/shm.

Sockets
Processes that communicate over sockets may also place socket files under /dev/shm.

Temporary files
Some applications use /dev/shm for temporary files, since shm file I/O is faster than a disk filesystem.

Other temporary data
Programs may also write miscellaneous runtime data to /dev/shm.

In short, /dev/shm mainly holds the temporary data needed for inter-process shared memory and synchronization; it uses RAM for speed, but nothing is persisted, and the data is lost when the processes exit.

Monitoring shm usage gives more insight into how processes on the system communicate and how efficiently they do so.
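A quick stdlib-only way to check /dev/shm usage from Python (on systems without /dev/shm, the path check simply reports its absence):

```python
import os
import shutil

# Report usage of the shared-memory filesystem, where inter-process temp data lands.
if os.path.isdir("/dev/shm"):
    usage = shutil.disk_usage("/dev/shm")
    print(f"/dev/shm: {usage.used / 2**30:.2f} GiB used of {usage.total / 2**30:.2f} GiB")
else:
    print("/dev/shm not present on this system")
```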

How to load an SD-VAE model

Why is there no SD-VAE model option in my stable-diffusion webui interface?

image

Am I missing a plugin or a setting?

Preprocessing fails after updating

After today's update, the preprocessing stage keeps erroring:
0it [00:00, ?it/s]2023-09-15 13:01:22,030 - modelscope - WARNING - task skin-retouching-torch input definition is missing
2023-09-15 13:01:22,113 - modelscope - WARNING - task skin-retouching-torch output keys are missing
0it [00:00, ?it/s]
Exception in thread Thread-8 (preprocess_images):
Traceback (most recent call last):
File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
self.run()
File "/usr/lib/python3.10/threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "/mnt/workspace/demos/stable_diffusion_easyphoto/stable-diffusion-webui/extensions/sd-webui-EasyPhoto/scripts/preprocess.py", line 157, in preprocess_images
sub_box = sub_boxes[0]
IndexError: index 0 is out of bounds for axis 0 with size 0

train lora cuda out of memory

image
image
I'm a beginner. Training a LoRA on two V100 16G cards throws an out-of-memory error. Can I run on multiple GPUs, or is there another way? Please help.

Inference failed.

2023-09-10 04:31:59.415814: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:375] MLIR V1 optimization pass is not enabled
2023-09-10 04:32:02,769 - modelscope - INFO - model inference done
Traceback (most recent call last):
  File "C:\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "C:\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "C:\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "C:\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "C:\stable-diffusion-webui\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "C:\stable-diffusion-webui\extensions\sd-webui-EasyPhoto\scripts\easyphoto_infer.py", line 412, in easyphoto_infer_forward
    first_diffusion_output_image = inpaint_with_mask_face(input_image, input_mask, replaced_input_image, diffusion_steps=first_diffusion_steps, denoising_strength=first_denoising_strength, input_prompt=input_prompts[index], hr_scale=1.0, seed=str(seed), sd_model_checkpoint=sd_model_checkpoint)
  File "C:\stable-diffusion-webui\extensions\sd-webui-EasyPhoto\scripts\easyphoto_infer.py", line 109, in inpaint_with_mask_face
    image = i2i_inpaint_call(
  File "C:\stable-diffusion-webui\extensions\sd-webui-EasyPhoto\scripts\sdwebui.py", line 250, in i2i_inpaint_call
    sd_vae              = os.path.basename(find_vae_near_checkpoint(sd_vae))
  File "C:\Program Files\Python310\lib\ntpath.py", line 242, in basename
    return split(p)[1]
  File "C:\Program Files\Python310\lib\ntpath.py", line 211, in split
    p = os.fspath(p)
TypeError: expected str, bytes or os.PathLike object, not NoneType

Above is a log that I got when I tried to generate an image with a predefined template.
Could you tell me how to fix this issue? Thank you for this extension.
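The crash happens because find_vae_near_checkpoint returns None when no VAE file sits next to the checkpoint, and os.path.basename(None) raises TypeError. A defensive wrapper (hypothetical name, illustrating the pattern rather than the extension's actual fix) looks like:

```python
import os

def safe_basename(path, default="Automatic"):
    """Return the basename of path, or a fallback when path is None."""
    if path is None:
        return default  # e.g. let webui pick the VAE automatically
    return os.path.basename(path)
```

Placing a matching .vae file next to the checkpoint, so the lookup no longer returns None, also avoids the error.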

error in training

2023-09-06 14:59:11 /stable-diffusion-webui/extensions/sd-webui-EasyPhoto/scripts/train_kohya/train_lora.py
2023-09-06 14:59:15 [06:59:15] WARNING The following values were not passed to launch.py:895
2023-09-06 14:59:15 accelerate launch and had defaults used
2023-09-06 14:59:15 instead:
2023-09-06 14:59:15 --num_processes was set to a value
2023-09-06 14:59:15 of 1
2023-09-06 14:59:15 --num_machines was set to a value of
2023-09-06 14:59:15 1
2023-09-06 14:59:15 --dynamo_backend was set to a value
2023-09-06 14:59:15 of 'no'
2023-09-06 14:59:15 To avoid this warning pass in values for each
2023-09-06 14:59:15 of the problematic parameters or run
2023-09-06 14:59:15 accelerate config.
2023-09-06 14:59:19 2023-09-06 06:59:19,794 - modelscope - INFO - PyTorch version 2.0.1+cu118 Found.
2023-09-06 14:59:19 2023-09-06 06:59:19,795 - modelscope - INFO - TensorFlow version 2.13.0 Found.
2023-09-06 14:59:19 2023-09-06 06:59:19,796 - modelscope - INFO - Loading ast index from /root/.cache/modelscope/ast_indexer
2023-09-06 14:59:19 2023-09-06 06:59:19,883 - modelscope - INFO - Loading done! Current index file version is 1.9.0, with md5 f937efe202fbf9e469155dce0c614b9a and a total number of 921 components indexed
2023-09-06 14:59:20 09/06/2023 06:59:20 - INFO - main - Distributed environment: NO
2023-09-06 14:59:20 Num processes: 1
2023-09-06 14:59:20 Process index: 0
2023-09-06 14:59:20 Local process index: 0
2023-09-06 14:59:20 Device: cuda
2023-09-06 14:59:20
2023-09-06 14:59:20 Mixed precision type: fp16
2023-09-06 14:59:20
2023-09-06 14:59:20 {'clip_sample_range', 'variance_type', 'timestep_spacing', 'dynamic_thresholding_ratio', 'prediction_type', 'thresholding', 'sample_max_value'} was not found in config. Values will be initialized to default values.
2023-09-06 14:59:20 ╭───────────────────── Traceback (most recent call last) ──────────────────────╮
2023-09-06 14:59:20 │ /stable-diffusion-webui/extensions/sd-webui-EasyPhoto/scripts/train_kohya/tr │
2023-09-06 14:59:20 │ ain_lora.py:1394 in │
2023-09-06 14:59:20 │ │
2023-09-06 14:59:20 │ 1391 │
2023-09-06 14:59:20 │ 1392 │
2023-09-06 14:59:20 │ 1393 if name == "main": │
2023-09-06 14:59:20 │ ❱ 1394 │ main() │
2023-09-06 14:59:20 │ 1395 │
2023-09-06 14:59:20 │ │
2023-09-06 14:59:20 │ /stable-diffusion-webui/extensions/sd-webui-EasyPhoto/scripts/train_kohya/tr │
2023-09-06 14:59:20 │ ain_lora.py:843 in main │
2023-09-06 14:59:20 │ │
2023-09-06 14:59:20 │ 840 │ tokenizer = CLIPTokenizer.from_pretrained( │
2023-09-06 14:59:20 │ 841 │ │ args.pretrained_model_name_or_path, subfolder="tokenizer", re │
2023-09-06 14:59:20 │ 842 │ ) │
2023-09-06 14:59:20 │ ❱ 843 │ text_encoder, vae, unet = load_models_from_stable_diffusion_check │
2023-09-06 14:59:20 │ 844 │ # freeze parameters of models to save more memory │
2023-09-06 14:59:20 │ 845 │ unet.requires_grad_(False) │
2023-09-06 14:59:20 │ 846 │ vae.requires_grad_(False) │
2023-09-06 14:59:20 │ │
2023-09-06 14:59:20 │ /data/config/auto/extensions/sd-webui-EasyPhoto/scripts/train_kohya/utils/mo │
2023-09-06 14:59:20 │ del_utils.py:842 in load_models_from_stable_diffusion_checkpoint │
2023-09-06 14:59:20 │ │
2023-09-06 14:59:20 │ 839 │
2023-09-06 14:59:20 │ 840 │
2023-09-06 14:59:20 │ 841 def load_models_from_stable_diffusion_checkpoint(v2, ckpt_path, device │
2023-09-06 14:59:20 │ ❱ 842 │ , state_dict = load_checkpoint_with_text_encoder_conversion(ckpt
2023-09-06 14:59:20 │ 843 │ │
2023-09-06 14:59:20 │ 844 │ # Convert the UNet2DConditionModel model. │
2023-09-06 14:59:20 │ 845 │ unet_config = create_unet_diffusers_config(v2, unet_use_linear_pro │
2023-09-06 14:59:20 │ │
2023-09-06 14:59:20 │ /data/config/auto/extensions/sd-webui-EasyPhoto/scripts/train_kohya/utils/mo │
2023-09-06 14:59:20 │ del_utils.py:818 in load_checkpoint_with_text_encoder_conversion │
2023-09-06 14:59:20 │ │
2023-09-06 14:59:20 │ 815 │ │
2023-09-06 14:59:20 │ 816 │ if is_safetensors(ckpt_path): │
2023-09-06 14:59:20 │ 817 │ │ checkpoint = None │
2023-09-06 14:59:20 │ ❱ 818 │ │ state_dict = load_file(ckpt_path) # , device) # may causes er │
2023-09-06 14:59:20 │ 819 │ else: │
2023-09-06 14:59:20 │ 820 │ │ checkpoint = torch.load(ckpt_path, map_location=device) │
2023-09-06 14:59:20 │ 821 │ │ if "state_dict" in checkpoint: │
2023-09-06 14:59:20 │ │
2023-09-06 14:59:20 │ /usr/local/lib/python3.10/site-packages/safetensors/torch.py:259 in │
2023-09-06 14:59:20 │ load_file │
2023-09-06 14:59:20 │ │
2023-09-06 14:59:20 │ 256 │ ``` │
2023-09-06 14:59:20 │ 257 │ """ │
2023-09-06 14:59:20 │ 258 │ result = {} │
2023-09-06 14:59:20 │ ❱ 259 │ with safe_open(filename, framework="pt", device=device) as f: │
2023-09-06 14:59:20 │ 260 │ │ for k in f.keys(): │
2023-09-06 14:59:20 │ 261 │ │ │ result[k] = f.get_tensor(k) │
2023-09-06 14:59:20 │ 262 │ return result │
2023-09-06 14:59:20 ╰──────────────────────────────────────────────────────────────────────────────╯
2023-09-06 14:59:20 SafetensorError: Error while deserializing header: MetadataIncompleteBuffer
2023-09-06 14:59:21 ╭───────────────────── Traceback (most recent call last) ──────────────────────╮
2023-09-06 14:59:21 │ /usr/local/bin/accelerate:8 in │
2023-09-06 14:59:21 │ │
2023-09-06 14:59:21 │ 5 from accelerate.commands.accelerate_cli import main │
2023-09-06 14:59:21 │ 6 if name == 'main': │
2023-09-06 14:59:21 │ 7 │ sys.argv[0] = re.sub(r'(-script.pyw|.exe)?$', '', sys.argv[0]) │
2023-09-06 14:59:21 │ ❱ 8 │ sys.exit(main()) │
2023-09-06 14:59:21 │ 9 │
2023-09-06 14:59:21 │ │
2023-09-06 14:59:21 │ /usr/local/lib/python3.10/site-packages/accelerate/commands/accelerate_cli.p │
2023-09-06 14:59:21 │ y:45 in main │
2023-09-06 14:59:21 │ │
2023-09-06 14:59:21 │ 42 │ │ exit(1) │
2023-09-06 14:59:21 │ 43 │ │
2023-09-06 14:59:21 │ 44 │ # Run │
2023-09-06 14:59:21 │ ❱ 45 │ args.func(args) │
2023-09-06 14:59:21 │ 46 │
2023-09-06 14:59:21 │ 47 │
2023-09-06 14:59:21 │ 48 if name == "main": │
2023-09-06 14:59:21 │ │
2023-09-06 14:59:21 │ /usr/local/lib/python3.10/site-packages/accelerate/commands/launch.py:923 in │
2023-09-06 14:59:21 │ launch_command │
2023-09-06 14:59:21 │ │
2023-09-06 14:59:21 │ 920 │ elif defaults is not None and defaults.compute_environment == Comp │
2023-09-06 14:59:21 │ 921 │ │ sagemaker_launcher(defaults, args) │
2023-09-06 14:59:21 │ 922 │ else: │
2023-09-06 14:59:21 │ ❱ 923 │ │ simple_launcher(args) │
2023-09-06 14:59:21 │ 924 │
2023-09-06 14:59:21 │ 925 │
2023-09-06 14:59:21 │ 926 def main(): │
2023-09-06 14:59:21 │ │
2023-09-06 14:59:21 │ /usr/local/lib/python3.10/site-packages/accelerate/commands/launch.py:579 in │
2023-09-06 14:59:21 │ simple_launcher │
2023-09-06 14:59:21 │ │
2023-09-06 14:59:21 │ 576 │ process.wait() │
2023-09-06 14:59:21 │ 577 │ if process.returncode != 0: │
2023-09-06 14:59:21 │ 578 │ │ if not args.quiet: │
2023-09-06 14:59:21 │ ❱ 579 │ │ │ raise subprocess.CalledProcessError(returncode=process.ret │
2023-09-06 14:59:21 │ 580 │ │ else: │
2023-09-06 14:59:21 │ 581 │ │ │ sys.exit(1) │
2023-09-06 14:59:21 │ 582 │
2023-09-06 14:59:21 ╰──────────────────────────────────────────────────────────────────────────────╯
2023-09-06 14:59:21 CalledProcessError: Command '['/usr/local/bin/python',
2023-09-06 14:59:21 '/stable-diffusion-webui/extensions/sd-webui-EasyPhoto/scripts/train_kohya/train
2023-09-06 14:59:21 _lora.py',
2023-09-06 14:59:21 '--pretrained_model_name_or_path=/stable-diffusion-webui/extensions/sd-webui-Eas
2023-09-06 14:59:21 yPhoto/models/stable-diffusion-v1-5',
2023-09-06 14:59:21 '--pretrained_model_ckpt=/stable-diffusion-webui/models/Stable-diffusion/Chillou
2023-09-06 14:59:21 tmix-Ni-pruned-fp16-fix.safetensors',
2023-09-06 14:59:21 '--train_data_dir=/stable-diffusion-webui/outputs/easyphoto-user-id-infos/jinche
2023-09-06 14:59:21 n/processed_images', '--caption_column=text', '--resolution=512',
2023-09-06 14:59:21 '--random_flip', '--train_batch_size=1', '--gradient_accumulation_steps=4',
2023-09-06 14:59:21 '--dataloader_num_workers=16', '--max_train_steps=800',
2023-09-06 14:59:21 '--checkpointing_steps=100', '--learning_rate=0.0001',
2023-09-06 14:59:21 '--lr_scheduler=constant', '--lr_warmup_steps=0', '--train_text_encoder',
2023-09-06 14:59:21 '--seed=42', '--rank=128', '--network_alpha=64',
2023-09-06 14:59:21 '--validation_prompt=easyphoto_face, easyphoto, 1person',
2023-09-06 14:59:21 '--validation_steps=100',
2023-09-06 14:59:21 '--output_dir=/stable-diffusion-webui/outputs/easyphoto-user-id-infos/jinchen/us
2023-09-06 14:59:21 er_weights',
2023-09-06 14:59:21 '--logging_dir=/stable-diffusion-webui/outputs/easyphoto-user-id-infos/jinchen/u
2023-09-06 14:59:21 ser_weights', '--enable_xformers_memory_efficient_attention',
2023-09-06 14:59:21 '--mixed_precision=fp16',
2023-09-06 14:59:21 '--template_dir=/stable-diffusion-webui/extensions/sd-webui-EasyPhoto/models/tra
2023-09-06 14:59:21 ining_templates', '--template_mask', '--merge_best_lora_based_face_id',
2023-09-06 14:59:21 '--merge_best_lora_name=jinchen']' returned non-zero exit status 1.
2023-09-06 14:59:22 Traceback (most recent call last):
2023-09-06 14:59:22 File "/usr/local/lib/python3.10/site-packages/gradio/routes.py", line 422, in run_predict
2023-09-06 14:59:22 output = await app.get_blocks().process_api(
2023-09-06 14:59:22 File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 1323, in process_api
2023-09-06 14:59:22 result = await self.call_function(
2023-09-06 14:59:22 File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 1051, in call_function
2023-09-06 14:59:22 prediction = await anyio.to_thread.run_sync(
2023-09-06 14:59:22 File "/usr/local/lib/python3.10/site-packages/anyio/to_thread.py", line 33, in run_sync
2023-09-06 14:59:22 return await get_asynclib().run_sync_in_worker_thread(
2023-09-06 14:59:22 File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
2023-09-06 14:59:22 return await future
2023-09-06 14:59:22 File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 807, in run
2023-09-06 14:59:22 result = context.run(func, *args)
2023-09-06 14:59:22 File "/stable-diffusion-webui/extensions/sd-webui-EasyPhoto/scripts/easyphoto_train.py", line 218, in easyphoto_train_forward
2023-09-06 14:59:22 copyfile(best_weight_path, webui_save_path)
2023-09-06 14:59:22 File "/usr/local/lib/python3.10/shutil.py", line 254, in copyfile
2023-09-06 14:59:22 with open(src, 'rb') as fsrc:
2023-09-06 14:59:22 FileNotFoundError: [Errno 2] No such file or directory: '/stable-diffusion-webui/outputs/easyphoto-user-id-infos/jinchen/user_weights/best_outputs/jinchen.safetensors'
2023-09-06 14:59:22 2023-09-06 06:59:22,201 - httpx - HTTP Request: POST http://localhost:7860/api/predict "HTTP/1.1 500 Internal Server Error"
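The root cause here is SafetensorError: MetadataIncompleteBuffer, which almost always means the .safetensors checkpoint (here Chilloutmix-Ni-pruned-fp16-fix.safetensors) is truncated or corrupted, typically from an interrupted download; re-downloading the file usually fixes it. A stdlib-only sketch to sanity-check the file before training (the format starts with an 8-byte little-endian header length followed by that many bytes of JSON):

```python
import json
import struct

def check_safetensors_header(path):
    """Return True if the file has a complete, parseable safetensors header.

    A truncated download typically ends inside this header, which is what
    "MetadataIncompleteBuffer" reports on load.
    """
    with open(path, "rb") as f:
        prefix = f.read(8)
        if len(prefix) < 8:
            return False
        (header_len,) = struct.unpack("<Q", prefix)
        header = f.read(header_len)
        if len(header) < header_len:
            return False  # file ends inside the header: incomplete download
        try:
            json.loads(header)
        except ValueError:
            return False
        return True
```

If the check fails, delete and re-download the checkpoint; the later FileNotFoundError for jinchen.safetensors is only a consequence of training aborting early.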

Path or directory not found during training

Environment: Windows 11, Python 3.10.11, webui 1.6 Dev
Traceback (most recent call last):
File "F:\stable-diffusion-webui\extensions\sd-webui-EasyPhoto\scripts\train_kohya\train_lora.py", line 1416, in
main()
File "F:\stable-diffusion-webui\extensions\sd-webui-EasyPhoto\scripts\train_kohya\train_lora.py", line 1180, in main
user_id = args.output_dir.split('/')[-2]
IndexError: list index out of range
Note: the Python runtime threw an exception. Please check the troubleshooting page.
Traceback (most recent call last):
File "C:\Users\H\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Users\H\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
exec(code, run_globals)
File "F:\stable-diffusion-webui\venv\lib\site-packages\accelerate\commands\launch.py", line 989, in
main()
File "F:\stable-diffusion-webui\venv\lib\site-packages\accelerate\commands\launch.py", line 985, in main
launch_command(args)
File "F:\stable-diffusion-webui\venv\lib\site-packages\accelerate\commands\launch.py", line 979, in launch_command
simple_launcher(args)
File "F:\stable-diffusion-webui\venv\lib\site-packages\accelerate\commands\launch.py", line 628, in simple_launcher
raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['F:\stable-diffusion-webui\venv\Scripts\Python.exe', 'F:\stable-diffusion-webui\extensions\sd-webui-EasyPhoto\scripts\train_kohya/train_lora.py', '--pretrained_model_name_or_path=extensions\sd-webui-EasyPhoto\models\stable-diffusion-v1-5', '--pretrained_model_ckpt=models\Stable-diffusion\Chilloutmix-Ni-pruned-fp16-fix.safetensors', '--train_data_dir=outputs\easyphoto-user-id-infos\111\processed_images', '--caption_column=text', '--resolution=512', '--random_flip', '--train_batch_size=1', '--gradient_accumulation_steps=4', '--dataloader_num_workers=0', '--max_train_steps=800', '--checkpointing_steps=100', '--learning_rate=0.0001', '--lr_scheduler=constant', '--lr_warmup_ste
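The IndexError comes from args.output_dir.split('/')[-2]: on Windows the output directory uses backslashes, so splitting on '/' yields a single element and index -2 is out of range. A portable sketch (hypothetical helper name) that accepts both separators:

```python
from pathlib import PureWindowsPath

def extract_user_id(output_dir):
    """Pick the user id one level above user_weights."""
    # PureWindowsPath treats both '/' and '\\' as separators,
    # so this works for Windows- and POSIX-style paths alike.
    parts = PureWindowsPath(output_dir).parts
    return parts[-2] if len(parts) >= 2 else None
```

Using pathlib (or os.sep) instead of hard-coding '/' avoids this whole class of Windows-only failures.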

Unable to generate after training completes

Error log:
2023-09-07 10:11:51,501 - modelscope - WARNING - Find task: face-detection, model type: None. Insufficient information to build preprocessor, skip building preprocessor
2023-09-07 10:11:51,502 - modelscope - INFO - loading model from /root/.cache/modelscope/hub/damo/cv_resnet50_face-detection_retinaface/pytorch_model.pt
2023-09-07 10:11:52,056 - modelscope - INFO - load model done
Traceback (most recent call last):
File "/root/miniconda3/lib/python3.10/site-packages/gradio/routes.py", line 422, in run_predict
session_state = {}
File "/root/miniconda3/lib/python3.10/site-packages/gradio/blocks.py", line 1323, in process_api
File "/root/miniconda3/lib/python3.10/site-packages/gradio/blocks.py", line 1051, in call_function
Calls function with given index and preprocessed input, and measures process time.
File "/root/miniconda3/lib/python3.10/site-packages/anyio/to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "/root/miniconda3/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "/root/miniconda3/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 807, in run
result = context.run(func, args)
File "/root/data/weiui/stable-diffusion-webui/extensions/sd-webui-EasyPhoto/scripts/easyphoto_infer.py", line 214, in easyphoto_infer_forward
best_outputs_paths = glob.glob(os.path.join(user_id_outpath_samples, user_id, "user_weights", "best_outputs", "*.jpg"))
File "/root/miniconda3/lib/python3.10/posixpath.py", line 90, in join
genericpath._check_arg_types('join', a, *p)
File "/root/miniconda3/lib/python3.10/genericpath.py", line 152, in _check_arg_types
raise TypeError(f'{funcname}() argument must be str, bytes, or '
TypeError: join() argument must be str, bytes, or os.PathLike object, not 'NoneType'

I used the default template, set an ID, and ran with the default parameters.

What went wrong on my first run?

To create a public link, set share=True in launch().
Startup time: 133.6s (prepare environment: 102.3s, import torch: 7.7s, import gradio: 0.7s, setup paths: 0.9s, initialize shared: 0.5s, other imports: 0.6s, setup codeformer: 0.2s, load scripts: 11.3s, initialize extra networks: 1.0s, scripts before_ui_callback: 0.3s, create ui: 3.3s, gradio launch: 4.5s, add APIs: 0.1s).
Applying attention optimization: xformers... done.
Model loaded in 14.0s (load weights from disk: 1.5s, create model: 1.8s, apply weights to model: 6.9s, move model to device: 0.2s, load textual inversion embeddings: 1.6s, calculate empty prompt: 1.9s).
Traceback (most recent call last):
File "C:\Users\nsg\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
output = await app.get_blocks().process_api(
File "C:\Users\nsg\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
result = await self.call_function(
File "C:\Users\nsg\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
prediction = await anyio.to_thread.run_sync(
File "C:\Users\nsg\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "C:\Users\nsg\stable-diffusion-webui\venv\lib\site-packages\anyio_backends_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "C:\Users\nsg\stable-diffusion-webui\venv\lib\site-packages\anyio_backends_asyncio.py", line 807, in run
result = context.run(func, *args)
File "C:\Users\nsg\stable-diffusion-webui\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
response = f(*args, **kwargs)
File "C:\Users\nsg\stable-diffusion-webui\modules\ui_extra_networks.py", line 392, in pages_html
return refresh()
File "C:\Users\nsg\stable-diffusion-webui\modules\ui_extra_networks.py", line 400, in refresh
ui.pages_contents = [pg.create_html(ui.tabname) for pg in ui.stored_extra_pages]
File "C:\Users\nsg\stable-diffusion-webui\modules\ui_extra_networks.py", line 400, in
ui.pages_contents = [pg.create_html(ui.tabname) for pg in ui.stored_extra_pages]
File "C:\Users\nsg\stable-diffusion-webui\modules\ui_extra_networks.py", line 162, in create_html
self.items = {x["name"]: x for x in self.list_items()}
File "C:\Users\nsg\stable-diffusion-webui\modules\ui_extra_networks.py", line 162, in
self.items = {x["name"]: x for x in self.list_items()}
File "C:\Users\nsg\stable-diffusion-webui\modules\ui_extra_networks_checkpoints.py", line 35, in list_items
yield self.create_item(name, index)
File "C:\Users\nsg\stable-diffusion-webui\modules\ui_extra_networks_checkpoints.py", line 18, in create_item
path, ext = os.path.splitext(checkpoint.filename)
AttributeError: 'NoneType' object has no attribute 'filename'
Start Downloading weights
Traceback (most recent call last):
File "C:\Users\nsg\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
output = await app.get_blocks().process_api(
File "C:\Users\nsg\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
result = await self.call_function(
File "C:\Users\nsg\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
prediction = await anyio.to_thread.run_sync(
File "C:\Users\nsg\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "C:\Users\nsg\stable-diffusion-webui\venv\lib\site-packages\anyio_backends_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "C:\Users\nsg\stable-diffusion-webui\venv\lib\site-packages\anyio_backends_asyncio.py", line 807, in run
result = context.run(func, *args)
File "C:\Users\nsg\stable-diffusion-webui\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
response = f(*args, **kwargs)
File "C:\Users\nsg\stable-diffusion-webui\extensions\sd-webui-EasyPhoto\scripts\easyphoto_train.py", line 115, in easyphoto_train_forward
original_backup_path = os.path.join(user_id_outpath_samples, user_id, "original_backup")
File "C:\Users\nsg\AppData\Local\Programs\Python\Python310\lib\ntpath.py", line 143, in join
genericpath._check_arg_types('join', path, *paths)
File "C:\Users\nsg\AppData\Local\Programs\Python\Python310\lib\genericpath.py", line 152, in _check_arg_types
raise TypeError(f'{funcname}() argument must be str, bytes, or '
TypeError: join() argument must be str, bytes, or os.PathLike object, not 'NoneType'
Start Downloading weights
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: C:\Users\nsg\stable-diffusion-webui\extensions\sd-webui-EasyPhoto\models\buffalo_l\2d106det.onnx landmark_2d_106 ['None', 3, 192, 192] 0.0 1.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: C:\Users\nsg\stable-diffusion-webui\extensions\sd-webui-EasyPhoto\models\buffalo_l\det_10g.onnx detection [1, 3, '?', '?'] 127.5 128.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: C:\Users\nsg\stable-diffusion-webui\extensions\sd-webui-EasyPhoto\models\buffalo_l\w600k_r50.onnx recognition ['None', 3, 112, 112] 127.5 127.5
set det-size: (640, 640)
2023-09-06 16:59:22,098 - modelscope - INFO - Model revision not specified, use the latest revision: v2.0.2
2023-09-06 16:59:24,605 - modelscope - INFO - initiate model from C:\Users\nsg.cache\modelscope\hub\damo\cv_resnet50_face-detection_retinaface
2023-09-06 16:59:24,606 - modelscope - INFO - initiate model from location C:\Users\nsg.cache\modelscope\hub\damo\cv_resnet50_face-detection_retinaface.
2023-09-06 16:59:24,617 - modelscope - WARNING - No preprocessor field found in cfg.
2023-09-06 16:59:24,618 - modelscope - WARNING - No val key and type key found in preprocessor domain of configuration.json file.
2023-09-06 16:59:24,619 - modelscope - WARNING - Cannot find available config to build preprocessor at mode inference, current config: {'model_dir': 'C:\Users\nsg\.cache\modelscope\hub\damo\cv_resnet50_face-detection_retinaface'}. trying to build by task and model information.
2023-09-06 16:59:24,620 - modelscope - WARNING - Find task: face-detection, model type: None. Insufficient information to build preprocessor, skip building preprocessor
2023-09-06 16:59:24,624 - modelscope - INFO - loading model from C:\Users\nsg.cache\modelscope\hub\damo\cv_resnet50_face-detection_retinaface\pytorch_model.pt
2023-09-06 16:59:25,371 - modelscope - INFO - load model done
2023-09-06 16:59:27,373 - modelscope - INFO - Model revision not specified, use the latest revision: v1.0.0
2023-09-06 16:59:28,111 - modelscope - INFO - initiate model from C:\Users\nsg.cache\modelscope\hub\damo\cv_u2net_salient-detection
2023-09-06 16:59:28,112 - modelscope - INFO - initiate model from location C:\Users\nsg.cache\modelscope\hub\damo\cv_u2net_salient-detection.
2023-09-06 16:59:28,119 - modelscope - INFO - initialize model from C:\Users\nsg.cache\modelscope\hub\damo\cv_u2net_salient-detection
2023-09-06 16:59:28,741 - modelscope - WARNING - No preprocessor field found in cfg.
2023-09-06 16:59:28,741 - modelscope - WARNING - No val key and type key found in preprocessor domain of configuration.json file.
2023-09-06 16:59:28,743 - modelscope - WARNING - Cannot find available config to build preprocessor at mode inference, current config: {'model_dir': 'C:\Users\nsg\.cache\modelscope\hub\damo\cv_u2net_salient-detection'}. trying to build by task and model information.
2023-09-06 16:59:28,744 - modelscope - WARNING - No preprocessor key ('detection', 'semantic-segmentation') found in PREPROCESSOR_MAP, skip building preprocessor.
2023-09-06 16:59:30,765 - modelscope - INFO - Use user-specified model revision: v1.0.2
2023-09-06 16:59:31,584 - modelscope - WARNING - ('PIPELINES', 'skin-retouching-torch', 'skin-retouching-torch') not found in ast index file
2023-09-06 16:59:31,585 - modelscope - INFO - initiate model from C:\Users\nsg.cache\modelscope\hub\damo\cv_unet_skin_retouching_torch
2023-09-06 16:59:31,588 - modelscope - INFO - initiate model from location C:\Users\nsg.cache\modelscope\hub\damo\cv_unet_skin_retouching_torch.
2023-09-06 16:59:31,597 - modelscope - WARNING - No preprocessor field found in cfg.
2023-09-06 16:59:31,597 - modelscope - WARNING - No val key and type key found in preprocessor domain of configuration.json file.
2023-09-06 16:59:31,598 - modelscope - WARNING - Cannot find available config to build preprocessor at mode inference, current config: {'model_dir': 'C:\Users\nsg\.cache\modelscope\hub\damo\cv_unet_skin_retouching_torch'}. trying to build by task and model information.
2023-09-06 16:59:31,599 - modelscope - WARNING - Find task: skin-retouching-torch, model type: None. Insufficient information to build preprocessor, skip building preprocessor
2023-09-06 16:59:34,006 - modelscope - INFO - Model revision not specified, use the latest revision: v2.0.2
2023-09-06 16:59:35,990 - modelscope - INFO - initiate model from C:\Users\nsg.cache\modelscope\hub\damo\cv_resnet50_face-detection_retinaface
2023-09-06 16:59:35,991 - modelscope - INFO - initiate model from location C:\Users\nsg.cache\modelscope\hub\damo\cv_resnet50_face-detection_retinaface.
2023-09-06 16:59:36,005 - modelscope - WARNING - No preprocessor field found in cfg.
2023-09-06 16:59:36,006 - modelscope - WARNING - No val key and type key found in preprocessor domain of configuration.json file.
2023-09-06 16:59:36,007 - modelscope - WARNING - Cannot find available config to build preprocessor at mode inference, current config: {'model_dir': 'C:\Users\nsg\.cache\modelscope\hub\damo\cv_resnet50_face-detection_retinaface'}. trying to build by task and model information.
2023-09-06 16:59:36,008 - modelscope - WARNING - Find task: face-detection, model type: None. Insufficient information to build preprocessor, skip building preprocessor
2023-09-06 16:59:36,014 - modelscope - INFO - loading model from C:\Users\nsg.cache\modelscope\hub\damo\cv_resnet50_face-detection_retinaface\pytorch_model.pt
2023-09-06 16:59:36,752 - modelscope - INFO - load model done
2023-09-06 16:59:41,415 - modelscope - INFO - Model revision not specified, use the latest revision: v1.0.0
2023-09-06 16:59:42,207 - modelscope - INFO - initiate model from C:\Users\nsg.cache\modelscope\hub\damo\cv_gpen_image-portrait-enhancement
2023-09-06 16:59:42,208 - modelscope - INFO - initiate model from location C:\Users\nsg.cache\modelscope\hub\damo\cv_gpen_image-portrait-enhancement.
2023-09-06 16:59:42,214 - modelscope - INFO - initialize model from C:\Users\nsg.cache\modelscope\hub\damo\cv_gpen_image-portrait-enhancement
Loading ResNet ArcFace
2023-09-06 16:59:46,402 - modelscope - INFO - load face enhancer model done
2023-09-06 16:59:47,087 - modelscope - INFO - load face detector model done
2023-09-06 16:59:47,939 - modelscope - INFO - load sr model done
2023-09-06 16:59:49,521 - modelscope - INFO - load fqa model done
100%|████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:06<00:00, 1.40s/it]
selected paths: C:\Users\nsg\stable-diffusion-webui\outputs/easyphoto-user-id-infos\jacky-5\original_backup\2.jpg total scores: 0.6295568212008975 face angles 0.9987909946784695
selected paths: C:\Users\nsg\stable-diffusion-webui\outputs/easyphoto-user-id-infos\jacky-5\original_backup\0.jpg total scores: 0.620656322531957 face angles 0.9955321676581762
selected paths: C:\Users\nsg\stable-diffusion-webui\outputs/easyphoto-user-id-infos\jacky-5\original_backup\4.jpg total scores: 0.6175696108051335 face angles 0.9642633823436466
selected paths: C:\Users\nsg\stable-diffusion-webui\outputs/easyphoto-user-id-infos\jacky-5\original_backup\3.jpg total scores: 0.5971493601266059 face angles 0.9678671137462829
selected paths: C:\Users\nsg\stable-diffusion-webui\outputs/easyphoto-user-id-infos\jacky-5\original_backup\1.jpg total scores: 0.19984603786853658 face angles 0.9990114183647643
jpg: 4.jpg face_id_scores 0.6175696108051335
jpg: 2.jpg face_id_scores 0.6295568212008975
jpg: 0.jpg face_id_scores 0.620656322531957
jpg: 3.jpg face_id_scores 0.5971493601266059
jpg: 1.jpg face_id_scores 0.19984603786853658
0it [00:00, ?it/s]2023-09-06 16:59:57,343 - modelscope - WARNING - task skin-retouching-torch input definition is missing
2023-09-06 17:00:02,053 - modelscope - WARNING - task skin-retouching-torch output keys are missing
3it [00:24, 8.01s/it]
Exception in thread Thread-41 (preprocess_images):
Traceback (most recent call last):
File "C:\Users\nsg\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
self.run()
File "C:\Users\nsg\AppData\Local\Programs\Python\Python310\lib\threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\nsg\stable-diffusion-webui\extensions\sd-webui-EasyPhoto\scripts\preprocess.py", line 125, in preprocess_images
sub_image = Image.fromarray(cv2.cvtColor(portrait_enhancement(sub_image)[OutputKeys.OUTPUT_IMG], cv2.COLOR_BGR2RGB))
File "C:\Users\nsg\stable-diffusion-webui\venv\lib\site-packages\modelscope\pipelines\base.py", line 219, in call
output = self._process_single(input, *args, **kwargs)
File "C:\Users\nsg\stable-diffusion-webui\venv\lib\site-packages\modelscope\pipelines\base.py", line 247, in _process_single
out = self.preprocess(input, **preprocess_params)
File "C:\Users\nsg\stable-diffusion-webui\venv\lib\site-packages\modelscope\pipelines\cv\image_portrait_enhancement_pipeline.py", line 178, in preprocess
img_sr = self.sr_process(img)
File "C:\Users\nsg\stable-diffusion-webui\venv\lib\site-packages\modelscope\pipelines\cv\image_portrait_enhancement_pipeline.py", line 161, in sr_process
output = self.sr_model(img)
File "C:\Users\nsg\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\nsg\stable-diffusion-webui\venv\lib\site-packages\modelscope\models\cv\super_resolution\rrdbnet_arch.py", line 123, in forward
body_feat = self.conv_body(self.body(feat))
File "C:\Users\nsg\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\nsg\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\container.py", line 217, in forward
input = module(input)
File "C:\Users\nsg\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\nsg\stable-diffusion-webui\venv\lib\site-packages\modelscope\models\cv\super_resolution\rrdbnet_arch.py", line 63, in forward
out = self.rdb1(x)
File "C:\Users\nsg\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\nsg\stable-diffusion-webui\venv\lib\site-packages\modelscope\models\cv\super_resolution\rrdbnet_arch.py", line 39, in forward
x3 = self.lrelu(self.conv3(torch.cat((x, x1, x2), 1)))
File "C:\Users\nsg\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\nsg\stable-diffusion-webui\extensions-builtin\Lora\networks.py", line 444, in network_Conv2d_forward
return originals.Conv2d_forward(self, input)
File "C:\Users\nsg\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py", line 463, in forward
return self._conv_forward(input, self.weight, self.bias)
File "C:\Users\nsg\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py", line 459, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 322.00 MiB (GPU 0; 12.00 GiB total capacity; 10.89 GiB already allocated; 0 bytes free; 11.08 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
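The allocator hint at the end of the OOM message can be applied by exporting PYTORCH_CUDA_ALLOC_CONF before the WebUI (and therefore torch) starts, e.g. in webui-user.bat. A minimal sketch — the 512 MiB value is an assumption to tune for your card, not a recommendation from the extension:

```python
import os

# Assumption for illustration: this must run before torch first initializes
# CUDA (so in practice set it in the shell / webui-user.bat, not mid-session).
# max_split_size_mb caps the size of cached allocator blocks, which reduces
# fragmentation when "reserved memory is >> allocated memory".
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:512"
```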
C:\Users\nsg\stable-diffusion-webui\extensions\sd-webui-EasyPhoto\scripts\train_kohya/train_lora.py
The following values were not passed to accelerate launch and had defaults used instead:
--num_processes was set to a value of 2
More than one GPU was found, enabling multi-GPU training.
If this was unintended please pass in --num_processes=1.
--num_machines was set to a value of 1
--dynamo_backend was set to a value of 'no'
To avoid this warning pass in values for each of the problematic parameters or run accelerate config.
NOTE: Redirects are currently not supported in Windows or MacOs.
[W ..\torch\csrc\distributed\c10d\socket.cpp:601] [c10d] The client socket has failed to connect to [USER-20230706TY]:3456 (system error: 10049 - The requested address is not valid in its context.)
A matching Triton is not available, some optimizations will not be enabled.
Error caught was: No module named 'triton'
A matching Triton is not available, some optimizations will not be enabled.
Error caught was: No module named 'triton'
2023-09-06 17:00:40,304 - modelscope - INFO - PyTorch version 2.0.1+cu118 Found.
2023-09-06 17:00:40,309 - modelscope - INFO - TensorFlow version 2.13.0 Found.
2023-09-06 17:00:40,309 - modelscope - INFO - Loading ast index from C:\Users\nsg\.cache\modelscope\ast_indexer
2023-09-06 17:00:40,338 - modelscope - INFO - PyTorch version 2.0.1+cu118 Found.
2023-09-06 17:00:40,343 - modelscope - INFO - TensorFlow version 2.13.0 Found.
2023-09-06 17:00:40,344 - modelscope - INFO - Loading ast index from C:\Users\nsg\.cache\modelscope\ast_indexer
2023-09-06 17:00:40,457 - modelscope - INFO - Loading done! Current index file version is 1.9.0, with md5 ebb1c3c0522899612853064e3129f6d1 and a total number of 921 components indexed
2023-09-06 17:00:40,464 - modelscope - INFO - Loading done! Current index file version is 1.9.0, with md5 ebb1c3c0522899612853064e3129f6d1 and a total number of 921 components indexed
[W ..\torch\csrc\distributed\c10d\socket.cpp:601] [c10d] The client socket has failed to connect to [USER-20230706TY]:3456 (system error: 10049 - The requested address is not valid in its context.)
Traceback (most recent call last):
  File "C:\Users\nsg\stable-diffusion-webui\extensions\sd-webui-EasyPhoto\scripts\train_kohya\train_lora.py", line 1394, in <module>
    main()
  File "C:\Users\nsg\stable-diffusion-webui\extensions\sd-webui-EasyPhoto\scripts\train_kohya\train_lora.py", line 803, in main
    accelerator = Accelerator(
  File "C:\Users\nsg\stable-diffusion-webui\venv\lib\site-packages\accelerate\accelerator.py", line 358, in __init__
    self.state = AcceleratorState(
  File "C:\Users\nsg\stable-diffusion-webui\venv\lib\site-packages\accelerate\state.py", line 720, in __init__
    PartialState(cpu, **kwargs)
  File "C:\Users\nsg\stable-diffusion-webui\venv\lib\site-packages\accelerate\state.py", line 192, in __init__
    torch.distributed.init_process_group(backend=self.backend, **kwargs)
  File "C:\Users\nsg\stable-diffusion-webui\venv\lib\site-packages\torch\distributed\distributed_c10d.py", line 907, in init_process_group
    default_pg = _new_process_group_helper(
  File "C:\Users\nsg\stable-diffusion-webui\venv\lib\site-packages\torch\distributed\distributed_c10d.py", line 1013, in _new_process_group_helper
    raise RuntimeError("Distributed package doesn't have NCCL " "built in")
RuntimeError: Distributed package doesn't have NCCL built in
(This traceback is printed once by each of the two worker processes.)
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 4880) of binary: C:\Users\nsg\stable-diffusion-webui\venv\Scripts\python.exe
Traceback (most recent call last):
File "C:\Users\nsg\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Users\nsg\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
exec(code, run_globals)
File "C:\Users\nsg\stable-diffusion-webui\venv\lib\site-packages\accelerate\commands\launch.py", line 989, in <module>
main()
File "C:\Users\nsg\stable-diffusion-webui\venv\lib\site-packages\accelerate\commands\launch.py", line 985, in main
launch_command(args)
File "C:\Users\nsg\stable-diffusion-webui\venv\lib\site-packages\accelerate\commands\launch.py", line 970, in launch_command
multi_gpu_launcher(args)
File "C:\Users\nsg\stable-diffusion-webui\venv\lib\site-packages\accelerate\commands\launch.py", line 646, in multi_gpu_launcher
distrib_run.run(args)
File "C:\Users\nsg\stable-diffusion-webui\venv\lib\site-packages\torch\distributed\run.py", line 785, in run
elastic_launch(
File "C:\Users\nsg\stable-diffusion-webui\venv\lib\site-packages\torch\distributed\launcher\api.py", line 134, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "C:\Users\nsg\stable-diffusion-webui\venv\lib\site-packages\torch\distributed\launcher\api.py", line 250, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:

C:\Users\nsg\stable-diffusion-webui\extensions\sd-webui-EasyPhoto\scripts\train_kohya/train_lora.py FAILED

Failures:
[1]:
time : 2023-09-06_17:00:46
host : USER-20230706TY
rank : 1 (local_rank: 1)
exitcode : 1 (pid: 9704)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html

Root Cause (first observed failure):
[0]:
time : 2023-09-06_17:00:46
host : USER-20230706TY
rank : 0 (local_rank: 0)
exitcode : 1 (pid: 4880)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html

Error executing the command: Command '['C:\Users\nsg\stable-diffusion-webui\venv\Scripts\python.exe', '-m', 'accelerate.commands.launch', '--mixed_precision=fp16', '--main_process_port=3456', 'C:\Users\nsg\stable-diffusion-webui\extensions\sd-webui-EasyPhoto\scripts\train_kohya/train_lora.py', '--pretrained_model_name_or_path=extensions\sd-webui-EasyPhoto\models\stable-diffusion-v1-5', '--pretrained_model_ckpt=models\Stable-diffusion\Chilloutmix-Ni-pruned-fp16-fix.safetensors', '--train_data_dir=outputs\easyphoto-user-id-infos\jacky-5\processed_images', '--caption_column=text', '--resolution=512', '--random_flip', '--train_batch_size=1', '--gradient_accumulation_steps=4', '--dataloader_num_workers=0', '--max_train_steps=800', '--checkpointing_steps=100', '--learning_rate=0.0001', '--lr_scheduler=constant', '--lr_warmup_steps=0', '--train_text_encoder', '--seed=42', '--rank=128', '--network_alpha=64', '--validation_prompt=easyphoto_face, easyphoto, 1person', '--validation_steps=100', '--output_dir=outputs\easyphoto-user-id-infos\jacky-5\user_weights', '--logging_dir=outputs\easyphoto-user-id-infos\jacky-5\user_weights', '--enable_xformers_memory_efficient_attention', '--mixed_precision=fp16', '--template_dir=extensions\sd-webui-EasyPhoto\models\training_templates', '--template_mask', '--merge_best_lora_based_face_id', '--merge_best_lora_name=jacky-5']' returned non-zero exit status 1.
Traceback (most recent call last):
File "C:\Users\nsg\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
output = await app.get_blocks().process_api(
File "C:\Users\nsg\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
result = await self.call_function(
File "C:\Users\nsg\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
prediction = await anyio.to_thread.run_sync(
File "C:\Users\nsg\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "C:\Users\nsg\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "C:\Users\nsg\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
result = context.run(func, *args)
File "C:\Users\nsg\stable-diffusion-webui\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
response = f(*args, **kwargs)
File "C:\Users\nsg\stable-diffusion-webui\extensions\sd-webui-EasyPhoto\scripts\easyphoto_train.py", line 218, in easyphoto_train_forward
copyfile(best_weight_path, webui_save_path)
File "C:\Users\nsg\AppData\Local\Programs\Python\Python310\lib\shutil.py", line 254, in copyfile
with open(src, 'rb') as fsrc:
FileNotFoundError: [Errno 2] No such file or directory: 'C:\Users\nsg\stable-diffusion-webui\outputs/easyphoto-user-id-infos\jacky-5\user_weights\best_outputs/jacky-5.safetensors'
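The final FileNotFoundError is only a symptom: training aborted earlier because the Windows build of PyTorch does not include NCCL, and with two GPUs visible accelerate defaulted to `--num_processes=2` and the multi-GPU code path. Besides passing `--num_processes=1` as the warning suggests, a sketch of the workaround — the assumption here is that hiding all but one GPU before the WebUI starts keeps accelerate on the single-process path:

```python
import os

# Assumption for illustration: CUDA_VISIBLE_DEVICES must be set before the
# WebUI (and therefore accelerate/torch) starts, e.g. in webui-user.bat.
# With only one GPU visible, accelerate no longer launches the distributed
# path that requires the NCCL backend missing from Windows torch builds.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
```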

[cuDNN version too low, incompatible with PyTorch — PyTorch 2.0.1 requires cuDNN 8.5] Error: RuntimeError: FIND was unable to find an engine to execute this computation

OS: Ubuntu 20.04
python: 3.10
pytorch:2.0.1
cuda: 11.7
cudnn: 8.2.4
GPU: Nvidia-v100

Encountered an error during training:

Traceback (most recent call last):
  File "/home/username/miniconda3/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/home/username/miniconda3/lib/python3.10/threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "/home/username/src/AIGC-Pic/stable-diffusion-webui/extensions/EasyPhoto-sd-webui/scripts/preprocess.py", line 125, in preprocess_images
    sub_image = Image.fromarray(cv2.cvtColor(portrait_enhancement(sub_image)[OutputKeys.OUTPUT_IMG], cv2.COLOR_BGR2RGB))
  File "/home/username/src/AIGC-Pic/stable-diffusion-webui/venv/lib/python3.10/site-packages/modelscope/pipelines/base.py", line 219, in __call__
    output = self._process_single(input, *args, **kwargs)
  File "/home/username/src/AIGC-Pic/stable-diffusion-webui/venv/lib/python3.10/site-packages/modelscope/pipelines/base.py", line 247, in _process_single
    out = self.preprocess(input, **preprocess_params)
  File "/home/username/src/AIGC-Pic/stable-diffusion-webui/venv/lib/python3.10/site-packages/modelscope/pipelines/cv/image_portrait_enhancement_pipeline.py", line 178, in preprocess
    img_sr = self.sr_process(img)
  File "/home/username/src/AIGC-Pic/stable-diffusion-webui/venv/lib/python3.10/site-packages/modelscope/pipelines/cv/image_portrait_enhancement_pipeline.py", line 161, in sr_process
    output = self.sr_model(img)
  File "/home/username/src/AIGC-Pic/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/username/src/AIGC-Pic/stable-diffusion-webui/venv/lib/python3.10/site-packages/modelscope/models/cv/super_resolution/rrdbnet_arch.py", line 123, in forward
    body_feat = self.conv_body(self.body(feat))
  File "/home/username/src/AIGC-Pic/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/username/src/AIGC-Pic/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/container.py", line 217, in forward
    input = module(input)
  File "/home/username/src/AIGC-Pic/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/username/src/AIGC-Pic/stable-diffusion-webui/venv/lib/python3.10/site-packages/modelscope/models/cv/super_resolution/rrdbnet_arch.py", line 63, in forward
    out = self.rdb1(x)
  File "/home/username/src/AIGC-Pic/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/username/src/AIGC-Pic/stable-diffusion-webui/venv/lib/python3.10/site-packages/modelscope/models/cv/super_resolution/rrdbnet_arch.py", line 40, in forward
    x4 = self.lrelu(self.conv4(torch.cat((x, x1, x2, x3), 1)))
  File "/home/username/src/AIGC-Pic/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/username/src/AIGC-Pic/stable-diffusion-webui/extensions-builtin/Lora/lora.py", line 415, in lora_Conv2d_forward
    return torch.nn.Conv2d_forward_before_lora(self, input)
  File "/home/username/src/AIGC-Pic/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 463, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/home/username/src/AIGC-Pic/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 459, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: FIND was unable to find an engine to execute this computation
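The environment above ships cuDNN 8.2.4, while the PyTorch 2.0.1 wheels are built against cuDNN 8.5, which is what triggers the "FIND was unable to find an engine" error. `torch.backends.cudnn.version()` reports the installed version as an integer, so a small comparison helper (a sketch — the 8500 threshold reflects the cuDNN 8.5 requirement stated above):

```python
def cudnn_is_compatible(version, required=8500):
    """For cuDNN 8.x, torch.backends.cudnn.version() encodes the version as
    major*1000 + minor*100 + patch, e.g. 8204 for cuDNN 8.2.4."""
    return version is not None and version >= required

# The setup from this report: cuDNN 8.2.4 is too old for PyTorch 2.0.1.
print(cudnn_is_compatible(8204))  # False
print(cudnn_is_compatible(8500))  # True
```

In a live session, pass `torch.backends.cudnn.version()` as the first argument; if it returns less than 8500, upgrade cuDNN (or reinstall the matching PyTorch wheel).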

FileNotFoundError: [Errno 2] No such file or directory:

fixed by #13

===============
Training fails right at the start with the following error:
FileNotFoundError: [Errno 2] No such file or directory: 'D:\kkkkk\release\SD_webui_with_aki_launcher_dev\outputs/easyphoto-user-id-infos\lyf\user_weights\best_outputs/lyf.safetensors'

lyf is the user ID that was set.
Full log:
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: D:\kkkkk\release\SD_webui_with_aki_launcher_dev\extensions\sd-webui-EasyPhoto\models\buffalo_l\2d106det.onnx landmark_2d_106 ['None', 3, 192, 192] 0.0 1.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: D:\kkkkk\release\SD_webui_with_aki_launcher_dev\extensions\sd-webui-EasyPhoto\models\buffalo_l\det_10g.onnx detection [1, 3, '?', '?'] 127.5 128.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: D:\kkkkk\release\SD_webui_with_aki_launcher_dev\extensions\sd-webui-EasyPhoto\models\buffalo_l\w600k_r50.onnx recognition ['None', 3, 112, 112] 127.5 127.5
set det-size: (640, 640)
2023-09-04 22:40:00,481 - modelscope - INFO - Model revision not specified, use the latest revision: v2.0.2
2023-09-04 22:40:02,213 - modelscope - INFO - initiate model from C:\Users\zcn6842\.cache\modelscope\hub\damo\cv_resnet50_face-detection_retinaface
2023-09-04 22:40:02,214 - modelscope - INFO - initiate model from location C:\Users\zcn6842\.cache\modelscope\hub\damo\cv_resnet50_face-detection_retinaface.
2023-09-04 22:40:02,216 - modelscope - WARNING - No preprocessor field found in cfg.
2023-09-04 22:40:02,216 - modelscope - WARNING - No val key and type key found in preprocessor domain of configuration.json file.
2023-09-04 22:40:02,216 - modelscope - WARNING - Cannot find available config to build preprocessor at mode inference, current config: {'model_dir': 'C:\Users\zcn6842\.cache\modelscope\hub\damo\cv_resnet50_face-detection_retinaface'}. trying to build by task and model information.
2023-09-04 22:40:02,216 - modelscope - WARNING - Find task: face-detection, model type: None. Insufficient information to build preprocessor, skip building preprocessor
2023-09-04 22:40:02,218 - modelscope - INFO - loading model from C:\Users\zcn6842\.cache\modelscope\hub\damo\cv_resnet50_face-detection_retinaface\pytorch_model.pt
2023-09-04 22:40:02,513 - modelscope - INFO - load model done
2023-09-04 22:40:02,921 - modelscope - INFO - Model revision not specified, use the latest revision: v1.0.0
2023-09-04 22:40:03,086 - modelscope - INFO - initiate model from C:\Users\zcn6842\.cache\modelscope\hub\damo\cv_u2net_salient-detection
2023-09-04 22:40:03,086 - modelscope - INFO - initiate model from location C:\Users\zcn6842\.cache\modelscope\hub\damo\cv_u2net_salient-detection.
2023-09-04 22:40:03,087 - modelscope - INFO - initialize model from C:\Users\zcn6842\.cache\modelscope\hub\damo\cv_u2net_salient-detection
2023-09-04 22:40:03,303 - modelscope - WARNING - No preprocessor field found in cfg.
2023-09-04 22:40:03,303 - modelscope - WARNING - No val key and type key found in preprocessor domain of configuration.json file.
2023-09-04 22:40:03,304 - modelscope - WARNING - Cannot find available config to build preprocessor at mode inference, current config: {'model_dir': 'C:\Users\zcn6842\.cache\modelscope\hub\damo\cv_u2net_salient-detection'}. trying to build by task and model information.
2023-09-04 22:40:03,304 - modelscope - WARNING - No preprocessor key ('detection', 'semantic-segmentation') found in PREPROCESSOR_MAP, skip building preprocessor.
2023-09-04 22:40:03,703 - modelscope - INFO - Use user-specified model revision: v1.0.1
2023-09-04 22:40:03,891 - modelscope - WARNING - ('PIPELINES', 'skin-retouching-torch', 'skin-retouching-torch') not found in ast index file
2023-09-04 22:40:03,891 - modelscope - INFO - initiate model from C:\Users\zcn6842\.cache\modelscope\hub\damo\cv_unet_skin_retouching_torch
2023-09-04 22:40:03,891 - modelscope - INFO - initiate model from location C:\Users\zcn6842\.cache\modelscope\hub\damo\cv_unet_skin_retouching_torch.
2023-09-04 22:40:03,894 - modelscope - WARNING - No preprocessor field found in cfg.
2023-09-04 22:40:03,894 - modelscope - WARNING - No val key and type key found in preprocessor domain of configuration.json file.
2023-09-04 22:40:03,894 - modelscope - WARNING - Cannot find available config to build preprocessor at mode inference, current config: {'model_dir': 'C:\Users\zcn6842\.cache\modelscope\hub\damo\cv_unet_skin_retouching_torch'}. trying to build by task and model information.
2023-09-04 22:40:03,894 - modelscope - WARNING - Find task: skin-retouching-torch, model type: None. Insufficient information to build preprocessor, skip building preprocessor
2023-09-04 22:40:04,522 - modelscope - INFO - Model revision not specified, use the latest revision: v2.0.2
2023-09-04 22:40:06,310 - modelscope - INFO - initiate model from C:\Users\zcn6842\.cache\modelscope\hub\damo\cv_resnet50_face-detection_retinaface
2023-09-04 22:40:06,310 - modelscope - INFO - initiate model from location C:\Users\zcn6842\.cache\modelscope\hub\damo\cv_resnet50_face-detection_retinaface.
2023-09-04 22:40:06,315 - modelscope - WARNING - No preprocessor field found in cfg.
2023-09-04 22:40:06,315 - modelscope - WARNING - No val key and type key found in preprocessor domain of configuration.json file.
2023-09-04 22:40:06,315 - modelscope - WARNING - Cannot find available config to build preprocessor at mode inference, current config: {'model_dir': 'C:\Users\zcn6842\.cache\modelscope\hub\damo\cv_resnet50_face-detection_retinaface'}. trying to build by task and model information.
2023-09-04 22:40:06,315 - modelscope - WARNING - Find task: face-detection, model type: None. Insufficient information to build preprocessor, skip building preprocessor
2023-09-04 22:40:06,317 - modelscope - INFO - loading model from C:\Users\zcn6842\.cache\modelscope\hub\damo\cv_resnet50_face-detection_retinaface\pytorch_model.pt
2023-09-04 22:40:06,627 - modelscope - INFO - load model done
2023-09-04 22:40:08,332 - modelscope - INFO - Model revision not specified, use the latest revision: v1.0.0
2023-09-04 22:40:08,652 - modelscope - INFO - initiate model from C:\Users\zcn6842\.cache\modelscope\hub\damo\cv_gpen_image-portrait-enhancement
2023-09-04 22:40:08,653 - modelscope - INFO - initiate model from location C:\Users\zcn6842\.cache\modelscope\hub\damo\cv_gpen_image-portrait-enhancement.
2023-09-04 22:40:08,653 - modelscope - INFO - initialize model from C:\Users\zcn6842\.cache\modelscope\hub\damo\cv_gpen_image-portrait-enhancement
Loading ResNet ArcFace
2023-09-04 22:40:10,276 - modelscope - INFO - load face enhancer model done
2023-09-04 22:40:10,553 - modelscope - INFO - load face detector model done
2023-09-04 22:40:10,826 - modelscope - INFO - load sr model done
2023-09-04 22:40:11,490 - modelscope - INFO - load fqa model done
selected paths: D:\kkkkk\release\SD_webui_with_aki_launcher_dev\outputs/easyphoto-user-id-infos\lyf\original_backup\5.jpg total scores: 0.6234065605623983 face angles 0.9548858022264274
selected paths: D:\kkkkk\release\SD_webui_with_aki_launcher_dev\outputs/easyphoto-user-id-infos\lyf\original_backup\2.jpg total scores: 0.6102422407964487 face angles 0.9395583143659086
selected paths: D:\kkkkk\release\SD_webui_with_aki_launcher_dev\outputs/easyphoto-user-id-infos\lyf\original_backup\0.jpg total scores: 0.5969747537782216 face angles 0.9589047791489895
selected paths: D:\kkkkk\release\SD_webui_with_aki_launcher_dev\outputs/easyphoto-user-id-infos\lyf\original_backup\4.jpg total scores: 0.5931145356793212 face angles 0.9300448887992161
selected paths: D:\kkkkk\release\SD_webui_with_aki_launcher_dev\outputs/easyphoto-user-id-infos\lyf\original_backup\3.jpg total scores: 0.5681950943441831 face angles 0.9556665541133954
selected paths: D:\kkkkk\release\SD_webui_with_aki_launcher_dev\outputs/easyphoto-user-id-infos\lyf\original_backup\6.jpg total scores: 0.5567725630669526 face angles 0.9620961552431176
selected paths: D:\kkkkk\release\SD_webui_with_aki_launcher_dev\outputs/easyphoto-user-id-infos\lyf\original_backup\1.jpg total scores: 0.49141735771875494 face angles 0.7661806085455679
jpg: 5.jpg face_id_scores 0.6234065605623983
jpg: 2.jpg face_id_scores 0.6102422407964487
jpg: 1.jpg face_id_scores 0.49141735771875494
jpg: 4.jpg face_id_scores 0.5931145356793212
jpg: 0.jpg face_id_scores 0.5969747537782216
jpg: 3.jpg face_id_scores 0.5681950943441831
jpg: 6.jpg face_id_scores 0.5567725630669526
2023-09-04 22:40:15,963 - modelscope - WARNING - task skin-retouching-torch input definition is missing
2023-09-04 22:40:16,999 - modelscope - WARNING - task skin-retouching-torch output keys are missing
2023-09-04 22:40:17,175 - modelscope - WARNING - task semantic-segmentation input definition is missing
save processed image to D:\kkkkk\release\SD_webui_with_aki_launcher_dev\outputs/easyphoto-user-id-infos\lyf\processed_images\train\0.jpg
save processed image to D:\kkkkk\release\SD_webui_with_aki_launcher_dev\outputs/easyphoto-user-id-infos\lyf\processed_images\train\1.jpg
save processed image to D:\kkkkk\release\SD_webui_with_aki_launcher_dev\outputs/easyphoto-user-id-infos\lyf\processed_images\train\2.jpg
save processed image to D:\kkkkk\release\SD_webui_with_aki_launcher_dev\outputs/easyphoto-user-id-infos\lyf\processed_images\train\3.jpg
save processed image to D:\kkkkk\release\SD_webui_with_aki_launcher_dev\outputs/easyphoto-user-id-infos\lyf\processed_images\train\4.jpg
save processed image to D:\kkkkk\release\SD_webui_with_aki_launcher_dev\outputs/easyphoto-user-id-infos\lyf\processed_images\train\5.jpg
save processed image to D:\kkkkk\release\SD_webui_with_aki_launcher_dev\outputs/easyphoto-user-id-infos\lyf\processed_images\train\6.jpg
D:\kkkkk\release\SD_webui_with_aki_launcher_dev\extensions\sd-webui-EasyPhoto\scripts\train_kohya/train_lora.py
Error executing the command: Command '['accelerate', 'launch', '--mixed_precision=fp16', '--main_process_port=3456', 'D:\kkkkk\release\SD_webui_with_aki_launcher_dev\extensions\sd-webui-EasyPhoto\scripts\train_kohya/train_lora.py', '--pretrained_model_name_or_path=extensions\sd-webui-EasyPhoto\models\stable-diffusion-v1-5', '--pretrained_model_ckpt=models\Stable-diffusion\Chilloutmix-Ni-pruned-fp16-fix.safetensors', '--train_data_dir=outputs\easyphoto-user-id-infos\lyf\processed_images', '--caption_column=text', '--resolution=512', '--random_flip', '--train_batch_size=1', '--gradient_accumulation_steps=4', '--dataloader_num_workers=0', '--max_train_steps=800', '--checkpointing_steps=100', '--learning_rate=0.0001', '--lr_scheduler=constant', '--lr_warmup_steps=0', '--train_text_encoder', '--seed=42', '--rank=128', '--network_alpha=64', '--validation_prompt=easyphoto_face, easyphoto, 1person', '--validation_steps=100', '--output_dir=outputs\easyphoto-user-id-infos\lyf\user_weights', '--logging_dir=outputs\easyphoto-user-id-infos\lyf\user_weights', '--enable_xformers_memory_efficient_attention', '--mixed_precision=fp16', '--template_dir=extensions\sd-webui-EasyPhoto\models\training_templates', '--template_mask', '--merge_best_lora_based_face_id', '--merge_best_lora_name=lyf']' returned non-zero exit status 1.
Traceback (most recent call last):
File "D:\kkkkk\release\SD_webui_with_aki_launcher_dev\py310\lib\site-packages\gradio\routes.py", line 488, in run_predict
output = await app.get_blocks().process_api(
File "D:\kkkkk\release\SD_webui_with_aki_launcher_dev\py310\lib\site-packages\gradio\blocks.py", line 1431, in process_api
result = await self.call_function(
File "D:\kkkkk\release\SD_webui_with_aki_launcher_dev\py310\lib\site-packages\gradio\blocks.py", line 1103, in call_function
prediction = await anyio.to_thread.run_sync(
File "D:\kkkkk\release\SD_webui_with_aki_launcher_dev\py310\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "D:\kkkkk\release\SD_webui_with_aki_launcher_dev\py310\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "D:\kkkkk\release\SD_webui_with_aki_launcher_dev\py310\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
result = context.run(func, *args)
File "D:\kkkkk\release\SD_webui_with_aki_launcher_dev\py310\lib\site-packages\gradio\utils.py", line 707, in wrapper
response = f(*args, **kwargs)
File "D:\kkkkk\release\SD_webui_with_aki_launcher_dev\extensions\sd-webui-EasyPhoto\scripts\easyphoto_train.py", line 216, in easyphoto_train_forward
copyfile(best_weight_path, webui_save_path)
File "D:\kkkkk\release\SD_webui_with_aki_launcher_dev\py310\lib\shutil.py", line 254, in copyfile
with open(src, 'rb') as fsrc:
FileNotFoundError: [Errno 2] No such file or directory: 'D:\kkkkk\release\SD_webui_with_aki_launcher_dev\outputs/easyphoto-user-id-infos\lyf\user_weights\best_outputs/lyf.safetensors'
Hint: the Python runtime threw an exception. Please check the troubleshooting page.
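As in the earlier report, the missing .safetensors file is the last symptom, not the cause: accelerate exited with a non-zero status before best_outputs was ever written, so the copyfile step has nothing to copy. A quick sketch to confirm this — the helper names are hypothetical, and the path layout is taken from the log above:

```python
from pathlib import Path

def best_lora_path(webui_root, user_id):
    # Output layout as it appears in the log above.
    return (Path(webui_root) / "outputs" / "easyphoto-user-id-infos"
            / user_id / "user_weights" / "best_outputs" / f"{user_id}.safetensors")

def lora_was_written(webui_root, user_id):
    # False means training failed earlier: look for the first error in the
    # `accelerate launch` output, not at the final copyfile traceback.
    return best_lora_path(webui_root, user_id).exists()
```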

At the final step, saving the LoRA fails:

saving checkpoint: outputs\easyphoto-user-id-infos\ruoruo-test2\user_weights\pytorch_lora_weights.safetensors
loading u-net:
loading vae:
loading text encoder:
You have disabled the safety checker for <class 'diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_inpaint.StableDiffusionInpaintPipeline'> by passing safety_checker=None. Ensure that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at huggingface/diffusers#254 .
You have loaded a UNet with 4 input channels which.
{'lower_order_final', 'dynamic_thresholding_ratio', 'solver_order', 'variance_type', 'solver_type', 'lambda_min_clipped', 'algorithm_type', 'timestep_spacing', 'sample_max_value', 'prediction_type', 'thresholding', 'use_karras_sigmas'} was not found in config. Values will be initialized to default values.
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: D:\Desktop\SD\sd-webui-aki-v4.2\extensions\sd-webui-EasyPhoto-main\models\buffalo_l\2d106det.onnx landmark_2d_106 ['None', 3, 192, 192] 0.0 1.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: D:\Desktop\SD\sd-webui-aki-v4.2\extensions\sd-webui-EasyPhoto-main\models\buffalo_l\det_10g.onnx detection [1, 3, '?', '?'] 127.5 128.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: D:\Desktop\SD\sd-webui-aki-v4.2\extensions\sd-webui-EasyPhoto-main\models\buffalo_l\w600k_r50.onnx recognition ['None', 3, 112, 112] 127.5 127.5
set det-size: (640, 640)
┌───────────────────── Traceback (most recent call last) ─────────────────────┐
│ D:\Desktop\SD\sd-webui-aki-v4.2\extensions\sd-webui-EasyPhoto-main\scripts\ │
│ train_kohya\train_lora.py:1416 in <module> │
│ │
│ 1413 │
│ 1414 │
│ 1415 if __name__ == "__main__": │
│ > 1416 │ main() │
│ 1417 │
│ │
│ D:\Desktop\SD\sd-webui-aki-v4.2\extensions\sd-webui-EasyPhoto-main\scripts\ │
│ train_kohya\train_lora.py:1394 in main │
│ │
│ 1391 │ │ if args.merge_best_lora_based_face_id: │
│ 1392 │ │ │ pivot_dir = os.path.join(args.train_data_dir, 'train') │
│ 1393 │ │ │ merge_best_lora_name = args.train_data_dir.split("/")[-1 │
│ > 1394 │ │ │ t_result_list, tlist, scores = eval_jpg_with_faceid(pivo │
│ 1395 │ │ │ │
│ 1396 │ │ │ for index, line in enumerate(zip(tlist, scores)): │
│ 1397 │ │ │ │ print(f"Top-{str(index)}: {str(line)}") │
│ │
│ D:\Desktop\SD\sd-webui-aki-v4.2\extensions\sd-webui-EasyPhoto-main\scripts\ │
│ train_kohya\train_lora.py:334 in eval_jpg_with_faceid │
│ │
│ 331 │ embedding_list = [] │
│ 332 │ for img in face_image_list: │
│ 333 │ │ image = Image.open(img) │
│ > 334 │ │ embedding = face_recognition.get(np.array(image), face_analy │
│ 335 │ │ embedding = np.array([embedding / np.linalg.norm(embedding, │
│ 336 │ │ embedding_list.append(embedding) │
│ 337 │ embedding_array = np.vstack(embedding_list) │
└─────────────────────────────────────────────────────────────────────────────┘
IndexError: list index out of range
Note: the Python runtime raised an exception. Please check the troubleshooting page.
┌───────────────────── Traceback (most recent call last) ─────────────────────┐
│ D:\Desktop\SD\sd-webui-aki-v4.2\python\lib\runpy.py:196 in │
│ _run_module_as_main │
│ │
│ 193 │ main_globals = sys.modules["__main__"].__dict__ │
│ 194 │ if alter_argv: │
│ 195 │ │ sys.argv[0] = mod_spec.origin │
│ > 196 │ return _run_code(code, main_globals, None, │
│ 197 │ │ │ │ │ "__main__", mod_spec) │
│ 198 │
│ 199 def run_module(mod_name, init_globals=None, │
│ │
│ D:\Desktop\SD\sd-webui-aki-v4.2\python\lib\runpy.py:86 in _run_code │
│ │
│ 83 │ │ │ │ │ loader = loader, │
│ 84 │ │ │ │ │ package = pkg_name, │
│ 85 │ │ │ │ │ spec = mod_spec) │
│ > 86 │ exec(code, run_globals) │
│ 87 │ return run_globals │
│ 88 │
│ 89 def _run_module_code(code, init_globals=None, │
│ │
│ D:\Desktop\SD\sd-webui-aki-v4.2\python\lib\site-packages\accelerate\command │
│ s\launch.py:933 in <module> │
│ │
│ 930 │
│ 931 │
│ 932 if __name__ == "__main__": │
│ > 933 │ main() │
│ 934 │
│ │
│ D:\Desktop\SD\sd-webui-aki-v4.2\python\lib\site-packages\accelerate\command │
│ s\launch.py:929 in main │
│ │
│ 926 def main(): │
│ 927 │ parser = launch_command_parser() │
│ 928 │ args = parser.parse_args() │
│ > 929 │ launch_command(args) │
│ 930 │
│ 931 │
│ 932 if __name__ == "__main__": │
│ │
│ D:\Desktop\SD\sd-webui-aki-v4.2\python\lib\site-packages\accelerate\command │
│ s\launch.py:923 in launch_command │
│ │
│ 920 │ elif defaults is not None and defaults.compute_environment == Com │
│ 921 │ │ sagemaker_launcher(defaults, args) │
│ 922 │ else: │
│ > 923 │ │ simple_launcher(args) │
│ 924 │
│ 925 │
│ 926 def main(): │
│ │
│ D:\Desktop\SD\sd-webui-aki-v4.2\python\lib\site-packages\accelerate\command │
│ s\launch.py:579 in simple_launcher │
│ │
│ 576 │ process.wait() │
│ 577 │ if process.returncode != 0: │
│ 578 │ │ if not args.quiet: │
│ > 579 │ │ │ raise subprocess.CalledProcessError(returncode=process.re │
│ 580 │ │ else: │
│ 581 │ │ │ sys.exit(1) │
│ 582 │
└─────────────────────────────────────────────────────────────────────────────┘
CalledProcessError: Command
Note: the Python runtime raised an exception. Please check the troubleshooting page.
'['D:\Desktop\SD\sd-webui-aki-v4.2\python\python.exe',
'D:\Desktop\SD\sd-webui-aki-v4.2\extensions\sd-webui-EasyPhoto-main\scripts\train_kohya/train_lora.py',
'--pretrained_model_name_or_path=extensions\sd-webui-EasyPhoto-main\models\stable-diffusion-v1-5',
'--pretrained_model_ckpt=models\Stable-diffusion\sd-v1-5.ckpt',
'--train_data_dir=outputs\easyphoto-user-id-infos\ruoruo-test2\processed_images',
'--caption_column=text', '--resolution=512', '--random_flip',
'--train_batch_size=2', '--gradient_accumulation_steps=4',
'--dataloader_num_workers=0', '--max_train_steps=900',
'--checkpointing_steps=900', '--learning_rate=0.0001',
'--lr_scheduler=constant', '--lr_warmup_steps=0', '--train_text_encoder',
'--seed=42', '--rank=128', '--network_alpha=64',
'--validation_prompt=easyphoto_face, easyphoto, 1person',
'--validation_steps=900',
'--output_dir=outputs\easyphoto-user-id-infos\ruoruo-test2\user_weights',
'--logging_dir=outputs\easyphoto-user-id-infos\ruoruo-test2\user_weights',
'--enable_xformers_memory_efficient_attention', '--mixed_precision=fp16',
'--template_dir=extensions\sd-webui-EasyPhoto-main\models\training_templates',
'--template_mask', '--merge_best_lora_based_face_id',
'--merge_best_lora_name=ruoruo-test2',
'--cache_log_file=D:\Desktop\SD\sd-webui-aki-v4.2\extensions\sd-webui-EasyPhoto-main\train_kohya_log.txt']' returned non-zero exit status 1.
Error executing the command: Command '['D:\Desktop\SD\sd-webui-aki-v4.2\python\python.exe', '-m', 'accelerate.commands.launch', '--mixed_precision=fp16', '--main_process_port=3456', 'D:\Desktop\SD\sd-webui-aki-v4.2\extensions\sd-webui-EasyPhoto-main\scripts\train_kohya/train_lora.py', '--pretrained_model_name_or_path=extensions\sd-webui-EasyPhoto-main\models\stable-diffusion-v1-5', '--pretrained_model_ckpt=models\Stable-diffusion\sd-v1-5.ckpt', '--train_data_dir=outputs\easyphoto-user-id-infos\ruoruo-test2\processed_images', '--caption_column=text', '--resolution=512', '--random_flip', '--train_batch_size=2', '--gradient_accumulation_steps=4', '--dataloader_num_workers=0', '--max_train_steps=900', '--checkpointing_steps=900', '--learning_rate=0.0001', '--lr_scheduler=constant', '--lr_warmup_steps=0', '--train_text_encoder', '--seed=42', '--rank=128', '--network_alpha=64', '--validation_prompt=easyphoto_face, easyphoto, 1person', '--validation_steps=900', '--output_dir=outputs\easyphoto-user-id-infos\ruoruo-test2\user_weights', '--logging_dir=outputs\easyphoto-user-id-infos\ruoruo-test2\user_weights', '--enable_xformers_memory_efficient_attention', '--mixed_precision=fp16', '--template_dir=extensions\sd-webui-EasyPhoto-main\models\training_templates', '--template_mask', '--merge_best_lora_based_face_id', '--merge_best_lora_name=ruoruo-test2', '--cache_log_file=D:\Desktop\SD\sd-webui-aki-v4.2\extensions\sd-webui-EasyPhoto-main\train_kohya_log.txt']' returned non-zero exit status 1.

[Mac/Colab Support] Can macOS be supported?

Support for machine learning on Apple-silicon Macs is becoming increasingly extensive. Frameworks like PyTorch and TensorFlow are well adapted to these systems. stable-diffusion-webui runs normally on macOS, and its features (models, LoRA, and ControlNet extensions) are fully functional. Even in the LLM domain, well-known large models such as Llama 2 can run on Apple-silicon machines. It is therefore hoped that this project will also support macOS.

[User can set basemodel when training/inference & support multi controlnet dir/path] [Suggestion] Allow custom locations for the base model and ControlNet models

When using EasyPhoto, it would be good to keep an entry point where users can fill in the path of the base model their LoRA was trained on, so that chilloutMix does not need to be downloaded from Alibaba Cloud OSS. Also, most users who come across this plugin have already installed ControlNet, so a custom path for ControlNet models could be exposed at the top level as well, avoiding repeated downloads. Downloading the base model and every ControlNet model again wastes both the author's bandwidth and the user's storage space.
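The suggested behaviour can be sketched as a simple path-resolution helper: reuse an existing user-configured checkpoint and download only as a last resort. `resolve_model` and `candidate_paths` are hypothetical names, not the extension's actual API:

```python
# Hypothetical sketch: prefer a user-supplied local model path over downloading.
import os

def resolve_model(candidate_paths, download):
    """Return the first existing local path, else fall back to download()."""
    for path in candidate_paths:
        if path and os.path.exists(path):
            return path                    # reuse what the user already has
    return download()                      # last resort: fetch the model
```

With this pattern, users who already have ChilloutMix or the ControlNet models locally would only need to point `candidate_paths` at them.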

not enough memory

My machine has 32 GB of RAM; why does the error below occur?

2023-09-06 17:51:45,283 - ControlNet - INFO - ControlNet model control_v11p_sd15_openpose [cab727d4] loaded.
2023-09-06 17:51:45,373 - ControlNet - INFO - Loading preprocessor: openpose_full
2023-09-06 17:51:45,373 - ControlNet - INFO - preprocessor resolution = 512
2023-09-06 17:51:48,117 - ControlNet - INFO - Loading model: control_sd15_random_color [3afa6fba]
2023-09-06 17:51:49,487 - ControlNet - INFO - Loaded state_dict from [D:\sd15\stable-diffusion-webui\models\ControlNet\control_sd15_random_color.pth]
2023-09-06 17:51:49,487 - ControlNet - INFO - controlnet_default_config
*** Error running before_process_batch: D:\sd15\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py
Traceback (most recent call last):
File "D:\sd15\stable-diffusion-webui\modules\scripts.py", line 627, in before_process_batch
script.before_process_batch(p, *script_args, **kwargs)
File "D:\sd15\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py", line 972, in before_process_batch
self.controlnet_main_entry(p)
File "D:\sd15\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py", line 688, in controlnet_main_entry
model_net = Script.load_control_model(p, unet, unit.model)
File "D:\sd15\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py", line 321, in load_control_model
model_net = Script.build_control_model(p, unet, model)
File "D:\sd15\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py", line 350, in build_control_model
network = build_model_by_guess(state_dict, unet, model_path)
File "D:\sd15\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet_model_guess.py", line 151, in build_model_by_guess
p_new = p + unet_state_dict[key].clone().cpu()
RuntimeError: [enforce fail at ..\c10\core\impl\alloc_cpu.cpp:72] data. DefaultCPUAllocator: not enough memory: you tried to allocate 671088640000 bytes.


100%|██████████████████████████████████████████████████████████████████████████████████| 23/23 [00:10<00:00, 2.27it/s]
Traceback (most recent call last):
File "D:\sd15\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
output = await app.get_blocks().process_api(
File "D:\sd15\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
result = await self.call_function(
File "D:\sd15\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
prediction = await anyio.to_thread.run_sync(
File "D:\sd15\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "D:\sd15\stable-diffusion-webui\venv\lib\site-packages\anyio_backends_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "D:\sd15\stable-diffusion-webui\venv\lib\site-packages\anyio_backends_asyncio.py", line 807, in run
result = context.run(func, *args)
File "D:\sd15\stable-diffusion-webui\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
response = f(*args, **kwargs)
File "D:\sd15\stable-diffusion-webui\extensions\sd-webui-EasyPhoto\scripts\easyphoto_infer.py", line 287, in easyphoto_infer_forward
output_image = inpaint_with_mask_face(input_image, input_mask, replaced_input_image, diffusion_steps=first_diffusion_steps, denoising_strength=first_denoising_strength, input_prompt=input_prompt, hr_scale=1.0, seed=str(seed))
File "D:\sd15\stable-diffusion-webui\extensions\sd-webui-EasyPhoto\scripts\easyphoto_infer.py", line 116, in inpaint_with_mask_face
image = i2i_inpaint_call(
File "D:\sd15\stable-diffusion-webui\extensions\sd-webui-EasyPhoto\scripts\sdwebui.py", line 285, in i2i_inpaint_call
h_1, w_1, c_1 = np.shape(processed.images[1])
IndexError: list index out of range
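The `IndexError` above comes from `processed.images[1]` when the webui i2i call returns fewer images than expected (for example when generation is aborted or an output is filtered out). A hedged sketch of a defensive lookup; `pick_result_image` is a hypothetical helper, not the extension's actual API:

```python
# Hypothetical sketch: index defensively into the webui result list instead of
# assuming processed.images always has at least two entries.

def pick_result_image(images, preferred_index=1):
    """Return images[preferred_index] if present, else fall back or fail loudly."""
    if len(images) > preferred_index:
        return images[preferred_index]
    if images:                       # fall back to the only image available
        return images[0]
    raise RuntimeError("i2i call returned no images; check the webui log above")
```

Failing with an explicit message (or falling back to `images[0]`) makes the underlying cause, such as the out-of-memory ControlNet load above, much easier to spot.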

Does this project conflict with roop? I tried several times; after removing this extension, roop works normally again?

*** Error running postprocess_image: C:\Users\nsg\stable-diffusion-webui\extensions\sd-webui-roop\scripts\faceswap.py
Traceback (most recent call last):
File "C:\Users\nsg\stable-diffusion-webui\modules\scripts.py", line 675, in postprocess_image
script.postprocess_image(p, pp, *script_args)
File "C:\Users\nsg\stable-diffusion-webui\extensions\sd-webui-roop\scripts\faceswap.py", line 191, in postprocess_image
pp = scripts_postprocessing.PostprocessedImage(result.image())
File "C:\Users\nsg\stable-diffusion-webui\venv\lib\site-packages\PIL\Image.py", line 528, in getattr
raise AttributeError(name)
AttributeError: image


[aki (秋叶) all-in-one package] Installation error on SD 1.6

*** Error running install.py for extension D:\01_AI\sd-webui-aki\extensions\sd-webui-EasyPhoto.
*** Command: "D:\01_AI\sd-webui-aki\python\python.exe" "D:\01_AI\sd-webui-aki\extensions\sd-webui-EasyPhoto\install.py"
*** Error code: 1
*** stdout: Installing requirements for easyphoto-webui
*** Installing requirements for tensorflow
*** Installing requirements for easyphoto-webui
*** Installing requirements for ifnude


*** stderr: Traceback (most recent call last):
*** File "D:\01_AI\sd-webui-aki\extensions\sd-webui-EasyPhoto\install.py", line 22, in <module>
*** launch.run_pip("install ifnude", "requirements for ifnude")
*** File "D:\01_AI\sd-webui-aki.launcher\pyinterop.hfkx1kkk0g4q7.zip\swlpatches\progress\launch.py", line 49, in wrapped_run_pip
*** File "D:\01_AI\sd-webui-aki\modules\launch_utils.py", line 138, in run_pip
*** return run(f'"{python}" -m pip {command} --prefer-binary{index_url_line}', desc=f"Installing {desc}", errdesc=f"Couldn't install {desc}", live=live)
*** File "D:\01_AI\sd-webui-aki\modules\launch_utils.py", line 115, in run
*** raise RuntimeError("\n".join(error_bits))
*** RuntimeError: Couldn't install requirements for ifnude.
*** Command: "D:\01_AI\sd-webui-aki\python\python.exe" -m pip install ifnude --prefer-binary
*** Error code: 1
*** stdout: Collecting ifnude
*** Using cached ifnude-0.0.3-py2.py3-none-any.whl (7.1 kB)
*** Requirement already satisfied: pillow in d:\01_ai\sd-webui-aki\python\lib\site-packages (from ifnude) (9.5.0)
*** Collecting opencv-python-headless>=4.5.1.48 (from ifnude)
*** Using cached opencv_python_headless-4.8.0.76-cp37-abi3-win_amd64.whl (38.0 MB)
*** Requirement already satisfied: tqdm in d:\01_ai\sd-webui-aki\python\lib\site-packages (from ifnude) (4.65.0)
*** Requirement already satisfied: scikit-image in d:\01_ai\sd-webui-aki\python\lib\site-packages (from ifnude) (0.21.0)
*** Collecting onnxruntime (from ifnude)
*** Using cached onnxruntime-1.15.1-cp310-cp310-win_amd64.whl (6.7 MB)
*** Requirement already satisfied: numpy>=1.21.2 in d:\01_ai\sd-webui-aki\python\lib\site-packages (from opencv-python-headless>=4.5.1.48->ifnude) (1.23.5)
*** Requirement already satisfied: coloredlogs in d:\01_ai\sd-webui-aki\python\lib\site-packages (from onnxruntime->ifnude) (15.0.1)
*** Requirement already satisfied: flatbuffers in d:\01_ai\sd-webui-aki\python\lib\site-packages (from onnxruntime->ifnude) (23.3.3)
*** Requirement already satisfied: packaging in d:\01_ai\sd-webui-aki\python\lib\site-packages (from onnxruntime->ifnude) (23.1)
*** Requirement already satisfied: protobuf in d:\01_ai\sd-webui-aki\python\lib\site-packages (from onnxruntime->ifnude) (3.20.3)
*** Requirement already satisfied: sympy in d:\01_ai\sd-webui-aki\python\lib\site-packages (from onnxruntime->ifnude) (1.11.1)
*** Requirement already satisfied: scipy>=1.8 in d:\01_ai\sd-webui-aki\python\lib\site-packages (from scikit-image->ifnude) (1.10.1)
*** Requirement already satisfied: networkx>=2.8 in d:\01_ai\sd-webui-aki\python\lib\site-packages (from scikit-image->ifnude) (3.1)
*** Requirement already satisfied: imageio>=2.27 in d:\01_ai\sd-webui-aki\python\lib\site-packages (from scikit-image->ifnude) (2.28.1)
*** Requirement already satisfied: tifffile>=2022.8.12 in d:\01_ai\sd-webui-aki\python\lib\site-packages (from scikit-image->ifnude) (2023.4.12)
*** Requirement already satisfied: PyWavelets>=1.1.1 in d:\01_ai\sd-webui-aki\python\lib\site-packages (from scikit-image->ifnude) (1.4.1)
*** Requirement already satisfied: lazy_loader>=0.2 in d:\01_ai\sd-webui-aki\python\lib\site-packages (from scikit-image->ifnude) (0.2)
*** Requirement already satisfied: colorama in d:\01_ai\sd-webui-aki\python\lib\site-packages (from tqdm->ifnude) (0.4.6)
*** Requirement already satisfied: humanfriendly>=9.1 in d:\01_ai\sd-webui-aki\python\lib\site-packages (from coloredlogs->onnxruntime->ifnude) (10.0)
*** Requirement already satisfied: mpmath>=0.19 in d:\01_ai\sd-webui-aki\python\lib\site-packages (from sympy->onnxruntime->ifnude) (1.3.0)
*** Requirement already satisfied: pyreadline3 in d:\01_ai\sd-webui-aki\python\lib\site-packages (from humanfriendly>=9.1->coloredlogs->onnxruntime->ifnude) (3.4.1)
*** Installing collected packages: opencv-python-headless, onnxruntime, ifnude


*** stderr: WARNING: Ignoring invalid distribution -rotobuf (d:\01_ai\sd-webui-aki\python\lib\site-packages)
*** WARNING: Ignoring invalid distribution -rotobuf (d:\01_ai\sd-webui-aki\python\lib\site-packages)
*** ERROR: Could not install packages due to an OSError: [WinError 5] Access is denied.: 'D:\01_AI\sd-webui-aki\python\Lib\site-packages\cv2\cv2.pyd'
*** Consider using the --user option or check the permissions.

UnboundLocalError: local variable 'img2' referenced before assignment

Just updated to the newest version and got this error:
File "/mnt/workspace/stable-diffusion-webui/extensions/sd-webui-EasyPhoto/scripts/easyphoto_train.py", line 78, in easyphoto_train_forward
image = process_rotate_image(image).convert("RGB")
File "/mnt/workspace/stable-diffusion-webui/extensions/sd-webui-EasyPhoto/scripts/easyphoto_train.py", line 229, in process_rotate_image
return img2
UnboundLocalError: local variable 'img2' referenced before assignment
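A plausible cause is that `img2` in `process_rotate_image` is only assigned inside the EXIF-orientation branches, so an image with no Orientation tag (or orientation 1) reaches `return img2` before the variable exists. A minimal sketch of the fix, using plain values in place of PIL images; the rotation mapping follows the usual EXIF convention, but this is an illustration, not the extension's actual code:

```python
# Hypothetical fix sketch: give img2 a default so photos that need no rotation
# don't trigger UnboundLocalError.

def process_rotate_image(img, orientation):
    """Rotate according to the EXIF Orientation tag; return img unchanged otherwise."""
    rotations = {3: 180, 6: 270, 8: 90}      # EXIF orientation -> degrees CCW
    img2 = img                               # default: no rotation needed
    if orientation in rotations:
        img2 = ("rotated", rotations[orientation], img)  # stand-in for img.rotate(...)
    return img2
```

In real code the same effect is achieved by initialising `img2 = img` before the branches (or by using Pillow's `ImageOps.exif_transpose`).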

Training error

Hello, the problem I ran into is that the front end shows a training error while the back end keeps training; after training finishes there is no model named after the user ID.
(screenshot)
Also, after installing this plugin, my back end keeps printing messages.
(screenshot)
What could the problem be?

[ControlNet version issue] AttributeError: 'ControlNetUnit' object has no attribute 'guess_mode'

*** Error running process: /tmp-data/workspace/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/controlnet.py
Traceback (most recent call last):
File "/tmp-data/workspace/stable-diffusion-webui/modules/scripts.py", line 519, in process
script.process(p, *script_args)
File "/tmp-data/workspace/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/controlnet.py", line 1046, in process
self.enabled_units = self.get_enabled_units(p)
File "/tmp-data/workspace/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/controlnet.py", line 1009, in get_enabled_units
unit = self.parse_remote_call(p, unit, idx)
File "/tmp-data/workspace/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/controlnet.py", line 874, in parse_remote_call
unit.guess_mode = selector(p, "control_net_guess_mode", unit.guess_mode, idx)
AttributeError: 'ControlNetUnit' object has no attribute 'guess_mode'


Although the error is reported, images can still be generated; I am not sure whether it affects the quality of the generated images.
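This looks like a version mismatch: newer ControlNet builds removed the `guess_mode` attribute that the remote-call parsing still reads. A hedged compatibility pattern is to fall back to a default with `getattr` instead of raising; `_Unit` below is a toy stand-in for `ControlNetUnit`, not the real class:

```python
# Hypothetical compatibility sketch: read an attribute that a newer ControlNet
# version may have removed, falling back to a default instead of raising.

def get_unit_option(unit, name, default=False):
    """Tolerate attributes removed in newer ControlNet releases."""
    return getattr(unit, name, default)

class _Unit:                       # toy stand-in for ControlNetUnit
    module = "openpose_full"

unit = _Unit()
```

Since generation still succeeds, the failing script hook is non-fatal; pinning matching versions of the two extensions (or a `getattr` shim like the above) removes the noise.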

Character training gets interrupted

Character training seems to get killed and interrupted, though sometimes it doesn't; training has succeeded before. Are there any image requirements? The photos uploaded here are all frontal shots.
(screenshot)

The face-size check logic is flawed

How to reproduce

Download this photo and upload it for training, and the error below appears (line numbers may differ because debug code was added):
(screenshot)
This is because the original image resolution is (7360, 4912) while the face size is only (588.5, 750.5). As a result, this line skips the face, leaving face_id_scores empty.
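The failure mode described above can be avoided by judging face size relative to the image rather than with an absolute threshold, and by failing with a clear message when every face gets filtered out. A hypothetical sketch: `select_faces`, the `min_ratio` default, and the `(width, height)` tuple format are assumptions, not the extension's actual logic:

```python
# Hypothetical sketch: relative face-size filter with an explicit failure
# instead of a later crash on an empty face_id_scores list.

def select_faces(faces, image_size, min_ratio=0.05):
    """Keep faces whose area is at least min_ratio of the image area."""
    img_area = image_size[0] * image_size[1]
    kept = [f for f in faces if (f[0] * f[1]) / img_area >= min_ratio]
    if not kept:
        raise ValueError(
            "No usable face found: every face is too small relative to the "
            "image; try cropping closer to the subject before uploading."
        )
    return kept
```

For the reported case, a (588.5, 750.5) face in a (7360, 4912) image covers about 1.2% of the pixels, so an absolute-size check passes while a relative one correctly flags the photo as needing a crop.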

aki (秋叶) all-in-one package installation fails; insightface still errors after installing C++ 14 build tools

Python 3.10.8 (tags/v3.10.8:aaaf517, Oct 11 2022, 16:50:30) [MSC v.1933 64 bit (AMD64)]
Version: v1.6.0
Commit hash: 5ef669de080814067961f28357256e8fe27544f4
current transparent-background 1.2.4
Installing requirements for easyphoto-webui
Installing requirements for tensorflow
Requirement already satisfied: tensorflow-cpu in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (2.13.0)
Requirement already satisfied: tensorflow-intel==2.13.0 in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from tensorflow-cpu) (2.13.0)
Requirement already satisfied: absl-py>=1.0.0 in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from tensorflow-intel==2.13.0->tensorflow-cpu) (1.3.0)
Requirement already satisfied: astunparse>=1.6.0 in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from tensorflow-intel==2.13.0->tensorflow-cpu) (1.6.3)
Requirement already satisfied: flatbuffers>=23.1.21 in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from tensorflow-intel==2.13.0->tensorflow-cpu) (23.5.26)
Requirement already satisfied: gast<=0.4.0,>=0.2.1 in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from tensorflow-intel==2.13.0->tensorflow-cpu) (0.4.0)
Requirement already satisfied: google-pasta>=0.1.1 in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from tensorflow-intel==2.13.0->tensorflow-cpu) (0.2.0)
Requirement already satisfied: h5py>=2.9.0 in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from tensorflow-intel==2.13.0->tensorflow-cpu) (3.9.0)
Requirement already satisfied: libclang>=13.0.0 in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from tensorflow-intel==2.13.0->tensorflow-cpu) (16.0.6)
Requirement already satisfied: numpy<=1.24.3,>=1.22 in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from tensorflow-intel==2.13.0->tensorflow-cpu) (1.23.5)
Requirement already satisfied: opt-einsum>=2.3.2 in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from tensorflow-intel==2.13.0->tensorflow-cpu) (3.3.0)
Requirement already satisfied: packaging in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from tensorflow-intel==2.13.0->tensorflow-cpu) (21.3)
Requirement already satisfied: protobuf!=4.21.0,!=4.21.1,!=4.21.2,!=4.21.3,!=4.21.4,!=4.21.5,<5.0.0dev,>=3.20.3 in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from tensorflow-intel==2.13.0->tensorflow-cpu) (4.24.3)
Requirement already satisfied: setuptools in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from tensorflow-intel==2.13.0->tensorflow-cpu) (65.6.0)
Requirement already satisfied: six>=1.12.0 in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from tensorflow-intel==2.13.0->tensorflow-cpu) (1.16.0)
Requirement already satisfied: termcolor>=1.1.0 in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from tensorflow-intel==2.13.0->tensorflow-cpu) (2.3.0)
Requirement already satisfied: typing-extensions<4.6.0,>=3.6.6 in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from tensorflow-intel==2.13.0->tensorflow-cpu) (4.4.0)
Requirement already satisfied: wrapt>=1.11.0 in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from tensorflow-intel==2.13.0->tensorflow-cpu) (1.15.0)
Requirement already satisfied: grpcio<2.0,>=1.24.3 in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from tensorflow-intel==2.13.0->tensorflow-cpu) (1.50.0)
Requirement already satisfied: tensorboard<2.14,>=2.13 in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from tensorflow-intel==2.13.0->tensorflow-cpu) (2.13.0)
Requirement already satisfied: tensorflow-estimator<2.14,>=2.13.0 in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from tensorflow-intel==2.13.0->tensorflow-cpu) (2.13.0)
Requirement already satisfied: keras<2.14,>=2.13.1 in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from tensorflow-intel==2.13.0->tensorflow-cpu) (2.13.1)
Requirement already satisfied: tensorflow-io-gcs-filesystem>=0.23.1 in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from tensorflow-intel==2.13.0->tensorflow-cpu) (0.31.0)
Requirement already satisfied: wheel<1.0,>=0.23.0 in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from astunparse>=1.6.0->tensorflow-intel==2.13.0->tensorflow-cpu) (0.38.4)
Requirement already satisfied: google-auth<3,>=1.6.3 in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from tensorboard<2.14,>=2.13->tensorflow-intel==2.13.0->tensorflow-cpu) (2.22.0)
Requirement already satisfied: google-auth-oauthlib<1.1,>=0.5 in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from tensorboard<2.14,>=2.13->tensorflow-intel==2.13.0->tensorflow-cpu) (1.0.0)
Requirement already satisfied: markdown>=2.6.8 in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from tensorboard<2.14,>=2.13->tensorflow-intel==2.13.0->tensorflow-cpu) (3.4.1)
Requirement already satisfied: requests<3,>=2.21.0 in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from tensorboard<2.14,>=2.13->tensorflow-intel==2.13.0->tensorflow-cpu) (2.31.0)
Requirement already satisfied: tensorboard-data-server<0.8.0,>=0.7.0 in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from tensorboard<2.14,>=2.13->tensorflow-intel==2.13.0->tensorflow-cpu) (0.7.1)
Requirement already satisfied: werkzeug>=1.0.1 in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from tensorboard<2.14,>=2.13->tensorflow-intel==2.13.0->tensorflow-cpu) (2.2.2)
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from packaging->tensorflow-intel==2.13.0->tensorflow-cpu) (3.0.9)
Requirement already satisfied: cachetools<6.0,>=2.0.0 in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from google-auth<3,>=1.6.3->tensorboard<2.14,>=2.13->tensorflow-intel==2.13.0->tensorflow-cpu) (5.2.0)
Requirement already satisfied: pyasn1-modules>=0.2.1 in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from google-auth<3,>=1.6.3->tensorboard<2.14,>=2.13->tensorflow-intel==2.13.0->tensorflow-cpu) (0.2.8)
Requirement already satisfied: rsa<5,>=3.1.4 in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from google-auth<3,>=1.6.3->tensorboard<2.14,>=2.13->tensorflow-intel==2.13.0->tensorflow-cpu) (4.9)
Requirement already satisfied: urllib3<2.0 in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from google-auth<3,>=1.6.3->tensorboard<2.14,>=2.13->tensorflow-intel==2.13.0->tensorflow-cpu) (1.26.12)
Requirement already satisfied: requests-oauthlib>=0.7.0 in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from google-auth-oauthlib<1.1,>=0.5->tensorboard<2.14,>=2.13->tensorflow-intel==2.13.0->tensorflow-cpu) (1.3.1)
Requirement already satisfied: charset-normalizer<4,>=2 in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from requests<3,>=2.21.0->tensorboard<2.14,>=2.13->tensorflow-intel==2.13.0->tensorflow-cpu) (2.1.1)
Requirement already satisfied: idna<4,>=2.5 in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from requests<3,>=2.21.0->tensorboard<2.14,>=2.13->tensorflow-intel==2.13.0->tensorflow-cpu) (2.10)
Requirement already satisfied: certifi>=2017.4.17 in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from requests<3,>=2.21.0->tensorboard<2.14,>=2.13->tensorflow-intel==2.13.0->tensorflow-cpu) (2022.9.24)
Requirement already satisfied: MarkupSafe>=2.1.1 in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from werkzeug>=1.0.1->tensorboard<2.14,>=2.13->tensorflow-intel==2.13.0->tensorflow-cpu) (2.1.1)
Requirement already satisfied: pyasn1<0.5.0,>=0.4.6 in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from pyasn1-modules>=0.2.1->google-auth<3,>=1.6.3->tensorboard<2.14,>=2.13->tensorflow-intel==2.13.0->tensorflow-cpu) (0.4.8)
Requirement already satisfied: oauthlib>=3.0.0 in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<1.1,>=0.5->tensorboard<2.14,>=2.13->tensorflow-intel==2.13.0->tensorflow-cpu) (3.2.2)
DEPRECATION: torchsde 0.2.5 has a non-standard dependency specifier numpy>=1.19.*; python_version >= "3.7". pip 23.3 will enforce this behaviour change. A possible replacement is to upgrade to a newer version of torchsde or contact the author to suggest that they release a version with a conforming dependency specifiers. Discussion can be found at pypa/pip#12063
Installing requirements for easyphoto-webui
Installing requirements for insightface
Collecting insightface==0.7
Using cached insightface-0.7.tar.gz (437 kB)
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'done'
Installing backend dependencies: started
Installing backend dependencies: finished with status 'done'
Preparing metadata (pyproject.toml): started
Preparing metadata (pyproject.toml): finished with status 'done'
Requirement already satisfied: numpy in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from insightface==0.7) (1.23.5)
Requirement already satisfied: onnx in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from insightface==0.7) (1.14.1)
Requirement already satisfied: tqdm in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from insightface==0.7) (4.65.0)
Requirement already satisfied: requests in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from insightface==0.7) (2.31.0)
Requirement already satisfied: matplotlib in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from insightface==0.7) (3.6.2)
Requirement already satisfied: Pillow in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from insightface==0.7) (9.5.0)
Requirement already satisfied: scipy in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from insightface==0.7) (1.9.3)
Requirement already satisfied: scikit-learn in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from insightface==0.7) (1.3.0)
Requirement already satisfied: scikit-image in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from insightface==0.7) (0.21.0)
Collecting easydict (from insightface==0.7)
Using cached easydict-1.10-py3-none-any.whl
Requirement already satisfied: cython in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from insightface==0.7) (3.0.2)
Collecting albumentations (from insightface==0.7)
Obtaining dependency information for albumentations from https://files.pythonhosted.org/packages/9b/f6/c486cedb4f75147232f32ec4c97026714cfef7c7e247a1f0427bc5489f66/albumentations-1.3.1-py3-none-any.whl.metadata
Using cached albumentations-1.3.1-py3-none-any.whl.metadata (34 kB)
Collecting prettytable (from insightface==0.7)
Obtaining dependency information for prettytable from https://files.pythonhosted.org/packages/4d/81/316b6a55a0d1f327d04cc7b0ba9d04058cb62de6c3a4d4b0df280cbe3b0b/prettytable-3.9.0-py3-none-any.whl.metadata
Using cached prettytable-3.9.0-py3-none-any.whl.metadata (26 kB)
Requirement already satisfied: PyYAML in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from albumentations->insightface==0.7) (6.0)
Collecting qudida>=0.0.4 (from albumentations->insightface==0.7)
Using cached qudida-0.0.4-py3-none-any.whl (3.5 kB)
Requirement already satisfied: opencv-python-headless>=4.1.1 in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from albumentations->insightface==0.7) (4.8.0.76)
Requirement already satisfied: networkx>=2.8 in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from scikit-image->insightface==0.7) (2.8.8)
Requirement already satisfied: imageio>=2.27 in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from scikit-image->insightface==0.7) (2.31.3)
Requirement already satisfied: tifffile>=2022.8.12 in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from scikit-image->insightface==0.7) (2022.10.10)
Requirement already satisfied: PyWavelets>=1.1.1 in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from scikit-image->insightface==0.7) (1.4.1)
Requirement already satisfied: packaging>=21 in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from scikit-image->insightface==0.7) (21.3)
Requirement already satisfied: lazy_loader>=0.2 in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from scikit-image->insightface==0.7) (0.3)
Requirement already satisfied: contourpy>=1.0.1 in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from matplotlib->insightface==0.7) (1.0.6)
Requirement already satisfied: cycler>=0.10 in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from matplotlib->insightface==0.7) (0.11.0)
Requirement already satisfied: fonttools>=4.22.0 in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from matplotlib->insightface==0.7) (4.38.0)
Requirement already satisfied: kiwisolver>=1.0.1 in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from matplotlib->insightface==0.7) (1.4.4)
Requirement already satisfied: pyparsing>=2.2.1 in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from matplotlib->insightface==0.7) (3.0.9)
Requirement already satisfied: python-dateutil>=2.7 in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from matplotlib->insightface==0.7) (2.8.2)
Requirement already satisfied: protobuf>=3.20.2 in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from onnx->insightface==0.7) (4.24.3)
Requirement already satisfied: typing-extensions>=3.6.2.1 in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from onnx->insightface==0.7) (4.4.0)
Requirement already satisfied: wcwidth in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from prettytable->insightface==0.7) (0.2.5)
Requirement already satisfied: charset-normalizer<4,>=2 in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from requests->insightface==0.7) (2.1.1)
Requirement already satisfied: idna<4,>=2.5 in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from requests->insightface==0.7) (2.10)
Requirement already satisfied: urllib3<3,>=1.21.1 in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from requests->insightface==0.7) (1.26.12)
Requirement already satisfied: certifi>=2017.4.17 in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from requests->insightface==0.7) (2022.9.24)
Requirement already satisfied: joblib>=1.1.1 in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from scikit-learn->insightface==0.7) (1.3.2)
Requirement already satisfied: threadpoolctl>=2.0.0 in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from scikit-learn->insightface==0.7) (3.2.0)
Requirement already satisfied: colorama in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from tqdm->insightface==0.7) (0.4.6)
Requirement already satisfied: six>=1.5 in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from python-dateutil>=2.7->matplotlib->insightface==0.7) (1.16.0)
Using cached albumentations-1.3.1-py3-none-any.whl (125 kB)
Using cached prettytable-3.9.0-py3-none-any.whl (27 kB)
Building wheels for collected packages: insightface
Building wheel for insightface (pyproject.toml): started
Building wheel for insightface (pyproject.toml): finished with status 'error'
error: subprocess-exited-with-error

× Building wheel for insightface (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [188 lines of output]
WARNING: pandoc not enabled
running bdist_wheel
running build
running build_py
creating build
creating build\lib.win-amd64-cpython-310
creating build\lib.win-amd64-cpython-310\insightface
copying insightface\__init__.py -> build\lib.win-amd64-cpython-310\insightface
creating build\lib.win-amd64-cpython-310\insightface\app
copying insightface\app\common.py -> build\lib.win-amd64-cpython-310\insightface\app
copying insightface\app\face_analysis.py -> build\lib.win-amd64-cpython-310\insightface\app
copying insightface\app\mask_renderer.py -> build\lib.win-amd64-cpython-310\insightface\app
copying insightface\app\rgfs_utils.py -> build\lib.win-amd64-cpython-310\insightface\app
copying insightface\app\__init__.py -> build\lib.win-amd64-cpython-310\insightface\app
creating build\lib.win-amd64-cpython-310\insightface\commands
copying insightface\commands\insightface_cli.py -> build\lib.win-amd64-cpython-310\insightface\commands
copying insightface\commands\model_download.py -> build\lib.win-amd64-cpython-310\insightface\commands
copying insightface\commands\rec_add_mask_param.py -> build\lib.win-amd64-cpython-310\insightface\commands
copying insightface\commands\__init__.py -> build\lib.win-amd64-cpython-310\insightface\commands
creating build\lib.win-amd64-cpython-310\insightface\data
copying insightface\data\image.py -> build\lib.win-amd64-cpython-310\insightface\data
copying insightface\data\pickle_object.py -> build\lib.win-amd64-cpython-310\insightface\data
copying insightface\data\rec_builder.py -> build\lib.win-amd64-cpython-310\insightface\data
copying insightface\data\__init__.py -> build\lib.win-amd64-cpython-310\insightface\data
creating build\lib.win-amd64-cpython-310\insightface\model_zoo
copying insightface\model_zoo\arcface_onnx.py -> build\lib.win-amd64-cpython-310\insightface\model_zoo
copying insightface\model_zoo\attribute.py -> build\lib.win-amd64-cpython-310\insightface\model_zoo
copying insightface\model_zoo\inswapper.py -> build\lib.win-amd64-cpython-310\insightface\model_zoo
copying insightface\model_zoo\landmark.py -> build\lib.win-amd64-cpython-310\insightface\model_zoo
copying insightface\model_zoo\model_store.py -> build\lib.win-amd64-cpython-310\insightface\model_zoo
copying insightface\model_zoo\model_zoo.py -> build\lib.win-amd64-cpython-310\insightface\model_zoo
copying insightface\model_zoo\retinaface.py -> build\lib.win-amd64-cpython-310\insightface\model_zoo
copying insightface\model_zoo\scrfd.py -> build\lib.win-amd64-cpython-310\insightface\model_zoo
copying insightface\model_zoo\__init__.py -> build\lib.win-amd64-cpython-310\insightface\model_zoo
creating build\lib.win-amd64-cpython-310\insightface\thirdparty
copying insightface\thirdparty\__init__.py -> build\lib.win-amd64-cpython-310\insightface\thirdparty
creating build\lib.win-amd64-cpython-310\insightface\utils
copying insightface\utils\constant.py -> build\lib.win-amd64-cpython-310\insightface\utils
copying insightface\utils\download.py -> build\lib.win-amd64-cpython-310\insightface\utils
copying insightface\utils\face_align.py -> build\lib.win-amd64-cpython-310\insightface\utils
copying insightface\utils\filesystem.py -> build\lib.win-amd64-cpython-310\insightface\utils
copying insightface\utils\storage.py -> build\lib.win-amd64-cpython-310\insightface\utils
copying insightface\utils\transform.py -> build\lib.win-amd64-cpython-310\insightface\utils
copying insightface\utils\__init__.py -> build\lib.win-amd64-cpython-310\insightface\utils
creating build\lib.win-amd64-cpython-310\insightface\thirdparty\face3d
copying insightface\thirdparty\face3d\__init__.py -> build\lib.win-amd64-cpython-310\insightface\thirdparty\face3d
creating build\lib.win-amd64-cpython-310\insightface\thirdparty\face3d\mesh
copying insightface\thirdparty\face3d\mesh\io.py -> build\lib.win-amd64-cpython-310\insightface\thirdparty\face3d\mesh
copying insightface\thirdparty\face3d\mesh\light.py -> build\lib.win-amd64-cpython-310\insightface\thirdparty\face3d\mesh
copying insightface\thirdparty\face3d\mesh\render.py -> build\lib.win-amd64-cpython-310\insightface\thirdparty\face3d\mesh
copying insightface\thirdparty\face3d\mesh\transform.py -> build\lib.win-amd64-cpython-310\insightface\thirdparty\face3d\mesh
copying insightface\thirdparty\face3d\mesh\vis.py -> build\lib.win-amd64-cpython-310\insightface\thirdparty\face3d\mesh
copying insightface\thirdparty\face3d\mesh\__init__.py -> build\lib.win-amd64-cpython-310\insightface\thirdparty\face3d\mesh
creating build\lib.win-amd64-cpython-310\insightface\thirdparty\face3d\mesh_numpy
copying insightface\thirdparty\face3d\mesh_numpy\io.py -> build\lib.win-amd64-cpython-310\insightface\thirdparty\face3d\mesh_numpy
copying insightface\thirdparty\face3d\mesh_numpy\light.py -> build\lib.win-amd64-cpython-310\insightface\thirdparty\face3d\mesh_numpy
copying insightface\thirdparty\face3d\mesh_numpy\render.py -> build\lib.win-amd64-cpython-310\insightface\thirdparty\face3d\mesh_numpy
copying insightface\thirdparty\face3d\mesh_numpy\transform.py -> build\lib.win-amd64-cpython-310\insightface\thirdparty\face3d\mesh_numpy
copying insightface\thirdparty\face3d\mesh_numpy\vis.py -> build\lib.win-amd64-cpython-310\insightface\thirdparty\face3d\mesh_numpy
copying insightface\thirdparty\face3d\mesh_numpy\__init__.py -> build\lib.win-amd64-cpython-310\insightface\thirdparty\face3d\mesh_numpy
creating build\lib.win-amd64-cpython-310\insightface\thirdparty\face3d\morphable_model
copying insightface\thirdparty\face3d\morphable_model\fit.py -> build\lib.win-amd64-cpython-310\insightface\thirdparty\face3d\morphable_model
copying insightface\thirdparty\face3d\morphable_model\load.py -> build\lib.win-amd64-cpython-310\insightface\thirdparty\face3d\morphable_model
copying insightface\thirdparty\face3d\morphable_model\morphabel_model.py -> build\lib.win-amd64-cpython-310\insightface\thirdparty\face3d\morphable_model
copying insightface\thirdparty\face3d\morphable_model\__init__.py -> build\lib.win-amd64-cpython-310\insightface\thirdparty\face3d\morphable_model
running egg_info
writing insightface.egg-info\PKG-INFO
writing dependency_links to insightface.egg-info\dependency_links.txt
writing entry points to insightface.egg-info\entry_points.txt
writing requirements to insightface.egg-info\requires.txt
writing top-level names to insightface.egg-info\top_level.txt
reading manifest file 'insightface.egg-info\SOURCES.txt'
writing manifest file 'insightface.egg-info\SOURCES.txt'
E:\SDpreDoc\novelai-webui-aki-v3\py310\lib\site-packages\setuptools\command\build_py.py:202: SetuptoolsDeprecationWarning: Installing 'insightface.thirdparty.face3d.mesh.cython' as data is deprecated, please list it in packages.
!!

      ############################
      # Package would be ignored #
      ############################
      Python recognizes 'insightface.thirdparty.face3d.mesh.cython' as an importable package,
      but it is not listed in the `packages` configuration of setuptools.
  
      'insightface.thirdparty.face3d.mesh.cython' has been automatically added to the distribution only
      because it may contain data files, but this behavior is likely to change
      in future versions of setuptools (and therefore is considered deprecated).
  
      Please make sure that 'insightface.thirdparty.face3d.mesh.cython' is included as a package by using
      the `packages` configuration field or the proper discovery methods
      (for example by using `find_namespace_packages(...)`/`find_namespace:`
      instead of `find_packages(...)`/`find:`).
  
      You can read more about "package discovery" and "data files" on setuptools
      documentation page.
  
  !!
  
    check.warn(importable)
  E:\SDpreDoc\novelai-webui-aki-v3\py310\lib\site-packages\setuptools\command\build_py.py:202: SetuptoolsDeprecationWarning:     Installing 'insightface.data.images' as data is deprecated, please list it in `packages`.
      !!
  
      ############################
      # Package would be ignored #
      ############################
      Python recognizes 'insightface.data.images' as an importable package,
      but it is not listed in the `packages` configuration of setuptools.
  
      'insightface.data.images' has been automatically added to the distribution only
      because it may contain data files, but this behavior is likely to change
      in future versions of setuptools (and therefore is considered deprecated).
  
      Please make sure that 'insightface.data.images' is included as a package by using
      the `packages` configuration field or the proper discovery methods
      (for example by using `find_namespace_packages(...)`/`find_namespace:`
      instead of `find_packages(...)`/`find:`).
  
      You can read more about "package discovery" and "data files" on setuptools
      documentation page.
  
  !!
  
    check.warn(importable)
  E:\SDpreDoc\novelai-webui-aki-v3\py310\lib\site-packages\setuptools\command\build_py.py:202: SetuptoolsDeprecationWarning:     Installing 'insightface.data.objects' as data is deprecated, please list it in `packages`.
      !!
  
      ############################
      # Package would be ignored #
      ############################
      Python recognizes 'insightface.data.objects' as an importable package,
      but it is not listed in the `packages` configuration of setuptools.
  
      'insightface.data.objects' has been automatically added to the distribution only
      because it may contain data files, but this behavior is likely to change
      in future versions of setuptools (and therefore is considered deprecated).
  
      Please make sure that 'insightface.data.objects' is included as a package by using
      the `packages` configuration field or the proper discovery methods
      (for example by using `find_namespace_packages(...)`/`find_namespace:`
      instead of `find_packages(...)`/`find:`).
  
      You can read more about "package discovery" and "data files" on setuptools
      documentation page.
  
  !!
  
    check.warn(importable)
  creating build\lib.win-amd64-cpython-310\insightface\thirdparty\face3d\mesh\cython
  copying insightface\thirdparty\face3d\mesh\cython\mesh_core.cpp -> build\lib.win-amd64-cpython-310\insightface\thirdparty\face3d\mesh\cython
  copying insightface\thirdparty\face3d\mesh\cython\mesh_core_cython.cpp -> build\lib.win-amd64-cpython-310\insightface\thirdparty\face3d\mesh\cython
  creating build\lib.win-amd64-cpython-310\insightface\data\images
  copying insightface\data\images\Tom_Hanks_54745.png -> build\lib.win-amd64-cpython-310\insightface\data\images
  copying insightface\data\images\mask_black.jpg -> build\lib.win-amd64-cpython-310\insightface\data\images
  copying insightface\data\images\mask_blue.jpg -> build\lib.win-amd64-cpython-310\insightface\data\images
  copying insightface\data\images\mask_green.jpg -> build\lib.win-amd64-cpython-310\insightface\data\images
  copying insightface\data\images\mask_white.jpg -> build\lib.win-amd64-cpython-310\insightface\data\images
  copying insightface\data\images\t1.jpg -> build\lib.win-amd64-cpython-310\insightface\data\images
  creating build\lib.win-amd64-cpython-310\insightface\data\objects
  copying insightface\data\objects\meanshape_68.pkl -> build\lib.win-amd64-cpython-310\insightface\data\objects
  copying insightface\thirdparty\face3d\mesh\cython\mesh_core.h -> build\lib.win-amd64-cpython-310\insightface\thirdparty\face3d\mesh\cython
  copying insightface\thirdparty\face3d\mesh\cython\mesh_core_cython.c -> build\lib.win-amd64-cpython-310\insightface\thirdparty\face3d\mesh\cython
  copying insightface\thirdparty\face3d\mesh\cython\mesh_core_cython.pyx -> build\lib.win-amd64-cpython-310\insightface\thirdparty\face3d\mesh\cython
  copying insightface\thirdparty\face3d\mesh\cython\setup.py -> build\lib.win-amd64-cpython-310\insightface\thirdparty\face3d\mesh\cython
  running build_ext
  building 'insightface.thirdparty.face3d.mesh.cython.mesh_core_cython' extension
  creating build\temp.win-amd64-cpython-310
  creating build\temp.win-amd64-cpython-310\Release
  creating build\temp.win-amd64-cpython-310\Release\insightface
  creating build\temp.win-amd64-cpython-310\Release\insightface\thirdparty
  creating build\temp.win-amd64-cpython-310\Release\insightface\thirdparty\face3d
  creating build\temp.win-amd64-cpython-310\Release\insightface\thirdparty\face3d\mesh
  creating build\temp.win-amd64-cpython-310\Release\insightface\thirdparty\face3d\mesh\cython
  "C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.37.32822\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -Iinsightface/thirdparty/face3d/mesh/cython -IE:\SDpreDoc\novelai-webui-aki-v3\py310\lib\site-packages\numpy\core\include -IE:\SDpreDoc\novelai-webui-aki-v3\py310\include -IE:\SDpreDoc\novelai-webui-aki-v3\py310\Include "-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.37.32822\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.37.32822\ATLMFC\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Auxiliary\VS\include" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.22621.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\um" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\shared" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\winrt" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\cppwinrt" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\include\um" /EHsc /Tpinsightface/thirdparty/face3d/mesh/cython/mesh_core.cpp /Fobuild\temp.win-amd64-cpython-310\Release\insightface/thirdparty/face3d/mesh/cython/mesh_core.obj
  mesh_core.cpp
insightface/thirdparty/face3d/mesh/cython/mesh_core.cpp(147): warning C4244: '=': conversion from 'int' to 'float', possible loss of data
insightface/thirdparty/face3d/mesh/cython/mesh_core.cpp(147): warning C4244: '=': conversion from 'int' to 'float', possible loss of data
insightface/thirdparty/face3d/mesh/cython/mesh_core.cpp(210): warning C4244: '=': conversion from 'int' to 'float', possible loss of data
insightface/thirdparty/face3d/mesh/cython/mesh_core.cpp(210): warning C4244: '=': conversion from 'int' to 'float', possible loss of data
insightface/thirdparty/face3d/mesh/cython/mesh_core.cpp(294): warning C4244: '=': conversion from 'int' to 'float', possible loss of data
insightface/thirdparty/face3d/mesh/cython/mesh_core.cpp(294): warning C4244: '=': conversion from 'int' to 'float', possible loss of data
  "C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.37.32822\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -Iinsightface/thirdparty/face3d/mesh/cython -IE:\SDpreDoc\novelai-webui-aki-v3\py310\lib\site-packages\numpy\core\include -IE:\SDpreDoc\novelai-webui-aki-v3\py310\include -IE:\SDpreDoc\novelai-webui-aki-v3\py310\Include "-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.37.32822\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.37.32822\ATLMFC\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Auxiliary\VS\include" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.22621.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\um" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\shared" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\winrt" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.22621.0\\cppwinrt" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\include\um" /EHsc /Tpinsightface/thirdparty/face3d/mesh/cython/mesh_core_cython.cpp /Fobuild\temp.win-amd64-cpython-310\Release\insightface/thirdparty/face3d/mesh/cython/mesh_core_cython.obj
  mesh_core_cython.cpp
insightface/thirdparty/face3d/mesh/cython/mesh_core_cython.cpp(36): fatal error C1083: Cannot open include file: 'Python.h': No such file or directory
  error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2022\\BuildTools\\VC\\Tools\\MSVC\\14.37.32822\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2
  [end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for insightface
Failed to build insightface
ERROR: Could not build wheels for insightface, which is required to install pyproject.toml-based projects
Traceback (most recent call last):
File "E:\SDpreDoc\novelai-webui-aki-v3\extensions\sd-webui-EasyPhoto\install.py", line 26, in <module>
launch.run_pip("install insightface==0.7", "requirements for insightface")
File "E:\SDpreDoc\novelai-webui-aki-v3.launcher\pyinterop.hfkx1kkk0g4q7.zip\swlpatches\progress\launch.py", line 49, in wrapped_run_pip
File "E:\SDpreDoc\novelai-webui-aki-v3\modules\launch_utils.py", line 138, in run_pip
return run(f'"{python}" -m pip {command} --prefer-binary{index_url_line}', desc=f"Installing {desc}", errdesc=f"Couldn't install {desc}", live=live)
File "E:\SDpreDoc\novelai-webui-aki-v3\modules\launch_utils.py", line 115, in run
raise RuntimeError("\n".join(error_bits))
RuntimeError: Couldn't install requirements for insightface.
Hint: The Python runtime threw an exception. Please check the troubleshooting page.
Command: "E:\SDpreDoc\novelai-webui-aki-v3\py310\python.exe" -m pip install insightface==0.7 --prefer-binary
Error code: 1
No SDP backend available, likely because you are running in pytorch versions < 2.0. In fact, you are using PyTorch 1.13.1+cu117. You might want to consider upgrading.
E:\SDpreDoc\novelai-webui-aki-v3\py310\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: pytorch_lightning.utilities.distributed.rank_zero_only has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from pytorch_lightning.utilities instead.
rank_zero_deprecation(
*** Error running install.py for extension E:\SDpreDoc\novelai-webui-aki-v3\extensions\sd-webui-EasyPhoto.
*** Command: "E:\SDpreDoc\novelai-webui-aki-v3\py310\python.exe" "E:\SDpreDoc\novelai-webui-aki-v3\extensions\sd-webui-EasyPhoto\install.py"
*** Error code: 1
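The fatal error C1083 above means the build failed because the compiler could not find Python.h: the embedded Python that ships with this WebUI package has no C development headers, so compiling insightface from source cannot succeed. As a quick diagnostic, here is a minimal sketch (assuming a standard CPython layout; the function name is illustrative) that checks whether the headers are present for the current interpreter:

```python
import os
import sysconfig


def python_headers_available() -> bool:
    """Return True if Python.h exists in this interpreter's include
    directory; the insightface source build requires it, and the
    C1083 error above means it was missing."""
    include_dir = sysconfig.get_paths()["include"]
    return os.path.isfile(os.path.join(include_dir, "Python.h"))


if __name__ == "__main__":
    print(python_headers_available())
```

If this prints False, installing a prebuilt insightface wheel (or a full Python installation that includes the development headers) avoids the source build entirely.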
Installing requirements for TemporalKit extension
Requirement already satisfied: ffmpeg-python in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (0.2.0)
Requirement already satisfied: future in e:\sdpredoc\novelai-webui-aki-v3\py310\lib\site-packages (from ffmpeg-python) (0.18.2)
DEPRECATION: torchsde 0.2.5 has a non-standard dependency specifier numpy>=1.19.*; python_version >= "3.7". pip 23.3 will enforce this behaviour change. A possible replacement is to upgrade to a newer version of torchsde or contact the author to suggest that they release a version with a conforming dependency specifiers. Discussion can be found at pypa/pip#12063
Launching Web UI with arguments: --medvram-sdxl --theme dark --xformers --disable-nan-check --api --autolaunch --no-hashing

You are running torch 1.13.1+cu117.
The program is tested to work with torch 2.0.0.
To reinstall the desired version, run with commandline flag --reinstall-torch.
Beware that this will cause a lot of large files to be downloaded, as well as
there are reports of issues with training tab on the latest version.

Use --skip-version-check commandline argument to disable this check.

=================================================================================
You are running xformers 0.0.16rc425.
The program is tested to work with xformers 0.0.20.
To reinstall the desired version, run with commandline flag --reinstall-xformers.

Use --skip-version-check commandline argument to disable this check.

Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu.
*** Error loading script: easyphoto_infer.py
Traceback (most recent call last):
File "E:\SDpreDoc\novelai-webui-aki-v3\modules\scripts.py", line 382, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "E:\SDpreDoc\novelai-webui-aki-v3\modules\script_loading.py", line 10, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "E:\SDpreDoc\novelai-webui-aki-v3\extensions\sd-webui-EasyPhoto\scripts\easyphoto_infer.py", line 11, in <module>
from modelscope.pipelines import pipeline
ModuleNotFoundError: No module named 'modelscope'
Hint: The Python runtime threw an exception. Please check the troubleshooting page.


*** Error loading script: easyphoto_train.py
Traceback (most recent call last):
File "E:\SDpreDoc\novelai-webui-aki-v3\modules\scripts.py", line 382, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "E:\SDpreDoc\novelai-webui-aki-v3\modules\script_loading.py", line 10, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "E:\SDpreDoc\novelai-webui-aki-v3\extensions\sd-webui-EasyPhoto\scripts\easyphoto_train.py", line 19, in <module>
from scripts.preprocess import preprocess_images
File "E:\SDpreDoc\novelai-webui-aki-v3\extensions\sd-webui-EasyPhoto\scripts\preprocess.py", line 8, in <module>
import insightface
ModuleNotFoundError: No module named 'insightface'
Hint: The Python runtime threw an exception. Please check the troubleshooting page.


*** Error loading script: easyphoto_ui.py
Traceback (most recent call last):
File "E:\SDpreDoc\novelai-webui-aki-v3\modules\scripts.py", line 382, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "E:\SDpreDoc\novelai-webui-aki-v3\modules\script_loading.py", line 10, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "E:\SDpreDoc\novelai-webui-aki-v3\extensions\sd-webui-EasyPhoto\scripts\easyphoto_ui.py", line 7, in <module>
from scripts.easyphoto_infer import easyphoto_infer_forward
File "E:\SDpreDoc\novelai-webui-aki-v3\extensions\sd-webui-EasyPhoto\scripts\easyphoto_infer.py", line 11, in <module>
from modelscope.pipelines import pipeline
ModuleNotFoundError: No module named 'modelscope'
Hint: The Python runtime threw an exception. Please check the troubleshooting page.


*** Error loading script: preprocess.py
Traceback (most recent call last):
File "E:\SDpreDoc\novelai-webui-aki-v3\modules\scripts.py", line 382, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "E:\SDpreDoc\novelai-webui-aki-v3\modules\script_loading.py", line 10, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "E:\SDpreDoc\novelai-webui-aki-v3\extensions\sd-webui-EasyPhoto\scripts\preprocess.py", line 8, in <module>
import insightface
ModuleNotFoundError: No module named 'insightface'
Hint: The Python runtime threw an exception. Please check the troubleshooting page.


Downloading the detection model to C:\Users\Admin\.ifnude/detector.onnx
*** Error loading script: swapper.py
Traceback (most recent call last):
File "urllib\request.py", line 1348, in do_open
File "http\client.py", line 1282, in request
File "http\client.py", line 1328, in _send_request
File "http\client.py", line 1277, in endheaders
File "http\client.py", line 1037, in _send_output
File "http\client.py", line 975, in send
File "http\client.py", line 1447, in connect
File "http\client.py", line 941, in connect
File "socket.py", line 845, in create_connection
File "socket.py", line 833, in create_connection
TimeoutError: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
Hint: The Python runtime threw an exception. Please check the troubleshooting page.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "E:\SDpreDoc\novelai-webui-aki-v3\modules\scripts.py", line 382, in load_scripts
    script_module = script_loading.load_module(scriptfile.path)
  File "E:\SDpreDoc\novelai-webui-aki-v3\modules\script_loading.py", line 10, in load_module
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "E:\SDpreDoc\novelai-webui-aki-v3\extensions\sd-webui-EasyPhoto\scripts\swapper.py", line 12, in <module>
    from ifnude import detect
  File "E:\SDpreDoc\novelai-webui-aki-v3\py310\lib\site-packages\ifnude\__init__.py", line 1, in <module>
    from .detector import detect
  File "E:\SDpreDoc\novelai-webui-aki-v3\py310\lib\site-packages\ifnude\detector.py", line 36, in <module>
    download(model_url, model_path)
  File "E:\SDpreDoc\novelai-webui-aki-v3\py310\lib\site-packages\ifnude\detector.py", line 16, in download
    request = urllib.request.urlopen(url)
  File "urllib\request.py", line 216, in urlopen
  File "urllib\request.py", line 519, in open
  File "urllib\request.py", line 536, in _open
  File "urllib\request.py", line 496, in _call_chain
  File "urllib\request.py", line 1391, in https_open
  File "urllib\request.py", line 1351, in do_open
urllib.error.URLError: <urlopen error [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.>

[AddNet] Updating model hashes...
[AddNet] Updating model hashes...
2023-09-12 12:27:04,856 - ControlNet - INFO - ControlNet v1.1.409
ControlNet preprocessor location: E:\SDpreDoc\novelai-webui-aki-v3\extensions\sd-webui-controlnet\annotator\downloads
2023-09-12 12:27:04,994 - ControlNet - INFO - ControlNet v1.1.409
*** Error loading script: deforum.py
Traceback (most recent call last):
File "E:\SDpreDoc\novelai-webui-aki-v3\modules\scripts.py", line 382, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "E:\SDpreDoc\novelai-webui-aki-v3\modules\script_loading.py", line 10, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "E:\SDpreDoc\novelai-webui-aki-v3\extensions\sd-webui-deforum\scripts\deforum.py", line 41, in <module>
init_deforum()
File "E:\SDpreDoc\novelai-webui-aki-v3\extensions\sd-webui-deforum\scripts\deforum.py", line 33, in init_deforum
from deforum_helpers.ui_right import on_ui_tabs
File "E:\SDpreDoc\novelai-webui-aki-v3\extensions\sd-webui-deforum\scripts\deforum_helpers\ui_right.py", line 20, in
from webui import wrap_gradio_gpu_call
ImportError: cannot import name 'wrap_gradio_gpu_call' from 'webui' (E:\SDpreDoc\novelai-webui-aki-v3\webui.py)
Hint: The Python runtime threw an exception. Please check the troubleshooting page.


*** Error loading script: deforum_api.py
Traceback (most recent call last):
File "E:\SDpreDoc\novelai-webui-aki-v3\modules\scripts.py", line 382, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "E:\SDpreDoc\novelai-webui-aki-v3\modules\script_loading.py", line 10, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "E:\SDpreDoc\novelai-webui-aki-v3\extensions\sd-webui-deforum\scripts\deforum_api.py", line 29, in <module>
from deforum_api_models import Batch, DeforumJobErrorType, DeforumJobStatusCategory, DeforumJobPhase, DeforumJobStatus
ModuleNotFoundError: No module named 'deforum_api_models'
Hint: The Python runtime threw an exception. Please check the troubleshooting page.


Image Browser: ImageReward is not installed, cannot be used.
sd-webui-prompt-all-in-one background API service started successfully.
Loading weights [None] from E:\SDpreDoc\novelai-webui-aki-v3\models\Stable-diffusion\CounterfeitV30_v30.safetensors
Creating model from config: E:\SDpreDoc\novelai-webui-aki-v3\configs\v1-inference.yaml
Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch().
Startup time: 66.2s (prepare environment: 38.4s, initialize shared: 0.1s, other imports: 0.8s, setup codeformer: 0.1s, load scripts: 24.0s, create ui: 1.7s, gradio launch: 0.7s, add APIs: 0.1s, app_started_callback: 0.1s).
Applying attention optimization: xformers... done.
Model loaded in 10.4s (load weights from disk: 0.9s, create model: 0.4s, apply weights to model: 6.5s, apply half(): 1.6s, calculate empty prompt: 0.8s).

WeChat group full: 200-member limit reached, cannot join

Dear project developers:
Hello!
It is great to see such excellent open-source work. The WeChat group has exceeded the 200-member limit, so new members can no longer join. I hope to join the group chat, thank you!
Best regards,
MXC

The SDWebUI frontend disconnected but the backend is still training. What should I do?

Problem description:

  1. Training takes a long time. After the SDWebUI frontend disconnects, the backend keeps training, but refreshing the frontend gives no status indication. What should I do?

In the current version, do not resubmit the job. Watch the backend log and wait for training to finish; afterwards you can run inference normally from the Inference tab.

TODO: provide a log display in the frontend that keeps streaming the backend training log after a reconnect, to help users track progress?

Training error

Traceback (most recent call last):
File "/ai/t8/webui-easyphoto/venv/lib/python3.10/site-packages/urllib3/connectionpool.py", line 714, in urlopen
httplib_response = self._make_request(
File "/ai/t8/webui-easyphoto/venv/lib/python3.10/site-packages/urllib3/connectionpool.py", line 466, in _make_request
six.raise_from(e, None)
File "<string>", line 3, in raise_from
File "/ai/t8/webui-easyphoto/venv/lib/python3.10/site-packages/urllib3/connectionpool.py", line 461, in _make_request
httplib_response = conn.getresponse()
File "/home/t8/.conda/envs/py310/lib/python3.10/http/client.py", line 1374, in getresponse
response.begin()
File "/home/t8/.conda/envs/py310/lib/python3.10/http/client.py", line 318, in begin
version, status, reason = self._read_status()
File "/home/t8/.conda/envs/py310/lib/python3.10/http/client.py", line 279, in _read_status
line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
File "/home/t8/.conda/envs/py310/lib/python3.10/socket.py", line 705, in readinto
return self._sock.recv_into(b)
ConnectionResetError: [Errno 104] Connection reset by peer

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/ai/t8/webui-easyphoto/venv/lib/python3.10/site-packages/requests/adapters.py", line 486, in send
resp = conn.urlopen(
File "/ai/t8/webui-easyphoto/venv/lib/python3.10/site-packages/urllib3/connectionpool.py", line 798, in urlopen
retries = retries.increment(
File "/ai/t8/webui-easyphoto/venv/lib/python3.10/site-packages/urllib3/util/retry.py", line 550, in increment
raise six.reraise(type(error), error, _stacktrace)
File "/ai/t8/webui-easyphoto/venv/lib/python3.10/site-packages/urllib3/packages/six.py", line 769, in reraise
raise value.with_traceback(tb)
File "/ai/t8/webui-easyphoto/venv/lib/python3.10/site-packages/urllib3/connectionpool.py", line 714, in urlopen
httplib_response = self._make_request(
File "/ai/t8/webui-easyphoto/venv/lib/python3.10/site-packages/urllib3/connectionpool.py", line 466, in _make_request
six.raise_from(e, None)
File "<string>", line 3, in raise_from
File "/ai/t8/webui-easyphoto/venv/lib/python3.10/site-packages/urllib3/connectionpool.py", line 461, in _make_request
httplib_response = conn.getresponse()
File "/home/t8/.conda/envs/py310/lib/python3.10/http/client.py", line 1374, in getresponse
response.begin()
File "/home/t8/.conda/envs/py310/lib/python3.10/http/client.py", line 318, in begin
version, status, reason = self._read_status()
File "/home/t8/.conda/envs/py310/lib/python3.10/http/client.py", line 279, in _read_status
line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
File "/home/t8/.conda/envs/py310/lib/python3.10/socket.py", line 705, in readinto
return self._sock.recv_into(b)
urllib3.exceptions.ProtocolError: ('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/ai/t8/webui-easyphoto/venv/lib/python3.10/site-packages/modelscope/hub/file_download.py", line 290, in http_get_file
r = requests.get(
File "/ai/t8/webui-easyphoto/venv/lib/python3.10/site-packages/requests/api.py", line 73, in get
return request("get", url, params=params, **kwargs)
File "/ai/t8/webui-easyphoto/venv/lib/python3.10/site-packages/requests/api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
File "/ai/t8/webui-easyphoto/venv/lib/python3.10/site-packages/requests/sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
File "/ai/t8/webui-easyphoto/venv/lib/python3.10/site-packages/requests/sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
File "/ai/t8/webui-easyphoto/venv/lib/python3.10/site-packages/requests/adapters.py", line 501, in send
raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/ai/t8/webui-easyphoto/venv/lib/python3.10/site-packages/gradio/routes.py", line 488, in run_predict
output = await app.get_blocks().process_api(
File "/ai/t8/webui-easyphoto/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1431, in process_api
result = await self.call_function(
File "/ai/t8/webui-easyphoto/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1103, in call_function
prediction = await anyio.to_thread.run_sync(
File "/ai/t8/webui-easyphoto/venv/lib/python3.10/site-packages/anyio/to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "/ai/t8/webui-easyphoto/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "/ai/t8/webui-easyphoto/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 807, in run
result = context.run(func, *args)
File "/ai/t8/webui-easyphoto/venv/lib/python3.10/site-packages/gradio/utils.py", line 707, in wrapper
response = f(*args, **kwargs)
File "/ai/t8/webui-easyphoto/extensions/sd-webui-EasyPhoto/scripts/easyphoto_infer.py", line 215, in easyphoto_infer_forward
image_face_fusion = pipeline(Tasks.image_face_fusion, model='damo/cv_unet-image-face-fusion_damo')
File "/ai/t8/webui-easyphoto/venv/lib/python3.10/site-packages/modelscope/pipelines/builder.py", line 106, in pipeline
model = normalize_model_input(
File "/ai/t8/webui-easyphoto/venv/lib/python3.10/site-packages/modelscope/pipelines/builder.py", line 33, in normalize_model_input
model = snapshot_download(
File "/ai/t8/webui-easyphoto/venv/lib/python3.10/site-packages/modelscope/hub/snapshot_download.py", line 149, in snapshot_download
http_get_file(
File "/ai/t8/webui-easyphoto/venv/lib/python3.10/site-packages/modelscope/hub/file_download.py", line 316, in http_get_file
retry = retry.increment('GET', url, error=e)
File "/ai/t8/webui-easyphoto/venv/lib/python3.10/site-packages/urllib3/util/retry.py", line 592, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: None: Max retries exceeded with url: http://www.modelscope.cn/api/v1/models/damo/cv_unet-image-face-fusion_damo/repo?Revision=v1.2&FilePath=description/.DS_Store (Caused by ConnectionError(ProtocolError('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))))
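
The traceback above ends in a `ConnectionResetError` while ModelScope downloads `damo/cv_unet-image-face-fusion_damo`, i.e. a transient network failure rather than a bug in the extension. One possible workaround (a sketch, not an official fix) is to wrap the failing call in a small retry helper with exponential backoff; the helper below is generic Python, and the commented ModelScope usage is an assumption based on the call shown in the traceback:

```python
import time

def call_with_retries(fn, attempts=4, base_delay=2.0,
                      retry_on=(ConnectionError, OSError)):
    """Call fn() and retry with exponential backoff on transient
    network errors (e.g. 'Connection reset by peer')."""
    for attempt in range(attempts):
        try:
            return fn()
        except retry_on:
            if attempt == attempts - 1:
                raise  # out of retries, surface the original error
            time.sleep(base_delay * (2 ** attempt))

# Hypothetical usage around the call that failed in the traceback
# (import paths may differ between modelscope versions):
# from modelscope.pipelines import pipeline
# from modelscope.utils.constant import Tasks
# image_face_fusion = call_with_retries(
#     lambda: pipeline(Tasks.image_face_fusion,
#                      model='damo/cv_unet-image-face-fusion_damo'))
```

If the reset keeps recurring even with retries, the problem is usually the network path to modelscope.cn (proxy/firewall), not the client.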

Inquiry about the methods used in this project

Hi there,

I came across your project and was really impressed by the results you achieved. I'm interested in learning more about the methods you used, specifically whether you employed any techniques from academic papers such as Custom Diffusion or DreamBooth.

Would you be able to share which papers you referenced or any other resources that may be helpful in understanding the techniques used in your project?

Thank you so much for your time and help!

[Windows: if the venv module is missing, it must be installed manually] ModuleNotFoundError: No module named 'venv'

*** Error loading script: easyphoto_infer.py
Traceback (most recent call last):
File "D:\stable diffusion\sd-webui-aki-v4\sd-webui-aki-v4\modules\scripts.py", line 382, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "D:\stable diffusion\sd-webui-aki-v4\sd-webui-aki-v4\modules\script_loading.py", line 10, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "D:\stable diffusion\sd-webui-aki-v4\sd-webui-aki-v4\extensions\sd-webui-EasyPhoto\scripts\easyphoto_infer.py", line 11, in <module>
from modelscope.pipelines import pipeline
File "D:\stable diffusion\sd-webui-aki-v4\sd-webui-aki-v4\py310\lib\site-packages\modelscope\pipelines\__init__.py", line 4, in <module>
from .base import Pipeline
File "D:\stable diffusion\sd-webui-aki-v4\sd-webui-aki-v4\py310\lib\site-packages\modelscope\pipelines\base.py", line 15, in <module>
from modelscope.models.base import Model
File "D:\stable diffusion\sd-webui-aki-v4\sd-webui-aki-v4\py310\lib\site-packages\modelscope\models\__init__.py", line 8, in <module>
from .base import Head, Model
File "D:\stable diffusion\sd-webui-aki-v4\sd-webui-aki-v4\py310\lib\site-packages\modelscope\models\base\__init__.py", line 4, in <module>
from .base_head import * # noqa F403
File "D:\stable diffusion\sd-webui-aki-v4\sd-webui-aki-v4\py310\lib\site-packages\modelscope\models\base\base_head.py", line 5, in <module>
from modelscope.models.base.base_model import Model
File "D:\stable diffusion\sd-webui-aki-v4\sd-webui-aki-v4\py310\lib\site-packages\modelscope\models\base\base_model.py", line 16, in <module>
from modelscope.utils.plugins import (register_modelhub_repo,
File "D:\stable diffusion\sd-webui-aki-v4\sd-webui-aki-v4\py310\lib\site-packages\modelscope\utils\plugins.py", line 12, in <module>
import venv
ModuleNotFoundError: No module named 'venv'
Note: the Python runtime raised an exception. Please check the troubleshooting page.


*** Error loading script: easyphoto_train.py
Traceback (most recent call last):
File "D:\stable diffusion\sd-webui-aki-v4\sd-webui-aki-v4\modules\scripts.py", line 382, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "D:\stable diffusion\sd-webui-aki-v4\sd-webui-aki-v4\modules\script_loading.py", line 10, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "D:\stable diffusion\sd-webui-aki-v4\sd-webui-aki-v4\extensions\sd-webui-EasyPhoto\scripts\easyphoto_train.py", line 19, in <module>
from scripts.preprocess import preprocess_images
File "D:\stable diffusion\sd-webui-aki-v4\sd-webui-aki-v4\extensions\sd-webui-EasyPhoto\scripts\preprocess.py", line 12, in <module>
from modelscope.pipelines import pipeline
File "D:\stable diffusion\sd-webui-aki-v4\sd-webui-aki-v4\py310\lib\site-packages\modelscope\pipelines\__init__.py", line 4, in <module>
from .base import Pipeline
File "D:\stable diffusion\sd-webui-aki-v4\sd-webui-aki-v4\py310\lib\site-packages\modelscope\pipelines\base.py", line 15, in <module>
from modelscope.models.base import Model
File "D:\stable diffusion\sd-webui-aki-v4\sd-webui-aki-v4\py310\lib\site-packages\modelscope\models\__init__.py", line 8, in <module>
from .base import Head, Model
File "D:\stable diffusion\sd-webui-aki-v4\sd-webui-aki-v4\py310\lib\site-packages\modelscope\models\base\__init__.py", line 4, in <module>
from .base_head import * # noqa F403
File "D:\stable diffusion\sd-webui-aki-v4\sd-webui-aki-v4\py310\lib\site-packages\modelscope\models\base\base_head.py", line 5, in <module>
from modelscope.models.base.base_model import Model
File "D:\stable diffusion\sd-webui-aki-v4\sd-webui-aki-v4\py310\lib\site-packages\modelscope\models\base\base_model.py", line 16, in <module>
from modelscope.utils.plugins import (register_modelhub_repo,
File "D:\stable diffusion\sd-webui-aki-v4\sd-webui-aki-v4\py310\lib\site-packages\modelscope\utils\plugins.py", line 12, in <module>
import venv
ModuleNotFoundError: No module named 'venv'
Note: the Python runtime raised an exception. Please check the troubleshooting page.


*** Error loading script: easyphoto_ui.py
Traceback (most recent call last):
File "D:\stable diffusion\sd-webui-aki-v4\sd-webui-aki-v4\modules\scripts.py", line 382, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "D:\stable diffusion\sd-webui-aki-v4\sd-webui-aki-v4\modules\script_loading.py", line 10, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "D:\stable diffusion\sd-webui-aki-v4\sd-webui-aki-v4\extensions\sd-webui-EasyPhoto\scripts\easyphoto_ui.py", line 7, in <module>
from scripts.easyphoto_infer import easyphoto_infer_forward
File "D:\stable diffusion\sd-webui-aki-v4\sd-webui-aki-v4\extensions\sd-webui-EasyPhoto\scripts\easyphoto_infer.py", line 11, in <module>
from modelscope.pipelines import pipeline
File "D:\stable diffusion\sd-webui-aki-v4\sd-webui-aki-v4\py310\lib\site-packages\modelscope\pipelines\__init__.py", line 4, in <module>
from .base import Pipeline
File "D:\stable diffusion\sd-webui-aki-v4\sd-webui-aki-v4\py310\lib\site-packages\modelscope\pipelines\base.py", line 15, in <module>
from modelscope.models.base import Model
File "D:\stable diffusion\sd-webui-aki-v4\sd-webui-aki-v4\py310\lib\site-packages\modelscope\models\__init__.py", line 8, in <module>
from .base import Head, Model
File "D:\stable diffusion\sd-webui-aki-v4\sd-webui-aki-v4\py310\lib\site-packages\modelscope\models\base\__init__.py", line 4, in <module>
from .base_head import * # noqa F403
File "D:\stable diffusion\sd-webui-aki-v4\sd-webui-aki-v4\py310\lib\site-packages\modelscope\models\base\base_head.py", line 5, in <module>
from modelscope.models.base.base_model import Model
File "D:\stable diffusion\sd-webui-aki-v4\sd-webui-aki-v4\py310\lib\site-packages\modelscope\models\base\base_model.py", line 16, in <module>
from modelscope.utils.plugins import (register_modelhub_repo,
File "D:\stable diffusion\sd-webui-aki-v4\sd-webui-aki-v4\py310\lib\site-packages\modelscope\utils\plugins.py", line 12, in <module>
import venv
ModuleNotFoundError: No module named 'venv'
Note: the Python runtime raised an exception. Please check the troubleshooting page.


*** Error loading script: preprocess.py
Traceback (most recent call last):
File "D:\stable diffusion\sd-webui-aki-v4\sd-webui-aki-v4\modules\scripts.py", line 382, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "D:\stable diffusion\sd-webui-aki-v4\sd-webui-aki-v4\modules\script_loading.py", line 10, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "D:\stable diffusion\sd-webui-aki-v4\sd-webui-aki-v4\extensions\sd-webui-EasyPhoto\scripts\preprocess.py", line 12, in <module>
from modelscope.pipelines import pipeline
File "D:\stable diffusion\sd-webui-aki-v4\sd-webui-aki-v4\py310\lib\site-packages\modelscope\pipelines\__init__.py", line 4, in <module>
from .base import Pipeline
File "D:\stable diffusion\sd-webui-aki-v4\sd-webui-aki-v4\py310\lib\site-packages\modelscope\pipelines\base.py", line 15, in <module>
from modelscope.models.base import Model
File "D:\stable diffusion\sd-webui-aki-v4\sd-webui-aki-v4\py310\lib\site-packages\modelscope\models\__init__.py", line 8, in <module>
from .base import Head, Model
File "D:\stable diffusion\sd-webui-aki-v4\sd-webui-aki-v4\py310\lib\site-packages\modelscope\models\base\__init__.py", line 4, in <module>
from .base_head import * # noqa F403
File "D:\stable diffusion\sd-webui-aki-v4\sd-webui-aki-v4\py310\lib\site-packages\modelscope\models\base\base_head.py", line 5, in <module>
from modelscope.models.base.base_model import Model
File "D:\stable diffusion\sd-webui-aki-v4\sd-webui-aki-v4\py310\lib\site-packages\modelscope\models\base\base_model.py", line 16, in <module>
from modelscope.utils.plugins import (register_modelhub_repo,
File "D:\stable diffusion\sd-webui-aki-v4\sd-webui-aki-v4\py310\lib\site-packages\modelscope\utils\plugins.py", line 12, in <module>
import venv
ModuleNotFoundError: No module named 'venv'
Note: the Python runtime raised an exception. Please check the troubleshooting page.

### Using the 秋叶 (aki) all-in-one package updated to the latest version 1.6, the WebUI reports this error on startup after installing the plugin.
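
All four script-loading failures reduce to the same root cause: the bundled `py310` interpreter in the all-in-one package ships without the standard-library `venv` module, which `modelscope/utils/plugins.py` imports. A quick way to confirm this is a small availability check (a sketch; the "copy `Lib/venv` from a full CPython install of the same minor version" workaround mentioned in the comment is the manual fix the issue title refers to, not an official one):

```python
import importlib.util
import sys

def has_stdlib_module(name):
    """Return True if the running interpreter can locate the module."""
    return importlib.util.find_spec(name) is not None

if __name__ == "__main__":
    if has_stdlib_module("venv"):
        print("venv is available")
    else:
        # On stripped-down bundled interpreters, copying Lib/venv from a
        # full CPython install of the same minor version (here 3.10) into
        # the package's py310\Lib directory is a common manual workaround.
        print("venv is missing from", sys.prefix)
```

Run this with the same `py310` interpreter the WebUI uses, not your system Python, or the check will not reflect the failing environment.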

Training error: connection timeout

requests.exceptions.ConnectTimeout: HTTPSConnectionPool(host='pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com', port=443): Max retries exceeded with url: /webui/ChilloutMix-ni-fp16.safetensors (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x000001F03D4A40A0>, 'Connection to pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com timed out. (connect timeout=None)'))

Can you please upload the files to Hugging Face or a similar host? I am not able to download them from these links:

    "https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/webui/ChilloutMix-ni-fp16.safetensors", 
    "https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/webui/control_v11p_sd15_openpose.pth",
    "https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/webui/control_v11p_sd15_canny.pth",
    "https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/webui/control_v11f1e_sd15_tile.pth",
    "https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/webui/control_sd15_random_color.pth",
    "https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/webui/FilmVelvia3.safetensors",
    "https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/webui/body_pose_model.pth",
    "https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/webui/facenet.pth",
    "https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/webui/hand_pose_model.pth",
    "https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/webui/vae-ft-mse-840000-ema-pruned.ckpt",
    "https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/webui/face_skin.pth",
    "https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/webui/w600k_r50.onnx",
    "https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/webui/2d106det.onnx",
    "https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/webui/det_10g.onnx",
    "https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/webui/1.jpg",
    "https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/webui/2.jpg",
    "https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/webui/3.jpg",
    "https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/webui/4.jpg",
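
When the extension's automatic download times out (as in the `ConnectTimeout` above), one workaround is to fetch the listed files manually and place them where the extension expects them. The sketch below downloads from the URLs quoted above; the destination directory name is an assumption for illustration — check the extension's own model paths before relying on it:

```python
import os
import urllib.request

URLS = [
    "https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/webui/ChilloutMix-ni-fp16.safetensors",
    "https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/webui/facenet.pth",
    # ... add the remaining URLs from the list above ...
]

def target_path(url, dest_dir):
    """Derive the local file path from the URL's last path segment."""
    return os.path.join(dest_dir, url.rsplit("/", 1)[-1])

def download_all(urls, dest_dir="models/EasyPhoto"):  # dest_dir is an assumption
    os.makedirs(dest_dir, exist_ok=True)
    for url in urls:
        dest = target_path(url, dest_dir)
        if not os.path.exists(dest):  # skip files already downloaded
            urllib.request.urlretrieve(url, dest)
```

Partial or interrupted downloads should be deleted before re-running, since the existence check above would otherwise skip them.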
