
furkangozukara / stable-diffusion


Stable Diffusion, SDXL, LoRA Training, DreamBooth Training, Automatic1111 Web UI, DeepFake, Deep Fakes, TTS, Animation, Text To Video, Tutorials, Guides, Lectures, Courses, ComfyUI, Google Colab, RunPod, NoteBooks, ControlNet, TTS, Voice Cloning, AI, AI News, ML, ML News, News, Tech, Tech News, Kohya LoRA, Kandinsky 2, DeepFloyd IF, Midjourney

Home Page: https://www.youtube.com/SECourses

License: GNU General Public License v3.0

Jupyter Notebook 72.27% Python 26.59% Shell 1.14%
deepfake deepfakes dreambooth guide guides stable-diffusion tts tutorial tutorials text-to-video

stable-diffusion's Issues

Finish / start of auto install on RunPod not possible

Hi, and first of all, thank you a lot for your work and explanation!

Worked on RunPod with an RTX A6000, like you described in your YouTube auto install video.

Everything worked fine until I executed:
cd /workspace/stable-diffusion-xl-demo
source venv/bin/activate
PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:256 SHARE=true ENABLE_REFINER=true python app7.py

Here I got several warnings as feedback (see below; this is what I reproduced when I started it again).
Interestingly, it gives me a 127.0.0.1 address to start with.

  • When I click the 127 address, nothing happens, as I expected.
  • When I start the web terminal and connect to it, I only get this: "root@38f87f9b1020:/workspace#" on a black page.
  • When I click "Connect to HTTP ... Port 3001", I get the 502 page.

Thanks for the support
El


root@38f87f9b1020:/workspace/stable-diffusion-xl-demo# cd /workspace/stable-diffusion-xl-demo
source venv/bin/activate
PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:256 SHARE=true ENABLE_REFINER=true python app7.py
Loading model /workspace/stable-diffusion-xl-base-0.9
Loading pipeline components...: 100%|██████████| 7/7 [00:32<00:00, 4.62s/it]
Loading model /workspace/stable-diffusion-xl-refiner-0.9
Loading pipeline components...: 100%|██████████| 5/5 [00:11<00:00]
/workspace/stable-diffusion-xl-demo/venv/lib/python3.10/site-packages/gradio/deprecation.py:43: UserWarning: You have unused kwarg parameters in Blocks, please remove them: {'timeout': 300}
warnings.warn(
/workspace/stable-diffusion-xl-demo/venv/lib/python3.10/site-packages/gradio/layouts.py:75: UserWarning: mobile_collapse is no longer supported.
warnings.warn("mobile_collapse is no longer supported.")
/workspace/stable-diffusion-xl-demo/venv/lib/python3.10/site-packages/gradio/components.py:198: UserWarning: 'rounded' styling is no longer supported. To round adjacent components together, place them in a Column(variant='box').
warnings.warn(
/workspace/stable-diffusion-xl-demo/venv/lib/python3.10/site-packages/gradio/components.py:216: UserWarning: 'border' styling is no longer supported. To place adjacent components in a shared border, place them in a Column(variant='box').
warnings.warn(
/workspace/stable-diffusion-xl-demo/venv/lib/python3.10/site-packages/gradio/components.py:207: UserWarning: 'margin' styling is no longer supported. To place adjacent components together without margin, place them in a Column(variant='box').
warnings.warn(
/workspace/stable-diffusion-xl-demo/venv/lib/python3.10/site-packages/gradio/deprecation.py:43: UserWarning: You have unused kwarg parameters in Slider, please remove them: {'enabled': True}
warnings.warn(
Running on local URL: http://127.0.0.1:7860
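
A possible explanation, assuming the demo uses Gradio (the warning lines above come from Gradio): by default Gradio binds only to localhost, so the 127.0.0.1 URL is reachable from inside the pod but not through RunPod's HTTP port proxy. A minimal sketch of a launch that is reachable from outside the container; server_name, server_port and share are standard Gradio launch parameters, and the Blocks content is a placeholder, not the demo's actual UI:

import gradio as gr

with gr.Blocks() as demo:
    gr.Markdown("SDXL demo placeholder")  # hypothetical UI, stands in for app7.py's interface

# Bind to all interfaces so RunPod's port proxy can reach it, and/or request a public share link.
demo.launch(server_name="0.0.0.0", server_port=7860, share=True)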

AttributeError: module 'torch.nn.utils.parametrizations' has no attribute 'weight_norm'

When trying to run tortoise-tts-fast, I receive this error

(venv) E:\X\Voice Training\tortoise-tts-fast>python "E:\X\Voice Training\tortoise-tts-fast\scripts\tortoise_tts.py" --preset high_quality --ar_checkpoint "E:\X\Voice Training\DL-Art-School\experiments\Matthew_VC\models\875_gpt.pth" "Hello. Can you hear me? Is this thing on?."
Traceback (most recent call last):
File "E:\X\Voice Training\tortoise-tts-fast\scripts\tortoise_tts.py", line 240, in <module>
from tortoise.inference import (
File "E:\X\Voice Training\tortoise-tts-fast\tortoise\inference.py", line 167, in <module>
vfixer = VoiceFixer()
File "E:\X\Voice Training\tortoise-tts-fast\venv\lib\site-packages\voicefixer\base.py", line 13, in __init__
self._model = voicefixer_fe(channels=2, sample_rate=44100)
File "E:\X\Voice Training\tortoise-tts-fast\venv\lib\site-packages\voicefixer\restorer\model.py", line 180, in __init__
self.vocoder = Vocoder(sample_rate=44100)
File "E:\X\Voice Training\tortoise-tts-fast\venv\lib\site-packages\voicefixer\vocoder\base.py", line 19, in __init__
self._load_pretrain(Config.ckpt)
File "E:\X\Voice Training\tortoise-tts-fast\venv\lib\site-packages\voicefixer\vocoder\base.py", line 25, in _load_pretrain
self.model = Generator(Config.cin_channels)
File "E:\X\Voice Training\tortoise-tts-fast\venv\lib\site-packages\voicefixer\vocoder\model\generator.py", line 34, in __init__
nn.utils.parametrizations.weight_norm(
AttributeError: module 'torch.nn.utils.parametrizations' has no attribute 'weight_norm'
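
A hedged reading of this error: torch.nn.utils.parametrizations.weight_norm only exists in newer PyTorch releases, while older builds expose the helper as torch.nn.utils.weight_norm, so the torch installed in this venv is likely older than the voicefixer code expects. A minimal compatibility sketch (not the project's own fix):

import torch.nn as nn

try:
    weight_norm = nn.utils.parametrizations.weight_norm  # available on newer PyTorch
except AttributeError:
    weight_norm = nn.utils.weight_norm  # fallback for older PyTorch builds

layer = weight_norm(nn.Conv1d(16, 32, kernel_size=3))  # example layer, not voicefixer's
print(type(layer))

Upgrading torch inside the venv to a release that ships parametrizations.weight_norm should also resolve it.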

customtkinter: please help me with this

(venv) D:\deepfake\roop>python run.py --keep-frames --keep-fps --execution-provider cuda
Traceback (most recent call last):
File "D:\deepfake\roop\run.py", line 3, in <module>
from roop import core
File "D:\deepfake\roop\roop\core.py", line 22, in <module>
import roop.ui as ui
File "D:\deepfake\roop\roop\ui.py", line 3, in <module>
import customtkinter as ctk
ModuleNotFoundError: No module named 'customtkinter'
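
A quick sanity check, assuming run.py is launched from the same venv shown in the prompt: if the import below fails, the package simply is not installed in that venv, and running python -m pip install customtkinter with the venv active should fix it.

import importlib.metadata as md

import customtkinter  # raises ModuleNotFoundError if it is missing from this venv
print(md.version("customtkinter"))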

I just paid for Colab Pro but it does not work properly

I just paid for Colab Pro but it does not work properly; it shows the error below:

ERROR: Could not open requirements file: [Errno 2] No such file or directory: 'requirements.txt'
Please do something as soon as possible.
thanks
manish kummar
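
A hedged guess at the cause: this pip error usually just means the install cell ran pip install -r requirements.txt from a directory that does not contain the file, for example before changing into the cloned repository. A quick check to run in a Colab cell first:

import os

print(os.getcwd())                          # should be the cloned repo directory
print(os.path.exists("requirements.txt"))   # should print True before installing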

Could not load torch: cuDNN version incompatibility

HELP PLEASE

root@a192be7bc1bc:/workspace/kohya_ss# bash gui.sh --share
16:25:19-866359 INFO nVidia toolkit detected
16:25:20-583053 INFO Torch 2.0.1+cu118
16:25:20-605249 ERROR Could not load torch: cuDNN version incompatibility: PyTorch was compiled against (8, 7, 0) but found runtime version (8, 5, 0). PyTorch already comes bundled with cuDNN. One option to resolving this error is to ensure PyTorch can find the bundled cuDNN. One possibility is that there is a conflicting cuDNN in LD_LIBRARY_PATH.
(screenshot attached)
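
A hedged diagnostic sketch for the mismatch reported above: print the cuDNN version the installed torch actually loads and inspect LD_LIBRARY_PATH for a system cuDNN that shadows the one bundled with the PyTorch wheel.

import os
import torch

print(torch.__version__)                  # e.g. 2.0.1+cu118
print(torch.backends.cudnn.version())     # the cuDNN runtime torch has picked up
print(os.environ.get("LD_LIBRARY_PATH"))  # a stray cuDNN 8.5 path here would explain the conflict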

Kohya LoRA - Kaggle

When I want to enter the GUI through ngrok, the screen displays "ERR_NGROK_8012": Traffic was successfully tunneled to the ngrok agent, but the agent failed to establish a connection to the upstream web service at localhost:7860. The error encountered was: dial tcp 127.0.0.1:7860: connect: connection refused
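
A hedged note: ERR_NGROK_8012 means the tunnel itself is fine but nothing is listening on localhost:7860, i.e. the Kohya GUI crashed or has not finished starting. A quick check from another cell or terminal separates an ngrok problem from a GUI problem:

import socket

with socket.socket() as s:
    s.settimeout(2)
    # True once the web UI is actually listening; False reproduces the 8012 situation
    print(s.connect_ex(("127.0.0.1", 7860)) == 0)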

Voice Cloning Training stops unexpectedly

I transcribed an audio file using the provided code and then used the CLI command to emulate the desktop settings. All my yml file settings are exactly the same. However, the training appears to start but then stops unexpectedly without saving any model or throwing any error. I am attaching the end of the terminal output and the log file.

(terminal output and log file screenshots attached)

ModuleNotFoundError: No module named 'gradio'

Hi, thanks for the steps and the tutorial. I followed the instructions on GitHub and YouTube, but I got the same error with both methods:

ModuleNotFoundError: No module named 'gradio'

Here is the text from the Command Prompt, after I followed the instructions on GitHub:

Microsoft Windows [Version 10.0.22621.2283]
(c) Microsoft Corporation. All rights reserved.

D:\Installers>git clone https://github.com/facebookresearch/audiocraft
Cloning into 'audiocraft'...
remote: Enumerating objects: 953, done.
remote: Counting objects: 100% (237/237), done.
remote: Compressing objects: 100% (122/122), done.
remote: Total 953 (delta 153), reused 143 (delta 114), pack-reused 716
Receiving objects: 100% (953/953), 1.74 MiB | 2.27 MiB/s, done.
Resolving deltas: 100% (480/480), done.

D:\Installers>cd audiocraft

D:\Installers\audiocraft>python -m venv venv

D:\Installers\audiocraft>cd venv

D:\Installers\audiocraft\venv>cd scripts

D:\Installers\audiocraft\venv\Scripts>activate

(venv) D:\Installers\audiocraft\venv\Scripts>cd ..

(venv) D:\Installers\audiocraft\venv>cd ..

(venv) D:\Installers\audiocraft>pip install -e .
Obtaining file:///D:/Installers/audiocraft
  Preparing metadata (setup.py) ... done
Collecting av
  Using cached av-10.0.0-cp310-cp310-win_amd64.whl (25.3 MB)
Collecting einops
  Using cached einops-0.7.0-py3-none-any.whl (44 kB)
Collecting flashy>=0.0.1
  Using cached flashy-0.0.2.tar.gz (72 kB)
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Preparing metadata (pyproject.toml) ... done
Collecting hydra-core>=1.1
  Using cached hydra_core-1.3.2-py3-none-any.whl (154 kB)
Collecting hydra_colorlog
  Using cached hydra_colorlog-1.2.0-py3-none-any.whl (3.6 kB)
Collecting julius
  Using cached julius-0.2.7.tar.gz (59 kB)
  Preparing metadata (setup.py) ... done
Collecting num2words
  Using cached num2words-0.5.12-py3-none-any.whl (125 kB)
Collecting numpy
  Using cached numpy-1.26.0-cp310-cp310-win_amd64.whl (15.8 MB)
Collecting sentencepiece
  Using cached sentencepiece-0.1.99-cp310-cp310-win_amd64.whl (977 kB)
Collecting spacy==3.5.2
  Using cached spacy-3.5.2-cp310-cp310-win_amd64.whl (12.2 MB)
Collecting torch>=2.0.0
  Downloading torch-2.1.0-cp310-cp310-win_amd64.whl (192.3 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 192.3/192.3 MB 4.9 MB/s eta 0:00:00
Collecting torchaudio>=2.0.0
  Downloading torchaudio-2.1.0-cp310-cp310-win_amd64.whl (2.3 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.3/2.3 MB 6.4 MB/s eta 0:00:00
Collecting huggingface_hub
  Using cached huggingface_hub-0.17.3-py3-none-any.whl (295 kB)
Collecting tqdm
  Using cached tqdm-4.66.1-py3-none-any.whl (78 kB)
Collecting transformers>=4.31.0
  Using cached transformers-4.34.0-py3-none-any.whl (7.7 MB)
Collecting xformers
  Using cached xformers-0.0.22-cp310-cp310-win_amd64.whl (97.6 MB)
Collecting demucs
  Using cached demucs-4.0.1.tar.gz (1.2 MB)
  Preparing metadata (setup.py) ... done
Collecting librosa
  Using cached librosa-0.10.1-py3-none-any.whl (253 kB)
Collecting gradio
  Using cached gradio-3.47.1-py3-none-any.whl (20.3 MB)
Collecting torchmetrics
  Using cached torchmetrics-1.2.0-py3-none-any.whl (805 kB)
Collecting encodec
  Using cached encodec-0.1.1.tar.gz (3.7 MB)
  Preparing metadata (setup.py) ... done
Collecting protobuf
  Using cached protobuf-4.24.4-cp310-abi3-win_amd64.whl (430 kB)
Collecting jinja2
  Using cached Jinja2-3.1.2-py3-none-any.whl (133 kB)
Collecting pathy>=0.10.0
  Using cached pathy-0.10.2-py3-none-any.whl (48 kB)
Collecting packaging>=20.0
  Using cached packaging-23.2-py3-none-any.whl (53 kB)
Collecting langcodes<4.0.0,>=3.2.0
  Using cached langcodes-3.3.0-py3-none-any.whl (181 kB)
Collecting cymem<2.1.0,>=2.0.2
  Using cached cymem-2.0.8-cp310-cp310-win_amd64.whl (39 kB)
Requirement already satisfied: setuptools in d:\installers\audiocraft\venv\lib\site-packages (from spacy==3.5.2->audiocraft==1.0.0) (65.5.0)
Collecting srsly<3.0.0,>=2.4.3
  Using cached srsly-2.4.8-cp310-cp310-win_amd64.whl (481 kB)
Collecting wasabi<1.2.0,>=0.9.1
  Using cached wasabi-1.1.2-py3-none-any.whl (27 kB)
Collecting murmurhash<1.1.0,>=0.28.0
  Using cached murmurhash-1.0.10-cp310-cp310-win_amd64.whl (25 kB)
Collecting smart-open<7.0.0,>=5.2.1
  Using cached smart_open-6.4.0-py3-none-any.whl (57 kB)
Collecting catalogue<2.1.0,>=2.0.6
  Using cached catalogue-2.0.10-py3-none-any.whl (17 kB)
Collecting spacy-loggers<2.0.0,>=1.0.0
  Using cached spacy_loggers-1.0.5-py3-none-any.whl (22 kB)
Collecting requests<3.0.0,>=2.13.0
  Using cached requests-2.31.0-py3-none-any.whl (62 kB)
Collecting pydantic!=1.8,!=1.8.1,<1.11.0,>=1.7.4
  Using cached pydantic-1.10.13-cp310-cp310-win_amd64.whl (2.1 MB)
Collecting preshed<3.1.0,>=3.0.2
  Using cached preshed-3.0.9-cp310-cp310-win_amd64.whl (122 kB)
Collecting thinc<8.2.0,>=8.1.8
  Using cached thinc-8.1.12-cp310-cp310-win_amd64.whl (1.5 MB)
Collecting spacy-legacy<3.1.0,>=3.0.11
  Using cached spacy_legacy-3.0.12-py2.py3-none-any.whl (29 kB)
Collecting typer<0.8.0,>=0.3.0
  Using cached typer-0.7.0-py3-none-any.whl (38 kB)
Collecting dora-search
  Using cached dora_search-0.1.12.tar.gz (87 kB)
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Preparing metadata (pyproject.toml) ... done
Collecting colorlog
  Using cached colorlog-6.7.0-py2.py3-none-any.whl (11 kB)
Collecting antlr4-python3-runtime==4.9.*
  Using cached antlr4-python3-runtime-4.9.3.tar.gz (117 kB)
  Preparing metadata (setup.py) ... done
Collecting omegaconf<2.4,>=2.2
  Using cached omegaconf-2.3.0-py3-none-any.whl (79 kB)
Collecting filelock
  Using cached filelock-3.12.4-py3-none-any.whl (11 kB)
Collecting sympy
  Downloading sympy-1.12-py3-none-any.whl (5.7 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 5.7/5.7 MB 6.4 MB/s eta 0:00:00
Collecting typing-extensions
  Using cached typing_extensions-4.8.0-py3-none-any.whl (31 kB)
Collecting networkx
  Downloading networkx-3.1-py3-none-any.whl (2.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.1/2.1 MB 6.0 MB/s eta 0:00:00
Collecting fsspec
  Using cached fsspec-2023.9.2-py3-none-any.whl (173 kB)
Collecting colorama
  Using cached colorama-0.4.6-py2.py3-none-any.whl (25 kB)
Collecting pyyaml>=5.1
  Using cached PyYAML-6.0.1-cp310-cp310-win_amd64.whl (145 kB)
Collecting regex!=2019.12.17
  Using cached regex-2023.10.3-cp310-cp310-win_amd64.whl (269 kB)
Collecting tokenizers<0.15,>=0.14
  Using cached tokenizers-0.14.1-cp310-none-win_amd64.whl (2.2 MB)
Collecting safetensors>=0.3.1
  Using cached safetensors-0.4.0-cp310-none-win_amd64.whl (277 kB)
Collecting lameenc>=1.2
  Using cached lameenc-1.6.1-cp310-cp310-win_amd64.whl (148 kB)
Collecting openunmix
  Using cached openunmix-1.2.1-py3-none-any.whl (46 kB)
Collecting matplotlib~=3.0
  Using cached matplotlib-3.8.0-cp310-cp310-win_amd64.whl (7.6 MB)
Collecting altair<6.0,>=4.2.0
  Using cached altair-5.1.2-py3-none-any.whl (516 kB)
Collecting uvicorn>=0.14.0
  Using cached uvicorn-0.23.2-py3-none-any.whl (59 kB)
Collecting aiofiles<24.0,>=22.0
  Using cached aiofiles-23.2.1-py3-none-any.whl (15 kB)
Collecting semantic-version~=2.0
  Using cached semantic_version-2.10.0-py2.py3-none-any.whl (15 kB)
Collecting pandas<3.0,>=1.0
  Using cached pandas-2.1.1-cp310-cp310-win_amd64.whl (10.7 MB)
Collecting ffmpy
  Using cached ffmpy-0.3.1.tar.gz (5.5 kB)
  Preparing metadata (setup.py) ... done
Collecting websockets<12.0,>=10.0
  Using cached websockets-11.0.3-cp310-cp310-win_amd64.whl (124 kB)
Collecting fastapi
  Using cached fastapi-0.103.2-py3-none-any.whl (66 kB)
Collecting python-multipart
  Using cached python_multipart-0.0.6-py3-none-any.whl (45 kB)
Collecting orjson~=3.0
  Using cached orjson-3.9.7-cp310-none-win_amd64.whl (134 kB)
Collecting importlib-resources<7.0,>=1.3
  Using cached importlib_resources-6.1.0-py3-none-any.whl (33 kB)
Collecting pydub
  Using cached pydub-0.25.1-py2.py3-none-any.whl (32 kB)
Collecting gradio-client==0.6.0
  Using cached gradio_client-0.6.0-py3-none-any.whl (298 kB)
Collecting markupsafe~=2.0
  Using cached MarkupSafe-2.1.3-cp310-cp310-win_amd64.whl (17 kB)
Collecting pillow<11.0,>=8.0
  Using cached Pillow-10.0.1-cp310-cp310-win_amd64.whl (2.5 MB)
Collecting httpx
  Using cached httpx-0.25.0-py3-none-any.whl (75 kB)
Collecting scikit-learn>=0.20.0
  Using cached scikit_learn-1.3.1-cp310-cp310-win_amd64.whl (9.3 MB)
Collecting lazy-loader>=0.1
  Using cached lazy_loader-0.3-py3-none-any.whl (9.1 kB)
Collecting pooch>=1.0
  Using cached pooch-1.7.0-py3-none-any.whl (60 kB)
Collecting joblib>=0.14
  Using cached joblib-1.3.2-py3-none-any.whl (302 kB)
Collecting numba>=0.51.0
  Using cached numba-0.58.0-cp310-cp310-win_amd64.whl (2.6 MB)
Collecting audioread>=2.1.9
  Using cached audioread-3.0.1-py3-none-any.whl (23 kB)
Collecting scipy>=1.2.0
  Using cached scipy-1.11.3-cp310-cp310-win_amd64.whl (44.1 MB)
Collecting msgpack>=1.0
  Using cached msgpack-1.0.7-cp310-cp310-win_amd64.whl (222 kB)
Collecting soundfile>=0.12.1
  Using cached soundfile-0.12.1-py2.py3-none-win_amd64.whl (1.0 MB)
Collecting decorator>=4.3.0
  Using cached decorator-5.1.1-py3-none-any.whl (9.1 kB)
Collecting soxr>=0.3.2
  Using cached soxr-0.3.7-cp310-cp310-win_amd64.whl (184 kB)
Collecting docopt>=0.6.2
  Using cached docopt-0.6.2.tar.gz (25 kB)
  Preparing metadata (setup.py) ... done
Collecting lightning-utilities>=0.8.0
  Using cached lightning_utilities-0.9.0-py3-none-any.whl (23 kB)
Collecting xformers
  Using cached xformers-0.0.21-cp310-cp310-win_amd64.whl (97.5 MB)
  Using cached xformers-0.0.20-cp310-cp310-win_amd64.whl (97.6 MB)
Collecting pyre-extensions==0.0.29
  Downloading pyre_extensions-0.0.29-py3-none-any.whl (12 kB)
Collecting xformers
  Using cached xformers-0.0.19-cp310-cp310-win_amd64.whl (96.7 MB)
  Downloading xformers-0.0.18-cp310-cp310-win_amd64.whl (112.3 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 112.3/112.3 MB 5.3 MB/s eta 0:00:00
Collecting pyre-extensions==0.0.23
  Downloading pyre_extensions-0.0.23-py3-none-any.whl (11 kB)
Collecting xformers
  Downloading xformers-0.0.17-cp310-cp310-win_amd64.whl (112.6 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 112.6/112.6 MB 5.2 MB/s eta 0:00:00
  Downloading xformers-0.0.16-cp310-cp310-win_amd64.whl (40.0 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 40.0/40.0 MB 6.1 MB/s eta 0:00:00
  Downloading xformers-0.0.13.tar.gz (292 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 292.5/292.5 kB 4.6 MB/s eta 0:00:00
  Preparing metadata (setup.py) ... error
  error: subprocess-exited-with-error

  × python setup.py egg_info did not run successfully.
  │ exit code: 1
  ╰─> [6 lines of output]
      Traceback (most recent call last):
        File "<string>", line 2, in <module>
        File "<pip-setuptools-caller>", line 34, in <module>
        File "C:\Users\bayus\AppData\Local\Temp\pip-install-nz6ycovj\xformers_5e722d04a7d34dd8b3069d58fb05c922\setup.py", line 18, in <module>
          import torch
      ModuleNotFoundError: No module named 'torch'
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details.

[notice] A new release of pip is available: 23.0.1 -> 23.2.1
[notice] To update, run: python.exe -m pip install --upgrade pip

(venv) D:\Installers\audiocraft>pip uninstall torch -y
WARNING: Skipping torch as it is not installed.

(venv) D:\Installers\audiocraft>pip uninstall torchvision -y
WARNING: Skipping torchvision as it is not installed.

(venv) D:\Installers\audiocraft>pip uninstall torchaudio -y
WARNING: Skipping torchaudio as it is not installed.

(venv) D:\Installers\audiocraft>pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
Looking in indexes: https://download.pytorch.org/whl/cu118
Collecting torch
  Using cached https://download.pytorch.org/whl/cu118/torch-2.1.0%2Bcu118-cp310-cp310-win_amd64.whl (2722.7 MB)
Collecting torchvision
  Using cached https://download.pytorch.org/whl/cu118/torchvision-0.16.0%2Bcu118-cp310-cp310-win_amd64.whl (5.0 MB)
Collecting torchaudio
  Using cached https://download.pytorch.org/whl/cu118/torchaudio-2.1.0%2Bcu118-cp310-cp310-win_amd64.whl (3.9 MB)
Collecting sympy
  Using cached https://download.pytorch.org/whl/sympy-1.12-py3-none-any.whl (5.7 MB)
Collecting typing-extensions
  Using cached https://download.pytorch.org/whl/typing_extensions-4.4.0-py3-none-any.whl (26 kB)
Collecting fsspec
  Using cached https://download.pytorch.org/whl/fsspec-2023.4.0-py3-none-any.whl (153 kB)
Collecting filelock
  Using cached https://download.pytorch.org/whl/filelock-3.9.0-py3-none-any.whl (9.7 kB)
Collecting networkx
  Using cached https://download.pytorch.org/whl/networkx-3.0-py3-none-any.whl (2.0 MB)
Collecting jinja2
  Using cached https://download.pytorch.org/whl/Jinja2-3.1.2-py3-none-any.whl (133 kB)
Collecting pillow!=8.3.*,>=5.3.0
  Using cached https://download.pytorch.org/whl/Pillow-9.3.0-cp310-cp310-win_amd64.whl (2.5 MB)
Collecting numpy
  Downloading https://download.pytorch.org/whl/numpy-1.24.1-cp310-cp310-win_amd64.whl (14.8 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 14.8/14.8 MB 6.4 MB/s eta 0:00:00
Collecting requests
  Using cached https://download.pytorch.org/whl/requests-2.28.1-py3-none-any.whl (62 kB)
Collecting MarkupSafe>=2.0
  Using cached https://download.pytorch.org/whl/MarkupSafe-2.1.2-cp310-cp310-win_amd64.whl (16 kB)
Collecting urllib3<1.27,>=1.21.1
  Using cached https://download.pytorch.org/whl/urllib3-1.26.13-py2.py3-none-any.whl (140 kB)
Collecting certifi>=2017.4.17
  Using cached https://download.pytorch.org/whl/certifi-2022.12.7-py3-none-any.whl (155 kB)
Collecting charset-normalizer<3,>=2
  Using cached https://download.pytorch.org/whl/charset_normalizer-2.1.1-py3-none-any.whl (39 kB)
Collecting idna<4,>=2.5
  Using cached https://download.pytorch.org/whl/idna-3.4-py3-none-any.whl (61 kB)
Collecting mpmath>=0.19
  Using cached https://download.pytorch.org/whl/mpmath-1.3.0-py3-none-any.whl (536 kB)
Installing collected packages: mpmath, urllib3, typing-extensions, sympy, pillow, numpy, networkx, MarkupSafe, idna, fsspec, filelock, charset-normalizer, certifi, requests, jinja2, torch, torchvision, torchaudio
Successfully installed MarkupSafe-2.1.2 certifi-2022.12.7 charset-normalizer-2.1.1 filelock-3.9.0 fsspec-2023.4.0 idna-3.4 jinja2-3.1.2 mpmath-1.3.0 networkx-3.0 numpy-1.24.1 pillow-9.3.0 requests-2.28.1 sympy-1.12 torch-2.1.0+cu118 torchaudio-2.1.0+cu118 torchvision-0.16.0+cu118 typing-extensions-4.4.0 urllib3-1.26.13

[notice] A new release of pip is available: 23.0.1 -> 23.2.1
[notice] To update, run: python.exe -m pip install --upgrade pip

(venv) D:\Installers\audiocraft>pip install -U --pre xformers
Collecting xformers
  Downloading xformers-0.0.23.dev639-cp310-cp310-win_amd64.whl (97.6 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 97.6/97.6 MB 5.5 MB/s eta 0:00:00
Requirement already satisfied: torch==2.1.0 in d:\installers\audiocraft\venv\lib\site-packages (from xformers) (2.1.0+cu118)
Requirement already satisfied: numpy in d:\installers\audiocraft\venv\lib\site-packages (from xformers) (1.24.1)
Requirement already satisfied: filelock in d:\installers\audiocraft\venv\lib\site-packages (from torch==2.1.0->xformers) (3.9.0)
Requirement already satisfied: networkx in d:\installers\audiocraft\venv\lib\site-packages (from torch==2.1.0->xformers) (3.0)
Requirement already satisfied: fsspec in d:\installers\audiocraft\venv\lib\site-packages (from torch==2.1.0->xformers) (2023.4.0)
Requirement already satisfied: typing-extensions in d:\installers\audiocraft\venv\lib\site-packages (from torch==2.1.0->xformers) (4.4.0)
Requirement already satisfied: sympy in d:\installers\audiocraft\venv\lib\site-packages (from torch==2.1.0->xformers) (1.12)
Requirement already satisfied: jinja2 in d:\installers\audiocraft\venv\lib\site-packages (from torch==2.1.0->xformers) (3.1.2)
Requirement already satisfied: MarkupSafe>=2.0 in d:\installers\audiocraft\venv\lib\site-packages (from jinja2->torch==2.1.0->xformers) (2.1.2)
Requirement already satisfied: mpmath>=0.19 in d:\installers\audiocraft\venv\lib\site-packages (from sympy->torch==2.1.0->xformers) (1.3.0)
Installing collected packages: xformers
Successfully installed xformers-0.0.23.dev639

[notice] A new release of pip is available: 23.0.1 -> 23.2.1
[notice] To update, run: python.exe -m pip install --upgrade pip

(venv) D:\Installers\audiocraft>python .\demos\musicgen_app.py --inbrowser
Traceback (most recent call last):
  File "D:\Installers\audiocraft\demos\musicgen_app.py", line 21, in <module>
    import gradio as gr
ModuleNotFoundError: No module named 'gradio'

(venv) D:\Installers\audiocraft>
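
A plausible reading of the log above (an assumption, not confirmed by the maintainer): the first pip install -e . aborted while resolving xformers, so audiocraft's remaining requirements, gradio included, were never installed; the later commands only added torch, torchvision, torchaudio and xformers. A quick check inside the same venv, after which rerunning pip install -e . (now that torch is present) or installing gradio directly should work:

import importlib.util

print(importlib.util.find_spec("gradio") is not None)  # False matches the error above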

FileNotFound

FileNotFoundError: [Errno 2] No such file or directory: '/root/.cache/huggingface/hub/models--stabilityai--stable-diffusion-xl-base-0.9/refs/main'
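
A hedged sketch: a missing refs/main usually means the gated SDXL 0.9 weights were never fully downloaded into the Hugging Face cache (access must be granted and a token configured). Re-fetching the snapshot recreates the cache layout, including refs/main.

from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="stabilityai/stable-diffusion-xl-base-0.9",
    # token="hf_...",  # required for gated repos unless already logged in via huggingface-cli
)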

ERROR: Could not find a version that satisfies the requirement xformers==0.0.21.dev564

The problem started to occur today:
ERROR: Could not find a version that satisfies the requirement xformers==0.0.21.dev564 (from versions: 0.0.1, 0.0.2, 0.0.3, 0.0.4, 0.0.5, 0.0.6, 0.0.7, 0.0.8, 0.0.9, 0.0.10, 0.0.11, 0.0.12, 0.0.13, 0.0.16rc424, 0.0.16rc425, 0.0.16, 0.0.17rc481, 0.0.17rc482, 0.0.17, 0.0.18, 0.0.19, 0.0.20, 0.0.21.dev569, 0.0.21.dev571)
ERROR: No matching distribution found for xformers==0.0.21.dev564

All Hugging Face model permissions are granted.

CudaCall CUBLAS failure

Hi, I followed your tutorial and got everything set up. I am running on Windows 11. When running python run.py, it opens the roop interface, and on trying to swap faces, it shows this error code. I don't know if it's related to low VRAM or anything else. I am running an AMD Ryzen 5 series CPU with an NVIDIA GTX 1650 4 GB. Please let me know if anything else can be done. Thank you for your tutorials.

onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Exception during initialization: D:\a_work\1\s\onnxruntime\core\providers\cuda\cuda_call.cc:121 onnxruntime::CudaCall D:\a_work\1\s\onnxruntime\core\providers\cuda\cuda_call.cc:114 onnxruntime::CudaCall CUBLAS failure 3: CUBLAS_STATUS_ALLOC_FAILED ; GPU=0 ; hostname=GRA5ITER ; file=D:\a_work\1\s\onnxruntime\core\providers\cuda\cuda_execution_provider.cc ; line=164 ; expr=cublasCreate(&cublas_handle_);
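
A hedged sketch (not roop's own code): CUBLAS_STATUS_ALLOC_FAILED while creating the cuBLAS handle is usually plain VRAM exhaustion, which is plausible on a 4 GB GTX 1650. With onnxruntime you can cap the CUDA memory arena or fall back to CPU execution:

import onnxruntime as ort

providers = [
    ("CUDAExecutionProvider", {"gpu_mem_limit": 2 * 1024 * 1024 * 1024}),  # cap at ~2 GB
    "CPUExecutionProvider",  # used if CUDA cannot allocate
]
session = ort.InferenceSession("model.onnx", providers=providers)  # hypothetical model path
print(session.get_providers())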

OSError: [WinError 127] 找不到指定的程序。 (The specified procedure could not be found.) Error loading "D:\deepface\roop\venv\lib\site-packages\torch\lib\torch_cuda_cpp.dll" or one of its dependencies.

Please help, I have this problem:

(venv) D:\deepface\roop>python run.py --keep-frames --keep-fps --gpu-vendor nvidia
Traceback (most recent call last):
File "D:\deepface\roop\run.py", line 3, in <module>
from roop import core
File "D:\deepface\roop\roop\core.py", line 14, in <module>
import torch
File "D:\deepface\roop\venv\lib\site-packages\torch\__init__.py", line 122, in <module>
raise err
OSError: [WinError 127] 找不到指定的程序。 (The specified procedure could not be found.) Error loading "D:\deepface\roop\venv\lib\site-packages\torch\lib\torch_cuda_cpp.dll" or one of its dependencies.
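
A hedged diagnostic, assuming the same venv: WinError 127 while loading torch_cuda_cpp.dll usually points to a broken or mismatched CUDA build of torch, or leftovers from a different torch version in the same venv. If the minimal import below fails the same way, a clean reinstall of the matching CUDA wheel (pip uninstall torch -y, then pip install torch --index-url https://download.pytorch.org/whl/cu118) is the usual fix.

import torch

print(torch.__version__, torch.version.cuda, torch.cuda.is_available())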

no output video

Hi Furkan,
many thanks for your tutorial.
It worked about three times, but then I started to receive this message. I already tried reconnecting and renaming the files.

%cd "/content/roop"
!python run.py -s "image (3).png" -t "842.mp4" -o "face_v1.mp4" --keep-frames --keep-fps --temp-frame-quality 1 --output-video-quality 1 --execution-provider cuda

/content/roop
Downloading: 529MB [00:02, 220MB/s]
download_path: /root/.insightface/models/buffalo_l
Downloading /root/.insightface/models/buffalo_l.zip from https://github.com/deepinsight/insightface/releases/download/v0.7/buffalo_l.zip...
100% 281857/281857 [00:05<00:00, 53199.74KB/s]
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: /root/.insightface/models/buffalo_l/1k3d68.onnx landmark_3d_68 ['None', 3, 192, 192] 0.0 1.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: /root/.insightface/models/buffalo_l/2d106det.onnx landmark_2d_106 ['None', 3, 192, 192] 0.0 1.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: /root/.insightface/models/buffalo_l/det_10g.onnx detection [1, 3, '?', '?'] 127.5 128.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: /root/.insightface/models/buffalo_l/genderage.onnx genderage ['None', 3, 96, 96] 0.0 1.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: /root/.insightface/models/buffalo_l/w600k_r50.onnx recognition ['None', 3, 112, 112] 127.5 127.5
set det-size: (640, 640)
Pre-trained weights will be downloaded.
Downloading...
From: https://github.com/bhky/opennsfw2/releases/download/v0.1.0/open_nsfw_weights.h5
To: /root/.opennsfw2/weights/open_nsfw_weights.h5
100% 24.2M/24.2M [00:00<00:00, 69.8MB/s]
100% 221/222 [00:02<00:00, 73.73it/s]

After this nothing happens

I cannot start roop

(venv) E:\Deepface\roop>python run.py --keep-frames --keep-fps --core 1
Traceback (most recent call last):
File "E:\Deepface\roop\run.py", line 3, in <module>
from roop import core
File "E:\Deepface\roop\roop\core.py", line 16, in <module>
import onnxruntime
ModuleNotFoundError: No module named 'onnxruntime'

Session stops when I run !bash ...

Hi, and thanks for your efforts on the free Kaggle notebook.

When I run !bash gui.sh --share --headless, the session goes off and Gradio cannot start.

Output screenshot: (attached)

Do you have this problem?

pip install -r requirements.txt fails

When I run the command "pip install -r requirements.txt",
my system appears to hang at
"Installing backend dependencies ..."

Can you please help and advise?

No module named 'opennsfw2'

Getting this error when trying to execute
(venv) D:\Deepface\roop>python run.py
Traceback (most recent call last):
File "run.py", line 3, in <module>
from roop import core
File "D:\Deepface\roop\roop\core.py", line 15, in <module>
from opennsfw2 import predict_video_frames, predict_image
ModuleNotFoundError: No module named 'opennsfw2'

How do I fix it?

Hello, great teaching creator, I would like to see how to train a LoRA model for SDXL 1.0 in Colab

Hello, great teaching creator, I would like to see how to train a LoRA model for SDXL 1.0 in Colab:

https://github.com/Linaqruf/kohya-trainer/blob/main/kohya-LoRA-trainer-XL.ipynb

This is the Colab training code for SDXL 1.0, but I won't use it: the quality of the trained LoRA model is very poor. Colab is very important for people without good graphics cards. Your YouTube video tutorials are very detailed and useful on every topic.

AttributeError: module 'tensorflow' has no attribute 'keras'

Hi FurkanGozukara, I'm a big fan of your YT channel and a subscriber.
Above all, I can't thank you enough for creating this app.

But I can't run this app, because if I type "python run.py --keep-frames --keep-fps --max-cores 1" or "python run.py --keep-frames --keep-fps --gpu-vendor nvidia",

I get this message: "AttributeError: module 'tensorflow' has no attribute 'keras'"

So I ran "pip install --upgrade pip" and "pip install tensorflow==2.12.*",

but it didn't work. I even repeated "Step 7: Activate venv once again", but that didn't work either.

(screenshot attached)

How can I fix this, sir?

(screenshot attached)

If I see your reply, I would really appreciate it.
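
A hedged check to run inside the same venv: tensorflow losing its keras attribute usually means the TensorFlow install is broken or a separately installed keras package is shadowing it, rather than anything roop-specific.

import tensorflow as tf

print(tf.__version__)
print(hasattr(tf, "keras"))  # True on a healthy TensorFlow 2.x install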

Errors while installing tortoise-tts-fast

Hi Furkan,
While executing python -m pip install -e . , the following errors pop up. Can you help? Thanks.
ModuleNotFoundError: No module named 'setuptools'
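
A hedged check: pip install -e . needs setuptools available in the active venv to build the editable install. If the import below fails, upgrading the build tooling first (python -m pip install --upgrade pip setuptools wheel) usually clears this error.

import setuptools  # raises ModuleNotFoundError if missing from the venv
print(setuptools.__version__)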

(screenshot attached)

It stops after this line and I do not know what the problem is (google colab)

After running the second command, after a while it just stops.

!python run.py -f "pic.jpg" -t "155985_720p.mp4" -o "face_changed_video1.mp4" --keep-frames --keep-fps --gpu-vendor nvidia
2023-06-10 17:09:40.464752: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-06-10 17:09:41.892312: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
2023-06-10 17:09:44.620582: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:996] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355
2023-06-10 17:09:44.622707: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:996] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355
2023-06-10 17:09:44.622898: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:996] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355
Applied providers: ['CUDAExecutionProvider', 'CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}, 'CUDAExecutionProvider': {'device_id': '0', 'gpu_mem_limit': '18446744073709551615', 'gpu_external_alloc': '0', 'gpu_external_free': '0', 'gpu_external_empty_cache': '0', 'cudnn_conv_algo_search': 'EXHAUSTIVE', 'cudnn_conv1d_pad_to_nc1d': '0', 'arena_extend_strategy': 'kNextPowerOfTwo', 'do_copy_in_default_stream': '1', 'enable_cuda_graph': '0', 'cudnn_conv_use_max_workspace': '1', 'tunable_op_enable': '0', 'enable_skip_layer_norm_strict_mode': '0', 'tunable_op_tuning_enable': '0'}}
find model: /root/.insightface/models/buffalo_l/1k3d68.onnx landmark_3d_68 ['None', 3, 192, 192] 0.0 1.0
Applied providers: ['CUDAExecutionProvider', 'CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}, 'CUDAExecutionProvider': {'device_id': '0', 'gpu_mem_limit': '18446744073709551615', 'gpu_external_alloc': '0', 'gpu_external_free': '0', 'gpu_external_empty_cache': '0', 'cudnn_conv_algo_search': 'EXHAUSTIVE', 'cudnn_conv1d_pad_to_nc1d': '0', 'arena_extend_strategy': 'kNextPowerOfTwo', 'do_copy_in_default_stream': '1', 'enable_cuda_graph': '0', 'cudnn_conv_use_max_workspace': '1', 'tunable_op_enable': '0', 'enable_skip_layer_norm_strict_mode': '0', 'tunable_op_tuning_enable': '0'}}
find model: /root/.insightface/models/buffalo_l/2d106det.onnx landmark_2d_106 ['None', 3, 192, 192] 0.0 1.0
Applied providers: ['CUDAExecutionProvider', 'CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}, 'CUDAExecutionProvider': {'device_id': '0', 'gpu_mem_limit': '18446744073709551615', 'gpu_external_alloc': '0', 'gpu_external_free': '0', 'gpu_external_empty_cache': '0', 'cudnn_conv_algo_search': 'EXHAUSTIVE', 'cudnn_conv1d_pad_to_nc1d': '0', 'arena_extend_strategy': 'kNextPowerOfTwo', 'do_copy_in_default_stream': '1', 'enable_cuda_graph': '0', 'cudnn_conv_use_max_workspace': '1', 'tunable_op_enable': '0', 'enable_skip_layer_norm_strict_mode': '0', 'tunable_op_tuning_enable': '0'}}
find model: /root/.insightface/models/buffalo_l/det_10g.onnx detection [1, 3, '?', '?'] 127.5 128.0
Applied providers: ['CUDAExecutionProvider', 'CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}, 'CUDAExecutionProvider': {'device_id': '0', 'gpu_mem_limit': '18446744073709551615', 'gpu_external_alloc': '0', 'gpu_external_free': '0', 'gpu_external_empty_cache': '0', 'cudnn_conv_algo_search': 'EXHAUSTIVE', 'cudnn_conv1d_pad_to_nc1d': '0', 'arena_extend_strategy': 'kNextPowerOfTwo', 'do_copy_in_default_stream': '1', 'enable_cuda_graph': '0', 'cudnn_conv_use_max_workspace': '1', 'tunable_op_enable': '0', 'enable_skip_layer_norm_strict_mode': '0', 'tunable_op_tuning_enable': '0'}}
find model: /root/.insightface/models/buffalo_l/genderage.onnx genderage ['None', 3, 96, 96] 0.0 1.0
Applied providers: ['CUDAExecutionProvider', 'CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}, 'CUDAExecutionProvider': {'device_id': '0', 'gpu_mem_limit': '18446744073709551615', 'gpu_external_alloc': '0', 'gpu_external_free': '0', 'gpu_external_empty_cache': '0', 'cudnn_conv_algo_search': 'EXHAUSTIVE', 'cudnn_conv1d_pad_to_nc1d': '0', 'arena_extend_strategy': 'kNextPowerOfTwo', 'do_copy_in_default_stream': '1', 'enable_cuda_graph': '0', 'cudnn_conv_use_max_workspace': '1', 'tunable_op_enable': '0', 'enable_skip_layer_norm_strict_mode': '0', 'tunable_op_tuning_enable': '0'}}
find model: /root/.insightface/models/buffalo_l/w600k_r50.onnx recognition ['None', 3, 112, 112] 127.5 127.5
set det-size: (640, 640)
/usr/local/lib/python3.10/dist-packages/insightface/utils/transform.py:68: FutureWarning: rcond parameter will change to the default of machine precision times max(M, N) where M and N are the input matrix dimensions.
To use the future default and silence this warning we advise to pass rcond=None, to keep using the old, explicitly pass rcond=-1.
P = np.linalg.lstsq(X_homo, Y)[0].T # Affine matrix. 3 x 4
2023-06-10 17:09:52.340637: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:996] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355
2023-06-10 17:09:52.340949: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:996] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355
2023-06-10 17:09:52.341169: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:996] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355
2023-06-10 17:09:52.341565: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:996] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355
2023-06-10 17:09:52.341777: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:996] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355
2023-06-10 17:09:52.341965: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:996] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355
2023-06-10 17:09:52.342163: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1635] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 11602 MB memory: -> device: 0, name: Tesla T4, pci bus id: 0000:00:04.0, compute capability: 7.5
0% 0/323 [00:00<?, ?it/s]2023-06-10 17:09:55.701166: I tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:424] Loaded cuDNN version 8700
99% 320/323 [00:04<00:00, 68.51it/s]

Then it just stops, and I do not know how to fix this. Can someone help?

Missing LICENSE

I see you have no LICENSE file for this project. The default is copyright.

I would suggest releasing the code under the GPL-3.0-or-later or AGPL-3.0-or-later license so that others are encouraged to contribute changes back to your project.

CC-BY-SA-4.0 might also be appropriate for this repository.

I get an error with onnxruntime

Processing: 1%|▌ | 3/342 [00:02<05:42, 1.01s/frame]2023-06-07 11:20:47.2553254 [E:onnxruntime:, sequential_executor.cc:514 onnxruntime::ExecuteKernel] Non-zero status code returned while running FusedConv node. Name:'conv_7_conv2d' Status Message: D:\a_work\1\s\onnxruntime\core\framework\bfc_arena.cc:368 onnxruntime::BFCArena::AllocateRawInternal Failed to allocate memory for requested buffer of size 134217728

[ONNXRuntimeError] : 1 : FAIL : D:\a_work\1\s\onnxruntime\core\providers\cuda\cuda_call.cc:121 onnxruntime::CudaCall D:\a_work\1\s\onnxruntime\core\providers\cuda\cuda_call.cc:114 onnxruntime::CudaCall CUBLAS failure 3: CUBLAS_STATUS_ALLOC_FAILED ; GPU=0 ; hostname=DESKTOP-MK381CK ; file=D:\a_work\1\s\onnxruntime\core\providers\cuda\cuda_stream_handle.cc ; line=50 ; expr=cublasCreate(&cublas_handle_);

Please help me with this!

G:\Deepfake\roop\venv\lib\site-packages\insightface\utils\transform.py:68: FutureWarning: rcond parameter will change to the default of machine precision times max(M, N) where M and N are the input matrix dimensions.
To use the future default and silence this warning we advise to pass rcond=None, to keep using the old, explicitly pass rcond=-1.
P = np.linalg.lstsq(X_homo, Y)[0].T # Affine matrix. 3 x 4
93%|█████████████████████████████████████████████████████████████████████████▋ | 261/280 [00:02<00:00, 203.03it/s]
Exception in Tkinter callback
Traceback (most recent call last):
File "C:\ProgramData\anaconda3\envs\Deepfake\lib\tkinter\__init__.py", line 1921, in __call__
return self.func(*args)
File "G:\Deepfake\roop\run.py", line 266, in <lambda>
start_button = tk.Button(window, text="Start", bg="#f1c40f", relief="flat", borderwidth=0, highlightthickness=0, command=lambda: [save_file(), start()])
File "G:\Deepfake\roop\run.py", line 194, in start
seconds, probabilities = predict_video_frames(video_path=args['target_path'], frame_interval=100)
File "G:\Deepfake\roop\venv\lib\site-packages\opennsfw2\_inference.py", line 178, in predict_video_frames
cv2.destroyAllWindows() # pylint: disable=no-member
cv2.error: OpenCV(4.7.0) D:\a\opencv-python\opencv-python\opencv\modules\highgui\src\window.cpp:1266: error: (-2:Unspecified error) The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Cocoa support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script in function 'cvDestroyAllWindows'

100%|███████████████████████████████████████████████████████████████████████████████| 280/280 [00:20<00:00, 203.03it/s]
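
A hedged sketch: cv2.destroyAllWindows failing with "The function is not implemented" typically means a GUI-less OpenCV build (e.g. opencv-python-headless) ended up in the venv, so the highgui functions roop calls are stubs. Replacing it with the standard wheel (pip uninstall opencv-python-headless opencv-python -y, then pip install opencv-python) normally restores them; the snippet below reproduces the check.

import cv2

print(cv2.__version__)
cv2.destroyAllWindows()  # raises the same highgui error on a GUI-less build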

This problem has been troubling me since this week. Have there been any new updates?

2023-06-10 20:20:37.394766: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /home/kohya_ss/train_network.py:17 in │
│ │
│ 14 from accelerate.utils import set_seed │
│ 15 from diffusers import DDPMScheduler │
│ 16 │
│ ❱ 17 import library.train_util as train_util │
│ 18 from library.train_util import ( │
│ 19 │ DreamBoothDataset, │
│ 20 ) │
│ │
│ /home/kohya_ss/library/train_util.py:56 in │
│ │
│ 53 │ KDPM2AncestralDiscreteScheduler, │
│ 54 ) │
│ 55 from huggingface_hub import hf_hub_download │
│ ❱ 56 import albumentations as albu │
│ 57 import numpy as np │
│ 58 from PIL import Image │
│ 59 import cv2 │
│ │
│ /home/kohya_ss/venv/lib/python3.10/site-packages/albumentations/__init__.py:5 in │
│ │
│ 2 │
│ 3 __version__ = "1.3.0" │
│ 4 │
│ ❱ 5 from .augmentations import * │
│ 6 from .core.composition import * │
│ 7 from .core.serialization import * │
│ 8 from .core.transforms_interface import * │
│ │
│ /home/kohya_ss/venv/lib/python3.10/site-packages/albumentations/augmentations/__init__.py:2 in │
│ │
│ │
│ 1 # Common classes │
│ ❱ 2 from .blur.functional import * │
│ 3 from .blur.transforms import * │
│ 4 from .crops.functional import * │
│ 5 from .crops.transforms import * │
│ │
│ /home/kohya_ss/venv/lib/python3.10/site-packages/albumentations/augmentations/blur/__init__.py:1 │
│ in │
│ │
│ ❱ 1 from .functional import * │
│ 2 from .transforms import * │
│ 3 │
│ │
│ /home/kohya_ss/venv/lib/python3.10/site-packages/albumentations/augmentations/blur/functional.py │
│ :5 in │
│ │
│ 2 from math import ceil │
│ 3 from typing import Sequence, Union │
│ 4 │
│ ❱ 5 import cv2 │
│ 6 import numpy as np │
│ 7 │
│ 8 from albumentations.augmentations.functional import convolve │
│ │
│ /home/kohya_ss/venv/lib/python3.10/site-packages/cv2/__init__.py:181 in │
│ │
│ 178 │ if DEBUG: print('OpenCV loader: DONE') │
│ 179 │
│ 180 │
│ ❱ 181 bootstrap() │
│ 182 │
│ │
│ /home/kohya_ss/venv/lib/python3.10/site-packages/cv2/__init__.py:153 in bootstrap │
│ │
│ 150 │ │
│ 151 │ py_module = sys.modules.pop("cv2") │
│ 152 │ │
│ ❱ 153 │ native_module = importlib.import_module("cv2") │
│ 154 │ │
│ 155 │ sys.modules["cv2"] = py_module │
│ 156 │ setattr(py_module, "_native", native_module) │
│ │
│ /opt/conda/lib/python3.10/importlib/__init__.py:126 in import_module │
│ │
│ 123 │ │ │ if character != '.': │
│ 124 │ │ │ │ break │
│ 125 │ │ │ level += 1 │
│ ❱ 126 │ return _bootstrap._gcd_import(name[level:], package, level) │
│ 127 │
│ 128 │
│ 129 _RELOADING = {} │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
ImportError: libGL.so.1: cannot open shared object file: No such file or directory
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /home/kohya_ss/venv/bin/accelerate:8 in │
│ │
│ 5 from accelerate.commands.accelerate_cli import main │
│ 6 if __name__ == '__main__': │
│ 7 │ sys.argv[0] = re.sub(r'(-script.pyw|.exe)?$', '', sys.argv[0]) │
│ ❱ 8 │ sys.exit(main()) │
│ 9 │
│ │
│ /home/kohya_ss/venv/lib/python3.10/site-packages/accelerate/commands/accelerate_cli.py:45 in │
│ main │
│ │
│ 42 │ │ exit(1) │
│ 43 │ │
│ 44 │ # Run │
│ ❱ 45 │ args.func(args) │
│ 46 │
│ 47 │
│ 48 if __name__ == "__main__": │
│ │
│ /home/kohya_ss/venv/lib/python3.10/site-packages/accelerate/commands/launch.py:918 in │
│ launch_command │
│ │
│ 915 │ elif defaults is not None and defaults.compute_environment == ComputeEnvironment.AMA │
│ 916 │ │ sagemaker_launcher(defaults, args) │
│ 917 │ else: │
│ ❱ 918 │ │ simple_launcher(args) │
│ 919 │
│ 920 │
│ 921 def main(): │
│ │
│ /home/kohya_ss/venv/lib/python3.10/site-packages/accelerate/commands/launch.py:580 in │
│ simple_launcher │
│ │
│ 577 │ process.wait() │
│ 578 │ if process.returncode != 0: │
│ 579 │ │ if not args.quiet: │
│ ❱ 580 │ │ │ raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd) │
│ 581 │ │ else: │
│ 582 │ │ │ sys.exit(1) │
│ 583 │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
CalledProcessError: Command '['/home/kohya_ss/venv/bin/python', 'train_network.py', '--enable_bucket',
'--pretrained_model_name_or_path=runwayml/stable-diffusion-v1-5', '--train_data_dir=/home/chimu/img',
'--reg_data_dir=/home/chimu/reg', '--resolution=768,768', '--output_dir=/home/chimu/model', '--logging_dir=/home/chimu/log',
'--network_alpha=1', '--save_model_as=safetensors', '--network_module=networks.lora', '--text_encoder_lr=5e-05',
'--unet_lr=0.0001', '--network_dim=128', '--output_name=test1', '--lr_scheduler_num_cycles=10', '--learning_rate=0.0001',
'--lr_scheduler=cosine', '--train_batch_size=1', '--max_train_steps=30400', '--save_every_n_epochs=1', '--mixed_precision=fp16',
'--save_precision=fp16', '--seed=1234', '--caption_extension=.txt', '--cache_latents', '--optimizer_type=AdamW8bit',
'--max_data_loader_n_workers=0', '--bucket_reso_steps=64', '--xformers', '--bucket_no_upscale']' returned non-zero exit status
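
A hedged note on the root cause shown above: the ImportError for libGL.so.1 is a missing system library required by the full OpenCV wheel, not a Python-level bug. On a Debian/Ubuntu-based container either installing the system package (apt-get update && apt-get install -y libgl1) or switching to the headless wheel (pip install opencv-python-headless) usually fixes it; the import below succeeds once either is done.

import cv2  # fails with "libGL.so.1: cannot open shared object file" until the fix is applied
print(cv2.__version__)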

[WARNING] Please select an image containing a face.

I have added a proper image and video, but still get this error.
I am using the Colab notebook, btw.

2023-06-06 05:48:31.242736: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-06-06 05:48:32.168744: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
2023-06-06 05:48:35.492152: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:996] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355
2023-06-06 05:48:35.494989: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:996] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355
2023-06-06 05:48:35.496282: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:996] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355

[WARNING] Please select an image containing a face.
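
A hedged way to verify this, assuming the same Colab runtime: the warning is emitted when roop's face detector (insightface's buffalo_l models, downloaded in the log above) finds no face in the selected source image. Running the detector directly on the same file shows whether that is what is happening; the path below is a placeholder for the image passed as the source face.

import cv2
from insightface.app import FaceAnalysis

app = FaceAnalysis(name="buffalo_l")
app.prepare(ctx_id=0, det_size=(640, 640))

img = cv2.imread("source.png")  # hypothetical path; substitute the actual source image
faces = app.get(img)
print(len(faces))  # 0 reproduces the "Please select an image containing a face" warning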

Can't Swap face in video.

Hello, I have been using Colab for the past few months and it was working smoothly until today. When I want to change the face, it gives me this error. Can you fix this?

/content/roop
Traceback (most recent call last):
File "/content/roop/run.py", line 3, in <module>
from roop import core
File "/content/roop/roop/core.py", line 20, in <module>
import roop.ui as ui
File "/content/roop/roop/ui.py", line 15, in <module>
from roop.predictor import predict_frame, clear_predictor
File "/content/roop/roop/predictor.py", line 3, in <module>
import opennsfw2
File "/usr/local/lib/python3.10/dist-packages/opennsfw2/__init__.py", line 4, in <module>
from ._inference import Aggregation
File "/usr/local/lib/python3.10/dist-packages/opennsfw2/_inference.py", line 12, in <module>
from keras import KerasTensor, Model # type: ignore
ImportError: cannot import name 'KerasTensor' from 'keras' (/usr/local/lib/python3.10/dist-packages/keras/__init__.py)

Problem with image processing

Hello, I have been using Colab for several months without any issues processing images (just face swap), and it had been working smoothly until today, when it stopped allowing me to process the image. This is the code that appears:

/content/roop
Traceback (most recent call last):
File "/content/roop/run.py", line 3, in <module>
from roop import core
File "/content/roop/roop/core.py", line 20, in <module>
import roop.ui as ui
File "/content/roop/roop/ui.py", line 15, in <module>
from roop.predictor import predict_frame, clear_predictor
File "/content/roop/roop/predictor.py", line 3, in <module>
import opennsfw2
File "/usr/local/lib/python3.10/dist-packages/opennsfw2/__init__.py", line 4, in <module>
from ._inference import Aggregation
File "/usr/local/lib/python3.10/dist-packages/opennsfw2/_inference.py", line 16, in <module>
from ._model import make_open_nsfw_model
File "/usr/local/lib/python3.10/dist-packages/opennsfw2/_model.py", line 12, in <module>
from tensorflow.keras import layers # type: ignore # pylint: disable=import-error
File "/usr/local/lib/python3.10/dist-packages/keras/api/_v2/keras/__init__.py", line 3, in <module>
from keras.api._v2.keras import __internal__
File "/usr/local/lib/python3.10/dist-packages/keras/api/_v2/keras/__init__.py", line 3, in <module>
from keras.api._v2.keras import __internal__
File "/usr/local/lib/python3.10/dist-packages/keras/api/_v2/keras/__internal__/__init__.py", line 3, in <module>
from keras.api._v2.keras.__internal__ import backend
File "/usr/local/lib/python3.10/dist-packages/keras/api/_v2/keras/__internal__/backend/__init__.py", line 3, in <module>
from keras.src.backend import _initialize_variables as initialize_variables
ImportError: cannot import name '_initialize_variables' from 'keras.src.backend' (/usr/local/lib/python3.10/dist-packages/keras/src/backend/__init__.py)
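
A hedged observation covering this and the previous issue: both ImportErrors come from a keras package whose version no longer matches the TensorFlow preinstalled in the Colab runtime, which breaks opennsfw2's imports. Checking the two versions and realigning them (for example pip install "keras==2.12.*" if the runtime ships TensorFlow 2.12; the exact pin depends on the runtime) is a common workaround.

import tensorflow as tf
import keras

print(tf.__version__, keras.__version__)  # the major.minor versions should match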
