
disco_diffusion_local's Introduction

Disco Diffusion v5 Turbo, with 3D animation, running locally.

Getting the latest version of Disco Diffusion (at the time of writing, v5 with Turbo and 3D animation) to work locally instead of on Colab, including how I run it on Windows despite some Linux-only dependencies ;). Now includes an experimental batch mode to create as many videos as you want, with different prompts, in a single run.

If you run into any issues, feel free to open an issue and I’ll do my best to help troubleshoot. Be as specific as possible: for example, if you get an error message at any point, include it in the issue, along with your operating system and computer specs.

Examples

How to run this on Windows

The same steps should work on Linux, starting from step 3.

Requirements:

  • Nvidia GPU with at least 8GB VRAM. >12GB is recommended.
  • Windows 10 or 11

Step 1: Update Windows

Windows 11: Windows 11 should work, but it doesn't hurt to update to the latest version before continuing :) (note, though, that I created and tested this on Windows 10).

Windows 10: You must be running at least feature update 21H2 for your GPU to work. To check which version you're running, open cmd and type:

winver

If you’re on 21H2 or later, you’re good to go. If not, try updating Windows by typing “check for updates” in the Start menu and using the built-in tool. For me personally, the required update was not showing, but I was able to install it using the Windows 10 Update Assistant.

Step 2: install WSL2

The way we’ll use Linux-only dependencies is by installing the latest version of the Windows Subsystem for Linux (WSL2). This runs a virtual machine-like Ubuntu installation on Windows. However, Microsoft has implemented this at a very low level, meaning almost no performance hit and GPU support!

For the latest instructions on this, follow the official Microsoft guide.

Briefly, just open a Windows Powershell as administrator, and type:

wsl --install

It might request a restart, and when you restart your computer you’ll have an app in the Start menu or taskbar called “Ubuntu”!
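
If you want to confirm the install worked and that Ubuntu is running under WSL 2 rather than WSL 1, you can list your distributions from the same PowerShell window:

wsl -l -v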

Step 3: install Anaconda on Ubuntu

We’ll need to install Anaconda inside our Ubuntu environment to manage packages easily. Open your new Ubuntu app (and fix any errors that come up on first launch; I had a few, but they were either self-explanatory or easily fixed with some quick Googling). Now download and run the Linux Anaconda installer as follows. If you’re following this much later than March 2022, you can replace the URL below with the latest version from the Anaconda website.

mkdir Downloads 
cd Downloads 
wget https://repo.anaconda.com/archive/Anaconda3-2021.11-Linux-x86_64.sh
bash Anaconda3-2021.11-Linux-x86_64.sh

Follow the on-screen instructions, i.e. type yes when asked, and let it run conda init for you when prompted.

Close your Ubuntu terminal and open it again.

Now type and run

conda --help

If it gives you a long list of conda options, Anaconda is successfully installed within Ubuntu!

Step 4: creating our environment

We’ll now create and activate a conda environment (inside Ubuntu) with all the appropriate dependencies.

conda create -n pytorch_110
conda activate pytorch_110

Whenever you restart your computer, or close and reopen Ubuntu, you will have to run that second command (conda activate pytorch_110). Now install the correct version of pytorch:

conda install pytorch==1.10 torchvision torchaudio cudatoolkit==11.1 -c pytorch -c conda-forge

Type y whenever prompted.

Finding the above took a lot of trial and error. The difficulty was finding a pytorch and cudatoolkit combination which works with pytorch3d (required later). The above worked for me.

Now install some other dependencies:

conda install jupyter pandas requests matplotlib
conda install opencv -c conda-forge
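
The notebook’s own install cells normally handle pytorch3d, but if you ever need to install it into this environment by hand, the pytorch3d project publishes a conda package (per its install documentation); treat this as an optional fallback rather than a required step:

conda install pytorch3d -c pytorch3d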

Step 5, Option 1: Similar to Colab, Easy

Option 1 for how to actually run the code and get images / video. It involves downloading a .ipynb that is lightly modified from the Colab notebook, then editing and running its cells inside the notebook environment.

To use Option 1:

We’ll be working within a Jupyter notebook version of the Colab notebook. (I’m currently working on a cleaner interface; star and watch the repo to see when this goes live.)

Download the jupyter notebook in this repo. If you know how, clone the repo directly into your Ubuntu distribution. To make this guide as easy to follow as possible, I’ll also show an easier way.

In your Ubuntu terminal, type:

explorer.exe .

This will open your Ubuntu directory in Windows Explorer! Find a location you want to download the notebook to, maybe create a new folder for it.

On my GitHub repository, click “Code” then “Download ZIP”. Extract the zip, and copy the .ipynb file to the desired folder in Ubuntu. If you typed explorer.exe . earlier, you’ll have an Ubuntu folder open in Explorer, so you can drag and drop into this folder.

In your Ubuntu terminal, run:

jupyter notebook

You might notice that this doesn’t automatically open jupyter in your browser. That’s okay! Just look for the URL starting with localhost, copy this, and paste it into your browser on Windows.

This should open jupyter in your browser! Now navigate to the folder where you have placed your jupyter notebook, and open it. Run the cells, one by one. They should install further required dependencies and download all the models for you. Along the way, you can change any settings you would like. One of the last cells asks for “text_prompts”, which you can specify to create whatever you wish!
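
For reference, text_prompts follows the same format as the original Colab notebook: a Python dict mapping a starting frame number to a list of prompt strings, each optionally weighted with a trailing :weight. The prompt text below is just an illustrative placeholder:

text_prompts = {
    0: ["A beautiful painting of a lighthouse on a stormy coast, trending on artstation:2",
        "soft pastel colour scheme:1"],
}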

Step 5, Option 2: Batch mode from the command line, creating multiple videos in one run (more advanced, still experimental)

Option 2 for how to actually run the code and get images / video. This involves setting up a folder of settings files, which the notebook works through one by one. This lets you specify prompts for as many different videos as you like and create them all with a single run of the notebook.

Some options must be specified once, and will be used for all items in the queue. Set these in "queue/master_settings.txt":

  • diffusion_model
  • use_secondary_model
  • ViTB32
  • ViTB16
  • ViTL14
  • RN101
  • RN50
  • RN50x4
  • RN50x16
  • RN50x64
  • width
  • height
  • init_image
  • translation_x
  • translation_y
  • translation_z
  • rotation_3d_x
  • rotation_3d_y
  • rotation_3d_z
  • turbo_mode

Options that can be specified for each video are as follows. They must be set in "queue/queue_1.txt", "queue/queue_2.txt", and so on (see the example sketch after this list). Files can be created while the script is running, without interruption!

  • text_prompts
  • image_prompts
  • max_frames
  • steps
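
The exact syntax these files expect isn't documented here, so treat the following as a purely hypothetical sketch and check the example files shipped in the queue folder; a queue_1.txt would set just the per-video options listed above, something like:

text_prompts = {0: ["A misty forest at dawn, trending on artstation:2"]}
image_prompts = {}
max_frames = 200
steps = 150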

Note that this is currently experimental, and intended for creating a series of videos (not images). You are welcome to submit issues for bugs / feature requests, or even your own pull requests if you want to improve this ;)

Also, for my use case, keeping all those settings fixed across the queue works fine. If there are settings you would like to be able to change between items in the queue that you currently can't, feel free to open an issue or pull request.

To use Option 2:

Clone the repo into your Ubuntu installation. If you don't know how to do this, click "Code" and "Download ZIP" on this repo, then copy the entire repo into a folder in your Ubuntu environment. This is usually somewhere like "\\wsl$\Ubuntu\home\USERNAME". You can access it easily by typing explorer.exe . in your Ubuntu window.
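
If you're comfortable with git, a quicker route is to clone directly from the Ubuntu terminal; the placeholder below stands in for whatever clone URL the "Code" button gives you:

cd ~
git clone <clone URL copied from the "Code" button>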

One of the folders you copied should be called "queue". Open this, and specify the settings you want in "master_settings". Then specify the prompts you want for each video in separate files in this same folder. They should be named "queue_1.txt" onwards, without any gaps.

You can process the queue files from the command line: simply navigate to the cloned repository and type:

 jupyter nbconvert --execute --to notebook --inplace Disco_Diffusion_v5_2_w_VR_Mode_batch_mode.ipynb

The above will run all cells in the jupyter notebook from the command line. You can also run them in jupyter if you prefer. See option 1 for instructions on how to run the jupyter notebook if you'd like.

That should be it! This should start creating the videos in your queue, one by one.

FAQ

  1. I'm getting CUDA errors.

    RuntimeError: CUDA error: unknown error CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1.

If you're getting an error like the above, I've only ever seen it happen when you're using too much VRAM. Reduce usage by doing one or all of the following:

  • In 2. Diffusion and CLIP model settings, try disabling all models except for 1 (and keep “use secondary model”)
  • In 2. Diffusion and CLIP model settings, switch to “256x256_diffusion_uncond”.
  • In settings reduce the resolution drastically (to 128x128) and see if that helps.

If this works, slowly add back models and increase resolution until you find out where the limit is for your GPU.
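
If you want to sanity-check how much VRAM you actually have available from within the notebook, a quick snippet like the following (plain PyTorch, nothing repo-specific) can help you judge how far to push models and resolution:

import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    total_gb = props.total_memory / 1024**3                    # total VRAM on GPU 0
    allocated_gb = torch.cuda.memory_allocated(0) / 1024**3    # currently allocated by PyTorch
    print(f"{props.name}: {total_gb:.1f} GB total, {allocated_gb:.1f} GB allocated")
else:
    print("CUDA not available - check your WSL2 / driver setup")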

disco_diffusion_local's People

Contributors

hclivess, mohamadzeina


disco_diffusion_local's Issues

No Module called Pytorch3D

I cloned the repo to my PC and started a Jupyter notebook, though I didn't install Jupyter according to the instructions.

I installed Jupyter using pamac (on Arch) and launched it with jupyter notebook; in the web UI, I navigated to the folder, selected the notebook file, and clicked Cell > Run All.

It went ahead and installed dependencies, but in 1.5 Define necessary functions, I get the following error:

---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
Input In [8], in <cell line: 5>()
1 #@title 1.5 Define necessary functions
2
3 # https://gist.github.com/adefossez/0646dbe9ed4005480a2407c62aac8869
----> 5 import pytorch3d.transforms as p3dT
6 import disco_xform_utils as dxf
8 def interp(t):

ModuleNotFoundError: No module named 'pytorch3d'

I tried to manually install pytorch3d, but it didn't work.

Is there some way I can fix this?

I'm doing a fresh installation using your instructions to see if my installation was the cause of the error.

cudatoolkit 11.1 doesn't exist?

On running
conda install pytorch==1.10 torchvision torchaudio cudatoolkit==11.1 -c pytorch -c conda-forge

I get the following error
PackagesNotFoundError: The following packages are not available from current channels:
- cudatoolkit==11.1

In the instructions, it says "Finding the above took a lot of trial and error. The difficulty was finding a pytorch and cudatoolkit combination which works with pytorch3d (required later). The above worked for me."

Looking it up, I can't even find a cudatoolkit 11.1. There's an 11.3 and an 11.7, neither of which seem to end up working in the notebook. Any ideas?

Image.LANCZOS deprecation & Missing regex package

Deprecation Notice

When running Disco_Diffusion_v5_2_[w_VR_Mode].ipynb I received deprecation warnings during the diffuse step:

/tmp/ipykernel_11431/XXXXXXXXX.py:513: DeprecationWarning: 
LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead.
  init = init.resize((args.side_x, args.side_y), Image.LANCZOS)

Missing Package

When I followed the README.md file, I found that I was missing the regex package. It was easily solved with conda install regex, but I'd suggest modifying the instructions to include it:

Now install some other dependencies:

  conda install jupyter pandas requests matplotlib regex
  conda install opencv -c conda-forge

Amazing job with this repo... it's the first time I've been able to successfully run DD locally on Windows.... so thanks very much for that!

Does it work with Radeon hardware?

I'm trying to use it locally by following your guide, but I receive this:
(screenshot attached)

Then nothing works after this point:
(screenshot attached)

Does it work with Radeon hardware? Is there any workaround for these issues?

This can't be run on MacOS?

I'm stuck at installing cudatoolkit 11.1; I found on pytorch.org that CUDA is not supported on macOS. What should I do next?

Cannot download secondary_model_imagenet_2.pth - file doesn't exist anymore

Hi, under the step 2. Diffusion and CLIP model settings, the notebook tries to download the file secondary_model_imagenet_2.pth, as I understand it. This fails with a 404. Opening the download URL https://v-diffusion.s3.us-west-2.amazonaws.com/secondary_model_imagenet_2.pth in a browser, AWS reports that the bucket "v-diffusion" no longer exists:
(screenshot attached)

In the notebook, the error message is

--2022-05-26 11:47:08--  https://v-diffusion.s3.us-west-2.amazonaws.com/512x512_diffusion_uncond_finetune_008100.pt
Resolving v-diffusion.s3.us-west-2.amazonaws.com (v-diffusion.s3.us-west-2.amazonaws.com)... 52.218.229.9
Connecting to v-diffusion.s3.us-west-2.amazonaws.com (v-diffusion.s3.us-west-2.amazonaws.com)|52.218.229.9|:443... connected.
HTTP request sent, awaiting response... 404 Not Found
2022-05-26 11:47:09 ERROR 404: Not Found.

--2022-05-26 11:47:09--  https://v-diffusion.s3.us-west-2.amazonaws.com/secondary_model_imagenet_2.pth
Resolving v-diffusion.s3.us-west-2.amazonaws.com (v-diffusion.s3.us-west-2.amazonaws.com)... 52.218.229.9
Connecting to v-diffusion.s3.us-west-2.amazonaws.com (v-diffusion.s3.us-west-2.amazonaws.com)|52.218.229.9|:443... connected.
HTTP request sent, awaiting response... 404 Not Found
2022-05-26 11:47:10 ERROR 404: Not Found.

CUDA out of memory

I'm getting the error below on the final Diffuse step. I have a 3080 12 GB and don't think this should be an issue. Is something installed incorrectly, or is a default setting way too high? I've resolved all errors on the previous steps and everything runs through fine, and I've tried messing with a few settings to no avail. Thanks!

RuntimeError: CUDA out of memory. Tried to allocate 52.00 MiB (GPU 0; 11.76 GiB total capacity; 9.57 GiB already allocated; 47.50 MiB free; 9.95 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
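
Alongside the model/resolution advice in the FAQ above, the error message itself points at one more thing to try: setting PYTORCH_CUDA_ALLOC_CONF before launching Jupyter to reduce allocator fragmentation. The 128 value here is just a common starting point, not a recommendation from this repo:

# set in the same Ubuntu shell you launch Jupyter from; 128 MiB is an arbitrary starting value
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
jupyter notebook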

ModuleNotFoundError:No module named 'py3d_tools'

ModuleNotFoundError Traceback (most recent call last)
Input In [8], in <cell line: 6>()
1 #@title 1.5 Define necessary functions
2
3 # https://gist.github.com/adefossez/0646dbe9ed4005480a2407c62aac8869
5 import pytorch3d.transforms as p3dT
----> 6 import disco_xform_utils as dxf
8 def interp(t):
9 return 3 * t**2 - 2 * t ** 3

File ~/Project/disco_xform_utils.py:2, in
1 import torch, torchvision
----> 2 import py3d_tools as p3d
3 import midas_utils
4 from PIL import Image

ModuleNotFoundError: No module named 'py3d_tools'

How to make video

Hello, not an issue per se, but I was wondering: since the repo says the video-making function doesn't work, what's the best way to convert the output images into a video programmatically, say at 30 to 60 fps? Thanks!
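
This isn't a built-in feature of the repo, but a common approach is to stitch the output frames together with ffmpeg once the run finishes. The frame filename pattern below is a placeholder; adjust it (and the framerate) to match the files in your output folder:

# frame_%04d.png is a hypothetical name pattern; point it at your actual output frames
ffmpeg -framerate 30 -i frame_%04d.png -c:v libx264 -pix_fmt yuv420p output.mp4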

disco_xform_utils.py failed to import InferenceHelper. Please ensure that AdaBins directory is in the path (i.e. via sys.path.append('./AdaBins') or other means).

disco_xform_utils.py failed to import InferenceHelper. Please ensure that AdaBins directory is in the path (i.e. via sys.path.append('./AdaBins') or other means).

ImportError Traceback (most recent call last)
Input In [4], in <cell line: 125>()
133 wget("https://cloudflare-ipfs.com/ipfs/Qmd2mMnDLWePKmgfS8m6ntAg4nhV5VkUyAydYBp8cWWeB7/AdaBins_nyu.pt", f'{PROJECT_DIR}/pretrained')
134 sys.path.append(f'{PROJECT_DIR}/AdaBins')
--> 135 from infer import InferenceHelper
136 MAX_ADABINS_AREA = 500000
138 import torch

ImportError: cannot import name 'InferenceHelper' from 'infer' (/root/anaconda3/lib/python3.9/site-packages/infer/__init__.py)

I have tried uninstalling and reinstalling infer but nothing works

EOFError: Ran out of input

I almost got DD to work on my machine by following the WSL guide, installing modules and fixing errors, but now I'm stuck when running cell 4. Diffuse!

I should mention that this is the Jupyter notebook browser version, not the batch version.

Starting Run: TimeToDiscoTurbo3(0) at frame 0
Prepping model...
./model/512x512_diffusion_uncond_finetune_008100.pt

---------------------------------------------------------------------------
EOFError                                  Traceback (most recent call last)
Input In [15], in <cell line: 152>()
    150 diffusion_model_path = f'{model_path}/{diffusion_model}.pt'
    151 print(diffusion_model_path)
--> 152 model.load_state_dict(torch.load(diffusion_model_path, map_location='cpu'))
    153 model.requires_grad_(False).eval().to(device)
    154 for name, param in model.named_parameters():

File ~/anaconda3/envs/pytorch_110/lib/python3.9/site-packages/torch/serialization.py:608, in load(f, map_location, pickle_module, **pickle_load_args)
    606             return torch.jit.load(opened_file)
    607         return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
--> 608 return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)

File ~/anaconda3/envs/pytorch_110/lib/python3.9/site-packages/torch/serialization.py:777, in _legacy_load(f, map_location, pickle_module, **pickle_load_args)
    771 if not hasattr(f, 'readinto') and (3, 8, 0) <= sys.version_info < (3, 8, 2):
    772     raise RuntimeError(
    773         "torch.load does not work with file-like objects that do not implement readinto on Python 3.8.0 and 3.8.1. "
    774         f"Received object of type \"{type(f)}\". Please update to Python 3.8.2 or newer to restore this "
    775         "functionality.")
--> 777 magic_number = pickle_module.load(f, **pickle_load_args)
    778 if magic_number != MAGIC_NUMBER:
    779     raise RuntimeError("Invalid magic number; corrupt file?")

EOFError: Ran out of input

Nvidia-smi not found

FileNotFoundError: [Errno 2] No such file or directory: 'nvidia-smi' when running notebooks

Nvidia required as a channel on Anaconda Install

I needed to include -c nvidia to properly install cudatoolkit. It may be good to update the instructions. Full command:

conda install pytorch torchvision torchaudio cudatoolkit=11.1 -c pytorch -c nvidia -c conda-forge

No Write Permissions

After opening jupyter notebook and using either the included v5 or the most recent v5.2 .ipynb file, running some of the cells results in an error due to not having the permissions to write files. Jupyter still seems to be able to read files, as I can manually create folders and Jupyter will be able to read them, but it's unable to write folders or files to those folders.

No module named 'midas'

From 1.4 Define Midas functions


ModuleNotFoundError Traceback (most recent call last)
Input In [4], in <cell line: 4>()
1 #@title ### 1.4 Define Midas functions
----> 4 from midas.dpt_depth import DPTDepthModel
5 from midas.midas_net import MidasNet
6 from midas.midas_net_custom import MidasNet_small

ModuleNotFoundError: No module named 'midas'

No clue what I did wrong. Running Win11, 21H2.

Installed everything as directed. Perhaps I goofed somewhere along the way.

Can't run Jupyter Notebook

I'm at the end of Step 5, but when I try to run jupyter notebook it prompts me to install jupyter-core, and that doesn't seem to work.

(screenshot attached)

No module named 'pytorch3d'

I received these errors and the Diffuse step isn't running. I'm not really familiar with this, but I followed all the steps.

(screenshots attached)

Solving Environment Failed

Hello,

Everything worked okay, but after I enter the command conda install opencv -c conda-forge I receive the following message.

Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: /
Found conflicts! Looking for incompatible packages.
This can take several minutes. Press CTRL-C to abort.
failed

Can you please advise?

Thank youu!

NameError: name 'model_config' is not defined

Hi guys,
I am running v5 in the browser and was wondering how to fix this error. The process was running and a beautiful picture appeared, then I woke up to this with no picture saved. I am running Colab Pro. Thanks!

---------------------------------------------------------------------------
NameError Traceback (most recent call last)
in ()
40 timestep_respacing = f'ddim{steps}'
41 diffusion_steps = (1000//steps)*steps if steps < 1000 else steps
---> 42 model_config.update({
43 'timestep_respacing': timestep_respacing,
44 'diffusion_steps': diffusion_steps,

NameError: name 'model_config' is not defined

error model.convert_to_fp16()

Hi, I installed this on Windows 11 :)
Everything went fine; I checked twice to make sure I didn't make a mistake.
But when I try to run all cells, this error message comes up.
I'm using an RTX 3090. I resized the image to 128 x 128 and disabled all models, but it still doesn't work :)


Starting Run: TimeToDisco(0) at frame 0
Prepping model...

RuntimeError Traceback (most recent call last)
/tmp/ipykernel_1480/836853634.py in
166 param.requires_grad_()
167 if model_config['use_fp16']:
--> 168 model.convert_to_fp16()
169
170 gc.collect()

~/disco/guided-diffusion/guided_diffusion/unet.py in convert_to_fp16(self)
620 Convert the torso of the model to float16.
621 """
--> 622 self.input_blocks.apply(convert_module_to_f16)
623 self.middle_block.apply(convert_module_to_f16)
624 self.output_blocks.apply(convert_module_to_f16)

~/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py in apply(self, fn)
665 """
666 for module in self.children():
--> 667 module.apply(fn)
668 fn(self)
669 return self

~/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py in apply(self, fn)
665 """
666 for module in self.children():
--> 667 module.apply(fn)
668 fn(self)
669 return self

~/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py in apply(self, fn)
666 for module in self.children():
667 module.apply(fn)
--> 668 fn(self)
669 return self
670

~/disco/guided-diffusion/guided_diffusion/fp16_util.py in convert_module_to_f16(l)
18 """
19 if isinstance(l, (nn.Conv1d, nn.Conv2d, nn.Conv3d)):
---> 20 l.weight.data = l.weight.data.half()
21 if l.bias is not None:
22 l.bias.data = l.bias.data.half()

RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.

System cannot find nbserver-318-open.html when running jupyter notebook command

Hi there, I'm fairly new to all this. I have set up everything according to the tutorial, but when I run the jupyter notebook command (shown in detail in the attached screenshot), an error message pops up. Is there a way to fix this file error?
(screenshot attached)

Furthermore, when running the cells, would anyone mind explaining the process a bit more in depth? I understand I can change all my settings, but how does rendering work when you run it? And when you run the cells, do you have to reach step 5 before Ubuntu does anything?

Also, is there any way to see your in-progress images like in the original Disco Diffusion, in 50-step intervals, or do you have to wait until the render is complete?

On top of all that, I'm now getting a lot of "HTTP request sent, awaiting response... 404 Not Found" messages. Any fixes?

Thanks, sorry for the lot of questions!

Weird spinning structure render result after initial frame.

I finally got DD to work locally, but one last issue I have is that the output after the initial frame turns into a rotating "+"-shaped structure that lacks any detail.

The initial frame renders great, but once it starts being used as the starting point for the next frames, it looks like this video:

https://gfycat.com/delectablemasculineimperialeagle

  • Disabling the secondary model does not fix the issue.
  • Removing the translation parameters did not fix the issue.

Ubuntu turns black after running 'jupyter notebook'

Hi, my Ubuntu terminal turns black except for a cursor after running the 'jupyter notebook' command. Before it turns black I see 'not a try' or something flash by. I managed to get a screenshot and copy the URL, but it was not valid; the browser shows "localhost refused connection". Any idea what to do?

No module named 'cv2'

Everything went smoothly until this appeared after I ran "Install and import dependencies".

CUDA error: no kernel image is available for execution on the device

I'm getting an error when it comes to prepping the model...
(screenshot attached)

I've tried various nvidia drivers and pytorch versions:
(screenshots attached)

My environment as it stands:

# Name                    Version                   Build  Channel
_libgcc_mutex             0.1                        main
_openmp_mutex             5.1                       1_gnu
absl-py                   1.2.0                    pypi_0    pypi
aiohttp                   3.8.1                    pypi_0    pypi
aiosignal                 1.2.0                    pypi_0    pypi
antlr4-python3-runtime    4.9.3                    pypi_0    pypi
anyio                     3.5.0           py310h06a4308_0
argon2-cffi               21.3.0                   pypi_0    pypi
argon2-cffi-bindings      21.2.0          py310h7f8727e_0
asttokens                 2.0.5                    pypi_0    pypi
async-timeout             4.0.2                    pypi_0    pypi
attrs                     21.4.0                   pypi_0    pypi
babel                     2.9.1                    pypi_0    pypi
backcall                  0.2.0                    pypi_0    pypi
beautifulsoup4            4.11.1          py310h06a4308_0
blas                      1.0                    openblas
bleach                    4.1.0                    pypi_0    pypi
bottleneck                1.3.5           py310ha9d4c09_0
brotli                    1.0.9                h5eee18b_7
brotli-bin                1.0.9                h5eee18b_7
brotlipy                  0.7.0           py310h7f8727e_1002
bzip2                     1.0.8                h7b6447c_0
ca-certificates           2022.07.19           h06a4308_0
cachetools                5.2.0                    pypi_0    pypi
cairo                     1.16.0               h19f5f5c_2
certifi                   2022.6.15       py310h06a4308_0
cffi                      1.15.1          py310h74dc2b5_0
charset-normalizer        2.0.4                    pypi_0    pypi
cryptography              37.0.1          py310h9ce1e76_0
cycler                    0.11.0                   pypi_0    pypi
datetime                  4.5                      pypi_0    pypi
dbus                      1.13.18              hb2f20db_0
debugpy                   1.5.1           py310h295c915_0
decorator                 5.1.1                    pypi_0    pypi
defusedxml                0.7.1                    pypi_0    pypi
eigen                     3.3.7                hd09550d_1
einops                    0.4.1                    pypi_0    pypi
entrypoints               0.4             py310h06a4308_0
executing                 0.8.3                    pypi_0    pypi
expat                     2.4.4                h295c915_0
fastjsonschema            2.15.1                   pypi_0    pypi
ffmpeg                    4.2.2                h20bf706_0
fontconfig                2.13.1               h6c09931_0
fonttools                 4.25.0                   pypi_0    pypi
freetype                  2.11.0               h70c0345_0
frozenlist                1.3.1                    pypi_0    pypi
fsspec                    2022.7.1                 pypi_0    pypi
ftfy                      6.1.1                    pypi_0    pypi
giflib                    5.2.1                h7b6447c_0
glib                      2.69.1               h4ff587b_1
gmp                       6.2.1                h295c915_3
gnutls                    3.6.15               he1e5248_0
google-auth               2.10.0                   pypi_0    pypi
google-auth-oauthlib      0.4.6                    pypi_0    pypi
graphite2                 1.3.14               h295c915_1
grpcio                    1.47.0                   pypi_0    pypi
gst-plugins-base          1.14.0               h8213a91_2
gstreamer                 1.14.0               h28cd5cc_2
harfbuzz                  4.3.0                hd55b92a_0
hdf5                      1.10.6               h3ffc7dd_1
icu                       58.2                 he6710b0_3
idna                      3.3                      pypi_0    pypi
ipykernel                 6.9.1           py310h06a4308_0
ipython                   8.4.0           py310h06a4308_0
ipython-genutils          0.2.0                    pypi_0    pypi
ipython_genutils          0.2.0              pyhd3eb1b0_1
ipywidgets                7.6.5                    pypi_0    pypi
jedi                      0.18.1          py310h06a4308_1
jinja2                    3.0.3                    pypi_0    pypi
jpeg                      9e                   h7f8727e_0
json5                     0.9.6                    pypi_0    pypi
jsonschema                4.4.0           py310h06a4308_0
jupyter                   1.0.0           py310h06a4308_8
jupyter-console           6.4.3                    pypi_0    pypi
jupyter_client            7.2.2           py310h06a4308_0
jupyter_console           6.4.3              pyhd3eb1b0_0
jupyter_core              4.10.0          py310h06a4308_0
jupyter_server            1.18.1          py310h06a4308_0
jupyterlab                3.4.4           py310h06a4308_0
jupyterlab-pygments       0.1.2                    pypi_0    pypi
jupyterlab-widgets        1.0.0                    pypi_0    pypi
jupyterlab_pygments       0.1.2                      py_0
jupyterlab_server         2.12.0          py310h06a4308_0
jupyterlab_widgets        1.0.0              pyhd3eb1b0_1
kiwisolver                1.4.2           py310h295c915_0
krb5                      1.19.2               hac12032_0
lame                      3.100                h7b6447c_0
lcms2                     2.12                 h3be6417_0
ld_impl_linux-64          2.38                 h1181459_1
libbrotlicommon           1.0.9                h5eee18b_7
libbrotlidec              1.0.9                h5eee18b_7
libbrotlienc              1.0.9                h5eee18b_7
libclang                  10.0.1          default_hb85057a_2
libedit                   3.1.20210910         h7f8727e_0
libevent                  2.1.12               h8f2d780_0
libffi                    3.3                  he6710b0_2
libgcc-ng                 11.2.0               h1234567_1
libgfortran-ng            11.2.0               h00389a5_1
libgfortran5              11.2.0               h1234567_1
libgomp                   11.2.0               h1234567_1
libidn2                   2.3.2                h7f8727e_0
libllvm10                 10.0.1               hbcb73fb_5
libopenblas               0.3.20               h043d6bf_1
libopus                   1.3.1                h7b6447c_0
libpng                    1.6.37               hbc83047_0
libpq                     12.9                 h16c4e8d_3
libprotobuf               3.20.1               h4ff587b_0
libsodium                 1.0.18               h7b6447c_0
libstdcxx-ng              11.2.0               h1234567_1
libtasn1                  4.16.0               h27cfd23_0
libtiff                   4.2.0                h2818925_1
libunistring              0.9.10               h27cfd23_0
libuuid                   1.0.3                h7f8727e_2
libvpx                    1.7.0                h439df22_0
libwebp                   1.2.2                h55f646e_0
libwebp-base              1.2.2                h7f8727e_0
libxcb                    1.15                 h7f8727e_0
libxkbcommon              1.0.1                hfa300c1_0
libxml2                   2.9.14               h74e7548_0
libxslt                   1.1.35               h4e12654_0
lpips                     0.1.4                    pypi_0    pypi
lz4-c                     1.9.3                h295c915_1
markdown                  3.4.1                    pypi_0    pypi
markupsafe                2.1.1           py310h7f8727e_0
matplotlib                3.5.1           py310h06a4308_1
matplotlib-base           3.5.1           py310ha18d171_1
matplotlib-inline         0.1.2                    pypi_0    pypi
mistune                   0.8.4           py310h7f8727e_1000
multidict                 6.0.2                    pypi_0    pypi
munkres                   1.1.4                    pypi_0    pypi
nbclassic                 0.3.5                    pypi_0    pypi
nbclient                  0.5.13          py310h06a4308_0
nbconvert                 6.4.4           py310h06a4308_0
nbformat                  5.3.0           py310h06a4308_0
ncurses                   6.3                  h5eee18b_3
nest-asyncio              1.5.5           py310h06a4308_0
nettle                    3.7.3                hbbd107a_1
notebook                  6.4.12          py310h06a4308_0
nspr                      4.33                 h295c915_0
nss                       3.74                 h0370c37_0
numexpr                   2.8.3           py310h757a811_0
numpy                     1.21.5          py310hac523dd_3
numpy-base                1.21.5          py310h375b286_3
oauthlib                  3.2.0                    pypi_0    pypi
omegaconf                 2.2.2                    pypi_0    pypi
opencv                    4.5.5           py310h496257e_4
openh264                  2.1.1                h4ff587b_0
openjpeg                  2.4.0                h3ad879b_0
openssl                   1.1.1q               h7f8727e_0
packaging                 21.3                     pypi_0    pypi
pandas                    1.4.3           py310h6a678d5_0
pandocfilters             1.5.0                    pypi_0    pypi
parso                     0.8.3                    pypi_0    pypi
pcre                      8.45                 h295c915_0
pexpect                   4.8.0                    pypi_0    pypi
pickleshare               0.7.5                    pypi_0    pypi
pillow                    9.2.0           py310hace64e9_1
pip                       22.1.2          py310h06a4308_0
pixman                    0.40.0               h7f8727e_1
ply                       3.11            py310h06a4308_0
prometheus_client         0.14.1          py310h06a4308_0
prompt-toolkit            3.0.20                   pypi_0    pypi
prompt_toolkit            3.0.20               hd3eb1b0_0
protobuf                  3.19.4                   pypi_0    pypi
ptyprocess                0.7.0                    pypi_0    pypi
pure-eval                 0.2.2                    pypi_0    pypi
pure_eval                 0.2.2              pyhd3eb1b0_0
pyasn1                    0.4.8                    pypi_0    pypi
pyasn1-modules            0.2.8                    pypi_0    pypi
pycparser                 2.21                     pypi_0    pypi
pydeprecate               0.3.2                    pypi_0    pypi
pygments                  2.11.2                   pypi_0    pypi
pyopenssl                 22.0.0                   pypi_0    pypi
pyparsing                 3.0.4                    pypi_0    pypi
pyqt                      5.15.7          py310h6a678d5_1
pyqt5-sip                 12.11.0                  pypi_0    pypi
pyrsistent                0.18.0          py310h7f8727e_0
pysocks                   1.7.1           py310h06a4308_0
python                    3.10.4               h12debd9_0
python-dateutil           2.8.2                    pypi_0    pypi
python-fastjsonschema     2.15.1             pyhd3eb1b0_0
pytorch-lightning         1.7.1                    pypi_0    pypi
pytz                      2022.1          py310h06a4308_0
pyyaml                    6.0                      pypi_0    pypi
pyzmq                     23.2.0          py310h6a678d5_0
qt-main                   5.15.2               h327a75a_7
qt-webengine              5.15.9               hd2b0992_4
qtconsole                 5.3.1           py310h06a4308_1
qtpy                      2.0.1                    pypi_0    pypi
qtwebkit                  5.212                h4eab89a_4
readline                  8.1.2                h7f8727e_1
regex                     2022.7.9        py310h5eee18b_0
requests                  2.28.1          py310h06a4308_0
requests-oauthlib         1.3.1                    pypi_0    pypi
rsa                       4.9                      pypi_0    pypi
scipy                     1.9.0                    pypi_0    pypi
send2trash                1.8.0                    pypi_0    pypi
setuptools                61.2.0          py310h06a4308_0
sip                       6.6.2           py310h6a678d5_0
six                       1.16.0                   pypi_0    pypi
sniffio                   1.2.0           py310h06a4308_1
soupsieve                 2.3.1                    pypi_0    pypi
sqlite                    3.39.2               h5082296_0
stack-data                0.2.0                    pypi_0    pypi
stack_data                0.2.0              pyhd3eb1b0_0
tensorboard               2.10.0                   pypi_0    pypi
tensorboard-data-server   0.6.1                    pypi_0    pypi
tensorboard-plugin-wit    1.8.1                    pypi_0    pypi
terminado                 0.13.1          py310h06a4308_0
testpath                  0.6.0           py310h06a4308_0
timm                      0.6.7                    pypi_0    pypi
tk                        8.6.12               h1ccaba5_0
toml                      0.10.2                   pypi_0    pypi
torch                     1.12.1                   pypi_0    pypi
torchmetrics              0.9.3                    pypi_0    pypi
torchvision               0.13.1                   pypi_0    pypi
tornado                   6.1             py310h7f8727e_0
tqdm                      4.64.0                   pypi_0    pypi
traitlets                 5.1.1                    pypi_0    pypi
typing-extensions         4.3.0           py310h06a4308_0
typing_extensions         4.3.0           py310h06a4308_0
tzdata                    2022a                hda174b7_0
urllib3                   1.26.11         py310h06a4308_0
wcwidth                   0.2.5                    pypi_0    pypi
webencodings              0.5.1           py310h06a4308_1
websocket-client          0.58.0          py310h06a4308_4
werkzeug                  2.2.2                    pypi_0    pypi
wheel                     0.37.1                   pypi_0    pypi
widgetsnbextension        3.5.2           py310h06a4308_0
x264                      1!157.20191217       h7b6447c_0
xz                        5.2.5                h7f8727e_1
yarl                      1.8.1                    pypi_0    pypi
zeromq                    4.3.4                h2531618_0
zlib                      1.2.12               h7f8727e_2
zope-interface            5.4.0                    pypi_0    pypi
zstd                      1.5.2                ha4553b6_0

Unable to find a valid cuDNN algorithm to run convolution

(screenshot attached)

I'm running a simple prompt with only ViTB32 set to true, because otherwise I get a CUDA error or my kernel dies.
Settings:
ViTB32 = True #@param{type:"boolean"}
ViTB16 = False #@param{type:"boolean"}
ViTL14 = False #@param{type:"boolean"} # Default False
RN101 = False #@param{type:"boolean"} # Default False
RN50 = False #@param{type:"boolean"} # Default True
RN50x4 = False #@param{type:"boolean"} # Default False
RN50x16 = False #@param{type:"boolean"}
RN50x64 = False #@param{type:"boolean"}
SLIPB16 = False # param{type:"boolean"} # Default False. Looks broken, likely related to commented import of SLIP_VITB16
SLIPL16 = False # param{type:"boolean"}

RuntimeError: CUDA error: unknown error

First of all, thank you so much for the clear instructions.

However I have this issue!
RuntimeError: CUDA error: unknown error
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.

  • I followed the provided instructions.
  • Fully updated Windows 10
  • NVIDIA GeForce RTX 3080 (UUID: GPU-92f60414-8b1f-9d4c-64a9-b3d4149e2665)
  • 64G memory
  • No other application running
  • Fresh reboot

Error starts like this:

Starting Run: TimeToDiscoTurbo3(0) at frame 0
Prepping model...
./model/512x512_diffusion_uncond_finetune_008100.pt

RuntimeError Traceback (most recent call last)
Input In [19], in <cell line: 153>()
151 print(diffusion_model_path)
152 model.load_state_dict(torch.load(diffusion_model_path, map_location='cpu'))
--> 153 model.requires_grad_(False).eval().to(device)
154 for name, param in model.named_parameters():
155 if 'qkv' in name or 'norm' in name or 'proj' in name:

Model Links are dead.

404 Not Found

When trying to run the notebook it cannot find the target web addresses to download.

[I 20:38:37.339 NotebookApp] Kernel started: b4edfb2b-3eb4-4509-bad4-88d8a9bab1e1, name: python3
--2022-11-25 20:38:48-- https://v-diffusion.s3.us-west-2.amazonaws.com/512x512_diffusion_uncond_finetune_008100.pt
Resolving v-diffusion.s3.us-west-2.amazonaws.com (v-diffusion.s3.us-west-2.amazonaws.com)... 52.218.240.185, 52.92.192.210, 52.92.128.146, ...
Connecting to v-diffusion.s3.us-west-2.amazonaws.com (v-diffusion.s3.us-west-2.amazonaws.com)|52.218.240.185|:443... connected.
HTTP request sent, awaiting response... 404 Not Found
2022-11-25 20:38:50 ERROR 404: Not Found.

--2022-11-25 20:38:50-- https://v-diffusion.s3.us-west-2.amazonaws.com/secondary_model_imagenet_2.pth
Resolving v-diffusion.s3.us-west-2.amazonaws.com (v-diffusion.s3.us-west-2.amazonaws.com)... 3.5.83.168, 52.92.192.210, 52.218.204.137, ...
Connecting to v-diffusion.s3.us-west-2.amazonaws.com (v-diffusion.s3.us-west-2.amazonaws.com)|3.5.83.168|:443... connected.
HTTP request sent, awaiting response... 404 Not Found
2022-11-25 20:38:51 ERROR 404: Not Found.
