noise-lab / netdiffusion_generator
License: Apache License 2.0
When trying to install the requirements with the command `pip install -r requirements.txt`, I got:
ERROR: Invalid requirement: '_libgcc_mutex=0.1=main' (from line 4 of requirements.txt)
Hint: = is not a valid operator. Did you mean == ?
Then I checked the requirements.txt file and found a comment saying:
" # This file may be used to create an environment using:
# $ conda create --name <env> --file <this file>
# platform: linux-64 ...".
So I tried creating a whole new virtual environment with `conda create --name net-diffusion --file requirements.txt`, but unfortunately it failed again.
The failure message was:
Collecting package metadata (current_repodata.json): done
Solving environment: unsuccessful attempt using repodata from current_repodata.json, retrying with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: failed
PackagesNotFoundError: The following packages are not available from current channels:
- pygments==2.17.2=pypi_0
- ca-certificates==2023.12.12=h06a4308_0
- importlib-resources==6.1.1=pypi_0
- lit==17.0.6=pypi_0
- nvidia-cusparse-cu11==11.7.4.91=pypi_0
- jinja2==3.1.3=pypi_0
- aiosignal==1.3.1=pypi_0
- python-multipart==0.0.9=pypi_0
- nvidia-cuda-nvrtc-cu11==11.7.99=pypi_0
- wandb==0.15.11=pypi_0
- coloredlogs==15.0.1=pypi_0
- opt-einsum==3.3.0=pypi_0
- tqdm==4.66.2=pypi_0
- opencv-python==4.7.0.68=pypi_0
- cycler==0.12.1=pypi_0
- nvidia-nvjitlink-cu12==12.3.101=pypi_0
- zipp==3.17.0=pypi_0
- sentry-sdk==1.40.5=pypi_0
- pyparsing==3.1.1=pypi_0
- cachetools==5.3.2=pypi_0
- appdirs==1.4.4=pypi_0
- onnxruntime-gpu==1.16.0=pypi_0
- nvidia-cublas-cu11==11.10.3.66=pypi_0
- tzdata==2023d=h04d1e81_0
- requests-oauthlib==1.3.1=pypi_0
- setuptools==68.2.2=py310h06a4308_0
- certifi==2024.2.2=py310h06a4308_0
- numexpr==2.8.7=py310h85018f9_0
- tbb==2021.8.0=hdb19cb5_0
- xformers==0.0.21=pypi_0
- blas==1.0=mkl
- sympy==1.12=pypi_0
- smmap==5.0.1=pypi_0
- wheel==0.41.2=py310h06a4308_0
- h5py==3.10.0=pypi_0
- packaging==23.2=pypi_0
- importlib-metadata==7.0.1=pypi_0
- gast==0.5.4=pypi_0
- libwebp-base==1.3.2=h5eee18b_0
- astunparse==1.6.3=pypi_0
- oauthlib==3.2.2=pypi_0
- fsspec==2024.2.0=pypi_0
- markdown==3.5.1=pypi_0
- numpy-base==1.26.3=py310hb5e798b_0
- numpy==1.26.1=pypi_0
- nvidia-cuda-runtime-cu12==12.1.105=pypi_0
- libclang==16.0.6=pypi_0
- tensorboard-data-server==0.7.2=pypi_0
- libpng==1.6.39=h5eee18b_0
- rsa==4.9=pypi_0
- sqlite==3.41.2=h5eee18b_0
- transformers==4.36.2=pypi_0
- markdown-it-py==2.2.0=pypi_0
- docker-pycreds==0.4.0=pypi_0
- mpmath==1.3.0=pypi_0
- frozenlist==1.4.1=pypi_0
- diffusers==0.25.0=pypi_0
- absl-py==2.0.0=pypi_0
- attrs==23.2.0=pypi_0
- ffmpy==0.3.2=pypi_0
- bottleneck==1.3.7=py310ha9d4c09_0
- setproctitle==1.3.3=pypi_0
- grpcio==1.59.2=pypi_0
- pip==23.3.1=py310h06a4308_0
- freetype==2.12.1=h4a9f257_0
- tensorboard==2.14.1=pypi_0
- mkl-service==2.4.0=py310h5eee18b_1
- safetensors==0.4.2=pypi_0
- anyio==4.3.0=pypi_0
- antlr4-python3-runtime==4.9.3=pypi_0
- libgcc-ng==11.2.0=h1234567_1
- nvidia-cufft-cu12==11.0.2.54=pypi_0
- pydantic==2.6.1=pypi_0
- _libgcc_mutex==0.1
- yarl==1.9.4=pypi_0
- gradio==3.50.2=pypi_0
- libuuid==1.41.5=h5eee18b_0
- orjson==3.9.14=pypi_0
- pytz==2023.3.post1=py310h06a4308_0
- ml-dtypes==0.2.0=pypi_0
- omegaconf==2.3.0=pypi_0
- uvicorn==0.27.1=pypi_0
- altair==4.2.2=pypi_0
- nvidia-cudnn-cu12==8.9.2.26=pypi_0
- charset-normalizer==3.3.2=pypi_0
- pathtools==0.1.2=pypi_0
- pytorch-lightning==1.9.0=pypi_0
- starlette==0.36.3=pypi_0
- ftfy==6.1.1=pypi_0
- httpx==0.27.0=pypi_0
- fairscale==0.4.13=pypi_0
- jpeg==9e=h5eee18b_1
- nvidia-nvtx-cu11==11.7.91=pypi_0
- tensorflow-estimator==2.14.0=pypi_0
- toml==0.10.2=pypi_0
- markupsafe==2.1.3=pypi_0
- lz4-c==1.9.4=h6a678d5_0
- torchmetrics==1.3.1=pypi_0
- pydub==0.25.1=pypi_0
- triton==2.2.0=pypi_0
- tensorflow-io-gcs-filesystem==0.34.0=pypi_0
- nvidia-cublas-cu12==12.1.3.1=pypi_0
- prodigyopt==1.0=pypi_0
- google-auth==2.23.4=pypi_0
- nvidia-nccl-cu12==2.19.3=pypi_0
- libffi==3.4.4=h6a678d5_0
- onnx==1.14.1=pypi_0
- huggingface-hub==0.20.1=pypi_0
- websockets==11.0.3=pypi_0
- openjpeg==2.4.0=h3ad879b_0
- pandas==2.1.4=py310h1128e8f_0
- scapy==2.5.0=pypi_0
- nvidia-nccl-cu11==2.14.3=pypi_0
- openssl==1.1.1w=h7f8727e_0
- annotated-types==0.6.0=pypi_0
- gitpython==3.1.42=pypi_0
- six==1.16.0=pyhd3eb1b0_1
- rich==13.4.1=pypi_0
- toolz==0.12.1=pypi_0
- ncurses==6.4=h6a678d5_0
- lightning-utilities==0.10.1=pypi_0
- _openmp_mutex==5.1=1_gnu
- exceptiongroup==1.2.0=pypi_0
- gradio-client==0.6.1=pypi_0
- tensorrt-bindings==8.6.1=pypi_0
- pillow==10.2.0=py310h5eee18b_0
- tensorrt-libs==8.6.1=pypi_0
- tensorrt==8.6.1.post1=pypi_0
- lcms2==2.12=h3be6417_0
- entrypoints==0.4=pypi_0
- readline==8.2=h5eee18b_0
- ld_impl_linux-64==2.38=h1181459_1
- flatbuffers==23.5.26=pypi_0
- termcolor==2.3.0=pypi_0
- aiohttp==3.9.3=pypi_0
- libgomp==11.2.0=h1234567_1
- werkzeug==3.0.1=pypi_0
- sentencepiece==0.2.0=pypi_0
- httpcore==1.0.4=pypi_0
- easygui==0.98.3=pypi_0
- async-timeout==4.0.3=pypi_0
- nvidia-cuda-cupti-cu11==11.7.101=pypi_0
- xz==5.4.5=h5eee18b_0
- mdurl==0.1.2=pypi_0
- pyasn1-modules==0.3.0=pypi_0
- kiwisolver==1.4.5=pypi_0
- google-auth-oauthlib==1.0.0=pypi_0
- nvidia-cusparse-cu12==12.1.0.106=pypi_0
- aiofiles==23.2.1=pypi_0
- pyyaml==6.0.1=pypi_0
- wrapt==1.14.1=pypi_0
- google-pasta==0.2.0=pypi_0
- nvidia-cufft-cu11==10.9.0.58=pypi_0
- multidict==6.0.5=pypi_0
- mkl_random==1.2.4=py310hdb19cb5_0
- referencing==0.33.0=pypi_0
- typing-extensions==4.8.0=pypi_0
- jsonschema-specifications==2023.12.1=pypi_0
- python-tzdata==2023.3=pyhd3eb1b0_0
- tensorflow==2.14.0=pypi_0
- tokenizers==0.15.2=pypi_0
- nvidia-cuda-nvrtc-cu12==12.1.105=pypi_0
- accelerate==0.25.0=pypi_0
- filelock==3.13.1=pypi_0
- matplotlib==3.8.3=pypi_0
- nvidia-cuda-runtime-cu11==11.7.99=pypi_0
- voluptuous==0.13.1=pypi_0
- libdeflate==1.17=h5eee18b_1
- h11==0.14.0=pypi_0
- einops==0.6.1=pypi_0
- nvidia-cusolver-cu11==11.4.0.1=pypi_0
- lion-pytorch==0.0.6=pypi_0
- nvidia-cusolver-cu12==11.4.5.107=pypi_0
- rpds-py==0.18.0=pypi_0
- python==3.10.9=h7a1cb2a_2
- fonttools==4.49.0=pypi_0
- python-dateutil==2.8.2=pyhd3eb1b0_0
- idna==3.4=pypi_0
- pywavelets==1.5.0=pypi_0
- mkl==2023.1.0=h213fc3f_46344
- networkx==3.2.1=pypi_0
- humanfriendly==10.0=pypi_0
- nvidia-cudnn-cu11==8.5.0.96=pypi_0
- urllib3==2.0.7=pypi_0
- click==8.1.7=pypi_0
- contourpy==1.2.0=pypi_0
- jsonschema==4.21.1=pypi_0
- intel-openmp==2023.1.0=hdb19cb5_46306
- nvidia-nvtx-cu12==12.1.105=pypi_0
- requests==2.31.0=pypi_0
- invisible-watermark==0.2.0=pypi_0
- nvidia-curand-cu11==10.2.10.91=pypi_0
- bzip2==1.0.8=h7b6447c_0
- pyasn1==0.5.0=pypi_0
- keras==2.14.0=pypi_0
- zlib==1.2.13=h5eee18b_0
- mkl_fft==1.3.8=py310h5eee18b_0
- fastapi==0.109.2=pypi_0
- libstdcxx-ng==11.2.0=h1234567_1
- torch==2.2.0=pypi_0
- semantic-version==2.10.0=pypi_0
- libtiff==4.5.1=h6a678d5_0
- sniffio==1.3.0=pypi_0
- wcwidth==0.2.13=pypi_0
- psutil==5.9.8=pypi_0
- nvidia-cuda-cupti-cu12==12.1.105=pypi_0
- zstd==1.5.5=hc292b87_0
- lycoris-lora==2.0.2=pypi_0
- dadaptation==3.1=pypi_0
- pydantic-core==2.16.2=pypi_0
- nvidia-curand-cu12==10.3.2.106=pypi_0
- protobuf==3.20.3=pypi_0
- regex==2023.12.25=pypi_0
- cmake==3.28.3=pypi_0
- tk==0.1.0=pypi_0
- lerc==3.0=h295c915_0
- bitsandbytes==0.41.1=pypi_0
- scipy==1.11.4=pypi_0
- gitdb==4.0.11=pypi_0
Current channels:
- https://conda.anaconda.org/default/linux-64
- https://conda.anaconda.org/default/noarch
To search for alternate channels that may provide the conda package you're
looking for, navigate to
https://anaconda.org
and use the search bar at the top of the page.
Is there any other way to get this working? Could you double-check the requirements.txt file?
Please consider publishing a YAML configuration file to simplify the installation of NetDiffusion; currently, the requirements.txt file is not usable with pip.
In my case, I had to create a YAML configuration file from the requirements file using the script provided here: https://gist.github.com/mfansler/5a1a703d1ac6bb2139547003248c3827
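The core of what that conversion has to do is split the exported pins into conda packages and pip packages: in a conda export, a build string of `pypi_0` marks a package that was installed with pip inside the environment, which is exactly why `conda create --file` cannot resolve them. A minimal sketch of that idea (my own function and argument names, assuming the `name=version=build` line format shown in the error above; this is not the gist's exact code):

```python
def conda_export_to_env_yaml(lines, env_name="net-diffusion"):
    """Split conda-export pins into conda vs. pip deps and emit environment.yml text.

    Each non-comment line looks like 'name=version=build'; a build string
    of 'pypi_0' means the package was installed with pip inside the env.
    """
    conda_deps, pip_deps = [], []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip the conda export header comments
        name, version, build = line.split("=", 2)
        if build == "pypi_0":
            pip_deps.append(f"{name}=={version}")      # pip uses '=='
        else:
            conda_deps.append(f"{name}={version}={build}")
    out = [f"name: {env_name}", "dependencies:"]
    out += [f"  - {d}" for d in conda_deps]
    if pip_deps:
        out += ["  - pip", "  - pip:"]
        out += [f"    - {d}" for d in pip_deps]
    return "\n".join(out) + "\n"
```

Writing the result of `conda_export_to_env_yaml(open("requirements.txt"))` to `environment.yml` then gives conda the conda-channel pins and hands the `pypi_0` ones to pip.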
In your case, you can create the correct YAML when your environment is active, with this command:
conda env export | head -n -1 | tail -n +2 > environment.yml
Then I created a new virtual environment with:
conda create --name my_env pip python=3.10.9
and activated it with:
conda activate my_env
Then I installed tensorrt-libs from NVIDIA's PyPI index, as this package is not available on the default PyPI index and conda cannot resolve it:
pip install --extra-index-url https://pypi.nvidia.com tensorrt-libs
This step is mandatory to install all requirements listed in the YAML config file.
Next, I downgraded torch and triton to 2.0.1 and 2.0.0 respectively (e.g. pip install torch==2.0.1 triton==2.0.0), because some packages were not compatible with the torch 2.2.0 pinned in the original requirements.txt.
Finally, I updated all dependencies with:
conda env update -n my_env -f environment.yml
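After the update, it is worth sanity-checking that the environment actually picked up the intended versions before running the pipeline. A small sketch (the package names passed in are just examples from the pin list above):

```python
from importlib.metadata import PackageNotFoundError, version

def installed_versions(names):
    """Map each distribution name to its installed version, or None if missing."""
    found = {}
    for name in names:
        try:
            found[name] = version(name)
        except PackageNotFoundError:
            found[name] = None  # not installed in this environment
    return found
```

For example, `installed_versions(["torch", "triton", "tensorrt"])` should report the downgraded 2.0.1 / 2.0.0 versions rather than the 2.2.0 pin from the original file.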
How can I generate netflix_05.png?
I've been trying different parameter configurations, but it is not clear which ControlNet (w/ Canny) + generation settings you used.
Also, is this error expected: Lora not found: test_task_model?
I have the ControlNet extension installed and recognized.
I fed ControlNet with netflix_01.png when I tried the data preprocessing step. However, no results appeared when I executed:
# Leverage Stable Diffusion WebUI for initial caption creation
cd ../../sd-webui-fork/stable-diffusion-webui/
# Launch WebUI
bash webui.sh
And this is what happened:
There is nothing at http://127.0.0.1:7860.
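When the page stays blank, a first step is to check whether anything is listening on the port at all (7860 is the WebUI's default Gradio port); a minimal sketch:

```python
import socket

def port_open(host="127.0.0.1", port=7860, timeout=2.0):
    """Return True if a TCP server is accepting connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False  # refused, timed out, or unreachable
```

If `port_open()` returns False, webui.sh most likely crashed during startup, so the console output it printed (Python tracebacks, CUDA/driver errors) is the place to look rather than the browser.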