

title: Real-Time Latent Consistency Model Image-to-Image ControlNet
emoji: 🖼️🖼️
colorFrom: gray
colorTo: indigo
sdk: docker
pinned: false
suggested_hardware: a10g-small
disable_embedding: true

Real-Time Latent Consistency Model

This demo showcases the Latent Consistency Model (LCM) using Diffusers with an MJPEG stream server. You can read more about LCM + LoRAs with diffusers here.
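
As a rough sketch of the MJPEG streaming technique (illustrative only: the FastAPI app, route name, and placeholder frame below are assumptions, not the actual code in server/), the server keeps a single HTTP response open and pushes JPEG-encoded frames over a multipart/x-mixed-replace stream:

```python
# Hedged sketch of an MJPEG stream endpoint; the real server in server/
# streams the pipeline's generated frames instead of a placeholder image.
import io

from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from PIL import Image

app = FastAPI()

def mjpeg_frames():
    while True:
        frame = Image.new("RGB", (512, 512))  # stand-in for a generated frame
        buf = io.BytesIO()
        frame.save(buf, format="JPEG")
        # Each multipart chunk replaces the previous frame in the browser,
        # which is what makes the stream look like live video.
        yield (b"--frame\r\n"
               b"Content-Type: image/jpeg\r\n\r\n" + buf.getvalue() + b"\r\n")

@app.get("/stream")
def stream():
    return StreamingResponse(
        mjpeg_frames(),
        media_type="multipart/x-mixed-replace; boundary=frame",
    )
```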

You need a webcam to run this demo. 🤗

See a collection of live demos here

Running Locally

You need Python 3.10 and Node > 19, plus either CUDA, a Mac with an M1/M2/M3 chip, or an Intel Arc GPU

Install

python -m venv venv
source venv/bin/activate
pip3 install -r server/requirements.txt
cd frontend && npm install && npm run build && cd ..
python server/main.py --reload --pipeline img2imgSDTurbo 

Don't forget to build the frontend!

cd frontend && npm install && npm run build && cd ..

Pipelines

You can build your own pipeline by following the examples here.
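
As a hedged illustration of what such a pipeline ultimately wraps (the model ID, function name, and parameters below are assumptions for the sketch, not the interface server/main.py actually expects, which is defined by the example pipelines):

```python
# Rough sketch of an LCM image-to-image step with diffusers; the example
# pipelines in the repo define the real class structure and parameters.
import torch
from diffusers import AutoPipelineForImage2Image
from PIL import Image

pipe = AutoPipelineForImage2Image.from_pretrained(
    "SimianLuo/LCM_Dreamshaper_v7", torch_dtype=torch.float16
).to("cuda")

def predict(frame: Image.Image, prompt: str) -> Image.Image:
    # LCM needs only a handful of denoising steps per frame, which is what
    # makes real-time webcam-to-image generation feasible.
    return pipe(
        prompt=prompt,
        image=frame,
        num_inference_steps=4,
        strength=0.5,
        guidance_scale=1.0,
    ).images[0]
```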

LCM

Image to Image

python server/main.py --reload --pipeline img2img 

LCM

Text to Image

python server/main.py --reload --pipeline txt2img 

Image to Image ControlNet Canny

python server/main.py --reload --pipeline controlnet 

LCM + LoRA

Using LCM-LoRA gives the model the superpower of doing inference in as few as 4 steps. Learn more here or read the technical report.
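
In plain diffusers terms, LCM-LoRA amounts to swapping in the LCM scheduler and loading the LoRA weights; a minimal sketch (the base model and prompt are illustrative placeholders, not what these pipelines actually load):

```python
# Hedged sketch of LCM-LoRA with diffusers; the repo's pipelines combine
# this idea with ControlNet, image-to-image, Compel, etc.
import torch
from diffusers import DiffusionPipeline, LCMScheduler

pipe = DiffusionPipeline.from_pretrained(
    "Lykon/dreamshaper-7", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")

image = pipe(
    "portrait photo of an astronaut, 8k",
    num_inference_steps=4,   # LCM-LoRA needs only a few steps
    guidance_scale=1.0,      # works best with little or no CFG
).images[0]
```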

Image to Image ControlNet Canny LoRA

python server/main.py --reload --pipeline controlnetLoraSD15

or SDXL; note that SDXL is slower than SD15 since inference runs on 1024x1024 images

python server/main.py --reload --pipeline controlnetLoraSDXL

Text to Image

python server/main.py --reload --pipeline txt2imgLora
python server/main.py --reload --pipeline txt2imgLoraSDXL

Available Pipelines

img2img
txt2img
controlnet
txt2imgLora
controlnetLoraSD15

controlnetLoraSDXL
txt2imgLoraSDXL

img2imgSDXLTurbo
controlnetSDXLTurbo

img2imgSDTurbo
controlnetSDTurbo

controlnetSegmindVegaRT
img2imgSegmindVegaRT

Command-line options and environment variables

  • --host: Host address (default: 0.0.0.0)
  • --port: Port number (default: 7860)
  • --reload: Reload code on change
  • --max-queue-size: Maximum queue size (optional)
  • --timeout: Timeout period (optional)
  • --safety-checker: Enable Safety Checker (optional)
  • --torch-compile: Use Torch Compile
  • --use-taesd / --no-taesd: Use Tiny Autoencoder
  • --pipeline: Pipeline to use (default: "txt2img")
  • --ssl-certfile: SSL Certificate File (optional)
  • --ssl-keyfile: SSL Key File (optional)
  • --debug: Print Inference time
  • --compel: Enable Compel prompt weighting
  • --sfast: Enable Stable Fast
  • --onediff: Enable OneDiff
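
For example, several of these options can be combined on a single command line:

python server/main.py --reload --pipeline controlnet --use-taesd --debug --safety-checker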

If you run the app with bash build-run.sh, you can set the PIPELINE variable to choose which pipeline to run:

PIPELINE=txt2imgLoraSDXL bash build-run.sh

You can also set options through environment variables:

TIMEOUT=120 SAFETY_CHECKER=True MAX_QUEUE_SIZE=4 python server/main.py --reload --pipeline txt2imgLoraSDXL

If you're running locally and want to test it on Mobile Safari, the web server needs to be served over HTTPS, or follow the instructions in this comment:

openssl req -newkey rsa:4096 -nodes -keyout key.pem -x509 -days 365 -out certificate.pem
python server/main.py --reload --ssl-certfile=certificate.pem --ssl-keyfile=key.pem

Docker

You need the NVIDIA Container Toolkit for Docker. The image defaults to the `controlnet` pipeline.

docker build -t lcm-live .
docker run -ti -p 7860:7860 --gpus all lcm-live

To reuse model data from the host and avoid downloading it again, mount the Hugging Face cache into the container. You can change ~/.cache/huggingface to any other directory, but if you use huggingface-cli locally, sharing the same cache avoids duplicate downloads:

docker run -ti -p 7860:7860 -e HF_HOME=/data -v ~/.cache/huggingface:/data  --gpus all lcm-live

or with environment variables

docker run -ti -e PIPELINE=txt2imgLoraSDXL -p 7860:7860 --gpus all lcm-live

Demo on Hugging Face

(Demo video: lcm-real.mp4)
