qunash / stable-diffusion-2-gui
Lightweight Stable Diffusion v2.1 web UI: txt2img, img2img, depth2img, inpaint and upscale4x.
License: MIT License
Hi,
thanks for building this. I managed to use the upscaler on a 768x image the first time I used the plugin, but after that I get broken images after upscaling, even at 512x. The image is displayed in the preview, but when I try to open or download it, it breaks halfway through loading.
When I click the play button in the "Run the app" section, I get an error:
OSError: [Errno 36] File name too long: '.main-div div{display:inline-flex;align-items:center;gap:.8rem;font-size:1.75rem}.main-div div h1{font-weight:900;margin-bottom:7px}.main-div p{margin-bottom:10px;font-size:94%}a{text-decoration:underline}.tabs{margin-top:0;margin-bottom:0}#gallery{min-height:20rem}\n'
Hey, as I run the Colab cells I'm getting this error:
NameError                                 Traceback (most recent call last)
<ipython-input-2-c92a6aca1dc2> in <module>
    413
    414 gallery = gr.Gallery(label="Generated images", show_label=False).style(grid=[2], height="auto")
--> 415 state_info = gr.Textbox(label="State", show_label=False, max_lines=2, interactive=false).style(container=False)
    416 error_output = gr.Markdown(visible=False)
    417
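The lowercase `false` on the highlighted line is the culprit: Python booleans are capitalized. A minimal sketch of the issue and the fix (the Gradio call in the comment is copied from the traceback; the demonstration code around it is illustrative only):

```python
# Python has no lowercase `false`; referencing it raises NameError.
try:
    eval("false")
    reached = True
except NameError:
    reached = False

# Corrected notebook line (sketch, per the traceback — note the capital F):
# state_info = gr.Textbox(label="State", show_label=False,
#                         max_lines=2, interactive=False).style(container=False)
```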
I can't get img2img to work. I'm getting this error:
Error
Pipeline <class 'diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_img2img.StableDiffusionImg2ImgPipeline'> expected {'unet', 'text_encoder', 'scheduler', 'feature_extractor', 'vae', 'safety_checker', 'tokenizer'}, but only {'unet', 'text_encoder', 'scheduler', 'vae', 'tokenizer'} were passed.
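The error itself names what is missing: the set difference between the expected and passed components is `safety_checker` and `feature_extractor`. A minimal sketch, assuming the pipeline is being constructed from individual components (the suggested fix in the comment — passing `safety_checker=None` — is a common diffusers pattern, not code from this repo):

```python
# Components taken verbatim from the error message above.
expected = {'unet', 'text_encoder', 'scheduler', 'feature_extractor',
            'vae', 'safety_checker', 'tokenizer'}
passed = {'unet', 'text_encoder', 'scheduler', 'vae', 'tokenizer'}

missing = expected - passed  # what the pipeline still needs

# Hedged fix sketch: pass the missing components explicitly when building
# the img2img pipeline, e.g.
# StableDiffusionImg2ImgPipeline(..., safety_checker=None,
#                                feature_extractor=None)
```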
This is a really nice bit of code. Have you considered adding a licence (so others can build on it), or is it to remain closed source?
I'm randomly losing the connection, without any info in the output cell. I have to restart the cell to fix the issue.
I get only black images, whatever the prompt.
This could be a potential solution
It seems like everything after the first 288 characters of the prompt is discarded. Is this correct? Is there a way to raise this limit? Thanks.
After trying the Depth-to-Image model 4 or 5 times, I got this error:
CUDA out of memory. Tried to allocate 6.78 GiB (GPU 0; 14.76 GiB total capacity; 6.31 GiB already allocated; 5.71 GiB free; 7.80 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
I'm not sure if something is wrong with the D2I model. Just letting you know!
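The OOM message above suggests trying `max_split_size_mb` to reduce fragmentation. A minimal sketch of how that is usually applied: set the environment variable before the first CUDA allocation, e.g. at the very top of the notebook (the value 512 here is an illustrative starting point, not a recommendation from this repo):

```python
import os

# Must run before torch touches the GPU; the allocator reads this once.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:512"
```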
Thank you for your good work. I wonder how I can use another model (e.g. epicrealism from Civitai)?
Where in the code should local models be specified?
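The notebook itself doesn't document this, but recent diffusers releases can load a single-file `.safetensors` checkpoint (the format Civitai distributes) directly. A hedged sketch, assuming diffusers is available; the function name and checkpoint filename are hypothetical:

```python
def load_civitai_checkpoint(path):
    """Load a single-file .safetensors checkpoint (e.g. downloaded from Civitai)."""
    # Deferred import so the sketch is inspectable without diffusers installed.
    from diffusers import StableDiffusionPipeline
    return StableDiffusionPipeline.from_single_file(path)

# Usage (not executed here; needs a real checkpoint file and a GPU):
# pipe = load_civitai_checkpoint("epicrealism.safetensors").to("cuda")
```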
Your 4x upscaler did a very good job upscaling the detail of the eyes; I cannot recreate that with stable-diffusion-webui and Stable Diffusion v2.1. However, there are visible tiles in the upscaled photo. Is it possible to lift the 512x512 restriction?
About 2 weeks after my last usage, I ran the code, but now I get an error that doesn't let me use SD.
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
torchaudio 2.0.2+cu118 requires torch==2.0.1, but you have torch 1.13.1 which is incompatible.
torchdata 0.6.1 requires torch==2.0.1, but you have torch 1.13.1 which is incompatible.
torchtext 0.15.2 requires torch==2.0.1, but you have torch 1.13.1 which is incompatible.
torchvision 0.13.1+cu113 requires torch==1.12.1, but you have torch 1.13.1 which is incompatible.
When creating in "Text to Image", it shows a few other errors and the waiting icon spins indefinitely. Switching to img2img or the other modes doesn't rearrange the layout (the field to upload an image file doesn't even show up).
Is it possible to exclude the xformers check? This is the main reason AMD users need to run SD in Colab instead of running it locally.
The runtime sometimes crashes when switching between two modes; in my trials, it happens every time I switch from img2img to text2img.
I have tried many notebooks and yours is the simplest, fastest and most stable of them all! The only function I miss is seeing the seed of the image I just generated.
Hello, I wanted to ask which sampling method is used by this tool, as I was unable to find any reference in the code.
I kept getting this:
upscale() takes 7 positional arguments but 8 were given
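This `TypeError` means the caller is wiring one more argument into `upscale` than the function declares; in Gradio apps this often comes from a version mismatch between the notebook and the installed Gradio (an assumption here, not confirmed by the repo). A generic illustration with a hypothetical 7-parameter function:

```python
# Hypothetical stand-in for the notebook's upscale handler: 7 parameters.
def upscale(a, b, c, d, e, f, g):
    return "ok"

# Calling it with 8 arguments reproduces the reported error.
try:
    upscale(1, 2, 3, 4, 5, 6, 7, 8)
    msg = ""
except TypeError as e:
    msg = str(e)
```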
I think 4x upscale should just be a button, not drag-and-drop plus required prompting. But I am curious how it actually works, if it can add details while upscaling.