
fooocus-colab's Introduction

๐Ÿฃ Please follow me for new updates https://twitter.com/camenduru
๐Ÿ”ฅ Please join our discord server https://discord.gg/k5BwmmvJJU
๐Ÿฅณ Please join my patreon community https://patreon.com/camenduru

🦒 Colab

Colab Info
Open In Colab Fooocus Colab (Official Version)
Open In Colab Fooocus-MRE Colab (Thanks to @MoonRide303 ❤)
Open In Colab rundiffusion_xl_fooocus_colab
RunDiffusion/rundiffusion-xl (Old Version)
Open In Colab juggernaut_xl_fooocus_colab
KandooAI/juggernaut-xl (Old Version)
Open In Colab sd_xl_fooocus_colab
stabilityai/stable-diffusion-xl-base-1.0 (Old Version)

Style Reference

Main Repo

https://github.com/lllyasviel/Fooocus (Thanks to @lllyasviel ❤)
https://github.com/AUTOMATIC1111/stable-diffusion-webui
https://github.com/comfyanonymous/ComfyUI

Paper

https://arxiv.org/abs/2307.01952

Output

(Screenshots: 2023-08-12 030023, 2023-08-12 030302)

fooocus-colab's People

Contributors

camenduru



fooocus-colab's Issues

GPU RAM limit reached after ~10 images generated

Thanks camenduru and lllyasviel; Fooocus makes it easy to generate Midjourney-quality images.

In my testing, the app crashes after about 10 images in Colab; it hits the GPU RAM limit very quickly.

I'm running on Colab Pro with a T4 GPU.

Please see the screenshot of the crash below.
(Screenshot: Fooocus_colab.ipynb - Colaboratory)
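A workaround that sometimes stretches a Colab session further is explicitly releasing cached GPU memory between generations. A minimal sketch, assuming PyTorch is present in the runtime; the function name is mine, not part of Fooocus, and this works around the symptom rather than fixing the underlying leak:

```python
import gc

def free_gpu_memory():
    """Best-effort release of cached GPU memory between generations."""
    gc.collect()  # drop unreachable Python references first
    try:
        import torch
        if torch.cuda.is_available():
            # Return cached allocator blocks to the driver
            torch.cuda.empty_cache()
            return "cuda cache cleared"
    except ImportError:
        pass
    return "no cuda runtime"
```

Calling this after each batch may delay, though not prevent, the RAM-limit crash on a T4.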

Incorrect Styles


When sai-origami is selected it creates sai-neonpunk; similarly, when sai-photographic is selected it creates sai-origami.

Please check.

Thank You!

Screenshot 2023-08-29 at 4 03 13 PM
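For context, this symptom (every selection resolving to a neighbouring style) is what an index mismatch between two parallel lists looks like. A hypothetical illustration, not Fooocus's actual style code; the style names come from the report, but the lists and function are invented:

```python
# Two lists that should be in the same order but have drifted by one position.
DISPLAYED = ["sai-origami", "sai-photographic"]
APPLIED = ["sai-neonpunk", "sai-origami"]  # mismatched ordering

def resolve(selection):
    # Buggy pattern: look up the index in one list, apply from the other.
    return APPLIED[DISPLAYED.index(selection)]
```

With this mismatch, `resolve("sai-origami")` yields `"sai-neonpunk"` and `resolve("sai-photographic")` yields `"sai-origami"`, exactly as reported.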

Persistent unknown error

Getting this new error while trying to start the web UI. Using --normalvram doesn't help either.


torch.cuda.OutOfMemoryError on Windows Desktop

Hello. I just installed Fooocus, let it download the SDXL models, and did my first test run. It failed to complete the run with the message:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 14.00 MiB. GPU 0 has a total capacty of 6.00 GiB of which 3.55 GiB is free. Of the allocated memory 1.36 GiB is allocated by PyTorch, and 77.12 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

I'm running on a desktop with 8 GB RAM and an NVIDIA GeForce RTX 2060 with 6 GB VRAM. I was able to get SDXL to work before on Automatic1111, but with a slightly different model version.
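The error message's own suggestion can be tried by setting `PYTORCH_CUDA_ALLOC_CONF` before torch touches the GPU, i.e., before launching Fooocus. A sketch; the value 128 is an assumption you should tune for your card:

```python
import os

# Cap the CUDA caching allocator's split size to reduce fragmentation.
# Must be set before the first CUDA allocation happens in the process.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"
```

On a 6 GB card, combining this with Fooocus's low-VRAM flags gives the best chance of a completed run.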

Error

ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
lida 0.0.10 requires kaleido, which is not installed.
tensorflow-probability 0.22.0 requires typing-extensions<4.6.0, but you have typing-extensions 4.8.0 which is incompatible.
torchaudio 2.1.0+cu118 requires torch==2.1.0, but you have torch 2.0.1 which is incompatible.
torchdata 0.7.0 requires torch==2.1.0, but you have torch 2.0.1 which is incompatible.
torchtext 0.16.0 requires torch==2.1.0, but you have torch 2.0.1 which is incompatible.
torchvision 0.16.0+cu118 requires torch==2.1.0, but you have torch 2.0.1 which is incompatible.
Successfully
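To confirm which packages the runtime actually resolved to, the stdlib `importlib.metadata` module can be queried; a diagnostic sketch (the package names are taken from the log above, and the usual fix is reinstalling a torch build matching the pinned 2.1.0):

```python
from importlib import metadata

def installed_version(pkg):
    """Return the installed version of pkg, or None if it is absent."""
    try:
        return metadata.version(pkg)
    except metadata.PackageNotFoundError:
        return None

# The log shows torch 2.0.1 while torchaudio/torchdata/torchtext/torchvision
# were built against torch==2.1.0, hence the resolver complaints.
print(installed_version("torch"))  # e.g. "2.0.1" in the failing runtime
```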

VAE inpaint encoding problem

When I use Fooocus on Google Colab and attempt face swapping in the inpainting tab, the GPU disconnects as soon as the VAE inpaint encoding process starts. This is a significant problem. Please assist.
(Screenshot: IMG_20240225_183351)

Issue: Fooocus MRE Colab

Hi, I'm reporting an error encountered while using Fooocus MRE Colab.

During the installation of Fooocus MRE, the following notifications were observed:

Xformers Version: 0.0.21
Issues Encountered:

  • Attempted registration of cuDNN factory resulted in an error: "Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered."
  • Attempted registration of cuFFT factory resulted in an error: "Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered."
  • Attempted registration of cuBLAS factory resulted in an error: "Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered."
  • TensorFlow-TRT warning: "TF-TRT Warning: Could not find TensorRT."
  • Missing components: {'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale'}

Additionally, an error message was encountered during the image generation process:

Error Message:

  • An exception occurred in the Thread-3 (worker) with a traceback indicating various modules and functions involved in the process.
  • Several models and operators were reported as not supported due to various reasons such as lack of CUDA support, outdated GPU capability, unsupported data types, or missing operators.


NOTIFICATIONS WHILE INSTALLING FOOOCUS MRE:

xformers version: 0.0.21
2024-02-02 04:03:40.364049: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-02-02 04:03:40.364103: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-02-02 04:03:40.365846: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-02-02 04:03:41.900015: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT

missing {'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale'}

ERROR MESSAGE WHILE GENERATING IMAGES:

Exception in thread Thread-3 (worker):
Traceback (most recent call last):
File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
self.run()
File "/usr/lib/python3.10/threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "/content/Fooocus-MRE/modules/async_worker.py", line 613, in worker
handler(task)
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/content/Fooocus-MRE/modules/async_worker.py", line 487, in handler
imgs = pipeline.process_diffusion(
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/content/Fooocus-MRE/modules/default_pipeline.py", line 403, in process_diffusion
sampled_latent = core.ksampler(
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/content/Fooocus-MRE/modules/core.py", line 347, in ksampler
samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image,
File "/content/Fooocus-MRE/modules/samplers_advanced.py", line 202, in sample
samples = getattr(k_diffusion_sampling, "sample_{}".format(self.sampler))(self.model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar)
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/content/Fooocus-MRE/repositories/ComfyUI-from-StabilityAI-Official/comfy/k_diffusion/sampling.py", line 701, in sample_dpmpp_2m_sde_gpu
return sample_dpmpp_2m_sde(model, x, sigmas, extra_args=extra_args, callback=callback, disable=disable, eta=eta, s_noise=s_noise, noise_sampler=noise_sampler, solver_type=solver_type)
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/content/Fooocus-MRE/repositories/ComfyUI-from-StabilityAI-Official/comfy/k_diffusion/sampling.py", line 613, in sample_dpmpp_2m_sde
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/content/Fooocus-MRE/repositories/ComfyUI-from-StabilityAI-Official/comfy/samplers.py", line 323, in forward
out = self.inner_model(x, sigma, cond=cond, uncond=uncond, cond_scale=cond_scale, cond_concat=cond_concat, model_options=model_options, seed=seed)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/content/Fooocus-MRE/modules/patch.py", line 164, in patched_discrete_eps_ddpm_denoiser_forward
return self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
File "/content/Fooocus-MRE/repositories/ComfyUI-from-StabilityAI-Official/comfy/k_diffusion/external.py", line 151, in get_eps
return self.inner_model.apply_model(*args, **kwargs)
File "/content/Fooocus-MRE/repositories/ComfyUI-from-StabilityAI-Official/comfy/samplers.py", line 311, in apply_model
out = sampling_function(self.inner_model.apply_model, x, timestep, uncond, cond, cond_scale, cond_concat, model_options=model_options, seed=seed)
File "/content/Fooocus-MRE/repositories/ComfyUI-from-StabilityAI-Official/comfy/samplers.py", line 289, in sampling_function
cond, uncond = calc_cond_uncond_batch(model_function, cond, uncond, x, timestep, max_total_area, cond_concat, model_options)
File "/content/Fooocus-MRE/repositories/ComfyUI-from-StabilityAI-Official/comfy/samplers.py", line 263, in calc_cond_uncond_batch
output = model_options['model_function_wrapper'](model_function, {"input": input_x, "timestep": timestep_, "c": c, "cond_or_uncond": cond_or_uncond}).chunk(batch_chunks)
File "/content/Fooocus-MRE/modules/patch.py", line 173, in patched_model_function
return func(x, t, **c)
File "/content/Fooocus-MRE/repositories/ComfyUI-from-StabilityAI-Official/comfy/model_base.py", line 63, in apply_model
return self.diffusion_model(xc, t, context=context, y=c_adm, control=control, transformer_options=transformer_options).float()
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/content/Fooocus-MRE/modules/patch.py", line 338, in patched_unet_forward
h = forward_timestep_embed(module, h, emb, context, transformer_options)
File "/content/Fooocus-MRE/repositories/ComfyUI-from-StabilityAI-Official/comfy/ldm/modules/diffusionmodules/openaimodel.py", line 56, in forward_timestep_embed
x = layer(x, context, transformer_options)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/content/Fooocus-MRE/repositories/ComfyUI-from-StabilityAI-Official/comfy/ldm/modules/attention.py", line 693, in forward
x = block(x, context=context[i], transformer_options=transformer_options)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/content/Fooocus-MRE/repositories/ComfyUI-from-StabilityAI-Official/comfy/ldm/modules/attention.py", line 525, in forward
return checkpoint(self._forward, (x, context, transformer_options), self.parameters(), self.checkpoint)
File "/content/Fooocus-MRE/repositories/ComfyUI-from-StabilityAI-Official/comfy/ldm/modules/diffusionmodules/util.py", line 123, in checkpoint
return func(*inputs)
File "/content/Fooocus-MRE/repositories/ComfyUI-from-StabilityAI-Official/comfy/ldm/modules/attention.py", line 588, in _forward
n = self.attn1(n, context=context_attn1, value=value_attn1)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/content/Fooocus-MRE/repositories/ComfyUI-from-StabilityAI-Official/comfy/ldm/modules/attention.py", line 438, in forward
out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None, op=self.attention_op)
File "/usr/local/lib/python3.10/dist-packages/xformers/ops/fmha/__init__.py", line 193, in memory_efficient_attention
return _memory_efficient_attention(
File "/usr/local/lib/python3.10/dist-packages/xformers/ops/fmha/__init__.py", line 291, in _memory_efficient_attention
return _memory_efficient_attention_forward(
File "/usr/local/lib/python3.10/dist-packages/xformers/ops/fmha/__init__.py", line 307, in _memory_efficient_attention_forward
op = _dispatch_fw(inp, False)
File "/usr/local/lib/python3.10/dist-packages/xformers/ops/fmha/dispatch.py", line 96, in _dispatch_fw
return _run_priority_list(
File "/usr/local/lib/python3.10/dist-packages/xformers/ops/fmha/dispatch.py", line 63, in _run_priority_list
raise NotImplementedError(msg)
NotImplementedError: No operator found for memory_efficient_attention_forward with inputs:
query : shape=(20, 4032, 1, 64) (torch.float16)
key : shape=(20, 4032, 1, 64) (torch.float16)
value : shape=(20, 4032, 1, 64) (torch.float16)
attn_bias : <class 'NoneType'>
p : 0.0
decoderF is not supported because:
xFormers wasn't build with CUDA support
requires device with capability > (8, 0) but your GPU has capability (7, 5) (too old)
attn_bias type is <class 'NoneType'>
operator wasn't built - see python -m xformers.info for more info
[email protected] is not supported because:
xFormers wasn't build with CUDA support
requires device with capability > (8, 0) but your GPU has capability (7, 5) (too old)
operator wasn't built - see python -m xformers.info for more info
tritonflashattF is not supported because:
xFormers wasn't build with CUDA support
requires device with capability > (8, 0) but your GPU has capability (7, 5) (too old)
operator wasn't built - see python -m xformers.info for more info
triton is not available
requires GPU with sm80 minimum compute capacity, e.g., A100/H100/L4
Only work on pre-MLIR triton for now
cutlassF is not supported because:
xFormers wasn't build with CUDA support
operator wasn't built - see python -m xformers.info for more info
smallkF is not supported because:
max(query.shape[-1] != value.shape[-1]) > 32
xFormers wasn't build with CUDA support
dtype=torch.float16 (supported: {torch.float32})
operator wasn't built - see python -m xformers.info for more info
unsupported embed per head: 64
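The failures above all trace back to compute capability: the listed kernels require at least (8, 0), while a Colab T4 reports (7, 5). A guarded sketch of that check, assuming torch may be absent; the function name and return strings are mine:

```python
def attention_backend():
    """Report which attention path a GPU like this could take."""
    try:
        import torch
        if torch.cuda.is_available():
            major, minor = torch.cuda.get_device_capability(0)
            if (major, minor) >= (8, 0):
                # A100/H100/L4-class cards can use the optimized kernels
                return "memory_efficient_attention"
            # T4 (7, 5) and older: fall back / disable xformers
            return "math fallback (disable xformers)"
    except ImportError:
        pass
    return "cpu"
```

On a T4, the practical remedy is to run without xformers (or with a build of xformers compiled for sm75) rather than fighting the dispatch errors.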

Error on startup

Downloading: "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_base_1.0_0.9vae.safetensors" to /content/Fooocus-MRE/models/checkpoints/sd_xl_base_1.0_0.9vae.safetensors

Traceback (most recent call last):
  File "/content/Fooocus-MRE/entry_with_update.py", line 46, in <module>
    from launch import *
  File "/content/Fooocus-MRE/launch.py", line 145, in <module>
    download_models()
  File "/content/Fooocus-MRE/launch.py", line 102, in download_models
    load_file_from_url(url=url, model_dir=modelfile_path, file_name=file_name)
  File "/content/Fooocus-MRE/modules/model_loader.py", line 24, in load_file_from_url
    download_url_to_file(url, cached_file, progress=progress)
  File "/usr/local/lib/python3.10/dist-packages/torch/hub.py", line 620, in download_url_to_file
    u = urlopen(req)
  File "/usr/lib/python3.10/urllib/request.py", line 216, in urlopen
    return opener.open(url, data, timeout)
  File "/usr/lib/python3.10/urllib/request.py", line 525, in open
    response = meth(req, response)
  File "/usr/lib/python3.10/urllib/request.py", line 634, in http_response
    response = self.parent.error(
  File "/usr/lib/python3.10/urllib/request.py", line 563, in error
    return self._call_chain(*args)
  File "/usr/lib/python3.10/urllib/request.py", line 496, in _call_chain
    result = func(*args)
  File "/usr/lib/python3.10/urllib/request.py", line 643, in http_error_default
    raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 500: Internal Server Error

Many thanks!
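HTTP 500 from the model host is usually transient, so wrapping the download in a retry loop often gets past it. A sketch, not Fooocus's actual model_loader; the function name and backoff values are assumptions:

```python
import time
import urllib.error
import urllib.request

def download_with_retry(url, dest, attempts=3, backoff=5.0):
    """Download url to dest, retrying server-side (5xx) errors with backoff."""
    for attempt in range(1, attempts + 1):
        try:
            urllib.request.urlretrieve(url, dest)
            return dest
        except urllib.error.HTTPError as err:
            # Client errors (4xx) and the final attempt are not retried.
            if err.code < 500 or attempt == attempts:
                raise
            time.sleep(backoff * attempt)  # linear backoff between tries
```

Re-running the Colab cell achieves much the same thing manually; the error does not indicate anything wrong with the notebook itself.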

Face swap crashes

[Fooocus] Loading control models ...
extra keys clip vision: ['vision_model.embeddings.position_ids']
[Parameters] Sampler = dpmpp_2m_sde_gpu - karras
[Parameters] Steps = 30 - 15
[Fooocus] Initializing ...
[Fooocus] Loading models ...
Refiner unloaded.
[Fooocus] Processing prompts ...
[Fooocus] Encoding positive #1 ...
[Fooocus Model Management] Moving model(s) has taken 0.18 seconds
[Fooocus] Encoding negative #1 ...
[Fooocus] Image processing ...
Requested to load CLIPVisionModelWithProjection
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 0.81 seconds
Requested to load Resampler
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 0.57 seconds
Requested to load To_KV
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 0.33 seconds
[Parameters] Denoising Strength = 1.0
[Parameters] Initial Latent shape: Image Space (1152, 896)
Preparation time: 40.98 seconds
[Sampler] refiner_swap_method = joint
[Sampler] sigma_min = 0.0291671771556139, sigma_max = 14.614643096923828
Requested to load SDXL
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 3.45 seconds
100% 30/30 [00:26<00:00, 1.13it/s]
Image generated with private log at: /content/Fooocus/outputs/2023-11-22/log.html
Generating and saving time: 33.59 seconds
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
[Fooocus Model Management] Moving model(s) has taken 1.95 seconds
Total time: 77.02 seconds
[Parameters] Adaptive CFG = 7
[Parameters] Sharpness = 2
[Parameters] ADM Scale = 1.5 : 0.8 : 0.3
[Parameters] CFG = 3.0
[Parameters] Seed = 2321806533096841955
[Fooocus] Downloading control models ...
[Fooocus] Loading control models ...
^C

In some cases this is how Fooocus breaks when using face swap.

odd errors

I've been using this just fine, but within the last week, every time I use it on Colab it generates an error as soon as I press the Generate button in the Fooocus UI. Step 1/60 is where the error appears, and every photo generated looks incomplete.

Change link

Hi, thanks for the great work. We are willing to officially maintain it, and we can probably change the Colab link to the official Colab link in https://github.com/lllyasviel/Fooocus

For anyone with problems, please test with the official Colab version above; if the problem persists, redirect the issue to the Fooocus repo above.


Out of CUDA memory error

After a few generations, the generation process stops and stays stuck, while this error appears in Colab:

torch.cuda.OutOfMemoryError: Allocation on device 0 would exceed allowed memory. (out of memory)
Currently allocated : 14.10 GiB
Requested : 19.69 MiB
Device limit : 14.75 GiB
Free (according to CUDA): 2.81 MiB
PyTorch limit (set by user-supplied memory fraction): 17179869184.00 GiB
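Converting the log's numbers to bytes shows why this is fragmentation rather than a genuinely oversized request:

```python
# Figures taken from the error message above.
GIB, MIB = 1024**3, 1024**2
allocated, requested = 14.10 * GIB, 19.69 * MIB
limit, free_cuda = 14.75 * GIB, 2.81 * MIB

# The request would fit under the device limit...
fits_in_limit = allocated + requested <= limit
# ...but it exceeds what CUDA actually reports free: the gap is
# reserved/fragmented memory, not the size of the allocation itself.
fits_in_free = requested <= free_cuda
```

This is the same fragmentation pattern the Windows OOM report above shows, and the same `max_split_size_mb` workaround applies.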
