
BrushNet

This repository contains the implementation of the paper "BrushNet: A Plug-and-Play Image Inpainting Model with Decomposed Dual-Branch Diffusion"

Keywords: Image Inpainting, Diffusion Models, Image Generation

Xuan Ju 1,2, Xian Liu 1,2, Xintao Wang 1*, Yuxuan Bian 2, Ying Shan 1, Qiang Xu 2*
1 ARC Lab, Tencent PCG  2 The Chinese University of Hong Kong  * Corresponding Author

๐ŸŒProject Page | ๐Ÿ“œArxiv | ๐Ÿ—„๏ธData | ๐Ÿ“นVideo | ๐Ÿค—Hugging Face Demo |

TODO

  • Release training and inference code
  • Release checkpoint (sdv1.5)
  • Release checkpoint (sdxl). Sadly, I only have V100s for training this checkpoint, which can only train with a batch size of 1 at a slow speed. The current checkpoint has been trained for only a small number of steps and thus does not perform well. Fortunately, yuanhang has volunteered to train a better version. Please stay tuned! Thanks to yuanhang for his effort!
  • Release evaluation code
  • Release gradio demo
  • Release ComfyUI demo. Thanks to nullquant (ComfyUI-BrushNet) and kijai (ComfyUI-BrushNet-Wrapper) for helping!
  • Release training data. Thanks to random123123 for helping!

๐Ÿ› ๏ธ Method Overview

BrushNet is a diffusion-based, text-guided image inpainting model that can be plugged into any pre-trained diffusion model. Its architectural design incorporates two key insights: (1) separating the masked image features from the noisy latent reduces the model's learning load, and (2) dense per-pixel control over the entire pre-trained model makes it better suited to image inpainting. More analysis can be found in the main paper.
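For intuition only, here is a toy sketch of that decomposition; it illustrates the idea, not the repo's actual BrushNetModel. A separate branch consumes the noisy latent, the masked-image latent, and the downsampled mask, and its multi-scale features are added back into the frozen pre-trained UNet, giving dense per-pixel control at every scale.

# Toy illustration of the dual-branch decomposition (not the real BrushNetModel).
import torch
import torch.nn as nn

class ToyBrushBranch(nn.Module):
    # 4 (noisy latent) + 4 (masked-image latent) + 1 (mask) = 9 input channels
    def __init__(self, channels=(32, 64)):
        super().__init__()
        self.conv_in = nn.Conv2d(9, channels[0], 3, padding=1)
        self.down = nn.Conv2d(channels[0], channels[1], 3, stride=2, padding=1)

    def forward(self, noisy_latent, masked_latent, mask):
        x = torch.cat([noisy_latent, masked_latent, mask], dim=1)
        f0 = self.conv_in(x)   # dense per-pixel feature at full latent resolution
        f1 = self.down(f0)     # feature for the next UNet scale
        return [f0, f1]        # added to the frozen UNet's features, scale by scale

branch = ToyBrushBranch()
feats = branch(torch.randn(1, 4, 64, 64), torch.randn(1, 4, 64, 64), torch.rand(1, 1, 64, 64))
print([f.shape for f in feats])  # [1, 32, 64, 64] and [1, 64, 32, 32]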

Getting Started

Environment Requirement

BrushNet has been implemented and tested with PyTorch 1.12.1 and Python 3.9.

Clone the repo:

git clone https://github.com/TencentARC/BrushNet.git

We recommend first using conda to create a virtual environment and installing PyTorch following the official instructions. For example:

conda create -n diffusers python=3.9 -y
conda activate diffusers
python -m pip install --upgrade pip
pip install torch==1.12.1+cu116 torchvision==0.13.1+cu116 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu116

Then, you can install diffusers (implemented in this repo) with:

pip install -e .

After that, you can install the required packages through:

cd examples/brushnet/
pip install -r requirements.txt
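If the install succeeded, a quick import check should find the BrushNet classes (the class names below come from the repo's example scripts). If this fails, a stock diffusers from PyPI is likely shadowing the editable install:

# Sanity check: these classes exist only in this repo's diffusers fork.
from diffusers import StableDiffusionBrushNetPipeline, BrushNetModel, UniPCMultistepScheduler

print("BrushNet classes found; the editable diffusers install is active.")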

Data Download

Dataset

You can download BrushData and BrushBench here (as well as the EditBench we re-processed), which are used for training and testing BrushNet. By downloading the data, you agree to the terms and conditions of the license. The data should be structured as follows:

|-- data
    |-- BrushData
        |-- 00200.tar
        |-- 00201.tar
        |-- ...
    |-- BrushBench
        |-- images
        |-- mapping_file.json
    |-- EditBench
        |-- images
        |-- mapping_file.json

Note: We only provide part of BrushData on Google Drive due to the space limit. random123123 has helped upload the full dataset to Hugging Face here. Thanks for his help!

Checkpoints

Checkpoints of BrushNet can be downloaded from here. The ckpt folder contains:

  • BrushNet pretrained checkpoints for Stable Diffusion v1.5 (segmentation_mask_brushnet_ckpt and random_mask_brushnet_ckpt)
  • a pretrained Stable Diffusion v1.5 checkpoint (e.g., realisticVisionV60B1_v51VAE from Civitai). You can use scripts/convert_original_stable_diffusion_to_diffusers.py to process other models downloaded from Civitai.
  • BrushNet pretrained checkpoints for Stable Diffusion XL (segmentation_mask_brushnet_ckpt_sdxl_v0 and random_mask_brushnet_ckpt_sdxl_v0). A better version will be released shortly by yuanhang. Please stay tuned!
  • a pretrained Stable Diffusion XL checkpoint (e.g., juggernautXL_juggernautX from Civitai). You can use StableDiffusionXLPipeline.from_single_file("path of safetensors").save_pretrained("path to save", safe_serialization=False) to process other models downloaded from Civitai, as sketched below.
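For reference, the SDXL one-liner above expands to the following short sketch; the file paths are placeholders:

# Convert a single-file SDXL checkpoint from Civitai into the diffusers
# folder layout used under data/ckpt (both paths below are placeholders).
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_single_file("path/to/downloaded.safetensors")
pipe.save_pretrained("data/ckpt/my_sdxl_checkpoint", safe_serialization=False)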

The data structure should then look like:

|-- data
    |-- BrushData
    |-- BrushBench
    |-- EditBench
    |-- ckpt
        |-- realisticVisionV60B1_v51VAE
            |-- model_index.json
            |-- vae
            |-- ...
        |-- segmentation_mask_brushnet_ckpt
        |-- segmentation_mask_brushnet_ckpt_sdxl_v0
        |-- random_mask_brushnet_ckpt
        |-- random_mask_brushnet_ckpt_sdxl_v0
        |-- ...

segmentation_mask_brushnet_ckpt and segmentation_mask_brushnet_ckpt_sdxl_v0 provide checkpoints trained on BrushData, which has a segmentation prior (masks share the shape of objects). random_mask_brushnet_ckpt and random_mask_brushnet_ckpt_sdxl_v0 provide more general checkpoints for random mask shapes.

๐Ÿƒ๐Ÿผ Running Scripts

Training

You can train with segmentation mask using the script:

# sd v1.5
accelerate launch examples/brushnet/train_brushnet.py \
--pretrained_model_name_or_path runwayml/stable-diffusion-v1-5 \
--output_dir runs/logs/brushnet_segmentationmask \
--train_data_dir data/BrushData \
--resolution 512 \
--learning_rate 1e-5 \
--train_batch_size 2 \
--tracker_project_name brushnet \
--report_to tensorboard \
--resume_from_checkpoint latest \
--validation_steps 300 \
--checkpointing_steps 10000

# sdxl
accelerate launch examples/brushnet/train_brushnet_sdxl.py \
--pretrained_model_name_or_path stabilityai/stable-diffusion-xl-base-1.0 \
--output_dir runs/logs/brushnetsdxl_segmentationmask \
--train_data_dir data/BrushData \
--resolution 1024 \
--learning_rate 1e-5 \
--train_batch_size 1 \
--gradient_accumulation_steps 4 \
--tracker_project_name brushnet \
--report_to tensorboard \
--resume_from_checkpoint latest \
--validation_steps 300 \
--checkpointing_steps 10000 

To use a custom dataset, process your own data into the format of BrushData and revise --train_data_dir accordingly; a packing sketch follows below.
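As a starting point, here is a hypothetical packing sketch using the webdataset library, which matches the .tar shard layout of BrushData. The per-sample keys ("jpg", "txt", "mask.png") are assumptions, so verify them against the dataloader in examples/brushnet/train_brushnet.py before use:

# Hypothetical sketch: pack custom samples into webdataset-style .tar shards
# like BrushData's 00200.tar. The sample keys are assumed, not confirmed.
import io
import webdataset as wds
from PIL import Image

def pack(samples, shard_path="data/MyData/00000.tar"):
    # samples: iterable of (image_path, caption, mask_png_bytes)
    with wds.TarWriter(shard_path) as writer:
        for i, (image_path, caption, mask_png_bytes) in enumerate(samples):
            buf = io.BytesIO()
            Image.open(image_path).convert("RGB").save(buf, format="JPEG")
            writer.write({
                "__key__": f"{i:09d}",
                "jpg": buf.getvalue(),       # assumed image key
                "txt": caption,              # assumed caption key
                "mask.png": mask_png_bytes,  # assumed mask key
            })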

You can train with random masks using the script (by adding --random_mask):

# sd v1.5
accelerate launch examples/brushnet/train_brushnet.py \
--pretrained_model_name_or_path runwayml/stable-diffusion-v1-5 \
--output_dir runs/logs/brushnet_randommask \
--train_data_dir data/BrushData \
--resolution 512 \
--learning_rate 1e-5 \
--train_batch_size 2 \
--tracker_project_name brushnet \
--report_to tensorboard \
--resume_from_checkpoint latest \
--validation_steps 300 \
--random_mask

# sdxl
accelerate launch examples/brushnet/train_brushnet_sdxl.py \
--pretrained_model_name_or_path stabilityai/stable-diffusion-xl-base-1.0 \
--output_dir runs/logs/brushnetsdxl_randommask \
--train_data_dir data/BrushData \
--resolution 1024 \
--learning_rate 1e-5 \
--train_batch_size 1 \
--gradient_accumulation_steps 4 \
--tracker_project_name brushnet \
--report_to tensorboard \
--resume_from_checkpoint latest \
--validation_steps 300 \
--checkpointing_steps 10000 \
--random_mask

Inference

You can run inference with the script:

# sd v1.5
python examples/brushnet/test_brushnet.py
# sdxl
python examples/brushnet/test_brushnet_sdxl.py
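For programmatic use, a minimal sketch follows, assuming the pattern in examples/brushnet/test_brushnet.py; the image paths are placeholders and the exact pipeline arguments should be checked against that script:

# Minimal inference sketch (verify arguments against the repo's test script).
import torch
from PIL import Image
from diffusers import StableDiffusionBrushNetPipeline, BrushNetModel, UniPCMultistepScheduler

brushnet = BrushNetModel.from_pretrained(
    "data/ckpt/segmentation_mask_brushnet_ckpt", torch_dtype=torch.float16
)
pipe = StableDiffusionBrushNetPipeline.from_pretrained(
    "data/ckpt/realisticVisionV60B1_v51VAE", brushnet=brushnet, torch_dtype=torch.float16
)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe.to("cuda")

init_image = Image.open("path/to/image.jpg")  # placeholder paths
mask_image = Image.open("path/to/mask.png")
result = pipe("a cake on the table", init_image, mask_image, num_inference_steps=50).images[0]
result.save("output.png")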

Since BrushNet is trained on LAION, it can only guarantee performance in general scenarios. We recommend training on your own data (e.g., product exhibition, virtual try-on) if you have high-quality industrial application requirements. We would also appreciate it if you would like to contribute your trained model!

You can also run inference through the gradio demo:

# sd v1.5
python examples/brushnet/app_brushnet.py

Evaluation

You can evaluate using the script:

python examples/brushnet/evaluate_brushnet.py \
--brushnet_ckpt_path data/ckpt/segmentation_mask_brushnet_ckpt \
--image_save_path runs/evaluation_result/BrushBench/brushnet_segmask/inside \
--mapping_file data/BrushBench/mapping_file.json \
--base_dir data/BrushBench \
--mask_key inpainting_mask

--mask_key indicates which kind of mask to use: inpainting_mask for inside inpainting and outpainting_mask for outside inpainting. The evaluation results (images and metrics) will be saved in --image_save_path.

Note that you need to disable the NSFW detector in src/diffusers/pipelines/brushnet/pipeline_brushnet.py#1261 to get correct evaluation results. Moreover, we find that different machines may generate different images, so we provide the results from our machine here.
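If you script the evaluation yourself, the common diffusers-style workaround below may suffice, given a pipeline object pipe as in the inference sketch above. Whether the BrushNet pipeline guards on this attribute is an assumption; editing the line referenced above remains the authoritative route.

# Common diffusers pattern: pipelines typically skip the NSFW check when
# safety_checker is None. Assumed, not confirmed, for pipeline_brushnet.py.
pipe.safety_checker = None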

๐Ÿค๐Ÿผ Cite Us

@misc{ju2024brushnet,
  title={BrushNet: A Plug-and-Play Image Inpainting Model with Decomposed Dual-Branch Diffusion}, 
  author={Xuan Ju and Xian Liu and Xintao Wang and Yuxuan Bian and Ying Shan and Qiang Xu},
  year={2024},
  eprint={2403.06976},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}

Acknowledgement

Our code is modified from diffusers. Thanks to all the contributors!


brushnet's Issues

strength option

Is there an option like strength? What I want is for the masked part to take the color of the original image.

Often get all zero image when inpainting mask is small

Thank you for open-sourcing this work; the results are wonderful!
But I met a problem when I tried to edit cartoon images:

  • Checkpoint: AnythingV5_Ink.
  • BrushNet model: random.
  • Prompt: 1girl, 1boy with mouth open (or closed)
  • Image and mask: (attached)

I often get a result that is all zeros (attached).

Adjusting the prompt seems to be of no use.
When I change to another checkpoint, like dreamshaper_v8, the result is reasonable (but not the cartoon style I want).

So why, for AnythingV5_Ink, are the results often all zeros? It seems the model has collapsed.
Do you have any idea how to avoid this?

Flexible control params not supported?

Model/Pipeline/Scheduler description

In Section 4.3 of the paper, I saw that you mentioned a blend operation, but I didn't see it in the code?


when to release checkpoint (sdv1.5)?

Model/Pipeline/Scheduler description

Hello! Thanks for your excellent work. I want to ask when the checkpoint will be released. Thanks a lot :-)


The object added has a strange shape

(rabbit and bird example images attached)
When I tried the Hugging Face demo, I found that the shape of the generated object was always very close to the shape of the mask, which looks very strange. Why is that?

cannot import name 'StableDiffusionBrushNetPipeline' from 'diffusers'

Describe the bug

Running examples/brushnet/app_brushnet.py fails with this error message:
File "/media/iwoolf/tenT/BrushNet/./examples/brushnet/app_brushnet.py", line 10, in
from diffusers import StableDiffusionBrushNetPipeline, BrushNetModel, UniPCMultistepScheduler
ImportError: cannot import name 'StableDiffusionBrushNetPipeline' from 'diffusers' (/media/iwoolf/BigDrive/anaconda3/envs/Brushnet/lib/python3.9/site-packages/diffusers/init.py)

I have tried uninstalling and re-installing diffusers, but it made no difference.
I'm running Ubuntu 22.04.4 LTS.

Reproduction

python examples/brushnet/app_brushnet.py
/media/iwoolf/BigDrive/anaconda3/envs/Brushnet/lib/python3.9/site-packages/transformers/utils/generic.py:485: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
_torch_pytree._register_pytree_node(
/media/iwoolf/BigDrive/anaconda3/envs/Brushnet/lib/python3.9/site-packages/transformers/utils/generic.py:342: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
_torch_pytree._register_pytree_node(
/media/iwoolf/BigDrive/anaconda3/envs/Brushnet/lib/python3.9/site-packages/transformers/utils/generic.py:342: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
_torch_pytree._register_pytree_node(
Traceback (most recent call last):
File "/media/iwoolf/tenT/BrushNet/examples/brushnet/app_brushnet.py", line 10, in
from diffusers import StableDiffusionBrushNetPipeline, BrushNetModel, UniPCMultistepScheduler
ImportError: cannot import name 'StableDiffusionBrushNetPipeline' from 'diffusers' (/media/iwoolf/BigDrive/anaconda3/envs/Brushnet/lib/python3.9/site-packages/diffusers/init.py)

Logs

Instructions, please?

System Info

  • diffusers version: 0.27.2
  • Platform: Linux-5.15.0-102-generic-x86_64-with-glibc2.35
  • Python version: 3.9.19
  • PyTorch version (GPU?): 2.2.2+cu121 (True)
  • Huggingface_hub version: 0.22.2
  • Transformers version: 4.39.3
  • Accelerate version: 0.20.3
  • xFormers version: 0.0.25.post1

Who can help?

@yiyixuxu @DN6 @sayakpaul

remove object

Hi, thanks for your code. I have a question: if I want to remove an object (for example, this blueberry), how should I write the prompt? If I write "remove the blueberry", it might not work well.
(image attached)

Is it possible to guide inpainting using controlnet?

Model/Pipeline/Scheduler description

Hi, I am trying to inpaint a person in a specific pose. Is it possible to use ControlNet with this repo?


adapt to AUTOMATIC1111

Thanks for your great work. Do you have a plan to adapt BrushNet to sd-webui (AUTOMATIC1111)?

Compare with the 9-channel controlnet-inpainting model

Dear developer,

Thank you for your wonderful work.

(comparison figure attached)

The paper seems to compare only against the 4-channel controlnet-inpainting model.
Have you ever compared against the 9-channel controlnet-inpainting model?
What I mean is the variant whose UNet input is 9-channel.

Best wishes.

why did the mask not work?

I used the mask to generate the background, but the model changed my foreground!
(input image and mask attached)

The result is attached; you can see the words in the foreground were changed!

Question about training

Your work is great! I have some questions about training duration. I want to train BrushNet on my own data, and I see the default number of training epochs is 10000. I also see the config.json in the model weights you provide:
random_mask_brushnet_ckpt: "runs/logs/brushnet_randommask/checkpoint-100000"
segmentation_mask_brushnet_ckpt: "runs/logs/brushnet_segmask/checkpoint-550000"
It seems the other models also correspond to different numbers of training steps.

So can I generally use 10000 training epochs, or do I need to choose based on the loss values? The paper says: "BrushNet and all ablation models are trained for 430 thousand steps on 8 NVIDIA Tesla V100 GPUs, which takes around 3 days". In my case, training seems to take much longer than that.

Thanks for your reply.

BrushNet with other ControlNet models.

Thanks for your great work!
But I wanted to make sure about one thing: is it possible to use the BrushNet inpainting model with other ControlNet models, such as Canny or Segmentation, to get better outputs or for other purposes?

weight question

Describe the bug

I downloaded the weights you provided, but there seem to be some issues with them:

ValueError: Cannot load <class 'diffusers.models.brushnet.BrushNetModel'> from /model/BrushNet/unet because the following keys are missing:
brushnet_up_blocks.11.weight, brushnet_up_blocks.4.bias, brushnet_up_blocks.10.bias, brushnet_down_blocks.9.bias, brushnet_up_blocks.6.weight, brushnet_down_blocks.3.bias, brushnet_down_blocks.5.bias, brushnet_up_blocks.7.bias, brushnet_up_blocks.3.bias, brushnet_up_blocks.12.weight, brushnet_down_blocks.8.weight, brushnet_up_blocks.13.weight, brushnet_down_blocks.6.bias, brushnet_up_blocks.9.bias, brushnet_down_blocks.11.bias, brushnet_down_blocks.5.weight, brushnet_down_blocks.4.bias, brushnet_mid_block.bias, brushnet_down_blocks.2.weight, brushnet_down_blocks.3.weight, brushnet_down_blocks.1.bias, brushnet_down_blocks.7.bias, brushnet_down_blocks.0.bias, brushnet_up_blocks.12.bias, brushnet_up_blocks.7.weight, brushnet_up_blocks.8.bias, brushnet_up_blocks.2.bias, brushnet_up_blocks.14.weight, brushnet_up_blocks.0.weight, brushnet_up_blocks.11.bias, brushnet_up_blocks.3.weight, brushnet_up_blocks.1.bias, brushnet_up_blocks.6.bias, brushnet_up_blocks.4.weight, brushnet_down_blocks.4.weight, brushnet_up_blocks.0.bias, brushnet_down_blocks.0.weight, brushnet_up_blocks.2.weight, brushnet_up_blocks.10.weight, brushnet_mid_block.weight, brushnet_up_blocks.13.bias, brushnet_up_blocks.1.weight, brushnet_up_blocks.14.bias, brushnet_down_blocks.7.weight, brushnet_up_blocks.8.weight, brushnet_up_blocks.9.weight, brushnet_down_blocks.1.weight, brushnet_down_blocks.10.weight, conv_in_condition.bias, brushnet_down_blocks.8.bias, conv_in_condition.weight, brushnet_up_blocks.5.bias, brushnet_down_blocks.2.bias, brushnet_down_blocks.6.weight, brushnet_down_blocks.11.weight, brushnet_down_blocks.10.bias, brushnet_down_blocks.9.weight, brushnet_up_blocks.5.weight.
Please make sure to pass low_cpu_mem_usage=False and device_map=None if you want to randomly initialize those weights or else make sure your checkpoint file is correct.

Reproduction

realisticVisionV60B1_v51VAE


System Info

  • diffusers version: 0.27.0
  • Platform: Linux-5.4.143.bsk.8-amd64-x86_64-with-glibc2.17
  • Python version: 3.8.18
  • PyTorch version (GPU?): 1.12.1+cu116 (True)
  • Huggingface_hub version: 0.21.4
  • Transformers version: 4.38.2
  • Accelerate version: 0.20.3
  • xFormers version: not installed
  • Using GPU in script?:
  • Using distributed or parallel set-up in script?:


HuggingFace Demo Fails to Load

Describe the bug

When I visit the HuggingFace demo page here: https://huggingface.co/spaces/TencentARC/BrushNet

It fails to load with the error: Runtime error: Memory limit exceeded (46Gi).

Reproduction

Simply visit: https://huggingface.co/spaces/TencentARC/BrushNet

Logs

===== Application Startup at 2024-04-02 14:01:10 =====

Installing correct gradio version...
Found existing installation: gradio 3.50.2
Uninstalling gradio-3.50.2:
  Successfully uninstalled gradio-3.50.2
Collecting gradio==3.50.0
  Downloading gradio-3.50.0-py3-none-any.whl (20.3 MB)
     โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ” 20.3/20.3 MB 100.7 MB/s eta 0:00:00
Installing collected packages: gradio
Successfully installed gradio-3.50.0

[notice] A new release of pip available: 22.3.1 -> 24.0
[notice] To update, run: pip install --upgrade pip
Installing Finished!

System Info

I ran this on the web


What is the GPU memory requirement?

This is a great framework that can be applied in many areas!
However, regarding the implementation details in the paper, I see that 8 V100 GPUs were used. How much memory did each GPU have? Is it possible to train with only 24GB of GPU memory?

Can you provide the code for data preprocessing?

I noticed you preprocessed the images and generated several features for each image. Could you provide the preprocessing code? Thanks a lot!


Random brush model release?

Model/Pipeline/Scheduler description

Hi,

Thank you for releasing the 1.5 model for segmentation-based inpainting. It's working pretty well, but it doesn't work as well for masks that don't follow segmentations. Do you plan on releasing the random brush model as well?

Thanks


mask value problem

Code

I've noticed a detail: during training, the mask is resized with INTER_CUBIC, so the latent-resolution mask has continuous values between 0 and 1. However, during inference it always uses discrete values of 0 and 1. Is this a problem?
