
magicclothing's Introduction

Magic Clothing (ACM Multimedia 2024)

This repository is the official implementation of Magic Clothing

Magic Clothing is a branch version of OOTDiffusion, focusing on controllable garment-driven image synthesis

Magic Clothing: Controllable Garment-Driven Image Synthesis [arXiv paper]
Weifeng Chen*, Tao Gu*, Yuhao Xu*+, Chengcai Chen
* Equal contribution + Corresponding author
Xiao-i Research

📢📢 We are continuing to improve this project. Please check the earlyAccess branch for new features and updates : )

News

🔥 [2024/4/16] Our paper is available now!

🔥 [2024/3/8] We release the model weights trained at 768 resolution. The strength of the clothing and text prompts can be adjusted independently.

🤗 Hugging Face link

🔥 [2024/2/28] We support IP-Adapter-FaceID with ControlNet-Openpose now! A portrait and a reference pose image can be used as additional conditions.

Have fun with gradio_ipadapter_openpose.py

🔥 [2024/2/23] We support IP-Adapter-FaceID now! A portrait image can be used as an additional condition.

Have fun with gradio_ipadapter_faceid.py

[demo image]  [workflow image]

Installation

  1. Clone the repository
git clone https://github.com/ShineChen1024/MagicClothing.git
  2. Create a conda environment and install the required packages
conda create -n magicloth python==3.10
conda activate magicloth
pip install torch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2
pip install -r requirements.txt

Inference

  1. Python demo

512 weights

python inference.py --cloth_path [your cloth path] --model_path [your model checkpoints path]

768 weights

python inference.py --cloth_path [your cloth path] --model_path [your model checkpoints path] --enable_cloth_guidance

  2. Gradio demo

512 weights

python gradio_generate.py --model_path [your model checkpoints path] 

768 weights

python gradio_generate.py --model_path [your model checkpoints path] --enable_cloth_guidance
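
For programmatic use (instead of the commands above), the demo scripts build a ClothAdapter around a Stable Diffusion pipeline. Below is a minimal sketch; the base model name, the example image path, and the default arguments of generate() are assumptions inferred from the demo scripts and the tracebacks quoted later on this page, so adjust them to your setup.

import torch
from PIL import Image
from diffusers import AutoencoderKL, StableDiffusionPipeline
from garment_adapter.garment_diffusion import ClothAdapter

device = "cuda"

# The demos pair a base SD 1.5 pipeline with the sd-vae-ft-mse VAE in fp16.
# "runwayml/stable-diffusion-v1-5" stands in for whatever base model you would pass as --pipe_path.
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").to(dtype=torch.float16)
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", vae=vae, torch_dtype=torch.float16)

# model_path must point at the Magic Clothing garment weights (.safetensors), not an image.
full_net = ClothAdapter(pipe, "checkpoints/magic_clothing_768_vitonhd_joint.safetensors", device, True)

cloth_image = Image.open("examples/garment/00055_00.jpg").convert("RGB")  # hypothetical path
# The gradio demos also pass prompt, negative prompt, seed, guidance scales, steps, and size;
# supply those explicitly if generate() requires them in your version of the code.
images, cloth_mask_image = full_net.generate(cloth_image)
images[0].save("output.png")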

Citation

@article{chen2024magic,
  title={Magic Clothing: Controllable Garment-Driven Image Synthesis},
  author={Chen, Weifeng and Gu, Tao and Xu, Yuhao and Chen, Chengcai},
  journal={arXiv preprint arXiv:2404.09512},
  year={2024}
}

TODO List

  • Paper
  • Gradio demo
  • Inference code
  • Model weights
  • Training code

magicclothing's People

Contributors

levihsu, shinechen1024, t-gu


magicclothing's Issues

help ----No checkpoints at given path----

The log is:

(magicloth) /mnt/workspace/workgroup/yunke/MagicClothing> python gradio_generate.py --model_path /mnt/workspace/workgroup/yunke/MagicClothing/checkpoints/magic_clothing_768_vitonhd_joint.safetensors --enable_cloth_guidance
Loading pipeline components...: 100%|██████████| 5/5 [00:01<00:00, 4.37it/s]
----No checkpoints at given path----
Traceback (most recent call last):
File "/mnt/workspace/workgroup/yunke/MagicClothing/gradio_generate.py", line 25, in <module>
full_net = ClothAdapter(pipe, args.model_path, device, args.enable_cloth_guidance)
File "/mnt/workspace/workgroup/yunke/MagicClothing/garment_adapter/garment_diffusion.py", line 36, in __init__
self.set_seg_model()
File "/mnt/workspace/workgroup/yunke/MagicClothing/garment_adapter/garment_diffusion.py", line 41, in set_seg_model
self.seg_net = load_seg_model(checkpoint_path, device=self.device)
File "/mnt/workspace/workgroup/yunke/MagicClothing/garment_seg/process.py", line 96, in load_seg_model
net = net.to(device)
AttributeError: 'NoneType' object has no attribute 'to'
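
Note: the "----No checkpoints at given path----" message comes from load_seg_model receiving a missing file and returning None. A small pre-flight check, assuming the checkpoint locations mentioned elsewhere on this page (checkpoints/cloth_segm.pth for segmentation and the garment .safetensors for --model_path):

import os

# Paths are taken from other reports on this page; adjust if your layout differs.
required = [
    "checkpoints/cloth_segm.pth",                                # garment segmentation weights
    "checkpoints/magic_clothing_768_vitonhd_joint.safetensors",  # value passed to --model_path
]
missing = [p for p in required if not os.path.isfile(p)]
if missing:
    raise FileNotFoundError(f"Download these checkpoints first: {missing}")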

Paper and train code release

Any news on the paper release? It would be very interesting to understand how the overall architecture works, so a paper release would be really great.

About the code: when can we expect to have (even an ugly draft of) the training code? It could help use cases that need new checkpoints trained for specific needs. Even some tips on how to organize such a training run would be useful.

Thanks in advance. Really appreciate the work you did so far.

Comfy UI

Hello guys,

You have created a wonderful solution; is there any chance of making it available for ComfyUI?

Hint on training

Hi, thanks very much for the great project!

I am working on my own training script, based on ControlNet training: I predict the noise for a random timestep and compare it with the real noise. The trainable ref_unet is always run at timestep 0 and passes its attention maps through a dictionary to the main unet. The weights of the main unet are frozen. I train in fp16 precision with an 8-bit AdamW optimizer.

https://github.com/hcl14/oms-Diffusion/blob/cdbbbe9060027b51fef0ac35f5b1d06342c6e496/train_attempt_my_controlnet_agnostic_mask.py#L171

https://github.com/hcl14/oms-Diffusion/blob/cdbbbe9060027b51fef0ac35f5b1d06342c6e496/train_attempt_my_controlnet_agnostic_mask.py#L633

https://github.com/hcl14/oms-Diffusion/blob/cdbbbe9060027b51fef0ac35f5b1d06342c6e496/train_attempt_my_controlnet_agnostic_mask.py#L647

I also added controlnet-openpose for guidance and cloth-agnostic masks applied to the latent, in an attempt to make the task easier. I also use gradient accumulation to imitate a big batch size.

https://github.com/hcl14/oms-Diffusion/blob/cdbbbe9060027b51fef0ac35f5b1d06342c6e496/train_attempt_my_controlnet_agnostic_mask.py#L157

Currently the training diverges very quickly with lr=1e-5, more slowly with lr=1e-6, and starts to produce noisy images.

I know you plan to release code later, but perhaps you can give me some hints on it?

Thank you very much!
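
Note: below is a minimal sketch of the setup the poster describes (frozen main UNet, trainable reference UNet always run at timestep 0, plain epsilon-prediction loss). This is not the repository's actual training code; the base model name is an assumption, dataloader and text_embeds are placeholders, and the attention-feature injection from ref_unet into the main UNet is omitted.

import copy
import torch
import torch.nn.functional as F
from diffusers import AutoencoderKL, DDPMScheduler, UNet2DConditionModel

base = "runwayml/stable-diffusion-v1-5"  # hypothetical base checkpoint
vae = AutoencoderKL.from_pretrained(base, subfolder="vae").requires_grad_(False)
unet = UNet2DConditionModel.from_pretrained(base, subfolder="unet").requires_grad_(False)
ref_unet = copy.deepcopy(unet).requires_grad_(True)   # only the reference UNet is trained
scheduler = DDPMScheduler.from_pretrained(base, subfolder="scheduler")
optimizer = torch.optim.AdamW(ref_unet.parameters(), lr=1e-5)

# dataloader is a placeholder that yields person/cloth image tensors in [-1, 1];
# text_embeds is a placeholder for CLIP text-encoder outputs.
for batch in dataloader:
    with torch.no_grad():
        person_latents = vae.encode(batch["person"]).latent_dist.sample() * vae.config.scaling_factor
        cloth_latents = vae.encode(batch["cloth"]).latent_dist.mode() * vae.config.scaling_factor

    noise = torch.randn_like(person_latents)
    t = torch.randint(0, scheduler.config.num_train_timesteps, (person_latents.shape[0],), device=person_latents.device)
    noisy_latents = scheduler.add_noise(person_latents, noise, t)

    # Reference UNet always runs at timestep 0 on the clean cloth latents; in the real code its
    # self-attention features are injected into the frozen main UNet (that injection is omitted here).
    ref_unet(cloth_latents, torch.zeros_like(t), encoder_hidden_states=text_embeds)
    pred = unet(noisy_latents, t, encoder_hidden_states=text_embeds).sample

    loss = F.mse_loss(pred.float(), noise.float())
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()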

error

(oms) lcs@lcs:~/oms-Diffusion$ python inference.py --cloth_path assets/t1.png --model_path assets/m1.png
Loading pipeline components...: 100%|█████████████| 5/5 [00:00<00:00, 7.40it/s]
Traceback (most recent call last):
File "/home/lcs/oms-Diffusion/inference.py", line 32, in <module>
full_net = ClothAdapter(pipe, args.model_path, device)
File "/home/lcs/oms-Diffusion/garment_adapter/garment_diffusion.py", line 93, in __init__
with safe_open(ref_path, framework="pt", device="cpu") as f:
safetensors_rust.SafetensorError: Error while deserializing header: HeaderTooLarge
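
Note: in this report --model_path points at an image (assets/m1.png), so the safetensors header cannot be parsed; several reports below fail the same way with .pth or .bin files. A quick sanity check that a file really is a readable safetensors checkpoint (the path shown is just an example):

from safetensors import safe_open

# --model_path must be the Magic Clothing garment weights; images, .pth or .bin files fail like this.
path = "checkpoints/magic_clothing_768_vitonhd_joint.safetensors"
with safe_open(path, framework="pt", device="cpu") as f:
    keys = list(f.keys())
print(f"{len(keys)} tensors found, e.g. {keys[0]}")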

Great

Great work, much appreciated.

where should I put the checkpoint

Thanks for the great job!
I downloaded the checkpoints from Hugging Face, but I still don't know where I should put them. Could you tell me?
Do I just put 'oms_diffusion_768.safetensors' in the 'checkpoints' folder?

Problem installing the dependencies

pip install torch==2.0.1 torchvision==0.15.2 numpy==1.25.1 diffusers==0.25.1 opencv-python==4.8.0 transformers==4.31.0 gradio==4.16.0 safetensors==0.3.1 controlnet-aux==0.0.6 accelerate-0.21.0

In the dependency list above, opencv-python==4.8.0 does not exist as a version, and accelerate-0.21.0 is malformed; it should be accelerate==0.21.0.

After these changes it ran successfully:
pip install torch==2.0.1 torchvision==0.15.2 numpy==1.25.1 diffusers==0.25.1 opencv-python==4.8.0.76 transformers==4.31.0 gradio==4.16.0 safetensors==0.3.1 controlnet-aux==0.0.6 accelerate==0.21.0

OSError: Can't load config for 'stabilityai/sd-vae-ft-mse'

Traceback (most recent call last):
File "E:\software\oms-Diffusion\gradio_generate.py", line 17, in
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").to(dtype=torch.float16)
File "C:\Users\Administrator\pinokio\bin\miniconda\envs\ootd\lib\site-packages\diffusers\models\modeling_utils.py", line 712, in from_pretrained
config, unused_kwargs, commit_hash = cls.load_config(
File "C:\Users\Administrator\pinokio\bin\miniconda\envs\ootd\lib\site-packages\diffusers\configuration_utils.py", line 415, in load_config
raise EnvironmentError(
OSError: Can't load config for 'stabilityai/sd-vae-ft-mse'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'stabilityai/sd-vae-ft-mse' is the correct path to a directory containing a config.json file
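
Note: this error usually means the VAE repo could not be fetched from the Hugging Face Hub (no network access, a proxy, or a stray local folder shadowing the repo id). One hedged workaround is to pre-download it with huggingface_hub and load the cached copy:

from huggingface_hub import snapshot_download

# Fetch the VAE once (requires network access); from_pretrained can then load the local copy.
local_dir = snapshot_download(repo_id="stabilityai/sd-vae-ft-mse")
print("VAE cached at:", local_dir)

# In gradio_generate.py the load could then point at the cached folder, e.g.:
# vae = AutoencoderKL.from_pretrained(local_dir).to(dtype=torch.float16)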

cuda error

(magicloth) F:\MagicClothing-main>python gradio_generate.py --model_path F:\MagicClothing-main\checkpoints\ipadapter_faceid --enable_cloth_guidance
C:\Users\ggrov\anaconda3\envs\magicloth\lib\site-packages\diffusers\utils\outputs.py:63: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
torch.utils._pytree._register_pytree_node(
C:\Users\ggrov\anaconda3\envs\magicloth\lib\site-packages\diffusers\utils\outputs.py:63: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
torch.utils._pytree._register_pytree_node(
Loading pipeline components...: 20%|████▊     | 1/5 [00:00<00:03, 1.08it/s]C:\Users\ggrov\anaconda3\envs\magicloth\lib\site-packages\diffusers\utils\outputs.py:63: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
torch.utils._pytree._register_pytree_node(
C:\Users\ggrov\anaconda3\envs\magicloth\lib\site-packages\diffusers\utils\outputs.py:63: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
torch.utils._pytree._register_pytree_node(
Loading pipeline components...: 100%|████████████████████████| 5/5 [00:01<00:00, 3.43it/s]
Traceback (most recent call last):
File "F:\MagicClothing-main\gradio_generate.py", line 25, in <module>
full_net = ClothAdapter(pipe, args.model_path, device, args.enable_cloth_guidance)
File "F:\MagicClothing-main\garment_adapter\garment_diffusion.py", line 20, in __init__
self.pipe = sd_pipe.to(self.device)
File "C:\Users\ggrov\anaconda3\envs\magicloth\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 869, in to
module.to(device, dtype)
File "C:\Users\ggrov\anaconda3\envs\magicloth\lib\site-packages\torch\nn\modules\module.py", line 1152, in to
return self._apply(convert)
File "C:\Users\ggrov\anaconda3\envs\magicloth\lib\site-packages\torch\nn\modules\module.py", line 802, in _apply
module._apply(fn)
File "C:\Users\ggrov\anaconda3\envs\magicloth\lib\site-packages\torch\nn\modules\module.py", line 825, in apply
param_applied = fn(param)
File "C:\Users\ggrov\anaconda3\envs\magicloth\lib\site-packages\torch\nn\modules\module.py", line 1150, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
File "C:\Users\ggrov\anaconda3\envs\magicloth\lib\site-packages\torch\cuda_init
.py", line 293, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

(magicloth) F:\MagicClothing-main>nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_Sep_21_10:41:10_Pacific_Daylight_Time_2022
Cuda compilation tools, release 11.8, V11.8.89
Build cuda_11.8.r11.8/compiler.31833905_0
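
Note: nvcc only shows that the CUDA toolkit is installed; the AssertionError says the installed torch wheel is a CPU-only build. A quick check, with the reinstall command for the CUDA 11.8 wheels left as a comment:

import torch

print(torch.__version__)          # "2.0.1+cpu" indicates a CPU-only wheel
print(torch.cuda.is_available())  # must be True before the fp16 pipeline can be moved to "cuda"

# If it prints False, reinstall the CUDA build inside the conda env, e.g.:
# pip install torch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 --index-url https://download.pytorch.org/whl/cu118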

so many errors

RuntimeError: Input type (struct c10::Half) and bias type (float) should be the same
C:\Users\ggrov\anaconda3\envs\magicloth\lib\site-packages\torch\nn\functional.py:3737: UserWarning: nn.functional.upsample is deprecated. Use nn.functional.interpolate instead.
warnings.warn("nn.functional.upsample is deprecated. Use nn.functional.interpolate instead.")
Traceback (most recent call last):
File "C:\Users\ggrov\anaconda3\envs\magicloth\lib\site-packages\gradio\queueing.py", line 495, in call_prediction
output = await route_utils.call_process_api(
File "C:\Users\ggrov\anaconda3\envs\magicloth\lib\site-packages\gradio\route_utils.py", line 232, in call_process_api
output = await app.get_blocks().process_api(
File "C:\Users\ggrov\anaconda3\envs\magicloth\lib\site-packages\gradio\blocks.py", line 1561, in process_api
result = await self.call_function(
File "C:\Users\ggrov\anaconda3\envs\magicloth\lib\site-packages\gradio\blocks.py", line 1179, in call_function
prediction = await anyio.to_thread.run_sync(
File "C:\Users\ggrov\anaconda3\envs\magicloth\lib\site-packages\anyio\to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
File "C:\Users\ggrov\anaconda3\envs\magicloth\lib\site-packages\anyio_backends_asyncio.py", line 2144, in run_sync_in_worker_thread
return await future
File "C:\Users\ggrov\anaconda3\envs\magicloth\lib\site-packages\anyio_backends_asyncio.py", line 851, in run
result = context.run(func, *args)
File "C:\Users\ggrov\anaconda3\envs\magicloth\lib\site-packages\gradio\utils.py", line 695, in wrapper
response = f(*args, **kwargs)
File "F:\MagicClothing-main\gradio_generate.py", line 29, in process
images, cloth_mask_image = full_net.generate(cloth_image, cloth_mask_image, prompt, a_prompt, num_samples, n_prompt, seed, scale, cloth_guidance_scale, sample_steps, height, width)
File "F:\MagicClothing-main\garment_adapter\garment_diffusion.py", line 90, in generate
cloth_embeds = self.pipe.vae.encode(cloth).latent_dist.mode() * self.pipe.vae.config.scaling_factor
File "C:\Users\ggrov\anaconda3\envs\magicloth\lib\site-packages\diffusers\utils\accelerate_utils.py", line 46, in wrapper
return method(self, *args, **kwargs)
File "C:\Users\ggrov\anaconda3\envs\magicloth\lib\site-packages\diffusers\models\autoencoders\autoencoder_kl.py", line 260, in encode
h = self.encoder(x)
File "C:\Users\ggrov\anaconda3\envs\magicloth\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\ggrov\anaconda3\envs\magicloth\lib\site-packages\diffusers\models\autoencoders\vae.py", line 143, in forward
sample = self.conv_in(sample)
File "C:\Users\ggrov\anaconda3\envs\magicloth\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\ggrov\anaconda3\envs\magicloth\lib\site-packages\torch\nn\modules\conv.py", line 463, in forward
return self._conv_forward(input, self.weight, self.bias)
File "C:\Users\ggrov\anaconda3\envs\magicloth\lib\site-packages\torch\nn\modules\conv.py", line 459, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Input type (struct c10::Half) and bias type (float) should be the same
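
Note: the Half-vs-float mismatch indicates the cloth tensor and the VAE weights ended up in different dtypes, which typically happens when an fp16 pipeline runs on CPU or a component is left in fp32. A hedged sketch of keeping everything in one dtype (the base model name is only illustrative):

import torch
from diffusers import AutoencoderKL, StableDiffusionPipeline

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", vae=vae)

if torch.cuda.is_available():
    pipe = pipe.to("cuda", torch.float16)   # all components in fp16 on GPU
else:
    pipe = pipe.to("cpu", torch.float32)    # half-precision conv is not supported on CPU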

How can the background be replaced?

Hello, in a real application, if I want to add a background image on top of the garment-swapped result, how should I do that? Are there any recommended projects to reference?

Can't load the local safetensors file

Command line (the files all exist):

python gradio_generate.py --model_path .\examples\model\01008_00.jpg --pipe_path .\checkpoints\RealisticVisionV51.safetensors

Errors:

Traceback (most recent call last):
  File "D:\oms-Diffusion\gradio_generate.py", line 18, in <module>
    pipe = StableDiffusionPipeline.from_pretrained(args.pipe_path, vae=vae, torch_dtype=torch.float16)
  File "C:\Users\admin\anaconda3\envs\oms-diffusion\lib\site-packages\huggingface_hub\utils\_validators.py", line 118, in _inner_fn
    return fn(*args, **kwargs)
  File "C:\Users\admin\anaconda3\envs\oms-diffusion\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 1096, in from_pretrained
    cached_folder = cls.download(
  File "C:\Users\admin\anaconda3\envs\oms-diffusion\lib\site-packages\huggingface_hub\utils\_validators.py", line 118, in _inner_fn
    return fn(*args, **kwargs)
  File "C:\Users\admin\anaconda3\envs\oms-diffusion\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 1656, in download
    info = model_info(pretrained_model_name, token=token, revision=revision)
  File "C:\Users\admin\anaconda3\envs\oms-diffusion\lib\site-packages\huggingface_hub\utils\_validators.py", line 110, in _inner_fn
    validate_repo_id(arg_value)
  File "C:\Users\admin\anaconda3\envs\oms-diffusion\lib\site-packages\huggingface_hub\utils\_validators.py", line 164, in validate_repo_id
    raise HFValidationError(
huggingface_hub.utils._validators.HFValidationError: Repo id must use alphanumeric chars or '-', '_', '.', '--' and '..' are forbidden, '-' and '.' cannot start or end the name, max length is 96: '.\checkpoints\RealisticVisionV51.safetensors'.
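
Note: from_pretrained expects a Hub repo id or a diffusers-format directory, so a single .safetensors checkpoint path fails repo-id validation. diffusers also provides StableDiffusionPipeline.from_single_file for monolithic checkpoints; a hedged sketch (the demo script would need a small change, since it currently calls from_pretrained on --pipe_path):

import torch
from diffusers import StableDiffusionPipeline

# Load a single-file SD 1.5 checkpoint instead of a repo id or diffusers folder.
pipe = StableDiffusionPipeline.from_single_file(
    r".\checkpoints\RealisticVisionV51.safetensors",
    torch_dtype=torch.float16,
)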

ComfyUI custom code kick off

I appreciate your relentless work. I've just implemented Magic Clothing in ComfyUI, and I am looking forward to continuing to explore more heterogeneous ViT and SOTA approaches in the computer vision field.

AttributeError: 'NoneType' object has no attribute 'to'

I ran:
(magicloth) ubuntu@ubuntu-Legion-5-15ARP8:~/Desktop/MagicClothing$ python inference.py --cloth_path input_images/image2.jpg --model_path oms_diffusion_100000.safetensors --output_path output_images/

And I got the following:

Loading pipeline components...: 100%|██████████████████████████████| 5/5 [00:02<00:00,  2.11it/s]
----No checkpoints at given path----
Traceback (most recent call last):
  File "/home/hassan/Desktop/MagicClothing/inference.py", line 38, in <module>
    full_net = ClothAdapter(pipe, args.model_path, device, args.enable_cloth_guidance)
  File "/home/hassan/Desktop/MagicClothing/garment_adapter/garment_diffusion.py", line 36, in __init__
    self.set_seg_model()
  File "/home/hassan/Desktop/MagicClothing/garment_adapter/garment_diffusion.py", line 41, in set_seg_model
    self.seg_net = load_seg_model(checkpoint_path, device=self.device)
  File "/home/hassan/Desktop/MagicClothing/garment_seg/process.py", line 96, in load_seg_model
    net = net.to(device)
AttributeError: 'NoneType' object has no attribute 'to'

Please help resolve: safetensors_rust.SafetensorError: Error while deserializing header: HeaderTooLarge

Hello! I am trying to deploy locally on Windows:
Windows 11 x64, Python 3.10.13, Torch 2.0.1+cu118; all other dependencies were installed at the versions given in the readme.
pip install numpy==1.25.1 diffusers==0.25.1 transformers==4.31.0 gradio==4.16.0 safetensors==0.3.1 controlnet-aux==0.0.6 accelerate==0.21.0
The model files were downloaded from Baidu Netdisk. All 3 files are placed in the OMSDiffusion\checkpoints directory.

Running python gradio_generate.py --model_path checkpoints/cloth_segm.pth
gives the following error:

D:\AITest\OMSDiffusion>python gradio_generate.py --model_path checkpoints/cloth_segm.pth
Loading pipeline components...: 100%|██████████████████| 5/5 [00:02<00:00, 2.30it/s]
Traceback (most recent call last):
File "D:\AITest\OMSDiffusion\gradio_generate.py", line 20, in <module>
full_net = ClothAdapter(pipe, args.model_path, device)
File "D:\AITest\OMSDiffusion\garment_adapter\garment_diffusion.py", line 23, in __init__
with safe_open(ref_path, framework="pt", device="cpu") as f:
safetensors_rust.SafetensorError: Error while deserializing header: HeaderTooLarge

The other models were downloaded automatically via huggingface snapshot; the directory structure is shown below:
[directory structure screenshot]

I have also re-downloaded the model files from the Hugging Face model page, but the error persists, so I don't know how to fix it. Please advise, thank you!

HeaderTooLarge

(magicloth) F:\MagicClothing-main>python gradio_generate.py --model_path "F:\MagicClothing-main\checkpoints\ipadapter_faceid\ip-adapter-faceid_sd15.bin"
Loading pipeline components...: 100%|██████████████████| 5/5 [00:00<00:00, 6.47it/s]
Traceback (most recent call last):
File "F:\MagicClothing-main\gradio_generate.py", line 25, in <module>
full_net = ClothAdapter(pipe, args.model_path, device, args.enable_cloth_guidance)
File "F:\MagicClothing-main\garment_adapter\garment_diffusion.py", line 28, in __init__
with safe_open(ref_path, framework="pt", device="cpu") as f:
safetensors_rust.SafetensorError: Error while deserializing header: HeaderTooLarge

(magicloth) F:\MagicClothing-main>

safetensors_rust.SafetensorError: Error while deserializing header: HeaderTooLarge

Traceback (most recent call last):
File "E:\software\oms-Diffusion\inference.py", line 33, in
full_net = ClothAdapter(pipe, args.model_path, device)
File "E:\software\oms-Diffusion\garment_adapter\garment_diffusion.py", line 23, in init
with safe_open(ref_path, framework="pt", device="cpu") as f:
safetensors_rust.SafetensorError: Error while deserializing header: HeaderTooLarge

AttributeError: 'NoneType' object has no attribute 'convert'

File "E:\software\oms-Diffusion\gradio_controlnet_inpainting.py", line 35, in process
inpaint_image = make_inpaint_condition(person_image,person_mask)
File "E:\software\oms-Diffusion\gradio_controlnet_inpainting.py", line 26, in make_inpaint_condition
image_mask = np.array(image_mask.convert("L")).astype(np.float32) / 255.0
AttributeError: 'NoneType' object has no attribute 'convert'

model missed

Have the model weight files not been released yet? Please release them.

is blip necessary for training?

I see that for inference, if BLIP is applied, the results are better for some categories of clothing beyond those in VITON-HD, such as jackets or bikinis. So is the BLIP part in this repo's figure necessary for training?

Train another model

Hello, can you release the training code? I want to train male models; this one only supports women...

Is the inpainting pipeline the same as OOTDiffusion?

Thanks for the great work of yours!

I noticed you have another work, OOTDiffusion, which produces results similar to the inpainting demo of this work. Does OOTDiffusion use the same method as this one, or perhaps share model structure/weights?

Mark!

Judging from the demo released so far, this is a great project. Looking forward to the open-sourcing of the pretrained weights and the publication of the training procedure.

Just marking this for now; if there are updates and anyone sees them, please @ me below. :]

When I use CPU to run inference.py, it returns an error: safetensors_rust.SafetensorError: Error while deserializing header: HeaderTooLarge

I use a Mac and can't use CUDA, so I changed the device to "cpu". Then when I run inference.py, it returns this error:
safetensors_rust.SafetensorError: Error while deserializing header: HeaderTooLarge

$ python inference.py --cloth_path images/garment/00055_00.jpg --model_path images/model/01008_00.jpg

Pipelines loaded with `dtype=torch.float16` cannot run with `cpu` device. It is not recommended to move them to `cpu` as running them will fail. Please make sure to use an accelerator to run the pipeline in inference, due to the lack of support for`float16` operations on this device in PyTorch. Please, remove the `torch_dtype=torch.float16` argument, or use another device for inference.
Pipelines loaded with `dtype=torch.float16` cannot run with `cpu` device. It is not recommended to move them to `cpu` as running them will fail. Please make sure to use an accelerator to run the pipeline in inference, due to the lack of support for`float16` operations on this device in PyTorch. Please, remove the `torch_dtype=torch.float16` argument, or use another device for inference.
Pipelines loaded with `dtype=torch.float16` cannot run with `cpu` device. It is not recommended to move them to `cpu` as running them will fail. Please make sure to use an accelerator to run the pipeline in inference, due to the lack of support for`float16` operations on this device in PyTorch. Please, remove the `torch_dtype=torch.float16` argument, or use another device for inference.
Traceback (most recent call last):
  File "/Users/admin/py/MagicClothing/inference.py", line 38, in <module>
    full_net = ClothAdapter(pipe, args.model_path, device, args.enable_cloth_guidance)
  File "/Users/admin/py/MagicClothing/garment_adapter/garment_diffusion.py", line 28, in __init__
    with safe_open(ref_path, framework="pt", device="cpu") as f:
safetensors_rust.SafetensorError: Error while deserializing header: HeaderTooLarge

Error; asking for help; I have tried many approaches.

2024-04-17 11:52:43.470077: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-04-17 11:52:43.470129: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-04-17 11:52:43.471608: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-04-17 11:52:44.532182: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
Loading pipeline components...: 100% 5/5 [00:03<00:00, 1.59it/s]
Traceback (most recent call last):
File "/content/MagicClothing/gradio_generate.py", line 25, in
full_net = ClothAdapter(pipe, args.model_path, device, args.enable_cloth_guidance)
File "/content/MagicClothing/garment_adapter/garment_diffusion.py", line 28, in init
with safe_open(ref_path, framework="pt", device="cpu") as f:
OSError: No such device (os error 19)

Can't load model_path

with safe_open(ref_path, framework="pt", device="cpu") as f:
safetensors_rust.SafetensorError: Error while deserializing header: HeaderTooLarge

I'm getting the above error while loading ref_path in garment_diffusion.py, line 28.

safetensors_rust.SafetensorError: Error while deserializing header: HeaderTooLarge

Command line (the files all exist):

python gradio_generate.py --model_path .\examples\model\01008_00.jpg

Errors:

(oms-diffusion) D:\oms-Diffusion>
Loading pipeline components...: 100%|█████████████████████| 5/5 [00:00<00:00,  6.10it/s]
Traceback (most recent call last):
  File "D:\oms-Diffusion\gradio_generate.py", line 20, in <module>
    full_net = ClothAdapter(pipe, args.model_path, device)
  File "D:\oms-Diffusion\garment_adapter\garment_diffusion.py", line 23, in __init__
    with safe_open(ref_path, framework="pt", device="cpu") as f:
safetensors_rust.SafetensorError: Error while deserializing header: HeaderTooLarge
