
CPM's Introduction

Table of Contents
  1. Introduction
  2. Datasets
  3. Getting Started
  4. Training & Evaluation

CPM: Color-Pattern Makeup Transfer

  • CPM is a holistic makeup transfer framework that outperforms previous state-of-the-art models on both light and extreme makeup styles.
  • CPM consists of an improved color transfer branch (based on BeautyGAN) and a novel pattern transfer branch.
  • We also introduce 4 new datasets (both real and synthetic) to train and evaluate CPM.

📢 New: We provide ❝Qualitative Performance Comparisons❞ online! Check it out!

teaser.png
CPM can replicate both colors and patterns from a reference makeup style to another image.

Details of the dataset construction, model architecture, and experimental results can be found in the following paper:

@inproceedings{m_Nguyen-etal-CVPR21,
  author = {Thao Nguyen and Anh Tran and Minh Hoai},
  title = {Lipstick ain't enough: Beyond Color Matching for In-the-Wild Makeup Transfer},
  year = {2021},
  booktitle = {Proceedings of the {IEEE} Conference on Computer Vision and Pattern Recognition (CVPR)}
}

Please CITE our paper whenever our datasets or model implementation is used to help produce published results or incorporated into other software.

Open In Colab - arXiv - project page


Datasets

We introduce ✨ 4 new datasets: CPM-Real, CPM-Synt-1, CPM-Synt-2, and Stickers. In addition, we also use the published LADN Dataset & Makeup Transfer Dataset.

CPM-Real and Stickers are crawled from Google Image Search, while CPM-Synt-1 & 2 are built on Makeup Transfer and Stickers. (Click on dataset name to download)

Name        #imgs  Description                                              Example
CPM-Real    3895   real makeup styles                                       CPM-Real.png
CPM-Synt-1  5555   synthetic makeup images with pattern segmentation masks  ./imgs/CPM-Synt-1.png
CPM-Synt-2  1625   synthetic triplets: makeup, non-makeup, ground truth     ./imgs/CPM-Synt-2.png
Stickers    577    high-quality pattern images with alpha channel           Stickers.png

Dataset Folder Structure can be found here.

By downloading these datasets, USER agrees:

  • to use these datasets for research or educational purposes only
  • to not distribute the datasets or part of the datasets in any original or modified form.
  • and to cite our paper whenever these datasets are employed to help produce published results.

Getting Started

Requirements
Installation
# clone the repo
git clone https://github.com/VinAIResearch/CPM.git
cd CPM

# install dependencies
conda env create -f environment.yml
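After creating the environment, activate it before running anything below; the environment name here is an assumption, so check the name: field in environment.yml if it differs:

# activate the conda environment (name assumed from the repo; see environment.yml)
conda activate cpm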
Download pre-trained models
mkdir checkpoints
cd checkpoints
wget https://public.vinai.io/CPM_checkpoints/color.pth
wget https://public.vinai.io/CPM_checkpoints/pattern.pth
  • Download the PRNet pre-trained model from Drive and put it in PRNet/net-data
Usage

➡️ You can now try it in Google Colab.

# Color+Pattern: 
CUDA_VISIBLE_DEVICES=0 python main.py --style ./imgs/style-1.png --input ./imgs/non-makeup.png

# Color Only: 
CUDA_VISIBLE_DEVICES=0 python main.py --style ./imgs/style-1.png --input ./imgs/non-makeup.png --color_only

# Pattern Only: 
CUDA_VISIBLE_DEVICES=0 python main.py --style ./imgs/style-1.png --input ./imgs/non-makeup.png --pattern_only

The result image will be saved as result.png.
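For a quick look at the output, a minimal sketch (Pillow is assumed to be available via the conda environment):

# open the generated image with Pillow (sketch)
from PIL import Image
Image.open("result.png").show()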

result
From left to right: Style, Input & Output

Training and Evaluation

As stated in the paper, the Color Branch and Pattern Branch are completely independent, yet they share the same workflow:

  1. Data preparation: generating the texture_map of the faces (see the sketch below).

  2. Training

Please refer to Color Branch or Pattern Branch for further details.
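For orientation, a minimal sketch of the data-preparation step, assuming the Makeup wrapper from this repo's makeup.py (as invoked by main.py); exact signatures may differ, so treat it as illustrative only:

# illustrative only -- Makeup and prn_process come from this repo's makeup.py;
# args is assumed to be the namespace parsed in main.py
import cv2
from makeup import Makeup

model = Makeup(args)
image = cv2.imread("./imgs/non-makeup.png")
model.prn_process(image)  # PRNet computes the UV position map used as the texture_map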


🌿 If you have trouble running the code, please read Troubleshooting before creating an issue. Thank you 🌿

Troubleshooting
  1. [Solved] ImportError: libGL.so.1: cannot open shared object file: No such file or directory:

    sudo apt update
    sudo apt install libgl1-mesa-glx
    
  2. [Solved] RuntimeError: Expected tensor for argument #1 'input' to have the same device as tensor for argument #2 'weight'; but device 1 does not equal 0 (while checking arguments for cudnn_convolution). Fix: add CUDA_VISIBLE_DEVICES before the python command, e.g.:

    CUDA_VISIBLE_DEVICES=0 python main.py
    
  3. [Solved] RuntimeError: cuda runtime error (999) : unknown error at /opt/conda/conda-bld/pytorch_1595629403081/work/aten/src/THC/THCGeneral.cpp:47

    sudo rmmod nvidia_uvm
    sudo modprobe nvidia_uvm
    
Dockerfile
docker build -t name .
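To run the built image with GPU access, a sketch (the tag name matches the build command above; --gpus requires the NVIDIA Container Toolkit on the host):

# run the container with all GPUs visible (sketch)
docker run --gpus all -it name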


CPM's Issues

training code

Thanks for your great work!
Will you release the training code?

The performance of the code cannot match the picture in the README

Hi, thanks for sharing the excellent work!

However, after successfully running the code, the results are visually worse than the demo picture in the README. Are the bad results related to the code environment, or are they expected?

Demo picture in README:

result

Results (color+pattern) after my run:

c+p3

Results (color) after my run:
ColorOnly

Results (pattern) after my run:
patternonly

RuntimeError: Unsupported image type, must be 8bit gray or RGB image.

Traceback (most recent call last):
  File "main.py", line 34, in <module>
    model.prn_process(imgA)
  File "/data1/CPM/makeup.py", line 57, in prn_process
    self.pos = self.prn.process(self.face)
  File "/data1/CPM/utils/api.py", line 107, in process
    detected_faces = self.dlib_detect(image)
  File "/data1/CPM/utils/api.py", line 56, in dlib_detect
    return self.face_detector(image, 1)
RuntimeError: Unsupported image type, must be 8bit gray or RGB image.


About the MT dataset

The dataset link posted before seems to be broken.
Would you please share it again?
Thanks a lot for your time!

Read-only CPM folder preventing me from running Colab example - Read-only file system: './result.png'

Hello,
As in your Colab instructions, I added a shortcut of CPM-Shared-Folder to my Drive. However, I don't have permission to write to your Drive, so I see the following error when running your instructions:

Traceback (most recent call last):
  File "main.py", line 54, in <module>
    Image.fromarray((output).astype("uint8")).save(save_path)
  File "/usr/local/lib/python3.7/dist-packages/PIL/Image.py", line 2131, in save
    fp = builtins.open(filename, "w+b")
OSError: [Errno 30] Read-only file system: './result.png'

Suggested solution: point --savedir to another directory:

# Pattern + Color: Image will be saved in 'result.png'
os.chdir(path)
!python -W ignore main.py --style ./imgs/style-1.png --input ./imgs/non-makeup.png --savedir=/content/

You should also update the other commands accordingly.

AttributeError: module 'segmentation_models_pytorch' has no attribute 'utils'

After following the install instructions and running the main.py script as instructed, both locally and in Colab...

os.chdir(path)
!python -W ignore main.py --style ./imgs/style-1.png --input ./imgs/non-makeup.png

I get the following error:

Traceback (most recent call last):
  File "main.py", line 28, in <module>
    model = Makeup(args)
  File "/content/gdrive/.shortcut-targets-by-id/1rZyAvaAtqZ9a0okVcv4OFaq9aiVvKg5Q/CPM/makeup.py", line 18, in __init__
    self.pattern = Segmentor(args)
  File "/content/gdrive/.shortcut-targets-by-id/1rZyAvaAtqZ9a0okVcv4OFaq9aiVvKg5Q/CPM/utils/models.py", line 15, in __init__
    self.loss = smp.utils.losses.DiceLoss()
AttributeError: module 'segmentation_models_pytorch' has no attribute 'utils'
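A possible workaround, assuming the error comes from a newer segmentation-models-pytorch release that dropped the smp.utils submodule; pinning an older release that still ships it (the exact version below is an assumption consistent with the project's 2021 vintage) may resolve it:

# pin an older segmentation-models-pytorch that still provides smp.utils (version assumed)
pip install segmentation-models-pytorch==0.1.3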

No module named 'segmentation_models_pytorch.unet'

I ran the test and got this issue:
⊱ ──────ஓ๑♡๑ஓ ────── ⊰ 🎵 hhey, arguments are here if you need to check 🎵
checkpoint_pattern: ./checkpoints/pattern.pth
checkpoint_color: ./checkpoints/color.pth
device: cuda
prn: True
color_only: True
pattern_only: False
input: ./imgs/non-makeup.png
style: ./imgs/style-1.png
alpha: 0.5
savedir:

Traceback (most recent call last):
  File "main.py", line 27, in <module>
    model = Makeup(args)
  File "D:\AI\MakeUp\makeup.py", line 19, in __init__
    self.pattern.test_model(args.checkpoint_pattern)
  File "D:\AI\MakeUp\utils\models.py", line 46, in test_model
    torch.load(path),
  File "C:\Users\This PC\miniconda3\envs\makeup\lib\site-packages\torch\serialization.py", line 795, in load
    return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
  File "C:\Users\This PC\miniconda3\envs\makeup\lib\site-packages\torch\serialization.py", line 1012, in _legacy_load
    result = unpickler.load()
  File "C:\Users\This PC\miniconda3\envs\makeup\lib\site-packages\torch\serialization.py", line 828, in find_class
    return super().find_class(mod_name, name)
ModuleNotFoundError: No module named 'segmentation_models_pytorch.unet'
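A possible workaround here as well: the checkpoint appears to have been pickled against an older segmentation-models-pytorch whose package still contained a top-level unet module, so installing a matching older release (version below is an assumption) may let torch.load resolve the module path:

# install an older segmentation-models-pytorch whose layout matches the checkpoint (version assumed)
pip install segmentation-models-pytorch==0.1.3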

Color Training

When I try to train the Color Branch, an error occurs. Could you give me a suggestion when you have time?
The error message is as follows:
Traceback (most recent call last):
  File "create_beautygan_uv.py", line 51, in <module>
    uv_texture, uv_seg = generator.get_texture(image, seg)
  File "/data/run01/scv0004/CPM-main/Color/texture_generator.py", line 23, in get_texture
    pos[:, :, :2].astype(np.float32),
TypeError: 'NoneType' object is not subscriptable
image

Suggest to loosen the dependency on albumentations

Hi, your project CPM requires "albumentations==0.5.2" in its dependency. After analyzing the source code, we found that the following versions of albumentations can also be suitable without affecting your project, i.e., albumentations 0.5.1. Therefore, we suggest to loosen the dependency on albumentations from "albumentations==0.5.2" to "albumentations>=0.5.1,<=0.5.2" to avoid any possible conflict for importing more packages or for downstream projects that may use CPM.

May I pull a request to further loosen the dependency on albumentations?

By the way, could you please tell us whether such dependency analysis may be potentially helpful for maintaining dependencies easier during your development?



We also give our detailed analysis as follows for your reference:

Your project CPM directly uses 17 APIs from package albumentations.

albumentations.augmentations.transforms.MotionBlur.__init__, albumentations.augmentations.transforms.CLAHE.__init__, albumentations.imgaug.transforms.IAASharpen.__init__, albumentations.augmentations.transforms.RandomContrast.__init__, albumentations.imgaug.transforms.IAAPerspective.__init__, albumentations.augmentations.transforms.PadIfNeeded.__init__, albumentations.augmentations.transforms.Lambda.__init__, albumentations.core.composition.OneOf.__init__, albumentations.augmentations.transforms.RandomCrop.__init__, albumentations.core.composition.Compose.__init__, albumentations.augmentations.transforms.RandomBrightness.__init__, albumentations.imgaug.transforms.IAAAdditiveGaussianNoise.__init__, albumentations.augmentations.transforms.RandomGamma.__init__, albumentations.augmentations.transforms.HueSaturationValue.__init__, albumentations.augmentations.transforms.ShiftScaleRotate.__init__, albumentations.augmentations.transforms.Blur.__init__, albumentations.augmentations.transforms.HorizontalFlip.__init__

Beginning from the 17 APIs above, 14 functions are then indirectly called, including 13 of albumentations' internal APIs and 1 outside API. The specific call graph is listed as follows (neglecting some repeated function occurrences).

[/VinAIResearch/CPM]
+--albumentations.augmentations.transforms.MotionBlur.__init__
|      +--albumentations.augmentations.transforms.Blur.__init__
|      |      +--albumentations.core.transforms_interface.BasicTransform.__init__
|      |      +--albumentations.core.transforms_interface.to_tuple
+--albumentations.augmentations.transforms.CLAHE.__init__
|      +--albumentations.core.transforms_interface.BasicTransform.__init__
|      +--albumentations.core.transforms_interface.to_tuple
+--albumentations.imgaug.transforms.IAASharpen.__init__
|      +--albumentations.core.transforms_interface.BasicTransform.__init__
|      +--albumentations.core.transforms_interface.to_tuple
+--albumentations.augmentations.transforms.RandomContrast.__init__
|      +--albumentations.augmentations.transforms.RandomBrightnessContrast.__init__
|      |      +--albumentations.core.transforms_interface.BasicTransform.__init__
|      |      +--albumentations.core.transforms_interface.to_tuple
|      +--warnings.warn
+--albumentations.imgaug.transforms.IAAPerspective.__init__
|      +--albumentations.core.transforms_interface.BasicTransform.__init__
|      +--albumentations.core.transforms_interface.to_tuple
+--albumentations.augmentations.transforms.PadIfNeeded.__init__
|      +--albumentations.core.transforms_interface.BasicTransform.__init__
+--albumentations.augmentations.transforms.Lambda.__init__
|      +--albumentations.core.transforms_interface.BasicTransform.__init__
|      +--warnings.warn
+--albumentations.core.composition.OneOf.__init__
|      +--albumentations.core.composition.BaseCompose.__init__
|      |      +--albumentations.core.composition.Transforms.__init__
|      |      |      +--albumentations.core.composition.Transforms._find_dual_start_end
|      |      |      |      +--albumentations.core.composition.Transforms._find_dual_start_end
+--albumentations.augmentations.transforms.RandomCrop.__init__
|      +--albumentations.core.transforms_interface.BasicTransform.__init__
+--albumentations.core.composition.Compose.__init__
|      +--albumentations.core.composition.BaseCompose.__init__
|      +--albumentations.augmentations.bbox_utils.BboxProcessor.__init__
|      |      +--albumentations.core.utils.DataProcessor.__init__
|      +--albumentations.core.composition.BboxParams.__init__
|      |      +--albumentations.core.utils.Params.__init__
|      +--albumentations.augmentations.keypoints_utils.KeypointsProcessor.__init__
|      |      +--albumentations.core.utils.DataProcessor.__init__
|      +--albumentations.core.composition.KeypointParams.__init__
|      |      +--albumentations.core.utils.Params.__init__
|      +--albumentations.core.composition.BaseCompose.add_targets
+--albumentations.augmentations.transforms.RandomBrightness.__init__
|      +--albumentations.augmentations.transforms.RandomBrightnessContrast.__init__
|      +--warnings.warn
+--albumentations.imgaug.transforms.IAAAdditiveGaussianNoise.__init__
|      +--albumentations.core.transforms_interface.BasicTransform.__init__
|      +--albumentations.core.transforms_interface.to_tuple
+--albumentations.augmentations.transforms.RandomGamma.__init__
|      +--albumentations.core.transforms_interface.BasicTransform.__init__
|      +--albumentations.core.transforms_interface.to_tuple
+--albumentations.augmentations.transforms.HueSaturationValue.__init__
|      +--albumentations.core.transforms_interface.BasicTransform.__init__
|      +--albumentations.core.transforms_interface.to_tuple
+--albumentations.augmentations.transforms.ShiftScaleRotate.__init__
|      +--albumentations.core.transforms_interface.BasicTransform.__init__
|      +--albumentations.core.transforms_interface.to_tuple
+--albumentations.augmentations.transforms.Blur.__init__
+--albumentations.augmentations.transforms.HorizontalFlip.__init__
|      +--albumentations.core.transforms_interface.BasicTransform.__init__

We scanned albumentations' versions and observed that, during its evolution between any version from [0.5.1] and 0.5.2, the changed functions (diffs listed below) have no intersection with any function or API mentioned above (either directly or indirectly called by this project).

diff: 0.5.2(original) 0.5.1
['albumentations.augmentations.transforms.MedianBlur', 'albumentations.augmentations.transforms.CropNonEmptyMaskIfExists.targets_as_params', 'albumentations.augmentations.transforms.GaussianBlur', 'albumentations.augmentations.transforms.CropNonEmptyMaskIfExists.update_params', 'albumentations.pytorch.transforms.ToTensorV2', 'albumentations.pytorch.transforms.ToTensorV2.apply', 'albumentations.augmentations.transforms.CropNonEmptyMaskIfExists.get_params_dependent_on_targets', 'albumentations.augmentations.transforms.CropNonEmptyMaskIfExists', 'albumentations.augmentations.transforms.CropNonEmptyMaskIfExists._preprocess_mask']

As for other packages, the warnings APIs called by albumentations in the call graph, and the dependencies on those packages, also stay the same across our suggested versions, thus avoiding any outside conflict.

Therefore, we believe that it is quite safe to loosen your dependency on albumentations from "albumentations==0.5.2" to "albumentations>=0.5.1,<=0.5.2". This will improve the applicability of CPM and reduce the possibility of any further dependency conflict with other projects.
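For concreteness, the proposed change is a one-line edit wherever the pin is declared (shown here in requirements.txt style; the actual file and location in this repo are assumptions):

# before
albumentations==0.5.2
# after
albumentations>=0.5.1,<=0.5.2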

higher resolution

Thanks for sharing.

Is it possible to run this method on higher-resolution input, or does it support 256×256 only?

Unsatisfactory results when testing on custom images

Hello

I got some non-ideal results when testing the model on other images.

  1. As shown in the following image, there are some distinct boundaries in the resulting image. Any idea why?

    image

    When diving into the code, I found you applied Face Detection before PRNet (see here). Because my input image had already been cropped, I commented out the code related to Face Detection in the process() function (see below). However, the result turned out to be worse. It seems this Face Detection step is necessary. Could you provide more explanation of this?

    image

    image

  2. When testing on my device (Tesla V100), it took several seconds to process an image, which is really time-consuming. Is this normal? If so, why is it so time-consuming (the model does not seem very complex)?

Looking forward to your reply.
Thanks.

RuntimeError: cuDNN error: CUDNN_STATUS_EXECUTION_FAILED

Hello,
I am trying to reimplement the pattern branch and have this issue.
There is also a warning saying that "GeForce RTX 3070 with CUDA capability sm_86 is not compatible with the current PyTorch installation. The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_61 sm_70 sm_75 compute_37."
I am using Ubuntu 21.04 and CUDA version 11.2.
I think the error is due to an incompatibility between my RTX 3070 and PyTorch 1.6, and that I need a combination of pytorch=1.8 and torchvision=0.9 to get it to work. Do you have an alternative environment for this? Thank you!
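A possible install command for that combination, assuming a pip-based setup; the +cu111 wheels are the ones built with sm_86 (Ampere) support:

# install PyTorch 1.8.1 / torchvision 0.9.1 built against CUDA 11.1 (sketch)
pip install torch==1.8.1+cu111 torchvision==0.9.1+cu111 -f https://download.pytorch.org/whl/torch_stable.html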

Setup throwing error

I was trying to set it up on my local system, but conda is throwing an error:

Solving environment: failed

ResolvePackageNotFound:
  - libstdcxx-ng=9.3.0
  - gmp=6.2.1
  - openh264=2.1.1
  - _openmp_mutex=4.5
  - jasper=1.900.1
  - readline=8.1
  - graphite2=1.3.14
  - libuuid=1.0.3
  - libxcb=1.14
  - ncurses=6.2
  - libxkbcommon=1.0.3
  - libedit=3.1.20210216
  - ld_impl_linux-64=2.33.1
  - libnghttp2=1.43.0
  - nspr=4.30
  - lame=3.100
  - cupti=10.1.168
  - nss=3.63
  - libev=4.33
  - libgcc-ng=9.3.0
  - gnutls=3.6.13
  - dbus=1.13.18
  - nettle=3.6
  - libgfortran-ng=7.3.0

Also, what would be the ideal architecture for deploying this model in the cloud?
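Regarding the ResolvePackageNotFound failure above, a possible workaround, assuming it is caused by Linux-only builds pinned in environment.yml while creating the environment on a different OS: drop the unresolvable entries and retry. A sketch (GNU sed syntax):

# delete the unresolvable Linux-specific pins from environment.yml (sketch; extend the pattern as needed)
sed -i '/libstdcxx-ng\|libgcc-ng\|_openmp_mutex\|ld_impl_linux-64/d' environment.yml
conda env create -f environment.yml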

Training on new dataset

Hi, I really appreciate the work you have done, and I want to train your color module on a new dataset.
In CPM/Color/Readme.md, it says that to re-train the model on a new dataset, one should follow the instructions for BeautyGAN. I am wondering whether this means I should use the BeautyGAN network to train on the new dataset and use the resulting checkpoints as the new color-module checkpoints, or whether I can simply train the color module within this CPM network?

A problem in Colab

When I ran it in Colab, I faced the following problem:
Can you tell me how to solve it?

image-invariant region mask

Thank you for sharing your work!
In Sec. 3.2 of your paper, I found the following description:
"This region mask is image-invariant and equals to a universal mask"
Does it mean all images share the same region mask? If so, can you share the region mask? I am trying to reproduce the makeup transfer; can you show how to use the region mask to realize it? Thanks a lot.

Requirements

Can you create a requirements.txt file and add it to the repo?
