Yiu-ming Cheung, Mengke Li, Rong Zou
A PyTorch implementation of Facial Structure Guided GAN for Identity-preserved Face Image De-occlusion. Contact: [email protected]
The following datasets are used:
- Occluded CelebA (for training)
- Occluded LFW (for testing)
The datasets can be downloaded from: https://drive.google.com/drive/folders/1ISmIMmmpEVFTi8Xl2aiGR8DUBjjOEHMl?usp=sharing
- LightCNN[1] is used to extract features. The code of LightCNN is borrowed from: https://github.com/AlfredXiangWu/LightCNN
We use LightCNN-29 v2 provided by the author.
After downloading the pretrained model, place it directly in the master (top-level) directory of this repo.
- To train the FIRST stage of SGGAN, specify the parameters listed in train_phase1.py as command-line flags or change them manually in the script. An example of training on the CelebA dataset with this repo:
python train_phase1.py --dataset_name="prepared_image/img_align_celeba_crop" \
--dataset_train="prepared_image/CelebA_train_delete.txt" \
--dataset_test="prepared_image/CelebA_test.txt" \
--img_root_path="E:\data"
- To train the SECOND stage of SGGAN, specify the parameters listed in train_phase2.py as command-line flags or change them manually in the script. An example of the second stage with this repo:
python train_phase2.py --dataset_name="prepared_image/img_align_celeba_crop" \
--dataset_train="prepared_image/CelebA_train_delete.txt" \
--dataset_test="prepared_image/CelebA_test.txt" \
--img_root_path="E:\data" \
--stage1-path="./output/vae.pth"
Tips:
- Suppose the dataset is placed in the folder: E:\data\prepared_image\img_align_celeba_crop
- The dataset directory structure should be, e.g.:
Celeba_crop
complete
contaminate
img_align_mask
img_align_mask_sv
img_align_rand
img_align_sun_glass
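To catch path mistakes before training starts, the layout above can be verified with a short script. This is a minimal sketch, not part of the repo: the subfolder names are taken from the list above, and `check_dataset_root` is a hypothetical helper.

```python
from pathlib import Path

# Expected subfolders under the dataset root (from the layout above).
EXPECTED_SUBFOLDERS = [
    "complete",
    "contaminate",
    "img_align_mask",
    "img_align_mask_sv",
    "img_align_rand",
    "img_align_sun_glass",
]

def check_dataset_root(root):
    """Return the expected subfolders that are missing under `root`."""
    root = Path(root)
    return [name for name in EXPECTED_SUBFOLDERS if not (root / name).is_dir()]

if __name__ == "__main__":
    missing = check_dataset_root(r"E:\data\prepared_image\img_align_celeba_crop")
    if missing:
        print("Missing subfolders:", ", ".join(missing))
    else:
        print("Dataset layout looks complete.")
```

Running it against the dataset root prints any subfolder that still needs to be prepared.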
- dataset_train is the train list. We provide the list used in our experiments, in which some female image names are randomly deleted, because the original dataset contains more female photos than male ones.
- dataset_test is the test list.
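The provided train list is ready to use, but for reference, the balancing step described above (randomly deleting female image names until the genders are even) could be sketched as follows. The function name and the assumption of a per-image gender predicate are illustrative, not the repo's actual code:

```python
import random

def balance_by_deletion(names, is_female, seed=0):
    """Randomly drop female image names until both genders are equally
    represented. `is_female` maps an image name to True/False.
    Returns the kept names in their original order."""
    females = [n for n in names if is_female(n)]
    males = [n for n in names if not is_female(n)]
    rng = random.Random(seed)  # fixed seed for a reproducible list
    surplus = len(females) - len(males)
    drop = set(rng.sample(females, surplus)) if surplus > 0 else set()
    return [n for n in names if n not in drop]
```

Writing the result back, one image name per line, would yield a file in the same spirit as CelebA_train_delete.txt (the one-name-per-line format is an assumption here).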
- For stage 2, suppose the trained model of the first stage is placed at: ./output/vae.pth
We provide an example of testing on the LFW dataset with this repo:
python test-lfw.py --dataset_name="E:\data\prepared_image\processed_lfw_aligned" \
--img_list='E:/data/identity_lfw.txt' \
--stage1_resume="./output/gen1.pth" \
--stage2_resume="./output/gen2.pth"
Tips:
- util.py provides a rough alignment function named "ImgAlignment", which is based on FAN [2]. If the test images are severely misaligned with the training images, this function may help.
- Suppose the trained models for stage 1 and stage 2 are named "gen1.pth" and "gen2.pth", respectively, and are placed in the folder: ./output.
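ImgAlignment itself relies on FAN landmarks, but the geometric core of a rough alignment can be illustrated without a detector: given two eye centres, compute the rotation that makes the eye line horizontal. A minimal sketch (the function name and the two-point recipe are illustrative, not the repo's actual code):

```python
import math

def eye_roll_angle(left_eye, right_eye):
    """Angle (degrees) to rotate the image so the eye line becomes
    horizontal. Eyes are (x, y) pixel coordinates, y pointing down."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))

# Example: right eye 10 px lower than the left -> positive roll angle
angle = eye_roll_angle((30, 40), (70, 50))
```

In practice the landmarks would come from FAN, and the image would then be rotated (and cropped) by this angle so that test faces match the pose of the training crops.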
- [1] Xiang Wu, Ran He, Zhenan Sun, and Tieniu Tan. 2018. A light cnn for deep face representation with noisy labels. IEEE Transactions on Information Forensics and Security 13, 11 (2018), 2884-2896. https://doi.org/10.1109/TIFS.2018.2833032
- [2] Adrian Bulat and Georgios Tzimiropoulos. 2017. How far are we from solving the 2D & 3D face alignment problem? (and a dataset of 230,000 3D facial landmarks). In Proceedings of the IEEE International Conference on Computer Vision.
Please cite the paper if the code is helpful for your research.
@inproceedings{CheungLZ21FSGGAN,
author = {Cheung, Yiu{-}Ming and Li, Mengke and Zou, Rong},
title = {Facial Structure Guided {GAN} for Identity-preserved Face Image De-occlusion},
booktitle = {{ICMR} '21: International Conference on Multimedia Retrieval, Taipei,
Taiwan, August 21-24, 2021},
pages = {46--54},
publisher = {{ACM}},
year = {2021},
doi = {10.1145/3460426.3463642}
}