Code for the paper "PortraitNet: Real-time portrait segmentation network for mobile device". @ CAD&Graphics 2019
We propose a real-time portrait segmentation model, called PortraitNet, that runs effectively and efficiently on mobile devices. PortraitNet is based on a lightweight U-shaped architecture with two auxiliary losses used at training time; no additional cost is incurred at test time for portrait inference.
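The two auxiliary losses in the paper are a boundary loss and a consistency constraint loss between an image and its texture-augmented copy. As a rough illustration of the consistency term only, here is a plain-numpy sketch (not the repository's implementation; the temperature is an assumed parameter):

```python
import numpy as np

def kl_consistency_loss(logits_a, logits_b, temperature=1.0):
    """Sketch of a consistency constraint loss: the KL divergence between
    the soft predictions for an original image (A) and its
    texture-augmented copy (B).

    Illustration only; the repository's training code and the exact
    temperature value may differ.
    """
    def softmax(x):
        # Numerically stable softmax over the class axis (axis 0).
        z = (x - x.max(axis=0, keepdims=True)) / temperature
        e = np.exp(z)
        return e / e.sum(axis=0, keepdims=True)

    p = softmax(logits_a)          # branch fed the original image
    q = softmax(logits_b)          # branch fed the augmented image
    eps = 1e-8                     # avoid log(0)
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

# Identical predictions give zero loss; diverging predictions increase it.
same = np.array([[0.5, 1.0], [1.5, -1.0]])
loss_same = kl_consistency_loss(same, same)
loss_diff = kl_consistency_loss(same, same[::-1])
```

Because the loss only compares the two network outputs, it adds no extra computation at inference time, which matches the paper's claim of zero test-time overhead.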
- python 3.9
- CUDA 12.0
conda create -n port python=3.9
conda activate port
pip install -r requirements.txt
If you want to use a pip mirror in China:
pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
Or any other pip mirror of your choice.
- EG1800: Since several image URLs in the original EG1800 dataset are no longer valid, we use 1447 images for training and 289 images for validation.
- Supervise-Portrait: a portrait segmentation dataset collected from the public human segmentation dataset Supervise.ly, using the same data-processing pipeline as EG1800.
python train.py --batchsize 32
Test on images and videos:
- Test on a single image
python test.py
- Test on a video
Coming soon...
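Testing a single image ends with turning the network's two-channel score map into a binary portrait mask. A minimal post-processing sketch, assuming a (2, H, W) output with channel 1 as the portrait class (both are assumptions for illustration, not taken from test.py):

```python
import numpy as np

def scores_to_mask(scores, threshold=0.5):
    """Convert a (2, H, W) background/portrait score map to a binary mask.

    The shape and channel order are assumptions about the network output;
    adjust to the actual conventions used in test.py.
    """
    # Numerically stable softmax over the channel axis.
    e = np.exp(scores - scores.max(axis=0, keepdims=True))
    prob = e / e.sum(axis=0, keepdims=True)
    portrait_prob = prob[1]                      # channel 1 assumed "portrait"
    return (portrait_prob > threshold).astype(np.uint8)

# Tiny example: a 2x2 score map where the right column favors "portrait".
scores = np.array([[[2.0, -2.0], [2.0, -2.0]],   # background scores
                   [[-2.0, 2.0], [-2.0, 2.0]]])  # portrait scores
mask = scores_to_mask(scores)
```

The resulting mask can then be resized back to the input resolution and used for background replacement or blurring.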
Use TensorBoard to visualize the training process:
cd path_to_save_model
tensorboard --logdir='./log'
Download the pretrained models from Dropbox:
- mobilenetv2_eg1800_with_two_auxiliary_losses (trained on EG1800 with two auxiliary losses)