name | function | needs to be adjusted to your environment | note |
---|---|---|---|
Random parameter setting | |||
seed | Random seed for dataset shuffle | ||
distributed training setting | |||
gpus | Number of GPUs used in distributed training | ||
world_size | The number of processes in distributed training | ||
backend | The platform used for process communication | ||
init_method | IP address and port of your server | ||
syncbn | Switch for synchronized batch normalization | ||
transform setting | |||
face | Type of face cropping | ||
size | Size of the cropped face | ||
training parameter setting | |||
debug | Debug-mode switch | ||
logint | Logging interval | ||
modelperiod | Model checkpoint saving interval | ||
valint | Validation interval | ||
batch | Training batch size | ||
epochs | Maximum number of training epochs | ||
net_s | Network architecture of the student model | ||
net_t | Network architecture of the teacher model | ||
traindb | Dataset quality and training-set split | ["ff-c23-720-140-140"] | raw/c23/c40: chosen quality; 720: training set size; 140: validation set size; 140: test set size |
trainIndex | Tamper (manipulation) type | | 0: Deepfakes 1: Face2Face 2: FaceSwap 3: NeuralTextures |
tagnote | Note text appended to the run tag | ||
dataset setting | |||
ffpp_faces_df_path | Path to the dataset dataframe | ||
ffpp_faces_dir | Path to the dataset | ||
workers | Number of worker processes for data loading | ||
optimizer setting | |||
lr | Training learning rate | ||
patience | Patience parameter used in the optimizer setup (commonly a learning-rate-scheduler setting) | ||
model loading setting | |||
models_dir | Path to the saved-models directory | ||
mode | Model loading strategy | | 0: load the best trained model; 1: load the last checkpoint; 2: load the checkpoint of a specific iteration |
index | Iteration of the checkpoint to load when mode is 2 | ||
log setting | |||
log_dir | Path to the logs directory | ||
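The `traindb` tag above packs the quality choice and the split sizes into one string. A minimal sketch of how such a tag could be decomposed, assuming the `name-quality-train-val-test` layout described in the note column (the helper name and return format are illustrative, not part of the repository):

```python
# Parse a traindb tag such as "ff-c23-720-140-140" into its parts.
# parse_traindb is a hypothetical helper, not a function from this repo.

def parse_traindb(tag: str) -> dict:
    """Split a traindb tag into dataset name, quality, and split sizes."""
    dataset, quality, train, val, test = tag.split("-")
    return {
        "dataset": dataset,        # e.g. "ff" for FaceForensics++
        "quality": quality,        # one of "raw", "c23", "c40"
        "train_size": int(train),  # training set size
        "val_size": int(val),      # validation set size
        "test_size": int(test),    # test set size
    }

print(parse_traindb("ff-c23-720-140-140"))
```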
- use index_dataset.py to generate the dataframe of the corresponding dataset
- use extract_faces.py to transform videos into frames and the corresponding dataframe
- set up config.py in the config folder
- train your teacher model using train_starter.py
- train your expanded (student) model using train_starter.py
- train the model for domain adaptation using train_starter.py
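The steps above can be sketched as a shell session. The script names come from the list; none of the scripts' command-line flags are documented here, so the commands below assume each script reads its settings from config/config.py (check each script's own help for the actual interface):

```shell
# Hypothetical end-to-end run; settings are assumed to come from config/config.py.

python index_dataset.py    # 1. build the dataset dataframe
python extract_faces.py    # 2. extract frames and per-face dataframe from videos
# 3. edit config/config.py to match your environment (paths, GPUs, traindb, ...)
python train_starter.py    # 4. train the teacher model (net_t)
python train_starter.py    # 5. train the expanded student model (net_s)
python train_starter.py    # 6. run domain-adaptation training
```

Between the three train_starter.py runs, the config would be updated to select the teacher, student, and domain-adaptation setups respectively.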