This repository contains the official PyTorch implementation of our DGHE.
We recommend running our code on an NVIDIA GPU with CUDA and cuDNN.
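Before training, it may help to verify that PyTorch can see your GPU. The snippet below is a minimal sanity check (not part of the repo):

```python
import torch

# Report the PyTorch build and whether CUDA/cuDNN are usable.
print(f"PyTorch {torch.__version__}")
print(f"CUDA available: {torch.cuda.is_available()}")
if torch.cuda.is_available():
    print(f"cuDNN enabled: {torch.backends.cudnn.enabled}")
    print(f"Device: {torch.cuda.get_device_name(0)}")
```

If `CUDA available` prints `False`, the code will fall back to CPU, which is impractical for diffusion-model training.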
DGHE builds on the checkpoints of pretrained diffusion models.
| Image Type to Edit | Size | Pretrained Model | Dataset | Reference Repo. |
|---|---|---|---|---|
| Human face | 256×256 | Diffusion (Auto) | CelebA-HQ | SDEdit |
| Human face | 256×256 | Diffusion | CelebA-HQ | P2 weighting |
| Church | 256×256 | Diffusion (Auto) | LSUN-Church | SDEdit |
| Dog face | 256×256 | Diffusion | AFHQ-Dog | ILVR |
- The pretrained diffusion models for 256×256 images on CelebA-HQ, LSUN-Church, and AFHQ-Dog are downloaded automatically by the code (download code adapted from DiffusionCLIP).
- You can manually revise the checkpoint paths and names in the `./configs/paths_config.py` file.
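As an illustration, a path-config file of this kind typically maps dataset and checkpoint names to local paths. The variable names and paths below are assumptions for illustration only; check the actual keys in `./configs/paths_config.py` before editing:

```python
# Illustrative sketch of a paths_config.py -- the actual variable names
# and keys in this repo may differ; edit the real file to match your setup.
DATASET_PATHS = {
    "CelebA_HQ": "./data/celeba_hq",       # hypothetical local path
    "LSUN_Church": "./data/lsun_church",   # hypothetical local path
    "AFHQ_Dog": "./data/afhq_dog",         # hypothetical local path
}

MODEL_PATHS = {
    "CelebA_HQ": "./checkpoints/celeba_hq.ckpt",       # hypothetical
    "LSUN_Church": "./checkpoints/lsun_church.ckpt",   # hypothetical
    "AFHQ_Dog": "./checkpoints/afhq_dog.ckpt",         # hypothetical
}
```

Pointing these entries at your own checkpoint locations lets the training and inference scripts find models that were downloaded or trained elsewhere.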
| Path | Description |
|---|---|
| Human face | DGHE trained on the CelebA-HQ dataset. |
| Church | DGHE trained on the LSUN-Church dataset. |
| Dog face | DGHE trained on the AFHQ-Dog dataset. |
To train DGHE, run the following command via `script_train.sh`:
```bash
python main.py --run_train \
    --config $config \
    --exp ./runs/train/$guid \
    --edit_attr $guid \
    --do_train 1 \
    --do_test 1 \
    --n_train_img 80 \
    --n_test_img 100 \
    --n_iter 1 \
    --bs_train 1 \
    --t_0 999 \
    --n_inv_step 40 \
    --n_train_step 40 \
    --n_test_step 40 \
    --get_h_num 1 \
    --train_delta_block \
    --sh_file_name $sh_file_name \
    --save_x0 \
    --use_x0_tensor \
    --hs_coeff_delta_h 1.0 \
    --lr_training 0.5 \
    --clip_loss_w 0.8 \
    --l1_loss_w 2.658 \
    --retrain 1 \
    --add_noise_from_xt \
    --lpips_addnoise_th 1.2 \
    --lpips_edit_th 0.33 \
    --model_path "diffusion model path" \
    --gan_model_path "gan inversion model path" \
    --save_precomputed_images \
    --save_x_origin \
    --mysig 0.5 \
    --gan_edit \
    --save_src_and_gen
```
After training, you can run inference using `script_inference.sh`. Some trained checkpoints are provided in the Pretrained Checkpoints table above.
```bash
python main.py --run_test \
    --config $config \
    --exp ./runs/test/gan_edit/$guid \
    --edit_attr $guid \
    --do_train 0 \
    --do_test 1 \
    --n_train_img 0 \
    --n_test_img 300 \
    --n_iter 1 \
    --bs_train 1 \
    --t_0 999 \
    --n_inv_step 40 \
    --n_train_step 40 \
    --n_test_step $test_step \
    --get_h_num 1 \
    --train_delta_block \
    --sh_file_name $sh_file_name \
    --save_x0 \
    --use_x0_tensor \
    --hs_coeff_delta_h 1.0 \
    --manual_checkpoint_name "pretrained ckpt" \
    --add_noise_from_xt \
    --lpips_addnoise_th 1.2 \
    --lpips_edit_th 0.33 \
    --model_path "diffusion model path" \
    --gan_model_path "gan inversion model path" \
    --save_x_origin \
    --save_src_and_gen \
    --mysig 0.5 \
    --save_inv \
    --gan_edit
```
We would like to thank the authors of previous related projects for generously sharing their code, especially DiffusionCLIP and Asyrp.