
GFPGAN's Introduction


  1. 💥 Updated online demo: Replicate. Here is the backup.
  2. 💥 Updated online demo: Huggingface Gradio
  3. Colab Demo for GFPGAN (another Colab Demo for the original paper model)

🚀 Thanks for your interest in our work. You may also want to check out our new updates on the tiny models for anime images and videos in Real-ESRGAN 😊

GFPGAN aims at developing a Practical Algorithm for Real-world Face Restoration.
It leverages rich and diverse priors encapsulated in a pretrained face GAN (e.g., StyleGAN2) for blind face restoration.

❓ Frequently Asked Questions can be found in FAQ.md.

🚩 Updates

  • ✅ Add RestoreFormer inference codes.
  • ✅ Add V1.4 model, which produces slightly more details and better identity than V1.3.
  • ✅ Add V1.3 model, which produces more natural restoration results and better results on very low-quality / high-quality inputs. See more in the Model zoo and Comparisons.md
  • ✅ Integrated to Huggingface Spaces with Gradio. See Gradio Web Demo.
  • ✅ Support enhancing non-face regions (background) with Real-ESRGAN.
  • ✅ We provide a clean version of GFPGAN, which does not require CUDA extensions.
  • ✅ We provide an updated model without colorizing faces.

If GFPGAN is helpful in your photos/projects, please help to ⭐ this repo or recommend it to your friends. Thanks😊 Other recommended projects:
▶️ Real-ESRGAN: A practical algorithm for general image restoration
▶️ BasicSR: An open-source image and video restoration toolbox
▶️ facexlib: A collection that provides useful face-related functions
▶️ HandyView: A PyQt5-based image viewer that is handy for viewing and comparison


📖 GFP-GAN: Towards Real-World Blind Face Restoration with Generative Facial Prior

[Paper]   [Project Page]   [Demo]
Xintao Wang, Yu Li, Honglun Zhang, Ying Shan
Applied Research Center (ARC), Tencent PCG


🔧 Dependencies and Installation

Installation

We now provide a clean version of GFPGAN, which does not require customized CUDA extensions.
If you want to use the original model in our paper, please see PaperModel.md for installation.

  1. Clone repo

    git clone https://github.com/TencentARC/GFPGAN.git
    cd GFPGAN
  2. Install dependent packages

    # Install basicsr - https://github.com/xinntao/BasicSR
    # We use BasicSR for both training and inference
    pip install basicsr
    
    # Install facexlib - https://github.com/xinntao/facexlib
    # We use face detection and face restoration helper in the facexlib package
    pip install facexlib
    
    pip install -r requirements.txt
    python setup.py develop
    
    # If you want to enhance the background (non-face) regions with Real-ESRGAN,
    # you also need to install the realesrgan package
    pip install realesrgan
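
A quick post-install sanity check (a sketch, not part of the official steps): if the editable install succeeded, the package should import and report its generated version.

    python -c "import basicsr, facexlib, gfpgan; print(gfpgan.__version__)"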

⚡ Quick Inference

We take the v1.3 model as an example. More models can be found here.

Download pre-trained models: GFPGANv1.3.pth

wget https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth -P experiments/pretrained_models

Inference!

python inference_gfpgan.py -i inputs/whole_imgs -o results -v 1.3 -s 2
Usage: python inference_gfpgan.py -i inputs/whole_imgs -o results -v 1.3 -s 2 [options]...

  -h                   show this help
  -i input             Input image or folder. Default: inputs/whole_imgs
  -o output            Output folder. Default: results
  -v version           GFPGAN model version. Options: 1 | 1.2 | 1.3. Default: 1.3
  -s upscale           The final upsampling scale of the image. Default: 2
  -bg_upsampler        Background upsampler. Default: realesrgan
  -bg_tile             Tile size for the background upsampler; 0 for no tile during testing. Default: 400
  -suffix              Suffix of the restored faces
  -only_center_face    Only restore the center face
  -aligned             Inputs are aligned faces
  -ext                 Image extension. Options: auto | jpg | png; auto means using the same extension as the input. Default: auto

If you want to use the original model in our paper, please see PaperModel.md for installation and inference.
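
You can also call GFPGAN from Python through the GFPGANer helper used by the inference script. A minimal sketch (the input file name is an assumption):

    import cv2
    from gfpgan import GFPGANer

    # Restore all faces in one image with the V1.3 (clean) model.
    restorer = GFPGANer(
        model_path='experiments/pretrained_models/GFPGANv1.3.pth',
        upscale=2,             # final upsampling scale, like -s 2
        arch='clean',          # V1.2 / V1.3 use the clean architecture
        channel_multiplier=2,
        bg_upsampler=None)     # or a RealESRGANer instance for backgrounds

    img = cv2.imread('inputs/whole_imgs/00.jpg', cv2.IMREAD_COLOR)  # assumed path
    cropped_faces, restored_faces, restored_img = restorer.enhance(
        img, has_aligned=False, only_center_face=False, paste_back=True)
    cv2.imwrite('results/restored.jpg', restored_img)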

🏰 Model Zoo

Version | Model Name | Description
V1.3 | GFPGANv1.3.pth | Based on V1.2; more natural restoration results; better results on very low-quality / high-quality inputs.
V1.2 | GFPGANCleanv1-NoCE-C2.pth | No colorization; no CUDA extensions required. Trained with more data and with pre-processing.
V1 | GFPGANv1.pth | The paper model, with colorization.

The comparisons are in Comparisons.md.

Note that V1.3 is not always better than V1.2. You may need to select different models based on your purpose and inputs.

Version | Strengths | Weaknesses
V1.3 | ✓ natural outputs; ✓ better results on very low-quality inputs; ✓ works on relatively high-quality inputs; ✓ allows repeated (twice) restorations | ✗ not very sharp; ✗ slight change in identity
V1.2 | ✓ sharper outputs; ✓ with beauty makeup | ✗ some outputs are unnatural

You can find more models (such as the discriminators) here: [Google Drive], or [Tencent Cloud 腾讯微云].

💻 Training

We provide the training codes for GFPGAN (used in our paper).
You could improve it according to your own needs.

Tips

  1. More high-quality faces can improve the restoration quality.
  2. You may need to perform some pre-processing, such as beauty makeup.

Procedures

(You can try a simple version (options/train_gfpgan_v1_simple.yml) that does not require face component landmarks.)

  1. Dataset preparation: FFHQ

  2. Download pre-trained models and other data. Put them in the experiments/pretrained_models folder.

    1. Pre-trained StyleGAN2 model: StyleGAN2_512_Cmul1_FFHQ_B12G4_scratch_800k.pth
    2. Component locations of FFHQ: FFHQ_eye_mouth_landmarks_512.pth
    3. A simple ArcFace model: arcface_resnet18.pth
  3. Modify the configuration file options/train_gfpgan_v1.yml accordingly.

  4. Training

python -m torch.distributed.launch --nproc_per_node=4 --master_port=22021 gfpgan/train.py -opt options/train_gfpgan_v1.yml --launcher pytorch
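
For single-GPU debugging (a hedged suggestion based on BasicSR's launcher handling, not an officially documented path; also set num_gpu: 1 in the .yml), the same script can run without the distributed launcher:

python gfpgan/train.py -opt options/train_gfpgan_v1.yml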

📜 License and Acknowledgement

GFPGAN is released under Apache License Version 2.0.

BibTeX

@InProceedings{wang2021gfpgan,
    author = {Xintao Wang and Yu Li and Honglun Zhang and Ying Shan},
    title = {Towards Real-World Blind Face Restoration with Generative Facial Prior},
    booktitle={The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
    year = {2021}
}

📧 Contact

If you have any questions, please email [email protected] or [email protected].

GFPGAN's People

Contributors

ak391, amckenna41, bramton, chenxwh, darbazali, mdanish-kh, mostafavtp, tuhins, vincentsc, wscats, xinntao


GFPGAN's Issues

landmark

Could you tell me what the three sets of coordinates (eyes and mouth) in the pretrained landmarks represent? I want to train on my own dataset, so I need to regenerate these landmarks. Thanks.

Training problem

Hello, I'd like to ask: why does training run normally on my local machine, but the loss is always NaN on the cluster? I tried rebuilding the environment to keep the local and cluster environments identical, but in the end the local run is still normal and the cluster run abnormal. What could be going on? Thanks.

update broke the program

I'm getting:

Traceback (most recent call last):
  File "E:\location to GFPGAN\inference_gfpgan.py", line 98, in <module>
    main()
  File "E:\location to GFPGAN\inference_gfpgan.py", line 52, in main
    restorer = GFPGANer(
  File "E:\location to GFPGAN\gfpgan\utils.py", line 50, in __init__
    self.face_helper = FaceRestoreHelper(
TypeError: __init__() got an unexpected keyword argument 'device'

That is without BASICSR_JIT=True.

With BASICSR_JIT=True it just hangs: resources are not used at all, and it just waits and waits. I have already waited 30 minutes.

The old version works, though. Is it a BasicSR issue?

Also, the --upscale_factor option is now just --upscale.

Error with the new model

$ python inference_gfpgan.py --upscale_factor 1 --model_path experiments/pretrained_models/GFPGANCleanv1-NoCE-C2.pth --test_path inputs/whole_imgs --paste_back
Traceback (most recent call last):
  File "E:\IA\GFPGAN\inference_gfpgan.py", line 9, in <module>
    from gfpgan import GFPGANer
  File "E:\IA\GFPGAN\gfpgan\__init__.py", line 6, in <module>
    from .version import __gitsha__, __version__
ModuleNotFoundError: No module named 'gfpgan.version'
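
(A likely cause, offered as an observation rather than an official fix: gfpgan/version.py is generated by the repo's setup step, so this import fails if the code is run before completing the installation commands, e.g. python setup.py develop.)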

Can't use .half() with new GFPGAN clean

Hi! With the previous paper version, I used FP16 weights with the model, which gave faster processing. Now very weird results are generated if I apply model.half() and use HalfTensors as the model input.

This is what I am doing:

For loading the model:

gfpgan = gfpgan.half()

For the model inputs:

cropped_faces_t = cropped_faces_t.half()

When I remove the lines above, everything works fine, but inference time is 44% longer than with the paper version's half-precision weights (the quality doesn't look as good either, but that's another story). With .half(), inference time is only 20% longer, but no usable result is generated.

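A workaround worth trying (a sketch, not an official answer): instead of converting the weights with .half(), run inference under torch autocast, which keeps precision-sensitive layers in FP32 while still executing most ops in FP16:

    import torch

    # Sketch under assumptions: `gfpgan` is the loaded FP32 GFPGANv1Clean model
    # and `cropped_faces_t` the preprocessed input tensor, both as in the
    # original setup. Autocast avoids storing weights in FP16, which is a
    # common source of garbage outputs with .half().
    gfpgan = gfpgan.cuda().eval()
    cropped_faces_t = cropped_faces_t.cuda()

    with torch.no_grad(), torch.cuda.amp.autocast():
        output = gfpgan(cropped_faces_t, return_rgb=False)[0]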

"stylegan2_arch.py" seems to be missing

GFPGANv1 seems to import "stylegan2_arch.py" here, but there is no such file in this repo. StyleGAN2Generator and several other things are missing. This file from the BasicSR repo should be included in this repo as well.

Bug when resuming training from a checkpoint

When I tried to resume training from a checkpoint, I modified the following in the .yml file:

# path
path:
  pretrain_network_g: experiments/train_GFPGANv1_512/models/net_g_490000.pth
  param_key_g: params_ema
  strict_load_g: ~
  pretrain_network_d: experiments/train_GFPGANv1_512/models/net_d_490000.pth
  pretrain_network_d_left_eye: experiments/train_GFPGANv1_512/models/net_d_left_eye_490000.pth
  pretrain_network_d_right_eye: experiments/train_GFPGANv1_512/models/net_d_right_eye_490000.pth
  pretrain_network_d_mouth: experiments/train_GFPGANv1_512/models/net_d_mouth_490000.pth
  pretrain_network_identity: experiments/pretrained_models/arcface_resnet18.pth
  # resume
  resume_state: experiments/train_GFPGANv1_512/training_states/490000.state
  ignore_resume_networks: ['network_identity']

I did not modify the pretrain_network_identity entry. But then I got this error:

FileNotFoundError: [Errno 2] No such file or directory: 'GFPGAN/experiments/train_GFPGANv1_512/models/net_identity_490000.pth'

I was completely confused. Checking the log, which prints all options at startup, pretrain_network_identity had already been changed:

2021-07-11 22:21:11,000 INFO: Loading ResNetArcFace model from GFPGAN/experiments/train_GFPGANv1_512/models/net_identity_490000.pth.

FaceRestoreHelper

I get the following error with the latest version:

Traceback (most recent call last):
  File "inference_gfpgan_full.py", line 132, in <module>
    face_helper = FaceRestoreHelper(
TypeError: __init__() got an unexpected keyword argument 'device'

Removing the device parameter in the call to FaceRestoreHelper solved my problem. The call in inference_gfpgan_full.py then becomes:

    # initialize face helper
    face_helper = FaceRestoreHelper(
        args.upscale_factor,
        face_size=512,
        crop_ratio=(1, 1),
        det_model='retinaface_resnet50',
        save_ext='png')

Training on custom dataset

Hello, first of all fantastic work and thanks for sharing.

I would like to know how I can train the model on a custom dataset.
I noticed in the training explanation that there are 3 files I need to download, excluding the dataset:
- Pre-trained StyleGAN2 model
- FFHQ component locations
- ArcFace

I know that ArcFace is used for face recognition. I assume the pretrained StyleGAN2 model is for training the GFPGAN model from scratch, so if I wanted to continue training I could just use the model you provided for inference and continue training on my custom dataset. Finally, for the component locations file: will it work with my custom dataset, or is it specific to the FFHQ dataset? And if it won't work, how can I create my own so it works with my dataset?

I hope my issue is clear, thanks.

GAN training

Hello, a question has been puzzling me. For GAN training, referring to https://github.com/rosinality/stylegan2-pytorch/blob/master/train.py:
when training the Discriminator, rosinality disables gradient updates for the Generator:

        requires_grad(generator, False)
        requires_grad(discriminator, True)

Likewise, when training the Generator, gradient updates for the Discriminator are disabled:

        requires_grad(generator, True)
        requires_grad(discriminator, False)

In your code I only found gradient control for the Discriminator, and none for the Generator:

        for p in self.net_d.parameters():
            p.requires_grad = False

&

        for p in self.net_d.parameters():
            p.requires_grad = True

1. Does this mean the Generator always receives gradient updates (even while the Discriminator is being trained)? If so, is that equivalent to every batch being backpropagated through the Generator twice?
2. If the Generator's gradient updates are also gated, where is that implemented?
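
For reference, a generic sketch of the alternating scheme the question describes (this illustrates rosinality's pattern, not GFPGAN's actual code):

    import torch

    def requires_grad(model: torch.nn.Module, flag: bool) -> None:
        # Toggle gradient tracking for every parameter of a module.
        for p in model.parameters():
            p.requires_grad = flag

    # Discriminator step: freeze G, update D.
    #   requires_grad(generator, False); requires_grad(discriminator, True)
    # Generator step: freeze D, update G.
    #   requires_grad(generator, True); requires_grad(discriminator, False)

Note also (a general observation, not a statement about this repo's code): many BasicSR-style models feed the generator output through output.detach() during the discriminator step, which already blocks gradients from reaching the generator even if its parameters have requires_grad=True.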

Finetuning provided model

Hello and thanks for an awesome project!

I am trying to finetune this model: https://github.com/TencentARC/GFPGAN/releases/download/v0.2.0/GFPGANCleanv1-NoCE-C2.pth, but when I have

network_g:
  type: GFPGANv1

I get a lot of mismatches between the layers, so I figured the models are different.

Am I right to assume that this checkpoint is the GFPGANv1Clean model, and if so, how can I finetune it?

When I change the config to:

network_g:
  type: GFPGANv1Clean

I get:

KeyError: "No object named 'GFPGANv1Clean' found in 'arch' registry!"
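
For context, an assumption based on how BasicSR's registry works rather than an official answer: architectures become visible to the 'arch' registry only after their module has been imported, because registration happens in the @ARCH_REGISTRY.register() decorators at import time. A minimal check:

    # Hypothetical sketch: importing gfpgan's arch package should register
    # GFPGANv1Clean with BasicSR's ARCH_REGISTRY.
    import gfpgan.archs  # noqa: F401  (side effect: runs the registrations)

    from basicsr.utils.registry import ARCH_REGISTRY
    net_cls = ARCH_REGISTRY.get('GFPGANv1Clean')  # raises KeyError if unregistered
    print(net_cls)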

using my own pretrained stylegan2 network

Hello :)
Thank you very much for your repo. I got a lot of beautiful photos. :))))

I am trying to use my own pretrained StyleGAN from the official repo:
https://github.com/NVlabs/stylegan2

But I cannot understand how I should use the pkl file that I have.

I converted it to a pt file with
https://github.com/dvschultz/ai/blob/master/Convert_pkl_to_pt.ipynb
and changed the yml:
decoder_load_path: experiments/pretrained_models/stylegan2-ffhq-config-f.pt
but got this error:
KeyError: 'params_ema'

Maybe I should use another repo to get a correct pth file for the decoder?

Thank you :)))
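
One possible direction (a sketch inferred from the error, assuming BasicSR-style loaders look up the state dict under a 'params_ema' or 'params' key): re-save the converted file with that key. Note the parameter names must still match BasicSR's StyleGAN2Generator for loading to succeed.

    import torch

    # Hypothetical sketch: wrap a raw converted StyleGAN2 state dict under the
    # 'params_ema' key that the loader expects. The 'g_ema' key below is an
    # assumption about the rosinality-style converted checkpoint layout.
    sd = torch.load('experiments/pretrained_models/stylegan2-ffhq-config-f.pt',
                    map_location='cpu')
    if 'params_ema' not in sd:
        sd = {'params_ema': sd.get('g_ema', sd)}
    torch.save(sd, 'experiments/pretrained_models/stylegan2-ffhq-config-f_ema.pt')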

Better results with V1; V2 is too sharp

Faces come out better with the V1 model and look more natural. V2 is too sharp, but I don't get the color problem and the final result is more homogeneous throughout the image. Many thanks.

About the RGB results from Degradation Removal

Thanks for sharing your wonderful work!!!!!!
I have tried to visualize the RGB restoration result (512x512) from the degradation removal module using the code below.

    _, output_list = gfpgan(cropped_face_t, return_rgb=True)
    output = output_list[-1]
    # convert to image
    rec_face = tensor2img(output.squeeze(0), rgb2bgr=True, min_max=(-1, 1))

But I got the following result: the color of the restoration from the degradation removal module is different from the original image. Is this normal? Many thanks, have a nice day.

Hugging Face Hub Integration

Hi GFPGAN team!

Would you be interested in sharing the pretrained models in the Hugging Face Hub? The Hub offers free hosting of models (over 10,000 models have been uploaded by many research organizations) and it would make your work more accessible and visible to others.

We could create a widget for anyone to try the model directly in the browser. The only thing required would be to upload the models to the Hub. I'm happy to answer any questions about this.

Happy to hear your thoughts,
Omar

RuntimeError:"Distributed package doesn't have NCCL" ???

How do I train a custom model under Windows 10 with Miniconda?
Inference works great, but when I try to start a custom training, only errors come up.
The latest RTX/Quadro driver, Nvidia CUDA Toolkit 11.3, cuDNN for CUDA 11.3, and the MS VS build tools are installed.

My Miniconda env: (screenshot attached)

Training:
python -m torch.distributed.launch --nproc_per_node=4 --master_port=22021 gfpgan\train.py -opt c:\GFPGAN\options\test.yml --launcher pytorch

Train_Error.txt
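
Not an official fix, but relevant background: NCCL is Linux-only, so on Windows PyTorch distributed has to use the Gloo backend (or training must run without the distributed launcher). A quick check:

    import torch.distributed as dist

    # NCCL ships only in Linux builds of PyTorch; on Windows this prints False.
    print(dist.is_nccl_available())
    print(dist.is_gloo_available())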

Results are strange when testing on whole images

Hi, I'm using the pretrained model for testing. When testing on aligned cropped faces, the results are promising.
However, when testing on whole images the results are wrong; face detection seems to be failing, because in the results cmp folder there are outputs with no faces, or with several faces in one image.

I didn't change the code. Is there anything wrong with whole-image testing? Thanks in advance.

train

This training process looks a bit strange; is there something wrong with my configuration file? I am training at 1024 resolution.

2021-07-13 16:20:56,380 INFO: [train..][epoch: 0, iter: 100, lr:(2.000e-03,)] [eta: 112 days, 17:17:33, time (data): 1.796 (0.015)] l_g_pix: inf l_p_8: nan l_p_16: nan l_p_32: nan l_p_64: nan l_p_128: nan l_p_256: nan l_p_512: inf l_p_1024: inf l_g_percep: inf l_g_style: inf l_g_gan: nan l_g_gan_left_eye: nan l_g_gan_right_eye: nan l_g_gan_mouth: nan l_g_comp_style_loss: nan l_identity: nan l_d: nan real_score: nan fake_score: nan l_d_left_eye: nan l_d_right_eye: nan l_d_mouth: nan

ImportError: No module named 'deform_conv'

I ran Quick Inference after finishing the installation:
"python inference_gfpgan_full.py --model_path experiments/pretrained_models/GFPGANv1.pth --test_path inputs/cropped_faces --aligned"
Then I encountered the following error:

(GFPGAN) D:\Profession\Git\GFPGAN>python inference_gfpgan_full.py --model_path experiments/pretrained_models/GFPGANv1.pth --test_path inputs/cropped_faces --aligned
----compiler_info: Microsoft (R) C/C++ Optimizing Compiler Version 19.16.27045 for x86
Copyright (C) Microsoft Corporation. All rights reserved.

Usage: cl [ options... ] filename... [ /link linkoptions... ]

----match: <re.Match object; span=(35, 46), match='19.16.27045'>
Traceback (most recent call last):
  File "D:\Profession\ProgramData\Anaconda3\envs\GFPGAN\lib\site-packages\basicsr\ops\dcn\deform_conv.py", line 10, in <module>
    from . import deform_conv_ext
ImportError: cannot import name 'deform_conv_ext' from 'basicsr.ops.dcn' (D:\Profession\ProgramData\Anaconda3\envs\GFPGAN\lib\site-packages\basicsr\ops\dcn\__init__.py)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "inference_gfpgan_full.py", line 10, in <module>
    from archs.gfpganv1_arch import GFPGANv1
  File "D:\Profession\Git\GFPGAN\archs\__init__.py", line 4, in <module>
    from basicsr.utils import scandir
  File "D:\Profession\ProgramData\Anaconda3\envs\GFPGAN\lib\site-packages\basicsr\__init__.py", line 3, in <module>
    from .archs import *
  File "D:\Profession\ProgramData\Anaconda3\envs\GFPGAN\lib\site-packages\basicsr\archs\__init__.py", line 16, in <module>
    arch_modules = [importlib.import_module(f'basicsr.archs.{file_name}') for file_name in arch_filenames]
  File "D:\Profession\ProgramData\Anaconda3\envs\GFPGAN\lib\site-packages\basicsr\archs\__init__.py", line 16, in <listcomp>
    arch_modules = [importlib.import_module(f'basicsr.archs.{file_name}') for file_name in arch_filenames]
  File "D:\Profession\ProgramData\Anaconda3\envs\GFPGAN\lib\importlib\__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "D:\Profession\ProgramData\Anaconda3\envs\GFPGAN\lib\site-packages\basicsr\archs\edsr_arch.py", line 4, in <module>
    from basicsr.archs.arch_util import ResidualBlockNoBN, Upsample, make_layer
  File "D:\Profession\ProgramData\Anaconda3\envs\GFPGAN\lib\site-packages\basicsr\archs\arch_util.py", line 8, in <module>
    from basicsr.ops.dcn import ModulatedDeformConvPack, modulated_deform_conv
  File "D:\Profession\ProgramData\Anaconda3\envs\GFPGAN\lib\site-packages\basicsr\ops\dcn\__init__.py", line 1, in <module>
    from .deform_conv import (DeformConv, DeformConvPack, ModulatedDeformConv, ModulatedDeformConvPack, deform_conv,
  File "D:\Profession\ProgramData\Anaconda3\envs\GFPGAN\lib\site-packages\basicsr\ops\dcn\deform_conv.py", line 22, in <module>
    os.path.join(module_path, 'src', 'deform_conv_cuda_kernel.cu'),
  File "D:\Profession\ProgramData\Anaconda3\envs\GFPGAN\lib\site-packages\torch\utils\cpp_extension.py", line 983, in load
    keep_intermediates=keep_intermediates)
  File "D:\Profession\ProgramData\Anaconda3\envs\GFPGAN\lib\site-packages\torch\utils\cpp_extension.py", line 1199, in _jit_compile
    return _import_module_from_library(name, build_directory, is_python_module)
  File "D:\Profession\ProgramData\Anaconda3\envs\GFPGAN\lib\site-packages\torch\utils\cpp_extension.py", line 1546, in _import_module_from_library
    file, path, description = imp.find_module(module_name, [path])
  File "D:\Profession\ProgramData\Anaconda3\envs\GFPGAN\lib\imp.py", line 296, in find_module
    raise ImportError(_ERR_MSG.format(name), name=name)
ImportError: No module named 'deform_conv'

I found that someone else had the same problem, but it wasn't solved.
Is there a solution to this error? Thanks for any help!

Love it, thank you! Settings...

Hey

It's incredible. I was able to restore some old family photos and it's just... incredible! Thank you so much!

However... is it possible to change a few params to edit the output?
Size: it's limited to 512; is there a way to go higher?
Auto-cropping: is there a way to disable it somehow? It crops/rotates the head... just wondering if I can control it?

I tried editing inference_gfpgan_full, but nothing good came of it :D

In any case, thank you for this amazing tool!

Is 300 images per hour normal?

Hi, I set up a local virtual environment on Windows 10. In CUDA mode I process about 300 images per hour, using about 2 GB of GPU memory. But others seem to average 1 s per image; where could my problem be? The images are 512 px portraits with little background, mostly headshots.

Is it possible to improve this?

It would be great if, in one of the next releases, the code were optimized to denoise/de-artifact/deblur the whole image, as it now does for the square around faces...
Thank you, dev, for the hard work!

I can't use GFPGANv1.pth

I already use GFPGANCleanv1-NoCE-C2.pth and it is working great. Please help me with this problem...

Thanks.

python inference_gfpgan.py --upscale 2 --model_path experiments/pretrained_models/GFPGANv1.pth --test_path inputs/whole_imgs --save_root results

inference_gfpgan.py:37: UserWarning: The unoptimized RealESRGAN is very slow on CPU. We do not use it. If you really want to use it, please modify the corresponding codes.
  warnings.warn('The unoptimized RealESRGAN is very slow on CPU. We do not use it. '
Traceback (most recent call last):
  File "inference_gfpgan.py", line 98, in <module>
    main()
  File "inference_gfpgan.py", line 52, in main
    restorer = GFPGANer(
  File "C:\Users\Zeus\Downloads\GFPGAN\gfpgan\utils.py", line 65, in __init__
    self.gfpgan.load_state_dict(loadnet[keyname], strict=True)
  File "C:\Users\Zeus\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 1406, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for GFPGANv1Clean:
        Missing key(s) in state_dict: "conv_body_first.weight", "conv_body_first.bias", "conv_body_down.0.conv1.weight", "conv_body_down.0.conv1.bias", "conv_body_down.0.conv2.weight", "conv_body_down.0.conv2.bias", "conv_body_down.0.skip.weight", "conv_body_down.1.conv1.weight", "conv_body_down.1.conv1.bias", "conv_body_down.1.conv2.weight", "conv_body_down.1.conv2.bias", "conv_body_down.1.skip.weight", "conv_body_down.2.conv1.weight", "conv_body_down.2.conv1.bias", "conv_body_down.2.conv2.weight", "conv_body_down.2.conv2.bias", "conv_body_down.2.skip.weight", "conv_body_down.3.conv1.weight", "conv_body_down.3.conv1.bias", "conv_body_down.3.conv2.weight", "conv_body_down.3.conv2.bias", "conv_body_down.3.skip.weight", "conv_body_down.4.conv1.weight", "conv_body_down.4.conv1.bias", "conv_body_down.4.conv2.weight", "conv_body_down.4.conv2.bias", "conv_body_down.4.skip.weight", "conv_body_down.5.conv1.weight", "conv_body_down.5.conv1.bias", "conv_body_down.5.conv2.weight", "conv_body_down.5.conv2.bias", "conv_body_down.5.skip.weight", "conv_body_down.6.conv1.weight", "conv_body_down.6.conv1.bias", "conv_body_down.6.conv2.weight", "conv_body_down.6.conv2.bias", "conv_body_down.6.skip.weight", "final_conv.weight", "final_conv.bias", "conv_body_up.0.conv1.weight", "conv_body_up.0.conv1.bias", "conv_body_up.0.conv2.bias", "conv_body_up.1.conv1.weight", "conv_body_up.1.conv1.bias", "conv_body_up.1.conv2.bias", "conv_body_up.2.conv1.weight", "conv_body_up.2.conv1.bias", "conv_body_up.2.conv2.bias", "conv_body_up.3.conv1.weight", "conv_body_up.3.conv1.bias", "conv_body_up.3.conv2.bias", "conv_body_up.4.conv1.weight", "conv_body_up.4.conv1.bias", "conv_body_up.4.conv2.bias", "conv_body_up.5.conv1.weight", "conv_body_up.5.conv1.bias", "conv_body_up.5.conv2.bias", "conv_body_up.6.conv1.weight", "conv_body_up.6.conv1.bias", "conv_body_up.6.conv2.bias", "stylegan_decoder.style_mlp.9.weight", "stylegan_decoder.style_mlp.9.bias", "stylegan_decoder.style_mlp.11.weight", "stylegan_decoder.style_mlp.11.bias", "stylegan_decoder.style_mlp.13.weight", "stylegan_decoder.style_mlp.13.bias", "stylegan_decoder.style_mlp.15.weight", "stylegan_decoder.style_mlp.15.bias", "stylegan_decoder.style_conv1.bias", "stylegan_decoder.style_convs.0.bias", "stylegan_decoder.style_convs.1.bias", "stylegan_decoder.style_convs.2.bias", "stylegan_decoder.style_convs.3.bias", "stylegan_decoder.style_convs.4.bias", "stylegan_decoder.style_convs.5.bias", "stylegan_decoder.style_convs.6.bias", "stylegan_decoder.style_convs.7.bias", "stylegan_decoder.style_convs.8.bias", "stylegan_decoder.style_convs.9.bias", "stylegan_decoder.style_convs.10.bias", "stylegan_decoder.style_convs.11.bias", "stylegan_decoder.style_convs.12.bias", "stylegan_decoder.style_convs.13.bias".
        Unexpected key(s) in state_dict: "conv_body_first.0.weight", "conv_body_first.1.bias", "conv_body_down.0.conv1.0.weight", "conv_body_down.0.conv1.1.bias", "conv_body_down.0.conv2.1.weight", "conv_body_down.0.conv2.2.bias", "conv_body_down.0.skip.1.weight", "conv_body_down.1.conv1.0.weight", "conv_body_down.1.conv1.1.bias", "conv_body_down.1.conv2.1.weight", "conv_body_down.1.conv2.2.bias", "conv_body_down.1.skip.1.weight", "conv_body_down.2.conv1.0.weight", "conv_body_down.2.conv1.1.bias", "conv_body_down.2.conv2.1.weight", "conv_body_down.2.conv2.2.bias", "conv_body_down.2.skip.1.weight", "conv_body_down.3.conv1.0.weight", "conv_body_down.3.conv1.1.bias", "conv_body_down.3.conv2.1.weight", "conv_body_down.3.conv2.2.bias", "conv_body_down.3.skip.1.weight", "conv_body_down.4.conv1.0.weight", "conv_body_down.4.conv1.1.bias", "conv_body_down.4.conv2.1.weight", "conv_body_down.4.conv2.2.bias", "conv_body_down.4.skip.1.weight", "conv_body_down.5.conv1.0.weight", "conv_body_down.5.conv1.1.bias", "conv_body_down.5.conv2.1.weight", "conv_body_down.5.conv2.2.bias", "conv_body_down.5.skip.1.weight", "conv_body_down.6.conv1.0.weight", "conv_body_down.6.conv1.1.bias", "conv_body_down.6.conv2.1.weight", "conv_body_down.6.conv2.2.bias", "conv_body_down.6.skip.1.weight", "final_conv.0.weight", "final_conv.1.bias", "conv_body_up.0.conv1.0.weight", "conv_body_up.0.conv1.1.bias", "conv_body_up.0.conv2.activation.bias", "conv_body_up.1.conv1.0.weight", "conv_body_up.1.conv1.1.bias", "conv_body_up.1.conv2.activation.bias", "conv_body_up.2.conv1.0.weight", "conv_body_up.2.conv1.1.bias", "conv_body_up.2.conv2.activation.bias", "conv_body_up.3.conv1.0.weight", "conv_body_up.3.conv1.1.bias", "conv_body_up.3.conv2.activation.bias", "conv_body_up.4.conv1.0.weight", "conv_body_up.4.conv1.1.bias", "conv_body_up.4.conv2.activation.bias", "conv_body_up.5.conv1.0.weight", "conv_body_up.5.conv1.1.bias", "conv_body_up.5.conv2.activation.bias", "conv_body_up.6.conv1.0.weight", "conv_body_up.6.conv1.1.bias", "conv_body_up.6.conv2.activation.bias", "stylegan_decoder.style_mlp.2.weight", "stylegan_decoder.style_mlp.2.bias", "stylegan_decoder.style_mlp.4.weight", "stylegan_decoder.style_mlp.4.bias", "stylegan_decoder.style_mlp.6.weight", "stylegan_decoder.style_mlp.6.bias", "stylegan_decoder.style_mlp.8.weight", "stylegan_decoder.style_mlp.8.bias", "stylegan_decoder.style_conv1.activate.bias", "stylegan_decoder.style_convs.0.activate.bias", "stylegan_decoder.style_convs.1.activate.bias", "stylegan_decoder.style_convs.2.activate.bias", "stylegan_decoder.style_convs.3.activate.bias", "stylegan_decoder.style_convs.4.activate.bias", "stylegan_decoder.style_convs.5.activate.bias", "stylegan_decoder.style_convs.6.activate.bias", "stylegan_decoder.style_convs.7.activate.bias", "stylegan_decoder.style_convs.8.activate.bias", "stylegan_decoder.style_convs.9.activate.bias", "stylegan_decoder.style_convs.10.activate.bias", "stylegan_decoder.style_convs.11.activate.bias", "stylegan_decoder.style_convs.12.activate.bias", "stylegan_decoder.style_convs.13.activate.bias".
        size mismatch for conv_body_up.3.conv2.weight: copying a param with shape torch.Size([128, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
        size mismatch for conv_body_up.3.skip.weight: copying a param with shape torch.Size([128, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 256, 1, 1]).
        size mismatch for conv_body_up.4.conv2.weight: copying a param with shape torch.Size([64, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 256, 3, 3]).
        size mismatch for conv_body_up.4.skip.weight: copying a param with shape torch.Size([64, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([128, 256, 1, 1]).
        size mismatch for conv_body_up.5.conv2.weight: copying a param with shape torch.Size([32, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 128, 3, 3]).
        size mismatch for conv_body_up.5.skip.weight: copying a param with shape torch.Size([32, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 128, 1, 1]).
        size mismatch for conv_body_up.6.conv2.weight: copying a param with shape torch.Size([16, 32, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 64, 3, 3]).
        size mismatch for conv_body_up.6.skip.weight: copying a param with shape torch.Size([16, 32, 1, 1]) from checkpoint, the shape in current model is torch.Size([32, 64, 1, 1]).
        size mismatch for toRGB.3.weight: copying a param with shape torch.Size([3, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([3, 256, 1, 1]).
        size mismatch for toRGB.4.weight: copying a param with shape torch.Size([3, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([3, 128, 1, 1]).
        size mismatch for toRGB.5.weight: copying a param with shape torch.Size([3, 32, 1, 1]) from checkpoint, the shape in current model is torch.Size([3, 64, 1, 1]).
        size mismatch for toRGB.6.weight: copying a param with shape torch.Size([3, 16, 1, 1]) from checkpoint, the shape in current model is torch.Size([3, 32, 1, 1]).
        size mismatch for stylegan_decoder.style_convs.6.modulated_conv.weight: copying a param with shape torch.Size([1, 256, 512, 3, 3]) from checkpoint, the shape in current model is torch.Size([1, 512, 512, 3, 3]).
        size mismatch for stylegan_decoder.style_convs.7.modulated_conv.weight: copying a param with shape torch.Size([1, 256, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([1, 512, 512, 3, 3]).
        size mismatch for stylegan_decoder.style_convs.7.modulated_conv.modulation.weight: copying a param with shape torch.Size([256, 512]) from checkpoint, the shape in current model is torch.Size([512, 512]).
        size mismatch for stylegan_decoder.style_convs.7.modulated_conv.modulation.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([512]).
        size mismatch for stylegan_decoder.style_convs.8.modulated_conv.weight: copying a param with shape torch.Size([1, 128, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([1, 256, 512, 3, 3]).
        size mismatch for stylegan_decoder.style_convs.8.modulated_conv.modulation.weight: copying a param with shape torch.Size([256, 512]) from checkpoint, the shape in current model is torch.Size([512, 512]).
        size mismatch for stylegan_decoder.style_convs.8.modulated_conv.modulation.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([512]).
        size mismatch for stylegan_decoder.style_convs.9.modulated_conv.weight: copying a param with shape torch.Size([1, 128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([1, 256, 256, 3, 3]).
        size mismatch for stylegan_decoder.style_convs.9.modulated_conv.modulation.weight: copying a param with shape torch.Size([128, 512]) from checkpoint, the shape in current model is torch.Size([256, 512]).
        size mismatch for stylegan_decoder.style_convs.9.modulated_conv.modulation.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
        size mismatch for stylegan_decoder.style_convs.10.modulated_conv.weight: copying a param with shape torch.Size([1, 64, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([1, 128, 256, 3, 3]).
        size mismatch for stylegan_decoder.style_convs.10.modulated_conv.modulation.weight: copying a param with shape torch.Size([128, 512]) from checkpoint, the shape in current model is torch.Size([256, 512]).
        size mismatch for stylegan_decoder.style_convs.10.modulated_conv.modulation.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
        size mismatch for stylegan_decoder.style_convs.11.modulated_conv.weight: copying a param with shape torch.Size([1, 64, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([1, 128, 128, 3, 3]).
        size mismatch for stylegan_decoder.style_convs.11.modulated_conv.modulation.weight: copying a param with shape torch.Size([64, 512]) from checkpoint, the shape in current model is torch.Size([128, 512]).
        size mismatch for stylegan_decoder.style_convs.11.modulated_conv.modulation.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128]).
        size mismatch for stylegan_decoder.style_convs.12.modulated_conv.weight: copying a param with shape torch.Size([1, 32, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([1, 64, 128, 3, 3]).
        size mismatch for stylegan_decoder.style_convs.12.modulated_conv.modulation.weight: copying a param with shape torch.Size([64, 512]) from checkpoint, the shape in current model is torch.Size([128, 512]).
        size mismatch for stylegan_decoder.style_convs.12.modulated_conv.modulation.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128]).
        size mismatch for stylegan_decoder.style_convs.13.modulated_conv.weight: copying a param with shape torch.Size([1, 32, 32, 3, 3]) from checkpoint, the shape in current model is torch.Size([1, 64, 64, 3, 3]).
        size mismatch for stylegan_decoder.style_convs.13.modulated_conv.modulation.weight: copying a param with shape torch.Size([32, 512]) from checkpoint, the shape in current model is torch.Size([64, 512]).
        size mismatch for stylegan_decoder.style_convs.13.modulated_conv.modulation.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([64]).
        size mismatch for stylegan_decoder.to_rgbs.3.modulated_conv.weight: copying a param with shape torch.Size([1, 3, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 3, 512, 1, 1]).
        size mismatch for stylegan_decoder.to_rgbs.3.modulated_conv.modulation.weight: copying a param with shape torch.Size([256, 512]) from checkpoint, the shape in current model is torch.Size([512, 512]).
        size mismatch for stylegan_decoder.to_rgbs.3.modulated_conv.modulation.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([512]).
        size mismatch for stylegan_decoder.to_rgbs.4.modulated_conv.weight: copying a param with shape torch.Size([1, 3, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 3, 256, 1, 1]).
        size mismatch for stylegan_decoder.to_rgbs.4.modulated_conv.modulation.weight: copying a param with shape torch.Size([128, 512]) from checkpoint, the shape in current model is torch.Size([256, 512]).
        size mismatch for stylegan_decoder.to_rgbs.4.modulated_conv.modulation.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
        size mismatch for stylegan_decoder.to_rgbs.5.modulated_conv.weight: copying a param with shape torch.Size([1, 3, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 3, 128, 1, 1]).
        size mismatch for stylegan_decoder.to_rgbs.5.modulated_conv.modulation.weight: copying a param with shape torch.Size([64, 512]) from checkpoint, the shape in current model is torch.Size([128, 512]).
        size mismatch for stylegan_decoder.to_rgbs.5.modulated_conv.modulation.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128]).
        size mismatch for stylegan_decoder.to_rgbs.6.modulated_conv.weight: copying a param with shape torch.Size([1, 3, 32, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 3, 64, 1, 1]).
        size mismatch for stylegan_decoder.to_rgbs.6.modulated_conv.modulation.weight: copying a param with shape torch.Size([32, 512]) from checkpoint, the shape in current model is torch.Size([64, 512]).
        size mismatch for stylegan_decoder.to_rgbs.6.modulated_conv.modulation.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([64]).
        size mismatch for condition_scale.3.0.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
        size mismatch for condition_scale.3.0.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
        size mismatch for condition_scale.3.2.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
        size mismatch for condition_scale.3.2.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
        size mismatch for condition_scale.4.0.weight: copying a param with shape torch.Size([64, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
        size mismatch for condition_scale.4.0.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128]).
        size mismatch for condition_scale.4.2.weight: copying a param with shape torch.Size([64, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
        size mismatch for condition_scale.4.2.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128]).
        size mismatch for condition_scale.5.0.weight: copying a param with shape torch.Size([32, 32, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
        size mismatch for condition_scale.5.0.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([64]).
        size mismatch for condition_scale.5.2.weight: copying a param with shape torch.Size([32, 32, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
        size mismatch for condition_scale.5.2.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([64]).
        size mismatch for condition_scale.6.0.weight: copying a param with shape torch.Size([16, 16, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
        size mismatch for condition_scale.6.0.bias: copying a param with shape torch.Size([16]) from checkpoint, the shape in current model is torch.Size([32]).
        size mismatch for condition_scale.6.2.weight: copying a param with shape torch.Size([16, 16, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
        size mismatch for condition_scale.6.2.bias: copying a param with shape torch.Size([16]) from checkpoint, the shape in current model is torch.Size([32]).
        size mismatch for condition_shift.3.0.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
        size mismatch for condition_shift.3.0.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
        size mismatch for condition_shift.3.2.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
        size mismatch for condition_shift.3.2.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
        size mismatch for condition_shift.4.0.weight: copying a param with shape torch.Size([64, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
        size mismatch for condition_shift.4.0.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128]).
        size mismatch for condition_shift.4.2.weight: copying a param with shape torch.Size([64, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
        size mismatch for condition_shift.4.2.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128]).
        size mismatch for condition_shift.5.0.weight: copying a param with shape torch.Size([32, 32, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
        size mismatch for condition_shift.5.0.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([64]).
        size mismatch for condition_shift.5.2.weight: copying a param with shape torch.Size([32, 32, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
        size mismatch for condition_shift.5.2.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([64]).
        size mismatch for condition_shift.6.0.weight: copying a param with shape torch.Size([16, 16, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
        size mismatch for condition_shift.6.0.bias: copying a param with shape torch.Size([16]) from checkpoint, the shape in current model is torch.Size([32]).
        size mismatch for condition_shift.6.2.weight: copying a param with shape torch.Size([16, 16, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
        size mismatch for condition_shift.6.2.bias: copying a param with shape torch.Size([16]) from checkpoint, the shape in current model is torch.Size([32]).
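
A reading of this error (not an official diagnosis): GFPGANv1.pth is the paper model, built on the original GFPGANv1 architecture with CUDA extensions and channel_multiplier=1, while the default inference path instantiates GFPGANv1Clean; the missing/unexpected keys and size mismatches follow from loading one architecture's weights into the other. In code versions where GFPGANer accepts an arch argument, the paper weights would be loaded roughly like this (see PaperModel.md for the extension setup):

    from gfpgan import GFPGANer

    # Sketch, assuming a GFPGAN version whose GFPGANer exposes `arch`:
    restorer = GFPGANer(
        model_path='experiments/pretrained_models/GFPGANv1.pth',
        upscale=2,
        arch='original',       # 'clean' triggers exactly this state_dict error
        channel_multiplier=1)  # the paper model uses channel_multiplier=1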

Retraining on my own dataset

Hello, if I want to train on my own dataset, do I only need to regenerate the facial landmarks for it?
Another question: with everything else unchanged, if I increase the number of training GPUs to 8, does the learning rate need to be adjusted accordingly?

Complile error when building extension 'deform_conv'

I have followed the "Installation" steps but am encountering the error below. Details are attached. Any help would be appreciated.
When running the command: CUDA_HOME=/usr/local/cuda-10.2 BASICSR_EXT=True pip install basicsr
the following error shows:

ERROR: Command errored out with exit status 1:
  command: /data/anaconda3/envs/pytorch18/bin/python3.8 -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-ekcq6eum/basicsr_4146e4815a4548d195157ceecc274ac0/setup.py'"'"'; __file__='"'"'/tmp/pip-install-ekcq6eum/basicsr_4146e4815a4548d195157ceecc274ac0/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-uoof9gcv
  cwd: /tmp/pip-install-ekcq6eum/basicsr_4146e4815a4548d195157ceecc274ac0/
  Complete output (260 lines):
  Traceback (most recent call last):
    File "/tmp/pip-install-ekcq6eum/basicsr_4146e4815a4548d195157ceecc274ac0/basicsr/ops/dcn/deform_conv.py", line 10, in <module>
      from . import deform_conv_ext
  ImportError: cannot import name 'deform_conv_ext' from partially initialized module 'basicsr.ops.dcn' (most likely due to a circular import) (/tmp/pip-install-ekcq6eum/basicsr_4146e4815a4548d195157ceecc274ac0/basicsr/ops/dcn/__init__.py)

The error shows that deform_conv_ext can't be imported, which I think means the deform_conv_ext module was not built successfully. But I don't know how to check that when installing with pip, so I tried installing basicsr by git clone and compiling, following this link. But when I ran the command:
CUDA_HOME=/usr/local/cuda-10.2 BASICSR_EXT=True python setup.py develop
it failed when building the 'basicsr.ops.dcn.deform_conv_ext' extension.

I have googled it and found no useful information. Here is my environment:
gcc: 7.5.0
cuda: 10.2
torch.__config__.show():
'PyTorch built with:\n - GCC 7.3\n - C++ Version: 201402\n

Train with GPU and inference without GPU. Is it possible ?

Hello :)
Once more, thank you very much for your beautiful project!

  1. I trained a model on my own dataset: mymodel.pth
  2. I ran inference on CPU with your model GFPGANCleanv1-NoCE-C2.pth
  3. I see that GFPGANv1.pth (and mymodel.pth) is twice the size of GFPGANCleanv1-NoCE-C2.pth

So, how can I transform mymodel.pth for CPU inference? Or should I train another model?

Thank you :))
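
One possible direction (a sketch, assuming the 2x-sized training checkpoint stores both 'params' and 'params_ema' dicts, as BasicSR-style training checkpoints typically do): keep only the EMA weights that inference loads, which roughly halves the file.

    import torch

    # Hypothetical sketch: slim a training checkpoint down to its EMA weights.
    ckpt = torch.load('mymodel.pth', map_location='cpu')
    slim = {'params_ema': ckpt['params_ema']}  # assumes this key exists
    torch.save(slim, 'mymodel_inference.pth')

Note, as a caveat grounded in the README: only the clean architecture is stated to work without CUDA extensions, so CPU inference is only expected to work for models trained with the clean architecture.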

RuntimeError: Could not run 'torchvision::roi_align' with arguments from the 'CUDA' backend.

When I run training, I get the following error:

RuntimeError: Could not run 'torchvision::roi_align' with arguments from the 'CUDA' backend. 'torchvision::roi_align' is only available for these backends: [CPU, BackendSelect, Named, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, Autocast, Batched, VmapMode].

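This usually indicates (an inference from the message, not repo-specific advice) a torchvision build without CUDA support, or a torch/torchvision version mismatch; checking both builds can confirm:

    import torch
    import torchvision

    # If torchvision was built without CUDA, its C++ ops such as
    # torchvision::roi_align have no CUDA backend registered.
    print(torch.__version__, torch.version.cuda, torch.cuda.is_available())
    print(torchvision.__version__)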

Expected input and output size for notebook?

I am not getting strong results feeding 16x16 input faces to the notebook and scaling up by 16x. What are the intended input and output sizes for best performance with the notebook?

Final Quality

With the whole-image --paste_back option, the restored face is 512x512. I would like the original image to be upscaled to the new size of the restored face, rather than the other way around, because the image loses quality when the restored face is pasted back into the original.
I tried --upscale_factor, but it doesn't work or is not implemented.
Many thanks.

Abnormal training logs

Why, when I change from 4 GPUs to 2, does training produce no output? Neither the live terminal nor the log file that should appear under the project folder in experiments shows anything. My change is as follows:

# general settings
name: train_GFPGANv1_512_2gpu
model_type: GFPGANModel
num_gpu: 2     # 4
manual_seed: 0

[screenshot taken 2021-07-06 10:33:58]
Judging by GPU utilization, training has started.
PS: I saw similar behavior earlier when reproducing ESRGAN in your BasicSR.

About the facexlib

Thanks for your open-source code.
I have installed facexlib according to the requirements document.
But I got this problem:

FileNotFoundError: [Errno 2] No such file or directory: '/home/ziy/anaconda3/envs/python37/lib/python3.7/site-packages/facexlib/weights/tmpseizp2dz'

(python37) ziy@:/home/ziy/GFPGAN$ pip install facexlib
Requirement already satisfied: facexlib in /home/ziy/anaconda3/envs/python37/lib/python3.7/site-packages (0.1.3)
Requirement already satisfied: numpy in /home/ziy/anaconda3/envs/python37/lib/python3.7/site-packages (from facexlib) (1.21.0)

Do you know the reason? Looking forward to your reply.

Hello, why can't I use V1?

python inference_gfpgan.py --upscale 2 --test_path inputs/whole_imgs --save_root results --model_path experiments/pretrained_models/GFPGANv1.pth
Traceback (most recent call last):
  File "inference_gfpgan.py", line 98, in <module>
    main()
  File "inference_gfpgan.py", line 57, in main
    bg_upsampler=bg_upsampler)
  File "W:\MyWork\My_GAN_Work\GFPGAN\gfpgan\utils.py", line 65, in __init__
    self.gfpgan.load_state_dict(loadnet[keyname], strict=True)
  File "C:\Users\Creator\miniconda3\lib\site-packages\torch\nn\modules\module.py", line 1224, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for GFPGANv1Clean:
Missing key(s) in state_dict: "conv_body_first.weight", "conv_body_first.bias", "conv_body_down.0.conv1.weight", "conv_body_down.0.conv1.bias", "conv_body_down.0.conv2.weight", "conv_body_down.0.conv2.bias", "conv_body_down.0.skip.weight", "conv_body_down.1.conv1.weight", "conv_body_down.1.conv1.bias", "conv_body_down.1.conv2.weight", "conv_body_down.1.conv2.bias", "conv_body_down.1.skip.weight", "conv_body_down.2.conv1.weight", "conv_body_down.2.conv1.bias", "conv_body_down.2.conv2.weight", "conv_body_down.2.conv2.bias", "conv_body_down.2.skip.weight", "conv_body_down.3.conv1.weight", "conv_body_down.3.conv1.bias", "conv_body_down.3.conv2.weight", "conv_body_down.3.conv2.bias", "conv_body_down.3.skip.weight", "conv_body_down.4.conv1.weight", "conv_body_down.4.conv1.bias", "conv_body_down.4.conv2.weight", "conv_body_down.4.conv2.bias", "conv_body_down.4.skip.weight", "conv_body_down.5.conv1.weight", "conv_body_down.5.conv1.bias", "conv_body_down.5.conv2.weight", "conv_body_down.5.conv2.bias", "conv_body_down.5.skip.weight", "conv_body_down.6.conv1.weight", "conv_body_down.6.conv1.bias", "conv_body_down.6.conv2.weight", "conv_body_down.6.conv2.bias", "conv_body_down.6.skip.weight", "final_conv.weight", "final_conv.bias", "conv_body_up.0.conv1.weight", "conv_body_up.0.conv1.bias", "conv_body_up.0.conv2.bias", "conv_body_up.1.conv1.weight", "conv_body_up.1.conv1.bias", "conv_body_up.1.conv2.bias", "conv_body_up.2.conv1.weight", "conv_body_up.2.conv1.bias", "conv_body_up.2.conv2.bias", "conv_body_up.3.conv1.weight", "conv_body_up.3.conv1.bias", "conv_body_up.3.conv2.bias", "conv_body_up.4.conv1.weight", "conv_body_up.4.conv1.bias", "conv_body_up.4.conv2.bias", "conv_body_up.5.conv1.weight", "conv_body_up.5.conv1.bias", "conv_body_up.5.conv2.bias", "conv_body_up.6.conv1.weight", "conv_body_up.6.conv1.bias", "conv_body_up.6.conv2.bias", "stylegan_decoder.style_mlp.9.weight", "stylegan_decoder.style_mlp.9.bias", "stylegan_decoder.style_mlp.11.weight", "stylegan_decoder.style_mlp.11.bias", "stylegan_decoder.style_mlp.13.weight", "stylegan_decoder.style_mlp.13.bias", "stylegan_decoder.style_mlp.15.weight", "stylegan_decoder.style_mlp.15.bias", "stylegan_decoder.style_conv1.bias", "stylegan_decoder.style_convs.0.bias", "stylegan_decoder.style_convs.1.bias", "stylegan_decoder.style_convs.2.bias", "stylegan_decoder.style_convs.3.bias", "stylegan_decoder.style_convs.4.bias", "stylegan_decoder.style_convs.5.bias", "stylegan_decoder.style_convs.6.bias", "stylegan_decoder.style_convs.7.bias", "stylegan_decoder.style_convs.8.bias", "stylegan_decoder.style_convs.9.bias", "stylegan_decoder.style_convs.10.bias", "stylegan_decoder.style_convs.11.bias", "stylegan_decoder.style_convs.12.bias", "stylegan_decoder.style_convs.13.bias".
Unexpected key(s) in state_dict: "conv_body_first.0.weight", "conv_body_first.1.bias", "conv_body_down.0.conv1.0.weight", "conv_body_down.0.conv1.1.bias", "conv_body_down.0.conv2.1.weight", "conv_body_down.0.conv2.2.bias", "conv_body_down.0.skip.1.weight", "conv_body_down.1.conv1.0.weight", "conv_body_down.1.conv1.1.bias", "conv_body_down.1.conv2.1.weight", "conv_body_down.1.conv2.2.bias", "conv_body_down.1.skip.1.weight", "conv_body_down.2.conv1.0.weight", "conv_body_down.2.conv1.1.bias", "conv_body_down.2.conv2.1.weight", "conv_body_down.2.conv2.2.bias", "conv_body_down.2.skip.1.weight", "conv_body_down.3.conv1.0.weight", "conv_body_down.3.conv1.1.bias", "conv_body_down.3.conv2.1.weight", "conv_body_down.3.conv2.2.bias", "conv_body_down.3.skip.1.weight", "conv_body_down.4.conv1.0.weight", "conv_body_down.4.conv1.1.bias", "conv_body_down.4.conv2.1.weight", "conv_body_down.4.conv2.2.bias", "conv_body_down.4.skip.1.weight", "conv_body_down.5.conv1.0.weight", "conv_body_down.5.conv1.1.bias", "conv_body_down.5.conv2.1.weight", "conv_body_down.5.conv2.2.bias", "conv_body_down.5.skip.1.weight", "conv_body_down.6.conv1.0.weight", "conv_body_down.6.conv1.1.bias", "conv_body_down.6.conv2.1.weight", "conv_body_down.6.conv2.2.bias", "conv_body_down.6.skip.1.weight", "final_conv.0.weight", "final_conv.1.bias", "conv_body_up.0.conv1.0.weight", "conv_body_up.0.conv1.1.bias", "conv_body_up.0.conv2.activation.bias", "conv_body_up.1.conv1.0.weight", "conv_body_up.1.conv1.1.bias", "conv_body_up.1.conv2.activation.bias", "conv_body_up.2.conv1.0.weight", "conv_body_up.2.conv1.1.bias", "conv_body_up.2.conv2.activation.bias", "conv_body_up.3.conv1.0.weight", "conv_body_up.3.conv1.1.bias", "conv_body_up.3.conv2.activation.bias", "conv_body_up.4.conv1.0.weight", "conv_body_up.4.conv1.1.bias", "conv_body_up.4.conv2.activation.bias", "conv_body_up.5.conv1.0.weight", "conv_body_up.5.conv1.1.bias", "conv_body_up.5.conv2.activation.bias", "conv_body_up.6.conv1.0.weight", "conv_body_up.6.conv1.1.bias", "conv_body_up.6.conv2.activation.bias", "stylegan_decoder.style_mlp.2.weight", "stylegan_decoder.style_mlp.2.bias", "stylegan_decoder.style_mlp.4.weight", "stylegan_decoder.style_mlp.4.bias", "stylegan_decoder.style_mlp.6.weight", "stylegan_decoder.style_mlp.6.bias", "stylegan_decoder.style_mlp.8.weight", "stylegan_decoder.style_mlp.8.bias", "stylegan_decoder.style_conv1.activate.bias", "stylegan_decoder.style_convs.0.activate.bias", "stylegan_decoder.style_convs.1.activate.bias", "stylegan_decoder.style_convs.2.activate.bias", "stylegan_decoder.style_convs.3.activate.bias", "stylegan_decoder.style_convs.4.activate.bias", "stylegan_decoder.style_convs.5.activate.bias", "stylegan_decoder.style_convs.6.activate.bias", "stylegan_decoder.style_convs.7.activate.bias", "stylegan_decoder.style_convs.8.activate.bias", "stylegan_decoder.style_convs.9.activate.bias", "stylegan_decoder.style_convs.10.activate.bias", "stylegan_decoder.style_convs.11.activate.bias", "stylegan_decoder.style_convs.12.activate.bias", "stylegan_decoder.style_convs.13.activate.bias".
size mismatch for conv_body_up.3.conv2.weight: copying a param with shape torch.Size([128, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
size mismatch for conv_body_up.3.skip.weight: copying a param with shape torch.Size([128, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 256, 1, 1]).
size mismatch for conv_body_up.4.conv2.weight: copying a param with shape torch.Size([64, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 256, 3, 3]).
size mismatch for conv_body_up.4.skip.weight: copying a param with shape torch.Size([64, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([128, 256, 1, 1]).
size mismatch for conv_body_up.5.conv2.weight: copying a param with shape torch.Size([32, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 128, 3, 3]).
size mismatch for conv_body_up.5.skip.weight: copying a param with shape torch.Size([32, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 128, 1, 1]).
size mismatch for conv_body_up.6.conv2.weight: copying a param with shape torch.Size([16, 32, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 64, 3, 3]).
size mismatch for conv_body_up.6.skip.weight: copying a param with shape torch.Size([16, 32, 1, 1]) from checkpoint, the shape in current model is torch.Size([32, 64, 1, 1]).
size mismatch for toRGB.3.weight: copying a param with shape torch.Size([3, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([3, 256, 1, 1]).
size mismatch for toRGB.4.weight: copying a param with shape torch.Size([3, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([3, 128, 1, 1]).
size mismatch for toRGB.5.weight: copying a param with shape torch.Size([3, 32, 1, 1]) from checkpoint, the shape in current model is torch.Size([3, 64, 1, 1]).
size mismatch for toRGB.6.weight: copying a param with shape torch.Size([3, 16, 1, 1]) from checkpoint, the shape in current model is torch.Size([3, 32, 1, 1]).
size mismatch for stylegan_decoder.style_convs.6.modulated_conv.weight: copying a param with shape torch.Size([1, 256, 512, 3, 3]) from checkpoint, the shape in current model is torch.Size([1, 512, 512, 3, 3]).
size mismatch for stylegan_decoder.style_convs.7.modulated_conv.weight: copying a param with shape torch.Size([1, 256, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([1, 512, 512, 3, 3]).
size mismatch for stylegan_decoder.style_convs.7.modulated_conv.modulation.weight: copying a param with shape torch.Size([256, 512]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for stylegan_decoder.style_convs.7.modulated_conv.modulation.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for stylegan_decoder.style_convs.8.modulated_conv.weight: copying a param with shape torch.Size([1, 128, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([1, 256, 512, 3, 3]).
size mismatch for stylegan_decoder.style_convs.8.modulated_conv.modulation.weight: copying a param with shape torch.Size([256, 512]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for stylegan_decoder.style_convs.8.modulated_conv.modulation.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for stylegan_decoder.style_convs.9.modulated_conv.weight: copying a param with shape torch.Size([1, 128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([1, 256, 256, 3, 3]).
size mismatch for stylegan_decoder.style_convs.9.modulated_conv.modulation.weight: copying a param with shape torch.Size([128, 512]) from checkpoint, the shape in current model is torch.Size([256, 512]).
size mismatch for stylegan_decoder.style_convs.9.modulated_conv.modulation.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for stylegan_decoder.style_convs.10.modulated_conv.weight: copying a param with shape torch.Size([1, 64, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([1, 128, 256, 3, 3]).
size mismatch for stylegan_decoder.style_convs.10.modulated_conv.modulation.weight: copying a param with shape torch.Size([128, 512]) from checkpoint, the shape in current model is torch.Size([256, 512]).
size mismatch for stylegan_decoder.style_convs.10.modulated_conv.modulation.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for stylegan_decoder.style_convs.11.modulated_conv.weight: copying a param with shape torch.Size([1, 64, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([1, 128, 128, 3, 3]).
size mismatch for stylegan_decoder.style_convs.11.modulated_conv.modulation.weight: copying a param with shape torch.Size([64, 512]) from checkpoint, the shape in current model is torch.Size([128, 512]).
size mismatch for stylegan_decoder.style_convs.11.modulated_conv.modulation.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for stylegan_decoder.style_convs.12.modulated_conv.weight: copying a param with shape torch.Size([1, 32, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([1, 64, 128, 3, 3]).
size mismatch for stylegan_decoder.style_convs.12.modulated_conv.modulation.weight: copying a param with shape torch.Size([64, 512]) from checkpoint, the shape in current model is torch.Size([128, 512]).
size mismatch for stylegan_decoder.style_convs.12.modulated_conv.modulation.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for stylegan_decoder.style_convs.13.modulated_conv.weight: copying a param with shape torch.Size([1, 32, 32, 3, 3]) from checkpoint, the shape in current model is torch.Size([1, 64, 64, 3, 3]).
size mismatch for stylegan_decoder.style_convs.13.modulated_conv.modulation.weight: copying a param with shape torch.Size([32, 512]) from checkpoint, the shape in current model is torch.Size([64, 512]).
size mismatch for stylegan_decoder.style_convs.13.modulated_conv.modulation.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for stylegan_decoder.to_rgbs.3.modulated_conv.weight: copying a param with shape torch.Size([1, 3, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 3, 512, 1, 1]).
size mismatch for stylegan_decoder.to_rgbs.3.modulated_conv.modulation.weight: copying a param with shape torch.Size([256, 512]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for stylegan_decoder.to_rgbs.3.modulated_conv.modulation.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for stylegan_decoder.to_rgbs.4.modulated_conv.weight: copying a param with shape torch.Size([1, 3, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 3, 256, 1, 1]).
size mismatch for stylegan_decoder.to_rgbs.4.modulated_conv.modulation.weight: copying a param with shape torch.Size([128, 512]) from checkpoint, the shape in current model is torch.Size([256, 512]).
size mismatch for stylegan_decoder.to_rgbs.4.modulated_conv.modulation.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for stylegan_decoder.to_rgbs.5.modulated_conv.weight: copying a param with shape torch.Size([1, 3, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 3, 128, 1, 1]).
size mismatch for stylegan_decoder.to_rgbs.5.modulated_conv.modulation.weight: copying a param with shape torch.Size([64, 512]) from checkpoint, the shape in current model is torch.Size([128, 512]).
size mismatch for stylegan_decoder.to_rgbs.5.modulated_conv.modulation.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for stylegan_decoder.to_rgbs.6.modulated_conv.weight: copying a param with shape torch.Size([1, 3, 32, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 3, 64, 1, 1]).
size mismatch for stylegan_decoder.to_rgbs.6.modulated_conv.modulation.weight: copying a param with shape torch.Size([32, 512]) from checkpoint, the shape in current model is torch.Size([64, 512]).
size mismatch for stylegan_decoder.to_rgbs.6.modulated_conv.modulation.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for condition_scale.3.0.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
size mismatch for condition_scale.3.0.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for condition_scale.3.2.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
size mismatch for condition_scale.3.2.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for condition_scale.4.0.weight: copying a param with shape torch.Size([64, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
size mismatch for condition_scale.4.0.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for condition_scale.4.2.weight: copying a param with shape torch.Size([64, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
size mismatch for condition_scale.4.2.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for condition_scale.5.0.weight: copying a param with shape torch.Size([32, 32, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
size mismatch for condition_scale.5.0.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for condition_scale.5.2.weight: copying a param with shape torch.Size([32, 32, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
size mismatch for condition_scale.5.2.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for condition_scale.6.0.weight: copying a param with shape torch.Size([16, 16, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for condition_scale.6.0.bias: copying a param with shape torch.Size([16]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for condition_scale.6.2.weight: copying a param with shape torch.Size([16, 16, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for condition_scale.6.2.bias: copying a param with shape torch.Size([16]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for condition_shift.3.0.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
size mismatch for condition_shift.3.0.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for condition_shift.3.2.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
size mismatch for condition_shift.3.2.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for condition_shift.4.0.weight: copying a param with shape torch.Size([64, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
size mismatch for condition_shift.4.0.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for condition_shift.4.2.weight: copying a param with shape torch.Size([64, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
size mismatch for condition_shift.4.2.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for condition_shift.5.0.weight: copying a param with shape torch.Size([32, 32, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
size mismatch for condition_shift.5.0.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for condition_shift.5.2.weight: copying a param with shape torch.Size([32, 32, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
size mismatch for condition_shift.5.2.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for condition_shift.6.0.weight: copying a param with shape torch.Size([16, 16, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for condition_shift.6.0.bias: copying a param with shape torch.Size([16]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for condition_shift.6.2.weight: copying a param with shape torch.Size([16, 16, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for condition_shift.6.2.bias: copying a param with shape torch.Size([16]) from checkpoint, the shape in current model is torch.Size([32]).
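For context, an educated reading of the dump above rather than an official diagnosis: indexed key names such as conv_body_first.0.weight are what the original paper architecture produces, and the halved channel counts (32 vs 64, 128 vs 256, ...) are the signature of a channel_multiplier=1 checkpoint being loaded into a channel_multiplier=2 model. In short, the checkpoint and the instantiated network disagree. With the helper API the fix is to match both settings to the weights, roughly like this (the path is illustrative):

```python
# Hedged sketch: build the restorer so arch and channel_multiplier match the checkpoint.
# GFPGANv1.pth is the original paper model (arch='original', channel_multiplier=1),
# while the V1.2+ "clean" models use arch='clean' with channel_multiplier=2.
from gfpgan import GFPGANer

restorer = GFPGANer(
    model_path='experiments/pretrained_models/GFPGANv1.pth',
    upscale=2,
    arch='original',       # the paper model, not the default 'clean'
    channel_multiplier=1)  # GFPGANv1 was trained with channel_multiplier=1
```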

Hi, a few environment questions

Hello, I tried running this project today but failed to set up the environment.
First attempt, with torch==1.9.0+cu111: CUDA driver initialization failed, you might not have a CUDA gpu.
Second attempt, with torch==1.7.0+cu102: The NVIDIA driver on your system is too old (found version 10010). Please update your GPU driver by downloading and installing a new version from the URL: http://www.nvidia.com/Download/index.aspx Alternatively, go to: https://pytorch.org to install a PyTorch version that has been compiled with your version of the CUDA driver.
DFDNet and GPEN both ran fine on this machine before, so where did I go wrong? I'd appreciate any time you can spare to help. Thanks!
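For anyone hitting the same errors: "found version 10010" is PyTorch's encoding for a driver that supports at most CUDA 10.1, so both the +cu111 and +cu102 wheels will refuse to initialize; updating the NVIDIA driver (or installing a +cu101 build) should resolve it. A quick way to see the mismatch, using plain PyTorch calls (nothing GFPGAN-specific):

```python
# Minimal diagnostic: compare the wheel's build-time CUDA version with what the
# installed driver actually supports.
import torch

print(torch.__version__)          # e.g. 1.9.0+cu111 -- the CUDA version the wheel targets
print(torch.version.cuda)         # build-time CUDA version; the driver must support >= this
print(torch.cuda.is_available())  # False when the driver is missing or too old
```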

only paste back from already restored faces

Is it possible to do this without restoring the faces again, using only the 2x Real-ESRGAN pass?

Something like:

"python inference_gfpgan.py --upscale 2 --model_path nomodel --test_path results/restored_faces --save_root results/restored_images --paste_back_only"

?
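As far as the current script goes there is no --paste_back_only flag (the command above is the asker's hypothetical), but the paste-back step itself comes from facexlib's FaceRestoreHelper, so something along these lines should work as a standalone step. A hedged, untested sketch; the paths, the crop naming, and the upscale factor are assumptions to adapt:

```python
# Paste already-restored face crops back into the original image using facexlib's
# FaceRestoreHelper, the same helper GFPGAN uses internally.
import glob

import cv2
from facexlib.utils.face_restoration_helper import FaceRestoreHelper

helper = FaceRestoreHelper(upscale_factor=2, face_size=512)
helper.read_image(cv2.imread('inputs/whole_imgs/00.jpg'))
helper.get_face_landmarks_5(only_center_face=False)
helper.align_warp_face()  # recompute the affine transform for each detected face

# The restored crops must line up one-to-one with the detected faces.
for path in sorted(glob.glob('results/restored_faces/00_*.png')):
    helper.add_restored_face(cv2.imread(path))

helper.get_inverse_affine()
restored_img = helper.paste_faces_to_input_image()
cv2.imwrite('results/restored_imgs/00.jpg', restored_img)
```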

NameError: name 'fused_act_ext' is not defined

Attempting quick inference throws NameError: name 'fused_act_ext' is not defined.
I'd highly appreciate your assistance. Below are the details of the error and my conda packages.

Even simply sharing your list of installed packages would help.

$ python inference_gfpgan_full.py --model_path experiments/pretrained_models/GFPGANv1.pth --test_path inputs/whole_imgs
Processing 00.jpg ...
Traceback (most recent call last):
  File "/deepmind/github/tencentarc/gfpgan/inference_gfpgan_full.py", line 120, in <module>
    restoration(
  File "/deepmind/github/tencentarc/gfpgan/inference_gfpgan_full.py", line 49, in restoration
    output = gfpgan(cropped_face_t, return_rgb=False)[0]
  File "/storage/usr/conda/envs/gfpgan/lib/python3.9/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/deepmind/github/tencentarc/gfpgan/archs/gfpganv1_arch.py", line 348, in forward
    feat = self.conv_body_first(x)
  File "/storage/usr/conda/envs/gfpgan/lib/python3.9/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/storage/usr/conda/envs/gfpgan/lib/python3.9/site-packages/torch/nn/modules/container.py", line 117, in forward
    input = module(input)
  File "/storage/usr/conda/envs/gfpgan/lib/python3.9/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/storage/usr/conda/envs/gfpgan/lib/python3.9/site-packages/basicsr/ops/fused_act/fused_act.py", line 85, in forward
    return fused_leaky_relu(input, self.bias, self.negative_slope, self.scale)
  File "/storage/usr/conda/envs/gfpgan/lib/python3.9/site-packages/basicsr/ops/fused_act/fused_act.py", line 89, in fused_leaky_relu
    return FusedLeakyReLUFunction.apply(input, bias, negative_slope, scale)
  File "/storage/usr/conda/envs/gfpgan/lib/python3.9/site-packages/basicsr/ops/fused_act/fused_act.py", line 59, in forward
    out = fused_act_ext.fused_bias_act(input, bias, empty, 3, 0, negative_slope, scale)
NameError: name 'fused_act_ext' is not defined
$ conda list
# packages in environment at /storage/usr/conda/envs/gfpgan:
#
# Name                    Version                   Build  Channel
_libgcc_mutex             0.1                 conda_forge    conda-forge
_openmp_mutex             4.5                      1_llvm    conda-forge
_pytorch_select           0.1                       cpu_0    anaconda
absl-py                   0.13.0             pyhd8ed1ab_0    conda-forge
addict                    2.4.0            py39hf3d152e_0    conda-forge
aiohttp                   3.7.4.post0      py39h3811e60_0    conda-forge
alsa-lib                  1.2.3                h516909a_0    conda-forge
appdirs                   1.4.4              pyh9f0ad1d_0    conda-forge
async-timeout             3.0.1                   py_1000    conda-forge
attrs                     21.2.0             pyhd8ed1ab_0    conda-forge
basicsr                   1.3.3.4                  pypi_0    pypi
blas                      1.0                         mkl    anaconda
blinker                   1.4                        py_1    conda-forge
blosc                     1.21.0               h9c3ff4c_0    conda-forge
brotli                    1.0.9                h9c3ff4c_4    conda-forge
brotlipy                  0.7.0           py39h3811e60_1001    conda-forge
brunsli                   0.1                  h9c3ff4c_0    conda-forge
bzip2                     1.0.8                h7f98852_4    conda-forge
c-ares                    1.17.1               h7f98852_1    conda-forge
ca-certificates           2021.5.30            ha878542_0    conda-forge
cachetools                4.2.2              pyhd8ed1ab_0    conda-forge
cairo                     1.16.0            h6cf1ce9_1008    conda-forge
certifi                   2021.5.30        py39hf3d152e_0    conda-forge
cffi                      1.14.5           py39he32792d_0    conda-forge
cfitsio                   3.470                hb418390_7    conda-forge
chardet                   4.0.0            py39hf3d152e_1    conda-forge
charls                    2.2.0                h9c3ff4c_0    conda-forge
click                     8.0.1            py39hf3d152e_0    conda-forge
cloudpickle               1.6.0                      py_0    conda-forge
cryptography              3.4.7            py39hbca0aa6_0    conda-forge
cudatoolkit               10.2.89              h8f6ccaa_8    conda-forge
cycler                    0.10.0                     py_2    conda-forge
cytoolz                   0.11.0           py39h3811e60_3    conda-forge
dask-core                 2021.6.1           pyhd8ed1ab_0    conda-forge
dataclasses               0.8                pyhc8e2a94_1    conda-forge
dbus                      1.13.18              hb2f20db_0    anaconda
decorator                 5.0.9              pyhd8ed1ab_0    conda-forge
expat                     2.4.1                h9c3ff4c_0    conda-forge
facexlib                  0.1.3                    pypi_0    pypi
ffmpeg                    4.3.1                hca11adc_2    conda-forge
fontconfig                2.13.1            hba837de_1005    conda-forge
freetype                  2.10.4               h0708190_1    conda-forge
fsspec                    2021.6.0           pyhd8ed1ab_0    conda-forge
future                    0.18.2           py39hf3d152e_3    conda-forge
gettext                   0.21.0               hf68c758_0  
giflib                    5.2.1                h36c2ea0_2    conda-forge
glib                      2.68.3               h9c3ff4c_0    conda-forge
glib-tools                2.68.3               h9c3ff4c_0    conda-forge
gmp                       6.2.1                h58526e2_0    conda-forge
gnutls                    3.6.15               he1e5248_0  
google-auth               1.31.0             pyhd3eb1b0_0  
google-auth-oauthlib      0.4.1                      py_2    conda-forge
graphite2                 1.3.14               h23475e2_0    anaconda
grpcio                    1.38.0           py39hff7568b_0    conda-forge
gst-plugins-base          1.18.4               hf529b03_2    conda-forge
gstreamer                 1.18.4               h76c114f_2    conda-forge
harfbuzz                  2.8.1                h83ec7ef_0    conda-forge
hdf5                      1.10.6          nompi_h6a2412b_1114    conda-forge
icu                       68.1                 h58526e2_0    conda-forge
idna                      2.10               pyh9f0ad1d_0    conda-forge
imagecodecs               2021.6.8         py39h44211f0_0    conda-forge
imageio                   2.9.0                      py_0    conda-forge
importlib-metadata        4.5.0            py39hf3d152e_0    conda-forge
jasper                    1.900.1           h07fcdf6_1006    conda-forge
jbig                      2.1               h7f98852_2003    conda-forge
jpeg                      9d                   h36c2ea0_0    conda-forge
jxrlib                    1.1                  h7f98852_2    conda-forge
kiwisolver                1.3.1            py39h1a9c180_1    conda-forge
krb5                      1.19.1               hcc1bbae_0    conda-forge
lame                      3.100             h7f98852_1001    conda-forge
lcms2                     2.12                 hddcbb42_0    conda-forge
ld_impl_linux-64          2.35.1               hea4e1c9_2    conda-forge
lerc                      2.2.1                h9c3ff4c_0    conda-forge
libaec                    1.0.5                h9c3ff4c_0    conda-forge
libblas                   3.9.0                     8_mkl    conda-forge
libcblas                  3.9.0                     8_mkl    conda-forge
libclang                  11.1.0          default_ha53f305_1    conda-forge
libcurl                   7.77.0               h2574ce0_0    conda-forge
libdeflate                1.7                  h7f98852_5    conda-forge
libedit                   3.1.20210216         h27cfd23_1  
libev                     4.33                 h516909a_1    conda-forge
libevent                  2.1.10               hcdb4288_3    conda-forge
libffi                    3.3                  h58526e2_2    conda-forge
libgcc-ng                 9.3.0               h2828fa1_19    conda-forge
libgfortran-ng            9.3.0               hff62375_19    conda-forge
libgfortran5              9.3.0               hff62375_19    conda-forge
libglib                   2.68.3               h3e27bee_0    conda-forge
libiconv                  1.16                 h516909a_0    conda-forge
libidn2                   2.3.1                h7f98852_0    conda-forge
liblapack                 3.9.0                     8_mkl    conda-forge
liblapacke                3.9.0                     8_mkl    conda-forge
libllvm11                 11.1.0               hf817b99_2    conda-forge
libnghttp2                1.43.0               h812cca2_0    conda-forge
libogg                    1.3.5                h27cfd23_1  
libopencv                 4.5.2            py39h2406f9b_0    conda-forge
libopus                   1.3.1                h7f98852_1    conda-forge
libpng                    1.6.37               h21135ba_2    conda-forge
libpq                     13.3                 hd57d9b9_0    conda-forge
libprotobuf               3.15.8               h780b84a_0    conda-forge
libssh2                   1.9.0                ha56f1ee_6    conda-forge
libstdcxx-ng              9.3.0               h6de172a_19    conda-forge
libtasn1                  4.16.0               h27cfd23_0  
libtiff                   4.3.0                hf544144_1    conda-forge
libunistring              0.9.10               h14c3975_0    conda-forge
libuuid                   2.32.1            h7f98852_1000    conda-forge
libuv                     1.41.0               h7f98852_0    conda-forge
libvorbis                 1.3.7                h9c3ff4c_0    conda-forge
libwebp-base              1.2.0                h7f98852_2    conda-forge
libxcb                    1.14                 h7b6447c_0    anaconda
libxkbcommon              1.0.3                he3ba5ed_0    conda-forge
libxml2                   2.9.12               h72842e0_0    conda-forge
libzopfli                 1.0.3                h9c3ff4c_0    conda-forge
llvm-openmp               11.1.0               h4bd325d_1    conda-forge
lmdb                      1.2.1                    pypi_0    pypi
locket                    0.2.1            py39h06a4308_1  
lz4-c                     1.9.3                h9c3ff4c_0    conda-forge
markdown                  3.3.4              pyhd8ed1ab_0    conda-forge
matplotlib-base           3.4.2            py39h2fa2bec_0    conda-forge
mkl                       2020.4             h726a3e6_304    conda-forge
multidict                 5.1.0            py39h3811e60_1    conda-forge
mysql-common              8.0.25               ha770c72_2    conda-forge
mysql-libs                8.0.25               hfa10184_2    conda-forge
ncurses                   6.2                  h58526e2_4    conda-forge
nettle                    3.7.3                hbbd107a_1  
networkx                  2.5                        py_0    anaconda
ninja                     1.10.2               h4bd325d_0    conda-forge
nspr                      4.30                 h9c3ff4c_0    conda-forge
nss                       3.67                 hb5efdd6_0    conda-forge
numpy                     1.20.3           py39hdbf815f_1    conda-forge
oauthlib                  3.1.1              pyhd8ed1ab_0    conda-forge
olefile                   0.46               pyh9f0ad1d_1    conda-forge
opencv                    4.5.2            py39hf3d152e_0    conda-forge
opencv-python             4.5.2.54                 pypi_0    pypi
openh264                  2.1.1                h780b84a_0    conda-forge
openjpeg                  2.4.0                hb52868f_1    conda-forge
openssl                   1.1.1k               h7f98852_0    conda-forge
packaging                 20.9               pyh44b312d_0    conda-forge
partd                     1.2.0              pyhd8ed1ab_0    conda-forge
pcre                      8.45                 h9c3ff4c_0    conda-forge
pillow                    8.2.0            py39hf95b381_1    conda-forge
pip                       21.1.2             pyhd8ed1ab_0    conda-forge
pixman                    0.40.0               h36c2ea0_0    conda-forge
pooch                     1.4.0              pyhd8ed1ab_0    conda-forge
protobuf                  3.15.8           py39he80948d_0    conda-forge
py-opencv                 4.5.2            py39hef51801_0    conda-forge
pyasn1                    0.4.8                      py_0    conda-forge
pyasn1-modules            0.2.8                      py_0    anaconda
pycparser                 2.20               pyh9f0ad1d_2    conda-forge
pyjwt                     2.1.0              pyhd8ed1ab_0    conda-forge
pyopenssl                 20.0.1             pyhd8ed1ab_0    conda-forge
pyparsing                 2.4.7              pyh9f0ad1d_0    conda-forge
pysocks                   1.7.1            py39hf3d152e_3    conda-forge
python                    3.9.5           h49503c6_0_cpython    conda-forge
python-dateutil           2.8.1                      py_0    conda-forge
python_abi                3.9                      1_cp39    conda-forge
pytorch                   1.7.1           py3.9_cuda10.2.89_cudnn7.6.5_0    pytorch
pywavelets                1.1.1            py39hce5d2b2_3    conda-forge
pyyaml                    5.4.1            py39h3811e60_0    conda-forge
qt                        5.12.9               hda022c4_4    conda-forge
readline                  8.1                  h46c0cb4_0    conda-forge
requests                  2.25.1             pyhd3deb0d_0    conda-forge
requests-oauthlib         1.3.0              pyh9f0ad1d_0    conda-forge
rsa                       4.7.2              pyh44b312d_0    conda-forge
scikit-image              0.18.1           py39hde0f152_0    conda-forge
scipy                     1.6.3            py39hee8e79c_0    conda-forge
setuptools                52.0.0           py39h06a4308_0  
six                       1.16.0             pyh6c4a22f_0    conda-forge
sleef                     3.5.1                h7f98852_1    conda-forge
snappy                    1.1.8                he1b5a44_3    conda-forge
sqlite                    3.35.5               h74cdb3f_0    conda-forge
tb-nightly                2.6.0a20210615           pypi_0    pypi
tensorboard               2.5.0              pyhd8ed1ab_0    conda-forge
tensorboard-data-server   0.6.0            py39h3da14fd_0    conda-forge
tensorboard-plugin-wit    1.8.0              pyh44b312d_0    conda-forge
tifffile                  2021.6.14          pyhd8ed1ab_0    conda-forge
tk                        8.6.10               h21135ba_1    conda-forge
toolz                     0.11.1                     py_0    conda-forge
torchvision               0.8.2           cpu_py39ha229d99_0  
tornado                   6.1              py39h3811e60_1    conda-forge
tqdm                      4.61.1             pyhd8ed1ab_0    conda-forge
typing-extensions         3.10.0.0             hd8ed1ab_0    conda-forge
typing_extensions         3.10.0.0           pyha770c72_0    conda-forge
tzdata                    2021a                he74cb21_0    conda-forge
urllib3                   1.26.5             pyhd8ed1ab_0    conda-forge
werkzeug                  2.0.1              pyhd8ed1ab_0    conda-forge
wheel                     0.36.2             pyhd3deb0d_0    conda-forge
x264                      1!161.3030           h7f98852_1    conda-forge
xorg-kbproto              1.0.7             h7f98852_1002    conda-forge
xorg-libice               1.0.10               h7f98852_0    conda-forge
xorg-libsm                1.2.3             hd9c2040_1000    conda-forge
xorg-libx11               1.7.2                h7f98852_0    conda-forge
xorg-libxext              1.3.4                h7f98852_1    conda-forge
xorg-libxrender           0.9.10            h7f98852_1003    conda-forge
xorg-renderproto          0.11.1            h7f98852_1002    conda-forge
xorg-xextproto            7.3.0             h7f98852_1002    conda-forge
xorg-xproto               7.0.31            h7f98852_1007    conda-forge
xz                        5.2.5                h516909a_1    conda-forge
yaml                      0.2.5                h516909a_0    conda-forge
yapf                      0.31.0             pyhd8ed1ab_0    conda-forge
yarl                      1.6.3            py39h3811e60_1    conda-forge
zfp                       0.5.5                h9c3ff4c_5    conda-forge
zipp                      3.4.1              pyhd8ed1ab_0    conda-forge
zlib                      1.2.11            h516909a_1010    conda-forge
zstd                      1.5.0                ha95c52a_0    conda-forge

channel_multiplier

Hello @xinntao,
I noticed that in the new Clean version you changed channel_multiplier from 1 to 2.
But when I checked train_gfpgan_v1_simple.yml, the pretrained StyleGAN2 model was not changed accordingly: StyleGAN2_512_Cmul1_FFHQ_B12G4_scratch_800k.pth. Is this exactly the same StyleGAN2 model as the one used when channel_multiplier=1?
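One way to check this empirically is to read the channel widths straight out of the checkpoint. A hedged sketch; the 'params_ema' wrapper and the layer name are assumptions based on BasicSR-style StyleGAN2 checkpoints and on the shapes in the size-mismatch dump earlier on this page:

```python
# Infer the decoder's channel multiplier from the checkpoint itself.
import torch

sd = torch.load('experiments/pretrained_models/StyleGAN2_512_Cmul1_FFHQ_B12G4_scratch_800k.pth',
                map_location='cpu')
sd = sd.get('params_ema', sd)  # BasicSR checkpoints often wrap weights in 'params_ema'

# For a 512px generator the last style conv is index 13: channel_multiplier=1
# yields a [1, 32, 32, 3, 3] weight, channel_multiplier=2 a [1, 64, 64, 3, 3] one.
print(sd['style_convs.13.modulated_conv.weight'].shape)
```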

want the whole image in the result (without cropping)

Is it possible to get whole images as the result?
When I run inference, the images in the results folder come out cropped.
I NEED THE FULL IMAGE AS THE RESULT.

I used "Load extensions just-in-time(JIT)":
BASICSR_JIT=True python inference_gfpgan_full.py --model_path experiments/pretrained_models/GFPGANv1.pth --test_path inputs/whole_imgs
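The same workaround can be applied in-process; a sketch under the assumption that basicsr reads BASICSR_JIT at import time, so the variable must be set before the first basicsr/gfpgan import:

```python
# In-process equivalent of the BASICSR_JIT=True command-line workaround.
import os

os.environ['BASICSR_JIT'] = 'True'  # must be set before basicsr is imported

from gfpgan import GFPGANer  # noqa: E402 -- deliberately imported after the env var
```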
