
Comments (10)

xinntao commented on July 28, 2024
  1. For inference, you can find the original model with colorization in the README.
  2. For training, the option file train_gfpgan_v1.yml contains the colorization settings.


z3Nsk1Fh5a commented on July 28, 2024

I am not experienced, so I made what attempt I could: I changed the file references to "GFPGANv1.pth". I am not knowledgeable about training; I am only trying to output both color and B&W images to test the accuracy.

Connecting to github-releases.githubusercontent.com (github-releases.githubusercontent.com)|185.199.108.154|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 615378983 (587M) [application/octet-stream]
Saving to: ‘experiments/pretrained_models/GFPGANv1.pth’

GFPGANv1.pth 100%[===================>] 586.87M 24.3MB/s in 26s

2021-08-08 09:40:07 (22.1 MB/s) - ‘experiments/pretrained_models/GFPGANv1.pth’ saved [615378983/615378983]

Then inference fails:

Traceback (most recent call last):
File "inference_gfpgan_full.py", line 128, in
gfpgan.load_state_dict(torch.load(args.model_path, map_location=lambda storage, loc: storage)['params_ema'])
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1407, in load_state_dict
self.class.name, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for GFPGANv1Clean:
Missing key(s) in state_dict: "conv_body_first.weight", "conv_body_first.bias", "conv_body_down.0.conv1.weight", "conv_body_down.0.conv1.bias", "conv_body_down.0.conv2.weight", "conv_body_down.0.conv2.bias", "conv_body_down.0.skip.weight", "conv_body_down.1.conv1.weight", "conv_body_down.1.conv1.bias", "conv_body_down.1.conv2.weight", "conv_body_down.1.conv2.bias", "conv_body_down.1.skip.weight", "conv_body_down.2.conv1.weight", "conv_body_down.2.conv1.bias", "conv_body_down.2.conv2.weight", "conv_body_down.2.conv2.bias", "conv_body_down.2.skip.weight", "conv_body_down.3.conv1.weight", "conv_body_down.3.conv1.bias", "conv_body_down.3.conv2.weight", "conv_body_down.3.conv2.bias", "conv_body_down.3.skip.weight", "conv_body_down.4.conv1.weight", "conv_body_down.4.conv1.bias", "conv_body_down.4.conv2.weight", "conv_body_down.4.conv2.bias", "conv_body_down.4.skip.weight", "conv_body_down.5.conv1.weight", "conv_body_down.5.conv1.bias", "conv_body_down.5.conv2.weight", "conv_body_down.5.conv2.bias", "conv_body_down.5.skip.weight", "conv_body_down.6.conv1.weight", "conv_body_down.6.conv1.bias", "conv_body_down.6.conv2.weight", "conv_body_down.6.conv2.bias", "conv_body_down.6.skip.weight", "final_conv.weight", "final_conv.bias", "conv_body_up.0.conv1.weight", "conv_body_up.0.conv1.bias", "conv_body_up.0.conv2.bias", "conv_body_up.1.conv1.weight", "conv_body_up.1.conv1.bias", "conv_body_up.1.conv2.bias", "conv_body_up.2.conv1.weight", "conv_body_up.2.conv1.bias", "conv_body_up.2.conv2.bias", "conv_body_up.3.conv1.weight", "conv_body_up.3.conv1.bias", "conv_body_up.3.conv2.bias", "conv_body_up.4.conv1.weight", "conv_body_up.4.conv1.bias", "conv_body_up.4.conv2.bias", "conv_body_up.5.conv1.weight", "conv_body_up.5.conv1.bias", "conv_body_up.5.conv2.bias", "conv_body_up.6.conv1.weight", "conv_body_up.6.conv1.bias", "conv_body_up.6.conv2.bias", "stylegan_decoder.style_mlp.9.weight", "stylegan_decoder.style_mlp.9.bias", "stylegan_decoder.style_mlp.11.weight", "stylegan_decoder.style_mlp.11.bias", "stylegan_decoder.style_mlp.13.weight", "stylegan_decoder.style_mlp.13.bias", "stylegan_decoder.style_mlp.15.weight", "stylegan_decoder.style_mlp.15.bias", "stylegan_decoder.style_conv1.bias", "stylegan_decoder.style_convs.0.bias", "stylegan_decoder.style_convs.1.bias", "stylegan_decoder.style_convs.2.bias", "stylegan_decoder.style_convs.3.bias", "stylegan_decoder.style_convs.4.bias", "stylegan_decoder.style_convs.5.bias", "stylegan_decoder.style_convs.6.bias", "stylegan_decoder.style_convs.7.bias", "stylegan_decoder.style_convs.8.bias", "stylegan_decoder.style_convs.9.bias", "stylegan_decoder.style_convs.10.bias", "stylegan_decoder.style_convs.11.bias", "stylegan_decoder.style_convs.12.bias", "stylegan_decoder.style_convs.13.bias".
Unexpected key(s) in state_dict: "conv_body_first.0.weight", "conv_body_first.1.bias", "conv_body_down.0.conv1.0.weight", "conv_body_down.0.conv1.1.bias", "conv_body_down.0.conv2.1.weight", "conv_body_down.0.conv2.2.bias", "conv_body_down.0.skip.1.weight", "conv_body_down.1.conv1.0.weight", "conv_body_down.1.conv1.1.bias", "conv_body_down.1.conv2.1.weight", "conv_body_down.1.conv2.2.bias", "conv_body_down.1.skip.1.weight", "conv_body_down.2.conv1.0.weight", "conv_body_down.2.conv1.1.bias", "conv_body_down.2.conv2.1.weight", "conv_body_down.2.conv2.2.bias", "conv_body_down.2.skip.1.weight", "conv_body_down.3.conv1.0.weight", "conv_body_down.3.conv1.1.bias", "conv_body_down.3.conv2.1.weight", "conv_body_down.3.conv2.2.bias", "conv_body_down.3.skip.1.weight", "conv_body_down.4.conv1.0.weight", "conv_body_down.4.conv1.1.bias", "conv_body_down.4.conv2.1.weight", "conv_body_down.4.conv2.2.bias", "conv_body_down.4.skip.1.weight", "conv_body_down.5.conv1.0.weight", "conv_body_down.5.conv1.1.bias", "conv_body_down.5.conv2.1.weight", "conv_body_down.5.conv2.2.bias", "conv_body_down.5.skip.1.weight", "conv_body_down.6.conv1.0.weight", "conv_body_down.6.conv1.1.bias", "conv_body_down.6.conv2.1.weight", "conv_body_down.6.conv2.2.bias", "conv_body_down.6.skip.1.weight", "final_conv.0.weight", "final_conv.1.bias", "conv_body_up.0.conv1.0.weight", "conv_body_up.0.conv1.1.bias", "conv_body_up.0.conv2.activation.bias", "conv_body_up.1.conv1.0.weight", "conv_body_up.1.conv1.1.bias", "conv_body_up.1.conv2.activation.bias", "conv_body_up.2.conv1.0.weight", "conv_body_up.2.conv1.1.bias", "conv_body_up.2.conv2.activation.bias", "conv_body_up.3.conv1.0.weight", "conv_body_up.3.conv1.1.bias", "conv_body_up.3.conv2.activation.bias", "conv_body_up.4.conv1.0.weight", "conv_body_up.4.conv1.1.bias", "conv_body_up.4.conv2.activation.bias", "conv_body_up.5.conv1.0.weight", "conv_body_up.5.conv1.1.bias", "conv_body_up.5.conv2.activation.bias", "conv_body_up.6.conv1.0.weight", "conv_body_up.6.conv1.1.bias", "conv_body_up.6.conv2.activation.bias", "stylegan_decoder.style_mlp.2.weight", "stylegan_decoder.style_mlp.2.bias", "stylegan_decoder.style_mlp.4.weight", "stylegan_decoder.style_mlp.4.bias", "stylegan_decoder.style_mlp.6.weight", "stylegan_decoder.style_mlp.6.bias", "stylegan_decoder.style_mlp.8.weight", "stylegan_decoder.style_mlp.8.bias", "stylegan_decoder.style_conv1.activate.bias", "stylegan_decoder.style_convs.0.activate.bias", "stylegan_decoder.style_convs.1.activate.bias", "stylegan_decoder.style_convs.2.activate.bias", "stylegan_decoder.style_convs.3.activate.bias", "stylegan_decoder.style_convs.4.activate.bias", "stylegan_decoder.style_convs.5.activate.bias", "stylegan_decoder.style_convs.6.activate.bias", "stylegan_decoder.style_convs.7.activate.bias", "stylegan_decoder.style_convs.8.activate.bias", "stylegan_decoder.style_convs.9.activate.bias", "stylegan_decoder.style_convs.10.activate.bias", "stylegan_decoder.style_convs.11.activate.bias", "stylegan_decoder.style_convs.12.activate.bias", "stylegan_decoder.style_convs.13.activate.bias".
size mismatch for conv_body_up.3.conv2.weight: copying a param with shape torch.Size([128, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
size mismatch for conv_body_up.3.skip.weight: copying a param with shape torch.Size([128, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 256, 1, 1]).
size mismatch for conv_body_up.4.conv2.weight: copying a param with shape torch.Size([64, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 256, 3, 3]).
size mismatch for conv_body_up.4.skip.weight: copying a param with shape torch.Size([64, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([128, 256, 1, 1]).
size mismatch for conv_body_up.5.conv2.weight: copying a param with shape torch.Size([32, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 128, 3, 3]).
size mismatch for conv_body_up.5.skip.weight: copying a param with shape torch.Size([32, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 128, 1, 1]).
size mismatch for conv_body_up.6.conv2.weight: copying a param with shape torch.Size([16, 32, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 64, 3, 3]).
size mismatch for conv_body_up.6.skip.weight: copying a param with shape torch.Size([16, 32, 1, 1]) from checkpoint, the shape in current model is torch.Size([32, 64, 1, 1]).
size mismatch for toRGB.3.weight: copying a param with shape torch.Size([3, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([3, 256, 1, 1]).
size mismatch for toRGB.4.weight: copying a param with shape torch.Size([3, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([3, 128, 1, 1]).
size mismatch for toRGB.5.weight: copying a param with shape torch.Size([3, 32, 1, 1]) from checkpoint, the shape in current model is torch.Size([3, 64, 1, 1]).
size mismatch for toRGB.6.weight: copying a param with shape torch.Size([3, 16, 1, 1]) from checkpoint, the shape in current model is torch.Size([3, 32, 1, 1]).
size mismatch for stylegan_decoder.style_convs.6.modulated_conv.weight: copying a param with shape torch.Size([1, 256, 512, 3, 3]) from checkpoint, the shape in current model is torch.Size([1, 512, 512, 3, 3]).
size mismatch for stylegan_decoder.style_convs.7.modulated_conv.weight: copying a param with shape torch.Size([1, 256, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([1, 512, 512, 3, 3]).
size mismatch for stylegan_decoder.style_convs.7.modulated_conv.modulation.weight: copying a param with shape torch.Size([256, 512]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for stylegan_decoder.style_convs.7.modulated_conv.modulation.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for stylegan_decoder.style_convs.8.modulated_conv.weight: copying a param with shape torch.Size([1, 128, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([1, 256, 512, 3, 3]).
size mismatch for stylegan_decoder.style_convs.8.modulated_conv.modulation.weight: copying a param with shape torch.Size([256, 512]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for stylegan_decoder.style_convs.8.modulated_conv.modulation.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for stylegan_decoder.style_convs.9.modulated_conv.weight: copying a param with shape torch.Size([1, 128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([1, 256, 256, 3, 3]).
size mismatch for stylegan_decoder.style_convs.9.modulated_conv.modulation.weight: copying a param with shape torch.Size([128, 512]) from checkpoint, the shape in current model is torch.Size([256, 512]).
size mismatch for stylegan_decoder.style_convs.9.modulated_conv.modulation.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for stylegan_decoder.style_convs.10.modulated_conv.weight: copying a param with shape torch.Size([1, 64, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([1, 128, 256, 3, 3]).
size mismatch for stylegan_decoder.style_convs.10.modulated_conv.modulation.weight: copying a param with shape torch.Size([128, 512]) from checkpoint, the shape in current model is torch.Size([256, 512]).
size mismatch for stylegan_decoder.style_convs.10.modulated_conv.modulation.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for stylegan_decoder.style_convs.11.modulated_conv.weight: copying a param with shape torch.Size([1, 64, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([1, 128, 128, 3, 3]).
size mismatch for stylegan_decoder.style_convs.11.modulated_conv.modulation.weight: copying a param with shape torch.Size([64, 512]) from checkpoint, the shape in current model is torch.Size([128, 512]).
size mismatch for stylegan_decoder.style_convs.11.modulated_conv.modulation.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for stylegan_decoder.style_convs.12.modulated_conv.weight: copying a param with shape torch.Size([1, 32, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([1, 64, 128, 3, 3]).
size mismatch for stylegan_decoder.style_convs.12.modulated_conv.modulation.weight: copying a param with shape torch.Size([64, 512]) from checkpoint, the shape in current model is torch.Size([128, 512]).
size mismatch for stylegan_decoder.style_convs.12.modulated_conv.modulation.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for stylegan_decoder.style_convs.13.modulated_conv.weight: copying a param with shape torch.Size([1, 32, 32, 3, 3]) from checkpoint, the shape in current model is torch.Size([1, 64, 64, 3, 3]).
size mismatch for stylegan_decoder.style_convs.13.modulated_conv.modulation.weight: copying a param with shape torch.Size([32, 512]) from checkpoint, the shape in current model is torch.Size([64, 512]).
size mismatch for stylegan_decoder.style_convs.13.modulated_conv.modulation.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for stylegan_decoder.to_rgbs.3.modulated_conv.weight: copying a param with shape torch.Size([1, 3, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 3, 512, 1, 1]).
size mismatch for stylegan_decoder.to_rgbs.3.modulated_conv.modulation.weight: copying a param with shape torch.Size([256, 512]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for stylegan_decoder.to_rgbs.3.modulated_conv.modulation.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for stylegan_decoder.to_rgbs.4.modulated_conv.weight: copying a param with shape torch.Size([1, 3, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 3, 256, 1, 1]).
size mismatch for stylegan_decoder.to_rgbs.4.modulated_conv.modulation.weight: copying a param with shape torch.Size([128, 512]) from checkpoint, the shape in current model is torch.Size([256, 512]).
size mismatch for stylegan_decoder.to_rgbs.4.modulated_conv.modulation.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for stylegan_decoder.to_rgbs.5.modulated_conv.weight: copying a param with shape torch.Size([1, 3, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 3, 128, 1, 1]).
size mismatch for stylegan_decoder.to_rgbs.5.modulated_conv.modulation.weight: copying a param with shape torch.Size([64, 512]) from checkpoint, the shape in current model is torch.Size([128, 512]).
size mismatch for stylegan_decoder.to_rgbs.5.modulated_conv.modulation.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for stylegan_decoder.to_rgbs.6.modulated_conv.weight: copying a param with shape torch.Size([1, 3, 32, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 3, 64, 1, 1]).
size mismatch for stylegan_decoder.to_rgbs.6.modulated_conv.modulation.weight: copying a param with shape torch.Size([32, 512]) from checkpoint, the shape in current model is torch.Size([64, 512]).
size mismatch for stylegan_decoder.to_rgbs.6.modulated_conv.modulation.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for condition_scale.3.0.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
size mismatch for condition_scale.3.0.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for condition_scale.3.2.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
size mismatch for condition_scale.3.2.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for condition_scale.4.0.weight: copying a param with shape torch.Size([64, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
size mismatch for condition_scale.4.0.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for condition_scale.4.2.weight: copying a param with shape torch.Size([64, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
size mismatch for condition_scale.4.2.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for condition_scale.5.0.weight: copying a param with shape torch.Size([32, 32, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
size mismatch for condition_scale.5.0.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for condition_scale.5.2.weight: copying a param with shape torch.Size([32, 32, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
size mismatch for condition_scale.5.2.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for condition_scale.6.0.weight: copying a param with shape torch.Size([16, 16, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for condition_scale.6.0.bias: copying a param with shape torch.Size([16]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for condition_scale.6.2.weight: copying a param with shape torch.Size([16, 16, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for condition_scale.6.2.bias: copying a param with shape torch.Size([16]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for condition_shift.3.0.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
size mismatch for condition_shift.3.0.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for condition_shift.3.2.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
size mismatch for condition_shift.3.2.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for condition_shift.4.0.weight: copying a param with shape torch.Size([64, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
size mismatch for condition_shift.4.0.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for condition_shift.4.2.weight: copying a param with shape torch.Size([64, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
size mismatch for condition_shift.4.2.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for condition_shift.5.0.weight: copying a param with shape torch.Size([32, 32, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
size mismatch for condition_shift.5.0.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for condition_shift.5.2.weight: copying a param with shape torch.Size([32, 32, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
size mismatch for condition_shift.5.2.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for condition_shift.6.0.weight: copying a param with shape torch.Size([16, 16, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for condition_shift.6.0.bias: copying a param with shape torch.Size([16]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for condition_shift.6.2.weight: copying a param with shape torch.Size([16, 16, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for condition_shift.6.2.bias: copying a param with shape torch.Size([16]) from checkpoint, the shape in current model is torch.Size([32]).
ls: cannot access 'results/cmp': No such file or directory
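The key names in the log tell the story: GFPGANv1.pth was saved from the original GFPGANv1 architecture, whose blocks are nn.Sequential containers (hence nested indices such as "conv_body_first.0.weight"), while the script built GFPGANv1Clean, which expects flat names such as "conv_body_first.weight". A minimal diagnostic sketch, using only what the log above confirms:

import torch

# The checkpoint stores its weights under 'params_ema'
# (see the load call in the traceback above).
state = torch.load('experiments/pretrained_models/GFPGANv1.pth',
                   map_location='cpu')['params_ema']

# Original-arch keys carry Sequential sub-indices; the clean arch expects flat
# names, so a strict load is guaranteed to fail with missing/unexpected keys.
print([k for k in state if k.startswith('conv_body_first')])
# -> ['conv_body_first.0.weight', 'conv_body_first.1.bias']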


xinntao commented on July 28, 2024

If you want to use the original model with colorization, please see: https://github.com/TencentARC/GFPGAN/blob/master/PaperModel.md


z3Nsk1Fh5a commented on July 28, 2024

I chose option 2 on both and added the code to the first cell.

File "", line 12
BASICSR_EXT=True pip install basicsr -vvv
^
SyntaxError: invalid syntax
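That SyntaxError is what a Colab/IPython cell raises when a shell command is pasted in without the leading "!": the line gets parsed as Python. Prefixed with "!", the same command runs in the shell (the form used successfully later in this thread):

!BASICSR_EXT=True pip install basicsr -vvv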


z3Nsk1Fh5a commented on July 28, 2024

Inference:

File "", line 3
BASICSR_JIT=True python inference_gfpgan_full.py --model_path experiments/pretrained_models/GFPGANv1.pth --test_path inputs/whole_imgs --save_root results --arch original --channel 1
^
SyntaxError: invalid syntax
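Same cause here; with the "!" prefix the cell is handed to the shell instead of the Python parser:

!BASICSR_JIT=True python inference_gfpgan_full.py --model_path experiments/pretrained_models/GFPGANv1.pth --test_path inputs/whole_imgs --save_root results --arch original --channel 1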


woctezuma commented on July 28, 2024

Please create separate GitHub issues for other problems.

As for colorization, the reply was above: there are two models now, one with and one without colorization.
cf. https://github.com/TencentARC/GFPGAN/releases/tag/v0.2.0


z3Nsk1Fh5a commented on July 28, 2024

Apologies, these follow-on issues are related, and result from an inexperienced attempt to follow the developer's instructions. I have a total lack of the Python experience or knowledge needed to copy/paste things into the correct places knowingly.


xinntao commented on July 28, 2024

@Ghee36 If you want to use the original model, please see the instructions: https://github.com/TencentARC/GFPGAN/blob/master/PaperModel.md for installation and inference.

As for the SyntaxError: invalid syntax, it seems that the pasted command contains some unsupported (invisible) characters.
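A quick way to check for that is to paste the suspect line into a Python string and look for non-ASCII code points; a small sketch:

# Reveal any non-ASCII / invisible characters in a pasted command
cmd = 'BASICSR_JIT=True python inference_gfpgan_full.py --arch original --channel 1'
print([(c, hex(ord(c))) for c in cmd if ord(c) > 127])  # [] means the text is plain ASCII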


z3Nsk1Fh5a commented on July 28, 2024

I mirrored the changes for the original model into the current notebook.


# Clone GFPGAN and enter the GFPGAN folder
%cd /content
!rm -rf GFPGAN
!git clone https://github.com/TencentARC/GFPGAN.git
%cd GFPGAN

# Set up the environment
# Install basicsr - https://github.com/xinntao/BasicSR
# Set BASICSR_EXT=True to compile the CUDA extensions in BasicSR.
# It may take several minutes to compile; please be patient.
!BASICSR_EXT=True pip install basicsr
# Install facexlib - https://github.com/xinntao/facexlib
# We use the face detection and face restoration helpers in the facexlib package.
!pip install facexlib
# Install other dependencies
!pip install -r requirements.txt
!python setup.py develop
!pip install realesrgan  # used for enhancing the background (non-face) regions

# Download the pre-trained model
!wget https://github.com/TencentARC/GFPGAN/releases/download/v0.1.0/GFPGANv1.pth -P experiments/pretrained_models

This works fine, with no problems.
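If the BASICSR_EXT=True step really compiled the CUDA extensions (the original GFPGANv1 architecture needs them), a quick sanity check like the sketch below should import without complaint. The module paths are my assumption from the BasicSR repo; if the compiled ops are missing, these imports typically warn or fail:

import torch
# Compiled CUDA ops shipped with basicsr; the original GFPGANv1 arch uses both.
from basicsr.ops.fused_act import FusedLeakyReLU
from basicsr.ops.upfirdn2d import upfirdn2d
print('CUDA available:', torch.cuda.is_available())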

# Now we use GFPGAN to restore the low-quality images above.
# We use Real-ESRGAN to enhance the background (non-face) regions.
!rm -rf results
!python inference_gfpgan.py --upscale 2 --test_path inputs/upload --save_root results --model_path experiments/pretrained_models/GFPGANv1.pth --bg_upsampler realesrgan

!ls results/cmp

This produces errors, despite my no longer referencing GFPGANv1Clean (is that the one for color?):

Downloading: "https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.1/RealESRGAN_x2plus.pth" to /usr/local/lib/python3.7/dist-packages/realesrgan/weights/RealESRGAN_x2plus.pth

100% 64.0M/64.0M [00:01<00:00, 66.7MB/s]
Downloading: "https://github.com/xinntao/facexlib/releases/download/v0.1.0/detection_Resnet50_Final.pth" to /usr/local/lib/python3.7/dist-packages/facexlib/weights/detection_Resnet50_Final.pth

100% 104M/104M [00:01<00:00, 65.1MB/s]
Traceback (most recent call last):
File "inference_gfpgan.py", line 98, in
main()
File "inference_gfpgan.py", line 57, in main
bg_upsampler=bg_upsampler)
File "/content/GFPGAN/gfpgan/utils.py", line 65, in init
self.gfpgan.load_state_dict(loadnet[keyname], strict=True)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1407, in load_state_dict
self.class.name, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for GFPGANv1Clean:
(Identical missing/unexpected key errors and size mismatches as in the traceback above.)
ls: cannot access 'results/cmp': No such file or directory
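The error still names GFPGANv1Clean because inference_gfpgan.py evidently defaults to the clean architecture when no --arch flag is given, so the command above rebuilds GFPGANv1Clean. A hypothetical reduction of that selection logic (an assumption, inferred from the traceback and the fix in the next comment):

import argparse

# Hypothetical sketch of the script's arch selection, not the repo's exact code.
parser = argparse.ArgumentParser()
parser.add_argument('--arch', type=str, default='clean')
args = parser.parse_args([])  # no '--arch original' passed, as in the cell above
print(args.arch)  # 'clean' -> GFPGANv1Clean is built, which GFPGANv1.pth cannot populate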


xinntao commented on July 28, 2024

Use the latest Colab:

https://colab.research.google.com/drive/1Oa1WwKB4M4l1GmR7CtswDVgOCOeSLChA?usp=sharing

For the GFPGANv1 model, you should use the following command:
!python inference_gfpgan.py --model_path experiments/pretrained_models/GFPGANv1.pth --test_path inputs/cropped_faces --save_root results --arch original --channel 1
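If that loads cleanly, the side-by-side comparisons should land in results/cmp, the directory the earlier !ls command was probing.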

