Navigating the GAN Parameter Space for Semantic Image Editing

Authors' official implementation of the CVPR'2021 paper Navigating the GAN Parameter Space for Semantic Image Editing by Anton Cherepkov, Andrey Voynov, and Artem Babenko.


Main steps of our approach:

  • First, we form a low-dimensional subspace in the parameter space of a pretrained GAN;
  • Second, we solve an optimization problem to discover interpretable controls in this subspace.

Typical Visual Effects

FFHQ

[Images: original, nose length, eyes size, eyes direction, brows up, vampire]

LSUN-Car

[Image: wheels size]

LSUN-Church

[Image: add conic structures]

LSUN-Horse

[Image: thickness]

Real Images Domain

[Images: edits applied to real images]

Pix2PixHD

The proposed method is also applicable to pixel-to-pixel models. Here we present some of the effects discovered for the label-to-streetview model.

[GIF: add curb]

[GIF: road darkening]

Check the high-res videos here: curb1, curb2, darkening1, darkening2.


Training

There are two options for forming the low-dimensional parameter subspace: LPIPS-Hessian-based and SVD-based. The former is recommended.

LPIPS-Hessian-based

If you want to use the LPIPS-Hessian, first run its computation:

  • Calculating the Hessian's eigenvectors
python hessian_power_iteration.py \
    --out result \                             #  script output
    --gan_weights stylegan2-car-config-f.pt \  #  model weights
    --resolution 512 \                         #  model resolution
    --gan_conv_layer_index 3 \                 #  target convolutional layer index, starting from 0
    --num_samples 512 \                        #  number of z-samples used for the Hessian computation
    --num_eigenvectors 64                      #  number of leading eigenvectors to calculate

Second, run the interpretable directions search:

  • Interpretable directions in the subspace spanned by the Hessian's eigenvectors
python run_train.py \
    --out results \                           #  script output
    --gan_type StyleGAN2 \                    #  currently only StyleGAN2 is available
    --gan_weights stylegan2-car-config-f.pt \
    --resolution 512 \
    --shift_predictor_size 256 \              #  resize to 256 before shift prediction [memory-saving option]
    --deformator_target weight_fixedbasis \
    --basis_vectors_path eigenvectors_layer3_stylegan2-car-config-f.pt \  # first step results
    --deformator_conv_layer_index 3 \         # should be the same as on the first step
    --directions_count 64 \
    --shift_scale 60 \
    --min_shift 15

SVD-based

The second option is to run the search over the SVD-based basis:

python run_train.py \
    --out results \
    --gan_type StyleGAN2 \
    --gan_weights stylegan2-car-config-f.pt \
    --resolution 512 \
    --shift_predictor_size 256 \
    --deformator_target weight_svd \
    --deformator_conv_layer_index 3 \  #  target convolutional layer index starting from 0
    --directions_count 64 \
    --shift_scale 3500 \
    --shift_weight 0.0025 \
    --min_shift 300

Although we successfully use the same shift_scale across layers, tuning it manually per layer can slightly improve performance; a sketch of such a sweep follows.
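A minimal sketch of a per-layer sweep, assuming the flags shown above; the layer indices and scale values here are illustrative, not tested settings:

import subprocess

# Try a few per-layer shift scales for the SVD-based search;
# each run writes to its own output directory
for layer_ix in (2, 3, 4):
    for shift_scale in (2000, 3500, 5000):
        subprocess.run([
            'python', 'run_train.py',
            '--out', f'results_layer{layer_ix}_scale{shift_scale}',
            '--gan_type', 'StyleGAN2',
            '--gan_weights', 'stylegan2-car-config-f.pt',
            '--resolution', '512',
            '--shift_predictor_size', '256',
            '--deformator_target', 'weight_svd',
            '--deformator_conv_layer_index', str(layer_ix),
            '--directions_count', '64',
            '--shift_scale', str(shift_scale),
            '--shift_weight', '0.0025',
            '--min_shift', '300',
        ], check=True)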


Evaluation

Here we present the code to visualize controls discovered by the previous steps for:

  • SVD;
  • SVD with optimization (optimization-based);
  • Hessian (spectrum-based);
  • Hessian with optimization (hybrid).

First, import the required modules and load the generator:

from inference import GeneratorWithWeightDeformator
from loading import load_generator

G = load_generator(
    args={'resolution': 512, 'gan_type': 'StyleGAN2'},
    G_weights='stylegan2-car-config-f.pt'
)

Second, modify the GAN parameters using one of the methods below.

SVD-based
G = GeneratorWithWeightDeformator(
    generator=G,
    deformator_type='svd',
    layer_ix=3,
)
Optimization in the SVD basis
G = GeneratorWithWeightDeformator(
    generator=G,
    deformator_type='svd_rectification',
    layer_ix=3,
    checkpoint_path='_svd_based_train_/checkpoint.pt',
)
Hessian's eigenvectors
G = GeneratorWithWeightDeformator(
    generator=G,
    deformator_type='hessian',
    layer_ix=3,
    eigenvectors_path='eigenvectors_layer3_stylegan2-car-config-f.pt'
)
Optimization in the Hessian eigenvectors basis
G = GeneratorWithWeightDeformator(
    generator=G,
    deformator_type='hessian_rectification',
    layer_ix=3,
    checkpoint_path='_hessian_based_train_/checkpoint.pt',
    eigenvectors_path='eigenvectors_layer3_stylegan2-car-config-f.pt'
)

Now you can apply the modified parameters to every element in the batch as follows:

import torch

# Generate some samples
zs = torch.randn((4, 512)).cuda()

# Specify deformation index and shift
direction = 0
shift = 100.0
G.deformate(direction, shift)

# Simply call the generator
imgs_deformated = G(zs)
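To inspect the results, you can write the batch to disk. A minimal sketch using torchvision (not part of this repository's API); it assumes the generator outputs images in [-1, 1], as rosinality's StyleGAN2 implementation does:

from torchvision.utils import save_image

# Map from [-1, 1] to [0, 1] and save the batch of 4 as a 2x2 grid
save_image(imgs_deformated.clamp(-1, 1) * 0.5 + 0.5, 'deformated_grid.png', nrow=2)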

Saving into a file

You can save the discovered parameter shifts (including layer_ix and data) to a file. To do this (see the sketch after this list):

  1. Modify the GAN parameters in the manner described above;
  2. Call G.save_deformation(path, direction_ix).
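For example, a minimal sketch; the file name is arbitrary, and the deformator is set up as in the evaluation section above:

# Wrap the generator, e.g. with the Hessian-based deformator
G = GeneratorWithWeightDeformator(
    generator=G,
    deformator_type='hessian',
    layer_ix=3,
    eigenvectors_path='eigenvectors_layer3_stylegan2-car-config-f.pt'
)

# Save direction 0 to 'deformation.pt'
G.save_deformation('deformation.pt', 0)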

Loading from file

First, import the required modules and load the generator:

from inference import GeneratorWithFixedWeightDeformation
from loading import load_generator

G = load_generator(
    args={'resolution': 512, 'gan_type': 'StyleGAN2'},
    G_weights='stylegan2-car-config-f.pt'
)

Second, modify the GAN:

G = GeneratorWithFixedWeightDeformation(generator=G, deformation_path='deformation.pt')

Now you can apply the modified parameters to every element in the batch as follows:

import torch

# Generate some samples
zs = torch.randn((4, 512)).cuda()

# Deformate; G.scale is a recommended scale
G.deformate(0.5 * G.scale)

# Simply call the generator
imgs_deformated = G(zs)
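To compare several shift magnitudes, you can re-deformate before each forward pass. A minimal sketch, assuming deformate can be called repeatedly on the same generator:

# Sweep the shift from -G.scale to +G.scale
for factor in (-1.0, -0.5, 0.0, 0.5, 1.0):
    G.deformate(factor * G.scale)
    imgs = G(zs)  # images for this shift magnitude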

Pretrained directions

Annotated generator directions and GIF example sources:
FFHQ: https://www.dropbox.com/s/7m838ewhzgcb3v5/ffhq_weights_deformations.tar
Car: https://www.dropbox.com/s/rojdcfvnsdue10o/car_weights_deformations.tar
Horse: https://www.dropbox.com/s/ir1lg5v2yd4cmkx/horse_weights_deformations.tar
Church: https://www.dropbox.com/s/do9yt3bggmggehm/church_weights_deformations.tar

StyleGAN2 weights: https://www.dropbox.com/s/d0aas2fyc9e62g5/stylegan2_weights.tar
The generator weights are the original models' weights converted to PyTorch (see Credits).

You can find a loading and deformation example in example.ipynb.
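A minimal sketch for fetching and unpacking one of the archives above; appending ?dl=1 makes Dropbox serve the file directly:

import tarfile
import urllib.request

# Download and extract the FFHQ directions archive
url = 'https://www.dropbox.com/s/7m838ewhzgcb3v5/ffhq_weights_deformations.tar?dl=1'
urllib.request.urlretrieve(url, 'ffhq_weights_deformations.tar')
with tarfile.open('ffhq_weights_deformations.tar') as tar:
    tar.extractall('.')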


Citation

@InProceedings{Navigan_CVPR_2021,
    author    = {Cherepkov, Anton and Voynov, Andrey and Babenko, Artem},
    title     = {Navigating the GAN Parameter Space for Semantic Image Editing},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2021},
    pages     = {3671-3680}
}

Credits

Our code is based on the official implementation of Unsupervised Discovery of Interpretable Directions in the GAN Latent Space:
https://github.com/anvoynov/GANLatentDiscovery
The generator model is implemented on top of StyleGAN2-pytorch:
https://github.com/rosinality/stylegan2-pytorch
Generator weights were converted from the original StyleGAN2:
https://github.com/NVlabs/stylegan2


