# Homomorpher

Code used to create the Latent Compass. This repo contains the code for the Latent Compass server (if you're interested in the client, please get in touch).

**Latent Compass: Creation by Navigation**
Sarah Schwettmann\*, Hendrik Strobelt\*, Mauro Martino
MIT CSAIL, MIT BCS, MIT-IBM Watson AI Lab, IBM Research
In the NeurIPS 2020 Workshop on Machine Learning for Creativity and Design
- To run the code yourself, start by cloning the repository:

  ```bash
  git clone https://github.com/schwettmann/homomorpher
  ```

- You will probably want a conda environment or virtual environment rather than installing the dependencies globally. For example, to create and activate a new virtual environment:

  ```bash
  python3 -m venv env
  source env/bin/activate
  ```
- This code loads a pretrained BigGAN. For this model you will need to install pretorched-x:

  ```bash
  cd ..
  git clone https://github.com/alexandonian/pretorched-x.git
  cd pretorched-x
  git checkout dev
  python setup.py install
  ```
- Install the remaining Python dependencies, including FastAPI and aiofiles:

  ```bash
  pip install "fastapi[all]" aiofiles
  ```

- Run the server:

  ```bash
  uvicorn server:app --reload
  ```

  Or, to run on a specific port with an OpenAPI path prefix:

  ```bash
  OPENAPI_PREFIX=/frankenstein uvicorn server:app --port 5005 --reload
  ```
Our method works by training a linear model on noise vectors in the latent space of BigGAN (or on activations of intermediate layers) associated with images that a user has sorted into two classes. The beauty of this method is that you only need a few examples! We then learn a direction that transforms any new image from one class into the other, by steering through latent space or activation space along that learned direction. Code for implementing this method is in the `homomorpher` module in `backend`.
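The idea above can be sketched end to end on toy data. This is a minimal illustration, not the repo's implementation: the 128-d synthetic "latents", the hidden axis, and the plain-NumPy logistic regression are all assumptions standing in for BigGAN latents and the `homomorpher` training code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for BigGAN latents: a 128-d "Z-space" in which class-1
# vectors are shifted along a hidden axis the linear model should recover.
dim, n = 128, 10  # 10 examples per class, matching the few-examples regime
hidden_axis = rng.normal(size=dim)
hidden_axis /= np.linalg.norm(hidden_axis)
z = np.vstack([rng.normal(size=(n, dim)),                     # class 0
               rng.normal(size=(n, dim)) + 3.0 * hidden_axis])  # class 1
z_lbl = np.array([0] * n + [1] * n)

# Train a linear model (logistic regression by gradient descent); its
# weight vector is the learned direction separating the two classes.
w = np.zeros(dim)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-np.clip(z @ w, -30, 30)))  # P(class 1)
    w -= 0.1 * (z.T @ (p - z_lbl) / len(z) + 1e-3 * w)  # gradient + L2
direction = w / np.linalg.norm(w)

# Alignment with the hidden axis (closer to 1 means it was recovered).
print(float(direction @ hidden_axis))
```

With only ten examples per class the recovered direction is noisy but clearly correlated with the axis that separates the classes, which is all that steering needs.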
You can try generating images from BigGAN and capturing perceptual dimensions you find salient by sorting the images (minimum 10!) into two classes (with labels `1` and `0`). To train a model and learn a direction corresponding to the distinction between the two classes, first decide whether you are going to work in Z-space or in the space of intermediate-layer activations (layers closer to the image output control increasingly fine-grained image features). `homomorpher.train_model(z, z_lbl)` learns a model in Z-space and `homomorpher.train_model_layer(z, z_lbl, l)` learns a model in feature space at layer `l`. The `homomorpher` module also contains code for using learned models to transform images.
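Once a direction is learned, transforming a new image amounts to stepping its latent (or activation) along that direction before decoding. A minimal sketch in plain NumPy, where the unit `direction` and the `steer` helper are illustrative assumptions rather than the module's actual API:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical learned unit direction and a new latent vector to steer.
direction = rng.normal(size=128)
direction /= np.linalg.norm(direction)
z_new = rng.normal(size=128)

def steer(z, direction, alpha):
    """Move a latent along a direction; alpha sets how far to step
    from class 0 toward class 1 (negative alpha goes the other way)."""
    return z + alpha * direction

steered = steer(z_new, direction, alpha=2.0)

# The steered latent's projection onto the direction grows by exactly alpha.
print(round(float((steered - z_new) @ direction), 2))  # prints 2.0
```

In the real pipeline the steered vector would then be fed back through BigGAN (or the remaining layers, when steering activations) to produce the transformed image.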
The Latent Compass makes navigating the model's concept space easy with an interface for sorting generated images, learning directions, and using those directions to steer through visual space.
Example "fullness" direction found using the Latent Compass, applied to images across classes.
If you use this code for your research, please cite our paper:

```bibtex
@misc{schwettmann2020latent,
      title={Latent Compass: Creation by Navigation},
      author={Sarah Schwettmann and Hendrik Strobelt and Mauro Martino},
      year={2020},
      eprint={2012.14283},
      archivePrefix={arXiv},
      primaryClass={cs.AI}
}
```