The code for the paper Learning Continuous Image Representation with Local Implicit Image Function.
```
@misc{chen2020learning,
    title={Learning Continuous Image Representation with Local Implicit Image Function},
    author={Yinbo Chen and Sifei Liu and Xiaolong Wang},
    year={2020},
    eprint={2012.09161},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}
```
- The code is under testing and will be finalized soon.
- Visualization code is to be provided.
- Python 3
- PyTorch 1.6.0
- TensorboardX
- yaml, numpy, tqdm, imageio
Run `mkdir load` to create a folder for storing the datasets.
- DIV2K: `mkdir` and `cd` into `load/div2k`. Download HR images and bicubic validation LR images from the DIV2K website (i.e. Train_HR, Valid_HR, Valid_LR_X2, Valid_LR_X3, Valid_LR_X4). `unzip` these files to get the image folders.

- benchmark datasets: `mkdir` and `cd` into `load/benchmark`. Download and `tar -xf` the benchmark datasets (provided by this repo) to get the image folders `Set5/`, `Set14/`, `B100/`, `Urban100/`.

- celebAHQ: `mkdir load/celebAHQ` and `cp scripts/resize.py load/celebAHQ/`, then `cd load/celebAHQ/`. Download and `unzip` data1024x1024.zip from the Google Drive link (provided by this repo). Run `python resize.py` to get the image folders `256/`, `128/`, `64/`, `32/`. Download `split.json`.
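For intuition, the resize step above turns the 1024x1024 celebAHQ images into a pyramid of lower resolutions. Below is a minimal sketch of such a step using simple block averaging with NumPy; the repo's `scripts/resize.py` is the authoritative version and likely uses a different interpolation method.

```python
import numpy as np

def downsample(img, factor):
    """Block-average an H x W x C image by an integer factor
    (a stand-in for the interpolation scripts/resize.py performs)."""
    h, w, c = img.shape
    assert h % factor == 0 and w % factor == 0
    return img.reshape(h // factor, factor, w // factor, factor, c).mean(axis=(1, 3))

# Build the 256/128/64/32 pyramid from one 1024x1024 image.
img = np.random.rand(1024, 1024, 3)
pyramid = {s: downsample(img, 1024 // s) for s in (256, 128, 64, 32)}
```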
0. Preliminaries
- For `train_liif.py` or `test.py`, use `--gpu [GPU]` to specify the GPU IDs for running (e.g. `--gpu 0` or `--gpu 0,1`).

- For `train_liif.py`, by default the saving folder is at `save/_[CONFIG_NAME]`. We can use `--name` to specify a name if needed.

- For dataset args in configs, `cache: in_memory` denotes pre-loading into memory (may require large memory, e.g. ~40GB for DIV2K), `cache: bin` denotes creating binary files (in the same folder) the first time, and `cache: none` denotes direct loading. We can modify it according to the hardware resources before running the training scripts.
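The trade-off between the three cache modes can be pictured with the sketch below. This is illustrative only, not the repo's actual dataset code, and it uses `.npy` arrays as stand-ins for decoded images.

```python
import os
import pickle

import numpy as np

class ImageFolderDataset:
    """Sketch of the three `cache` modes (illustrative only; the repo's
    dataset implementation is the authoritative version)."""

    def __init__(self, paths, cache='none'):
        self.paths = paths
        self.cache = cache
        if cache == 'in_memory':
            # Pre-load everything into RAM once (fast, memory-hungry).
            self.data = [self._load(p) for p in paths]
        elif cache == 'bin':
            # On the first run, serialize each decoded image next to the
            # original; later runs read the (faster) binary files.
            self.data = []
            for p in paths:
                bin_path = p + '.pkl'
                if not os.path.exists(bin_path):
                    with open(bin_path, 'wb') as f:
                        pickle.dump(self._load(p), f)
                self.data.append(bin_path)

    def _load(self, path):
        # Stand-in for an image decoder; here files are .npy arrays.
        return np.load(path)

    def __getitem__(self, i):
        if self.cache == 'in_memory':
            return self.data[i]
        if self.cache == 'bin':
            with open(self.data[i], 'rb') as f:
                return pickle.load(f)
        return self._load(self.paths[i])  # cache: none, decode on demand

    def __len__(self):
        return len(self.paths)
```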
1. DIV2K experiments
Train: `python train_liif.py --config configs/train-div2k/train_edsr-baseline-liif.yaml` (with EDSR-baseline backbone; for RDN, replace `edsr-baseline` with `rdn`). We use 1 GPU for training EDSR-baseline-LIIF and 4 GPUs for RDN-LIIF.

Test: `bash scripts/test-div2k.sh [MODEL_PATH] [GPU]` for the DIV2K validation set, `bash scripts/test-benchmark.sh [MODEL_PATH] [GPU]` for the benchmark datasets. `[MODEL_PATH]` is the path to a `.pth` file; we use `epoch-last.pth` in the corresponding saving folder.
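Super-resolution benchmarks like these are typically scored with PSNR. For reference, a bare-bones PSNR computation looks like the following; this is an illustration, not the repo's exact evaluation code, which may crop borders or evaluate on a different color channel.

```python
import numpy as np

def psnr(pred, gt, max_val=1.0):
    """Peak signal-to-noise ratio between two images valued in [0, max_val]."""
    mse = np.mean((pred.astype(np.float64) - gt.astype(np.float64)) ** 2)
    if mse == 0:
        return float('inf')
    return 10 * np.log10(max_val ** 2 / mse)
```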
Name | Pretrained model |
---|---|
EDSR-baseline-LIIF | Download (19M) |
RDN-LIIF | Download (256M) |
2. celebAHQ experiments
Train: `python train_liif.py --config configs/train-celebAHQ/[CONFIG_NAME].yaml`.

Test: `python test.py --config configs/test/test-celebAHQ-32-256.yaml --model [MODEL_PATH]` (or `test-celebAHQ-64-128.yaml`). We use `epoch-best.pth` in the corresponding saving folder.