
vid2vid's Introduction





vid2vid

PyTorch implementation for high-resolution (e.g., 2048x1024) photorealistic video-to-video translation. It can be used for turning semantic label maps into photorealistic videos, synthesizing people talking from edge maps, or generating human motions from poses. The core of video-to-video translation is image-to-image translation. Some of our work in that space can be found in pix2pixHD and SPADE.

Video-to-Video Synthesis
Ting-Chun Wang1, Ming-Yu Liu1, Jun-Yan Zhu2, Guilin Liu1, Andrew Tao1, Jan Kautz1, Bryan Catanzaro1
1NVIDIA Corporation, 2MIT CSAIL
In Neural Information Processing Systems (NeurIPS) 2018

Video-to-Video Translation

  • Label-to-Streetview Results

  • Edge-to-Face Results

  • Pose-to-Body Results

  • Frame Prediction Results

Prerequisites

  • Linux or macOS
  • Python 3
  • NVIDIA GPU + CUDA cuDNN
  • PyTorch 0.4

Getting Started

Installation

  • Install the Python libraries dominate and requests.
pip install dominate requests
  • If you plan to train with face datasets, please install dlib.
pip install dlib
  • If you plan to train with pose datasets, please install DensePose and/or OpenPose.
  • Clone this repo:
git clone https://github.com/NVIDIA/vid2vid
cd vid2vid
  • Docker image: if you have difficulty building the repo, a Docker image can be found in the docker folder.

Testing

  • Please first download the example dataset by running python scripts/download_datasets.py.

  • Next, compile a snapshot of FlowNet2 by running python scripts/download_flownet2.py.

  • Cityscapes

    • Please download the pre-trained Cityscapes model by:

      python scripts/street/download_models.py
    • To test the model (bash ./scripts/street/test_2048.sh):

      #!./scripts/street/test_2048.sh
      python test.py --name label2city_2048 --label_nc 35 --loadSize 2048 --n_scales_spatial 3 --use_instance --fg --use_single_G

      The test results will be saved in: ./results/label2city_2048/test_latest/.

    • We also provide a smaller model trained with a single GPU, which produces slightly worse performance at 1024 x 512 resolution.

      • Please download the model by
      python scripts/street/download_models_g1.py
      • To test the model (bash ./scripts/street/test_g1_1024.sh):
      #!./scripts/street/test_g1_1024.sh
      python test.py --name label2city_1024_g1 --label_nc 35 --loadSize 1024 --n_scales_spatial 3 --use_instance --fg --n_downsample_G 2 --use_single_G
    • You can find more example scripts in the scripts/street/ directory.

  • Faces

    • Please download the pre-trained model by:
      python scripts/face/download_models.py
    • To test the model (bash ./scripts/face/test_512.sh):
      #!./scripts/face/test_512.sh
      python test.py --name edge2face_512 --dataroot datasets/face/ --dataset_mode face --input_nc 15 --loadSize 512 --use_single_G
      The test results will be saved in: ./results/edge2face_512/test_latest/.

Dataset

  • Cityscapes
    • We use the Cityscapes dataset as an example. To train a model on the full dataset, please download it from the official website (registration required).
    • We apply a pre-trained segmentation algorithm to get the corresponding semantic maps (train_A) and instance maps (train_inst).
    • Please add the obtained images to the datasets folder in the same way the example images are provided.
  • Face
    • We use the FaceForensics dataset. We then use landmark detection to estimate the face keypoints, and interpolate them to get face edges (a rough sketch of this step follows this list).
  • Pose
    • We use random dancing videos found on YouTube. We then apply DensePose / OpenPose to estimate the poses for each frame.
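
A rough, purely illustrative sketch of the face step mentioned above (landmark detection, then interpolation of the keypoints into edge curves), assuming dlib's 68-point predictor, scipy, and OpenCV; the model path, point groupings, and drawing details are assumptions and not necessarily the exact procedure in data/face_landmark_detection.py:

    import numpy as np
    import cv2
    import dlib
    from scipy import interpolate

    # Hypothetical paths; the 68-point predictor file must be downloaded separately.
    detector = dlib.get_frontal_face_detector()
    predictor = dlib.shape_predictor('shape_predictor_68_face_landmarks.dat')

    img = cv2.imread('face.jpg')
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    face = detector(gray)[0]                                              # assume one face
    pts = np.array([[p.x, p.y] for p in predictor(gray, face).parts()])   # 68 keypoints

    edges = np.zeros(gray.shape, np.uint8)
    # Fit a spline through each facial contour (jaw, brows, nose, eyes, lips) and draw it.
    for a, b in [(0, 17), (17, 22), (22, 27), (27, 31), (31, 36),
                 (36, 42), (42, 48), (48, 60), (60, 68)]:
        curve = pts[a:b]
        tck, _ = interpolate.splprep([curve[:, 0], curve[:, 1]], s=0, k=min(3, len(curve) - 1))
        x, y = interpolate.splev(np.linspace(0, 1, 100), tck)
        cv2.polylines(edges, [np.stack([x, y], 1).astype(np.int32)], False, 255, 1)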

Training with Cityscapes dataset

  • First, download the FlowNet2 checkpoint file by running python scripts/download_models_flownet2.py.
  • Training with 8 GPUs:
    • We adopt a coarse-to-fine approach, sequentially increasing the resolution from 512 x 256, 1024 x 512, to 2048 x 1024.
    • Train a model at 512 x 256 resolution (bash ./scripts/street/train_512.sh)
    #!./scripts/street/train_512.sh
    python train.py --name label2city_512 --label_nc 35 --gpu_ids 0,1,2,3,4,5,6,7 --n_gpus_gen 6 --n_frames_total 6 --use_instance --fg
    • Train a model at 1024 x 512 resolution (must train 512 x 256 first) (bash ./scripts/street/train_1024.sh):
    #!./scripts/street/train_1024.sh
    python train.py --name label2city_1024 --label_nc 35 --loadSize 1024 --n_scales_spatial 2 --num_D 3 --gpu_ids 0,1,2,3,4,5,6,7 --n_gpus_gen 4 --use_instance --fg --niter_step 2 --niter_fix_global 10 --load_pretrain checkpoints/label2city_512

If you have TensorFlow installed, you can see TensorBoard logs in ./checkpoints/label2city_1024/logs by adding --tf_log to the training scripts.

  • Training with a single GPU:

    • We trained our models using multiple GPUs. For convenience, we provide some sample training scripts (train_g1_XXX.sh) for single-GPU users, up to 1024 x 512 resolution. Again, a coarse-to-fine approach is adopted (256 x 128, 512 x 256, 1024 x 512). Performance is not guaranteed using these scripts.
    • For example, to train a 256 x 128 video with a single GPU (bash ./scripts/street/train_g1_256.sh)
    #!./scripts/street/train_g1_256.sh
    python train.py --name label2city_256_g1 --label_nc 35 --loadSize 256 --use_instance --fg --n_downsample_G 2 --num_D 1 --max_frames_per_gpu 6 --n_frames_total 6
  • Training at full (2k x 1k) resolution

    • Training at full resolution (2048 x 1024) requires 8 GPUs with at least 24 GB of memory (bash ./scripts/street/train_2048.sh). If only GPUs with 12 GB/16 GB of memory are available, please use the script ./scripts/street/train_2048_crop.sh, which will crop the images during training. Performance is not guaranteed with this script.

Training with face datasets

  • If you haven't already, please first download the example dataset by running python scripts/download_datasets.py.
  • Run the following command to compute face landmarks for the training dataset:
    python data/face_landmark_detection.py train
  • Run the example script (bash ./scripts/face/train_512.sh)
    python train.py --name edge2face_512 --dataroot datasets/face/ --dataset_mode face --input_nc 15 --loadSize 512 --num_D 3 --gpu_ids 0,1,2,3,4,5,6,7 --n_gpus_gen 6 --n_frames_total 12  
  • For single-GPU users, example scripts are provided in train_g1_XXX.sh. These scripts are not fully tested; please use them at your own discretion. If you still hit out-of-memory errors, try reducing max_frames_per_gpu.
  • More example scripts can be found in scripts/face/.
  • Please refer to More Training/Test Details for more explanations about training flags.

Training with pose datasets

  • If you haven't already, please first download the example dataset by running python scripts/download_datasets.py.
  • Example DensePose and OpenPose results are included. If you plan to use your own dataset, please generate these results and arrange them in the same way as the example dataset.
  • Run the example script (bash ./scripts/pose/train_256p.sh)
    python train.py --name pose2body_256p --dataroot datasets/pose --dataset_mode pose --input_nc 6 --num_D 2 --resize_or_crop ScaleHeight_and_scaledCrop --loadSize 384 --fineSize 256 --gpu_ids 0,1,2,3,4,5,6,7 --batchSize 8 --max_frames_per_gpu 3 --no_first_img --n_frames_total 12 --max_t_step 4
  • Again, for single-GPU users, example scripts are provided in train_g1_XXX.sh. These scripts are not fully tested; please use them at your own discretion. If you still hit out-of-memory errors, try reducing max_frames_per_gpu.
  • More example scripts can be found in scripts/pose/.
  • Please refer to More Training/Test Details for more explanations about training flags.

Training with your own dataset

  • If your input is a label map, please generate one-channel label maps whose pixel values correspond to the object labels (i.e., 0, 1, ..., N-1, where N is the number of labels). This is because we need to generate one-hot vectors from the label maps (a minimal sketch of this conversion follows this list). Please use --label_nc N during both training and testing.
  • If your input is not a label map, please specify --input_nc N, where N is the number of input channels (the default is 3, for RGB images).
  • The default preprocessing setting is scaleWidth, which scales the width of all training images to opt.loadSize (1024) while keeping the aspect ratio. If you want a different setting, please change it with the --resize_or_crop option:
    • scaleWidth_and_crop first resizes the image to have width opt.loadSize and then does random cropping of size (opt.fineSize, opt.fineSize).
    • crop skips the resizing step and only performs random cropping.
    • scaledCrop crops the image while retaining the original aspect ratio.
    • randomScaleHeight randomly scales the image height to be between opt.loadSize and opt.fineSize.
    • none skips preprocessing entirely, doing nothing other than making sure the image is divisible by 32.
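
A minimal sketch of the one-hot conversion that --label_nc implies, assuming a PyTorch tensor label_map of shape (1, 1, H, W) with integer values in 0..N-1; this is only illustrative and not the repo's exact code:

    import torch

    N = 35                                             # --label_nc N
    label_map = torch.randint(0, N, (1, 1, 256, 512))  # one-channel label map, values 0..N-1
    one_hot = torch.zeros(1, N, 256, 512)
    one_hot.scatter_(1, label_map, 1.0)                # channel k is 1 wherever the label is k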

More Training/Test Details

  • We generate frames in the video sequentially, where the generation of the current frame depends on previous frames. To generate the first frame for the model, there are 3 different ways:

      1. Using another generator which was trained on generating single images (e.g., pix2pixHD) by specifying --use_single_G. This is the option we use in the test scripts.
      2. Using the first frame in the real sequence by specifying --use_real_img.
      3. Forcing the model to also synthesize the first frame by specifying --no_first_img. This must be trained separately before inference.
  • The way we train the model is as follows: suppose we have 8 GPUs, 4 for generators and 4 for discriminators, and we want to train on 28 frames. Also, assume each GPU can generate only one frame. The first GPU generates the first frame and passes it to the next GPU, and so on. After the 4 frames are generated, they are passed to the 4 discriminator GPUs to compute the losses. Then the last generated frame becomes the input to the next batch, and the next 4 frames in the training sequence are loaded into the GPUs. This is repeated 7 times (4 x 7 = 28) to train on all 28 frames. (A sketch of this scheduling loop follows the flag lists below.)

  • Some important flags:

    • n_gpus_gen: the number of GPUs to use for generators (while the others are used for discriminators). We separate generators and discriminators into different GPUs since when dealing with high resolutions, even one frame cannot fit in a GPU. If the number is set to -1, there is no separation and all GPUs are used for both generators and discriminators (only works for low-res images).
    • n_frames_G: the number of input frames to feed into the generator network; i.e., n_frames_G - 1 is the number of frames we look into the past. The default is 3 (i.e., conditioned on the previous two frames).
    • n_frames_D: the number of frames to feed into the temporal discriminator. The default is 3.
    • n_scales_spatial: the number of scales in the spatial domain. We train from the coarsest scale and all the way to the finest scale. The default is 3.
    • n_scales_temporal: the number of scales for the temporal discriminator. The finest scale takes in the sequence at the original frame rate. The coarser scales subsample the frames by a factor of n_frames_D before feeding them into the discriminator. For example, if n_frames_D = 3 and n_scales_temporal = 3, the discriminator effectively sees 27 frames. The default is 3.
    • max_frames_per_gpu: the number of frames in one GPU during training. If you run into out-of-memory errors, please first try reducing this number. If your GPU memory can fit more frames, try increasing this number to make training faster. The default is 1.
    • max_frames_backpropagate: the number of frames that loss backpropagates to previous frames. For example, if this number is 4, the loss on frame n will backpropagate to frame n-3. Increasing this number will slightly improve the performance, but also cause training to be less stable. The default is 1.
    • n_frames_total: the total number of frames in a sequence we want to train with. We gradually increase this number during training.
    • niter_step: the number of epochs after which we double n_frames_total. The default is 5.
    • niter_fix_global: if this number is not 0, only train the finest spatial scale for this many epochs before starting to fine-tune all scales.
    • batchSize: the number of sequences to train at a time. We normally set batchSize to 1 since often, one sequence is enough to occupy all GPUs. If you want to do batchSize > 1, currently only batchSize == n_gpus_gen is supported.
    • no_first_img: if not specified, the model will assume the first frame is given and synthesize the successive frames. If specified, the model will also try to synthesize the first frame instead.
    • fg: if specified, use the foreground-background separation model as stated in the paper. The foreground labels must be specified by --fg_labels.
    • no_flow: if specified, do not use flow warping and directly synthesize frames. We found this usually still works reasonably well when the background is static, while saving memory and training time.
    • sparse_D: if specified, only apply temporal discriminator on sparse frames in the sequence. This helps save memory while having little effect on performance.
  • For other flags, please see options/train_options.py and options/base_options.py for all the training flags; see options/test_options.py and options/base_options.py for all the test flags.

  • Additional flags for edge2face examples:

    • no_canny_edge: do not use Canny edges for the background as input.
    • no_dist_map: by default, we use the distance transform of the face edge map as input. This flag makes the model use the edge maps directly.
  • Additional flags for pose2body examples:

    • densepose_only: use only DensePose results as input. Please also remember to change input_nc to 3.
    • openpose_only: use only OpenPose results as input. Please also remember to change input_nc to 3.
    • add_face_disc: add an additional discriminator that only works on the face region.
    • remove_face_labels: remove the DensePose results for the face and add noise to the OpenPose face results, so the network becomes more robust to different face shapes. This is important if you plan to do inference on half-body videos (otherwise this flag is usually unnecessary).
    • random_drop_prob: the probability of randomly dropping each pose segment during training, so the network becomes more robust to missing poses at inference time. The default is 0.05.
    • basic_point_only: if specified, only use basic joint keypoints for OpenPose output, without using any hand or face keypoints.
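
To make the frame-scheduling scheme described above (8 GPUs, 4 generated frames per pass) more concrete, here is a purely illustrative sketch; all helper names below are hypothetical stubs, and the real logic lives in train.py and models/vid2vid_model_G.py:

    # Hypothetical stand-ins so the loop runs; not the actual vid2vid code.
    n_gpus_gen, frames_per_gpu = 4, 1
    frames_per_pass = n_gpus_gen * frames_per_gpu        # 4 frames generated per pass
    n_frames_total = 28                                  # 28 / 4 = 7 passes

    def load_frames(start, count):                       # stub data loader
        return [f"frame_{i}" for i in range(start, start + count)]

    def generate(labels, prev_frame):                    # stub generator (GPUs 0-3)
        return [f"fake({l}|prev={prev_frame})" for l in labels]

    def discriminator_loss(fakes, reals):                # stub discriminators (GPUs 4-7)
        return 0.0

    fake_last = None
    for start in range(0, n_frames_total, frames_per_pass):
        labels = load_frames(start, frames_per_pass)     # next 4 label maps
        reals = load_frames(start, frames_per_pass)      # next 4 real frames
        fakes = generate(labels, fake_last)              # sequential generation
        loss = discriminator_loss(fakes, reals)          # losses on the 4 new frames
        fake_last = fakes[-1]                            # conditions the next pass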

Citation

If you find this useful for your research, please cite the following paper.

@inproceedings{wang2018vid2vid,
   author    = {Ting-Chun Wang and Ming-Yu Liu and Jun-Yan Zhu and Guilin Liu
                and Andrew Tao and Jan Kautz and Bryan Catanzaro},
   title     = {Video-to-Video Synthesis},
   booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},   
   year      = {2018},
}

Acknowledgments

We thank Karan Sapra, Fitsum Reda, and Matthieu Le for generating the segmentation maps for us. We also thank Lisa Rhee for allowing us to use her dance videos for training. We thank William S. Peebles for proofreading the paper.
This code borrows heavily from pytorch-CycleGAN-and-pix2pix and pix2pixHD.

vid2vid's People

Contributors

jiaxianhua, junyanz, mingyuliutw, tcwang0509


vid2vid's Issues

Add demo gif to README

Disclaimer: This is a bot

It looks like your repo is trending. The github_trending_videos Instagram account automatically shows the demo GIFs of trending GitHub repos.

Your README doesn't seem to have any demo GIFs. Add one, and the next time the parser runs it will pick it up and post it on its Instagram feed. If you don't want this, just close this issue and we won't bother you again.

Pose To Body To Be Used On Defined Body?

Hey,

This is excellent work. I am curious: can I use Pose to Body in a way that lets me define which body it is applied to?

That is, could I make myself dance in a video by giving it the poses of some dancer? Is that possible? If so, please elaborate a little on how to achieve it. Also, how many GPUs would be required for such a task?

Thanks in advance!

Using the model on my own dataset

Hi, this is an extremely interesting project. I have another dataset that I would like to use: images taken in 15-minute increments that I want to transform into another image. We have used pix2pix for this problem, but we also wanted to see if vid2vid will yield better results.
I want to know how the images need to be formatted or organized before training on them.
Any help would be much appreciated.

Is 8 GB of graphics memory not enough? [cuda runtime error (2) : out of memory]

I ran out of memory using the suggested test

/vid2vid# bash ./scripts/test_2048.sh 
bash ./scripts/test_2048.sh 
------------ Options -------------
aspect_ratio: 1.0
batchSize: 1
checkpoints_dir: ./checkpoints
dataroot: datasets/Cityscapes/test_A
dataset_mode: temporal
debug: False
display_id: 0
display_winsize: 512
feat_num: 3
fg: True
fg_labels: [26]
fineSize: 512
gpu_ids: [0]
how_many: 300
input_nc: 3
isTrain: False
label_feat: False
label_nc: 35
loadSize: 2048
load_features: False
load_pretrain: 
max_dataset_size: inf
model: vid2vid
nThreads: 2
n_blocks: 9
n_blocks_local: 3
n_downsample_E: 3
n_downsample_G: 3
n_frames_G: 3
n_gpus_gen: 1
n_local_enhancers: 1
n_scales_spatial: 3
name: label2city_2048
ndf: 64
nef: 32
netE: simple
netG: composite
ngf: 128
no_first_img: False
no_flip: False
norm: batch
ntest: inf
output_nc: 3
phase: test
resize_or_crop: scaleWidth
results_dir: ./results/
serial_batches: False
tf_log: False
use_instance: True
use_real_img: False
use_single_G: True
which_epoch: latest
-------------- End ----------------
CustomDatasetDataLoader
dataset [TestDataset] was created
vid2vid
---------- Networks initialized -------------
-----------------------------------------------
Doing 28 frames
process image... ['datasets/Cityscapes/test_A/stuttgart_00/stuttgart_00_000000_000003_leftImg8bit.png']
THCudaCheck FAIL file=/pytorch/aten/src/THC/generic/THCStorage.cu line=58 error=2 : out of memory
Traceback (most recent call last):
  File "test.py", line 43, in <module>
    generated = model.inference(A, B, inst)
  File "/vid2vid/models/vid2vid_model_G.py", line 198, in inference
    fake_B = self.generate_frame_infer(real_A[self.n_scales-1-s], s)
  File "/vid2vid/models/vid2vid_model_G.py", line 216, in generate_frame_infer
    self.fake_B_feat, self.flow_feat, self.fake_B_fg_feat, use_raw_only)    
  File "/vid2vid/models/networks.py", line 270, in forward
    img_fg_feat = self.indv_up(self.indv_down(input) + img_fg_feat_coarse)        
  File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/container.py", line 91, in forward
    input = module(input)
  File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/padding.py", line 163, in forward
    return F.pad(input, self.padding, 'reflect')
  File "/usr/local/lib/python3.5/dist-packages/torch/nn/functional.py", line 1941, in pad
    return torch._C._nn.reflection_pad2d(input, pad)
RuntimeError: cuda runtime error (2) : out of memory at /pytorch/aten/src/THC/generic/THCStorage.cu:58

ModuleNotFoundError: No module named 'resample2d_cuda'

Hi,

I faced an issue, 'ModuleNotFoundError: No module named 'resample2d_cuda'.
Do you know how to solve this?

The 'resample2d_package' folder contains the following:
D:\download\vid2vid\models\flownet2_pytorch\networks\resample2d_package
__pycache__
__init__.py
resample2d.py
resample2d_cuda.cc
resample2d_kernel.cu
resample2d_kernel.cuh
setup.py

Following is the cmd command.

D:\download\vid2vid>python test.py --name label2city_2048 --dataroot datasets/Cityscapes/test_A --loadSize 2048 --n_scales_spatial 3 --use_instance --fg --use_single_G
------------ Options -------------
aspect_ratio: 1.0
batchSize: 1
checkpoints_dir: ./checkpoints
dataroot: datasets/Cityscapes/test_A
dataset_mode: temporal
debug: False
display_id: 0
display_winsize: 512
feat_num: 3
fg: True
fg_labels: [26]
fineSize: 512
gpu_ids: [0]
how_many: 300
input_nc: 3
isTrain: False
label_feat: False
label_nc: 35
loadSize: 2048
load_features: False
load_pretrain:
max_dataset_size: inf
model: vid2vid
nThreads: 2
n_blocks: 9
n_blocks_local: 3
n_downsample_E: 3
n_downsample_G: 3
n_frames_G: 3
n_gpus_gen: 1
n_local_enhancers: 1
n_scales_spatial: 3
name: label2city_2048
ndf: 64
nef: 32
netE: simple
netG: composite
ngf: 128
no_first_img: False
no_flip: False
norm: batch
ntest: inf
output_nc: 3
phase: test
resize_or_crop: scaleWidth
results_dir: ./results/
serial_batches: False
tf_log: False
use_instance: True
use_real_img: False
use_single_G: True
which_epoch: latest
-------------- End ----------------
CustomDatasetDataLoader
dataset [TestDataset] was created
vid2vid
Traceback (most recent call last):
File "test.py", line 24, in
model = create_model(opt)
File "D:\download\vid2vid\models\models.py", line 7, in create_model
from .vid2vid_model_G import Vid2VidModelG
File "D:\download\vid2vid\models\vid2vid_model_G.py", line 13, in
from . import networks
File "D:\download\vid2vid\models\networks.py", line 12, in
from .flownet2_pytorch.networks.resample2d_package.resample2d import Resample2d
File "D:\download\vid2vid\models\flownet2_pytorch\networks\resample2d_package\resample2d.py", line 3, in
import resample2d_cuda
ModuleNotFoundError: No module named 'resample2d_cuda'

Undefined symbol while importing resample2d_cuda

Dear all,

While importing resample2d_cuda, I am getting the following error.

ImportError: /home/xyz/.local/lib/python3.6/site-packages/resample2d_cuda-0.0.0-py3.6-linux-x86_64.egg/resample2d_cuda.cpython-36m-x86_64-linux-gnu.so: undefined symbol: _ZN2at5ErrorC1ENS_14SourceLocationESs

May I know the reason for this?

Thanks for any help,
Sagar

make_power_2 (data/base_dataset.py)

def make_power_2(n, base=32.0):    
    return int(round(n / base) * base)

Should this actually result in making the number 2^n where n is an integer rather than 32*n?
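
For reference, with the default base the function as written rounds to the nearest multiple of 32, not to a power of two; a quick worked check (the inputs are arbitrary examples):

    >>> make_power_2(700)    # round(700 / 32) * 32 = 22 * 32
    704
    >>> make_power_2(1000)   # round(1000 / 32) * 32 = 31 * 32
    992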

ImportError: /vid2vid/models/flownet2_pytorch/networks/resample2d_package/_ext/resample2d/_resample2d.so: undefined symbol: PyInt_FromLong

I run this command

python3 test.py --name label2city_2048 --loadSize 2048 --n_scales_spatial 3 --use_instance --fg --use_single_G

And got this error:

/usr/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
  return f(*args, **kwds)
/usr/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
  return f(*args, **kwds)
------------ Options -------------
aspect_ratio: 1.0
batchSize: 1
checkpoints_dir: ./checkpoints
dataroot: datasets/Cityscapes/
dataset_mode: temporal
debug: False
display_id: 0
display_winsize: 512
feat_num: 3
fg: True
fg_labels: [26]
fineSize: 512
gpu_ids: [0]
how_many: 300
input_nc: 3
isTrain: False
label_feat: False
label_nc: 35
loadSize: 2048
load_features: False
load_pretrain: 
max_dataset_size: inf
model: vid2vid
nThreads: 2
n_blocks: 9
n_blocks_local: 3
n_downsample_E: 3
n_downsample_G: 3
n_frames_G: 3
n_gpus_gen: 1
n_local_enhancers: 1
n_scales_spatial: 3
name: label2city_2048
ndf: 64
nef: 32
netE: simple
netG: composite
ngf: 128
no_first_img: False
no_flip: False
norm: batch
ntest: inf
output_nc: 3
phase: test
resize_or_crop: scaleWidth
results_dir: ./results/
serial_batches: False
tf_log: False
use_instance: True
use_real_img: False
use_single_G: True
which_epoch: latest
-------------- End ----------------
CustomDatasetDataLoader
dataset [TestDataset] was created
vid2vid
Traceback (most recent call last):
  File "test.py", line 24, in <module>
    model = create_model(opt)
  File "/vid2vid/models/models.py", line 19, in create_model
    modelG.initialize(opt)
  File "/vid2vid/models/vid2vid_model_G.py", line 37, in initialize
    opt.n_downsample_G, opt.norm, 0, self.gpu_ids, opt)
  File "/vid2vid/models/networks.py", line 42, in define_G
    netG = CompositeGenerator(input_nc, output_nc, prev_output_nc, ngf, n_downsampling, opt.n_blocks, opt.fg, norm_layer)
  File "/vid2vid/models/networks.py", line 83, in __init__
    from .flownet2_pytorch.networks.resample2d_package.modules.resample2d import Resample2d            
  File "/vid2vid/models/flownet2_pytorch/networks/resample2d_package/modules/resample2d.py", line 2, in <module>
    from ..functions.resample2d import Resample2dFunction
  File "/vid2vid/models/flownet2_pytorch/networks/resample2d_package/functions/resample2d.py", line 3, in <module>
    from .._ext import resample2d
  File "/vid2vid/models/flownet2_pytorch/networks/resample2d_package/_ext/resample2d/__init__.py", line 3, in <module>
    from ._resample2d import lib as _lib, ffi as _ffi
ImportError: /vid2vid/models/flownet2_pytorch/networks/resample2d_package/_ext/resample2d/_resample2d.so: undefined symbol: PyInt_FromLong

ImportError: /vid2vid/models/flownet2_pytorch/networks/resample2d_package/_ext/resample2d/_resample2d.so: undefined symbol: PyInt_FromLong

Why? How?

Training a face model with train_g1_256.sh

Did anyone successfully train a model with train_g1_256.sh?
The readme says that single-GPU models were not well tested.
For some reason the training finishes after only a few hours, and when trying to test I get:

ISSUE: Pretrained network G0 has fewer layers; The following are not initialized:
['model_down_img', 'model_down_seg', 'model_final_flow', 'model_final_img', 'model_final_w', 'model_res_flow', 'model_res_img', 'model_up_flow', 'model_up_img']

rm: cannot remove 'Resample2d_kernel.o': No such file or directory

When I run
dreamer@FD:~/Documents/vid2vid-master$ python scripts/download_flownet2.py
I get

Compiling correlation kernels by nvcc...
Compiling resample2d kernels by nvcc...
rm: cannot remove 'Resample2d_kernel.o': No such file or directory
In file included from /home/dreamer/.local/lib/python3.5/site-packages/torch/lib/include/THC/THCGeneral.h:5:0,
                 from /home/dreamer/.local/lib/python3.5/site-packages/torch/lib/include/THC/THC.h:4,
                 from Resample2d_kernel.cu:1:
/home/dreamer/.local/lib/python3.5/site-packages/torch/lib/include/TH/THAllocator.h:6:28: fatal error: ATen/Allocator.h: No such file or directory
compilation terminated.
Compiling channelnorm kernels by nvcc...
rm: cannot remove 'ChannelNorm_kernel.o': No such file or directory
In file included from /home/dreamer/.local/lib/python3.5/site-packages/torch/lib/include/THC/THCGeneral.h:5:0,
                 from /home/dreamer/.local/lib/python3.5/site-packages/torch/lib/include/THC/THC.h:4,
                 from ChannelNorm_kernel.cu:1:
/home/dreamer/.local/lib/python3.5/site-packages/torch/lib/include/TH/THAllocator.h:6:28: fatal error: ATen/Allocator.h: No such file or directory
compilation terminated.

What should I do?

Any demo with hand-drawn sketches?

From the demo videos, it seems they are using algorithm-generated edge maps or labelled segmentation ground truth, all strongly tied to the source video content; in other words, the lines are not very distorted.

Can you show a demo with hand-drawn sketches, which may contain large distortions but still convey a good idea of what they are meant to express? That would be much more meaningful in showing that we can generate vivid videos from hand-drawn sketches.

Pretrained Pose-to-body Model?

Hi,
Thank you for this wonderful research — I was wondering if you were planning to release the pre-trained models for the pose to body translation task?
If not, could you release the hyper-parameters used to train the models for that task?
Thank you for any help!

Face landmark detection very slow

Running python data/face_landmark_detection.py train seems to be very slow (~1s per image).
I added a debug output to double-check if dlib makes use of CUDA:
print(dlib.DLIB_USE_CUDA), for which I get True.
But when checking with nvidia-smi there is nothing running on the GPU but I have one CPU thread running at 100%.

Is there any reason why dlib would not use the GPU here?

Install error - python scripts/download_flownet2.py

Hi,

I was following the instruction and tried to install flownet2 model with
python scripts/download_flownet2.py
on my machine

python3
ubuntu 18.04
Cuda 9.0

got error

error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
running install
running bdist_egg
running egg_info
writing channelnorm_cuda.egg-info/PKG-INFO
writing top-level names to channelnorm_cuda.egg-info/top_level.txt
writing dependency_links to channelnorm_cuda.egg-info/dependency_links.txt
reading manifest file 'channelnorm_cuda.egg-info/SOURCES.txt'
writing manifest file 'channelnorm_cuda.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_ext
building 'channelnorm_cuda' extension
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fno-strict-aliasing -Wdate-time -D_FORTIFY_SOURCE=2 -g -fdebug-prefix-map=/build/python2.7-nbjU53/python2.7-2.7.15~rc1=. -fstack-protector-strong -Wformat -Werror=format-security -fPIC -I/usr/lib/python2.7/dist-packages/torch/lib/include -I/usr/lib/python2.7/dist-packages/torch/lib/include/TH -I/usr/lib/python2.7/dist-packages/torch/lib/include/THC -I/usr/include -I/usr/include/python2.7 -c channelnorm_cuda.cc -o build/temp.linux-x86_64-2.7/channelnorm_cuda.o -std=c++11 -DTORCH_EXTENSION_NAME=channelnorm_cuda
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
In file included from channelnorm_cuda.cc:1:0:
/usr/lib/python2.7/dist-packages/torch/lib/include/torch/torch.h:5:10: fatal error: pybind11/pybind11.h: No such file or directory
 #include <pybind11/pybind11.h>
          ^~~~~~~~~~~~~~~~~~~~~
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1

Could you help me with this?

Single image generator does not exist?

I tried to run test.py using the shell scripts provided in ./scripts/street/*. Testing works fine on Cityscapes-test with the 2048 and 1024 models in the given configurations.

I changed the configurations a bit
python test.py --name label2city_2048 --label_nc 35 --loadSize 1024 --n_scales_spatial 3 --use_instance --fg --n_downsample_G 2 --use_single_G

and tried to understand what the outcome would be if I change --loadSize to 720 or 540 (which are not multiples of 32), or to 1024 or 256 (which are multiples of 32). Using the "label2city_2048" model, I am able to use --loadSize 1024, but the code fails when I use 720, 540, or 256 with this error:

File "/home/ubuntu/Documents/anubhav/vid2vid/models/vid2vid_model_G.py", line 279, in load_single_G raise ValueError('Single image generator does not exist')
ValueError: Single image generator does not exist

Question 1: Could you please suggest why it's not working for other load sizes, and what a potential fix could be?

Furthermore, I was testing the existing models on another dataset (BDD100K). My test-set images are 720x1280 instead of Cityscapes' 1024x2048. I used the "label2city_1024_g1" model with the following configuration:

python test.py --name label2city_1024_g1 --dataroot datasets/BDD/ --label_nc 40 --loadSize 720 --n_scales_spatial 3 --n_downsample_G 2 --use_single_G
and it errors out with

ValueError: Single image generator does not exist
and without the "--use_single_G" option it complains:

ValueError: Please specify the method for generating the first frame

Then I used the "--use_real_img" option, which looks for 3-channel semantic segmentation masked images in Test_B (which I provided), and again it errors out:

RuntimeError: invalid argument 2: size '[1 x -1 x 3 x 704 x 1280]' is invalid for input with 9011200 elements at /opt/conda/conda-bld/pytorch_1532579245307/work/aten/src/TH/THStorage.cpp:80

My question 2: Could you please suggest what I am doing wrong that doesn't conform with your code, and especially your thoughts on the last error?

thanks in advance

@junyanz, @tcwang0509, @mingyuliutw

Pre-trained model on body

Do you provide a pre-trained model for testing pose-to-body translation, or do we necessarily have to generate a dataset and train one ourselves?

AttributeError: module

Have you met this problem? How do I fix it? Please help!

AttributeError: module 'models.flownet2_pytorch.networks.resample2d_package._ext.resample2d._resample2d' has no attribute 'Resample2d_cuda_forward'

Which algorithm is used to generate train_A and train_inst?

I want to use my own urban street video to generate a video just like in your videos.
Can you share which segmentation algorithm is used, as described in README.md?
"We apply a pre-trained segmentation algorithm to get the corresponding semantic maps (train_A) and instance maps (train_inst)."

torch.FatalError: cuda runtime error (8) : invalid device function at Resample2d_kernel.cu:205

I have met this problem and have no idea how to deal with it...

THCudaCheck FAIL file=Resample2d_kernel.cu line=205 error=8 : invalid device function
Traceback (most recent call last):
  File "test.py", line 43, in <module>
    generated = model.inference(A, B, inst)
  File "/mnt/yry/vid2vid-master/models/vid2vid_model_G.py", line 198, in inference
    fake_B = self.generate_frame_infer(real_A[self.n_scales-1-s], s)
  File "/mnt/yry/vid2vid-master/models/vid2vid_model_G.py", line 216, in generate_frame_infer
    self.fake_B_feat, self.flow_feat, self.fake_B_fg_feat, use_raw_only)    
  File "/mnt/yry/vid2vid-master/models/networks.py", line 177, in forward
    img_warp = self.resample(img_prev[:,-3:,...].cuda(gpu_id), flow).cuda(gpu_id)        
  File "/home/liuzhiqing/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/mnt/yry/vid2vid-master/models/flownet2_pytorch/networks/resample2d_package/modules/resample2d.py", line 14, in forward
    return Resample2dFunction.apply(input1_c, input2, self.kernel_size)
  File "/mnt/yry/vid2vid-master/models/flownet2_pytorch/networks/resample2d_package/functions/resample2d.py", line 19, in forward
    resample2d.Resample2d_cuda_forward(input1, input2, output, kernel_size)
  File "/home/liuzhiqing/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/utils/ffi/__init__.py", line 197, in safe_call
    result = torch._C._safe_call(*args, **kwargs)
torch.FatalError: cuda runtime error (8) : invalid device function at Resample2d_kernel.cu:205

Can anybody fix it?

Synthetic to Real video translation

Hello,

Thanks for this amazing work. However, I have a question: can this model be used for translating synthetic videos to real ones? For example, like the testing that was done for the UNIT network, from GTA5 to Cityscapes.

'Segmentation fault (core dumped)' still exists.

I downloaded the latest version (Aug 24th) and followed the steps in the 'Testing' section of the README, but I still get this error. My CUDA version is 9.0, and my PyTorch version is 0.4.1.

Can someone help me?

------------ Options -------------
aspect_ratio: 1.0
batchSize: 1
checkpoints_dir: ./checkpoints
dataroot: datasets/Cityscapes/test_A
dataset_mode: temporal
debug: False
display_id: 0
display_winsize: 512
feat_num: 3
fg: True
fg_labels: [26]
fineSize: 512
gpu_ids: [1, 2]
how_many: 300
input_nc: 3
isTrain: False
label_feat: False
label_nc: 35
loadSize: 2048
load_features: False
load_pretrain: 
max_dataset_size: inf
model: vid2vid
nThreads: 2
n_blocks: 9
n_blocks_local: 3
n_downsample_E: 3
n_downsample_G: 3
n_frames_G: 3
n_gpus_gen: 2
n_local_enhancers: 1
n_scales_spatial: 3
name: label2city_2048
ndf: 64
nef: 32
netE: simple
netG: composite
ngf: 128
no_first_img: False
no_flip: False
norm: batch
ntest: inf
output_nc: 3
phase: test
resize_or_crop: scaleWidth
results_dir: ./results/
serial_batches: False
tf_log: False
use_instance: True
use_real_img: False
use_single_G: True
which_epoch: latest
-------------- End ----------------
CustomDatasetDataLoader
dataset [TestDataset] was created
vid2vid
---------- Networks initialized -------------
-----------------------------------------------
Doing 28 frames
Segmentation fault (core dumped)

Pose to Body returns CUDA error

When trying to run Pose2Body using the following setup:

  • pose images in ./datasets/Dance/dancer1/train_A and raw images in ./datasets/Dance/dancer1/train_B
  • using command:
python train.py --name pose2body_256_g1 --loadSize 256 --max_frames_per_gpu 6 --n_frames_total 6 --dataroot ./datasets/Dance/dancer1/ --label_nc 0
  • not using --fg because it is pose2body, so there is no label map or foreground/background separation

I get the following error about CUDA:

error in correlation_forward_cuda_kernel: no kernel image is available for execution on the device
Traceback (most recent call last):
  File "train.py", line 273, in <module>
    train()
  File "train.py", line 105, in train
    flow_ref, conf_ref = flowNet(real_B, real_B_prev)  # reference flows and confidences
  File "/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 121, in forward
    return self.module(*inputs[0], **kwargs[0])
  File "/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/vid2vid/models/flownet.py", line 33, in forward
    flow, conf = self.compute_flow_and_conf(input_A, input_B)
  File "/home/ubuntu/vid2vid/models/flownet.py", line 50, in compute_flow_and_conf
    flow1 = self.flowNet(data1)
  File "/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/vid2vid/models/flownet2_pytorch/models.py", line 126, in forward
    flownetc_flow2 = self.flownetc(x)[0]
  File "/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/vid2vid/models/flownet2_pytorch/networks/FlowNetC.py", line 86, in forward
    out_corr = self.corr(out_conv3a, out_conv3b) # False
  File "/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/vid2vid/models/flownet2_pytorch/networks/correlation_package/correlation.py", line 59, in forward
  File "/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/parallf.stride1, self.stride2, self.corr_multiply)(input1, input2)lel/data_parallel.py", line 121, in forward                                               ion.py", line 27, in forward
    return self.module(*inputs[0], **kwargs[0])                                           f.corr_multiply)
  File "/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__                                                      6/site-packages/correlation_cuda-0.0.0-py3.6-linux-x86_64.egg/correlation_cuda.cpython-36m-x86_64-    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/vid2vid/models/flownet.py", line 33, in forward                      6/site-packages/correlation_cuda-0.0.0-py3.6-linux-x86_64.egg/correlation_cuda.cpython-36m-x86_64-    flow, conf = self.compute_flow_and_conf(input_A, input_B)
  File "/home/ubuntu/vid2vid/models/flownet.py", line 50, in compute_flow_and_conf        6/site-packages/correlation_cuda-0.0.0-py3.6-linux-x86_64.egg/correlation_cuda.cpython-36m-x86_64-    flow1 = self.flowNet(data1)
  File "/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__                                                      /ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/_C.cpython-36m-x86_64-linux-g    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/vid2vid/models/flownet2_pytorch/models.py", line 126, in forward
    flownetc_flow2 = self.flownetc(x)[0]
  File "/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/vid2vid/models/flownet2_pytorch/networks/FlowNetC.py", line 86, in forward                                                                                     loadSize 128 --max_frames_per_gpu 6 --n_frames_total 6 --dataroot ./datasets/Fortnite/fortnite/ --    out_corr = self.corr(out_conv3a, out_conv3b) # False
  File "/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/vid2vid/models/flownet2_pytorch/networks/correlation_package/correlation.py", line 59, in forward
    result = CorrelationFunction(self.pad_size, self.kernel_size, self.max_displacement,self.stride1, self.stride2, self.corr_multiply)(input1, input2)
  File "/home/ubuntu/vid2vid/models/flownet2_pytorch/networks/correlation_package/correlation.py", line 27, in forward
    self.pad_size, self.kernel_size, self.max_displacement,self.stride1, self.stride2, self.corr_multiply)
RuntimeError: CUDA call failed (correlation_forward_cuda at correlation_cuda.cc:79)
frame #0: <unknown function> + 0x140f8 (0x7fef4d7c40f8 in /home/ubuntu/.local/lib/python3.6/site-packages/correlation_cuda-0.0.0-py3.6-linux-x86_64.egg/correlation_cuda.cpython-36m-x86_64-linux-gnu.so)
frame #1: <unknown function> + 0x1433e (0x7fef4d7c433e in /home/ubuntu/.local/lib/python3.6/site-packages/correlation_cuda-0.0.0-py3.6-linux-x86_64.egg/correlation_cuda.cpython-36m-x86_64-linux-gnu.so)
frame #2: <unknown function> + 0x107e1 (0x7fef4d7c07e1 in /home/ubuntu/.local/lib/python3.6/site-packages/correlation_cuda-0.0.0-py3.6-linux-x86_64.egg/correlation_cuda.cpython-36m-x86_64-linux-gnu.so)
<omitting python frames>
frame #10: THPFunction_do_forward(THPFunction*, _object*) + 0x2ad (0x7fef969c9f8d in /home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/_C.cpython-36m-x86_64-linux-gnu.so)

I thought this was a GPU/CUDA issue, but I am using the AWS Deep Learning Image so everything is setup correctly, and also Torch seems to work fine with CUDA by the below in Python interactive mode:

>>> import torch
>>> torch.cuda.set_device(0)
>>> torch.cuda.get_device_capability(0)
(3, 7)
>>> x = torch.cuda.FloatTensor(1)
>>> y = torch.cuda.FloatTensor(1)
>>> x + y
tensor([0.], device='cuda:0')

I tried removing label_nc 0, but this gave rise to a bigger error:

/opt/conda/conda-bld/pytorch_1532579245307/work/aten/src/THC/THCTensorScatterGather.cu:176: void THCudaTensor_scatterFillKernel(TensorInfo<Real, IndexType>, TensorInfo<long, IndexType>, Real, int, IndexType) [with IndexType = unsigned int, Real = float, Dims = -1]: block: [20,0,0], thread: [384,0,0] Assertion `indexValue >= 0 && indexValue < tensor.sizes[dim]` failed.
[... the same assertion failure is repeated for many more threads ...]
Traceback (most recent call last):
  File "train.py", line 273, in <module>
    train()
  File "train.py", line 97, in train
    fake_B, fake_B_raw, flow, weight, real_A, real_Bp, fake_B_last = modelG(input_A, input_B, inst_A, fake_B_last)
  File "/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 121, in forward
    return self.module(*inputs[0], **kwargs[0])
  File "/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/vid2vid/models/vid2vid_model_G.py", line 125, in forward
    fake_B, fake_B_raw, flow, weight = self.generate_frame_train(netG, real_A_all, fake_B_prev, start_gpu, is_first_frame)
  File "/home/ubuntu/vid2vid/models/vid2vid_model_G.py", line 170, in generate_frame_train    fake_B_feat, flow_feat, fake_B_fg_feat, use_raw_only)
  File "/home/ubuntu/vid2vid/models/networks.py", line 161, in forward
    downsample = self.model_down_seg(input) + self.model_down_img(img_prev)
  File "/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/container.py", line 91, in forward
    input = module(input)
  File "/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 301, in forward
    self.padding, self.dilation, self.groups)
RuntimeError: CuDNN error: CUDNN_STATUS_INTERNAL_ERROR
terminate called after throwing an instance of 'at::Error'
  what():  CUDA error: invalid device pointer (CudaCachingDeleter at /opt/conda/conda-bld/pytorch_1532579245307/work/aten/src/THC/THCCachingAllocator.cpp:498)
frame #0: THStorage_free + 0x44 (0x7f931b353314 in /home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/lib/libcaffe2.so)
frame #1: THTensor_free + 0x2f (0x7f931b3f2a1f in /home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/lib/libcaffe2.so)
frame #2: at::CUDAFloatTensor::~CUDAFloatTensor() + 0x9 (0x7f92fabc72e9 in /home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/lib/libcaffe2_gpu.so)
frame #3: torch::autograd::Variable::Impl::~Impl() + 0x291 (0x7f931cf45761 in /home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/_C.cpython-36m-x86_64-linux-gnu.so)
frame #4: torch::autograd::Variable::Impl::~Impl() + 0x9 (0x7f931cf458d9 in /home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/_C.cpython-36m-x86_64-linux-gnu.so)
frame #5: <unknown function> + 0x770cd9 (0x7f931cf5ecd9 in /home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/_C.cpython-36m-x86_64-linux-gnu.so)
frame #6: <unknown function> + 0x770d84 (0x7f931cf5ed84 in /home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/_C.cpython-36m-x86_64-linux-gnu.so)
<omitting python frames>
frame #21: __libc_start_main + 0xf0 (0x7f93335f2830 in /lib/x86_64-linux-gnu/libc.so.6)
Aborted (core dumped)

The following is the network info printed out by train.py:

------------ Options -------------
TTUR: False
batchSize: 1
beta1: 0.5
checkpoints_dir: ./checkpoints
continue_train: False
dataroot: ./datasets/Dance/dancer1/
dataset_mode: temporal
debug: False
display_freq: 100
display_id: 0
display_winsize: 512
feat_num: 3
fg: False
fg_labels: [26]
fineSize: 512
gan_mode: ls
gpu_ids: [0]
input_nc: 3
isTrain: True
label_feat: False
label_nc: 35
lambda_F: 10.0
lambda_T: 10.0
lambda_feat: 10.0
loadSize: 256
load_features: False
load_pretrain:
lr: 0.0002
max_dataset_size: inf
max_frames_backpropagate: 1
max_frames_per_gpu: 6
model: vid2vid
nThreads: 2
n_blocks: 9
n_blocks_local: 3
n_downsample_E: 3
n_downsample_G: 2
n_frames_D: 3
n_frames_G: 3
n_frames_total: 6
n_gpus_gen: 1
n_layers_D: 3
n_local_enhancers: 1
n_scales_spatial: 1
n_scales_temporal: 3
name: pose2body_256_g1
ndf: 64
nef: 32
netE: simple
netG: composite
ngf: 128
niter: 10
niter_decay: 10
niter_fix_global: 0
niter_step: 5
no_first_img: False
no_flip: False
no_ganFeat: False
no_html: False
no_vgg: False
norm: batch
num_D: 1
output_nc: 3
phase: train
pool_size: 1
print_freq: 100
resize_or_crop: scaleWidth
save_epoch_freq: 1
save_latest_freq: 1000
serial_batches: False
tf_log: False
use_instance: False
use_single_G: False
which_epoch: latest
-------------- End ----------------
CustomDatasetDataLoader
dataset [TemporalDataset] was created
#training videos = 1
vid2vid
---------- Networks initialized -------------
-----------------------------------------------
---------- Networks initialized -------------
-----------------------------------------------
create web directory ./checkpoints/pose2body_256_g1/web..

Using a Tesla K80 with 12GB, CUDA 9.0, CUDNN 7.0.5, Ubuntu 16.04
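For what it's worth, a common cause of the scatter-fill assertion above (`indexValue >= 0 && indexValue < tensor.sizes[dim]`) is a label map containing IDs outside the range [0, label_nc); with --label_nc 35, any pixel value of 35 or higher would trigger it during one-hot encoding. A minimal diagnostic sketch (the train_A layout and .png extension are assumptions based on the dataroot above, not a confirmed fix):

import glob

import numpy as np
from PIL import Image

LABEL_NC = 35                                   # --label_nc from the options above
LABEL_DIR = 'datasets/Dance/dancer1/train_A'    # dataroot/train_A (assumed layout)

for path in sorted(glob.glob(LABEL_DIR + '/*.png')):   # assumed .png label maps
    ids = np.unique(np.array(Image.open(path)))
    bad = ids[(ids < 0) | (ids >= LABEL_NC)]
    if bad.size:
        print(path, 'contains out-of-range label IDs:', bad)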

Single GPU Pretrained face model not available?

I was looking for the single GPU face model.
Besides the test_512.sh script in the face directory, there are the following scripts:

test_g1_256.sh
test_g1_512.sh

If I understand correctly, those are for the single-GPU models.
When downloading the models, however, I only get these directories:

edge2face_512
edge2face_single

I tried renaming the edge2face_single model directory, but it does not contain the expected latest_net_G0.pth file; instead there are these 3 files:

features.npy
latest_net_E.pth
latest_net_G.pth

Any suggestions on how to make the single GPU face model work?

Occlusion mask

Hello,
Can anybody point me to where the occlusion mask is being used?
Thanks!
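For context, the paper composites each output frame from the flow-warped previous frame and a newly hallucinated frame using a soft occlusion/weight mask. A minimal sketch of that blending step (illustrative only; tensor names are made up and this is not the repo's exact code):

import torch

def composite_frame(raw_frame, warped_prev, mask_logits):
    # raw_frame:   (N, 3, H, W) frame hallucinated from the current label map
    # warped_prev: (N, 3, H, W) previous output frame warped by the estimated flow
    # mask_logits: (N, 1, H, W) unnormalized occlusion/weight mask from the network
    w = torch.sigmoid(mask_logits)            # soft mask in [0, 1]
    return w * warped_prev + (1.0 - w) * raw_frame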

Resample2d_cuda_forward not found

Hi,

I'm running this command:

python test.py --name label2city_2048 --loadSize 2048 --n_scales_spatial 3 --use_instance --fg --use_single_G

and I got this error:

------------ Options -------------
aspect_ratio: 1.0
batchSize: 1
checkpoints_dir: ./checkpoints
dataroot: datasets/Cityscapes/
dataset_mode: temporal
debug: False
display_id: 0
display_winsize: 512
feat_num: 3
fg: True
fg_labels: [26]
fineSize: 512
gpu_ids: [0]
how_many: 300
input_nc: 3
isTrain: False
label_feat: False
label_nc: 35
loadSize: 2048
load_features: False
load_pretrain: 
max_dataset_size: inf
model: vid2vid
nThreads: 2
n_blocks: 9
n_blocks_local: 3
n_downsample_E: 3
n_downsample_G: 3
n_frames_G: 3
n_gpus_gen: 1
n_local_enhancers: 1
n_scales_spatial: 3
name: label2city_2048
ndf: 64
nef: 32
netE: simple
netG: composite
ngf: 128
no_first_img: False
no_flip: False
norm: batch
ntest: inf
output_nc: 3
phase: test
resize_or_crop: scaleWidth
results_dir: ./results/
serial_batches: False
tf_log: False
use_instance: True
use_real_img: False
use_single_G: True
which_epoch: latest
-------------- End ----------------
CustomDatasetDataLoader
dataset [TestDataset] was created
vid2vid
---------- Networks initialized -------------
-----------------------------------------------
Doing 560 frames
Exception ignored in: <bound method _DataLoaderIter.__del__ of <torch.utils.data.dataloader._DataLoaderIter object at 0x7f55f562fc50>>
Traceback (most recent call last):
  File "/home/studio/.virtualenvs/vid2vid/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 399, in __del__
    self._shutdown_workers()
  File "/home/studio/.virtualenvs/vid2vid/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 378, in _shutdown_workers
    self.worker_result_queue.get()
  File "/usr/lib/python3.6/multiprocessing/queues.py", line 337, in get
    return _ForkingPickler.loads(res)
  File "/home/studio/.virtualenvs/vid2vid/lib/python3.6/site-packages/torch/multiprocessing/reductions.py", line 151, in rebuild_storage_fd
    fd = df.detach()
  File "/usr/lib/python3.6/multiprocessing/resource_sharer.py", line 57, in detach
    with _resource_sharer.get_connection(self._id) as conn:
  File "/usr/lib/python3.6/multiprocessing/resource_sharer.py", line 87, in get_connection
    c = Client(address, authkey=process.current_process().authkey)
  File "/usr/lib/python3.6/multiprocessing/connection.py", line 494, in Client
    deliver_challenge(c, authkey)
  File "/usr/lib/python3.6/multiprocessing/connection.py", line 722, in deliver_challenge
    response = connection.recv_bytes(256)        # reject large message
  File "/usr/lib/python3.6/multiprocessing/connection.py", line 216, in recv_bytes
    buf = self._recv_bytes(maxlength)
  File "/usr/lib/python3.6/multiprocessing/connection.py", line 407, in _recv_bytes
    buf = self._recv(4)
  File "/usr/lib/python3.6/multiprocessing/connection.py", line 379, in _recv
    chunk = read(handle, remaining)
ConnectionResetError: [Errno 104] Connection reset by peer
Traceback (most recent call last):
  File "test.py", line 43, in <module>
    generated = model.inference(A, B, inst)
  File "/home/studio/HAH/AI_libs/vid2vid/models/vid2vid_model_G.py", line 198, in inference
    fake_B = self.generate_frame_infer(real_A[self.n_scales-1-s], s)
  File "/home/studio/HAH/AI_libs/vid2vid/models/vid2vid_model_G.py", line 216, in generate_frame_infer
    self.fake_B_feat, self.flow_feat, self.fake_B_fg_feat, use_raw_only)    
  File "/home/studio/HAH/AI_libs/vid2vid/models/networks.py", line 173, in forward
    img_warp = self.resample(img_prev[:,-3:,...].cuda(gpu_id), flow).cuda(gpu_id)        
  File "/home/studio/.virtualenvs/vid2vid/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/studio/HAH/AI_libs/vid2vid/models/flownet2_pytorch/networks/resample2d_package/modules/resample2d.py", line 14, in forward
    return Resample2dFunction.apply(input1_c, input2, self.kernel_size)
  File "/home/studio/HAH/AI_libs/vid2vid/models/flownet2_pytorch/networks/resample2d_package/functions/resample2d.py", line 19, in forward
    resample2d.Resample2d_cuda_forward(input1, input2, output, kernel_size)
AttributeError: module 'models.flownet2_pytorch.networks.resample2d_package._ext.resample2d' has no attribute 'Resample2d_cuda_forward'

I've been trying to fix this for hours now, with no luck.

  • Ubuntu 18.04.1
  • Cuda 9.2
  • Python 3.6
  • Pytorch 0.4

Here is my pip freeze :

  • certifi==2018.8.13
  • cffi==1.11.5
  • chardet==3.0.4
  • dominate==2.3.1
  • idna==2.7
  • numpy==1.15.1
  • opencv-python==3.4.2.17
  • Pillow==5.2.0
  • pycparser==2.18
  • PyYAML==3.13
  • requests==2.19.1
  • scipy==1.1.0
  • six==1.11.0
  • torch==0.4.1
  • torchvision==0.2.1
  • urllib3==1.23
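The traceback suggests the compiled resample2d CUDA extension does not export the function the Python wrapper expects, which usually means the extension was not (re)built correctly for the current environment. A quick diagnostic sketch, assuming it is run from the vid2vid repository root, is to list what the compiled module actually exposes:

# Run from the vid2vid repository root.
from models.flownet2_pytorch.networks.resample2d_package._ext import resample2d

# List every symbol the compiled extension exposes; 'Resample2d_cuda_forward'
# should be in this list if the CUDA build succeeded.
print([name for name in dir(resample2d) if not name.startswith('_')])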

Are there constraints on the length of a video?

I have read multiple times in the paper that videos can be "up to 30 seconds" long.
Is there any hard constraint on the length, or does the output just lose too much temporal consistency after that point?

vid2vid/data/face_dataset.py first image parameters

In lines 32--34 of vid2vid/data/face_dataset.py,

B_img = Image.open(B_paths[0]).convert('RGB')
B_size = B_img.size
points = np.loadtxt(A_paths[0], delimiter=',')

is there a reason why [0] is preferred over [start_idx]?
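For reference, the variant the question is asking about would look roughly like this (a sketch only; whether start_idx is actually the intended index here is exactly what needs confirming):

# Hypothetical variant of lines 32-34 above, indexing with start_idx instead of 0
# (Image, np, A_paths, B_paths and start_idx are the names already in scope in
# face_dataset.py).
B_img = Image.open(B_paths[start_idx]).convert('RGB')
B_size = B_img.size
points = np.loadtxt(A_paths[start_idx], delimiter=',')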

TypeError: __init__() missing 1 required positional argument: 'args'

I have compiled flownet2_pytorch with Python 3.6 and PyTorch 0.4.0.

When I run ./scripts/train_256_g1.sh, i.e.
python train.py --name label2city_256 --loadSize 256 --use_instance --n_downsample_G 2 --num_D 1 --max_frames_per_gpu 6 --n_frames_total 6
I get this error:
Traceback (most recent call last):
  File "train.py", line 273, in <module>
    train()
  File "train.py", line 31, in train
    modelG, modelD, flowNet = create_model(opt)
  File "/home/iedl/h84104067/vid2vid-master/models/models.py", line 22, in create_model
    flowNet.initialize(opt)
  File "/home/iedl/h84104067/vid2vid-master/models/flownet.py", line 19, in initialize
    self.flowNet = flownet2_tools.module_to_dict(flownet2_models)['FlowNet2']().cuda(self.gpu_ids[0])
TypeError: __init__() missing 1 required positional argument: 'args'

Why does this happen, and how can I solve it?
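The error indicates that the FlowNet2 class is being instantiated without the args object its constructor requires. A hedged sketch of constructing it with a minimal namespace is shown below; the import path and the rgb_max/fp16 attribute names are assumptions about the vendored flownet2 code, so verify them against your checkout:

from argparse import Namespace

# Assumed import path for the vendored flownet2 code (mirroring models/flownet.py);
# rgb_max / fp16 are assumed attribute names read by FlowNet2.__init__ -- verify
# against your local checkout before relying on this.
from models.flownet2_pytorch import models as flownet2_models

flow_args = Namespace(rgb_max=255.0, fp16=False)
flowNet = flownet2_models.FlowNet2(flow_args).cuda(0)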

Training error: no such file ../FlowNet2_checkpoint.pth.tar

Awesome project, but I get a training error:

Running the following script
bash ./scripts/face/train_g1_256.sh

Results in the following error:

root@d8a34002c74c:/home/vid2vid# bash ./scripts/face/train_g1_256.sh
------------ Options -------------
TTUR: False
add_face_disc: False
batchSize: 1
beta1: 0.5
checkpoints_dir: ./checkpoints
continue_train: False
dataroot: datasets/face/
dataset_mode: face
debug: False
densepose_only: False
display_freq: 100
display_id: 0
display_winsize: 512
feat_num: 3
fg: False
fg_labels: [26]
fineSize: 512
gan_mode: ls
gpu_ids: [0]
input_nc: 15
isTrain: True
label_feat: False
label_nc: 0
lambda_F: 10.0
lambda_T: 10.0
lambda_feat: 10.0
loadSize: 256
load_features: False
load_pretrain:
lr: 0.0002
max_dataset_size: inf
max_frames_backpropagate: 1
max_frames_per_gpu: 6
max_t_step: 1
model: vid2vid
nThreads: 2
n_blocks: 9
n_blocks_local: 3
n_downsample_E: 3
n_downsample_G: 3
n_frames_D: 3
n_frames_G: 3
n_frames_total: 12
n_gpus_gen: 1
n_layers_D: 3
n_local_enhancers: 1
n_scales_spatial: 1
n_scales_temporal: 3
name: edge2face_256_g1
ndf: 64
nef: 32
netE: simple
netG: composite
ngf: 64
niter: 20
niter_decay: 20
niter_fix_global: 0
niter_step: 5
no_canny_edge: False
no_dist_map: False
no_first_img: False
no_flip: False
no_flow: False
no_ganFeat: False
no_html: False
no_vgg: False
norm: batch
num_D: 2
openpose_only: False
output_nc: 3
phase: train
pool_size: 1
print_freq: 100
random_drop_prob: 0.2
remove_face_labels: False
resize_or_crop: scaleWidth
save_epoch_freq: 1
save_latest_freq: 1000
serial_batches: False
tf_log: False
use_instance: False
use_single_G: False
which_epoch: latest
-------------- End ----------------
CustomDatasetDataLoader
dataset [FaceDataset] was created
#training videos = 8
vid2vid
---------- Networks initialized -------------
-----------------------------------------------
---------- Networks initialized -------------
-----------------------------------------------
Traceback (most recent call last):
  File "train.py", line 295, in <module>
    train()
  File "train.py", line 36, in train
    modelG, modelD, flowNet = create_model(opt)
  File "/home/vid2vid/models/models.py", line 22, in create_model
    flowNet.initialize(opt)
  File "/home/vid2vid/models/flownet.py", line 19, in initialize
    checkpoint = torch.load('models/flownet2_pytorch/FlowNet2_checkpoint.pth.tar')
  File "/usr/local/lib/python2.7/dist-packages/torch/serialization.py", line 356, in load
    f = open(f, 'rb')
IOError: [Errno 2] No such file or directory: 'models/flownet2_pytorch/FlowNet2_checkpoint.pth.tar'

It looks like the checkpoint file is missing.

Unexpected EOF using flownet download scripts

So I've run scripts/download_flownet2.py with no errors and satisfied the dependencies from the FlowNet2 GitHub repo, but I'm still getting a FlowNet2-related error when I try to train a vid2vid network. It claims that the file "models/flownet2_pytorch/FlowNet2_checkpoint.pth.tar" is corrupt.

My FlowNet2_checkpoint.pth.tar file is 48.6 MB (48,562,176 bytes); is this the right size? My archive manager is having trouble opening it, so I assume it is somehow corrupt.

I would really appreciate some help. Could someone perhaps link me to a version of this file that I can just download directly (without the scripts)?

Thanks so much for reading - the original error is here:

  File "train.py", line 273, in <module>
    train()
  File "train.py", line 31, in train
    modelG, modelD, flowNet = create_model(opt)
  File "/home/beef/vid2vid/models/models.py", line 22, in create_model
    flowNet.initialize(opt)        
  File "/home/beef/vid2vid/models/flownet.py", line 19, in initialize
    checkpoint = torch.load('models/flownet2_pytorch/FlowNet2_checkpoint.pth.tar')
  File "/home/beef/.conda/envs/pytorch4/lib/python3.6/site-packages/torch/serialization.py", line 303, in load
    return _load(f, map_location, pickle_module)
  File "/home/beef/.conda/envs/pytorch4/lib/python3.6/site-packages/torch/serialization.py", line 476, in _load
    deserialized_objects[key]._set_from_file(f, offset, f_is_real_file)
RuntimeError: unexpected EOF. The file might be corrupted.
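Both this report and the previous one come down to models/flownet2_pytorch/FlowNet2_checkpoint.pth.tar not being downloaded correctly. A small sanity check before training might look like the following sketch (the expected size is an assumption: the full FlowNet2 weights are several hundred MB, so a ~48 MB file is almost certainly a truncated download):

import os

ckpt = 'models/flownet2_pytorch/FlowNet2_checkpoint.pth.tar'
if not os.path.isfile(ckpt):
    print('Checkpoint missing; re-run: python scripts/download_flownet2.py')
else:
    # The full FlowNet2 weights are several hundred MB (assumption), so a file
    # of only ~48 MB almost certainly means the download was truncated.
    print('Checkpoint size: %.1f MB' % (os.path.getsize(ckpt) / 1e6))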

Tests don't pass

These tests all fail:

bash ./scripts/street/test_2048.sh
-->
AssertionError: datasets/Cityscapes/test_A is not a valid directory

bash ./scripts/street/test_g1_1024.sh
-->
AssertionError: datasets/Cityscapes/test_A is not a valid directory

bash ./scripts/face/test_512.sh
-->
AssertionError: datasets/face/test_keypoints is not a valid directory

Downloading the datasets is mentioned lower in the instructions and appears to be optional.
Is it actually required?
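All three assertions point at missing example data directories, so a quick existence check (paths copied from the error messages above) shows whether the example dataset was actually downloaded before the test scripts are run:

import os

for d in ('datasets/Cityscapes/test_A', 'datasets/face/test_keypoints'):
    print('%-35s %s' % (d, 'found' if os.path.isdir(d) else 'MISSING'))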

Clarification on Pose to Body

I have a few questions regarding the pose --> body task. From the paper,

Dance video dataset. We download YouTube dance videos for the pose to human motion
synthesis task. Each video is about 3 ∼ 4 minutes at 1280 × 720 resolution, and we crop the
central 512×720 regions. We extract human poses with the DensePose [21] and the OpenPose [5]
algorithms, and directly concatenate the results together. The training set includes a dance video
from a single dancer, while the test set contains videos of other dance motions or from other
dancers.

  1. By "directly concatenate" do you also layer the DensePose UV pose on the OpenPose color-coded pose? Or just use one of the two, and if so which performs better?

  2. The provided example shows a still background behind the dancer. If I understand correctly, would a video with a changing background or multiple people cause issues? For example, the demonstrations for DensePose and OpenPose include a fast-paced multi-person dance video. However, the generated poses are all more or less synchronized, and the background is not encoded in any way. Would the model generate mostly noise for the background in this case, and would it be able to synthesize human bodies from multiple pose estimations?

  3. How does the test set perform against other dancers? Do changes such as height, limb length, etc. cause issues in generation?

  4. Lastly, is there a dockerized version of this repository available? Alternatively, would this model compile inside the FlowNet2 Docker container?

Thanks!
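On question 1, a literal reading of "directly concatenate" is a channel-wise concatenation of the two pose renderings before they are fed to the generator. A minimal sketch of that interpretation (my assumption about the layout, not the authors' confirmed pipeline):

import numpy as np

def concat_pose_maps(densepose_map, openpose_map):
    # densepose_map: (H, W, 3) uint8 rendering, e.g. the DensePose IUV visualization
    # openpose_map:  (H, W, 3) uint8 colour-coded OpenPose skeleton
    # Returns an (H, W, 6) array used as the conditioning input.
    assert densepose_map.shape[:2] == openpose_map.shape[:2]
    return np.concatenate([densepose_map, openpose_map], axis=2)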

Understanding the flow for training faces

Please correct me if I am wrong (I am focusing just on faces).

As I understand it, vid2vid lets you provide a video in which each frame serves as labeled data for training. Once you have a trained model, given input consisting only of edge maps, vid2vid will try to create a face (based on the training data) from those edge maps.

I am not clear, though, on how to do this with train.py. Do I need to generate edge maps myself for each frame of my video?

Ideally I would like to just give vid2vid a single video file (say, an .avi), have it generate edge maps itself for each frame, and have it output a trained model.

Thank you @tcwang0509 @junyanz

When answering, please include CLI commands that I can copy and paste, directions that I can follow immediately, or changes to the Python code that might be needed.
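To illustrate the kind of preprocessing being asked about, here is a generic sketch that splits a video into frames and writes a Canny edge map per frame with OpenCV. This is not the repo's own face pipeline (which builds edges from detected landmarks); the function name, output layout, and thresholds are assumptions:

import os

import cv2

def video_to_edge_maps(video_path, out_dir, low=100, high=200):
    # Extract every frame from the video and save a Canny edge map per frame.
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, low, high)
        cv2.imwrite(os.path.join(out_dir, '%05d.png' % idx), edges)
        idx += 1
    cap.release()

# Hypothetical usage; the output folder must then be arranged however the
# dataset loader you train with expects.
# video_to_edge_maps('my_face.avi', 'edge_frames/my_face')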

Cityscapes dataset download used

Is the dataset used for training in the paper the following:

leftImg8bit_sequence_trainvaltest.zip
30-frame snippets (17Hz) surrounding each left 8-bit image (-19 | +10) from the train, val, and test sets (150000 images)

Many thanks.

Nick

TypeError: __init__() got an unexpected keyword argument 'track_running_stats'

I have installed this repo in an NVIDIA Docker environment with CUDA 8.0, cuDNN 6.0, Miniconda with a Python 3.6 virtualenv, and PyTorch 0.2.0.

When I run the ./scripts/test_2048.sh shell script, I get the error below:

------------ Options -------------
aspect_ratio: 1.0
batchSize: 1
checkpoints_dir: ./checkpoints
dataroot: datasets/Cityscapes/
dataset_mode: temporal
debug: False
display_id: 0
display_winsize: 512
feat_num: 3
fg: True
fg_labels: [26]
fineSize: 512
gpu_ids: [0]
how_many: 300
input_nc: 3
isTrain: False
label_feat: False
label_nc: 35
loadSize: 2048
load_features: False
load_pretrain:
max_dataset_size: inf
model: vid2vid
nThreads: 2
n_blocks: 9
n_blocks_local: 3
n_downsample_E: 3
n_downsample_G: 3
n_frames_G: 3
n_gpus_gen: 1
n_local_enhancers: 1
n_scales_spatial: 3
name: label2city_2048
ndf: 64
nef: 32
netE: simple
netG: composite
ngf: 128
no_first_img: False
no_flip: False
norm: batch
ntest: inf
output_nc: 3
phase: test
resize_or_crop: scaleWidth
results_dir: ./results/
serial_batches: False
tf_log: False
use_instance: True
use_real_img: False
use_single_G: True
which_epoch: latest
-------------- End ----------------
CustomDatasetDataLoader
dataset [TestDataset] was created
vid2vid
---------- Networks initialized -------------

Traceback (most recent call last):
  File "test.py", line 24, in <module>
    model = create_model(opt)
  File "/vid2vid/models/models.py", line 19, in create_model
    modelG.initialize(opt)
  File "/vid2vid/models/vid2vid_model_G.py", line 51, in initialize
    self.netG_i = self.load_single_G() if self.use_single_G else None
  File "/vid2vid/models/vid2vid_model_G.py", line 270, in load_single_G
    netG = networks.define_G(input_nc, opt.output_nc, 0, 32, 'local', 4, 'instance', 0, self.gpu_ids, opt)
  File "/vid2vid/models/networks.py", line 39, in define_G
    netG = LocalEnhancer(input_nc, output_nc, ngf, n_downsampling, opt.n_blocks, opt.n_local_enhancers, opt.n_blocks_local, norm_layer)
  File "/vid2vid/models/networks.py", line 320, in __init__
    model_global = GlobalGenerator(input_nc, output_nc, ngf_global, n_downsample_global, n_blocks_global, norm_layer).model
  File "/vid2vid/models/networks.py", line 286, in __init__
    model = [nn.ReflectionPad2d(3), nn.Conv2d(input_nc, ngf, kernel_size=7, padding=0), norm_layer(ngf), activation]
TypeError: __init__() got an unexpected keyword argument 'track_running_stats'
(py3) root@0d93b7e85c1e:/vid2vid#

Can anyone tell me how to solve it?

Thanks!
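For context, track_running_stats was only added to the normalization layers in PyTorch 0.4, so a PyTorch 0.2.0 install will raise exactly this TypeError. A quick check (the version-string comparison is a rough sketch):

import torch

# track_running_stats exists only from PyTorch 0.4 onwards; vid2vid targets 0.4.
print(torch.__version__)
assert torch.__version__.startswith('0.4'), 'this repo expects PyTorch 0.4.x'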

Download scripts no longer working

ModuleNotFoundError: No module named 'scripts.download_gdrive'

They need to be updated, as the scripts are now in specific subfolders. Also, I believe an __init__.py file is required in the scripts folder.

Four undefined names

flake8 testing of https://github.com/NVIDIA/vid2vid on Python 3.7.0

$ flake8 . --count --select=E901,E999,F821,F822,F823 --show-source --statistics

./models/networks.py:29:77: F821 undefined name 'norm'
        raise NotImplementedError('normalization layer [%s] is not found' % norm)
                                                                            ^
./data/test_dataset.py:52:46: F821 undefined name 'A'
            A = Ai if i == 0 else torch.cat([A, Ai], dim=0)                        
                                             ^
./data/temporal_dataset.py:76:46: F821 undefined name 'A'
            A = Ai if i == 0 else torch.cat([A, Ai], dim=0)            
                                             ^
./data/temporal_dataset.py:77:46: F821 undefined name 'B'
            B = Bi if i == 0 else torch.cat([B, Bi], dim=0)            
                                             ^
4     F821 undefined name 'A'
4
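For the F821 warnings on A and B, the flagged pattern refers to a name before any assignment that flake8 can see. One possible rewrite that avoids the warning (a sketch, not the repo's code) collects the per-iteration tensors and concatenates them once:

import torch

# Instead of:  A = Ai if i == 0 else torch.cat([A, Ai], dim=0)
# collect the per-iteration tensors and concatenate once at the end.
chunks = []
for i in range(3):
    Ai = torch.randn(1, 4)   # stand-in for the tensor produced in each iteration
    chunks.append(Ai)
A = torch.cat(chunks, dim=0)
print(A.shape)               # torch.Size([3, 4])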

Dockerfile not compatible with Tesla V100

The Dockerfile you provide is based on CUDA 8.0 and cannot be used with current Tesla cards, because they need at least CUDA 9.0. If you need a 32GB V100 to run the training effectively at full resolution, then there is a mismatch.

Will the code be updated to PyTorch 1.0 in the future? PyTorch 0.4.0 has problems with current CUDA releases.
