
vid2depth

Unsupervised Learning of Depth and Ego-Motion from Monocular Video Using 3D Geometric Constraints

Reza Mahjourian, Martin Wicke, Anelia Angelova

CVPR 2018

Project website: https://sites.google.com/view/vid2depth

ArXiv: https://arxiv.org/pdf/1802.05522.pdf

1. Installation

Requirements

Python Packages

mkvirtualenv venv  # Optionally create a virtual environment.
pip install absl-py
pip install matplotlib
pip install numpy
pip install scipy
pip install tensorflow
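
A quick way to confirm the environment works is to import the packages and print their versions. This is just a minimal check; note that vid2depth was written against TensorFlow 1.x, so the reported TensorFlow version should be a 1.x release.

# check_env.py -- quick sanity check of the Python environment (illustrative).
import matplotlib
import numpy as np
import scipy
import tensorflow as tf

print('numpy      ', np.__version__)
print('scipy      ', scipy.__version__)
print('matplotlib ', matplotlib.__version__)
print('tensorflow ', tf.__version__)  # vid2depth expects a 1.x release.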

For building the ICP op (work in progress)

  • Bazel

Download vid2depth

# The official copy lives in the TensorFlow models repository:
#   git clone --depth 1 https://github.com/tensorflow/models.git
git clone https://github.com/Shiaoming/vid2depth.git

The commands below use the tensorflow/models layout (tensorflow/models/research/vid2depth). If you clone this fork directly, cd into the cloned vid2depth directory instead.

2. Datasets

Download KITTI dataset (174GB)

mkdir -p ~/vid2depth/kitti-raw-uncompressed
cd ~/vid2depth/kitti-raw-uncompressed
wget https://raw.githubusercontent.com/mrharicot/monodepth/master/utils/kitti_archives_to_download.txt  # Fetch the raw file, not the GitHub HTML page.
wget -i kitti_archives_to_download.txt
unzip "*.zip"
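
After the archives are unpacked, a rough sanity check of the layout can save time before generating training data. The sketch below assumes the standard KITTI raw layout (date folders containing calibration files and *_sync drive folders with image_02/data frames); adjust the paths if your copy differs.

# check_kitti.py -- rough sanity check of the unpacked KITTI raw layout (illustrative).
import glob
import os

KITTI_DIR = os.path.expanduser('~/vid2depth/kitti-raw-uncompressed')

# Each date folder should contain calibration files plus *_sync drive folders.
for date_dir in sorted(glob.glob(os.path.join(KITTI_DIR, '2011_*'))):
    drives = sorted(glob.glob(os.path.join(date_dir, '*_sync')))
    frames = sum(
        len(glob.glob(os.path.join(d, 'image_02', 'data', '*.png'))) for d in drives)
    print('%s: %d drives, %d left-color frames' %
          (os.path.basename(date_dir), len(drives), frames))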

Download Cityscapes dataset (110GB) (optional)

You will need to register at https://www.cityscapes-dataset.com in order to download the data. Download the following files:

  • leftImg8bit_sequence_trainvaltest.zip
  • camera_trainvaltest.zip

Download Bike dataset (17GB) (optional)

mkdir -p ~/vid2depth/bike-uncompressed
cd ~/vid2depth/bike-uncompressed
wget https://storage.googleapis.com/brain-robotics-data/bike/BikeVideoDataset.tar
tar xvf BikeVideoDataset.tar

3. Inference

Download trained model

mkdir -p ~/vid2depth/trained-model
cd ~/vid2depth/trained-model
wget https://storage.cloud.google.com/vid2depth/model/model-119496.zip
unzip model-119496.zip

Run inference

cd tensorflow/models/research/vid2depth
python inference.py \
  --kitti_dir ~/vid2depth/kitti-raw-uncompressed \
  --output_dir ~/vid2depth/inference \
  --video 2011_09_26/2011_09_26_drive_0009_sync \
  --model_ckpt ~/vid2depth/trained-model/model-119496
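
To take a quick look at the results, something like the sketch below can be used. It assumes inference.py writes PNG visualizations under --output_dir; check the script for the exact file naming.

# view_inference.py -- display one saved depth visualization (illustrative).
import glob
import os

import matplotlib.image as mpimg
import matplotlib.pyplot as plt

OUTPUT_DIR = os.path.expanduser('~/vid2depth/inference')
pngs = sorted(glob.glob(os.path.join(OUTPUT_DIR, '**', '*.png'), recursive=True))
print('Found %d images under %s' % (len(pngs), OUTPUT_DIR))

if pngs:
    plt.imshow(mpimg.imread(pngs[0]))
    plt.axis('off')
    plt.show()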

4. Training

Prepare KITTI training sequences

# Prepare training sequences.
cd tensorflow/models/research/vid2depth
python dataset/gen_data.py \
  --dataset_name kitti_raw_eigen \
  --dataset_dir ~/vid2depth/kitti-raw-uncompressed \
  --data_dir ~/vid2depth/data/kitti_raw_eigen \
  --seq_length 3
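
A quick count of the generated files confirms the preparation ran. The sketch below assumes the SfMLearner-style output of stacked image sequences plus *_cam.txt intrinsics files; check dataset/gen_data.py for the exact layout. The same check applies to the Cityscapes and Bike outputs below by changing DATA_DIR.

# check_training_data.py -- count generated training sequences (illustrative;
# the exact file layout is defined by dataset/gen_data.py).
import glob
import os

DATA_DIR = os.path.expanduser('~/vid2depth/data/kitti_raw_eigen')

images = (glob.glob(os.path.join(DATA_DIR, '**', '*.jpg'), recursive=True) +
          glob.glob(os.path.join(DATA_DIR, '**', '*.png'), recursive=True))
cams = glob.glob(os.path.join(DATA_DIR, '**', '*_cam.txt'), recursive=True)
print('%d stacked image sequences, %d intrinsics files' % (len(images), len(cams)))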

Prepare Cityscapes training sequences (optional)

# Prepare training sequences.
cd tensorflow/models/research/vid2depth
python dataset/gen_data.py \
  --dataset_name cityscapes \
  --dataset_dir ~/vid2depth/cityscapes-uncompressed \
  --data_dir ~/vid2depth/data/cityscapes \
  --seq_length 3

Prepare Bike training sequences (optional)

# Prepare training sequences.
cd tensorflow/models/research/vid2depth
python dataset/gen_data.py \
  --dataset_name bike \
  --dataset_dir ~/vid2depth/bike-uncompressed \
  --data_dir ~/vid2depth/data/bike \
  --seq_length 3

Compile the ICP op (work in progress)

The ICP op depends on multiple software packages (TensorFlow, the Point Cloud Library (PCL), FLANN, Boost, and HDF5). The Bazel build system requires individual BUILD files for each of these packages. We have included a partial implementation of these BUILD files inside the third_party directory, but they are not yet sufficient to compile the op. If you manage to build the op, please let us know so we can include your contribution.

cd tensorflow/models/research/vid2depth
bazel build ops:pcl_demo  # Build a test program that uses PCL only.
bazel build ops:icp_op.so

For the time being, it is possible to run inference with the pre-trained model and to train without the ICP loss (set --icp_weight 0, as in the training command below).
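
The sketch below is not a replacement for the op; it only illustrates in NumPy the point-to-point ICP step (nearest-neighbour matching, a best-fit rigid transform via the Kabsch algorithm, and the residuals after alignment) that the compiled op is intended to provide to the 3D loss.

# icp_sketch.py -- one point-to-point ICP step in NumPy (illustrative only; the
# real op is built on PCL and returns the transform and residuals used by the loss).
import numpy as np
from scipy.spatial import cKDTree

def icp_step(src, dst):
    """One ICP iteration: match src to dst, fit a rigid transform, return residuals."""
    # 1. Nearest-neighbour correspondences.
    matches = dst[cKDTree(dst).query(src)[1]]
    # 2. Best-fit rotation and translation (Kabsch algorithm).
    src_c, dst_c = src - src.mean(0), matches - matches.mean(0)
    u, _, vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(vt.T @ u.T))
    rot = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = matches.mean(0) - rot @ src.mean(0)
    # 3. Residuals after applying the estimated transform.
    residuals = (src @ rot.T + t) - matches
    return rot, t, residuals

if __name__ == '__main__':
    rng = np.random.RandomState(0)
    cloud = rng.rand(500, 3)
    rot, t, res = icp_step(cloud, cloud + np.array([0.05, 0.0, 0.0]))
    print('estimated translation:', t, '  mean residual:', np.abs(res).mean())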

Run training

# Train
cd tensorflow/models/research/vid2depth
python train.py \
  --data_dir ~/vid2depth/data/kitti_raw_eigen \
  --seq_length 3 \
  --reconstr_weight 0.85 \
  --smooth_weight 0.05 \
  --ssim_weight 0.15 \
  --icp_weight 0 \
  --checkpoint_dir ~/vid2depth/checkpoints
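
As the flag names suggest, the total training objective is a weighted sum of the individual loss terms. The snippet below is purely schematic; see the loss construction in model.py for the authoritative version.

# loss_sketch.py -- schematic combination of the weight flags (illustrative).
def total_loss(reconstr, ssim, smooth, icp,
               reconstr_weight=0.85, ssim_weight=0.15,
               smooth_weight=0.05, icp_weight=0.0):
    # Weighted sum of the per-term losses; icp_weight=0 disables the 3D/ICP term.
    return (reconstr_weight * reconstr + ssim_weight * ssim +
            smooth_weight * smooth + icp_weight * icp)

# Example with made-up per-term values.
print(total_loss(reconstr=0.20, ssim=0.30, smooth=0.10, icp=0.0))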

Reference

If you find our work useful in your research, please consider citing our paper:

@inproceedings{mahjourian2018unsupervised,
  title={Unsupervised Learning of Depth and Ego-Motion from Monocular Video Using 3D Geometric Constraints},
  author={Mahjourian, Reza and Wicke, Martin and Angelova, Anelia},
  booktitle={CVPR},
  year={2018}
}

Contact

To ask questions or report issues, please open an issue on the tensorflow/models issues tracker and assign it to @rezama.

Credits

This implementation is derived from SfMLearner by Tinghui Zhou.
