deepmapping's Introduction

DeepMapping: Unsupervised Map Estimation From Multiple Point Clouds

This repository contains PyTorch implementation associated with the paper:

"DeepMapping: Unsupervised Map Estimation From Multiple Point Clouds", Li Ding and Chen Feng, CVPR 2019 (Oral).

Citation

If you find DeepMapping useful in your research, please cite:

@InProceedings{Ding_2019_CVPR,
author = {Ding, Li and Feng, Chen},
title = {DeepMapping: Unsupervised Map Estimation From Multiple Point Clouds},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2019}
}

Dependencies

Requires Python 3.x, PyTorch, Open3D, and other common packages listed in requirements.txt:

pip3 install -r requirements.txt

Running on a GPU is highly recommended. The code has been tested with Python 3.6.5, PyTorch 0.4.0, and Open3D 0.4.0.

Getting Started

Dataset

Simulated 2D point clouds are provided as ./data/2D/all_poses.tar. Extract the tar file:

tar -xvf ./data/2D/all_poses.tar -C ./data/2D/

A set of sub-directories will be created. For example, ./data/2D/v1_pose0 corresponds to trajectory 0 sampled from environment v1. In this folder, there are 256 local point clouds saved in PCD file format. The corresponding ground-truth sensor poses are saved in gt_pose.mat, a 256-by-3 matrix. The i-th row of the matrix represents the sensor pose [x,y,theta] for the i-th point cloud.
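
For a quick sanity check, the data can be inspected in Python along these lines (a minimal sketch; the exact PCD filename and the variable name stored inside gt_pose.mat are assumptions, so inspect the folder and the .mat keys to confirm):

import numpy as np
import open3d
import scipy.io as sio

# Load one local point cloud (hypothetical filename; check the folder for the actual naming).
# Note: in newer Open3D releases this is open3d.io.read_point_cloud.
pcd = open3d.read_point_cloud('./data/2D/v1_pose0/000.pcd')
points = np.asarray(pcd.points)

# Load the ground-truth poses: a 256-by-3 matrix, one [x, y, theta] row per point cloud.
mat = sio.loadmat('./data/2D/v1_pose0/gt_pose.mat')
gt_pose = mat['pose']  # assumed key name; print(mat.keys()) to confirm
print(points.shape, gt_pose.shape)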

Solving Registration As Unsupervised Training

To run DeepMapping, execute the script:

./script/run_train_2D.sh

By default, the results will be saved to ./results/2D/.

Warm Start

DeepMapping allows for seamless integration of a “warm start” to reduce convergence time and improve performance. Instead of starting from scratch, you can first perform a coarse registration of all point clouds using incremental ICP:

./script/run_icp.sh

The coarse registration can be further improved by DeepMapping. To do so, simply set INIT_POSE=/PATH/TO/ICP/RESULTS/pose_est.npy in ./script/run_train_2D.sh. Please see the comments in the script for detailed instructions.
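
Before pointing DeepMapping at the ICP output, a quick shape check can save debugging time (a sketch; pose_est.npy is expected to hold one [x, y, theta] row per point cloud, in the same format as gt_pose.mat):

import numpy as np

# Path placeholder as in the script comment above.
init_pose = np.load('/PATH/TO/ICP/RESULTS/pose_est.npy')
assert init_pose.ndim == 2 and init_pose.shape[1] == 3  # one [x, y, theta] row per point cloud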

Evaluation

The estimated sensor poses are saved as a NumPy array, pose_est.npy. To evaluate the registration, execute the script:

./script/run_eval_2D.sh

The absolute trajectory error (ATE) is computed as the error metric.
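
For reference, here is a minimal sketch of an ATE computation on the 2D translations (an illustration only, not necessarily the repository's exact evaluation code; it rigidly aligns the estimated trajectory to the ground truth before averaging residuals):

import numpy as np

def absolute_trajectory_error(gt_xy, est_xy):
    """RMSE of positions after a best-fit rigid 2D alignment (Kabsch, no scaling)."""
    gt_c = gt_xy - gt_xy.mean(axis=0)
    est_c = est_xy - est_xy.mean(axis=0)
    u, _, vt = np.linalg.svd(est_c.T @ gt_c)   # SVD of the cross-covariance matrix
    r = vt.T @ u.T
    if np.linalg.det(r) < 0:                   # guard against a reflection
        vt[-1] *= -1
        r = vt.T @ u.T
    aligned = est_c @ r.T + gt_xy.mean(axis=0)
    return np.sqrt(((aligned - gt_xy) ** 2).sum(axis=1).mean())

# Usage, with the first two columns [x, y] of each pose array:
# ate = absolute_trajectory_error(gt_pose[:, :2], np.load('pose_est.npy')[:, :2])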

Related Project

DeepMapping2 (CVPR 2023) for large-scale LiDAR mapping

deepmapping's People

Contributors

dmax123, simbaforrest


deepmapping's Issues

About generating 3D ground truth for custom trajectory on the AVD dataset

Hello, thanks for the nice work. I am quite new to this field and have run into a problem generating the 3D ground truth for a custom trajectory. I can use the following function:

def transform_to_global_AVD(pose, obs_local):
    """
    Transform obs_local from the local coordinate frame to the global frame.
    :param pose: <Bx3> <x,z,theta>, y = 0
    :param obs_local: <BxLx3> (unorganized) or <BxHxWx3> (organized)
    :return obs_global: <BxLx3> (unorganized) or <BxHxWx3> (organized)
    """

on the estimated pose (B×3) <x,z,theta> obtained from L-Net.

However, the ground-truth pose has size (B×6), so I treated it as <x,y,z,dx,dy,dz> and transformed it into <x,z,theta>:

gt_angle_y = np.arctan2(np.sqrt(gt_pose[:,3]*gt_pose[:,3] + gt_pose[:,5]*gt_pose[:,5]), gt_pose[:,4])
gt_pose_xzth = np.vstack((gt_pose[:,0], gt_pose[:,2], gt_angle_y)).transpose()

However, the result is not correct. I also tried using a translation matrix and three rotation matrices, one per axis, but that failed as well. How can I do this?

Who can tell me how to deal with this problem?

loading dataset
creating model
start training
Traceback (most recent call last):
  File "train_AVD.py", line 77, in <module>
    loss = model(obs_batch, valid_pt, pose_batch)
  File "/home/thebs/anaconda3/envs/fmr/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "../models/deepmapping.py", line 114, in forward
    self.obs_local, self.n_samples, sensor_center)
  File "../models/deepmapping.py", line 35, in sample_unoccupied_point
    unoccupied[:, (idx - 1) * L:idx * L, :] = center + (local_point_cloud - center) * fac
RuntimeError: The size of tensor a (3) must match the size of tensor b (2) at non-singleton dimension 2

run_train_2D.sh error

My Python version is 3.6.8.
Traceback (most recent call last):
  File "train_2D.py", line 12, in <module>
    import utils
  File "../utils/__init__.py", line 2, in <module>
    from .geometry_utils import *
  File "../utils/geometry_utils.py", line 3, in <module>
    import open3d
  File "/home/ct/.local/lib/python3.6/site-packages/open3d/__init__.py", line 28, in <module>
    from .open3d import *  # py2 py3 compatible
ImportError: Invalid character class.
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/apport_python_hook.py", line 63, in apport_excepthook
    from apport.fileutils import likely_packaged, get_recent_crashes
  File "/usr/lib/python3/dist-packages/apport/__init__.py", line 5, in <module>
    from apport.report import Report
  File "/usr/lib/python3/dist-packages/apport/report.py", line 30, in <module>
    import apport.fileutils
  File "/usr/lib/python3/dist-packages/apport/fileutils.py", line 23, in <module>
    from apport.packaging_impl import impl as packaging
  File "/usr/lib/python3/dist-packages/apport/packaging_impl.py", line 23, in <module>
    import apt
  File "/usr/lib/python3/dist-packages/apt/__init__.py", line 23, in <module>
    import apt_pkg
ModuleNotFoundError: No module named 'apt_pkg'

Original exception was:
Traceback (most recent call last):
  File "train_2D.py", line 12, in <module>
    import utils
  File "../utils/__init__.py", line 2, in <module>
    from .geometry_utils import *
  File "../utils/geometry_utils.py", line 3, in <module>
    import open3d
  File "/home/ct/.local/lib/python3.6/site-packages/open3d/__init__.py", line 28, in <module>
    from .open3d import *  # py2 py3 compatible
ImportError: Invalid character class.

So, is there anything I can do to solve this error? Thanks!

Training for ordered / un-ordered data points

Hi,

In the paper, you mentioned that you tested both ordered and unordered data points, with a CNN and PointNet respectively.

If I understand correctly, the current version of the code uses CNN for an ordered set of data points. Am I correct? Do you also support the other configuration?

Thanks!

Yotam

About used Active Vision Dataset (AVD) trajectory

Great work! Also, thanks for releasing the code.
The 2D optimization works very well, and I am also very interested in the 3D case on the real-world AVD dataset.

Could you please release the trajectory data used in the paper (images, poses, related scripts) so that the 3D results can be reproduced?
It would help us gain a better understanding of DeepMapping in the 3D case.

How to generate Occupancy Map after registration

Hi,
Thanks for sharing this beautiful idea and code.
My question is: after training and registration are finished, how can I generate the occupancy map using the trained M-Net model?
Any idea on how to implement this would be appreciated, since this part of the code has not been released yet. (A possible sketch follows.)
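
To sketch one plausible approach (an assumption about the interface, since this part of the code is unreleased): after training, the M-Net could be queried on a dense grid of global coordinates, and the per-point occupancy probabilities reshaped into a map.

import torch

def render_occupancy(m_net, bounds, res=256):
    """Query a trained M-Net (hypothetical handle: global 2D coords -> occupancy logit)
    on a dense grid covering the registered map."""
    x_min, x_max, y_min, y_max = bounds
    xs = torch.linspace(x_min, x_max, res)
    ys = torch.linspace(y_min, y_max, res)
    grid = torch.stack(torch.meshgrid(xs, ys), dim=-1).reshape(-1, 2)
    with torch.no_grad():
        occ = torch.sigmoid(m_net(grid))  # occupancy probability per grid point
    return occ.reshape(res, res)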

run_train_2D.sh error

DeepMapping/script$ ./run_train_2D.sh
Traceback (most recent call last):
File "train_2D.py", line 12, in
import utils
File "../utils/init.py", line 2, in
from .geometry_utils import *
File "../utils/geometry_utils.py", line 188
raise ValueError(f'metrics: {metrics} not recognized.')
^
SyntaxError: invalid syntax

About the AVD traning dataset

[Attached figures: registration result, trajectory comparison, and a screenshot]
Hi, first of all, thank you for your work and for providing the code.

I have some questions about AVD training with the provided DeepMapping network. I reproduced the code you provided, but found that it only uses 16 point clouds for training. Is that reliable with so little training data? In the paper, you mention randomly producing 108 trajectories from the given dataset for training, but I could not find the corresponding code, and both the estimated results and the ground-truth poses show only 16 point clouds used for training.

In the trajectory picture, I plotted the ground-truth poses in black and the estimated poses in red, and the two do not overlap nicely, while on the 2D experiments your method works very well. I am curious about the reason. Hoping for your reply, thanks a lot.

A Potential Bug in Sample Unoccupied Point

Hi,
I am reading your code, and I believe something is wrong with the call to sample_unoccupied_point() in the models/deepmapping.py file.

Because you feed self.obs_local to the sampling, and in the sampling rays are shot from (0,0,0) to the occupied positions, this works perfectly when there is no initialized pose for self.obs_local.

When we use initialized poses, however, self.obs_local has already been transformed by the initial pose, so the center of the scans (point clouds) is no longer (0,0,0), and this sampling method can corrupt the loss.

Do you agree this is a problem, or did you design it this way on purpose? Based on my visualization of cases evaluated with initialized poses, the empty and occupied space overlap each other.
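
For context, here is a simplified sketch of the sampling being discussed, reconstructed from the traceback quoted earlier on this page (not the exact repository code): free-space points are drawn along the ray from the sensor center toward each observed point, at random fractions of the ray length. Passing the true sensor center, rather than assuming (0,0,0), is exactly what the initialized-pose case would require.

import torch

def sample_unoccupied_point(local_point_cloud, n_samples, center=None):
    """local_point_cloud: (B, L, D); center: (B, 1, D) sensor center, defaults to the origin."""
    B, L, D = local_point_cloud.shape
    if center is None:
        center = local_point_cloud.new_zeros(B, 1, D)  # the issue: only valid pre-transform
    unoccupied = local_point_cloud.new_empty(B, n_samples * L, D)
    for idx in range(1, n_samples + 1):
        fac = torch.rand(1).item()  # random fraction of each ray
        unoccupied[:, (idx - 1) * L:idx * L, :] = center + (local_point_cloud - center) * fac
    return unoccupied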

Question about the warm start

Hi, thank you for sharing this nice work. I have a question about the warm start mentioned in Section 3.5 of the paper. You mention using ICP as a warm start. In my understanding, ICP gives a rough rotation, but the parameters being initialized should be the NN weights. Could you tell me what I am missing here?
