
PointDSC repository

PyTorch implementation of PointDSC for the CVPR 2021 paper "PointDSC: Robust Point Cloud Registration using Deep Spatial Consistency", by Xuyang Bai, Zixin Luo, Lei Zhou, Hongkai Chen, Lei Li, Zeyu Hu, Hongbo Fu, and Chiew-Lan Tai.

This paper focuses on outlier rejection for 3D point cloud registration. If you find this project useful, please cite:

@inproceedings{bai2021pointdsc,
  title={{PointDSC}: {R}obust {P}oint {C}loud {R}egistration using {D}eep {S}patial {C}onsistency},
  author={Bai, Xuyang and Luo, Zixin and Zhou, Lei and Chen, Hongkai and Li, Lei and Hu, Zeyu and Fu, Hongbo and Tai, Chiew-Lan},
  booktitle={CVPR},
  year={2021}
}

Introduction

Removing outlier correspondences is one of the critical steps for successful feature-based point cloud registration. Despite the increasing popularity of introducing deep learning techniques in this field, spatial consistency, which is essentially established by a Euclidean transformation between point clouds, has received almost no individual attention in existing learning frameworks. In this paper, we present PointDSC, a novel deep neural network that explicitly incorporates spatial consistency for pruning outlier correspondences. First, we propose a nonlocal feature aggregation module, weighted by both feature and spatial coherence, for feature embedding of the input correspondences. Second, we formulate a differentiable spectral matching module, supervised by pairwise spatial compatibility, to estimate the inlier confidence of each correspondence from the embedded features. With modest computation cost, our method outperforms the state-of-the-art hand-crafted and learning-based outlier rejection approaches on several real-world datasets by a significant margin. We also show its wide applicability by combining PointDSC with different 3D local descriptors.
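To make the spatial-consistency idea concrete, here is a minimal NumPy sketch of classical (non-differentiable) spectral matching on the pairwise spatial-compatibility matrix; the sigma tolerance and the power-iteration details are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def spectral_inlier_confidence(src_kpts, tgt_kpts, sigma=0.1, iters=50):
    """src_kpts, tgt_kpts: (N, 3) matched keypoints; returns (N,) inlier confidences."""
    # Pairwise distances within each point cloud.
    d_src = np.linalg.norm(src_kpts[:, None] - src_kpts[None], axis=-1)  # (N, N)
    d_tgt = np.linalg.norm(tgt_kpts[:, None] - tgt_kpts[None], axis=-1)  # (N, N)
    # Rigid motions preserve lengths, so two correct correspondences should
    # have nearly equal intra-cloud distances; penalize the residual.
    compat = np.clip(1.0 - (d_src - d_tgt) ** 2 / sigma ** 2, 0.0, None)
    # The leading eigenvector of the compatibility matrix (via power iteration)
    # acts as a soft inlier confidence for each correspondence.
    v = np.ones(len(src_kpts))
    for _ in range(iters):
        v = compat @ v
        v /= np.linalg.norm(v) + 1e-12
    return v
```

PointDSC replaces this hand-crafted estimate with a differentiable spectral matching module whose compatibility is informed by learned features, as described in the abstract above.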


Requirements

If you are using conda, you may configure PointDSC as:

conda env create -f environment.yml
conda activate pointdsc

If you also want to use FCGF as the 3D local descriptor, please install MinkowskiEngine v0.5.0 and download the FCGF model (pretrained on 3DMatch) from here.
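For orientation, loading that checkpoint looks roughly like the sketch below; the constructor arguments and checkpoint key follow the upstream FCGF convention and are assumptions here, so defer to misc/fcgf.py and demo_registration.py for the authoritative usage.

```python
import torch
from misc.fcgf import ResUNetBN2C as FCGF  # FCGF backbone bundled in this repo

# Hypothetical loading sketch (the weight path is the one referenced in the issues below).
checkpoint = torch.load('misc/ResUNetBN2C-feat32-3dmatch-v0.05.pth')
model = FCGF(1, 32, normalize_feature=True, conv1_kernel_size=7, D=3)
model.load_state_dict(checkpoint['state_dict'])
model.eval()
```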

Demo

We provide a small demo that extracts dense FPFH descriptors for two point clouds and registers them using PointDSC. The .ply files are saved in the demo_data folder and can be replaced by your own data. Please use the model pretrained on 3DMatch for indoor RGB-D scans and the model pretrained on KITTI for outdoor LiDAR scans. To try the demo, please run

python demo_registration.py --chosen_snapshot [PointDSC_3DMatch_release/PointDSC_KITTI_release] --descriptor [fcgf/fpfh]

For challenging cases, we recommend using learned feature descriptors like FCGF or D3Feat.
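For reference, the dense FPFH step of the demo can be reproduced with Open3D roughly as follows; the voxel size and search radii here are illustrative assumptions (see demo_registration.py for the actual parameters).

```python
import open3d as o3d

def extract_fpfh(path, voxel_size=0.05):
    """Load a .ply file, downsample it, and compute dense FPFH descriptors."""
    pcd = o3d.io.read_point_cloud(path)  # e.g. a file from demo_data/
    pcd = pcd.voxel_down_sample(voxel_size)
    # FPFH needs normals; estimate them from a local neighborhood.
    pcd.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=voxel_size * 2, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        pcd,
        o3d.geometry.KDTreeSearchParamHybrid(radius=voxel_size * 5, max_nn=100))
    return pcd, fpfh  # fpfh.data has shape (33, N)
```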

Dataset Preprocessing

3DMatch

The raw point clouds of 3DMatch can be downloaded from the FCGF repo. The test-set point clouds and the ground-truth poses can be downloaded from the 3DMatch Geometric Registration website. Please make sure the data folder contains the following:

.                          
├── fragments                 
│   ├── 7-scene-redkitechen/       
│   ├── sun3d-home_at-home_at_scan1_2013_jan_1/      
│   └── ...                
├── gt_result                   
│   ├── 7-scene-redkitechen-evaluation/   
│   ├── sun3d-home_at-home_at_scan1_2013_jan_1-evaluation/
│   └── ...         
├── threedmatch            
│   ├── *.npz
│   └── *.txt                            

To reduce the training time, we pre-compute the 3D local descriptors (FCGF or FPFH) so that we can directly build the input correspondences using nearest-neighbor (NN) search during training. Please use misc/cal_fcgf.py or misc/cal_fpfh.py to extract FCGF or FPFH descriptors. Here we provide the pre-computed descriptors for the 3DMatch test set.
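As a rough illustration of that NN-search step, the sketch below builds putative correspondences from two pre-computed descriptor sets; the helper name and the use of SciPy's KD-tree are assumptions, not the repo's exact implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def build_correspondences(src_desc, tgt_desc):
    """src_desc: (N, D), tgt_desc: (M, D) pre-computed descriptors.
    Returns (N, 2) index pairs: each source point and its NN in descriptor space."""
    _, nn_idx = cKDTree(tgt_desc).query(src_desc, k=1)
    return np.stack([np.arange(len(src_desc)), nn_idx], axis=1)
```

These putative matches are typically outlier-heavy; pruning them is exactly the job PointDSC performs.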

KITTI

The raw point clouds can be downloaded from the KITTI Odometry website. Please follow similar pre-processing steps as for the 3DMatch dataset.

Augmented ICL-NUIM

Data can be downloaded from the Redwood website. Details can be found in multiway/README.md.

Pretrained Model

We provide the pre-trained models for 3DMatch in snapshot/PointDSC_3DMatch_release and for KITTI in snapshot/PointDSC_KITTI_release.

Instructions for training and testing

3DMatch

The training and testing on 3DMatch dataset can be done by running

python train_3DMatch.py

python evaluation/test_3DMatch.py --chosen_snapshot [exp_id] --use_icp False

where exp_id should be replaced by the snapshot folder name used for testing (e.g. PointDSC_3DMatch_release). The testing results will be saved in logs/. The training config can be changed in config.py. We also provide scripts to test the traditional outlier rejection baselines on 3DMatch in baseline_scripts/baseline_3DMatch.py.

KITTI

Similarly, training and testing on the KITTI dataset can be done by running

python train_KITTI.py

python evaluation/test_KITTI.py --chosen_snapshot [exp_id] --use_icp False

We also provide the scripts to test the traditional outlier rejection baselines on KITTI in baseline_scripts/baseline_KITTI.py.

Augmented ICL-NUIM

Detailed guidance for evaluating our method on multiway registration tasks can be found in multiway/README.md.

3DLoMatch

We also evaluate our method on the recently proposed 3DLoMatch benchmark, following OverlapPredator:

python evaluation/test_3DLoMatch.py --chosen_snapshot [exp_id] --descriptor [fcgf/predator] --num_points 5000

If you want to evaluate the Predator descriptor with PointDSC, you first need to follow the official instructions of OverlapPredator to extract the features.

Contact

If you run into any problems or have questions, please create an issue or contact [email protected].

Acknowledgments

We thank the authors of the open-source projects we build on for open-sourcing their methods.


PointDSC's Issues

Pretrained models

Hello! I am a beginner in point cloud registration, and I am very grateful for the work you and your team have done! Could you please share the pretrained models? I could not find them in the link you shared.

Could you add a license file?

Hi, thank you for sharing the code. I’m currently evaluating the use of PointDSC for point cloud registration in my project. However, I couldn’t find a license file. Could you please add one?

Error when running on the KITTI dataset

Hello! Thanks for the open-source code and data!

I successfully ran the demo you provided, and the two .ply point clouds in the demo_data folder were registered successfully.
However, after converting the .bin point cloud files in the KITTI dataset to .ply files and running

python demo_registration.py --chosen_snapshot PointDSC_KITTI_release --descriptor fpfh

it does not run successfully. Do you know which step I got wrong?
Thanks!

Extract FPFH feature on KITTI dataset

Hi,

Thanks for sharing valuable code.
However, I could not find how to extract features on the KITTI dataset in the cal_fpfh.py script.
Should I process the raw point clouds similarly to the other datasets, with voxel_size=0.05?

Thanks a lot for your attention; I look forward to your reply.

Best regards

cal_fcgf process_kitti error

When I run cal_fcgf.py for KITTI dataset processing, it raises an error in the process_kitti function at:

                xyz0_t = apply_transform(xyz0[sel0], M)
                pcd0 = make_point_cloud(xyz0_t)
                pcd1 = make_point_cloud(xyz1[sel1])

When I change the indexing to apply_transform(xyz0[sel0[1]], M), that line runs, but then the error becomes:

zlib.error: Error -3 while decompressing data: invalid distance code

So I want to know whether this line of code is exactly correct: xyz0_t = apply_transform(xyz0[sel0], M)

Do you have any processed data for training?

Hello, I'm trying to reproduce this code, but I'm having some problems.

I have the directory ready:

.
├── fragments
│   ├── 7-scene-redkitechen/
│   ├── sun3d-home_at-home_at_scan1_2013_jan_1/
│   └── ...
├── gt_result
│   ├── 7-scene-redkitechen-evaluation/
│   ├── sun3d-home_at-home_at_scan1_2013_jan_1-evaluation/
│   └── ...
├── threedmatch
│   ├── *.npz
│   └── *.txt

But I get an error when I run the training code python train_3DMatch.py:

AssertionError: Make sure that the path /data/3DMatch has data sun3d-brown_bm_1-brown_bm_1*0.30.txt

Did I do something wrong?
Also, I failed to install MinkowskiEngine.
Could you please provide the /3DMatch folder used for training?
Thank you!

demo_registration.py is RAM- and GPU-memory-inefficient

Thanks for your excellent work! I ran into some problems:

1. When I run demo_registration.py with KITTI-like pcd files using FCGF, I get a large feature array of shape [45793, 3].
When the code below runs to calculate corr_pos, the process gets killed due to huge memory usage, up to 16 GB!

# distance = np.sqrt(2 - 2 * (src_features @ tgt_features.T) + 1e-6)
# source_idx = np.argmin(distance, axis=1)
# source_dis = np.min(distance, axis=1)
# corr = np.concatenate([np.arange(source_idx.shape[0])[:, None], source_idx[:, None]], axis=-1)
# src_keypts = src_pts[corr[:,0]]
# tgt_keypts = tgt_pts[corr[:,1]]
# corr_pos = np.concatenate([src_keypts, tgt_keypts], axis=-1)
# corr_pos = corr_pos - corr_pos.mean(0)

This problem was solved by using the fcgf_feature_matching function borrowed from DGR instead of the code above (a chunked sketch of the same idea appears after this issue).
2. I then downsampled the point cloud, so the feature shape is [25000, 3]; when the code below runs, the process gets killed again due to huge GPU memory usage, up to 20 GB!

res = model(data)

I have registered the features of shape [45793, 3] successfully using DGR with only 8 GB of GPU memory, and it was quite fast. Is it normal for this repo to use 20 GB of GPU memory? I need your help with the problems above!
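For readers hitting the same memory blow-up, one standard remedy, in the spirit of the fcgf_feature_matching fix mentioned above, is to search for nearest neighbors in chunks so the full N x M distance matrix is never materialized; this is a hypothetical sketch, not the DGR or repo implementation.

```python
import torch

def chunked_feature_matching(src_feats, tgt_feats, chunk=4096):
    """src_feats: (N, D), tgt_feats: (M, D); returns (N,) nearest-neighbor indices."""
    nn_idx = []
    for start in range(0, src_feats.shape[0], chunk):
        block = src_feats[start:start + chunk]      # (chunk, D)
        dist = torch.cdist(block, tgt_feats)        # only (chunk, M) held in memory
        nn_idx.append(dist.argmin(dim=1))
    return torch.cat(nn_idx)
```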

A question about the result of 3DRegNet.

Hi, thank you very much for your well-organized PointDSC code. I noticed the experimental result of 3DRegNet on 3DMatch in your paper. Could you provide the code of 3DRegNet on 3DMatch? Thank you very much!

Which versions of PyTorch, CUDA, and MinkowskiEngine did you use?

Thanks for your work!
I want to know which versions of PyTorch, CUDA, and MinkowskiEngine (ME) you used, because I get this error with the old version of ME that I previously used with FCGF:

Traceback (most recent call last):
  File "demo_registration.py", line 10, in <module>
    from misc.fcgf import ResUNetBN2C as FCGF
  File "/Bill/Project/PointDSC/misc/fcgf.py", line 77, in <module>
    region_type=ME.RegionType.HYPER_CUBE,
  File "/opt/conda/lib/python3.7/enum.py", line 349, in __getattr__
    raise AttributeError(name) from None
AttributeError: HYPER_CUBE

Thanks

Can you share the model?

The weight path misc/ResUNetBN2C-feat32-3dmatch-v0.05.pth is not in the package. Can you share the model?

A question about 3DRegNet loss

Sorry, I closed the last question by mistake. I have one last question about 3DRegNet: in your article, you only used the classification loss to train 3DRegNet, i.e. the BCE loss of PointDSC, right?

What tool was used to plot Figure 6?

Hi Xuyang,

What tool did you use to plot Figure 6 in your paper (the feature-similarity distribution of inlier and non-inlier pairs)? It is very descriptive and vivid.

Thanks.

Discrepancy in ATE on Augmented ICL-NUIM dataset

Hi Xuyang,

Thank you for your excellent work! I recently conducted tests on the Augmented ICL-NUIM dataset. I followed your tutorial to generate fragments, computed the FPFH and FCGF descriptors, and used your provided script (multiway/test_multi_ate.py) to calculate the ATE results.

However, I observed a significant difference in the ATE for the Living1 scene compared to the values in Table 3 of your paper. I used the same hyperparameters as specified in the code, so I wonder if you used different hyperparameters (e.g., voxel size) specifically for testing the Living1 scene.

I look forward to your response!

Best regards

How should I train my dataset?

Hello, I would like to use your model to train on my own dataset. My dataset consists of point clouds from different viewpoints of a statue. However, I do not have the corresponding rotation and translation matrices between the point clouds pairwise. How can I address this issue? Thank you!

Download the pre-computed descriptors for the 3DMatch test set

Hello! Thanks for your open source code and data!

I find that the link to download the pre-computed descriptors for the 3DMatch test set is no longer available.
Could you please provide a new link?

What's more, I have another question: do you know where I can download the pre-computed descriptors (FCGF and Predator) for 3DLoMatch?

I'm looking forward to your reply.

About testing the 3DLoMatch

Hi, I have seen your presentation video of PointDSC; it's beautiful!
Also, I want to know something about the training data for the 'Predator + PointDSC' experiments:
did you use the original 3DMatch (>30% overlap) dataset or the 3DLoMatch (10%-30% and >30% overlap) dataset to train PointDSC?

Help me

Is the algorithm implemented on Windows or Linux?

About ThreeDMatchTrainVal

Dear authors:

Thanks for your inspiring work and for releasing your code.

Everything is very good, except that I'm confused about why orig_trans in the ThreeDMatchTrainVal class is eye(4) rather than the real transformation between the src and tgt point clouds.

Thanks very much!

Best wishes
