
overlappredator's Introduction

PREDATOR: Registration of 3D Point Clouds with Low Overlap (CVPR 2021, Oral)

This repository represents the official implementation of the paper:

Shengyu Huang*, Zan Gojcic*, Mikhail Usvyatsov, Andreas Wieser, Konrad Schindler
| ETH Zurich | * equal contribution

For the implementation using the MinkowskiEngine backbone, please check this.

For more information, please see the project website.

[Figure: Predator teaser]

Contact

If you have any questions, please let us know.

News

  • 2021-08-09: We've updated the arXiv version of our paper with improved performance!
  • 2021-06-02: Fixed a feature-gathering bug in the k-NN graph; please see the improved performance in this issue. Stay tuned for updates on the other experiments!
  • 2021-05-31: Check out our video and poster on the project page!
  • 2021-03-25: The camera-ready version is on arXiv! I also gave a talk on Predator (in Chinese); you can find the recording on Bilibili and YouTube.
  • 2021-02-28: MinkowskiEngine-based PREDATOR released.
  • 2020-11-30: Code and paper released.

Instructions

This code has been tested on

  • Python 3.8.5, PyTorch 1.7.1, CUDA 11.2, gcc 9.3.0, GeForce RTX 3090/GeForce GTX 1080Ti

Note: we observe random data-loader crashes due to memory issues. If you see similar issues, please consider reducing the number of workers or increasing CPU RAM. We have now released a sparse-convolution-based Predator; have a look here!

Requirements

To create a virtual environment and install the required dependencies, please run:

git clone https://github.com/overlappredator/OverlapPredator.git
virtualenv predator; source predator/bin/activate
cd OverlapPredator; pip install -r requirements.txt
cd cpp_wrappers; sh compile_wrappers.sh; cd ..

in your working folder.
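
If compilation succeeded, the wrappers should be importable from the repository root; a quick sanity check (a suggestion, not part of the official scripts) that catches the ModuleNotFoundError reported in several issues below:

# run from the OverlapPredator repository root
import cpp_wrappers.cpp_neighbors.radius_neighbors as cpp_neighbors
print("C++ wrappers importable:", cpp_neighbors.__name__)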

Datasets and pretrained models

For the KITTI dataset, please follow the instructions on the KITTI Odometry website to download the KITTI odometry training set.

We provide

  • preprocessed 3DMatch pairwise datasets (voxel-grid-subsampled fragments together with their ground-truth transformation matrices)
  • raw dense 3DMatch datasets
  • ModelNet dataset
  • pretrained models on 3DMatch, KITTI and ModelNet

The preprocessed data and models can be downloaded by running:

sh scripts/download_data_weight.sh

To download raw dense 3DMatch data, please run:

wget --no-check-certificate --show-progress https://share.phys.ethz.ch/~gsg/pairwise_reg/3dmatch.zip
unzip 3dmatch.zip

The folder is organised as follows (a small reading example follows the tree):

  • 3dmatch
    • train
      • 7-scenes-chess
        • fragments
          • cloud_bin_*.ply
          • ...
        • poses
          • cloud_bin_*.txt
          • ...
      • ...
    • test
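
As a small illustration of reading one raw fragment and its pose from this layout (the concrete file names are examples, and the pose files are assumed to hold plain 4x4 matrices; adjust if the format differs):

import numpy as np
import open3d as o3d

# hypothetical example paths following the tree above
pcd = o3d.io.read_point_cloud('3dmatch/train/7-scenes-chess/fragments/cloud_bin_0.ply')
pose = np.loadtxt('3dmatch/train/7-scenes-chess/poses/cloud_bin_0.txt')  # assumed 4x4 matrix
print(np.asarray(pcd.points).shape, pose.shape)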

3DMatch(Indoor)

Train

After creating the virtual environment and downloading the datasets, Predator can be trained using:

python main.py configs/train/indoor.yaml

Evaluate

For 3DMatch, to reproduce Table 2 in our main paper, we first extract features and overlap/matchability scores by running:

python main.py configs/test/indoor.yaml

The features together with the scores will be saved to snapshot/indoor/3DMatch. The estimation of the transformation parameters using RANSAC can then be carried out by running:

for N_POINTS in 250 500 1000 2500 5000
do
  python scripts/evaluate_predator.py --source_path snapshot/indoor/3DMatch --n_points $N_POINTS --benchmark 3DMatch --exp_dir snapshot/indoor/est_traj --sampling prob
done

Depending on the n_points used by RANSAC, this might take a few minutes. The final results are stored in snapshot/indoor/est_traj/{benchmark}_{n_points}_prob/result. To evaluate PREDATOR on the 3DLoMatch benchmark, please change 3DMatch to 3DLoMatch in configs/test/indoor.yaml.

Demo

We prepared a small demo that demonstrates the whole Predator pipeline using two random fragments from the 3DMatch dataset. To run it, please execute:

python scripts/demo.py configs/test/indoor.yaml

The demo script will visualize the input point clouds, the inferred overlap regions, and the point clouds aligned with the estimated transformation parameters:

[Figure: demo visualization]

ModelNet(Synthetic)

Train

To train PREDATOR on ModelNet, please run:

python main.py configs/train/modelnet.yaml

We provide a small script to evaluate Predator on the ModelNet test set; please run:

python main.py configs/test/modelnet.yaml

The rotation and translation errors may be slightly better or worse than the reported ones due to randomness in RANSAC.

KITTI(Outdoor)

We provide a small script to evaluate Predator on the KITTI test set. After configuring the KITTI dataset, please run:

python main.py configs/test/kitti.yaml

The results will be saved to the log file.

Custom dataset

We have a few tips for training/testing on custom datasets:

  • If your data consists of similar indoor scenes, please run demo.py first to check the generalisation ability before retraining.
  • Remember to voxel-downsample the data in your data loader; see kitti.py for reference, and the sketch after this list.
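
A minimal voxel-downsampling sketch with Open3D (an illustration, not the repository's kitti.py code; the 0.025 m voxel size matches the 3DMatch setting and is only an example default):

import numpy as np
import open3d as o3d

def voxel_downsample(points, voxel_size=0.025):
    # voxel-grid subsample an (N, 3) point cloud and return float32 coordinates
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points.astype(np.float64))
    pcd = pcd.voxel_down_sample(voxel_size)
    return np.asarray(pcd.points, dtype=np.float32)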

Citation

If you find this code useful for your work or use it in your project, please consider citing:

@InProceedings{Huang_2021_CVPR,
    author    = {Huang, Shengyu and Gojcic, Zan and Usvyatsov, Mikhail and Wieser, Andreas and Schindler, Konrad},
    title     = {Predator: Registration of 3D Point Clouds With Low Overlap},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2021},
    pages     = {4267-4276}
}

Acknowledgments

In this project we use (parts of) the official implementations of the following works:

We thank the respective authors for open-sourcing their methods. We would also like to thank the reviewers, especially reviewer 2, for their valuable input.

overlappredator's Issues

Which metrics for odometryKITTI reported in the paper?

Hi, Shengyu,

The outputs from the KITTI test code contain rot_mean, rot_median, trans_rmse, trans_rmedse, rot_std and trans_std.

Could you tell me which metrics correspond to the RTE and RRE in Table 5 (OdometryKITTI)?

Thank you so much
: )

Regarding rotation and translation matrix in 3DMatch.pkl file

@ShengyuH
Could you please let me know how you preprocessed the 3DMatch dataset to obtain the rotation and translation matrices for each pair of point clouds, which we use to evaluate and get the evaluation results? I am trying to evaluate on the Redwood dataset, where I have ground-truth information but no rotation and translation matrix associated with each point cloud pair as we have in the 3DMatch .pkl file.

Please help me out with this issue; that would enable me to evaluate the Predator model on the Redwood dataset as well.
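
This is not the released preprocessing script, but when each fragment carries an absolute 4x4 fragment-to-world pose (as in the raw 3DMatch poses/cloud_bin_*.txt files above), the pairwise ground truth is commonly derived as T = inv(T_tgt) @ T_src; a minimal sketch under that assumption:

import numpy as np

def relative_transform(pose_src, pose_tgt):
    # poses are 4x4 fragment-to-world matrices; returns (rot, trans) mapping src -> tgt
    t = np.linalg.inv(pose_tgt) @ pose_src
    return t[:3, :3], t[:3, 3:4]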

Is there a similar visualization demo with KITTI?

Thanks for your reply! I am pretty new to point cloud registration. Actually, I am working on a wireless-related project, and I need to use lidar as ground truth. My lidar data is pretty similar to KITTI, so I want to try OverlapPredator with my own 16-line lidar.

After setting up your project and running

python scripts/demo.py configs/test/indoor.yaml

I see that the two frames have been merged with a transformation matrix. Is there a similar visualization demo for KITTI?

Originally posted by @FlyerMaxwell in #32 (comment)

Missing file in indoor test dataset

Hello,

Thank you for your work.

I ran your evaluation script
python main.py configs/test/indoor.yaml
and got an error
FileNotFoundError: [Errno 2] No such file or directory: 'data/indoor/test/7-scenes-redkitchen/cloud_bin_21.pth'

I noticed that there is a file with the same name in the assets folder. Is it the same file?

Thank you very much for your help

ModuleNotFoundError

When I run "python main.py configs/train/indoor.yaml", I get the error "ModuleNotFoundError: No module named 'cpp_wrappers.cpp_neighbors.radius_neighbors'".

Training on a custom dataset for point cloud merge registration, rather than aligning registration (non-overlapping area)

Hi guys,

Thank you for sharing your work; it is actually quite interesting. Could you provide some hints on whether it would be easy to use your pipeline to train on a custom dataset?

My goal is a bit different from what you are doing in your work, but at the same time quite similar. What I want to achieve is registering two point clouds so that they merge together rather than align. For example, look at the two point clouds in the picture below:
image
The two pcds connect at a specific part, but they are two different instances of a bigger pcd.
If the two point clouds are randomly transformed:
image
I would like to get the transformation matrix which gives me the result in the first image.

I've tested the Teaser++ work with the FPFH feature descriptor vectors in order to extract meaningful correspondences:
image
but although teaser+icp can correctly identify the side where the two pcds are similar (though this is not the case for all of my dataset cases), it tries to align the two point clouds instead of stitching them together:
image
Thus, the extracted transformation is not correct, since it tries to match similar points rather than similar points in reverse. I was wondering whether I could use your pipeline to adjust the correspondence points and how they match each other, so as to force the alignment that I am seeking.

I also tried the pretrained indoor model, where I noticed the following:

On the current working example everything seems to work fine:
image
However, once I add a bit of Gaussian noise, it seems that it is not able to generalize that well:
image
and of course it doesn't work on the point clouds in my dataset either (quite expected though):
image

Thanks.

Licence

Hello,
thank you for your great work.
I would like to ask what license your work is under? I can't find any mention of it, so I'm asking to make sure.

Thank you.

How we can predict rigid transformation between pair of point clouds?

@ShengyuH
For evaluation, our registration algorithm, i.e. PREDATOR here, should determine whether a pair of point clouds can be aligned and, if so, output the predicted rigid transformation to a log file. This log file can then be used for evaluation purposes.
So could you please help me out and guide me on how to predict the rigid transformation between a pair of point cloud files using the PREDATOR model, and use it for evaluation as described at http://3dmatch.cs.princeton.edu/#geometric-registration-benchmark

Thanks in Advance :)
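
The repository's own estimation step is scripts/evaluate_predator.py; as a rough sketch of that final step, the saved descriptors can be fed to Open3D's feature-based RANSAC. The snippet below assumes Open3D >= 0.13, (N, 3) point arrays src_pcd/tgt_pcd, and (N, 32) Predator descriptors src_feat/tgt_feat; names and thresholds are illustrative:

import numpy as np
import open3d as o3d

def to_pcd(points):
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points.astype(np.float64))
    return pcd

def to_feature(feats):
    feat = o3d.pipelines.registration.Feature()
    feat.data = feats.astype(np.float64).T  # Open3D expects shape (dim, N)
    return feat

def ransac_pose(src_pcd, tgt_pcd, src_feat, tgt_feat, dist_thresh=0.05):
    result = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        to_pcd(src_pcd), to_pcd(tgt_pcd), to_feature(src_feat), to_feature(tgt_feat),
        True,  # mutual_filter
        dist_thresh,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
        3, [],
        o3d.pipelines.registration.RANSACConvergenceCriteria(50000, 0.999))
    return result.transformation  # 4x4 rigid transform mapping src into tgt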

Something about the pkl files in /configs

Hello.
I'm just wondering how you prepared the file train_info.pkl in configs/indoor. I tried opening it and saw the names of many to-be-trained .pth files, rotation and translation matrices, and overlap ratios, but I don't really know how you organized them (especially the .pth file names) and why.
Looking forward to your reply!
Best wishes!

The following error occurs running scripts/demo.py

The error content is:

Traceback (most recent call last):
  File "D:\dataset\OverlapPredator-main\scripts\demo.py", line 17, in
    from datasets.dataloader import get_dataloader
  File "D:\dataset\OverlapPredator-main\datasets\dataloader.py", line 6, in
    import cpp_wrappers.cpp_neighbors.radius_neighbors as cpp_neighbors
ModuleNotFoundError: No module named 'cpp_wrappers.cpp_neighbors.radius_neighbors'

But there is only a neighbors module in the folder; there is no radius_neighbors.

RuntimeError - probably no correspondences found?

Hi,

thanks a lot for your interesting research work. While running your model I encounter this error:
Traceback (most recent call last):
  File "/data/visioni06/OverlapPredator/scripts/demo.py", line 260, in <module>
    demo_loader, neighborhood_limits = get_dataloader(dataset=demo_set,
  File "/data/visioni06/OverlapPredator/datasets/dataloader.py", line 261, in get_dataloader
    neighborhood_limits = calibrate_neighbors(dataset, dataset.config, collate_fn=collate_fn_descriptor)
  File "/data/visioni06/OverlapPredator/datasets/dataloader.py", line 209, in calibrate_neighbors
    batched_input = collate_fn([dataset[i]], config, neighborhood_limits=[hist_n] * 5)
  File "/data/visioni06/OverlapPredator/datasets/dataloader.py", line 125, in collate_fn_descriptor
    conv_i = batch_neighbors_kpconv(batched_points, batched_points, batched_lengths, batched_lengths, r, neighborhood_limits[layer])
  File "/data/visioni06/OverlapPredator/datasets/dataloader.py", line 66, in batch_neighbors_kpconv
    neighbors = cpp_neighbors.batch_query(queries, supports, q_batches, s_batches, radius=radius)
RuntimeError: Error

My guess is that the reason is that no correspondences are found.

If you need more info just let me know.

Thanks!

Questions about the matchability loss

Hi. Thank you for your amazing work!
Your design for the overlap scores is wonderful.
But I would like to discuss the design of the matchability score with you. You use virtual labels generated in feature space to supervise the 33rd/34th dimension of the output, which is a very interesting idea. However, while reproducing your paper, I found that the saliency_precision (matchability_precision) on the validation set is only 0.4, which means the network fails to learn the matchability score.
I see two possibilities: one is that my reproduction has an error; if so, could you provide the log file? The other is that this tightly coupled design of descriptors and detectors is simply not easy to learn?

Error in loading data with lower points

Hi,
I am trying to run this example with my own data (.pts), but it isn't optimized to do so. The demo files have around 25,000 points and run successfully, but with Open3D (as it doesn't load the data into torch), the operations ask for about 316 GB of GPU memory for ~20,000 points. I wanted to try changing the data format to .pth; is there a way to do so?
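
A minimal conversion sketch, assuming (from the .astype(np.float32) calls in the custom-data snippet further down this page) that the .pth files simply hold (N, 3) numpy arrays saved with torch; the file name is hypothetical:

import numpy as np
import torch

points = np.loadtxt('my_scan.pts', dtype=np.float32)[:, :3]  # ASCII .pts with x y z per row
torch.save(points, 'my_scan.pth')  # later recoverable via torch.load('my_scan.pth')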

Can this be used to enhance object detection?

Thanks for your open-source code. Can this work be used to merge frames of a time series to get a higher detection accuracy? For example, I have a 16-line lidar. Can I use this work to combine two frames to simulate a 32-line lidar?

Where is the pretrained model on KITTI ?

Thanks for your great work, I have two questions:

  1. I notice there is no pretrained model on KITTI in https://share.phys.ethz.ch/~gsg/Predator/weights.zip. Did you decide not to make it public, or forget it?

  2. There is no file configs/train/kitti.yaml, and the README only mentions evaluating Predator on the KITTI test set, not training on it. So have you not made the training code/config files for the KITTI dataset public?

A question about 'cpp_wrappers.cpp_neighbors.radius_neighbors'.

Hi, thanks for sharing the code, but running it I encounter this problem:

Connected to pydev debugger (build 201.6668.115)
Traceback (most recent call last):
  File "", line 971, in _find_and_load
  File "", line 955, in _find_and_load_unlocked
  File "", line 665, in _load_unlocked
  File "", line 678, in exec_module
  File "", line 219, in _call_with_frames_removed
  File "/home/qwt/code/OverlapPredator-main/datasets/dataloader.py", line 13, in
    import cpp_wrappers.cpp_neighbors.radius_neighbors as cpp_neighbors
ModuleNotFoundError: No module named 'cpp_wrappers.cpp_neighbors.radius_neighbors'

I have compiled the C++ code, but the error still occurs. Is it about the path?
Thank you.
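
One plausible cause (a guess, not a confirmed fix): cpp_wrappers is imported relative to the repository root, so the scripts must be launched from there, or the root must be put on the import path first:

import sys
sys.path.append('/home/qwt/code/OverlapPredator-main')  # repo root from the traceback above
import cpp_wrappers.cpp_neighbors.radius_neighbors as cpp_neighbors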

voxel-grid subsampled

I'm sorry to disturb you. I'd like to know what grid size is used in your data preprocessing. I am looking forward to your reply; alternatively, would it be convenient to provide the data preprocessing code?

no radius_neighbors module

There is no radius_neighbors module. Can you help?

ModuleNotFoundError: No module named 'cpp_wrappers.cpp_neighbors.radius_neighbors'

Contents for next commit

  • update results figure (typos and numbers)
  • add ModelNet and KITTI part
  • update arxiv paper
  • Sparse Convolution-based PREDATOR ASAP

[FIXED] Bug in the self-attention GNN.

As correctly noticed by @zhulf0804 (thank you), there was a bug in our implementation of the GNN, which actually led to lower performance of PREDATOR. The bug was fixed in Pull Request #14. We have now retrained the model on the 3DMatch dataset and obtained higher performance (see the figure below). We are also retraining PREDATOR on the other datasets and will update the tables in the following days.

Note: due to the change of the GNN architecture, old pretrained models will no longer work, so make sure to always use the latest version.

[Figure: retrained 3DMatch results]

About preprocessing the 3DMatch dataset.

First, thank you for this solid work!

As we want to further explore this work, would you be willing to share the preprocessing script for transforming the raw 3DMatch dataset into the training data?

Also, I notice that the raw 3DMatch dataset you provide differs from the official 3DMatch dataset; could you please explain the difference between the two?

Thank you and look forward to your reply!

Performance comparison with Mink

Hello,
thanks for this excellent work!
I would be grateful if you could help me with the following questions:

  1. I observed that you also open-sourced a version with MinkowskiEngine. Have you compared the performance of the two versions?
  2. The voxel size for the 3DMatch dataset is 2.5 cm, and I found that FCGF sets the voxel size to 5 cm. Is there any special reason for using a 2.5 cm voxel size? Or have you compared the performance with different voxel sizes?

Thanks!

Can we get a repeatable result?

I'm sorry to disturb you. When I run the evaluate_predator.py file, the final recall results of two runs are inconsistent. Is this normal?

How to compute overlap ratio

Hi, @ShengyuH

Thank you for sharing your excellent work. I am trying to calculate the overlap ratio of the 3DMatch dataset. I follow the code of 3DMatch to calculate the overlap ratio of the datasets provided here, but the result seems to differ from gt_overlap.log. So I wonder if you could give me some hints about the details. Thank you very much!
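
The released code does not include this preprocessing, but a common definition is the fraction of source points that have a target neighbour within a matching radius after applying the ground-truth transform. A minimal sketch under that assumption (the 0.0375 m radius, 1.5x the 2.5 cm voxel size, is a guess, not a confirmed value):

import numpy as np
from scipy.spatial import cKDTree

def overlap_ratio(src, tgt, rot, trans, radius=0.0375):
    # src (N, 3), tgt (M, 3); rot (3, 3), trans (3, 1): GT transform mapping src -> tgt
    src_aligned = src @ rot.T + trans.reshape(1, 3)
    dist, _ = cKDTree(tgt).query(src_aligned, k=1)
    return float((dist < radius).mean())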

RANSAC used?

Hello,
thank you for your excellent work.
We have been experimenting with OverlapPredator and have noticed that we get different results for the same input. Which implementation of RANSAC did you use? Have you experimented with other matching algorithms like SuperGlue or Deep MAGSAC?

Thank you.

UndefinedMetricWarning: Recall and F-score are ill-defined and being set to 0.0 due to no true samples. Use 'zero_division' parameter to control this behavior.

hi,
Thanks for your amazing work!
I'm trying to train your model on the 3DMatch dataset recently, but I get the warning
"UndefinedMetricWarning: Recall and F-score are ill-defined and being set to 0.0 due to no true samples. Use 'zero_division' parameter to control this behavior."
I found that this is caused by gt_label being 0 when calculating the saliency loss. I want to know whether this warning has any effect; did you encounter the same issue?
Best Regards

A question about the performance of kitti.pth

Hi, Thank you for your amazing work.
The model performs well on 3DMatch, but it seems not good enough on KITTI. When I test the latest kitti.pth, I find that with distance_threshold=0.3 the registration recall is only 0.978, lower than the 0.998 in the paper, and the RRE and RTE are 1.371 and 0.27, which still don't match the paper's results.
Can you test kitti.pth again?

How long is the training?

Thanks for your awesome work. Greatly inspired by it, I want to do some further research on it; however, my hardware is not so good.
Could you please tell me the training time on 3DMatch, and the number and type of GPUs used?
Thanks in advance.

The dataloader have one problem when training 3dmatch

I have already downloaded 3DMatch and unzipped it, but when I train, there is an error:

Traceback (most recent call last):
  File "E:/Project/PointCloud/OverlapPredator-main/main.py", line 76, in
    config.train_loader, neighborhood_limits = get_dataloader(dataset=train_set,
  File "E:\Project\PointCloud\OverlapPredator-main\datasets\dataloader.py", line 256, in get_dataloader
    neighborhood_limits = calibrate_neighbors(dataset, dataset.config, collate_fn=collate_fn_descriptor)
  File "E:\Project\PointCloud\OverlapPredator-main\datasets\dataloader.py", line 208, in calibrate_neighbors
    batched_input = collate_fn([dataset[i]], config, neighborhood_limits=[hist_n] * 5)
  File "E:\Project\PointCloud\OverlapPredator-main\datasets\indoor.py", line 46, in getitem
    src_pcd = torch.load(src_path)
  File "F:\Anaconda3\envs\PythonPytorch\lib\site-packages\torch\serialization.py", line 699, in load
    with _open_file_like(f, 'rb') as opened_file:
  File "F:\Anaconda3\envs\PythonPytorch\lib\site-packages\torch\serialization.py", line 231, in _open_file_like
    return _open_file(name_or_buffer, mode)
  File "F:\Anaconda3\envs\PythonPytorch\lib\site-packages\torch\serialization.py", line 212, in init
    super(_open_file, self).init(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: 'data/3dmatch/train/rgbd-scenes-v2-scene_01/cloud_bin_1.pth'

I can't find where 'cloud_bin_1.pth' is.

Problems encountered in the reproduction process

Hi, thank you for your amazing work.
I have been reproducing your work recently, and I am amazed by your clear, novel ideas and detailed experiments, but I now encounter two problems:

  1. Generalization ability: the FMR from 3DMatch to ETH is less than 30%, even when the KPConv radius and kernel point weights are adjusted as in D3Feat. Is this because the overlap module is sensitive to the data distribution?
  2. Speed: our tests found that D3Feat is twice as fast as PREDATOR in inference time. And in your paper you report a decoder time of only 1 ms, while D3Feat takes 63 ms. This seems implausible to us because your decoder code is similar.

If you have time, can you help me resolve these two problems?

Best,
Canhui

Cannot call the C++ methods

Hi! Thanks for your great work! After running sh compile_wrappers.sh, I got the .so files, and it seems that I successfully compiled the C++ extension modules for Python located in cpp_wrappers. But when debugging, I still get AttributeError: module 'cpp_wrappers.cpp_neighbors.neighbors' has no attribute 'batch_query'. Do you know what this is about?
Thank you very much!

About download 3DLoMatch

Hi, I want to download the 3DLoMatch dataset, but I don't know how to download and use it.
Are 3DLoMatch and 3DMatch the same dataset, or are there differences?
I know they have different overlaps, but I want to use 3DLoMatch.
Do 3DLoMatch and 3DMatch share the same training set, with 3DLoMatch having different test sets?
Or do I train a model on 3DMatch and test it on 3DLoMatch?

Looking forward to your reply, thank you!

Which result is reported in the paper ?

I am trying to reproduce the results; could you tell me which result is reported in the paper?
For example, is the inlier ratio w_mutual or wo_mutual?

Thank you very much.
: )


Scene   | prec. | rec.  | re    | te    | samples
Kitchen | 0.656 | 0.656 | 3.708 | 0.091 | 524
Home 1  | 0.496 | 0.496 | 4.253 | 0.108 | 283
Home 2  | 0.572 | 0.572 | 4.243 | 0.108 | 222
Hotel 1 | 0.689 | 0.689 | 3.804 | 0.117 | 210
Hotel 2 | 0.540 | 0.540 | 3.615 | 0.103 | 138
Hotel 3 | 0.610 | 0.610 | 3.889 | 0.080 | 42
Study   | 0.407 | 0.407 | 4.978 | 0.149 | 237
MIT Lab | 0.377 | 0.377 | 4.067 | 0.142 | 70
Mean precision: 0.543 +- 0.105
Weighted precision: 0.567
Mean median RRE: 4.069 +- 0.408
Mean median RTE: 0.112 +- 0.022
Inlier ratio w_mutual: 0.221 +- 0.041
Feature match recall w_mutual: 0.743 +- 0.090
Inlier ratio wo_mutual: 0.171 +- 0.034
Feature match recall wo_mutual: 0.708 +- 0.101

Custom data - register known CAD model - how can I further optimize alignment?

Hi there!
Thank you so much for this amazing work!
I'm trying to register known CAD models of anatomy to their real-world counterparts (patients) based solely on the partially occluded and noisy reconstructed point cloud data I get from the ToF camera of the HoloLens 2 mixed reality headset (using no RGB, only depth). I'm completely new to deep learning, and I was wondering how I can best leverage the fact that I have the mesh of the 3D model I'm trying to register beforehand. Ideally I want to create a pipeline that lets one upload a 3D model and do a bunch of preprocessing to create the best possible network for registering that CAD model.
Before doing any retraining, is there anything I can change in my config file to optimize the pretrained network for this use case of aligning smaller objects, like the CT scan of a knee? I used the indoor config in my test and only changed dgcnn_k to 4 to avoid an error (see "What I have done so far"). Do you have advice on what else I could optimize or change? Unfortunately, out of the box, the network occasionally registers the CAD model upside down.

Would it be a good idea to train this network based solely on the one CAD model that I'm trying to register? Should I collect a lot of the noise data one would encounter in my use case and then train a network to generate realistic noise that can be used during this custom training step?
Thank you so much for your consideration and sharing your work!

What I have done so far (for any complete beginners wondering how to use this with custom data):

Setup before demo.py
  • on Ubuntu 20.04.3 LTS with the right devtools installed (git, ninja, etc.)
  • git clone https://github.com/overlappredator/OverlapPredator.git
  • conda activate py3.8.5 (environment with all requirements installed)
  • cd OverlapPredator
  • cd cpp_wrappers
  • sh compile_wrappers.sh
  • cd ..
  • sh scripts/download_data_weight.sh
  • python scripts/demo.py configs/test/indoor.yaml

demo.py works like a charm!

Using custom data:
# at the top of the dataset file:
import numpy as np
import open3d as o3d
import torch

def __getitem__(self, item):
    # load point clouds from .ply instead of the original .pth files
    # src_pcd = torch.load(self.src_path).astype(np.float32)
    # tgt_pcd = torch.load(self.tgt_path).astype(np.float32)

    src_pcd = o3d.io.read_point_cloud(self.src_path)
    tgt_pcd = o3d.io.read_point_cloud(self.tgt_path)
    src_pcd = src_pcd.voxel_down_sample(0.025)
    tgt_pcd = tgt_pcd.voxel_down_sample(0.025)
    src_pcd = np.array(src_pcd.points).astype(np.float32)
    tgt_pcd = np.array(tgt_pcd.points).astype(np.float32)

    src_feats = np.ones_like(src_pcd[:, :1]).astype(np.float32)
    tgt_feats = np.ones_like(tgt_pcd[:, :1]).astype(np.float32)

    # fake the ground-truth information
    rot = np.eye(3).astype(np.float32)
    trans = np.ones((3, 1)).astype(np.float32)
    correspondences = torch.ones(1, 2).long()

    return (src_pcd, tgt_pcd, src_feats, tgt_feats, rot, trans,
            correspondences, src_pcd, tgt_pcd, torch.ones(1))
  • copy indoor.yaml and rename it to testconfig.yaml
  • change lines 73 & 74 to point to the source and target point clouds like so:
demo:
  src_pcd: assets/Source.ply
  tgt_pcd: assets/Target.ply
  n_points: 1000
  • run python scripts/demo.py configs/test/testconfig.yaml
  • this causes "RuntimeError: invalid argument 5: k not in range for dimension at /pytorch/aten/src/THC/generic/THCTensorTopK.cu:26"
  • try setting dgcnn_k from 10 to a lower number (4 worked for me) in testconfig.yaml like so:
overlap_attention_module:
  gnn_feats_dim: 256 
  dgcnn_k: 4
  num_head: 4
  nets: ['self','cross','self']

Results:
[Screenshots: registration results, 2021-09-06]

Question about test

Thanks for your great work. I have two questions:

  1. I have already trained on the indoor dataset. If I want to test, should I change the path "weights/indoor.pth" in configs/test/indoor.yaml before running "python main.py configs/test/indoor.yaml"?

  2. What information is saved in snapshot/model? Is it the model I trained, and how do I use it?

Looking forward to your reply.
Best regards

About 3DLoMatch and benchmark

hi,
Thanks for your amazing work!
I want to use the 3DLoMatch dataset for testing. I followed the steps in the README and changed the benchmark in configs/test/indoor.yaml to 3DLoMatch, but I got the error
"IndexError: index is out of bounds for axis 0 with size 1623"
I found that the error occurs when writing the estimated trajectories of the seventh scene (sun3d-mit_76_studyroom-76-1studyroom2). I'm not sure what I'm overlooking to get this error.
looking forward to your reply.

about 3DLoMatch test

Hello, I'm sorry to bother you. Do I need to retrain when testing on 3DLoMatch, or can I use the model trained on 3DMatch and test it directly on 3DLoMatch? We look forward to your reply.

Which metrics reported in the paper?

Hello,
When I run your code, there are two metrics: mean precision and weighted precision.
Which one did you report in your paper?
Thank you very much

cal_overlap script update

Hi guys,

Shouldn't the command in this line be p.map(cal_overlap_per_scene, base_dir) instead? Otherwise the cal_overlap_mat function is missing or something.

Also, what does the 03_Transformed folder in that line correspond to? For example, if I try to apply this to the 3DMatch dataset, there is no such folder.

Can you give the data from 0.01 to 0.02 in 3DLoMatch?

Hi, I would like to make some experimental comparisons based on your work. Could you give me the Feature Matching Ratio from 0.00 to 0.20 on 3DLoMatch? The results I obtain myself may not match those you report, so I hope you can provide them. Thank you.

Looking forward your reply.

neighborhood_limits

Thanks for your excellent work!
To test my own KITTI-like pcd files, how should I obtain neighborhood_limits? Can I use the neighborhood_limits for KITTI directly? Could you please upload the neighborhood_limits for the KITTI dataset, since it is 80GB+ and really hard to download?
By the way, I just want to simply run demo.py, but I need to download the 3DMatch datasets first, which is somewhat unreasonable. It would be friendlier if you could upload the neighborhood_limits so the calibration step can be skipped.
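
Judging from the get_dataloader calls visible in the tracebacks on this page, the loader accepts a neighborhood_limits argument and returns the calibrated values, so precomputed limits could be reused. A hedged sketch (keyword arguments and numbers are assumptions, not official values):

from datasets.dataloader import get_dataloader

precomputed_limits = [40, 40, 40, 40, 40]  # placeholder: calibrate once, print, then hard-code
loader, _ = get_dataloader(dataset=test_set, batch_size=1, num_workers=4,
                           shuffle=False, neighborhood_limits=precomputed_limits)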

A question about how to download the 3DLoMatch

Thank you so much for sharing the well-organized code. I see the 3DMatch dataset presented in your article. Does 3DLoMatch only have test sets, or does it also have training sets? How can I download it?

Looking forward to your reply, thank you!

A question about the dims of feats

Hi!
Thanks for your amazing work.
I noticed that in the yaml files, first_feats_dim and gnn_feats_dim are 256 and 512 in train mode, while they are 128 and 256 in test mode. Why?
