
3dsmoothnet's Introduction

3DSmoothNet repository

This repository provides code and data to train and evaluate 3DSmoothNet, a compact local feature descriptor for unstructured point clouds. It represents the official implementation of the paper:

The Perfect Match: 3D Point Cloud Matching with Smoothed Densities (CVPR 2019).

PDF | Group Page

Zan Gojcic, Caifa Zhou, Jan D. Wegner, Andreas Wieser

We propose 3DSmoothNet, a full workflow to match 3D point clouds with a siamese deep learning architecture and fully convolutional layers using a voxelized smoothed density value (SDV) representation. The latter is computed per interest point and aligned to the local reference frame (LRF) to achieve rotation invariance. Our compact, learned, rotation-invariant 3D point cloud descriptor achieves 94.9% average recall on the 3DMatch benchmark data set, outperforming the state of the art by more than 20 percentage points with only 32 output dimensions. This very low output dimension allows for near real-time correspondence search with 0.1 ms per feature point on a standard PC. Our approach is sensor- and scene-agnostic because of the SDV, the LRF and learning highly descriptive features with fully convolutional layers. We show that 3DSmoothNet trained only on RGB-D indoor scenes of buildings achieves 79.0% average recall on laser scans of outdoor vegetation, more than double the performance of our closest, learning-based competitors.

[figure: 3DSmoothNet]

Update: We have submitted a revised version of the paper to arXiv to correct a typo.

Citation

If you find this code useful for your work or use it in your project, please consider citing:

@inproceedings{gojcic20193DSmoothNet,
	title={The Perfect Match: 3D Point Cloud Matching with Smoothed Densities},
	author={Gojcic, Zan and Zhou, Caifa and Wegner, Jan Dirk and Wieser, Andreas},
	booktitle={International Conference on Computer Vision and Pattern Recognition (CVPR)},
	year={2019}
}

Contact

If you have any questions or find any bugs, please let us know: Zan Gojcic, Caifa Zhou {[email protected]}

Instructions

Dependencies

The pipeline of 3DSmoothNet consists of two steps:

  1. main.cpp: computes the smoothed density value (SDV) voxel grid for a point cloud provided in the .ply format.

  2. main_cnn.py: infers the feature descriptors from the SDV voxel grid using the pretrained model. It can also be used to train 3DSmoothNet from scratch or for fine-tuning.

Computation of the SDV voxel grid, implemented in C++, has dependencies on the Point Cloud Library (PCL) and OpenMP. For convenience we include a shell script, which can be used to install PCL as

./install_pcl.sh

The CNN part of the pipeline is based on Python 3 (specifically 3.5) and is implemented in TensorFlow. The required libraries can be easily installed by running

pip install -r requirements.txt

in a new virtual environment.

To get the correct CUDA and cuDNN versions you can also use the following Docker image: nvidia/cuda:9.0-cudnn7-devel-ubuntu16.04

Input parametrization

We provide a cmake file that can be used to compile main.cpp as:

cmake -DCMAKE_BUILD_TYPE=Release .
make

which will create an executable 3DSmoothNet that takes the following command line arguments:

-f Path to the input point cloud file in .ply format.
-r Half size of the voxel grid in the unit of the point cloud. Defaults to 0.15.
-n Number of voxels along one side of the grid. The whole grid is n x n x n. Defaults to 16.
-h Width of the Gaussian kernel used for smoothing. Defaults to 1.75.
-k Path to the file with the indices of the interest points. Defaults to 0 (i.e. all points are considered).
-o Output folder path. Defaults to "./data/sdv_voxel_grid/".
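
For example, to parametrize one of the demo fragments with the default settings (the parameter values shown are simply the defaults made explicit):

./3DSmoothNet -f ./data/demo/cloud_bin_0.ply -r 0.15 -n 16 -h 1.75 -o ./data/sdv_voxel_grid/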

Testing

Testing with the pretrained models in ./models/ for the 16, 32 and 64 dimensional 3DSmoothNet can easily be done by running

python ./main_cnn.py --run_mode=test

which will infer the 3DSmoothNet descriptors for all SDV voxel-grid files (*.csv) located in ./data/test/input_data/. For more options in running the inference please see ./core/config.py
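
The inferred descriptors are saved as .npz archives with a data field, following the naming pattern <cloud>.ply_<radius>_<voxels>_<kernel>_3DSmoothNet.npz used by demo.py. A minimal sketch for loading them in Python (the path below is the demo's 32-dimensional output and is only illustrative):

import numpy as np

# Load the descriptors inferred by main_cnn.py; one row per keypoint
descriptors = np.load('./data/demo/32_dim/cloud_bin_0.ply_0.150000_16_1.750000_3DSmoothNet.npz')['data']
print(descriptors.shape)  # (number of keypoints, output_dim)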

Training

The provided code can also be used to train 3DSmoothNet from scratch using e.g.:

python ./main_cnn.py --run_mode=train --output_dim=32 --batch_size=256

to train a 32 dimensional 3DSmoothNet with mini-batch size 256. By default, the training data saved in ./data/train/trainingData3DMatch/ will be used and the tensorboard log will be saved in ./logs/. For more training options please see ./core/config.py.
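
Each serialized training example holds the SDV grids of an anchor point and its positive counterpart, which the input pipeline reshapes into pairs of [16, 16, 16, 1] volumes (see the OutOfRangeError issue below for the resulting batch shapes). A minimal sketch of such a TF 1.x input pipeline; the feature key 'X' is a hypothetical placeholder, the real key is defined in saveDataToTFrecordsExample.py:

import tensorflow as tf  # TF 1.x, matching the CUDA 9 / cuDNN 7 image above

def parse_example(serialized):
    # 'X' is a hypothetical key holding the concatenated anchor and positive
    # SDV grids (2 * 16^3 = 8192 floats per training example)
    features = tf.parse_single_example(
        serialized, {'X': tf.FixedLenFeature([8192], tf.float32)})
    pair = tf.reshape(features['X'], [2, 16, 16, 16, 1])
    return pair[0], pair[1]  # anchor grid, positive grid

dataset = tf.data.TFRecordDataset(
    ['./data/train/trainingData3DMatch/sample_training_file.tfrecord'])
dataset = dataset.map(parse_example).batch(256)  # yields [?, 16, 16, 16, 1] pairs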

Evaluation

The source code for the performance evaluation on the 3DMatch data set is available in ./evaluation/.

In order to compute the recall, first run correspondenceMatching.m and then evaluate3DMatchDataset.m.

With small changes to the point cloud names and paths, the code can also be used to evaluate the performance on the ETH data set.

Generation of training data

  • Procedures
    • Prepare the dataset (as .ply format): collect your own point clouds (for fine-tuning) or download the benchmark dataset (e.g. 3DMatch)
    • Perform the Input parametrization using main.cpp
    • Save the SDVs into tfrecord using saveDataToTFrecordsExample.py.
    • Train the model, e.g. using
     python ./main_cnn.py --run_mode=train --output_dim=32 --batch_size=256
    
    to output 32-dimensional features
  • Generating training data using 3DMatch
    • Extract point clouds (.ply) from the RGB-D images using the 3dmatch-toolbox
    • Sample positive tuples: for each pair of training fragments (e.g., $F_i$ and $F_j$) with more than 30% overlap (this is given in the 3DMatch dataset), arbitrarily sample 300 points within the overlap region of $F_i$ and find their nearest neighbors in the transformed $F_j$. Positive tuples are defined as those whose nearest-neighbor distance is less than the average resolution of the 3DMatch dataset (around 6 mm, which can be retrieved from the 3dmatch-toolbox); see the sketch after this list.
    • Compute the SDV for those sampled point pairs and format the SDVs of each pair into one row vector, i.e. each row is one training example.
    • Save the SDVs into a tfrecord
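
A minimal sketch of the positive-tuple sampling step above, assuming the two fragments are available as numpy arrays together with their ground-truth alignment; all function and variable names are hypothetical:

import numpy as np
from scipy.spatial import cKDTree

def sample_positive_pairs(points_i, points_j, T_ij, n_samples=300, max_dist=0.006):
    # Transform fragment j into the frame of fragment i using the
    # ground-truth 4x4 transformation T_ij
    points_j_t = (T_ij[:3, :3] @ points_j.T).T + T_ij[:3, 3]

    # Arbitrarily sample candidate interest points from fragment i
    idx_i = np.random.choice(len(points_i), n_samples, replace=False)

    # Find their nearest neighbors in the transformed fragment j
    dists, idx_j = cKDTree(points_j_t).query(points_i[idx_i])

    # Keep only the pairs closer than the ~6 mm average 3DMatch resolution
    mask = dists < max_dist
    return idx_i[mask], idx_j[mask]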

Demo

We prepared a small demo which demonstrates the whole pipeline using two fragments from the 3DMatch dataset. To carry out the demo, please run

python ./demo.py

after installing and compiling the necessary source code. It will compute the SDV voxel grid for the input point clouds, before inferring 32 dimensional 3DSmoothNet descriptors using the pretrained model. These descriptors are then used to estimate the rigid-body transformation parameters using RANSAC. The software outputs the results of RANSAC as well as two figures, the first showing the initial state and the second the state after the 3DSmoothNet registration.
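
The RANSAC step in demo.py builds on Open3D. A minimal sketch of the idea, assuming an Open3D release around 0.8-0.11 where these functions live under o3d.io and o3d.registration (newer versions moved them to o3d.pipelines.registration); the clouds passed to RANSAC must contain exactly the keypoints the descriptors were computed for, and the distance thresholds are illustrative:

import numpy as np
import open3d as o3d  # assuming an Open3D release around 0.8-0.11

# Load the two demo fragments and their inferred 3DSmoothNet descriptors
source = o3d.io.read_point_cloud('./data/demo/cloud_bin_0.ply')
target = o3d.io.read_point_cloud('./data/demo/cloud_bin_1.ply')
source_desc = np.load('./data/demo/32_dim/cloud_bin_0.ply_0.150000_16_1.750000_3DSmoothNet.npz')['data']
target_desc = np.load('./data/demo/32_dim/cloud_bin_1.ply_0.150000_16_1.750000_3DSmoothNet.npz')['data']

# Wrap the descriptors in Open3D Feature objects (dim x n matrices expected)
source_feat = o3d.registration.Feature()
source_feat.data = source_desc.T
target_feat = o3d.registration.Feature()
target_feat.data = target_desc.T

# Estimate the rigid-body transformation with feature-based RANSAC
result = o3d.registration.registration_ransac_based_on_feature_matching(
    source, target, source_feat, target_feat, 0.05,
    o3d.registration.TransformationEstimationPointToPoint(False), 3,
    [o3d.registration.CorrespondenceCheckerBasedOnDistance(0.05)],
    o3d.registration.RANSACConvergenceCriteria(4000000, 500))
print(result)                 # fitness, inlier_rmse, correspondence_set size
print(result.transformation)  # estimated 4x4 rigid-body transformation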

[figure: 3DSmoothNet]

Data

Training data

Training data created using the RGB-D data from the 3DMatch data set can be downloaded from here (145GB). It consists of a *.tfrecord file for each scene; due to the size of the data, several scenes are split into multiple *.tfrecord files (329 files altogether). In order to train the model using these data, replace the sample_training_file.tfrecord file in ./data/train/trainingData3DMatch/ with the files from this archive. When run in train mode, the source code will automatically read all the files from the selected folder.

If you use these data please consider also citing the authors of the data set 3DMatch.

Evaluation data sets

3DMatch

The point clouds and indices of the interest points for the 3DMatch data set can be downloaded from here (0.27GB).

If you use these data please consider also citing the authors of the data set 3DMatch.

3DSparseMatch

The point clouds and indices of the interest points for the 3DSparseMatch data set can be downloaded from here (0.83GB).

If you use these data please consider also citing the authors of the original data set 3DMatch.

ETH

The point clouds and indices of the interest points for the ETH data set can be downloaded from here (0.11GB).

If you use these data please consider also citing the authors of the original data set ETH.

Pretrained model

The pretrained model of the 128 dimensional 3DSmoothNet can be downloaded from here (0.10GB).

To use this model please unpack the archive to ./models/128_dim/.

TO DO!!

  • Add source code of the descriptors used as baseline

License

This code is released under the Simplified BSD License (refer to the LICENSE file for details).

3dsmoothnet's People

Contributors

caifazhou, qq456cvb, zgojcic


3dsmoothnet's Issues

Building issue, flann::Matrix

Hi !

Thank you for sharing this work, I am interested to use it for my internship of 3D reconstruction.

I am having an issue in the building process,

the make command gives me an error about flann:

In file included from /home/karlmontalban/dev/3DSmoothNet/main.cpp:13:
/home/karlmontalban/dev/3DSmoothNet/core/core.h:38:8: error: 'Matrix' in namespace 'flann' does not name a template type
 flann::Matrix<float> initializeGridMatrix(const int n, float x_step, float y_step, float z_step);
(....)

I have installed flann on my virtual env and I have version 1.8, so I don't understand what is going on...

Any idea?

Missing functions in demo.py

Hi,

I'm trying to run demo.py file and it seems like some of the functions that are being used in this script are missing, for example read_point_cloud or registration_ransac_based_on_feature_matching.
I don't see an import of these functions.
Could you please specify where should I import them from?

Thanks,
Tal

How to Generate the Training Data TFRecord

Hi
I tested the pre-trained model on my lidar point cloud and the result is not good, but when I do the process on my RGB-D point cloud, the match result is perfect. I think I should fine-tune the pre-trained model on my lidar data, but I do not know how to generate the sample_training_file.tfrecord from the point clouds and the ground-truth transformations. Would it be convenient for you to tell me the process?
thanks

Generate training using 3dmatch

Hi,
I want to extract point clouds (.ply) from RGB-D images using the 3dmatch-toolbox, but I do not know which file can be used and how. I am wondering whether it would be convenient for you to send me a small example code on how to produce the data.

Thanks

Rui

error compiling main.cpp

Severity Code Description Project Path File Line Source Suppression State
Error C2039 'PointXYZ': is not a member of 'pcl' 3DSmoothNet D:\freelance\3DSmoothNet-master\core D:\freelance\3DSmoothNet-master\core\core.h 7 Build

I configured and generated this project after installing PCL 1.12.0,
but I get this error.

LRF/SDV computation returning all zeros

Hi,

I'm attempting to generate the LRF/SDV values from a .ply file (saved from open3d) and I'm receiving all zeros in the output .csv file. There were no errors in the output of ./3DSmoothNet:

File: ./datums/temp/target.ply
Number of Points: 3788
Size of the voxel grid: 0.3
Number of Voxels: 16
Smoothing Kernel: 1.75
Number of keypoints:3788

Starting SDV computation!
8 threads will be used!!
Saving Features to a CSV file:
./datums/temp/sdv/target.ply_0.150000_16_1.750000.csv

---------------------------------------------------------
LRF computation took 1080 miliseconds
SDV computation took 2427 miliseconds
---------------------------------------------------------

I then load the resultant .csv file and print out some stats related to it. As you can see, there are no non-zero values in the loaded numpy data:

# Read the output file as raw float32 values, one 4096-dim SDV grid per row
evaluation_features = np.fromfile(fp, dtype=np.float32).reshape(-1, 4096)
print(evaluation_features.shape)
print(np.where(evaluation_features != 0.0))
print(evaluation_features)

(3788, 4096)
(array([], dtype=int64), array([], dtype=int64))
[[0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]
 ...
 [0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]]

I have tried a variety of -r, -h & -n parameters, but with no effect. Is there anything I should keep in mind or anything to look out for when trying to generate LRF/SDV values for a point cloud?

Training raise OutOfRangeError

Hi, Gojcic, I attempt to run your training script and raise some error.

OutOfRangeError (see above for traceback): End of sequence
         [[Node: IteratorGetNext = IteratorGetNext[output_shapes=[[?,16,16,16,1], [?,16,16,16,1]], output_types=[DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](OneShotIterator)]]
         [[Node: IteratorGetNext/_21 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device_incarnation=1, tensor_name="edge_55_IteratorGetNext", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:GPU:0"]()]]

I unpack the training data under ./data/train/trainingData3DMatch .
Do you have any idea where is the problem?

Information about training dataset creation

Good morning,
I'm studying your work and I find it very interesting. If you don't mind, I would like to ask you about a detail of your training setup. I read in another issue here on your GitHub that you trained your model for 472000 iterations, using a batch size of 256 for 20 epochs.

  • First of all, can you please confirm that when you speak about "batch size 256", you mean 256 pairs of 4096-dim descriptors (computed via SDV) for the anchor and positive inputs?
  • Then, every pair of pointclouds should have 300 descriptors for the anchor pointcloud and 300 descriptors for the positive one, each of these computed around the closest points on the target pointcloud. Right? So the training doesn't need to be "pointcloud pair-related", in the sense that every pair of anchor and positive descriptors in the 256 batch could belong to different pairs of pointclouds, even from different scenes. Can you confirm my understanding, or did I say something wrong?
  • In the end, I suppose your training set was composed of 472000/20 = 23600 iterations per epoch, coming from nearly 23600*256 = 6041600 pairs of descriptors computed on the whole training set. I'm asking just to be sure that I got it right. :)

Thanks in advance for your patience, have a nice day.
Marco

bad result of registration

Hi Zan!
Thanks for your contribution first.

I can run the demo perfectly; however, when I use my own data to test, the result is bad, as follows:
registration::RegistrationResult with fitness = 0.000000, inlier_rmse = 0.000000, and correspondence_set size of 0

I don't know whether the problem is the descriptors from the network or the registration in Open3D.

Thanks so much!
kim.

compile cpp file

Hello,

I'm wondering which version of the PCL library you used in the project, since I came across an error when trying to compile the cpp file following your instructions.
[image]
And I checked online and found that PointXYZ is a struct member of PCL 1.12, but I'm not sure whether that is the reason the cpp file cannot be built.

In addition, the "glob3" package in requirements.txt cannot be installed via pip. Looking forward to hearing from you!

Thanks,
Felix

Registration problem

Hello,

I followed your instructions and ran your demo code on my own data; however, the registration result is not good. I applied a simple transformation matrix to the original point set and used the original one to register the transformed one. However, after running 3DSmoothNet, it shows registration::RegistrationResult with fitness = 0.000000, inlier_rmse = 0.000000, and correspondence_set size of 0. I have no idea what's wrong with my process and the data.
[image]

Looking forward to your answer, thanks a lot!

Felix

Cuda and cudnn version

Dear Author,
I would like to know which CUDA and cuDNN versions you used in this project.

Thank you so much.
Best regards,
Yu-Kai Lin.

demo.py fails: no such file ...

Launching demo:

It fails when trying to find these files

# Load the descriptors and estimate the transformation parameters using RANSAC
reference_desc = np.load('./data/demo/32_dim/cloud_bin_0.ply_0.150000_16_1.750000_3DSmoothNet.npz')
reference_desc = reference_desc['data']


test_desc = np.load('./data/demo/32_dim/cloud_bin_1.ply_0.150000_16_1.750000_3DSmoothNet.npz')
test_desc = test_desc['data']

What are these files for? They are not in the ./data/demo directory ...

FileNotFoundError: [Errno 2] No such file or directory: './data/demo/32_dim/cloud_bin_0.ply_0.150000_16_1.750000_3DSmoothNet.npz'

The problem of running demo.py

Hi, Prof. Gojcic and Prof. Zhou

Nice work and thank you for your code.

I ran demo.py according to your website and got different results, as follows. Are the results OK? I wonder whether the different results are caused by an abnormal PCL installation?

Initial State:
[image]

Registration Result:
[image]

Thank you for your time and attention.

Rainbow

Bad result in final alignment

Hi !

I have two pointclouds from lidar and photogrammetry acquisitions.
The result is null as no correspondence is found.

registration::RegistrationResult with fitness = 0.000000, inlier_rmse = 0.000000, and correspondence_set size of 0
Access transformation to get result.

The files are accessible here :
https://drive.google.com/file/d/1pxP_koE6FP5EjjnAvU_tJrFAbw2RprK3/view?usp=sharing
https://drive.google.com/file/d/1nVKukdgohvQdlay0ZwGA9mZCQMr6xVdq/view?usp=sharing

What do you think?
Should I try to slightly align them first before using 3DSmoothNet?

Method to Generate Point Cloud Fragments

Hi, Thanks for your sharing. I have a little question.

In 3DMatch the authors claimed that they use 50 depth images for TSDF fusion to get one point cloud fragment (.ply file). But I found that the number of .ply files provided in the geometric registration benchmark part of their website does not equal the number of RGB-D images / 50. For example, there are 10204 pairs of color and depth images in the sun3d-hotel_uc-scan3 dataset, but only 54 .ply files are provided in the geometric registration benchmark. So I wonder, do they just use part of the RGB-D images to generate the .ply files, or did I miss some information? Could you please explain a little how you generated the point cloud fragments? Or can you share the code for generating the point clouds if you are not using the code from 3DMatch?

Best,
Xuyang.

mapped_indices in SaveDatatoTfRecord.py

Hi!

Firstly, thanks so much for creating this tool! I am still interested in using it in a project, and am currently hoping to fine-tune the network on my own data.

I am trying to generate my own files using the saveDatatoTFRecord.py file. I am a little confused as to what is in mapped_indices.npz. I have a text file with indices of interest points for point clouds X and Y, which were used for the initial input parametrization via the 3DSmoothNet executable. Does mapped_indices.npz contain the actual point cloud values at each index, or is it the same file of indices?

registration help

Hi, I've been working with 3DSmoothNet for a few weeks now. I'm trying to create features I can put into TEASER++, but so far I haven't gotten it to work correctly. I am at the point now where I can run demo.py with your data and everything works. Then I tried to run it with my data (after generating new random keypoint files). Everything seems to run smoothly, and it creates the .npz files. But in the registration phase, after I close the first window, the next window has both point clouds in the same space :) I can't figure out if it's my data or if I've done something wrong. I'm sure I'm doing something wrong. I did try adjusting the voxel size, but that didn't really have an effect.

The point clouds are just two clouds I took of my desk, one is slightly off angle from the other. So I thought it would be easy. Famous last words I guess.

Thanks!

GPU for input parametrization

Hi! Is there a way to enable GPU services for the input parametrization step before inference? My system takes a while to compute the SDV grids for each keypoint, but I have 3 GPU cards available for use that could probably allow for much faster computation. Just wondering if this was something already available! Thanks

3DSmoothNet on own data, process killed and no keypoints

I am now trying to use 3DSmoothNet with my own data.
I have two 500,000-point .ply files of the same scene; can size be a problem?

I launched the executable ./3DSmoothNet on one of my point clouds. The result is:

Config parameters successfully read in!! 

File: ../data/raw_point_clouds/riverside_lidar.ply
Number of Points: 442860
Size of the voxel grid: 0.3
Number of Voxels: 16
Smoothing Kernel: 1.75
Number of keypoints:442860

Less then ten points in the neighborhood!!!
Less then ten points in the neighborhood!!!
Less then ten points in the neighborhood!!!
Less then ten points in the neighborhood!!!
(..................)
Less then ten points in the neighborhood!!!
Less then ten points in the neighborhood!!!
Starting SDV computation!
8 threads will be used!!
Killed

Do you know what has happened?

demo.py got an error about npz

fid = stack.enter_context(open(os_fspath(file), "rb"))

FileNotFoundError: [Errno 2] No such file or directory: './data/demo/32_dim/cloud_bin_0.ply_0.150000_16_1.750000_3DSmoothNet.npz'

I ran demo.py and got this error.

3DSmoothNet execution time

Hello
I am trying to run the 3DSmoothNet executable with ply files from 3DMatch with the following command.
./3DSmoothNet --fileCloud ~/Downloads/3DMatch/kitchen/cloud_bin_51.ply

The execution seems to not terminate.

Shouldn't the SDV computation be a fast procedure??

Thanks in advance

Demo.py only works on the given keypoints, method does not generalize well?

The provided keypoints file always works. However, when I change the keypoints part to randomly select 1000 indices:

// Build a list of all point indices
std::vector<int> ep_temp(cloud->points.size());
std::iota(ep_temp.begin(), ep_temp.end(), 0);
// Shuffle the indices with a time-based seed
unsigned seed = std::chrono::system_clock::now().time_since_epoch().count();
std::shuffle(ep_temp.begin(), ep_temp.end(), std::default_random_engine(seed));
// Keep the first 1000 shuffled indices as evaluation points
evaluation_points.insert(evaluation_points.begin(), ep_temp.begin(), ep_temp.begin()+1000);
ep_temp.clear();

this almost never works. Are there any other tricks to make it work?

Apply this method to 3D template matching?

I'm interested in applying this technique to 3D template matching, found in datasets such as LineMOD. Basically, given a 3D model of an object and a point cloud of an environment containing the model, find the pose of the model.

I figured I could convert the 3D model into a point cloud and use 3DSmoothNet to 'register' it with the environment point cloud. This doesn't immediately work, but I'm not sure I'm properly adapting the data to 3DSmoothNet or using appropriate settings. Just curious if there are any suggestions on making this work, or if the problem is just too different (e.g. due to object and environment point clouds being much different in size).

About LRF computation...

In core.cpp, line 326, the source code is: toldiComputeZaxis(sphere_neighbor_z, z_axis, point_dst);
Why is sphere_neighbor_z used instead of sphere_neighbor?

Key points from 3DSparseMatch dataset

Dear Author,
I would like to know how the keypoints with indices in the 3DSparseMatch dataset were extracted.
What method did you use, or did you just randomly pick 5000 points from each point cloud?

Thank you so much.
Best regards,
Yu-Kai Lin.

SDV Parametrization: CSV file to TfRecord

When computing the SDV grids using the included 3DSmoothNet executable (super helpful tool, thanks for including it!), I noticed it saves the output into a CSV file. I am trying to fine-tune the network using my own data, but saveDataToTFrecordsExample.py reads in a numpy array. I am not able to easily convert the CSV into a numpy array, which seems necessary to convert the training data to tfrecord format. Is there a way to change the output file format? Thanks so much!

Baselines code

Hi Zan,

Would you be able to share the SHOT code you used for your experiments?

Thanks

Extract point-wise feature from simple objects

Hi,

I have found that the network can extract very nice features from large scenes, but when it scales down to small objects, the extracted features are the same for all points. May I get some ideas about the possible reason?

I have tried to modify the parameters for generating the SDV, but the problem remains. I am attaching the point cloud file I used:
point_cloud.zip

Thanks for your attention!

Dataset creation with 3DMatch dataset for training

Hi @zgojcic

I am trying to train 3DSmoothNet with the 3DMatch dataset. I wanted to know how the descriptors (dataset) for main_cnn.py are generated. That is, how are the pairs of RGB-D images (from which the keypoint descriptors are computed) chosen?

Thanks,
Surabhi

About the generation of training fragments of 3d match

Many thanks for your promising work. I would like to generate some training fragments from the 3DMatch dataset, and I found the statement in the readme that each pair of training fragments has more than 30% overlap (this is given in the 3DMatch dataset). Do you mean that training pairs with more than 30% overlap are already prepared in the 3DMatch dataset?

I tried to find such training pairs in the 3DMatch dataset but got no result. Could you tell me where to find the training pairs? Or, if I need to generate them myself, which script in the 3dmatch-toolbox should be used?

Thanks in advance. :)

No module named core

Hi all !

Running python main_cnn.py:

/usr/lib/python2.7/site-packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.25.3) or chardet (3.0.4) doesn't match a supported version!
  RequestsDependencyWarning)
Traceback (most recent call last):
  File "main_cnn.py", line 21, in <module>
    from core import config
ImportError: No module named core

I can't reach the core folder from the python code.

Any idea why?

Script for input parametrization for 3DMatch dataset

Hi,

First of all, thank you guys for sharing this excellent work online. I'm in the process of reproducing the results mentioned in the paper using your MATLAB script, and noticed that there's no script for running input parametrization on the 3DMatch dataset.

Is there a chance for you guys to provide such a script?

Thanks in advance!

32 dim pretrained model download

Thank you for publishing the code used in your work :)

I would like to use the model, specifically the 32 dimensional descriptor. The paper specifically investigates this dimensionality, also it should run a little faster and requires only 1/4th of the memory to store the features.

The paper states:
Code, data and pre-trained models are available online at [...]

However there is only a single pretrained model linked:
https://share.phys.ethz.ch/~gsg/3DSmoothNet/models/128_dim/3DSmoothNet_model_128_dim.rar

I tried to be smart and replaced the 128 with 32, but that resulted in a 404 :(

Could you provide the 16 and 32 dimensional pretrained models as well?

double free or corruption (out)

When I run "./3DSmoothNet -f myfile -o /data/test/output", I get a "double free or corruption" error:

"Config parameters successfully read in!!

File: ./data/demo/cloud_bin_0.ply
Number of Points: 258342
Size of the voxel grid: 0.3
Number of Voxels: 16
Smoothing Kernel: 1.75
Number of keypoints:47596

Starting SDV computation!
4 threads will be used!!
Initialize the point to the descriptor for each thread used
sphere_neighbors is sphere_neighbors is 1313sphere_neighbors is sphere_neighbors is
1341
13981367

counter_voxel is4096
counter_voxel is4096
counter_voxel is4096
counter_voxel is4096
double free or corruption (out)
Aborted (core dumped)"

Could you please tell me how to fix it?

Does not work on real RGB-D data!!!

I have tested the net on real RGB-D datasets; the computation was really slow, and it could not register point clouds that can be registered by FPFH.

How to Generate Data TFRecord: CSV file to TfRecord

Hi,
We used our own data to generate the SDV, and I noticed that it (the SDV) is a CSV file. But saveDataToTFrecordsExample.py reads in a numpy array, so I want to know how you use the CSV file to generate the TFRecord. Thanks so much!
