
fastglobalregistration's People

Contributors: jponttuset, oleg-alexandrov, qianyizh, syncle


fastglobalregistration's Issues

Hessian matrix seems wrong

Hi, should the Hessian matrix be [[I, -q^], [-q^, q^*q^]] when we do the optimization? Because, according to the BCH formula, the Jacobian should be [I, -q^].
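For context, my reading of the code rather than an authoritative answer: per correspondence the accumulation loop in OptimizePairwise builds the rows of J = [ q^  -I ], so the Gauss-Newton approximation of the Hessian that actually gets summed is

$$ J^\top J = \begin{bmatrix} \hat{q}^\top\hat{q} & -\hat{q}^\top \\ -\hat{q} & I \end{bmatrix} = \begin{bmatrix} -\hat{q}\hat{q} & \hat{q} \\ -\hat{q} & I \end{bmatrix}, $$

which differs from the block form in the question only in the ordering of the rotation/translation parameters and in signs coming from the skew-symmetry $\hat{q}^\top = -\hat{q}$.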

seg fault when linking PCL?

Hi, this is amazing work. I'm working on a related paper and trying to use your work.

However, I ran into a problem. Is there anything in the program that conflicts with PCL? When I do:
TARGET_LINK_LIBRARIES(FastGlobalRegistration FastGlobalRegistrationLib ${PCL_LIBRARIES})

Although it compiles successfully, when I run the main program it runs into a segfault at the end, during destruction of the class "fgr::CApp".

However, after removing ${PCL_LIBRARIES}, everything works well. Do you know why? This is very important to me, because I'm using PCL heavily in other parts.

Unit of RMSE for the synthetic range data sets

Hello, I have a question about the unit of the RMS error calculated with the 25 synthetic range data sets. The paper says the RMS unit is the diameter of the surface. Could you please explain this? Thanks!

Error building Matlab_binding/fast_global_registration.

I want to get the .mexw64 files and use them in MATLAB, but I failed to build one of them. I used VS 2017, and the errors it showed were about undeclared identifiers in "fast_global_registration.cpp". I checked the file, and I can find the definitions of these identifiers in app.h.
[screenshot of the build errors omitted]

Issues getting decent transformations

Hi,

I am trying to use this algorithm to replace my PCL ICP-based system, but have not been able to get any decent transformations. I am trying to align a laser-scanned model of an object with the scene point cloud of the object in a basic environment. I followed the instructions to create input for the algorithm and got output as follows:

ReadFeature ... done.
ReadFeature ... done.
normalize points :: mean[0] = [-0.202804 -0.075849 0.622425]
normalize points :: mean[1] = [0.015940 -0.047936 0.001295]
normalize points :: global scale : 1.000000
Advanced matching : [0 - 1]
points are remained : 2451
[cross check] points are remained : 11
[tuple constraint] 0 tuples (1100 trial, 1100 actual).
[final] matches 0.
Pairwise rigid pose optimization

with transformations of

0 1 2
1.0000000000 0.0000000000 0.0000000000 -0.2187434137
0.0000000000 1.0000000000 0.0000000000 -0.0279131606
0.0000000000 0.0000000000 1.0000000000 0.6211291552
0.0000000000 0.0000000000 0.0000000000 1.0000000000

This is clearly not correct, as a rotation is needed to align the clouds.

I tried playing with some of the parameters but have not gotten better results.

Do you have any tips for getting better results? I attached two sample files, one of the model and one of the scene, to see if anyone is able to get a good transformation that aligns the model (sample_files.zip)

Thanks!
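For anyone debugging a similar "[final] matches 0" outcome: a collapse from 2451 correspondences to 11 at the cross check usually points at FPFH features computed with radii that do not suit the scale of the data, rather than at the optimizer. Below is a minimal sketch of producing features with Open3D using the 1:2:5 voxel/normal/feature radius rule of thumb that also comes up elsewhere in these issues; the Open3D route, the voxel value, and the file name are assumptions to adapt, not a verified fix for the attached sample files.

import open3d as o3d

def make_features(path, voxel):
    # Downsample, estimate normals, then compute FPFH; the normal radius is
    # ~2x the voxel size and the FPFH radius ~5x (the 1:2:5 rule of thumb).
    pcd = o3d.io.read_point_cloud(path)
    down = pcd.voxel_down_sample(voxel)
    down.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        down, o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=100))
    return down, fpfh

# A voxel of a few percent of the model's extent is a common starting point,
# e.g. make_features("model.ply", 0.05) for a roughly metre-sized object.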

How can I compile using g++?

/tmp/ccvT6nCV.o: In function `main':
main.cpp:(.text+0xda): undefined reference to `fgr::CApp::ReadFeature(char const*)'
main.cpp:(.text+0xfa): undefined reference to `fgr::CApp::ReadFeature(char const*)'
main.cpp:(.text+0x109): undefined reference to `fgr::CApp::NormalizePoints()'
main.cpp:(.text+0x118): undefined reference to `fgr::CApp::AdvancedMatching()'
main.cpp:(.text+0x12c): undefined reference to `fgr::CApp::OptimizePairwise(bool)'
main.cpp:(.text+0x14c): undefined reference to `fgr::CApp::WriteTrans(char const*)'
collect2: error: ld returned 1 exit status

If I just run "g++ main.cpp" in the terminal, I get the error above.

Is there any way I can build it with g++ without using CMake?
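The undefined references are all fgr::CApp member functions, so the translation unit that defines them has to be compiled and linked along with main.cpp; "g++ main.cpp" alone only builds the driver. Assuming the definitions live in app.cpp next to main.cpp (check the file names in your checkout), a build without CMake would look roughly like

g++ -O2 main.cpp app.cpp -o FastGlobalRegistration

plus the include paths for Eigen and FLANN (and any other flags the project's CMakeLists.txt adds) passed via -I.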

Using Live PCL clouds?

Hi, and thank you for making this code available. I am trying to implement it in my GICP application to test speed/accuracy, but I am having trouble.

With live cloud input, what is the workflow?

Read clouds.
Extract features.

This gives me my original cloud PointCloud<pcl::PointXYZI> and my features PointCloud<FPFHSignature33>.
I do this for both clouds. What do I need to do to pass this data into your lib?

thank you!
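One workflow that avoids touching the C++ API directly is to dump each cloud plus its FPFH descriptors into the binary format the standalone FastGlobalRegistration executable reads. The layout assumed below (point count and feature dimension as 32-bit ints, then 3 position floats followed by the feature floats per point) is my reading of ReadFeature in app.cpp; verify it against your checkout before relying on it. A minimal NumPy sketch:

import struct
import numpy as np

def write_fgr_features(path, points, features):
    # points: (N, 3) XYZ; features: (N, D) per-point descriptors (D = 33 for FPFH).
    points = np.asarray(points, dtype=np.float32)
    features = np.asarray(features, dtype=np.float32)
    assert points.shape[0] == features.shape[0]
    n, d = features.shape
    with open(path, "wb") as f:
        f.write(struct.pack("ii", n, d))
        # One point per record: 3 position floats, then D feature floats.
        np.hstack([points, features]).astype(np.float32).tofile(f)

# From PCL, copy the xyz fields and the 33-bin histogram of each FPFHSignature33
# into the two arrays, e.g. write_fgr_features("features_0000.bin", xyz, fpfh).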

Wrong alignment on full room scans

Hello,
I would like to publish a comparison between FGR and my 3D alignment method.
I conducted an experiment with 12 tests; each test contains a scene and a template, and each 3D model can be represented as a polygon mesh or a point cloud.
10 of the scenes are from SceneNN and 2 from ScanNet; the templates are taken from ShapeNet.
The alignment presented is between the scene (source) and template (target) point clouds.
I used a voxel radius of 0.04 and the 1:2:5 radii suggested in the troubleshooting guide.
The FGR parameters remain at their defaults.

In the link below I have visualized the transformation T generated by FGR for all 12 tests. It seems that in all tests the result is not successful. Can you suggest a general parametrization that will yield a much better alignment for the tests?

In the folder you can find the Python script I used to compute the FPFH features and run FGR.
(Links to visualizations for scene_1 through scene_12 omitted.)

All data is here: https://www.dropbox.com/sh/dxglr4ga1f2ly0k/AACIE4eNcCYBMhYgy3qjbO_7a?dl=0

Thanks,
Tamir
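Not an answer, but a cheap sanity check worth running before tuning anything: FGR estimates a rigid transform, so if the ShapeNet templates are stored in normalized model units while the SceneNN/ScanNet scenes are metric, no parametrization can align them and the template has to be rescaled first. A small Open3D sketch (file names are hypothetical) for comparing the bounding-box diagonals of the two clouds:

import numpy as np
import open3d as o3d

def bbox_diagonal(path):
    # Axis-aligned bounding-box diagonal: a rough proxy for the units/scale
    # a cloud is expressed in.
    pcd = o3d.io.read_point_cloud(path)
    return np.linalg.norm(pcd.get_max_bound() - pcd.get_min_bound())

scene_diag = bbox_diagonal("scene_1.ply")        # hypothetical file names
template_diag = bbox_diagonal("template_1.ply")
print(scene_diag, template_diag, scene_diag / template_diag)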

Usage of the line process (the r2 variable, or Ψ)

In Section 3.2 of your paper, the line process does not need to be calculated during the optimization of T.

If my understanding is right, we don't have to calculate the line process for pairwise registration.

For global registration, the line process is needed, of course.

So, in the source code, the variable r2 seems useless, because r2 is never used at all.

double r2 = 0.0;
		for (int c = 0; c < corres_.size(); c++) {
			int ii = corres_[c].first;
			int jj = corres_[c].second;
			Eigen::Vector3f p, q;
			p = pointcloud_[i][ii];
			q = pcj_copy[jj];
			Eigen::Vector3f rpq = p - q;

			int c2 = c;

			float temp = par / (rpq.dot(rpq) + par);
			s[c2] = temp * temp;

			J.setZero();
			J(1) = -q(2);
			J(2) = q(1);
			J(3) = -1;
			r = rpq(0);
			JTJ += J * J.transpose() * s[c2];
			JTr += J * r * s[c2];
			r2 += r * r * s[c2];

			J.setZero();
			J(2) = -q(0);
			J(0) = q(2);
			J(4) = -1;
			r = rpq(1);
			JTJ += J * J.transpose() * s[c2];
			JTr += J * r * s[c2];
			r2 += r * r * s[c2];

			J.setZero();
			J(0) = -q(1);
			J(1) = q(0);
			J(5) = -1;
			r = rpq(2);
			JTJ += J * J.transpose() * s[c2];
			JTr += J * r * s[c2];
			r2 += r * r * s[c2];

			r2 += (par * (1.0 - sqrt(s[c2])) * (1.0 - sqrt(s[c2])));
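A possible reading of the snippet above, for anyone with the same question: with the closed-form line-process weight from the paper, each correspondence contributes its weighted squared residual plus the prior term Ψ, and those are exactly the two kinds of additions made to r2. In other words, r2 accumulates the current value of the joint objective E(T, L), which is useful for monitoring convergence, rather than anything needed by the update itself:

$$ l_{p,q} = \left(\frac{\mu}{\mu + \lVert p - Tq \rVert^2}\right)^2, \qquad E(T, L) = \sum_{(p,q)} l_{p,q}\,\lVert p - Tq \rVert^2 + \mu\left(\sqrt{l_{p,q}} - 1\right)^2, $$

where s[c2] in the code equals l_{p,q}. The three r2 += r * r * s[c2] lines add the first term component by component, and the final r2 += par * (1 - sqrt(s[c2]))^2 line adds the second.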

Parameters tuning in Open3D

For convenience, I used the FastGlobalRegistration implementation in the Open3D library and used the sample code from this page:
http://www.open3d.org/docs/release/tutorial/Advanced/global_registration.html#id2

But I found that the parameters in Open3D are different from those in this repo. As far as I know, there is only a voxel_size parameter in Open3D's sample code. When I tune voxel_size, I can only get some not-very-good results like this:
[screenshot of the registration result omitted]

So I wonder if there is any instruction on how to tune the parameters of Open3D's implementation.
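For what it is worth, Open3D's wrapper exposes more than voxel_size; the tutorial simply derives everything from it. If your Open3D version provides o3d.pipelines.registration.FastGlobalRegistrationOption (older releases keep it under o3d.registration), the knobs from this repo's config map onto it roughly as sketched below; treat the exact keyword names and the 0.5 * voxel_size distance as assumptions to check against your installed version.

import open3d as o3d

def run_fgr(source_down, target_down, source_fpfh, target_fpfh, voxel_size):
    # Mirror the repo's config names: DIV_FACTOR, USE_ABSOLUTE_SCALE,
    # MAX_CORR_DIST, ITERATION_NUMBER, TUPLE_SCALE, TUPLE_MAX_CNT.
    option = o3d.pipelines.registration.FastGlobalRegistrationOption(
        division_factor=1.4,
        use_absolute_scale=False,
        maximum_correspondence_distance=0.5 * voxel_size,
        iteration_number=64,
        tuple_scale=0.95,
        maximum_tuple_count=1000)
    return o3d.pipelines.registration.registration_fgr_based_on_feature_matching(
        source_down, target_down, source_fpfh, target_fpfh, option)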

Modify optimization for 4 DoF transformations?

In my problem, the two point clouds are transformed by only x,y,z position and yaw angle. Would it make sense to modify FGR to enforce this constraint? Should better accuracy/speed be expected?

It was certainly a benefit for RANSAC, and I'm curious if I can get even better results with FGR.
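A sketch of what the modification would look like, derived from the per-correspondence Jacobian used in OptimizePairwise (my reading, not a recipe from the authors, and it assumes yaw is about the data's z axis): keep only the yaw component ω_z of the rotation increment, so each correspondence contributes three rows over the four unknowns (ω_z, t_x, t_y, t_z),

$$ J_x = \begin{bmatrix} q_y & -1 & 0 & 0 \end{bmatrix}, \quad J_y = \begin{bmatrix} -q_x & 0 & -1 & 0 \end{bmatrix}, \quad J_z = \begin{bmatrix} 0 & 0 & 0 & -1 \end{bmatrix}, $$

and a 4x4 system J^T J ξ = -J^T r is solved in place of the 6x6 one. I would expect the main gain to be robustness (bad correspondences can no longer tilt the solution out of plane) rather than speed, since the linear solve is already negligible next to the matching.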

Final matches is always 0

Couldn't make this work, even using the sample data. What am I missing?
Final matches is always 0, and the number of points that remain after the cross check is always low, like 2 to 6.

.\FastGlobalRegistration.exe "features_0000.bin" "features_0001.bin" "mat.txt2"

Current config:
DIV_FACTOR 1.4
USE_ABSOLUTE_SCALE 0
MAX_CORR_DIST 0.025
ITERATION_NUMBER 64
TUPLE_SCALE 0.95
TUPLE_MAX_CNT 1000
Absolute path: D:\FastGlobalRegistration\Build\FastGlobalRegistration\Release\features_0000.bin
ReadFeature ... 13985 points with 33 feature dimensions.
Absolute path: D:\FastGlobalRegistration\Build\FastGlobalRegistration\Release\features_0001.bin
ReadFeature ... 15468 points with 33 feature dimensions.
normalize points :: mean[0] = [-0.664467 -0.160645 -0.265918]
normalize points :: mean[1] = [0.288589 -0.335949 -0.423488]
normalize points :: global scale : 0.040751
Advanced matching : [0 - 1]
Number of points that remain: 13987
        [cross check] Number of points that remain after cross-check: 2
        [tuple constraint] 0 tuples (200 trial, 200 actual).
        [final] matches 0.
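When the cross check keeps only 2 points out of ~14k, the descriptors themselves are usually the problem (normal/FPFH radii that do not match the data's scale), not the matcher or the config above. A quick way to verify this outside FGR, with hypothetical array names for the two 33-dimensional feature sets:

import numpy as np
from scipy.spatial import cKDTree

def mutual_matches(feat0, feat1):
    # Count mutual nearest neighbours between two (N, 33) feature arrays.
    # If this number is tiny, the features are not distinctive enough and
    # FGR's tuple test cannot succeed either.
    nn01 = cKDTree(feat1).query(feat0, k=1)[1]   # best match in feat1 for each row of feat0
    nn10 = cKDTree(feat0).query(feat1, k=1)[1]   # best match in feat0 for each row of feat1
    return sum(1 for i, j in enumerate(nn01) if nn10[j] == i)

# e.g. print(mutual_matches(fpfh0, fpfh1))  # fpfh0/fpfh1 are hypothetical arrays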

where can I get the UWA dataset?

I searched for a long time and still couldn't find a download address for the UWA dataset. I skimmed the original UWA paper and found that it does not give a download address either. Where should I go to download the UWA dataset? Thanks!

speed of the code

The code runs very slowly when searching for the nearest points; how can I speed it up?

How to evaluate the error without ground truth?

Hello Professor Zhou! I have tested this library and the results are impressive. But when I use it for registration in face modeling, a non-trivial error is still introduced after multi-frame fusion. So I would like to ask: is there an evaluation method, similar to reprojection error, that could be used to filter the poses produced by FGR? The evaluation requires gt.log as input; what can I do if I don't have it?
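If it helps: Open3D provides a ground-truth-free proxy that is close in spirit to a reprojection error. Apply the estimated pose and measure the overlap (fitness) and inlier RMSE against the target; poses with low fitness or high RMSE can then be rejected before fusion. A small sketch, where T is the transform returned by FGR and max_dist is an inlier threshold you choose for your data (both are assumptions):

import open3d as o3d

def score_pose(source, target, T, max_dist):
    # fitness = fraction of source points with a target neighbour within
    # max_dist after applying T; inlier_rmse = RMSE over those inliers.
    result = o3d.pipelines.registration.evaluate_registration(
        source, target, max_dist, T)
    return result.fitness, result.inlier_rmse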

Cannot understand the Jacobian computation in function OptimizePairwise

Hello,
I read the paper Fast Global Registration, which uses the Gauss-Newton method to optimize T via Equation 8 in Section 3.2.
[screenshot of Equation 8 omitted]
I find your calculation of the Jacobian as follows:
int ii = corres_[c].first;
int jj = corres_[c].second;
Eigen::Vector3f p, q;
p = pointcloud_[i][ii];
q = pcj_copy[jj];
Eigen::Vector3f rpq = p - q;

int c2 = c;

float temp = par / (rpq.dot(rpq) + par);
s[c2] = temp * temp;

J.setZero();
J(1) = -q(2);
J(2) = q(1);
J(3) = -1;
r = rpq(0);
JTJ += J * J.transpose() * s[c2];
JTr += J * r * s[c2];
r2 += r * r * s[c2];

J.setZero();
J(2) = -q(0);
J(0) = q(2);
J(4) = -1;
r = rpq(1);
JTJ += J * J.transpose() * s[c2];
JTr += J * r * s[c2];
r2 += r * r * s[c2];

J.setZero();
J(0) = -q(1);
J(1) = q(0);
J(5) = -1;
r = rpq(2);
JTJ += J * J.transpose() * s[c2];
JTr += J * r * s[c2];
r2 += r * r * s[c2];

r2 += (par * (1.0 - sqrt(s[c2])) * (1.0 - sqrt(s[c2])));

To have a better understanding, I tried to substitute l_{p,q} (Equation 6) into Objective 3, E(T; L).
[derivation screenshots omitted]
Then I got:
[screenshot omitted]
Using the chain rule:
[screenshot omitted]
So:
[screenshot omitted]
And I realize that this equation corresponds to the following line in the code:
s[c2] = temp * temp;
As far as I am concerned, J(2) should be calculated as follows:
[derivation screenshots omitted]
• I see that your calculation of the Jacobian is divided into three parts, which is quite different from what I expected. Could you explain more about your code for the Jacobian calculation?
Thanks,
yechuankun
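For readers with the same question, here is one way to read the three blocks (my derivation from the code above, not an official one). With the incremental update T ← exp(ξ^) T applied on top of the current estimate, ξ = (ω, t), and q already transformed by the current T (pcj_copy), the residual r = p - q linearizes as r(ξ) ≈ r - ω^ q - t, hence

$$ \frac{\partial r}{\partial(\omega, t)} = \bigl[\; \hat{q} \;\; -I \;\bigr], \qquad \hat{q} = \begin{bmatrix} 0 & -q_z & q_y \\ q_z & 0 & -q_x \\ -q_y & q_x & 0 \end{bmatrix}. $$

Each J.setZero() block in the code fills one row of this 3x6 matrix (the x, y, and z components of the residual in turn) and adds its rank-one contribution to JᵀJ and Jᵀr, weighted by s[c2] = l_{p,q}. Summing the three rows separately is identical to forming the full 3x6 Jacobian at once, so nothing is missing; it is just an implementation convenience.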

The question concentrates on Equation 8.

Hello author,
Thanks for your friendly ideas in this work. Here I have a question about Eq. 8. In my understanding, it should be J^T J Δλ = -J^T r. However, in your code and paper it is J^T J λ = -J^T r, which has confused me for a long time.
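A short note that may resolve the confusion (my reading of the paper and code): the unknown in Eq. 8 is itself the increment, applied on top of the current estimate at every Gauss-Newton step,

$$ \bigl(J^\top J\bigr)\,\xi = -J^\top r, \qquad T \leftarrow \exp(\hat{\xi})\, T, $$

so the λ in the question is already a "Δ" relative to the current T, not an absolute pose; writing J^T J Δλ = -J^T r and J^T J λ = -J^T r therefore describes the same update.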

Question about Fig2 in the paper page5

Hi, this is an amazing work, and thank you very much.
But I have a question about Fig. 2 on page 5 of the paper. Fig. 2 is a plot of $$ \rho(x)=\frac{\mu x^2}{\mu+x^2} $$
[Fig. 2 from the paper omitted: FGR_loss_paper]
But when I plot it myself, the result is different. Why?
[my plot omitted: FGR_loss_my]

This is the code I use to draw the plot:

# -*- coding: utf-8 -*-
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt  

mpl.rcParams['font.family'] = 'SimHei'
plt.rcParams['axes.unicode_minus'] = False

x = np.linspace(-10, 10, 100)
M_list = [0.25, 1, 4, 16]
color_list = ['k', 'b', 'g', 'r']

for idx, _ in enumerate(M_list):
    M = M_list[idx]
    color = color_list[idx]
    y = (M * np.square(x)) / (M + np.square(x)) # formulation
    plt.plot(x, y, color, label='μ='+str(M))
    plt.annotate('μ='+str(M), xy=(x[0], y[0]), xytext=(5, 5),
                 xycoords='data', textcoords='offset points', fontsize=10, color=color)

plt.title('Geman-McClure estimator')
plt.xlabel('x')
plt.ylabel('y')
plt.legend()

plt.axis("equal")
plt.show()

Does FGR work for point clouds with large rotation?

Since FGR is a global registration algorithm, an initial pose is not necessary. Does the algorithm work when the target point cloud is rotated by 90 degrees or more? I am working with RGB-D data (a partial point cloud vs. a clean and complete 3D model), but didn't get good results using FGR.

Descriptor for noisy data

Hello all,

First of all thank you very much for open sourcing your work!
I am having trouble getting an alignment between two point clouds. I am trying to globally localize in a larger map (base map) based on a smaller excerpt (query map).
The data, however, has rather high noise between the base map and the query map. I have been using FPFH features as suggested in the README, but I either receive no final transformation or a wrong one. I believe the low performance I am experiencing is due to the descriptor choice / parametrization.

My question is: do you have an intuition on how to parametrize the descriptors, or which descriptors to use, on very noisy data? For example, I believe the normal estimation is not very reliable on my data. Furthermore, does it make sense to select keypoints, or should the matching always be done densely?

Thank you very much for your help!
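Not an authoritative recommendation, but on noisy map data the normal estimation feeding FPFH is often the weak link: a radius that is too small and inconsistently oriented normals hurt the descriptor more than the choice of descriptor itself. If you compute the features with Open3D, a hedged sketch of the two orientation options worth trying (radius, neighbour count, and file name are assumptions to tune):

import open3d as o3d

pcd = o3d.io.read_point_cloud("query_map.pcd")   # hypothetical file name
pcd.estimate_normals(
    o3d.geometry.KDTreeSearchParamHybrid(radius=0.5, max_nn=50))

# Option 1: propagate a consistent orientation over a tangent-plane graph.
pcd.orient_normals_consistent_tangent_plane(30)

# Option 2: if the sensor origin per scan is known, orient towards it instead.
# pcd.orient_normals_towards_camera_location([0.0, 0.0, 0.0])

As for keypoints: the pipeline in this repository matches whatever features it is given densely, so dense matching on a downsampled cloud is the safer default, and keypoint selection mainly buys speed.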

Cannot find the multi-way registration module in the source code?

Currently, I have tested the pairwise registration in your code. It is amazing that the registration results are very accurate and the program is very fast. But when I want to input a set of point clouds, I cannot find a multi-way registration module corresponding to the algorithm proposed in your paper. Can you give me some suggestions?

Speeding up FPFH generation

Hi,
I have successfully used FGR to align a pointcloud (~27000 points) with a transformed (rotated and translated) version of itself and it worked.

I noticed that the FPFH calculations take extremely long relative to the actual FGR alignment itself.

I am trying to use FGR for a real-time application (aligning consecutive pointclouds from a bag file), and clearly the speed of the feature detection will not do.

Any ideas on how I could speed this up? Is there any way I could use CUDA/GPU to accelerate this?

Would it be possible to "save" features found with FPFH for faster FPFH computation of a subsequent pointcloud (in a real-time application)?
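Two cheap wins before reaching for CUDA, sketched below with Open3D (the Open3D route, voxel size, and the helpers stream_of_clouds/register_pair are assumptions or hypothetical): compute FPFH on a voxel-downsampled cloud rather than on the raw points, and in a streaming pipeline cache each frame's features so every cloud is described exactly once and reused when it becomes the "previous" frame. Features computed for one cloud cannot shortcut the computation for a different cloud, but they never need to be recomputed for the same cloud.

import open3d as o3d

VOXEL = 0.05  # assumed; tune to the sensor/scene scale

def describe(pcd):
    # FPFH on a downsampled cloud is far cheaper than on the full-resolution one.
    down = pcd.voxel_down_sample(VOXEL)
    down.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=2 * VOXEL, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        down, o3d.geometry.KDTreeSearchParamHybrid(radius=5 * VOXEL, max_nn=100))
    return down, fpfh

prev = None
for pcd in stream_of_clouds():        # hypothetical frame source (e.g. from the bag)
    cur = describe(pcd)               # each frame is described exactly once
    if prev is not None:
        register_pair(prev, cur)      # hypothetical: run FGR on the cached pair
    prev = cur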
