
Comments (8)

ySalaun commented on September 17, 2024

Hi,

I have also observed this issue with different images, and it is difficult to get rid of.
What I did was to compute matches only between consecutive pictures.
However, this solution only works if you know that the pictures are in the right order, and if such an order actually exists.

If you cannot use this solution, here are a few alternatives that might work:

  • find a better line matching algorithm (personally I haven't, but there may be new code available by now)
  • tune the thresholds inside the LBD matcher (e.g. only accept matches with better descriptor similarities)
  • define and tune a threshold after the hybrid RANSAC in the calibration part. The a contrario RANSAC gives you an NFA at the end, and this value is supposed to tell you whether the calibration worked well or not (-logNFA should be very high in good cases and very low otherwise)
  • validate the calibration hypothesis by reprojecting 3D segment extremities. It is a bit complicated to explain here, but the idea is to compute the 3D position of a segment extremity from its position in one picture (pic. 1) and the position of the matched line in the other picture (pic. 2). From the 3D position you compute the projection onto pic. 2 and measure its distance to the corresponding segment (segment distance, not line distance). When the distance is low you validate the segment match, otherwise you invalidate it. You perform this check on all the segment inliers and give a score to the calibration (e.g. #validated matches / #inlier matches), and with a threshold you validate or invalidate the image pair. Note that even in good cases this value might be low, but in bad cases it should be really close to 0.
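The last idea above (scoring a calibration by the fraction of inliers that survive the reprojection check) can be sketched as a small scoring function. This is only an illustration, not code from LineSfM; the function names and thresholds are made up:

```cpp
#include <vector>

// Fraction of inlier segment matches whose reprojected-extremity distance
// (in pixels) falls below distThreshold. Hypothetical helper, not part of
// the LineSfM code base.
double matchValidationScore(const std::vector<double>& reprojDistances,
                            double distThreshold) {
  if (reprojDistances.empty()) return 0.0;
  int validated = 0;
  for (double d : reprojDistances)
    if (d < distThreshold) ++validated;
  return static_cast<double>(validated) / reprojDistances.size();
}

// Accept the image pair only when enough inliers survive the check
// (e.g. #validated matches / #inlier matches >= minScore).
bool acceptImagePair(const std::vector<double>& reprojDistances,
                     double distThreshold, double minScore) {
  return matchValidationScore(reprojDistances, distThreshold) >= minScore;
}
```

As noted above, `minScore` should be set conservatively, since even good pairs can have a fairly low score while bad pairs sit near 0.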

I am sorry I cannot give you a perfect solution, but I think one of the biggest current difficulties of SfM with lines is in fact the line matching...

Best,

Yohann

from linesfm.

haopo2005 commented on September 17, 2024

Hi,
I can't find the relationship between the vanishing point computation and the line matching computation in main_line_matching.cpp. It seems you just read or compute the VPs and then discard them.
Did you integrate the line matching module from Lilian Zhang's code (https://github.com/mtamburrano/LBD_Descriptor)?
There are too many thresholds for a newbie to tune.
I will also test other line matching algorithms later, for example
https://docs.opencv.org/3.4.0/df/dfa/tutorial_line_descriptor_main.html
or
https://github.com/kailigo/LineSegmentMatching (not efficient but really accurate)

I'd also like to know the internal structure of x_y_matches_line.txt, so that I can replace the line matching module and continue with the calibration stage (computing the relative camera pose).

Best regards,
Jin


ySalaun commented on September 17, 2024

Hi,

Regarding the VPs: it is historical code I forgot to erase. I tried accepting line matches only when the vanishing points agreed globally, but it didn't work well, so you can ignore/erase this part.

The line matching code is from Lilian Zhang, but not the version on GitHub; it is the one from his website (which requires a painful installation). I agree that the thresholds are numerous, which makes tuning difficult.

About the OpenCV code: it is supposed to be LBD with LSD, but I got far worse results than with Lilian Zhang's code, whereas mine and Zhang's are close (since the detection differs, the results cannot be identical, but the matching part is copy-pasted code converted to the OpenCV library).

About https://github.com/kailigo/LineSegmentMatching: if you have already tested it on your dataset and it works, I think it would be the best solution. But beware, the usual datasets used in line matching papers are far easier than yours, so you need to test it first :)

About the file x_y_matches_line.txt: it is just a simple text file where:

  • the first line is the total number of lines in image x
  • each following line has the form i (space) j, with i the index of the line in image x and j the index of the matched line in image y
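A minimal reader for that format might look like the following. The function name and signature are my own; they are not taken from the LineSfM code:

```cpp
#include <sstream>
#include <utility>
#include <vector>

// Parse the x_y_matches_line.txt format described above:
//   line 1:            number of detected lines in image x
//   remaining lines:   "i j" pairs (line index in image x, matched index in image y)
// Hypothetical helper, written from the format description only.
std::vector<std::pair<int, int>> readLineMatches(std::istream& in, int& nLinesX) {
  std::vector<std::pair<int, int>> matches;
  in >> nLinesX;                       // first line: line count of image x
  int i, j;
  while (in >> i >> j)                 // then one "i j" pair per line
    matches.emplace_back(i, j);
  return matches;
}
```

With a file replicated this way, a third-party matcher's output can be dropped in front of the calibration stage.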

Best,

Yohann


haopo2005 commented on September 17, 2024

Thanks for your advice about opencv.
I'd like to simply exclude the irrelevant image pairs and keep a clean set of inlier line matches.
As for "validate the calibration hypothesis", I think it is the basic RANSAC pipeline, and you should have already implemented it in main_calibration.cpp, haven't you?
Currently, I need to work around the missing matching file caused by the failure to compute the principal adjacency matrix. That leaves me stuck with an index out of range at this line:
matches_lines.insert(PictureMatches(imPair, readMatches(dirPath, picName[i], picName[j], LINE)));


ySalaun commented on September 17, 2024

For the "validate calibration hypothesis" part, I haven't really implemented it. You can add to the pipeline a condition of the form:
if (finalNFA > 0) then reject the solution, otherwise keep it.
The 0 threshold is the one usually used in a contrario methods, but maybe it could be tuned.
To know whether this can work, just display the final NFA for every image pair and check that for bad pairs the NFA is above a given value, while for good pairs it is below it.
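That acceptance condition amounts to a one-line predicate. A sketch, with the threshold kept as a parameter since (as said above) 0 may need tuning; the function name is illustrative, not from the code base:

```cpp
// A contrario acceptance test: a log-NFA below the threshold (0 by
// convention, i.e. NFA < 1) means the configuration is unlikely to have
// occurred by chance, so the calibration is kept.
// Hypothetical helper; in LineSfM the value to inspect would be the final
// NFA returned by the hybrid RANSAC.
bool acceptCalibration(double logNFA, double threshold = 0.0) {
  return logNFA < threshold;
}
```

Logging `logNFA` for every image pair first, as suggested above, is what tells you whether a single threshold actually separates the good pairs from the bad ones.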

About your matching file error, I don't really understand what is happening.
Does the matching fail? Or the match reading?

Best,

Yohann


haopo2005 commented on September 17, 2024

Hi,
As for 'validate calibration hypothesis', I think the easiest way for me is to compare the number of inliers from the HAC_RANSAC.computeRelativePose stage against some kind of threshold.
From my understanding of your paper, the a contrario approach in RANSAC selects the scale with the lowest NFA, and the final NFA combines the NFAs of the coplanarity and trifocal constraints. The computeRelativePose function is too complicated; I can't follow the code alongside the paper.

Besides,
your paper says, 'Line-based calibration is thus prone to be less accurate in practice than point-based calibration, and even less when two lines are involved in a feature, as in line coplanarity'.
Does this mean that if there are enough feature points, I should prefer the feature points over the line features to compute the relative pose?


haopo2005 commented on September 17, 2024

I've tried different inlier thresholds and LBD thresholds. It is really difficult to get rid of the mismatch problem: there are always false-positive or false-negative matching pairs.
Maybe it is wrong to handle the irrelevant images with a static setup. Whatever algorithm is used, some matching points will naturally appear, so the choice of input images should probably be a dynamic and incremental pipeline.


ySalaun commented on September 17, 2024

Hi,

Sorry for the late reply, I was on holiday.

About the NFA threshold: just look at the variable minNFA in the computeRelativePose function in the hybrid_essential.cpp file. If it is higher in bad cases than in good cases, then you can use a threshold on this value to know whether the calibration went well or not.

About points being better than lines: what we observed is that in cases where many points are detected (e.g. > 1000), lines are not useful and can even decrease the accuracy (a bit). However, in your case, it seems that there are too few points to obtain good results with points only.

I agree that this issue is difficult to fix; another possibility would be a graph-based algorithm (of the kind usually used in SfM methods).
The idea is to accept every calibration result, then build a graph of relations between all cameras (mainly from the rotation information, because you don't have the translation scale) and detect the outliers. I didn't implement this part in the code, but it is available in openMVG, for example.
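One common way to find such outliers, sketched below under my own naming (this is not code from LineSfM or openMVG), is rotation cycle consistency: for any camera triplet, the chained relative rotations R_ij · R_jk · R_ki should be the identity, and a large residual angle flags a bad edge in the graph:

```cpp
#include <algorithm>
#include <array>
#include <cmath>

using Mat3 = std::array<std::array<double, 3>, 3>;

// Plain 3x3 matrix product (no external linear algebra library).
Mat3 mul(const Mat3& A, const Mat3& B) {
  Mat3 C{};  // zero-initialized
  for (int i = 0; i < 3; ++i)
    for (int j = 0; j < 3; ++j)
      for (int k = 0; k < 3; ++k)
        C[i][j] += A[i][k] * B[k][j];
  return C;
}

// Angle (radians) of the residual rotation R_ij * R_jk * R_ki, recovered
// from its trace via angle = acos((trace - 1) / 2). Zero means the three
// pairwise estimates are consistent; a large angle marks an outlier edge.
double cycleError(const Mat3& Rij, const Mat3& Rjk, const Mat3& Rki) {
  const Mat3 C = mul(mul(Rij, Rjk), Rki);
  const double trace = C[0][0] + C[1][1] + C[2][2];
  const double c = std::max(-1.0, std::min(1.0, (trace - 1.0) / 2.0));
  return std::acos(c);
}
```

Full rotation averaging with outlier rejection is more involved than this sketch; SfM libraries such as openMVG implement it end to end.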

Best,

Yohann

