visuallocalizationbenchmark's Issues

Using d2-net in COLMAP

I used my own data to test d2-net for reconstruction.
I followed the pipeline in modify_database_with_custom_features_and_matches.py on my own dataset; the match_list was generated using the original SIFT matcher in COLMAP. I get the following reconstruction results.

The original SIFT result (which shows a perfect circle):
[screenshot: SIFT reconstruction]

The d2-net result (which does not show a perfect circle):
[screenshot: d2-net reconstruction]

The d2-net matches seem quite noisy. I simply followed the pipeline using the match importer and ran the reconstruction.
[screenshot: d2-net matches]

I then tried applying RANSAC to the matches before importing them into COLMAP. However, the reconstruction result looks even worse:
[screenshot: reconstruction after RANSAC filtering]

All these methods share the same match_list, yet the original SIFT gives better poses than d2-net. Can you please give me some possible reasons for this?

The scale and orientation of the descriptors in CMU dataset

Hi, I found that the scale and orientation of the descriptors for the CMU database images are 1 and 0. Is the orientation of the descriptors provided with the CMU dataset the one reported by the keypoint detector? Are the features of the CMU database images upright features, just like in RobotCar-Seasons?

502 Bad Gateway Error

Hi again CarlToft,
I got a "502 Bad Gateway" error this time when opening the benchmark webpage.

Local feature challenge

The file image_pairs_to_match.txt only contains match pairs for the nighttime queries. If I want to run the local feature challenge on daytime images, how do I obtain a match_pair.txt for the daytime images?

Looking forward to your reply
Thank you!

Question regarding geometric verification

It seems to me that query images are included during geometric verification. Is there a performance gain by doing it this way, instead of doing it solely from reference images and registering query images afterwards?

Server Error when creating an account

Hi, I get a server error 500 when I try to create an account. Is it a bug, or have you stopped the online evaluation now that CVPR 2019 is over?

Thanks

Where can I get 3D-models of aachen_v_1_1?

I cannot find the 3D-model files aachen_v_1_1.nvm and database_intrinsics_v1_1.txt in the dataset.

https://data.ciirc.cvut.cz/public/projects/2020VisualLocalization/Aachen-Day-Night/

.
├── database_v1_1.db
├── image_pairs_to_match_v1_1.txt
├── images
│   └── images_upright
├── 3D-models
│   ├── aachen_v_1_1/
│   ├── aachen_v_1_1.nvm
│   └── database_intrinsics_v1_1.txt
└── queries/night_time_queries_with_intrinsics.txt

There are only cameras.bin, images.bin, points3D.bin, and project.ini in the aachen_v1_1.zip file.

How can I obtain the aachen_v_1_1.nvm and database_intrinsics_v1_1.txt? Or is there any method to convert the 3D-models format?
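For the conversion question, COLMAP's model_converter can export a binary model (cameras.bin, images.bin, points3D.bin) to NVM; a hedged sketch (paths are placeholders, and database_intrinsics_v1_1.txt would still need to be generated separately from the camera parameters):

    colmap model_converter \
        --input_path 3D-models/aachen_v_1_1 \
        --output_path aachen_v_1_1.nvm \
        --output_type NVM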

Why do the reference images have to be matched between themselves?

Hello, the 'image_pairs_to_match.txt' file contains pairs of query to reference (database) images, but also pairs of reference to reference images. When I remove the reference-to-reference pairs from the file, COLMAP fails to find 3D points and the poses of the query images. Why is that the case? Which pairs (query-to-reference or reference-to-reference) are used by COLMAP to find the 3D points? Shouldn't it be possible to triangulate 3D points by only having query-to-reference pairs of images?

Thank you and kind regards

Upload results

Hello, I would like to ask again about the results of step 5. After I converted images.bin to images.txt, I found that the results were all poses of db images, which should be wrong. I am not sure where the problem occurred, because I followed steps 1 through 5 exactly. What are the possible reasons the output contains only db images?

Looking forward to your reply, thank you!

RobotCar Seasons: inaccurate rig extrinsics

The transformations between the rear, right, and left cameras are provided in extrinsics/*_extrinsics.txt. They are not consistent with the relative transformations that can be estimated from the absolute poses of the reference sequence overcast-summer. This negatively impacts the accuracy of multi-camera localization on RobotCar.

Additionally, the relative transformations are not even constant in the reference model: the rig constraint was not enforced when constructing it. Nevertheless, re-estimating the relative transformations from the reference model (via least squares) gives better results, which shows that the provided extrinsics are very inaccurate.
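A minimal sketch of such a least-squares re-estimation, assuming 4x4 world-to-camera matrices for the rear and left cameras at the same timestamps (function and variable names are illustrative, not part of the dataset tooling):

    import numpy as np
    from scipy.spatial.transform import Rotation as R

    def relative_poses(T_rear, T_left):
        # per-timestamp relative pose mapping rear-camera coordinates
        # to left-camera coordinates: T_left_from_rear = T_left @ inv(T_rear)
        return [Tl @ np.linalg.inv(Tr) for Tr, Tl in zip(T_rear, T_left)]

    def average_pose(Ts):
        # least-squares average: mean translation plus chordal-mean rotation
        # via the dominant eigenvector of the summed quaternion outer products
        t = np.mean([T[:3, 3] for T in Ts], axis=0)
        quats = np.array([R.from_matrix(T[:3, :3]).as_quat() for T in Ts])
        quats[quats[:, 3] < 0] *= -1  # resolve the q / -q sign ambiguity
        q = np.linalg.eigh(quats.T @ quats)[1][:, -1]
        T_avg = np.eye(4)
        T_avg[:3, :3] = R.from_quat(q).as_matrix()
        T_avg[:3, 3] = t
        return T_avg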

About Aachen-day query

Hello.

Is it possible to ask for the pairs between the day queries and the db images?
I notice that in the image_pairs_to_match.txt file, each night query only corresponds to a few db images.
May I ask how you determine the pairs? (GPS? A global descriptor?)

By the way, is it too soon to ask for the RobotCar and CMU evaluation code as well?

Thank you very much


update:

For the first question: after re-reading the paper, I understand that no day-query pairs are provided because the task would be too simple if the candidates were given. The benchmark therefore encourages people to use global+local descriptors for the whole localization pipeline. Is this correct?

How to get the day query intrinsics list for Aachen Day-Night v1.1?

Hi @mihaidusmanu, thank you so much for the update and these great tools and datasets.
I have tried the Aachen Day-Night v1.1 and have seen results for both day and night from some benchmark methods on https://www.visuallocalization.net/benchmark/.

However, it seems that only the night queries' intrinsics are available in the package.
I downloaded aachen_v1_1.zip on the 27th of July.

Could you recommend how to get the day query intrinsics list for Aachen Day-Night v1.1?

I don't know how to get the list. Is it available at all?

"modify_database_with_custom_features_and_matches.py" with different dataset

Thanks for sharing your work!
I'm wondering if I can use modify_database_with_custom_features_and_matches.py with my own dataset.
I tested it with a few images (I used COLMAP to generate the database.db and wrote the match_list.txt like the sample visuallocalizationbenchmark/local_feature_evaluation/data/aachen-day-night/image_pairs_to_match.txt you provided), but I get the errors below.
Could you give me some suggestions on how to modify the code?
Connected to pydev debugger (build 183.4886.43)
Importing features...
Matching...
0it [00:00, ?it/s]
0%| | 0/142 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/home/cv-group/pycharm-community-2018.3.2/helpers/pydev/pydevd.py", line 1741, in <module>
main()
File "/home/cv-group/pycharm-community-2018.3.2/helpers/pydev/pydevd.py", line 1735, in main
globals = debugger.run(setup['file'], None, None, is_module)
File "/home/cv-group/pycharm-community-2018.3.2/helpers/pydev/pydevd.py", line 1135, in run
pydev_imports.execfile(file, globals, locals) # execute the script
File "/home/cv-group/pycharm-community-2018.3.2/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/home/cv-group/visuallocalizationbenchmark-master/local_feature_evaluation/modify_database_with_custom_features_and_matches.py", line 193, in <module>
match_features(images, paths, args)
File "/home/cv-group/visuallocalizationbenchmark-master/local_feature_evaluation/modify_database_with_custom_features_and_matches.py", line 125, in match_features
image_id1, image_id2 = images[image_name1], images[image_name2]
KeyError: '1.jpg'
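The KeyError suggests that the image names in match_list.txt (here 1.jpg) do not match the names registered in the COLMAP database, which may include subdirectory prefixes. A minimal sketch to inspect the registered names (database.db is a placeholder path):

    import sqlite3

    # list the image names stored in the COLMAP database so they can be
    # compared against the names used in match_list.txt
    connection = sqlite3.connect('database.db')
    for (name,) in connection.execute('SELECT name FROM images;'):
        print(name)
    connection.close()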

Does better performance at lower thresholds imply better performance at higher thresholds?

As per the website, the evaluations are performed at thresholds (0.25m, 2°) / (0.5m, 5°) / (5m, 10°) on the Aachen Day-Night dataset. Smaller thresholds like (0.25m, 2°) are stricter and harder to reach than larger thresholds like (5m, 10°). Based on this, one might expect that a method that performs better at smaller thresholds would also perform better at larger thresholds. However, some of the results reported on the benchmark suggest this is not true. What is a possible reason why a method could perform better at a smaller threshold yet worse at the larger thresholds?

Some examples from https://www.visuallocalization.net/benchmark/ for the Aachen Day-Night dataset on night images:

  1. d2-net-ydb (77.6 / 84.7 / 93.9) is better at smaller thresholds but not at larger thresholds relative to HF_SG_4096_nv_50_sp (67.3 / 80.6 / 96.9)
  2. attention 5K (71.4 / 83.7 / 91.8) vs. HF_SG_4096_nv_50_sp (67.3 / 80.6 / 96.9)
  3. SuperPoint (baseline) (73.5 / 79.6 / 88.8) vs. rootsift_upright_8k_seedmatcher_sink0.2_256_34 (68.4 / 82.7 / 96.9)

Thank you

Adapting the code to RobotCar

Hello,

I have tried to adapt the given code used for the Aachen dataset to the RobotCar dataset. For each query image, I find the 20 nearest reference images using NetVLAD, match the query image to these 20 reference images, and exhaustively match the 20 reference images between each other. For the matching, D2-net features were used.

Given the reference poses (/3D-models/all-merged/all.nvm), the camera intrinsics (/intrinsics), and an empty COLMAP database that I created on my own (consistent with the Aachen files), I was able to run the reconstruction_pipeline.py script. Everything seems to work until the point_triangulator step, which produces no output (so the subsequent calls to image_registrator and model_converter fail).

What could be the reason for point_triangulator to fail and produce no output model? The terminal shows no errors or warnings. Is it because of the intrinsics I used (as mentioned in issue #2)? Or because of the matching?
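For reference, a minimal sketch of the pair-list construction described above (names are illustrative; retrieved_names is assumed to come from the NetVLAD retrieval step):

    from itertools import combinations

    def build_match_list(query_name, retrieved_names):
        # pair the query with its top-20 retrieved reference images and
        # exhaustively pair those references among themselves
        pairs = [(query_name, ref) for ref in retrieved_names]
        pairs += list(combinations(retrieved_names, 2))
        return pairs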

Import SuperGlue feature matches

Since you already helped me once, I will ask another question here; not strictly an issue ;).
I was able to import D2-Net and R2D2 features into COLMAP using the provided scripts and then calculated matches using the given matching methods.

Could I also use the script to import SuperGlue inlier matches (https://github.com/magicleap/SuperGluePretrainedNetwork)?
The SuperGlue workflow directly provides matches from img_id1 to img_id2 in .npz format.
My idea is to extract every feature point in every image and write it to a .superglue file (in the given .npz format).

Since SuperGlue already provides inlier matches, is it also possible to import inlier_matches with modify_database_with_custom_features_and_matches.py?
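If the inlier matches are instead written straight into the COLMAP database, the pair_id convention has to be respected; a hedged sketch based on COLMAP's database schema (the helper mirrors image_ids_to_pair_id from COLMAP's scripts/python/database.py):

    import sqlite3
    import numpy as np

    def image_ids_to_pair_id(image_id1, image_id2):
        # COLMAP's convention: the smaller image id always comes first
        if image_id1 > image_id2:
            image_id1, image_id2 = image_id2, image_id1
        return image_id1 * 2147483647 + image_id2

    def import_matches(database_path, image_id1, image_id2, matches):
        # matches: (N, 2) array of feature indices, e.g. from a SuperGlue .npz
        connection = sqlite3.connect(database_path)
        connection.execute(
            'INSERT INTO matches VALUES (?, ?, ?, ?);',
            (image_ids_to_pair_id(image_id1, image_id2),
             matches.shape[0], matches.shape[1],
             np.asarray(matches, dtype=np.uint32).tobytes()))
        connection.commit()
        connection.close()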

Benchmark at different thresholds

The benchmark reports results at thresholds (0.25m, 2°) / (0.5m, 5°) / (5m, 10°); however, papers like R2D2 report at (0.5m, 2°) / (1m, 5°) / (5m, 10°). How do I obtain results at the same thresholds as in these papers?
Reference: Table 4 of https://arxiv.org/pdf/1906.06195.pdf

Also, there seems to be a mismatch between the results reported in the paper and the ones obtained from the benchmark. Any advice on the reason for the difference? After extracting the features with the R2D2 model (code: https://github.com/naver/r2d2/blob/master/extract.py), I run

python reconstruction_pipeline.py
                --dataset_path /local/aachen
                --colmap_path /local/colmap/build/src/exe
                --method_name r2d2

I then upload the resulting Aachen_eval_[r2d2].txt file to https://www.visuallocalization.net/submission/
and obtain [DAY] 0.0 / 0.0 / 0.0, [NIGHT] 67.3 / 81.6 / 93.9, while the paper reports 45.9 / 65.3 / 86.7.

Why not add the code keypoints[:, :2] += 0.5?

Hi,

I find this benchmark very useful for visual localization.
However, I notice that you do not add the line keypoints[:, :2] += 0.5
to shift the keypoint positions, as other scripts such as local-feature-refinement do.
Can you tell me when I need to add keypoints[:, :2] += 0.5?

Thanks a lot for your help.
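For context, a minimal sketch of the convention difference at play: COLMAP places the center of the top-left pixel at (0.5, 0.5), while many learned detectors emit coordinates with that center at (0, 0). The keypoints array below is a hypothetical example.

    import numpy as np

    # hypothetical keypoints in the "pixel center at (0, 0)" convention,
    # as produced by many deep feature extractors; columns are x, y
    keypoints = np.array([[10.0, 20.0], [31.0, 7.0]])

    # shift into COLMAP's convention, where the center of the
    # top-left pixel is at (0.5, 0.5)
    keypoints[:, :2] += 0.5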

About the training set of Aachen-day-night

Hello. I was at the CVPR workshop and I have some questions about the evaluation, so I want to ask the official benchmark manager.

I was reading the workshop paper "R2D2: Repeatable and Reliable Detector and Descriptor".

Is it fair to train on Aachen and also test on Aachen? It seems that the R2D2 results (Tab. 3 and Tab. 4 from https://arxiv.org/pdf/1906.06195.pdf) use Aachen for both training and testing. Even with style transfer, I think it would be fairer to use another dataset and apply style transfer to it, but definitely not Aachen.

It seems that only the results in Tab. 4 with the "W" option (W = web images + homographies) are suitable for testing on Aachen, and they are much lower.

RobotCar Seasons: Projecting the point cloud into the reference images

Problem: Using the camera intrinsics stored in the .out and .nvm files or provided in the intrinsics folder together with the provided poses for the overcast-reference images leads to slight shifts in the projected positions.

Reason: The reference poses for the overcast-reference images were created using SfM by refining the poses provided by GPS/INS. As part of the refinement process, we allowed that the intrinsics are also refined.

Fix: In order to get the most accurate projections, you should use the intrinsics provided in the COLMAP reconstructions under 3D-models/individual/colmap_reconstructions.
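A minimal pinhole sketch of such a projection, ignoring distortion (names are illustrative; note that NVM files store the camera center C, so the world-to-camera translation is t = -R C):

    import numpy as np

    def project(X, R, C, f, cx, cy):
        # X: 3D world point; R: world-to-camera rotation (3x3);
        # C: camera center; f, cx, cy: focal length and principal point
        x_cam = R @ (X - C)
        u = f * x_cam[0] / x_cam[2] + cx
        v = f * x_cam[1] / x_cam[2] + cy
        return u, v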

RobotCar dataset in visual place recognition

Apologies in advance if this question is a bit off-topic!

Given the different categories of the RobotCar dataset in the images folder (i.e., dawn, dusk, night, etc.), I would like to apply AlexNet as a holistic feature extractor to retrieve the top-5 database images for a given query.

Let's say the overcast-reference folder is my reference database, containing 6954 × 3 images.
Given a query, for example from the night folder containing 438 × 3 images, I do not know how the ground truth should be defined to determine correct matches.

How could one establish image-to-image correspondences from reference to query to calculate top-X matches?

To make my question clearer, here is an explanation for another dataset, called Nordland, which represents seasonal changes:

Given an image Q as query from season S1, there is a matched image R in the reference database in seasons S2, S3, S4.
Here is the ground-truth distance matrix:
[image: Nordland ground-truth distance matrix]

Is there such an explanation for the RobotCar dataset too?

Cheers,

Evaluation on RobotCar-Seasons: only “rear” matters?

Hi! Thanks for your efforts in hosting the benchmark.

I am trying to submit to the RobotCar-Seasons dataset, which consists of rear, left, and right images. I found that if I set all "rear" poses to an invalid value (e.g., 1 0 0 0 for rotation and 10000000 0 0 for translation), I get ALL 0s in the results (3 thresholds for day-all and night-all).

I am wondering whether only the "rear" images are used, or whether the rear is essential for the evaluation.

Of course, it is possible that my results for "left" and "right" are all incorrect.

So could you try to test the baseline method without the "rear" (setting the rear lines to an invalid value)? Or could anyone let me try their method?

Thank you!

RobotCar Seasons image format

The images in the 'images/' folder are named 'xxx.jpg', but the image names in the xxx.nvm files in 3D-models/all-merged and 3D-models/individual are 'xxx.png'. One needs to convert the 'xxx.jpg' files in 'images/' to 'xxx.png'.
Thank you for your nice work!
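A minimal conversion sketch using Pillow, assuming the layout above (this re-encodes the files; if your tooling only compares names, renaming may suffice):

    from pathlib import Path
    from PIL import Image

    # convert every .jpg under images/ into a .png with the same stem,
    # so the file names match those referenced by the NVM models
    for jpg in Path('images').rglob('*.jpg'):
        Image.open(jpg).save(jpg.with_suffix('.png'))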

The uploaded results

Hello. I want to ask about the uploaded results on
https://www.visuallocalization.net/benchmark/
For example, the Aachen Day-Night dataset has day and night results,
each with 3 different values.
However, the website does not specify which evaluation metric this is.
I had to read the d2-net paper to assume the values are
the percentage of correctly localized queries at the thresholds (0.5m, 2°) / (1.0m, 5°) / (5.0m, 10°).
Is this correct? I suggest that the website directly specify the metric. Thanks.

What the output evaluation results represent

IMG_20161227_173116.jpg 0.0676863 0.83592 -0.0595648 -0.541394 -329.917 120.202 655.417
IMG_20161227_191355.jpg 0.0435276 0.328245 0.0716575 0.940864 614.502 -91.751 -391.97
IMG_20161227_191118.jpg 0.0492746 0.572585 0.0497108 0.816852 346.767 -100.862 -638.47...
In Aachen_eval_[].txt, what is the meaning of each column? How can I get results similar to d2-net's?
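For reference, the benchmark's submission format is one image per line: file name, rotation as a quaternion (qw qx qy qz), and camera translation (tx ty tz), mapping world to camera coordinates. A minimal parsing sketch (the file name is a placeholder):

    import numpy as np

    poses = {}
    with open('Aachen_eval_d2-net.txt') as f:
        for line in f:
            name, *values = line.split()
            q = np.array(values[:4], dtype=float)  # qw, qx, qy, qz
            t = np.array(values[4:], dtype=float)  # tx, ty, tz
            poses[name] = (q, t)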

Obtaining curves from original paper

Hi, first of all thanks for the amazing code and for providing the evaluation script!
I noticed that in the original D2-Net paper (https://arxiv.org/pdf/1905.03561.pdf), Figure 5 contains more extensive evaluations on Aachen Day-Night.
Is there a way one could obtain these curves?
I feel that this could be useful for a more comprehensive evaluation.
Cheers,

false extrinsics in aachen_cvpr2018_db.nvm

Hi

Image db/1228.jpg seems to have false extrinsics inside the aachen_cvpr2018_db.nvm.

Currently, the line (name, _, qw, qx, qy, qz, cx, cy, cz, _, _) in the .nvm file says:

db/1228.jpg 1603.26000000 -0.026940300000 -0.225850000000 0.104937000000 0.968119000000 782.070000000000 -8.563600000000 -276.338000000000 -0.255289000000 0

[image: db/1228.jpg, showing the false extrinsics]

sqlite3.IntegrityError: UNIQUE constraint failed: matches.pair_id

Hello,
during the 4th step of localization, I met an error like this:

python modify_database_with_custom_features_and_matches.py --dataset_path /data/d2-net/aachen --colmap_path /home/yang/Documents/Benchmark/colmap/build/src/exe --method_name d2-net --database_name d2-net.db --image_path images/images_upright/ --match_list retrieval_list.txt --matching_only True
Matching...
0%| | 0/1957 [00:01<?, ?it/s]
Traceback (most recent call last):
File "modify_database_with_custom_features_and_matches.py", line 189, in <module>
match_features(images, paths, args)
File "modify_database_with_custom_features_and_matches.py", line 137, in match_features
(image_pair_id, matches.shape[0], matches.shape[1], matches_str))
sqlite3.IntegrityError: UNIQUE constraint failed: matches.pair_id

The reconstruction pipeline and the first 3 steps are OK. I tested with image_pairs_to_match.txt (including ref-ref and query-ref pairs) and retrieval_list.txt (only query-ref pairs), and it seems the image_pair_id is not unique. Do you know how to solve this problem? Thank you.
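One hypothetical workaround, assuming the error comes from re-running the matching step on a database that already contains matches: clear the match tables before importing again (d2-net.db is the database named in the command above).

    import sqlite3

    # remove previously imported matches so that the UNIQUE constraint
    # on matches.pair_id is not violated on the next import
    connection = sqlite3.connect('d2-net.db')
    connection.execute('DELETE FROM matches;')
    connection.execute('DELETE FROM two_view_geometries;')
    connection.commit()
    connection.close()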

need more results

Hi @tsattler, on the website one only gets results at (0.5m, 2°), (1m, 5°), and (5m, 10°). How can I get the per-method feature counts and the (10m, 25°) threshold from the d2-net paper? For example:

Correctly localized queries (%)

Method                  # Features   0.5m, 2°   1.0m, 5°   5.0m, 10°   10m, 25°
Upright RootSIFT [29]   11.3K        36.7       54.1       72.5        81.6
...

500 Internal Server Error (sign-up)

I'm getting "500 Internal Server Error" in sign-up.
When I retried with the same content, it said "A user with that username already exists." but I can't log in with that username and password.

RobotCar Seasons: inconsistent naming of queries

In RobotCar Seasons, the database and query images have the extension .jpg but are named .png in the NVM and COLMAP models. In the submitted results, the evaluation server expects the queries to end with .jpg for RobotCar v1 but with .png for RobotCar v2. This difference is not documented.

Aachen model in COLMAP format

Hi! I was wondering if there is any future plan for releasing the 3D model from the Aachen Day-Night dataset in COLMAP binary or text formats.

500 Internal Server Error

I got "500 Internal Server Error" when I submit the results to the evaluation server.
Is there something wrong? Looking forward to your reply!

Pose Overlapping in Aachen day-night dataset

Hello,
I did some tests to calculate the intersection between two cameras' frustums, in order to define their 'spatial similarity'. I found that in the db dataset (4328 images from aachen_cvpr2018_db.nvm), some images have very close poses but are in different locations (after an appearance check). For example: db/1045.jpg & db/2506.jpg, db/1135.jpg & db/3355.jpg, ...
I am thinking that maybe there are several subsets (sub-models) of images in this dataset, and some of the db images in different subsets have overlapping absolute poses. Is that right?
So if we predict the absolute pose for one query image, it could be referenced to many different subsets?
Thank you in advance.

ORB descriptor

The type of ORB descriptors is CV_8U, as ORB is a binary descriptor.
This is closely related to the type of norm used for matching descriptors: NORM_HAMMING and its derivatives for binary descriptors.

In matchers.py, I see L2-normalized descriptors, so does that mean I can't use ORB descriptors for this experiment?

Looking forward to your reply!
Thank you!
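For context, a minimal OpenCV sketch of Hamming-distance matching for binary ORB descriptors, which an L2-based matcher would not handle directly (image paths are placeholders):

    import cv2

    img1 = cv2.imread('img1.jpg', cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread('img2.jpg', cv2.IMREAD_GRAYSCALE)

    # detect ORB keypoints and compute binary (CV_8U) descriptors
    orb = cv2.ORB_create(4000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # binary descriptors are matched with the Hamming norm, not L2
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)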

[RobotCar seasons V2] test image format

Hello. I have downloaded the RobotCar Seasons v2 dataset from https://www.visuallocalization.net/. The test image filenames provided in robotcar_v2_test.txt are in .png format, while all images in the images/ folder are in .jpg format. Besides, the conditions are not specified in the test image filenames. Is this a mistake, or are there additional test image packages?

Looking forward to your reply, thank you!

reconstruction pipeline issues

I have extracted R2D2 features and was going to run reconstruction_pipeline.py, but encountered the following error. Could you please take a look? I don't know why the number of triangulated points is 0 and why nothing was generated in final_txt_model_path.

==============================================================================
Triangulating image #4479 (4327)
==============================================================================
                                                    
  => Image sees 0 / 0 points       
  => Triangulated 0 points                          
                                                                                                                                                                                                                   
==============================================================================
Retriangulation                                                                                          
==============================================================================
                                                                                                         
  => Completed observations: 0                                                                           
  => Merged observations: 0                                                                              
                                                    
==============================================================================
Bundle adjustment                                   
==============================================================================
                                                    
F0723 07:48:51.958273 2665382 colmap.cc:1573] Check failed: bundle_adjuster.Solve(&reconstruction)       
*** Check failure stack trace: ***                                                                                                                                                                                 
    @     0x7f722520a50d  google::LogMessage::Fail()                                                     
    @     0x7f722520c94c  google::LogMessage::SendToLog()                                                                                                                                                          
    @     0x7f722520a040  google::LogMessage::Flush()                                                                                                                                                              
    @     0x7f722520cea9  google::LogMessageFatal::~LogMessageFatal()         
    @     0x561c0f2ddf00  RunPointTriangulator()
    @     0x561c0f2d2b0e  main
    @     0x7f72239aad0a  __libc_start_main
    @     0x561c0f2d6b8a  _start
    @              (nil)  (unknown)
 
==============================================================================
Loading database                                    
==============================================================================
 
Loading cameras... 5401 in 0.004s
Loading matches... 0 in 0.000s
Loading images... 5401 in 0.005s (connected 0)
Building correspondence graph... in 0.000s (ignored 0)
 
Elapsed time: 0.000 [minutes]
 
F0723 07:48:52.029235 2665383 reconstruction.cc:809] cameras, images, points3D files do not exist at /usr/local/google/home/xiaotaihong/datasets/Aachen-Day-Night/sparse-r2d2_mega_consistent_1_epoch_37_0.7-database
*** Check failure stack trace: ***
    @     0x7fc8d422650d  google::LogMessage::Fail() 
    @     0x7fc8d422894c  google::LogMessage::SendToLog()
    @     0x7fc8d4226040  google::LogMessage::Flush()
    @     0x7fc8d4228ea9  google::LogMessageFatal::~LogMessageFatal()
    @     0x55aac5d4fd1e  colmap::Reconstruction::Read()
    @     0x55aac5c8b9dd  RunImageRegistrator()
    @     0x55aac5c81b0e  main
    @     0x7fc8d29c6d0a  __libc_start_main
    @     0x55aac5c85b8a  _start
    @              (nil)  (unknown)
Recovering query poses...                           
F0723 07:48:52.074733 2665384 reconstruction.cc:809] cameras, images, points3D files do not exist at /usr/local/google/home/xiaotaihong/datasets/Aachen-Day-Night/sparse-r2d2_mega_consistent_1_epoch_37_0.7-final
*** Check failure stack trace: ***
    @     0x7f66545ae50d  google::LogMessage::Fail() 
    @     0x7f66545b094c  google::LogMessage::SendToLog()
    @     0x7f66545ae040  google::LogMessage::Flush()
    @     0x7f66545b0ea9  google::LogMessageFatal::~LogMessageFatal()
    @     0x56280aa63d1e  colmap::Reconstruction::Read()
    @     0x56280a99e15c  RunModelConverter()
    @     0x56280a995b0e  main
    @     0x7f6652d4ed0a  __libc_start_main
    @     0x56280a999b8a  _start
    @              (nil)  (unknown)
Traceback (most recent call last):
  File "/usr/local/google/home/xiaotaihong/work/keypoint-eval/visuallocalizationbenchmark/local_feature_evaluation/reconstruction_pipeline.py", line 340, in <module>
    recover_query_poses(paths, args)
  File "/usr/local/google/home/xiaotaihong/work/keypoint-eval/visuallocalizationbenchmark/local_feature_evaluation/reconstruction_pipeline.py", line 283, in recover_query_poses
   with open(os.path.join(paths.final_txt_model_path, 'images.txt')) as f:

Results of My Submissions

Hello, I submitted the results to the evaluation server yesterday, but the status is still “Scheduled for processing” now. Is there something wrong? Looking forward to your reply!

Reconstruction pipeline issue

Hi,
I met a similar problem to issue #48.

part of the output:

==============================================================================
Triangulating image #4479 (4327)
==============================================================================

  => Image sees 0 / 0 points
  => Triangulated 0 points

==============================================================================
Retriangulation
==============================================================================

  => Completed observations: 0
  => Merged observations: 0

==============================================================================
Bundle adjustment
==============================================================================

F1014 12:30:37.116462 3119869 colmap.cc:1598] Check failed: bundle_adjuster.Solve(&reconstruction) 
*** Check failure stack trace: ***
    @     0x7f42e487e1c3  google::LogMessage::Fail()
    @     0x7f42e488325b  google::LogMessage::SendToLog()
    @     0x7f42e487debf  google::LogMessage::Flush()
    @     0x7f42e487e6ef  google::LogMessageFatal::~LogMessageFatal()
    @     0x5596554a035d  RunPointTriangulator()
    @     0x559655490e4a  main
    @     0x7f42e28240b3  __libc_start_main
    @     0x55965549aeee  _start
Aborted (core dumped)

What I have done:

1. Checked the matches in the database: I checked the number of match pairs and the "data" column of the matches table; they are correct.

2. Tested with more features: I tried 8k, 16k, and 20k features per image; none of them works with pipeline v1.

3. Reconstructed with pipeline v1.1: running pipeline v1.1 with the same custom features (8k, 16k, and 20k) works; it reconstructs the scene and recovers the query poses.

Local feature challenge: unknown queries in Aachen database_v1_1.db

Problem: Running reconstruction_pipeline_aachen_v1_1.py crashes since it attempts to find missing features for the following new queries:

query/night/nexus5x_additional_night/IMG_20170702_005615.jpg
query/night/nexus5x_additional_night/IMG_20170702_005557.jpg
query/night/nexus5x_additional_night/IMG_20170702_003519.jpg
query/night/nexus5x_additional_night/IMG_20170702_005427.jpg
query/night/nexus5x_additional_night/IMG_20170702_004734.jpg

Reason: The new database for Aachen v1.1 has 196 query images, i.e., 98 new images, while aachen_v1_1.zip only contains 95 new queries in images_upright/query/night/nexus5x_additional_night.

Quick fix: Check that an image actually exists before importing its features.

Better fix: Cleanup database_v1_1.db.
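A minimal sketch of the quick fix, assuming images is the name-to-image-id mapping read from the database, as in the pipeline scripts (the function name is illustrative):

    import os

    def filter_existing_images(images, image_dir):
        # drop database images whose file does not exist on disk,
        # so feature import does not crash on the missing queries
        return {name: image_id for name, image_id in images.items()
                if os.path.isfile(os.path.join(image_dir, name))}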

About the Aachen v1.1 and RobotCar v2 datasets

Hi! Thanks for making this wonderful benchmark.
I noticed that evaluation on Aachen v1.1 and RobotCar v2 is available. However, I can't find the download links for these two datasets. Are they not ready yet? I'm confused; hope you can help me. Thanks!

The final results

After I have completed step 5, how do I get the percentage of correctly localized queries at the different thresholds (0.5m, 2°) / (1.0m, 5°) / (5.0m, 10°)?
Looking forward to your reply.
Thank you!
