
image-matching-toolbox's Introduction

A Toolbox for Image Feature Matching and Evaluations

In this repository, we provide easy interfaces for several existing SotA methods to match image feature correspondences between image pairs. We also provide scripts to evaluate their predicted correspondences on common benchmarks for the tasks of image matching, homography estimation, and visual localization.

TODOs & Updates

  • Add LoFTR method (2021-7-8)
  • Add simple match visualization (2021-7-8)
  • Use immatch as a python lib under develop mode. Check install.md for details. (2021-7-22)
  • Add SIFT method (opencv version) (2021-7-25)
  • Add script to eval on RobotCar using HLoc (2021-7-31)
  • Add Dog-AffNet-Hardnet (Contributed by Dmytro Mishkin 👏, 2021-8-29)
  • Add AUC metric and opencv solver for Homography estimation on HPatches (#20, 2022-1-12)
  • Add COTR (a naive wrapper without parameter tuning, 2022-3-29)
  • Add Aspanformer (2023-6-2)
  • Add Megadepth relative pose estimation following LoFTR & Aspanformer (2023-6-2)
  • Add ScanNet relative pose estimation following LoFTR & Aspanformer (2024-1-11)
  • Add support to eval on Image Matching Challenge
  • Add scripts to eval on SimLoc challenge.

Comments from QJ: Currently I am quite busy with my study & work. So it will take some time before I release the next two TODOs.

Supported Methods & Evaluations

Sparse Keypoint-based Matching:

Semi-dense Matching:

Supported Evaluations:

  • Image feature matching on HPatches
  • Homography estimation on HPatches
  • Visual localization benchmarks:
    • InLoc
    • Aachen (original + v1.1)
    • RobotCar Seasons (v1 + v2)

Repository Overview

The repository is structured as follows:

  • configs/: Each method has its own yaml (.yml) file to configure its testing parameters.
  • data/: All datasets should be placed under this folder following our instructions described in Data Preparation.
  • immatch/: It contains implementations of method wrappers and evaluation interfaces.
  • outputs/: All evaluation results are supposed to be saved here. One folder per benchmark.
  • pretrained/: It contains the pretrained models of the supported methods.
  • third_party/: The original implementations of the supported methods, included as git submodules.
  • notebooks/: It contains jupyter notebooks with example code showing how to quickly access the methods implemented in this repo.
  • docs/: It contains separate documentation about installation and evaluation, to keep the top level of this repo clean :).

👉Refer to install.md for details about installation.

👉Refer to evaluation.md for details about evaluation on benchmarks.

Example Code for Quick Testing

To use a specific method to perform the matching task, you simply need to do:

  • Initialize a matcher using its config file. See examples of config yaml files under the configs folder, e.g., patch2pix.yml. Each config file contains multiple sections, and each section corresponds to one setting. Here, we use the setting tagged 'example' for testing on example image pairs (a sketch of such a config file is shown after the code below).
  • Perform matching
import immatch
import yaml
from immatch.utils import plot_matches

# Initialize model
with open('configs/patch2pix.yml', 'r') as f:
    args = yaml.load(f, Loader=yaml.FullLoader)['example']
model = immatch.__dict__[args['class']](args)
matcher = lambda im1, im2: model.match_pairs(im1, im2)

# Specify the image pair
im1 = 'third_party/patch2pix/examples/images/pair_2/1.jpg'
im2 = 'third_party/patch2pix/examples/images/pair_2/2.jpg'

# Match and visualize
matches, _, _, _ = matcher(im1, im2)    
plot_matches(im1, im2, matches, radius=2, lines=True)
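
For reference, a config file roughly follows the structure below. This is only a sketch assembled from the default Patch2Pix values printed elsewhere in this document (class, ckpt, ksize, imsize, match_threshold); the actual patch2pix.yml in the repository may contain different sections and values.

default: &default
    class: 'Patch2Pix'
    ckpt: 'pretrained/patch2pix/patch2pix_pretrained.pth'
    ksize: 2
    imsize: 1024
    match_threshold: 0.25
example:
    <<: *default
    match_threshold: 0.5   # value for the 'example' setting is assumed here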

[figure: example matches]

👉 Try out the code using the example notebook.

Notice

  • This repository is expected to be actively maintained (at least before I graduate🤣🤣) and gradually (slowly) grow for new features of interest.
  • Suggestions regarding how to improve this repo, such as adding new SotA image matching methods or new benchmark evaluations, are welcome 👏.

Regarding Patch2Pix

With this repository, one can reproduce the tables reported in our paper accepted at CVPR 2021: Patch2Pix: Epipolar-Guided Pixel-Level Correspondences [pdf]. Check our patch2pix repository for its training code.

Disclaimer

  • The supported methods and evaluations are not implemented from scratch by us. Instead, we modularize their original code to define unified interfaces.
  • If you are using the results of a method, remember to cite the corresponding paper.
  • All credit for the implementation of those methods belongs to their original authors.

image-matching-toolbox's People

Contributors

aliyoussef97, ducha-aiki, georg-bn, grumpyzhou, marisancans, parskatt, sergioragostinho, tsattler


image-matching-toolbox's Issues

Unable to execute Quick Testing code

I am unable to execute the quick testing code shown in readme.md.
I imported all the files as a PyCharm project, installed all the required packages using Conda, and created the following file to execute the image-matching-toolbox, but I get this error (in addition, there is NO file plot_matches.py inside immatch.utils):
[screenshot]

Awaiting your answer, thank you in advance.

FileNotFoundError: no 'cameras.bin' when testing LoFTR on Aachen

Hi, I used your default code and yml to test LoFTR on the Aachen benchmark as follows:
python -m immatch.eval_aachen --gpu 1 --config 'loftr' --colmap /usr/bin/colmap --benchmark_name 'aachen'
After running for a dozen hours, I got this error:

runpy.py 194 _run_module_as_main
return _run_code(code, main_globals, None,

runpy.py 87 _run_code
exec(code, run_globals)

eval_aachen.py 84 <module>
eval_aachen(args)

eval_aachen.py 56 eval_aachen
reconstruct_database_pairs(args)

localize_sfm_helper.py 107 reconstruct_database_pairs
triangulation.main(

triangulation.py 180 main
image_ids = create_db_from_model(empty_sfm_model, database)

triangulation.py 22 create_db_from_model
cameras = read_cameras_binary(str(empty_model / 'cameras.bin'))

read_write_model.py 135 read_cameras_binary
with open(path_to_model_file, "rb") as fid:

FileNotFoundError: [Errno 2] No such file or directory: 'outputs/aachen/empty_sfm/cameras.bin'

the specific config is:

aachen:
    <<: *default
    match_threshold: 0.0 # Save all matches
    pairs: ['pairs-db-covis20.txt', 'pairs-query-netvlad50.txt']
    npts: 4096
    imsize: 1024
    qt_dthres: 4
    qt_psize: 48
    qt_unique: True
    ransac_thres: [20]
    sc_thres: 0.2 # Filtering during quantization
    covis_cluster: True

I have no idea what's going on, can you give me some advice? Thanks!
@GrumpyZhou

LoFTR Update

Hi, thanks for the repo!
Could you update the LoFTR git submodule?
Thanks!

A request for help when testing HPatches: `ValueError: zero-size array to reduction operation minimum which has no identity`

Hi!

Thank you so much for releasing this toolbox and for the awesome paper on Patch2Pix.
Here, I would like to kindly ask for your help.

I have tried to run your example for Patch2Pix and another one with SuperPoint + NCNet ...

The following code lines are in ./test_patch3pix.sh...

python -m immatch.eval_hpatches --gpu 0 \
    --config 'patch2pix' --match_thres 0.25 0.5 0.9  \
    --task 'both' --save_npy \
    --root_dir . 

However, I keep getting the following error: ValueError: zero-size array to reduction operation minimum which has no identity. I am not sure how to fix it.
Could you please give me some guidance?

Below here is the full error report.

I have also attached a screen capture at the bottom to confirm.

(immatch) gabby-suwichaya@gabby-suwichaya:/mnt/HDD4TB1/image-matching-toolbox$ ./test_patch3pix.sh Can not import sparsencnet

>>>> Method=Patch2Pix Default config: {'class': 'Patch2Pix', 'ckpt': 'pretrained/patch2pix/patch2pix_pretrained.pth', 'ksize': 2, 'imsize': 1024, 'match_threshold': 0.25} Thres: [0.25, 0.5, 0.9]

Load model method:patch2pix 
Ckpt:pretrained/patch2pix/patch2pix_pretrained.pth
Initialize Patch2Pix: backbone=ResNet34 cstride=True upsample=8
Init regressor Namespace(conv_dims=[512, 512], conv_kers=[3, 3], conv_strs=[2, 1], fc_dims=[512, 256], feat_comb='pre', feat_dim=259, panc=1, pshift=8, psize=[16, 16], shared=False)
FeatRegressNet:  feat_comb:pre psize:16 out:5 feat_dim:518 conv_kers:[3, 3] conv_dims:[512, 512] conv_str:[2, 1] 
FeatRegressNet:  feat_comb:pre psize:16 out:5 feat_dim:518 conv_kers:[3, 3] conv_dims:[512, 512] conv_str:[2, 1] 
Xavier initialize all model parameters
Initialize ResNet using pretrained model from https://download.pytorch.org/models/resnet34-333f7ec4.pth
Reload all model parameters from weights dict
Initialize Patch2Pix
Matching thres: 0.25  Save to: ./outputs/hpatches/cache/Patch2Pix.im1024.m0.25.npy

>>Eval hpatches: method=Patch2Pix rthres=2 thres=[1, 3, 5, 10] 
Save results to ./outputs/hpatches/cache/Patch2Pix.im1024.m0.25.npy
/home/gabby-suwichaya/anaconda3/envs/immatch/lib/python3.7/site-packages/numpy/core/fromnumeric.py:3441: RuntimeWarning: Mean of empty slice.
  out=out, **kwargs)
/home/gabby-suwichaya/anaconda3/envs/immatch/lib/python3.7/site-packages/numpy/core/_methods.py:189: RuntimeWarning: invalid value encountered in double_scalars
  ret = ret.dtype.type(ret / rcount)
Traceback (most recent call last):
  File "/home/gabby-suwichaya/anaconda3/envs/immatch/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/gabby-suwichaya/anaconda3/envs/immatch/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/mnt/HDD4TB1/image-matching-toolbox/immatch/eval_hpatches.py", line 98, in <module>
    print_out=args.print_out
  File "/mnt/HDD4TB1/image-matching-toolbox/immatch/eval_hpatches.py", line 74, in eval_hpatches
    save_npy=result_npy
  File "/mnt/HDD4TB1/image-matching-toolbox/immatch/utils/hpatches_helper.py", line 323, in eval_hpatches
    lprint_(eval_summary(results, thres, save_npy=save_npy))
  File "/mnt/HDD4TB1/image-matching-toolbox/immatch/utils/hpatches_helper.py", line 28, in eval_summary
    summary += '# Features: mean={:.0f} min={:d} max={:d}\n'.format(np.mean(n_feats), np.min(n_feats), np.max(n_feats))
  File "<__array_function__ internals>", line 6, in amin
  File "/home/gabby-suwichaya/anaconda3/envs/immatch/lib/python3.7/site-packages/numpy/core/fromnumeric.py", line 2880, in amin
    keepdims=keepdims, initial=initial, where=where)
  File "/home/gabby-suwichaya/anaconda3/envs/immatch/lib/python3.7/site-packages/numpy/core/fromnumeric.py", line 86, in _wrapreduction
    return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
ValueError: zero-size array to reduction operation minimum which has no identity

[screenshot: demo_problem]

Test LoFTR on Aachen

Hi! I'm trying to use your great work to eval LoFTR on Aachen.
But it seems like a huge job; my running state is shown below, and it doesn't seem to use the GPU....

100%|██████████| 131615/131615 [00:01<00:00, 99454.57it/s]
0%| | 0/102817 [00:00<?, ?it/s]
Load match file, existing matches 0
Start matching, total 102817 pairs...
0%| | 3/102817 [01:27<831:08:45, 29.10s/it]

Is this a normal phenomenon? If not, could you tell me how to solve it?

My args are:

--gpu 0 --config loftr --colmap F:/work/COLMAP-3.8-windows-cuda/COLMAP.bat --benchmark_name aachen

My config is as follows; I only changed the path of the ckpt.

default: &default
    class: 'LoFTR'
    ckpt: 'E:/work/image-matching-toolbox-main/pretrained/loftr/myout0.ckpt'
    match_threshold: 0.2
    imsize: -1
    no_match_upscale: False

aachen:
    <<: *default
    match_threshold: 0.0 # Save all matches
    pairs: ['pairs-db-covis20.txt', 'pairs-query-netvlad50.txt']
    npts: 4096
    imsize: 1024
    qt_dthres: 4
    qt_psize: 48
    qt_unique: True
    ransac_thres: [20]
    sc_thres: 0.2 # Filtering during quantization
    covis_cluster: True

SIFT+SuperGlue

Hi, thank you for coming up with this great project! I have been working with it recently, and SuperGlue really feels like the best matching method at present. I would like to ask if you plan to release the combination of SIFT + SuperGlue? Thanks!

Feature Request: Add homography AUC

In this toolbox, homographies on HPatches are evaluated with accuracy, while recent papers such as LoFTR and SuperGlue both use AUC, which makes comparisons difficult.

It further complicates matters that pydegensac is used here, while the OpenCV solver is used in most papers.
Adding the options of calculating AUC and using OpenCV would make comparison easier.
See e.g.
https://github.com/PruneTruong/DenseMatching/blob/40c29a6b5c35e86b9509e65ab0cd12553d998e5f/validation/utils.py#L209
for an AUC implementation.
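
For illustration, the AUC over error thresholds is usually computed from the per-pair errors roughly as follows. This is a sketch of the common recipe used in SuperGlue/LoFTR-style evaluations, not the code that was added to this toolbox, and error_auc is just a placeholder name.

import numpy as np

def error_auc(errors, thresholds=(1, 3, 5, 10)):
    # Sort the per-pair errors and build the cumulative recall curve
    # (fraction of pairs whose error is below each value).
    errors = np.sort(np.asarray(errors, dtype=float))
    recall = (np.arange(len(errors)) + 1) / len(errors)
    aucs = []
    for thr in thresholds:
        last = np.searchsorted(errors, thr)
        x = np.r_[0.0, errors[:last], thr]
        y = np.r_[0.0, recall[:last], recall[last - 1] if last > 0 else 0.0]
        # Integrate recall over [0, thr] and normalize, so a perfect method scores 1.
        aucs.append(np.trapz(y, x) / thr)
    return aucs

# e.g. error_auc([0.5, 2.0, 7.0]) -> AUC values at 1, 3, 5 and 10 px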

LoFTR has a bug in position_encoding.py

Wrong:
div_term = torch.exp(torch.arange(0, d_model//2, 2).float() * (-math.log(10000.0) / d_model//2))
Correct:
div_term = torch.exp(torch.arange(0, d_model//2, 2).float() * (-math.log(10000.0) / (d_model//2)))

This seems to be fixed in the original repo.
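
The difference comes from Python operator precedence: / and // bind equally and associate left to right, so the first form computes (-log(1e4) / d_model) // 2 instead of dividing by d_model//2. A quick check (d_model = 256 is assumed here purely for illustration):

import math

d_model = 256  # assumed value, for illustration only
wrong = -math.log(10000.0) / d_model // 2    # parsed as ((-log(1e4) / d_model) // 2) == -1.0
right = -math.log(10000.0) / (d_model // 2)  # ~= -0.072
print(wrong, right)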

git@github.com: Permission denied (publickey).

Thanks for this great project. According to the installation doc, the toolbox should be installed using SSH, which is impossible for users who cannot add an SSH key for this project, and of course makes no sense to require. I have checked the .gitmodules file:
[submodule "third_party/SuperGluePretrainedNetwork"] path = third_party/superglue url = [email protected]:magicleap/SuperGluePretrainedNetwork.git [submodule "third_party/caps"] path = third_party/caps url = [email protected]:GrumpyZhou/caps.git [submodule "third_party/d2-net"] path = third_party/d2net url = [email protected]:mihaidusmanu/d2-net.git [submodule "third_party/r2d2"] path = third_party/r2d2 url = https://github.com/naver/r2d2 ignore = untracked [submodule "third_party/sparse-ncnet"] path = third_party/sparsencnet url = [email protected]:ignacio-rocco/sparse-ncnet.git [submodule "third_party/hloc"] path = third_party/hloc url = [email protected]:GrumpyZhou/Hierarchical-Localization.git branch = extend_base_dev
Would you change the URLs to https, so we can test and use this great project?
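
As a side note, a common git-level workaround (an assumption on our side, not something documented in this repository) is to rewrite SSH URLs to HTTPS with git config --global url."https://github.com/".insteadOf "git@github.com:" and then re-run git submodule update --init --recursive, which clones the submodules without requiring an SSH key.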

How do I add my own methods to test?

I copied the LoFTR code and added the required files, but when running, my code failed with:
KeyError: 'block_type'
self.block_type = config['block_type'] is the argument in my code.
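
For context, the toolbox instantiates a method by class name from the yaml config (immatch.__dict__[args['class']](args)) and then calls match_pairs on two image paths, as in the quick-test example near the top. A new method is therefore usually exposed through a small wrapper along these lines; this is only a sketch, MyMatcher is a hypothetical name, and the exact base class and registration mechanism in immatch may differ. The KeyError above simply means the 'block_type' key is missing from the yaml section being loaded.

import numpy as np

class MyMatcher:
    def __init__(self, args):
        # Every key read here must exist in the loaded yaml section (or be read
        # with .get and a default); reading a missing key raises exactly the
        # KeyError reported above.
        self.block_type = args.get('block_type', 'default')
        self.match_threshold = args.get('match_threshold', 0.0)
        # ... load the actual model / weights here ...

    def match_pairs(self, im1_path, im2_path):
        # Run the method on the two images and return (matches, kpts1, kpts2, scores),
        # where matches is an (N, 4) array of x1, y1, x2, y2 correspondences.
        matches = np.zeros((0, 4))
        kpts1, kpts2 = matches[:, :2], matches[:, 2:4]
        scores = np.ones(len(matches))
        return matches, kpts1, kpts2, scores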

COTR no matches found in hpatches test

I executed the command according to evaluation.md:

python -m immatch.eval_hpatches --gpu 0 \ 
    --config  'cotr' \
    --task 'both' --save_npy \
    --root_dir .

and got the following output:

>>>> Method=COTR Default config: {'class': 'COTR', 'ckpt': 'pretrained/cotr/cotr_default.pth.tar', 'backbone': 'resnet50', 'hidden_dim': 256, 'dilation': False, 'dropout': 0.1, 'nheads': 8, 'layer': 'layer3', 'backbone_layer_dims': {'layer1': 256, 'layer2': 512, 'layer3': 1024, 'layer4': 2048}, 'enc_layers': 6, 'dec_layers': 6, 'position_embedding': 'lin_sine', 'max_corrs': 100, 'match_threshold': 0.0, 'imsize': -1, 'batch_size': 32} Thres: [0.0]
Matching thres: 0.0  Save to: ./outputs/hpatches/cache/COTR.npy

>>>>Eval hpatches: task=matching+homography method=COTR scale_H=False rthres=2 thres=[1, 3, 5, 10] 
>>Finished, pairs=0 match_failed=580 matches=0.0 match_time=nans
==== Image Matching ====
#Features: mean=0 min=0 max=0
#(Old)Matches: a=0, i=0, v=0
#Matches: a=0, i=0, v=0
MMA@[ 1  3  5 10] px:
a=[0. 0. 0. 0.]
i=[0. 0. 0. 0.]
v=[0. 0. 0. 0.]

==== Homography Estimation ====
Hest solver=degensac est_failed=580 ransac_thres=2 inlier_rate=0.00
Hest Correct: a=[0. 0. 0. 0.]
i=[0. 0. 0. 0.]
v=[0. 0. 0. 0.]
Hest AUC: a=[0. 0. 0. 0.]
i=[0. 0. 0. 0.]
v=[0. 0. 0. 0.]

I also added a print call in the match_pairs function, but no information was output:

def match_pairs(self, im1_path, im2_path, queries_im1=None):
        im1 = imageio.imread(im1_path, pilmode='RGB')
        im2 = imageio.imread(im2_path, pilmode='RGB')
        engine = SparseEngine(self.model, self.batch_size, mode='tile')
        matches = engine.cotr_corr_multiscale(
            im1, im2, np.linspace(0.5, 0.0625, 4), 1,
            max_corrs=self.max_corrs, queries_a=queries_im1,
            force=True
        )

        print(matches.shape) # add print

        # Fake scores as not output by the model
        scores = np.ones(len(matches))
        kpts1 = matches[:, :2]
        kpts2 = matches[:, 2:4]
        return matches, kpts1, kpts2, scores

What is the meaning of the $COLMAP_PATH argument?

--colmap $COLMAP_PATH
In the evaluation code, there is this ambiguous parameter and I don't know its actual meaning. Can you give an example and explain why I need to install COLMAP since I have already installed pycolmap?

Test LoFTR on InLoc

Hi, I used image-matching-toolbox to eval LoFTR on InLoc.
config:
default: &default
    class: 'LoFTR'
    ckpt: 'pretrained/loftr/outdoor_ds.ckpt'
    match_threshold: 0.2
    imsize: 1024
    no_match_upscale: False

example:
    <<: *default
    match_threshold: 0.5
    imsize: -1

hpatch:
    <<: *default
    imsize: 480
    no_match_upscale: True

inloc:
    <<: *default
    pairs: 'pairs-query-netvlad40-temporal.txt'
    rthres: 48
    skip_matches: 20

and then got the result below:
[screenshot]

I have a question

I encountered some errors while running this command: bash download.sh

https://dsmn.ml/files/d2-net/d2_tf.pth
Resolving host dsmn.ml (dsmn.ml)... failed: Name or service not known.
wget: unable to resolve host address 'dsmn.ml'
--2024-05-29 19:41:36-- https://dsmn.ml/files/d2-net/d2_tf_no_phototourism.pth
Resolving host dsmn.ml (dsmn.ml)... failed: Name or service not known.
wget: unable to resolve host address 'dsmn.ml'
--2024-05-29 19:41:36-- https://www.di.ens.fr/willow/research/sparse-ncnet/models/sparsencnet_k10.pth.tar
Resolving host www.di.ens.fr (www.di.ens.fr)... 129.199.99.14
Connecting to www.di.ens.fr (www.di.ens.fr)|129.199.99.14|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [application/x-tar]
Saving to: 'sparsencnet/sparsencnet_k10.pth.tar'

Can you give me some advice? Thank you.

The possibilities of python to C++

Patch2Pix achieved amazingly good results on our dataset. Therefore, we are interested in taking the work further and deploying it on our C++ project.

Unfortunately, I am not familiar with this process and would like to ask for your opinion and advice. I've found that the options include compiling the Python program as a '.DLL' and calling it from C++, and I'm not sure whether this would affect efficiency. The alternative is reading and understanding the program in depth and rewriting it in C++, which would be a very challenging piece of work.

Thank you very much for your suggestions and help!
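
One route often taken for this kind of deployment is to export the network to TorchScript and load it from C++ via libtorch (torch::jit::load). Whether Patch2Pix's full matching pipeline is traceable has not been verified here, so the snippet below is only a generic sketch using a plain ResNet-34 (the same backbone family Patch2Pix uses):

import torch
import torchvision

# Trace a model with an example input and save it; C++ code can then load the
# resulting file with torch::jit::load("model_traced.pt").
model = torchvision.models.resnet34(weights=None).eval()
example_input = torch.randn(1, 3, 224, 224)
traced = torch.jit.trace(model, example_input)
traced.save("model_traced.pt")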

aachen evaluation

Hi @GrumpyZhou Thanks for your great works!

I use patch2pix + superglue as the matcher for the Aachen evaluation, with the eval_aachen.py script. I can get keypoints.h5 and matches.h5, but an error was encountered while running triangulation:
"Keypoint format not supported"

I find the format slightly different from hloc's; any suggestions? Did you test patch2pix + superglue on Aachen? Thanks!

RobotCar Seasons eval runs very slowly

Hi, thanks for your excellent work.

I finished the RobotCar dataset preparation and can successfully run the eval program. However, the process takes too much time (the estimate is around 2000+ hours for matching 4.7M+ pairs). Is this normal, or might something have gone wrong?

Also, does the skip_matches option work? Can I use it to skip this? Could you help me and explain?

Thanks for your time!

python setup.py develop failed

when running python setup.py develop

ERROR:: You need Python 3.8 or greater to install PyTables!
error: Setup script exited with 1

Should I change my Python version to 3.8 to avoid the problem?

Prepare Image Pairs

bash download.sh:
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='drive.google.com', port=443): Max retries exceeded with url: /uc?id=1T2c3DHJlTdRpo_Y3Ad4AH8p4tS7cOyjP (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f0d00812cd0>: Failed to establish a new connection: [Errno 101] Network is unreachable'))
unzip: cannot find or open pairs.zip, pairs.zip.zip or pairs.zip.ZIP.

ModuleNotFoundError

First of all, thank you for your excellent work. When I run eval_hpatches after installing the environment, there is a ModuleNotFoundError:
ModuleNotFoundError: No module named 'third_party.caps.CAPS'
I think it may be a problem with the submodules. Some submodules can be imported normally, such as d2net and r2d2.

Where to find download.sh?

Hi,

This repo is a really nice one. However, I couldn't find the pretrained folder or the download.sh script to download the pretrained weights. Are they not pushed yet?

Thank you very much.

python -m immatch.eval_inloc

File "/home/szw/image-matching-toolbox-main/immatch/eval_inloc.py", line 5, in
from third_party.hloc.hloc.localize_inloc import localize_with_matcher
ImportError: cannot import name 'localize_with_matcher'

'localize_with_matcher' is not actually in 'localize_inloc.py'. Did you make a mistake?

RuntimeError

RuntimeError:
Tried to access nonexistent attribute or method '__call__' of type '__torch__.kornia.geometry.boxes.Boxes3D'. Did you forget to initialize an attribute in __init__()?:
File "/home/szw/anaconda3/envs/image-matching-toolbox-main/lib/python3.7/site-packages/kornia/geometry/boxes.py", line 601
# Due to some torch.jit.script bug (at least <= 1.9), you need to pass all arguments to __init__ when
# constructing the class from inside of a method.
return cls(hexahedrons, raise_if_not_floating_point=False, mode=mode)
~~~ <--- HERE

nn_matching: cannot perform reduction function max on tensor with no elements

Hello @GrumpyZhou ,

Image pair matching sometimes fails with the following error:

File "my_code.py", line 111, in <lambda>
    matcher = lambda im1, im2: model.match_pairs(im1, im2)
File ".../image-matching-toolbox/immatch/modules/caps.py", line 79, in match_pairs
    matches, kpts1, kpts2, scores = self.match_inputs_(im1, gray1, im2, gray2)
File ".../image-matching-toolbox/immatch/modules/caps.py", line 69, in match_inputs_
    match_ids, scores = self.mutual_nn_match(desc1, desc2, threshold=self.match_threshold)
File ".../image-matching-toolbox/immatch/modules/base.py", line 63, in mutual_nn_match
    return mutual_nn_matching(desc1, desc2, threshold)
File ".../image-matching-toolbox/immatch/modules/nn_matching.py", line 28, in mutual_nn_matching
    matches, scores = mutual_nn_matching_torch(desc1, desc2, threshold=threshold)
File ".../image-matching-toolbox/immatch/modules/nn_matching.py", line 12, in mutual_nn_matching_torch
    nn12 = similarity.max(dim=1)[1]
RuntimeError: cannot perform reduction function max on tensor with no elements because the operation does not have an identity

My guess is that there is an image, for which no keypoints are detected (yes, there is one such image), desc1 or desc2 array is empty, similarity matrix is empty and therefore the max operation fails. Implementing a test somewhere, checking if desc1 or desc2 are not empty (and dealing with the empty ones) would be great. I've seen the issue only with CAPS, but it could probably appear also for other methods.

Thank you very much!
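
For illustration, a guard along the following lines (a sketch only, not code from the toolbox) would avoid the crash by returning empty results when either descriptor set is empty:

import torch

def mutual_nn_matching_safe(desc1, desc2, threshold=0.0):
    # Short-circuit when one image has no keypoints/descriptors, instead of
    # letting similarity.max(...) fail on an empty tensor.
    if desc1.shape[0] == 0 or desc2.shape[0] == 0:
        return torch.empty((0, 2), dtype=torch.long), torch.empty(0)
    similarity = desc1 @ desc2.t()
    nn12 = similarity.max(dim=1).indices
    nn21 = similarity.max(dim=0).indices
    ids1 = torch.arange(desc1.shape[0], device=desc1.device)
    mutual = ids1 == nn21[nn12]
    matches = torch.stack([ids1[mutual], nn12[mutual]], dim=1)
    scores = similarity[ids1[mutual], nn12[mutual]]
    keep = scores >= threshold
    return matches[keep], scores[keep]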

patch2pix with superglue

Hello,
I find that patch2pix with superglue can get better results than superpoint + superglue in hloc. Can you tell me how to use patch2pix with superglue in this code? Thanks.

Bug Report - installation problem

Hi,
Your work is really great.
But when downloading the d2-net pretrained model by running bash download.sh according to install.md, the terminal prompts "wget: unable to resolve host address 'dsmn.ml'". And "dsmn.ml" is also inaccessible via Google.

Please help me, thank you very much.
