
dbow3's People

Contributors

dorian3d, droppix, jlblancoc, lowsfer, rmsalinas, stevenvdschoot, yachizhang


dbow3's Issues

Windows10 - Built as dynamic libs?:ON, Thanks a lot!

When configuring, I encounter "Built as dynamic libs?: ON". Here is the relevant CMake output:
Detecting CXX compile features - done
LIB_INSTALL_DIR:
OpenCV ARCH: x86
OpenCV RUNTIME: vc12
OpenCV STATIC: OFF
Found OpenCV 3.4.0 in D:/opencv-3.4.0/opencv-3.4.0/build/install/x86/vc12/lib
You might need to add D:\opencv-3.4.0\opencv-3.4.0\build\install\x86\vc12\bin to your PATH to be able to run your applications.


General configuration for DBoW3 0.0.1

Built as dynamic libs?:ON
Compiler:C:/Program Files (x86)/Microsoft Visual Studio 12.0/VC/bin/cl.exe

Build Type: Release
C++ flags (Release): /DWIN32 /D_WINDOWS /W3 /GR /EHsc /DWIN32 /D_WINDOWS /W3 -std=c++11
C++ flags (Debug): /DWIN32 /D_WINDOWS /W3 /GR /EHsc /DWIN32 /D_WINDOWS /W3 -std=c++11
C++ flags (Relase+Debug): /DWIN32 /D_WINDOWS /W3 -std=c++11 /DWIN32 /D_WINDOWS /W3 -std=c++11
CMAKE_CXX_FLAGS: /DWIN32 /D_WINDOWS /W3 /GR /EHsc
CMAKE_BINARY_DIR: D:/DBow3-master/build

normalization

Hello everyone,
When I create a new descriptor myself, do I need to normalize it before using it with the DBoW3 library?

Problem with generating dictionary on own features

Hello, I use my own CNN to generate 512-dimensional uchar descriptors for 7000 images, and generate a dictionary with:

DBoW3::Vocabulary vocab;
vocab.create( descriptors );
vocab.save( "vocabulary.gz" );

Then I use another 100 images, with descriptors generated by the same CNN, to get vocabulary vectors based on this dictionary. However, the vectors are null.
When I use the same descriptors of those 100 images with another dictionary, generated from ORB features, the vectors are not null.

So, why? Is there something wrong with the dictionary generated from the CNN features?

[Question] update vocabulary zip when new image's bow in

Hello, developers.
I really appreciate your project and your work.

In my current project I have an image dataset (a specific urban path in my case; call it the 1st image dataset). I used your library to train on these images and build a vocabulary file for them.

With that vocabulary, I want to find, for each image of the same place taken on another day along the same path (call those the 2nd image dataset), the corresponding image in the 1st dataset, using your DBoW.

But most of the time the query returns the wrong image from the 1st dataset, with a low score.
(If I instead query with an image from the 1st dataset itself, rather than the same place on another day, it finds the right image from the 1st dataset.)

So I would like to add the BoWs of the 2nd dataset's images to the vocabulary file I already built, i.e. update the vocabulary.

My question: is it OK to add a new BoW of an image from the 2nd dataset to the existing vocabulary built on the 1st dataset, and then use it to match 2nd-dataset images against the 1st dataset? Or do I need to recreate the vocabulary from scratch on both datasets together and use that to find the right image from the 1st dataset corresponding to an image of the 2nd dataset?

structured feature words

I think the original bag-of-words approach loses the structural information of the picture. Could we come up with a method that takes the relationships between the features in one image into account?
I am working on this; if you have some good ideas, we can discuss.

Is this project still maintained?

I understand that fbow is now the modern choice, but I have some minor bugfixes for this library that I could send a PR for. Would it be possible to merge them?

I am using DBow3 simply because I needed Python bindings and couldn't find any for fbow. In the future I'll write my own but I'm a bit tight on time at the moment.

Thank you,
Andrei

Is DBoW3 thread safe?

Hello, I am wondering whether DBoW3 is thread safe: can I transform, add, or query from multiple threads? If not all of them, are any of these operations thread-safe?
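I can't answer this authoritatively, but absent a documented guarantee the safe assumption is that concurrent add/transform/query calls on the same object need external synchronization. A minimal sketch of serializing all access through one mutex (the Database here is a hypothetical stand-in struct, not the real DBoW3 class):

```cpp
#include <cassert>
#include <mutex>
#include <thread>
#include <vector>

// Stand-in for DBoW3::Database (illustrative only).
struct Database {
    std::vector<int> entries;
    void add(int e) { entries.push_back(e); }
    std::size_t size() const { return entries.size(); }
};

// Serialize every call through one mutex: coarse-grained but safe.
class LockedDatabase {
public:
    void add(int e) {
        std::lock_guard<std::mutex> lk(m_);
        db_.add(e);
    }
    std::size_t size() const {
        std::lock_guard<std::mutex> lk(m_);
        return db_.size();
    }
private:
    mutable std::mutex m_;  // mutable so const queries can lock too
    Database db_;
};
```

Read-only transform() on an immutable, fully built Vocabulary is commonly treated as safe to share across threads, but that too is an assumption worth verifying against the source.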

bug in "void DescManip::fromString(cv::Mat &a, const std::string &s)"

Hi! When I run the demo demo_general, the following error occurs:

...
Done
Creating a small database...
OpenCV Error: Assertion failed (s >= 0) in setSize, file /home/koker/library/opencv/modules/core/src/matrix.cpp, line 306
/home/koker/library/opencv/modules/core/src/matrix.cpp:306: error: (-215) s >= 0 in function setSize
...

The OpenCV version I use is 3.1.0.
I debugged the code and found that the error occurs in src/DescManip.cpp at a.create(1, cols, type); (line 185),
because wrong values of type and cols are read from ss.

Here is my solution:

void DescManip::fromString(cv::Mat &a, const std::string &s)
{
    // check if the dbow3 header is present
    string ss_aux;
    stringstream ss(s);
    ss >> ss_aux;
    if(ss_aux.find("dbw3") == std::string::npos){ // is dbow2
        // read until end
        int val;
        vector<uchar> data; data.reserve(100);
        while(ss >> val) data.push_back(val);
        // copy to a
        a.create(1, data.size(), CV_8UC1);
        memcpy(a.ptr<char>(0), &data[0], data.size());
    }
    else{
        int type, cols;
        ss >> type >> cols;
        a.create(1, cols, type);
        // ...

Ambiguous license?

In DBoW3.h there is a reference to a CC-ShareAlike license. At the same time it also refers to LICENSE.txt, which clearly states a modified BSD license. So I am not sure which license I should follow?

Vocabulary creation for ORB_SLAM2

Hi again Rafael,

I am also using ORB_SLAM2 in my project. Because the visual content is quite different, I need to recreate the vocabulary. I managed to do this a few months back using DBoW2; however, I was limited to ORB and SURF descriptors, as I was unsure how to code additional feature templates.
I am migrating to DBoW3, as it seems to seamlessly support all types of features.
ORB_SLAM2 uses a non-YAML, human-readable (ASCII) vocabulary in TXT format. Although DBoW2 created a yml.gz vocabulary by default, some additional files in the ORB_SLAM2 repository provided the function saveToTextFile(), which ensured compatibility.
After digging through the files, I have seen that DBoW3 stores vocabularies as binaries or YAML. Although the YAML files can also be saved with a .txt extension, the content does not really resemble the format output by DBoW2::Vocabulary::saveToTextFile().
Would you mind giving some pointers on how to get DBoW3 vocabularies to work with ORB_SLAM2?

problems about "Matrices should be continuous"

Hi,
I'd like to save the feature matrices of 194,339 images.
But I get warnings like "Matrices should be continuous" when running create_voc_step0 with ORB features.
How should I deal with this?
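For what it's worth, this OpenCV warning generally means a cv::Mat whose rows are not stored in one contiguous block, which happens when the Mat is a view (ROI, rowRange) into a larger matrix; calling mat.clone() produces a tightly packed copy. A standard-library sketch of the underlying layout issue, using an illustrative View struct rather than a real OpenCV type:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Minimal strided 2-D view, mimicking how cv::Mat addresses a ROI:
// element (r, c) lives at data[r * rowStride + c].
struct View {
    const float *data;
    std::size_t rows, cols, rowStride;
    // Same test cv::Mat::isContinuous() performs: no gap between rows.
    bool isContinuous() const { return rowStride == cols; }
};

// Copying row by row into a tight buffer is what cv::Mat::clone() does.
std::vector<float> makeContinuous(const View &v) {
    std::vector<float> out;
    out.reserve(v.rows * v.cols);
    for (std::size_t r = 0; r < v.rows; ++r)
        for (std::size_t c = 0; c < v.cols; ++c)
            out.push_back(v.data[r * v.rowStride + c]);
    return out;
}
```

So if the descriptor matrices passed to DBoW3 are sub-views of larger images or buffers, cloning them first should silence the warning.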

cannot find source file:test_flann.cpp

Hi @dorian3d,
When I built the project with CMake, I got this error:
CMake Error at tests/CMakeLists.txt:7 (ADD_EXECUTABLE):
Cannot find source file: test_flann.cpp

Obviously, there is no file "test_flann.cpp" in the "tests" directory. What is this file for, and why is it missing?

Floating point vs binary features matching scores

Hi Rafael,
Thanks a lot for this code.
I have compiled demo_general and some other demos that basically extend to other features I wanted to test.
When matching images against themselves (method testVocCreation) the voc.score method returns a measure of how similar the images are based on the BowVectors v1, v2. Particularly one should obtain a score of 1 when matching the same images, for instance matching "image0.png" vs "image0.png".
This is true in the demo program for BINARY features (orb, brisk, akaze). For FLOATING POINT features (surf in the demo) all image combinations return a score of "-0".
I have further added SIFT to one of my own demos and I get the same result.

I have also played around with the vocabulary settings "WeightingType" and "ScoringType" as documented in the DBoW repository. In all tests with floating-point descriptors, I keep getting "-0" scores.

Could you point me as to what should be the vocabulary configuration for floating point descriptors?
I am compiling in Win7 with VS2015 and OpenCV3.0 libraries.
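For context (this is from my reading of the DBoW2/DBoW3 sources, not documentation, so treat it as an assumption): the default L1 scoring accumulates only over words common to both BoW vectors and negates at the end, so two vectors that share no words yield exactly -0. That is consistent with transform() producing empty or disjoint BoW vectors for the floating-point descriptors. A self-contained sketch of that scoring, with std::map standing in for DBoW3::BowVector (which is also a map from word id to value):

```cpp
#include <cassert>
#include <cmath>
#include <map>

using BowVec = std::map<unsigned, double>;  // word id -> tf-idf value

// L1-normalize in place (done inside transform() before L1 scoring).
void normalizeL1(BowVec &v) {
    double norm = 0;
    for (auto &p : v) norm += std::fabs(p.second);
    if (norm > 0)
        for (auto &p : v) p.second /= norm;
}

// L1 score: accumulate |vi-wi| - |vi| - |wi| over COMMON words only,
// then negate and halve. Identical normalized vectors give 1;
// vectors with no common words give -0 (the score reported above).
double scoreL1(const BowVec &a, const BowVec &b) {
    double score = 0;
    auto ita = a.begin(), itb = b.begin();
    while (ita != a.end() && itb != b.end()) {
        if (ita->first == itb->first) {
            score += std::fabs(ita->second - itb->second)
                   - std::fabs(ita->second) - std::fabs(itb->second);
            ++ita; ++itb;
        } else if (ita->first < itb->first) ++ita;
        else ++itb;
    }
    return -score / 2.0;
}
```

So a constant "-0" suggests the floating-point descriptors never land on weighted words at all, rather than a scoring-configuration problem.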

make[2]: *** No rule to make target '/usr/local/lib/libDBoW3.a', needed by 'gen_vocab'. Stop.

make[2]: *** No rule to make target '/usr/local/lib/libDBoW3.a', needed by 'gen_vocab'. Stop.
CMakeFiles/Makefile2:67: recipe for target 'CMakeFiles/gen_vocab.dir/all' failed
make[1]: *** [CMakeFiles/gen_vocab.dir/all] Error 2
Makefile:83: recipe for target 'all' failed
make: *** [all] Error 2

An error when running demo_general (The written string is too long in function icvYMLWriteString)

When I ran demo_general, an error occurred. Here is part of the output:
" Saving database...
1
OpenCV Error: Bad argument (The written string is too long) in icvYMLWriteString, file /tmp/opencv3-20170727-41914-1s9pxfl/opencv-3.2.0/modules/core/src/persistence.cpp, line 2111
/tmp/opencv3-20170727-41914-1s9pxfl/opencv-3.2.0/modules/core/src/persistence.cpp:2111: error: (-5) The written string is too long in function icvYMLWriteString "

I found that it came from the line f << "descriptor" << DescManip::toString(child.descriptor); in Vocabulary.cpp: child.descriptor.cols was 32767, so the string was too long for operator<<.
What confuses me most is that in CLion's debug mode there is no error, but in run mode there is.

I also found that changing the suffix from '.yml.gz' to '.voc' avoids the problem.

Fail to load binary orbvoc files on windows.

In "Vocabulary.cpp", line 1053:
std::ifstream ifile(filename);
may work on Linux, but you need to specify that the data is binary to ensure it is not modified.
It needs to be replaced with:
std::ifstream ifile(filename, std::ios::binary);

Similarly on line 1036 replace with:
std::ofstream file_out(filename, std::ios::binary);
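The underlying reason: on Windows, streams opened in text mode translate line endings (and historically treat a 0x1A byte as end-of-file), which corrupts arbitrary binary payloads such as a vocabulary file; std::ios::binary disables that translation. A small round-trip sketch (the file name is arbitrary; on POSIX systems text and binary modes behave identically, so the corruption only shows up on Windows):

```cpp
#include <cassert>
#include <fstream>
#include <iterator>
#include <string>
#include <vector>

// Write raw bytes, including values that Windows text mode would mangle
// (0x0A expands to 0x0D 0x0A on write, 0x1A ends reading early).
void writeBytes(const std::string &path, const std::vector<char> &bytes) {
    std::ofstream out(path, std::ios::binary);
    out.write(bytes.data(), static_cast<std::streamsize>(bytes.size()));
}

// Read the whole file back as raw bytes.
std::vector<char> readBytes(const std::string &path) {
    std::ifstream in(path, std::ios::binary);
    return std::vector<char>(std::istreambuf_iterator<char>(in),
                             std::istreambuf_iterator<char>());
}
```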

Problems about float format descriptors.

Hello, I use my own CNN to generate 256-dimensional float descriptors for just 12 images and call
DBoW3::Vocabulary vocab;
vocab.create( descriptors );
to generate a vocabulary, but it seems to get stuck for a long while (more than 24 hours) and never produces the vocabulary. I don't know where the problem is.
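One thing worth checking (an assumption about the cause, not a confirmed diagnosis): with the tree parameters used throughout the DBoW3 demos and in the vocabularies shown elsewhere in these issues (branching factor k = 10, depth L = 5), the vocabulary has up to k^L = 100,000 leaf words, so 12 images cannot supply anywhere near enough descriptors to cluster, and k-means on 256-dimensional float descriptors is also far slower per iteration than Hamming-based clustering of binary ones. The arithmetic:

```cpp
#include <cassert>
#include <cstdint>

// Number of leaf words in a vocabulary tree with branching factor k
// and depth L (the DBoW3 demo defaults are k = 10, L = 5).
std::uint64_t leafWords(std::uint64_t k, unsigned L) {
    std::uint64_t n = 1;
    for (unsigned i = 0; i < L; ++i) n *= k;
    return n;
}
```

Trying a much smaller tree (say k = 9, L = 3, i.e. 729 words) for a 12-image dataset would be a quick way to test whether the run is genuinely hung or just hopelessly over-parameterized.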

Test on Der Hass dataset

Hi,

What am I looking for:

I am interested in the task of image matching. My goal is to query one image from a dataset of images and get the entries which have a substantial overlap with the queried image.

What do I expect
When I query one image from the dataset, I expect to get a list of image indices, sorted by scores showing how similar each image is to the queried image. Images with smaller variance from, or more overlap with, the queried image should have a higher score.

What I do:

To this end, I use the demo_general example. The testDatabase function serves my purpose: an image is queried, and the indices of "similar" images are printed out along with their scores.
I have tried this example code on Der Hass dataset.

I run the code with the following command:
demo_general orb $(find $DATASET_PATH -iname '*.JPG' | sort)

What is the result
The output can be found in the attached file named output.txt.
The interesting part of the output is where it queries the database. An instance of a query result is as follows (entry ids start from zero):

Searching for Image 2. 4 results:
<EntryId: 2, Score: 1>
<EntryId: 53, Score: 0.379218>
<EntryId: 76, Score: 0.36081>
<EntryId: 60, Score: 0.358032>.

What is the problem?

Looking at the images, one can observe that the images with high scores are not very similar to the queried image. For the example above, image 1 and image 60 have a huge difference in viewpoint. In the other examples, there are mostly one or two such images which are very different, and one can find more similar images in the dataset by visual comparison.

What is the question?
Do I have the right expectation? Should I rely on the query scores to get the images matches with similar view points? Is there any way to make the scores more reliable in this scenario?

Problem with the size of vocabulary

Hi, I am trying to use SIFT to create the vocabulary I need. There is a problem: when I use the full dataset (NYU dataset), the resulting vocabulary is 3 KB, but when I use one tenth of the dataset, the result is 968 KB. This confuses me. I sincerely hope for your reply, thank you!

orbvoc.dbow3

orbvoc.dbow3 is a ~50 MB file that is checked into the repo. Since a git clone pulls all commits, every version of this file is downloaded on clone. Currently there are 2 commits of this file, so a clone is >100 MB. Every time you change this file and push (even a single-byte change), the repo will grow by another 50 MB. Is this what you want? It would be preferable to use git-lfs or git-annex to add this file to the repo.

Unfortunately, even if you were to remove this file from the repo, that 100MB is still in the history so will come along with every clone. To get rid of it, you have to do some extra work. See http://stackoverflow.com/questions/2100907/how-to-remove-delete-a-large-file-from-commit-history-in-git-repository or https://git-scm.com/book/en/v2/Git-Internals-Maintenance-and-Data-Recovery#Removing-Objects

Hello, how can I build DBoW3 into a static library (.a)? Right now it builds into a .so, which isn't what I want.

-- Install configuration: "Release"
-- Installing: /usr/local/lib/cmake/FindDBoW3.cmake
-- Installing: /usr/local/lib/cmake/DBoW3/DBoW3Config.cmake
-- Installing: /usr/local/lib/libDBoW3.so.0.0.1
-- Installing: /usr/local/lib/libDBoW3.so.0.0
-- Installing: /usr/local/lib/libDBoW3.so
-- Set runtime path of "/usr/local/lib/libDBoW3.so.0.0.1" to ""
-- Installing: /usr/local/include/DBoW3/BowVector.h
-- Installing: /usr/local/include/DBoW3/DBoW3.h
-- Installing: /usr/local/include/DBoW3/Database.h
-- Installing: /usr/local/include/DBoW3/DescManip.h
-- Installing: /usr/local/include/DBoW3/FeatureVector.h
-- Installing: /usr/local/include/DBoW3/QueryResults.h
-- Installing: /usr/local/include/DBoW3/ScoringObject.h
-- Installing: /usr/local/include/DBoW3/Vocabulary.h
-- Installing: /usr/local/include/DBoW3/exports.h
-- Installing: /usr/local/include/DBoW3/quicklz.h
-- Installing: /usr/local/include/DBoW3/timers.h
-- Installing: /usr/local/bin/demo_general
-- Set runtime path of "/usr/local/bin/demo_general" to ""
-- Installing: /usr/local/bin/create_voc_step0
-- Set runtime path of "/usr/local/bin/create_voc_step0" to ""
-- Installing: /usr/local/bin/create_voc_step1
-- Set runtime path of "/usr/local/bin/create_voc_step1" to ""
lzw@resplendent-star:~/3d_lib/DBow3-master/build$

Why the format is different?

I downloaded a DBoW3 vocabulary YML file. It looks like this:
vocabulary:
k: 10
L: 5
scoringType: 0
weightingType: 0
nodes:
- { nodeId:1, parentId:0, weight:0.,
descriptor:"dbw3 5 256 0.0282166 -0.0464277 -0.00155563 0.0195318 0.0127344 0.0175524 0.0249182 0.0505561 0.00974953 0.0191589 -0.126822 -0.0269695 0.0156602 -0.00148103 0.0266696 -0.0278376 0.0183598 0.0225065 0.0102972 0.0135393 -0.0298388 -0.015842 0.0644389 0.0026523 0.019828 -0.00464698 -0.0318678 0.00451956 0.0167672 0.00993656 0.0064205 0.0114803 0.0052258 -0.0204265 -0.00298808 0.0393944 -0.0195278 -0.013887 -0.0227335 -0.0162304 -0.0071446 -0.00422807 0.0428517 -0.0317671 0.0283621 0.0111701 -0.00566452 -0.0400856 -0.0196577 0.0216044 0.0196101 -0.0285099 -0.0151599 0.0212448 -0.0288515 -0.0600829 -0.00861907 0.0214418 0.0375813 0.0166094 -0.00850956 0.010408 -0.0336234 0.0310988 0.0168146 0.0161289 0.0427318 -0.0044199 -0.0122952 -0.0365116 -0.0145487 -0.0397346 -0.0221509 -0.00777042 -0.0464547 -0.00426373 -0.00957853 0.00308297 0.0132791 -0.0196195 -0.0377574 -0.029988 0.0265767 0.0262487 0.0140505 0.00488274 0.00680912 0.0107932 -0.070846 -0.0113015 0.0128653 -0.0112844 0.0181235 -0.00195907 0.0191648 0.0254948 -0.0359513 -0.0123985 -0.0175331 0.0196461 0.000914373 -0.0114064 -0.00685857 -0.00109303 -0.0183508 -0.0132181 0.0116307 -0.0248163 0.0327876 -0.0247617 -0.0454079 0.0262968 -0.0764144 0.0390536 0.0297468 -0.0273197 0.0202212 -0.00681736 0.0196127 0.00834888 -0.0134622 0.012962 0.0112936 0.0178503 0.0262749 -0.0485005 -0.0285796 -0.000385143 0.0147169 -0.0462185 0.00297543 0.0275997 0.0373025 0.0245792 -0.00278855 0.0522282 -0.0288043 0.00905006 -0.00756075 -0.013557 -0.014353 0.000465126 -0.0129177 -0.0350644 0.0297775 0.0208471 0.0449397 -0.0053479 0.00241352 -0.00604664 0.006685 0.0144987 0.0226759 0.00105402 -0.020786 0.0129925 0.0320702 0.0445253 -0.0233705 0.027613 -0.00479526 -0.0295112 -0.0437024 0.0131428 0.0180217 0.0324005 0.00466164 -0.0128045 0.00932035 0.043001 -0.00728973 -0.0377903 0.0018688 0.00254708 -0.0367907 -0.0331428 -0.0475313 -0.0203362 -0.00430156 0.00493915 -0.0095337 0.0088231 -0.0356051 0.0450481 -0.034091 
0.00540604 0.00357041 -0.0229231 -0.0263163 0.00353934 0.0208586 -0.0530529 0.0175665 -0.0260063 -0.013499 0.0175644 0.00625461 0.0114385 0.00166789 0.03291 -0.0270635 -0.00973511 -0.0434037 0.0292823 0.00742841 0.000494358 -0.0306714 -0.0159139 -0.0414082 -0.0288355 -0.0115343 -0.00388788 -0.0139166 0.021995 0.00987418 -0.00274973 -0.0514535 0.0107591 -0.00696206 -0.0293223 0.00747669 -0.0166595 -0.00588887 0.00667826 0.0119736 -0.00979452 -0.00674867 1.18017e-06 0.0537927 -0.0359476 -0.0194478 -0.0138035 0.0166307 0.024314 0.00792531 0.015141 -0.0113583 -0.00644338 0.0120866 -0.046237 0.00113854 0.0419657 -0.0702566 -0.0104919 0.0462897 -0.00122678 -0.0525829 -0.000213172 -0.0186052 -0.0135538 -0.0245714 -0.0288605 0.00682914 -0.0338058 0.018147 0.0170514 " }
- { nodeId:2, parentId:0, weight:0.,
...

And I use the same data to generate a vocabulary YML file. My output looks like this:
vocabulary:
k: 10
L: 5
scoringType: 0
weightingType: 0
nodes:
- { nodeId:1, parentId:0, weight:3.1172121710917007e+00,
descriptor:"0.0354999 0.125961 -0.0899333 0.00940716 -0.0404284 -0.0681934 -0.000698927 -0.129488 0.0596507 -0.0352885 -0.100353 -0.00107607 0.0115445 0.0383281 -0.00665728 0.0467515 -0.198072 0.0997283 0.00653879 0.0282129 0.156231 -0.142805 0.0929327 -0.137059 0.0668409 0.150995 -0.0693844 -0.0766955 -0.041515 -0.153341 -0.0530757 0.0540285 -0.084946 0.0180105 0.121748 -0.0676764 0.230421 0.0762116 0.0669327 -0.0890834 0.126961 -0.188613 0.157426 0.0674022 0.0543198 0.00510156 -0.0718658 -0.0548448 0.0952988 0.0044786 -0.153464 0.0371513 0.109263 0.0892266 -0.0309564 0.0759547 -0.0795075 0.0361993 -0.0254162 0.0726163 -0.0482989 -0.113146 -0.0129798 -0.021902 -0.0136797 0.0957096 -0.201533 -0.0887019 0.151989 -0.171577 0.0205965 -0.0746266 -0.0182118 0.0777944 0.186204 -0.018468 0.194916 -0.0438617 0.0290822 0.0368869 -0.048466 0.0319289 -0.0238049 0.0643619 0.0388753 0.060765 0.118967 -0.128794 0.0865447 0.00288163 -0.0940577 0.0105423 0.0632482 0.0407168 -0.0778853 0.0066329 0.0849079 -0.167564 0.0577636 -0.0248432 -0.0139746 0.110944 0.0570377 -0.0205937 -0.0931026 -0.00960186 0.00720341 -0.0533386 0.121489 0.116467 -0.0546351 0.115026 0.215079 -0.223696 -0.268556 -0.22581 0.131816 -0.0447868 -0.00238234 -0.0351309 0.0907687 0.0915385 -0.0322984 0.0394833 -0.0734853 -0.0545587 -0.0266285 0.137623 0.0620815 -0.0788311 0.0871121 -0.194911 -0.123108 -0.126513 0.140195 -0.0601778 -0.0877871 -0.15231 0.00645607 0.0714905 0.159278 -0.110403 0.151996 -0.0206717 -0.0113557 0.028608 -0.186627 0.176118 -0.068964 0.0219737 -0.0434809 0.134475 0.0660336 0.197536 -0.0774245 -0.169559 0.0735524 -0.117962 0.0240644 0.00481006 -0.212233 -0.125976 -0.038999 -0.141939 -0.0296288 -0.022527 -0.0990938 0.0819112 0.0307572 -0.101368 0.155772 -0.106299 0.0846459 -0.115195 -0.0944857 -0.0163783 0.00181647 0.0297214 -0.0214813 0.15572 0.148555 -0.0451894 -0.0243046 -0.0647202 0.090212 0.0562619 0.103249 -0.0505745 -0.00872183 0.0909672 -0.076492 0.130413 0.0233149 0.100497 -0.0491502 
-0.031341 -0.0269015 -0.144976 -0.0639115 -0.0625203 -0.0354729 0.181546 0.109445 0.00274729 0.0505961 -0.149337 -0.0849629 -0.0537802 0.064498 -0.111422 0.0521305 -0.153909 0.0649416 0.0308576 0.106565 -0.143596 -0.175881 -0.103822 0.0445059 0.0837604 -0.123117 -0.0301544 0.0444908 0.0924708 0.150015 0.0405757 -0.00521247 0.107609 0.0918586 -0.0847282 -0.025882 0.139215 -0.0492126 0.0832153 -0.0281849 -0.0227442 0.159082 0.0228036 0.124301 -0.0736366 -0.0804549 0.0800293 0.0624154 -0.0583405 -0.0609488 -0.0556457 0.0762758 0.0391196 0.103908 -0.0837154 0.224688 -0.0982641 -0.0789496 -0.129398 0.024566 -0.0360153 " }

Why does the descriptor in the first YML file start with
dbw3 5 256
while mine doesn't?
Can I generate a YML like the first file?
Could you help me?

How can I get the EntryID separately?

With the following code, we can easily run a query and print the result. However, how can I get the EntryId separately?

QueryResults ret;
for(size_t i = 0; i < features.size(); i++)
{
    db.query(features[i], ret, 1);
    cout << "Searching for Image " << i << ". " << ret << endl;
}

// searching for image 0 returns 1 result:
// <EntryId: 0, Score: 1>
// searching for image 1 returns 1 result:
// <EntryId: 1, Score: 1>
// searching for image 2 returns 1 result:
// <EntryId: 2, Score: 1>
// searching for image 3 returns 1 result:
// <EntryId: 3, Score: 1>
// searching for image 4 returns 1 result:
// <EntryId: 4, Score: 1>
// searching for image 5 returns 1 result:
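QueryResults is essentially a std::vector of Result objects whose Id and Score members are public, so the top entry's id is simply ret[0].Id. A sketch using stand-in types (the real DBoW3 headers aren't reproduced here, so take the exact member names as an assumption to verify against QueryResults.h):

```cpp
#include <cassert>
#include <vector>

// Stand-ins mirroring DBoW3::Result / DBoW3::QueryResults.
using EntryId = unsigned;
struct Result {
    EntryId Id;
    double Score;
};
using QueryResults = std::vector<Result>;

// Pull the best-matching entry id out of a query result; results come
// back sorted by descending score, so the best match is at index 0.
EntryId bestEntry(const QueryResults &ret) {
    return ret.empty() ? EntryId(0) : ret[0].Id;
}
```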

Problem with calculating BoW vectors on own features

Hi guys, I have a problem calculating BoW vectors from my own feature descriptors. I use DBoW3 with FPFH feature descriptors, designed for point-cloud data (from a 3D LIDAR in my case). I implemented the feature-collection and vocabulary-building procedure following the code in demo_general.cpp. My implementation for one training sample is the following:

cv::Mat features_tmp(features_cloud->points.size(), dimensionality, CV_32F);

for(size_t j = 0; j < features_cloud->points.size(); j++)
{
    for(int idx = 0; idx < dimensionality; idx++)
    {
        features_tmp.at<float>(j, idx) = features_cloud->points[j].histogram[idx];
    }
}

cv::Mat features = features_tmp;

Building the vocabulary works well. However, when I use the transform() method on the same feature descriptors, I am not able to iterate over the resulting BowVector object: the ‘for’ loop with BowVector::iterator produces no output. I use the following code:

for(size_t i = 0; i < training_features.size(); i++)
{
    cout << "Training frame " << i << endl;

    BowVector frame_bow_vector;
    voc.transform(training_features[i], frame_bow_vector);

    cout << "BoW vector for training_frame " << i << ": " << endl;

    for(BowVector::iterator it = frame_bow_vector.begin(); it != frame_bow_vector.end(); it++)
        cout << it->first << ": " << it->second << ", ";

    cout << endl;

    training_bow_descriptors.push_back(frame_bow_vector);
}

Here training_features is a std::vector<cv::Mat>. If I print the first element:

cout<<frame_bow_vector.begin()->first<<" "<<frame_bow_vector.begin()->second<<endl;

I get the same output for all the BoW vectors of the reference samples: 0 5.30499e-315.
When I use the following code to test the vocabulary, matching training samples against themselves, I get a score of 0 for all samples:

BowVector v1, v2;
for(size_t i = 0; i < training_features.size(); i++)
{
    voc.transform(training_features[i], v1);
    for(size_t j = 0; j < training_features.size(); j++)
    {
        voc.transform(training_features[j], v2);

        double score = voc.score(v1, v2);
        // cout << "Frame " << i << " vs Frame " << j << ": " << score << endl;
        printf("Frame %zu vs Frame %zu: %.5f\n", i, j, score);  // %zu for size_t
    }
}

What could be the cause of this problem? Thank you in advance!

Stuck in function "testDatabase"

Hi guys, I have an issue running demo_general. The compilation shows no problems, but execution just hangs after producing this message:
Creating a small database...
After some simple investigation, I found that execution seems to be stuck in this loop
(file Vocabulary.cpp, lines 1396-1411):

for(unsigned int i = 0; i < fn.size(); ++i)
  {
    NodeId nid = (int)fn[i]["nodeId"];
    NodeId pid = (int)fn[i]["parentId"];
    WordValue weight = (WordValue)fn[i]["weight"];
    std::string d = (std::string)fn[i]["descriptor"];

    m_nodes[nid].id = nid;
    m_nodes[nid].parent = pid;
    m_nodes[nid].weight = weight;
    m_nodes[pid].children.push_back(nid);

    DescManip::fromString(m_nodes[nid].descriptor, d);
  }

Does anyone have the same issue, or any idea why?
Thank you very much,

Chung

c++17 compile error

Vocabulary.h fails to compile with C++17 because of:
void toStream( std::ostream &str, bool compressed=true) const throw(std::exception);
Dynamic exception specifiers were deprecated in C++11 and removed in C++17.

I hope you can fix this.
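For anyone hitting this: the fix is simply to delete the throw(std::exception) specifier; the function may still throw without it. A compilable sketch of the corrected signature (the Vocabulary body below is an illustrative stand-in, not the real class):

```cpp
#include <cassert>
#include <ostream>
#include <sstream>
#include <stdexcept>

// Pre-C++17 declaration (rejected under -std=c++17):
//   void toStream(std::ostream &str, bool compressed = true) const
//       throw(std::exception);
// C++17-compatible: just drop the dynamic exception specification.
struct Vocabulary {
    bool empty = true;  // stand-in state, for illustration only
    void toStream(std::ostream &str, bool compressed = true) const {
        if (empty) throw std::runtime_error("nothing to serialize");
        str << (compressed ? "compressed" : "plain");
    }
};
```

If callers relied on the specifier for documentation, a comment (or noexcept(false), which is the default anyway) conveys the same intent without the deprecated syntax.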

cv2.error: OpenCV(3.4.3) C:\projects\opencv-python\opencv\modules\core\src\matrix.cpp:235: error: (-215:Assertion failed) s >= 0 in function 'cv::setSize'

I need help with this error; it appears the moment a different person enters the frame, and I don't know what the problem is (I am very new at this).

import numpy as np
import cv2
import pickle

Face = cv2.CascadeClassifier("haarcascade_frontalface_alt2.xml ")
eye = cv2.CascadeClassifier('cascades/data/haarcascade_eye.xml')
smile = cv2.CascadeClassifier('cascades/data/haarcascade_smile.xml')

recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.read("trainner.yml")

labelids = {"person_name", 1}

with open("labels.pickle", "rb") as f:
    og_labelids = pickle.load(f)
labelids = {v: k for k, v in og_labelids.items()}

capt = cv2.VideoCapture(0)

while True:
    ret, frame = capt.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = Face.detectMultiScale(gray, scaleFactor=1.5, minNeighbors=5)
    for (x, y, w, h) in faces:
        # print(x, y, w, h)
        gray = gray[y:y+h, x:x+w]
        color = frame[y:y+h, x:x+w]
        # remember
        id_, conf = recognizer.predict(gray)
        if conf >= 4 and conf <= 85:
            # print(id_)
            # print(labelids[id_])
            font = cv2.FONT_HERSHEY_SIMPLEX
            name = labelids[id_]
            color = (255, 255, 255)
            stroke = 2
            cv2.putText(frame, name, (x, y), font, 1, color, stroke, cv2.LINE_AA)

        img_item = "My-image.png"
        cv2.imwrite(img_item, color)

        color = (255, 0, 0)  # BGR 0-255
        stroke = 2
        end_cord_y = y + h
        end_cord_x = x + w
        cv2.rectangle(frame, (x, y), (end_cord_x, end_cord_y), color, stroke)

    cv2.imshow('frame', frame)
    if cv2.waitKey(20) & 0xFF == ord('q'):
        break

capt.release()
cv2.destroyAllWindows()

static MIN_COMMON_WORDS in Database.h

I'm not sure why MIN_COMMON_WORDS is declared in Database.h and not in the cpp file. Maybe the intention is that the library user could change this value programmatically, like a parameter. That will not work, however, since static variables at namespace scope are independent in each translation unit, so setting it from your program code will not affect the value in Database.cpp.

This is probably also the reason why you get a compiler warning like

/usr/wiss/demmeln/work/slam/ldso/LDSO/thirdparty/DBoW3/src/Database.h:31:12: warning: ‘DBoW3::MIN_COMMON_WORDS’ defined but not used [-Wunused-variable]
 static int MIN_COMMON_WORDS = 5;

every time you include Database.h.

Solution? If you want a configurable parameter, add it to the Database class as a member. If you want a constant, make it static const, or else move it to the cpp file.
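The extern-variable variant of that fix can be sketched in one file (both "files" are collapsed here for brevity; in the real library the declaration would live in Database.h and the definition in Database.cpp):

```cpp
#include <cassert>

// --- Database.h (sketch) ---------------------------------------
// extern: a declaration only; all translation units that include the
// header refer to the SAME object, unlike `static int ... = 5;`,
// which gives every translation unit its own private copy.
extern int MIN_COMMON_WORDS;

// --- Database.cpp (sketch) -------------------------------------
// The single definition the linker resolves everything against.
int MIN_COMMON_WORDS = 5;
```

With this layout, assigning MIN_COMMON_WORDS from client code actually changes the value Database.cpp reads, which is presumably what the header placement intended.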

saved vocabulary bigger than database file?

When running the utils/demo_generation sample with a few images, the console output seems fine and two files are generated in the current folder:

-rw-r--r--   1 u  staff    384686 Nov 12 08:55 small_db.yml.gz
-rw-r--r--   1 u  staff    665344 Nov 12 08:55 small_voc.yml.gz

It seems that the vocabulary should be embedded in the database, yet the vocabulary file is larger?

When checking the content of the YAML, we can see that the "small_voc.yml.gz" contains descriptors for each node while the "small_db.yml.gz" doesn't.

Is this by design or an issue?

Floating point descriptor - Error when training

Hi, I noticed that when using a normalized float descriptor like SuperPoint, there were some issues when training: the weights differed depending on how the descriptor was scaled. Upon debugging, I noticed that in the transform performed while computing the node weights, we copy a floating-point value (the distance of the descriptor from the node) into a uint64 variable. This forces the value to 0 or 1 for a float descriptor and reduces the resolution. Is there a reason why this was done? Thanks!
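The resolution loss described above is easy to reproduce in isolation: assigning a floating-point distance to a uint64 truncates toward zero, so any distance of a normalized descriptor in [0, 1) collapses to 0:

```cpp
#include <cassert>
#include <cstdint>

// Truncating a float distance into an integer accumulator, as described
// above, discards everything after the decimal point.
std::uint64_t truncateDistance(double d) {
    return static_cast<std::uint64_t>(d);  // e.g. 0.37 -> 0, 1.9 -> 1
}
```

For binary descriptors the distance is already an integer (a Hamming distance), so the truncation is harmless there, which may be why it went unnoticed.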

matrix.cpp:307: error: (-215) s >= 0

I tried this project and ran into a problem at Vocabulary voc("small_voc.yml.gz"); the error is as follows:
OpenCV Error: Assertion failed (s >= 0) in cv::setSize, file C:\build\master_winpack-build-win64-vc14\opencv\modules\core\src\matrix.cpp, line 307
C:\build\master_winpack-build-win64-vc14\opencv\modules\core\src\matrix.cpp:307: error: (-215) s >= 0 in function cv::setSize
I don't know how to fix it.

Akaze descriptors issue

Hi, I extract AKAZE descriptors from my images, and my vocabulary is orbvoc.dbow3. The QueryResults scores are much bigger than with ORB descriptors, e.g. 0.5 (AKAZE) vs 0.009 (ORB). Is orbvoc.dbow3 unsuitable for AKAZE descriptors?
