
dxslam's People

Contributors

cedrusx, ivipsourcecode, oldaaaa


dxslam's Issues

How to run RGB-D Example

I know the command is as follows:

./Examples/RGB-D/rgbd_tum Vocabulary/DXSLAM.fbow Examples/RGB-D/TUMX.yaml PATH_TO_SEQUENCE_FOLDER ASSOCIATIONS_FILE OUTPUT/FEATURE/PATH

my command is:

./Examples/RGB-D/rgbd_tum Vocabulary/DXSLAM.fbow  Examples/RGB-D/TUM3.yaml dataset/TUM/rgbd_dataset_freiburg3_walking_halfsphere/ ./ hf-net/output/feature/path/

When I run this command, nothing happens.
I want to know where ASSOCIATIONS_FILE comes from.
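For reference: the ASSOCIATIONS_FILE is not part of the dataset download. The TUM RGB-D benchmark provides an associate.py tool that pairs the entries of rgb.txt and depth.txt by timestamp (`python associate.py rgb.txt depth.txt > associations.txt`). A minimal C++ sketch of the same pairing idea (illustration only, not DXSLAM code; the 0.02 s threshold matches associate.py's default):

```cpp
#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

// Greedily pair each RGB timestamp with the closest unused depth timestamp,
// keeping only pairs closer than max_diff seconds (associate.py defaults to 0.02).
std::vector<std::pair<std::size_t, std::size_t>> AssociateByTimestamp(
        const std::vector<double>& rgb, const std::vector<double>& depth,
        double max_diff = 0.02) {
    std::vector<std::pair<std::size_t, std::size_t>> matches;
    std::vector<bool> used(depth.size(), false);
    for (std::size_t i = 0; i < rgb.size(); ++i) {
        double best = max_diff;
        std::size_t bestJ = depth.size();  // sentinel: no match found yet
        for (std::size_t j = 0; j < depth.size(); ++j) {
            double d = std::fabs(rgb[i] - depth[j]);
            if (!used[j] && d < best) { best = d; bestJ = j; }
        }
        if (bestJ < depth.size()) {
            used[bestJ] = true;
            matches.emplace_back(i, bestJ);
        }
    }
    return matches;
}
```

Each line of the generated file then lists the matched rgb and depth timestamps with their image paths.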

terminate called after throwing an instance of 'std::runtime_error'

I am trying to run dxslam. A few files were missing, such as ORBvoc.txt, so I took it from the ORB-SLAM folder. The TUM1.yaml file was also missing, which I likewise copied from the ORB-SLAM folder. After running the RGB-D example I get the following error.

Loading ORB Vocabulary. This could take a while...
terminate called after throwing an instance of 'std::runtime_error'
  what():  Vocabulary::fromStream invalid signature
Aborted (core dumped)

I get a compile error. What is going on here?

CMakeFiles/DXSLAM.dir/build.make:446: recipe for target 'CMakeFiles/DXSLAM.dir/src/Initializer.cc.o' failed
make[2]: *** [CMakeFiles/DXSLAM.dir/src/Initializer.cc.o] Error 1
CMakeFiles/Makefile2:104: recipe for target 'CMakeFiles/DXSLAM.dir/all' failed
make[1]: *** [CMakeFiles/DXSLAM.dir/all] Error 2
Makefile:83: recipe for target 'all' failed
make: *** [all] Error 2

Tracking lost easily on TUM fr3_walking_xyz sequence

Hi, I followed the instructions and successfully ran DXSLAM on the TUM dataset fr3_walking_xyz sequence. Everything seems fine at the beginning, but tracking soon gets lost and never recovers.

I use TUM3.yaml from ORB-SLAM2 and the DXSLAM.fbow in Vocabulary. The outputs are as follows:

 DXSLAM 


Loading ORB Vocabulary. This could take a while...
Vocabulary loaded!
load time:37.5962


Camera Parameters: 
- fx: 535.4
- fy: 539.2
- cx: 320.1
- cy: 247.6
- k1: 0
- k2: 0
- p1: 0
- p2: 0
- fps: 30
- color order: RGB (ignored if grayscale)

Depth Threshold (Close/Far Points): 2.98842

-------
Start processing sequence ...
Images in the sequence: 827

New map created with 340 points
match numbers: 468
nmatchesMap: 206
match numbers: 297
nmatchesMap: 32
match numbers: 222
nmatchesMap: 24
match numbers: 278
nmatchesMap: 33
match numbers: 298
nmatchesMap: 56
match numbers: 303
nmatchesMap: 54
match numbers: 143
nmatchesMap: 41
match numbers: 164
nmatchesMap: 36
match numbers: 169
nmatchesMap: 42
-------

median tracking time: 0.368993
mean tracking time: 0.284506

Saving keyframe trajectory to KeyFrameTrajectory.txt ...

trajectory saved!

There may be an error in the README

Thank you for generously sharing the open source code.
In README.md
3. RGB-D Example
3.4 Execute the following......
./Examples/RGB-D/rgbd_tum Vocabulary/ORBvoc.txt Examples/RGB-D/TUMX.yaml ./Examples/RGB-D/rgbd_tum Vocabulary/ORBvoc.txt Examples/RGB-D/TUMX.yaml PATH_TO_SEQUENCE_FOLDER ASSOCIATIONS_FILE OUTPUT/FEATURE/PATH
Following this instruction causes an error. Perhaps DXSLAM.fbow should be used here instead of ORBvoc.txt:
./Examples/RGB-D/rgbd_tum Vocabulary/DXSLAM.fbow Examples/RGB-D/TUMX.yaml PATH_TO_SEQUENCE_FOLDER ASSOCIATIONS_FILE OUTPUT/FEATURE/PATH
After the replacement, it works fine.

This may be a BUG in /src/Matcher.cc

I think there may be a bug in src/Matcher.cc on lines 68 to 75. The original code is as follows:

        // The size of the window will depend on the viewing direction
        float r = RadiusByViewingCos(pMP->mTrackViewCos);

        if(bFactor)
            r*=th;

        const std::vector<size_t> vIndices =
                F.GetFeaturesInArea(pMP->mTrackProjX,pMP->mTrackProjY,1,nPredictedLevel-1,nPredictedLevel);

The comment says "The size of the window will depend on the viewing direction", but here the size is fixed to 1 pixel, which is strange. After testing, I found that setting the size to 1.2*r makes tracking more robust.
The modified code is as follows:

        // The size of the window will depend on the viewing direction
        float r = RadiusByViewingCos(pMP->mTrackViewCos);

        if(bFactor)
            r*=th;

        const std::vector<size_t> vIndices =
                F.GetFeaturesInArea(pMP->mTrackProjX,pMP->mTrackProjY,1.2*r,nPredictedLevel-1,nPredictedLevel);

Is the clone code in README correct?

Hi, thank you for open-sourcing this.

I found that the git clone command is the following:
git clone https://github.com/raulmur/DXSLAM.git DXSLAM
Is the GitHub user name correct?

Separate scripts to download vocabulary and hf-net model

It is annoying that the build.sh script tries to download some data every time it runs, even if the data have already been downloaded.
How about moving the wget and tar commands out of build.sh into one or two separate scripts?
Thanks.

Also, it may be better to add a -p flag to all the mkdir commands.

OpenVINO model conversion

Hi, first of all thanks for your work.

I am trying to reproduce the conversion of the HF-Net model used in your code to the OpenVINO IR format.
Is it possible to clarify the following:

  1. Which version of model optimizer did you use to convert the model?
  2. Is it possible to share parameters for mo_tf.py? (model optimizer converting script)
    When I tried to convert HF-Net, the model optimizer could not infer shapes/values for each output node (global descriptor, local descriptor and keypoints).
    I assume you used the HF-Net TensorFlow model as in the readme and the saved_model_dir parameter with mo_tf.py.

Also, as mentioned in your paper, "Most of the layers in HF-Net can be directly processed by the Model Optimizer, except the bilinear interpolation operation for local descriptor upsampling, which is not yet supported". Could you point out/release the part of the code for the post-processing stage after OpenVINO inference?

ATE.RMSE

I use TUM's evaluate_ate.py to compute the ATE RMSE, but I can't reproduce the paper's result. Also, why is the trajectory different between runs when I use the same hf-net output?
What tool did you use to compute the ATE RMSE? I would like to reproduce your experiment.
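In case it helps others: evaluate_ate.py first associates the two trajectories by timestamp, aligns them with Horn's method, and only then reports the RMSE of the translational differences, so small differences in association or alignment change the number. A sketch of just the final RMSE step (assuming association and alignment are already done; `Vec3` and `AteRmse` are illustrative names, not DXSLAM code):

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { double x, y, z; };

// Root-mean-square of the translational distances between already-aligned,
// already-associated trajectory samples (the last step of evaluate_ate.py).
double AteRmse(const std::vector<Vec3>& est, const std::vector<Vec3>& gt) {
    std::size_t n = std::min(est.size(), gt.size());
    double sum = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        double dx = est[i].x - gt[i].x;
        double dy = est[i].y - gt[i].y;
        double dz = est[i].z - gt[i].z;
        sum += dx * dx + dy * dy + dz * dz;
    }
    return n ? std::sqrt(sum / n) : 0.0;
}
```

The evo tool computes the same metric with `evo_ape tum` when alignment is enabled, which may explain small discrepancies between tools.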

Get features from a GPU-trained hf-net

First of all, thank you for your great work.
I want to use DXSLAM to get features from an hf-net re-trained on a GPU. I compared the two graphs (yours, optimized with OpenVINO, and mine, trained on a GPU) in TensorBoard and observed a significant difference. The optimized graph has two extra parts, simple_nms and top_k_keypoints, which contain the two tensors used in the GetFeature.py script:
pred/simple_nms/radius:0
pred/top_k_keypoints/k:0
These tensors do not exist in the re-trained hfnet.
Is this difference a modification made for some reason, or is it produced by the OpenVINO optimization?

How can I get features from an hf-net re-trained on a GPU?

how do I train the bag of words in this code?

Sorry to ask a potentially basic question: how do I train the bag of words used in this code? To be precise, how can fbow be used to train a vocabulary on HF-Net features? The fbow examples only generate vocabularies for several mainstream descriptors, such as ORB and SIFT. Thank you for your answers.
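For what it's worth, fbow builds its vocabulary by hierarchical k-means clustering of training descriptors; since HF-Net local descriptors are float vectors (unlike ORB's binary strings), the distance involved is plain squared Euclidean distance. A toy sketch of the core quantization step used during such clustering (hypothetical code, not the fbow API):

```cpp
#include <cstddef>
#include <limits>
#include <vector>

// Assign a float descriptor to its nearest cluster centroid by squared
// Euclidean distance. Hierarchical k-means (as in fbow's vocabulary
// creation) repeats this assignment step at every tree level; with
// HF-Net's 256-d float descriptors no Hamming special-casing is needed.
std::size_t NearestCentroid(const std::vector<std::vector<float>>& centroids,
                            const std::vector<float>& desc) {
    std::size_t best = 0;
    float bestDist = std::numeric_limits<float>::max();
    for (std::size_t c = 0; c < centroids.size(); ++c) {
        float d = 0.f;
        for (std::size_t i = 0; i < desc.size(); ++i) {
            float diff = desc[i] - centroids[c][i];
            d += diff * diff;
        }
        if (d < bestDist) { bestDist = d; best = c; }
    }
    return best;
}
```

In practice one would dump HF-Net local descriptors from many training images and feed them to fbow's vocabulary-creation utility rather than reimplement the clustering.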

compile error

‘int DXSLAM::Initializer::CheckRT(const cv::Mat&, const cv::Mat&, const std::vector<cv::KeyPoint>&, const std::vector<cv::KeyPoint>&, const std::vector<std::pair<int, int> >&, std::vector<bool>&, const cv::Mat&, std::vector<cv::Point3_<float> >&, float, std::vector<bool>&, float&)’:
/home/mengzhe/dxslam/src/Initializer.cc:822:40: error: ‘isfinite’ was not declared in this scope
if(!isfinite(p3dC1.at<float>(0)) || !isfinite(p3dC1.at<float>(1)) || !isfinite(p3dC1.at<float>(2)))
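A common cause: under newer GCC/libstdc++ versions, an unqualified isfinite call is not found unless <cmath> is included and the call is qualified as std::isfinite. A minimal sketch of the fix (`Point3` here is a stand-in for illustration, not the cv::Mat used in Initializer.cc):

```cpp
#include <cmath>  // provides std::isfinite

// Stand-in for the triangulated point checked in Initializer::CheckRT.
struct Point3 { float x, y, z; };

// The same check as in the error message, with the call qualified so it
// resolves on recent compilers.
bool IsFinitePoint(const Point3& p) {
    return std::isfinite(p.x) && std::isfinite(p.y) && std::isfinite(p.z);
}
```

In Initializer.cc the equivalent change is adding `#include <cmath>` at the top and writing `std::isfinite(...)` on line 822.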

Online version

Hi, I came across this paper today and it looks really nice!

I'm really interested in the online CPU implementation of the feature detector using the OpenVINO framework. Are you planning to release that part of the code?

Best,
Matias

Exact version of prerequisites

Hi,
I'm trying to run the TUM RGB-D demo, but I'm a total newcomer to this field and I have spent a lot of time configuring the environment. Could you please tell me the exact versions of the prerequisites? Which CUDA and OpenCV versions can run this work?
Looking forward to your reply. Thank you so much.

The error obtained when reproducing the TUM RGB-D data set is very large.

I downloaded the Docker image (yerld/dxslam-built) and tested the TUM RGB-D rgbd_dataset_fr3_walking_xyz dataset. But the KeyFrameTrajectory.txt trajectory drawn using evo is very different from the ground truth, and the same is true for rgbd_dataset_freiburg1_xyz.
What is wrong with my experiment?

