OpenVINS

ROS 1 Workflow | ROS 2 Workflow | ROS Free Workflow

Welcome to the OpenVINS project! The OpenVINS project houses some core computer vision code along with a state-of-the-art filter-based visual-inertial estimator. The core filter is an Extended Kalman filter which fuses inertial information with sparse visual feature tracks. These visual feature tracks are fused leveraging the Multi-State Constraint Kalman Filter (MSCKF) sliding window formulation, which allows 3D features to update the state estimate without directly estimating the feature states in the filter. Inspired by graph-based optimization systems, the included filter has modularity allowing for convenient covariance management with a proper type-based state system. Please take a look at the feature list below for full details on what the system supports.

News / Events

  • May 11, 2023 - Inertial intrinsic support released as part of v2.7 along with a few bug fixes and improvements to stereo KLT tracking. Please check out the release page for details.
  • April 15, 2023 - Minor update to v2.6.3 to support incremental feature triangulation of active features for downstream applications, faster zero-velocity update, small bug fixes, some example realsense configurations, and cached fast state prediction. Please check out the release page for details.
  • April 3, 2023 - We have released a monocular plane-aided VINS, termed ov_plane, which leverages the OpenVINS project. Both now support the released Indoor AR Table dataset.
  • July 14, 2022 - Improved feature extraction logic for >100hz tracking, some bug fixes and updated scripts. See v2.6.1 PR#259 and v2.6.2 PR#264.
  • March 14, 2022 - Initial dynamic initialization open sourcing, asynchronous subscription to inertial readings and publishing of odometry, support for lower frequency feature tracking. See v2.6 PR#232 for details.
  • December 13, 2021 - New YAML configuration system, ROS2 support, Docker images, robust static initialization based on disparity, internal logging system to reduce verbosity, image transport publishers, dynamic number of features support, and other small fixes. See v2.5 PR#209 for details.
  • July 19, 2021 - Camera classes, masking support, alignment utility, and other small fixes. See v2.4 PR#117 for details.
  • December 1, 2020 - Released improved memory management, active feature pointcloud publishing, limiting number of features in update to bound compute, and other small fixes. See v2.3 PR#117 for details.
  • November 18, 2020 - Released groundtruth generation utility package, vicon2gt to enable creation of groundtruth trajectories in a motion capture room for evaluating VIO methods.
  • July 7, 2020 - Released zero velocity update for vehicle applications and direct initialization when standing still. See PR#79 for details.
  • May 18, 2020 - Released secondary pose graph example repository ov_secondary based on VINS-Fusion. OpenVINS now publishes marginalized feature track, feature 3d position, and first camera intrinsics and extrinsics. See PR#66 for details and discussion.
  • April 3, 2020 - Released v2.0 update to the codebase with some key refactoring, ros-free building, improved dataset support, and single inverse depth feature representation. Please check out the release page for details.
  • January 21, 2020 - Our paper has been accepted for presentation in ICRA 2020. We look forward to seeing everybody there! We have also added links to a few videos of the system running on different datasets.
  • October 23, 2019 - OpenVINS placed first in the IROS 2019 FPV Drone Racing VIO Competition. We will be giving a short presentation at the workshop at 12:45pm in Macau on November 8th.
  • October 1, 2019 - We will be presenting at the Visual-Inertial Navigation: Challenges and Applications workshop at IROS 2019. The submitted workshop paper can be found at this link.
  • August 21, 2019 - Open sourced ov_maplab for interfacing OpenVINS with the maplab library.
  • August 15, 2019 - Initial release of OpenVINS repository and documentation website!

Project Features

  • Sliding window visual-inertial MSCKF
  • Modular covariance type system
  • Comprehensive documentation and derivations
  • Extendable visual-inertial simulator
    • On manifold SE(3) b-spline
    • Arbitrary number of cameras
    • Arbitrary sensor rate
    • Automatic feature generation
  • Six different feature representations (see the sketch after this list)
    1. Global XYZ
    2. Global inverse depth
    3. Anchored XYZ
    4. Anchored inverse depth
    5. Anchored MSCKF inverse depth
    6. Anchored single inverse depth
  • Calibration of sensor intrinsics and extrinsics
    • Camera to IMU transform
    • Camera to IMU time offset
    • Camera intrinsics
    • Inertial intrinsics (including g-sensitivity)
  • Environmental SLAM feature
    • OpenCV ARUCO tag SLAM features
    • Sparse feature SLAM features
  • Visual tracking support
    • Monocular camera
    • Stereo camera (synchronized)
    • Binocular cameras (synchronized)
    • KLT or descriptor based
    • Masked tracking
  • Static and dynamic state initialization
  • Zero velocity detection and updates
  • Out of the box evaluation on EuRoC MAV, TUM-VI, UZH-FPV, KAIST Urban, and other VIO datasets
  • Extensive evaluation suite (ATE, RPE, NEES, RMSE, etc..)
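
As a rough sketch of what two of these representations look like (standard conventions from the VINS literature; the symbols here are illustrative and not lifted from the codebase), a global XYZ feature is stored directly as its position in the global frame, while the anchored MSCKF inverse-depth form stores normalized image coordinates and an inverse depth in an anchor camera frame:

    \mathbf{p}_f^{G} = \begin{bmatrix} x & y & z \end{bmatrix}^\top
    \qquad\qquad
    \mathbf{p}_f^{A} = \frac{1}{\rho}\begin{bmatrix}\alpha & \beta & 1\end{bmatrix}^\top,
    \quad
    \mathbf{p}_f^{G} = \mathbf{R}_{A}^{G}\,\mathbf{p}_f^{A} + \mathbf{p}_{A}^{G}

where (α, β) are the normalized coordinates in the anchor camera, ρ is the inverse depth, and (R_A^G, p_A^G) is the pose of the anchor frame in the global frame.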

Codebase Extensions

  • ov_plane - A real-time monocular visual-inertial odometry (VIO) system which leverages environmental planes. At its core it presents an efficient, robust monocular plane detection algorithm which does not require additional sensing modalities such as a stereo or depth camera, or a neural network. The plane detection and tracking algorithm enables real-time regularization of point features to environmental planes which are either maintained in the state vector as long-lived planes, or marginalized for efficiency. Planar regularities are applied to both in-state SLAM and out-of-state MSCKF point features, enabling long-term point-to-plane loop-closures due to the large spatial volume of planes.

  • vicon2gt - This utility was created to generate groundtruth trajectories using a motion capture system (e.g. Vicon or OptiTrack) for use in evaluating visual-inertial estimation systems. Specifically, we calculate the inertial IMU state (full 15 dof) at the camera frequency and generate a groundtruth trajectory similar to those provided by the EurocMav datasets. It fuses the inertial and motion capture information and estimates all unknown spatial-temporal calibrations between the two sensors.

  • ov_maplab - This codebase contains the interface wrapper for exporting visual-inertial runs from OpenVINS into the ViMap structure taken by maplab. The state estimates and raw images are appended to the ViMap as OpenVINS runs through a dataset. After completion of the dataset, features are re-extracted and triangulated with maplab's feature system. This can be used to merge multi-session maps, or to perform a batch optimization after first running the data through OpenVINS. Some examples have been provided along with a helper script to export trajectories into the standard groundtruth format.

  • ov_secondary - This is an example secondary thread which provides loop closure in a loosely coupled manner for OpenVINS. This is a modification of the code originally developed by the HKUST aerial robotics group and can be found in their VINS-Fusion repository. Here we stress that this is a loosely coupled method, thus no information is returned to the estimator to improve the underlying OpenVINS odometry. This codebase has been modified in a few key areas including: exposing more loop closure parameters, subscribing to camera intrinsics, simplifying configuration such that only topics need to be supplied, and some tweaks to the loop closure detection to improve frequency.

Credit / Licensing

This code was written by the Robot Perception and Navigation Group (RPNG) at the University of Delaware. If you have any issues with the code please open an issue on our GitHub page with relevant implementation details and references. For researchers that have leveraged or compared to this work, please cite the following:

@Conference{Geneva2020ICRA,
  Title      = {{OpenVINS}: A Research Platform for Visual-Inertial Estimation},
  Author     = {Patrick Geneva and Kevin Eckenhoff and Woosik Lee and Yulin Yang and Guoquan Huang},
  Booktitle  = {Proc. of the IEEE International Conference on Robotics and Automation},
  Year       = {2020},
  Address    = {Paris, France},
  Url        = {\url{https://github.com/rpng/open_vins}}
}

The codebase and documentation is licensed under the GNU General Public License v3 (GPL-3). You must preserve the copyright and license notices in your derivative work and make available the complete source code with modifications under the same license (see the license text for details; this is not legal advice).


open_vins's Issues

Discrete noise covariance matrices

I am working with OpenVINS and have noticed something that is unclear to me, in particular in "ov_msckf/src/state/Propagator.cpp" at lines 327-328, reported below:

    Qc.block(6,6,3,3) = _noises.sigma_wb_2/dt*Eigen::Matrix<double,3,3>::Identity();
    Qc.block(9,9,3,3) = _noises.sigma_ab_2/dt*Eigen::Matrix<double,3,3>::Identity();

Based on what is reported in "Indirect Kalman Filter for 3D Attitude Estimation" (Technical Report Number 2005-002, Rev. 57, March 2005) and on my own understanding, I would say that the division "/" should be replaced by a multiplication "*" for the biases, as follows:

    Qc.block(6,6,3,3) = _noises.sigma_wb_2*dt*Eigen::Matrix<double,3,3>::Identity();
    Qc.block(9,9,3,3) = _noises.sigma_ab_2*dt*Eigen::Matrix<double,3,3>::Identity();

Thanks.
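
For reference, the first-order continuous-to-discrete noise conversions usually quoted from that tech report (Eqs. 129-130) are, for the gyroscope white noise and the gyroscope bias random walk respectively (stated here as background, not as a verdict on which form the code intends):

    \sigma_{g,d}^2 = \frac{\sigma_{g}^2}{\Delta t}
    \qquad\qquad
    \sigma_{bg,d}^2 = \sigma_{bg}^2\,\Delta t

with the analogous expressions for the accelerometer terms.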

About the error state propagation

Hi Patrick,

Thanks for sharing the codes!

I read and understand the derivation about the error-state propagation.

But I have a question about the difference between your derivation and the one described in Joan Sola's "Quaternion kinematics for the error-state Kalman filter". In Joan's paper, the error-state kinematics is derived by combining the kinematics of the true and nominal states, and the error-state propagation is then obtained through a chosen integration method. This should also be correct, but the result (using Euler integration as an example, picture below) is different from yours (the major difference being the missing Jr_so3 term in Joan's derivation).
I'm wondering why this happens. Thanks!

[Screenshot from 2020-04-23 22-45-40]
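
For reference, the SO(3) right Jacobian (the Jr_so3 term mentioned above) is commonly defined as (a standard result from the literature, not quoted from the OpenVINS derivation):

    \mathbf{J}_r(\boldsymbol{\theta}) = \mathbf{I}
    - \frac{1-\cos\|\boldsymbol{\theta}\|}{\|\boldsymbol{\theta}\|^{2}}\,\lfloor\boldsymbol{\theta}\rfloor_{\times}
    + \frac{\|\boldsymbol{\theta}\|-\sin\|\boldsymbol{\theta}\|}{\|\boldsymbol{\theta}\|^{3}}\,\lfloor\boldsymbol{\theta}\rfloor_{\times}^{2}

One common source of such differences between derivations is whether this term is kept or approximated by the identity for small rotation increments.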

questions about feature tracking

Hi,
I have two questions about the tracking part and really hope you could help.

(1) In the "check maximum baseline" part of FeatureInitializer.cpp, is there any reference for the way the base_line is calculated as
    double base_line = ((Q.block(0,1,3,2)).transpose() * p_CiinA).norm();

(2) In the KLT-based stereo tracking, only features that are successfully tracked in both cameras are kept as good features.
If we do not do feature matching between the left and right cameras and instead run KLT on each camera individually, then, given the extrinsics of the two cameras, features that cannot be matched between the two cameras but can still be tracked in one of them would also be used to update the state. Do you have any idea how these two methods compare in performance?
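
Regarding question (1), one possible reading of that expression (an interpretation, not confirmed against the code comments): if Q is an orthonormal basis whose first column is the feature's bearing direction in the anchor frame, then its remaining two columns span the plane orthogonal to the bearing, so

    b = \left\| \mathbf{Q}_{(:,\,2:3)}^{\top}\,\mathbf{p}_{C_i\,\text{in}\,A} \right\|

measures the component of the camera translation orthogonal to the bearing, i.e. the effective triangulation baseline.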

Thanks,

Propagator::select_imu_readings(): There are not enough measurements to propagate with 0 sec missing

So I have created a system that uses the non-ROS OpenVINS and sends IMU/camera data from the dataset files, similar to how the ROS example subscribes to the ROS bag. I seem to be having some timing issues that result in the following being printed to stderr:
Propagator::select_imu_readings(): There are not enough measurements to propagate with 0 sec missing
Propagator::select_imu_readings(): IMU-CAMERA time offset is likely messed up, check time offset value!!!
/tmp/workspace/catkin_ws_ov/src/open_vins/ov_msckf/src/state/Propagator.cpp on line 207

This is causing the VIO to drift a lot. I'm a little confused about what causes this issue, so it is hard for me to debug my system. If you could give some insight into why an issue like this would happen, that would help me a lot.

feature id in simulation

Hi,

In Simulator::get_next_cam in src/sim/Simulator.cpp, this part modifies the feature ids

        // Append the map size so all cameras have unique features in them (but the same map)
        // Only do this if we are not enforcing stereo constraints between all our cameras
        for (size_t f=0; f<uvs.size() && !use_stereo; f++){
            uvs.at(f).first += i*featmap.size();
        }

in feed_measurement_simulation in /src/track/TrackSIM.cpp the ids are

        // Update our feature database, with these new observations
        // NOTE: we add the "currid" since we need to offset the simulator
        // NOTE: ids by the number of aruco tags we have specified as tracking
        for(const auto &feat : feats.at(i)) {

            // Get our id value
            size_t id = feat.first+currid;

it seems that each feature measurement in one frame has a unique feature id, so I am wondering how the tracker knows that measurements across different frames come from the same feature?

compare variable(type of double)

A trivial question puzzles me.
There are many comparisons between two double variables in open_vins, like:

    if (timefeat == timestamp) { // timefeat and timestamp are double
        has_timestamp = true;
        break;
    }

However, I didn't find the definition of operator "==" that is used here.
Also, in some std::map usages, double is the key; I didn't find a hash function or a custom "<" or "==" either.
Maybe I missed important details; could you please tell me where the implementation is?
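
For context on the question (general C++ facts, not taken from the OpenVINS source): operator== on double is the built-in floating-point comparison, and std::map<double, ...> is an ordered container that uses the built-in operator<, so no hash function is involved. When timestamps may not be bit-identical, a common pattern is an epsilon comparison, e.g.:

    #include <cmath>

    // Treat two timestamps as equal if they differ by less than a small tolerance.
    // The 1e-9 s tolerance is an illustrative choice, not a value from the codebase.
    inline bool timestamps_equal(double t0, double t1, double tol = 1e-9) {
      return std::fabs(t0 - t1) < tol;
    }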

sequential measurement update to reduce process load

In the measurement update, the QR decomposition using Givens rotations is not cheap, and H must be multiplied by Q. The Kalman gain K could instead be calculated using a sequential measurement update, since R is always diagonal. With this method no matrix inversion is required and no additional processing is needed.
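
A minimal Eigen sketch of the idea (illustrative only, not code from the repository): with a diagonal R, each scalar measurement row can be applied one at a time, so no matrix inversion or QR of the stacked system is needed:

    #include <Eigen/Dense>

    // Sequential (one scalar row at a time) Kalman update for z = H*x + n with a
    // diagonal measurement covariance R. Illustrative sketch only; names and
    // structure are not taken from the OpenVINS codebase. Note that for exact
    // equivalence with the batch update, the residual should be recomputed from
    // the updated state after every scalar step.
    void sequential_update(Eigen::VectorXd &x, Eigen::MatrixXd &P,
                           const Eigen::MatrixXd &H, const Eigen::VectorXd &res,
                           const Eigen::VectorXd &R_diag) {
      for (int i = 0; i < res.size(); i++) {
        Eigen::RowVectorXd Hi = H.row(i);
        Eigen::MatrixXd S = Hi * P * Hi.transpose();   // 1x1 innovation covariance
        double s = S(0, 0) + R_diag(i);
        Eigen::VectorXd K = P * Hi.transpose() / s;    // gain for this scalar row
        x += K * res(i);
        Eigen::MatrixXd KHP = K * Hi * P;              // evaluate before subtracting to avoid aliasing
        P -= KHP;                                      // rank-1 covariance downdate
      }
    }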

Error while building ov_core

[ 68%] Built target ov_core_lib
[ 73%] Linking CXX executable /home/ivy/workspace/catkin_ws_ov/devel/.private/ov_core/lib/ov_core/test_tracking
[ 78%] Linking CXX executable /home/ivy/workspace/catkin_ws_ov/devel/.private/ov_core/lib/ov_core/test_repeat
[ 84%] Linking CXX executable /home/ivy/workspace/catkin_ws_ov/devel/.private/ov_core/lib/ov_core/test_simulator
/home/ivy/workspace/catkin_ws_ov/devel/.private/ov_core/lib/libov_core_lib.so: undefined reference to `cv::aruco::detectMarkers(cv::_InputArray const&, cv::Ptr<cv::aruco::Dictionary> const&, cv::_OutputArray const&, cv::_OutputArray const&, cv::Ptr<cv::aruco::DetectorParameters> const&, cv::_OutputArray const&, cv::_InputArray const&, cv::_InputArray const&)'
collect2: error: ld returned 1 exit status
/home/ivy/workspace/catkin_ws_ov/devel/.private/ov_core/lib/libov_core_lib.so: undefined reference to `cv::aruco::detectMarkers(cv::_InputArray const&, cv::Ptr<cv::aruco::Dictionary> const&, cv::_OutputArray const&, cv::_OutputArray const&, cv::Ptr<cv::aruco::DetectorParameters> const&, cv::_OutputArray const&, cv::_InputArray const&, cv::_InputArray const&)'
CMakeFiles/test_repeat.dir/build.make:219: recipe for target '/home/ivy/workspace/catkin_ws_ov/devel/.private/ov_core/lib/ov_core/test_repeat' failed
make[2]: *** [/home/ivy/workspace/catkin_ws_ov/devel/.private/ov_core/lib/ov_core/test_repeat] Error 1
collect2: error: ld returned 1 exit status
CMakeFiles/Makefile2:2418: recipe for target 'CMakeFiles/test_repeat.dir/all' failed
make[1]: *** [CMakeFiles/test_repeat.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
CMakeFiles/test_simulator.dir/build.make:219: recipe for target '/home/ivy/workspace/catkin_ws_ov/devel/.private/ov_core/lib/ov_core/test_simulator' failed
make[2]: *** [/home/ivy/workspace/catkin_ws_ov/devel/.private/ov_core/lib/ov_core/test_simulator] Error 1
CMakeFiles/Makefile2:2381: recipe for target 'CMakeFiles/test_simulator.dir/all' failed
make[1]: *** [CMakeFiles/test_simulator.dir/all] Error 2
/home/ivy/workspace/catkin_ws_ov/devel/.private/ov_core/lib/libov_core_lib.so: undefined reference to `cv::aruco::detectMarkers(cv::_InputArray const&, cv::Ptr<cv::aruco::Dictionary> const&, cv::_OutputArray const&, cv::_OutputArray const&, cv::Ptr<cv::aruco::DetectorParameters> const&, cv::_OutputArray const&, cv::_InputArray const&, cv::_InputArray const&)'
collect2: error: ld returned 1 exit status
CMakeFiles/test_tracking.dir/build.make:219: recipe for target '/home/ivy/workspace/catkin_ws_ov/devel/.private/ov_core/lib/ov_core/test_tracking' failed
make[2]: *** [/home/ivy/workspace/catkin_ws_ov/devel/.private/ov_core/lib/ov_core/test_tracking] Error 1
CMakeFiles/Makefile2:899: recipe for target 'CMakeFiles/test_tracking.dir/all' failed
make[1]: *** [CMakeFiles/test_tracking.dir/all] Error 2
Makefile:140: recipe for target 'all' failed
make: *** [all] Error 2

I'm trying to build open_vins, but it fails. How can I fix it?

Does open_vins run on ROS Melodic?

Question in the title.

For more info, I am trying to run open_vins on a Jetson AGX, which ships with Ubuntu 18.04, so I can't install ROS Kinetic unless I build the whole thing from source.

How to run simulation properly?

Hi, I am currently running the simulation with the command roslaunch ov_msckf pgeneva_sim.launch and the originally provided launch file. However, I am not able to run the simulation properly. It shows that none of the features are used in the MSCKF update. Any advice? Thanks!
[Screenshot from 2019-09-03 16-23-00]

imu noise in Propagator.cpp

Hi,
As the default noise parameter _noises is in the continuous case, for the bias driving noise, should we use _noises.sigma_wb_2*dt instead of _noises.sigma_wb_2/dt to convert to discrete according to (130)?

    // Note that we need to convert our continuous time noises to discrete
    // Equations (129) and (130) of Trawny tech report
    Eigen::Matrix<double,12,12> Qc = Eigen::Matrix<double,12,12>::Zero();
    Qc.block(0,0,3,3) = _noises.sigma_w_2/dt*Eigen::Matrix<double,3,3>::Identity();
    Qc.block(3,3,3,3) = _noises.sigma_a_2/dt*Eigen::Matrix<double,3,3>::Identity();
    Qc.block(6,6,3,3) = _noises.sigma_wb_2/dt*Eigen::Matrix<double,3,3>::Identity();
    Qc.block(9,9,3,3) = _noises.sigma_ab_2/dt*Eigen::Matrix<double,3,3>::Identity();

outlier rejection

Hi, if I understand correctly, there's no gating test (Eq. 6 in this paper) in your code. The only outlier rejection is the RANSAC used in tracking, is that correct? I am wondering whether there is a reason no gating test is used in this codebase?
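
For context, a gating (Mahalanobis / chi-squared) test of the kind referenced above typically looks like the sketch below (a generic formulation, not a claim about what OpenVINS does internally):

    #include <Eigen/Dense>

    // Chi-squared gating test for a residual r with Jacobian H, state covariance P,
    // and measurement noise covariance R. chi2_threshold would come from the
    // chi-squared distribution with dim(r) degrees of freedom (e.g. a lookup table).
    // Generic sketch; not taken from the OpenVINS source.
    bool passes_gating_test(const Eigen::VectorXd &r, const Eigen::MatrixXd &H,
                            const Eigen::MatrixXd &P, const Eigen::MatrixXd &R,
                            double chi2_threshold) {
      Eigen::MatrixXd S = H * P * H.transpose() + R;   // innovation covariance
      Eigen::VectorXd Sinv_r = S.ldlt().solve(r);      // S^{-1} r via Cholesky
      double gamma = r.dot(Sinv_r);                    // squared Mahalanobis distance
      return gamma < chi2_threshold;
    }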

loop through SLAM features

Hi, when looping through the current SLAM features and grabbing them for the update, is it possible that "feats_slam" already contains "feat2" from "feats_maxtracks"? If so, would "feat2" be used twice?

VioManager.cpp (lines 599-607):

    if(feat2 != nullptr) feats_slam.push_back(feat2);

How do I use my own datasets?

When I use my own dataset, the terminal shows the following error:
[ERROR] [1575476563.620017963]: cv_bridge exception: [8UC1] is not a color format. but [mono8] is. The conversion does not make sense.
I don't know how to solve it. Could you help me? Thank you!
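
This exception generally means the encoding of the incoming image message ("mono8") and the encoding requested from cv_bridge ("8UC1", or a color conversion) do not match. A minimal sketch of requesting the image explicitly as 8-bit grayscale (generic cv_bridge usage, not the exact OpenVINS subscriber code):

    #include <cv_bridge/cv_bridge.h>
    #include <sensor_msgs/Image.h>
    #include <sensor_msgs/image_encodings.h>

    void image_callback(const sensor_msgs::ImageConstPtr &msg) {
      cv_bridge::CvImageConstPtr cv_ptr;
      try {
        // Request the image as 8-bit grayscale; avoids asking cv_bridge for a
        // color conversion from a mono8 source.
        cv_ptr = cv_bridge::toCvShare(msg, sensor_msgs::image_encodings::MONO8);
      } catch (const cv_bridge::Exception &e) {
        return;  // handle/log the conversion error as appropriate
      }
      const cv::Mat &gray = cv_ptr->image;
      (void)gray;  // ... pass the grayscale image to the tracker
    }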

Reading bag file differently

Thank you very much for prompt reply for issue #31.
I appreciate your help!!

I have one more question: Is it possible to read V1_01_easy.bag in a separate terminal using rosbag play, for example: rosbag play V1_01_easy.bag --pause -s 10

To achieve this, which file should be modified? An example would be really helpful.
Looking forward to hearing from you. Thank you very much.

Odometry is drifting continuously, as if the drift is not being corrected

I added support to read a stereo feed and IMU; I am using the ZED Mini from Stereolabs.

I added the corresponding offline calibration parameters and IMU noise parameters, and tried modifying the "_sigma_px" parameters, but it does not seem to help much.

After the launch file starts, the inertial frame moves towards infinity, especially on the z axis.
Do you have any idea why this is happening?
I was wondering whether something in the KLT tracking is causing this; I also ran it with KLT set to false (discrete mode), and it broke with an exception.

any ideas?

OpenVINS standalone build

I am doing work with OpenVINS that requires it to be used without ROS. I have been able to hack around the build process to use OpenVINS in this way but I won't be able to pull updates from the upstream repo without a lot of manual work. I was wondering if you guys have considered building OpenVINS in a way that even the core libraries don't rely on ROS. Much of the core libraries are built in that way from what I have seen but they still #include ros/ros.h for the logging abilities. I think if OpenVINS were to have a standalone build to use as a library it would make it more widely used in a variety of different areas.

kalibr distortion model

Re: the kalibr distortion models, I want to make sure my understanding is correct (with the pinhole camera model)?

  • if fisheye, use the equidistant (equi) model
  • if not, use the radial-tangential (radtan) model

About the FEJ implementation

Hi,
In Propagator.cpp, when do_fej is enabled, why do you use line 280 instead of line 281 (and similarly 283/284, 288/289, 294/295)? I think line 281 is the correct one according to the derivation of the FEJ method.

Thanks : )

Optimization framework

Hi, thanks for the code! Do you have plans to incorporate g2o or Ceres for pose graph optimization into this?

Future Roadmap

Hi, when do you plan to release these two features?

  1. Leverage this sliding-window batch method to perform SFM initialization of all methods.
  2. Support for arbitrary Schmidt'ing of state elements allowing for modeling of their prior uncertainties but without optimizing their values online.

Kostia
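
For background on item 2, the textbook Schmidt-Kalman formulation (described here generically, not as a statement of what OpenVINS will implement) keeps the nuisance ("Schmidt") states and their cross-covariances in the filter but zeroes their portion of the Kalman gain; with the gain partitioned between active states a and Schmidt states s, the update uses the Joseph form, which remains consistent for any (suboptimal) gain:

    \tilde{\mathbf{K}} = \begin{bmatrix}\mathbf{K}_a \\ \mathbf{0}\end{bmatrix},
    \qquad
    \mathbf{x} \leftarrow \mathbf{x} + \tilde{\mathbf{K}}\,\mathbf{r},
    \qquad
    \mathbf{P} \leftarrow (\mathbf{I}-\tilde{\mathbf{K}}\mathbf{H})\,\mathbf{P}\,(\mathbf{I}-\tilde{\mathbf{K}}\mathbf{H})^{\top} + \tilde{\mathbf{K}}\,\mathbf{R}\,\tilde{\mathbf{K}}^{\top}

so the Schmidt states themselves are never corrected, but their prior uncertainty still contributes to the covariance of the active states.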

IMU biases and noise

Hi,

Thank you for open-sourcing such an amazing tool.

I hoped I could get some advice. I need to explicitly get the noise/bias-corrected estimates of the IMU angular velocity and linear acceleration. These corrected measurements are used in the propagation step as described here. My question is: what are the correct values I should be looking to get, and what is the proper way to extract them into ROS topics? I had two ideas and would appreciate your comments.

  1. Create a new member publisher in RosVisualizer class and use it to publish the gyroscope and accelerometer biases in the method publish_state by getting the bias values using calls like state->imu()->bias_g(). This, however, would require that I somehow subtract this value from the correct imu measurement, which would need some sort of synchronization. I would prefer to avoid this.

  2. Make the local variables w_hat and a_hat in lines 299 and 300 a member of Propagator class, and obtain them from RosVisualizer through its VioManager* app member. I might need to create a method get_propagator in VioManager and get_w_hat, get_a_hat in Propagator. Finally use a call like _app -> get_propagator() -> get_w_hat() from RosVisualizer. If this was sensible, should I extract w_hat, a_hat or w_hat2, a_hat2?

This diagram shows, in red, the previous ideas:

[Diagram: OpenVINS_uml]

Finally, I made some changes to publish the biases. I got the following results using the EuRoC easy rosbag. The field linear_acceleration is actually the accelerometer bias and angular_velocity is actually the gyroscope bias. I was wondering if the estimated gyroscope biases are what is to be expected, since they are very different from the accelerometer biases and very small.

[Plot: openVINS_IMUbiases]
A zoomed-in view:
[Plot: openVINS_IMUbiases_close]

Thank you!
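
Regarding idea (1), a minimal sketch of what publishing a bias-corrected IMU message could look like (a hypothetical helper; the topic wiring, synchronization, and the way bg/ba are pulled from the state, e.g. via state->imu()->bias_g(), are assumptions rather than the actual RosVisualizer interface):

    #include <ros/ros.h>
    #include <sensor_msgs/Imu.h>
    #include <Eigen/Dense>

    // Hypothetical helper: republish a raw IMU message with the current bias
    // estimates subtracted. Assumes the bias estimates are valid for (roughly)
    // the timestamp of the raw message, which is the synchronization concern
    // raised above.
    void publish_corrected_imu(ros::Publisher &pub, const sensor_msgs::Imu &raw,
                               const Eigen::Vector3d &bg, const Eigen::Vector3d &ba) {
      sensor_msgs::Imu corrected = raw;
      corrected.angular_velocity.x -= bg.x();
      corrected.angular_velocity.y -= bg.y();
      corrected.angular_velocity.z -= bg.z();
      corrected.linear_acceleration.x -= ba.x();
      corrected.linear_acceleration.y -= ba.y();
      corrected.linear_acceleration.z -= ba.z();
      pub.publish(corrected);
    }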

Monocular MSCKF launch example

Thank you very much for the open source software.
Is it possible to upload a launch file for monocular vision?
Or, how can we modify the existing launch file so that it works with a monocular camera?
Thank you very much in advance for your suggestions.

EUROC dataset and IMU excitation

I have set up a system where a EUROC dataset reader mimics the timing of the sensors in the dataset and passes the data to VioManager similar to a ros subscriber system. VioManager is setup with the params in the pgeneva_ros_eth.launch file. When I am running the V1_01_easy (Vicon Room Easy) it takes about 5 seconds before there is enough IMU excitation to initialize the MSCKF and then the poses seem fairly accurate. This makes sense to me since the drone is sitting still for the first 4.7 seconds of the dataset. How accurate is the IMU_threshold of 1.5 for the EUROC dataset and in a real system would you have to manually tune this excitation to get accurate poses?

feature tracking

hi,

It seems that lots of features are in featsup_MSCKF, but only a few of them end up in good_features_msckf. Most of the features are rejected by "ct_meas < 3".
When using the descriptor tracker and only doing MSCKF updates, only about 1/5 of the features are used. Most of the "feats_lost" have a track length of less than 3. Is this normal?

anchor frame time problem

You said the first measurement is set to be the anchor frame, but in the function single_triangulation, feat->anchor_clone_timestamp = feat->timestamps.at(feat->anchor_cam_id).back(); sets the newest time. Am I not understanding correctly?

Besides, in the file FeatureInitializerOptions.h, the comment /// **Minimum** distance to accept triangulated features is attached to double max_dist = 40;

Does open VINS contain loop closure?

Many MSCKF VIO implementations on GitHub don't contain loop closure, so they can't be used as a true SLAM implementation. I was wondering if OpenVINS contains loop closure.
