vdo_slam's Introduction

VDO-SLAM

Authors: Jun Zhang*, Mina Henein*, Robert Mahony and Viorela Ila (*equally contributed)

VDO-SLAM is a Visual Object-aware Dynamic SLAM library for RGB-D cameras that is able to track dynamic objects, estimate the camera poses along with the static and dynamic structure, estimate the full SE(3) pose change of every rigid object in the scene, and extract velocity information; it has been demonstrated in real-world outdoor scenarios. We provide examples to run the SLAM system on the KITTI Tracking Dataset and the Oxford Multi-motion Dataset.

Click HERE to watch a demo video.

1. License

VDO-SLAM is released under a GPLv3 license. For a list of all code/library dependencies (and associated licenses), please see Dependencies.md.

If you use VDO-SLAM in an academic work, please cite:

@article{zhang2020vdoslam,
  title={{VDO-SLAM: A Visual Dynamic Object-aware SLAM System}},
  author={Zhang, Jun and Henein, Mina and Mahony, Robert and Ila, Viorela},
  year={2020},
  eprint={2005.11052},
  archivePrefix={arXiv},
  primaryClass={cs.RO}
 }

Related Publications:

  • VDO-SLAM: A Visual Dynamic Object-aware SLAM System
    Jun Zhang*, Mina Henein*, Robert Mahony and Viorela Ila. ArXiv:2005.11052. [ArXiv/PDF] [Code] [Video] [BibTex] NOTE: an updated version of the manuscript is available; please check the ArXiv link above (Dec 2021).
  • Robust Ego and Object 6-DoF Motion Estimation and Tracking
    Jun Zhang, Mina Henein, Robert Mahony and Viorela Ila. The IEEE/RSJ International Conference on Intelligent Robots and Systems. IROS 2020. [ArXiv/PDF] [BibTex]
  • Dynamic SLAM: The Need For Speed
    Mina Henein, Jun Zhang, Robert Mahony and Viorela Ila. The International Conference on Robotics and Automation. ICRA 2020. [ArXiv/PDF] [BibTex]

2. Prerequisites

We have tested the library on Mac OS X 10.14 and Ubuntu 16.04, but it should be easy to compile on other platforms.

C++11, gcc and clang

We use some C++11 functionality. The tested gcc version is 9.2.1 (Ubuntu) and the tested clang version is 1000.11.45.5 (Mac).

OpenCV

We use OpenCV to manipulate images and features. Download and install instructions can be found at: http://opencv.org. At least OpenCV 3.0 is required; tested with OpenCV 3.4.

Eigen3

Required by g2o (see below). Download and install instructions can be found at: http://eigen.tuxfamily.org. At least Eigen 3.1.0 is required.

g2o (Included in dependencies folder)

We use modified versions of the g2o library to perform non-linear optimization. The modified libraries (which are BSD-licensed) are included in the dependencies folder.

Use Dockerfile for auto installation

For Ubuntu users, a Dockerfile is provided that automatically installs all dependencies for a reproducible environment; it has been built and tested with the KITTI dataset. (Thanks @satyajitghana for the contribution 👍)

3. Building VDO-SLAM Library

Clone the repository:

git clone https://github.com/halajun/VDO_SLAM.git VDO-SLAM

We provide a script build.sh to build the dependency libraries and VDO-SLAM. Please make sure you have installed all required dependencies (see Section 2). Please also set the library file suffix, i.e. '.dylib' for Mac (default) or '.so' for Ubuntu, in the main CMakeLists.txt. Then execute:

cd VDO-SLAM
chmod +x build.sh
./build.sh

This will create:

  1. libObjSLAM.dylib (Mac) or libObjSLAM.so (Ubuntu) in the lib folder,

  2. libg2o.dylib (Mac) or libg2o.so (Ubuntu) in the dependencies/g2o/lib folder,

  3. and the executable vdo_slam in the example folder.

4. Running Examples

KITTI Tracking Dataset

  1. Download the demo sequence: kitti_demo, and uncompress it.

  2. Execute the following command.

./example/vdo_slam example/kitti-0000-0013.yaml PATH_TO_KITTI_SEQUENCE_DATA_FOLDER

Oxford Multi-motion Dataset

  1. Download the demo sequence: omd_demo, and uncompress it.

  2. Execute the following command.

./example/vdo_slam example/omd.yaml PATH_TO_OMD_SEQUENCE_DATA_FOLDER

5. Processing Your Own Data

You will need to create a settings (yaml) file with the calibration of your camera. See the settings files provided in the example/ folder. RGB-D input must be synchronized and depth registered. A list of timestamps for the images is needed for input.

The system also requires pre-processed images as input, specifically instance-level semantic segmentation and optical flow estimation. In our experiments, we used Mask R-CNN for instance segmentation (for KITTI only; we applied a colour-based method to segment cuboids in OMD, see the MATLAB code in the tools folder) and PWC-Net (PyTorch version) for optical flow estimation. Other state-of-the-art methods can also be applied instead for better performance.

For evaluation purposes, ground truth camera and object poses are also needed as input. Details of the input format are as follows.

Input Data Pre-processing

  1. The input segmentation mask is saved as a matrix, the same size as the image, in a .txt file. Each element of the matrix is an integer, where 0 stands for background and 1, 2, ..., n stand for the different instance labels. Note that, to easily compare with the ground truth object motion in the KITTI dataset, we align the estimated mask labels with the ground truth labels. The .txt file generation (from .mask) and alignment code is in the tools folder.

  2. The input optical flow is a standard .flo file that can be read and processed directly using OpenCV; a minimal loading sketch for both inputs is given below.
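
For reference, here is a minimal sketch (not the project's actual loader) of how these two inputs could be read with OpenCV. It assumes the mask .txt stores space-separated integers with one image row per line, and the helper names are hypothetical; cv::optflow::readOpticalFlow comes from the opencv_contrib optflow module (cv::readOpticalFlow in newer OpenCV versions).

#include <fstream>
#include <string>
#include <opencv2/core.hpp>
#include <opencv2/optflow.hpp>  // cv::optflow::readOpticalFlow (opencv_contrib)

// Read an instance-segmentation mask stored as a text matrix
// (assumed layout: space-separated integers, one image row per line).
cv::Mat ReadMaskTxt(const std::string &path, int rows, int cols)
{
    cv::Mat mask(rows, cols, CV_32SC1);
    std::ifstream fin(path);
    for (int i = 0; i < rows; ++i)
        for (int j = 0; j < cols; ++j)
            fin >> mask.at<int>(i, j);  // 0 = background, 1..n = instance labels
    return mask;
}

// Read a standard .flo optical-flow file into a 2-channel float (CV_32FC2) matrix.
cv::Mat ReadFlowFlo(const std::string &path)
{
    return cv::optflow::readOpticalFlow(path);
}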

Ground Truth Input for Evaluation

  1. The ground truth camera pose input is saved as a .txt file. Each row is organized as follows:
FrameID R11 R12 R13 t1 R21 R22 R23 t2 R31 R32 R33 t3 0 0 0 1

Here Rij are the coefficients of the camera rotation matrix R and ti are the coefficients of the camera translation vector t.
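
As an illustrative sketch (not the library's actual loader; the function name is hypothetical), one such row can be parsed into a 4x4 homogeneous transform as follows:

#include <istream>
#include <opencv2/core.hpp>

// Parse one ground truth camera pose row:
// FrameID R11 R12 R13 t1 R21 R22 R23 t2 R31 R32 R33 t3 0 0 0 1
cv::Mat ReadCameraPoseRow(std::istream &in, int &frame_id)
{
    cv::Mat pose = cv::Mat::eye(4, 4, CV_32F);
    in >> frame_id;
    for (int r = 0; r < 4; ++r)        // the 16 values are stored row by row,
        for (int c = 0; c < 4; ++c)    // including the trailing 0 0 0 1 row
            in >> pose.at<float>(r, c);
    return pose;
}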

  2. The ground truth object pose input is also saved as a .txt file. One example of such a file (KITTI Tracking Dataset) has each row organized as follows:
FrameID ObjectID B1 B2 B3 B4 t1 t2 t3 r1

Here ti are the coefficients of the 3D object location t in camera coordinates, and r1 is the rotation around the Y-axis in camera coordinates. B1-B4 is the 2D bounding box of the object in the image, used for visualization. Please refer to the KITTI Tracking Dataset documentation for details if necessary.
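
A similar hypothetical sketch for parsing one row of this KITTI-style object pose format:

#include <istream>

// One ground truth object pose row:
// FrameID ObjectID B1 B2 B3 B4 t1 t2 t3 r1
struct ObjectPoseGT
{
    int   frame_id;
    int   object_id;
    float bbox[4];  // B1-B4: 2D bounding box in the image (visualization only)
    float t[3];     // 3D object location in camera coordinates
    float ry;       // rotation around the camera Y-axis
};

bool ReadObjectPoseRow(std::istream &in, ObjectPoseGT &gt)
{
    in >> gt.frame_id >> gt.object_id;
    for (float &b : gt.bbox) in >> b;
    for (float &v : gt.t)    in >> v;
    in >> gt.ry;
    return static_cast<bool>(in);  // false on read failure / end of file
}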

The object pose format provided for the OMD dataset is an axis-angle rotation plus a translation vector. Please see the provided demos for details. A user can input a custom data format, but will need to write a new function to read it.

vdo_slam's People

Contributors

halajun, minahenein, satyajitghana


vdo_slam's Issues

recipe for target 'CMakeFiles/ObjSLAM.dir/src/cvplot/window.cc.o' failed

Dear authors,

When compiling the project, I get the following two errors. Do you know how to deal with them?

recipe for target 'CMakeFiles/ObjSLAM.dir/src/cvplot/window.cc.o' failed

CMakeFiles/Makefile2:67: recipe for target 'CMakeFiles/ObjSLAM.dir/all' failed

make[1]: *** [CMakeFiles/ObjSLAM.dir/all] Error 2

Makefile:83: recipe for target 'all' failed

unable to build using Dockerfile

I downloaded the Dockerfile provided and tried to build using:
docker build -t vdo_slam .

And during the installation of
https://github.com/opencv/opencv_contrib/archive/3.4.0.zip
This error occurs:
fatal error: opencv2/xfeatures2d/cuda.hpp: No such file or directory

I think this is a problem with the OpenCV extra modules. Could you give an updated Dockerfile so that it compiles?

g2o Stuck at 97% while building

Hello, thanks a lot for the great repository and explanation! I would appreciate some guidance on a problem I am experiencing while running ./build.sh: the g2o build gets stuck at 97%. Any hint that can help me resolve this will be greatly appreciated, thanks!

error: ‘SOLVEPNP_AP3P’ is not a member of ‘cv’

Hi Jun,
I am following your instructions in build.sh. I ran into an OpenCV problem, as follows:

/home/yuheng/project/VDO_SLAM/src/Tracking.cc:1752:70: error: ‘SOLVEPNP_AP3P’ is not a member of ‘cv’
                iter_num, reprojectionError, confidence, inliers, cv::SOLVEPNP_AP3P); // AP3P EPNP P3P ITERATIVE DLS
                                                                      ^~~~~~~~~~~~~
/home/yuheng/project/VDO_SLAM/src/Tracking.cc:1752:70: note: suggested alternative: ‘SOLVEPNP_P3P’
                iter_num, reprojectionError, confidence, inliers, cv::SOLVEPNP_AP3P); // AP3P EPNP P3P ITERATIVE DLS
                                                                      ^~~~~~~~~~~~~
                                                                      SOLVEPNP_P3P

I am using OpenCV 3.2. Are there any suggestions for this issue?

In the paper, you test the RGB-D version of VDO-SLAM on the KITTI Tracking Dataset. I notice that this dataset only provides Velodyne point clouds and does not directly provide depth maps. What method do you use to convert the point cloud data into depth maps?

Output of segmentation mask from Mask-RCNN

Hi,

Thanks for this wonderful VDO-SLAM repository. I am trying to use Mask R-CNN directly in this system, instead of using the provided semantic inputs. The mask matrix that I am getting as output gives the same coefficient value for both of the cars in the KITTI dataset. I even used the code of the "kitti_mask_sem2gt.cpp" file to make Mask R-CNN's output mask matrix similar to the ground truth object pose.

The issue is that the two objects (cars) have the same shade of color, as shown in the figure below. I want output similar to yours. How do I solve this?

[Screenshot]

about the speed calculation

Hi! Thanks for your great work firstly.

I have two questions about the calculation of object speed (please see the following code). Why multiply by 36 in sp_est_norm? To convert from m/s to km/h, isn't the factor supposed to be 3.6? And regarding formula 24 in the paper, why not use the timestamps of consecutive frames as the delta_t to calculate the speed?

// // ***** get the ground truth object speed here ***** (use version 1 here)
cv::Mat sp_gt_v, sp_gt_v2;
sp_gt_v = H_p_c.rowRange(0,3).col(3) - (cv::Mat::eye(3,3,CV_32F)-H_p_c.rowRange(0,3).colRange(0,3))*ObjCentre3D_pre; // L_w_p.rowRange(0,3).col(3) or ObjCentre3D_pre
sp_gt_v2 = L_w_p.rowRange(0,3).col(3) - L_w_c.rowRange(0,3).col(3);
float sp_gt_norm = std::sqrt( sp_gt_v.at<float>(0)*sp_gt_v.at<float>(0) + sp_gt_v.at<float>(1)*sp_gt_v.at<float>(1) + sp_gt_v.at<float>(2)*sp_gt_v.at<float>(2) )*36;
// float sp_gt_norm2 = std::sqrt( sp_gt_v2.at<float>(0)*sp_gt_v2.at<float>(0) + sp_gt_v2.at<float>(1)*sp_gt_v2.at<float>(1) + sp_gt_v2.at<float>(2)*sp_gt_v2.at<float>(2) )*36;
mCurrentFrame.vObjSpeed_gt[i] = sp_gt_norm;

// // ***** calculate the estimated object speed *****
// estimated speed
cv::Mat sp_est_v;
// formula 24
sp_est_v = mCurrentFrame.vObjMod[i].rowRange(0,3).col(3) - (cv::Mat::eye(3,3,CV_32F)-mCurrentFrame.vObjMod[i].rowRange(0,3).colRange(0,3))*ObjCentre3D_pre;

float sp_est_norm = std::sqrt( sp_est_v.at<float>(0)*sp_est_v.at<float>(0) + sp_est_v.at<float>(1)*sp_est_v.at<float>(1) + sp_est_v.at<float>(2)*sp_est_v.at<float>(2) )*36;

cout << "estimated and ground truth object speed: " << sp_est_norm << "km/h " << sp_gt_norm << "km/h " << endl;

Instance Segementation Input Requirement

Hi there!

When I run Mask R-CNN to generate the pre-processed result, the label output differs between frames. For example, in the first image the background is 0, object A is 1, and object B is 2. However, in the second image, object A is 2 but object B is 1 (the background is still 0).

How can I fix a problem like this and correlate the instances across different frames? Does the VDO-SLAM system require the semantic instances to be aligned across all frames?

Thank you very much.

The Method to get the Ground truth of camera poses

Dr. Henein:
I'm sorry to bother you. When I used the KITTI Tracking dataset to verify my own algorithm, I found that KITTI raw provides IMU and GPS data instead of 6-DoF poses. We would sincerely appreciate it if you were willing to share some experience with these problems.
Looking forward to your reply!

Running prediction without ground truth

Hi,

I wanted to use your code to run predictions on a custom dataset. I generated the required input data in terms of semantic masks, optical flow, and depth maps. The issue is with the object ground truth pose.
In your Tracking.cc file, if there is no ground truth for the object, the iteration is skipped and no speed estimation is done:
if (!bCheckGT1 || !bCheckGT2)
{
    cout << "Found a detected object with no ground truth motion! ! !" << endl;
    mCurrentFrame.bObjStat[i] = false;
    mCurrentFrame.vObjMod_gt[i] = cv::Mat::eye(4,4, CV_32F);
    mCurrentFrame.vObjMod[i] = cv::Mat::eye(4,4, CV_32F);
    mCurrentFrame.vObjCentre3D[i] = (cv::Mat_<float>(3,1) << 0.f, 0.f, 0.f);
    mCurrentFrame.vObjSpeed_gt[i] = 0.0;
    mCurrentFrame.vnObjInlierID[i] = ObjIdNew[i];
    continue;
}

I don't have ground truth for my custom dataset, so do you have any advice on how to run with this missing input?

Thanks for your great work!

source data

Thank you for your excellent work on vdo_slam. Recently, we wanted to compare against VDO in our latest work. To ensure the authenticity of the data, could you provide all the flow files and semantic files of the datasets used in the paper?
Email: [email protected]. Thanks again.

Problem when running demo-omd

Hi, this work is really fantastic.
When I tried demo-kitti, it worked well and I got a very good estimate. I just had a problem when running demo-omd, with the following error:

..........Dealing with Camera Pose..........
terminate called after throwing an instance of 'cv::Exception'
what(): OpenCV(4.2.0) /home/yxz/lib_sources/opencv/opencv/modules/calib3d/src/solvepnp.cpp:216: error: (-215:Assertion failed) npoints >= 4 && npoints == std::max(ipoints.checkVector(2, CV_32F), ipoints.checkVector(2, CV_64F)) in function 'solvePnPRansac'

I tried to modify the variable reprojectionError to get more good matches, but it didn't work. Do you have any ideas about this error?

How to get pose_gt.txt data

The KITTI tracking dataset only has the pose of the object, not the pose of the camera. How do you get the real pose of the camera? I want to try other data, not just the sequences given in "kitti_demo".

make g2o: several errors

When I make g2o, I get several errors:

from /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/edge_se3_prior.cpp:27: /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/eigen_types_new.h:38:3: error: expected unqualified-id before ‘using’ using VectorN = Eigen::Matrix<number_t, N, 1, Eigen::ColMajor>; ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/eigen_types_new.h:39:9: error: expected nested-name-specifier before ‘Vector2’ using Vector2 = VectorN<2>; ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/eigen_types_new.h:40:9: error: expected nested-name-specifier before ‘Vector3’ using Vector3 = VectorN<3>; ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/eigen_types_new.h:41:9: error: expected nested-name-specifier before ‘Vector4’ using Vector4 = VectorN<4>; ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/eigen_types_new.h:42:9: error: expected nested-name-specifier before ‘Vector6’ using Vector6 = VectorN<6>; ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/eigen_types_new.h:43:9: error: expected nested-name-specifier before ‘Vector7’ using Vector7 = VectorN<7>; ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/eigen_types_new.h:44:9: error: expected nested-name-specifier before ‘VectorX’ using VectorX = VectorN<Eigen::Dynamic>; ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/eigen_types_new.h:47:3: error: expected unqualified-id before ‘using’ using MatrixN = Eigen::Matrix<number_t, N, N, Eigen::ColMajor>; ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/eigen_types_new.h:48:9: error: expected nested-name-specifier before ‘Matrix2’ using Matrix2 = MatrixN<2>; ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/eigen_types_new.h:49:9: error: expected nested-name-specifier before ‘Matrix3’ using Matrix3 = MatrixN<3>; ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/eigen_types_new.h:50:9: error: expected nested-name-specifier before ‘Matrix4’ using Matrix4 = MatrixN<4>; ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/eigen_types_new.h:51:9: error: expected nested-name-specifier before ‘MatrixX’ using MatrixX = MatrixN<Eigen::Dynamic>; ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/eigen_types_new.h:53:28: error: ‘number_t’ was not declared in this scope typedef Eigen::Transform<number_t,2,Eigen::Isometry,Eigen::ColMajor> Isometry2; ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/eigen_types_new.h:53:70: error: template argument 1 is invalid typedef Eigen::Transform<number_t,2,Eigen::Isometry,Eigen::ColMajor> Isometry2; ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/eigen_types_new.h:54:28: error: ‘number_t’ was not declared in this scope typedef Eigen::Transform<number_t,3,Eigen::Isometry,Eigen::ColMajor> Isometry3; ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/eigen_types_new.h:54:70: error: template argument 1 is invalid typedef Eigen::Transform<number_t,3,Eigen::Isometry,Eigen::ColMajor> Isometry3; ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/eigen_types_new.h:56:28: error: ‘number_t’ was not declared in this scope typedef Eigen::Transform<number_t,2,Eigen::Affine,Eigen::ColMajor> Affine2; ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/eigen_types_new.h:56:68: error: template argument 1 is invalid typedef Eigen::Transform<number_t,2,Eigen::Affine,Eigen::ColMajor> Affine2; ^ 
/home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/eigen_types_new.h:57:28: error: ‘number_t’ was not declared in this scope typedef Eigen::Transform<number_t,3,Eigen::Affine,Eigen::ColMajor> Affine3; ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/eigen_types_new.h:57:68: error: template argument 1 is invalid typedef Eigen::Transform<number_t,3,Eigen::Affine,Eigen::ColMajor> Affine3; ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/eigen_types_new.h:59:29: error: ‘number_t’ was not declared in this scope typedef Eigen::Rotation2D<number_t> Rotation2D; ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/eigen_types_new.h:59:37: error: template argument 1 is invalid typedef Eigen::Rotation2D<number_t> Rotation2D; ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/eigen_types_new.h:61:29: error: ‘number_t’ was not declared in this scope typedef Eigen::Quaternion<number_t> Quaternion; ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/eigen_types_new.h:61:37: error: template argument 1 is invalid typedef Eigen::Quaternion<number_t> Quaternion; ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/eigen_types_new.h:63:28: error: ‘number_t’ was not declared in this scope typedef Eigen::AngleAxis<number_t> AngleAxis; ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/eigen_types_new.h:63:36: error: template argument 1 is invalid typedef Eigen::AngleAxis<number_t> AngleAxis; ^ In file included from /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/vertex_se3.h:33:0, from /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/edge_se3_prior.h:30, from /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/edge_se3_prior.cpp:27: /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/isometry3d_mappings.h:55:39: error: expected initializer before ‘extractRotation’ inline Isometry3::ConstLinearPart extractRotation(const Isometry3& A) ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/isometry3d_mappings.h: In function ‘void g2o::internal::nearestOrthogonalMatrix(const Eigen::MatrixBase<Derived>&)’: /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/isometry3d_mappings.h:69:24: error: ‘Matrix3’ was not declared in this scope Eigen::JacobiSVD<Matrix3> svd(R, Eigen::ComputeFullU | Eigen::ComputeFullV); ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/isometry3d_mappings.h:69:31: error: template argument 1 is invalid Eigen::JacobiSVD<Matrix3> svd(R, Eigen::ComputeFullU | Eigen::ComputeFullV); ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/isometry3d_mappings.h:69:81: error: expression list treated as compound expression in initializer [-fpermissive] Eigen::JacobiSVD<Matrix3> svd(R, Eigen::ComputeFullU | Eigen::ComputeFullV); ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/isometry3d_mappings.h:70:7: error: ‘number_t’ was not declared in this scope number_t det = (svd.matrixU() * svd.matrixV().adjoint()).determinant(); ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/isometry3d_mappings.h:71:15: error: expected ‘;’ before ‘scaledU’ Matrix3 scaledU(svd.matrixU()); ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/isometry3d_mappings.h:72:7: error: ‘scaledU’ was not declared in this scope scaledU.col(0) /= det; ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/isometry3d_mappings.h:72:25: error: ‘det’ was not declared in this scope scaledU.col(0) /= det; ^ 
/home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/isometry3d_mappings.h:73:66: error: request for member ‘matrixV’ in ‘svd’, which is of non-class type ‘int’ const_cast<Eigen::MatrixBase<Derived>&>(R) = scaledU * svd.matrixV().transpose(); ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/isometry3d_mappings.h: In function ‘void g2o::internal::approximateNearestOrthogonalMatrix(const Eigen::MatrixBase<Derived>&)’: /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/isometry3d_mappings.h:84:7: error: ‘Matrix3’ was not declared in this scope Matrix3 E = R.transpose() * R; ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/isometry3d_mappings.h:85:7: error: ‘E’ was not declared in this scope E.diagonal().array() -= 1; ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/isometry3d_mappings.h: At global scope: /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/isometry3d_mappings.h:102:26: error: ‘Vector3’ does not name a type G2O_TYPES_SLAM3D_API Vector3 toEuler(const Matrix3& R); ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/isometry3d_mappings.h:106:26: error: ‘Matrix3’ does not name a type G2O_TYPES_SLAM3D_API Matrix3 fromEuler(const Vector3& v); ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/isometry3d_mappings.h:110:26: error: ‘Vector3’ does not name a type G2O_TYPES_SLAM3D_API Vector3 toCompactQuaternion(const Matrix3& R); ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/isometry3d_mappings.h:115:26: error: ‘Matrix3’ does not name a type G2O_TYPES_SLAM3D_API Matrix3 fromCompactQuaternion(const Vector3& v); ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/isometry3d_mappings.h:121:26: error: ‘Vector6’ does not name a type G2O_TYPES_SLAM3D_API Vector6 toVectorMQT(const Isometry3& t); ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/isometry3d_mappings.h:125:26: error: ‘Vector6’ does not name a type G2O_TYPES_SLAM3D_API Vector6 toVectorET(const Isometry3& t); ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/isometry3d_mappings.h:129:26: error: ‘Vector7’ does not name a type G2O_TYPES_SLAM3D_API Vector7 toVectorQT(const Isometry3& t); ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/isometry3d_mappings.h:134:56: error: ‘Vector6’ does not name a type G2O_TYPES_SLAM3D_API Isometry3 fromVectorMQT(const Vector6& v); ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/isometry3d_mappings.h:138:55: error: ‘Vector6’ does not name a type G2O_TYPES_SLAM3D_API Isometry3 fromVectorET(const Vector6& v); ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/isometry3d_mappings.h:142:55: error: ‘Vector7’ does not name a type G2O_TYPES_SLAM3D_API Isometry3 fromVectorQT(const Vector7& v); ^ In file included from /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/vertex_se3.h:31:0, from /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/edge_se3_prior.h:30, from /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/edge_se3_prior.cpp:27: /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/../core/base_vertex.h: In instantiation of ‘class g2o::BaseVertex<6, int>’: /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/vertex_se3.h:50:49: required from here /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/../core/base_vertex.h:62:72: warning: ‘Eigen::AlignedBit’ is deprecated [-Wdeprecated-declarations] typedef Eigen::Map<Matrix<double, D, D>, Matrix<double,D,D>::Flags & 
AlignedBit ? Aligned : Unaligned > HessianBlockType; ^ In file included from /usr/local/include/eigen3/Eigen/Core:363:0, from /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/../core/jacobian_workspace.h:30, from /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/../core/optimizable_graph.h:41, from /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/../core/base_vertex.h:30, from /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/vertex_se3.h:31, from /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/edge_se3_prior.h:30, from /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/edge_se3_prior.cpp:27: /usr/local/include/eigen3/Eigen/src/Core/util/Constants.h:162:37: note: declared here EIGEN_DEPRECATED const unsigned int AlignedBit = 0x80; ^ In file included from /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/vertex_se3.h:31:0, from /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/edge_se3_prior.h:30, from /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/edge_se3_prior.cpp:27: /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/../core/base_vertex.h:62:72: warning: ‘Eigen::AlignedBit’ is deprecated [-Wdeprecated-declarations] typedef Eigen::Map<Matrix<double, D, D>, Matrix<double,D,D>::Flags & AlignedBit ? Aligned : Unaligned > HessianBlockType; ^ In file included from /usr/local/include/eigen3/Eigen/Core:363:0, from /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/../core/jacobian_workspace.h:30, from /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/../core/optimizable_graph.h:41, from /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/../core/base_vertex.h:30, from /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/vertex_se3.h:31, from /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/edge_se3_prior.h:30, from /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/edge_se3_prior.cpp:27: /usr/local/include/eigen3/Eigen/src/Core/util/Constants.h:162:37: note: declared here EIGEN_DEPRECATED const unsigned int AlignedBit = 0x80; ^ In file included from /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/vertex_se3.h:31:0, from /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/edge_se3_prior.h:30, from /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/edge_se3_prior.cpp:27: /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/../core/base_vertex.h:62:72: warning: ‘Eigen::AlignedBit’ is deprecated [-Wdeprecated-declarations] typedef Eigen::Map<Matrix<double, D, D>, Matrix<double,D,D>::Flags & AlignedBit ? 
Aligned : Unaligned > HessianBlockType; ^ In file included from /usr/local/include/eigen3/Eigen/Core:363:0, from /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/../core/jacobian_workspace.h:30, from /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/../core/optimizable_graph.h:41, from /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/../core/base_vertex.h:30, from /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/vertex_se3.h:31, from /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/edge_se3_prior.h:30, from /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/edge_se3_prior.cpp:27: /usr/local/include/eigen3/Eigen/src/Core/util/Constants.h:162:37: note: declared here EIGEN_DEPRECATED const unsigned int AlignedBit = 0x80; ^ In file included from /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/edge_se3.h:33:0, from /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/edge_se3_offset.h:31, from /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/edge_se3_offset.cpp:27: /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/vertex_se3.h:66:46: error: ‘number_t’ does not name a type virtual bool setEstimateDataImpl(const number_t* est){ ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/vertex_se3.h:72:36: error: ‘number_t’ has not been declared virtual bool getEstimateData(number_t* est) const{ ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/vertex_se3.h:82:53: error: ‘number_t’ does not name a type virtual bool setMinimalEstimateDataImpl(const number_t* est){ ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/vertex_se3.h:88:43: error: ‘number_t’ has not been declared virtual bool getMinimalEstimateData(number_t* est) const{ ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/vertex_se3.h:105:36: error: ‘number_t’ does not name a type virtual void oplusImpl(const number_t* update) ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/vertex_se3.h: In member function ‘virtual void g2o::VertexSE3::setToOriginImpl()’: /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/vertex_se3.h:60:21: error: ‘Isometry3’ is not a class or namespace _estimate = Isometry3::Identity(); ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/vertex_se3.h: In member function ‘virtual bool g2o::VertexSE3::setEstimateDataImpl(const int*)’: /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/vertex_se3.h:67:26: error: ISO C++ forbids declaration of ‘type name’ with no type [-fpermissive] Eigen::Map<const Vector7> v(est); ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/vertex_se3.h:67:33: error: template argument 1 is invalid Eigen::Map<const Vector7> v(est); ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/vertex_se3.h:67:40: error: invalid conversion from ‘const int*’ to ‘int’ [-fpermissive] Eigen::Map<const Vector7> v(est); ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/vertex_se3.h: In member function ‘virtual bool g2o::VertexSE3::getEstimateData(int*) const’: /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/vertex_se3.h:73:20: error: ‘Vector7’ was not declared in this scope Eigen::Map<Vector7> v(est); ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/vertex_se3.h:73:27: error: template argument 1 is invalid Eigen::Map<Vector7> v(est); ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/vertex_se3.h:73:34: error: invalid conversion from ‘int*’ to ‘int’ 
[-fpermissive] Eigen::Map<Vector7> v(est); ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/vertex_se3.h:74:11: error: ‘toVectorQT’ is not a member of ‘g2o::internal’ v=internal::toVectorQT(_estimate); ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/vertex_se3.h: In member function ‘virtual bool g2o::VertexSE3::setMinimalEstimateDataImpl(const int*)’: /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/vertex_se3.h:83:26: error: ISO C++ forbids declaration of ‘type name’ with no type [-fpermissive] Eigen::Map<const Vector6> v(est); ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/vertex_se3.h:83:33: error: template argument 1 is invalid Eigen::Map<const Vector6> v(est); ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/vertex_se3.h:83:40: error: invalid conversion from ‘const int*’ to ‘int’ [-fpermissive] Eigen::Map<const Vector6> v(est); ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/vertex_se3.h: In member function ‘virtual bool g2o::VertexSE3::getMinimalEstimateData(int*) const’: /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/vertex_se3.h:89:20: error: ‘Vector6’ was not declared in this scope Eigen::Map<Vector6> v(est); ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/vertex_se3.h:89:27: error: template argument 1 is invalid Eigen::Map<Vector6> v(est); ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/vertex_se3.h:89:34: error: invalid conversion from ‘int*’ to ‘int’ [-fpermissive] Eigen::Map<Vector6> v(est); ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/vertex_se3.h:90:13: error: ‘toVectorMQT’ is not a member of ‘g2o::internal’ v = internal::toVectorMQT(_estimate); ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/vertex_se3.h: In member function ‘virtual void g2o::VertexSE3::oplusImpl(const int*)’: /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/vertex_se3.h:107:26: error: ISO C++ forbids declaration of ‘type name’ with no type [-fpermissive] Eigen::Map<const Vector6> v(update); ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/vertex_se3.h:107:33: error: template argument 1 is invalid Eigen::Map<const Vector6> v(update); ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/vertex_se3.h:107:43: error: invalid conversion from ‘const int*’ to ‘int’ [-fpermissive] Eigen::Map<const Vector6> v(update); ^ /home/henry/henry_ws/src/VDO_SLAM/dependencies/g2o/g2o/types/vertex_se3.h:112:66: error: request for member ‘matrix’ in ‘((g2o::VertexSE3*)this)->g2o::VertexSE3::<anonymous>.g2o::BaseVertex<6, int>::_estimate’, which is of non-class type ‘g2o::BaseVertex<6, int>::EstimateType {aka int}’ internal::approximateNearestOrthogonalMatrix(_estimate.matrix().topLeftCorner<3,3>());

time.txt

Could anyone tell me how the time.txt file in the demo was obtained? Why is its format different from the timestamps.txt provided on the official website?

how to get depth.png in kitti

Hi! Thanks for your excellent work. I have a question: how did you get the depth.png files under /demo-kitti/depth?

about fine-tuned PWC-net

Thanks for your excellent work again! I found that you fine-tune PWC-Net on the Sintel and KITTI training datasets, and I found a large performance gap between your fine-tuned PWC-Net and the original one on the "demo-kitti" data. Could you please share the fine-tuned parameter file? This would be a huge help for me, thanks!

I replaced the ground truth camera pose with the GT calculated by the devkit.

Hello. I found the sequences of your demo. I extracted the ground truth camera pose from "data_tracking_oxts" of the KITTI Tracking Dataset, and I replaced the ground truth camera pose in the demo with the ground truth file calculated by myself, but the results were not as good as yours. What should I pay attention to when extracting the ground truth using the raw data development kit? And could you tell me in detail how you extract it?

Cannot run omd demo due to N_s

When I run the omd demo, an error is reported in solvePnPRansac. I checked the input of solvePnPRansac: there are too few points in pre_3d. I located the problem in Frame.cc: when the optical flow is used to determine the corresponding points, the number of correspondences in mvCorres is very small, so the variable N_s is very small and pre_3d therefore has few feature points. Has anyone encountered the same problem? How can it be solved? Thank you very much!

Pre-processing: instance segmentation

Hello! I have already run your demo and want to try other KITTI data, and I will also use a different instance segmentation algorithm. So I am curious how the RGB image obtained from instance segmentation can be stored so that objects are labelled 1, 2, ..., n. Is it based on the mask regions and the colours within those regions? It would be great if you could share your code for this. Many thanks!

Why are the number of .flo files and .png files the same?

Sorry, my question may be a little naive, but I really don't understand it. Optical flow is the result of two consecutive frames, so the number of .flo files should be one less than the number of .png files. Why are the numbers of .flo and .png files the same in the kitti-demo provided? If I understand correctly, 000000.flo corresponds to 000000.png-000001.png, so which two pictures does the last 000055.flo correspond to?

Issue in PlainObjectBase.h When Running Demos

The compilation is fine, but when I run the command to run the demo datasets, there is an issue:

vdo_slam: /usr/include/eigen3/Eigen/src/Core/PlainObjectBase.h:285: void Eigen::PlainObjectBase::resize(Eigen::Index, Eigen::Index) [with Derived = Eigen::Matrix<double, 3, 3>; Eigen::Index = long int]: Assertion `(!(RowsAtCompileTime!=Dynamic) || (rows==RowsAtCompileTime)) && (!(ColsAtCompileTime!=Dynamic) || (cols==ColsAtCompileTime)) && (!(RowsAtCompileTime==Dynamic && MaxRowsAtCompileTime!=Dynamic) || (rows<=MaxRowsAtCompileTime)) && (!(ColsAtCompileTime==Dynamic && MaxColsAtCompileTime!=Dynamic) || (cols<=MaxColsAtCompileTime)) && rows>=0 && cols>=0 && "Invalid sizes when resizing a matrix or array."' failed.
Aborted (core dumped)

libObjSLAM libg2o problem

Hey, could you give me some hints about where the problem is? It seems that your modified g2o compiles separately, but something gets stuck when I compile from the script (and likewise with cmake .. ; make -j16 from the build folder):

Configuring and building g2o ...
mkdir: cannot create directory ‘build’: File exists
-- BUILD TYPE:Release
-- Compiling on Unix
CMake Warning (dev) at /usr/local/share/cmake-3.18/Modules/FindPackageHandleStandardArgs.cmake:273 (message):
  The package name passed to `find_package_handle_standard_args` (CSPARSE)
  does not match the name of the calling package (CSparse).  This can lead to
  problems in calling code that expects `find_package` result variables
  (e.g., `_FOUND`) to follow a certain pattern.
Call Stack (most recent call first):
  cmake_modules/FindCSparse.cmake:26 (find_package_handle_standard_args)
  CMakeLists.txt:83 (find_package)
This warning is for project developers.  Use -Wno-dev to suppress it.

-- Configuring done
-- Generating done
-- Build files have been written to: /home/andrzej/code_workspace/deep_learning_lo/VDO-SLAM/dependencies/g2o/build
[100%] Built target g2o
Configuring and building VDO-SLAM ...
mkdir: cannot create directory ‘build’: File exists
Build type: Release
-- Using flag -std=c++11.
CMake Warning (dev) at /usr/local/share/cmake-3.18/Modules/FindPackageHandleStandardArgs.cmake:273 (message):
  The package name passed to `find_package_handle_standard_args` (CSPARSE)
  does not match the name of the calling package (CSparse).  This can lead to
  problems in calling code that expects `find_package` result variables
  (e.g., `_FOUND`) to follow a certain pattern.
Call Stack (most recent call first):
  cmake_modules/FindCSparse.cmake:26 (find_package_handle_standard_args)
  CMakeLists.txt:43 (find_package)
This warning is for project developers.  Use -Wno-dev to suppress it.

-- Configuring done
-- Generating done
-- Build files have been written to: /home/andrzej/code_workspace/deep_learning_lo/VDO-SLAM/build
make[2]: *** No rule to make target '../dependencies/g2o/lib/libg2o.dylib', needed by '../lib/libObjSLAM.so'.  Stop.
CMakeFiles/Makefile2:123: recipe for target 'CMakeFiles/ObjSLAM.dir/all' failed
make[1]: *** [CMakeFiles/ObjSLAM.dir/all] Error 2
Makefile:102: recipe for target 'all' failed
make: *** [all] Error 2
andrzej@femust:~/code_works

About the trace of the rotation matrix when calculating camera pose error

Thanks for your excellent work!

I am confused by the operation of computing the trace of the rotation matrix when calculating the camera pose error in Tracking.cc:

trace_rpe_cam = trace_rpe_cam + 1.0-(RePoEr_cam.at<float>(i,i)-1.0);

...
cv::Mat T_lc_inv = mCurrentFrame.mTcw*Converter::toInvMatrix(mLastFrame.mTcw);
cv::Mat T_lc_gt = mLastFrame.mTcw_gt*Converter::toInvMatrix(mCurrentFrame.mTcw_gt);
cv::Mat RePoEr_cam = T_lc_inv*T_lc_gt;

float t_rpe_cam = std::sqrt( RePoEr_cam.at<float>(0,3)*RePoEr_cam.at<float>(0,3) + RePoEr_cam.at<float>(1,3)*RePoEr_cam.at<float>(1,3) + RePoEr_cam.at<float>(2,3)*RePoEr_cam.at<float>(2,3) );
float trace_rpe_cam = 0;
for (int i = 0; i < 3; ++i)
{
    // NOTICE HERE
    if (RePoEr_cam.at<float>(i,i)>1.0)
         trace_rpe_cam = trace_rpe_cam + 1.0-(RePoEr_cam.at<float>(i,i)-1.0);
    else
        trace_rpe_cam = trace_rpe_cam + RePoEr_cam.at<float>(i,i);
}
cout << std::fixed << std::setprecision(6);
float r_rpe_cam = acos( (trace_rpe_cam -1.0)/2.0 )*180.0/3.1415926;
...

Why do we need to check whether the diagonal elements are greater than 1 rather than computing the trace directly? And what is the meaning of the 1.0 - (RePoEr_cam.at<float>(i,i) - 1.0) operation?

That seems related to computational robustness, but I can't figure out why. I sincerely look forward to your reply. Thank you very much! ❤️

object_pose

thanks for your code!
I got some errors. Could you tell me how to generate the object_pose file?

evaluation tools

Hi, what evaluation tools did you use in the last chapter of the paper? I want to use them to process my data, thank you.

RPE (rotation) results of object motions are computed as RMSE?

Thanks a lot for open-sourcing this great work :)

I've come up with a doubt regarding the computation used in the paper for the Relative Pose Error of an object motion:

Reading the paper, at first I thought that this RPE is computed as the RMSE of the relative angles between the estimations and the ground truth.
But now I'm doubting whether my understanding is right, because of this part of the code:

VDO_SLAM/src/Tracking.cc

Lines 989 to 997 in 2b13517

float trace_rpe = 0;
for (int i = 0; i < 3; ++i)
{
    if (RePoEr.at<float>(i,i)>1.0)
        trace_rpe = trace_rpe + 1.0-(RePoEr.at<float>(i,i)-1.0);
    else
        trace_rpe = trace_rpe + RePoEr.at<float>(i,i);
}
float r_rpe = acos( ( trace_rpe -1.0 )/2.0 )*180.0/3.1415926;

I think it's being computed directly as the relative angle.

So, is this last approach being used for the results reported in the paper?

Thanks in advance!

Problem about generating GT

Hello. Thank you so much for your work, it's very cool.
I found the sequences of your demo. I used MATLAB to generate the ground truth camera pose from "data_tracking_oxts" of the KITTI Tracking Dataset.
I have some problems with the GT generation: the GT I generate using MATLAB has a larger error than the GT you provide.
e.g.
my GT:
143 0.988913918 0.000807471 -0.148487746 9.974954671 -0.002023517 0.999965642 -0.008038644 2.741625980 0.148476154 0.008249994 0.988881575 172.391896386 0.000000000 0.000000000 0.000000000 1.000000000
GT in kitti- demo:
143 0.988933274 -0.001158800 -0.148356442 -8.665262612 -0.000368661 0.999947215 -0.010267974 -0.690116978 0.148360527 0.010209034 0.988880646 172.520788973 0.000000000 0.000000000 0.000000000 1.000000000

When evo is used for evaluation, the rotation error is similar, but the translation error increases.
Could you tell me in detail how you extract it?
Finally, thank you again

Question about Optimizer::PoseOptimizationFlow2

Hi, I would like to express my admiration and gratitude for your work. I have been using your code posted on GitHub and have been extremely impressed with your efforts.

I have a question about Optimizer::PoseOptimizationFlow2 in Track(). I find that you create the same type of edge, g2o::EdgeSE3ProjectFlow2(), both when dealing with static features and when dealing with objects.

However, it seems like you ignore the object feature motion in this part. Could you please explain this?
Of course, please correct me if I'm wrong.

How to solve this problem? Thank you

[100%] Linking CXX executable ../example/vdo_slam
CMakeFiles/vdo_slam.dir/example/vdo_slam.cc.o: In function `main': vdo_slam.cc:(.text.startup+0xc08): undefined reference to `cv::optflow::readOpticalFlow(cv::String const&)'
collect2: error: ld returned 1 exit status
CMakeFiles/vdo_slam.dir/build.make:115: recipe for target '../example/vdo_slam' failed
make[2]: *** [../example/vdo_slam] Error 1
CMakeFiles/Makefile2:67: recipe for target 'CMakeFiles/vdo_slam.dir/all' failed
make[1]: *** [CMakeFiles/vdo_slam.dir/all] Error 2
Makefile:83: recipe for target 'all' failed
make: *** [all] Error 2

Issue when building vdo-slam

This error pops up:

make[2]: *** [CMakeFiles/ObjSLAM.dir/build.make:76: CMakeFiles/ObjSLAM.dir/src/Tracking.cc.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:105: CMakeFiles/ObjSLAM.dir/all] Error 2
make: *** [Makefile:84: all] Error 2

Any solution?

What is your input image to generate the .flo file with PWC-NET?

I use pytorch-pwc to generate the .flo file, but there seems to be no data within the object.
First, the input to PWC-Net I used is the original image (000001). After generating the .flo file, I tested vdo_slam, but there seems to be no data in the flow. Then I converted the flow to a .png file, but there seems to be no flow within the object.
[Screenshots omitted]

Question

Hi @halajun,
How can I export the .txt files from a dataset processed with Mask R-CNN and PWC-Net?
Looking forward to your reply.
