botanicgarden's Issues

Groundtruth Map

@robot-pesg
Thank you for your hard work! The dataset is very useful for me.
I wonder whether you could provide the ground-truth map, so that I can compare it with the map built by a SLAM algorithm.
Thank you!
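For reference, a minimal sketch of one way to make such a comparison once a ground-truth map is available, assuming both maps are exported as point-cloud files (the file names below are placeholders) and using Open3D for alignment and a simple cloud-to-cloud distance metric:

import numpy as np
import open3d as o3d

# Hypothetical file names; replace with the actual exported maps.
gt_map = o3d.io.read_point_cloud("groundtruth_map.pcd")
slam_map = o3d.io.read_point_cloud("slam_map.pcd")

# Downsample so ICP stays tractable on large outdoor maps.
voxel = 0.1  # metres; tune to the map density
gt_ds = gt_map.voxel_down_sample(voxel)
slam_ds = slam_map.voxel_down_sample(voxel)

# Align the SLAM map to the ground truth with point-to-point ICP.
# The identity initial guess assumes the two maps are roughly in the
# same frame; otherwise supply a coarse initial transform here.
result = o3d.pipelines.registration.registration_icp(
    slam_ds, gt_ds, 1.0, np.eye(4),
    o3d.pipelines.registration.TransformationEstimationPointToPoint())
slam_ds.transform(result.transformation)

# Cloud-to-cloud distances as a simple map-accuracy summary.
dists = np.asarray(slam_ds.compute_point_cloud_distance(gt_ds))
print("mean error: %.3f m, RMSE: %.3f m"
      % (dists.mean(), np.sqrt((dists ** 2).mean())))

A trajectory-level comparison (ATE/RPE) against the ground-truth poses is also possible with standard tools such as evo.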

Hi, how can I solve the problem of "IMU excitation not enouth! Not enough features or parallax; Move device around"?

When I run this dataset with VINS-Mono, the terminal keeps printing the error message "IMU excitation not enouth! Not enough features or parallax; Move device around". Can you help me and give me some advice?
Here are my parameter settings from the YAML file.

%YAML:1.0

#common parameters
#support: 1 imu 1 cam; 1 imu 2 cam; 2 cam;

imu: 1

num_of_cam: 1

imu_topic: "/imu/data"
image_topic: "/dalsa_gray/left/image_raw"
output_path: "/home/zhang/output/"

distortion_parameters:
   k1: -0.061606731291039
   k2: 0.100129900708930
   p1: 0.0
   p2: 0.0
projection_parameters:
   fx: 643.5951111267952
   fy: 642.6656126857330
   cx: 475.2215406900663
   cy: 307.4120184196884
image_width: 960
image_height: 600

# Extrinsic parameter between IMU and Camera.
estimate_extrinsic: 0   # 0  Have accurate extrinsic parameters. We will trust the following imu^R_cam, imu^T_cam, don't change it.
                        # 1  Have an initial guess about extrinsic parameters. We will optimize around your initial guess.

extrinsicRotation: !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data: [-0.00375313,  0.01609339,  0.99986345,
          -0.99997267, -0.00642959, -0.00365005,
           0.00636997, -0.99984982,  0.01611708]
# Translation from camera frame to imu frame, imu^T_cam
extrinsicTranslation: !!opencv-matrix
   rows: 3
   cols: 1
   dt: d
   data: [0.17517298, 0.13259005, 0.0670137]

#Multiple thread support
multiple_thread: 4 # origin 1

#feature tracker parameters
max_cnt: 150 # max feature number in feature tracking
min_dist: 30 # min distance between two features
freq: 0 # frequency (Hz) of publishing the tracking result. At least 10 Hz for good estimation. If set to 0, the frequency will be the same as the raw image
F_threshold: 1.0 # ransac threshold (pixel)
show_track: 1 # publish tracking image as topic
flow_back: 1 # perform forward and backward optical flow to improve feature tracking accuracy

#optimization parameters
max_solver_time: 0.04 # max solver iteration time (ms), to guarantee real time; origin 0.04
max_num_iterations: 8 # max solver iterations, to guarantee real time; origin 8
keyframe_parallax: 10.0 # keyframe selection threshold (pixel)

#imu parameters The more accurate parameters you provide, the better performance
acc_n: 0.001 # accelerometer measurement noise standard deviation.
gyr_n: 0.0002 # gyroscope measurement noise standard deviation.
acc_w: 0.00002 # accelerometer bias random walk noise standard deviation.
gyr_w: 0.000005 # gyroscope bias random walk noise standard deviation.
g_norm: 9.81007 # gravity magnitude

#unsynchronization parameters
estimate_td: 0 # online estimate time offset between camera and imu
td: 0.0 # initial value of time offset. unit: s. read image clock + td = real image clock (IMU clock)

#loop closure parameters
loop_closure: 1
load_previous_pose_graph: 0 # load and reuse previous pose graph; load from 'pose_graph_save_path'
pose_graph_save_path: "/home/zhang/output/pose_gragh/" # save and load path
save_image: 1 # save image in pose graph for visualization purpose; you can close this function by setting 0
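Not an official answer, but one quick thing worth checking for that message is whether the platform actually accelerates enough during initialization. Below is a minimal sketch, assuming the sequence is a ROS1 bag and the IMU is on the /imu/data topic used in the config above (the bag file name is a placeholder): it prints how much the accelerometer readings vary over 2-second windows. VINS-Mono's initializer keeps reporting low IMU excitation when a similar acceleration-variation statistic stays very small (the threshold is roughly 0.25 in its source), which is easy to hit on a slowly moving ground robot.

import numpy as np
import rosbag

# Placeholder bag name; point this at the sequence being played back.
BAG = "botanic_garden_sequence.bag"

acc, stamps = [], []
with rosbag.Bag(BAG) as bag:
    # "/imu/data" matches imu_topic in the config above.
    for _, msg, t in bag.read_messages(topics=["/imu/data"]):
        a = msg.linear_acceleration
        acc.append([a.x, a.y, a.z])
        stamps.append(t.to_sec())

if not stamps:
    raise SystemExit("no IMU messages found on /imu/data")

acc = np.array(acc)
stamps = np.array(stamps) - stamps[0]

# Spread of the accelerometer readings over 2-second windows; values that
# stay near zero mean the device is barely accelerating, which is the
# condition that triggers the "IMU excitation not enouth" log.
for start in np.arange(0.0, stamps[-1], 2.0):
    sel = (stamps >= start) & (stamps < start + 2.0)
    if sel.sum() < 10:
        continue
    spread = np.linalg.norm(acc[sel] - acc[sel].mean(axis=0), axis=1).std()
    print("t = %6.1f s   acc variation std = %.4f m/s^2" % (start, spread))

If the variation really is tiny at the start of the sequence, starting playback from a part of the trajectory with more motion is the usual workaround suggested by the "Move device around" part of the message; the configuration itself looks plausible otherwise.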

Wheel Encoder Data

Is the wheel encoder data available? It is mentioned in the sensor setup section of the README.
