
Comments (13)

baishibona commented on June 3, 2024

@QinZiwen I think it should be the same when you use Kinect2: just run the driver for Kinect2 instead of "3dsensor.launch" (you will find it in "launch/turtlebot_gmapping.launch", line 4).

I haven't tested it with kinect2 yet, so please let me know if it works for you or not.
Thanks!

from turtlebot_exploration_3d.

QinZiwen commented on June 3, 2024

Yes, I ran the driver for Kinect2 instead of "3dsensor.launch". After that, my turtlebot can move around our lab, but not perfectly: it sometimes collides with obstacles. I think the reason may be that dwa_local_planner has some problem, or that I gave it wrong parameters.

#dwa_local_planner_param.yaml
DWAPlannerROS:

# Robot Configuration Parameters - Kobuki
  max_vel_x: 0.2  # 0.55
  min_vel_x: 0.0 

  max_vel_y: 0.0  # diff drive robot
  min_vel_y: 0.0  # diff drive robot

  max_trans_vel: 0.5 # choose slightly less than the base's capability
  min_trans_vel: 0.1  # this is the min trans velocity when there is negligible rotational velocity
  trans_stopped_vel: 0.1

  # Warning!
  #   do not set min_trans_vel to 0.0 otherwise dwa will always think translational velocities
  #   are non-negligible and small in place rotational velocities will be created.

  max_rot_vel: 1.0  # choose slightly less than the base's capability
  min_rot_vel: 0.4  # this is the min angular velocity when there is negligible translational velocity
  rot_stopped_vel: 0.4
  
  acc_lim_x: 0.5 # maximum is theoretically 2.0, but we 
  acc_lim_theta: 0.5
  acc_lim_y: 0.0      # diff drive robot

# Goal Tolerance Parameters
  yaw_goal_tolerance: 0.1  # 0.05
  xy_goal_tolerance: 0.1  # 0.10
  # latch_xy_goal_tolerance: false

# Forward Simulation Parameters
  sim_time: 1.0       # 1.7
  vx_samples: 6       # 3
  vy_samples: 1       # diff drive robot, there is only one sample
  vtheta_samples: 20  # 20

# Trajectory Scoring Parameters
  path_distance_bias: 80.0      # 32.0   - weighting for how much it should stick to the global path plan
  goal_distance_bias: 12.0      # 24.0   - weighting for how much it should attempt to reach its goal
  occdist_scale: 0.5            # 0.01   - weighting for how much the controller should avoid obstacles
  forward_point_distance: 0.325 # 0.325  - how far along to place an additional scoring point
  stop_time_buffer: 0.2         # 0.2    - amount of time a robot must stop in before colliding for a valid traj.
  scaling_speed: 0.25           # 0.25   - absolute velocity at which to start scaling the robot's footprint
  max_scaling_factor: 0.2       # 0.2    - how much to scale the robot's footprint when at speed.

# Oscillation Prevention Parameters
  oscillation_reset_dist: 0.05  # 0.05   - how far to travel before resetting oscillation flags

# Debugging
  publish_traj_pc : true
  publish_cost_grid_pc: true
  global_frame_id: odom


# Differential-drive robot configuration - necessary?
#  holonomic_robot: false


QinZiwen commented on June 3, 2024

@baishibona, I am confused about the following:
1. In turtlebot_exploration_3d.cpp there is this line: "MIs[i] = calculateMutualInformation(cur_tree, c.first, hits, before)/pow( pow(c.first.x()-laser_orig.x(), 2) + pow(c.first.y() - laser_orig.y(), 2) ,1.5);". I do not understand what it means.
2. In line 765 of turtlebot_exploration_3d.cpp, how does "p = candidates.size()-1" guarantee the maximum index?
3. In "Inference-Enabled Information-Theoretic Exploration of Continuous Action Spaces", I cannot find how support vector regression reduces the computational effort. Could you give a more detailed explanation?
Thank you very much!


baishibona commented on June 3, 2024

@QinZiwen It's great that you got it working in your lab. It collides with some obstacles in our lab as well; we played with dwa_local_planner for a while but still could not avoid all of them. One thing to keep in mind: only one slice of the Kinect data is used for the fake laser scan, so the turtlebot cannot really see an obstacle that is higher or lower than a certain height (about 0.4 m in this case).

About the program:
1. The mutual information is normalized by the distance from the robot's current location.
2. We noticed that move_base sometimes fails to navigate to a specific goal; when that happens we switch to a sub-optimal goal for the turtlebot to reach.
3. Sorry, we recently updated the code and it now uses Bayesian Optimization, which is more robust than GP regression. Please download our latest version and give it a try (it will be much faster!). If you are interested in SVM regression, please find the details in this paper; however, we do not have an implementation of SVM regression, as it is the same as GP regression.

One trick: if the turtlebot collides with something, just kick it away from the obstacle and it will continue (we have done that a lot...).
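To make points 1 and 2 concrete, here is a minimal Python sketch (function and variable names are hypothetical, not the repository's actual API) of distance-normalized MI ranking with a best-first fallback, mirroring the `pow(..., 1.5)` division quoted in the question above:

```python
def rank_candidates(candidates, mis, robot_xy):
    """Score each candidate viewpoint by MI divided by distance cubed,
    mirroring MIs[i] = MI / pow(dx^2 + dy^2, 1.5) from the C++ code.

    Returns candidate indices sorted best-first, so that if move_base
    fails to reach the best goal, the caller can fall back to the
    next-best (sub-optimal) candidate.
    """
    scores = []
    for i, (x, y) in enumerate(candidates):
        d2 = (x - robot_xy[0]) ** 2 + (y - robot_xy[1]) ** 2
        # max(...) guards the (unlikely) case of a candidate at the robot's pose
        scores.append(mis[i] / max(d2, 1e-9) ** 1.5)
    return sorted(range(len(candidates)), key=lambda i: scores[i], reverse=True)
```

With equal MI, the nearer candidate wins; a far candidate needs substantially more MI to outrank a near one, which is the point of the normalization.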


baishibona commented on June 3, 2024

@QinZiwen
P.S. About point 3 above: when I said "the same as GP regression", I meant the same exploration performance as GP regression.


QinZiwen commented on June 3, 2024

In your paper, how is H(m) computed? I do not know the ranges of i and j. You said "index i refers to the individual grid cells of the map and index j refers to the possible outcomes that represent each grid cell", so what does M_{i,j} (with subscripts i and j) represent?

Second, you write "H(m|x_i) is the expected entropy of the map given a new sensor observation at configuration x_i", which I do not understand. How do you get a "new sensor observation at configuration x_i"?


baishibona commented on June 3, 2024

@QinZiwen H(m) is computed using the Shannon entropy, as indicated in Eq. 1. You may assume that i refers to a specific grid cell and ignore j, which makes better sense (j can be ignored if you assume the sensor is perfect, which is the case in this repository). If you are interested in more details, please refer to "S. Thrun, W. Burgard, and D. Fox, "Exploration," in Probabilistic Robotics, pp. 569-605, MIT Press, 2005."

H(m|x_i) is the expected entropy given a simulated new sensor observation, assuming every unknown grid cell becomes free once a sensor ray intersects it.
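To illustrate, here is a hedged sketch of that entropy bookkeeping, assuming a perfect sensor so each cell i contributes only its binary occupancy probability p_i; the function names are hypothetical:

```python
import math

def map_entropy(probs):
    """Shannon entropy of an occupancy grid:
    H(m) = -sum_i [ p_i*log(p_i) + (1-p_i)*log(1-p_i) ].
    Fully known cells (p = 0 or 1) contribute zero; each unknown
    cell (p = 0.5) contributes the maximum, log 2.
    """
    h = 0.0
    for p in probs:
        if 0.0 < p < 1.0:
            h -= p * math.log(p) + (1.0 - p) * math.log(1.0 - p)
    return h

def expected_entropy_after_view(probs, ray_cells):
    """H(m | x_i): simulate an observation at configuration x_i by
    assuming every cell intersected by a sensor ray becomes known free
    (p -> 0), then recompute the map entropy."""
    post = [0.0 if i in ray_cells else p for i, p in enumerate(probs)]
    return map_entropy(post)
```

The information gain of viewpoint x_i is then H(m) - H(m|x_i): the more unknown cells its rays would sweep, the larger the gain.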

I hope that helps. Please feel free to email me if you have further questions about the paper. I am closing this issue for now, but feel free to open new ones.


QinZiwen commented on June 3, 2024

@baishibona Thank you!


lmy19880626 commented on June 3, 2024

Hello author! I am a new learner and have some questions; I hope you can help me. When reading the code I don't understand some parts, for example "sensorModel Kinect_360(64, 48, 2*PI*57/360, 2*PI*43/360, 6)". Why were the parameters set this way? And what does the function countFreeVolume do? Please don't ignore my questions; I look forward to your answer.


baishibona commented on June 3, 2024

@lmy19880626 Hey, I am happy to help. sensorModel Kinect_360 defines a sensor model for the Kinect: its field of view is 57 by 43 degrees, and its native resolution is 640 by 480, but we downsample that by a factor of 10 (to 64 by 48 rays) to speed things up.
countFreeVolume is an approximation of the map entropy. You may find more details in the paper.
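As an illustration of those constructor arguments (with a hypothetical helper name; the actual sensorModel class lives in the repository's C++ code), the 64 x 48 rays can be spread evenly over the 57-by-43-degree field of view with a 6 m maximum range:

```python
import math

def make_rays(n_h=64, n_v=48,
              fov_h=2 * math.pi * 57 / 360,   # 57 deg horizontal FOV, in radians
              fov_v=2 * math.pi * 43 / 360,   # 43 deg vertical FOV, in radians
              max_range=6.0):                 # 6 m maximum sensing range
    """Generate one (azimuth, elevation, range) triple per ray: a 64x48
    grid, i.e. the Kinect's 640x480 resolution downsampled by 10,
    spread evenly across the field of view."""
    rays = []
    for i in range(n_h):
        az = -fov_h / 2 + fov_h * i / (n_h - 1)      # left .. right
        for j in range(n_v):
            el = -fov_v / 2 + fov_v * j / (n_v - 1)  # bottom .. top
            rays.append((az, el, max_range))
    return rays
```

Downsampling trades angular resolution for speed: each MI evaluation casts 3072 rays instead of 307200.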

As this is a new question, please feel free to open a new issue; I am closing this one.


lmy19880626 commented on June 3, 2024

Thank you very much!


lmy19880626 commented on June 3, 2024

Sorry to disturb you again! While reading the code I found parts I don't understand. Is next_vp (the next viewpoint) sent to move_base one at a time, ranked by MI? What is the function of GPRegressor, and what are train() and test() for? I also know nothing about covMaterniso3(), which seems basic and important. Please explain it in detail; I would be grateful.


baishibona commented on June 3, 2024

Yes, each time only one viewpoint is sent to move_base.
GPRegressor is the GP regression part, for which I will refer you to the paper.
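For what it's worth, covMaterniso3 is conventionally the isotropic Matérn covariance with d = 3 (the Matérn 3/2 kernel), k(r) = sigma_f^2 * (1 + sqrt(3)*r/l) * exp(-sqrt(3)*r/l). A minimal sketch with hypothetical parameter names (presumably train() fits the GP on sampled viewpoints and test() predicts MI at the rest):

```python
import math

def cov_matern_iso3(x1, x2, length_scale=1.0, signal_var=1.0):
    """Isotropic Matern 3/2 covariance between two points:
    k(r) = sigma_f^2 * (1 + sqrt(3)*r/l) * exp(-sqrt(3)*r/l),
    where r is the Euclidean distance between x1 and x2.
    Nearby points are highly correlated (k -> sigma_f^2 as r -> 0);
    the correlation decays smoothly with distance."""
    r = math.dist(x1, x2)
    s = math.sqrt(3.0) * r / length_scale
    return signal_var * (1.0 + s) * math.exp(-s)
```

The kernel matrix built from this function is what lets the GP interpolate MI over viewpoints that were never explicitly evaluated.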

Please post further questions as a new issue, which will benefit others; it is not helpful to keep unrelated questions in this issue thread.

