
Mapping-and-Navigation-with-TurtleBot3

Description

The primary objective of this project is to implement robotics concepts such as mapping, localization, path planning, and navigation while deepening an understanding of the ROS environment.

Status

  • Used the gmapping package to map the environment.
  • Once the environment is mapped, autonomous navigation can be performed to bring the robot to a desired position.
  • Localization can be visualized independently using the AMCL package.

Development process

Axis and Grid

Description of launch/src files

roslaunch gazebo.launch

This launches a simulation environment in which the robot's data, such as joint states, sensor readings, and transforms, are published to their respective topics. When working with a real robot, this data is published by the on-board computer located in the robot itself.

  • The launch file imports an empty world and incorporates obstacles that are defined in the "obstacles.world" file.
  • It also uses the "spawn_model" node, located inside the gazebo_ros package, to import the robot model of the TurtleBot3 within the Gazebo simulation environment.
  • The necessary arguments and parameters are passed to the node to ensure that the robot model is positioned correctly within the simulation environment.
  • While testing the project on a different machine, the TurtleBot3 packages had to be installed in advance, because models such as the walls and boxes were located within the TurtleBot3 package. As a solution, the models were moved into this project and referenced through the GAZEBO_MODEL_PATH environment variable.
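Sketched below is roughly what such a launch file can look like. This is a hypothetical simplification, not the project's actual file: the package name `this_package`, the world file path, and the choice of the Burger model are assumptions.

```xml
<launch>
  <!-- Load an empty world and add the custom obstacles -->
  <include file="$(find gazebo_ros)/launch/empty_world.launch">
    <arg name="world_name" value="$(find this_package)/worlds/obstacles.world"/>
  </include>

  <!-- Spawn the TurtleBot3 model at a chosen pose inside Gazebo -->
  <param name="robot_description"
         command="$(find xacro)/xacro $(find turtlebot3_description)/urdf/turtlebot3_burger.urdf.xacro"/>
  <node name="spawn_urdf" pkg="gazebo_ros" type="spawn_model"
        args="-urdf -model turtlebot3 -x 0.0 -y 0.0 -z 0.0 -param robot_description"/>
</launch>
```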

gazebo

Gazebo Simulation environment


gazebo_rosgraph

Robot data published by Gazebo environment

Mapping

For a robot to move autonomously through an environment, it needs a pre-saved map of that environment. The map defines the boundaries and obstacles within the workspace.

Gmapping is an open-source package that implements the grid-based FastSLAM algorithm for Simultaneous Localization and Mapping (SLAM). The gmapping package takes laser scan data from the lidar and odometry information from the robot's motion, and generates a 2D occupancy grid map of the environment while estimating the robot's pose (position and orientation) within that map.
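The occupancy-grid idea can be illustrated with a minimal, ROS-free sketch: each laser ray lowers the log-odds of the cells it passes through (free space) and raises the log-odds of the cell where it ends (an obstacle). The grid resolution, range limit, and log-odds increments below are illustrative assumptions, not gmapping's actual values.

```python
import math

def trace_ray(grid, x0, y0, angle, dist, resolution=0.05, max_range=3.5):
    """Update cells along one laser ray: free along the beam, occupied at the hit."""
    steps = int(dist / resolution)
    for i in range(steps):
        cx = int((x0 + i * resolution * math.cos(angle)) / resolution)
        cy = int((y0 + i * resolution * math.sin(angle)) / resolution)
        grid[(cx, cy)] = grid.get((cx, cy), 0.0) - 0.4   # lower log-odds: free
    if dist < max_range:                                  # the beam hit an obstacle
        hx = int((x0 + dist * math.cos(angle)) / resolution)
        hy = int((y0 + dist * math.sin(angle)) / resolution)
        grid[(hx, hy)] = grid.get((hx, hy), 0.0) + 0.9   # raise log-odds: occupied

grid = {}
trace_ray(grid, 0.0, 0.0, 0.0, 1.0)   # a ray along +x that hits a wall 1 m away
```

Repeating this for every ray in every scan, from the pose estimated by the particle filter, is what gradually fills in the 2D map.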

roslaunch gmapping

  • The gmapping package is utilized to generate a map of the environment.
  • The robot_state_publisher node is launched to visualize the current position of the robot in the generated map and to provide the transform from the base_scan frame to the map frame. It reads the URDF file and publishes transforms between the different frames.
  • The gmapping node is initialized with the necessary parameters, and the frame and topic names are passed as arguments.

gmapping

Map generation using Gmapping


gmapping_rosgraph

Visualization of topic, node relation

Localization

The next step towards autonomy is to localize the robot within the pre-saved map.

roslaunch robot_localization

  • In the initial stages of the project, efforts were made to localize the robot within the generated map. However, upon realizing that developing a particle filter from scratch would deepen the understanding of ROS, the localization efforts were suspended and the focus shifted to the particle filter implementation.

roslaunch particle_filter

  • The Rviz platform is initialized with predefined settings to facilitate visualization of the localization process.
  • The global frame of reference is set by importing the map generated by the gmapping package using the map_server package.
  • The robot model is imported to visualize the robot's position within the generated map.
  • To determine the robot's position relative to the global frame of reference (i.e., the map), the odom frame is updated as the robot moves. By attaching the odom frame to the origin of the map, the robot's motion can be visualized in Rviz.
  • To establish the transformation between the map and odom frames, the tf package is employed. The node static_transform_publisher is used to set the transform between these frames once, as needed.
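A minimal, ROS-free sketch of what chaining these frames amounts to in 2D: the map→odom transform is set once (as static_transform_publisher does), while odom→base changes as the robot moves, and composing them gives the robot's pose in the map frame. The numeric values are made up for illustration.

```python
import math

def compose(parent, child):
    """Compose two 2D transforms (x, y, theta): result = parent then child."""
    px, py, pt = parent
    cx, cy, ct = child
    return (px + cx * math.cos(pt) - cy * math.sin(pt),
            py + cx * math.sin(pt) + cy * math.cos(pt),
            pt + ct)

map_to_odom = (1.0, 2.0, 0.0)          # set once, like static_transform_publisher
odom_to_base = (0.5, 0.0, math.pi / 2) # updated continuously from odometry
map_to_base = compose(map_to_odom, odom_to_base)
print(map_to_base)                     # the robot's pose in the map frame
```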

rosrun localizer

  • The program is structured into two sections: the setup and the loop.
  • The program functions as a ROS node that subscribes to the "/odom" topic to retrieve the robot's current location.
  • Additionally, it publishes to two topics, "particle_cloud_PoseArray" and "particle_cloud_Origin", which carry "PoseArray" data for visualizing the poses of the particles.
  • The frame of the "PoseArray" is attached to the "map" frame to connect their tf data.
  • Once the frames are attached, each particle is initialized with a random pose drawn from a uniform distribution.
  • The publishing rate is then set to 50 Hz, and a one-second delay is added to let the system stabilize.
  • During the execution of the node, the pose of each particle is updated according to the robot's pose: the rotational transform is applied first, followed by the translational transform, to update each particle's position with respect to the robot's position.
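The update described above can be sketched without ROS. The particle count, sampling area, and motion increments below are illustrative assumptions; the point is the order of operations, rotation first and then translation in the particle's own frame.

```python
import math
import random

random.seed(0)
# Initialize particles with uniform random poses over a 4 m x 4 m area
particles = [(random.uniform(-2, 2), random.uniform(-2, 2),
              random.uniform(-math.pi, math.pi)) for _ in range(100)]

def move_particles(particles, dx, dy, dtheta):
    """Apply the robot's motion to every particle: rotate first, then translate."""
    moved = []
    for x, y, theta in particles:
        theta += dtheta                                    # rotational transform
        x += dx * math.cos(theta) - dy * math.sin(theta)   # translational transform,
        y += dx * math.sin(theta) + dy * math.cos(theta)   # in the particle's frame
        moved.append((x, y, theta))
    return moved

particles = move_particles(particles, dx=0.1, dy=0.0, dtheta=0.05)
```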

particle_filter

Particle Filter in action


particlefilter_rosgraph

Visualization of topic, node relation

AMCL (Adaptive Monte Carlo Localization)

AMCL is a localization package that uses a particle filter to estimate the pose of a robot in an environment. AMCL combines information from odometry and laser scans to estimate the robot's pose with respect to a given map of the environment.

It maintains a set of particles, each representing a hypothesis of the robot's pose, and updates them based on the sensor measurements and motion model. The particles with higher weights indicate more likely robot poses. It resamples particles based on their weights to obtain a diverse set of hypotheses that represent the robot's pose distribution. The AMCL algorithm adapts the number of particles based on the quality of the localization estimate and can handle dynamic environments where the map or robot's position may change over time.
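The weight-based resampling step can be sketched as low-variance resampling, a common choice in particle filters (the particle labels and weights below are made up; AMCL's actual implementation differs in detail):

```python
import random

def low_variance_resample(particles, weights):
    """Draw len(particles) samples; higher-weight particles survive more often."""
    n = len(particles)
    step = sum(weights) / n
    r = random.uniform(0, step)      # one random start, then evenly spaced picks
    resampled, c, i = [], weights[0], 0
    for m in range(n):
        u = r + m * step
        while u > c:                 # advance to the particle covering point u
            i += 1
            c += weights[i]
        resampled.append(particles[i])
    return resampled

random.seed(1)
particles = ["A", "B", "C", "D"]
weights = [0.7, 0.1, 0.1, 0.1]       # particle A is the most likely pose
result = low_variance_resample(particles, weights)
print(result)
```

Because the picks are evenly spaced rather than fully random, the resampled set stays diverse while still concentrating on high-weight hypotheses.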

roslaunch amcl

  • The AMCL package requires a map against which localization is performed; this is provided by the map_server node within the map_server package.
  • The robot_state_publisher node, from the package of the same name, publishes the robot's transforms by reading its URDF description file.
  • Lastly, the AMCL node itself is launched to perform the localization, with various parameters set to configure the node.

Localization in action


amcl_rosgraph

Visualization of topic, node relation

Navigation

Autonomous navigation is the process of planning and executing a path through a known or unknown environment to reach a desired destination. Navigation is the bigger picture, achieved by combining perception, mapping, localization, path planning, and control modules.

roslaunch navigation

Once the Gazebo simulation is up, the navigation launch file launches two further launch files.

  • The first is the AMCL launch file, which localizes the robot in the map provided by the map_server. This is the same launch file used in the AMCL section.
  • The second launches move_base, which is part of the ROS Navigation Stack. move_base is responsible for planning and for controlling the robot's motors to reach the specified destination. It combines several components to achieve autonomous navigation:
    • The global planner uses Dijkstra's algorithm to generate a high-level global plan.
    • The local planner uses the Dynamic Window Approach to perform tasks such as obstacle avoidance and to generate the velocity commands that follow the global plan.
    • The costmap represents the environment as a grid map with different cost values assigned to different areas; the planners use it to make decisions about path planning and obstacle avoidance.
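The global planner's idea can be sketched as plain Dijkstra over a small grid costmap. The 4-connected neighborhood and the toy costmap below are illustrative assumptions, not move_base's actual configuration.

```python
import heapq

def dijkstra(costmap, start, goal):
    """Shortest path on a 4-connected grid; cells with cost None are obstacles."""
    rows, cols = len(costmap), len(costmap[0])
    dist = {start: 0}
    prev = {}
    pq = [(0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist.get((r, c), float("inf")):
            continue                       # stale queue entry, skip it
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and costmap[nr][nc] is not None:
                nd = d + costmap[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    # Walk back from the goal to reconstruct the path
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

costmap = [[1, 1,    1],
           [1, None, 1],   # None marks a lethal obstacle in the middle
           [1, 1,    1]]
path = dijkstra(costmap, (0, 0), (2, 2))
print(path)
```

The real global planner works the same way on the costmap's cost values, which is why inflating costs around obstacles pushes the plan away from walls.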

navigation

Autonomous navigation of TurtleBot3


navigation_all_topics

Visualization of all active topic, node relation


navigation_nodes_only

Visualization of inter-node relation

rosrun initialPose

  • The AMCL node needs to know the initial pose of the robot before it can localize; this is done by publishing to the /initialpose topic.
  • Instead of running continuously, this script executes only once. It first waits for a message on the /odom topic, which gives the robot's position relative to the origin of the map. After receiving the message, secondary fields such as the frame_id and the covariance matrix are attached, and the message is published to the /initialpose topic.
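A ROS-free sketch of attaching those secondary fields: the dictionary below mirrors the layout of geometry_msgs/PoseWithCovarianceStamped, but the covariance values are illustrative assumptions, not the ones this project uses.

```python
def make_initial_pose(odom_x, odom_y, quat_z, quat_w):
    """Build an initialpose-style message from a received odometry pose."""
    cov = [0.0] * 36                 # 6x6 covariance matrix, row-major
    cov[0] = cov[7] = 0.25           # variance on x and y (illustrative)
    cov[35] = 0.07                   # variance on yaw (illustrative)
    return {
        "header": {"frame_id": "map"},   # the pose is expressed in the map frame
        "pose": {
            "pose": {
                "position": {"x": odom_x, "y": odom_y, "z": 0.0},
                "orientation": {"x": 0.0, "y": 0.0, "z": quat_z, "w": quat_w},
            },
            "covariance": cov,
        },
    }

msg = make_initial_pose(1.0, 2.0, 0.0, 1.0)
```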

initialPose

Publishing to the initialpose topic

Todo

  • Drive the robot until it is localized within a threshold limit. This function can be added to the initialPose node.
  • Publish a goal position on the /move_base_simple/goal topic; once it is reached, publish another goal position, and use a PID controller to check the position's validity.
  • Perform the same task of getting the robot to a specific position, but this time use ROS services to check whether the robot has reached it.

Experimentations

360 vs 6 Samples of Lidar Sensor

An experiment was conducted to observe the effect of the number of samples and the coverage area of the lidar sensor on its perception of the environment. Surprisingly, the performance of the robot with 360 samples and with 6 samples was nearly the same, though a few parameters had to be adjusted, such as increasing the update rate and reducing the robot's speed.

Localization of robot with 360 and 6 samples of lidar sensor

Does this imply that a lidar sensor can be swapped for a few distance sensors in swarm robotics applications?

Yet to experiment on:

  • Do we really need localization for autonomous navigation?
  • Does the robot need a pre-saved map for autonomous mobility?
  • Navigation where the map is provided live by the gmapping package.
  • Mapping and navigation using a camera instead of a lidar?

Contributors

  • maker-atom
