
realsense-ros's Introduction

Intel® RealSense™

ROS Wrapper for Intel(R) RealSense(TM) Cameras
Latest release notes






ROS1 and ROS2 Legacy

Intel RealSense ROS1 Wrapper
The Intel RealSense ROS1 wrapper is no longer supported, since our development team is focusing on ROS2 distributions.
For the ROS1 wrapper, go to the ros1-legacy branch
Moving from ros2-legacy to ros2-development
  • Changed Parameters:
    • "stereo_module", "l500_depth_sensor" are replaced by "depth_module"
    • For video streams: <module>.profile replaces <stream>_width, <stream>_height, <stream>_fps
      • ROS2-legacy (Old):
        • ros2 launch realsense2_camera rs_launch.py depth_width:=640 depth_height:=480 depth_fps:=30.0 infra1_width:=640 infra1_height:=480 infra1_fps:=30.0
      • ROS2-development (New):
        • ros2 launch realsense2_camera rs_launch.py depth_module.profile:=640x480x30
    • Removed parameters <stream>_frame_id and <stream>_optical_frame_id. frame_ids are now defined by camera_name
    • "filters" is removed. All filters (or post-processing blocks) are enabled/disabled using "<filter>.enable"
    • "align_depth" is now a regular processing block and as such the parameter for enabling it is replaced with "align_depth.enable"
    • "allow_no_texture_points", "ordered_pc" are now belong to the pointcloud filter and as such are replaced by "pointcloud.allow_no_texture_points", "pointcloud.ordered_pc"
    • "pointcloud_texture_stream", "pointcloud_texture_index" belong now to the pointcloud filter and were renamed to match their librealsense' names: "pointcloud.stream_filter", "pointcloud.stream_index_filter"
  • Allow enable/disable of sensors in runtime (parameters <stream>.enable)
  • Allow enable/disable of filters in runtime (parameters <filter_name>.enable)
  • unite_imu_method parameter is now changeable in runtime.
  • enable_sync parameter is now changeable in runtime.

Installation on Ubuntu

Step 1: Install the ROS2 distribution
Step 2: Install latest Intel® RealSense™ SDK 2.0

Please choose only one option from the three options below (to prevent multiple-version installations and workspace conflicts)

  • Option 1: Install librealsense2 debian package from Intel servers

  • Option 2: Install librealsense2 (without graphical tools and examples) debian package from ROS servers (Foxy EOL distro is not supported by this option):

    • Configure your Ubuntu repositories
    • Install the librealsense2 packages by running sudo apt install ros-<ROS_DISTRO>-librealsense2*
      • For example, for Humble distro: sudo apt install ros-humble-librealsense2*
  • Option 3: Build from source

Step 3: Install Intel® RealSense™ ROS2 wrapper

Option 1: Install debian package from ROS servers (Foxy EOL distro is not supported by this option):

  • Configure your Ubuntu repositories
  • Install all realsense ROS packages by sudo apt install ros-<ROS_DISTRO>-realsense2-*
  • For example, for Humble distro: sudo apt install ros-humble-realsense2-*

Option 2: Install from source

  • Create a ROS2 workspace

    mkdir -p ~/ros2_ws/src
    cd ~/ros2_ws/src/
  • Clone the latest ROS2 Intel® RealSense™ wrapper from here into '~/ros2_ws/src/'

    git clone https://github.com/IntelRealSense/realsense-ros.git -b ros2-development
    cd ~/ros2_ws
    
  • Install dependencies

sudo apt-get install python3-rosdep -y
sudo rosdep init # "sudo rosdep init --include-eol-distros" for Foxy and earlier
rosdep update # "rosdep update --include-eol-distros" for Foxy and earlier
rosdep install -i --from-path src --rosdistro $ROS_DISTRO --skip-keys=librealsense2 -y
  • Build
colcon build
  • Source environment
ROS_DISTRO=<YOUR_SYSTEM_ROS_DISTRO>  # set your ROS_DISTRO: iron, humble, foxy
source /opt/ros/$ROS_DISTRO/setup.bash
cd ~/ros2_ws
. install/local_setup.bash
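
As an optional sanity check (not part of the official steps), you can verify that the wrapper packages are visible to ROS2 before launching anything:

ros2 pkg list | grep realsense2
# expected to list realsense2_camera, realsense2_camera_msgs and realsense2_description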

Installation on Windows

PLEASE PAY ATTENTION: The RealSense ROS2 wrapper is not officially supported on Windows by our team, since ROS2 and its packages are still not fully supported on Windows. We added the installation steps below to make it easier for users who have already started working with ROS2 on Windows and want to take advantage of the capabilities of our RealSense cameras.

Step 1: Install the ROS2 distribution
  • Windows 10/11

    Please choose only one option from the two options below (to prevent multiple-version installations and workspace conflicts)

Step 2: Download RealSense™ ROS2 Wrapper and RealSense™ SDK 2.0 source code from github:
  • Download Intel® RealSense™ ROS2 Wrapper source code from Intel® RealSense™ ROS2 Wrapper Releases
  • Download the corresponding supported Intel® RealSense™ SDK 2.0 source code from the "Supported RealSense SDK" section of the specific release you chose from the link above
  • Place the librealsense folder inside the realsense-ros folder, so that the librealsense package sits beside the realsense2_camera, realsense2_camera_msgs and realsense2_description packages
Step 3: Build
  1. Before building our packages, make sure you have OpenCV for Windows installed on your machine. If you chose the Microsoft IoT way to install it, it will be installed automatically. Later, when running colcon build, you might need to expose this installation folder by setting the CMAKE_PREFIX_PATH, PATH, or OpenCV_DIR environment variables

  2. Run "x64 Native Tools Command Prompt for VS 2019" as administrator

  3. Setup ROS2 Environment (Do this for every new terminal/cmd you open):

    • If you chose the Microsoft IoT binary option for installation

      > C:\opt\ros\humble\x64\setup.bat
      
    • If you chose the ROS2 official documentation option:

      > call C:\dev\ros2_iron\local_setup.bat
      
  4. Change directory to realsense-ros folder

    > cd C:\ros2_ws\realsense-ros
  5. Build librealsense2 package only

    > colcon build --packages-select librealsense2 --cmake-args -DBUILD_EXAMPLES=OFF -DBUILD_WITH_STATIC_CRT=OFF -DBUILD_GRAPHICAL_EXAMPLES=OFF
    • You can add the --event-handlers console_direct+ argument to see more debug output from the colcon build
  6. Build the other packages

    > colcon build --packages-select realsense2_camera_msgs realsense2_description realsense2_camera
    • You can add the --event-handlers console_direct+ argument to see more debug output from the colcon build
  7. Set up the environment with the newly installed packages (do this for every new terminal/cmd you open):

    > call install\setup.bat

Usage

Start the camera node

with ros2 run:

ros2 run realsense2_camera realsense2_camera_node
# or, with parameters, for example - temporal and spatial filters are enabled:
ros2 run realsense2_camera realsense2_camera_node --ros-args -p enable_color:=false -p spatial_filter.enable:=true -p temporal_filter.enable:=true

with ros2 launch:

ros2 launch realsense2_camera rs_launch.py
ros2 launch realsense2_camera rs_launch.py depth_module.depth_profile:=1280x720x30 pointcloud.enable:=true

Camera Name And Camera Namespace

The user can set the camera name and camera namespace to distinguish between cameras and platforms, which helps in identifying the right nodes and topics to work with.

Example

  • If you have multiple cameras (possibly of the same model) and multiple robots, you can choose to launch/run the nodes in the following way.

  • For the first robot and first camera, run/launch with these parameters:

    • camera_namespace:

      • robot1
    • camera_name

      • D455_1
    • With ros2 launch (via command line or by editing these two parameters in the launch file):

    ros2 launch realsense2_camera rs_launch.py camera_namespace:=robot1 camera_name:=D455_1

    • With ros2 run (using the remapping mechanism, Reference):

    ros2 run realsense2_camera realsense2_camera_node --ros-args -r __node:=D455_1 -r __ns:=robot1

    • Result
    > ros2 node list
    /robot1/D455_1
    
    > ros2 topic list
    /robot1/D455_1/color/camera_info
    /robot1/D455_1/color/image_raw
    /robot1/D455_1/color/metadata
    /robot1/D455_1/depth/camera_info
    /robot1/D455_1/depth/image_rect_raw
    /robot1/D455_1/depth/metadata
    /robot1/D455_1/extrinsics/depth_to_color
    /robot1/D455_1/imu
    
    > ros2 service list
    /robot1/D455_1/device_info
    

Default behavior if none of these parameters is given:

  • camera_namespace:=camera
  • camera_name:=camera
> ros2 node list
/camera/camera

> ros2 topic list
/camera/camera/color/camera_info
/camera/camera/color/image_raw
/camera/camera/color/metadata
/camera/camera/depth/camera_info
/camera/camera/depth/image_rect_raw
/camera/camera/depth/metadata
/camera/camera/extrinsics/depth_to_color
/camera/camera/imu

> ros2 service list
/camera/camera/device_info

Parameters

Available Parameters:

  • For the entire list of parameters type ros2 param list.
  • For reading a parameter value use ros2 param get <node> <parameter_name>
    • For example: ros2 param get /camera/camera depth_module.emitter_enabled
  • For setting a new value for a parameter use ros2 param set <node> <parameter_name> <value>
    • For example: ros2 param set /camera/camera depth_module.emitter_enabled 1

Parameters that can be modified during runtime:

  • All of the filters and sensors inner parameters.
  • Video Sensor Parameters: (depth_module and rgb_camera)
    • They have, at least, the <stream_type>_profile parameter.
      • The profile parameter is a string of the following format: <width>X<height>X<fps> (The dividing character can be X, x or ",". Spaces are ignored.)
      • For example: depth_module.depth_profile:=640x480x30 depth_module.infra_profile:=640x480x30 rgb_camera.color_profile:=1280x720x30
      • Note: The param depth_module.infra_profile is common for all infra streams. i.e., infra 0, 1 & 2.
      • If the specified combination of parameters is not supported by the device, the default or previously set configuration will be used.
        • Run ros2 param describe <your_node_name> <param_name> to get the list of supported profiles.
      • Note: The stream must be re-enabled for the change to take effect.
    • <stream_name>_format
      • This parameter is a string used to select the stream format.
      • <stream_name> can be any of infra, infra1, infra2, color, depth.
      • For example: depth_module.depth_format:=Z16 depth_module.infra1_format:=y8 rgb_camera.color_format:=RGB8
      • This parameter supports both lower case and upper case letters.
      • If the specified format is not supported by the stream, the default or previously set configuration will be used.
        • Run ros2 param describe <your_node_name> <param_name> to get the list of supported formats.
      • Note: The stream must be re-enabled for the change to take effect.
    • If the stream doesn't support the user-selected profile <width>X<height>X<fps> + <format>, it will not be opened and a warning message will be shown.
      • Update the profile settings and re-enable the stream for the change to take effect.
      • Run the rs-enumerate-devices command to see the list of profiles supported by the connected sensors.
  • enable_<stream_name>:
    • Choose whether to enable a specified stream or not. Default is true for images and false for orientation streams.
    • <stream_name> can be any of infra, infra1, infra2, color, depth, gyro, accel.
    • For example: enable_infra1:=true enable_color:=false
  • enable_sync:
    • Gathers the closest frames of different sensors (infrared, color and depth) so they are sent with the same time tag.
    • This happens automatically when such filters as pointcloud are enabled.
  • <stream_type>_qos:
    • Sets the QoS by which the topic is published.
    • <stream_type> can be any of infra, infra1, infra2, color, depth, gyro, accel.
    • Available values are the following strings: SYSTEM_DEFAULT, DEFAULT, PARAMETER_EVENTS, SERVICES_DEFAULT, PARAMETERS, SENSOR_DATA.
    • For example: depth_qos:=SENSOR_DATA
    • Pointcloud QoS is controlled with the pointcloud.pointcloud_qos parameter in the pointcloud filter, refer to the Post-Processing Filters section for details.
    • Reference: ROS2 QoS profiles formal documentation
  • Notice: <stream_type>_info_qos refers to both camera_info topics and metadata topics.
  • tf_publish_rate:
    • double, rate (in Hz) at which dynamic transforms are published
    • Default value is 0.0 Hz (means no dynamic TF)
    • This param also depends on publish_tf param
      • If publish_tf:=false, then no TFs will be published, even if tf_publish_rate is >0.0 Hz
      • If publish_tf:=true and tf_publish_rate set to >0.0 Hz, then dynamic TFs will be published at the specified rate
  • unite_imu_method:
    • For D400 cameras with built-in IMU components, the two independent streams below (each with its own frequency) will be created:
      • gyro - which shows angular velocity
      • accel - which shows linear acceleration.
    • Both streams will publish data to their corresponding topics:
      • '/camera/camera/gyro/sample' & '/camera/camera/accel/sample'
      • Though both topics are of the same message type 'sensor_msgs::Imu', only their relevant fields are filled out.
    • A new topic called imu will be created when both accel and gyro streams are enabled and the param unite_imu_method is set to > 0.
      • Data from both accel and gyro are combined and published to this topic
      • All the fields of the Imu message are filled out.
      • It will be published at the rate of the gyro.
    • The unite_imu_method param supports the values below:
      • 0 -> none: no imu topic
      • 1 -> copy: Every gyro message is paired with the last accel message.
      • 2 -> linear_interpolation: Every gyro message is paired with an accel message interpolated to the gyro's timestamp.
    • Note: When the param unite_imu_method is dynamically updated, re-enable either the gyro or accel stream for the change to take effect. (A combined launch example appears after this list.)
  • accelerate_gpu_with_glsl:
    • Boolean: enable GPU acceleration with GLSL for processing the PointCloud and Colorizer filters.
    • Note:
      • To have smooth transition between the processing blocks when this parameter is updated dynamically, the node will:
        • Stop the video sensors
        • Do necessary GLSL configuration
        • And then, start the video sensors
      • To enable GPU acceleration, turn ON BUILD_ACCELERATE_GPU_WITH_GLSL during build:
      colcon build --cmake-args '-DBUILD_ACCELERATE_GPU_WITH_GLSL=ON'
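
A combined example tying several of the runtime-related parameters above together. This is a non-authoritative sketch: the parameter names come from this page, while the specific rates and values are arbitrary placeholders.

ros2 launch realsense2_camera rs_launch.py enable_gyro:=true enable_accel:=true unite_imu_method:=2 publish_tf:=true tf_publish_rate:=10.0
# change the IMU unification method at runtime, then re-enable a stream so the change takes effect
ros2 param set /camera/camera unite_imu_method 1
ros2 param set /camera/camera enable_gyro false
ros2 param set /camera/camera enable_gyro true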

Parameters that cannot be changed in runtime:

  • serial_no:
    • will attach to the device with the given serial number (serial_no).
    • By default, attach to the first (in an internal list) RealSense device.
    • Note: the serial number should be defined with an "_" prefix.
      • This is a workaround for ROS2's automatic conversion of strings containing only digits into integers, until a better method is found.
    • Example: serial number 831612073525 can be set in command line as serial_no:=_831612073525.
  • usb_port_id:
    • will attach to the device with the given USB port (usb_port_id).
    • For example: usb_port_id:=4-1 or usb_port_id:=4-2
    • By default, the USB port is ignored when choosing a device.
  • device_type:
    • will attach to a device whose name includes the given device_type regular expression pattern.
    • By default, the device type is ignored.
    • For example:
      • device_type:=d435 will match d435 and d435i.
      • device_type=d435(?!i) will match d435 but not d435i.
  • reconnect_timeout:
    • When the driver cannot connect to the device, it will try to reconnect after this timeout (in seconds).
    • For Example: reconnect_timeout:=10
  • wait_for_device_timeout:
    • If the specified device is not found, the node will wait wait_for_device_timeout seconds before exiting.
    • By default (wait_for_device_timeout < 0), it will wait indefinitely.
    • For example: wait_for_device_timeout:=60
  • rosbag_filename:
    • Publish topics from a rosbag file. There are two ways to load a rosbag file:
    • Command line - ros2 run realsense2_camera realsense2_camera_node --ros-args -p rosbag_filename:="/full/path/to/rosbag.bag"
    • Launch file - set rosbag_filename parameter with rosbag full path (see realsense2_camera/launch/rs_launch.py as reference)
  • initial_reset:
    • On occasion the device is not closed properly and, due to firmware issues, needs to be reset.
    • If set to true, the device will reset prior to usage.
    • For example: initial_reset:=true
  • base_frame_id: defines the frame_id that all static transformations refer to.
  • clip_distance:
    • Remove from the depth image all values above a given distance (in meters). Disable by giving a negative value (the default).
    • For example: clip_distance:=1.5
  • linear_accel_cov, angular_velocity_cov: sets the variance given to the Imu readings.
  • hold_back_imu_for_frames: Image processing takes time. Therefore there is a time gap between the moment the image arrives at the wrapper and the moment the image is published to the ROS environment. During this time, IMU messages keep arriving, creating a situation where an image with an earlier timestamp is published after an IMU message with a later timestamp. If that is a problem, setting hold_back_imu_for_frames to true will hold the IMU messages back while the images are being processed and then publish them all in a burst, thus keeping the order of publication the same as the order of arrival. Note that in either case, the timestamp in each message's header reflects the time of its origin.
  • publish_tf:
    • boolean, enable/disable publishing static and dynamic TFs
    • Defaults to True
      • So, static TFs will be published by default
      • If dynamic TFs are needed, user should set the param tf_publish_rate to >0.0 Hz
    • If set to false, both static and dynamic TFs won't be published, even if the param tf_publish_rate is set to >0.0 Hz
  • diagnostics_period:
    • double, positive values set the period between diagnostics updates on the /diagnostics topic.
    • 0 or negative values mean no diagnostics topic is published. Defaults to 0.
      The /diagnostics topic includes information regarding the device temperatures and the actual frequency of the enabled streams.
      (A combined launch example appears after this list.)

ROS2 (Robot) vs Optical (Camera) Coordinate Systems:

  • Point Of View:
    • Imagine we are standing behind the camera, looking forward.
    • Always use this point of view when talking about coordinates, left vs right IRs, position of a sensor, etc.


  • ROS2 Coordinate System: (X: Forward, Y:Left, Z: Up)
  • Camera Optical Coordinate System: (X: Right, Y: Down, Z: Forward)
  • References: REP-0103, REP-0105
  • All data published in our wrapper topics is optical data taken directly from our camera sensors.
  • The static and dynamic TF topics publish both the optical CS and the ROS CS, giving the user the ability to move from one CS to the other.

TF from coordinate A to coordinate B:

  • TF msg expresses a transform from coordinate frame "header.frame_id" (source) to the coordinate frame child_frame_id (destination) Reference
  • In RealSense cameras, the origin point (0,0,0) is taken from the left IR (infra1) position and named as "camera_link" frame
  • The depth, left IR and "camera_link" coordinate frames coincide.
  • Our wrapper provides static TFs from each sensor frame to the camera base (camera_link)
  • It also provides TFs from each sensor's ROS frame to its corresponding optical frame.
  • Example of static TFs of the RGB sensor and the Infra2 (right infra) sensor of a D435i module as shown in rviz2: example
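
To inspect these transforms on a live system, one option is the standard tf2_ros echo tool. This is only a sketch; the frame names below assume the default camera_name (camera), so adjust them to your setup:

ros2 run tf2_ros tf2_echo camera_link camera_color_optical_frame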

Extrinsics from sensor A to sensor B:

  • Extrinsic from sensor A to sensor B means the position and orientation of sensor A relative to sensor B.
  • Imagine that B is the origin (0,0,0); then Extrinsics(A->B) describes where sensor A is relative to sensor B.
  • For example, depth_to_color, in D435i:
    • If we look from behind the D435i, the extrinsic from depth to color means where the depth sensor is relative to the color sensor.
    • If we look just at the X coordinate, in the optical coordinates (again, from behind), and assume that the COLOR (RGB) sensor is at (0,0,0), we can say that the DEPTH sensor is to the right of RGB by 0.0148m (1.48cm).


administrator@perclnx466 ~/ros2_humble $ ros2 topic echo /camera/camera/extrinsics/depth_to_color
rotation:
- 0.9999583959579468
- 0.008895332925021648
- -0.0020127370953559875
- -0.008895229548215866
- 0.9999604225158691
- 6.045500049367547e-05
- 0.0020131953060626984
- -4.254872692399658e-05
- 0.9999979734420776
translation:
- 0.01485931035131216
- 0.0010161789832636714
- 0.0005317096947692335
---
  • Extrinsic msg is made up of two parts:
    • float64[9] rotation (column-major 3x3 rotation matrix)
    • float64[3] translation (Three-element translation vector, in meters)
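
To see the full definition of the extrinsics message, you can inspect its type (realsense2_camera_msgs/msg/Extrinsics is the message type used by this wrapper's extrinsics topics):

ros2 interface show realsense2_camera_msgs/msg/Extrinsics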

Published Topics

The published topics differ according to the device and parameters. After running the above command with a D435i attached, the following list of topics will be available (this is a partial list; for the full one, type ros2 topic list):

  • /camera/camera/aligned_depth_to_color/camera_info
  • /camera/camera/aligned_depth_to_color/image_raw
  • /camera/camera/color/camera_info
  • /camera/camera/color/image_raw
  • /camera/camera/color/metadata
  • /camera/camera/depth/camera_info
  • /camera/camera/depth/color/points
  • /camera/camera/depth/image_rect_raw
  • /camera/camera/depth/metadata
  • /camera/camera/extrinsics/depth_to_color
  • /camera/camera/imu
  • /diagnostics
  • /parameter_events
  • /rosout
  • /tf_static

This will stream relevant camera sensors and publish on the appropriate ROS topics.

Enabling accel and gyro is achieved either by adding the following parameters on the command line:
ros2 launch realsense2_camera rs_launch.py pointcloud.enable:=true enable_gyro:=true enable_accel:=true
or at runtime using the following commands:

ros2 param set /camera/camera enable_accel true
ros2 param set /camera/camera enable_gyro true

Enabling a stream adds its matching topics. For instance, enabling the gyro and accel streams adds the following topics:

  • /camera/camera/accel/imu_info
  • /camera/camera/accel/metadata
  • /camera/camera/accel/sample
  • /camera/camera/extrinsics/depth_to_accel
  • /camera/camera/extrinsics/depth_to_gyro
  • /camera/camera/gyro/imu_info
  • /camera/camera/gyro/metadata
  • /camera/camera/gyro/sample

RGBD Topic

RGBD is a new topic, publishing [RGB + Depth] in the same message (see RGBD.msg for reference). For now, it works only with depth aligned to color images, as color and depth images are synchronized by frame time tag.

These boolean parameters should be true to enable rgbd messages:

  • enable_rgbd: new parameter to enable/disable the rgbd topic, changeable during runtime
  • align_depth.enable: align depth images to rgb images
  • enable_sync: let librealsense sync between frames, and get the frameset with color and depth images combined
  • enable_color + enable_depth: enable both color and depth sensors

The current QoS of the topic itself is the same as for the depth and color streams (SYSTEM_DEFAULT)

Example:

ros2 launch realsense2_camera rs_launch.py enable_rgbd:=true enable_sync:=true align_depth.enable:=true enable_color:=true enable_depth:=true 
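
To inspect the RGBD message definition and confirm the topic is being published, a quick sketch (the grep assumes the default camera name and namespace):

ros2 interface show realsense2_camera_msgs/msg/RGBD
ros2 topic list | grep rgbd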

Metadata topic

The metadata messages store the camera's available metadata in a json format. To learn more, a dedicated script for echoing a metadata topic in runtime is attached. For instance, use the following command to echo the camera/depth/metadata topic:

python3 src/realsense-ros/realsense2_camera/scripts/echo_metadada.py /camera/camera/depth/metadata

Post-Processing Filters

The following post processing filters are available:

  • align_depth: If enabled, will publish the depth image aligned to the color image on the topic /camera/camera/aligned_depth_to_color/image_raw.

    • The pointcloud, if created, will be based on the aligned depth image.
  • colorizer: will color the depth image. On the depth topic, an RGB image will be published instead of the 16-bit depth values.

  • pointcloud: will add a pointcloud topic /camera/camera/depth/color/points.

    • The texture of the pointcloud can be modified using the pointcloud.stream_filter parameter.
    • The depth FOV and the texture FOV are not the same. By default, the pointcloud is limited to the section of depth containing the texture. You can get a full depth pointcloud, with the regions beyond the texture colored as zeros, by setting pointcloud.allow_no_texture_points to true.
    • pointcloud is of an unordered format by default. This can be changed by setting pointcloud.ordered_pc to true.
    • The QoS of the pointcloud topic is independent from depth and color streams and can be controlled with the pointcloud.pointcloud_qos parameter.
      • The same set of QoS values is supported as for other streams; refer to <stream_type>_qos in the Parameters section of this page.
      • The launch file should include the parameter with an initial QoS value, for example, {'name': 'pointcloud.pointcloud_qos', 'default': 'SENSOR_DATA', 'description': 'pointcloud qos'}
      • The QoS value can also be overridden at launch with a command option, for example, pointcloud.pointcloud_qos:=SENSOR_DATA
      • At runtime, the QoS can be changed dynamically, but it requires re-enabling the filter for the change to take effect, for example:
        ros2 param set /camera/camera pointcloud.pointcloud_qos SENSOR_DATA
        ros2 param set /camera/camera pointcloud.enable false
        ros2 param set /camera/camera pointcloud.enable true
  • hdr_merge: Allows the depth image to be created by merging the information from 2 consecutive frames, taken with different exposure and gain values.

  • depth_module.hdr_enabled: enables/disables HDR. The ways to set exposure and gain values for each sequence:

    • during Runtime:
      • first select the sequence id, using the depth_module.sequence_id parameter, and then modify depth_module.gain and depth_module.exposure.
      • From FW versions 5.14.x.x and above, if HDR is enabled, the preset configs (like exposure, gain, etc.) cannot be updated.
        • Disable HDR first using the depth_module.hdr_enabled parameter and then update the required presets.
    • during Launch time of the node:
      • set the parameters below
        • depth_module.exposure.1
        • depth_module.gain.1
        • depth_module.exposure.2
        • depth_module.gain.2
      • Make sure to set depth_module.hdr_enabled to true, otherwise these parameters won't be considered (see the launch sketch after this list).
    • To view the effect on the infrared image for each sequence id use the filter_by_sequence_id.sequence_id parameter.
    • For in-depth review of the subject please read the accompanying white paper.
    • Note: Auto exposure functionality is not supported when HDR is enabled. i.e., Auto exposure will be auto-disabled if HDR is enabled.
  • The following filters have detailed descriptions in: https://github.com/IntelRealSense/librealsense/blob/master/doc/post-processing-filters.md

    • disparity_filter - convert depth to disparity before applying other filters and back.
    • spatial_filter - filter the depth image spatially.
    • temporal_filter - filter the depth image temporally.
    • hole_filling_filter - apply hole-filling filter.
    • decimation_filter - reduces depth scene complexity.

Each of the above filters has its own parameters, following the naming convention <filter_name>.<parameter_name>, including a <filter_name>.enable parameter to enable/disable it.
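
The launch-time HDR sketch referenced above. The parameter names are taken from this page; the exposure and gain values are placeholders chosen only for illustration:

ros2 launch realsense2_camera rs_launch.py depth_module.hdr_enabled:=true depth_module.exposure.1:=7500 depth_module.gain.1:=16 depth_module.exposure.2:=1 depth_module.gain.2:=16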


Available services

  • device_info : retrieve information about the device - serial_number, firmware_version etc. Type ros2 interface show realsense2_camera_msgs/srv/DeviceInfo for the full list. Call example: ros2 service call /camera/camera/device_info realsense2_camera_msgs/srv/DeviceInfo

Efficient intra-process communication:

Our ROS2 Wrapper node supports zero-copy communications if loaded in the same process as a subscriber node. This can reduce copy times on image/pointcloud topics, especially with big frame resolutions and high FPS.

You will need to launch a component container and launch our node as a component together with other component nodes. Further details on "Composing multiple nodes in a single process" can be found here.

Further details on efficient intra-process communication can be found here.

Example

Manually loading multiple components into the same process

  • Start the component container:

    ros2 run rclcpp_components component_container
  • Add the wrapper:

    ros2 component load /ComponentManager realsense2_camera realsense2_camera::RealSenseNodeFactory -e use_intra_process_comms:=true

    Load other component nodes (consumers of the wrapper topics) in the same way.
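
    For illustration, one possible consumer component is the ShowImage node from the image_tools demo package (this is a sketch and assumes that package is installed; the topic remapping assumes the default camera name and namespace):

    ros2 component load /ComponentManager image_tools image_tools::ShowImage -r image:=/camera/camera/color/image_raw -e use_intra_process_comms:=true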

Limitations

  • Node components are currently not supported on RCLPY
  • Compressed images using image_transport will be disabled as this isn't supported with intra-process communication

Latency test tool and launch file

For getting a sense of the latency reduction, a frame latency reporter tool is available via a launch file. The launch file loads the wrapper and a frame latency reporter tool component into a single container (so the same process). The tool prints out the frame latency (now - frame.timestamp) per frame.

The tool is not built unless asked for. Turn on BUILD_TOOLS during build to have it available:

colcon build --cmake-args '-DBUILD_TOOLS=ON'

The launch file accepts a parameter, intra_process_comms, controlling whether zero-copy is turned on or not. Default is on:

ros2 launch realsense2_camera rs_intra_process_demo_launch.py intra_process_comms:=true


realsense-ros's Issues

Any way to increase the TF frame rate for the camera topics?

Hi,

Just posting here a quick question on how to change the TF frame rate for the camera topics (increase the frame rate), because I get the following Warnings when running Realsense R200 with RTAB:

[ WARN] [1470216120.812503232, 1470155297.576390646]: rtabmapviz: Could not get transform from odom to camera_depth_optical_frame after 0.200000 seconds (for stamp=1470155297.151934)!

Let me know if there is any parameter I could change or service I could call that might fix this problem by increasing the TF frame publication rate of the camera. Thanks!

Gabe

Remove native node support for PointCloud

System Configuration

Please complete Your Configuration detail below. Refer to the BKC at Configuration.

Version Your Configuration
Operating System Ubuntu 16.04 LTS
Kernel 4.4.0-22-generic
Backend video4linux
ROS kinetic
ROS RealSense 1.1.0
librealsense v0.9.2
R200 Firmware 1.0.72.06

Expected Behavior

No frame loss on Color, Depth, and IR streams.

Actual Behavior

Running streams with higher frame rates and/or resolution can result in dropped frames.

The publishing loop for the stream is slowed when native PointCloud generation is enabled in the ROS RealSense node. If kept, this should be placed into a separate thread for publishing the PointCloud.

Instead of having the functionality native in the node, most users would use the existing ROS packages to generate any needed PointClouds.

Recommendation: Remove the native PointCloud from the ROS RealSense node as it is redundant and creating unnecessary overhead.

Error for realsense_camera realsense_r200_nodelet_standalone_preset.launch

System Configuration

Version Known Good Your Configuration
Operating System ubuntu 14.04 ubuntu 14.04
Kernel Version 4.4.0-13 4.4.0-040400-generic
R200 FW 1.0.72.06 1.0.72.06
librealsense 0.9.1 0.9.0

I can run cpp-capture and build the package using catkin_make. However, when I run
"roslaunch realsense_camera realsense_r200_nodelet_standalone_preset.launch", an error occurs.

[ INFO] [1460466020.703162344]: Initializing nodelet with 4 worker threads.
[ INFO] [1460466020.790980800]: RealSense Camera - Starting camera nodelet.
**[ERROR] [1460466021.095609328]: RealSense Camera - Error calling rs_create_context ( api_version:4 ):
uvc_open2(...) returned Access denied

[FATAL] [1460466021.232074645]: Service call failed!
[standalone_nodelet-2] process has died [pid 7430, exit code 1, cmd /opt/ros/indigo/lib/nodelet/nodelet manager __name:=standalone_nodelet _log:=/home/aicrobo/.ros/log/82e77eb2-00ae-11e6-a8a2-c01885c34cc3/standalone_nodelet-2.log].
log file: /home/aicrobo/.ros/log/82e77eb2-00ae-11e6-a8a2-c01885c34cc3/standalone_nodelet-2
.log
[RealsenseNodelet-3] process has died [pid 7437, exit code 255, cmd /opt/ros/indigo/lib/nodelet/nodelet load realsense_camera/RealsenseNodelet standalone_nodelet mode preset enable_depth true enable_color true enable_pointcloud true enable_tf true __name:=RealsenseNodelet _log:=/home/aicrobo/.ros/log/82e77eb2-00ae-11e6-a8a2-c01885c34cc3/RealsenseNodelet-3.log].
log file: /home/aicrobo/.ros/log/82e77eb2-00ae-11e6-a8a2-c01885c34cc3/RealsenseNodelet-3
.log
**

Do I need to run the command as a super user? I tried to add "sudo" before the command, but then it cannot find the command "roslaunch".
What can I do?

Video4Linux backend patching

System Configuration

Please complete Your Configuration detail below. Refer to the BKC at Configuration.

Version Your Configuration
Operating System Ubuntu 16.04 LTS
Kernel 4.4.0-040400-generic
Backend
ROS kinetic
ROS RealSense Cannot locate [realsense_camera]
librealsense 0.9.1
R200 Firmware 1.0.72.06

When I run the example program "cpp-capture", it only lasts for 2 seconds and then crashes. So, I suspect that maybe my Video4Linux backend isn't installed correctly. I then tried several times to reinstall the backend, and all attempts seem to have failed.

Patching command:
$ ./scripts/patch-uvcvideo-16.04.simple.sh
After I reloaded uvcvideo, I see some errors:

$ sudo dmesg | tail -n 50
...
[   27.284877] usb 4-1: Product: Intel RealSense 3D Camera R200
[   27.284878] usb 4-1: Manufacturer: Intel Corp
[   27.284879] usb 4-1: SerialNumber: SN_2421007586
[  146.474076] usb 4-1: usbfs: process 2512 (cpp-capture) did not claim interface 3 before use
[  146.477690] usb 4-1: usbfs: process 2512 (cpp-capture) did not claim interface 5 before use
[  152.267237] usb 4-1: usbfs: process 2514 (cpp-capture) did not claim interface 2 before use
[  152.283057] usb 4-1: usbfs: process 2514 (cpp-capture) did not claim interface 4 before use
[  162.653226] usb 4-1: USB disconnect, device number 2
[  597.419219] usb 4-1: new SuperSpeed USB device number 3 using xhci_hcd
[  597.437910] usb 4-1: New USB device found, idVendor=8086, idProduct=0a80
[  597.437916] usb 4-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[  597.437920] usb 4-1: Product: Intel RealSense 3D Camera R200
[  597.437923] usb 4-1: Manufacturer: Intel Corp
[  597.437926] usb 4-1: SerialNumber: SN_2421007586

I have reported this issue to the Intel RealSense community. However, 5 days have passed and nobody has replied. Could you please help me solve this issue? Thanks!

Service '/camera/get_settings' is missing

System Configuration

Please complete Your Configuration detail below. Refer to the BKC at Configuration.

Version Your Configuration
Operating System Ubuntu 16.04 LTS
Kernel 4.4.0-28-generic
Backend video4linux
ROS kinetic
ROS RealSense 1.2.0
librealsense 0.9.2
R200 Firmware 1.0.72.06

The ROS service /camera/get_settings is no longer available in release 1.2.0.
$ rosservice list | grep get_settings

Expected Behavior

$ rosservice call /camera/get_settings
configuration_str: color_backlight_compensation:1;color_brightness:56;color_contrast:32;color_gain:32;color_gamma:220;color_hue:0;color_saturation:128;color_sharpness:0;color_white_balance:6500;color_enable_auto_white_balance:1;r200_lr_auto_exposure_enabled:0;r200_lr_gain:400;r200_lr_exposure:164;r200_emitter_enabled:1;r200_depth_units:1000;r200_depth_clamp_min:0;r200_depth_clamp_max:65535;r200_disparity_multiplier:32;r200_auto_exposure_mean_intensity_set_point:512;r200_auto_exposure_bright_ratio_set_point:0.20000000298023224;r200_auto_exposure_kp_gain:0.0024999999441206455;r200_auto_exposure_kp_exposure:0.0024999999441206455;r200_auto_exposure_kp_dark_threshold:10;r200_auto_exposure_top_edge:0;r200_auto_exposure_bottom_edge:65535;r200_auto_exposure_left_edge:0;r200_auto_exposure_right_edge:65535;r200_depth_control_estimate_median_decrement:5;r200_depth_control_estimate_median_increment:5;r200_depth_control_median_threshold:192;r200_depth_control_score_minimum_threshold:1;r200_depth_control_score_maximum_threshold:512;r200_depth_control_texture_count_threshold:6;r200_depth_control_texture_difference_threshold:24;r200_depth_control_second_peak_threshold:27;r200_depth_control_neighbor_threshold:7;r200_depth_control_lr_threshold:24;

Actual Behavior

$ rosservice call /camera/get_settings
ERROR: Service [/camera/get_settings] is not available.

Steps to Reproduce

Window 1:
$ roslaunch realsense_camera r200_nodelet_default.launch
Window 2:
$ rosservice call /camera/get_settings

Service call failed! in odd number runs

System Configuration

Please complete Your Configuration detail below.

Version Best Known Your Configuration
Operating System Ubuntu 14.04.4 LTS Ubuntu 14.04.4 LTS
Kernel 4.4.0-040400-generic 3.10.96+
Backend video4linux video4linux
ROS indigo indigo
ROS RealSense Latest Release 1.0.3
librealsense 0.9.1 0.9.1
R200 Firmware 1.0.72.06 3.10.10.0

Expected Behavior

I want to run realsense_camera_nodelet on an Odroid XU4 (Samsung Exynos5422 ARM® Cortex™-A15 Quad 2.0GHz / Cortex™-A7 Quad 1.4GHz). I am using an SR300 camera. I used the package for the SR300 from here:
https://github.com/nlyubova/realsense
and the librealsense library for ARM from here:
https://github.com/Maghoumi/librealsense
I reduced the fps to 30.

Actual Behavior

The first time I run the node it crashes and I receive the error below, but the second time it runs without error; it alternates, crashing one time and running successfully the next:

[FATAL] [1356999608.372189726]: Service call failed!
[SR300_nodelet_manager-2] process has died [pid 10925, exit code -11, cmd /opt/ros/indigo/lib/nodelet/nodelet manager __name:=SR300_nodelet_manager __log:=/root/.ros/log/fa7186aa-53a8-11e2-bf31-7cdd908e0595/SR300_nodelet_manager-2.log].
log file: /root/.ros/log/fa7186aa-53a8-11e2-bf31-7cdd908e0595/SR300_nodelet_manager-2*.log
[camera_nodelet-3] process has died [pid 10931, exit code 255, cmd /opt/ros/indigo/lib/nodelet/nodelet load realsense_camera/RealsenseNodelet SR300_nodelet_manager rgb/image_raw:=app/camera/rgb/image_raw camera/depth/image_raw:=depth/image_raw camera/color/image_raw:=rgb/image_raw camera/infrared1/image_raw:=ir/image_raw camera/infrared2/image_raw:=ir2/image_raw camera/depth/points:=depth/points camera/depth/camera_info:=depth/camera_info camera/color/camera_info:=rgb/camera_info camera/infrared1/camera_info:=ir/camera_info camera/infrared2/camera_info:=ir2/camera_info __name:=camera_nodelet __log:=/root/.ros/log/fa7186aa-53a8-11e2-bf31-7cdd908e0595/camera_nodelet-3.log].
log file: /root/.ros/log/fa7186aa-53a8-11e2-bf31-7cdd908e0595/camera_nodelet-3*.log

Do you have any idea? I think somewhere we should add a waitForService line, but I don't know where.

Steps to Reproduce

I can run the roslaunch successfully using line below:

timeout 10 roslaunch realsense_camera realsense_sr300.launch
roslaunch realsense_camera realsense_sr300.launch 

R200 Jetson TX1 ROS

### System Configuration
Please complete Your Configuration detail below.

Version Best Known Your Configuration
Operating System Ubuntu 14.04.4 LTS Ubuntu 14.04.1 LTS
Kernel 4.4.0-040400-generic 3.10.67-g458d45c
ROS indigo indigo
librealsense 0.9.1 0.9.1
R200 Firmware 1.0.72.06 1.0.72.06

### Expected Behavior
Realsense ROS package compiling.

### Actual Behavior
Examples from librealsense work properly using the LibUVC backend on the NVIDIA Jetson TX1; the only problem is the one about unplugging and replugging the RealSense.
The RealSense ROS package does not compile:

ubuntu@tegra-ubuntu:~/rs_ws$ catkin_make
Base path: /home/ubuntu/rs_ws
Source space: /home/ubuntu/rs_ws/src
Build space: /home/ubuntu/rs_ws/build
Devel space: /home/ubuntu/rs_ws/devel
Install space: /home/ubuntu/rs_ws/install

Running command: "make cmake_check_build_system" in "/home/ubuntu/rs_ws/build"

Running command: "make -j4 -l4" in "/home/ubuntu/rs_ws/build"

[ 0%] [ 11%] Built target std_msgs_generate_messages_py
Built target realsense_camera_gencfg
[ 11%] Built target std_msgs_generate_messages_cpp
[ 22%] [ 22%] Built target gtest
Built target std_msgs_generate_messages_lisp
[ 22%] Built target _realsense_camera_generate_messages_check_deps_cameraConfiguration
[ 33%] [ 44%] [ 66%] Built target realsense_camera_generate_messages_cpp
Built target realsense_camera_generate_messages_lisp
Built target realsense_camera_generate_messages_py
make[2]: *** No rule to make target `/usr/lib/arm-linux-gnueabihf/libopencv_videostab.so.2.4.8', needed by `/home/ubuntu/rs_ws/devel/lib/librealsense_camera_nodelet.so'. Stop.
make[1]: *** [realsense/camera/CMakeFiles/realsense_camera_nodelet.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
make[2]: *** No rule to make target `/usr/lib/arm-linux-gnueabihf/libopencv_videostab.so.2.4.8', needed by `/home/ubuntu/rs_ws/devel/lib/realsense_camera/realsense_camera_test_rgbd'. Stop.
make[1]: *** [realsense/camera/CMakeFiles/realsense_camera_test_rgbd.dir/all] Error 2
make[2]: *** No rule to make target `/usr/lib/arm-linux-gnueabihf/libopencv_videostab.so.2.4.8', needed by `/home/ubuntu/rs_ws/devel/lib/realsense_camera/realsense_camera_test'. Stop.
make[1]: *** [realsense/camera/CMakeFiles/realsense_camera_test.dir/all] Error 2
make: *** [all] Error 2
Invoking "make -j4 -l4" failed

It is a problem with OpenCV, because the TX1 uses opencv4tegra. Is there any known fix for it?

### Steps to Reproduce

  1. Follow the installation guide for librealsense using the LibUVC backend, avoiding the mssse flag for compilation.
  2. Download the ROS package for realsense and try to compile it with catkin_make.
  3. Get the error previously described.

F200 support?

Hello,

I am really new to ROS and 3D cameras, so forgive me if the issue is easy to solve.

I managed to install everything according to the docs, but when running one of the launch files, there is an error about infrared2 stream:

`process[RealsenseNodelet-2]: started with pid [9442]
[ INFO] [1458304316.516457675]: Initializing nodelet with 8 worker threads.
[ INFO] [1458304316.584243499]: RealSense Camera - Starting camera nodelet.
[ INFO] [1458304317.185167616]: RealSense Camera - Number of cameras connected: 1
[ INFO] [1458304317.185233790]: RealSense Camera - Firmware version: 2.60.0.0
[ INFO] [1458304317.185246538]: RealSense Camera - Name: Intel RealSense F200
[ INFO] [1458304317.185260261]: RealSense Camera - Serial no: 015150131405
[ INFO] [1458304317.185271646]: RealSense Camera - Enabling Color Stream: preset mode
[ INFO] [1458304317.185402923]: RealSense Camera - Enabling Depth Stream: preset mode
[ INFO] [1458304317.185522817]: RealSense Camera - Enabling Infrared Stream: preset mode
[ INFO] [1458304317.185639040]: RealSense Camera - Enabling Infrared2 Stream: preset mode
[ERROR] [1458304317.185765818]: RealSense Camera - Error calling rs_get_stream_intrinsics ( device:0x1c22cb0, stream:INFRARED2, intrin:0x7ffc92766080 ):
stream not enabled: INFRARED2

[FATAL] [1458304317.301574020]: Service call failed!
`

I suppose this is related to differences between the F200 and R200 cameras? What do you suggest I do?

Thanks,
Gregor

SR300

Do you plan to support the SR300? What should its format be for RGB and depth data?

DEPTH_FORMAT = RS_FORMAT_Z16;
COLOR_FORMAT = RS_FORMAT_RGB8;
IR1_FORMAT = RS_FORMAT_Y16;

Is that right?

Parent link stays as "camera_link" even after changing the camera parameter

When running two instances of realsense_r200_rgbd.launch simultaneously, both cameras retain the parent link of "camera_link" for the rgb and optical frames.

e.g. If you set camera:=camera1, the rgb and depth links will be camera1_rgb_frame and camera1_depth_frame, but the parent link will be camera_link instead of camera1_link.

Invalid roslaunch syntax

System Configuration

Please complete Your Configuration detail below.

Version Best Known Your Configuration
Operating System Ubuntu 14.04.4 LTS ?
Kernel 4.4.0-040400-generic ?-?-?
Backend video4linux ?
ROS indigo ?
ROS RealSense Latest Release ?.?.?
librealsense 0.9.1 ?.?.?
R200 Firmware 1.0.72.06 ?.?.?.?

How to collect Configuration Data

This section can be deleted before submission.

Version Method
Operating System grep DISTRIB_DESCRIPTION /etc/*elease*
Kernel uname -r
Backend ls /sys/class
ROS rosversion -d
ROS RealSense rosversion realsense_camera
librealsense cat /librealsense/readme.md
R200 Firmware View the ROS log from running nodelet OR /librealsense/bin/cpp-enumerate-devices

Expected Behavior

Launching the launch files when called with roslaunch from the folder where the launch files are found.

Actual Behavior

while processing /home/oushesh/catkin_ws_r200/src/realsense_camera/launch/includes/realsense_r200_nodelet.launch.xml: Invalid roslaunch XML syntax: [Errno 2] No such file or directory: u'/home/oushesh/catkin_ws_r200/src/realsense_camera/launch/includes/realsense_r200_nodelet.launch.xml'

Steps to Reproduce

Any idea why this is so?

I also tried a different git repository:
https://github.com/BlazingForests/realsense_camera/

With that one it is working fine, only I do not get an RGB picture. I got the depth picture and point clouds, but they were quite noisy.

So I wanted to try this new repository since it is more recent.

Thanks for any help guys.

Registered depth and color

Version Best Known Your Configuration
Operating System Ubuntu 14.04.4 LTS Ubuntu 14.04.4 LTS
Kernel 4.4.0-040400-generic 4.4.0-040400-generic
ROS indigo indigo
librealsense 0.9.1 0.9.1
R200 Firmware 1.0.72.06 1.0.72.06

Expected Behavior

Registered (matched) depth and color images at 640x480.
I'm trying to use rtabmap (a mapping package, not so relevant to the problem) in ROS, and I want to get a color image and a matching depth image at 640x480 resolution.

Actual Behavior

Using "realsense_r200_nodelet_standalone_manual.launch" (with color&depth width and height set to 640x480), and using the topics
/camera/color/image_raw
/camera/depth/image_raw
I get the following results:
(screenshots attached: combined, depth and color images)

where the color and depth don't match.
Using "realsense_r200_rgbd.launch", which should register (match?) the color & depth, gives me a 480x360 resolution at "/camera/depth/image_raw". Other topics didn't give me any input, or a bad (480x360) resolution.
Only "/camera/depth_registered/sw_registered/image_rect" and "#same#/image_rect_raw" gave me results at the desired resolution (640x480), but in some weird format (not as "/camera/depth/image_raw").

Trying to change the default resolution in realsense_camera_nodelet.h resulted in a flickering (bad) depth image. I couldn't find another direction to try.

I saw in the limitations section that you mention there isn't hardware-based registration, but the software registration doesn't suit me either. Is there a way to fix it? Or something else to try to get matching depth and color images at 640x480?

Steps to Reproduce

Mentioned launch files and "rosrun image_view image_view image:=#topic#"

Setting static parameters R200

System Configuration

Please complete Your Configuration detail below.

Version Known Good Your Configuration
Operating System ubuntu 14.04 ubuntu 14.04
Kernel Version 4.4.0-13 4.4.0-040400-generic
R200 FW 1.0.72.06 1.0.72.06
librealsense 0.9.1 0.9.1

Question

How can I set the static parameters for the R200? I'm new to ROS and I have googled around and can't find anything. Any help is appreciated, thanks.

Install issue/question: missing symlink

Hi,

I came across this error while running catkin_make.

make[2]: *** No rule to make target `/usr/local/lib/librealsense.so', needed by `/home/user/catkin_ws/devel/lib/librealsense_camera_nodelet.so'.  Stop.
make[1]: *** [ros/realsense/camera/CMakeFiles/realsense_camera_nodelet.dir/all] Error 2

I added "/usr/local/lib" to my "LD_LIBRARY_PATH". However, there isn't a librealsense.so file in "/usr/local/lib".

Can I solve this by putting a symlink to librealsense in /usr/local/lib? Is there a better way to fix this?

Thanks,

Christian

Make clear that the node also supports F200 - SR300 cameras

Great to see official ROS support for the RealSense cameras.

While the node supports the F200 and SR300 cameras as well, it is not very obvious.

The documentation should mention it, and the launch files should not be named *_r200_* as they are not R200 specific.

320*240 depth image with wrong data

System Configuration

Version Your Configuration
Operating System Ubuntu14.04.4 LTS
Kernel 4.4.0-040400-generic
Backend video4linux
ROS Indigo
ROS RealSense 1.1.0
librealsense 0.9.1
R200 Firmware 1.0.72.06

I subscribe to the /camera/depth/image_raw topic to get the depth image, but the depth data seems wrong. So I just tested the depth data in realsense_camera_nodelet.cpp: in the function "RealsenseNodelet::prepareStreamData", I copy the depth data and save the depth image. The 360_480 depth images look fine, but when I change the resolution to 240_320, the depth image seems to be wrong, as shown in the attached screenshot (test320_240).

Cannot install realsense_camera

System Configuration

Please complete Your Configuration detail below. Refer to the BKC at Configuration.

Version Your Configuration
Operating System Ubuntu 16.04 LTS
Kernel 4.4.0-040400-generic
Backend
ROS kinetic
ROS RealSense Cannot locate [realsense_camera]
librealsense 0.9.1
R200 Firmware 1.0.72.06

Expected Behavior

Install realsense_camera

Actual Behavior

$ rosdep install realsense_camera

ERROR: Rosdep cannot find all required resources to answer your query
Missing resource realsense_camera
ROS path [0]=/opt/ros/kinetic/share/ros
ROS path [1]=/opt/ros/kinetic/share
ROS path [2]=/opt/ros/kinetic/stacks

Steps to Reproduce

Issue with color space

It seems that the RGB image published by the node does not use the correct color space (see screenshot below)

(screenshot attached: color_issue)

Nodelet crashes when trying to display point cloud

Happens with a SR300 on a 32bit Ubuntu 14.04 (with a patched kernel 4.4) + ROS Jade

Steps to reproduce:

  • start roscore and rviz
  • launch realsense_camera realsense_r200_nodelet_standalone_preset.launch
  • in rviz, add a PointCloud2 visualization and set the topic to camera/depth/points
  • the nodelet crashes.

roslaunch output:

$ roslaunch realsense_camera realsense_r200_nodelet_standalone_preset.launch 
... logging to /home/lab/.ros/log/b4e40ce0-dfb7-11e5-8a15-00dbdf1f9ec4/roslaunch-PORTEGE-R830-4670.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.

started roslaunch server http://PORTEGE-R830:38149/

SUMMARY
========

PARAMETERS
 * /rosdistro: jade
 * /rosversion: 1.11.16

NODES
  /
    RealsenseNodelet (nodelet/nodelet)
    standalone_nodelet (nodelet/nodelet)

ROS_MASTER_URI=http://localhost:11311

core service [/rosout] found
process[standalone_nodelet-1]: started with pid [4688]
process[RealsenseNodelet-2]: started with pid [4689]
[ INFO] [1456841667.896780624]: Initializing nodelet with 4 worker threads.
[ INFO] [1456841668.391947827]: RealSense Camera - Starting camera nodelet.
[ INFO] [1456841669.237153943]: RealSense Camera - Number of cameras connected: 1
[ INFO] [1456841669.237346757]: RealSense Camera - Firmware version: 3.10.10.0
[ INFO] [1456841669.237507359]: RealSense Camera - Name: Intel RealSense SR300
[ INFO] [1456841669.237656812]: RealSense Camera - Serial no: 54FFFFFF9205001823
[ INFO] [1456841669.237785441]: RealSense Camera - Enabling Depth Stream: preset mode
[ INFO] [1456841669.238289684]: RealSense Camera - Enabling Color Stream: preset mode
[ INFO] [1456841669.275615697]: RealSense Camera - Publishing camera transforms
[standalone_nodelet-1] process has died [pid 4688, exit code -11, cmd /opt/ros/jade/lib/nodelet/nodelet manager __name:=standalone_nodelet __log:=/home/lab/.ros/log/b4e40ce0-dfb7-11e5-8a15-00dbdf1f9ec4/standalone_nodelet-1.log].
log file: /home/lab/.ros/log/b4e40ce0-dfb7-11e5-8a15-00dbdf1f9ec4/standalone_nodelet-1*.log
[RealsenseNodelet-2] process has finished cleanly
log file: /home/lab/.ros/log/b4e40ce0-dfb7-11e5-8a15-00dbdf1f9ec4/RealsenseNodelet-2*.log
all processes on machine have died, roslaunch will exit
shutting down processing monitor...
... shutting down processing monitor complete
done

The nodelet logs do not contain anything useful.

Importantly, displaying the camera in rviz works as expected.

error roslaunch realsense_camera realsense_r200_nodelet_standalone_preset.launch

System Configuration

Please complete Your Configuration detail below.

Version Best Known Your Configuration
Operating System Ubuntu 14.04.4 LTS Ubuntu 14.04.4 LTS
Kernel 4.4.0-040400-generic 4.4.0-040400-generic
Backend video4linux Video4Linux backend
ROS indigo indigo
ROS RealSense Latest Release Latest Release
librealsense 0.9.1 0.9.1
R200 Firmware 1.0.72.06 1.0.72.06

Hi! I'm Wejay.
I have the same problem as issue #26. I'm not really good at Linux, please tell me how to solve this
problem.
Also, I cannot run cpp-capture from the command line, but running it within Qt Creator works.
It shows:
RealSense error calling rs_create_context(api_version:4):
uvc_open2(...) returned Access denied
From https://github.com/IntelRealSense/librealsense/issues/81, I really cannot understand how to fix this problem.

xyzrgb point cloud freezes in rviz after displaying for 1 second

System Configuration

Version Known Good Your Configuration
Operating System ubuntu 14.04 ubuntu 14.04
Kernel Version 4.4.0-13 4.4.0-13
R200 FW 1.0.72.06 1.0.72.06
librealsense 0.9.1 master f019a > 0.9.1
ROS Indigo Jade

Expected Behavior

Start rgbd node, should display xyzrgb point cloud. (Point cloud with color from rgb image).

Actual Behavior

xyzrgb point cloud ( /camera/depth_registered/points ) displays for 1 second before crashing

xyz point cloud displays fine ( /camera/depth/points )

Steps to Reproduce

Unplug the RealSense USB connection and reconnect it (it won't reliably start /camera/depth_registered/points at all otherwise).

roslaunch realsense_camera realsense_r200_rgbd.launch 
rviz

Add /camera/depth_registered/points as PointCloud2

Refactor code to easily support additional cameras

System Configuration

Version Your Configuration
Operating System Ubuntu 16.04 LTS
Kernel 4.4.0-22-generic
Backend video4linux
ROS kinetic
ROS RealSense 1.1.0
librealsense v0.9.2
R200 Firmware 1.0.72.06

Details

Refactor the current code to make it easier to extend to additional RealSense cameras.
Suggest creating a core base camera class and then extending it for the R200.

After kernel update, booting problem occur!

System Configuration

Please complete Your Configuration detail below. Refer to the BKC at Configuration.

Version | Your Configuration
Operating System | Ubuntu 14.04 LTS
Kernel | 4.4.0-040400-generic
Backend | ?
ROS | indigo
ROS RealSense | ?.?.?
librealsense | ?.?.?
R200 Firmware | ?.?.?.?

How to collect Configuration Data

This section can be deleted before submission.

Version | Method
Operating System | grep DISTRIB_DESCRIPTION /etc/*elease*
Kernel | uname -r
Backend | ls /sys/class
ROS | rosversion -d
ROS RealSense | rosversion realsense_camera
librealsense | cat /librealsense/readme.md
R200 Firmware | View the ROS log from running nodelet OR /librealsense/bin/cpp-enumerate-devices

After I updated the kernel following this website:

https://github.com/IntelRealSense/librealsense/blob/master/doc/installation.md

Updated 4.4 Stable Kernel
  • Run the following script to install necessary dependencies (GCC 4.9 compiler and openssl) and update the kernel to v4.4-wily:
  • ./scripts/install_dependencies-4.4.sh

Then I rebooted.

After I logged in to Ubuntu, nothing showed up.

The desktop does not come up, so I was not able to run anything.

So I tried to reboot into the previous kernel version,

but the result was the same.

Would any of you please help me figure out how to fix this?

Thank you in advance.


Problem with cpp-capture example: rs.warn: Subdevice 1 bad magic number 0x0

I have the following issue when trying to use an R200 with librealsense.

It behaves as shown in the following figure:

(screenshot: 2016-06-23 16 41 30)

The COLOR image is shown correctly,

but the depth / infrared1 / infrared2 images are not enabled and are shown as black.

wkyoun@wkyoun-15Z960-GA70K:~/librealsense-stable/bin$ ./cpp-capture

Capturing DEPTH at 480 x 360, fov = 56.2 x 43.6, distortion = NONE
Capturing COLOR at 640 x 480, fov = 54.5 x 41.9, distortion = MODIFIED_BROWN_CONRADY
Capturing INFRARED at 480 x 360, fov = 56.2 x 43.6, distortion = NONE
Capturing INFRARED2 at 480 x 360, fov = 56.2 x 43.6, distortion = NONE
rs.warn: Subdevice 0 bad magic number 0x260025
rs.warn: Subdevice 1 bad magic number 0x0
rs.warn: Subdevice 0 bad magic number 0x230026
rs.warn: Subdevice 1 bad magic number 0x0
rs.warn: Subdevice 0 bad magic number 0x290024
rs.warn: Subdevice 1 bad magic number 0x0
rs.warn: Subdevice 0 bad magic number 0x29002f
rs.warn: Subdevice 1 bad magic number 0x0
rs.warn: Subdevice 0 bad magic number 0x1250108
rs.warn: Subdevice 1 bad magic number 0x0
rs.warn: Subdevice 0 bad magic number 0x1380139
rs.warn: Subdevice 1 bad magic number 0x0
rs.warn: Subdevice 0 bad magic number 0x14b0137
rs.warn: Subdevice 1 bad magic number 0x0
rs.warn: Subdevice 0 bad magic number 0x140013b
rs.warn: Subdevice 1 bad magic number 0x0
rs.warn: Subdevice 0 bad magic number 0x1380146
rs.warn: Subdevice 1 bad magic number 0x0

The sudo dmesg | tail -n 50 command results in the following:

wkyoun@wkyoun-15Z960-GA70K:~/librealsense-stable/bin$ sudo dmesg | tail -n 50

[ 4457.046735] input: Intel RealSense 3D Camera R200 as /devices/pci0000:00/0000:00:14.0/usb2/2-1/2-1:1.4/input/input29
[ 4457.213421] uvcvideo: Found UVC 1.00 device LG HD WebCam (2232:5008)
[ 4457.215481] input: LG HD WebCam as /devices/pci0000:00/0000:00:14.0/usb1/1-6/1-6:1.0/input/input30
[ 4457.222238] uvcvideo: Unknown video format 49323159-0000-0010-8000-00aa00389b71
[ 4457.222247] uvcvideo: Unknown video format 20493859-0000-0010-8000-00aa00389b71
[ 4457.222256] uvcvideo: Found UVC 1.10 device Intel RealSense 3D Camera R200 (8086:0a80)
[ 4457.263339] uvcvideo: Unknown video format 2036315a-0000-0010-8000-00aa00389b71
[ 4457.263367] uvcvideo: Found UVC 1.10 device Intel RealSense 3D Camera R200 (8086:0a80)
[ 4457.267106] uvcvideo: Unable to create debugfs 2-4 directory.
[ 4457.268395] input: Intel RealSense 3D Camera R200 as /devices/pci0000:00/0000:00:14.0/usb2/2-1/2-1:1.2/input/input31
[ 4457.362164] uvcvideo: Unknown video format 30315752-0000-0010-8000-00aa00389b71
[ 4457.362182] uvcvideo: Found UVC 1.10 device Intel RealSense 3D Camera R200 (8086:0a80)
[ 4457.367833] uvcvideo: Unable to create debugfs 2-4 directory.
[ 4457.369267] input: Intel RealSense 3D Camera R200 as /devices/pci0000:00/0000:00:14.0/usb2/2-1/2-1:1.4/input/input32
[ 4636.125445] uvcvideo: Non-zero status (-71) in video completion handler.
[ 4750.332679] uvcvideo: Non-zero status (-71) in video completion handler.
[ 4750.341494] uvcvideo: Non-zero status (-71) in video completion handler.
[ 4750.351163] uvcvideo: Non-zero status (-71) in video completion handler.
[ 4767.050127] systemd-hostnamed[7288]: Warning: nss-myhostname is not installed. Changing the local hostname might make it unresolveable. Please install nss-myhostname!
[ 7309.098505] usbcore: deregistering interface driver uvcvideo
[ 7430.853682] media: Linux media interface: v0.10
[ 7430.859091] Linux video capture interface: v2.00
[ 7430.951182] uvcvideo: Found UVC 1.00 device LG HD WebCam (2232:5008)
[ 7430.953611] input: LG HD WebCam as /devices/pci0000:00/0000:00:14.0/usb1/1-6/1-6:1.0/input/input33
[ 7431.037235] uvcvideo: Unknown video format 49323159-0000-0010-8000-00aa00389b71
[ 7431.037252] uvcvideo: Unknown video format 20493859-0000-0010-8000-00aa00389b71
[ 7431.037268] uvcvideo: Found UVC 1.10 device Intel RealSense 3D Camera R200 (8086:0a80)
[ 7431.040997] uvcvideo: Unknown video format 2036315a-0000-0010-8000-00aa00389b71
[ 7431.041022] uvcvideo: Found UVC 1.10 device Intel RealSense 3D Camera R200 (8086:0a80)
[ 7431.042912] uvcvideo: Unable to create debugfs 2-4 directory.
[ 7431.043529] input: Intel RealSense 3D Camera R200 as /devices/pci0000:00/0000:00:14.0/usb2/2-1/2-1:1.2/input/input34
[ 7431.045414] uvcvideo: Unknown video format 30315752-0000-0010-8000-00aa00389b71
[ 7431.045435] uvcvideo: Found UVC 1.10 device Intel RealSense 3D Camera R200 (8086:0a80)
[ 7431.047256] uvcvideo: Unable to create debugfs 2-4 directory.
[ 7431.047705] input: Intel RealSense 3D Camera R200 as /devices/pci0000:00/0000:00:14.0/usb2/2-1/2-1:1.4/input/input35
[ 7431.048918] usbcore: registered new interface driver uvcvideo
[ 7431.048929] USB Video Class driver (1.1.1)
[ 7431.151828] uvcvideo: Found UVC 1.00 device LG HD WebCam (2232:5008)
[ 7431.157193] input: LG HD WebCam as /devices/pci0000:00/0000:00:14.0/usb1/1-6/1-6:1.0/input/input36
[ 7431.170372] uvcvideo: Unknown video format 49323159-0000-0010-8000-00aa00389b71
[ 7431.170396] uvcvideo: Unknown video format 20493859-0000-0010-8000-00aa00389b71
[ 7431.170423] uvcvideo: Found UVC 1.10 device Intel RealSense 3D Camera R200 (8086:0a80)
[ 7431.224245] uvcvideo: Unknown video format 2036315a-0000-0010-8000-00aa00389b71
[ 7431.224255] uvcvideo: Found UVC 1.10 device Intel RealSense 3D Camera R200 (8086:0a80)
[ 7431.226607] uvcvideo: Unable to create debugfs 2-4 directory.
[ 7431.226758] input: Intel RealSense 3D Camera R200 as /devices/pci0000:00/0000:00:14.0/usb2/2-1/2-1:1.2/input/input37
[ 7431.265523] uvcvideo: Unknown video format 30315752-0000-0010-8000-00aa00389b71
[ 7431.265531] uvcvideo: Found UVC 1.10 device Intel RealSense 3D Camera R200 (8086:0a80)
[ 7431.267786] uvcvideo: Unable to create debugfs 2-4 directory.
[ 7431.268090] input: Intel RealSense 3D Camera R200 as /devices/pci0000:00/0000:00:14.0/usb2/2-1/2-1:1.4/input/input38

It seems to me that similar problems have been reported, but I cannot see any clear solutions.

One last question:

would you please let me know how to find the firmware version of the R200 that I am using?
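
On the firmware-version question: a minimal sketch of the two usual ways to read it, assuming librealsense was built in ~/librealsense (the path is an assumption):

cd ~/librealsense/bin
./cpp-enumerate-devices      # lists each detected device with its serial number and firmware version
# Alternatively, the firmware version is printed in the ROS log when the realsense_camera nodelet starts.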

Calibration Question

Hello,

I am wondering if there are any plans to add a node to perform joint calibration of the RGB and Depth cameras in order to increase the accuracy and allow for the depth images to be registered in the RGB frame.

Thanks

Start multiple camera nodes from single roslaunch XML

System Configuration

Version | Known Good | Your Configuration
Operating System | ubuntu 14.04 | ubuntu 14.04
Kernel Version | 4.4.0-13 | 4.4.0-040400-generic
R200 FW | 1.0.72.06 | 1.0.72.06
librealsense | 0.9.1 | 0.9.1

It'd be nice to be able to start up multiple camera nodes from a single .launch file. For example:

<launch>
    <include file="$(find realsense_camera)/launch/realsense_r200_rgbd.launch">
        <arg name="camera"      value="rs1" />
        <arg name="serial_no"   value="1234567890" />
    </include>

    <include file="$(find realsense_camera)/launch/realsense_r200_rgbd.launch">
        <arg name="camera"      value="rs2" />
        <arg name="serial_no"   value="0987654321" />
    </include>
</launch>

Running the launch file above results in the following error, which brings down the process:

[ERROR] [1460047172.302085622]: RealSense Camera - Error calling rs_create_context ( api_version:4 ): 
UVCIOC_CTRL_QUERY:UVC_SET_CUR error 5, Input/output error

Multiple cameras question

I have some questions on using the ROS wrapper for multiple cameras. Currently I see in your ROS code an rs_context object that contains all camera device information. Your source opens the first RealSense device (index 0) and pushes the data from that device into ROS messages.

It would be convenient to spawn multiple nodelets where each nodelet brings up a single RealSense camera. Currently, if I try that, the nodelets crash. I believe this is due to a single rs_context object controlling all cameras; when a new rs_context object is created (from a different nodelet), it causes some low-level permissions issue. Please correct me if I am wrong.

One way I can think of to get around this problem is to use a single nodelet to get all data streams from all RealSenses through a single rs_context object. Is there a more elegant way to solve this problem?

I would prefer a solution where each RealSense camera can be brought up from a separate nodelet. Please let me know if you want further information or clarification.

RealSense with OpenNI

I'm trying to work with OpenNI using the RealSense camera. I read somewhere that OpenNI doesn't support the RealSense directly (I don't know if that's true, but I tried for some time and couldn't figure out how to do it).

So I'm trying to do it through ROS: I run roslaunch realsense_camera realsense_r200_rgbd.launch and then roslaunch openni2_launch openni2.launch, but it gives me the following error:

SUMMARY
========

PARAMETERS
 * /camera/camera_nodelet_manager/num_worker_threads: 4
 * /camera/depth_rectify_depth/interpolation: 0
 * /camera/driver/auto_exposure: True
 * /camera/driver/auto_white_balance: True
 * /camera/driver/color_depth_synchronization: False
 * /camera/driver/depth_camera_info_url: 
 * /camera/driver/depth_frame_id: camera_depth_opti...
 * /camera/driver/depth_registration: False
 * /camera/driver/device_id: #1
 * /camera/driver/rgb_camera_info_url: 
 * /camera/driver/rgb_frame_id: camera_rgb_optica...
 * /rosdistro: indigo
 * /rosversion: 1.11.16

NODES
  /camera/
    camera_nodelet_manager (nodelet/nodelet)
    depth_metric (nodelet/nodelet)
    depth_metric_rect (nodelet/nodelet)
    depth_points (nodelet/nodelet)
    depth_rectify_depth (nodelet/nodelet)
    depth_registered_sw_metric_rect (nodelet/nodelet)
    driver (nodelet/nodelet)
    points_xyzrgb_sw_registered (nodelet/nodelet)
    rectify_color (nodelet/nodelet)
    register_depth_rgb (nodelet/nodelet)
  /
    camera_base_link (tf/static_transform_publisher)
    camera_base_link1 (tf/static_transform_publisher)
    camera_base_link2 (tf/static_transform_publisher)
    camera_base_link3 (tf/static_transform_publisher)

ROS_MASTER_URI=http://localhost:11311

core service [/rosout] found
process[camera/camera_nodelet_manager-1]: started with pid [27714]
process[camera/driver-2]: started with pid [27717]
process[camera/rectify_color-3]: started with pid [27718]
process[camera/depth_rectify_depth-4]: started with pid [27719]
process[camera/depth_metric_rect-5]: started with pid [27727]
process[camera/depth_metric-6]: started with pid [27742]
process[camera/depth_points-7]: started with pid [27765]
process[camera/register_depth_rgb-8]: started with pid [27782]
process[camera/points_xyzrgb_sw_registered-9]: started with pid [27795]
process[camera/depth_registered_sw_metric_rect-10]: started with pid [27812]
[FATAL] [1460642189.985024526]: Service call failed!
process[camera_base_link-11]: started with pid [27829]
process[camera_base_link1-12]: started with pid [27833]
process[camera_base_link2-13]: started with pid [27837]
process[camera_base_link3-14]: started with pid [27841]
[ INFO] [1460642190.084527595]: Initializing nodelet with 4 worker threads.
[camera/rectify_color-3] process has died [pid 27718, exit code 255, cmd /opt/ros/indigo/lib/nodelet/nodelet load image_proc/rectify camera_nodelet_manager --no-bond image_mono:=rgb/image_raw image_rect:=rgb/image_rect_color __name:=rectify_color __log:=/home/lluis/.ros/log/c772d3be-0244-11e6-aa51-08edb95c077a/camera-rectify_color-3.log].
log file: /home/lluis/.ros/log/c772d3be-0244-11e6-aa51-08edb95c077a/camera-rectify_color-3*.log
[FATAL] [1460642190.529826850]: Service call failed!
[camera/driver-2] process has died [pid 27717, exit code 255, cmd /opt/ros/indigo/lib/nodelet/nodelet load openni2_camera/OpenNI2DriverNodelet camera_nodelet_manager ir:=ir rgb:=rgb depth:=depth depth_registered:=depth_registered rgb/image:=rgb/image_raw depth/image:=depth_registered/image_raw __name:=driver __log:=/home/lluis/.ros/log/c772d3be-0244-11e6-aa51-08edb95c077a/camera-driver-2.log].
log file: /home/lluis/.ros/log/c772d3be-0244-11e6-aa51-08edb95c077a/camera-driver-2*.log

Any idea what I can do to solve this, or how to use OpenNI with the RealSense camera?

Thank you so much!
Lluis

Filtering noise from depth image/point cloud

The point clouds generated by the RealSense have a bit of noise in them, which appears as "ghost points" or "speckles" in mid-air. These ghost points are problematic in places like the ROS nav stack, where they are marked as obstacles. I've tested this across 5 different sensors, so I'm presuming it's just a property of the sensor.

In order to use the RealSenses in the ROS nav stack, the point clouds have to be filtered. I've had very good luck removing these ghost points using PCL's statistical outlier filter through the StatisticalOutlierNodelet provided with pcl_ros.

However, this filter is extremely computationally intensive -- using one filter on the RealSense running at 320x240 @ 30 fps generates a filtered stream at 5 fps that uses 100% CPU on a dual-core i7.

I've been able to reduce this computational load by throttling the depth image topic, binning the depth image (via image_proc/CropDecimate nodelet), projecting a new point cloud from it, and filtering that point cloud. This reduces the load and the sensors are now usable, but the load is still quite high due to the stat outlier filter. I'm still searching for better solutions.
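
For reference, one way to stand up such an outlier filter without writing a custom launch file is to run the pcl_ros nodelet standalone. A minimal sketch, assuming the stock pcl/StatisticalOutlierRemoval nodelet and its mean_k / stddev parameters; the values shown are illustrative, not tuned:

rosrun nodelet nodelet standalone pcl/StatisticalOutlierRemoval __name:=sor \
    ~input:=/camera/depth_registered/points _mean_k:=20 _stddev:=0.5
# The filtered cloud comes out on /sor/output; point the nav stack (or rviz) at that topic.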

My request is - can we get out-of-the-box filtering with the RealSense on the full resolution depth image/point cloud that is computationally efficient? This would make the RealSenses much more plug & play and much more useful for ROS applications.

  • Has anybody come up with a better way to filter, such as running a filter directly on the depth image, or are there settings on the camera that can natively do this?
  • Is it possible to do this filtering onboard the firmware of the sensor?
  • If filtering can't be done on the hardware, could Intel or somebody else provide a set of nodelets that can do the filtering in software + launch files for them?

ROS Installation of Realsense package

I am trying to install the ROS librealsense package on OS X. ROS is installed properly and working well. librealsense is also working; cpp-capture runs fine.

When trying to run catkin_make on the ROS package, I get the following error:

Scanning dependencies of target _realsense_camera_generate_messages_check_deps_cameraConfiguration
Scanning dependencies of target realsense_camera_gencfg
[ 28%] Generating dynamic reconfigure files from cfg/camera_params.cfg: /Users/ecmnet/PixHawk/ros/devel/include/realsense_camera/camera_paramsConfig.h /Users/ecmnet/PixHawk/ros/devel/lib/python2.7/site-packages/realsense_camera/cfg/camera_paramsConfig.py
**/Users/ecmnet/PixHawk/ros/build/camera/setup_custom_pythonpath.sh: line 4: /Users/ecmnet/PixHawk/ros/src/camera/cfg/camera_params.cfg: Permission denied**
/Users/ecmnet/PixHawk/ros/build/camera/setup_custom_pythonpath.sh: line 4: exec: /Users/ecmnet/PixHawk/ros/src/camera/cfg/camera_params.cfg: cannot execute: Undefined error: 0
make[2]: *** [/Users/ecmnet/PixHawk/ros/devel/include/realsense_camera/camera_paramsConfig.h] Error 126
make[1]: *** [camera/CMakeFiles/realsense_camera_gencfg.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....

Any advice on how to overcome this issue? Thank you.
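
The "Permission denied" on camera_params.cfg above is typically just a missing execute bit: dynamic_reconfigure runs .cfg files as scripts during the build. A minimal sketch of the usual fix, using the paths from the error message:

chmod +x /Users/ecmnet/PixHawk/ros/src/camera/cfg/camera_params.cfg
cd /Users/ecmnet/PixHawk/ros
catkin_make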

Add missing install targets

Add missing install targets for the following:

  • nodelet library
  • launch files
  • xml files
  • rviz config files

This will be fixed in the next release.

proportional-to-distance mean error of depth measurements

System Configuration

Version | Your Configuration
Operating System | Ubuntu 14.04.4 LTS
Kernel | 4.4.0-040400-generic
Backend | video4linux
ROS | indigo
ROS RealSense | 1.0.4
librealsense | 0.9.1
R200 Firmware | 1.0.72.06

I'm currently trying to use the R200 on a quadrocopter for 3D map generation. Something that I observed is that the (or my) R200 seems to have a mean error in depth measurements that is dependent on the measured distance. The following maps were generated while moving the quadrocopter slowly towards a wall. In the first image you can clearly see how the wall shifts forward when the quadrotor gets nearer (i.e. mean error gets smaller). In the second image I used a quadratic function of measured distance to subtract the mean error.

(images: without_sensor_model_top1, with_sensor_model_top1)

The subtraction of the mean error works fairly well. I'm just wondering whether this is a problem with my R200 or expected behaviour, so that I can account for it in the report.
Depth image resolution is 320x240 at 30 fps. I don't modify any of the calibration settings.
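
For anyone trying the same workaround: the correction described above amounts to something like d_corrected = d_measured - (a * d_measured^2 + b * d_measured + c), with a, b and c fitted from readings against a flat target at known distances. The quadratic form and coefficient names here are only an illustration of the approach, not values taken from this report.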

Not able to launch camera due to Error calling rs_enable_stream

Even after reinstalling it many times,

when I try to launch the RealSense camera with the following command
roslaunch realsense_camera r200_nodelet_default.launch

I get the following error:

NODES
/
camera (nodelet/nodelet)
nodelet_manager (nodelet/nodelet)

auto-starting new master
process[master]: started with pid [22679]
ROS_MASTER_URI=http://localhost:11311

setting /run_id to bd721756-48c3-11e6-8aa1-78d00420a772
process[rosout-1]: started with pid [22692]
started core service [/rosout]
process[nodelet_manager-2]: started with pid [22707]
process[camera-3]: started with pid [22710]
[ INFO] [1468391622.192277938]: Initializing nodelet with 8 worker threads.
[ INFO] [1468391623.083983247]: /camera - Detected the following cameras:
- Serial No: 014150160808; Firmware: 2.60.0.0; Name: Intel RealSense F200; USB Port ID: 2-1
[ INFO] [1468391623.087121962]: /camera - Connecting to camera with Serial No: 014150160808 USB Port ID: 2-1
[ INFO] [1468391623.089646350]: /camera - Enabling Color stream: manual mode
[ INFO] [1468391623.092237652]: /camera - Enabling Depth stream: manual mode
[ INFO] [1468391623.094707818]: /camera - Enabling IR stream: manual mode
[ INFO] [1468391623.097742781]: /camera - Enabling IR2 stream: manual mode

[ERROR] [1468391623.098017013]: /camera - Error calling rs_enable_stream ( device:0x21f2908, stream:INFRARED2, width:640, height:480, format:Y8, framerate:30 ):
unsupported stream

[FATAL] [1468391623.272767279]: Failed to load nodelet '/camera' of type 'realsense_camera/R200Nodelet' to manager 'nodelet_manager'
[nodelet_manager-2] process has died [pid 22707, exit code 1, cmd /opt/ros/indigo/lib/nodelet/nodelet manager __name:=nodelet_manager __log:=/home/acfr/.ros/log/bd721756-48c3-11e6-8aa1-78d00420a772/nodelet_manager-2.log].
log file: /home/acfr/.ros/log/bd721756-48c3-11e6-8aa1-78d00420a772/nodelet_manager-2*.log
[camera-3] process has died [pid 22710, exit code 255, cmd /opt/ros/indigo/lib/nodelet/nodelet load realsense_camera/R200Nodelet nodelet_manager camera/depth/image_raw:=camera/depth/image_raw camera/color/image_raw:=camera/color/image_raw camera/ir/image_raw:=camera/ir/image_raw camera/ir2/image_raw:=camera/ir2/image_raw camera/depth/points:=camera/depth/points camera/depth/camera_info:=camera/depth/camera_info camera/color/camera_info:=camera/color/camera_info camera/ir/camera_info:=camera/ir/camera_info camera/ir2/camera_info:=camera/ir2/camera_info camera/get_settings:=camera/get_settings __name:=camera __log:=/home/acfr/.ros/log/bd721756-48c3-11e6-8aa1-78d00420a772/camera-3.log].
log file: /home/acfr/.ros/log/bd721756-48c3-11e6-8aa1-78d00420a772/camera-3*.log

System Configuration

Please complete Your Configuration detail below. Refer to the BKC at Configuration.

Version | Your Configuration
Operating System | Ubuntu 14.04.4
Kernel | 4.2.0
Backend | ?
ROS | indigo
ROS RealSense | F200
librealsense | 0.9.1
R200 Firmware | 2.71

How to collect Configuration Data

This section can be deleted before submission.

Version | Method
Operating System | grep DISTRIB_DESCRIPTION /etc/*elease*
Kernel | uname -r
Backend | ls /sys/class
ROS | rosversion -d
ROS RealSense | rosversion realsense_camera
librealsense | cat /librealsense/readme.md
R200 Firmware | View the ROS log from running nodelet OR /librealsense/bin/cpp-enumerate-devices


roslaunch realsense_camera realsense_r200_rgbd.launch (Crash)

Hi,

I followed the "Installation Guide" step by step, and I can even use Qt Creator to run cpp-capture; I see the images just fine. I then proceeded to follow the instructions in the README file within the camera directory. I set my LD_PATH... to usr/local/include etc. As far as I can tell, I've done everything correctly.

After calling catkin_make and successfully building the package, I ran the roslaunch command in the title of this issue, and this is what I get:

(screenshot: 2016-03-10 23 13 08)

What can I do?

Error running r200_nodelet_rgbd.launch

System Configuration

Version | Your Configuration
Operating System | Ubuntu 16.04 LTS
Kernel | 4.4.0-31-generic
Backend |
ROS | kinetic
ROS RealSense | Cannot locate [realsense_camera]
librealsense | 0.9.1
R200 Firmware | 1.0.72.06

Hi! I just got these errors when I was trying to run r200_nodelet_rgbd.launch.
Please help! Thanks!

jack@jack-All-Series:~/catkin_ws$ roslaunch realsense_camera r200_nodelet_r
r200_nodelet_resolution.test  r200_nodelet_rgbd.test
r200_nodelet_rgbd.launch      
jack@jack-All-Series:~/catkin_ws$ roslaunch realsense_camera r200_nodelet_rgbd.launch 
... logging to /home/jack/.ros/log/f7e33cf8-5444-11e6-86bf-086266340963/roslaunch-jack-All-Series-12936.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.

started roslaunch server http://jack-All-Series:41003/

SUMMARY
========

PARAMETERS
 * /camera/camera/base_frame_id: camera_link
 * /camera/camera/color_fps: 30
 * /camera/camera/color_frame_id: camera_rgb_frame
 * /camera/camera/color_height: 480
 * /camera/camera/color_optical_frame_id: camera_rgb_optica...
 * /camera/camera/color_width: 640
 * /camera/camera/depth_fps: 30
 * /camera/camera/depth_frame_id: camera_depth_frame
 * /camera/camera/depth_height: 360
 * /camera/camera/depth_optical_frame_id: camera_depth_opti...
 * /camera/camera/depth_width: 480
 * /camera/camera/enable_color: True
 * /camera/camera/enable_depth: True
 * /camera/camera/enable_pointcloud: False
 * /camera/camera/enable_tf: True
 * /camera/camera/ir2_frame_id: camera_ir2_frame
 * /camera/camera/ir_frame_id: camera_ir_frame
 * /camera/camera/mode: manual
 * /camera/camera/serial_no: 
 * /camera/camera/usb_port_id: 
 * /camera/camera_nodelet_manager/num_worker_threads: 4
 * /camera/depth_rectify_depth/interpolation: 0
 * /camera/disparity_depth/max_range: 4.0
 * /camera/disparity_depth/min_range: 0.5
 * /camera/disparity_registered_sw/max_range: 4.0
 * /camera/disparity_registered_sw/min_range: 0.5
 * /camera/points_xyzrgb_sw_registered/queue_size: 100
 * /rosdistro: kinetic
 * /rosversion: 1.12.2

NODES
  /camera/
    camera (nodelet/nodelet)
    camera_nodelet_manager (nodelet/nodelet)
    debayer (nodelet/nodelet)
    depth_metric (nodelet/nodelet)
    depth_metric_rect (nodelet/nodelet)
    depth_points (nodelet/nodelet)
    depth_rectify_depth (nodelet/nodelet)
    depth_registered_sw_metric_rect (nodelet/nodelet)
    disparity_depth (nodelet/nodelet)
    disparity_registered_sw (nodelet/nodelet)
    points_xyzrgb_sw_registered (nodelet/nodelet)
    rectify_color (nodelet/nodelet)
    rectify_ir (nodelet/nodelet)
    rectify_mono (nodelet/nodelet)
    register_depth_rgb (nodelet/nodelet)

ROS_MASTER_URI=http://localhost:11311

core service [/rosout] found
process[camera/camera_nodelet_manager-1]: started with pid [12954]
process[camera/camera-2]: started with pid [12955]
process[camera/debayer-3]: started with pid [12956]
process[camera/rectify_mono-4]: started with pid [12957]
process[camera/rectify_color-5]: started with pid [12974]
process[camera/rectify_ir-6]: started with pid [12983]
process[camera/depth_rectify_depth-7]: started with pid [13003]
[ INFO] [1469709228.472179466]: Initializing nodelet with 4 worker threads.
process[camera/depth_metric_rect-8]: started with pid [13025]
process[camera/depth_metric-9]: started with pid [13051]
process[camera/depth_points-10]: started with pid [13060]
process[camera/register_depth_rgb-11]: started with pid [13081]
process[camera/points_xyzrgb_sw_registered-12]: started with pid [13097]
process[camera/depth_registered_sw_metric_rect-13]: started with pid [13114]
process[camera/disparity_depth-14]: started with pid [13117]
process[camera/disparity_registered_sw-15]: started with pid [13127]
[ERROR] [1469709229.063014833]: Failed to load nodelet [/camera/camera] of type [realsense_camera/R200Nodelet] even after refreshing the cache: Failed to load library /home/jack/catkin_ws/devel/lib//librealsense_camera_nodelet.so. Make sure that you are calling the PLUGINLIB_EXPORT_CLASS macro in the library code, and that names are consistent between this macro and your XML. Error string: Could not load library (Poco exception = /home/jack/catkin_ws/devel/lib//librealsense_camera_nodelet.so: undefined symbol: _ZN12class_loader20class_loader_private30getCurrentlyLoadingLibraryNameEv)
[ERROR] [1469709229.063055053]: The error before refreshing the cache was: Failed to load library /home/jack/catkin_ws/devel/lib//librealsense_camera_nodelet.so. Make sure that you are calling the PLUGINLIB_EXPORT_CLASS macro in the library code, and that names are consistent between this macro and your XML. Error string: Could not load library (Poco exception = /home/jack/catkin_ws/devel/lib//librealsense_camera_nodelet.so: undefined symbol: _ZN12class_loader20class_loader_private30getCurrentlyLoadingLibraryNameEv)
[FATAL] [1469709229.063154743]: Service call failed!
[camera/camera-2] process has died [pid 12955, exit code 255, cmd /opt/ros/kinetic/lib/nodelet/nodelet load realsense_camera/R200Nodelet camera_nodelet_manager camera/depth/image_raw:=depth/image_raw camera/color/image_raw:=rgb/image_raw camera/ir/image_raw:=ir/image_raw camera/ir2/image_raw:=ir2/image_raw camera/depth/points:=depth/points camera/depth/camera_info:=depth/camera_info camera/color/camera_info:=rgb/camera_info camera/ir/camera_info:=ir/camera_info camera/ir2/camera_info:=ir2/camera_info __name:=camera __log:=/home/jack/.ros/log/f7e33cf8-5444-11e6-86bf-086266340963/camera-camera-2.log].
log file: /home/jack/.ros/log/f7e33cf8-5444-11e6-86bf-086266340963/camera-camera-2*.log
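
The undefined class_loader symbol in the error above is the usual signature of a workspace (or a prebuilt library in it) that was compiled against a different ROS distro than the one it is being loaded into. A minimal sketch of the usual first step, assuming the workspace paths shown in the log:

cd ~/catkin_ws
rm -rf build devel
source /opt/ros/kinetic/setup.bash
catkin_make
source devel/setup.bash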

Make folder for driver consistent with package name ("realsense_camera")

It is a (nearly) ubiquitously used convention within the ROS community to name the folder where a package resides after the package name in the package.xml file. Users new to ROS especially might be confused by the driver being placed in the "camera" folder instead of "realsense_camera" (which is the actual package name according to the package.xml).

Cannot see pointcloud in 1.3.0

Version | Your Configuration
Operating System | Ubuntu 14.04.4 LTS
Kernel | 4.4.0-040400-generic
Backend | video4linux
ROS | indigo
ROS RealSense | 1.3.0; 1.0.3
librealsense | ?.?.?
R200 Firmware | 1.0.72.06

I installed version 1.3.0 of ROS RealSense and tried to run rviz. I can see all four images, but I cannot see the point cloud. There was no error in rviz, but under PointCloud2 -> Status, the Transform [...] item was missing.
When I reverted to version 1.0.3 of ROS RealSense, everything was OK: I could see the point cloud, and the Transform [...] item was there.
Can someone help me figure out what was wrong?

Question: Pointclouds from multiple cameras - single launch file

System Configuration

Please complete Your Configuration detail below. Refer to the BKC at Configuration.

Version | Your Configuration
Operating System | Ubuntu 14.04.4 LTS
Kernel | 4.4.0-040400-generic
Backend | video4linux
ROS | indigo
ROS RealSense | 1.2.0
librealsense | 0.9.1
R200 Firmware | 1.0.72.04

Hi,

Is there already a launch file for getting registered point clouds from multiple cameras?
I can see launch files for getting depth images from multiple cameras, and also a launch file for getting registered point clouds from a single camera. I'm wondering if there is an underlying issue that makes this difficult at the moment, or if it is simply a case of replacing the r200_nodelet_default XML code with the r200_nodelet_rgbd XML code.

Regards,

Christian

Change resolution of R200 for color and depth image

System Configuration

Please complete Your Configuration detail below.

Version | Best Known | Your Configuration
Operating System | Ubuntu 14.04.4 LTS | 14.04.4
Kernel | 4.4.0-040400-generic | 4.4.0-040400-generic
Backend | video4linux | video4linux
ROS | indigo | indigo
ROS RealSense | Latest Release | Latest Release
librealsense | 0.9.1 | 0.9.1
R200 Firmware | 1.0.72.06 | ?.?.?.?

Hi there,

I would like to start the color and depth streams with a different resolution than the default values (which are 640x480). My desired resolution for both the color and depth streams is 1920x1080.

I tried to change the default values in the realsense_camera_nodelet.h file, but that didn't work.
As stated in the documentation, there are several static parameters for these values.

Can anyone help me out on this?

Thanks in advance
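
As a rough sketch of the usual approach: the width/height values are ordinary ROS parameters (they appear as /camera/camera/color_width, /camera/camera/color_height, etc. in the launch output quoted in another issue on this page). Whether the stock launch file exposes them as command-line arguments is an assumption, so you may need to edit the launch file or wrap it instead:

roslaunch realsense_camera r200_nodelet_default.launch color_width:=1920 color_height:=1080
# Note: 1920x1080 is a color-only mode; the R200 depth stream tops out far below 1080p,
# so leave depth_width / depth_height at a mode the device actually advertises.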

Black IR view for R200

Hello all,

Compilation and ROS driver make seemed to go well.

I can now see color video in rviz at roughly 60 Hz.
However, the IR stream seems to be constantly black, with no data as far as I can see.
Consequently, there is no point cloud, etc.
I have tried changing the dynamic parameters in the IR space, to no effect
(they do work; I can toggle the IR illumination on and off).

Is there possibly a configuration issue, or is my unit not working?

Editing default parameters while using my own ROS package

System Configuration

Please complete Your Configuration detail below.

Version | Best Known | Your Configuration
Operating System | Ubuntu 14.04.4 LTS | Ubuntu 14.04.4 LTS
Kernel | 4.4.0-040400-generic | ?-?-?
ROS | indigo | indigo
librealsense | 0.9.1 | newest (April-18-2016)
R200 Firmware | 1.0.72.06 | ?.?.?.?

Hello,

I am new to ROS / librealsense, and I want to permanently change static parameters like texture_difference_threshold so that my ROS node gets modified depth points for further image processing. Is there a way to do so?

With the provided librealsense/cpp-config-ui I was able to temporarily change those values, but found no way to save my edits.

Thanks in advance and regards,

PP.
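
A minimal sketch of how such values are usually persisted in ROS, assuming the option is exposed through dynamic_reconfigure on the running camera node; the node and parameter names below are illustrative, so check dynparam list for the real ones:

rosrun dynamic_reconfigure dynparam list                                                  # find the camera's reconfigure server
rosrun dynamic_reconfigure dynparam set /camera/driver texture_difference_threshold 10   # hypothetical node/parameter names
rosrun dynamic_reconfigure dynparam dump /camera/driver my_r200_params.yaml              # save the current values
rosrun dynamic_reconfigure dynparam load /camera/driver my_r200_params.yaml              # load them again after a restart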

Release realsense?

Are you planning to release the realsense repository as an official ROS package?

Thanks!

Consolidate publish_tf and enable_tf in launch/includes/realsense_r200_nodelet.launch.xml

This should just be a single variable, "publish_tf"; right now the behaviour is backwards from what is expected, and the inclusion of "enable_tf" is confusing.

I have been using the launch file realsense_r200_rgbd.launch with publish_tf set to true (carried over from existing OpenNI-compatible launch files) and wondering why no TFs are being published by the nodelet.
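
A quick way to check whether the nodelet is actually publishing transforms, whichever flag ends up controlling it (the frame names below are the defaults from the parameter listings earlier on this page):

rosrun tf view_frames                               # writes frames.pdf with everything currently on /tf
rosrun tf tf_echo camera_link camera_depth_frame    # prints the transform if it is being published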
