easy_handeye's Issues

Polaris_Spectra ndi is used as the camera to complete the calibration

Hi, first of all, thanks for your contribution to this package.
Recently I got an NDI tracker, version Polaris Spectra. I saw that the picture in the instructions shows an NDI tracker. Could you give me some examples of how to publish the two tf frames /optical_origin and /optical_target using the Polaris Spectra? I haven't found any example showing how to connect to NDI through ROS.

chessboard target

Hi,

Is it possible to execute the eye in hand calibration with a chessboard target?

Best regards
Cesar

A small question about the AR Marker

I'm brand new to calibration. Before I run the demo, I want to know how the AR marker is made.
I have found the ArUco marker generator, but I don't know which dictionary and ID to use.
Hope you can give me some information.
Thanks

Error when compiling

I downloaded aruco_ros, ros-kinetic-visp, vision_visp and easy_handeye into my workspace.

When I run catkin_make I got this error:

CMake Warning at /opt/ros/kinetic/share/catkin/cmake/catkinConfig.cmake:76 (find_package):
Could not find a package configuration file provided by "image_proc" with
any of the following names:

image_procConfig.cmake
image_proc-config.cmake

Add the installation prefix of "image_proc" to CMAKE_PREFIX_PATH or set
"image_proc_DIR" to a directory containing one of the above files. If
"image_proc" provides a separate development package or SDK, be sure it has
been installed.
Call Stack (most recent call first):
vision_visp/visp_hand2eye_calibration/CMakeLists.txt:4 (find_package)

-- Could not find the required component 'image_proc'. The following CMake error indicates that you either need to install the package with the same name or change your environment so that it can be found.
CMake Error at /opt/ros/kinetic/share/catkin/cmake/catkinConfig.cmake:83 (find_package):
Could not find a package configuration file provided by "image_proc" with
any of the following names:

image_procConfig.cmake
image_proc-config.cmake

Add the installation prefix of "image_proc" to CMAKE_PREFIX_PATH or set
"image_proc_DIR" to a directory containing one of the above files. If
"image_proc" provides a separate development package or SDK, be sure it has
been installed.
Call Stack (most recent call first):
vision_visp/visp_hand2eye_calibration/CMakeLists.txt:4 (find_package)

-- Configuring incomplete, errors occurred!

It seems like the system could not find the package "image_proc", so I tried to install this package with apt-get, but failed.

So how can I fix this problem?
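For what it's worth, on ROS Kinetic image_proc ships as a binary package; assuming a standard Ubuntu setup, installing it via apt (the package name follows the usual ros-kinetic-&lt;name&gt; convention) and re-running catkin_make is the typical fix:

```shell
# Install the missing dependency (Ubuntu + ROS Kinetic naming convention),
# then re-run catkin_make from the workspace root.
sudo apt-get update
sudo apt-get install ros-kinetic-image-proc
```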

Add a PDF with a marker and integrate it into the example launch file

This would be another step towards a "Calibration for dummies" package, which is in my opinion desirable.

The marker size could be the real size of the PDF printed in A4 at 100% scale, and the marker ID could be whatever the correct thing to enter there is. I would have made this a pull request if I had managed to look that up within 10 minutes.

eye-in-hand example?

Hi

How do I run the eye-in-hand example using a normal webcam?

Should I bring up the real hand first and then run calibrate.launch,

or create a file with the following contents and launch it?

<launch>
  <include file="$(find easy_handeye)/launch/calibrate.launch">
    <arg name="eye_on_hand" value="true"/>

    <arg name="robot_base_frame" value="/base_link"/>
    <arg name="robot_effector_frame" value="/ee_link"/>

    <arg name="tracking_base_frame" value="/optical_origin"/>
    <arg name="tracking_marker_frame" value="/optical_target"/>
  </include>
</launch>

Meaning of reference_frame, camera_frame, and marker_frame

Hello,

Thanks for this useful repo. However, I want to ask how I should specify the values for reference_frame, camera_frame, and marker_frame in the aruco config files, and what the meaning behind these keywords is.

I have an Asus Xtion camera and I am trying to perform the eye-on-base application. I tried to calculate the coordinate transformation matrix with the manipulator in two positions, but the result was different each time. I suspect the error might be in the way I configured the aruco tags.

Here is my current config file:

<launch>

    <arg name="markerId"        default="26"/>
    <arg name="markerSize"      default="0.075"/>    <!-- in m -->
    <arg name="eye"             default="left"/>
    <arg name="marker_frame"    default="aruco_marker_frame"/>
    <arg name="ref_frame"       default=""/>  <!-- leave empty and the pose will be published wrt param parent_name -->
    <arg name="corner_refinement" default="LINES" /> <!-- NONE, HARRIS, LINES, SUBPIX -->


    <node pkg="aruco_ros" type="single" name="aruco_single">
        <remap from="/camera_info" to="/camera/rgb/camera_info" />
        <remap from="/image" to="/camera/rgb/image_rect_color" />
        <param name="image_is_rectified" value="True"/>
        <param name="marker_size"        value="$(arg markerSize)"/>
        <param name="marker_id"          value="$(arg markerId)"/>
        <param name="reference_frame"    value="$(arg ref_frame)"/>   <!-- frame in which the marker pose will be refered -->
        <param name="camera_frame"       value="camera_depth_frame"/>
        <param name="marker_frame"       value="$(arg marker_frame)" />
        <param name="corner_refinement"  value="$(arg corner_refinement)" />
    </node>

</launch>

How to save Calibration

Hey
I'm using the eye-in-hand calibration and it's working really well. But it seems like the save button doesn't work, or it's just me who can't find the yaml file with the calibration.
Hope you can help me :D

Eye-on-base calibration outcome - issue with publishing result

My eye-on-base calibration went really well, despite a few little bumps; a big THANKS to the authors.

Here is my final yaml file:

eye_on_hand: false
robot_base_frame: fixed_link
tracking_base_frame: camera_base_link
transformation: {qw: 0.7197444041856074, qx: -0.055009208985473554, qy: -0.07374779929687375,
  qz: -0.6881157182256235, x: -0.006299578434856121, y: 0.4156843936451614, z: 0.32690430193807757}

The transform seems correct.
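As a quick sanity check (not part of easy_handeye), the quaternion from the yaml can be converted to a rotation matrix in a few lines of plain Python; the numbers below are copied from the result above:

```python
import math

def quat_to_matrix(qx, qy, qz, qw):
    """Convert a (qx, qy, qz, qw) quaternion to a 3x3 rotation matrix."""
    n = math.sqrt(qx*qx + qy*qy + qz*qz + qw*qw)
    qx, qy, qz, qw = qx/n, qy/n, qz/n, qw/n  # guard against numerical drift
    return [
        [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - qz*qw),     2*(qx*qz + qy*qw)],
        [2*(qx*qy + qz*qw),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - qx*qw)],
        [2*(qx*qz - qy*qw),     2*(qy*qz + qx*qw),     1 - 2*(qx*qx + qy*qy)],
    ]

# Values copied from the calibration yaml above (robot base -> camera base).
R = quat_to_matrix(-0.055009208985473554, -0.07374779929687375,
                   -0.6881157182256235, 0.7197444041856074)
t = [-0.006299578434856121, 0.4156843936451614, 0.32690430193807757]
```

A proper rotation matrix has orthonormal rows and determinant +1, which is a cheap way to confirm the quaternion was copied correctly.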

However, I got an issue when I published this transform.

In /tf, the transforms, as shown below, are wrong:

transforms:

header: 
  seq: 0
  stamp: 
    secs: 1567022266
    nsecs: 765403340
  frame_id: "fixed_link"
child_frame_id: "camera_base_link"
transform: 
  translation: 
    x: 1.0
    y: 1.0
    z: 1.0
  rotation: 
    x: 0.0
    y: 0.681638760023
    z: 0.0
    w: 0.731688868874

However, in /tf_static, the transforms are correct, as shown below:

transforms:

header: 
  seq: 0
  stamp: 
    secs: 1567011306
    nsecs: 311048030
  frame_id: "fixed_link"
child_frame_id: "camera_base_link"
transform: 
  translation: 
    x: -0.00629957843486
    y: 0.415684393645
    z: 0.326904301938
  rotation: 
    x: -0.0550092089855
    y: -0.0737477992969
    z: -0.688115718226
    w: 0.719744404186

Could you please advise where I got it wrong? Thanks!

Reading frames from tf or rviz

Hello,

I am using this package to calibrate a monocular camera eye_on_hand with a Kinova Jaco2.

I have tried several times but the final result is wrong. I know that the end effector frame in RViz is not exactly the same as on the real Kinova robot (there is an angular offset), so in RViz I can see the exact pose of the robot except for the gripper, which has a different orientation.

So I wanted to know if this can lead to the wrong result. I am not sure whether the end effector frame read by this package is the one represented in RViz or the one (uniquely) published on tf.

Thank you very much

Error processing request: canTransform: source_frame marker does not exist.

[ERROR] [1548085881.240854]: Error processing request: canTransform: source_frame marker does not exist.
['Traceback (most recent call last):\n', ' File "/opt/ros/kinetic/lib/python2.7/dist-packages/rospy/impl/tcpros_service.py", line 625, in _handle_request\n response = convert_return_to_response(self.handler(request), self.response_class)\n', ' File "/home/student/catkin_ws/src/easy_handeye/easy_handeye/src/easy_handeye/handeye_server.py", line 38, in take_sample\n self.calibrator.take_sample()\n', ' File "/home/student/catkin_ws/src/easy_handeye/easy_handeye/src/easy_handeye/handeye_calibrator.py", line 140, in take_sample\n transforms = self._get_transforms()\n', ' File "/home/student/catkin_ws/src/easy_handeye/easy_handeye/src/easy_handeye/handeye_calibrator.py", line 123, in _get_transforms\n time = self._wait_for_transforms()\n', ' File "/home/student/catkin_ws/src/easy_handeye/easy_handeye/src/easy_handeye/handeye_calibrator.py", line 111, in _wait_for_transforms\n self.listener.waitForTransform(self.tracking_base_frame, self.tracking_marker_frame, now, rospy.Duration(10))\n', ' File "/opt/ros/kinetic/lib/python2.7/dist-packages/tf/listener.py", line 76, in waitForTransform\n raise tf2_ros.TransformException(error_msg or "no such transformation: \"%s\" -> \"%s\"" % (source_frame, target_frame))\n', 'TransformException: canTransform: source_frame marker does not exist.\n']
Traceback (most recent call last):
File "/home/student/catkin_ws/src/easy_handeye/rqt_easy_handeye/src/rqt_easy_handeye/rqt_easy_handeye.py", line 104, in handle_take_sample
sample_list = self.client.take_sample()
File "/home/student/catkin_ws/src/easy_handeye/easy_handeye/src/easy_handeye/handeye_client.py", line 34, in take_sample
return self.take_sample_proxy().samples
File "/opt/ros/kinetic/lib/python2.7/dist-packages/rospy/impl/tcpros_service.py", line 435, in call
return self.call(*args, *kwds)
File "/opt/ros/kinetic/lib/python2.7/dist-packages/rospy/impl/tcpros_service.py", line 515, in call
responses = transport.receive_once()
File "/opt/ros/kinetic/lib/python2.7/dist-packages/rospy/impl/tcpros_base.py", line 727, in receive_once
p.read_messages(b, msg_queue, sock)
File "/opt/ros/kinetic/lib/python2.7/dist-packages/rospy/impl/tcpros_service.py", line 353, in read_messages
self._read_ok_byte(b, sock)
File "/opt/ros/kinetic/lib/python2.7/dist-packages/rospy/impl/tcpros_service.py", line 336, in _read_ok_byte
raise ServiceException("service [%s] responded with an error: %s"%(self.resolved_name, str))
rospy.service.ServiceException: service [/iiwa_kinect_calibration_eye_hand_eye_on_base/take_sample] responded with an error: error processing request: canTransform: source_frame marker does not exist.
[iiwa_kinect_calibration_eye_hand_eye_on_base/namespace_student_ThinkPad_W541_7532_6188026501665740049_rqt-17] process has died [pid 7756, exit code 1, cmd /opt/ros/kinetic/lib/rqt_gui/rqt_gui --clear-config --perspective-file /home/student/catkin_ws/src/easy_handeye/easy_handeye/launch/rqt_easy_handeye.perspective __name:=namespace_student_ThinkPad_W541_7532_6188026501665740049_rqt __log:=/home/student/.ros/log/41fd7db0-1d94-11e9-8362-a4c494410bb3/iiwa_kinect_calibration_eye_hand_eye_on_base-namespace_student_ThinkPad_W541_7532_6188026501665740049_rqt-17.log].
log file: /home/student/.ros/log/41fd7db0-1d94-11e9-8362-a4c494410bb3/iiwa_kinect_calibration_eye_hand_eye_on_base-namespace_student_ThinkPad_W541_7532_6188026501665740049_rqt-17.log

I can't load plugin "rqt_easy_handeye/Hand-eye Calibration automatic movement"

PluginManager._load_plugin() could not load plugin "rqt_easy_handeye/Hand-eye Calibration automatic movement":
Traceback (most recent call last):
File "/opt/ros/kinetic/lib/python2.7/dist-packages/qt_gui/plugin_handler.py", line 99, in load
self._load()
File "/opt/ros/kinetic/lib/python2.7/dist-packages/qt_gui/plugin_handler_direct.py", line 54, in _load
self._plugin = self._plugin_provider.load(self._instance_id.plugin_id, self._context)
File "/opt/ros/kinetic/lib/python2.7/dist-packages/qt_gui/composite_plugin_provider.py", line 71, in load
instance = plugin_provider.load(plugin_id, plugin_context)
File "/opt/ros/kinetic/lib/python2.7/dist-packages/qt_gui/composite_plugin_provider.py", line 71, in load
instance = plugin_provider.load(plugin_id, plugin_context)
File "/opt/ros/kinetic/lib/python2.7/dist-packages/rqt_gui_py/ros_py_plugin_provider.py", line 60, in load
return super(RosPyPluginProvider, self).load(plugin_id, plugin_context)
File "/opt/ros/kinetic/lib/python2.7/dist-packages/qt_gui/composite_plugin_provider.py", line 71, in load
instance = plugin_provider.load(plugin_id, plugin_context)
File "/opt/ros/kinetic/lib/python2.7/dist-packages/rqt_gui/ros_plugin_provider.py", line 101, in load
return class_ref(plugin_context)
File "/home/irec/catkin_ws/src/easy_handeye/rqt_easy_handeye/src/rqt_easy_handeye/rqt_calibrationmovements.py", line 296, in init
self._widget = CalibrationMovementsGUI()
File "/home/irec/catkin_ws/src/easy_handeye/rqt_easy_handeye/src/rqt_easy_handeye/rqt_calibrationmovements.py", line 149, in init
self.local_mover = CalibrationMovements(move_group_name, max_velocity_scaling, max_acceleration_scaling)
File "/home/irec/catkin_ws/src/easy_handeye/rqt_easy_handeye/src/rqt_easy_handeye/rqt_calibrationmovements.py", line 26, in init
self.mgc = MoveGroupCommander(move_group_name)
File "/opt/ros/kinetic/lib/python2.7/dist-packages/moveit_commander/move_group.py", line 52, in init
self._g = _moveit_move_group_interface.MoveGroupInterface(name, robot_description, ns)
RuntimeError: Unable to connect to move_group action server 'move_group' within allotted time (5s)

[calibration_mover-35] process has died [pid 6368, exit code 1, cmd /home/irec/catkin_ws/src/easy_handeye/rqt_easy_handeye/scripts/rqt_calibrationmovements __name:=calibration_mover __log:=/home/irec/.ros/log/1aea8c3a-1edf-11e9-aaf0-7085c23896f7/calibration_mover-35.log].
log file: /home/irec/.ros/log/1aea8c3a-1edf-11e9-aaf0-7085c23896f7/calibration_mover-35.log
[ WARN] [1548228569.887509930, 629.885000000]: Waiting for /follow_joint_trajectory to come up
[ERROR] [1548228575.893280482, 635.886000000]: Action client not connected: /follow_joint_trajectory

calibrate.launch, tracking_base and tracking_object frame

Hello,

I am working with the variety

In the README.md it is explained that to make the hand-eye transform I should:

  1. frames

create a new handeye_calibrate.launch file, which includes the robot's and tracking system's launch files, as well as easy_handeye's calibrate.launch as illustrated below in the next section "Calibration"
in each of your launch files where you need the result of the calibration, include easy_handeye's publish.launch as illustrated below in the section "Publishing"

  • Calibration
    For both use cases, you can either launch the calibrate.launch launch file, or you can include it in another launchfile as shown below.
    <!-- fill in the following parameters according to your robot's published tf frames -->
    <arg name="robot_base_frame" value="/base_link"/>
    <arg name="robot_effector_frame" value="/ee_link"/>

In robot_base_frame and robot_effector_frame I should put my robot's base frame and end effector frame; these are published by the robot_state_publisher under /tf. This is clear to me, but not the part with tracking_base_frame and tracking_marker_frame:

    <!-- fill in the following parameters according to your tracking system's published tf frames -->
    <arg name="tracking_base_frame" value="/optical_origin"/>
    <arg name="tracking_marker_frame" value="/optical_target"/>

I do not understand where these are coming from.
Should I have some tracking node which subscribes to the camera topic and publishes these frames under /tf like the robot frames?

  1. Is the rqt_easy_handeye package used to move the robot automatically or by hand, and in each position the calibration will make the calculation/estimation?

  2. Publishing
    publish.launch publishes a transformation on /tf; which transform? The transform of the end effector frame in the camera frame?
    If it is the end effector in the camera frame, what is this used for? To see the transform at each step? And how does it get more and more accurate with each new pose?
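For context on the publishing step: in the eye-on-base case, the stored calibration is the transform from the robot base frame to the tracking (camera) base frame, and publish.launch broadcasts it on /tf so any node can resolve camera coordinates in the robot's frame. A minimal include might look like this (the eye_on_hand argument name is assumed to mirror the calibrate.launch snippet quoted above):

```xml
<launch>
  <!-- Publish the stored hand-eye calibration on /tf.
       eye_on_hand=false -> transform from robot base to tracking base. -->
  <include file="$(find easy_handeye)/launch/publish.launch">
    <arg name="eye_on_hand" value="false"/>
  </include>
</launch>
```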

Error when running AUTO Movement

How can I get the automatic movement to work when I get this error?

[ERROR] [1561381645.473038930]: Robot model parameter not found! Did you remap 'robot_description'?
[FATAL] [1561381645.473096094]: Unable to construct robot model. Please make sure all needed information is on the parameter server.
PluginManager._load_plugin() could not load plugin "rqt_easy_handeye/Hand-eye Calibration automatic movement":
Traceback (most recent call last):
File "/opt/ros/kinetic/lib/python2.7/dist-packages/qt_gui/plugin_handler.py", line 99, in load
self._load()
File "/opt/ros/kinetic/lib/python2.7/dist-packages/qt_gui/plugin_handler_direct.py", line 54, in _load
self._plugin = self._plugin_provider.load(self._instance_id.plugin_id, self._context)
File "/opt/ros/kinetic/lib/python2.7/dist-packages/qt_gui/composite_plugin_provider.py", line 71, in load
instance = plugin_provider.load(plugin_id, plugin_context)
File "/opt/ros/kinetic/lib/python2.7/dist-packages/qt_gui/composite_plugin_provider.py", line 71, in load
instance = plugin_provider.load(plugin_id, plugin_context)
File "/opt/ros/kinetic/lib/python2.7/dist-packages/rqt_gui_py/ros_py_plugin_provider.py", line 60, in load
return super(RosPyPluginProvider, self).load(plugin_id, plugin_context)
File "/opt/ros/kinetic/lib/python2.7/dist-packages/qt_gui/composite_plugin_provider.py", line 71, in load
instance = plugin_provider.load(plugin_id, plugin_context)
File "/opt/ros/kinetic/lib/python2.7/dist-packages/rqt_gui/ros_plugin_provider.py", line 101, in load
return class_ref(plugin_context)
File "/home/dmri/catkin_ws/src/easy_handeye/rqt_easy_handeye/src/rqt_easy_handeye/rqt_calibrationmovements.py", line 296, in init
self._widget = CalibrationMovementsGUI()
File "/home/dmri/catkin_ws/src/easy_handeye/rqt_easy_handeye/src/rqt_easy_handeye/rqt_calibrationmovements.py", line 149, in init
self.local_mover = CalibrationMovements(move_group_name, max_velocity_scaling, max_acceleration_scaling)
File "/home/dmri/catkin_ws/src/easy_handeye/rqt_easy_handeye/src/rqt_easy_handeye/rqt_calibrationmovements.py", line 26, in init
self.mgc = MoveGroupCommander(move_group_name)
File "/opt/ros/kinetic/lib/python2.7/dist-packages/moveit_commander/move_group.py", line 52, in init
self._g = _moveit_move_group_interface.MoveGroupInterface(name, robot_description, ns)
RuntimeError: Unable to construct robot model. Please make sure all needed information is on the parameter server.

[calibration_mover-19] process has died [pid 8522, exit code 1, cmd /home/dmri/catkin_ws/src/easy_handeye/rqt_easy_handeye/scripts/rqt_calibrationmovements __name:=calibration_mover __log:=/home/dmri/.ros/log/fee5387c-9680-11e9-83d3-54ee75886762/calibration_mover-19.log].
log file: /home/dmri/.ros/log/fee5387c-9680-11e9-83d3-54ee75886762/calibration_mover-19.log

arm[RRTConnectkConfigDefault]: Unable to sample any valid states for goal tree

Hi,

I am using ROS Kinetic, KInect V2 and Motoman MH5 Robot and trying to use the easy_handeye in my project.

When I click the check starting pose button, I got this error.

[ERROR] [1556533936.017165846]: arm[RRTConnectkConfigDefault]: Unable to sample any valid states for goal tree

I already checked my urdf and didn't find any problem, and it works fine with the demo.launch file.


Hope you could help me with this trouble.

Thanks a lot !

question about the results of eye on base calibration

Thanks for your code.
When I use this to do my eye-on-base calibration, the results show:

Computed calibration: effector_camera:

This looks like the eye-in-hand calibration result; what should I do to get the base_camera matrix?
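If a result comes out expressed in the opposite direction than needed, a rigid transform can always be inverted: for rotation R and translation t, the inverse is (Rᵀ, -Rᵀt). A standalone sketch in plain Python (illustrative only, not easy_handeye code):

```python
def invert_transform(R, t):
    """Invert a rigid transform given as a 3x3 rotation matrix (list of
    rows) and a translation vector: returns (R_inv, t_inv) with
    R_inv = R^T and t_inv = -R^T t."""
    R_inv = [[R[j][i] for j in range(3)] for i in range(3)]
    t_inv = [-sum(R_inv[i][j] * t[j] for j in range(3)) for i in range(3)]
    return R_inv, t_inv

# Example: a 90-degree rotation about z plus a translation.
R = [[0, -1, 0], [1, 0, 0], [0, 0, 1]]
t = [1.0, 2.0, 3.0]
R_inv, t_inv = invert_transform(R, t)
```

Applying the forward transform and then the inverse should return any point to its original coordinates, which makes this easy to verify.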

How to get rqt_easy_handeye Gui to run???

I have this problem: I'm not able to get my calibrator interface to pop up, I only get this message, and I have tried to find a solution. The RViz GUI works fine, and the robot can be moved using RViz. Hope you can help:

Could not import "pyqt" bindings of qt_gui_cpp library - so C++ plugins will not be available:
Traceback (most recent call last):
File "/opt/ros/kinetic/lib/python2.7/dist-packages/qt_gui_cpp/cpp_binding_helper.py", line 43, in
from . import libqt_gui_cpp_sip
ImportError: dynamic module does not define module export function (PyInit_libqt_gui_cpp_sip)

Traceback (most recent call last):
File "/opt/ros/kinetic/lib/rqt_gui/rqt_gui", line 13, in
sys.exit(main.main())
File "/opt/ros/kinetic/lib/python2.7/dist-packages/rqt_gui/main.py", line 59, in main
return super(Main, self).main(argv, standalone=standalone, plugin_argument_provider=plugin_argument_provider, plugin_manager_settings_prefix=str(hash(os.environ['ROS_PACKAGE_PATH'])))
File "/opt/ros/kinetic/lib/python2.7/dist-packages/qt_gui/main.py", line 529, in main
perspective_manager.import_perspective_from_file(self._options.perspective_file, perspective_manager.HIDDEN_PREFIX + os.path.basename(self._options.perspective_file))
File "/opt/ros/kinetic/lib/python2.7/dist-packages/qt_gui/perspective_manager.py", line 342, in import_perspective_from_file
self._convert_values(data, self._import_value)
File "/opt/ros/kinetic/lib/python2.7/dist-packages/qt_gui/perspective_manager.py", line 403, in _convert_values
self._convert_values(groups[group], convert_function)
File "/opt/ros/kinetic/lib/python2.7/dist-packages/qt_gui/perspective_manager.py", line 400, in _convert_values
keys[key] = convert_function(keys[key])
File "/opt/ros/kinetic/lib/python2.7/dist-packages/qt_gui/perspective_manager.py", line 410, in _import_value
return QByteArray.fromHex(eval(value['repr(QByteArray.hex)']))
File "", line 1, in
TypeError: arguments did not match any overloaded call:
QByteArray(): too many arguments
QByteArray(int, str): argument 1 has unexpected type 'str'
QByteArray(Union[QByteArray, bytes, bytearray]): argument 1 has unexpected type 'str'
[iiwa_kinect_handeyecalibration_eye_on_hand/namespace_dmri_w541_28293_3797793594356548394_rqt-18] process has died [pid 28475, exit code 1, cmd /opt/ros/kinetic/lib/rqt_gui/rqt_gui --clear-config --perspective-file /home/dmri/catkin_ws/src/easy_handeye/easy_handeye/launch/rqt_easy_handeye.perspective __name:=namespace_dmri_w541_28293_3797793594356548394_rqt __log:=/home/dmri/.ros/log/93e2877e-5b8d-11e9-acf4-54ee75886762/iiwa_kinect_handeyecalibration_eye_on_hand-namespace_dmri_w541_28293_3797793594356548394_rqt-18.log].
log file: /home/dmri/.ros/log/93e2877e-5b8d-11e9-acf4-54ee75886762/iiwa_kinect_handeyecalibration_eye_on_hand-namespace_dmri_w541_28293_3797793594356548394_rqt-18*.log

About eye_base calibration

Hello Sir,
I tried to do the calibration while keeping the Kinect on the base. I am not able to produce accurate results even though I have tried a lot of times. Can I know what the procedure is for calibrating eye-on-base rather than eye-on-hand? I have been trying for two months, which delays my thesis.

Thanks in advance

Cannot detect the aruco marker and canTransform Error occurs

Hi,
I'm using easy_handeye to calibrate a Kinect v2 camera with the UR5 robot.

I changed the ur5_kinect_calibrate.launch file as follows:

<arg name="namespace_prefix" default="ur5_kinect_handeyecalibration" />

<arg name="robot_ip" doc="The IP address of the UR5 robot" />
    <!--<arg name="marker_frame" default="aruco_marker_frame"/>-->
<arg name="marker_size" doc="Size of the ArUco marker used, in meters" default="0.2" />
<arg name="marker_id" doc="The ID of the ArUco marker used" default="100" />
<!-- start the Kinect -->
<include file="$(find kinect2_bridge)/launch/kinect2_bridge.launch" />
<!--    <arg name="depth_registration" value="true" />
</include>-->

<!-- start ArUco -->
<node name="aruco_tracker" pkg="aruco_ros" type="single">
    <remap from="/camera_info" to="/kinect2/hd/camera_info" />
    <remap from="/image" to="/kinect2/hd/image_color_rect" />
    <param name="image_is_rectified" value="true"/>
    <param name="marker_size"        value="$(arg marker_size)"/>
    <param name="marker_id"          value="$(arg marker_id)"/>
    <param name="reference_frame"    value="kinect2_rgb_optical_frame"/>
    <param name="camera_frame"       value="kinect2_rgb_optical_frame"/>
    <param name="marker_frame"       value="camera_marker" />
</node>

<!-- start the robot -->
<include file="$(find ur_modern_driver)/launch/ur5_bringup.launch">
    <arg name="limited" value="true" />
    <arg name="robot_ip" value="192.168.137.2" />
</include>
<include file="$(find ur5_moveit_config)/launch/ur5_moveit_planning_execution.launch">
    <arg name="limited" value="true" />
</include>

<!-- start easy_handeye -->
<include file="$(find easy_handeye)/launch/calibrate.launch" >
    <arg name="namespace_prefix" value="$(arg namespace_prefix)" />
    <arg name="eye_on_hand" value="false" />

    <arg name="tracking_base_frame" value="kinect2_rgb_optical_frame" />
    <arg name="tracking_marker_frame" value="camera_marker" />
    <arg name="robot_base_frame" value="base_link" />
    <arg name="robot_effector_frame" value="wrist_3_link" />

    <arg name="freehand_robot_movement" value="false" />
    <arg name="robot_velocity_scaling" value="0.1" />
    <arg name="robot_acceleration_scaling" value="0.1" />
</include>

Just like issue #8, I got the error "Error processing request: canTransform: source_frame camera_marker does not exist." when I try to take a sample.

['Traceback (most recent call last):\n', ' File "/opt/ros/kinetic/lib/python2.7/dist-packages/rospy/impl/tcpros_service.py", line 625, in _handle_request\n response = convert_return_to_response(self.handler(request), self.response_class)\n', ' File "/home/yibing/catkin_ws/src/easy_handeye/easy_handeye/src/easy_handeye/handeye_server.py", line 38, in take_sample\n self.calibrator.take_sample()\n', ' File "/home/yibing/catkin_ws/src/easy_handeye/easy_handeye/src/easy_handeye/handeye_calibrator.py", line 140, in take_sample\n transforms = self._get_transforms()\n', ' File "/home/yibing/catkin_ws/src/easy_handeye/easy_handeye/src/easy_handeye/handeye_calibrator.py", line 123, in _get_transforms\n time = self._wait_for_transforms()\n', ' File "/home/yibing/catkin_ws/src/easy_handeye/easy_handeye/src/easy_handeye/handeye_calibrator.py", line 111, in _wait_for_transforms\n self.listener.waitForTransform(self.tracking_base_frame, self.tracking_marker_frame, now, rospy.Duration(10))\n', ' File "/opt/ros/kinetic/lib/python2.7/dist-packages/tf/listener.py", line 76, in waitForTransform\n raise tf2_ros.TransformException(error_msg or "no such transformation: \"%s\" -> \"%s\"" % (source_frame, target_frame))\n', 'TransformException: canTransform: source_frame camera_marker does not exist.\n']
Traceback (most recent call last):
File "/home/yibing/catkin_ws/src/easy_handeye/rqt_easy_handeye/src/rqt_easy_handeye/rqt_easy_handeye.py", line 104, in handle_take_sample
sample_list = self.client.take_sample()
File "/home/yibing/catkin_ws/src/easy_handeye/easy_handeye/src/easy_handeye/handeye_client.py", line 34, in take_sample
return self.take_sample_proxy().samples
File "/opt/ros/kinetic/lib/python2.7/dist-packages/rospy/impl/tcpros_service.py", line 435, in call
return self.call(*args, **kwds)
File "/opt/ros/kinetic/lib/python2.7/dist-packages/rospy/impl/tcpros_service.py", line 515, in call
responses = transport.receive_once()
File "/opt/ros/kinetic/lib/python2.7/dist-packages/rospy/impl/tcpros_base.py", line 727, in receive_once
p.read_messages(b, msg_queue, sock)
File "/opt/ros/kinetic/lib/python2.7/dist-packages/rospy/impl/tcpros_service.py", line 353, in read_messages
self._read_ok_byte(b, sock)
File "/opt/ros/kinetic/lib/python2.7/dist-packages/rospy/impl/tcpros_service.py", line 336, in _read_ok_byte
raise ServiceException("service [%s] responded with an error: %s"%(self.resolved_name, str))
rospy.service.ServiceException: service [/ur5_kinect_handeyecalibration_eye_on_base/take_sample] responded with an error: error processing request: canTransform: source_frame camera_marker does not exist.

As shown in the ur5_kinect_calibrate.launch file, I added camera_marker to the tf tree. Why doesn't it exist? How can I fix that?

Thanks very much!

Doubt regarding camera cordinate values after publishing transformation

Hi,
I am using a UR5 robot and a ZED camera for the calibration.
Is it possible to mount the camera inverted on a stand and do the robot-camera calibration with the eye_on_hand parameter set to false?

Also, after the calibration was done and the transformation published, I wanted the robot to move to a specific point within the camera range.
On checking the camera coordinate values (Xc, Yc, Zc) at that point, they were very different from the robot coordinate values (the ones from the teach pendant) at the very same point (Xr, Yr, Zr).

Is this expected to happen? Even after the calibration process, will the camera coordinate values be different from the robot coordinate values?

I believe both coordinate systems should share the same values for any point in the workspace after calibration; that is, any point in the camera coordinate system (Xc, Yc, Zc) should have the same values as in the robot coordinate system (Xr, Yr, Zr).

I just want to know whether I am doing something wrong, or whether the package is apt for such a camera mounting condition.

Please correct me if I am wrong.
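For what it's worth, a hand-eye calibration does not make the two coordinate systems numerically identical; it yields a rigid transform (R, t) that maps camera coordinates into robot-base coordinates, so the same physical point generally has different numbers in the two frames. A plain-Python sketch with illustrative numbers only (not a real calibration):

```python
def camera_to_robot(R, t, p_cam):
    """Map a point from camera coordinates to robot-base coordinates
    using a rigid transform: p_robot = R * p_cam + t."""
    return [sum(R[i][j] * p_cam[j] for j in range(3)) + t[i] for i in range(3)]

# Illustrative calibration: camera rotated 180 degrees about x, mounted 1 m up.
R = [[1, 0, 0], [0, -1, 0], [0, 0, -1]]
t = [0.0, 0.0, 1.0]
p_cam = [0.2, 0.1, 0.5]
p_robot = camera_to_robot(R, t, p_cam)  # -> [0.2, -0.1, 0.5]
```

Note that the camera-frame and robot-frame coordinates of the point differ, as expected; they only coincide when R is the identity and t is zero.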

Failed to fetch current robot state

When I check the starting pose in rqt, some errors appear:

Failed to fetch current robot state

I am on Ubuntu 16.04 with ROS Kinetic and I am using a Kinova Jaco2 arm.
Is there any other place where I need to change the robot arm file information?
The launch file is:

<!-- start the robot -->
<include file="$(find kinova_bringup)/launch/kinova_robot.launch">
</include>
<include file="$(find j2n6s300_moveit_config)/launch/j2n6s300_demo.launch">
</include>

<!-- start easy_handeye -->
<include file="$(find easy_handeye)/launch/calibrate.launch" >
    <arg name="namespace_prefix" value="$(arg namespace_prefix)" />
    <arg name="eye_on_hand" value="true" />

    <arg name="tracking_base_frame" value="pylon_camera" />
    <arg name="tracking_marker_frame" value="camera_marker" />
    <arg name="robot_base_frame" value="j2n6s300_link_base" />
    <arg name="robot_effector_frame" value="j2n6s300_end_effector" />

    <arg name="freehand_robot_movement" value="false" />
    <arg name="robot_velocity_scaling" value="0.5" />
    <arg name="robot_acceleration_scaling" value="0.2" />
</include>

A small question about handeye calibration for 3DoF robot

Hello

 Thank you very much for the great package.
 I want to do a picking task with a 3-DoF (x, y, z) robot arm.
 I want to do the eye-to-hand calibration for the 3-DoF (x, y, z) robot with this package, rather than for a normal 6-DoF robot.
 I got the following figure in RViz:

AR

and we can get a point cloud like this:

point_cloud

Can I get a good calibration result for the 3DoF robot with this package?
For the 3DoF case, could you please give some specific suggestions for getting a precise eye-to-hand calibration result?

I would really appreciate your help.

Best Regards

The tf between ur3 and xtion is unfixed!

Hello~
I'm using your program to calibrate a UR3 with my Xtion, in the eye-on-base configuration.
As I understand the program, the result should be the tf between the UR3 and the Xtion, so it should be a fixed value. But I found that when I add one or several samples, the result changes a lot.
I don't know what's wrong. Do you have any idea about my problem? Or can you tell me how I can check whether the camera_marker samples and the hand_world samples are correct?
Thanks a lot! Looking forward to your reply!

'module' object has no attribute 'TransformListener'

Hello, when I use the command "roslaunch easy_handeye calibrate.launch" to bring up the server, I get the following error:
"AttributeError: 'module' object has no attribute 'TransformListener'"
Can you give me some advice? Thank you very much. I have installed tf and tf2 via apt-get. My system is Ubuntu.
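This error often means that `import tf` resolved to something other than the ROS tf package — e.g. a local file named tf.py shadowing it, or an unsourced ROS environment. A minimal, hedged way to check where a module is actually loaded from (demonstrated here with a stdlib module, since the check itself is generic):

```python
import importlib.util

def module_origin(name):
    # Returns the file a module would be loaded from, or None if not importable.
    spec = importlib.util.find_spec(name)
    return spec.origin if spec else None

# On a correctly sourced ROS system, module_origin("tf") should point under
# /opt/ros/<distro>/lib/..., not to a local tf.py in your working directory.
print(module_origin("json"))  # stdlib example: a path ending in json/__init__.py
```

If `module_origin("tf")` returns None, source your ROS setup.bash first; if it returns a local path, rename the shadowing file.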

Can't compute the transformation from the base frame to the world frame

I am trying to calibrate a RealSense camera using the easy_handeye package. I track some AR markers with the ar_track_alvar package and want to do an eye-on-base calibration. Using the GUI from the easy_handeye package, I managed to take samples. The transformations shown in the "samples" window are correct. However, when I compute the transformation between the world frame and the camera, the transformation shown in the "Result" window is always the following:

translation:
x: 0.0
y: 0.0
z: 0.0
rotation:
x: 0.0
y: 0.0
z: 0.0
w: 1.0

For the frames I used the following.
Tracking Base Frame: The camera frame
Tracking marker Frame: The frame of the tracked markers, in the camera system
Robot Base Frame: The world frame
Robot Effector Frame: The robot gripper frame which holds the ar markers

Does anyone know what the issue is or what I did wrong?

launch file parameters

Hi,
I was trying to calibrate a ZED stereo camera with a UR5 in eye_in_hand mode, and there are a few parameters in the launch file that I don't know how to set.

I tried to modify example.launch and got stuck at these two lines:

 <arg name="tracking_base_frame" value="/optical_origin/" />
<arg name="tracking_marker_frame" value="/optical_object/" />

I want to verify what "optical_origin" and "optical_object" refer to.

Thanks in advance for all your help.
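For what it's worth, `tracking_base_frame` is the frame in which the tracking system reports poses (for a camera, typically its optical frame), and `tracking_marker_frame` is the frame of the tracked object (e.g. the ArUco marker); "optical_origin" and "optical_object" in example.launch are just placeholder names. A hedged sketch for a ZED + ArUco setup — the frame names below are assumptions and must match the frames actually published on /tf by your drivers:

```xml
<!-- tracking_base_frame: frame the tracker reports poses in
     (for a camera, its optical frame; the name depends on your ZED driver) -->
<arg name="tracking_base_frame" value="zed_left_camera_optical_frame" />
<!-- tracking_marker_frame: frame of the tracked target, as published by
     e.g. aruco_ros (must match its marker_frame parameter) -->
<arg name="tracking_marker_frame" value="aruco_marker_frame" />
```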

What is meant by Calibrate the robot?

Hello,
in the README.md of this repo it is said that a robot calibration would lead to better results.
I would really like to calibrate the kinematics of my industrial robot (correcting the joint and robot frame offsets), but I haven't found a suitable package for that yet.

So as it is mentioned here in this repository, I would like to know, how you did your robot calibration in order to get better results?

Can't get aruco_maker frame

I ran the launch file and looked at the tf tree:
frames.pdf
I can't get the aruco_marker frame, so I tried to run the aruco_ros package separately:

roslaunch kinect2_bridge kinect2_bridge.launch
roslaunch aruco_ros single.launch
rosrun image_view image_view image:=/aruco_single/result

aruco_ros

I can get the marker information.
2018-12-18 09 59 53

But I still can't get the pose.
2018-12-18 10 15 16

Thank you very much for your advice.
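A frame for the marker only appears on tf if aruco_ros is given matching frame parameters; running single.launch with defaults may detect the marker in the image while the reference/camera frames do not match what kinect2_bridge actually publishes. A hedged sketch of the relevant node configuration — the topic and frame names below are assumptions that must match your kinect2_bridge output:

```xml
<node name="aruco_tracker" pkg="aruco_ros" type="single">
    <remap from="/camera_info" to="/kinect2/hd/camera_info" />
    <remap from="/image" to="/kinect2/hd/image_color_rect" />
    <param name="image_is_rectified" value="true"/>
    <param name="marker_size"        value="0.1"/>  <!-- your printed size, in meters -->
    <param name="marker_id"          value="582"/>  <!-- your marker's ID -->
    <param name="reference_frame"    value="kinect2_rgb_optical_frame"/>
    <param name="camera_frame"       value="kinect2_rgb_optical_frame"/>
    <param name="marker_frame"       value="camera_marker"/>
</node>
```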

Plugin error

Hi,

After running calibrate.launch, I see RViz and the GUI being launched, but I get the following error:

PluginManager._load_plugin() could not load plugin "rqt_easy_handeye/Hand-eye Calibration automatic movement":

How can I load the plugin?

Thanks,

Questions about number of samples

Thanks for your code!
I use it with a RealSense in the eye-on-base configuration.
In my experiments, if I take around 15 samples, the result looks good, but is not exactly correct.
So I took around 25, 30 and 40 samples, and the results got worse.
This confuses me: in my opinion, more samples mean more data and a more accurate result, but apparently not.
Have you had this experience, and do you know what the problem is?
Thanks in advance for any explanation.
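One common reason extra samples make the result worse is that they add little new rotation: hand-eye solvers (the AX = XB formulation) are best conditioned when sample pairs differ by large rotations about different axes, otherwise noise dominates. A hedged plain-NumPy sketch (quaternions in ROS x, y, z, w order) for checking how much two sampled end-effector orientations actually differ:

```python
import numpy as np

def relative_rotation_deg(q1, q2):
    """Angle in degrees of the rotation between two unit quaternions (x, y, z, w)."""
    q1 = np.asarray(q1, dtype=float) / np.linalg.norm(q1)
    q2 = np.asarray(q2, dtype=float) / np.linalg.norm(q2)
    # |dot| handles the double cover: q and -q represent the same rotation
    d = min(abs(np.dot(q1, q2)), 1.0)
    return np.degrees(2.0 * np.arccos(d))

# Two hypothetical consecutive samples: identity vs. a 30-degree turn about z
qa = [0.0, 0.0, 0.0, 1.0]
qb = [0.0, 0.0, np.sin(np.radians(15.0)), np.cos(np.radians(15.0))]
print(relative_rotation_deg(qa, qb))  # 30.0
```

If most of your added samples differ by only a few degrees from each other, they contribute almost no new information while still adding measurement noise.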

Transformation seems to be wrong for eye-in-hand

Hi,

first of all, thank you for this tool. It was easy to set up and it looks promising for my project.

I used the GUI to get the eye-in-hand transformation for my Asus Xtion and the UR5. The camera is mounted on the end effector. I am trying to transform camera coordinates of a tracked object into the end-effector frame of my UR5 and then move the robot to those coordinates. But somehow the transformation does not give me the expected results. Here's my data:

Transformation.yaml


eye_on_hand: true
robot_effector_frame: ee_link
tracking_base_frame: primesense_link
transformation: {qw: 0.12315000298790331, qx: -0.989347167929719, qy: -0.07459459276842946,
qz: 0.02149197066897385, x: 0.04013691589594168, y: -0.13677498325817272, z: 0.08220400023605642}

Grasp_Camera_data

   position: 
       x: -0.0987664492927
       y: -0.0910226523694
       z: 0.560733250459
   orientation: 
       x: 0.631932686942
       y: 0.31727130216
       z: 0.631932686942
       w: -0.31727130216
  q_value: 0.678789377213
  grasp_type: 0
  center_px: [226.52753499783466, 153.77785122466588]
  angle: -0.640199793927
  depth: 0.560733250459
  ...

Camera_ee_transform:

header: 
seq: 0
frame_id: "primesense_link"
child_frame_id: "ee_link"
transform: 
translation: 
    x: 0.040137
    y: -0.136775
    z: 0.082204
rotation: 
    x: -0.989347006868672
    y: -0.0745944036699313
    z: 0.021491953634446583
    w: 0.12315016789268264

Then I used tf2 to calculate the grasp in the end-effector frame:
grasp_ee_pose = tf2_geometry_msgs.do_transform_pose(grasp_camera_pose, camera_ee_transform)

The result is incorrect. Here are the actual and expected results for the translation:

grasp_ee_pose_translation_actual:
x = -0.104539997152
y  = 0.0702126802005
z = -0.436145472381

grasp_ee_pose_translation_expected:

x =  0.362407802611
y =  0.667193262606
z =  0.050243921518

What am I missing? Thank you!
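One thing worth checking is the direction of the transform: the calibration above is published as primesense_link -> ee_link, so if your grasp pose needs to go the opposite way you would have to invert it before applying do_transform_pose. A hedged plain-NumPy sketch that builds the 4x4 matrix from the quaternion and translation in the issue and inverts it:

```python
import numpy as np

def quat_to_matrix(qx, qy, qz, qw):
    # Rotation matrix from a quaternion (ROS x, y, z, w convention), normalized first.
    n = np.linalg.norm([qx, qy, qz, qw])
    qx, qy, qz, qw = qx / n, qy / n, qz / n, qw / n
    return np.array([
        [1 - 2 * (qy * qy + qz * qz), 2 * (qx * qy - qz * qw),     2 * (qx * qz + qy * qw)],
        [2 * (qx * qy + qz * qw),     1 - 2 * (qx * qx + qz * qz), 2 * (qy * qz - qx * qw)],
        [2 * (qx * qz - qy * qw),     2 * (qy * qz + qx * qw),     1 - 2 * (qx * qx + qy * qy)],
    ])

def homogeneous(rotation, translation):
    # 4x4 homogeneous transform from a 3x3 rotation and a translation vector.
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

# Values from the calibration result above, in the direction it is published
T = homogeneous(quat_to_matrix(-0.989347, -0.074594, 0.021492, 0.123150),
                [0.040137, -0.136775, 0.082204])
T_inv = np.linalg.inv(T)  # the opposite direction

point_cam = np.array([-0.0987664, -0.0910227, 0.5607333, 1.0])
print(T @ point_cam)      # mapped with the published transform
print(T_inv @ point_cam)  # mapped with the inverted transform
```

Comparing both outputs against a point you can measure physically (e.g. the marker center) quickly shows which direction your pipeline actually needs.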

canTransform Error

Hi, I'm using easy_handeye to calibrate my PrimeSense camera with the robot. I got this error: "Error processing request: canTransform: source_frame camera_marker does not exist."
Does the camera_marker frame need to be changed, or does it need to be added to the tf tree?
Thanks very much!

Hello, I run easy_handeye with Gazebo (using a launch file like your other file named easy_hand_demo), and I have some problems, described below. Please help me solve them, and thank you for providing this useful demo!

FIRST, THIS IS MY LAUNCH FILE. (You can copy this launch file to your /easy_hand/src/launch/ and run it following my steps below to reproduce this problem; thank you.)


<!-- start the Kinect -->
<include file="$(find freenect_launch)/launch/freenect.launch" >
    <arg name="depth_registration" value="true" />
</include>  

<!-- start ArUco -->
<node name="aruco_tracker" pkg="aruco_ros" type="single">
    <remap from="/camera_info" to="/camera/rgb/camera_info" />
    <remap from="/image" to="/camera/rgb/image_raw" />
    <param name="image_is_rectified" value="true"/>
    <param name="marker_size"        value="0.096"/>
    <param name="marker_id"          value="528"/>
    <param name="reference_frame"    value="camera_link"/>
    <param name="camera_frame"       value="camera_rgb_optical_frame"/>
    <param name="marker_frame"       value="camera_marker" />
</node>

<!-- start the robot -->
<include file="$(find ur5_moveit_config)/launch/demo.launch">
    <arg name="limited" value="true" />
</include>

<!-- start easy_handeye -->
<include file="$(find easy_handeye)/launch/calibrate.launch" >
    <arg name="namespace_prefix" value="$(arg namespace_prefix)" />
    <arg name="eye_on_hand" value="false" />
    <arg name="tracking_base_frame" value="camera_link" />
    <arg name="tracking_marker_frame" value="camera_marker" />
    <arg name="robot_base_frame" value="base_link" />
    <arg name="robot_effector_frame" value="wrist_3_link" />

    <arg name="freehand_robot_movement" value="false" />
    <arg name="robot_velocity_scaling" value="0.5" />
    <arg name="robot_acceleration_scaling" value="0.2" />
</include>
___________________________________________________________________________________

I want to use it to do the hand-eye calibration with Gazebo, and I used a Kinect 1.0 camera. The steps are as follows:
First I run the command $ roslaunch freenect_launch freenect.launch to start the camera.
Next, I run the command $ roslaunch ur_gazebo to set up a simulation environment.
Finally, I run the command $ roslaunch easy_handeye ur5_kinect_calibration.launch to do the hand-eye calibration.

There are some warnings in the $ roslaunch easy_handeye ur5_kinect_calibration.launch window, as follows:


[ WARN] [1548377886.673253632, 21.800000000]: Could not find any compatible image output mode for 1. Falling back to default image output mode 1.
[ WARN] [1548377886.673301001, 21.800000000]: Could not find any compatible depth output mode for 1. Falling back to default depth output mode 1.
[ WARN] [1548377886.687876734, 21.814000000]: Camera calibration file /home/irec/.ros/camera_info/rgb_A00365908893111A.yaml not found.
[ WARN] [1548377886.687926797, 21.814000000]: Using default parameters for RGB camera calibration.
[ WARN] [1548377886.687969401, 21.814000000]: Camera calibration file /home/irec/.ros/camera_info/depth_A00365908893111A.yaml not found.
[ WARN] [1548377886.688220450, 21.815000000]: Using default parameters for IR camera calibration.
[ WARN] [1548377887.219417598, 22.341000000]: Skipping virtual joint 'fixed_base' because its child frame
[ WARN] [1548377888.323194463, 23.447000000]: 'base_link' does not match the URDF frame 'world'.
Deprecation warning: Trajectory execution service is deprecated (was replaced by an action).
Replace 'MoveGroupExecuteService' with 'MoveGroupExecuteTrajectoryAction' in move_group.launch.
[ WARN] [1548377888.436476686, 23.558000000]: Interactive marker 'EE:goal_ee_link' contains unnormalized quaternions. This warning will only be output once but may be true for others; enable DEBUG messages for ros.rviz.quaternions to see more details.


And when the (0/17) GUI quit immediately, there was an error in the $ roslaunch easy_handeye ur5_kinect_calibration.launch window, as follows:


Traceback (most recent call last):
File "/home/irec/catkin_ws/src/easy_handeye/rqt_easy_handeye/src/rqt_easy_handeye/rqt_calibrationmovements.py", line 251, in handle_next_pose
self.local_mover.execute_plan(plan)
File "/home/irec/catkin_ws/src/easy_handeye/rqt_easy_handeye/src/rqt_easy_handeye/rqt_calibrationmovements.py", line 110, in execute_plan
raise RuntimeError("got crazy plan!")
RuntimeError: got crazy plan!
[calibration_mover-35] process has died [pid 15198, exit code 1, cmd /home/irec/catkin_ws/src/easy_handeye/rqt_easy_handeye/scripts/rqt_calibrationmovements __name:=calibration_mover __log:=/home/irec/.ros/log/e2fa2cb4-203b-11e9-aaf0-7085c23896f7/calibration_mover-35.log].
log file: /home/irec/.ros/log/e2fa2cb4-203b-11e9-aaf0-7085c23896f7/calibration_mover-35*.log


And I couldn't take a sample either: when I click "TAKE SAMPLE", there is no response, and then it quits.

These are my problems; please help me solve them. I couldn't find any similar example on the web.

Error processing request: "camera_marker" passed to lookupTransform argument source_frame does not exist.

I use the package easy_handeye. When I clicked "take sample", this error occurred: "[ERROR] [1575441599.802618]: Error processing request: "camera_marker" passed to lookupTransform argument source_frame does not exist."
Can anyone help me? Thanks very much.
Note: I use a RealSense D435 and an Elfin robot.
My launch file is here:

<arg name="robot_ip" doc="The IP address of the UR5 robot" />
	<!--<arg name="marker_frame" default="aruco_marker_frame"/>-->
<arg name="marker_size" doc="Size of the ArUco marker used, in meters" default="0.1" />
<arg name="marker_id" doc="The ID of the ArUco marker used" default="100" />
<!-- start the realsense -->
<!--<include file="$(find realsense2_camera)/launch/rs_camera.launch" />-->
<!--    <arg name="depth_registration" value="true" />
</include>-->

<!-- start ArUco -->
<node name="aruco_tracker" pkg="aruco_ros" type="single">
    <remap from="/camera_info" to="/camera/color/camera_info" />
    <remap from="/image" to="/camera/color/image_raw" />
    <param name="image_is_rectified" value="true"/>
    <param name="marker_size"        value="$(arg marker_size)"/>
    <param name="marker_id"          value="$(arg marker_id)"/>
    <param name="reference_frame"    value="camera_color_optical_frame"/>
    <param name="camera_frame"       value="camera_color_optical_frame"/>
    <!--<param name="marker_frame"       value="camera_marker" /> --> 
<param name="marker_frame"       value="camera_marker" />
</node>

<!-- start the robot -->
<!--<include file="/home/fuhong/elfinRobot_ws/src/elfin_robot/elfin_robot_bringup/launch/elfin5_bringup.launch"> </include>

<include file="/home/fuhong/elfinRobot_ws/src/elfin_robot/elfin_robot_bringup/launch/elfin_ros_control.launch"> </include>

<include file="/home/fuhong/elfinRobot_ws/src/elfin_robot/elfin5_moveit_config/launch/moveit_planning_execution.launch"> </include> -->

<!-- <include file="/home/fuhong/elfinRobot_ws/src/elfin_robot/elfin_basic_api/launch/elfin_basic_api.launch"> </include> -->
    <!--<arg name="limited" value="true" /> -->
    <!--<arg name="robot_ip" value="192.168.1.111" />-->
     <!--<arg name="limited" value="true" />-->

<!-- start easy_handeye -->
<include file="$(find easy_handeye)/launch/calibrate.launch" >
    <arg name="namespace_prefix" value="$(arg namespace_prefix)" />
    <arg name="eye_on_hand" value="true" />

    <arg name="tracking_base_frame" value="camera_color_optical_frame" />
    <arg name="tracking_marker_frame" value="camera_marker" />
    <!--<arg name="robot_base_frame" value="base_link" />-->
    <!--<arg name="robot_effector_frame" value="wrist_3_link" />-->
<arg name="robot_base_frame" value="elfin_base_link" />
<arg name="robot_effector_frame" value="elfin_end_link" />

    <arg name="freehand_robot_movement" value="false" />
    <arg name="robot_velocity_scaling" value="0.5" />
    <arg name="robot_acceleration_scaling" value="0.2" />
</include>

The program suddenly crashes

@marcoesposito1988 Hi, I'm using it with a Kinect v2. The program starts up, but it always crashes suddenly, and I don't know where the problem is.

Terminal error message:

Traceback (most recent call last):
  File "/home/efun/work/kinect_test/src/easy_handeye/rqt_easy_handeye/src/rqt_easy_handeye/rqt_calibrationmovements.py", line 251, in handle_next_pose
    self.local_mover.execute_plan(plan)
  File "/home/efun/work/kinect_test/src/easy_handeye/rqt_easy_handeye/src/rqt_easy_handeye/rqt_calibrationmovements.py", line 110, in execute_plan
    raise RuntimeError("got crazy plan!")
RuntimeError: got crazy plan!
[calibration_mover-6] process has died [pid 29202, exit code -11, cmd /home/efun/work/kinect_test/src/easy_handeye/rqt_easy_handeye/scripts/rqt_calibrationmovements __name:=calibration_mover __log:=/home/efun/.ros/log/328d8954-f875-11e8-8d2c-4cedfbcaaae4/calibration_mover-6.log].
log file: /home/efun/.ros/log/328d8954-f875-11e8-8d2c-4cedfbcaaae4/calibration_mover-6*.log

*** Error in `python': double free or corruption (out): 0x00007f303c000e00 ***
======= Backtrace: =========
/lib/x86_64-linux-gnu/libc.so.6(+0x777e5)[0x7f30c41fd7e5]
/lib/x86_64-linux-gnu/libc.so.6(+0x8037a)[0x7f30c420637a]
/lib/x86_64-linux-gnu/libc.so.6(cfree+0x4c)[0x7f30c420a53c]
/lib/x86_64-linux-gnu/libc.so.6(+0x39ff8)[0x7f30c41bfff8]
/lib/x86_64-linux-gnu/libc.so.6(+0x3a045)[0x7f30c41c0045]
python[0x51ea3f]
python[0x51bfd7]
python(PyErr_PrintEx+0x2d)[0x51b8fd]
/usr/lib/python2.7/dist-packages/PyQt5/QtCore.x86_64-linux-gnu.so(+0x1eaa68)[0x7f30be775a68]
/usr/lib/python2.7/dist-packages/PyQt5/QtCore.x86_64-linux-gnu.so(+0x127a52)[0x7f30be6b2a52]
/usr/lib/x86_64-linux-gnu/libQt5Core.so.5(+0x91468)[0x7f30be146468]
/usr/lib/x86_64-linux-gnu/libQt5Core.so.5(_ZNK14QMessageLogger5fatalEPKcz+0xa9)[0x7f30be147f69]
/usr/lib/python2.7/dist-packages/PyQt5/QtCore.x86_64-linux-gnu.so(+0x1eab82)[0x7f30be775b82]
/usr/lib/python2.7/dist-packages/PyQt5/QtCore.x86_64-linux-gnu.so(+0x1e9d7d)[0x7f30be774d7d]
/usr/lib/python2.7/dist-packages/PyQt5/QtCore.x86_64-linux-gnu.so(+0x1ea657)[0x7f30be775657]
/usr/lib/x86_64-linux-gnu/libQt5Core.so.5(_ZN11QMetaObject8activateEP7QObjectiiPPv+0x9a0)[0x7f30be369ee0]
/usr/lib/x86_64-linux-gnu/libQt5Widgets.so.5(_ZN15QAbstractButton7clickedEb+0x42)[0x7f30b69b82b2]
/usr/lib/x86_64-linux-gnu/libQt5Widgets.so.5(+0x257f44)[0x7f30b671af44]
/usr/lib/x86_64-linux-gnu/libQt5Widgets.so.5(+0x259559)[0x7f30b671c559]
/usr/lib/x86_64-linux-gnu/libQt5Widgets.so.5(_ZN15QAbstractButton17mouseReleaseEventEP11QMouseEvent+0xf4)[0x7f30b671c6d4]
/usr/lib/python2.7/dist-packages/PyQt5/QtWidgets.x86_64-linux-gnu.so(+0x3247eb)[0x7f30b6ec67eb]
/usr/lib/x86_64-linux-gnu/libQt5Widgets.so.5(_ZN7QWidget5eventEP6QEvent+0x668)[0x7f30b6660fc8]
/usr/lib/python2.7/dist-packages/PyQt5/QtWidgets.x86_64-linux-gnu.so(+0x3265eb)[0x7f30b6ec85eb]
/usr/lib/x86_64-linux-gnu/libQt5Widgets.so.5(_ZN19QApplicationPrivate13notify_helperEP7QObjectP6QEvent+0x8c)[0x7f30b661e05c]
/usr/lib/x86_64-linux-gnu/libQt5Widgets.so.5(_ZN12QApplication6notifyEP7QObjectP6QEvent+0x959)[0x7f30b6623c19]
/usr/lib/python2.7/dist-packages/PyQt5/QtWidgets.x86_64-linux-gnu.so(+0x18aabe)[0x7f30b6d2cabe]
/usr/lib/x86_64-linux-gnu/libQt5Core.so.5(_ZN16QCoreApplication14notifyInternalEP7QObjectP6QEvent+0xdb)[0x7f30be33b38b]
/usr/lib/x86_64-linux-gnu/libQt5Widgets.so.5(_ZN19QApplicationPrivate14sendMouseEventEP7QWidgetP11QMouseEventS1_S1_PS1_R8QPointerIS0_Eb+0x3c2)[0x7f30b6622b32]
/usr/lib/x86_64-linux-gnu/libQt5Widgets.so.5(+0x1b85bb)[0x7f30b667b5bb]
/usr/lib/x86_64-linux-gnu/libQt5Widgets.so.5(+0x1bab7b)[0x7f30b667db7b]
/usr/lib/x86_64-linux-gnu/libQt5Widgets.so.5(_ZN19QApplicationPrivate13notify_helperEP7QObjectP6QEvent+0x8c)[0x7f30b661e05c]
/usr/lib/x86_64-linux-gnu/libQt5Widgets.so.5(_ZN12QApplication6notifyEP7QObjectP6QEvent+0x256)[0x7f30b6623516]
/usr/lib/python2.7/dist-packages/PyQt5/QtWidgets.x86_64-linux-gnu.so(+0x18aabe)[0x7f30b6d2cabe]
/usr/lib/x86_64-linux-gnu/libQt5Core.so.5(_ZN16QCoreApplication14notifyInternalEP7QObjectP6QEvent+0xdb)[0x7f30be33b38b]
/usr/lib/x86_64-linux-gnu/libQt5Gui.so.5(_ZN22QGuiApplicationPrivate17processMouseEventEPN29QWindowSystemInterfacePrivate10MouseEventE+0x281)[0x7f30ba9cd4e1]
/usr/lib/x86_64-linux-gnu/libQt5Gui.so.5(_ZN22QGuiApplicationPrivate24processWindowSystemEventEPN29QWindowSystemInterfacePrivate17WindowSystemEventE+0x135)[0x7f30ba9cf1a5]
/usr/lib/x86_64-linux-gnu/libQt5Gui.so.5(_ZN22QWindowSystemInterface22sendWindowSystemEventsE6QFlagsIN10QEventLoop17ProcessEventsFlagEE+0x28)[0x7f30ba9b2f08]
/usr/lib/x86_64-linux-gnu/libQt5XcbQpa.so.5(+0x6b200)[0x7f30aea92200]
/lib/x86_64-linux-gnu/libglib-2.0.so.0(g_main_context_dispatch+0x2a7)[0x7f30bf6ab197]
/lib/x86_64-linux-gnu/libglib-2.0.so.0(+0x4a3f0)[0x7f30bf6ab3f0]
/lib/x86_64-linux-gnu/libglib-2.0.so.0(g_main_context_iteration+0x2c)[0x7f30bf6ab49c]
/usr/lib/x86_64-linux-gnu/libQt5Core.so.5(_ZN20QEventDispatcherGlib13processEventsE6QFlagsIN10QEventLoop17ProcessEventsFlagEE+0x5f)[0x7f30be3917cf]
/usr/lib/x86_64-linux-gnu/libQt5Core.so.5(_ZN10QEventLoop4execE6QFlagsINS_17ProcessEventsFlagEE+0x10a)[0x7f30be338b4a]
/usr/lib/x86_64-linux-gnu/libQt5Core.so.5(_ZN16QCoreApplication4execEv+0x9c)[0x7f30be340bec]
/usr/lib/python2.7/dist-packages/PyQt5/QtWidgets.x86_64-linux-gnu.so(+0x14dd1b)[0x7f30b6cefd1b]
python(PyEval_EvalFrameEx+0x5ca)[0x4bc4aa]
python(PyEval_EvalCodeEx+0x306)[0x4b9b66]
python(PyEval_EvalFrameEx+0x58e6)[0x4c17c6]
python(PyEval_EvalCodeEx+0x306)[0x4b9b66]
python(PyEval_EvalFrameEx+0x58e6)[0x4c17c6]
python(PyEval_EvalCodeEx+0x306)[0x4b9b66]

log file : calibration_mover-6-stdout.log

arguments:  Namespace(quiet=False)
unknowns:  []

log file : calibration_mover-6.log

[rospy.client][INFO] 2018-12-05 18:04:59,683: init_node, name[/calibration_mover], pid[29202]
[xmlrpc][INFO] 2018-12-05 18:04:59,683: XML-RPC server binding to 127.0.0.1:0
[xmlrpc][INFO] 2018-12-05 18:04:59,683: Started XML-RPC server [http://localhost:44619/]
[rospy.init][INFO] 2018-12-05 18:04:59,684: ROS Slave URI: [http://localhost:44619/]
[rospy.impl.masterslave][INFO] 2018-12-05 18:04:59,684: _ready: http://localhost:44619/
[rospy.registration][INFO] 2018-12-05 18:04:59,684: Registering with master node http://localhost:11311
[xmlrpc][INFO] 2018-12-05 18:04:59,685: xml rpc node: starting XML-RPC server
[rospy.init][INFO] 2018-12-05 18:04:59,784: registered with master
[rospy.rosout][INFO] 2018-12-05 18:04:59,784: initializing /rosout core topic
[rospy.rosout][INFO] 2018-12-05 18:04:59,786: connected to core topic /rosout
[rospy.simtime][INFO] 2018-12-05 18:04:59,786: /use_sim_time is not set, will not subscribe to simulated time [/clock] topic
[rospy.internal][INFO] 2018-12-05 18:05:00,088: topic[/rosout] adding connection to [/rosout], count 0
[rospy.core][INFO] 2018-12-05 18:06:07,418: signal_shutdown [atexit]
[rospy.internal][INFO] 2018-12-05 18:06:07,420: topic[/rosout] removing connection to /rosout
[rospy.impl.masterslave][INFO] 2018-12-05 18:06:07,420: atexit

Thanks for your code!

This issue is just to express my thanks to you. I have already completed the calibration after many errors.

Mobile robot extrinsics camera calibration

Would it be possible to use this package to calculate the extrinsics of a monocular RGB camera on a mobile robot? I am testing with this setup:

  <include file="$(find easy_handeye)/launch/calibrate.launch">
    <arg name="eye_on_hand" value="true"/>

    <!-- you can choose any identifier, as long as you use the same for publishing the calibration -->
    <arg name="namespace_prefix" value="my_eih_calib"/>

    <!-- fill in the following parameters according to your robot's published tf frames -->
    <arg name="robot_base_frame" value="odom"/>
    <arg name="robot_effector_frame" value="base_link"/>

    <!-- fill in the following parameters according to your tracking system's published tf frames -->
    <arg name="tracking_base_frame" value="camera_rgb_frame"/>
    <arg name="tracking_marker_frame" value="fid101"/>

    <arg name="freehand_robot_movement" default="true"/>

  </include>

And
odom

The mobile robot is only able to move in 3DoF: x,y,yaw

I tried with 5, 10, 30 and 40 samples and always ended up with the same result:

translation: 
  x: 0.0
  y: 0.0
  z: 0.0
rotation: 
  x: 0.0
  y: 0.0
  z: 0.0
  w: 1.0

rqt_calibrationmovement GUI having issue with Handeye client

Hi,
I would like to try integrating the handeye client functions into the calibration movements GUI. However, the GUI does not show up after I enable the self.client = HandeyeClient() line (line 25 in rqt_calibrationmovements.py). Do you have any suggestions for overcoming this problem?

TF extrapolation into the past error - Solved

Hi,
thank you for your great code! Somehow I didn't get it working with your aruco_ros implementation, so I used aruco_detect with a tf broadcaster instead. I encountered the problem that tf throws an "extrapolation into the past" error because of timing issues. It could be solved by calling self._get_transforms() in handeye_calibrator.py (line 141) with the argument rospy.Time(0). I searched the web and found this: https://answers.ros.org/question/243179/whats-the-diff-between-rospytime0-and-rospytimenow/
which says that it causes "lookupTransform(..) and friends to return the latest available data for a specific transform, instead of the data at a specific point in time."
Maybe it would be good to change that in future versions?
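To illustrate the semantics: rospy.Time(0) asks tf for the latest available transform, while a concrete stamp asks for the transform at exactly that time (and fails if tf would have to extrapolate). A hedged, self-contained sketch with a stand-in buffer instead of the real tf listener:

```python
class FakeTfBuffer:
    """Stand-in for a tf listener, only to illustrate the timestamp convention."""

    def __init__(self):
        self.transforms = {}  # stamp (float) -> transform (placeholder string)

    def set_transform(self, stamp, value):
        self.transforms[stamp] = value

    def lookup_transform(self, stamp):
        if stamp == 0:  # like rospy.Time(0): return the latest available data
            return self.transforms[max(self.transforms)]
        if stamp not in self.transforms:  # like tf: no data at that exact time
            raise LookupError("extrapolation into the past")
        return self.transforms[stamp]

buf = FakeTfBuffer()
buf.set_transform(10.0, "T@10.0")
buf.set_transform(10.5, "T@10.5")
print(buf.lookup_transform(0))     # latest available: T@10.5
print(buf.lookup_transform(10.0))  # data at that exact stamp: T@10.0
```

The real tf buffer interpolates between stamps rather than requiring an exact match, but the rospy.Time(0) convention is the same.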

Kinova camera calibration

Hello,

I am on Ubuntu 16.04 with ROS Kinetic and I am using a Kinova Jaco2 arm with a monocular camera mounted on the end effector. Is it possible to use this package for the camera calibration? Is it also possible to move the robot for the calibration without using a simulator?

Thank you very much

Cannot take samples

Hello, I am trying to calibrate eye_on_base in order to find the transform between the camera and the base of the robot. When I launch the launch file, the GUI comes up, and when I press 'Take sample' this error message shows up:

[ERROR] [WallTime: 1528577063.870101] Error processing request: . canTransform returned after 10.0099 timeout was 10.
Traceback (most recent call last):
  File "/opt/ros/kinetic/lib/python2.7/dist-packages/rospy/impl/tcpros_service.py", line 623, in _handle_request
    response = convert_return_to_response(self.handler(request), self.response_class)
  File "/home/dimitriskld/thesis_ws/src/easy_handeye/easy_handeye/src/easy_handeye/handeye_server.py", line 38, in take_sample
    self.calibrator.take_sample()
  File "/home/dimitriskld/thesis_ws/src/easy_handeye/easy_handeye/src/easy_handeye/handeye_calibrator.py", line 140, in take_sample
    transforms = self._get_transforms()
  File "/home/dimitriskld/thesis_ws/src/easy_handeye/easy_handeye/src/easy_handeye/handeye_calibrator.py", line 123, in _get_transforms
    time = self._wait_for_transforms()
  File "/home/dimitriskld/thesis_ws/src/easy_handeye/easy_handeye/src/easy_handeye/handeye_calibrator.py", line 110, in _wait_for_transforms
    self.listener.waitForTransform(self.robot_base_frame, self.robot_effector_frame, now, rospy.Duration(10))
Exception: . canTransform returned after 10.0099 timeout was 10.

Traceback (most recent call last):
  File "/home/dimitriskld/thesis_ws/src/easy_handeye/rqt_easy_handeye/src/rqt_easy_handeye/rqt_easy_handeye.py", line 104, in handle_take_sample
    sample_list = self.client.take_sample()
  File "/home/dimitriskld/thesis_ws/src/easy_handeye/easy_handeye/src/easy_handeye/handeye_client.py", line 34, in take_sample
    return self.take_sample_proxy().samples
  File "/opt/ros/kinetic/lib/python2.7/dist-packages/rospy/impl/tcpros_service.py", line 435, in __call__
    return self.call(*args, **kwds)
  File "/opt/ros/kinetic/lib/python2.7/dist-packages/rospy/impl/tcpros_service.py", line 515, in call
    responses = transport.receive_once()
  File "/opt/ros/kinetic/lib/python2.7/dist-packages/rospy/impl/tcpros_base.py", line 696, in receive_once

I have already published a sample transform between robot_base_frame and robot_effector_frame using the command
rosrun tf static_transform_publisher x y z r p y robot_base_frame robot_effector_frame
Also, should I create the frames in the tf tree first? But these frames change all the time.

Thanks in advance!

easy quesition

I'm sorry, I'm a ROS novice, but I want to learn to use your package. Could you give me a detailed explanation, for example of how it is integrated into my workspace? I use catkin_make to compile, but it does not work. Please help!

Thank you!

Not really an issue; I just wanted to thank you. This package helped me with a crucial calibration that I needed on a tight schedule.

Image View Problem

Hi,

Thank you for the work.

The latest code runs well on my computer without any errors. But one weird problem bothers me: I can't find anywhere to turn on the Image View sub-window. It is supposed to be under Plugins -> Visualization, but all I have is a Plot. I can rosrun image_view separately from a new terminal to see the aruco result.

Do I need to set something up in the launch file, or install some rqt-related package?

Thanks.

A question about calibration.launch

Hello, in order to use the RealSense camera I changed ur5_kinect_calibration.launch. When I run it, I get the following errors:

ERROR: cannot launch node of type [easy_handeye/calibrate.py]: can't locate node [calibrate.py] in package [easy_handeye]
ERROR: cannot launch node of type [rqt_easy_handeye/rqt_calibrationmovements]: can't locate node [rqt_calibrationmovements] in package [rqt_easy_handeye]

Please help me solve this.
