mikeferguson / robot_calibration
Generic calibration for robots
License: Apache License 2.0
Hello Mike,
I am using robot_calibration for my robotic arm, and I am using the LED finder for that purpose.
I checked the implementation in fetch_ros, but didn't find an implementation of gripper_led_action_server.
I assume this action server implementation is required, and that the LED finder can't work without it. Can you please confirm this?
I tried to implement my own gripper_led_action_server, but I am getting lots of compilation errors.
Please find the attached code and compilation logs.
gripper_led_action_server.zip
Saurabh
Requires:
Hi, I find that in checkerboard_finder.cpp the corner positions are taken from the depth information returned by the depth camera. This method, however, relies on an accurately calibrated camera depth. It also requires the checkerboard to be big enough for the camera to see, while remaining within the working range of the IR camera. And most importantly, the relative positions of the corner points are fixed, so they can only reflect the orientation of the checkerboard and nothing else.
So I was thinking: could we use the pose of the checkerboard returned by OpenCV's calibration routines instead? Have you considered this method?
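For illustration, here is a minimal sketch of that alternative, assuming the intrinsics (camera_matrix, dist_coeffs) come from the prior camera calibration; cv::solvePnP recovers the board pose from the 2D corners alone, with no depth data:

#include <opencv2/calib3d.hpp>
#include <vector>

// Sketch: recover the checkerboard pose from 2D corners via cv::solvePnP.
// 'corners' are the points returned by cv::findChessboardCorners.
bool boardPoseFromCorners(const std::vector<cv::Point2f>& corners,
                          int rows, int cols, float square_size,
                          const cv::Mat& camera_matrix,
                          const cv::Mat& dist_coeffs,
                          cv::Mat& rvec, cv::Mat& tvec)
{
  std::vector<cv::Point3f> object_points;  // corner grid on the board plane
  for (int i = 0; i < rows; ++i)
    for (int j = 0; j < cols; ++j)
      object_points.emplace_back(j * square_size, i * square_size, 0.0f);

  // rvec/tvec give the board pose in the camera's optical frame.
  return cv::solvePnP(object_points, corners, camera_matrix, dist_coeffs,
                      rvec, tvec);
}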
by using sensor_msgs::PointCloud2Iterator
As noted in #45
Hello,
I'm sorry in advance, as this does not directly concern the calibration procedure but rather the visualization of results; however, I hope it can help others as well.
We are facing some headaches when calibrating. We are currently using a combination of the checkerboard and ground plane (now just plane) finders, and trying to optimize similarly to what is seen in https://github.com/fetchrobotics/fetch_ros/blob/indigo-devel/fetch_calibration/config/calibrate_with_ground.yaml , but we took care of the camera calibration beforehand.
The results of our calibration are:
Ceres Solver Report: Iterations: 34, Initial cost: 4.190723e+00, Final cost: 4.540892e-01, Termination: CONVERGENCE
Parameter Offsets:
left_arm_joint0: -0.515459
left_arm_joint1: 0.0898604
left_arm_joint2: 0.167496
left_arm_joint3: 0.461221
left_arm_joint4: 0.166929
left_arm_joint5: -0.381634
left_arm_joint6: -2.05005
head_link_to_head_camera_link: 0.329103
We have tried to install the URDF and see no changes in rviz. All changes are made in terms of <calibration> tags with the rising option. I have not successfully found information on how this relates to robot_state_publisher, for instance.
On the other hand, I have tried to set some free frames (which are defined by the joints), and those values are updated in the URDF and also seen in rviz after installation.
Did you guys ever face this problem? Any tips?
Thanks in advance
Plane finders lack any related tests
Hello,
I know that in your blog post https://www.robotandchisel.com/2020/04/25/compass-calibration/ you wrote that soft iron calibration was to come. Is that something you are still interested in? Is this package the best place to do it, or does it belong somewhere else?
If this package is the best place to do it, would you welcome a PR?
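For background, a minimal sketch of what soft iron correction involves (illustrative only; not code from this package or the blog post): hard iron calibration subtracts a constant bias, while soft iron calibration additionally applies a 3x3 matrix that reshapes the measured ellipsoid back into a sphere.

#include <array>

// Apply hard-iron offset then soft-iron correction matrix to a raw
// magnetometer sample. Both calibration terms would come from a prior
// fitting step (e.g. an ellipsoid fit over collected samples).
std::array<double, 3> correctMag(
    const std::array<double, 3>& raw,
    const std::array<double, 3>& hard_iron,
    const std::array<std::array<double, 3>, 3>& soft_iron)
{
  std::array<double, 3> centered, out;
  for (int i = 0; i < 3; ++i)
    centered[i] = raw[i] - hard_iron[i];      // remove hard-iron bias
  for (int i = 0; i < 3; ++i)
    out[i] = soft_iron[i][0] * centered[0] +  // apply soft-iron matrix
             soft_iron[i][1] * centered[1] +
             soft_iron[i][2] * centered[2];
  return out;
}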
First time using this package. It looks really flexible!
I'm planning a hand-eye calibration. My checkerboard is on the robot arm; the camera is stationary in the world.
Question - what frame are the "free_frames_initial_values" relative to? My guess is that they are relative to the base link of the kinematic chain.
Happy to make a PR to improve documentation if you give me a little guidance.
Current error blocks are:
All of these duplicate a lot of code, and could be better done if we simply clean up the models a bit so we can use the reprojections with a few basic error blocks:
It would probably be worth adding a "getPlane()" to the model -- for instance, a laser scanner could later be added and its getPlane() could give the "wall" seen by the laser -- this could then be aligned with a getPlane() from a 3d camera that is pointing at the wall. We would need to somehow let the user know that things like arm chains can't properly compute getPlane().
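A hypothetical shape for that API (the name getPlane() is from this proposal; the Plane type and signature are my guesses, not existing package code):

#include <stdexcept>

struct Plane
{
  double a, b, c, d;  // plane as ax + by + cz + d = 0
};

class ChainModel
{
public:
  // Models that cannot produce a plane (e.g. arm chains) should make
  // that explicit rather than return garbage.
  virtual Plane getPlane() const
  {
    throw std::logic_error("getPlane() not supported by this model");
  }
  virtual ~ChainModel() {}
};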
Finally, it's worth looking at a camera2d_to_camera2d block. I'd really like to add this at some point, so that we don't have to estimate the checkerboard->gripper transform, or so that we could calibrate two 2d cameras together with nothing else. This will, however, require encoding the 3d points that are being back-projected into both cameras somewhere in the state estimation of the free parameters. Mixing this block with camera2d_to_chain3d could also produce some wonky results if the chain3d projection does not match the magic state estimation...
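A rough, illustrative sketch of what such a residual could look like in Ceres, with the 3d point as a free parameter (intrinsics are fixed and the inter-camera transform is omitted for brevity; a real block would transform the point into each camera's frame and likely expose the poses as free parameters too):

#include <ceres/ceres.h>

// Reprojection residual for one 2d camera observing a free 3d point.
struct Reprojection2dError
{
  Reprojection2dError(double u, double v, double fx, double fy,
                      double cx, double cy)
    : u_(u), v_(v), fx_(fx), fy_(fy), cx_(cx), cy_(cy) {}

  template <typename T>
  bool operator()(const T* const point, T* residuals) const
  {
    // Pinhole projection of the free 3d point (in this camera's frame).
    residuals[0] = T(fx_) * point[0] / point[2] + T(cx_) - T(u_);
    residuals[1] = T(fy_) * point[1] / point[2] + T(cy_) - T(v_);
    return true;
  }

  double u_, v_, fx_, fy_, cx_, cy_;
};

int main()
{
  // One free 3d point observed by two 2d cameras (toy numbers only).
  double point[3] = {0.1, 0.1, 1.5};  // initial guess
  ceres::Problem problem;
  problem.AddResidualBlock(
      new ceres::AutoDiffCostFunction<Reprojection2dError, 2, 3>(
          new Reprojection2dError(320.0, 240.0, 525.0, 525.0, 319.5, 239.5)),
      NULL, point);
  problem.AddResidualBlock(
      new ceres::AutoDiffCostFunction<Reprojection2dError, 2, 3>(
          new Reprojection2dError(330.0, 240.0, 525.0, 525.0, 319.5, 239.5)),
      NULL, point);

  ceres::Solver::Options options;
  ceres::Solver::Summary summary;
  ceres::Solve(options, &problem, &summary);
  return 0;
}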
The LED finder can be improved in a number of ways:
YAML config should include:
I'm running Ubuntu 18.04 with ROS Melodic, installed from the repos. I've found that when running calibrate, the following error pops up when this line of code in checkerboard_finder.cpp gets run:
int found = cv::findChessboardCorners(bridge->image, checkerboard_size,
points, CV_CALIB_CB_ADAPTIVE_THRESH);
OpenCV Error: Assertion failed (blockSize % 2 == 1 && blockSize > 1) in adaptiveThreshold, file /build/opencv-L2vuMj/opencv-3.2.0+dfsg/modules/imgproc/src/thresh.cpp, line 1447
terminate called after throwing an instance of 'cv::Exception'
what(): /build/opencv-L2vuMj/opencv-3.2.0+dfsg/modules/imgproc/src/thresh.cpp:1447: error: (-215) blockSize % 2 == 1 && blockSize > 1 in function adaptiveThreshold
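One defensive sketch, under the assumption (not confirmed) that the failure comes from an empty or degenerate image reaching the detector: findChessboardCorners derives its adaptive-threshold block size from the image dimensions, so a bad cv_bridge conversion could trip this assertion. A guard before the call might look like:

// Hypothetical guard; variable names follow the snippet above.
if (bridge->image.empty() || bridge->image.rows < 10 || bridge->image.cols < 10)
{
  ROS_WARN("Image is empty or too small for findChessboardCorners");
  return false;
}
int found = cv::findChessboardCorners(bridge->image, checkerboard_size,
                                      points, CV_CALIB_CB_ADAPTIVE_THRESH);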
Hello Mike,
I am using the robot_calibration package to calibrate my robot, and it is very helpful.
I am using only one LED for calibration, so I defined only one LED in the .yaml file.
But it was not working. I tracked down the issue and would like to suggest a solution, as below:
In the led_finder.cpp file (link), at line number 188, the code is as follows:
code_idx = (code_idx + 1) % 8;
This can be replaced with:
code_idx = (code_idx + 1) % codes_.size();
Can you please have a look at it and change it in the main code?
Regards,
Saurabh
/home/fergs/kinetic/src/robot_calibration/robot_calibration/src/ground_plane_finder.cpp: In member function ‘virtual bool robot_calibration::GroundPlaneFinder::find(robot_calibration_msgs::CalibrationData*)’:
/home/fergs/kinetic/src/robot_calibration/robot_calibration/src/ground_plane_finder.cpp:150:47: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
for (size_t i = step; i < cloud_.width && k < points_total; i +=step)
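A minimal illustration of the type fix (assuming step and points_total are currently declared as signed ints in find()): make both sides of each comparison share an unsigned type.

#include <cstddef>

// Stand-in for the loop in ground_plane_finder.cpp: casting (or simply
// declaring the counters as size_t) silences -Wsign-compare, because
// cloud_.width is unsigned.
void sampleCloud(unsigned cloud_width, int step, int points_total)
{
  size_t k = 0;
  for (size_t i = step; i < static_cast<size_t>(cloud_width) &&
                        k < static_cast<size_t>(points_total); i += step)
  {
    ++k;  // stand-in for the point-sampling body
  }
}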
If motion planning fails for any reason, we should not try to capture points.
I have implemented this calibration package. My goal is to calibrate an ABB robot using a RealSense D415 camera and just one checkerboard. I do get some results: a calibrated URDF and some camera calibration yaml files. My results are probably not that good yet. I am planning to use more features in the future, at least once I can get the calibration results working.
My problem is that I do not know how to make use of the calibration rising tag that is inserted into the updated URDF. Initially, I thought this was a standard URDF element and that it would be taken into account when planning, but there is no difference at all in the movement when I try using the updated URDF. I've looked around but could not find much information about this. Can someone point me to where I can find more information about this calibration tag and how it should affect positioning?
In case I have to implement this part myself: do I simply add these offsets to the joint states? I saw something similar in the UR driver here: https://github.com/ros-industrial/universal_robot/blob/hydro-devel/ur_driver/src/ur_driver/driver.py#L341
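For reference, a minimal sketch of that approach, mirroring what the linked UR driver does (this is not a robot_calibration API; the topic names and offset values below are assumptions):

#include <map>
#include <string>
#include <ros/ros.h>
#include <sensor_msgs/JointState.h>

// Add each calibrated offset to the reported joint position before
// republishing the joint states.
std::map<std::string, double> g_offsets;
ros::Publisher g_pub;

void jointStateCb(const sensor_msgs::JointState::ConstPtr& msg)
{
  sensor_msgs::JointState corrected = *msg;
  for (size_t i = 0; i < corrected.name.size(); ++i)
  {
    std::map<std::string, double>::const_iterator it =
        g_offsets.find(corrected.name[i]);
    if (it != g_offsets.end())
      corrected.position[i] += it->second;  // apply the calibration offset
  }
  g_pub.publish(corrected);
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "joint_offset_node");
  ros::NodeHandle nh;
  g_offsets["joint1"] = -0.515459;  // example value from a calibration run
  g_pub = nh.advertise<sensor_msgs::JointState>("joint_states_calibrated", 10);
  ros::Subscriber sub = nh.subscribe("joint_states_raw", 10, jointStateCb);
  ros::spin();
  return 0;
}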
Currently the checkerboard "world" poses are poorly defined and not easily changed. The orientation of the checkerboard points relative to the kinematic chain tip link is hard-coded for the Fetch configuration. Additionally, there is currently no way to define a starting position, so a number of optimizer cycles are spent simply moving the checkerboard to a relatively good starting position.
Hello Mike,
There was one more issue, in the function waitToSettle(), when we use manual mode to move the arm.
So here I would like to propose a solution:
Please let me know if you agree or find any issue with it.
Thanks,
Saurabh
The current system assumes one RGBD sensor and one "world" sensor. Once DepthCameraInfo is updated, CalibrationData should be changed to an array of messages, each of which is one "sample". Each sample would then have joint_states plus an array of observations.
CalibrationData.msg
Observation.msg:
Needs to be updated for new messages
Right now, the plane finder assumes that all visible points within the bounding box are part of the plane. This works fine for cases like "robot views the ground" -- but if anything else enters the frame, it's problematic. It also doesn't let us handle frames with other items/clutter in them, or do more interesting things like finding the face plane of the gripper (which has long been something I've wanted to be able to do).
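For illustration only (my sketch, not this proposal's concrete design), a bare-bones RANSAC plane fit is one common way to keep only the points that actually belong to the plane:

#include <cstdlib>
#include <cmath>
#include <vector>

struct Pt { double x, y, z; };

// Return the indices of points within 'threshold' of the best plane
// found over 'iterations' random three-point samples.
std::vector<int> ransacPlaneInliers(const std::vector<Pt>& pts,
                                    int iterations, double threshold)
{
  std::vector<int> best;
  for (int it = 0; it < iterations; ++it)
  {
    // Pick three random points and form a candidate plane normal.
    const Pt& a = pts[std::rand() % pts.size()];
    const Pt& b = pts[std::rand() % pts.size()];
    const Pt& c = pts[std::rand() % pts.size()];
    const double ux = b.x - a.x, uy = b.y - a.y, uz = b.z - a.z;
    const double vx = c.x - a.x, vy = c.y - a.y, vz = c.z - a.z;
    double nx = uy * vz - uz * vy;
    double ny = uz * vx - ux * vz;
    double nz = ux * vy - uy * vx;
    const double norm = std::sqrt(nx * nx + ny * ny + nz * nz);
    if (norm < 1e-9)
      continue;  // degenerate sample
    nx /= norm; ny /= norm; nz /= norm;
    const double d = -(nx * a.x + ny * a.y + nz * a.z);

    std::vector<int> inliers;
    for (size_t i = 0; i < pts.size(); ++i)
      if (std::fabs(nx * pts[i].x + ny * pts[i].y + nz * pts[i].z + d) < threshold)
        inliers.push_back(static_cast<int>(i));
    if (inliers.size() > best.size())
      best = inliers;
  }
  return best;
}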
Proposed improvements to PlaneFinder:
Perhaps call it "ExtendedCameraInfo", and switch to a key:value store rather than the hard-coded z_scaling and z_offset, as future sensors will likely have different parameters.
As part of calibration:
Using a script to capture poses, when a whole pipeline is built to perceive and calibrate, seems out of place.
Not only that, but at least in my case I needed to edit the script to add a line with self.last_state_.features = ['checkerboard_finder', 'ground_plane_finder'] (or whatever my captured features are), or else it wouldn't work during perception. This might be a bug in itself.
Nonetheless, my suggestion is a --pose-capture BAG mode, which would be very similar to manual mode but would not enable perception (since the person is likely moving the arm themselves and is in the FOV of the camera). In this mode a bag would be saved with /joint_states.
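A minimal sketch of what that mode could do (the node and bag names are assumptions; --pose-capture does not exist yet):

#include <ros/ros.h>
#include <rosbag/bag.h>
#include <sensor_msgs/JointState.h>

// Record /joint_states to a bag while a person moves the arm, with all
// perception disabled.
rosbag::Bag g_bag;

void jointStateCb(const sensor_msgs::JointState::ConstPtr& msg)
{
  g_bag.write("/joint_states", ros::Time::now(), *msg);
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "pose_capture");
  ros::NodeHandle nh;
  g_bag.open("poses.bag", rosbag::bagmode::Write);
  ros::Subscriber sub = nh.subscribe("/joint_states", 10, jointStateCb);
  ros::spin();
  g_bag.close();
  return 0;
}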
This mode would enable a feature which I describe in another issue and will cross-ref to this one.
This is an extension of #41, but is quite a bit more involved. It will, however, require encoding the 3d points that are being back-projected into both cameras somewhere in the state estimation of the free parameters. Mixing this block with camera2d_to_chain3d could also produce some wonky results if the chain3d projection does not match the magic state estimation...
I've been preparing the kinetic-devel branch mainly on my indigo/14.04 machine (ceres 1.8.0) -- it works fine. I've been adding additional tests to help validate new changes -- in particular, I've added these lines:
robot_calibration/robot_calibration/test/error_block_tests.cpp
Lines 288 to 290 in 50bc057
From the buildfarm log:
/tmp/catkin_workspace/src/robot_calibration/robot_calibration/test/error_block_tests.cpp:288
Expected: (opt.summary()->final_cost) < (1e-26), actual: 1.67796e-25 vs 1e-26
/tmp/catkin_workspace/src/robot_calibration/robot_calibration/test/error_block_tests.cpp:289
Expected: (opt.summary()->final_cost) < (opt.summary()->initial_cost), actual: 1.67796e-25 vs 1.67796e-25
/tmp/catkin_workspace/src/robot_calibration/robot_calibration/test/error_block_tests.cpp:290
Expected: (opt.summary()->iterations.size()) > (10), actual: 1 vs 10
With ceres 1.8.0, I get:
initial_cost: 1.67796e-25
final_cost: 1.43174e-27
iterations: 17
When I run catkin_make, there is an error:
make[2]: *** No rule to make target '/usr/lib/x86_64-linux-gnu/libspqr.so', needed by '/home/james/blockly_ws/devel/lib/librobot_calibration.so'. Stop. make[1]: *** [robot_calibration-indigo-devel/robot_calibration/CMakeFiles/robot_calibration.dir/all] Error 2 make: *** [all] Error 2 Invoking "make -j4 -l4" failed
I only have libspqr.so.1.3.1 in /usr/lib/x86_64-linux-gnu.
Keeping them separate really doesn't gain us anything
The problem has manifested itself in the form of bagfiles that have incorrect joint_states -- the messages appear not to be processed. This is especially true with the checkerboard detector: it appears that many messages get dropped and not processed. c7bd551 is a temporary hack that adds an extra callback processing thread, but it may cause other issues.
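Conceptually, the hack amounts to something like the following (an illustration of multi-threaded callback processing in ROS, not the commit's actual code):

#include <ros/ros.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "capture");
  // Spin callbacks on more than one thread so a slow detector callback
  // doesn't starve the /joint_states subscriber.
  ros::AsyncSpinner spinner(2);
  spinner.start();
  ros::waitForShutdown();
  return 0;
}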
Hello Mike,
I have seen in the code that joint states are captured along with sensor observations.
Can you please let me know whether the joint states are used at all when using the LED finder for calibration?
The reason I am asking is that I was trying to calibrate my arm, and during optimisation I am getting the INITIAL_COST values below:
[ INFO] [1548670250.287325912]: Done capturing samples
[ INFO] [1548670250.458293943]: Creating chain 'arm' from base_link to link9
[ INFO] [1548670250.458999058]: Creating chain 'camera' from base_link to camera_depth_optical_frame
[ INFO] [1548670250.459494857]: optimize: number of offsets: 18
[ INFO] [1548670250.460143902]: Diff. X: -0.014663, Y: -0.008728, Z: -0.013612
INITIAL COST (0)
x: -0.014663
y: -0.008728
z: -0.013612
[ INFO] [1548670255.473269566]: Diff. X: -0.010441, Y: -0.030350, Z: -0.009042
INITIAL COST (1)
x: -0.010441
y: -0.030350
z: -0.009042
[ INFO] [1548670256.158610420]: Diff. X: -0.018447, Y: -0.007326, Z: -0.012469
INITIAL COST (2)
x: -0.018447
y: -0.007326
z: -0.012469
[ INFO] [1548670256.755370759]: Diff. X: -0.011827, Y: -0.028897, Z: -0.006128
INITIAL COST (3)
x: -0.011827
y: -0.028897
z: -0.006128
[ INFO] [1548670257.349901199]: Diff. X: -0.016052, Y: -0.000944, Z: -0.014128
INITIAL COST (4)
x: -0.016052
y: -0.000944
z: -0.014128
[ INFO] [1548670268.770670017]: Diff. X: -0.015684, Y: -0.027633, Z: -0.005686
INITIAL COST (5)
x: -0.015684
y: -0.027633
z: -0.005686
[ INFO] [1548670270.316994017]: Diff. X: -0.017168, Y: -0.006394, Z: -0.012698
INITIAL COST (6)
x: -0.017168
y: -0.006394
z: -0.012698
[ INFO] [1548670270.805101483]: Diff. X: -0.009859, Y: -0.029202, Z: -0.009290
INITIAL COST (7)
x: -0.009859
y: -0.029202
z: -0.009290
[ INFO] [1548670271.236804588]: Diff. X: -0.014154, Y: -0.008500, Z: -0.014610
INITIAL COST (8)
x: -0.014154
y: -0.008500
z: -0.014610
[ INFO] [1548670271.692414772]: Diff. X: -0.012408, Y: -0.029938, Z: -0.006983
INITIAL COST (9)
x: -0.012408
y: -0.029938
z: -0.006983
[ INFO] [1548670272.160246525]: Diff. X: -0.018766, Y: -0.012255, Z: -0.012287
INITIAL COST (10)
x: -0.018766
y: -0.012255
z: -0.012287
[ INFO] [1548670272.694415553]: Diff. X: -0.011520, Y: -0.028422, Z: -0.010442
INITIAL COST (11)
x: -0.011520
y: -0.028422
z: -0.010442
[ INFO] [1548670273.410029176]: Diff. X: -0.020646, Y: -0.022069, Z: -0.005931
INITIAL COST (12)
x: -0.020646
y: -0.022069
z: -0.005931
[ INFO] [1548670273.905140344]: Diff. X: -0.012245, Y: -0.025323, Z: -0.011808
INITIAL COST (13)
x: -0.012245
y: -0.025323
z: -0.011808
[ INFO] [1548670275.668234342]: Diff. X: -0.023359, Y: -0.017796, Z: -0.004427
INITIAL COST (14)
x: -0.023359
y: -0.017796
z: -0.004427
The INITIAL_COST values are quite low, yet I am getting huge variations in my generated URDF.
I checked the joint states, and they seem to be correct.
Can you please give me a hint as to why I am getting such huge variations in the generated URDF?
Regards,
Saurabh
I'm running chain_manager_tests.py after manually loading the yaml files from the tests (I encountered the same issue with yaml files I created myself), and I get:
[FATAL] [1583526733.187227715]: ASSERTION FAILED
file = /home/joshua/catkin_ws/src/robot_calibration/robot_calibration/src/chain_manager.cpp
line = 37
cond = chains.getType() == XmlRpc::XmlRpcValue::TypeArray
Any guidance would be helpful,
Thanks
The function defines p and point (both PointStamped). We set point to the initial centroid, and then later compare each additional point to the centroid -- except that we use p instead of point, and p is presumably initialized to (0,0,0). Since the gripper is usually > 0.5m from the origin of the camera, this check probably always fails: https://github.com/mikeferguson/robot_calibration/blob/indigo-devel/robot_calibration/src/led_finder.cpp#L478
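A small reconstruction of the suspected bug and fix (names are from the issue text; the exact surrounding code may differ):

#include <cmath>
#include <geometry_msgs/PointStamped.h>

// Distance from a cloud point to the tracked centroid.
double distanceTo(const geometry_msgs::PointStamped& centroid,
                  double x, double y, double z)
{
  const double dx = x - centroid.point.x;
  const double dy = y - centroid.point.y;
  const double dz = z - centroid.point.z;
  return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Buggy call:  distanceTo(p, ...)      // p was never assigned -- (0,0,0)
// Fixed call:  distanceTo(point, ...)  // compare against the centroid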
Related issue: #54, as that is where this would make the most sense, but this can also apply to the automatic and from-bag modes.
Since we have access to the URDF, we have knowledge of the joint limits. During a "pose capture" mode (#54), or even during calibration, feedback could be given on how much of each joint's available range has been explored so far. You mentioned this as being critical in #17. A sketch of the underlying computation follows below.
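A minimal sketch of how the explored range could be computed from the URDF limits (illustrative only; not part of the package):

#include <algorithm>
#include <string>
#include <vector>
#include <urdf/model.h>

// Fraction of a joint's travel covered by the positions seen so far;
// a crude coverage estimate based only on the min/max sampled values.
double exploredFraction(const urdf::Model& model,
                        const std::string& joint_name,
                        const std::vector<double>& sampled_positions)
{
  urdf::JointConstSharedPtr joint = model.getJoint(joint_name);
  if (!joint || !joint->limits || sampled_positions.empty())
    return 0.0;
  const double lo = joint->limits->lower;
  const double hi = joint->limits->upper;
  const double seen_lo = *std::min_element(sampled_positions.begin(),
                                           sampled_positions.end());
  const double seen_hi = *std::max_element(sampled_positions.begin(),
                                           sampled_positions.end());
  return (seen_hi - seen_lo) / (hi - lo);
}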
Ideas for feedback:
I've been thinking about how to create a wizard for this, so that people don't have to create/update lots of YAML files manually (which is also error-prone). In the meantime, this ticket serves as a brief HOW-TO on manually creating the files.
As with the MoveIt wizard, we would want to be able to reload the YAML files exported and edit them.
Hello Mike,
In the process of calibrating my robot, I am facing one more problem.
The scenario is as below:
But while optimising, I am getting the logs below:
[ INFO] [1530684518.983065179]: Done capturing samples
[ INFO] [1530684519.215928914]: Creating chain 'arm' from base_link to target
[ INFO] [1530684519.216580523]: Creating camera3d 'camera' in frame camera_depth_optical_frame
[ INFO] [1530684519.217042632]: optimize: number of offsets: 7
INITIAL COST (0)
x: -nan
y: -nan
z: -nan
INITIAL COST (1)
x: -nan
y: -nan
z: -nan
INITIAL COST (2)
x: -nan
y: -nan
z: -nan
INITIAL COST (3)
x: -nan
y: -nan
z: -nan
INITIAL COST (4)
x: -nan
y: -nan
z: -nan
INITIAL COST (5)
x: -nan
y: -nan
z: -nan
INITIAL COST (6)
x: -nan
y: -nan
z: -nan
INITIAL COST (7)
x: -nan
y: -nan
z: -nan
INITIAL COST (8)
x: -nan
y: -nan
z: -nan
INITIAL COST (9)
x: -nan
y: -nan
z: -nan
INITIAL COST (10)
x: -nan
y: -nan
z: -nan
INITIAL COST (11)
x: -nan
y: -nan
z: -nan
Solver output:
WARNING: Logging before InitGoogleLogging() is written to STDERR
W0704 11:38:39.255892 10933 residual_block.cc:131]
Error in evaluating the ResidualBlock.
There are two possible reasons. Either the CostFunction did not evaluate and fill all
residual and jacobians that were requested or there was a non-finite value (nan/infinite)
generated during the cost or jacobian computation.
Residual Block size: 1 parameter blocks x 3 residuals
For each parameter block, the value of the parameters are printed in the first column
and the value of the jacobian under the corresponding residual. If a ParameterBlock was
held constant then the corresponding jacobian is printed as 'Not Computed'. If an entry
of the Jacobian/residual array was requested but was not written to by user code, it is
indicated by 'Uninitialized'. This is an error. Residuals or Jacobian values evaluating
to Inf or NaN is also an error.
Residuals: -nan -nan -nan
Parameter Block 0, size: 7
0 | -nan -nan -nan
0 | -nan -nan -nan
0 | -nan -nan -nan
0 | -nan -nan -nan
0 | -nan -nan -nan
0 | -nan -nan -nan
0 | -nan -nan -nan
E0704 11:38:39.256040 10933 trust_region_minimizer.cc:73] Terminating: Residual and Jacobian evaluation failed.
Ceres Solver Report: Iterations: 0, Initial cost: -1.000000e+00, Final cost: -1.000000e+00, Termination: FAILURE
Parameter Offsets:
joint5: 0
joint6_x: 0
joint6_y: 0
joint6_z: 0
joint6_a: 0
joint6_b: 0
joint6_c: -1.49012e-08
[ INFO] [1530684519.258365438]: Done calibrating
================================================================================
REQUIRED process [robot_calibration-1] has died!
process has finished cleanly
log file: /home/saurabh/.ros/log/bdc8d9aa-7f4f-11e8-a77c-a86bad217c79/robot_calibration-1*.log
Initiating shutdown!
================================================================================
[calibration_bagger-2] killing on exit
[robot_calibration-1] killing on exit
shutting down processing monitor...
... shutting down processing monitor complete
done
I think the nan values in INITIAL_COSTS are the problem, and the reason seems to be that the cost function has not completed its evaluation.
Can you please give some input on the possible reason for this error?
Please find calibrate.yaml, capture.yaml, and the calibration_data.bag file attached.
Also, please let me know if you need more information.
config.zip
Saurabh
Once ros/rosdistro#18155 is merged, need to add melodic badges to README in master branch
Hello,
I currently work with a UR robot, and I am focused on HandEyeCalibration and then TagDetection.
Is there any clear tutorial on how to use robot_calibration? Does it work with live transformation capturing, or could I load specific transformations from files?
Best regards,
Kuba
/home/ubr/noetic/src/robot_calibration/robot_calibration/src/finders/checkerboard_finder.cpp: In member function ‘bool robot_calibration::CheckerboardFinder::findInternal(robot_calibration_msgs::CalibrationData*)’:
/home/ubr/noetic/src/robot_calibration/robot_calibration/src/finders/checkerboard_finder.cpp:177:49: error: ‘CV_CALIB_CB_ADAPTIVE_THRESH’ was not declared in this scope
177 | points, CV_CALIB_CB_ADAPTIVE_THRESH);
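This happens because OpenCV 4 removed the old C-style CV_* constants. A likely fix (assuming OpenCV >= 4, where the constant lives in the cv namespace) is to update the call:

// CV_CALIB_CB_ADAPTIVE_THRESH was dropped with the C API in OpenCV 4;
// the C++ replacement is cv::CALIB_CB_ADAPTIVE_THRESH.
int found = cv::findChessboardCorners(bridge->image, checkerboard_size,
                                      points, cv::CALIB_CB_ADAPTIVE_THRESH);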
I am trying to use this package to calibrate a 6-DOF robot arm. I have made different configurations and have had some time to play around with this package. I got some results using a fixed depth camera, a fixed checkerboard, and a blinking LED mounted on the robot arm. (I implemented the blinking LED finder myself; I am not yet sure the observations I add are correct.) See this issue for more background.
Currently, I am trying to use multiple checkerboards. I have three checkerboards of different sizes and a depth camera mounted on the robot arm. The checkerboards are fixed: one to a wall on the left of the robot, one to the right, and one on the ground. The problem is that I cannot get any results. This is probably because, for each pose, all features need to be captured, as seen here. Simply removing the break and continue will most probably result in what I want: the program continuing to capture the other features despite one or more features not being found, with the joint states always being added. My question is: will this be handled correctly by the optimizer? If not, what should I modify to enable this functionality?
I assume that I will have to add some things in the capture phase; that said, I do not know where to start. A rough sketch of the loop change I have in mind is below.
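A rough sketch of the loop change described above (illustrative names, not the package's actual capture code; FinderPtr stands in for however the feature finders are held): instead of bailing out when one finder fails, run every finder and keep the sample if at least one feature was found.

#include <map>
#include <string>
#include <vector>
#include <ros/ros.h>
#include <robot_calibration_msgs/CalibrationData.h>

template <typename FinderPtr>
bool captureFeatures(std::map<std::string, FinderPtr>& finders,
                     const std::vector<std::string>& feature_names,
                     robot_calibration_msgs::CalibrationData& msg)
{
  bool found_any = false;
  for (size_t i = 0; i < feature_names.size(); ++i)
  {
    if (finders[feature_names[i]]->find(&msg))
      found_any = true;  // keep going even when other features are missing
    else
      ROS_WARN_STREAM("Could not capture feature: " << feature_names[i]);
  }
  return found_any;  // the caller stores the sample only when this is true
}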