
robot_calibration's Issues

Gripper_led_action_server implementation

Hello Mike,
I am using robot_calibration for my robotic arm, and I am using the LED finder for that purpose.

I checked the implementation in fetch_ros, but did not find an implementation of gripper_led_action_server. I assume this action server implementation is required and that the LED finder cannot work without it. Can you please confirm this?

I tried to implement my own gripper_led_action_server, but I am getting lots of compilation errors.
Please find the attached code and compilation log file.
gripper_led_action_server.zip
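
For reference, this is the bare-bones skeleton I was aiming for (the action type is my assumption based on what the LED finder's client appears to use, and the goal field name is a guess -- please correct me if the real interface differs):

  // gripper_led_action_server.cpp -- minimal skeleton, assumptions noted in comments
  #include <boost/bind.hpp>
  #include <ros/ros.h>
  #include <actionlib/server/simple_action_server.h>
  #include <robot_calibration_msgs/GripperLedCommandAction.h>  // assumed action type

  class GripperLedActionServer
  {
  public:
    explicit GripperLedActionServer(ros::NodeHandle& nh)
      // Action name is a placeholder -- match whatever your LED finder config points at.
      : server_(nh, "gripper_led_action",
                boost::bind(&GripperLedActionServer::execute, this, _1), false)
    {
      server_.start();
    }

    void execute(const robot_calibration_msgs::GripperLedCommandGoalConstPtr& goal)
    {
      // Drive the LED hardware here. The field name "led_code" is an assumption --
      // check robot_calibration_msgs/GripperLedCommand.action for the real layout.
      // setLed(goal->led_code);
      server_.setSucceeded();
    }

  private:
    actionlib::SimpleActionServer<robot_calibration_msgs::GripperLedCommandAction> server_;
  };

  int main(int argc, char** argv)
  {
    ros::init(argc, argv, "gripper_led_action_server");
    ros::NodeHandle nh;
    GripperLedActionServer server(nh);
    ros::spin();
    return 0;
  }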

Saurabh

Add support for 2d cameras and robot-held checkerboards

Requires:

  • CheckerboardFinder that puts 2d points in data rather than 3d
  • chain3d_to_camera2d_error block - whereas all the other error blocks use the project() function of both models to move all the points into 3d and then compare them, this error block would project() the chain into 3d points, and then project those 3d points into the 2d camera using the optical model. The residual would be the difference between the projected 2d camera points and the observed points from the 2d camera. Requires adding an inverse projection function to the models (see the sketch below).
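
As a sketch, the per-corner residual would be something like the following (plain pinhole model; fx, fy, cx, cy come from the camera intrinsics, and the chain point is assumed to already be expressed in the camera's optical frame -- names are illustrative):

  // Project a 3d chain point into the 2d image and compare against the observed corner.
  void camera2dResidual(double x, double y, double z,
                        double fx, double fy, double cx, double cy,
                        double observed_u, double observed_v,
                        double* residual)
  {
    double u = fx * (x / z) + cx;
    double v = fy * (y / z) + cy;
    residual[0] = u - observed_u;
    residual[1] = v - observed_v;
  }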

Using opencv calibration function to return checkerboard finder's depth information

Hi, I find that in checkerboard_finder.cpp the positions of the corners seem to be taken from the depth information returned by the depth camera. This method, however, relies on an accurately calibrated camera depth. It also requires the checkerboard to be big enough for the camera to see while remaining within the working range of the IR camera. Most importantly, the relative positions of the corner points are fixed, so they can only reflect the orientation of the checkerboard and nothing else.

So I was thinking: could we use the pose of the checkerboard returned by OpenCV's calibration functions instead? Have you considered this method?
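
Something along these lines is what I had in mind (a rough sketch using OpenCV's pose estimation; the board geometry and intrinsics would come from the existing finder configuration and CameraInfo):

  #include <opencv2/calib3d.hpp>
  #include <vector>

  // corners: 2d corners from cv::findChessboardCorners (same ordering as object points)
  // camera_matrix / dist_coeffs: from the camera's CameraInfo
  bool checkerboardPose(const std::vector<cv::Point2f>& corners,
                        const cv::Size& checkerboard_size, double square_size,
                        const cv::Mat& camera_matrix, const cv::Mat& dist_coeffs,
                        cv::Mat& rvec, cv::Mat& tvec)
  {
    // Build the ideal 3d corner positions in the checkerboard's own frame.
    std::vector<cv::Point3f> object_points;
    for (int i = 0; i < checkerboard_size.height; ++i)
      for (int j = 0; j < checkerboard_size.width; ++j)
        object_points.push_back(cv::Point3f(j * square_size, i * square_size, 0.0));

    // Recover the board pose relative to the camera from the 2d corners alone.
    return cv::solvePnP(object_points, corners, camera_matrix, dist_coeffs, rvec, tvec);
  }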

Document how to apply calibration offsets

Hello,

I'm sorry in advance, as this does not directly concern the calibration procedure but rather the visualization of results; however, I hope it can help others as well.

We are facing some headaches when calibrating. We are currently using a combination of the checkerboard and ground plane (now just plane) finders, trying to optimize similarly to what is seen in https://github.com/fetchrobotics/fetch_ros/blob/indigo-devel/fetch_calibration/config/calibrate_with_ground.yaml , except that we took care of the camera calibration beforehand.

The results of our calibration are:

Ceres Solver Report: Iterations: 34, Initial cost: 4.190723e+00, Final cost: 4.540892e-01, Termination: CONVERGENCE
Parameter Offsets:
left_arm_joint0: -0.515459
left_arm_joint1: 0.0898604
left_arm_joint2: 0.167496
left_arm_joint3: 0.461221
left_arm_joint4: 0.166929
left_arm_joint5: -0.381634
left_arm_joint6: -2.05005
head_link_to_head_camera_link: 0.329103

We have tried to install the URDF and see no changes in rviz. All changes are made in terms of <calibration> tags with the rising option. I have not successfully found information on how this relates to robot_state_publisher, for instance.

On the other hand, I have tried to set some free frames (which are defined by the joints), and these values are updated in the URDF and are also visible in rviz after installation.

Did you guys ever face this problem? Any tips?

Thanks in advance

What is the "base frame" where poses are defined?

First time using this package. It looks really flexible!

I'm planning a hand-eye calibration. My checkerboard is on the robot arm, camera is stationary in the world.

Question: what frame are the "free_frames_initial_values" relative to? My guess is that they are relative to the base link of the kinematic chain.

Happy to make a PR to improve documentation if you give me a little guidance.

Clean up error blocks, add some new ones

Current error blocks are:

  • camera3d_to_arm
  • camera3d_to_ground
  • camera_to_camera (this doesn't actually do what you think...)

All of these duplicate a lot of code, and could be done better if we simply clean up the models a bit so that we can use the reprojections with a few basic error blocks:

  • chain3d_to_chain3d - the residual is 3N values, corresponding to the X, Y, and Z error between all N points. Each of X, Y, and Z would have its own scaling factor in the residual. This would work for camera3d-to-arm-chain and camera3d-to-camera3d (when both view something like a checkerboard).
  • chain3d_to_plane - the residual is N values, corresponding to the distance of each point from the plane (see the sketch after this list). Needs a single scaling value for the residuals. This would be used for ground-plane calibration, camera-to-wall calibration, etc. Only the chain3d would be a "sensor"; the plane would be a parameter.
  • plane_to_plane - the residual is 4 values, corresponding to the X, Y, Z angle-axis difference of the normal vectors, plus a residual for the plane-to-plane distance. As above, each of the 4 values would have its own scaling factor. This would be used to align two 3d cameras which cannot see particular feature points, for instance if a pair of 3d cameras needs ground/wall alignment, or if we want to align a 3d camera and a laser to a wall.
  • outrageous_error - remains as-is.
  • camera2d_to_chain3d - (NEW) - whereas all the other error blocks use the project() function of both models to move all the points into 3d and then compare them, this error block would project() the chain into 3d points, and then project those 3d points into the 2d camera using the optical model. The residual would be the difference between the projected 2d camera points and the observed points from the 2d camera.
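
For reference, a sketch of the per-point residual chain3d_to_plane would produce (plane given as ax + by + cz + d = 0, chain points already projected into the common frame, and scale being the single scaling value mentioned above):

  #include <cmath>
  #include <vector>

  // residuals gets N values: the scaled signed distance of each projected point from the plane.
  void chain3dToPlaneResiduals(const std::vector<double>& px,
                               const std::vector<double>& py,
                               const std::vector<double>& pz,
                               double a, double b, double c, double d, double scale,
                               std::vector<double>& residuals)
  {
    double norm = std::sqrt(a * a + b * b + c * c);
    residuals.resize(px.size());
    for (size_t i = 0; i < px.size(); ++i)
      residuals[i] = scale * (a * px[i] + b * py[i] + c * pz[i] + d) / norm;
  }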

It would probably be worth adding a "getPlane()" to the model -- for instance, a laser scanner could later be added and its getPlane() could give the "wall" seen by the laser -- this could then be aligned with a getPlane() from a 3d camera that is pointing at the wall. We would need to somehow let the user know that things like arm chains can't properly compute getPlane().

Finally, it's worth looking at a camera2d_to_camera2d. I'd really like to add this at some point, so that we don't have to estimate the checkerboard->gripper transform, or so that we could calibrate two 2d cameras together with nothing else involved. This will, however, require encoding the 3d points that are being back-projected into both cameras somewhere in the state estimation of the free parameters. Mixing this block with camera2d_to_chain3d could also produce some wonky results if the chain3d projection does not match the magic state estimation.

improve LED finder accuracy

The LED finder can be improved in a number of ways:

  • we drop samples if the single most-changed pixel has NaN coordinates -- we should do a small search in the region, or even better, fit a circle to the highly-rated points and then compute a centroid from their collective positions, which would help with noise (a rough sketch is after this list).
  • currently, the LED finder only works on the RGB image. There are several ways in which the depth data could be used (or, even better, in which the depth data and RGB data could be better aligned).
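
A rough sketch of the centroid idea (diff here is a per-pixel change score built the same way the finder rates pixels today; any pixel above a threshold contributes, weighted by its score):

  #include <opencv2/core.hpp>

  // Returns the weighted centroid (in pixel coordinates) of all "highly rated" pixels,
  // instead of trusting the single most-changed pixel.
  bool weightedCentroid(const cv::Mat& diff, float threshold, cv::Point2f& centroid)
  {
    double sum_w = 0.0, sum_x = 0.0, sum_y = 0.0;
    for (int row = 0; row < diff.rows; ++row)
    {
      const float* p = diff.ptr<float>(row);
      for (int col = 0; col < diff.cols; ++col)
      {
        if (p[col] > threshold)
        {
          sum_w += p[col];
          sum_x += p[col] * col;
          sum_y += p[col] * row;
        }
      }
    }
    if (sum_w <= 0.0)
      return false;
    centroid = cv::Point2f(static_cast<float>(sum_x / sum_w), static_cast<float>(sum_y / sum_w));
    // The 3d observation would then be looked up (or interpolated) around this
    // pixel in the cloud, skipping NaN points.
    return true;
  }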

Finish improving PlaneToPlaneError

  • Instead of computing planes in the error block, push it down into the models (which can be specialized); a sketch of such a model-level plane fit is after this list.
  • Add support for a residual based on the minimum distance between the resulting planes. It still needs to be determined whether this should be based on the plane models or directly on the projected points (which might make more sense), but this seems like a very important aspect.
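
A sketch of the plane fit such a model-level method could use (centroid plus the smallest eigenvector of the covariance; plain Eigen, nothing robot_calibration-specific):

  #include <Eigen/Dense>
  #include <vector>

  // Fit a plane (unit normal n and offset d, with n.dot(p) + d = 0) to a set of 3d points.
  bool fitPlane(const std::vector<Eigen::Vector3d>& points, Eigen::Vector3d& n, double& d)
  {
    if (points.size() < 3)
      return false;

    Eigen::Vector3d centroid = Eigen::Vector3d::Zero();
    for (const Eigen::Vector3d& p : points)
      centroid += p;
    centroid /= static_cast<double>(points.size());

    Eigen::Matrix3d cov = Eigen::Matrix3d::Zero();
    for (const Eigen::Vector3d& p : points)
    {
      Eigen::Vector3d q = p - centroid;
      cov += q * q.transpose();
    }

    // Eigenvalues come out sorted in increasing order; the eigenvector of the
    // smallest eigenvalue is the plane normal.
    Eigen::SelfAdjointEigenSolver<Eigen::Matrix3d> solver(cov);
    n = solver.eigenvectors().col(0);
    d = -n.dot(centroid);
    return true;
  }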

OpenCV adaptive threshold error

I'm running Ubuntu 18.04 with ROS Melodic, installed from the repos. I've found that when running calibrate, the following error pops up when this line of code in checkerboard_finder.cpp gets run:

  int found = cv::findChessboardCorners(bridge->image, checkerboard_size,
                                        points, CV_CALIB_CB_ADAPTIVE_THRESH);
OpenCV Error: Assertion failed (blockSize % 2 == 1 && blockSize > 1) in adaptiveThreshold, file /build/opencv-L2vuMj/opencv-3.2.0+dfsg/modules/imgproc/src/thresh.cpp, line 1447
terminate called after throwing an instance of 'cv::Exception'
  what():  /build/opencv-L2vuMj/opencv-3.2.0+dfsg/modules/imgproc/src/thresh.cpp:1447: error: (-215) blockSize % 2 == 1 && blockSize > 1 in function adaptiveThreshold

LED finder does not work for any number of LEDs other than 4

Hello Mark,

I am using the robot_calibration package to calibrate my robot, and it is very helpful.
I am using only 1 LED for calibration, so I defined only 1 LED in the .yaml file.

But it was not working. I found the cause and would like to suggest a solution for it below:

  • In the led_finder.cpp file (link), at line number 188, the code is as below:

code_idx = (code_idx + 1) % 8;

So this can be replaced with

code_idx = (code_idx + 1) % codes_.size();

Can you please have a look at it and change it in the main code?

Regards,
Saurabh

Kinetic: Warning in Ground Plane Finder

/home/fergs/kinetic/src/robot_calibration/robot_calibration/src/ground_plane_finder.cpp: In member function ‘virtual bool robot_calibration::GroundPlaneFinder::find(robot_calibration_msgs::CalibrationData*)’:
/home/fergs/kinetic/src/robot_calibration/robot_calibration/src/ground_plane_finder.cpp:150:47: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
   for (size_t i = step; i < cloud_.width && k < points_total; i +=step)

How to use the calibration tag?

I have somehow implemented this calibration package. My goal is to calibrate an ABB robot with the use of a RealSense D415 camera and just one checkerboard. I do get some results: a calibrated URDF and some camera calibration yaml files. My results are probably not that good yet. I am planning to use more features in the future -- at least, if I can get the calibration results working.

My problem here is that I do not know how to make use of the calibration rising tag that is inserted into the updated URDF. Initially, I thought this was a standard URDF element and that it would be taken into account when planning. But there is no difference at all in the movement when I try using the updated URDF. I've looked around but could not find much information regarding this. Can someone point me to where I can find more information about this calibration tag and how it should affect positioning?

In case I have to implement this part myself: do I simply add these offsets to the joint states? I saw something similar in the UR driver here: https://github.com/ros-industrial/universal_robot/blob/hydro-devel/ur_driver/src/ur_driver/driver.py#L341
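
To be concrete, something like the republisher below is what I have in mind (a sketch only -- the sign convention, and whether this is the intended way to consume the rising values, is exactly what I am unsure about):

  #include <map>
  #include <string>
  #include <ros/ros.h>
  #include <sensor_msgs/JointState.h>

  // Add per-joint calibration offsets to incoming joint states and republish them.
  std::map<std::string, double> offsets;  // e.g. filled from the updated URDF / a YAML file
  ros::Publisher pub;

  void callback(const sensor_msgs::JointState& msg)
  {
    sensor_msgs::JointState corrected = msg;
    for (size_t i = 0; i < corrected.name.size(); ++i)
    {
      std::map<std::string, double>::const_iterator it = offsets.find(corrected.name[i]);
      if (it != offsets.end())
        corrected.position[i] += it->second;  // or -=, depending on the driver's convention
    }
    pub.publish(corrected);
  }

  int main(int argc, char** argv)
  {
    ros::init(argc, argv, "joint_state_offsets");
    ros::NodeHandle nh;
    pub = nh.advertise<sensor_msgs::JointState>("joint_states_calibrated", 10);
    ros::Subscriber sub = nh.subscribe("joint_states", 10, callback);
    ros::spin();
    return 0;
  }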

need to parameterize checkerboard starting offset

Currently the checkerboard "world" poses are poorly defined and not easily changed. The orientation of the checkerboard points relative to the kinematic chain tip link is hard-coded for the Fetch configuration. Additionally, there is currently no way to define a starting position, so a number of optimizer cycles are spent simply moving the checkerboard to a reasonably good starting position.

Problem in waitToSettle() function in manual mode

Hello Mike,

There was one more issue, in the waitToSettle() function, when using manual mode to move the arm.

  1. At line 269, in the waitToSettle() function, ros::spinOnce() is called. Because of this spin call, the callback is invoked and the joint state values are updated.
  2. This works without any problem if the arm is moved according to the poses in the calibration_poses.bag file, since after each movement it takes some time for the arm to settle.
  3. So on the first iteration of the while(true) loop at line 236, the settle variable will be false and the loop will not break at line 266.
  4. But if we are moving the arm manually, as in my case, I first move the arm to the desired position and then press enter to capture the pose.
  5. In this case, the arm has already settled when it enters the waitToSettle() function.
  6. So the loop breaks at line 266, and ros::spinOnce() is never called.
  7. So we do not get the joint states.

So here I would like to propose a solution:

  1. We can place ros::spinOnce() before line 266.
  2. Or we can place ros::spinOnce() at the very beginning or very end, so that ros::spinOnce() is called regardless of whether the arm has settled or not.
  3. I tried putting ros::spinOnce() at the beginning and it works fine; a rough sketch of the structure is below.
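
Roughly (heavily simplified; armIsSettled() just stands in for whatever settle check the loop does around line 266):

  // Structural sketch only -- not the actual chain_manager.cpp code.
  void waitToSettle()
  {
    while (true)
    {
      // Proposed: process callbacks first, so joint states are updated even
      // when the arm is already settled and the loop breaks immediately.
      ros::spinOnce();

      if (armIsSettled())  // placeholder for the existing settle check
        break;
    }
  }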

Please let me know if you agree or find any issue with it.

Thanks,
Saurabh

Support multiple sensors

The current system assumes one RGBD sensor and one "world" sensor. Once DepthCameraInfo is updated, CalibrationData should be updated to be an array of messages, each of which is one "sample". Each sample would then have joint_states plus an array of observations (a sketch in .msg syntax follows the lists below).

CalibrationData.msg

  • sensor_msgs/JointState joint_states
  • Observation[] sensor_data

Observation.msg:

  • string name (of sensor used)
  • geometry_msgs/PointStamped[] features
  • ExtendedCameraInfo camera_info
  • sensor_msgs/PointCloud2 cloud
  • sensor_msgs/Image image
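
In .msg syntax the proposal would look roughly like this (field names exactly as listed above):

  # CalibrationData.msg (proposed)
  sensor_msgs/JointState joint_states
  Observation[] sensor_data

  # Observation.msg (proposed)
  string name                          # name of the sensor used
  geometry_msgs/PointStamped[] features
  ExtendedCameraInfo camera_info
  sensor_msgs/PointCloud2 cloud
  sensor_msgs/Image image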

Allow PlaneFinder to find planes in complex frames

Right now, the plane finder assumes that all visible points within the bounding box are part of the plane. This works fine for things like "robot views the ground" -- but if anything else enters the frame, it's problematic. It also doesn't let us handle frames with other items/clutter in them, or do more interesting things like find the face plane of the gripper (which has long been something I've wanted to be able to do).

Proposed improvements to PlaneFinder:

  • Actually have it "find planes" by segmenting the image into N planes, each of which is larger than some minimum_points_per_plane (a new parameter).
  • Provide a new set of parameters that allow you to specify a reference point and normal vector, in an arbitrary frame, that you would like to find the "closest" plane to. If these parameters are not set, the largest plane will be selected (which is basically backwards compatible with today). For something like "find the ground", you could specify a plane in base_link. For gripper calibration, you could specify the plane nearest the center of the gripper, and provide the normal so that we don't pick up the "side" of the gripper. A possible selection rule is sketched after this list.
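
A possible selection rule, assuming the segmentation step has already produced candidate planes with a fitted normal and centroid (all quantities in a common frame; names are illustrative):

  #include <algorithm>
  #include <cmath>
  #include <limits>
  #include <vector>
  #include <Eigen/Dense>

  struct PlaneCandidate
  {
    Eigen::Vector3d normal;    // unit normal of the fitted plane
    Eigen::Vector3d centroid;  // centroid of the points belonging to it
    size_t num_points;
  };

  // Pick the candidate whose orientation matches desired_normal within max_normal_angle
  // and whose centroid is closest to reference_point. Returns -1 if nothing matched.
  int selectPlane(const std::vector<PlaneCandidate>& candidates,
                  const Eigen::Vector3d& reference_point,
                  const Eigen::Vector3d& desired_normal,
                  double max_normal_angle /* radians */)
  {
    int best = -1;
    double best_dist = std::numeric_limits<double>::max();
    for (size_t i = 0; i < candidates.size(); ++i)
    {
      // Fitted normals have an ambiguous sign, so compare via the absolute dot product.
      double angle = std::acos(std::min(1.0, std::fabs(candidates[i].normal.dot(desired_normal))));
      if (angle > max_normal_angle)
        continue;  // orientation too different, e.g. the "side" of the gripper
      double dist = (candidates[i].centroid - reference_point).norm();
      if (dist < best_dist)
      {
        best_dist = dist;
        best = static_cast<int>(i);
      }
    }
    return best;
  }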

Update DepthCameraInfo

Perhaps call it "ExtendedCameraInfo", and switch to a key:value store rather than the hard-coded z_scaling and z_offset, as future sensors will likely have different parameters.
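
One possible shape for such a key:value store (purely illustrative, not the actual message; diagnostic_msgs/KeyValue is just a convenient existing string pair):

  # ExtendedCameraInfo.msg (sketch)
  sensor_msgs/CameraInfo camera_info
  diagnostic_msgs/KeyValue[] parameters   # e.g. key: "z_offset", value: "0.0"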

Remove outliers

As part of calibration:

  • Cycle through all samples to determine the distribution of per-point error.
  • Throw out any samples which are outliers from that distribution (a rough sketch is below).
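
A minimal sketch of that second step (sample_errors holds whatever per-sample scalar the first step produced, e.g. the mean point error for that sample; anything beyond k standard deviations from the mean is dropped):

  #include <cmath>
  #include <vector>

  // Returns the indices of the samples to keep.
  std::vector<size_t> rejectOutliers(const std::vector<double>& sample_errors, double k)
  {
    double mean = 0.0;
    for (double e : sample_errors)
      mean += e;
    mean /= sample_errors.size();

    double var = 0.0;
    for (double e : sample_errors)
      var += (e - mean) * (e - mean);
    double stddev = std::sqrt(var / sample_errors.size());

    std::vector<size_t> keep;
    for (size_t i = 0; i < sample_errors.size(); ++i)
      if (std::fabs(sample_errors[i] - mean) <= k * stddev)
        keep.push_back(i);
    return keep;
  }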

Add a "pose capture" mode

Using a script to capture poses, when a whole pipeline is built to perceive and calibrate, seems out of place.

Not only that, but at least in my case I needed to edit the script to add a line with self.last_state_.features = ['checkerboard_finder', 'ground_plane_finder'] (or whatever my captured features will be), or else it wouldn't work during perception. This might be a bug in itself.

Nonetheless, my suggestion is a --pose-capture BAG mode, which would be very similar to manual mode but would not enable perception (since the person is likely moving the arm by hand and is in the FOV of the camera). In this mode a bag would be saved with /joint_states.

This mode would enable a feature which I describe in another issue and will cross-ref to this one.

How do we support 2d camera to 2d camera calibration?

This is an extension of #41, but is quite a bit more involved. It will, however, require encoding the 3d points that are being back-projected into both cameras somewhere in the state estimation of the free parameters. Mixing this block with camera2d_to_chain3d could also produce some wonky results if the chain3d projection does not match the magic state estimation.

Optimization not happening on Kinetic

I've been preparing the kinetic-devel branch mainly on my indigo/14.04 machine (ceres 1.8.0) -- it works fine. I've been adding additional tests to help validate new changes -- in particular, I've added these lines:

EXPECT_LT(opt.summary()->final_cost, 1e-26); // Actual is about 1.43174e-27
EXPECT_LT(opt.summary()->final_cost, opt.summary()->initial_cost);
EXPECT_GT(opt.summary()->iterations.size(), 10); // expect more than 10 iterations
These work fine under ceres 1.8.0, but fail under 1.12.0 (which is the version in 16.04).

From the buildfarm log:

/tmp/catkin_workspace/src/robot_calibration/robot_calibration/test/error_block_tests.cpp:288
Expected: (opt.summary()->final_cost) < (1e-26), actual: 1.67796e-25 vs 1e-26

/tmp/catkin_workspace/src/robot_calibration/robot_calibration/test/error_block_tests.cpp:289
Expected: (opt.summary()->final_cost) < (opt.summary()->initial_cost), actual: 1.67796e-25 vs 1.67796e-25

/tmp/catkin_workspace/src/robot_calibration/robot_calibration/test/error_block_tests.cpp:290
Expected: (opt.summary()->iterations.size()) > (10), actual: 1 vs 10

With ceres 1.8.0, I get:

initial_cost: 1.67796e-25
final_cost: 1.43174e-27
iterations: 17

catkin_make error

When I run catkin_make, there is an error:
make[2]: *** No rule to make target '/usr/lib/x86_64-linux-gnu/libspqr.so', needed by '/home/james/blockly_ws/devel/lib/librobot_calibration.so'. Stop. make[1]: *** [robot_calibration-indigo-devel/robot_calibration/CMakeFiles/robot_calibration.dir/all] Error 2 make: *** [all] Error 2 Invoking "make -j4 -l4" failed
I only have libspqr.so.1.3.1 in /usr/lib/x86_64-linux-gnu.

Callback queues are not properly processed

The problem has manifested itself in the form of bagfiles that have incorrect joint_states -- the messages appear not to be processed. This is especially true with the checkerboard detector, where many messages appear to get dropped and not processed. c7bd551 is a temporary hack to add an extra callback processing thread, but it may cause other issues.

Use of Joint States in Led Finder

Hello Mark,

I have seen in the code that joint states are being captured along with the sensor observations.
Can you please let me know whether the joint states are used at all when the LED finder is used for calibration.

The reason I am asking this question is that I was trying to calibrate my arm, and during optimisation I am getting the INITIAL_COST output below:
[ INFO] [1548670250.287325912]: Done capturing samples
[ INFO] [1548670250.458293943]: Creating chain 'arm' from base_link to link9
[ INFO] [1548670250.458999058]: Creating chain 'camera' from base_link to camera_depth_optical_frame
[ INFO] [1548670250.459494857]: optimize: number of offsets: 18
[ INFO] [1548670250.460143902]: Diff. X: -0.014663, Y: -0.008728, Z: -0.013612
INITIAL COST (0)
x: -0.014663
y: -0.008728
z: -0.013612

[ INFO] [1548670255.473269566]: Diff. X: -0.010441, Y: -0.030350, Z: -0.009042
INITIAL COST (1)
x: -0.010441
y: -0.030350
z: -0.009042

[ INFO] [1548670256.158610420]: Diff. X: -0.018447, Y: -0.007326, Z: -0.012469
INITIAL COST (2)
x: -0.018447
y: -0.007326
z: -0.012469

[ INFO] [1548670256.755370759]: Diff. X: -0.011827, Y: -0.028897, Z: -0.006128
INITIAL COST (3)
x: -0.011827
y: -0.028897
z: -0.006128

[ INFO] [1548670257.349901199]: Diff. X: -0.016052, Y: -0.000944, Z: -0.014128
INITIAL COST (4)
x: -0.016052
y: -0.000944
z: -0.014128

[ INFO] [1548670268.770670017]: Diff. X: -0.015684, Y: -0.027633, Z: -0.005686
INITIAL COST (5)
x: -0.015684
y: -0.027633
z: -0.005686

[ INFO] [1548670270.316994017]: Diff. X: -0.017168, Y: -0.006394, Z: -0.012698
INITIAL COST (6)
x: -0.017168
y: -0.006394
z: -0.012698

[ INFO] [1548670270.805101483]: Diff. X: -0.009859, Y: -0.029202, Z: -0.009290
INITIAL COST (7)
x: -0.009859
y: -0.029202
z: -0.009290

[ INFO] [1548670271.236804588]: Diff. X: -0.014154, Y: -0.008500, Z: -0.014610
INITIAL COST (8)
x: -0.014154
y: -0.008500
z: -0.014610

[ INFO] [1548670271.692414772]: Diff. X: -0.012408, Y: -0.029938, Z: -0.006983
INITIAL COST (9)
x: -0.012408
y: -0.029938
z: -0.006983

[ INFO] [1548670272.160246525]: Diff. X: -0.018766, Y: -0.012255, Z: -0.012287
INITIAL COST (10)
x: -0.018766
y: -0.012255
z: -0.012287

[ INFO] [1548670272.694415553]: Diff. X: -0.011520, Y: -0.028422, Z: -0.010442
INITIAL COST (11)
x: -0.011520
y: -0.028422
z: -0.010442

[ INFO] [1548670273.410029176]: Diff. X: -0.020646, Y: -0.022069, Z: -0.005931
INITIAL COST (12)
x: -0.020646
y: -0.022069
z: -0.005931

[ INFO] [1548670273.905140344]: Diff. X: -0.012245, Y: -0.025323, Z: -0.011808
INITIAL COST (13)
x: -0.012245
y: -0.025323
z: -0.011808

[ INFO] [1548670275.668234342]: Diff. X: -0.023359, Y: -0.017796, Z: -0.004427
INITIAL COST (14)
x: -0.023359
y: -0.017796
z: -0.004427

The INITIAL_COST values are quite low, but I am getting huge variations in my generated URDF.
Even though I checked the joint states, they seem to be correct.

Can you please give me a hint as to why I am getting such huge variations in the generated URDF?

Regards,
Saurabh

ASSERTION FAILED during chain_manager_tests

I'm running chain_manager_tests.py after manually loading the yaml files from the tests (after encountering the same issue with my own yaml files), and I get:

[FATAL] [1583526733.187227715]: ASSERTION FAILED
	file = /home/joshua/catkin_ws/src/robot_calibration/robot_calibration/src/chain_manager.cpp
	line = 37
	cond = chains.getType() == XmlRpc::XmlRpcValue::TypeArray

Any guidance would be helpful,
Thanks

Give feedback on joint space exploration

Related issue: #54, as that is where this would make the most sense, but it can also apply to automatic and from-bag modes.

Since we have access to the URDF, we have knowledge of the joint limits. During a "pose capture" mode (#54), or even during calibration, feedback could be given as to how much of each joint's available range has already been explored. You mentioned this as being critical in #17.

Ideas for feedback:

  • Simple percentage of explored space (a crude sketch is after this list)
  • A plot (histogram, violin) that shows the distribution of saved positions over the joint space.
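
A crude version of the percentage idea (bin each joint's recorded positions between its URDF limits and count how many bins have been visited):

  #include <algorithm>
  #include <vector>

  // Fraction of a joint's range covered by the captured positions, using num_bins buckets.
  double jointCoverage(const std::vector<double>& positions,
                       double lower_limit, double upper_limit, int num_bins)
  {
    std::vector<bool> visited(num_bins, false);
    for (double p : positions)
    {
      double t = (p - lower_limit) / (upper_limit - lower_limit);
      if (t < 0.0 || t > 1.0)
        continue;  // outside the limits; skip rather than crash
      int bin = std::min(num_bins - 1, static_cast<int>(t * num_bins));
      visited[bin] = true;
    }
    int count = 0;
    for (bool v : visited)
      if (v)
        ++count;
    return static_cast<double>(count) / num_bins;
  }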

Workflow for Wizard

I've been thinking about how to create a wizard for this, so that people don't have to create/update lots of YAML files manually (which is also error-prone). In the meantime, this ticket serves as a brief HOW-TO for manually creating the files.

  • Load and parse the URDF (from parameter server?)
  • User selects the "base_link" which is the frame into which all points will be projected during calibration step.
  • User defines the "chains" by:
    • Entering a name for each
    • Selecting set of active joints that constitute the chain
    • Selecting the follow_joint_trajectory action namespace (introspect options from running system?)
    • Selecting the planning_group to use with MoveIt (if any)
  • User defines the models by adding a model of a given type and then configuring the following:
    • For "chain" - select end effector frame.
    • For "camera3d" - select camera frame, select topic (introspect options from running system?)
  • User defines the finders by adding a finder of a given type and then configuring the following:
    • PlaneFinder
      • Select "camera_sensor_name" from available "models"
      • Select "topic" to subscribe to
      • Select "transform_frame" (defaults to base_link selected above)
      • Set min/max x/y/z
      • Set "points_max"
      • Select true/false for "debug"
    • CheckerboardFinder
      • Select "camera_sensor_name" from available "models"
      • Select "chain_sensor_name" from available "models"
      • Select "topic" to subscribe to
      • Set points_x, points_y, size
      • Select true/false for "debug"
    • LedFinder - probably not worth adding to wizard (just preserve if it exists)
  • User defines error_blocks by selecting/adding by type first:
    • chain3d_to_chain3d - Select model_a, model_b from models.
    • chain3d_to_plane - Select model_a from models. Set a, b, c, d and scale.
    • plane_to_plane - Select model_a, model_b from models. Set scale_normal, scale_offset.
    • outrageous - Select param from list of possible free params. Set joint_scale, position_scale, rotation_scale.
  • User selects free params through a series of checkboxes
    • Available names are each joint name, plus fx/fy/cx/cy/z_offset/z_scaling for each camera
  • User selects free frames through a series of checkboxes
    • For each free frame, select x/y/z/r/p/y. Also do check that 0, 1, or 3 of r/p/y are set (setting two to true doesn't do what you think -- these are axis-magnitude internally).
  • Finally have user manually move arm while both capturing data for the first calibration and creating the bag file for export.

As with the MoveIt wizard, we would want to be able to reload the YAML files exported and edit them.

Showing INITIAL_COSTS as nan while Optimizing

Hello Mike,
In the process of calibrating my robot, I am facing one more problem.
The scenario is as below:

  1. There was a problem using led_finder with a stereo camera (it is not able to measure depth), so we created one more finder, which we call mark_finder (concentric circles with different colors). So as a whole, we are using our own finder, which seems to be working fine.
  2. I am using the manual method to move the arm to different locations.
  3. I am not using it for camera calibration, so I assume depth_camera_info is not needed. Please correct me if I am wrong.
  4. So from the finder I am providing only 2 points, i.e. rgbd_pt (the position of the mark w.r.t. the depth camera frame) and world_pt (the position of the mark w.r.t. the gripper frame).
  5. For now, for simplicity, I am using only 1 free_frame and 1 free_param.

But while optimising, I am getting the logs below:

[ INFO] [1530684518.983065179]: Done capturing samples
[ INFO] [1530684519.215928914]: Creating chain 'arm' from base_link to target
[ INFO] [1530684519.216580523]: Creating camera3d 'camera' in frame camera_depth_optical_frame
[ INFO] [1530684519.217042632]: optimize: number of offsets: 7
INITIAL COST (0)
x: -nan
y: -nan
z: -nan

INITIAL COST (1)
x: -nan
y: -nan
z: -nan

INITIAL COST (2)
x: -nan
y: -nan
z: -nan

INITIAL COST (3)
x: -nan
y: -nan
z: -nan

INITIAL COST (4)
x: -nan
y: -nan
z: -nan

INITIAL COST (5)
x: -nan
y: -nan
z: -nan

INITIAL COST (6)
x: -nan
y: -nan
z: -nan

INITIAL COST (7)
x: -nan
y: -nan
z: -nan

INITIAL COST (8)
x: -nan
y: -nan
z: -nan

INITIAL COST (9)
x: -nan
y: -nan
z: -nan

INITIAL COST (10)
x: -nan
y: -nan
z: -nan

INITIAL COST (11)
x: -nan
y: -nan
z: -nan

Solver output:
WARNING: Logging before InitGoogleLogging() is written to STDERR
W0704 11:38:39.255892 10933 residual_block.cc:131]

Error in evaluating the ResidualBlock.

There are two possible reasons. Either the CostFunction did not evaluate and fill all
residual and jacobians that were requested or there was a non-finite value (nan/infinite)
generated during the or jacobian computation.

Residual Block size: 1 parameter blocks x 3 residuals

For each parameter block, the value of the parameters are printed in the first column
and the value of the jacobian under the corresponding residual. If a ParameterBlock was
held constant then the corresponding jacobian is printed as 'Not Computed'. If an entry
of the Jacobian/residual array was requested but was not written to by user code, it is
indicated by 'Uninitialized'. This is an error. Residuals or Jacobian values evaluating
to Inf or NaN is also an error.

Residuals: -nan -nan -nan

Parameter Block 0, size: 7

       0 |         -nan         -nan         -nan 
       0 |         -nan         -nan         -nan 
       0 |         -nan         -nan         -nan 
       0 |         -nan         -nan         -nan 
       0 |         -nan         -nan         -nan 
       0 |         -nan         -nan         -nan 
       0 |         -nan         -nan         -nan 

E0704 11:38:39.256040 10933 trust_region_minimizer.cc:73] Terminating: Residual and Jacobian evaluation failed.

Ceres Solver Report: Iterations: 0, Initial cost: -1.000000e+00, Final cost: -1.000000e+00, Termination: FAILURE
Parameter Offsets:
joint5: 0
joint6_x: 0
joint6_y: 0
joint6_z: 0
joint6_a: 0
joint6_b: 0
joint6_c: -1.49012e-08

[ INFO] [1530684519.258365438]: Done calibrating
================================================================================REQUIRED process [robot_calibration-1] has died!
process has finished cleanly
log file: /home/saurabh/.ros/log/bdc8d9aa-7f4f-11e8-a77c-a86bad217c79/robot_calibration-1*.log
Initiating shutdown!
================================================================================
[calibration_bagger-2] killing on exit
[robot_calibration-1] killing on exit
shutting down processing monitor...
... shutting down processing monitor complete
done

I think the nan values in INITIAL_COSTS are the problem, and the reason seems to be that the cost function is not being evaluated.
Can you please give some input on the possible reason for this error?

Please find the calibrate.yaml, capture.yaml, and calibration_data.bag files attached.
Also please let me know if you need some more information.
config.zip

Saurabh

Noetic Issues

  • orocos_kdl is now liborocos-kdl(-dev) for rosdep key in package.xml - Fixed in #87
  • Build errors for led finder - fixed in #88
/home/ubr/noetic/src/robot_calibration/robot_calibration/src/finders/checkerboard_finder.cpp: In member function ‘bool robot_calibration::CheckerboardFinder::findInternal(robot_calibration_msgs::CalibrationData*)’:
/home/ubr/noetic/src/robot_calibration/robot_calibration/src/finders/checkerboard_finder.cpp:177:49: error: ‘CV_CALIB_CB_ADAPTIVE_THRESH’ was not declared in this scope
  177 |                                         points, CV_CALIB_CB_ADAPTIVE_THRESH);
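
For what it's worth, in OpenCV 4 the old C macro is gone and the flag is the namespaced constant, so the call would read as follows (presumably what #88 does, though I have not checked the actual patch):

  int found = cv::findChessboardCorners(bridge->image, checkerboard_size,
                                        points, cv::CALIB_CB_ADAPTIVE_THRESH);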

Is it possible to capture multiple features without capturing all features?

I am trying to use this package to calibrate a 6-DOF robot arm. I have made different configurations and had some time to play around with this package. I got some results using a fixed depth camera, a fixed checkerboard, and a blinking LED mounted on the robot arm (I implemented the blinking LED finder myself and am not yet sure whether the observations I add are correct). See this issue for more background.

Currently, I am trying to use multiple checkerboards. I have three checkerboards with different sizes and a depth camera mounted on the robot arm. The checkerboards are fixed: one to a wall on the left of the robot, one to the right, and one on the ground. The problem is that I cannot get any results. This is probably because, for each pose, all features need to be captured, as seen here. Simply removing the break and continue will most probably result in what I want: the program continuing to capture other features even when one or more features are not found, with the joint states always being added. My question is, will this be handled correctly by the optimizer? If not, what should I modify to enable this functionality?

I assume that I will have to add some things in the capture phase; that said, I do not know where to start.
