
apriltags_ros's People

Contributors

23pointsnorth, brentyi, kyesh, mikaelarguedas, mitchellwills, nicolacastaman, velinddimitrov

apriltags_ros's Issues

Only recognizes h11 tags

I think, based on the code, that the current release only recognizes 36h11 tags; it definitely would not recognize 16h5 tags. I'm using ROS Indigo with release 0.1.1 of apriltags_ros.

AprilTag 2: Is someone looking into the C++/ROS wrapping?

Hi all,

I don't know if this is the right place to ask something like this.

I am in contact with the main people behind AprilTags, and they strongly suggested that I migrate to the new AprilTag 2, which is described here.

I was wondering if any of you have already started looking into it, and especially into how to wrap it in C++/ROS.

Thanks for your time,
Fabrizio.

Center and Perimeter in Pixels Would be Handy

The information currently available in the ROS message doesn't allow combining AprilTag recognition with, e.g., color-blob tracking or pedestrian detection, so a robot can't determine where non-tagged things are in the image relative to tracked things.

I'm attempting to use the tags on top of small mobile robots in combination with tape markings on the floor to serve as virtual walls.

I would fork and add it myself, but there already seem to be a lot of forks of AprilTag detection floating around, and this one is clearly maintained and functional.
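One way to expose this would be to extend the detection message with the tag's pixel-space geometry. A hypothetical sketch, assuming the existing fields are roughly id/size/pose; the added field names are illustrative, not part of the current AprilTagDetection.msg:

```text
# AprilTagDetection.msg (hypothetical extension)
int32 id
float64 size
geometry_msgs/PoseStamped pose
# proposed additions:
geometry_msgs/Point32 center      # tag center in pixel coordinates
geometry_msgs/Point32[4] corners  # tag corner pixels, in detection order
```

The detector already computes the corners internally for the homography, so publishing them would not add measurable cost.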

Transformation to display in rviz

Hello, I'm really new to all the ROS and AprilTag stuff, so please excuse me if I've got a dumb question.

I think tf could solve my problem, but I don't really get how to apply it to the specific topics from apriltags.

I made a world in Gazebo with a camera and a few markers. I can easily display them in rviz as well, with the gazebo2rviz package. With the apriltags package, I can also display the marker positions relative to the camera.

What I want to do is transform the relative positions from tag_detections into the rest of the world published by gazebo2rviz. gazebo2rviz publishes the position of the camera, and it is named nearly the same as the camera in tag_detections; the only difference is that tag_detections uses the real name, while gazebo2rviz uses underscores instead of slashes.

It would be great if someone could help me solve this issue.

Does not detect

I have a calibrated camera that works properly with rosrun image_view. I show the camera the right kind of AprilTags; there are no errors or warnings, but no detections are reported:

detections : []

Not sure where the issue is. What would be the best steps to debug from here?

Process dies while subscribing to the /tag_detection topic

Dear Sir,

I am subscribing to the tag_detection topic, from which I need the tag ID. I get the tag ID when an AprilTag is in front of the camera, but when there is no tag in front of the camera the process dies.

I don't want the process to be killed. Please provide a solution.

topic as input

Hello everyone, I'm new to this project, so please excuse me if I got anything wrong.
I want to use the AprilTags detector not with a USB camera, but with a simulated camera in Gazebo, and later with a camera attached to another system, publishing topics over ROS.
The published images are of type sensor_msgs/Image, published on /camera/duo3d_left/image_raw (/compressed).
How do I set up this node to use these images? And do I need to do a camera calibration with a simulated camera as well?
Thanks in advance
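A remap-based launch sketch should work here, reusing the node and remap names from apriltags_ros's example.launch shown elsewhere in this tracker. The camera_info topic name alongside your image topic is my guess; check it with rostopic list:

```xml
<launch>
  <node pkg="apriltags_ros" type="apriltag_detector_node" name="apriltag_detector" output="screen">
    <!-- Point the detector at the simulated camera's topics.
         The camera_info topic name here is an assumption. -->
    <remap from="image_rect" to="/camera/duo3d_left/image_raw" />
    <remap from="camera_info" to="/camera/duo3d_left/camera_info" />
    <param name="tag_family" type="str" value="36h11" />
    <rosparam param="tag_descriptions">[{id: 0, size: 0.163513}]</rosparam>
  </node>
</launch>
```

As for calibration: a simulated Gazebo camera normally does not need one, since the camera plugin publishes its (ideal) intrinsics on the camera_info topic itself.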

Process has died, Unhandled exception

Hi,

I am trying to use the apriltags_ros package for a robotics project, and I keep getting an error when running the example.launch file:

`$ roslaunch apriltags_ros example.launch
... logging to /home/hayward_transbot/.ros/log/98ff82a4-72cb-11e7-a9f5-506583e696c6/roslaunch-arm-14974.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.

started roslaunch server http://192.168.137.5:39794/

SUMMARY

PARAMETERS

  • /apriltag_detector/image_transport: compressed
  • /apriltag_detector/projected_optics: True
  • /apriltag_detector/tag_descriptions: [{'id': 0, 'size'...
  • /apriltag_detector/tag_family: 36h11
  • /rosdistro: kinetic
  • /rosversion: 1.12.7

NODES
/
apriltag_detector (apriltags_ros/apriltag_detector_node)

ROS_MASTER_URI=http://192.168.137.5:11311/

core service [/rosout] found
process[apriltag_detector-1]: started with pid [14992]
[ INFO] [1501167511.554384042]: Loaded tag config: 0, size: 0.163513, frame_name: tag_0
[ INFO] [1501167511.567628783]: Loaded tag config: 1, size: 0.163513, frame_name: a_frame
[ INFO] [1501167511.574866464]: Loaded tag config: 2, size: 0.163513, frame_name: tag_2
[ INFO] [1501167511.582110436]: Loaded tag config: 3, size: 0.163513, frame_name: tag_3
[ INFO] [1501167511.590937514]: Loaded tag config: 4, size: 0.163513, frame_name: tag_4
[ INFO] [1501167511.597694922]: Loaded tag config: 5, size: 0.163513, frame_name: tag_5
[apriltag_detector-1] process has died [pid 14992, exit code -11, cmd /home/hayward_transbot/catkin_ws/devel/lib/apriltags_ros/apriltag_detector_node image_rect:=/webcam/image camera_info:=/webcam/camera_info __name:=apriltag_detector __log:=/home/hayward_transbot/.ros/log/98ff82a4-72cb-11e7-a9f5-506583e696c6/apriltag_detector-1.log].
log file: /home/hayward_transbot/.ros/log/98ff82a4-72cb-11e7-a9f5-506583e696c6/apriltag_detector-1*.log
all processes on machine have died, roslaunch will exit
shutting down processing monitor...
... shutting down processing monitor complete
done
Unhandled exception in thread started by
sys.excepthook is missing
lost sys.stderr
$ `

Does anyone else have this same problem? I need help debugging this, if there's anything else I can provide to help please let me know.

Thanks,
Tri

I can't get "/tag_detections_pose" rate

I ran "rostopic echo /tag_detections_pose", but the coordinates are all zero:

header:
  seq: 1519
  stamp:
    secs: 1558535045
    nsecs: 635062520
  frame_id: "camera"
poses:
  -
    position:
      x: 0.0
      y: 0.0
      z: 0.0
    orientation:
      x: 0.0
      y: 0.0
      z: 0.0
      w: 1.0

Please help me.

how to run this package with usb camera

I can run this package and usb_cam separately, and the image topic is published on /usb_cam/image_raw. How do I edit the launch file so that this package gets the image from the USB camera?
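A minimal sketch of the needed remaps, assuming the node and remap names from the example.launch shown elsewhere in this tracker and usb_cam's default topic names:

```xml
<launch>
  <node pkg="apriltags_ros" type="apriltag_detector_node" name="apriltag_detector" output="screen">
    <!-- Feed the detector from usb_cam's default topics -->
    <remap from="image_rect" to="/usb_cam/image_raw" />
    <remap from="camera_info" to="/usb_cam/camera_info" />
    <param name="tag_family" type="str" value="36h11" />
    <rosparam param="tag_descriptions">[{id: 0, size: 0.163513}]</rosparam>
  </node>
</launch>
```

Note that image_rect nominally expects a rectified image; feeding it image_raw is tolerable when lens distortion is small, but otherwise run image_proc and remap to its image_rect output instead.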

Installing and using

Hey guys...
Would you have guidelines on how to install this on ROS? I haven't been able to install this package by any means. I am using ROS Indigo on Ubuntu 14.04.

Transform poses in world frame to pixel coordinates

Hello. I'm working on a mixed reality project with apriltags and I need to transform (x,y,z) poses to (u,v) pixel coordinates.

I've already tried using the P matrix from the camera_info topic, but the resulting pixel coordinates do not make sense.
Can anybody help me?

Thanks in advance.
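For reference, projection with the 3x4 P matrix is one matrix multiply followed by a perspective divide. A common reason the numbers come out wrong is applying P to a pose that is not expressed in the camera's optical frame (z forward, x right, y down). A self-contained sketch, not part of apriltags_ros:

```cpp
#include <array>

// Project a 3D point, given in the camera's optical frame, to pixel
// coordinates using the row-major 3x4 projection matrix P from
// sensor_msgs/CameraInfo. Returns {u, v}.
std::array<double, 2> projectToPixel(const std::array<double, 12>& P,
                                     double x, double y, double z) {
    double u = P[0] * x + P[1] * y + P[2]  * z + P[3];
    double v = P[4] * x + P[5] * y + P[6]  * z + P[7];
    double w = P[8] * x + P[9] * y + P[10] * z + P[11];
    return { u / w, v / w };  // perspective divide
}
```

If your (x, y, z) lives in a world or robot frame, transform it into the optical frame with tf first; z must also be positive (the point must be in front of the camera), or the divide produces meaningless coordinates.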

Low publishing rate

I am using this package to identify some AprilTags in a Gazebo simulation. The simulation publishes the camera image at 15 Hz with an image size of 640 x 480.

The issue is that I am getting a 2 Hz publishing rate from tag_detections, which is too low for my application.

I am running the Gazebo simulation and apriltags_ros on a clean Ubuntu 14.04 install on a Core i7 laptop with 8 GB of RAM. Is that publishing rate normal? Does anyone know how to increase it?

Camera Info Topic not Synchronised

I was having a few issues with the image transport when using a Baxter robot, as the info topic and image topic are not synchronised. To get around this, I have swapped the image transport for two separate subscribers and a publisher. If this issue is affecting other users, you can find my fix here.

image_pipeline: issue with the decimation

Hi everyone,

I am trying to use AprilTags with a Flea3 USB camera by Point Grey. The camera's resolution is too high, so I need to reduce it via the image_pipeline stack of ROS. I can't reduce it through the official FlyCapture SDK (it seems to be a known problem with the SDK).

Anyway, I use image_pipeline to decimate the image via the following launch file:

<launch>
	<arg name="qcID" default="5"/>
	<arg name="bond" value="" />

	<node pkg="nodelet" type="nodelet" name="decimator_$(arg qcID)"
	      args="load image_proc/crop_decimate camera_manager_$(arg qcID)">
	  <param name="decimation_x" type="int" value="4" />
	  <param name="decimation_y" type="int" value="4" />

	  <remap from="/camera_$(arg qcID)/image_raw" to="camera/image_raw"/>
	  <remap from="camera/image_info" to="/camera_$(arg qcID)/camera_info"/>

	  <remap from="camera_out/image_raw" to="/camera_$(arg qcID)_dec/image_raw"/>
	  <remap from="camera_out/image_info" to="/camera_$(arg qcID)_dec/camera_info"/>
	</node>

	<node name="drop_cam_$(arg qcID)" pkg="topic_tools" type="drop"
		args="/camera_$(arg qcID)_dec/image_raw/compressed 1 15 /camera_$(arg qcID)_drop/image_raw/compressed" />
	<node name="drop_info_$(arg qcID)" pkg="topic_tools" type="drop"
		args="/camera_$(arg qcID)_dec/camera_info 1 15 /camera_$(arg qcID)_drop/camera_info" />
</launch>

After doing this, the pose estimated from the AprilTag is wrong: I get a z that is 4 times the real one, and the same goes for x and y.

I can work around this by dividing the marker size by 4 in the example.launch file that I use to launch the AprilTag ROS node, which looks like this:

<launch>
  <node pkg="apriltags_ros" type="apriltag_detector_node" name="apriltag_detector" output="screen">
    <!-- Remap topic required by the node to custom topics -->
    <remap from="image_rect" to="/camera_5_drop/image_raw" /> 
    <remap from="camera_info" to="/camera_5_drop/camera_5_info" />

    <!-- Optional: Subscribe to the compressed stream-->
    <param name="image_transport" type="str" value="compressed" />

    <!-- Select the tag family: 16h5, 25h7, 25h9, 36h9, or 36h11(default) -->
    <param name="tag_family" type="str" value="36h11" />

    <!-- Enable projected optical measurements for more accurate tag transformations -->
    <!-- This exists for backwards compatability and should be left true for new setups -->
    <param name="projected_optics" type="bool" value="true" />

    <!-- Describe the tags -->
    <rosparam param="tag_descriptions">[
      {id: 0, size: 0.0215},
      {id: 1, size: 0.086, frame_id: a_frame},
      {id: 2, size: 0.167,frame_id: a_frame},
      {id: 3, size: 0.167},
      {id: 4, size: 0.167},
      {id: 5, size: 0.167}]
    </rosparam>
  </node>
</launch>

I don't think this is a solution, just a workaround. Could you please help me understand it?

I think the problem lies in the binning parameters sent on the camera_info topic. If the apriltags_ros node does not support the camera_info binning parameters, this will certainly not work.

If you look at the camera_info for the original image, I have:

rostopic echo -n 1 /camera/camera_info 
header: 
  seq: 29157
  stamp: 
    secs: 1478102548
    nsecs: 328822432
  frame_id: camera
height: 1552
width: 2080
distortion_model: plumb_bob
D: [-0.001312, 0.003408, -0.004684, 0.014525, 0.0]
K: [687.216761, 0.0, 1111.575057, 0.0, 673.787664, 747.109306, 0.0, 0.0, 1.0]
R: [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]
P: [706.72345, 0.0, 1172.617419, 0.0, 0.0, 713.655701, 732.345523, 0.0, 0.0, 0.0, 1.0, 0.0]
binning_x: 1
binning_y: 1
roi: 
  x_offset: 0
  y_offset: 0
  height: 0
  width: 0
  do_rectify: True

---

While for the decimated image it is:

rostopic echo -n 1 /camera_5_dec/camera_info 
header: 
  seq: 29025
  stamp: 
    secs: 1478102543
    nsecs: 931340616
  frame_id: camera
height: 1552
width: 2080
distortion_model: plumb_bob
D: [-0.001312, 0.003408, -0.004684, 0.014525, 0.0]
K: [687.216761, 0.0, 1111.575057, 0.0, 673.787664, 747.109306, 0.0, 0.0, 1.0]
R: [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]
P: [706.72345, 0.0, 1172.617419, 0.0, 0.0, 713.655701, 732.345523, 0.0, 0.0, 0.0, 1.0, 0.0]
binning_x: 4
binning_y: 4
roi: 
  x_offset: 0
  y_offset: 0
  height: 1552
  width: 2080
  do_rectify: True

---

Thanks in advance.
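That reading of the problem matches the sensor_msgs/CameraInfo specification: K and P describe the full-resolution image, and binning_x/binning_y record the downsampling actually applied, so a consumer working on the binned image must divide the focal lengths and principal point by the binning factors. If apriltags_ros ignores those fields, a fix could look like this sketch (the struct and helper name are mine, not from the package):

```cpp
#include <cstdint>

// Pinhole intrinsics as read out of CameraInfo K or P.
struct Intrinsics { double fx, fy, px, py; };

// Per the sensor_msgs/CameraInfo spec, K and P refer to the full-resolution
// image. When the published image is binned/decimated, divide focal lengths
// and principal point by the binning factors before pose estimation.
// A binning value of 0 means "no binning" and is treated as 1.
Intrinsics adjustForBinning(Intrinsics full, uint32_t binning_x, uint32_t binning_y) {
    if (binning_x == 0) binning_x = 1;
    if (binning_y == 0) binning_y = 1;
    return { full.fx / binning_x, full.fy / binning_y,
             full.px / binning_x, full.py / binning_y };
}
```

With binning_x = binning_y = 4, this divides fx, fy, px, and py by 4, which is exactly the factor-of-4 scale error you are compensating for by shrinking the marker size.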

Returns Empty Pose on Odroid

I am using an Odroid XU-4 with Ubuntu 14.04 and ROS Indigo. I had apriltags_ros working on my laptop (also running Ubuntu 14.04 and ROS Indigo), and I was getting tag detections with correct pose outputs and tf transforms, as expected. However, when I ran the same code on the Odroid, it no longer returned correct pose estimates; instead it output a pose of all zeros (except w, which was 1). No errors were thrown, the tag_detections image was being published with the tag outlined, and the pose message was only published when the tag was in view. It also correctly identified the tag's ID number, so a large portion of the code is still running, but the 3D matrix transformations are just outputting zeros. Does anyone have an idea why this could be happening? Am I missing a dependency? Is there any reason part of the code wouldn't work on an Odroid when it is fine on a laptop?

Synchronization problem between image and camera info using apriltags

Hi there,

I am trying to use the apriltags_ros package with a Kinect v2 and keep getting the same warning saying the image and camera info are not synchronized. I'm using libfreenect2 and ROS Kinetic.

[ WARN] [1516297878.437969359]: [image_transport] Topics '/kinect2/hd/image_color' and '/kinect2/hd/camera_info' do not appear to be synchronized. In the last 10s:
Image messages received: 189
CameraInfo messages received: 189
Synchronized pairs: 0

Having a look at apriltag_detector.cpp, it is not entering the image callback. I think that's at least the immediate problem, but I don't understand why. I could use some help, thank you.

Error quantification

Hi,
I was wondering if you have any means of estimating the error of some pose estimation.

What I mean is: given that aprilTags gave me a full pose for a tag, how much should I trust this estimation? In the aprilTags paper (Olson, E., 2011), there is a result obtained in simulation that shows that the error varies with the 'off-axis angle'. This is cool, but I would need to collect my own data to get a covariance matrix that depends on the off-axis angle, perspective and distance. I think this is somewhat impractical.

Alternatively, section 3-C says that the rotation matrix is calculated such that it minimizes the Frobenius norm of the error. Therefore, if we compute this norm after estimating, we should get something that translates into some confidence measure, right?

I was wondering what you guys think about this. Would you suggest anything different? What do you think about adding this norm as an output topic of the algorithm?
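One cheap, related residual, sketched here as a hypothetical helper: if you can access the estimated matrix before it is projected onto the rotation group, its Frobenius distance from orthonormality indicates how well-conditioned the fit was. This is not the exact norm minimized in the paper's section 3-C, only a proxy with the same flavor:

```cpp
#include <cmath>

// Frobenius norm of (R^T R - I): zero for a true rotation matrix, and it
// grows as the estimated matrix deviates from orthonormality. Hypothetical
// confidence proxy, not an existing apriltags_ros output.
double orthogonalityResidual(const double R[3][3]) {
    double sum = 0.0;
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j) {
            double rtr = 0.0;  // entry (i, j) of R^T * R
            for (int k = 0; k < 3; ++k)
                rtr += R[k][i] * R[k][j];
            const double target = (i == j) ? 1.0 : 0.0;
            sum += (rtr - target) * (rtr - target);
        }
    return std::sqrt(sum);
}
```

Publishing a scalar like this per detection would cost almost nothing and give downstream filters something to gate on, though a calibrated covariance (as a function of off-axis angle and distance) would remain the more principled option.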

How to use this package?

When I rosrun this node, it warns: "No apriltags specified."
So how can I load the picture that I want to detect?

solvePnP uses camera focal length, not projection focal length

I noticed that the AprilTag detector uses the camera intrinsic focal lengths and principal point, rather than the projection matrix, in solvePnP:
https://github.com/RIVeR-Lab/apriltags_ros/blob/indigo-devel/apriltags_ros/src/apriltag_detector.cpp#L83-L84
https://github.com/RIVeR-Lab/apriltags_ros/blob/indigo-devel/apriltags/src/TagDetection.cc#L95-L98

Is this correct? We're using rectified images here (hence the zeroed distortion coefficients), so it seems the projection matrix values make more sense. Often this isn't much of a problem, but for wide-FOV cameras and/or AprilTags near the edge of the image, there is some difference.

In my own experiments, I found using projection matrix values gave substantially more accurate pose values - especially in distance to the markers.

The change to fix this would be simple - just switch from using:

  double fx = cam_info->K[0];
  double fy = cam_info->K[4];
  double px = cam_info->K[2];
  double py = cam_info->K[5];

to

  double fx = cam_info->P[0];
  double fy = cam_info->P[5];
  double px = cam_info->P[2];
  double py = cam_info->P[6];

release for kinetic

Hi!

Sorry to bother you, but are you planning on making a release for ROS Kinetic?

Thank you very much in advance!

cv::Exception during Tag detection

Hi Guys,

I am facing an issue using the latest version of OpenCV in the ROS repositories (v3.3.1) with AprilTags. What happens is that, when a tag is detected by my camera, the following error is shown:

OpenCV Error: Assertion failed (mtype == type0 || (((((mtype) & ((512 - 1) << 3)) >> 3) + 1) == 1 && ((1 << type0) & fixedDepthMask) != 0)) in create, file /tmp/binarydeb/ros-kinetic-opencv3-3.3.1/modules/core/src/matrix.cpp, line 2542
terminate called after throwing an instance of 'cv::Exception'
what(): /tmp/binarydeb/ros-kinetic-opencv3-3.3.1/modules/core/src/matrix.cpp:2542: error: (-215) mtype == type0 || (((((mtype) & ((512 - 1) << 3)) >> 3) + 1) == 1 && ((1 << type0) & fixedDepthMask) != 0) in function create

I am using Ubuntu 16.04.3 and ROS Kinetic.
I tried to investigate a bit, and the problem seems to be generated by the cv::solvePnP call in the TagDetection.cc source file.

Any suggestion?

Cheers,
