This is a fiducial marker system designed for LiDAR sensors. Different visual fiducial marker systems (AprilTag, ArUco, CCTag, etc.) can be easily embedded, and the usage is as convenient as that of a visual fiducial marker. The system shows potential in SLAM, multi-sensor calibration, augmented reality, and so on.
Dear author, thanks for sharing such excellent work! We would like to achieve stable marker detection between 4 and 30 meters using a Livox Mid-70.
I was wondering whether you have tested the detection performance within this distance range? I would greatly appreciate it if you could share your insights!
In your code, you use the intensity of the point cloud to render the image:
valid_cloud_i.points[p].x = (points[p][3] != 0) ? points[p][0] * points[p][3] : points[p][0];
valid_cloud_i.points[p].y = (points[p][3] != 0) ? points[p][1] * points[p][3] : points[p][1];
valid_cloud_i.points[p].z = (points[p][3] != 0) ? points[p][2] * points[p][3] : points[p][2];
Where does this expression come from?
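For readers trying to make sense of the snippet, here is a minimal, self-contained sketch of the loop it appears to come from. It assumes `points` is a vector of {x, y, z, intensity} entries and that `valid_cloud_i` is a PCL cloud, which may not match the repository exactly; it only shows what the expression computes (coordinates scaled by the point's intensity when the intensity is non-zero), not why the authors chose this weighting.

```cpp
// Hypothetical reconstruction for illustration only, assuming PCL is available
// and that points[p] = {x, y, z, intensity} as in the quoted snippet.
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <array>
#include <cstdint>
#include <vector>

pcl::PointCloud<pcl::PointXYZ> buildIntensityScaledCloud(
    const std::vector<std::array<double, 4>>& points) {
  pcl::PointCloud<pcl::PointXYZ> valid_cloud_i;
  valid_cloud_i.points.resize(points.size());
  valid_cloud_i.width = static_cast<std::uint32_t>(points.size());
  valid_cloud_i.height = 1;
  for (std::size_t p = 0; p < points.size(); ++p) {
    const double intensity = points[p][3];
    // Scale each coordinate by the intensity when it is non-zero;
    // otherwise keep the raw coordinate, exactly as in the quoted code.
    valid_cloud_i.points[p].x = (intensity != 0) ? points[p][0] * intensity : points[p][0];
    valid_cloud_i.points[p].y = (intensity != 0) ? points[p][1] * intensity : points[p][1];
    valid_cloud_i.points[p].z = (intensity != 0) ? points[p][2] * intensity : points[p][2];
  }
  return valid_cloud_i;
}
```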
Thanks for open-sourcing this work, but I have a question: in your paper you state that the whole process runs in real time (approx. 40 Hz on a Livox Mid-40). Does that include the point cloud data collection? Also, what publish_freq setting did you use with the Livox Mid-40 to obtain a high-resolution mapping image for AR marker identification?
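For context, publish_freq is a parameter of the livox_ros_driver launch files; lowering it lets each published frame accumulate points for longer, which gives a denser intensity image at a lower frame rate. The excerpt below is only an illustrative sketch following the stock launch file layout; the node type/name and the 10.0 Hz value are assumptions, not the settings used in the paper.

```xml
<!-- Illustrative only: node name/type follow the stock livox_ros_driver launch
     files and may differ in your setup; 10.0 Hz is a placeholder value. -->
<launch>
  <node pkg="livox_ros_driver" type="livox_ros_driver_node"
        name="livox_lidar_publisher" output="screen">
    <!-- Frames published per second; lower values integrate more points per frame. -->
    <param name="publish_freq" type="double" value="10.0"/>
  </node>
</launch>
```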
Hi,
First, thanks for sharing this code along with your paper. It looks amazing, and I can't wait to start using it.
However, would it be possible to add some example bags to the README? It would help me understand how things work and figure out why it doesn't work with the bag I am using (VLP-16).
I would greatly appreciate it!