
vision_msgs's Introduction

ROS Vision Messages

Introduction

This package defines a set of messages to unify computer vision and object detection efforts in ROS.

Overview

The messages in this package define a common outward-facing interface for vision-based pipelines. They are meant to enable two primary types of pipelines:

  1. "Pure" Classifiers, which identify class probabilities given a single sensor input
  2. Detectors, which identify class probabilities as well as the poses of those classes given a sensor input

The class probabilities are stored in an array of ObjectHypothesis messages, which is essentially a map from class IDs to float scores; ObjectHypothesisWithPose additionally carries a pose.

Message types exist separately for 2D and 3D. The metadata that is stored for each object is application-specific, and so this package places very few constraints on the metadata. Each possible detection result must have a unique ID so that it can be unambiguously and efficiently identified in the results messages. Object metadata such as name, mesh, etc. can then be looked up from a database.

The only other requirement is that the metadata database information can be stored in a ROS parameter. We expect a classifier to load the database (or detailed database connection information) to the parameter server in a manner similar to how URDFs are loaded and stored there (see [6]), most likely defined in an XML format. This expectation may be further refined in the future using a ROS Enhancement Proposal, or REP [7].
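For example, a classifier could load its class metadata into the parameter server at startup. A minimal sketch (ROS 1 rospy); the parameter name and database layout below are hypothetical, since the exact format has not been standardized yet:

    import rospy

    rospy.init_node("my_classifier")
    # Map class IDs to application-specific metadata. Both the parameter
    # name and the layout are illustrative assumptions.
    rospy.set_param("/my_classifier/class_database", {
        "0": {"name": "mug"},
        "1": {"name": "book"},
    })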

We also would like classifiers to have a way to signal when the database has been updated, so that listeners can respond accordingly. The database might be updated in the case of online learning. To solve this problem, each classifier can publish messages to a topic signaling that the database has been updated, as well as incrementing a database version that's continually published with the classifier information.
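A minimal sketch of that version-bumping pattern using VisionInfo's database_version field (the topic name and the update hook are hypothetical):

    import rospy
    from vision_msgs.msg import VisionInfo

    rospy.init_node("my_classifier")
    # Latched, so late-joining listeners still receive the current version.
    pub = rospy.Publisher("vision_info", VisionInfo, queue_size=1, latch=True)

    info = VisionInfo()
    info.method = "my_classifier"
    info.database_location = "/my_classifier/class_database"
    info.database_version = 0
    pub.publish(info)

    def on_database_updated():
        # Hypothetical hook, called by the online-learning code after it
        # modifies the database.
        info.database_version += 1
        info.header.stamp = rospy.Time.now()
        pub.publish(info)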

Source data that generated a classification or detection is not part of the messages. If you need access to it, use an exact or approximate time synchronizer in your code, as the message's header should match the header of the source data.
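A minimal sketch of that pairing with message_filters (ROS 1; topic names are hypothetical):

    import message_filters
    import rospy
    from sensor_msgs.msg import Image
    from vision_msgs.msg import Detection2DArray

    def callback(image, detections):
        # The headers (nearly) match, so these two messages belong together.
        pass

    rospy.init_node("pairing_node")
    image_sub = message_filters.Subscriber("camera/image_raw", Image)
    det_sub = message_filters.Subscriber("detections", Detection2DArray)
    sync = message_filters.ApproximateTimeSynchronizer(
        [image_sub, det_sub], queue_size=10, slop=0.1)
    sync.registerCallback(callback)
    rospy.spin()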

Semantic segmentation pipelines should use sensor_msgs/Image messages for publishing segmentation and confidence masks. This allows systems to use standard ROS tools for image processing, and allows choosing the most compact image encoding appropriate for the task. To transmit the metadata associated with the vision pipeline, you should use the vision_msgs/LabelInfo message. This message works the same way as sensor_msgs/CameraInfo or vision_msgs/VisionInfo:

  1. Publish LabelInfo to a topic. The topic should be at the same namespace level as the associated image. That is, if your image is published at /my_segmentation_node/image, the LabelInfo should be published at /my_segmentation_node/label_info. Use a latched publisher for LabelInfo, so that nodes joining the ROS system later can still get the last published message. In ROS2, this can be achieved using a transient local QoS profile (see the sketch after this list).

  2. The subscribing node can get and store one LabelInfo message and cancel its subscription after that. This assumes the publisher latches the message (or re-publishes it periodically).
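A minimal rclpy sketch of step 1, assuming a hypothetical node name (the LabelInfo message type is real; everything else is illustrative):

    import rclpy
    from rclpy.node import Node
    from rclpy.qos import DurabilityPolicy, QoSProfile
    from vision_msgs.msg import LabelInfo

    class SegmentationNode(Node):
        def __init__(self):
            super().__init__('my_segmentation_node')
            # Transient local durability re-delivers the last message to
            # late-joining subscribers, emulating a ROS 1 latched topic.
            qos = QoSProfile(depth=1,
                             durability=DurabilityPolicy.TRANSIENT_LOCAL)
            self.label_pub = self.create_publisher(LabelInfo, 'label_info', qos)

    rclpy.init()
    rclpy.spin(SegmentationNode())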

Messages

  • Classification: pure classification without pose
  • Detection2D and Detection3D: classification + pose
  • BoundingBox2D, BoundingBox3D: orientable rectangular bounding boxes, specified by the pose of their center and their size.
  • XArray messages, where X is one of the message types listed above. A pipeline should emit XArray messages as its forward-facing ROS interface.
  • VisionInfo: Information about a classifier, such as its name and where to find its metadata database.
  • ObjectHypothesis: A class_id/score pair.
  • ObjectHypothesisWithPose: An ObjectHypothesis/pose pair. This accounts for the fact that a single input, say, a point cloud, could have different poses depending on its class. For example, a flat rectangular prism could either be a smartphone lying on its back, or a book lying on its side (see the sketch after this list).
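A minimal sketch of that smartphone-vs-book case, assuming the current ROS 2 layout in which ObjectHypothesisWithPose wraps an ObjectHypothesis (the class IDs, scores, and poses are illustrative):

    from vision_msgs.msg import Detection3D, ObjectHypothesisWithPose

    detection = Detection3D()

    phone = ObjectHypothesisWithPose()
    phone.hypothesis.class_id = 'smartphone'
    phone.hypothesis.score = 0.6
    phone.pose.pose.position.z = 0.01  # thin object lying on its back

    book = ObjectHypothesisWithPose()
    book.hypothesis.class_id = 'book'
    book.hypothesis.score = 0.4
    book.pose.pose.position.z = 0.02  # thicker object lying on its side

    # One detection, two competing class/pose hypotheses.
    detection.results = [phone, book]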

By using a very general message definition, we hope to cover as many of the various computer vision use cases as possible. Some examples of use cases that can be fully represented are:

  • Bounding box multi-object detectors with tight bounding box predictions, such as YOLO [1]
  • Class-predicting full-image detectors, such as TensorFlow examples trained on the MNIST dataset [2]
  • Full 6D-pose recognition pipelines, such as LINEMOD [3] and those included in the Object Recognition Kitchen [4]
  • Custom detectors that use various point-cloud based features to predict object attributes (one example is [5])

Please see the vision_msgs_examples repository for some sample vision pipelines that emit results using the vision_msgs format.

RVIZ Plugins

The vision_msgs_rviz_plugins package enables the visualisation of the different detection messages in RVIZ2. For more information about its capabilities, please see the README file.

[screenshot: bounding box array visualization in RViz2]

References

vision_msgs's People

Contributors

dusty-nv, fruchtzwerg94, gorghino, hakuturu583, ijnek, jmm-slamcore, kukanani, leroyr, marqrazz, mikaelarguedas, mintar, mistermult, novog93, peci1, pepisg, procopiostein, ronaldensing, roshambo919, sloretz, stevemacenski


vision_msgs's Issues

Galactic Release

Like in #55, we also need a Galactic release

BTW, thank you for the Rolling release :)

Latest apt release for ROS Noetic on Ubuntu 20.04 has int64 id instead of string id

OS: Ubuntu 20.04 (focal)

I installed a fresh ROS Noetic following the official instructions, but installing vision_msgs via apt doesn't install the latest version:

sudo apt update
sudo apt install ros-noetic-vision-msgs

apt show ros-noetic-vision-msgs

Package: ros-noetic-vision-msgs
Version: 0.0.1-1focal.20210112.080130
Priority: optional
Section: misc
Maintainer: Adam Allevato <REDACTED>
Installed-Size: 833 kB
Depends: ros-noetic-geometry-msgs, ros-noetic-message-generation, ros-noetic-message-runtime, ros-noetic-sensor-msgs, ros-noetic-std-msgs
Download-Size: 48.2 kB
APT-Manual-Installed: yes
APT-Sources: http://packages.ros.org/ros/ubuntu focal/main amd64 Packages
Description: Messages for interfacing with various computer vision pipelines, such as object detectors.

rosmsg show vision_msgs/ObjectHypothesisWithPose

int64 id
float64 score
...

This is different from the latest Noetic API reference, which shows the correct string id.

class std::basic_ostream<char>’ has no member named ‘str’

I tried to compile vision_msgs in my ROS2 Humble (branch ros2) workspace but I got this error while it was building vision_msgs_rviz_plugins:

--- stderr: vision_msgs_rviz_plugins                                                                                                                                                                 
In file included from /home/nvidia/drone-racing/build/vision_msgs_rviz_plugins/include/vision_msgs_rviz_plugins/../../../../src/vision_msgs/vision_msgs_rviz_plugins/include/vision_msgs_rviz_plugins/detection_3d_array.hpp:33,
                 from /home/nvidia/drone-racing/build/vision_msgs_rviz_plugins/include/vision_msgs_rviz_plugins/moc_detection_3d_array.cpp:9:
/home/nvidia/drone-racing/src/vision_msgs/vision_msgs_rviz_plugins/include/vision_msgs_rviz_plugins/detection_3d_common.hpp: In member function ‘void rviz_plugins::Detection3DCommon<MessageType>::ShowScore(vision_msgs::msg::Detection3D, double, size_t)’:
/home/nvidia/drone-racing/src/vision_msgs/vision_msgs_rviz_plugins/include/vision_msgs_rviz_plugins/detection_3d_common.hpp:450:90: error: ‘std::basic_ostream<char>::__ostream_type’ {aka ‘class std::basic_ostream<char>’} has no member named ‘str’
  450 |     marker->text = (std::ostringstream{} << std::fixed << std::setprecision(2) << score).str();
      |                                                                                          ^~~
make[2]: *** [CMakeFiles/vision_msgs_rviz_plugins.dir/build.make:128: CMakeFiles/vision_msgs_rviz_plugins.dir/include/vision_msgs_rviz_plugins/moc_detection_3d_array.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:138: CMakeFiles/vision_msgs_rviz_plugins.dir/all] Error 2
make: *** [Makefile:146: all] Error 2

https://github.com/ros-perception/vision_msgs/blob/ros2/vision_msgs_rviz_plugins/include/vision_msgs_rviz_plugins/detection_3d_common.hpp#L450

I fixed it by splitting the one-line statement into multiple << insertions:

    std::ostringstream oss;
    oss << std::fixed;
    oss << std::setprecision(2);
    oss << score;
    marker->text = oss.str();

add unique object ID for tracking

Hi, I want to make an object detection (https://github.com/OUXT-Polaris/nnabla_vision_detection) and tracking ROS package and a visualization tool (https://github.com/OUXT-Polaris/vision_msgs_visualization) using vision_msgs.
However, there is no unique object ID in this package.
So I want to modify the messages as below.

Detection2D.msg

# Defines a 2D detection result.
#
# This is similar to a 2D classification, but includes position information,
#   allowing a classification result for a specific crop or image point
#   to be located in the larger image.

Header header

# Class probabilities
ObjectHypothesisWithPose[] results

# 2D bounding box surrounding the object.
BoundingBox2D bbox

# The 2D data that generated these results (i.e. region proposal cropped out of
#   the image). Not required for all use cases, so it may be empty.
sensor_msgs/Image source_img

# If this message is a tracking result, this field is set to true.
bool is_tracking

# Unique ID which was set by the tracker ROS node.
uuid_msgs/UniqueID tracking_id

Detection3D.msg

# Defines a 3D detection result.
#
# This extends a basic 3D classification by including position information,
#   allowing a classification result for a specific position in an image
#   to be located in the larger image.

Header header

# Class probabilities. Does not have to include hypotheses for all possible
#   object ids, the scores for any ids not listed are assumed to be 0.
ObjectHypothesisWithPose[] results

# 3D bounding box surrounding the object.
BoundingBox3D bbox

# The 3D data that generated these results (i.e. region proposal cropped out of
#   the image). This information is not required for all detectors, so it may
#   be empty.
sensor_msgs/PointCloud2 source_cloud

# If this message is a tracking result, this field is set to true.
bool is_tracking

# Unique ID which was set by the tracker ROS node.
uuid_msgs/UniqueID tracking_id

I want to use the same message for tracking and detection, in order to use the same ROS node for visualization, so I do not want to add a new message type.

Query regarding usage of vision_msgs::msg::Detection2DArray

I can see the darknet detector values (bbox, object id) being published using ros2 topic echo /topic. But I'm having an issue pulling the detector values.

At the subscriber:
const vision_msgs::msg::Detection2DArray::SharedPtr det_msg

I can pull the header info as below.
std::cout << det_msg->header.frame_id << std::endl;

Is there any example that uses Detection2DArray at subscription side?
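As one possible answer, a minimal rclpy subscriber sketch (the topic name is hypothetical; the bbox field paths follow the Humble-era layout, where the center's x/y live under bbox.center.position, while older distros use bbox.center.x directly):

    import rclpy
    from rclpy.node import Node
    from vision_msgs.msg import Detection2DArray

    class DetectionListener(Node):
        def __init__(self):
            super().__init__('detection_listener')
            self.sub = self.create_subscription(
                Detection2DArray, 'detections', self.callback, 10)

        def callback(self, msg):
            # Each Detection2D carries a bounding box and a list of hypotheses.
            for det in msg.detections:
                for hyp in det.results:
                    self.get_logger().info(
                        f'{hyp.hypothesis.class_id}: '
                        f'{hyp.hypothesis.score:.2f} at '
                        f'({det.bbox.center.position.x:.1f}, '
                        f'{det.bbox.center.position.y:.1f})')

    rclpy.init()
    rclpy.spin(DetectionListener())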

use uuid in tracking ID.

In order to discuss this more in the open, I made this issue.

I added the tracking field in Detection2D. I understand the human-readability argument, but I strongly disagree with using a string for the tracking ID field.

I have objections regarding these issues.

Galactic BoundingBox3DArray message not generated for Python

In Galactic, BoundingBox3DArray does not show up as a generated message for python, for neither the apt binary, nor when building from source:

>>> from vision_msgs.msg import BoundingBox3DArray
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: cannot import name 'BoundingBox3DArray' from 'vision_msgs.msg'

This occurs in a fresh docker container too, and could be replicated there: https://asciinema.org/a/vtbiRQtrlSXIvhR25SM7cMLS7

If you look at the __init__.py for the message (for binary located here: /opt/ros/galactic/lib/python3.8/site-packages/vision_msgs/msg/__init__.py) you can see that the message is not generated. I am not sure why that may be yet.

from vision_msgs.msg._bounding_box2_d import BoundingBox2D  # noqa: F401
from vision_msgs.msg._bounding_box3_d import BoundingBox3D  # noqa: F401
from vision_msgs.msg._classification import Classification  # noqa: F401
from vision_msgs.msg._detection2_d import Detection2D  # noqa: F401
from vision_msgs.msg._detection2_d_array import Detection2DArray  # noqa: F401
from vision_msgs.msg._detection3_d import Detection3D  # noqa: F401
from vision_msgs.msg._detection3_d_array import Detection3DArray  # noqa: F401
from vision_msgs.msg._object_hypothesis import ObjectHypothesis  # noqa: F401
from vision_msgs.msg._object_hypothesis_with_pose import ObjectHypothesisWithPose  # noqa: F401
from vision_msgs.msg._vision_info import VisionInfo  # noqa: F401

Discussion about UUIDs

As requested by @SteveMacenski, here's a new issue for continuing the discussion around UUIDs. As a starter, I'm going to summarize my comment from here: #43 (comment)

Main Question

Should we change the type of Detection*D/tracking_id to uuid_msgs/UniqueID?

Background

Previous discussion has happened in #25 and #43.

As a basis for this discussion, we'll use the ROS2 version of the messages from the ros2 branch. In Noetic and ROS2, there are two ID fields (see below):

  • ObjectHypothesisWithPose.id: This is the detected class (human, mug, ...). I think we can all agree that this should remain a string.
  • tracking_id: This is the ID of the object when tracked across multiple frames. The discussion is whether we should make this a UUID.
  • The single detections (without tracking across frames) don't have an ID. There was a discussion in the issues, but we decided against it, since each detection is already uniquely identified by the time stamp of its Detection*DArray message and its array index within that message.

Detection3D.msg

Header header
vision_msgs/ObjectHypothesisWithPose[] results
  # The unique ID of the object class. To get additional information about
  #   this ID, such as its human-readable class name, listeners should perform a
  #   lookup in a metadata database. See vision_msgs/VisionInfo.msg for more detail.
  string id

  float64 score
  geometry_msgs/PoseWithCovariance pose
vision_msgs/BoundingBox3D bbox
sensor_msgs/PointCloud2 source_cloud

# If this message is a tracking result, this field is set to true.
bool is_tracking

# ID used for consistency across multiple detection messages. This value will
#   likely differ from the id field set in each individual ObjectHypothesis.
# If you set this field, be sure to also set is_tracking to True.
string tracking_id

Pros and Cons

The arguments for and against UUIDs boil down to:

Pro UUID:

  • If there are multiple object detector nodes, and some aggregator node simply throws all their results together, UUIDs will prevent collisions between tracking IDs because they are globally unique. If there is only one object detector node, there won't be any collisions, since the publisher of the Detection*D message has to ensure that the tracking_id is locally unique.

Contra UUID:

  • Having the option of human-readable strings as tracking_ids is much more user- and developer-friendly; apple-3456 is easier on the eyes than ec1cf114-7aa0-4402-8bf9-eee858118b7d when used in log outputs, RViz markers etc.
  • Perhaps there's not just a one-way flow of information from object detectors into the environment model; perhaps the object detector re-acquires an object that is already known (by some ID) in the environment model and continues to track it. Having a string here gives us the flexibility of using the existing ID for tracking, no matter what format it is in.

Presumably, if you run multiple object detector nodes, they will all publish their results on separate topics. If you are worried about collisions, the aggregator node could simply prefix each tracking_id with the topic name, which would guarantee unique strings. In my view, this negates the only advantage of UUIDs. So I'm also opposed to changing this field to a UUID.
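A one-line sketch of that prefixing idea (the names are hypothetical):

    def make_global_id(topic_name: str, tracking_id: str) -> str:
        # Namespace each tracking_id by its source topic to avoid collisions.
        return f"{topic_name}/{tracking_id}"

    # "/front_camera/detections/apple-3456" vs "/rear_camera/detections/apple-3456"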

vision_msgs standardization

Hi, I come from the Discourse post, and I expected there will be some prepared space for discussion. I haven't found it, so I'm creating it here :) If I misunderstood how ideas/comments should be added, please, tell me where to post. Generally, I'm not a fan of all-in-one issues, but as you asked more for opinions than for resolute statements/bug reports, it would be complicated to post them one-at-a-time. So I'll try to separate ideas by paragraphs, but the thoughts are intertwined sometimes.

2D?: One general idea - do you think the 2D cases are so important to deserve their own messages? There are suggestions that full 3D information should be used whenever possible. And even now you're using e.g. the 3D ObjectHypothesisWithPose in Detection2D. I bet that bit-size is not a limiting factor in most vision detection applications. I know there will be the hassle of theta vs quaternion, but if you think it through, it's not that bad. An angle theta representing yaw around z boils down to the quaternion (0, 0, sin(theta/2), cos(theta/2)). And in a lot of further processing, you would turn theta into the sin/cos values anyway. See also my further points about helper functions and decoupling from source data.
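A worked version of that conversion (plain Python, no ROS dependency):

    import math

    def yaw_to_quaternion(theta):
        # A pure rotation about z by theta, as (x, y, z, w).
        return (0.0, 0.0, math.sin(theta / 2.0), math.cos(theta / 2.0))

    def quaternion_to_yaw(x, y, z, w):
        # Inverse, valid for pure-z rotations.
        return 2.0 * math.atan2(z, w)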

Helper functions: If you fear users would need to write boilerplate code (as was expressed in #25), this can be fixed by suitable helper functions inside vision_msgs. You can provide these functions in libraries (as you do here for bounding box construction, or as sensor_msgs does). gencpp can go one step further and provide the functionalities right in the generated header files (via the plugin infrastructure): it can add constructors, custom class methods and operators, and other free functions. Some off-the-shelf examples: BoundingBox(Point, Vector3, theta) (would do the trig to correctly fill the quaternion in the 2D case), double BoundingBox::theta() (would do the inverse), BoundingBox(Point, Vector3) (construct 2D/3D axis-aligned box from center and extents), BoundingBox(Point, Point) (construct 2D/3D axis-aligned box from min point and max point), std::string ObjectHypothesis::stringID() (convert UUID to string; see my point on UUID), static UniqueID ObjectHypothesis::generateID(). AFAIK, genpy doesn't have such a plugin mechanism, but providing a helper module still seems like a sufficient way.

Decouple from data: Some messages here include a field carrying some source data. I understand it is very practical on one hand (we do it too in our custom messages), but on the other hand it generates issues like #20 (and possibly future issues like sensor_msgs/PointCloud support etc.). And these have no good solution with the data inside the message except for composition on a higher level. I propose to remove the source data fields from vision_msgs, and possibly leave a note in the message definition that the user should either use a message synchronizer to get the pairs of image-detection, or create a custom type that contains both the image (cloud...) and the detection message. It's easy to use topic_tools/relay_field to extract the desired parts afterwards. There are situations where you do not want to relate the detection to the whole image, but you want to e.g. have the cutout from the image where the detection occurred. This would work the same in case one detection per image is published (you'd just publish the cutout on a new topic). But it would become a bit more complicated in case the detector can return multiple detections per image and you would use DetectionArray. Here, matching by header would no longer be enough. Or - if taken rigorously - it could be, because making a cutout from an image formally invalidates its frame_id. So each cutout should have its own frame_id, and then you could make the detection-image mapping. Some more rationale: someone might want to run detections on an organized 3D pointcloud from a depth camera and return 2D detections (because he's e.g. only interested in bounding boxes in the RGB in the end). And a practical issue we faced: on a low-bandwidth link, we wanted to send detections, but as I mentioned earlier, the image is already a part of our messages (similarly to how it is done here now). But we figured out we do not want to transmit the image over the slow link. So we ended up not filling any values in the image fields, and now we have detections which lie to be detected on 0x0 px images. That doesn't really seem like a conceptual approach. Still, I'm not really 100% sure on this point, but it seems to me decoupling is the "more correct" way, but has its pros and cons. A bad thing is that using composition to create a DetectionsWithImagesArray message would not subsequently allow using topic_tools/relay_field, because you would need to relay primitive-array fields, which I guess is not supported.

UUID: Maybe it's time to revisit the proposal to use UUIDs (#25). Recently, ros-perception/radar_msgs#3 agreed on using them for a very similar thing (RadarTrack, which is the same as a tracked detection in terms of vision_msgs). (actually, there's still one review pending for it to be merged)

geometry_msgs: I'd really like to see both BoundingBox classes in geometry_msgs. That makes much more sense (but is a much longer run :) ). Bounding boxes are used outside of vision, too, e.g. in laser filtering. Eigen also has its AlignedBox class in the Geometry module. And, taking into account my point about 2D and helper functions, it might be sufficient to transfer only the 3D box and get rid of the 2D box. I could help with this one because I already did some PRs regarding bounding boxes to MoveIt and Eigen...

Duplicate headers: The *Array messages contain a header, but then they contain the array of Detections, which also each carry a header. That seems either unnecessary, or not well documented (to help the user distinguish what values to fill in each header). You could either remove headers from the non-array versions and state that even single-object detectors have to publish an array, or you could make stamped and unstamped versions of the messages.

ObjectHypothesisWithPose should use composition: The ObjectHypothesisWithPose message should rather use composition to include an ObjectHypothesis object. I know this adds a bit more typing when creating and filling the message, but it allows using tools specialized in processing ObjectHypothesis messages, whereas the current implementation would require the tool to explicitly support both the variant with pose and without it, even though it might not be interested in the pose at all. I know there are some counter-examples in common_msgs (mainly the stamped/unstamped message versions), but I somehow always disliked the fact that they were not interchangeable when I was not interested in the timestamps or frames.

VisionInfo latching: I suggest explicitly mentioning that VisionInfo should be published as a latched topic.

That's it for now. Do not treat these suggestions as some must-haves, it's just feedback which should help this package converge closer to something we could really use instead of our custom messages.

Suggestion for new msg type: vision_msgs/Point

Along with the changes in #64, I want to suggest separating the position (x, y) from vision_msgs/Pose into a new msg vision_msgs/Position.

Often, there are cases where I want to store a point in an image (where heading doesn't make sense), and there is no appropriate message type for it. vision_msgs seems like the appropriate place for this sort of msg.

This is similar to how position is a field in geometry_msgs/Pose.msg

Add ObjectHypothesisArray

What's the reason for all these messages? We want to detect objects in images or point clouds to build a model of the real world. The real world exists even if no image is taken of it. So we need a message that only contains objects but does not reference images or point clouds.
Applications:

  • We put objects on the table and measure the coordinates. This is the ground truth data without images. Later we do object detection and compare the result with the ground truth.
  • We have ground truth data of fixed objects and want to visualize them alongside of detected objects.

Therefore I suggest ObjectHypothesisArray.msg

Header header
ObjectHypothesisWithPose[] objects

Keypoints and polygons instead of bounding boxes

Hi,

currently I'm integrating a face detector. It does not emit bounding boxes, but key points, e.g. one point in the center of the left eye, one point in the center of the right eye, one point for the center of the mouth (simplified). There are two possible ideas to model this in vision_msgs:

  1. Use one detection for each keypoint with one hypothesis. Make a bounding box of size 0x0 at the key point and set hypothesis.id = "left_eye"/"right_eye".
    ++ No need to extend vision_msgs
    -- How to group multiple faces in one image? Maybe by hypothesis.id="left_eye_1", but this would generate ids that are not in the database. Maybe by tracking_id?
    -- If faces are somehow grouped by some id, clients have to iterate through all detections and group them to find all key points for some face.
    -- Does not describe the domain. Clients need more implicit knowledge.
    -- Cannot describe polygons.

  2. In the annotation tool CVAT, which is connected to OpenCV, you can either use bounding boxes or an array of points. The latter is used to annotate key points, or arbitrary polygon shapes. If the points are meant to be key points, the order implicitly defines what each point describes, e.g. the first point is always the left eye etc. This would yield new messages:

#Detection2D.msg
Header header

# Class probabilities
ObjectHypothesisWithPose[] results

# 2D bounding box surrounding the object.
BoundingBox2D bbox
# Keypoints in the image or an arbitrary polygon.
geometry_msgs/Point2D[] points

# The 2D data that generated these results (i.e. region proposal cropped out of
#   the image). Not required for all use cases, so it may be empty.
sensor_msgs/Image source_img

# If true, this message contains object tracking information.
bool is_tracking

# ID used for consistency across multiple detection messages. This value will
#   likely differ from the id field set in each individual ObjectHypothesis.
# If you set this field, be sure to also set is_tracking to True.
string tracking_id

-- Need to extend vision_msgs
++ Can also describe key points and polygons.
++ Describes the domain with key points.
++ One array of points can model all the other shapes of other annotation tools.
-- More complexity, especially if clients want to support polygons. Key points should be OK. Clients might just ignore them.
-- Clients might get a message with an empty/default bounding box if only key points are set. Maybe add a field like shape_type (0 = boundingBox, 1 = key point, 2 = polygon) or has_points.
** There are other ways to model it, e.g. by a completely new message KeypointDetectionXD.

Let me know what you think about support for key points/polygons and the above extensions of Detection2D.

Incompatibility between messages for different ROS distros

I noticed that some messages (in my case, Detection2D) are different between ROS Melodic and Noetic:

Melodic:

std_msgs/Header header
vision_msgs/ObjectHypothesisWithPose[] results
vision_msgs/BoundingBox2D bbox
sensor_msgs/Image source_img

Noetic:

std_msgs/Header header
vision_msgs/ObjectHypothesisWithPose[] results
vision_msgs/BoundingBox2D bbox
sensor_msgs/Image source_img
bool is_tracking
string tracking_id

Is there a reason for omitting is_tracking and tracking_id on Melodic? I installed the package using APT, but I could also clone this repo in my workspace in case I need the tracking messages in the future.

[ros2] ros1_bridge can't map ObjectHypothesis between pre-Noetic ROS 1 and ROS 2

ros1_bridge can't map ObjectHypothesis between pre-Noetic ROS 1 and ROS 2 because of two conflicting types in ObjectHypothesis' id field. The field is int64 up to and including ROS 1 Melodic, and string in Noetic and ROS 2 (thanks @mintar for the clarification).
During compilation of ros1_bridge, the automatic mapping attempt will fail with the following error:

.../ros1_bridge/generated/vision_msgs__msg__ObjectHypothesis__factories.cpp: In static member function ‘static void ros1_bridge::Factory<ROS1_T, ROS2_T>::convert_2_to_1(const ROS2_T&, ROS1_T&) [with ROS1_T = vision_msgs::ObjectHypothesis_<std::allocator<void> >; ROS2_T = vision_msgs::msg::ObjectHypothesis_<std::allocator<void> >]’:
.../ros1_bridge/generated/vision_msgs__msg__ObjectHypothesis__factories.cpp:79:26: error: cannot convert ‘const _id_type {aka const std::__cxx11::basic_string<char>}’ to ‘vision_msgs::ObjectHypothesis_<std::allocator<void> >::_id_type {aka long int}’ in assignment
   ros1_msg.id = ros2_msg.id;
                          ^~
make[2]: *** [CMakeFiles/ros1_bridge.dir/generated/vision_msgs__msg__ObjectHypothesis__factories.cpp.o] Error 1
make[2]: *** Waiting for unfinished jobs....
.../ros1_bridge/generated/vision_msgs__msg__ObjectHypothesisWithPose__factories.cpp: In static member function ‘static void ros1_bridge::Factory<ROS1_T, ROS2_T>::convert_2_to_1(const ROS2_T&, ROS1_T&) [with ROS1_T = vision_msgs::ObjectHypothesisWithPose_<std::allocator<void> >; ROS2_T = vision_msgs::msg::ObjectHypothesisWithPose_<std::allocator<void> >]’:
.../ros1_bridge/generated/vision_msgs__msg__ObjectHypothesisWithPose__factories.cpp:87:26: error: cannot convert ‘const _id_type {aka const std::__cxx11::basic_string<char>}’ to ‘vision_msgs::ObjectHypothesisWithPose_<std::allocator<void> >::_id_type {aka long int}’ in assignment
   ros1_msg.id = ros2_msg.id;
                          ^~

This probably requires a custom mapping rule (which is possible for ros1_bridge).

A message for features ?

Wouldn't it be interesting to have a ROS message to pass features around (think SIFT, SURF, ORB, ED-line etc.)?
Including some helper functions to handle the back-and-forth conversion with OpenCV.

Interest in moving this to be official vision_msgs in ros-perception org?

Hi,

I see this package being used as a standard interface for detection / AI models in ROS. It would be great if this were made more "official" as a ros-perception/vision_msgs package, so that it can be the "official" vision interface for ROS. I have the permissions required to help make this happen if you like!

Steve

Question about catkin_make

Hello, I am sorry to disturb you. The following error occurred when I used catkin_make in a workspace. I am sure that there is a vision_msgs folder in my src folder. How should I solve it? Thank you so much!
[screenshot: Selection_017, showing the catkin_make error output]

Segmentation array option in Galactic

Dear maintainers,

vision_msgs provides communication options for classification and detection tasks, but what of segmentation tasks? Up till ROS2 Foxy I (mis)used the DetectionXDArray messages for this, where I put the segmented binary mask and segmented pointcloud in source_img and source_cloud respectively. I noticed that these fields have disappeared in the Galactic branch. What would be the recommended method for segmentation arrays in Galactic?

I could think of two ways to do this:

  1. Add SegmentationXDArray messages, with the SegmentationXD message being:
  • std_msgs/Header header
  • ObjectHypothesisWithPose[] results
  • sensor_msgs/Image mask (2D) and sensor_msgs/PointCloud2 cloud (3D)
  • string id
  2. As you state in the Readme, you could use a time synchroniser. For a single segmentation that would work fine: you could use vision_msgs::msg::Detection2D with sensor_msgs::msg::Image. But what happens if your input results in multiple segmentations? Would you then use vision_msgs::msg::Detection2DArray in synchronisation with a custom message containing an array of sensor_msgs::msg::Image instances with IDs? And then manually match the IDs of the Detection2D instances with the Image instances? This feels a bit tedious.

Please let me know what you think.

BoundingBox2DArray doesn't exist in noetic release (apt)

The wiki shows BoundingBox2DArray as being an available message in noetic.

However, when installing ros-noetic-vision-msgs via apt, the message definition is missing. It looks like the release repo last updated the noetic branch in 2020, which was before the array msg was added to the development repo.

Could the maintainer of the release repo update the noetic release to include this message?

ObjectHypothesisWithPose array

Why is there no array for ObjectHypothesisWithPose? I understand anybody can do it themselves, but this doesn't add it to ROS standard msgs.

ros2 launch vision_msgs_rviz_plugins test_all.launch.py errors

Hi, my environment is ROS2 Galactic. When I run:

ros2 launch vision_msgs_rviz_plugins test_all.launch.py

the command fails as shown below. How can I fix it?


[INFO] [launch]: All log files can be found below /home/lin/.ros/log/2023-08-22-15-17-03-064993-PC-80592
[INFO] [launch]: Default logging verbosity is set to INFO
[INFO] [BoundingBox3D.py-1]: process started with pid [80594]
[INFO] [BoundingBox3DArray.py-2]: process started with pid [80596]
[INFO] [Detection3D.py-3]: process started with pid [80598]
[INFO] [Detection3DArray.py-4]: process started with pid [80600]
[INFO] [rviz2-5]: process started with pid [80602]
[Detection3D.py-3] Traceback (most recent call last):
[Detection3D.py-3]   File "/home/lin/ros2_code/pillars_ws/install/vision_msgs_rviz_plugins/lib/vision_msgs_rviz_plugins/Detection3D.py", line 23, in <module>
[Detection3D.py-3]     from rclpy.node import Node
[Detection3D.py-3]   File "/opt/ros/galactic/lib/python3.8/site-packages/rclpy/node.py", line 43, in <module>
[Detection3D.py-3]     from rclpy.client import Client
[Detection3D.py-3]   File "/opt/ros/galactic/lib/python3.8/site-packages/rclpy/client.py", line 22, in <module>
[Detection3D.py-3]     from rclpy.impl.implementation_singleton import rclpy_implementation as _rclpy
[Detection3D.py-3]   File "/opt/ros/galactic/lib/python3.8/site-packages/rclpy/impl/implementation_singleton.py", line 32, in <module>
[Detection3D.py-3]     rclpy_implementation = import_c_library('._rclpy_pybind11', package)
[Detection3D.py-3]   File "/opt/ros/galactic/lib/python3.8/site-packages/rpyutils/import_c_library.py", line 39, in import_c_library
[Detection3D.py-3]     return importlib.import_module(name, package=package)
[Detection3D.py-3]   File "/home/lin/software/miniconda3/lib/python3.9/importlib/__init__.py", line 127, in import_module
[Detection3D.py-3]     return _bootstrap._gcd_import(name[level:], package, level)
[Detection3D.py-3] ModuleNotFoundError: No module named 'rclpy._rclpy_pybind11'
[Detection3D.py-3] The C extension '/opt/ros/galactic/lib/python3.8/site-packages/_rclpy_pybind11.cpython-39-x86_64-linux-gnu.so' isn't present on the system. Please refer to 'https://index.ros.org/doc/ros2/Troubleshooting/Installation-Troubleshooting/#import-failing-without-library-present-on-the-system' for possible solutions
[BoundingBox3DArray.py-2] Traceback (most recent call last):
[BoundingBox3DArray.py-2]   File "/home/lin/ros2_code/pillars_ws/install/vision_msgs_rviz_plugins/lib/vision_msgs_rviz_plugins/BoundingBox3DArray.py", line 22, in <module>
[BoundingBox3DArray.py-2]     from rclpy.node import Node
[BoundingBox3DArray.py-2]   File "/opt/ros/galactic/lib/python3.8/site-packages/rclpy/node.py", line 43, in <module>
[BoundingBox3DArray.py-2]     from rclpy.client import Client
[BoundingBox3DArray.py-2]   File "/opt/ros/galactic/lib/python3.8/site-packages/rclpy/client.py", line 22, in <module>
[BoundingBox3DArray.py-2]     from rclpy.impl.implementation_singleton import rclpy_implementation as _rclpy
[BoundingBox3DArray.py-2]   File "/opt/ros/galactic/lib/python3.8/site-packages/rclpy/impl/implementation_singleton.py", line 32, in <module>
[BoundingBox3DArray.py-2]     rclpy_implementation = import_c_library('._rclpy_pybind11', package)
[BoundingBox3DArray.py-2]   File "/opt/ros/galactic/lib/python3.8/site-packages/rpyutils/import_c_library.py", line 39, in import_c_library
[BoundingBox3DArray.py-2]     return importlib.import_module(name, package=package)
[BoundingBox3DArray.py-2]   File "/home/lin/software/miniconda3/lib/python3.9/importlib/__init__.py", line 127, in import_module
[BoundingBox3DArray.py-2]     return _bootstrap._gcd_import(name[level:], package, level)
[BoundingBox3DArray.py-2] ModuleNotFoundError: No module named 'rclpy._rclpy_pybind11'
[BoundingBox3DArray.py-2] The C extension '/opt/ros/galactic/lib/python3.8/site-packages/_rclpy_pybind11.cpython-39-x86_64-linux-gnu.so' isn't present on the system. Please refer to 'https://index.ros.org/doc/ros2/Troubleshooting/Installation-Troubleshooting/#import-failing-without-library-present-on-the-system' for possible solutions
[BoundingBox3D.py-1] Traceback (most recent call last):
[BoundingBox3D.py-1]   File "/home/lin/ros2_code/pillars_ws/install/vision_msgs_rviz_plugins/lib/vision_msgs_rviz_plugins/BoundingBox3D.py", line 22, in <module>
[BoundingBox3D.py-1]     from rclpy.node import Node
[BoundingBox3D.py-1]   File "/opt/ros/galactic/lib/python3.8/site-packages/rclpy/node.py", line 43, in <module>
[BoundingBox3D.py-1]     from rclpy.client import Client
[BoundingBox3D.py-1]   File "/opt/ros/galactic/lib/python3.8/site-packages/rclpy/client.py", line 22, in <module>
[BoundingBox3D.py-1]     from rclpy.impl.implementation_singleton import rclpy_implementation as _rclpy
[BoundingBox3D.py-1]   File "/opt/ros/galactic/lib/python3.8/site-packages/rclpy/impl/implementation_singleton.py", line 32, in <module>
[BoundingBox3D.py-1]     rclpy_implementation = import_c_library('._rclpy_pybind11', package)
[BoundingBox3D.py-1]   File "/opt/ros/galactic/lib/python3.8/site-packages/rpyutils/import_c_library.py", line 39, in import_c_library
[BoundingBox3D.py-1]     return importlib.import_module(name, package=package)
[BoundingBox3D.py-1]   File "/home/lin/software/miniconda3/lib/python3.9/importlib/__init__.py", line 127, in import_module
[BoundingBox3D.py-1]     return _bootstrap._gcd_import(name[level:], package, level)
[BoundingBox3D.py-1] ModuleNotFoundError: No module named 'rclpy._rclpy_pybind11'
[BoundingBox3D.py-1] The C extension '/opt/ros/galactic/lib/python3.8/site-packages/_rclpy_pybind11.cpython-39-x86_64-linux-gnu.so' isn't present on the system. Please refer to 'https://index.ros.org/doc/ros2/Troubleshooting/Installation-Troubleshooting/#import-failing-without-library-present-on-the-system' for possible solutions
[ERROR] [Detection3D.py-3]: process has died [pid 80598, exit code 1, cmd '/home/lin/ros2_code/pillars_ws/install/vision_msgs_rviz_plugins/lib/vision_msgs_rviz_plugins/Detection3D.py --ros-args -r __node:=detection3d_test'].
[ERROR] [BoundingBox3DArray.py-2]: process has died [pid 80596, exit code 1, cmd '/home/lin/ros2_code/pillars_ws/install/vision_msgs_rviz_plugins/lib/vision_msgs_rviz_plugins/BoundingBox3DArray.py --ros-args -r __node:=boundingbox3darray_test'].
[Detection3DArray.py-4] Traceback (most recent call last):
[Detection3DArray.py-4]   File "/home/lin/ros2_code/pillars_ws/install/vision_msgs_rviz_plugins/lib/vision_msgs_rviz_plugins/Detection3DArray.py", line 22, in <module>
[Detection3DArray.py-4]     from rclpy.node import Node
[Detection3DArray.py-4]   File "/opt/ros/galactic/lib/python3.8/site-packages/rclpy/node.py", line 43, in <module>
[Detection3DArray.py-4]     from rclpy.client import Client
[Detection3DArray.py-4]   File "/opt/ros/galactic/lib/python3.8/site-packages/rclpy/client.py", line 22, in <module>
[Detection3DArray.py-4]     from rclpy.impl.implementation_singleton import rclpy_implementation as _rclpy
[Detection3DArray.py-4]   File "/opt/ros/galactic/lib/python3.8/site-packages/rclpy/impl/implementation_singleton.py", line 32, in <module>
[Detection3DArray.py-4]     rclpy_implementation = import_c_library('._rclpy_pybind11', package)
[Detection3DArray.py-4]   File "/opt/ros/galactic/lib/python3.8/site-packages/rpyutils/import_c_library.py", line 39, in import_c_library
[Detection3DArray.py-4]     return importlib.import_module(name, package=package)
[Detection3DArray.py-4]   File "/home/lin/software/miniconda3/lib/python3.9/importlib/__init__.py", line 127, in import_module
[Detection3DArray.py-4]     return _bootstrap._gcd_import(name[level:], package, level)
[Detection3DArray.py-4] ModuleNotFoundError: No module named 'rclpy._rclpy_pybind11'
[Detection3DArray.py-4] The C extension '/opt/ros/galactic/lib/python3.8/site-packages/_rclpy_pybind11.cpython-39-x86_64-linux-gnu.so' isn't present on the system. Please refer to 'https://index.ros.org/doc/ros2/Troubleshooting/Installation-Troubleshooting/#import-failing-without-library-present-on-the-system' for possible solutions
[ERROR] [BoundingBox3D.py-1]: process has died [pid 80594, exit code 1, cmd '/home/lin/ros2_code/pillars_ws/install/vision_msgs_rviz_plugins/lib/vision_msgs_rviz_plugins/BoundingBox3D.py --ros-args -r __node:=boundingbox3d_test'].
[ERROR] [Detection3DArray.py-4]: process has died [pid 80600, exit code 1, cmd '/home/lin/ros2_code/pillars_ws/install/vision_msgs_rviz_plugins/lib/vision_msgs_rviz_plugins/Detection3DArray.py --ros-args -r __node:=detection3darray_test'].
[rviz2-5] [rospack] Error: package 'rviz_common' not found
[rviz2-5] [librospack]: error while executing command
[rviz2-5] [INFO] [1692688623.686536338] [rviz2]: Stereo is NOT SUPPORTED
[rviz2-5] [INFO] [1692688623.686627491] [rviz2]: OpenGl version: 4.6 (GLSL 4.6)
[rviz2-5] [rospack] Error: package 'rviz_common' not found
[rviz2-5] [librospack]: error while executing command
[rviz2-5] [rospack] Error: package 'rviz_common' not found
[rviz2-5] [librospack]: error while executing command
[rviz2-5] [INFO] [1692688623.771178453] [rviz2]: Stereo is NOT SUPPORTED
[rviz2-5] [rospack] Error: package 'rviz_default_plugins' not found
[rviz2-5] [librospack]: error while executing command
[rviz2-5] [rospack] Error: package 'rviz_default_plugins' not found
[rviz2-5] [librospack]: error while executing command
[rviz2-5] [rospack] Error: package 'rviz_common' not found
[rviz2-5] [librospack]: error while executing command
[rviz2-5] [rospack] Error: package 'rviz_common' not found
[rviz2-5] [librospack]: error while executing command
[rviz2-5] [rospack] Error: package 'rviz_common' not found
[rviz2-5] [librospack]: error while executing command
[rviz2-5] /opt/ros/galactic/lib/rviz2/rviz2: symbol lookup error: /opt/ros/galactic/lib/librviz_common.so: undefined symbol: _ZNK12class_loader23MultiLibraryClassLoader27getAllAvailableClassLoadersEv
[ERROR] [rviz2-5]: process has died [pid 80602, exit code 127, cmd '/opt/ros/galactic/lib/rviz2/rviz2 -d /home/lin/ros2_code/pillars_ws/install/vision_msgs_rviz_plugins/share/vision_msgs_rviz_plugins/conf/conf.rviz --ros-args -r __node:=rviz2'].

Rolling Release

Are there any plans on releasing this package to Rolling?

Using vision_msgs with ROS2 Humble

I have been having some difficulties integrating vision_msgs into my ROS2 project (it worked great on ROS1).

After cloning and checking out the ROS2 project, I was able to build vision_msgs (located in ros2_ws/src).

To my package.xml, I added: <depend>vision_msgs</depend>

And to CMakeLists.txt:

    find_package(vision_msgs REQUIRED)
    ament_target_dependencies(${PROJECT_NAME} ... vision_msgs)

and even to my target_link_libraries.

When trying to #include <vision_msgs/Detection2DArray.h>, I get an error saying it cannot be located.

I also tried to remove my dependent code and simply build with the dependency, but I still get (even after source install/setup.bash):

/usr/bin/ld: cannot find -lvision_msgs: No such file or directory

Any help would be greatly appreciated.

Thanks!

fatal error: vision_msgs/msg/detail/header__struct.hpp:

In file included from /home/tianbot/yolov5_ws/build/vision_msgs/rosidl_typesupport_cpp/vision_msgs/msg/detection2_d__type_support.cpp:7:
/home/tianbot/yolov5_ws/build/vision_msgs/rosidl_generator_cpp/vision_msgs/msg/detail/detection2_d__struct.hpp:19:10: fatal error: vision_msgs/msg/detail/header__struct.hpp: No such file or directory
19 | #include "vision_msgs/msg/detail/header__struct.hpp"

Ros2 Rolling has a breaking Pose2D msgs

I'm so confused.

I just upgraded to ROS2 Rolling on a machine and my Pose2D msg is different from Galactic's. The reason why I am opening this here is that, according to the path, the Pose2D reference is in vision_msgs, not in geometry_msgs.

New:

class Pose2D(metaclass=Metaclass_Pose2D):
    """Message class 'Pose2D'."""

    __slots__ = [
        '_position',
        '_theta',
    ]

    _fields_and_field_types = {
        'position': 'vision_msgs/Point2D',
        'theta': 'double',
    }

As you can see, this class has a position element which wraps x and y. Are these two values supposed to be inline? Where does this come from? I can't see the change in the Humble branch.

Release for ROS2 Galactic

Hi,

Thank you for your package.

I need this package as a dependency for installing yolo-darknet detection in ROS2-Galactic.
However, I could not find "vision_msgs" for ROS2-Galactic. There seems to have been a discussion on this in #55 & #59, but I could not solve the problem.
Could you please share (a link to the) implementation of your package for ROS2?
Thanks.

Clarify size in BoundingBox2D, BoundingBox3D.

The code says that it is:

# The size of the bounding box, in meters, surrounding the object's center
#   pose.
geometry_msgs/Vector3 size

# The size (in pixels) of the bounding box surrounding the object relative
#   to the pose of its center.
float64 size_x
float64 size_y

Is this size_x/size[0] the distance to the center (halfsize) or the extent in x-direction?
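For what it's worth, a minimal sketch under the assumption that size is the full extent, so the box spans center ± size/2 along each axis (this is an assumption, not a confirmed answer):

    def box2d_corners(center_x, center_y, size_x, size_y):
        # Assuming size_* is the full extent: the box spans center ± size/2.
        half_x, half_y = size_x / 2.0, size_y / 2.0
        return ((center_x - half_x, center_y - half_y),
                (center_x + half_x, center_y + half_y))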

ROS 2 RVIZ plugin

Hey guys,
just wanted to share my RVIZ2 plugin to visualize Detection3DArray msgs. At the moment it is only for Foxy, as the msg definition changed afterwards.
I will update it as described in the readme when I find time.

See here: https://github.com/NovoG93/vision_msgs_rviz_plugins

Open for any suggestions if you have some.

Best

Cannot find vision_msgs/msg/detail/bounding_box2_d__struct.hpp

Hi,
I am having a similar issue to this:
/home/jainish/ros2_ws/build/webots_ros2_msgs/rosidl_generator_cpp/webots_ros2_msgs/msg/detail/camera_recognition_object__struct.hpp:23:10: fatal error: vision_msgs/msg/detail/bounding_box2_d__struct.hpp: No such file or directory
23 | #include "vision_msgs/msg/detail/bounding_box2_d__struct.hpp"

Other packages like geometry_msgs work fine though.

displaying results of vision_msgs

I am sorry, but how can I display a topic of the type vision_msgs?
If I use: rostopic echo /yolov7/yolov7
I get a wild bunch of numbers.
If I use: rostopic echo /yolov7/yolov7/result
WARNING: topic [/yolov7/yolov7/result] does not appear to be published yet

There is nothing to display.

Improper use of Pose2D datatype in BoundingBox2D

The semantic definition of the Pose2D datatype represents metric position in meters (as per the default units of REP 103), while BoundingBox2D says that it's in pixels.

# The 2D position (in pixels) and orientation of the bounding box center.
geometry_msgs/Pose2D center

This should not be redefining the units in an established submessage. Furthermore, the Pose2D message is deprecated and thus should not be used.

A pixel-space representation of this should be created. It could have the same payload, and even the same message name in the vision_msgs namespace, à la vision_msgs/Pose2D.msg, but with the units appropriately defined as pixels instead of meters.

Also, the axes and the direction of the angle should be defined. There are a couple of common but different conventions in image space.

Messages to contain uncertainty information

In reviewing the messages, I see they are missing measures of uncertainty in the detections / output values. This would be valuable information to communicate to downstream users of these messages.

ROS2 Humble build error

Hi,
I am having a problem building the plugin from source. It failed with a linker error:
[screenshot: 2023-08-18 120937, showing the linker error]

I am working on Windows, ROS2 Humble. Rviz2 is built entirely from source. The ROS2 setup is working fine.

Now, I have no idea from which source this linker error is coming. Can you help, please?
P.S.: vision_msgs builds fine.

ObjectHypothesisWithPose with label

Hi,
I'm working on wrapping edgeimpulse on ROS2.

It could be useful to have a string label parameter in the message type ObjectHypothesisWithPose, because it often happens that you also want to send the label.

What do you think about that?

Thank you in advance!

Some questions about vision_msgs project

@Kukanani
Based on your kind suggestion here, we are now evaluating the efforts and benefits of moving the current object_msgs to vision_msgs. This migration may influence some existing projects (like ros2_intel_movidius_ncs, ros2_object_analytics, etc.) and most future projects. We still need your help with some questions before we can make a final decision:

  1. We found that vision_msgs is already available for ROS. Is there any schedule to merge this message type into the ROS2 core, since most of the current projects we are working on are for ROS2?

  2. Will a debian package of this project be available in the future? Some of our projects are being integrated into the ROS2 core, and this message project is one of the dependencies. Since OSRF only accepts debian package dependencies in ROS2 core projects, we'd like to know your plan about this.

vision_msgs is really a valuable project that helps unify the interfaces of computer vision and object detection tasks. Thanks very much for working it out, and we hope to have more chances to collaborate on CV and ROS/ROS2 :)

Support compressed image messages

It'd be useful to support CompressedImage, as opposed to just Images. This is convenient for a few applications e.g. in our case we're sending a Detection2DArray over a socket to roslibpy to display on a website and we'd need to re-encode the data as a jpeg/png otherwise. In general if you're sending tons of bounding box cutouts, sending the raw pixel data can add up to more bytes than the entire compressed image.

I've tested this as a simple modification to Detection2D, which works fine. I'm happy to submit a PR for this if it'd be useful. Would a new message be preferable e.g. CompressedDetection2D, or would an optional parameter to Detection2D work instead?

Published message is not received

Hello!
Tested with ROS melodic on Ubuntu 18.04

rostopic info shows that the topic is published and subscribed as expected.
rostopic hz shows that the topic is being published at a frequency of ~10 Hz, as expected.
rostopic echo is completely silent, and so is the subscriber node that I wrote.

It works if I change the type of the published topic to String or to Image or to Pose2D; it does not work with Detection2D or Detection2DArray or BoundingBox2D.

My code looks like this:

    from time import sleep
    import rospy
    from vision_msgs.msg import Detection2DArray  # this import was missing
    rospy.init_node('t1')
    pub = rospy.Publisher('detection', Detection2DArray, queue_size=10)
    while not rospy.is_shutdown():
        message = Detection2DArray()
        pub.publish(message)
        sleep(0.1)

BoundingBox2DArray missing for galactic

Hi,

I was wondering why the BoundingBox2DArray message is missing from Galactic, and thus not available to install through

apt-get install ros-galactic-vision-msgs

thanks

libros1_bridge.so: undefined reference to `ros1_bridge::Factory<vision_msgs::Detection3D_<std::allocator<void> >

Though this question is not directly related to vision_msgs, I hope someone here might know the answer to this error.

Hello,
I built ROS1 Noetic and ROS2 Galactic both from source.
I am facing the below error while building ros1_bridge:

Starting >>> ros1_bridge
[Processing: ros1_bridge]                              
[Processing: ros1_bridge]                                      
--- stderr: ros1_bridge                                          
libros1_bridge.so: undefined reference to `ros1_bridge::Factory<vision_msgs::Detection3D_<std::allocator<void> >, vision_msgs::msg::Detection3D_<std::allocator<void> > >::convert_1_to_2(vision_msgs::Detection3D_<std::allocator<void> > const&, vision_msgs::msg::Detection3D_<std::allocator<void> >&)'
libros1_bridge.so: undefined reference to `ros1_bridge::Factory<vision_msgs::Detection2D_<std::allocator<void> >, vision_msgs::msg::Detection2D_<std::allocator<void> > >::convert_2_to_1(vision_msgs::msg::Detection2D_<std::allocator<void> > const&, vision_msgs::Detection2D_<std::allocator<void> >&)'
libros1_bridge.so: undefined reference to `ros1_bridge::Factory<vision_msgs::Detection2D_<std::allocator<void> >, vision_msgs::msg::Detection2D_<std::allocator<void> > >::convert_1_to_2(vision_msgs::Detection2D_<std::allocator<void> > const&, vision_msgs::msg::Detection2D_<std::allocator<void> >&)'
libros1_bridge.so: undefined reference to `ros1_bridge::Factory<vision_msgs::Detection3D_<std::allocator<void> >, vision_msgs::msg::Detection3D_<std::allocator<void> > >::convert_2_to_1(vision_msgs::msg::Detection3D_<std::allocator<void> > const&, vision_msgs::Detection3D_<std::allocator<void> >&)'
collect2: error: ld returned 1 exit status
make[2]: *** [dynamic_bridge] Error 1
make[1]: *** [CMakeFiles/dynamic_bridge.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
libros1_bridge.so: undefined reference to `ros1_bridge::Factory<vision_msgs::Detection3D_<std::allocator<void> >, vision_msgs::msg::Detection3D_<std::allocator<void> > >::convert_1_to_2(vision_msgs::Detection3D_<std::allocator<void> > const&, vision_msgs::msg::Detection3D_<std::allocator<void> >&)'
libros1_bridge.so: undefined reference to `ros1_bridge::Factory<vision_msgs::Detection2D_<std::allocator<void> >, vision_msgs::msg::Detection2D_<std::allocator<void> > >::convert_2_to_1(vision_msgs::msg::Detection2D_<std::allocator<void> > const&, vision_msgs::Detection2D_<std::allocator<void> >&)'
libros1_bridge.so: undefined reference to `ros1_bridge::Factory<vision_msgs::Detection2D_<std::allocator<void> >, vision_msgs::msg::Detection2D_<std::allocator<void> > >::convert_1_to_2(vision_msgs::Detection2D_<std::allocator<void> > const&, vision_msgs::msg::Detection2D_<std::allocator<void> >&)'
libros1_bridge.so: undefined reference to `ros1_bridge::Factory<vision_msgs::Detection3D_<std::allocator<void> >, vision_msgs::msg::Detection3D_<std::allocator<void> > >::convert_2_to_1(vision_msgs::msg::Detection3D_<std::allocator<void> > const&, vision_msgs::Detection3D_<std::allocator<void> >&)'
collect2: error: ld returned 1 exit status
make[2]: *** [parameter_bridge] Error 1
make[1]: *** [CMakeFiles/parameter_bridge.dir/all] Error 2
libros1_bridge.so: undefined reference to `ros1_bridge::Factory<vision_msgs::Detection3D_<std::allocator<void> >, vision_msgs::msg::Detection3D_<std::allocator<void> > >::convert_1_to_2(vision_msgs::Detection3D_<std::allocator<void> > const&, vision_msgs::msg::Detection3D_<std::allocator<void> >&)'
libros1_bridge.so: undefined reference to `ros1_bridge::Factory<vision_msgs::Detection2D_<std::allocator<void> >, vision_msgs::msg::Detection2D_<std::allocator<void> > >::convert_2_to_1(vision_msgs::msg::Detection2D_<std::allocator<void> > const&, vision_msgs::Detection2D_<std::allocator<void> >&)'
libros1_bridge.so: undefined reference to `ros1_bridge::Factory<vision_msgs::Detection2D_<std::allocator<void> >, vision_msgs::msg::Detection2D_<std::allocator<void> > >::convert_1_to_2(vision_msgs::Detection2D_<std::allocator<void> > const&, vision_msgs::msg::Detection2D_<std::allocator<void> >&)'
libros1_bridge.so: undefined reference to `ros1_bridge::Factory<vision_msgs::Detection3D_<std::allocator<void> >, vision_msgs::msg::Detection3D_<std::allocator<void> > >::convert_2_to_1(vision_msgs::msg::Detection3D_<std::allocator<void> > const&, vision_msgs::Detection3D_<std::allocator<void> >&)'
collect2: error: ld returned 1 exit status
make[2]: *** [static_bridge] Error 1
make[1]: *** [CMakeFiles/static_bridge.dir/all] Error 2
make: *** [all] Error 2
---
Failed   <<< ros1_bridge [1min 19s, exited with code 2]

Summary: 0 packages finished [1min 20s]
  1 package failed: ros1_bridge
  1 package had stderr output: ros1_bridge

I built ROS1 and ROS2 both from source, and built vision_msgs in both versions.
TIA

IDs for detections?

How about adding an ID field to the Classification/Detection msgs?

While using Detection3D msgs works fine in the visual pipeline, we struggle with connecting the 'Objects' (BB+cloud+[hypothesis]) to other parts of the 'world'.

An example would be a node that creates grasping objects or shape primitives from the box or cloud of the Detection result. These need some id/name to reference them -> now we also need to store the mapping from world objects to detection results. An ID field helps in this case by allowing them to just share the same id.
