clydemcqueen / opencv_cam
ROS2 OpenCV camera driver that supports intra-process communication
Hello,
I am using Ubuntu 20.04 and ROS2 Galactic running in VirtualBox, with a Logitech C615 camera for calibration.
I am trying to build the package as described in the README, but the build fails. I have tried everything I can think of; I would appreciate your help.
Failed <<< opencv_cam [13.8s, exited with code 2]
Summary: 1 package finished [19.4s]
1 package failed: opencv_cam
1 package had stderr output: opencv_cam
When publishing a movie, the playback happens as fast as a single thread can process and publish the frames. Provide an option to slow it down to the rate at which the movie was recorded.
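A sketch of one way to pace the playback, assuming the recorded rate is read via cv::VideoCapture::get(cv::CAP_PROP_FPS) before the loop. Only the timing logic is shown so the snippet compiles without OpenCV; the loop itself is sketched in comments, and capture_and_publish_frame is a hypothetical stand-in for the node's real per-frame work:

```cpp
#include <chrono>
#include <cstdint>
#include <thread>

// Inter-frame period for a given recorded rate; files that report 0 fps fall
// back to an assumed 30 fps.
std::chrono::nanoseconds frame_period(double fps)
{
  if (fps <= 0.0) {
    fps = 30.0;
  }
  return std::chrono::nanoseconds(static_cast<std::int64_t>(1e9 / fps));
}

// Pacing loop sketch: sleep_until avoids the drift that accumulates with
// sleep_for, because each deadline is derived from the previous one.
// void playback_loop(double fps) {
//   auto next = std::chrono::steady_clock::now();
//   while (capture_and_publish_frame()) {  // hypothetical
//     next += frame_period(fps);
//     std::this_thread::sleep_until(next);
//   }
// }
```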
I get this message when I try to build the package:
Failed <<< opencv_cam [18.3s, exited with code 2]
Summary: 1 package finished [22.0s]
1 package failed: opencv_cam
1 package had stderr output: opencv_cam
Hi, glad to see a project that supports intra-process transport!
Zero-copy IPC through the rclcpp loaned-message API is another option to avoid copies, reducing transport latency and CPU usage.
https://github.com/ZhenshengLee/ros2_v4l2_camera implements both intra-process transport and zero-copy transport.
ros2_v4l2_camera depends on https://github.com/ZhenshengLee/ros2_shm_msgs, a library that makes zero-copy transport easier to use with images (OpenCV) and point clouds (PCL and Open3D).
Feel free to check the READMEs:
https://github.com/ZhenshengLee/ros2_v4l2_camera/blob/outdoor/rolling/README.md
https://github.com/ZhenshengLee/ros2_shm_msgs/blob/master/README.md
For more info on zero-copy IPC:
https://design.ros2.org/articles/zero_copy.html
https://github.com/ros2/demos/blob/master/demo_nodes_cpp/src/topics/talker_loaned_message.cpp
opencv_cam runs on Foxy (except perhaps for IPC), but there are a few deprecation warnings. Add a branch for Eloquent, and migrate the default branch to Foxy.
I want to run the node (ros2 run opencv_cam opencv_cam_main) with a CSI camera on a Jetson Nano, but it publishes a green screen.
file = false filename = fps = 0 height = 0 index = 1 width = 0
[INFO] [1699894211.469284797] [opencv_cam]: OpenCV version 4
[ WARN:0] global /opt/opencv/modules/videoio/src/cap_gstreamer.cpp (1761) handleMessage OpenCV | GStreamer warning: Embedded video playback halted; module v4l2src0 reported: Internal data stream error.
[ WARN:0] global /opt/opencv/modules/videoio/src/cap_gstreamer.cpp (888) open OpenCV | GStreamer warning: unable to start pipeline
[ WARN:0] global /opt/opencv/modules/videoio/src/cap_gstreamer.cpp (480) isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created
[ WARN:0] global /opt/opencv/modules/videoio/src/cap_v4l.cpp (1914) getProperty VIDEOIO(V4L2:/dev/video1): Unable to get camera FPS
[INFO] [1699894211.937151307] [opencv_cam]: device 1 open, width 3264, height 2464, device fps -1
[ERROR] [1699894211.937501568] [camera_calibration_parsers]: Failed to detect content in .ini file
[ERROR] [1699894211.937577193] [opencv_cam]: cannot get camera info, will not publish
[INFO] [1699894211.960939068] [opencv_cam]: start publishing
... or possibly just camera_info_parsers
OpenCV allows callers to control fps, width, height and other parameters on device capture. Expose these as parameters.
If streaming from a video camera: use get(CAP_PROP_POS_MSEC) as the time stamp of the message (if this works).
If playing back a file: instead of now(), use a calculation based on the recorded fps and the playback fps. The original stream came from a hardware device that is probably more regular than the interruptible software loop that is generating the image messages.
Since the timestamp is somewhat arbitrary in the file-playback case, pick stamps that land on whole seconds where possible. In other words, if playing back at 5 fps, pick the stamp of the first message to be the whole second nearest to now(); every 5th frame after that will then have a whole-second timestamp. Obviously not very important, but it might be helpful when debugging.
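The whole-second idea can be sketched in plain int64 nanosecond arithmetic (the conversion to an actual message stamp is left out, and both function names are illustrative):

```cpp
#include <cstdint>

constexpr std::int64_t NS_PER_SEC = 1000000000LL;

// Round a nanosecond timestamp to the nearest whole second.
std::int64_t round_to_second(std::int64_t stamp_ns)
{
  return ((stamp_ns + NS_PER_SEC / 2) / NS_PER_SEC) * NS_PER_SEC;
}

// Stamp of frame i, stepping from the rounded start by the playback period.
std::int64_t frame_stamp(std::int64_t start_ns, int fps, std::int64_t i)
{
  return start_ns + i * NS_PER_SEC / fps;
}
```

With fps = 5, frames 0, 5, 10, ... then land exactly on whole seconds.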
It would be nice to have the equivalent of imshow for debugging.
Export a header file to support manual node composition. The header file need only contain a factory method, along the lines of https://github.com/ptrmu/fiducial_vlam_sam/blob/master/fiducial_vlam/include/fiducial_vlam/fiducial_vlam.hpp
/work/opencv_cam_ws/src/opencv_cam/src/opencv_cam_node.cpp: In member function ‘void opencv_cam::OpencvCamNode::loop()’:
/work/opencv_cam_ws/src/opencv_cam/src/opencv_cam_node.cpp:177:80: warning: ‘rclcpp::Duration::Duration(rcl_duration_value_t)’ is deprecated: Use Duration::from_nanoseconds instead or std::chrono_literals. For example:rclcpp::Duration::from_nanoseconds(int64_variable);rclcpp::Duration(0ns); [-Wdeprecated-declarations]
177 | next_stamp_ = next_stamp_ + rclcpp::Duration{1000000000L / publish_fps_};
| ^
In file included from /opt/ros/galactic/include/rclcpp/qos.hpp:20,
from /opt/ros/galactic/include/rclcpp/node_interfaces/node_graph_interface.hpp:31,
from /opt/ros/galactic/include/rclcpp/client.hpp:34,
from /opt/ros/galactic/include/rclcpp/callback_group.hpp:23,
from /opt/ros/galactic/include/rclcpp/any_executable.hpp:20,
from /opt/ros/galactic/include/rclcpp/memory_strategy.hpp:25,
from /opt/ros/galactic/include/rclcpp/memory_strategies.hpp:18,
from /opt/ros/galactic/include/rclcpp/executor_options.hpp:20,
from /opt/ros/galactic/include/rclcpp/executor.hpp:36,
from /opt/ros/galactic/include/rclcpp/executors/multi_threaded_executor.hpp:26,
from /opt/ros/galactic/include/rclcpp/executors.hpp:21,
from /opt/ros/galactic/include/rclcpp/rclcpp.hpp:156,
from /work/opencv_cam_ws/src/opencv_cam/include/opencv_cam/opencv_cam_node.hpp:6,
from /work/opencv_cam_ws/src/opencv_cam/src/opencv_cam_node.cpp:1:
/opt/ros/galactic/include/rclcpp/duration.hpp:46:12: note: declared here
46 | explicit Duration(rcl_duration_value_t nanoseconds);
| ^~~~~~~~
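The warning itself names the fix: replace the deprecated integer constructor with Duration::from_nanoseconds. A sketch of the change, with the period arithmetic shown as plain int64 math so it compiles without rclcpp:

```cpp
#include <cstdint>

// Before (deprecated):
//   next_stamp_ = next_stamp_ + rclcpp::Duration{1000000000L / publish_fps_};
// After:
//   next_stamp_ = next_stamp_ + rclcpp::Duration::from_nanoseconds(1000000000L / publish_fps_);

// The period itself is plain integer division over nanoseconds:
std::int64_t publish_period_ns(std::int64_t publish_fps)
{
  return 1000000000L / publish_fps;
}
```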
ROS2 Eloquent is expected 11/22; betas are out now. Create a Dashing branch, then port master to Eloquent.
The easy and effective work-around is to specify the fps and slow the message generation down to a rate that the subscriber can keep up with.
An explanation is printed before the playback is terminated. I don't know if the node itself is terminated. The termination doesn't happen if there is no subscriber (not a very useful scenario, but a clue as to what is going on at a lower level).
It would be interesting to investigate if there is an API that the publisher can use to figure out if its publish queue is filled. The publisher could then automatically wait until the queue empties a bit before publishing new frames.
Best-effort publishing might solve the crash problem and could be an option, but for development/debugging there will probably be cases where every frame is desired.
Current: empty images are published at a high fps rate at EOF
Desired: stop publishing at EOF.
Changing a parameter currently has no effect. Restart the video capture whenever a parameter changes.
I am trying to get the intra process communication working.
I am just printing the addresses of the published and subscribed messages in the opencv_cam_node and the subscriber_node, and I am getting different addresses, e.g.:
[opencv_cam]: Image message address [PUBLISH]: 0x7f5574002800
[image_subscriber]: Image message address [SUB]: 0x7f55740018e0
[opencv_cam]: Image message address [PUBLISH]: 0x7f5574002800
[image_subscriber]: Image message address [SUB]: 0x7f55740018e0
[opencv_cam]: Image message address [PUBLISH]: 0x7f5574002800
[image_subscriber]: Image message address [SUB]: 0x7f55740018e0
How do I enable the intra process communication such that the messages are shared between both nodes?
Maybe there is no calibration because this video file/stream is going to be used for the calibration. Maybe also add an option to publish a default (arbitrary) calibration in case the consumer requires a camera_info message, even if only for getting a system running.