osrf / ovc: the Open Vision Computer
License: Apache License 2.0
Hi,
We are trying to reproduce the ovc3b. The ovc3b project specifies the lens holder PT-LH010M and the CMOS image sensor AR0144CS, but we still don't know which camera lens was used. Could you share the camera lens information with us?
Looking forward to your reply!
Hello! I'm sorry to leave this as an issue, but it seems like it might be the best place for others to find information if they are searching later, rather than resorting to email.
Are there any ROS bags available for download with captured data from the OVC2? I'm extremely interested in exploring this platform. I've looked through the OSRF website and other standard places, but I can't seem to find more info.
Hi Guys,
I am working on integrating ROS (MoveIt, RViz, etc.) with the NVIDIA Isaac SDK and Jetson devices. I was advised to study your project since it uses NVIDIA hardware.
Could you let me know whether your project supports the NVIDIA Jetson AGX Xavier, Jetson NX, or Jetson Nano?
If it does, is there a guide on how to set up the OVC with them? ovc3? ovc4?
Regards & Thanks
Hi,
We want to reproduce the ovc3b, but I have some problems with the ovc3b hardware, as follows. I need your help.
Thanks for your response.
Documenting the issue so I don't forget to look into it.
OVC3 is currently configured to use jumbo frames in its ethernet interface:
ubuntu@arm:~$ cat /etc/dhcp/dhcpd.conf | grep -n 13500
45: option interface-mtu 13500;
This can cause some instabilities: the interface stops sending data in an unpredictable way. Commenting this line solves the issue.
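Based on the snippet above, the minimal fix is to comment out that one option in the DHCP server configuration (line number as shown above):

```
# /etc/dhcp/dhcpd.conf, line 45 -- commented out to avoid interface stalls:
# option interface-mtu 13500;
```

After editing, restart the DHCP server so clients pick up the default MTU on their next lease.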
Hi,
The TE0820-03-3BE21FA module you used on the ovc3b camera board is no longer available. I want to know whether it can be replaced with the 4DE21FA without changing the circuit (our PCB has already been fabricated; only this module is missing).
Cheers,
Jingjing
Brought about by this suggestion: #61 (comment)
I'm debating working on using JSON in #61 or another PR. Here's some sample code that I think I'll be basing the implementation off of:
#include <jsoncpp/json/json.h>
#include <iostream>
#include <sstream>

int main() {
  // Serialize a simple object to a compact JSON string.
  Json::Value root;
  root["name"] = "my_name";
  Json::StreamWriterBuilder builder;
  builder["indentation"] = "";
  std::string output = Json::writeString(builder, root);
  std::cout << output << std::endl;

  // Parse it back and read the field out again.
  std::stringstream ss(output);
  Json::CharReaderBuilder rbuilder;
  Json::Value root_read;
  std::string errs;
  Json::parseFromStream(rbuilder, ss, &root_read, &errs);
  std::cout << errs << std::endl;
  std::cout << root_read["name"] << std::endl;
  return 0;
}
Output:
$ ./a.out
{"name":"my_name"}
"my_name"
Still not sure if I should change the serialization (struct vs. an actual library). I could just replace the whole host -> client message with a length field followed by the JSON string. Something about it doesn't feel right, but it would work... 🤷
@luca-della-vedova thoughts?
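For reference, the length-field-plus-JSON-string framing mentioned above could be sketched like this (a generic sketch only; `encode_msg`/`decode_msg` are hypothetical names, not functions from the OVC codebase):

```cpp
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

// Frame a JSON string as: 4-byte host-order length, then the raw payload.
std::vector<uint8_t> encode_msg(const std::string& json) {
  uint32_t len = static_cast<uint32_t>(json.size());
  std::vector<uint8_t> buf(sizeof(len) + json.size());
  std::memcpy(buf.data(), &len, sizeof(len));
  std::memcpy(buf.data() + sizeof(len), json.data(), json.size());
  return buf;
}

// Recover the JSON payload from a framed buffer.
std::string decode_msg(const std::vector<uint8_t>& buf) {
  uint32_t len = 0;
  std::memcpy(&len, buf.data(), sizeof(len));
  return std::string(buf.begin() + sizeof(len),
                     buf.begin() + sizeof(len) + len);
}
```

A real implementation would also want to pick an explicit byte order and validate the length field against the buffer size before reading.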
Hello,
I am currently working on your OVC3, more specifically "OPEN VISION COMPUTER REV 3A (MARCH 2019)"
I have a board with me.
My plan is to develop an FPGA design for my own application, but it looks like the OVC3 does not have JTAG connectivity.
When I connected the USB-C port to the PC, Vivado's hardware manager did not detect any device.
What are the available options to configure FPGA if I want to load my logic design to OVC3?
I read Trenz's specsheet and manual and found this.
According to this, only booting from SD/flash (eMMC) is possible (assuming you are using the default firmware in the CPLD).
If that's true, what are the available debug options to probe signals in the FPGA?
I am not asking for a specific tutorial for any of these questions.
(Though a brief explanation of how to do that would be appreciated.)
Also I want to know how this switch is mapped to the boot mode selection functionality.
Clearly, JTAGEN selects whether the USB connection goes to the FPGA or the CPLD.
MODE is for SD/eMMC selection.
NOSEQ looks like it performs the same functionality specified in the above screenshot.
(Selecting JTAG mode when the firmware in the last column is installed, I guess?)
But then, what is the USER pin used for?
So my question is...
Thank you!
Best regards,
Han-sok
We are facing an issue with the RGB sensor. In indoor environments, the colors in the images are ok, as shown by the following picture.
However, when we take the sensor outside, pictures look like these:
The yellow and green boxes are indistinguishable. This seems to be more than just a white-balance issue.
Another example during data collection:
The documentation lists specific gains for each Bayer color channel. I tried modifying these, but could not solve the problem.
Any hints are appreciated! Thanks!
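For anyone experimenting with the per-channel gains mentioned above, here is a minimal sketch of applying software gains to a raw RGGB Bayer image; this is a generic illustration, not the OVC or AR0144 driver API, and `apply_bayer_gains` and the RGGB layout are assumptions:

```cpp
#include <cstdint>
#include <vector>

// Apply per-channel gains to a raw RGGB Bayer image in place.
// Even rows alternate R,G; odd rows alternate G,B. Values are clamped
// to the 16-bit range after scaling.
void apply_bayer_gains(std::vector<uint16_t>& raw, int width, int height,
                       float gain_r, float gain_g, float gain_b) {
  for (int y = 0; y < height; ++y) {
    for (int x = 0; x < width; ++x) {
      float g;
      if (y % 2 == 0)
        g = (x % 2 == 0) ? gain_r : gain_g;  // R G row
      else
        g = (x % 2 == 0) ? gain_g : gain_b;  // G B row
      float v = raw[y * width + x] * g;
      raw[y * width + x] = static_cast<uint16_t>(v > 65535.0f ? 65535.0f : v);
    }
  }
}
```

Note that sensor-side analog gains (set via registers) generally behave better than post-hoc digital scaling, which can clip saturated channels; severe outdoor color casts may also point to a missing IR-cut filter rather than gain settings.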
We found that many of the repositories listed at https://osrf.github.io/ovc/software.html cannot be opened, such as Real-time Semantic Segmentation, Stereo Image Splitter, and GPU Accelerated Dense Stereo Disparity Estimation. More details about the visual algorithms used in your paper, The Open Vision Computer: An Integrated Sensing and Compute System for Mobile Robots, are also not found at http://open.vision.computer.
So we wonder whether we could get your visual algorithms to duplicate your experiment (vision-based autonomous navigation of a UAV) once our hardware is finished.
Looking forward to your reply.
Do simple mechanical drawings (dimensions, camera centers, hole placement) exist for either the ovc3a board or perhaps a reference design housing which would have the same information?
I can manually click around and extract this information from the KiCAD project -- is there a better way?
If such a document were created, where should it live in the repository?
Hi there,
Are there any plans for osrf to have a manufacturing partner to build/sell/distribute this project? Alternatively, is anyone here looking into manufacturing these?
Cheers,
Hello,
I recently cloned the OVC3b module with great success. However, when I connect the module to my laptop over USB-C, the image rate is lower than what I get on the OVC itself (15-20 Hz instead of 33 Hz). Have you ever seen such a result? I would like to avoid compressing the images so as not to add latency.
I really appreciate the effort put into this project.
Thank you!
Hi
What would be the recommended method to attach an IMU to this board? Your previous designs usually included some IMU integration, but in the OVC5 it's unclear.
Another question:
Is the OVC5 in its early stages or nearly finished?
Thanks.
When looking over the imu time stamps recorded for a data set, it looks like the time stamps are not all equally spaced. The IMU ran at 200Hz, so one would expect 0.005s between consecutive imu messages. But that's not what we are seeing.
For the Falcam, the recording was done on an Intel i5 laptop. It was reasonably busy, but not excessively, and I don't recall warnings about message drops from the rosbag record. Drivers and rosbag record were all running on the same host.
These are the timestamp-to-timestamp differences observed for the Falcam, aka ovc 0
Same, but now for the OVC1. Here, the driver was running on the TX2 which was connected via ethernet cable to the recording laptop.
Do you have any explanation as to why we are seeing time stamps like this? Could it be CPU load on the host that is running the driver? The OVC1 data is harder to explain, with what appears to be 400Hz spacing between some data.
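The spacing check described above can be sketched like this (a generic sketch; `count_irregular_gaps` is a hypothetical helper, not part of any OVC or rosbag API):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Given IMU message timestamps in seconds, count consecutive pairs whose
// spacing deviates from the nominal period by more than a tolerance.
// For a 200 Hz IMU, nominal_dt would be 0.005 s.
int count_irregular_gaps(const std::vector<double>& stamps,
                         double nominal_dt, double tol) {
  int irregular = 0;
  for (std::size_t i = 1; i < stamps.size(); ++i) {
    double dt = stamps[i] - stamps[i - 1];
    if (std::fabs(dt - nominal_dt) > tol) ++irregular;
  }
  return irregular;
}
```

Runs of half-period gaps (e.g. the apparent 400 Hz spacing) often come from messages being queued under load and then flushed in bursts, so stamping at the driver rather than on receipt, when possible, removes much of this jitter.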
Dear sir, it seems the OVC v4 uses the Sony IMX219 sensor, which is a rolling-shutter camera. Won't that affect running visual odometry or visual-inertial algorithms? How do you address the rolling-shutter issue, or is my understanding of this technology and camera too vague?
I hope you will reply
Thank you
First of all, I would like to thank the whole team for the great work done on this project.
I intend to reproduce the OVC4 and would like to get some feedback on the project.