limelightvision / limelightdocs
Official documentation for the Limelight Smart Camera for FRC.
Home Page: http://docs.limelightvision.io
We've been testing the new 2023.0.1 build of the Limelight software on our Limelight 2+ against AprilTag targets for this year's FRC game.
In doing so, we've found that the recommended value for marker size, 203.2 mm (8 inches), seems to result in distance and position calculations that are off by about a third. That value seems to correspond to a measurement of the target including the white border surrounding it. If we instead set the marker size to 152.4 mm (6 inches), which corresponds to the size of the marker from black edge to black edge, excluding the white border, we get accurate distance and positioning measurements.
Is there some configuration we should change to make this work with the recommended value, or should that value be corrected to exclude the 1-inch-thick white border on the markers?
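For reference, pinhole-model distance estimates scale linearly with the marker size you tell the solver to assume, so an 8 in assumption against a 6 in physical square predicts exactly the roughly-one-third error described above. A minimal sketch of that arithmetic (plain Java, no Limelight API; the 3 m true distance is a made-up illustration):

```java
public class MarkerSizeError {
    // A pinhole-camera distance estimate scales linearly with the
    // physical size assumed for the marker.
    static double reportedDistance(double trueDistance,
                                   double assumedSizeMm, double actualSizeMm) {
        return trueDistance * (assumedSizeMm / actualSizeMm);
    }

    public static void main(String[] args) {
        double trueDistance = 3.0; // meters, hypothetical
        // Assuming 203.2 mm (8 in, including the white border) when the
        // black square is really 152.4 mm (6 in) inflates distance by 4/3.
        double reported = reportedDistance(trueDistance, 203.2, 152.4);
        System.out.println(reported); // about 4.0 m, a third farther than 3.0 m
    }
}
```

Since 203.2 / 152.4 = 4/3, a target truly 3 m away would be reported at about 4 m, matching the "off by about a third" observation.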
Hi
I'm trying to get up to speed with Limelight and trying to work with cubes rather than Retro-Reflectors.
So colors matter....
These were added earlier this year...
Red-Balance slider
Blue-Balance slider
Better default color balance settings
Can they be discussed in the online docs please?
eg: How should we be using them?
Thanks
Phil.
Thanks for all of this, our FRC team is really looking forward to using Limelight this year!
I was looking at some of the Case Study code and found a small error in Aim + Range: instead of adding the distance adjustment to the right motor command, it subtracted it.
```cpp
if (joystick->GetRawButton(9))
{
    float heading_error = -tx;
    float distance_error = -ty;
    float steering_adjust = 0.0f;
    if (tx > 1.0)
    {
        steering_adjust = KpAim*heading_error - min_aim_command;
    }
    else if (tx < 1.0)
    {
        steering_adjust = KpAim*heading_error + min_aim_command;
    }
    float distance_adjust = KpDistance * distance_error;
    left_command += distance_adjust + steering_adjust;
    right_command += distance_adjust - steering_adjust; // here
}
```
The new documentation format looks amazing, but it looks like there is no search bar?
I have looked around a good amount within the documentation, but could not find anything on current draw for the limelight. Is there anywhere I can look to in order to find this information?
Minor issue but the URL for the 254 vision talk on this page:
http://docs.limelightvision.io/en/latest/additional_resources.html
is not correct.
You have:
https://www.youtube.com/watch?v=rLwOkAJqlmo
And it is:
https://www.youtube.com/watch?v=rLwOkAJqImo
The third character from the end should be a capital I, but you have a lowercase l.
On the Getting Started page of the documentation, the Limelight 1 tab of the imaging instructions tells users to "Apply power to your limelight" directly after plugging in the USB cable. However, the warning underneath instructs "Only connect the microUSB cable while imaging." When imaging a Limelight 1, which instructions should be followed?
I was reading through the Java sample programs shown on this page:
http://docs.limelightvision.io/en/latest/cs_drive_to_goal_2019.html
(Much appreciated BTW)
In the Java limelight tracking section, it looks like some key lines are missing... namely the else block around the code that calculates the driving commands.
I've included them in the sample below and just commented them as "Missing".
```java
if (tv < 1.0)
{
    m_LimelightHasValidTarget = false;
    m_LimelightDriveCommand = 0.0;
    m_LimelightSteerCommand = 0.0;
    return;
}
else // -------------------------- Missing
{    // -------------------------- Missing
    m_LimelightHasValidTarget = true;

    // Start with proportional steering
    double steer_cmd = tx * STEER_K;
    m_LimelightSteerCommand = steer_cmd;

    // try to drive forward until the target area reaches our desired area
    double drive_cmd = (DESIRED_TARGET_AREA - ta) * DRIVE_K;

    // don't let the robot drive too fast into the goal
    if (drive_cmd > MAX_DRIVE)
    {
        drive_cmd = MAX_DRIVE;
    }
    m_LimelightDriveCommand = drive_cmd;
} // -------------------------- Missing
```
The docs are a bit confusing.
They say "1 | Take two snapshots per second"
But the s/w change log says: "Setting the snapshot value to “1” will only take a single snapshot and reset the value to 0. "
So if I want to take a series of snapshots during auto, can I just set "Snapshots" to 1 at the beginning and 0 at the end, or do I need to set it to 30, or do I need to monitor the value and set it back to 1 whenever it resets to 0?
Thanks
Limelight 2, newest update.
When driving the robot aggressively, the Limelight will reliably stop updating frames and "hang". The green light on the Limelight goes solid when it hangs. It will hang for a few seconds and then come back. While not hanging, it remains at a solid 85-90 fps.
The documentation hosts some GIFs on Gfycat, but Gfycat has shut down, so those links are now broken:
https://techcrunch.com/2023/07/05/gfycat-shuts-down-on-september-1/
In the first Python code block at https://docs.limelightvision.io/en/latest/networktables_api.html, a semicolon terminates a method call. The semicolon is legal Python, but it is redundant and unidiomatic.
Update the NetworkTables examples to use the subscribe/publish API from the 2023 WPILib release.
Include an example of read-only access (subscribing to topics) and of writing (publishing a pipeline change) to the table.
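In case it helps, here is a sketch of what an updated example might look like against the 2023 WPILib NT4 API (untested; the table name "limelight" and the tv/tx/pipeline keys are the ones the Limelight docs already use, while the class and method names are made up for illustration):

```java
import edu.wpi.first.networktables.DoublePublisher;
import edu.wpi.first.networktables.DoubleSubscriber;
import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;

public class LimelightNt4Sketch {
    private final NetworkTable table =
        NetworkTableInstance.getDefault().getTable("limelight");

    // Read-only access: create each subscriber once, then poll it with get().
    private final DoubleSubscriber tvSub = table.getDoubleTopic("tv").subscribe(0.0);
    private final DoubleSubscriber txSub = table.getDoubleTopic("tx").subscribe(0.0);

    // Writing: create the publisher once, then set() new values.
    private final DoublePublisher pipelinePub = table.getDoubleTopic("pipeline").publish();

    /** Call from robotPeriodic() or teleopPeriodic(). */
    public void periodic() {
        if (tvSub.get() >= 1.0) {    // a target is visible
            double tx = txSub.get(); // horizontal offset, degrees
            // ... feed tx into your steering loop ...
        }
    }

    /** Publish a pipeline change, e.g. selectPipeline(2). */
    public void selectPipeline(int index) {
        pipelinePub.set(index);
    }
}
```

Creating subscribers/publishers once and reusing them is the pattern the WPILib NT4 docs recommend, rather than fetching entries every loop.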
Hi Limelight staff, love your product!
My FRC team just bought a Limelight this year, and we're really excited to start using it. However, one thing we want to do is save frames from the camera stream during the match, when we detect contours. We want to do this in order to tune the HSV values for the lighting at the competitions. Our code is in Java.
We are having a lot of trouble doing this, however. We understand we could bring a laptop onto the field before the match and tune it then, but we'd prefer to not have to do that. So far, we've attempted to use OpenCV to access the stream, but have had problems with recognizing the stream in the code.
How could we save frames from the stream?
Our code:
```java
import org.usfirst.frc.team1100.robot.subsystems.vision.Limelight;
import org.opencv.videoio.VideoCapture;
import org.opencv.core.Mat;
import org.opencv.imgcodecs.Imgcodecs;
import edu.wpi.first.wpilibj.command.Command;
import edu.wpi.first.wpilibj.smartdashboard.SmartDashboard;

/**
 * Attempt at capturing images using OpenCV
 */
public class CaptureImage extends Command {

    // Only save one image per match
    boolean imageCaptured = false;
    Limelight lime;
    VideoCapture camera;
    Mat frame;

    public CaptureImage() {
        requires(Limelight.getInstance());
    }

    protected void initialize() {
        // Attempts to open stream
        camera = new VideoCapture("http://10.11.00.11:5800");
        frame = new Mat();
        lime = Limelight.getInstance();
    }

    protected void execute() {
        if (lime.contoursDetected() && !imageCaptured) {
            if (camera.read(frame)) { // Always returns false
                // Writes Output.jpg to the roboRIO, but it is always blank
                Imgcodecs.imwrite("/home/lvuser/Images/Output.jpg", frame);
                imageCaptured = true;
            } else {
                SmartDashboard.putBoolean("Open", camera.isOpened()); // Always false
            }
        }
    }

    // Make this return true when this Command no longer needs to run execute()
    protected boolean isFinished() {
        SmartDashboard.putBoolean("Image Captured", imageCaptured); // Always false
        return false;
    }

    // Called once after isFinished returns true
    protected void end() {
    }

    // Called when another command which requires one or more of the same
    // subsystems is scheduled to run
    protected void interrupted() {
    }
}
```
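If OpenCV's VideoCapture keeps refusing the stream, one fallback is to read the MJPEG HTTP stream yourself and cut out JPEG frames by their start/end markers (0xFFD8 / 0xFFD9); the bytes between those markers are already a complete JPEG, so no decode step is needed before writing them to a file. A sketch of just the extraction step, assuming you buffer raw bytes from the connection yourself (plain Java, no OpenCV; `MjpegFrameExtractor` is a hypothetical helper name):

```java
import java.util.Arrays;

public class MjpegFrameExtractor {
    /**
     * Returns the first complete JPEG (from the 0xFFD8 start-of-image marker
     * through the 0xFFD9 end-of-image marker) found in buf, or null if the
     * buffer does not yet contain a full frame.
     */
    public static byte[] extractFirstJpeg(byte[] buf) {
        int start = -1;
        for (int i = 0; i + 1 < buf.length; i++) {
            if ((buf[i] & 0xFF) == 0xFF && (buf[i + 1] & 0xFF) == 0xD8) {
                start = i;
                break;
            }
        }
        if (start < 0) return null; // no start-of-image marker yet
        for (int i = start + 2; i + 1 < buf.length; i++) {
            if ((buf[i] & 0xFF) == 0xFF && (buf[i + 1] & 0xFF) == 0xD9) {
                return Arrays.copyOfRange(buf, start, i + 2);
            }
        }
        return null; // end marker not seen yet; read more bytes
    }
}
```

In a command you would read chunks from the stream's InputStream into a growing buffer, call extractFirstJpeg, and write the returned bytes straight to a .jpg on the roboRIO.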
The link for the Limelight GRIP build on the Downloads page is labeled as "Flash Tool Download"
The 2019 Left Eyebrow Target Model link downloads the Dual Target model.
Hey all, great project you have here, it's saving my FRC team a lot of time messing with vision.
I was reading through the documentation, and found in the Vision Pipeline Tuning Section, under the Contour Filtering, you have the line:
"After thresholding, Limelight applies a .... to generate a list of contours."
What exactly do you apply here? We ask because we are interested in possibly selecting a hull that isn't necessarily the largest, and we'd like to understand the process.
The changelog link works when reached through the docs, but the one on the Downloads page doesn't; it just needs to be updated to the link above.
The case study states: "If the error is bigger than some threshold, just add a constant to your motor command which roughly represents the minimum amount of power needed for the robot to actually move (you actually want to use a little bit less than this)."
Currently this logic is implemented by the following conditional statements:
```cpp
if (tx > 1.0)
{
    steering_adjust = Kp*heading_error - min_command;
}
else if (tx < 1.0)
{
    steering_adjust = Kp*heading_error + min_command;
}
```
Basically what I interpreted this to mean is that when |tx| > threshold, add some minimum command to make the mechanism rotate in the correct direction and exceed static friction.
With the current code, the conditionals work fine when tx is greater than 1.0, because we subtract the minimum command, and when tx is less than -1.0, because we add the minimum command. The problem is that the condition tx < 1.0 is too inclusive. For example, when tx is -0.5, which is "smaller" than the threshold of one degree, the minimum command is still in effect. I am wondering if this is intentional, or whether the code was intended to be written as the following:
```java
if (Math.abs(tx) > 1.0) {
    if (tx < 0) {
        steering_adjust = Kp*heading_error + min_command;
    } else {
        steering_adjust = Kp*heading_error - min_command;
    }
} else {
    steering_adjust = Kp*heading_error;
}
```
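The proposed fix can also be written as a small, testable function; a sketch in plain Java (the Kp, min_command, and threshold values here are made up for illustration):

```java
public class SteeringDeadband {
    static final double KP = 0.03;          // illustrative proportional gain
    static final double MIN_COMMAND = 0.05; // illustrative static-friction offset
    static final double THRESHOLD = 1.0;    // degrees

    // Proposed fix: only apply min_command when |tx| exceeds the threshold,
    // signed so that it always pushes the robot toward the target.
    static double steeringAdjust(double tx) {
        double headingError = -tx;
        if (Math.abs(tx) > THRESHOLD) {
            if (tx < 0) {
                return KP * headingError + MIN_COMMAND;
            } else {
                return KP * headingError - MIN_COMMAND;
            }
        }
        return KP * headingError; // inside the deadband: purely proportional
    }
}
```

Inside the one-degree band (tx = ±0.5) the output is purely proportional; outside it the minimum command is applied with the correct sign. The original `else if (tx < 1.0)` branch instead applies min_command for every tx below 1.0, including the whole (-1, 1) band.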
My team and I have been having connection issues on our Limelight 2+ and recently discovered the solution.
I'm sharing what worked for us in this issue to help other teams experiencing the same issue and to bring the issue to your attention.
You turn on your robot and observe that every component starts successfully: the Limelight's fans turn on, it blinks its lights, etc. However, limelight.local:5801 does not resolve, and http://10.TE.AM.2 does not respond either.
I'm not sure if this issue is caused by a misconfiguration on our part or if this is unexpected behavior from the Limelight.
Hello there!
I wanted to ask about the wording under (ADVANCED) 3D Coordinate Systems, specifically in "Limelight Camera Space". It states that positive Y is associated with the downward direction, but in the Limelight web interface, when adjusting camera position, positive Y corresponds to the upward direction. The same applies to the "Target Space" subsection. Perhaps I misinterpreted what you have written, but I thought I would mention this to get some clarification. I appreciate your time.
Thanks!
Julia
Is the documentation for target space correct? The axes do not seem to follow the right-hand rule. Or should this also be like camera space, where X+ is "Pointing to the right of the target (if you are embodying the tag, looking from behind/out of the tag)"?
Target Space
3d Cartesian Coordinate System with (0,0,0) at the center of the target.
X+ → Pointing to the right of the target (If you are looking ________)
Y+ → Pointing downward
Z+ → Pointing out of the target (orthogonal to target's plane).
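Not an official answer, but the handedness question can be checked with a cross product: a frame is right-handed exactly when X × Y = Z. A small sketch in plain Java, expressing both readings of "right of the target" in a common world frame:

```java
import java.util.Arrays;

public class TargetSpaceHandedness {
    // Cross product of two 3-vectors, written as {x, y, z}.
    static double[] cross(double[] a, double[] b) {
        return new double[] {
            a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]
        };
    }

    public static void main(String[] args) {
        // World frame: X = viewer's right, Y = up, Z = from the target
        // toward the viewer. The docs fix the target's Y+ (down) and
        // Z+ (out of the target):
        double[] yTarget = {0, -1, 0};

        // Reading "right of the target" as the VIEWER's right:
        double[] xViewerRight = {1, 0, 0};
        System.out.println(Arrays.toString(cross(xViewerRight, yTarget)));
        // X x Y points INTO the target (-Z): left-handed.

        // Reading it as the TAG's own right (looking from behind/out of it):
        double[] xTagRight = {-1, 0, 0};
        System.out.println(Arrays.toString(cross(xTagRight, yTarget)));
        // X x Y points OUT of the target (+Z): right-handed.
    }
}
```

So taking "right of the target" as the viewer's right gives a left-handed frame, while taking it as the tag's own right, the same convention camera space uses, gives a right-handed one, which matches the wording suggested in the question.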