limelightdocs's Issues

Marker size setting in AprilTag sections of docs results in incorrect distances

We've been testing the new 2023.0.1 build of the Limelight software on our Limelight 2+ against AprilTag targets for this year's FRC season game.

In doing so, we've found that the recommended value for marker size, 203.2 mm (8 inches), seems to result in distance and position calculations that are off by about a third. That value seems to correspond to a measurement of the target including the white border surrounding it. If we instead set the marker size to 152.4 mm (6 inches), which corresponds to the size of the marker from black edge to black edge, excluding the white border, we get accurate distance and positioning measurements.
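
That factor is consistent with the configured-to-actual size ratio. Assuming the distance estimate scales linearly with the configured marker size, as it does under a pinhole camera model, configuring 203.2 mm when the true black-square size is 152.4 mm gives

    \frac{d_{\text{reported}}}{d_{\text{true}}} = \frac{203.2\ \text{mm}}{152.4\ \text{mm}} = \frac{4}{3} \approx 1.33

so reported distances come out roughly a third too long.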

Is there some configuration we should change to make this work with the recommended value, or should that value be corrected to exclude the 1 inch thick white border on the markers?

New GUI features not in documentation

Hi

I'm trying to get up to speed with Limelight and to track cubes rather than retroreflective targets, so colors matter.

These were added earlier this year...

Red-Balance slider
Blue-Balance slider
Better default color balance settings

Could they be discussed in the online docs, please? For example, how should we be using them?

Thanks
Phil.

Small error in Case Study: Aiming and Range at the same time

Thanks for all of this, our FRC team is really looking forward to using Limelight this year!
I was looking at some of the Case Study code and found a small error in the Aim + Range example: instead of adding the distance adjustment to the right motor command, it subtracts it. The corrected block is below, with the fixed line marked.

if (joystick->GetRawButton(9))
{
        float heading_error = -tx;
        float distance_error = -ty;
        float steering_adjust = 0.0f;

        if (tx > 1.0)
        {
                steering_adjust = KpAim*heading_error - min_aim_command;
        }
        else if (tx < 1.0)
        {
                steering_adjust = KpAim*heading_error + min_aim_command;
        }

        float distance_adjust = KpDistance * distance_error;

        left_command += distance_adjust + steering_adjust;
        right_command += distance_adjust - steering_adjust;  // corrected line: the docs subtract distance_adjust here
}
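
For a differential drive this symmetry is what you want: the distance term is added to both sides so the robot drives straight toward or away from the target, while the steering term is added on one side and subtracted on the other to produce rotation.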

No search in docs

The new documentation format looks amazing, but there doesn't appear to be a search bar.

No Documentation on Current Draw

I have looked around a good amount within the documentation but could not find anything on the current draw of the Limelight. Is there anywhere I can look to find this information?

Conflicting instructions for imaging of Limelight v1

On the getting started page of the documentation, the Limelight 1 tab of the imaging instructions tells users to "Apply power to your limelight" directly after plugging in the USB cable. However, the warning underneath instructs, "Only connect the microUSB cable while imaging." When imaging a Limelight 1, which instruction should be followed?

Lines missing from 2019 java sample

I was reading through the Java sample programs shown on this page:
http://docs.limelightvision.io/en/latest/cs_drive_to_goal_2019.html

(Much appreciated BTW)

In the Java Limelight tracking section, it looks like some key lines are missing, namely the else block around the code that calculates the driving commands.

I've included them in the sample below and just commented them as "Missing".

if (tv < 1.0)
{
  m_LimelightHasValidTarget = false;
  m_LimelightDriveCommand = 0.0;
  m_LimelightSteerCommand = 0.0;
  return;
}
else  // Missing from the docs
{     // Missing from the docs
  m_LimelightHasValidTarget = true;

  // Start with proportional steering
  double steer_cmd = tx * STEER_K;
  m_LimelightSteerCommand = steer_cmd;

  // try to drive forward until the target area reaches our desired area
  double drive_cmd = (DESIRED_TARGET_AREA - ta) * DRIVE_K;

  // don't let the robot drive too fast into the goal
  if (drive_cmd > MAX_DRIVE)
  {
    drive_cmd = MAX_DRIVE;
  }
  m_LimelightDriveCommand = drive_cmd;
}  // Missing from the docs
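
For completeness, here is a sketch of the full method with the else block restored and the NetworkTables reads included. The m_Limelight* fields mirror the docs sample, and the constant values are illustrative tuning numbers, not authoritative:

import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;

public class LimelightTracking {
    // Fields from the docs sample
    private boolean m_LimelightHasValidTarget = false;
    private double m_LimelightDriveCommand = 0.0;
    private double m_LimelightSteerCommand = 0.0;

    // Illustrative tuning constants -- tune these for your own robot
    private static final double STEER_K = 0.03;              // proportional steering gain
    private static final double DRIVE_K = 0.26;              // proportional drive gain
    private static final double DESIRED_TARGET_AREA = 13.0;  // target area (%) at the desired stopping distance
    private static final double MAX_DRIVE = 0.7;             // cap on forward speed

    public void updateLimelightTracking() {
        NetworkTable table = NetworkTableInstance.getDefault().getTable("limelight");
        double tv = table.getEntry("tv").getDouble(0); // 1.0 when a target is visible
        double tx = table.getEntry("tx").getDouble(0); // horizontal offset in degrees
        double ta = table.getEntry("ta").getDouble(0); // target area, percent of image

        if (tv < 1.0) {
            m_LimelightHasValidTarget = false;
            m_LimelightDriveCommand = 0.0;
            m_LimelightSteerCommand = 0.0;
        } else {
            m_LimelightHasValidTarget = true;

            // Start with proportional steering
            m_LimelightSteerCommand = tx * STEER_K;

            // Drive forward until the target area reaches the desired area,
            // without letting the robot drive too fast into the goal
            double driveCmd = (DESIRED_TARGET_AREA - ta) * DRIVE_K;
            m_LimelightDriveCommand = Math.min(driveCmd, MAX_DRIVE);
        }
    }
}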

How does "Snapshots" work

The docs are a bit confusing.

They say: "1 | Take two snapshots per second"

But the software changelog says: "Setting the snapshot value to “1” will only take a single snapshot and reset the value to 0."

So if I want to take a series of snapshots during auto, can I just set "Snapshots" to 1 at the beginning and 0 at the end? Or do I need to set it to 30, or monitor the value and set it to 1 whenever it goes to 0?
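
If the changelog semantics are correct (writing 1 takes a single snapshot and the firmware resets the entry to 0), I assume a periodic re-arm along the lines of the sketch below would approximate a continuous series. This also assumes the snapshot control is the "snapshot" entry in the limelight table:

import edu.wpi.first.networktables.NetworkTableEntry;
import edu.wpi.first.networktables.NetworkTableInstance;

public class SnapshotHelper {
    // Assumes the changelog semantics: writing 1 takes a single snapshot
    // and the firmware resets the entry back to 0 when it finishes.
    // The "snapshot" key in the "limelight" table is also an assumption here.
    private final NetworkTableEntry snapshot =
            NetworkTableInstance.getDefault().getTable("limelight").getEntry("snapshot");

    /** Call from the periodic loop during auto to keep re-arming snapshots. */
    public void periodic() {
        if (snapshot.getDouble(0.0) == 0.0) {
            snapshot.setNumber(1);
        }
    }
}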

Thanks

Framerate locking up while driving.

Limelight 2, newest update.

When driving the robot aggressively, the Limelight will reliably stop updating frames and "hang". The green light on the Limelight goes solid when it hangs, and it will hang for a few seconds before coming back. While not hanging, it stays at a solid 85-90 fps.

Saving frames from camera stream during match for tuning HSV values

Hi Limelight staff, love your product!

My FRC team just bought a Limelight this year, and we're really excited to start using it. However, one thing we want to do is save frames from the camera stream during the match, when we detect contours. We want to do this in order to tune the HSV values for the lighting at competitions. Our code is in Java.

We are having a lot of trouble doing this, however. We understand we could bring a laptop onto the field before the match and tune the values then, but we'd prefer not to have to do that. So far, we've attempted to use OpenCV to access the stream, but have had problems getting the code to recognize the stream.

How could we save frames from the stream?

Our code:

import org.usfirst.frc.team1100.robot.subsystems.vision.Limelight;

import org.opencv.videoio.VideoCapture;
import org.opencv.core.Mat;
import org.opencv.imgcodecs.Imgcodecs;

import edu.wpi.first.wpilibj.command.Command;
import edu.wpi.first.wpilibj.smartdashboard.SmartDashboard;

/**
 * Attempt at capturing images using OpenCV
 */
public class CaptureImage extends Command {

    // Only save one image per match
    boolean imageCaptured = false;
    Limelight lime;
    VideoCapture camera;
    Mat frame;

    public CaptureImage() {
        requires(Limelight.getInstance());
    }

    protected void initialize() {
        // Attempts to open the stream
        camera = new VideoCapture("http://10.11.00.11:5800");
        frame = new Mat();
        lime = Limelight.getInstance();
    }

    protected void execute() {
        if (lime.contoursDetected() && !imageCaptured) {
            if (camera.read(frame)) { // Always returns false
                // Writes Output.jpg to the roboRIO, but it is always blank
                Imgcodecs.imwrite("/home/lvuser/Images/Output.jpg", frame);
                imageCaptured = true;
            } else {
                SmartDashboard.putBoolean("Open", camera.isOpened()); // Always false
            }
        }
    }

    // Make this return true when this Command no longer needs to run execute()
    protected boolean isFinished() {
        SmartDashboard.putBoolean("Image Captured", imageCaptured); // Always false
        return false;
    }

    // Called once after isFinished returns true
    protected void end() {
    }

    // Called when another command which requires one or more of the same
    // subsystems is scheduled to run
    protected void interrupted() {
    }
}
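
An alternative worth sketching, which avoids opening the HTTP stream through OpenCV's VideoCapture entirely, is WPILib's cscore HttpCamera feeding a CvSink. The package names below are from recent WPILib releases (older releases used edu.wpi.cscore and edu.wpi.first.wpilibj.CameraServer), and the stream URL is an assumption based on the Limelight's documented port 5800:

import edu.wpi.first.cameraserver.CameraServer;
import edu.wpi.first.cscore.CvSink;
import edu.wpi.first.cscore.HttpCamera;
import org.opencv.core.Mat;
import org.opencv.imgcodecs.Imgcodecs;

public class StreamCapture {
    // Stream URL is an assumption; port 5800 is the Limelight's documented stream port
    private final HttpCamera camera = new HttpCamera("limelight", "http://10.11.0.11:5800");
    private final CvSink sink = CameraServer.getVideo(camera);
    private final Mat frame = new Mat();

    /** Grabs one frame from the stream and writes it to disk; returns true on success. */
    public boolean captureFrame(String path) {
        // grabFrame returns the frame timestamp, or 0 on error/timeout
        if (sink.grabFrame(frame) == 0) {
            return false;
        }
        return Imgcodecs.imwrite(path, frame);
    }
}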

What algorithm do you apply to generate a list of contours?

Hey all, great project you have here; it's saving my FRC team a lot of time messing with vision.

I was reading through the documentation, and found in the Vision Pipeline Tuning Section, under the Contour Filtering, you have the line:

"After thresholding, Limelight applies a .... to generate a list of contours."

What exactly do you apply here? We ask because we are interested in possibly selecting a hull that isn't necessarily the largest, and we'd like to understand the process.

Here's a link to the exact section on GitHub.
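
For reference, a typical OpenCV step matching that description is to pass the thresholded (binary) image to findContours, as in the sketch below; whether Limelight uses this exact call internally is an assumption:

import java.util.ArrayList;
import java.util.List;

import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.imgproc.Imgproc;

public class ContourStep {
    // A common choice, not confirmed by the docs: whether Limelight uses
    // these exact retrieval/approximation modes is an assumption.
    public static List<MatOfPoint> contoursFromThreshold(Mat binaryImage) {
        List<MatOfPoint> contours = new ArrayList<>();
        Mat hierarchy = new Mat();
        // RETR_EXTERNAL keeps only outermost contours; CHAIN_APPROX_SIMPLE
        // compresses straight segments down to their endpoints.
        Imgproc.findContours(binaryImage, contours, hierarchy,
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
        return contours;
    }
}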

Possible Signage Error in Aiming Using Vision Case Study

The case study states: "If the error is bigger than some threshold, just add a constant to your motor command which roughly represents the minimum amount of power needed for the robot to actually move (you actually want to use a little bit less than this)."

Currently this logic is implemented by the following conditional statements:

if (tx > 1.0)
{
        steering_adjust = Kp*heading_error - min_command;
}
else if (tx < 1.0)
{
        steering_adjust = Kp*heading_error + min_command;
}

Basically, what I interpreted this to mean is that when |tx| > threshold, we add some minimum command so the mechanism rotates in the correct direction and overcomes static friction.

With the current code, the conditional statements work fine when tx is positive, because when tx > 1.0 we subtract the minimum command. They also work fine when tx is less than -1.0, because we add the minimum command. The problem is that the condition tx < 1.0 is too inclusive. For example, when tx is -0.5, which is "smaller" than the threshold of one degree, the minimum command is still applied; and when tx is 0.5, it is applied in the wrong direction, fighting the proportional term. I am wondering if this is intentional, or whether the code was intended to be written as follows:

if (fabs(tx) > 1.0)
{
        if (tx < 0)
        {
                steering_adjust = Kp*heading_error + min_command;
        }
        else
        {
                steering_adjust = Kp*heading_error - min_command;
        }
}
else
{
        steering_adjust = Kp*heading_error;
}

Unable to connect to Limelight unless using static IP on roboRIO

My team and I have been having connection issues on our Limelight 2+ and recently discovered the solution.
I'm sharing what worked for us to help other teams experiencing the same problem and to bring it to your attention.

Prerequisites

  • A fresh Limelight 2+ flashed with v2020.4 (hardware version may not matter)
  • roboRIO v1 flashed with the initial 2022 version
  • Open-Mesh router flashed with the initial 2022 version

You turn on your robot and observe that every component starts successfully: the Limelight's fan turns on, its lights blink, etc.

Problem

  • The Limelight finder can't detect the device
  • limelight.local:5801 does not resolve
  • There are no Limelight entries in NetworkTables

Solution

  1. Go to your roboRIO's configuration page at http://10.TE.AM.2
  2. Set the IP to static (docs)
    These instructions may not be applicable when at an FRC tournament
  3. Power cycle Limelight, roboRIO, and router
  4. The Limelight finder should now work

I'm not sure if this issue is caused by a misconfiguration on our part or if this is unexpected behavior from the Limelight.

Confused by Wording on (ADVANCED) 3D Coordinate Systems

Hello there!

I wanted to ask about the wording under (ADVANCED) 3D Coordinate Systems, specifically in “Limelight Camera Space”. It states that positive Y is associated with the downward direction, but in the Limelight web interface, when adjusting the camera position, positive Y corresponds to the upward direction. The same applies to the “Target Space” subsection. Perhaps I misinterpreted what you have written, but I thought I would mention this to get some clarification. I appreciate your time.

Thanks!
Julia

Target Space documentation definition not following right-hand rule

Is the documentation for target space correct? The axes as documented do not follow the right-hand rule. Or should this be like camera space, where X+ is "pointing to the right of the target" only if you are embodying the tag (looking from behind/out of the tag)?

Target Space
3d Cartesian Coordinate System with (0,0,0) at the center of the target.
X+ → Pointing to the right of the target (If you are looking ________)
Y+ → Pointing downward
Z+ → Pointing out of the target (orthogonal to target's plane).
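
To make the concern concrete: take the viewer's frame while facing the tag, with right = (1, 0, 0), down = (0, -1, 0), and out of the tag toward the viewer = (0, 0, 1). For a right-handed frame we need \hat{x} \times \hat{y} = \hat{z}, but the documented axes give

    \hat{x} \times \hat{y} = (1, 0, 0) \times (0, -1, 0) = (0, 0, -1)

which points into the tag, not out of it. So with Y+ down and Z+ out of the target, X+ must point to the viewer's left, i.e. it is "right" only as seen from behind the tag, matching the camera-space wording.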
