ftc-ideas — Contributors: manthey
ftc-ideas's Issues

Methods of Determining Robot Position

For FTC, we mostly care about the position on the field (x, y) and the robot orientation on the field (yaw). We don't necessarily care about height (z), pitch, or roll.

  • Dead Reckoning - Estimate position based on an educated guess about velocity
    • To use this you need reasonably accurate estimates of the robot's velocity for a given motor command
    • For both position and orientation, the velocity estimates must be combined (integrated over time) appropriately
    • Any error in the velocity estimate becomes error in position
    • Error compounds over time
  • Motorized Wheel Encoders - Calculate position based on encoder counts
    • Need to know how many encoder counts correspond to a given distance traveled and to degrees turned
      • This can be measured by traveling a known distance forward or back, right or left, and clockwise or counterclockwise, and noting the encoder change per unit traveled.
    • Any wheel slip introduces error
    • Error compounds over time
    • Could you use power draw to estimate slip?
  • Dead Wheel Encoders (odometry pods) - Calculate position based on encoder counts
    • Need to know how many encoder counts correspond to a given distance traveled and to degrees turned
      • This can be measured by traveling a known distance forward or back, right or left, and clockwise or counterclockwise, and noting the encoder change per unit traveled.
    • Appropriate spring force is needed to keep the pods from slipping or skipping
    • Error compounds over time
  • Angular IMU - Calculate orientation from the IMU's angular rate (gyroscope) readings
    • Reasonably accurate if the controller is mounted in a known orientation
    • Relatively immune to slip effects
    • Error compounds over time
  • Linear IMU - Calculate horizontal position by double-integrating the linear accelerometers in the IMU
    • Very inaccurate
    • Error compounds over time
  • AprilTags (also Vuforia in previous seasons) - Calculate position from identified vision targets
    • Absolute position (no error compounding over time)
    • Only as accurate as the underlying model (AprilTags are better than Vuforia)
    • Requires a clear image
    • The original 9.0 SDK reports incorrect angles for AprilTags
  • Physical Constraints - We know the robot is within the field and not where a fixed obstacle (e.g., a post) is located
    • Complex to apply
    • Requires some estimate of position before it can be applied
  • Distance Sensors - If we know where things are in the world, we know where we are
    • Requires some estimate of position before it can be applied
    • Some distance sensors are affected by transparent surfaces
    • Noisy
    • Rate dependent (if our program is busy, they may be less accurate)
    • Requires a fairly complex model of the playing field to use broadly
  • Vision Object Detection - Use TensorFlow or OpenCV
    • Complex to determine what we are looking at
    • Requires some estimate of position before it can be applied
    • Accuracy subject to vision processing
  • Color Sensors - Downward facing to detect tape on the mats
    • Requires some estimate of position before it can be applied
    • Can be very accurate
    • Usually requires multiple sensors to be used effectively (e.g., to steer along a tape line)
  • Stereo Vision - Two cameras let us build a 3D model of what we see
    • Complex to determine what we are looking at
    • Could be very accurate
    • One idea would be to have a pair of cameras looking up and use details of the playing area ceiling to determine position
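The dead-reckoning bullet above can be sketched as a small pose integrator. This is a minimal illustration, not FTC SDK code; all class and field names here are hypothetical, and it assumes you already have robot-relative velocity estimates from somewhere:

```java
/** Minimal dead-reckoning sketch: integrate estimated velocities into a field pose. */
class DeadReckoning {
    double x, y, headingRad;  // field-frame pose estimate, starts at the origin

    /** Advance the pose by dt seconds using estimated robot-relative velocities. */
    void update(double vForward, double vStrafe, double omegaRad, double dt) {
        // Rotate the robot-relative velocity into the field frame, then integrate.
        x += (vForward * Math.cos(headingRad) - vStrafe * Math.sin(headingRad)) * dt;
        y += (vForward * Math.sin(headingRad) + vStrafe * Math.cos(headingRad)) * dt;
        headingRad += omegaRad * dt;  // velocity error accumulates here and never leaves
    }
}
```

Note how any error in `vForward`/`vStrafe`/`omegaRad` is integrated forever, which is exactly why the error compounds over time.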
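The encoder calibration described above (drive a known distance, note the tick change) reduces to a ratio. A hypothetical helper, with made-up names:

```java
/** Hypothetical helpers for calibrating and applying an encoder ticks-per-unit ratio. */
class EncoderCalibration {
    /** Measured once: drive a known distance and record the observed tick change. */
    static double ticksPerUnit(int ticksObserved, double knownDistance) {
        return ticksObserved / knownDistance;
    }

    /** Applied later: convert a tick delta back into distance traveled. */
    static double distance(int ticksDelta, double ticksPerUnit) {
        return ticksDelta / ticksPerUnit;
    }
}
```

The same idea applies to the rotational calibration (ticks per degree turned), with the known distance replaced by a known rotation.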
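For the dead-wheel (odometry pod) bullet, a common arrangement is two parallel pods plus one perpendicular pod. The sketch below shows the usual update math under that assumption; it is illustrative only, and the names are hypothetical:

```java
/** Sketch of a three-pod (two parallel, one perpendicular) dead-wheel odometry update. */
class DeadWheelOdometry {
    double x, y, headingRad;          // field-frame pose estimate
    private final double trackWidth;  // distance between the two parallel pods
    private final double perpOffset;  // perpendicular pod's forward offset from center

    DeadWheelOdometry(double trackWidth, double perpOffset) {
        this.trackWidth = trackWidth;
        this.perpOffset = perpOffset;
    }

    /** dLeft/dRight/dPerp are pod distance deltas (already in distance units). */
    void update(double dLeft, double dRight, double dPerp) {
        double dTheta = (dRight - dLeft) / trackWidth;     // heading change
        double dForward = (dLeft + dRight) / 2.0;          // robot-relative forward motion
        double dStrafe = dPerp - perpOffset * dTheta;      // remove rotation-induced counts
        x += dForward * Math.cos(headingRad) - dStrafe * Math.sin(headingRad);
        y += dForward * Math.sin(headingRad) + dStrafe * Math.cos(headingRad);
        headingRad += dTheta;
    }
}
```

The `trackWidth` and `perpOffset` constants are exactly what the calibration runs in the bullet above are measuring.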
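When using IMU yaw for the angular bullet above, a common gotcha is comparing headings across the ±180° wrap. A small hypothetical helper:

```java
/** Hypothetical helper: wrap an angle in degrees into the (-180, 180] range. */
class AngleUtil {
    static double normalizeDeg(double deg) {
        double a = deg % 360.0;           // Java % keeps the sign of the dividend
        if (a <= -180.0) a += 360.0;
        if (a > 180.0) a -= 360.0;
        return a;
    }
}
```

Without wrapping, a turn-to-heading routine that compares 179° against -179° would spin the long way around instead of crossing the 2° gap.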
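The AprilTag bullet's "absolute position" property can be illustrated in 2D: if we know a tag's field position and can measure the range and bearing to it (plus our own heading, e.g. from the IMU), one detection yields a position fix with no accumulated drift. This is a simplified geometric sketch with hypothetical names, not the SDK's own pose math:

```java
/** Hypothetical 2D sketch: recover an absolute field position from one AprilTag. */
class TagLocalizer {
    /**
     * tagX, tagY: known field position of the tag.
     * headingRad: robot heading in the field frame (e.g., from the IMU).
     * range: measured distance to the tag; bearingRad: angle to the tag
     * relative to the robot's heading. Returns {robotX, robotY}.
     */
    static double[] locate(double tagX, double tagY, double headingRad,
                           double range, double bearingRad) {
        double angle = headingRad + bearingRad;  // field-frame direction from robot to tag
        return new double[]{tagX - range * Math.cos(angle),
                            tagY - range * Math.sin(angle)};
    }
}
```

Unlike the encoder methods, the error here is bounded by the quality of one measurement rather than the sum of all previous ones.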
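For the distance-sensor bullet, the simplest broadly useful case is correcting one coordinate against a known wall, while rejecting implausible (noisy) readings. A hypothetical sketch, assuming the robot faces the wall squarely:

```java
/** Sketch: correct one coordinate against a known wall using a distance sensor. */
class WallCorrection {
    /**
     * wallCoordinate: known field coordinate of the wall the sensor faces.
     * reading: sensor distance; sensorOffset: sensor's offset from robot center.
     * maxTrusted: beyond this, treat the reading as noise and return NaN so the
     * caller keeps its previous estimate.
     */
    static double correctedCoordinate(double wallCoordinate, double reading,
                                      double sensorOffset, double maxTrusted) {
        if (reading <= 0 || reading > maxTrusted) {
            return Double.NaN;  // implausible reading; do not apply a correction
        }
        return wallCoordinate - reading - sensorOffset;
    }
}
```

This only works because we already roughly know which wall we are facing, which is the "requires some estimate of position first" caveat in action.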
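The multi-sensor tape-following idea in the color-sensor bullet is usually a proportional controller on the difference between two sensors straddling the line. A minimal hypothetical sketch (sign convention and `kP` are assumptions to be tuned):

```java
/** Sketch: proportional steering from two downward color sensors straddling a tape line. */
class LineFollower {
    /**
     * leftVal/rightVal are brightness (or hue-match) readings. When centered on the
     * line the readings match and the correction is zero; drifting to one side
     * unbalances them, producing a steering correction proportional to the error.
     */
    static double steeringCorrection(double leftVal, double rightVal, double kP) {
        return kP * (leftVal - rightVal);
    }
}
```

With a single sensor you only know "on tape" or "off tape"; the second sensor is what turns that into a signed error you can steer on.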

OpenCvPipeline in the new VisionPortal

If you have an existing OpenCvPipeline class, it can be wrapped in a class that implements the new VisionProcessor interface, like so:

package org.firstinspires.ftc.teamcode.utility_code;

import android.graphics.Canvas;

import org.firstinspires.ftc.robotcore.external.Telemetry;
import org.firstinspires.ftc.robotcore.internal.camera.calibration.CameraCalibration;
import org.firstinspires.ftc.vision.VisionProcessor;
import org.opencv.core.Mat;

public class PropDetectionProcessor implements VisionProcessor {
    /* Our original class, PropDetection, implemented OpenCvPipeline, plus added a method
     * called getLocation.
     */
    private final PropDetection pipeline;

    /**
     *  Our constructor needs to pass whatever the OpenCvPipeline's constructor wanted.
     *
     * @param t Telemetry object
     */
    public PropDetectionProcessor(Telemetry t) {
        pipeline = new PropDetection(t);
    }

    /**
     * Our original pipeline exposed a computed value.  We just pass through the method.
     *
     * @return whatever the original class returned.
     * @noinspection unused
     */
    public PropDetection.Location getLocation() {
        return pipeline.getLocation();
    }

    /**
     * We aren't doing anything for init.
     *
     * @param width width of the image
     * @param height height of the image
     * @param calibration calibration parameters for the camera
     */
    @Override
    public void init(int width, int height, CameraCalibration calibration) {

    }

    /**
     * We pass through processFrame to the original pipeline, returning the result.
     *
     * @param input image frame
     * @param captureTimeNanos frame capture timestamp, in nanoseconds
     * @return the modified image
     */
    @Override
    public Mat processFrame(Mat input, long captureTimeNanos) {
        return pipeline.processFrame(input);
    }

    /**
     * We aren't doing anything with onDrawFrame.  We could render the output from processFrame,
     * or augment the canvas by drawing graphics on top of it based on our determined actions.
     *
     * @param canvas a place to draw results
     * @param onscreenWidth the width of the display
     * @param onscreenHeight the height of the display
     * @param scaleBmpPxToCanvasPx a scale factor from our image to the display
     * @param scaleCanvasDensity see docs
     * @param userContext see docs
     */
    @Override
    public void onDrawFrame(Canvas canvas, int onscreenWidth, int onscreenHeight, float scaleBmpPxToCanvasPx, float scaleCanvasDensity, Object userContext) {

    }
}
