umrover / mrover-workspace
The University of Michigan Mars Rover Team workspace.
Home Page: https://mrover.org/
Upgrade onboard/filter with non-linear Kalman filtering models (Extended, Unscented).
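A minimal sketch of what the Extended variant adds over a linear filter, for a scalar state (the function names and models here are illustrative, not the onboard/filter API): the nonlinear process and measurement models f and h propagate the state, while their derivatives F and H linearize the covariance and gain computations.

```python
def ekf_step(x, P, u, z, f, F, h, H, Q, R):
    """One Extended Kalman Filter predict/update cycle for a scalar state.

    f/h are the nonlinear process/measurement models; F/H return their
    derivatives (Jacobians) evaluated at the current estimate.
    """
    # Predict: propagate the state through the nonlinear model,
    # and the covariance through its linearization.
    x_pred = f(x, u)
    F_k = F(x, u)
    P_pred = F_k * P * F_k + Q

    # Update: correct with the measurement via the linearized model.
    H_k = H(x_pred)
    S = H_k * P_pred * H_k + R          # innovation covariance
    K = P_pred * H_k / S                # Kalman gain
    x_new = x_pred + K * (z - h(x_pred))
    P_new = (1.0 - K * H_k) * P_pred
    return x_new, P_new
```

With a quadratic measurement model h(x) = x², repeated updates converge on the true state even though a linear filter could not use that measurement directly; the Unscented variant would replace the Jacobians with sigma-point propagation.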
We will be using the EECS 280 unit testing framework with a Makefile inside the ra_kinematics folder. Navigate to that folder to run the tests: the command make test on the command line will build all unit tests and then run them.
Add the ability to drop reference points on the field. These will be light gray points to make it clear they are not like the rest of the items on the field.
TargetList
Obstacle
Odometry

Reset the above when the Reset Rover button is clicked.

When sending a new path, create a method for comparing the newly sent path to the previously sent path, and send the new path that is closest to the old path. This will prevent the sent direction from jumping. Also, establish a check to confirm the center path is blocked, to filter out noise; for example, check that it is blocked for two iterations before actually sending a new path. Once the new path is sent, we won't need to check the center path twice again until the center path is clear, since we already confirmed earlier that it is blocked.
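The two-iteration debounce and closest-path selection described above can be sketched as a small state machine. This is a sketch under assumptions, not the nav code: paths are represented as bearings in degrees (0 = straight ahead), and the class and method names are hypothetical.

```python
class PathSelector:
    """Pick the candidate path closest to the previously sent one, and only
    react to a blocked center path after two consecutive blocked iterations."""

    def __init__(self):
        self.last_bearing = 0.0    # bearing (degrees) of the last path we sent
        self.blocked_count = 0     # consecutive iterations center was blocked
        self.rerouted = False      # have we already sent an alternate path?

    def update(self, center_blocked, candidates):
        # Center clear: re-arm the two-iteration check and drive straight.
        if not center_blocked:
            self.blocked_count = 0
            self.rerouted = False
            self.last_bearing = 0.0
            return 0.0

        # Once we've confirmed the blockage and rerouted, keep trusting it
        # until the center clears again (no need to re-confirm).
        if not self.rerouted:
            self.blocked_count += 1
            if self.blocked_count < 2:
                return self.last_bearing   # could be noise; hold course

        # Send whichever candidate is closest to the previously sent path,
        # so the commanded direction doesn't jump between left and right.
        best = min(candidates, key=lambda b: abs(b - self.last_bearing))
        self.last_bearing = best
        self.rerouted = True
        return best
```

A single noisy "blocked" frame leaves the heading unchanged, and once a side is chosen the selector keeps favoring that side until the center path clears.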
For example, if we are not simulating perception, the perception indicator will display red if we are receiving LCMs from the perception program.
Develop support for the ZED-F9P's RTK capabilities
The onboard/cv code needs refactoring for the latest upgrade of the ZED SDK from 2.8 to 3.2. The SDK will need to be upgraded on both the Jetson and the CV laptop.
Create a tool for tuning the hyperparameters (specifically Q, process noise) of a Kalman filter. Should be a general tool that can be easily adapted to different filter models.
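One simple shape such a tool could take, sketched here for a scalar random-walk filter (the function names, the RMSE metric, and the grid-search strategy are assumptions, not a prescribed design): run the filter over recorded measurements for each candidate Q and keep the value that tracks ground truth best.

```python
import random

def kf_run(zs, q, r, x0=0.0, p0=1.0):
    """Scalar random-walk Kalman filter; returns the state estimates."""
    x, p = x0, p0
    out = []
    for z in zs:
        p += q                      # predict (state model: x_k = x_{k-1})
        k = p / (p + r)             # Kalman gain
        x += k * (z - x)            # update
        p *= (1.0 - k)
        out.append(x)
    return out

def tune_q(zs, truth, r, q_grid):
    """Grid-search the process noise Q against ground truth by RMSE.
    To generalize, swap in any filter model and any error metric."""
    def rmse(q):
        est = kf_run(zs, q, r)
        return (sum((e - t) ** 2 for e, t in zip(est, truth)) / len(truth)) ** 0.5
    return min(q_grid, key=rmse)
```

The same skeleton adapts to other filter models by replacing kf_run, and to field data without ground truth by scoring innovation consistency instead of RMSE.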
Complete re-write of the simulators/filter package. Contains three components:
Nav should check for an obstacle message by checking the distance variable rather than the detected variable in the Obstacle LCM.
Branch to push: ar_threshold
My hypothesis for why AR tags cannot be detected unless they have a white background is that the threshold standards are set too high. When the cv::detectMarkers() function is run, the first thing it does is apply thresholding that turns a grayscale image into a pure black-and-white (binary) image. We are going to run the cv::threshold() function ourselves before we run cv::detectMarkers() on the image. This will allow us to adjust that threshold and hopefully have an easier time detecting AR tags.
Resources:
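For intuition about what the pre-pass does, here is a pure-Python illustration of global binary thresholding (the operation cv::threshold performs with THRESH_BINARY); the real code would call cv::threshold on the cv::Mat before cv::detectMarkers, and this toy version only shows why the cutoff matters.

```python
def binary_threshold(gray, thresh, maxval=255):
    """Every pixel above `thresh` becomes maxval, everything else 0.
    Lowering `thresh` lets a darker background still binarize into a
    clean white region around the tag, instead of going all black."""
    return [[maxval if px > thresh else 0 for px in row] for row in gray]
```

In the second call below, lowering the cutoff recovers a mid-gray pixel that the higher cutoff discarded, which is exactly the effect we want for tags on non-white backgrounds.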
Install OpenCV 4.4.0 in the Vagrant box. After installation is complete, compile the code. It should output a number of errors saying certain functions have been deprecated. Look up the modern equivalents of those functions and add them where needed in the code until it compiles successfully.
Instead of checking left and then right, check both and send the shortest path.
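The selection step might look like the following sketch, assuming each side's search yields a bearing in degrees (or None if that side has no clear path); "shortest" is taken here to mean the smallest deviation from straight ahead.

```python
def shortest_clear_path(left_bearing, right_bearing):
    """Instead of returning the first (left) clear path found, compute both
    sides and return whichever deviates least from straight ahead (0 deg)."""
    if left_bearing is None:
        return right_bearing
    if right_bearing is None:
        return left_bearing
    return left_bearing if abs(left_bearing) <= abs(right_bearing) else right_bearing
```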
We need to be able to easily record images, depth files, and point clouds at one time so that we can create our own test scenes for our code to run on. To do this, we are going to add point cloud writing support to the 'write_frame' build option. When this option is set to true, our program will use the functions already defined in camera.cpp to write depth images and regular images. An additional function will be added, using the point cloud write function, that will also allow us to write point clouds. All of these files will be written to three different folders inside a folder specified by the 'data_folder' build option.
Develop an algorithm to determine whether any interest points are located within the path directly in front of the rover. It should write "path blocked" or "path clear" to the visualizer to indicate the result.
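The core check reduces to a corridor test, sketched below under assumed conventions (not the PCL.cpp implementation): interest points are given as (x, z) in the rover frame, with x lateral and z forward.

```python
def path_blocked(points, rover_width, max_range):
    """An interest point blocks the path if it falls inside the rectangular
    corridor directly ahead of the rover: within half the rover's width
    laterally (x) and closer than max_range ahead (z)."""
    half = rover_width / 2.0
    return any(abs(x) <= half and 0.0 < z <= max_range for x, z in points)
```

The visualizer string then follows directly: "path blocked" if this returns True, "path clear" otherwise.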
We want to do some pre-processing of the image before we run AR tag detection on it. We are going to first find the contours, then identify square shapes, then draw white borders around those squares. If you look at the images below, the process we currently have seems very promising. We have to add this to the mrover-workspace code. The code for this can be found in Murali's repo.
Currently our AR Tag detection requires the rover to detect AR Tags that have white borders. We don't want to have to rely on the presence of white borders during competition, so we are developing code to detect tags without borders.
Develop an algorithm that finds a new clear path that will accommodate the rover's width and outputs the bearing to that heading.
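One possible shape for this algorithm, sketched with assumed conventions (points as (x, z) in the rover frame, bearings in degrees, sweep limits chosen arbitrarily): rotate the obstacle points into each candidate heading's frame and test a rover-width corridor, returning the first clear bearing found sweeping outward from straight ahead.

```python
import math

def find_clear_bearing(points, rover_width, max_range, step_deg=5, max_deg=60):
    """Sweep candidate bearings outward from straight ahead and return the
    first one whose corridor (rover_width wide, max_range deep) contains no
    obstacle points. Returns None if nothing within +/-max_deg is clear."""
    half = rover_width / 2.0

    def clear(bearing_deg):
        th = math.radians(bearing_deg)
        for x, z in points:
            # Rotate each point into the candidate corridor's frame.
            xr = x * math.cos(th) - z * math.sin(th)
            zr = x * math.sin(th) + z * math.cos(th)
            if abs(xr) <= half and 0.0 < zr <= max_range:
                return False
        return True

    for mag in range(0, max_deg + 1, step_deg):
        for bearing in ([0] if mag == 0 else [-mag, mag]):
            if clear(bearing):
                return bearing
    return None
```

Sweeping by magnitude means the returned bearing is always the smallest deviation that clears the rover's width, which is the behavior the issue asks for.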
Check out the README.md and meson_options.txt files. Build and run the code using the options specified in those two files, and make sure the desired functionality is achieved. If it is not, modify the code.
Option 1: Add popups when hovering over buttons to say what their corresponding hotkeys are.
Option 2: Have a help menu
shift+r
shift+space
shift+alt+space
shift+enter
shift+[1-5]
shift+backspace
shift+l
shift+p
shift+g
shift+s
shift+alt+[dms]
See #431 for complete list of current hotkeys
Create an outline of the rover in the 3D PCL visualizer.
The next step in our preprocessing will be to identify square shapes in the image, so that we can impose a white border around them.
In the camera.hpp file we have a camera class. This class is defined by a set of functions in the camera.cpp file. That file has two sets of camera functions: one that pulls images from the ZED and one that pulls images from a folder. You must use the build flags described in README.md, with_zed=off and data_folder='path to photos', to pull images from a folder instead of the ZED and run our code using those images. This system hasn't been utilized in a couple of years, so adjustments are extremely likely.
detected from the LCM Obstacle message
detected from the Obstacle message published by simulators/nav
detected from the Obstacle message published by onboard/cv

simulators/nav already does not rely on this variable (e.g. it is always set to false). This change is dependent on onboard/nav and onboard/cv also not using this variable.
The Find Clear Path function has a lot of repeated code for finding the left clear path and then finding the right clear path. Consolidate this code into one function that takes a parameter determining left versus right, to reduce the amount of repeated code in PCL.cpp.
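The refactor pattern, sketched generically (function names and the +1/-1 side convention are hypothetical, not the PCL.cpp API): the duplicated per-side loop becomes one helper parameterized by a sign, and the caller runs it twice.

```python
def scan_side(offsets, is_clear, direction):
    """Consolidated left/right scan. `direction` is +1 (right) or -1 (left);
    the old code duplicated this loop once per side with the sign hard-coded.
    `offsets` are candidate bearing magnitudes; `is_clear` is a predicate."""
    for mag in offsets:
        bearing = direction * mag
        if is_clear(bearing):
            return bearing
    return None

def find_clear_path(offsets, is_clear):
    """Check both sides with the same helper and keep the closer result."""
    left = scan_side(offsets, is_clear, -1)
    right = scan_side(offsets, is_clear, +1)
    candidates = [b for b in (left, right) if b is not None]
    return min(candidates, key=abs) if candidates else None
```

Passing the side as a parameter keeps one copy of the geometry logic to maintain, and any fix to the corridor test automatically applies to both directions.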
Use the GPU Euclidean Cluster Extraction algorithm and see if there are any performance gains associated with this method. Check out this file for an example of how to implement it, and add it to the pcl.cpp file.
Also, check out this PowerPoint to gain a basic understanding of CUDA and memory allocation on the GPU: http://developer.download.nvidia.com/compute/developertrainingmaterials/presentations/cuda_language/Introduction_to_CUDA_C.pptx
We are going to replace the gate shimmy algorithm with an algorithm that drives to the far post as an approximation for centering the rover between the posts. We have done some integration testing with this.
The concepts from the test cases in simulators/nav_v1/testing should be made into new test cases. We don't need to do very specific ones necessarily. We should have a larger discussion about this when we start working on the issues that pertain to testing (see amverni#15).
In order for remote work to be possible, everyone needs to be able to test the code without having access to the physical equipment that the code runs on. To do this for point clouds, we are going to create a system that reads in point clouds given the appropriate build flag with_zed=off. When this build flag is set, the system should read in .pcd files from a folder specified by the data_folder option.
It's very nice to have a class structure when creating a new processing file like PCL.cpp. It makes your code very clean and adds the various checks that come with a class interface, ensuring all of the functions are implemented properly. It also provides safeguards so that member variables and private functions are not abused.
Every time RANSAC is run in PCL it reaches max iterations, presumably because the other parameters are not tuned correctly. Tune and adjust the RANSAC parameters so that RANSAC doesn't reach max iterations every time.
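For intuition while tuning, the standard RANSAC iteration bound is worth keeping in mind (this reproduces the textbook formula, not anything specific to our PCL.cpp): the number of iterations needed to draw at least one all-inlier sample with confidence p is N = log(1-p) / log(1-w^s), where w is the inlier ratio and s the sample size. A distance threshold set too tight lowers the effective w, which blows N past max_iterations, matching the behavior we're seeing.

```python
import math

def ransac_iterations_needed(inlier_ratio, sample_size=3, confidence=0.99):
    """RANSAC iterations needed to hit at least one all-inlier sample with
    the given confidence: N = log(1-p) / log(1 - w^s)."""
    w = inlier_ratio
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - w ** sample_size))
```

Dropping the effective inlier ratio from 0.8 to 0.2 inflates the needed iterations by nearly two orders of magnitude, so loosening the distance threshold (raising w) is usually the first knob to try before raising max iterations.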
Shift/Ctrl + <arrow>

Nav wants to know the distance to the closest object in front of us. We're going to send them the distance to the closest obstacle by averaging the distances from the rover of all the key points of the nearest obstacle.
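The averaging step can be sketched as follows, assuming each obstacle is a list of key points given as (x, y, z) in the rover frame (the function name and data layout are illustrative): pick the obstacle whose closest key point is nearest, then report the mean distance over all of that obstacle's key points.

```python
def distance_to_nearest_obstacle(obstacles):
    """obstacles: list of obstacles, each a list of (x, y, z) key points.
    Select the nearest obstacle by its closest key point, then average the
    distances of all its key points from the rover (at the origin)."""
    def dist(p):
        return (p[0] ** 2 + p[1] ** 2 + p[2] ** 2) ** 0.5
    nearest = min(obstacles, key=lambda ob: min(dist(p) for p in ob))
    return sum(dist(p) for p in nearest) / len(nearest)
```

Averaging over all key points smooths per-frame jitter in individual points, at the cost of slightly overstating the distance to the obstacle's leading edge.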