chrisjseaton / sdp
Group 7: Lucky Number Seven
How often should we update a robot's next course of action after it has been given a command such as "Turn 20 degrees to the right and move forwards"? If updates come too fast, the robot will spend all its time receiving and processing commands and will never actually do anything.
The opposite extreme, acting out one plan until it is achieved, is also clearly flawed, as in the defensive case of shadowing an opponent's attacker.
See the vision branch if this is not yet in master. I have cast/coded to int as a workaround, but this is obviously not a great solution.
Tested over both USB and RF. Likely linked to issue #2, where the Arduino cannot be programmed whilst the header board is attached.
Testing with a reflex action (on receiving 'MOTORS\n', turn all motors on) confirms that the Arduino is not receiving serial input at all, rather than receiving it and failing to respond via serial.
The OpenCV KalmanFilter library is likely the best solution.
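For reference, the predict/correct cycle that OpenCV's KalmanFilter wraps can be sketched directly in NumPy. This is a minimal constant-velocity model for 2D ball tracking, with state `[x, y, vx, vy]` and noise values chosen arbitrarily for illustration (cv2.KalmanFilter(4, 2) would encapsulate the same matrices):

```python
import numpy as np

# Minimal constant-velocity Kalman filter for 2D ball tracking.
# State: [x, y, vx, vy]; measurement: [x, y].
dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)   # state transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # measurement model
Q = np.eye(4) * 1e-3                        # process noise (illustrative)
R = np.eye(2) * 1e-1                        # measurement noise (illustrative)

x = np.zeros(4)      # initial state estimate
P = np.eye(4)        # initial covariance

def step(z):
    """One predict/correct cycle given a measurement z = [x, y]."""
    global x, P
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Correct
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x = x_pred + K @ (np.asarray(z, dtype=float) - H @ x_pred)
    P = (np.eye(4) - K @ H) @ P_pred
    return x[:2]     # filtered ball position

for z in [(10, 10), (11, 10.5), (12, 11)]:
    est = step(z)
```

Using cv2.KalmanFilter instead would just mean assigning these matrices to `transitionMatrix`, `measurementMatrix`, `processNoiseCov` and `measurementNoiseCov`, then calling `predict()` and `correct()` per frame.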
Not 100% sure whether it fails if the folder already exists, having been created by the current user; it probably doesn't.
This is especially a problem on the "kilmore" PC, where /group7/'s OpenCV folder has not been built.
Both chips are now set to ATCN 07; we also need to confirm that this resolves the issues with collisions. Further considerations are packet length, etc.
AVRDude will respond with: avrdude: stk500_getsync(): not in sync: resp=0x68
This is likely the same issue observed here: http://www.instructables.com/id/A-solution-to-avrdude-stk500getsync-not-in-syn/
The article claims that this error is a symptom of having some connection on pin 0. There is no clear solution; the current workaround is to program with the header board detached.
TL;DR:
[MILESTONE2] What I need for the planner (this will be edited based on whether we have it, or as I add more).
Holonomic implementations will have to come a bit later(?)
In detail:
For every plan being made, we need to find the angle to turn to and the displacement to the ball for each of the robots. This can be done easily by calling the relevant pitch object's methods through the world being used by the planner.
The problem is giving commands to the robot within that plan cycle: currently it is only possible to give commands such as TURN-RIGHT, which makes the robot turn right indefinitely until it receives a new plan (which might make it turn left, and the cycle keeps going). This is the implementation used currently (excerpt below):
elif self.mode == 'dog':
    # If the robot does not have the ball, it should go to the ball.
    if self.state == 'noBall':
        # Get the ball position so that we can find the angle to align
        # with it, as well as the displacement.
        ball_x = self.world._ball.x()
        ball_y = self.world._ball.y()
        angle_to_turn_to = self.bot.get_rotation_to_point(ball_x, ball_y)
        distance_to_move = self.bot.get_displacement_to_point(ball_x, ball_y)
        dir_to_turn = get_rotation_direction(self.world._ball)
        # We command the robot to turn to the ball, and move forward if
        # it is facing it.  This implementation is deeply simplified
        # (but works).
        bot_rotate_or_move(dir_to_turn)
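The geometry behind get_rotation_to_point and get_displacement_to_point presumably reduces to atan2 and Pythagoras. A stand-alone sketch (function names and the robot-pose parameters here are hypothetical, not the actual pitch-object API):

```python
import math

# Hypothetical stand-alone versions of the pitch-object helpers: the
# angle the robot must turn through to face (x, y), and the
# straight-line distance to that point.
def rotation_to_point(robot_x, robot_y, robot_angle, x, y):
    """Signed turn angle in radians, normalised into (-pi, pi]."""
    target = math.atan2(y - robot_y, x - robot_x)
    diff = target - robot_angle
    # Normalise so we always take the shorter turn.
    while diff > math.pi:
        diff -= 2 * math.pi
    while diff <= -math.pi:
        diff += 2 * math.pi
    return diff

def displacement_to_point(robot_x, robot_y, x, y):
    """Straight-line distance from the robot to (x, y)."""
    return math.hypot(x - robot_x, y - robot_y)
```

The normalisation step matters: without it a robot facing "almost away" from the ball could be told to turn nearly 360 degrees the long way round.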
Since information such as how much to turn and how far to move forward can be calculated in the planner, it would be advantageous to be able to pass commands that utilise it. So something like:
elif self.mode == 'dog':
    # If the robot does not have the ball, it should go to the ball.
    if self.state == 'noBall':
        # Get the ball position so that we can find the angle to align
        # with it, as well as the displacement.
        ball_x = self.world._ball.x()
        ball_y = self.world._ball.y()
        angle_to_turn_to = self.bot.get_rotation_to_point(ball_x, ball_y)
        distance_to_move = self.bot.get_displacement_to_point(ball_x, ball_y)
        dir_to_turn = get_rotation_direction(self.world._ball)
        # We command the robot to turn to the ball, and move forward if
        # it is facing it.
        bot_rotate_or_move(dir_to_turn, angle_to_turn_to, distance_to_move)
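On the wire this could become a single parameterised command rather than a bare TURN-RIGHT. The command name and format below are purely hypothetical, just to illustrate the shape:

```python
# Hypothetical encoding of a parameterised command: send the direction,
# the angle (degrees) and the distance (cm) in one message so the robot
# can execute the whole step itself instead of turning indefinitely.
def encode_command(direction, angle, distance):
    """Build a single newline-terminated command string, e.g.
    'MOVE R 20 50\n' for 'turn right 20 degrees, then move 50 cm'."""
    return "MOVE %s %d %d\n" % (direction, round(angle), round(distance))

cmd = encode_command('R', 20.3, 49.7)
```

The robot would then own the loop of turning until the angle is consumed and driving until the distance is covered, with the planner free to pre-empt it by sending a fresh command.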
The existing grabber is not ideal. It often struggles to push the ball into the robot's grasp and depends too much on battery level to do so (no clutch gears -> can't drive the motor for too long).
Ball visibility is also poor, particularly because of the yellow pieces on the grabber arm.
Low battery also causes the grabber to sag, as it often cannot reach a point where it stays open under its own weight.
This simply requires a rewrite/reworking of the SDPArduino library. I spoke to Garry, and he says braking is possible with the existing motor board code (it's much like last year's motor mux).
This should be threaded in a reasonable manner, and while doing so some sort of command queueing should be added.
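A minimal sketch of that shape, assuming a planner thread that enqueues commands and a worker thread that drains the queue and writes to the serial link one command at a time (the `sent` list stands in for the serial write here):

```python
import queue
import threading

# Threaded sender with a command queue: the planner enqueues commands
# without blocking, and a worker thread drains the queue in order.
sent = []                      # stands in for serial.write() here
commands = queue.Queue()

def sender():
    while True:
        cmd = commands.get()
        if cmd is None:        # sentinel: shut the worker down
            break
        sent.append(cmd)       # real code would write to the serial port
        commands.task_done()

worker = threading.Thread(target=sender)
worker.start()
for c in ["TURN 20\n", "FORWARD 50\n"]:
    commands.put(c)
commands.put(None)
worker.join()
```

The queue also gives a natural place to implement policies such as dropping stale commands when the planner issues a newer one.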
This is likely an issue with the configuration of the SRF stick and Xino RF chip, particularly the node settings.
AVRDude returns the following errors:
avrdude: ser_recv(): programmer is not responding
avrdude: stk500_recv(): programmer is not responding
Mostly a note to self.
See grabber boxes in worldmodel module, and object sizes in models module.
Not sure why. I'll fix it tomorrow.
It may be worth investing in a new UI at the same time. The OpenCV GUI kit is very limited, whereas e.g. PyQt would allow for something much better.
This is an issue with the alternating-bit implementation.
If the planner is terminated while the robot is expecting bit 1, the link will not recover until the robot expects bit 0 again, so the workaround is to reboot the robot whenever the planner is rebooted.
I can fix this, but the fix will be quite intrusive, so I'm leaving it for now.
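The failure mode falls straight out of the alternating-bit handshake. A toy model (class and method names hypothetical, not the actual robot code) shows why a planner that restarts from bit 0 deadlocks against a robot still expecting bit 1:

```python
# Toy model of the alternating-bit handshake: the robot only accepts a
# frame whose sequence bit matches the one it expects, then flips its
# expectation.  If the planner restarts at bit 0 while the robot still
# expects bit 1, every frame is silently dropped.
class Robot:
    def __init__(self):
        self.expected = 0
        self.accepted = []

    def receive(self, bit, payload):
        if bit != self.expected:
            return False              # wrong bit: frame ignored
        self.accepted.append(payload)
        self.expected ^= 1            # flip for the next frame
        return True

robot = Robot()
robot.receive(0, "TURN")              # accepted; robot now expects bit 1
# Planner restarts and begins again from bit 0:
stuck = not robot.receive(0, "MOVE")  # rejected: robot expects bit 1
```

A resynchronisation message (or persisting the last bit across planner restarts) would remove the need to reboot the robot.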
Find these and write a script to run them on vision startup. Once this is done, we can easily make calibrations etc. pitch-dependent (or time-of-day-dependent) only, rather than machine-dependent.
The processed image displayed by the main GUI (with FPS counter, lines, boxes, etc.) is being fed back into the calibration GUI, making calibration slightly tricky. The calibration image should be taken before any overlay is applied, but after any pre-processing takes place; presumably these steps are being done in the wrong order?
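The intended ordering amounts to snapshotting the frame between pre-processing and overlay drawing. A sketch with the frame simulated as a plain array (the `preprocess`/`draw_overlay` functions are stand-ins, not the real vision code):

```python
import numpy as np

# Intended pipeline order: take the calibration copy AFTER
# pre-processing but BEFORE any overlay is drawn, so the FPS counter,
# lines and boxes never feed back into calibration.
def preprocess(frame):
    return frame // 2                 # stand-in for real pre-processing

def draw_overlay(frame):
    frame[0, 0] = 255                 # stand-in for FPS counter, boxes, etc.
    return frame

raw = np.full((4, 4), 100, dtype=np.uint8)
frame = preprocess(raw)
calibration_frame = frame.copy()      # snapshot for the calibration GUI
display_frame = draw_overlay(frame)   # overlay goes only on the display copy
```

The explicit `.copy()` is the key step: drawing mutates the frame in place, so sharing one buffer between the display and calibration paths reproduces exactly the feedback described above.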
Spoke to Garry; he suggests trying the EV3 equivalents. The EV3 motors are easier to mount, have slightly higher RPM, and are slightly less efficient.
We would have to order these motors ourselves as there is no existing stock.
The structure is currently a bit messy, with the arbiter acting as a means of passing a load of references to what is referred to as a GUI wrapper, but which is actually the entire control loop.
For some reason the vision has stopped displaying the camera feed properly.
Tried on multiple computers, and confirmed that the video feed itself does work (checked with VLC).
It currently looks like this:
http://oi58.tinypic.com/2uj6xat.jpg