Eurobot 2012: Lessons learned and plans for next year.

Now we’re in the ‘lull’ between the finals and the new rules being released, I thought it would be good to analyse what didn’t quite go to plan this year, and how it can be rectified for next year.

First of all, I stand by the philosophy that we did all we could in the time available. Unforeseen and uncontrollable delays with funding meant that we were still putting the hardware together right up to the week of the competition. This left us with far less practice time on the competition table than I would have liked, and we ended up competing with uncalibrated software.

Aside from this, some behaviour stemmed from our choice of hardware. The motor controllers were chosen more for their relevance to our current research interests than for their suitability as drive controllers. We chose them because they were the only low-powered motor controllers we could find that could be controlled over a CAN-bus. They were designed to be used as independent controllers for positioning motorised arms, etc., rather than for the continuous wheel application we had for them. This means they are very good at controlling the speed of their assigned wheel, but unaware of how fast the other wheel is travelling. Having used dual motor controllers in the past, I didn’t realise quite how significant this would be. If we want to stick with the CAN-bus (which is a beautiful, beautiful system!) and these motor controllers, there are some problems we need to overcome:

  • Driving in a straight line:
This video (0.25x) shows the robot clearly arcing towards the left of the screen. This, unfortunately, isn’t the robot cleverly trying to avoid the opponent!

I’ve been researching the way in which dual motor controllers achieve straight-line driving (11: Feedback speed control). We were trying to correct the course by comparing the distances each side of the robot had travelled at specific points in time, and slowing down the side that had travelled the furthest. This appears to be the wrong approach. It seems to be more effective if one motor becomes a ‘master’ (motor A) and the other a ‘slave’ (motor B). The speed of motor B is increased if it is going slower than motor A, and decreased if it is faster. This makes sense, as it means one wheel is always following the correct course.
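In code, the master/slave idea is only a few lines. This is a minimal sketch in Python rather than our actual firmware — the function name, the gain value, and the idealised simulation are all made up for illustration:

```python
def update_slave(slave_cmd, master_measured, slave_measured, gain=0.5):
    """Nudge the slave (motor B) towards the master's measured speed.

    Speed up B if it is lagging behind A, slow it down if it is
    leading. The gain sets how aggressively each control tick corrects.
    """
    return slave_cmd + gain * (master_measured - slave_measured)

# Idealised simulation: the slave starts 10 units slower than the
# master, and we assume it reaches each commanded speed by the next
# tick, so the speed error shrinks every iteration.
master = 100.0
slave = 90.0
for _ in range(10):
    slave = update_slave(slave, master, slave)
```

Because motor A never chases motor B, the pair can’t wander off chasing each other’s errors — B simply tracks A.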

We need to do some testing to see if we can respond quickly enough to achieve this in software over the CAN-bus, or whether we need to use some sort of hardware comparator on a sub-system to monitor and correct the speeds of the two motors. I’d like to keep as much of this on CAN as possible as it makes debugging easier.

  • Starting synchronisation:
This video (0.5x) shows the robot turning to face the correct direction and then starting to move. The right-hand motor starts before the left, causing the robot to head to the right of the screen. Then the course correction (see above) over-compensates, causing the robot to arc to the left of the screen.

The motor controllers start the motion of the wheels as soon as they receive the CAN message to start. This means that one side will always start (an unpredictable period of time) before the other, causing the path of the robot to skew as it accelerates. If we get the above algorithm / system working correctly, the effect of this will be reduced, but it will still be there. There is, however, a hardware enable feature of the Axiomatic controllers that we aren’t using. If this is tied to a single GPIO on the main control board, we can set the speed and direction of the motors over CAN and then ‘enable’ both motor controllers at the same time. This will hopefully mean that both motors start turning at the same time, giving us a much straighter ‘launch’.
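The start sequence I have in mind would look something like this. It’s a sketch only — `can_send` and `gpio_set` are hypothetical placeholders, not the Axiomatic API, and the message contents and IDs are invented:

```python
def synchronised_start(can_send, gpio_set, left_id, right_id, speed):
    """Pre-load both motor controllers over CAN, then enable together.

    can_send(node_id, message) and gpio_set(pin, value) stand in for
    whatever bus/GPIO layer the main control board provides.
    """
    # 1. Command speed and direction while hardware-enable is low,
    #    so neither controller acts on the message yet.
    can_send(left_id, ("SET_SPEED", speed))
    can_send(right_id, ("SET_SPEED", speed))
    # 2. Raise the single shared enable line: both controllers act on
    #    their pre-loaded commands at (near enough) the same instant.
    gpio_set("MOTOR_ENABLE", True)
```

The key property is simply the ordering: both CAN messages go out before the one shared enable line is raised.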

EDIT: The cause of this behaviour was later found to be a worn gearbox on the left-hand motor, causing it to have a lot of free-play at the wheel. The motors were, in fact, starting at the same time, but the worn gearbox took maybe 20 – 30 degrees of movement before it ‘caught’ and started moving. Lesson here is to not be so quick to blame electronics when there could be a much simpler, mechanical failure!

There are also issues with our positioning. We need to be able to slow the motors down almost to the point of stopping as we approach our target position. At the moment, our ‘crawl speed’ is far too fast, causing the robot to overshoot and occasionally get stuck in a feedback loop. I’d like to explore more sophisticated methods such as PID control for this, although fitting this into a time-triggered system will be challenging.
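For reference, a textbook PID controller is tiny — the hard parts are tuning the gains and scheduling the update at a fixed rate in the time-triggered framework. This sketch treats distance-to-target as the error term; the gains in the usage below are made-up numbers, not tuned values:

```python
class PID:
    """Textbook PID controller, stepped at a fixed interval dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        # Accumulate the integral and differentiate the error, then
        # combine the three terms into one output command.
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)
```

Calling `step()` on the remaining distance each tick and feeding the output to the speed command would, in principle, taper the speed smoothly towards zero at the target instead of ploughing in at a fixed crawl speed.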

We also need to know if we’ve gone off course. We can make positioning through encoders extremely reliable in practice, but variations in the table surface, other robots hitting us, and playing elements being in unexpected places can all have adverse effects on the final position in a real game. Keeping track of our position on the table needs to come from a beacon system, a separate position calculation from the encoders, inertial measurements, or a combination of the three!
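The encoder-only part of that position calculation is standard differential-drive dead reckoning — something like the sketch below, where the track width and the per-sample wheel distances are illustrative values, not our robot’s:

```python
import math

def update_pose(x, y, theta, d_left, d_right, track_width):
    """Advance a (x, y, heading) pose estimate by one encoder sample.

    d_left / d_right are the distances each wheel travelled since the
    last sample; track_width is the distance between the two wheels.
    """
    d_centre = (d_left + d_right) / 2.0        # distance moved by robot centre
    d_theta = (d_right - d_left) / track_width  # change in heading
    # Integrate along the average heading over the sample interval.
    x += d_centre * math.cos(theta + d_theta / 2.0)
    y += d_centre * math.sin(theta + d_theta / 2.0)
    return x, y, theta + d_theta
```

On its own this drifts (which is exactly the problem above — wheel slip and collisions never show up in the encoder counts), which is why fusing it with a beacon system or inertial measurements is attractive.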


Tell me what you think...
