This is a description of my participation in a programming contest
held on 11-14 July 2008. I assume the reader is already familiar with
the task; if not,
I recommend reading the task description before continuing here. In short, the
task was to control a Mars rover and drive it home while avoiding crashing into
craters and boulders and being caught by Martians. The process looks like this:
(the green circle is the home base, brown are craters, grey are boulders,
the bright ellipse is the rover's field of view, and the red guys are Martians)
As last year, I was a one-man team. My local time is UTC+7, so for me the
contest started at 2 a.m. Saturday and ended at 2 a.m. Tuesday.
Day 0 (before contest start)
Downloaded the LiveCD image. My native OS is Windows Vista, so I had to use
emulation. Tried VirtualBox with no success: it hangs when switching to graphics
mode. Ran the image in QEMU successfully and tested OCaml and Ruby; they worked
fine, but a simple test program under QEMU ran 10 times slower than in native mode.
It was deep night here when the task was published, so I read it and went to bed.
Since QEMU was too slow, I had to find a different emulator and ended up with
VMware 6, which worked great. Transferring data between the LiveCD environment
and the native system wasn't very convenient, so I did all the development
inside the VM.
First, I decided to implement manual control for my rover to learn how it
reacts to my commands and how it behaves in different conditions.
I chose OCaml as my programming language. My practical experience with
OCaml is quite limited, so I had to learn how to work with sockets (the task
required communicating over a TCP connection), how to set NODELAY on the socket
(this couldn't be done in pure OCaml and required writing a routine in C), and
how to use the Graphics module for interactive control. All that VM setup and
TCP stuff took me several hours, but by the evening the manually controlled
rover was ready and working well.
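As an aside, in current OCaml versions the Unix module exposes TCP_NODELAY directly, so no C stub is needed anymore. A minimal connection sketch under that assumption (host and port are placeholders):

```ocaml
(* Sketch: connecting to the contest server with Nagle's algorithm disabled,
   so each rover command is sent immediately. Assumes a current OCaml where
   Unix.TCP_NODELAY is available; host/port are placeholders. *)
let connect_to_server host port =
  let addr = (Unix.gethostbyname host).Unix.h_addr_list.(0) in
  let sock = Unix.socket Unix.PF_INET Unix.SOCK_STREAM 0 in
  Unix.connect sock (Unix.ADDR_INET (addr, port));
  Unix.setsockopt sock Unix.TCP_NODELAY true;  (* no buffering of commands *)
  sock
```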
There were 2 days left and I had 2 strategy ideas in mind, so my plan was
to implement the simpler strategy
on one day and the more sophisticated one (A* search, which I had successfully
used before in one
of my products) on the other day.
My original idea for a simple algorithm that would work on simple maps but fail
in mazes was to use some kind of electric field where the home base attracts the
rover while obstacles and Martians repel it.
Unfortunately this model suits our task poorly: first, if two obstacles lie
on the path with a passage between them, their combined repulsive force
will prevent the rover from passing and cause it to stop; second, all the
forces are summed, so the global extremum shifts away from the home base when
the object distribution is not uniform. And there can simply be too many local
minima.
So I looked for a better formula, trying different ones in Mathcad, where I
loaded information about one of the maps gathered by my rover.
After some trial and error I found one (click to enlarge):
This was a potential field, which can be thought of as a height map where
the home base is the absolute minimum, obstacles are very high, and the free
space gradually descends towards the base but warps around objects, so that
following the gradient moves us around the objects and brings us to the base.
The radius of influence of each object is calculated as the minimum distance
at which we must start turning to avoid crashing into the object:
where R is the turning radius, equal to speed / rotation_speed.
In the picture above the speed equals the maximum rover speed for the map,
but in my real code the actual speed of approach to the object was used
(computed as the scalar product of the current speed vector and the normalized
vector towards the object). So the actual potential field was dynamic,
depending on the current speed and direction.
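The closing-speed computation can be sketched as follows (the vector type and names are illustrative, not the contest code):

```ocaml
(* Sketch: "closing speed" toward an object = scalar product of the rover's
   velocity with the unit vector toward the object. This speed then sizes
   the object's dynamic avoidance radius. *)
type vec = { x : float; y : float }

let dot a b = (a.x *. b.x) +. (a.y *. b.y)

let closing_speed ~rover_pos ~rover_vel ~obj_pos =
  let d = { x = obj_pos.x -. rover_pos.x; y = obj_pos.y -. rover_pos.y } in
  let len = sqrt (dot d d) in
  if len = 0. then 0.
  else dot rover_vel { x = d.x /. len; y = d.y /. len }
```

A positive result means the rover is approaching the object; a zero or negative result means it is moving tangentially or away, so the avoidance radius can shrink.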
After I had this field function, this height map, I needed to teach my rover
to descend it to the minimum, where the home base was located.
The basic algorithm for controlling the rover was:
Think 3 iterations ahead. In each iteration:
    For each possible command:
        Given the current state, predict the state
        to which the rover will switch if issued this command.
        Given a state, predict where the rover will be after the next time step.
        Knowing this location and the rover's speed there, calculate the field
        (height) value at this point.
        Remember the command that leads to the point with the minimal height value.
Issue the command which leads to the minimal height value 3 iterations ahead.
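The steps above can be sketched as a small recursive search. Here `predict` stands in for the physics model (state after one command over one time step) and `height` for the field function; the command set is simplified (the real one crosses accelerate/brake/coast with left/right/straight):

```ocaml
(* Sketch of the 3-ply lookahead: try every command, recurse over the
   predicted states, keep the command whose subtree reaches the lowest
   field value. `predict` and `height` are assumed to exist. *)
type command = Acc | Brake | Coast  (* simplified command set *)

let commands = [ Acc; Brake; Coast ]

(* Lowest field value reachable in `depth` further steps from `state`. *)
let rec best_height predict height depth state =
  if depth = 0 then height state
  else
    List.fold_left
      (fun best cmd ->
        min best (best_height predict height (depth - 1) (predict state cmd)))
      infinity commands

(* First command of the best 3-step sequence (1 step here + 2 recursive). *)
let choose predict height state =
  fst
    (List.fold_left
       (fun (bc, bh) cmd ->
         let h = best_height predict height 2 (predict state cmd) in
         if h < bh then (cmd, h) else (bc, bh))
       (Coast, infinity) commands)
```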
This algorithm requires the ability to predict the rover's movement, which in
turn requires knowing the parameters of the rover's physics, such as
acceleration, braking, drag coefficient, and rotational acceleration. None of
these values are given, so they must be estimated.
Each time my rover received telemetry data, it observed how the known parameters
changed and calculated the rotational speed and acceleration. The maximum
rotational acceleration observed was used in the modelling. As for forward
motion, the observed acceleration depended on the current speed because of drag.
To calculate the acceleration and drag coefficients
(acc and k), the rover
remembered the observed accelerations at two different speeds and then
calculated them the following way:
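A minimal sketch of this estimation, assuming the linear drag model a(v) = acc - k * v (the exact formula I used was shown in the picture; this is the straightforward two-point solution to that model):

```ocaml
(* Sketch: given two observations (speed, observed acceleration), solve
   a1 = acc -. k *. v1  and  a2 = acc -. k *. v2  for acc and k.
   Subtracting gives k = (a1 - a2) / (v2 - v1), then acc = a1 + k * v1. *)
let estimate (v1, a1) (v2, a2) =
  let k = (a1 -. a2) /. (v2 -. v1) in
  let acc = a1 +. (k *. v1) in
  (acc, k)
```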
Implementing all this physics estimation and modelling took me the end of Day 2
and part of Day 3. There was too little time left to make a sophisticated
pathfinding algorithm, since I didn't have a very clear idea of one. I thought
about placing virtual nodes around obstacles, calculating the distances between
them, and then finding the shortest path on this graph using A*, but I wasn't
sure my rover would be able to drive precisely enough to follow the path;
moreover, the path would consist of straight lines while the rover moves in
smooth curves.
So I continued polishing my simple working strategy. The next thing requiring
attention was the Martians. Including them in my algorithm was very easy:
each Martian was treated as a moving boulder. When thinking 3 iterations ahead,
the Martians' movement was predicted from their positions and speeds by
straight extrapolation (assuming they keep moving forward). When the height map
function was calculated, each predicted Martian position was just a boulder
with its own radius and its calculated crash-avoidance radius.
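The extrapolation step can be sketched as follows (a minimal version with illustrative names; `dt` is the thinking time step):

```ocaml
(* Sketch: a Martian predicted `steps` time steps ahead by straight
   extrapolation of its current velocity, then treated as a boulder
   at that position for the height map. *)
type obj = { px : float; py : float; radius : float }

let predict_martian ~px ~py ~vx ~vy ~dt ~steps ~radius =
  let t = dt *. float_of_int steps in
  { px = px +. (vx *. t); py = py +. (vy *. t); radius }
```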
Another issue was braking near the base: very often my rover ended up
orbiting the base at high speed, unable to land there.
I found a good solution:
I take the normalized vector towards the base and its perpendicular, and
compute the scalar product with the current speed vector V, giving the rover's
tangential speed Vt. With tangential speed Vt the rover will orbit
the base with orbit radius r = Vt / turning_speed.
If r is less than R (the distance to the base), then everything is fine and
the rover can continue as it was. Otherwise it's time to brake.
And if it's time to brake, then when the rover thinks about the next command,
the list of possible commands is reduced to only the braking ones (bl, b, br).
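The orbit test can be sketched like this (the base is assumed to be at the origin, as in the contest coordinate system; names are illustrative):

```ocaml
(* Sketch: brake when the orbit radius implied by the tangential speed
   no longer fits inside the current distance to the base. *)
let must_brake ~rover_x ~rover_y ~vx ~vy ~turning_speed =
  let dist = sqrt ((rover_x *. rover_x) +. (rover_y *. rover_y)) in
  if dist = 0. then false
  else
    (* unit vector toward the base (at the origin), and its perpendicular *)
    let nx = -. rover_x /. dist and ny = -. rover_y /. dist in
    let px = -. ny and py = nx in
    let vt = abs_float ((vx *. px) +. (vy *. py)) in  (* tangential speed *)
    let orbit_r = vt /. turning_speed in
    orbit_r >= dist  (* can't spiral in any more: time to brake *)
```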
As for learning the map, all information about boulders and craters was stored
in a global hash map and reused in later runs. However, when the height map was
calculated, it used only a subset of the known objects: those not very
far from the rover.
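A sketch of such a store (the key granularity and names are my assumptions, not the contest code):

```ocaml
(* Sketch: a global hash table of seen obstacles keyed by rounded position,
   so duplicate sightings from repeated telemetry collapse into one entry.
   The 0.1-unit grid for keys is an arbitrary choice here. *)
type kind = Boulder | Crater

let known : (int * int, kind * float (* radius *)) Hashtbl.t =
  Hashtbl.create 997

let key x y = (int_of_float (x *. 10.), int_of_float (y *. 10.))

let remember kind x y radius =
  Hashtbl.replace known (key x y) (kind, radius)

(* Only objects within `range` of the rover feed the height map. *)
let nearby rx ry range =
  Hashtbl.fold
    (fun (kx, ky) v acc ->
      let dx = (float_of_int kx /. 10.) -. rx
      and dy = (float_of_int ky /. 10.) -. ry in
      if (dx *. dx) +. (dy *. dy) <= range *. range then v :: acc else acc)
    known []
```

Using `Hashtbl.replace` (rather than `add`) keeps one entry per grid cell, which is what makes re-sighting the same boulder in a later run cheap.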
Here is the resulting behavior of my rover on the given sample maps
(scatter & spiral):
There was a bug causing the rover, in the second run on the spiral map, to
bounce into the wall before it woke up.
The same video in high resolution and perfect (lossless) quality in only 0.7 MB:
video_hi_res.avi; you'll need
this codec (which I made, by the way).
The resulting program, which I submitted 13 minutes before the end of the
contest, was some 330 lines of OCaml code with a quite short main thinking
function:
When run, it used about 10% of the CPU.
I had some plans for improving the current strategy: virtually walk down the
height map from the rover's location several steps ahead and make the rover
head straight towards the minimum found. That would make its path straighter
and hence faster. Another idea was to detect local minima during this virtual
walk and place virtual boulders there, so the rover would not get stuck in a
local minimum and would keep going. This would allow solving mazes.
Unfortunately I didn't have time to implement it; I was just 2-3 hours short.
Now I realize there were some flaws in my solution.
My rover started turning only when it became necessary to avoid a crash,
not earlier. This results in a curved trajectory. If it had started earlier,
the path would have been straighter and it would have arrived home faster.
The rover thought in 0.1-second steps and sent a command only once per
0.1 s. If it had modelled its behaviour with a smaller time step and sent
commands more frequently, its path could have been more optimal.
Anyway, it was fun, it was challenging, and I learned a lot. Looking forward
to next year!