What sensors will I be using?

All range finding on this bot will be vision based, using a USB webcam.

Realistically, no single sensing system is ever 100% accurate, so there will always be a need for some other form of sensor.

To prevent damage to the drive motors, my motor controllers monitor current draw and cut power to any motor that has been stalled for over a second. This has the same effect as "bump" sensors on the front of the bot: if it crashes into something, the motors power down and the main processor is informed that movement has stopped.
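The stall detection logic above can be sketched as a small state machine. This is just an illustration of the idea, not the actual controller firmware; the 2.5 A threshold is a placeholder value you would tune for your own motors.

```python
STALL_CURRENT = 2.5   # amps; placeholder stall threshold, tune per motor
STALL_TIMEOUT = 1.0   # seconds a motor may stall before power is cut

class StallGuard:
    """Tracks one motor's current draw and signals a shutdown
    once it has been stalled for longer than STALL_TIMEOUT."""

    def __init__(self):
        self.stalled_since = None  # timestamp when the stall began

    def update(self, current, now):
        """Feed a current reading (amps) and a timestamp (seconds).
        Returns True when power should be cut to this motor."""
        if current > STALL_CURRENT:
            if self.stalled_since is None:
                self.stalled_since = now          # stall just started
            elif now - self.stalled_since >= STALL_TIMEOUT:
                return True                       # stalled too long: shut down
        else:
            self.stalled_since = None             # current normal, reset timer
        return False
```

The main loop would call `update()` on every pass and, when it returns True, stop the motor and flag the main processor that movement has stopped.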

I may or may not install a magnetic compass as well to aid localisation. I'll decide that when I get to the route finding stage of the project.

Vision-based laser ranging.

Both of the most commonly used approaches, optical flow and stereoscopic vision, are far too processor intensive for this project. A computer powerful enough for either would draw too much power for the length of time I'd like my robot to run away from its battery charger.

I decided to take the path less travelled. I use a laser with a line-generating lens to light a line across the area in front of the robot. The camera points in the same direction as the laser but at a different height. The distance to objects can be calculated by observing where in the camera image the laser line appears and working out the range from the parallax.
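The parallax maths boils down to one formula: the angle below the camera's optical axis is proportional to how many pixel rows below the image centre the laser line sits, and range follows from simple trigonometry. A minimal sketch, where the height offset `H`, radians-per-pixel `RPC`, and offset `RO` are placeholder calibration values for my rig, not measured constants:

```python
import math

# Placeholder calibration values; measure these for your own rig:
H = 0.10      # vertical offset between laser and camera axis, metres
RPC = 0.0012  # camera angular resolution, radians per pixel
RO = 0.0      # small angle correction found during calibration

def range_from_row(pixels_from_center):
    """Distance to the object lit by the laser, given how many pixel
    rows below the image centre the laser line appears."""
    theta = pixels_from_center * RPC + RO  # angle below optical axis
    return H / math.tan(theta)             # range by triangulation
```

The further away an object is, the closer the laser line sits to the image centre, so small `pixels_from_center` means long range; calibration of `RPC` and `RO` against objects at known distances is what makes the numbers trustworthy.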

This system is described in the excellent article here: http://www.seattlerobotics.org/encoder/200110/vision.htm

Fortunately technology has advanced a bit since that article was written, and it's now easy enough to do the filtering of the laser line in software.

So, I'm using a USB webcam. You can see it here mounted in its servo-controlled turret:

And some little laser line modules (bought on eBay). The laser modules are mounted to shine parallel to the ground. There are two of them in this picture, mounted in brackets that let me fine-tune the angle they point at:

So first, use the webcam to take a picture of the area in front of your bot:

Next switch on the lasers and take another picture:

Subtract one image from the other to leave you with an image of only the laser line:
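The subtraction step is straightforward with numpy. This is a sketch of the idea rather than my exact code: it assumes greyscale frames as arrays, and the `threshold` of 40 is an arbitrary example value you'd tune to your lighting.

```python
import numpy as np

def isolate_laser(with_laser, without_laser, threshold=40):
    """Subtract the laser-off frame from the laser-on frame and keep
    only pixels that brightened noticeably: the laser line."""
    # Widen to int16 so the subtraction can't wrap around at zero
    diff = with_laser.astype(np.int16) - without_laser.astype(np.int16)
    return diff > threshold          # boolean mask of the laser line

def line_rows(mask):
    """For each image column, the row where the laser line appears,
    or -1 in columns where the laser wasn't detected."""
    return np.where(mask.any(axis=0), mask.argmax(axis=0), -1)
```

`line_rows` gives one row index per column, which is exactly the input the parallax calculation needs.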

Now get your maths head on and convert this into a 2D map.
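Roughly, the conversion goes like this: each laser-line pixel's row gives the forward range by parallax, and its column gives the bearing, which together fix a point on the ground plane. A hedged sketch, where `H`, `RPC` and the image-centre coordinates are placeholder calibration values for an assumed 640x480 camera:

```python
import math

# Placeholder calibration values for an assumed 640x480 camera:
H = 0.10          # laser/camera height offset, metres
RPC = 0.0012      # radians per pixel, assumed equal both axes
CENTER_ROW = 240  # optical centre of the image
CENTER_COL = 320

def map_point(row, col):
    """Convert a laser-line pixel to (x, y) metres on the ground:
    y forward from the bot, x to the right of centre.
    Only valid for rows below the optical centre (row > CENTER_ROW)."""
    theta = (row - CENTER_ROW) * RPC   # angle below the optical axis
    y = H / math.tan(theta)            # forward range from parallax
    phi = (col - CENTER_COL) * RPC     # horizontal bearing
    x = y * math.tan(phi)              # sideways offset at that range
    return x, y
```

Running every detected laser pixel through this gives the cloud of points that the 2D map triangle is drawn from.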

The bot is located at the bottom of the triangle. The line shooting off towards the top shows I still have some work to do on filtering at this stage:

This process takes around 10 seconds to take the pictures and produce the map on the on-board computer. Ideally this would be under a second, but it's really pushing the little NSLU2's memory capacity.

Must try a platform with more memory. Suggestions anyone?

to be continued....

(next joining multiple maps together.)