
Webcam Based DIY Laser Rangefinder


There are many off-the-shelf range finding components available, including ultrasonic, infrared, and even laser rangefinders. All of these devices work well, but in the field of aerial robotics, weight is a primary concern, so it is desirable to get as much functionality as possible out of each component added to an airframe. Miniature robotic rotorcraft, for example, can carry about 100g of payload. It is possible to perform machine vision tasks such as obstacle identification and avoidance through the use of a webcam (or a mini wireless camera interfaced to a computer via a USB adaptor). Better yet, two webcams can provide stereo machine vision, improving obstacle avoidance because depth can be determined. The drawback, of course, is the added weight of a second camera. This page describes how a mini laser pointer can be configured along with a single camera to provide mono machine vision with range information.

This project is largely based on a tutorial found here

Theory of Operation

The diagram below shows how, by projecting a laser dot onto a target in the field of view of a camera, the distance to that target may be calculated. The math is very simple, so this technique works well for machine vision applications that need to run quickly.

So, here is how it works. A laser beam is projected onto an object in the field of view of a camera. This laser beam is ideally parallel to the optical axis of the camera. The dot from the laser is captured along with the rest of the scene, and a simple algorithm is run over the image looking for the brightest pixels. Assuming that the laser dot is the brightest area of the scene (which seems to be true for my dollar-store laser pointer indoors), the dot's position in the image frame is known. We then calculate the range to the object based on where along the y axis of the image the laser dot falls: the closer the dot is to the center of the image, the farther away the object is.
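The brightest-pixel search itself is just a linear scan over the frame. Here is a minimal sketch, assuming an 8-bit grayscale image stored row-major (a simplification of the real code later on this page, which scans the red channel of a color frame; the function name is my own):

```cpp
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

// Scan a row-major 8-bit grayscale image and return the (row, col)
// position of the brightest pixel. Ties keep the first pixel found.
std::pair<std::size_t, std::size_t>
find_brightest(const std::vector<unsigned char> &img,
               std::size_t width, std::size_t height)
{
    unsigned char max_val = 0;
    std::size_t max_row = 0, max_col = 0;
    for (std::size_t row = 0; row < height; ++row) {
        for (std::size_t col = 0; col < width; ++col) {
            unsigned char v = img[row * width + col];
            if (v > max_val) {  // strictly greater: keep earliest maximum
                max_val = v;
                max_row = row;
                max_col = col;
            }
        }
    }
    return std::make_pair(max_row, max_col);
}
```

Because the scan is a single pass over the buffer, it easily keeps up with webcam frame rates.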

As we can see from the diagram earlier in this section, the distance (D) may be calculated:

    D = h / tan(theta)

Of course, to solve this equation you need to know h, a constant fixed by the distance between your laser pointer and camera, as well as theta. Theta is calculated from the dot's position in the image:

    theta = pfc * rpc + ro

where pfc is the number of pixels the dot appears from the center of the focal plane, rpc is a gain (radians per pixel), and ro is a radian offset.

Putting the two equations above together, we get:

    D = h / tan(pfc * rpc + ro)

OK, so the number of pixels from the center of the focal plane at which the laser dot appears (pfc) can simply be counted in the image. What about the other parameters in this equation? We need to perform a calibration to derive them.
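For reference, the combined range equation is easy to check numerically. A small sketch using the calibrated constants reported later on this page (the function name is just for illustration):

```cpp
#include <cassert>
#include <cmath>

// Range from pixel offset, using D = h / tan(pfc * rpc + ro).
// The constants below are the calibrated values for my specific
// camera/laser rig; any other rig needs its own calibration.
double range_cm(int pixels_from_center)
{
    const double rpc = 0.0024259348;   // gain, radians per pixel
    const double ro  = -0.056514344;   // offset, radians
    const double h   = 5.842;          // camera-to-laser separation, cm
    return h / std::tan(pixels_from_center * rpc + ro);
}
```

Note how a smaller pixel offset (a dot nearer the image center) yields a larger range, as the theory predicts.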

To calibrate the system, I collected a series of measurements where I knew the range to the target, as well as the number of pixels the dot was from the center of the image each time. This data is below:

Calibration Data
pixels from center | actual D (cm)

Using the following equation, we can calculate the actual angle from the value of h and the actual distance for each data point:

    theta_actual = arctan(h / D_actual)

Now that we have a theta_actual for each data point, we can come up with a relationship that lets us calculate theta from the number of pixels from the image center. I used a linear relationship (thus a gain and offset are needed). This seems to work well, even though it does not account for the fact that the focal plane is a plane rather than a surface curved at a constant radius around the center of the lens.
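One way to derive the gain and offset is an ordinary least-squares line fit of theta_actual against pixel offset. A minimal sketch of that calibration step (the function name and data handling are my own illustration, not the code used for the original calibration):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

struct Calibration { double gain; double offset; };

// Fit theta = gain * pixels + offset by least squares, where each
// theta_actual comes from a known-range measurement: theta = atan(h / D).
Calibration fit_calibration(const std::vector<double> &pixels,
                            const std::vector<double> &actual_d_cm,
                            double h_cm)
{
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    std::size_t n = pixels.size();
    for (std::size_t k = 0; k < n; ++k) {
        double theta = std::atan(h_cm / actual_d_cm[k]);  // actual angle
        sx  += pixels[k];
        sy  += theta;
        sxx += pixels[k] * pixels[k];
        sxy += pixels[k] * theta;
    }
    double gain = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    double offset = (sy - gain * sx) / n;
    Calibration c = { gain, offset };
    return c;
}
```

A handful of well-spread measurements is enough; the fit averages out pixel-quantization noise in the dot position.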

From my calibration data, I calculated:

Offset (ro) = -0.056514344 radians

Gain (rpc) = 0.0024259348 radians/pixel


I solved for calculated distances, as well as error from actual distance from the calibration data:

Actual and Calculated Range Data
pixels from center | calc D (cm) | actual D (cm) | % error


There are not many parts in my sample rangefinder. I used a piece of cardboard to hold a laser pointer to a webcam so that the laser pointer points in a direction parallel to that of the camera. The parts seen below are laid out on a one-inch grid for reference.

My assembled range finder looks like this:


I have written the software two ways, one using Visual C++ and the other using Visual Basic. You will probably find that the Visual Basic version of the software is much easier to follow than the VC++ code, but as with everything, there is a tradeoff: the VC++ code can be put together for free (assuming that you have Visual Studio), while the VB code requires the purchase of a third-party software package (in addition to Visual Studio).

Visual Basic

The visual basic code that I have written is available as a package named at the bottom of this page.  

For this code to work, you will need the VideoOCX ActiveX component installed on your computer.

The code that describes the functions found in the main form is seen below:

Private Sub exit_Click()
    ' only if running...
    If (Timer1.Enabled) Then
        Timer1.Enabled = False  'Stop Timer
    End If
End Sub

Private Sub Start_Click() 'Init VideoOCX Control, allocate memory and start grabbing
    If (Not Timer1.Enabled) Then
        Start.Caption = "Stop"
        ' Disable internal error messages in VideoOCX
        VideoOCX.SetErrorMessages False
        ' Init control
        If (Not VideoOCX.Init) Then
            ' Init failed. Display error message and end sub
            MsgBox VideoOCX.GetLastErrorString, vbOKOnly, "VideoOCX Error"
            Exit Sub
        End If
        ' Allocate memory for global image handle
        capture_image = VideoOCX.GetColorImageHandle
        ' result_image = VideoOCX_Processed.GetColorImageHandle
        Timer1.Enabled = True 'Start capture timer
        ' Start Capture mode
        If (Not VideoOCX.Start) Then
            ' Start failed. Display error message and end sub
            MsgBox VideoOCX.GetLastErrorString, vbOKOnly, "VideoOCX Error"
        End If
    Else
        ' Already running: stop capturing
        Start.Caption = "Start"
        Timer1.Enabled = False  'Stop Timer
    End If
End Sub

Private Sub Timer1_Timer()
    ' Timer for capturing - handles videoOCXTools
    Dim matrix As Variant
    Dim height, width As Integer
    Dim r, c As Integer
    Dim max_r, max_c As Integer
    Dim max_red As Integer
    Dim gain, offset As Variant
    Dim h_cm As Variant
    Dim range As Integer
    Dim pixels_from_center As Integer
    ' Calibrated parameter for pixel to distance conversion
    gain = 0.0024259348
    offset = -0.056514344
    h_cm = 5.842
    max_red = 0
    ' Capture an image
    If (VideoOCX.Capture(capture_image)) Then
        ' VideoOCX.Show capture_image
        ' Matrix transformation initialization
        matrix = VideoOCX.GetMatrix(capture_image)
        height = VideoOCX.GetHeight
        width = VideoOCX.GetWidth
        ' Image processing code
        ' The laser dot should not be seen above the middle row (with a little pad)
        For r = height / 2 - 20 To height - 1
            ' Our physical setup is roughly calibrated to make the laser
            ' dot in the middle columns...dont bother lookng too far away
            For c = width / 2 - 25 To width / 2 + 24
                ' Look for the largest red pixel value in the scene (red laser)
                If (matrix(c, r, 2) > max_red) Then
                    max_red = matrix(c, r, 2)
                    max_r = r
                    max_c = c
                End If
            Next c
        Next r
        ' Calculate the distance of the laser dot from the middle of the
        ' frame (120 = half the 240-pixel frame height)
        pixels_from_center = max_r - 120

        ' Calculate range in cm based on calibrated parameters
        range = h_cm / Tan(pixels_from_center * gain + offset)

        ' Print laser dot position row and column to screen
        row_val.Caption = max_r
        col_val.Caption = max_c
        ' Print range to laser illuminated object to screen
        range_val.Caption = range
        ' Draw a red vertical line to intersect target
        For r = 0 To height - 1
            matrix(max_c, r, 2) = 255
        Next r
        ' Draw a red horizontal line to intersect target
        For c = 0 To width - 1
            matrix(c, max_r, 2) = 255
        Next c
        VideoOCX.ReleaseMatrixToImageHandle (capture_image)
    End If
    VideoOCX.Show capture_image

End Sub

Screen shots from this code can be seen below:

Visual C++

void CTripodDlg::doMyImageProcessing(LPBITMAPINFOHEADER lpThisBitmapInfoHeader)
{
	// doMyImageProcessing:  This is where you'd write your own image processing code
	// Task: Read a pixel's grayscale value and process accordingly

	unsigned int	W, H;			// Width and Height of current frame [pixels]
	unsigned int	row, col;		// Pixel's row and col positions
	unsigned long	i;			// Dummy variable for row-column vector
	unsigned int	max_row = 0;		// Row of the brightest pixel
	unsigned int	max_col = 0;		// Column of the brightest pixel
	BYTE		max_val = 0;		// Value of the brightest pixel

	// Values used for calculating range from captured image data
	// these values are only for a specific camera and laser setup
	const double	gain = 0.0024259348;	// Gain constant used for converting
						// pixel offset to angle in radians
	const double	offset = -0.056514344;	// Offset constant
	const double	h_cm = 5.842;		// Distance between center of camera and laser
	double		range;			// Calculated range
	unsigned int	pixels_from_center;	// Brightest pixel location from center
						// not bottom of frame
	char		str[80];		// To print message
	CDC		*pDC;			// Device context needed to print message

	W = lpThisBitmapInfoHeader->biWidth;	// biWidth: number of columns
	H = lpThisBitmapInfoHeader->biHeight;	// biHeight: number of rows

	// Find the brightest pixel in the frame
	for (row = 0; row < H; row++) {
		for (col = 0; col < W; col++) {
			// Recall each pixel is composed of 3 bytes
			i = (unsigned long)(row*3*W + 3*col);
			// If the current pixel value is greater than any other,
			// it is the new max pixel
			if (*(m_destinationBmp + i) >= max_val) {
				max_val = *(m_destinationBmp + i);
				max_row = row;
				max_col = col;
			}
		}
	}

	// After each frame, reset max pixel value to zero
	max_val = 0;

	for (row = 0; row < H; row++) {
		for (col = 0; col < W; col++) {
			i = (unsigned long)(row*3*W + 3*col);
			// Draw a white cross-hair over brightest pixel in the output display
			if ((row == max_row) || (col == max_col)) {
				*(m_destinationBmp + i) =
				*(m_destinationBmp + i + 1) =
				*(m_destinationBmp + i + 2) = 255;
			}
		}
	}

	// Calculate distance of brightest pixel from center rather than bottom of frame
	pixels_from_center = 120 - max_row;

	// Calculate range in cm based on bright pixel location, and setup specific constants
	range = h_cm / tan(pixels_from_center * gain + offset);

	// To print message at (row, column) = (75, 580)
	pDC = GetDC();

	// Display frame coordinates as well as calculated range
	sprintf(str, "Max Value at x= %u, y= %u, range= %f cm    ", max_col, max_row, range);
	pDC->TextOut(75, 580, str);
	ReleaseDC(pDC);
}

My complete code for this project is available as a package named at the bottom of this page.  

Note: to run this executable, you will need both the qcsdk and the qc543 driver installed on your computer. Sorry, but you are on your own to find both of these.

Below are two examples of the webcam based laser range finder in action. Note how it looks like there are two laser dots in the second example. This "stray light" is caused by internal reflections in the camera. The reflected dot loses intensity as it bounces within the camera so it does not interfere with the algorithm that detects the brightest pixel in the image.

Future Work

One specific improvement that can be made to this webcam based laser rangefinder is to project a horizontal line, rather than a dot, onto a target. This way, we could calculate the range for every column of the image rather than just one. Such a setup could be used to locate areas of maximum range as places a vehicle could steer toward, while areas of minimum range would be identified as obstacles to avoid.
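Extending the single-dot scan to a projected line amounts to running the same brightest-pixel search once per column. A rough sketch, under the assumption that the line laser is the brightest feature in every column (the function name and signature are illustrative; gain, offset, and h would come from the same calibration as before):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// For each column of a row-major 8-bit grayscale image, find the
// brightest row and convert its offset from the vertical center of the
// frame into a range, using the same D = h / tan(pfc * gain + offset)
// relationship as the single-dot version.
std::vector<double> column_ranges(const std::vector<unsigned char> &img,
                                  std::size_t width, std::size_t height,
                                  double gain, double offset, double h_cm)
{
    std::vector<double> ranges(width);
    for (std::size_t col = 0; col < width; ++col) {
        unsigned char max_val = 0;
        std::size_t max_row = 0;
        for (std::size_t row = 0; row < height; ++row) {
            if (img[row * width + col] > max_val) {
                max_val = img[row * width + col];
                max_row = row;
            }
        }
        // pixels from the vertical center of the frame
        double pfc = (double)max_row - (double)height / 2.0;
        ranges[col] = h_cm / std::tan(pfc * gain + offset);
    }
    return ranges;
}
```

Steering logic could then simply pick the column whose range is largest.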

Follow Up

Based on many questions and comments that I have received, it appears that a lot of people have tried to duplicate this effort.  Please keep in mind that this work was originally done before 2004 (a way long time ago).  If I were to do this all over, I would use OpenCV for the vision components.  If I have some extra time, I will create an example and post it here.  
Todd Danko,
Aug 27, 2009, 6:30 PM