OCamCalib: Omnidirectional Camera Calibration Toolbox for Matlab

Omnidirectional Camera Calibration Toolbox for Matlab (for Windows, MacOS & Linux)

Download OCamCalib_v3.0.zip

Includes Automatic Corner Extraction and undistortion functions (Matlab & C/C++)

This toolbox is currently used by NASA, PHILIPS, BOSCH, DAIMLER

Please report any bug, question, suggestion to the Google group: ocamcalib-toolbox (at) googlegroups (dot) com

or to the group webpage

This Toolbox was partially inspired by the "Caltech Calibration Toolbox" by Jean-Yves Bouguet

TUTORIAL

Please read this tutorial very carefully before contacting me for help.

1. Introduction to the Toolbox

The OcamCalib Toolbox for Matlab allows the user (including inexpert users) to calibrate any central omnidirectional camera, that is, any panoramic camera having a single effective viewpoint (see section 18). The Toolbox implements the procedure initially described in the paper [1] and later extended in [2] and [3]. A detailed introduction to the camera model is given in section 19 of this tutorial. Furthermore, you can also see a demo of how the toolbox works here.

The Toolbox lets the user calibrate the omnidirectional camera easily and quickly through two steps. First, the user collects a few pictures of a checkerboard shown at different positions and orientations. Then, the user is asked to extract the corner points. With the new version of the toolbox this operation is done completely automatically, so no manual extraction is needed. After these two steps, calibration is performed completely automatically.

After the calibration, the toolbox provides two functions (CAM2WORLD and WORLD2CAM) which express the relation between a given pixel point and its projection onto the unit sphere, that is, a 3D vector emanating from the single effective viewpoint (see section 18). This relation clearly depends on the mirror shape and on the intrinsic parameters of the camera. The novel aspects of the OCamCalib Toolbox with respect to other toolboxes are the following:

    • The toolbox is the only one with Automatic Corner Extraction (no manual extraction is required).

    • The toolbox does not require a priori knowledge about the mirror shape.

    • It does not require calibrating the perspective camera separately: the system camera-mirror is treated as a unique compact system that encapsulates both the intrinsic parameters of the camera and the parameters of the mirror.

    • The detection of the image center is performed automatically, and it does not require the visibility of the circular external boundary of the mirror. Unlike other toolboxes, which require the external boundary of the mirror to be visible in order to determine the image center, the OCamCalib Toolbox identifies the center without any user interaction!

The calibration performed by the OCamCalib Toolbox is based on the following hypotheses:

    • The camera-mirror system possesses a single effective viewpoint (see section 18 for a definition), or at least a "quasi" single viewpoint. In fact, the Toolbox is able to provide an optimal solution even when the single-viewpoint property is not perfectly verified (for instance when the camera optical center is not exactly at the focus of the hyperbola, or for spherical mirrors). Even in the latter case, the Toolbox has been shown to give very good calibration results (reprojection error < 0.5 pixels!).

2. Software requirements

The OCamCalib Toolbox for Matlab has been successfully tested under Matlab 8.0 (R2012b) and 8.1 (R2013a) for Windows, MacOS and Linux.

The Calibration Refinement tool requires the Matlab Optimization Toolbox, in particular the function lsqnonlin, which you should have by default.

Please report any bug, question, suggestion, or special request to me: davide (dot) scaramuzza (at) ieee (dot) org

3. Download, install and run the Toolbox

You can download the OcamCalib Toolbox from the links given at the top of this page.

Unzip the file, run Matlab, and type ocam_calib.

For catadioptric and fisheye cameras up to 195 degrees

Author: Davide Scaramuzza

4. Print the pattern and capture the images

Download the calibration pattern from here.

Print it, for example, on an A4 sheet of paper and attach it to a piece of cardboard. Make sure that there is a thick white border all around the pattern; this white border is needed by the Automatic Checkerboard Extraction tool to facilitate the corner extraction.

Now use your favorite image capture program to take the images with your camera. 6 to 10 images should be enough.

ATTENTION!

In order to obtain good calibration results, I suggest the following:

1. Bring the checkerboard as close to the mirror or to the fisheye lens as you can (see sample images below). This will improve the calibration and will increase the chances that the Automatic Checkerboard Extraction tool finds all the corners! Make sure that every corner of the checkerboard is visible in each image. For the Automatic Checkerboard Extraction tool it is furthermore important that a white border is present around the pattern.

2. Take pictures of the checkerboard so as to cover the whole visible area of the camera, e.g. from all around the mirror. By doing this, you allow the calibration to compensate for possible misalignments between the camera and mirror axes. The second and most important reason for doing this is that it helps the automatic detection of the center of the omnidirectional image.

In the following figure you can see the sample images of the checkerboard I will use in this tutorial:

Here are other sample images captured with a fisheye camera with a 190-degree field of view. You can see them here.

5. Load images

Copy all the images into the OCamCalib folder. The first step in calibrating your omnidirectional camera consists in loading the images of a checkerboard shown at different positions and orientations.

If your images are now ready, you can start loading them and calibrating your camera!

Before loading the images, make sure that they are in the same folder as the toolbox files.

Then, click on the button Read names. You will see a message on the Matlab command shell:

Basename camera calibration images (without number nor suffix):

This message asks you to type the basename of your image files, without the number or the file extension.

For example, the images that you will find in the OcamCalib Toolbox are of the type: VMRImage0.gif, VMRImage1.gif ... VMRImage9.gif. That means that the basename of the images is VMRImage. Thus, type:

Basename camera calibration images (without number nor suffix): VMRImage

Then, it will ask you to type the image format. So type the letter associated to your format:

Image format: ([]='r'='ras', 'b'='bmp', 't'='tif', 'g'='gif', 'p'='pgm', 'j'='jpg', 'm'='ppm') >> g

In our case, the image format is gif, so you need to type g.

At this point, the Toolbox will load all images having that basename:

Loading image 1...2...3...4...5...6...7...8...9...10...

done

At the end, the Toolbox will show the thumbnails of your calibration images, something like this:

If everything went all right, you can go to the next step!

6. Extraction of grid corners

The extraction of grid corners is the most important phase for calibration, as the calibration results depend on the position of the corners of the checkerboard in each image.

In the new version of the OCamCalib toolbox you can choose between the automatic extraction of the checkerboard corners and the manual one. In the first case, the toolbox will attempt to find all the corners by itself; in the second, you will have to click on every corner.

Let’s see how it works:

Click on “Extract grid corners”. You will get this message:

Extraction of the grid corners on the images

Type the images you want to process (e.g. [1 2 3], [] = all images) =

Press ENTER if you want to process every image, or type the vector containing the numbers of the images you want to process. In this tutorial we want to process all the images, so we just press ENTER.

The next message is:

Number of squares along the X direction ([]=10) =

Type the number of squares along the X direction, that is, the vertical direction in the reference frame of the checkerboard. In our case, for instance, we want to use 5 squares along the vertical direction and 6 squares along Y (the horizontal direction). So type:

Number of squares along the X direction ([]=10) = 5

Number of squares along the Y direction ([]=10) = 6

Now type the size of each square along the X and Y directions, respectively. In our images this is 30 mm.

Size dX of each square along the X direction ([]=30mm) = 30

Size dY of each square along the Y direction ([]=30mm) = 30

If you just press ENTER, the Toolbox loads the default parameters ([]=30mm). Note, however, that the square size is ONLY used to recover the absolute positions of the checkerboards; it is NOT needed for the intrinsic parameters, so you can even leave this field empty.

Next, the Toolbox will ask for the position (row, column) of the center of the omnidirectional image. Because the OcamCalib Toolbox is able to automatically determine the location of the center, you can just leave this field empty by pressing ENTER. In this case, the Toolbox will initially take the point (height/2, width/2) as the center. Do not worry about this, because later you will use the button Find center to find the correct position of the center! However, if you wish, you can optionally specify the location of the center here and refine it later with the Find center tool.

X coordinate (along height) of the omnidirectional image center = ([]=384) =

Y coordinate (along width) of the omnidirectional image center = ([]=512) =

Next you can choose to use the automatic image selection or treat every image individually.

When using the automatic image selection, any images for which the automatic corner extraction is not able to extract all corners will be omitted. Use of the automatic image selection is recommended for large calibration data sets (e.g. images extracted from video data) or if the corner extraction is known to work well for the calibration data set.

EXTRACTION OF THE GRID CORNERS

Do you want to use the automatic image selection

or do you want to process the images individually ( [] = automatic, other = individual )?

For the purpose of this tutorial, it's recommended to choose the individual image processing.

If you opted for automatic image selection, go to section 7 "Calibration". If you decide to process the images individually, continue reading here.

For individual processing, the toolbox will now ask you to specify if you want to use the automatic corner extraction or the manual one.

I recommend using the automatic extraction because it will save you clicking on every corner. If you are lucky, the toolbox will automatically detect 100% of the corners in all the checkerboards. I have tried this routine on more than 50 image datasets from different cameras, under different illumination and resolution, and I always obtained a detection rate of at least 95%, in many cases even 100%. This routine is based on our IROS'08 paper [4]. It requires you to use a checkerboard pattern with a large white border. Download the pattern from here, print it out and attach it to a flat rigid surface.

Do you want to use the automatic corner extraction

or do you want to extract all the points manually ( [] = automatic, other = manual )?

If you opted for the automatic extraction, just continue reading below; otherwise jump directly to "Manual Extraction", section 6.2.

6.1 Automatic Extraction

If you opted for the automatic extraction then the toolbox will display this message:

Processing image 1..

In this case for instance 100% of the corners were detected. Just press ENTER.

Would you like to reposition any of the assigned corners ([] = yes, other = no)?

If you are satisfied (like in this case) just say “no” by typing any character.

If you answered "yes", the toolbox will let you reposition any of the assigned corners: use the left mouse button to reposition a corner and the right button to quit the repositioning mode. Follow the indications written on the figure!

If some of the corners are missing, the toolbox will ask you to click on the missing points, following the ordering given at the top of the figure. For example, in the figure below some corners are missing: points n. 37, 38, and 39.

Follow the indications given at the top of the image: just press ENTER. The message at the top of the figure will change and will indicate which corner you have to click on. In the figure above, you will be asked to click consecutively on point n. 37, then 38, and finally 39.

Note that the numbering of the points can change on every checkerboard. During the automatic corner extraction this does not matter at all, as long as the ordering increases in one direction; during the manual extraction, however, it does matter! Also notice that when you input the missing points, you must respect the ordering suggested in the title bar.

Before processing the next image, the toolbox will change the numbering of the points and display the orientation of the x-y axes and the axis origin. Press ENTER to continue.

Now, if you managed to detect all the points, you can calibrate your camera. Go to section 7 (Calibration).

6.2 Manual Extraction

If you ended up here, it is because you opted for the manual extraction.

In this phase, the OCamCalib Toolbox will ask you to click on the corner points of each image of the checkerboard.

To facilitate the corner extraction while clicking, the toolbox takes advantage of a corner detector to interpolate the best position of the grid corner around the point you clicked on.

Do you want your clicking to be assisted by a corner detector ? ( [] = yes, other = no )

Just press ENTER if you do, or type any other character if you do not. I suggest using the corner detector if the edges of the checkerboard are well defined. In this case, the corner detector will refine the position of the grid point you clicked on by interpolating the best location around it. If you do not use this option, the position you clicked on will be taken as definitive.

If you answered yes, the Toolbox will ask you to type the size of the window used by the corner detector. The window size is (2*wintx+1)*(2*winty+1).

Window size for corner finder (wintx and winty):

wintx ([] = 8) =

winty ([] = 8) =

Usually the default values should work fine; otherwise (if, for instance, the location chosen by the corner detector is too far from the point you clicked on) you can try a smaller value. Conversely, if the resolution of your pictures is very high (up to 5 megapixels!), you may want to choose bigger values.

Coming back to our example, accept the suggested values by pressing ENTER. You will have something like this:

Window size for corner finder (wintx and winty):

wintx ([] = 8) =

winty ([] = 8) =

Window size = 17x17

Processing image 1...

Using (wintx,winty)=(8,8) - Window size = 17x17

Press ENTER and then Click on the extreme corners of the rectangular complete pattern (the first clicked corner is the origin)...

ATTENTION:

When clicking on the points follow the left-right order! (see the next figure below).

Moreover, in processing the remaining images, be careful to preserve the same correspondences of clicked points. For instance, the first selected point of image 1 has to be the first selected point of image 2, 3, and so on. So, make sure to preserve the point correspondences. This makes sure that the reference axes of every checkerboard maintain the same orientation.

The grid corner extraction will continue until you process the whole set of images you selected at the beginning. In this tutorial, for instance, we chose to process all the images.

This is the last but most important step before calibration. In fact, you are now asked to click on every grid point. Unlike other calibration tools, which ask you to click on only 4 points, here you are required to click on every corner point. This is because the OcamCalib Toolbox does not use any prior knowledge about the mirror shape, and so the positions of all corner points cannot be inferred from a few points alone.

Anyway, this just requires a little more patience. Moreover, as we say in Italy, "who does not have patience, has nothing"! And the quality of the result justifies the time spent! But let's go on…

Before you start clicking on the grid corners, you are allowed to zoom into the region of the image that contains the checkerboard. When you have zoomed in, press ENTER. The shape of the cursor then changes into a square, meaning that you are in click mode.

So start clicking on every corner point, remembering that the first point identifies the origin of the X-Y axes of the reference frame of the checkerboard (point number 1 in the next figure). The clicking has to be done moving along the Y direction, following the ordering shown in the figure. The grid corners are highlighted by red crosses, while the order of the clicks is given by the numbers (see figure below).

Once you have finished the corner extraction, and you did not get any error, you can finally pass to the calibration phase!

7. Calibration

If you got here, I assume you have loaded the images and extracted the grid corners.

So, you are finally ready to calibrate your omnidirectional camera.

To do this, click on the button Calibration. You will receive the following message:

Degree of polynomial expansion ([]=4) =

If you have read the paper [1], which describes the calibration procedure, you already know what this parameter is. If you have not: this parameter sets the maximum order of the polynomial that approximates the function back-projecting every pixel point into 3D space. Several experiments on different camera models showed that a polynomial order of 4 gives the best results. If you are not satisfied with the results obtained, try to decrease or increase the polynomial order. In this tutorial we set the value to 4. Once you have chosen the polynomial order, the calibration is performed very quickly, because a linear least-squares method is used.
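To see why this stage is so fast, here is a Python sketch of the idea behind the linear step. All numbers are synthetic and the extrinsic parameters are ignored (the real Toolbox solves a larger joint linear system); the point is that the polynomial coefficients enter the model linearly, so one least-squares solve recovers them:

```python
import numpy as np

# Synthetic "ground truth" polynomial f(rho) = a0 + a1*rho + ... + a4*rho^4
# (coefficient values are invented for illustration only).
true_coeffs = np.array([-150.0, 0.0, 1.2e-3, -2.0e-7, 5.0e-11])

rho = np.linspace(0.0, 400.0, 50)            # distances from the center [pixels]
f_obs = sum(a * rho**k for k, a in enumerate(true_coeffs))

# Design matrix [1, rho, rho^2, rho^3, rho^4]: the unknown coefficients
# enter linearly, so a single least-squares solve recovers them.
A = np.vander(rho, N=5, increasing=True)
coeffs, *_ = np.linalg.lstsq(A, f_obs, rcond=None)

print(bool(np.allclose(A @ coeffs, f_obs)))  # the fit reproduces the samples
```

No iterative optimization is needed at this stage; that is deferred to the Calibration Refinement step.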

At the end of calibration, the Toolbox displays the following graph, which shows the plot of function F, and the plot of angle THETA of the corresponding 3D vector with respect to the horizon.

8. Find center

ATTENTION!!! Always use the Find center tool BEFORE using the Calibration Refinement. The automatic detection of the image center iteratively applies a linear estimation method, which is suboptimal. So, once you have estimated the image center, you can run Calibration Refinement, which refines all the calibration parameters, along with the position of the center, using a non-linear method.

This routine will try to extract the image center automatically. If during the grid corner extraction you did not set the correct values for the center of the omnidirectional image, you can use the automatic detection of the center. To do so, just click on the button Find center, and the OcamCalib Toolbox will start an iterative method for computing the image center that minimizes the reprojection error of all grid points. The automatic center detection takes only a few seconds.
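The idea can be illustrated with a toy Python sketch: a coarse-to-fine search over candidate centers that keeps whichever candidate yields the smallest error. The error function below is a synthetic stand-in, not the Toolbox's actual algorithm; in the Toolbox, evaluating a candidate center means re-running the linear calibration and measuring the real reprojection error:

```python
import numpy as np

# Toy stand-in for the reprojection error as a function of the assumed
# center (xc, yc): a smooth surface with a 0.35-pixel floor at the true
# center (both the surface shape and the numbers are invented).
true_center = np.array([383.3, 516.4])

def reproj_error(center):
    return float(np.sum((center - true_center) ** 2)) + 0.35

# Coarse-to-fine search over candidate centers, starting from the
# default initial guess (height/2, width/2) of a 768x1024 image.
center = np.array([384.0, 512.0])
step = 8.0
for iteration in range(9):
    candidates = [center + step * np.array([dx, dy])
                  for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
    center = min(candidates, key=reproj_error)   # keep the best candidate
    step /= 2.0                                  # refine the resolution

print(np.round(center, 1))                       # close to the true center
```

Nine refinement rounds suffice here because the search resolution halves at every round.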

Iteration 1...2...3...4...5...6...7...8...9...

At the end, the Toolbox recomputes all calibration parameters for the new position of the center, and outputs the new coordinates of the center. See below.

Output of the Find center tool after iteration:

0.44 ± 0.28

0.37 ± 0.25

0.38 ± 0.24

0.42 ± 0.21

0.29 ± 0.18

0.32 ± 0.14

0.33 ± 0.18

0.46 ± 0.22

0.40 ± 0.31

0.36 ± 0.20

Average error [pixels]

0.377502

Sum of squared errors

81.896493

xc =

3.832866677912270e+002

yc =

5.163646215408636e+002

>>

The average error is the mean of the reprojection error computed over all checkerboards, while "Sum of squared errors" is the sum of the squared reprojection errors.
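For clarity, here is how these statistics relate to the per-checkerboard table above, sketched in Python on synthetic error values (the numbers are random, not the ones printed by the Toolbox):

```python
import numpy as np

# Synthetic reprojection errors: 10 checkerboards x 30 corners each
# (in the Toolbox these come from comparing clicked and reprojected corners).
rng = np.random.default_rng(42)
errors = rng.rayleigh(scale=0.3, size=(10, 30))       # [pixels]

# Per-checkerboard mean +/- standard deviation, as in the table above:
for mean, std in zip(errors.mean(axis=1), errors.std(axis=1)):
    print(f"{mean:.2f} +/- {std:.2f}")

# Overall statistics, as printed below the table:
average_error = errors.mean()
sum_squared_errors = float(np.sum(errors ** 2))
print(f"Average error [pixels]: {average_error:.6f}")
print(f"Sum of squared errors: {sum_squared_errors:.6f}")
```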

The calibration parameters are stored in the variable ocam_model.ss. This variable contains the polynomial coefficients of the function F.

I would like to recall that F has the following polynomial form:

F(ρ) = a0 + a1·ρ + a2·ρ² + … + aN·ρ^N,

where, for instance, I used a 4th-order polynomial (N = 4), and ρ is the distance from the center of the omnidirectional image, measured in pixels.

In ocam_model.ss the coefficients are stored from the minimum to the maximum order, that is, ocam_model.ss=[a0, a1, a2…].
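Since the coefficients are stored lowest order first, evaluating F amounts to a plain polynomial evaluation. A Python sketch (the ss values are invented, not from a real calibration):

```python
import numpy as np

# Invented coefficients in ocam_model.ss order [a0, a1, a2, a3, a4]
# (lowest order first); NOT from a real calibration.
ss = np.array([-193.8, 0.0, 1.43e-3, -1.2e-7, 6.2e-11])

def F(rho):
    """Evaluate F(rho) = a0 + a1*rho + a2*rho^2 + ... (rho in pixels)."""
    return np.polyval(ss[::-1], rho)     # np.polyval wants highest order first

print(bool(np.isclose(F(0.0), ss[0])))   # at the image center, F(0) = a0
```

Note the reversal: Matlab's polyval and NumPy's np.polyval both expect the highest-order coefficient first, the opposite of the ocam_model.ss convention.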

Note, if at any time you would like to modify the coordinates of the center, you can simply modify the value of the variables ocam_model.xc and ocam_model.yc, which respectively contain row and column of the center location.

9. Calibration Refinement

By clicking on the button Calibration Refinement, the Toolbox will start the non-linear refinement of the calibration parameters using the Levenberg-Marquardt algorithm. The optimization minimizes the sum of squared reprojection errors.

The Calibration refinement is done using the Matlab Optimization Toolbox, and, in particular, it requires function lsqnonlin, which you should have by default.

The non-linear refinement is done in two steps. First, it refines the extrinsic parameters, that is, the rotation and translation of each checkerboard with respect to the camera (i.e. RRfin). Then, it refines the camera intrinsic parameters (i.e. ocam_model). Because the extrinsic and intrinsic parameters are not independent, the refinement may need a few iterations to converge to a solution that is optimal in both.
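The alternation itself can be illustrated on a toy problem in Python: a one-parameter "intrinsic" scale and a one-parameter "extrinsic" offset, each refined with the other held fixed. This is only an analogy for the Toolbox's two-block refinement, not its actual cost function:

```python
import numpy as np

# Toy analogy for the alternating refinement: observations y = a*(x + t),
# where the scale a stands in for the intrinsic parameters (ocam_model)
# and the offset t for the extrinsic ones (RRfin). With the other block
# held fixed, each sub-problem has a closed-form least-squares solution.
x = np.linspace(0.0, 9.0, 10)
a_true, t_true = 2.5, 1.3
y = a_true * (x + t_true)

a, t = 1.0, 0.0                                    # crude initial guesses
for iteration in range(300):
    t = np.mean(y - a * x) / a                     # refine "extrinsics", a fixed
    a = np.dot(y, x + t) / np.dot(x + t, x + t)    # refine "intrinsics", t fixed

print(bool(np.allclose([a, t], [a_true, t_true])))
```

Because the two blocks are coupled, the alternation converges only linearly, which is why the refinement may need several outer iterations.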

Once you click on the button, Calibration Refinement, the Toolbox asks if you want to limit the number of iterations of the non-linear refinement, and informs you that the procedure can take several seconds.

This function alternately refines EXTRINSIC and INTRINSIC calibration parameters

by using a non linear minimization method

Because of the computations involved this refinement can take some seconds

Loop interrupting: Press enter to stop refinement. (OCamCalib GUI must be selected!)

Maximum number of iterations ([] = 100, 0 = abort, -1 = no limit) =

Press ENTER to start the refinement.

Iteration 1

Starting refinement of EXTRINSIC parameters...

Optimizing chessboard pose 1, 2, 3, 4, 5, 6, 7, 8, 9, 10,

Chessboard pose 1 optimized

Starting refinement of INTRINSIC parameters...

Sum of squared errors: 137.670161

As you will see, the last step, the refinement of the INTRINSIC parameters of the camera, is the phase requiring most of the time, but in general it should not take more than a few seconds. You may interrupt the refinement at any time by pressing the ENTER key; the OCamCalib Toolbox GUI has to be selected (the window has to be in focus) for the interruption to work.

Once the non-linear refinement has terminated, you can again use the buttons Analyse error and Show calib results to display the reprojection error or the calibration results.

10. Reproject on images

By clicking on the button Reproject on images, the Toolbox will reproject all the grid corners according to the newly estimated calibration parameters.

In the left figure above, you can see the center of the image, indicated by the red circle, which has been computed using the automatic center detection.

In the right figure, you can see a detail of the checkerboard, with all grid corners highlighted, and the X-Y axes of the reference frame.

The red crosses are the grid corners you clicked on, while the circles are the grid corners reprojected onto the image after calibration.

11. Show Extrinsic

By clicking on the button Show Extrinsic, the Toolbox will display the position of every checkerboard with respect to the reference frame of the omnidirectional camera.

12. Analyse error

If you click on the Analyse error button, you can see the distribution of the reprojection error of each point for all the checkerboards. The colors refer to the different images of the checkerboard.

13. Recompute corners

By using this tool, the Toolbox performs an automatic detection of every grid corner around the reprojected points. This function is very useful if you (or the automatic corner detector) made some mistakes during the extraction of the grid corners. After you click the button Recomp. Corners, the Toolbox will attempt to recompute the position of every corner point, using the reprojected grid as initial guess locations for the corners.

14. Show Calib Results

Click on the Show calib results button.

The following image will be displayed:

The calibration results will also be visualized:

Average reprojection error computed for each chessboard [pixels]:

0.44 ± 0.28

0.37 ± 0.25

0.38 ± 0.24

0.42 ± 0.21

0.29 ± 0.18

0.32 ± 0.14

0.33 ± 0.18

0.46 ± 0.22

0.40 ± 0.31

0.36 ± 0.20

Average error [pixels]

0.377502

Sum of squared errors

81.896493

xc =

3.832866677912270e+002

yc =

5.163646215408636e+002

>>

15. Load, Save, and Export the calibration results

To load and save your calibration results, just click on the corresponding buttons.

The calibration results will be saved under the name Omni_Calib_Results.mat.

Click on the Export Data button to export the calibration results to the file “calib_results.txt”.

This file is useful for reading the calibration results with the C/C++ routines (undistort, cam2world and world2cam) given here.

16. Workspace variables

All variables used by the different functions are stored as members of the calib_data object, which is defined in C_calib_data.m.

The most important variables used by the OcamCalib Toolbox are the following:

17. Useful functions to use after calibration in both MATLAB and C/C++

Once you have finished calibrating your camera, you can use two functions to project a 3D point onto the image and, vice versa, to back-project a pixel point into 3D space. These two functions are world2cam and cam2world.
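As an illustration of what these two functions compute, here is a minimal Python sketch of both directions under the polynomial model P ∝ (u, v, f(ρ)) described in section 19. The coefficients are invented, (u, v) are taken relative to the image center, and the affine distortion of section 19.3 is ignored, so this is a conceptual sketch rather than a drop-in replacement for the shipped routines:

```python
import numpy as np

# Minimal sketch of the two mappings under the model P ~ (u, v, f(rho)),
# rho = sqrt(u^2 + v^2). Coefficients are invented, NOT from a calibration.
ss = np.array([-200.0, 0.0, 1.0e-3, 0.0, 0.0])   # [a0, a1, a2, a3, a4]

def f(rho):
    return np.polyval(ss[::-1], rho)

def cam2world(u, v):
    """Back-project a pixel into a unit 3D vector on the unit sphere."""
    P = np.array([u, v, f(np.hypot(u, v))])
    return P / np.linalg.norm(P)

def world2cam(P):
    """Project a 3D point to pixel coordinates (relative to the center)."""
    X, Y, Z = P
    R = np.hypot(X, Y)
    poly = ss[::-1].copy()               # highest order first, for np.roots
    poly[-2] -= Z / R                    # solve f(rho) - (Z/R)*rho = 0
    rho = min(r.real for r in np.roots(poly)
              if abs(r.imag) < 1e-9 and r.real > 0)
    return rho * X / R, rho * Y / R

# Round trip: back-project a pixel, then re-project the resulting ray.
print(bool(np.allclose(world2cam(cam2world(120.0, -80.0)), (120.0, -80.0))))
```

Note that cam2world is direct, while world2cam must invert the polynomial, which amounts to picking the positive real root of f(ρ) = (Z/R)·ρ.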

18. Central and Non-Central Cameras

Non Central Cameras

Omnidirectional cameras are usually built by combining mirrors and perspective cameras. A camera-mirror assembly is called a non-central (i.e. non-single-effective-viewpoint) system when the optical rays coming from the camera and reflected by the mirror surface do not intersect at a unique point. Check the image below for a better understanding.

Central Cameras

Conversely, central cameras are systems for which the single-effective-viewpoint property is perfectly verified. That is, every optical ray reflected by the mirror surface passes through a unique point, which is called the single effective viewpoint (see the image below for a better understanding). A complete definition and analysis of this kind of imaging system is given in [3].

As outlined in [3], central omnidirectional cameras can be built by suitably combining a pinhole (perspective) camera with hyperbolic, parabolic, or elliptical mirrors. More recently, lens manufacturers have also been producing fisheye lenses, which approximate the single-effective-viewpoint property well. These imaging systems require only a fisheye lens to enlarge the field of view of the camera, without requiring mirrors. The former cameras (using both a camera and a mirror) are called catadioptric omnidirectional cameras, while the latter (using only a fisheye lens) are called dioptric omnidirectional cameras.

For a catadioptric camera to be a central system, one of the following arrangements has to be used:

  • Camera + hyperbolic mirror

    • The camera optical center (namely the center of the lens) has to coincide with the focus of the hyperbola. This ensures that the optical rays reflected by the mirror intersect at a unique point (i.e. the internal focus of the hyperbola). For an example of a camera + hyperbolic mirror, see the image above.

  • Camera + parabolic mirror + orthographic lens

    • When using a parabolic mirror, all reflected rays coming from the world into the camera are parallel to the mirror axis. This implies that a pinhole camera cannot in general be used as is, because parallel rays do not converge towards the camera optical center. In order to form a focused image on the CCD plane, an orthographic lens has to be placed between the camera and the parabolic mirror. Check the image below for a better understanding.

  • Camera + fisheye lens

    • An alternative method to enlarge the camera field of view without using mirrors consists in mounting a fisheye lens in front of the camera CCD.

    • A fisheye lens is a system of lenses able to enlarge the field of view of a camera up to 190° (see the image below).

Cameras using fisheye lenses are not in general central systems, but they approximate the single-viewpoint property very well.

19. Our Omnidirectional Camera Model

Calibrating an omnidirectional camera implies finding the relation between a given 2D pixel point p and the 3D vector P emanating from the mirror effective viewpoint (see figure below). In general, this process requires finding the camera intrinsic parameters and the mirror intrinsic parameters.

Our omnidirectional camera model treats the imaging system as a unique compact system; that is, it does not care if you are using a mirror or a fisheye lens in combination with your camera.

19.1 Assumptions

Our model is based on the following assumptions:

  1. The mirror-camera system is a central system; thus there exists a point where every reflected ray intersects. This point is taken as the origin of the camera coordinate system XYZ.

  2. The camera and mirror axes are well aligned; that is, only small rotational deviations are accounted for by the model.

  3. The mirror is rotationally symmetric about its axis.

  4. The lens distortion of the camera is not included in the model. This is because omnidirectional cameras using mirrors usually need a large focal length to focus the image on the mirror, so lens distortion can safely be neglected. If you are using a fisheye lens, the lens distortion is already absorbed into the projection function f.

19.2 The Omnidirectional camera model: partial model explanation

Now, I am going to explain the omnidirectional camera model.

For now, let us suppose that assumption 2 is perfectly verified, that is, camera and mirror axes are perfectly aligned. Later on, we will see how to relax this constraint as well.

Let p be a pixel point of your image, and (u,v) its pixel coordinates with respect to the center of the omnidirectional image (see image below). Let P be its corresponding 3D vector emanating from the single effective viewpoint, and (x,y,z) its coordinates with respect to the axis origin.

Because the camera and mirror axes are supposed to be perfectly aligned, observe that x and y are proportional to u and v respectively. Thus:

x = α·u,  y = α·v,  with α > 0.

The function we want the calibration to estimate is the function f that maps an image point p into its corresponding 3D vector P. So we can write:

P = (x, y, z) = (α·u, α·v, f(u,v)).

You have probably observed that we can absorb the factor α into the function f, and so we can equally write:

P ∝ (u, v, f(u,v)).

Indeed, remember that P is not a 3D point but a vector; hence the last simplification is allowed!

Furthermore, because the mirror is rotationally symmetric, the function f(u,v) depends only on the distance ρ = √(u² + v²) of the point from the image center.

So, we can simplify the previous equation into the following one:

P ∝ (u, v, f(ρ)).

If you have followed what I explained so far, you see that what we need for calibration is just the function f(ρ). Let us now introduce what this function looks like.

Our model describes the function f(ρ) by means of a polynomial, whose coefficients are the calibration parameters to be estimated. That is:

f(ρ) = a0 + a1·ρ + a2·ρ² + … + aN·ρ^N.

So the parameters to estimate are a0, a1, a2, a3, a4, … The OcamCalib Toolbox asks you to specify the polynomial degree you want to use. In principle, increasing the polynomial order increases the accuracy of the calibration, but this no longer holds for very high orders. From extensive use of this Toolbox, I have found that 4th-order polynomials give the best calibration results.

19.3 The Omnidirectional camera model: complete model explanation

If you remember, so far we have assumed that the camera and mirror axes are perfectly aligned. In practice, because of unavoidable errors in the camera-mirror assembly, a small deviation from this hypothesis may occur. Moreover, because of the camera's digitizing process, the pixels may not be square.

The natural consequence of these problems is that the circular external border of the mirror appears as an ellipse, as in the image below (the distortion effect in this image has been intentionally emphasized).

In order to take these considerations into account, I chose to model the misalignment errors and digitizing artefacts through an affine transformation:

(u'; v') = A·(u; v) + (xc; yc),  with  A = (c, d; e, 1),

which relates the real distorted coordinates (u', v') to the ideal undistorted ones (u, v).
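Assuming the affine model (u', v') = A·(u, v) + (xc, yc) with A = [c d; e 1], applying and undoing the transformation is just a 2x2 linear solve. A Python sketch with invented parameter values:

```python
import numpy as np

# Invented affine parameters: A = [c d; e 1] maps ideal (u, v) to the
# distorted coordinates, and the image center (xc, yc) is added afterwards.
A = np.array([[1.02, 0.001],
              [0.0005, 1.0]])
center = np.array([383.3, 516.4])        # (xc, yc) in pixels

def ideal_to_distorted(uv):
    return A @ uv + center

def distorted_to_ideal(uv_prime):
    # Undoing the model is a 2x2 linear solve.
    return np.linalg.solve(A, uv_prime - center)

uv = np.array([120.0, -80.0])
print(bool(np.allclose(distorted_to_ideal(ideal_to_distorted(uv)), uv)))
```

When the axes are perfectly aligned and the pixels are square, A reduces to the identity matrix and the model of section 19.2 is recovered.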

20. References

    1. Scaramuzza, D., Martinelli, A. and Siegwart, R. (2006). "A Flexible Technique for Accurate Omnidirectional Camera Calibration and Structure from Motion", Proceedings of the IEEE International Conference on Computer Vision Systems (ICVS'06), New York, January 5-7, 2006.

    2. Scaramuzza, D., Martinelli, A. and Siegwart, R. (2006). "A Toolbox for Easily Calibrating Omnidirectional Cameras", Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2006), Beijing, China, October 7-15, 2006.

    3. Scaramuzza, D. (2008). Omnidirectional Vision: from Calibration to Robot Motion Estimation, ETH Zurich, PhD Thesis no. 17635. PhD Thesis advisor: Prof. Roland Siegwart. Committee members: Prof. Patrick Rives (INRIA Sophia Antipolis), Prof. Luc Van Gool (ETH Zurich). Chair: Prof. Lino Guzzella (ETH Zurich), Zurich, February 22, 2008.

    4. Rufli, M., Scaramuzza, D., and Siegwart, R. (2008), Automatic Detection of Checkerboards on Blurred and Distorted Images, Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2008), Nice, France, September 2008.

21. Acknowledgments

This work was conducted within the EU Integrated Project COGNIRON ("The Cognitive Robot Companion") and was funded by the European Commission Division FP6-IST Future and Emerging Technologies under Contract FP6-002020.

I want to thank Dr. Jean-Yves Bouguet, from Intel Corporation, for providing some functions used by the Toolbox.

I also want to thank Martin Rufli, now a PhD student at the Autonomous Systems Lab of ETH Zurich, for implementing the Automatic Corner Extraction, which he developed as his Master's thesis under my supervision.

Furthermore, I want to thank Zoran Zivkovic and Olaf Booij, from Intelligent Systems Laboratory Amsterdam (University of Amsterdam), for providing the sample images included in the Toolbox.

Finally, I want to thank all the users who sent me feedback since the first release of this toolbox in 2005! Many thanks to you all!