Today
For Next Time
In order to understand camera calibration, we need to define a simple model of how a camera images a 3-dimensional scene. We will use X to denote the (3d) coordinates of a point measured relative to the optical center of the camera and x to denote the (2d) coordinates of the image of that point measured relative to the principal point p (the point where the camera's optical axis meets the image plane).
In order to compute the coordinates x we need to know the focal length of our camera. The focal length is shown in the figure below as f.
The y coordinate of the point x is given by f Y / Z (you can derive this easily using similar triangles). Similarly, the x coordinate of the point x is f X / Z.
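To make the projection concrete, here is a minimal sketch of the computation in Python. The focal length and point coordinates are made-up example values, not values from any particular camera.

# hypothetical focal length and 3d point (X, Y, Z) relative to the optical center
f = 0.0035                  # focal length (meters)
X, Y, Z = 0.5, 0.2, 2.0     # 3d point coordinates (meters)

# pinhole projection: similar triangles give x = f X / Z and y = f Y / Z
x = f * X / Z
y = f * Y / Z
print(x, y)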
Calculating Pixel Coordinates
Knowing the coordinates x where a point X will be imaged is useful; however, it is sometimes more convenient to compute the pixel coordinates where the point X will be imaged. The following picture should give you a good idea of the difference between these two coordinate systems.
In this figure, what was previously labeled x and y is now shown as xcam and ycam, and the pixel coordinates are shown as x and y. While it seems logical that the optical center of the camera would be imaged exactly at the middle pixel, this is not necessarily the case.
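As a sketch of the conversion, suppose we express the focal length in pixels (fx, fy) and call the pixel location where the optical center projects (cx, cy); all of the numbers below are hypothetical.

# hypothetical intrinsics: focal lengths in pixels and principal point location
fx, fy = 600.0, 600.0
cx, cy = 320.0, 240.0

X, Y, Z = 0.5, 0.2, 2.0     # 3d point relative to the optical center

# pixel coordinates are the camera-frame projection shifted by the principal point
x_pix = fx * X / Z + cx
y_pix = fy * Y / Z + cy
print(x_pix, y_pix)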
Distortion
The preceding model of image formation is not adequate for all cameras. For instance, consider a fisheye lens. If we were to use the simple model above, then straight lines in the real world would always be imaged as straight lines; this is not the case for fisheye images. In order to account for this, we need some model of how these types of lenses distort the image. The exact nature of how we model this type of distortion is outside the scope of this activity; however, OpenCV has support for several models. All that matters for understanding camera calibration is to know that distortion exists and that we have models to describe it.
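As one illustration, the sketch below applies just the radial terms of the distortion model used by OpenCV's standard calibration (called plumb_bob in ROS); the coefficients k1 and k2 are made-up, and the tangential terms are omitted for brevity.

# hypothetical radial distortion coefficients
k1, k2 = -0.3, 0.1

# normalized (undistorted) image coordinates: x = X / Z, y = Y / Z
x, y = 0.25, 0.1

# radial distortion scales points based on their distance from the principal point
r2 = x * x + y * y
scale = 1 + k1 * r2 + k2 * r2 * r2
x_distorted = x * scale
y_distorted = y * scale
print(x_distorted, y_distorted)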
The purpose of camera calibration is to infer the parameters of the model of image formation described above by taking images of a known pattern from different positions. A commonly used pattern is the chessboard.
Specifically, by taking pictures of this known pattern we seek to determine:
- the focal length of the camera (expressed in pixels)
- the pixel coordinates of the principal point (where the optical axis meets the image plane)
- the parameters of the distortion model
The reasons for doing this are twofold:
1. Knowing these parameters lets us relate pixel coordinates in the image to directions (rays) in the 3-dimensional world, which we need whenever we want to reason geometrically about the scene.
2. Knowing the distortion parameters lets us remove the distortion from the camera images.
As an example of (2), consider this pair of images (the first is the original image, and the second is the same image after removing distortion).
Notice how straight lines in the real world that were previously curved are now straight in the undistorted image.
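If you'd like to experiment with this outside of ROS, here is a minimal sketch of undistorting an image with OpenCV once you have a camera matrix and distortion coefficients; the file name and numbers are placeholders, not the parameters used for the images above.

import cv2
import numpy as np

# hypothetical camera matrix (fx, fy, cx, cy) and plumb_bob distortion coefficients
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
D = np.array([-0.3, 0.1, 0.0, 0.0, 0.0])    # k1, k2, p1, p2, k3

img = cv2.imread('distorted.png')           # placeholder file name
undistorted = cv2.undistort(img, K, D)
cv2.imwrite('undistorted.png', undistorted)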
To try out camera calibration in ROS, first connect to a Neato. Next, run the following command.
rosrun camera_calibration cameracalibrator.py -p chessboard -s 9x6 -q .025 camera:=/camera image:=/camera/image_raw
To calibrate the camera, move the robot around to different views of the chessboard (or move the chessboard around if you have a way to mount it in a rigid fashion). When the calibrator has enough images, it will allow you to click calibrate. Once you have calibrated, you can click save and finally commit in order to write the changes to ROS.
To see that your changes have been successfully saved, run this command.
rostopic echo /camera/camera_info
You should see the calibration parameters being published to this topic.
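If you want to grab these values from a node of your own (rather than just echoing them), a minimal rospy sketch looks like the following; it assumes the same /camera/camera_info topic as the command above.

import rospy
from sensor_msgs.msg import CameraInfo

def on_info(msg):
    # K is the 3x3 camera matrix (row-major) and D holds the distortion coefficients
    print('K:', msg.K)
    print('D:', msg.D)

rospy.init_node('print_camera_info')
rospy.Subscriber('/camera/camera_info', CameraInfo, on_info)
rospy.spin()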
Rectifying Camera Images
In order to apply the calibration parameters to undistort the camera images, you can run:
ROS_NAMESPACE=camera rosrun image_proc image_proc
This command will publish a bunch of new image topics with the suffix _rect, which stands for rectified (or undistorted).
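Here is a minimal sketch of consuming one of the rectified topics from your own code; it assumes image_proc is publishing /camera/image_rect_color (the exact topic names depend on your camera), and uses cv_bridge to get an OpenCV image.

import rospy
import cv2
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()

def on_image(msg):
    # convert the ROS image message to an OpenCV image and display it
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
    cv2.imshow('rectified', frame)
    cv2.waitKey(1)

rospy.init_node('view_rectified')
rospy.Subscriber('/camera/image_rect_color', Image, on_image)
rospy.spin()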
Once you have calibrated the camera, redo the steps with the fisheye lens to see a much more dramatic result! You may find that the default camera calibration doesn't work super well on the fisheye; this is because the calibration model OpenCV uses is not really designed for severe distortion. You can do something more reasonable by running this command instead (try different values of k from 3 to 6 if you still can't get it working well). Unfortunately, these instructions seem to be broken in ROS Kinetic. Hopefully, someone with Indigo can show us the result on the projector.
rosrun camera_calibration cameracalibrator.py -p chessboard -s 9x6 -q .025 camera:=/camera image:=/camera/image_raw --fix-principal-point -k 4
This is about as good as I could get the calibration using the above strategy:
For a more detailed treatment of these topics, you may be interested in checking out my slides from the first iteration of CompRobo.