Using images to explore other planets is not a straightforward process: many aspects must be taken into account if we want to derive reliable information from the pictures alone.
It is essential to plan image acquisition strategies and procedures thoroughly, since any flaw in image acquisition can be hard or even impossible to recover by further processing. Some of these issues include:
During the field trials...
For the field tests to be valuable, it is important to generate reference data: what is the point of running a test if you cannot assess its success at the end? White Styrofoam spheres were placed in the scene and on top of Bridget's mast, and their coordinates were measured geodetically. Since the PRoVisG 3D Vision products contain the coordinates of these spheres, the accuracy of mapping and navigation can be assessed. An IMU (delivered by Frank Trauthan from DLR) was also mounted on Bridget, allowing GPS coordinates and heading, pitch and roll angles to be recorded.
Image processing: from 2D to 3D
Camera setups are inspired by the human visual system. We have two eyes that let us estimate distances, because the parallax caused by objects at different depths is easily “processed” by our brain: it detects a scene point in both views and builds its own 3D model. We do something similar with the cameras: for every pixel in the left image (taken by the left camera) we try to find the corresponding scene point in the right image (taken by the right camera). This is called “matching”. Using a previous camera calibration, the measured parallaxes and some algebra, 3D coordinates can be obtained. In principle, we would not need stereo vision (two cameras, like two eyes) to generate 3D coordinates: they could also be obtained by moving a single camera (a process called “Structure from Motion”, SFM). However, SFM is in most cases less accurate when only a few images are used, since we know the distance between the stereo cameras much better than the length of a path driven by the rover.
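The “algebra” mentioned above can be sketched for the simplest case of a rectified stereo pair: the parallax (disparity) of a matched pixel, together with the focal length and the distance between the cameras (the baseline), gives the depth of the scene point. The parameter values below are purely illustrative, not actual PRoVisG camera values.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth Z = f * B / d for a rectified stereo pair (pinhole model)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

def triangulate(focal_px, baseline_m, x_left, x_right, y, cx, cy):
    """Recover the 3D point (X, Y, Z) in the left-camera frame.

    (cx, cy) is the principal point; the disparity is the horizontal
    parallax between the two matched pixel positions.
    """
    d = x_left - x_right                      # parallax in pixels
    z = depth_from_disparity(focal_px, baseline_m, d)
    x = (x_left - cx) * z / focal_px          # back-project using similar triangles
    y3 = (y - cy) * z / focal_px
    return x, y3, z

# Example: 1000 px focal length, 0.20 m baseline, 10 px disparity -> 20 m depth
print(depth_from_disparity(1000.0, 0.20, 10.0))  # 20.0
```

Note the inverse relationship: distant objects produce small disparities, which is why a well-known baseline matters so much for accuracy.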
Products that can be obtained from stereo images are:
Another product obtained from imaging is visual odometry, i.e. information about the rover's position and pointing. The landscape is imaged at intervals, and from the differences between consecutive pictures the distance travelled by the rover can be inferred.
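The integration step behind visual odometry can be illustrated with a minimal 2D sketch: suppose each pair of consecutive images yields an estimated forward distance and heading change (how these are extracted from the images is the hard part and is omitted here); chaining these increments reconstructs the rover's track. All values are made up for illustration.

```python
import math

def integrate_track(steps, x=0.0, y=0.0, heading=0.0):
    """Dead-reckon a 2D track from per-frame (distance, heading-change) pairs.

    Each step is (dist_m, dtheta_rad), as a visual-odometry front end
    might estimate between two image acquisitions.
    """
    track = [(x, y)]
    for dist, dtheta in steps:
        heading += dtheta                 # apply the estimated turn
        x += dist * math.cos(heading)     # advance along the new heading
        y += dist * math.sin(heading)
        track.append((x, y))
    return track

# Drive 1 m straight, turn 90 degrees left, then drive 2 x 1 m
steps = [(1.0, 0.0), (1.0, math.pi / 2), (1.0, 0.0)]
print(integrate_track(steps))
```

Because each increment carries some error, the estimated track drifts over time; this is one reason the geodetically measured spheres mentioned above are valuable as ground truth.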
For PRoVisG, images are taken with many different cameras. Combining information from different sensors is tricky: data of different resolutions and wavelengths must be brought together, as well as rover imagery and remote-sensing images, all while coping with inaccuracies and missing information. This is the computer-vision science part of the PRoVisG project.
Scientists will use PRoVisG products to decide what to do next (by looking at global views) and for direct assessment of detailed parts of the products. Such global views allow the virtual combination of the rover with its environment (example from prior work in PRoVisG):
Interactive (real-time) access to landscape reconstructions, such as the one generated from Clarach Bay (see the Cool Field Trials videos), gives an even more immersive view.
“The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 218814 "PRoVisG".”