...to conduct a scientific mission to scout for possible colonization sites
Landing a spacecraft on the surface of an extraterrestrial body is one of the most complex challenges for a space system. As mentioned before, a systems engineering approach demands that we ask ourselves the what, why, where and whom of such an endeavour. Putting aside the why, and assuming we are tasked with accomplishing it, let us set some simplistic requirements:
Accuracy: landers will need to target well-illuminated landing sites and areas of special scientific interest, as well as rendezvous with other spacecraft.
Safety: landers will have to touch down softly on potentially hazardous terrain so as not to damage sensitive equipment.
Payload capability: landing more cargo will be necessary to increase mission returns and to start building things on the target site.
Autonomy: landers will need to operate independently of humans, making decisions in real time.
Since this is a blog about neural networks and AI, we shall focus on the last requirement. Autonomy is the ability of a system to achieve goals while operating independently of external control. It is needed:
When the cadence of decision-making exceeds communication constraints (delays, bandwidth, and communication windows)
When time-critical decisions (control, health, life support, etc.) must be made on board the system or vehicle
When autonomous decision-making can reduce system cost or improve performance
This autonomy is normally required during all phases of the landing, but to achieve the necessary landing-site accuracy we shall focus on the phases below 1 km altitude above the target. The Moon is used for this example, as it is the next logical target of space colonization.
To guide the spacecraft to its target destination on the Moon, it is essential to take into account the peculiarities of lunar craters. Why the craters? Because they are the most likely locations to find ice from which to generate water for colonization.
The challenge: illumination conditions vary widely, with a low Sun elevation angle in the South Pole region and large shadowed areas in the images. How, then, can an autonomous vision-based system determine an accurate location to refine the landing trajectory?
Crater-based navigation systems use traditional image-processing techniques to detect craters in imagery. However, these approaches are typically not robust to changes in lighting conditions, viewing angles, and image noise. This is where convolutional neural networks (CNNs)* come to the rescue. NASA and MIT have refined neural-network-based algorithms (such as LUNA and PyCDA) to extract circular features from images of the lunar surface captured in real time by the lander during the final phases of the approach.
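To make the "circular feature" idea concrete, here is a toy, library-free sketch of matching a ring-shaped template against an image via 2D cross-correlation, which is the core operation of a convolutional layer. A CNN such as LUNA or PyCDA learns many such filters from data rather than fixing one by hand; everything below (the synthetic image, the kernel design, all parameters) is an illustrative assumption, not the actual flight algorithm.

```python
import numpy as np

def ring_kernel(radius: int, thickness: int = 1) -> np.ndarray:
    """Binary ring template approximating a crater rim of a given radius."""
    size = 2 * radius + 3
    yy, xx = np.mgrid[:size, :size]
    c = size // 2
    dist = np.sqrt((yy - c) ** 2 + (xx - c) ** 2)
    return ((dist >= radius - thickness) & (dist <= radius + thickness)).astype(float)

def cross_correlate(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid-mode 2D cross-correlation: the basic operation of a CNN layer."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Synthetic 64x64 "surface" with one bright crater rim centred at (30, 30), radius 6.
image = np.zeros((64, 64))
yy, xx = np.mgrid[:64, :64]
dist = np.sqrt((yy - 30) ** 2 + (xx - 30) ** 2)
image[np.abs(dist - 6) <= 1] = 1.0

# The response map peaks where the ring template aligns with the crater rim.
response = cross_correlate(image, ring_kernel(6))
peak = tuple(int(v) for v in np.unravel_index(np.argmax(response), response.shape))
print(peak)  # → (23, 23): kernel centre (23 + 7, 23 + 7) sits on the crater at (30, 30)
```

A single hand-designed filter like this fails as soon as lighting or viewing geometry changes; a trained network instead learns banks of filters that stay robust across those variations, which is precisely the advantage cited above.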
Image classification is a common application of neural networks. The training phase is essential: it updates the algorithm on the basis of known data acquired during exploration of the lunar surface. Craters are characterized by a particular geometry that the algorithm learns to retrieve, which in turn lets it reliably recognize "new" craters autonomously.
Of course, such a network must be trained on a great number of labelled real and synthetic pictures of the lunar surface created over the years by NASA and other scientific institutions. An example of crater (circular) pattern recognition is shown in the picture below, together with an example of a spacecraft imager (camera).
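As a sketch of what such labelled training data might look like, the snippet below generates synthetic crater/no-crater patches with numpy. This is a toy stand-in, not the datasets actually produced by NASA or other institutions; every function, parameter, and label convention here is an assumption for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def synthetic_crater_patch(size: int = 32, crater: bool = True) -> np.ndarray:
    """Produce a noisy patch that may contain a ring-shaped 'crater rim'.

    A rough, illustrative stand-in for labelled training imagery.
    """
    patch = rng.normal(0.0, 0.1, (size, size))  # background sensor noise
    if crater:
        cy, cx = rng.integers(8, size - 8, 2)   # random crater centre
        radius = rng.integers(4, 8)             # random rim radius
        yy, xx = np.mgrid[:size, :size]
        dist = np.sqrt((yy - cy) ** 2 + (xx - cx) ** 2)
        patch[np.abs(dist - radius) <= 1] += 1.0  # brighten the rim
    return patch

# Build a tiny labelled dataset: label 1 = crater present, 0 = no crater.
X = np.stack([synthetic_crater_patch(crater=(i % 2 == 0)) for i in range(100)])
y = np.array([1 if i % 2 == 0 else 0 for i in range(100)])
print(X.shape, y.shape)  # → (100, 32, 32) (100,)
```

Varying the crater position, size, illumination, and noise in the generated patches is what forces the network to learn features that generalize, rather than memorizing one fixed ring pattern.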
*For more information on convolutional neural networks, we recommend the sources below.
Sources:
Artificial Intelligence Techniques in Autonomous Vision-Based Navigation System for Lunar Landing, Stefano Silvestrini, Politecnico di Milano
ESA
Deep Learning Crater Detection for Lunar Terrain Relative Navigation, Lena Downes, MIT