Most of this section was researched and designed by Richard Osbourne
An interstellar probe will need to mix advanced AI decision-making capabilities with robust, field-replaceable electronics. It will benefit from over a century of technological progress, and an extrapolation based on Moore's law puts its eventual capabilities orders of magnitude beyond our present ones. Self-driving cars are progressing rapidly today; an interstellar probe will be a self-driving starship.
The guiding principle for the IT systems is to blend the most advanced technologies with simple, robust electronics, ensuring that operations can be maintained even should significant adverse events arise, such as catastrophic engine failure or a high-velocity impact.
From a raw power perspective, ExaFLOP-scale supercomputers are forecast for the year 2020, albeit with very high energy requirements. A 1 ExaFLOP computing capability is widely regarded as equivalent to the neural output of one human brain. Given the continued decrease in energy requirements and improvement in volumetric efficiency for a given processing power, it is not unreasonable to suggest that a further decade of development would yield ExaFLOP-class computers compact and energy-efficient enough to be used in a large interstellar probe design such as Icarus. Depending on the eventual launch date of the spacecraft, the available computational power may significantly exceed this, provided the energy requirements can be reduced.
Evaluating the last 22 years of growth in supercomputing processing power, the following table provides a measure of the performance gains (a short extrapolation check follows the table):
Table 1 - Supercomputing power increase
YEAR : PROCESSING POWER : INCREASE OVER PERIOD
1993 : 131 GFLOPS : (baseline)
1995 : 220 GFLOPS : x 1.7
2000 : 7.2 TFLOPS : x 32
2005 : 280 TFLOPS : x 39
2010 : 2.6 PFLOPS : x 9
2015 : 55 PFLOPS : x 21
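As a cross-check, the table's endpoints imply a compound annual growth rate that can be projected forward. The short calculation below is purely illustrative; 2045 is an arbitrary stand-in for a construction-era technology level, not a predicted date.

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    /* Table 1 endpoints: 131 GFLOPS (1993) to 55 PFLOPS (2015). */
    double p_1993 = 131e9;
    double p_2015 = 55e15;
    double years  = 2015.0 - 1993.0;

    /* Compound annual growth rate over the 22-year period. */
    double cagr = pow(p_2015 / p_1993, 1.0 / years) - 1.0;
    printf("Compound annual growth 1993-2015: %.0f%% per year\n", cagr * 100.0);

    /* Naive extrapolation 30 years beyond 2015 (illustrative only). */
    double p_2045 = p_2015 * pow(1.0 + cagr, 30.0);
    printf("Naive extrapolation to 2045: %.1e FLOPS\n", p_2045);
    return 0;
}
```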
The following table compares current integrated chipset and micro-engineered machine dimensions and areas and, based on backcasting, linearly extrapolates future dimensional reductions (a cross-check of the implied shrink rate follows the table). This gives a basic understanding of where these technologies will be at specific points in the future, which in turn provides some insight into the required sizing and mass budget for components of similar scale.
Table 2- Technological extrapolation
Integrated Chipset : Micro-Engineered Machine : Generation
1.6mm x 2mm (3.2mm2) : 1.1mm x 1.1mm : 2015
1.4mm x 1.8mm (2.52mm2) : 0.9mm x 0.9mm : 2017
1.2mm x 1.6mm : 0.8mm x 0.8mm : 2019
1.0mm x 1.4mm : 0.7mm x 0.7mm : 2021
0.9mm x 1.2mm : 0.6mm x 0.6mm : 2023
0.8mm x 1.0mm : 0.5mm x 0.5mm : 2025
0.7mm x 0.9mm : 0.4mm x 0.4mm : 2027
0.6mm x 0.8mm : 0.3mm x 0.3mm : 2029
0.5mm x 0.7mm : 0.2mm x 0.2mm : 2031
0.4mm x 0.6mm : 0.15mm x 0.15mm : 2033
0.3mm x 0.5mm : 0.1mm x 0.1mm : 2035
0.2mm x 0.4mm : 0.09mm x 0.09mm : 2037
0.15mm x 0.3mm : 0.08mm x 0.08mm : 2039
0.1mm x 0.2mm : 0.07mm x 0.07mm : 2041
0.09mm x 0.15mm : 0.06mm x 0.06mm : 2043
0.08mm x 0.1mm : 0.05mm x 0.05mm : 2045
0.07mm x 0.09mm : 0.04mm x 0.04mm : 2047
0.06mm x 0.08mm (0.0048mm2) : 0.03mm x 0.03mm (0.0009mm2) : 2049
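A quick cross-check of the shrink rate implied by the table, assuming two-year generations between the 2015 and 2049 chipset endpoints:

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    /* Table 2 integrated chipset endpoints (areas in mm^2). */
    double area_2015 = 1.6 * 2.0;     /* 3.2 mm^2    */
    double area_2049 = 0.06 * 0.08;   /* 0.0048 mm^2 */
    int generations  = 17;            /* two-year steps, 2015 to 2049 */

    double per_gen = pow(area_2015 / area_2049, 1.0 / generations);
    printf("Total area shrink 2015-2049: x%.0f\n", area_2015 / area_2049);
    printf("Average shrink per two-year generation: x%.2f\n", per_gen);
    return 0;
}
```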
Since an interstellar probe is projected to launch in about eighty years, following the performance increases presented in the tables above, it is likely that its computing capabilities will be equal to the task of maintaining the spacecraft and exploring the unknown environment of the target star system. With all these elements in mind, a primary flight computer mass allowance of 1000 kg seems adequate, with a processing power of 1 ExaFLOP or more. This might be a fairly mainstream piece of equipment in the technological environment of the day.
The systems for processing the sensor data from the main spacecraft, and from the various probes deployed during the encounter, will require significant processing power, since the bandwidth requirements alone, for multiple gigapixel imaging sensors returning data from a range of probes, will be immense. However, with the development of 8K video broadcast technologies requiring significant bandwidth (and no doubt eventually 16K and 32K), the same set of technologies can serve as the basis for transmitting still image data.
A multiplexed system for the incoming probe data will need sufficient channels to cater for as many as a dozen incoming telemetry data streams in parallel when all the probes are deployed, and the individual channel bandwidth will need to be sufficient to process gigapixel images, ultra-high-definition video and other sensor data.
Simpler, robust and reliable space-qualified hardware will be required for backup systems and for the sub-probes and various landers, descent probes, rovers etc. Whilst the Galileo spacecraft relied on RCA 1802-based hardware (an 8-bit CPU with a 2.5 MHz to 8 MHz clock frequency), more recently there has been considerable in-space use of the PowerPC architecture, developed by IBM, Motorola and Apple in the 1990s; in the case of the RAD5500 PowerPC series, this provides 700-5200 MIPS and 466-3700 MFLOPS of performance.
However, over 25 years later, this is becoming dated, and a possible architecture for the future would be ARMv8-A, such as the 64-bit ARM Cortex-A72, or the ARM Cortex-A35 for more power-efficient applications. A companion graphics processor for the ARMv8-A architecture, such as the ARM Mali-T880, can deliver 23.4 GFLOPS of processing performance in a single-core configuration, and 374.4 GFLOPS in a 16-core (MP16) configuration. (Because different processor performance tests can yield varying figures as setup conditions differ, a margin of error of +/- 5% should be applied to these numbers.)
This level of performance would be sufficient for processing 4K video streams; thus, for single images or high-bandwidth data measurements, it would be sufficiently powerful from a processing perspective, and would lend itself to use in the sub-probes and other spacecraft carried by the main interstellar probe.
The size of these modern architectures is small enough to fit within existing small laptop PCs (2016). Extrapolating forward to the point of construction, it can be forecast that power requirements will have decreased and processing capabilities increased, certainly enabling processing of 8K video streams, if not greater resolutions.
In addition to the backup flight computer systems, there will be very simple, microcontroller-based backup systems. These will enable continued spacecraft operations at a very low caretaking/maintenance level, even in the event of the primary and backup systems being compromised. They will also see use on the sub-probes and associated spacecraft, which may operate in environments where power is not consistently available. The other reasons for simple, microcontroller-type backup systems are that their simplicity ensures less can go wrong, they have minimal power requirements, and they allow more rapid recovery from major, system-wide adverse incidents.
Three candidate simple, low-power microcontroller architectures that could be considered using current technologies are the Microchip PIC, the Atmel ATmega328 series and the Texas Instruments MSP430. These provide simple instruction sets, robustness in terms of the ability to restart rapidly after a systems failure (since they do not carry the overhead of booting a relatively extensive operating system), and can be put into very low-power modes to maximise power efficiency in the event of a power deficit.
For instance, the Atmel ATmega328 series, in idle mode with the clock speed reduced, can draw as little as 1 uA, and the MSP430, in the same conditions, can draw less than 0.5 uA. For the main probe, this provides scope for backup operations after near-catastrophic losses of power, and for sub-probes and similar smaller deployed spacecraft, this maximises their potential usefulness in areas of inconsistent power generation. Thus even with almost complete losses of power, vehicles as large as the main Icarus spacecraft could remain operational. As with the backup flight computer systems, the technologies for even simple microcontroller emergency backups will progress between now (2016) and the start of vehicle construction, so even greater power efficiencies will be possible, along with more processing capability.
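As a minimal sketch of such a caretaker mode, assuming an ATmega328-class part and the standard AVR-libc interfaces, the microcontroller can sleep in its lowest-power mode and wake periodically on a watchdog interrupt to run health checks (the task functions named in the comments are hypothetical placeholders):

```c
#include <avr/io.h>
#include <avr/wdt.h>
#include <avr/sleep.h>
#include <avr/interrupt.h>

/* Watchdog wake-up: nothing to do here, just return to the main loop. */
ISR(WDT_vect) {}

static void watchdog_interrupt_mode(void) {
    cli();
    wdt_reset();
    /* Timed sequence: allow changes, then select interrupt mode, ~8 s period. */
    WDTCSR |= (1 << WDCE) | (1 << WDE);
    WDTCSR = (1 << WDIE) | (1 << WDP3) | (1 << WDP0);
    sei();
}

int main(void) {
    watchdog_interrupt_mode();
    set_sleep_mode(SLEEP_MODE_PWR_DOWN);   /* lowest-power sleep mode */

    for (;;) {
        sleep_enable();
        sleep_cpu();                       /* micro-amp draw until watchdog fires */
        sleep_disable();

        /* Hypothetical caretaker tasks, run once per ~8 s wake-up:  */
        /* poll_health_sensors(); service_heaters(); log_status();   */
    }
}
```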
By the time of the construction of an interstellar probe, a single processor/memory unit might ....
A main processor/memory unit with multiple redundant storage
There will be a limit to the bandwidth that can be allocated to communications with Earth, based on the bandwidth of the communications arrays and the overall gain of the system. This has been set at 20 Gbps, comprising 6 off 1.5 Gbps links from planetary probes and 2 off 1 Gbps links from stellar probes, or an equivalent combination of data links. Thus, significant amounts of data will need to be stored onboard the main spacecraft before transmission back to Earth.
The most likely current candidate for a long-term storage medium is the memristor, developed by HP as a successor to solid-state flash storage. Progress in optical, photonic memory storage is also being made in the laboratory, most notably the GST (germanium, antimony and tellurium) coated photonic technology of the University of Oxford and the Karlsruhe Institute of Technology in Germany. However, these technologies (and others) are still under development, and yet to be tested in space.
Even looking at existing space-qualified storage technologies, flash storage is increasing in popularity, as it is proving suitably reliable for space missions; it has been used on numerous deep space missions, including the New Horizons mission to Pluto and the Kuiper Belt, and the Dawn mission to the main belt asteroids Ceres and Vesta. Technologically, flash storage systems are growing to much greater storage capacities too, and even with today's technologies (2016) it is possible to build 1U CubeSat-sized volumes holding quad-redundant flash memory modules of up to 20 Terabytes, using the mSATA and M.2 SATA memory module form factors.
Large amounts of storage would also be required for the deep learning software, and to store the multiple scenarios required for the decision algorithms of the ship controls.
To put this in perspective for the interstellar probe and its sub-probes, consider the file sizes involved: 100 minutes of 8K video as an uncompressed file would result in:
100 minutes x 60 seconds per minute x 24 frames per second x 8192 pixels wide x 4320 pixels high x 3 colors x 8 bits per color / 8 bits per Byte = 15 Terabytes.
where, simplistically, it is assumed: 1000 Bytes per KB, 1000 KB per MB, 1000 MB per GB, 1000 GB per TB.
At 60 to 240 frames per second, and at 10 to 12 bits per color, the file would be proportionately larger.
If this 15 Terabyte file were compressed using the HEVC/H.265 standard at 300 Mbps, the same 100 minutes of 8K video would generate a 225 GB file.
If it is assumed that the initial files will need to be buffered, and thus need initial storage, then taking a 24-hour period as the upper bound for the length of a single data file capture gives 1440 minutes. That implies 14.4 x 15 Terabytes = 216 Terabytes before compression (3.24 Terabytes after compression). At 1.5 Gbps, that 3.24 TB would take 4.8 hours to transmit. Including transmission overheads, that might become 5 to 6 hours, so (simplistically) each 1.5 Gbps link could support 4 planetary probes, each storing and transmitting a 225 GB file per day.
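These figures can be reproduced with a short calculation, using the simplified decimal units assumed above:

```c
#include <stdio.h>

int main(void) {
    /* 100 minutes of uncompressed 8K video, as in the worked example above. */
    double seconds         = 100.0 * 60.0;
    double frames          = seconds * 24.0;           /* 24 fps            */
    double bytes_per_frame = 8192.0 * 4320.0 * 3.0;    /* 3 bytes per pixel */
    double uncompressed_tb = frames * bytes_per_frame / 1e12;
    printf("Uncompressed 100 min of 8K: %.1f TB\n", uncompressed_tb); /* ~15 TB */

    /* The same footage compressed with HEVC/H.265 at 300 Mbps. */
    double compressed_gb = 300e6 * seconds / 8.0 / 1e9;
    printf("Compressed: %.0f GB\n", compressed_gb);                  /* 225 GB */

    /* 24 h of capture (14.4 x the 100 min file) over a 1.5 Gbps link. */
    double daily_raw_tb       = 14.4 * uncompressed_tb;     /* ~220 TB; the text
                                                               uses 216 from the
                                                               rounded 15 TB    */
    double daily_compressed_b = 14.4 * compressed_gb * 1e9; /* 3.24 TB          */
    double link_hours = daily_compressed_b * 8.0 / 1.5e9 / 3600.0;
    printf("Daily: %.0f TB raw, %.1f h of link time compressed\n",
           daily_raw_tb, link_hours);                        /* ~4.8 h          */
    return 0;
}
```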
Small form factor (credit card sized) solid state storage is currently (2016) commercially available at 2 Terabytes, more than a factor of 100 short of the uncompressed daily requirement (although very close to the compressed file size). However, solid state storage has already increased by a factor of 100 in the last 15 years to reach the 2 Terabyte level, so it is not unreasonable to predict that a similar increase in storage capacity will occur over the next 15 years.
However, the sub-probes and other deployed spacecraft will all be producing copious amounts of data in numerous forms, though not necessarily continuously. Thus the required storage per individual spacecraft may be of the order of 10 to 100 times greater (2 to 20 Petabytes for multiple uncompressed files / 32 to 320 Terabytes for multiple compressed files).
For the main spacecraft, assuming an upper bound of 300 simultaneous data streams from sub-probes and micro-probes, plus 100 simultaneous data streams on the main vehicle, the storage requirements on the main vehicle would notionally be set between 800 Petabytes and 8 Exabytes for uncompressed files, and between 12.8 Petabytes and 128 Petabytes for compressed files.
However, this assumes the main spacecraft is always able to communicate back to Earth while holding no more than a day of full data. In reality, there may be periods of its orbit where there is no clear line of communication, or where bandwidth is significantly reduced, and significantly more data may need to be stored onboard for later transmission back to Earth. As a result, a worst case of 30 days of storage is assumed to be retained onboard the main spacecraft. This figure allows for orbital occlusion by the star, and for any other unforeseen events that might impact data transmission.
Consequently, the storage requirements must be multiplied by a factor of 30 to obtain an acceptable contingency data storage requirement. Thus the totals of 800 Petabytes to 8 Exabytes for uncompressed files, and 12.8 Petabytes to 128 Petabytes for compressed files, increase to between 24 Exabytes and 240 Exabytes for uncompressed files, and between 384 Petabytes and 3.84 Exabytes for compressed files.
It was noted previously that 2 Terabytes is the current largest small form factor solid state storage capacity. Whilst the size of the Icarus spacecraft enables a comparatively large volume to be allocated for storage, if the size constraint of the small form factor storage is retained for this exercise, to reduce overall systems volume, it would still require a disproportionate 12 million to 120 million 2 Terabyte solid state drives for the storage of uncompressed files. The storage per drive will need to increase massively in order to achieve a practical storage volume.
Based on these figures, a maximum of 40 units is allocated for data storage on the Icarus interstellar probe, with dual redundancy of these 40 storage units. Accommodating all the data in 40 storage units would require each unit to have a storage capacity of 600 Petabytes to 6 Exabytes, equivalent to 300,000 to 3 million 2 Terabyte solid state storage drives.
Previously, it was noted that storage sizes have increased by a factor of 100 in the last 15 years. Applying this factor to the current 2 Terabyte drives would give 200 Terabyte drives, which would still require 3000 to 30,000 drives per unit. Another 15 years, with another factor of 100 increase in storage capacity, results in a requirement for 30 to 300 drives of 20 Petabyte capacity (weighing 2.4 kg to 24 kg). Another storage generation or two beyond this, approximately 3-5 years further on, would see the storage requirements for the main spacecraft being met in 33-35 years from now (2016), not counting any technological breakthroughs in data storage.
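The drive counts quoted above follow directly from the per-unit capacity requirement; a sketch of this generational scaling, assuming a x100 capacity gain per 15-year storage generation:

```c
#include <stdio.h>

int main(void) {
    /* Each of the 40 redundant storage units must hold 600 PB to 6 EB
       (from the 24-240 EB totals derived above), expressed here in TB. */
    double unit_lo_tb = 600e3;   /* 600 PB */
    double unit_hi_tb = 6e6;     /*   6 EB */

    /* Assume a x100 capacity gain per ~15-year storage generation,
       starting from today's (2016) 2 TB small form factor drives. */
    double drive_tb = 2.0;
    for (int gen = 0; gen <= 2; gen++) {
        printf("Gen %d (%g TB drives): %.0f to %.0f drives per unit\n",
               gen, drive_tb, unit_lo_tb / drive_tb, unit_hi_tb / drive_tb);
        drive_tb *= 100.0;
    }
    return 0;
}
```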
NOTE.
A slightly larger (2.5”) form factor for flash data storage currently (2016) stores a maximum of 15.36 Terabytes, with recent developments leading to a possible capacity doubling in this format to 30.72 Terabytes by the end of 2016. This would already reduce the requirement to 20,000 to 200,000 30 Terabyte solid state storage drives per unit. The mass budget for this number of drives is still excessive: given that a typical 2.5” SSD such as the Intel SSD X25-M weighs 80 g, the total mass would be between 1600 kg and 16,000 kg, demonstrating why several generations of storage increase will be required in order to achieve a sufficient mass decrease.
Thus the possible masses for data storage vary widely. Keeping in proportion with the primary computer mass budget chosen, a mass budget of 4000 kg seems reasonable.
The electrical power consumption for the flight computers, avionics and other electronics systems is separated into the following power consuming systems:
Main Flight Computers power
Guidance Systems power
Navigation Systems power
Telemetry transmitters power
Telemetry receivers power
Avionics sub systems power
Secondary Flight Computers power
Backup Flight Computers power
Avionics Bay Thermal Control power
Each of these sub-systems will generate thermal energy as a byproduct, albeit minimised as far as possible, and this needs to be considered as part of the overall thermal heat rejection budget.
As of 2015, efficiencies of 7 GigaFLOPS per watt were reached by systems such as RIKEN's Shoubu supercomputer outside Tokyo, Japan (). At that efficiency, a single ExaFLOP would require a power supply of 142 MW. A reduction in power demand of two orders of magnitude seems perfectly feasible, so we can suppose a power budget of approximately one megawatt.
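The arithmetic behind this estimate, assuming the Shoubu-class figure of about 7 GigaFLOPS per watt:

```c
#include <stdio.h>

int main(void) {
    double target_flops = 1e18;     /* 1 ExaFLOP                     */
    double flops_per_w  = 7.03e9;   /* Shoubu-class efficiency, 2015 */

    double power_mw = target_flops / flops_per_w / 1e6;
    printf("At 2015 efficiency: %.0f MW\n", power_mw);        /* ~142 MW */

    /* Assume a two-order-of-magnitude efficiency gain by construction. */
    printf("After x100 gain: %.1f MW\n", power_mw / 100.0);   /* ~1.4 MW */
    return 0;
}
```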
The system design is still, at this point, insufficiently detailed to enable a realistic choice of programming language for the main interstellar probe flight computing system, much less of operating system or application software. It is envisaged that any such choices would be taken a few iterations further into the design, although at this point (2016) it is envisaged that any operating system would be a custom or hyper-optimised one, to take advantage of the specific system architecture.
Expert systems such as IBM's Watson have shown what is possible using today's technology in terms of inferring solutions to problems. Applying today's advanced software technologies such as Watson and deep learning to an interstellar probe would enable such a probe to be autonomous. Autonomous cars are another example of what is possible with existing technology. Taking today's technologies and scaling them down (following standard technological extrapolation) to fit the size and mass constraints of the Icarus interstellar probe seems quite possible. Deep learning software and simulations should allow us to 'train' the ship's software to handle various situations, covering a large spectrum of possibilities (3)(4).
However, we can make inferences as to the software design philosophy which will be needed to maintain the systems on the starship.
> Add work on Intelligent Agents by Dimos Homatas, Andreas Tziolas(5)
It is envisaged that the systems standards used will follow approaches similar to existing standards such as MIL-STD-1553B (1 Mbit/s) and STANAG-3910 (20 Mbit/s), or the specifically space-oriented SpaceWire, for the mission-critical onboard communications, only at vastly greater bandwidths. Clearly, the technology will have advanced significantly by the time of launch, and the latest, most robust in-space standard will be used; the examples mentioned, however, provide a baseline to measure against.
For data communications on the spacecraft, it may be possible to adopt what will by then be a more commercially available alternative, such as an equivalent of 1 Terabit Ethernet.
The thermal output of the IT systems may be considerable, although a substantially photonic-based system would have much more modest cooling requirements. One of the positive points of using a large vehicle such as Icarus is that it can provide ample power resources for its electronics, since the primary power needs of the drive are at the level of Gigawatts, and even the secondary power systems must provide Megawatts. The present trend towards higher-temperature operation of electronic systems also reduces cooling needs and the future weight of cooling systems. If we plan for a one megawatt electrical power supply, the cooling systems must be sized to reject a comparable amount of waste heat (see the mass allowance table below).
Cabling has the advantage over wireless communications of being more robust and less affected by radio frequency interference. The cabling systems would consist of optical cables for primary data signalling, especially for long cable runs, and electrical cables for secondary/backup data signalling and for connections to specific sensors or actuators.
Optical cabling would increase bandwidth and reduce mass considerably compared to electrical cabling, although material degradation, especially due to thermal cycling embrittlement and radiation bombardment over long timescales, needs to be considered. It is indeed likely that parts of the cabling will need to be replaced over the life of the starship, and mass allowances should be part of the vehicle maintenance mass budget.
Use of electrical connectors will be minimised to reduce mass and opportunities for mechanical failure. Where necessary, they will follow the relevant standards laid down at the time, which could entail MIL-STD-1553B, RS-232, USB or Thunderbolt type connectors. Connectors will be needed for tanks and any other jettisoned or separated systems, such as probes and sub-probes.
Electrical and electronic shielding of enclosures will be essential. Due to the large number of highly energetic power systems in the starship, emitting electromagnetic waves in many parts of the spectrum, and the use of extremely strong magnetic fields for the drive, the starship will be very 'noisy'.
Although the basic design of all the starships puts most of the electronics behind the radiation shield protecting the fuel tanks and payload from the drive’s radiation, it will also be necessary to protect electronics from the effects of cosmic ray impacts coming from all directions in deep space. One possible shielding configuration would be to put at least the primary computer systems inside the final deceleration fuel tank, using the mass of the fuel to serve as shielding.
The use of optical communications would also reduce the possibilities of interference from induction from the many power systems and reduce the need for electronic shielding.
Computing power calculation and mass allowance
Function : Mass (kg) : Notes
Primary computer : 1000 : 1 ExaFLOP
Robust back-up : 1000 :
Storage : 4000 : Xx drives, xx kg per drive
Cabling : 1000 : Optical
Instruments : 1000 : 2000 x 0.5 kg instruments
Cooling systems : 3000 : 300 W/kg = 1 MW cooling
Power source : 1000 : From the secondary power chapter
Total mass budget : 12,000 :
HPE bolts hi-capacity SSD support onto StoreServ – 7th June, 2016 – The Register - [http://www.theregister.co.uk/2016/06/07/hpe_adds_hicap_ssd_support_to_storeserv/]
Samsung Now Introducing World's Largest Capacity (15.36TB) SSD for Enterprise Storage Systems – 3rd March, 2016 – Samsung - [https://news.samsung.com/global/samsung-now-introducing-worlds-largest-capacity-15-36tb-ssd-for-enterprise-storage-systems]
AutoGNC. This testbed is designed to test autonomous spacecraft proximity operations around small celestial bodies, moons, or other spacecraft. The software is baselined for upcoming comet and asteroid sample return missions. This architecture and testbed will provide a direct improvement upon the onboard flight software utilized for missions such as Deep Impact, Stardust, and Deep Space 1. https://ntrs.nasa.gov/search.jsp?R=20100028867
DARPA SEEKS TO CREATE SOFTWARE SYSTEMS THAT COULD LAST 100 YEARS http://www.darpa.mil/NewsEvents/Releases/2015/04/08.aspx
work on Intelligent Agents by Dimos Homatas