Analog-to-digital conversion (ADC) is the backbone of modern electronics. An ADC carries information from the physical world into the abstract digital one by measuring analog signals.
Traditionally, a single ADC samples a signal very quickly to capture its content. Our approach instead arrays many slow ADC subunits in parallel and uses machine learning to reconstruct the original signal from their combined measurements.
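The interleaving idea can be sketched in a few lines. This is a minimal illustration under idealized assumptions (perfectly matched, noiseless subunits on a fixed round-robin schedule); the function names are ours, chosen for the sketch, and do not describe any particular product:

```python
import math

def interleaved_sample(signal, num_subunits, total_samples):
    """Each slow subunit captures every num_subunits-th sample, offset by its index.

    signal: a function of the (integer) sample time.
    Returns one sample stream per subunit.
    """
    return [
        [signal(t) for t in range(k, total_samples, num_subunits)]
        for k in range(num_subunits)
    ]

def reconstruct(sub_streams):
    """Re-interleave the subunit streams into one full-rate stream."""
    num_subunits = len(sub_streams)
    total = sum(len(s) for s in sub_streams)
    out = [0.0] * total
    for k, stream in enumerate(sub_streams):
        for i, value in enumerate(stream):
            out[k + i * num_subunits] = value
    return out

# Four subunits, each running at a quarter of the full rate,
# together capture the full-rate sample sequence.
sine = lambda t: math.sin(0.1 * t)
streams = interleaved_sample(sine, num_subunits=4, total_samples=16)
full_rate = reconstruct(streams)
```

In practice the subunits are not perfectly matched, which is exactly where the machine learning reconstruction earns its keep.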
Each individual subunit is slow, cheap, and simple. At small process nodes, the physical interactions between circuit elements become complex, especially for analog circuitry, so we use only well-studied designs, reducing the time and risk of bringing new products to market.
Unlike sub-Nyquist approaches, which rely on prior assumptions such as signal sparsity, we assume nothing about the signal before it reaches our device: our technology can handle any signal in its frequency band. Our machine learning model learns the behavior of its own circuitry, not the content of the signal.
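The simplest version of "learning the circuitry" is calibrating out each subunit's analog imperfections. As a stand-in for the full machine learning model, the hypothetical sketch below assumes each subunit distorts its samples by an unknown gain and offset, and recovers both by ordinary least squares against a known training signal:

```python
import math

def fit_gain_offset(measured, truth):
    """Least-squares fit of truth ~= gain * measured + offset for one subunit."""
    n = len(measured)
    mean_m = sum(measured) / n
    mean_t = sum(truth) / n
    cov = sum((m - mean_m) * (t - mean_t) for m, t in zip(measured, truth))
    var = sum((m - mean_m) ** 2 for m in measured)
    gain = cov / var
    offset = mean_t - gain * mean_m
    return gain, offset

def correct(measured, gain, offset):
    """Apply the learned correction to a subunit's raw samples."""
    return [gain * m + offset for m in measured]

# A subunit with 5% gain error and a 0.1 offset, fed a known training tone:
truth = [math.sin(0.3 * t) for t in range(50)]
measured = [0.95 * v + 0.1 for v in truth]
gain, offset = fit_gain_offset(measured, truth)
corrected = correct(measured, gain, offset)
```

A real device has richer, nonlinear, and time-varying behavior, which is why a learned model replaces this closed-form fit; the principle of training on the circuitry rather than the signal is the same.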
We are no longer limited by how fast an individual ADC can sample a signal. Our designs add subunits to divide a signal into manageable tasks, and scaling up the number of subunits unlocks sample rates and effective bit resolutions that were previously unattainable.
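To first order, the scaling argument is simple arithmetic. The helper names below are illustrative, and the resolution rule assumes independent, identically distributed noise across redundant measurements:

```python
import math

def aggregate_rate(subunit_rate_hz, num_subunits):
    """Time-interleaving multiplies throughput by the number of subunits."""
    return subunit_rate_hz * num_subunits

def enob_gain_from_averaging(num_redundant):
    """Averaging K independent measurements cuts noise by sqrt(K),
    which is worth 0.5 * log2(K) extra effective bits (first-order estimate)."""
    return 0.5 * math.log2(num_redundant)

# Ten subunits at 100 MS/s yield an aggregate 1 GS/s;
# sixteen-way redundancy buys about two extra effective bits.
rate = aggregate_rate(100e6, 10)
extra_bits = enob_gain_from_averaging(16)
```

Real interleaved converters pay mismatch and skew penalties against these ideal numbers, which is the gap the machine learning reconstruction is designed to close.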
At small node sizes, the unexpected behavior of a single block along a critical path can render a chip defective. The parallel nature of our designs builds in redundancy, and the machine learning unit turns that redundancy into resilience, reducing the cost of each design-and-manufacture cycle.
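Redundancy becomes resilience because healthy subunits can cover a failed one's time slots. In our designs that recovery is learned, but a crude, hypothetical fallback illustrates the principle: fill a dead subunit's sample slots from its full-rate neighbors:

```python
def fill_dead_subunit(samples, dead_index, num_subunits):
    """Replace a failed subunit's slots with the average of adjacent samples.

    samples: the full-rate interleaved stream.
    dead_index: which subunit (0..num_subunits-1) produced garbage.
    Assumes num_subunits > 2, so a dead slot's neighbors are healthy.
    """
    out = list(samples)
    for i in range(dead_index, len(out), num_subunits):
        left = out[i - 1] if i > 0 else out[i + 1]
        right = out[i + 1] if i + 1 < len(out) else out[i - 1]
        out[i] = (left + right) / 2
    return out

# A slowly varying signal sampled by four subunits; subunit 1 has failed
# and its slots (indices 1, 5, 9, 13) hold garbage until we fill them.
clean = [float(t) for t in range(16)]
corrupted = [0.0 if t % 4 == 1 else v for t, v in enumerate(clean)]
recovered = fill_dead_subunit(corrupted, dead_index=1, num_subunits=4)
```

Linear interpolation is exact only for this toy ramp signal; a learned reconstruction can do far better by exploiting everything it knows about the remaining subunits.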