Background

Moore's Law has held remarkably well for over 40 years, and this exponential growth in the availability and density of circuit elements (and thus computational power) has engineers thinking about reproducing the most complicated computational machine we know of: the human brain.  Several attempts have been made, but even Babbage's 19th-century mechanical computers, the difference and analytical engines (the latter programmed with punched cards borrowed from the Jacquard loom), already embodied the defining feature of the von Neumann architecture, the modern computer architecture named after the man who described it in 1945: the separation of processing units from memory units.  Engineers have developed simple techniques to program amazingly complex processing schemes on machines of this design.  New techniques continue to let us take advantage of increasing processor and memory power, and we have not yet reached the limits of what we can do with the von Neumann architecture.

But around the time that von Neumann set out his amazingly useful design for a stored-program digital computer, it was becoming clear that the human brain processes information differently.  Some discussions divide computational architectures into three categories: von Neumann, parallel processing, and neural networks.  Parallel processing uses multiple processors working together on memory that is partly local to each processor but ultimately shared.  Neural networks, even the simple perceptron invented around the same time (1957), use an artificial neuron as a computational subunit that also serves as a memory subunit: both the computation and the memory reside in the particular arrangement and connection of these subunits.  This differs from the von Neumann architecture, where a standard arrangement and connection of memory and processing units is exploited by varying sequences of logical instructions, themselves stored in a standard arrangement of memory units.
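To make this concrete, here is a minimal sketch of a single perceptron in Python/NumPy (an illustration with hand-picked weights, not code from any of the projects discussed here): the unit's only "memory" is its weight vector and bias, and its computation is simply a weighted sum of its inputs passed through a threshold.

    import numpy as np

    class Perceptron:
        """A single artificial neuron: its weights and bias are simultaneously
        its memory and the definition of its computation."""
        def __init__(self, weights, bias):
            self.weights = np.asarray(weights, dtype=float)
            self.bias = float(bias)

        def __call__(self, inputs):
            # Weighted sum of inputs followed by a hard threshold (step activation).
            return 1 if np.dot(self.weights, inputs) + self.bias > 0 else 0

    # With these particular weights the unit behaves as a logical AND gate.
    and_gate = Perceptron(weights=[1.0, 1.0], bias=-1.5)
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", and_gate([a, b]))

Changing the stored weights changes what the unit computes; there is no program separate from the connection strengths themselves.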

Although perceptrons were also inspired by biological neural networks, it has become clear that by themselves they cannot reproduce how the human brain processes information.  A single-layer perceptron can compute only linearly separable functions, so it cannot represent even a function as simple as exclusive-or, as Minsky and Papert showed in 1969.  Stacking layers does not help as long as the activation functions are linear, because a composition of linear transformations collapses into a single linear transformation; nonlinear activations and far richer connectivity are needed before such units begin to approach brain-like processing.
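As a quick numerical illustration of the collapse argument (a sketch with arbitrary random weights, not a model from any project discussed here), composing two purely linear layers is exactly equivalent to a single linear layer:

    import numpy as np

    rng = np.random.default_rng(0)

    # Two stacked layers with linear (identity) activations and arbitrary weights.
    W1 = rng.normal(size=(4, 3))   # layer 1: 3 inputs -> 4 hidden units
    W2 = rng.normal(size=(2, 4))   # layer 2: 4 hidden units -> 2 outputs
    x = rng.normal(size=3)

    two_layer = W2 @ (W1 @ x)      # forward pass through both layers
    one_layer = (W2 @ W1) @ x      # the same computation as a single linear map

    print(np.allclose(two_layer, one_layer))  # True: the extra layer added nothing

With a nonlinear activation inserted between the layers, the equivalence breaks down, which is exactly why nonlinearity is essential.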

We have created amazing data-processing and storage computers with the von Neumann architecture, including the massive data collection, sharing, and visualization mechanism known as the "internet."  We have not built a single data-storage warehouse or other practical computer out of a human brain.  Even the most powerful human brains use digital von Neumann computers as adjuncts for memory and computation, and the computational subunits of the brain, neurons, are much slower than comparable electronic components.  So why would we want to take anything from the human brain?  The main reason is that we do not yet understand how the brain can compute some things so quickly with such slow subunits and slow connections; we know only that it involves massive parallelism in both processing and connectivity.

Several projects are using computational simulations to investigate how the brain's organization can solve some of these pattern-recognition and higher-level functional problems.  My research evaluates how well each of them can solve certain engineering problems that classical computing approaches have not been able to solve, judging both accuracy and efficient use of resources, two areas where classical computational techniques have fallen short.