Learn Like a Human

In the paper, the author presents hierarchical temporal memory (HTM), a model based on the functional organization of the neocortex, the largest region of the brain. The author notes that over the past few decades, artificial intelligence systems have outperformed humans mostly at arithmetic computation. However, contemporary AI systems perform poorly on tasks where memory is dynamic and learning is required. He argues that this is probably because such systems do not learn the way humans do, and suggests that human-like learning might be possible if learning is modeled on the human brain. He therefore proposes the HTM model, based on the neocortical portion of the brain. Analysis of the neocortex reveals that information is stored there in a distinctive hierarchical and dynamic manner. The author draws on these features of the neocortical model to build an artificial learning system that learns much as humans do and shares several of the brain's characteristics.

It has been observed that different regions of the neocortex look very similar at both the macroscopic and microscopic levels, even though they perform very different functions such as seeing, reading, and listening. Information is stored in the neocortex in the form of overlapping, interconnected sheets. This suggests that almost all functions use the same simple method to learn and store information. Tracing the connectivity of these sheets and the paths among them reveals a well-defined hierarchical structure. The learning model is therefore built around a hierarchy of nodes (groups of neurons). Lower-level nodes in this hierarchy store small, simple pieces of information, while higher-level nodes store more complex information.
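The hierarchy described above can be illustrated with a minimal sketch. This is not Numenta's implementation; the `Node` class, the two-level layout, and the way inputs are split are all illustrative assumptions, intended only to show lower-level nodes storing small patterns while a higher-level node stores combinations of them.

```python
# Illustrative sketch (not the actual HTM algorithm): a two-level
# hierarchy where lower-level nodes learn small input patterns and a
# higher-level node learns combinations of the children's outputs.

class Node:
    """Stores the patterns it has seen; names each by its index."""
    def __init__(self):
        self.patterns = []  # learned patterns, in order of first appearance

    def learn(self, pattern):
        if pattern not in self.patterns:
            self.patterns.append(pattern)
        return self.patterns.index(pattern)  # the node's "name" for it

# Level 1: two nodes, each watching half of a 4-element input.
left, right = Node(), Node()
# Level 2: one node that combines the children's pattern names.
top = Node()

def learn_input(vec):
    l = left.learn(tuple(vec[:2]))
    r = right.learn(tuple(vec[2:]))
    return top.learn((l, r))

learn_input([0, 1, 1, 0])
learn_input([0, 1, 0, 1])  # reuses the left node's stored pattern

# The left child stored one pattern, the right child two,
# and the top node two combinations.
print(len(left.patterns), len(right.patterns), len(top.patterns))  # → 1 2 2
```

Note how the second input did not add anything to the left node: its left half was already known, so only the genuinely new parts were stored.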

This hierarchical way of storing information has several important features. The first is the distribution and reuse of information. Information about a single thing is distributed, at graded levels of detail, across many nodes: small, simple aspects are stored in lower nodes, while higher-level nodes store ideas that are combinations of the information in the lower nodes. Knowledge is thus highly reusable. When something new is encountered, the lower-level nodes can identify its similarities to previously learned information, and higher-level nodes share what was learned earlier at the lower levels. Because information is shared at the higher levels, only the aspects that are not in common with previously learned information need to be stored. This implies that although the initial training time for such a hierarchical model is high, subsequent learning is quick; moreover, what is learned later is itself reused for further learning.
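The claim that initial training is expensive while subsequent learning is cheap can be made concrete by counting storage. The following toy sketch is a hypothetical illustration, not the paper's method: each level stores only patterns it has never seen, so an input that shares low-level structure with earlier ones forces fewer new entries.

```python
# Hypothetical sketch of hierarchical reuse: count how many new
# entries each input forces across a two-level memory.

def learn(memory, pattern):
    """Store `pattern` if new; return 1 if stored, 0 if reused."""
    if pattern in memory:
        return 0
    memory.add(pattern)
    return 1

low, high = set(), set()  # low-level and high-level pattern stores

def cost(vec):
    """Number of new entries this input adds across both levels."""
    a = learn(low, tuple(vec[:2]))   # left half at the low level
    b = learn(low, tuple(vec[2:]))   # right half at the low level
    c = learn(high, (tuple(vec[:2]), tuple(vec[2:])))  # combination
    return a + b + c

print(cost([0, 1, 1, 0]))  # → 3: the first input is entirely new
print(cost([0, 1, 1, 1]))  # → 2: its left half is already known
```

The first input pays the full storage cost; the second reuses a low-level pattern and only records what differs, mirroring the text's point about quick subsequent learning.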

The second important feature of this learning model is the way time is used as a teacher. Each node learns common sequential patterns over time. When a new sequence arrives, it is matched against the existing patterns and the best-fitting pattern is selected. Moving toward higher-level nodes, information is represented as sequences of sequences of patterns. As a result, two similar objects can be classified as the same even when there are fine variations between them. In short, even in the presence of rapidly changing patterns at the bottom of the hierarchy, the model forms stable concepts at the top. Time is also used to decide precisely what to learn, based on when patterns occur: the model assumes that patterns occurring close together in time are generally related to a common cause. Thus, even without a programmer or explicit guidance about what to learn, it identifies common patterns in closely occurring events and stores them as belonging to a single concept.
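The best-match step described above can be sketched as follows. This is a deliberately simplified stand-in for HTM's sequence learning (the `SequenceNode` class and the position-overlap score are assumptions made for illustration): a node memorizes short sequences and classifies a new one by its closest stored match, tolerating fine variations.

```python
# Toy sketch of "time as a teacher": a node memorizes sequences and
# matches a new sequence to the stored one it overlaps most with.

def overlap(a, b):
    """Count positions where two equal-length sequences agree."""
    return sum(x == y for x, y in zip(a, b))

class SequenceNode:
    def __init__(self):
        self.sequences = []  # sequences learned so far

    def learn(self, seq):
        self.sequences.append(tuple(seq))

    def classify(self, seq):
        """Index of the best-matching stored sequence."""
        return max(range(len(self.sequences)),
                   key=lambda i: overlap(self.sequences[i], seq))

node = SequenceNode()
node.learn("abcd")
node.learn("wxyz")
print(node.classify("abcz"))  # → 0: "abcz" varies from "abcd" only slightly
```

Despite the variation in the last element, the new sequence is still grouped with the pattern it mostly resembles, which is the behavior the text attributes to the model.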

The third important feature of the learning model is its memory structure. Memory in the brain does not store information about only one instance, but neither does it keep all the information verbatim the way computer memory does. It does not fail if a neuron dies or is unavailable; the brain makes do with whatever information remains. Further, since the model keeps learning and training continuously, the memory is essentially dynamic. The memory of such a learning model is therefore highly distributed, interconnected, hierarchical, and dynamic.
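The fault tolerance described here can be illustrated with a small sketch of redundant storage. The `DistributedMemory` class and majority-vote recall are assumptions chosen to demonstrate graceful degradation, not the brain's or HTM's actual mechanism: a value stored across several units survives the loss of one of them.

```python
# Illustrative sketch of graceful degradation: a value is stored
# redundantly across several units, and recall takes a majority vote,
# so losing one unit does not lose the memory.

from collections import Counter

class DistributedMemory:
    def __init__(self, n_units=5):
        self.units = [dict() for _ in range(n_units)]

    def store(self, key, value):
        for unit in self.units:   # replicate across all units
            unit[key] = value

    def recall(self, key):
        """Majority vote over the units that still hold the key."""
        votes = [u[key] for u in self.units if key in u]
        return Counter(votes).most_common(1)[0][0] if votes else None

mem = DistributedMemory()
mem.store("dog", "animal")
del mem.units[0]["dog"]    # one unit "dies"
print(mem.recall("dog"))   # → animal: recall still succeeds
```

The contrast with a conventional computer memory is the point: a single dictionary would lose the entry outright, whereas the redundant store degrades gracefully.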

The author’s team has built the HTM model on these features of the neocortical learning model. Among the various challenges, implementing the memory architecture for such a model has been the greatest. Despite these challenges, the model demonstrates good learning capability on various small and simple problems.