CMSC 191: Introduction to Neural Computing
Recurrent Networks and Memory
In this topic, we’ll dive into the fascinating role of memory in neural computation, focusing on how recurrent neural networks (RNNs) have shaped our understanding of how machines can “remember” and learn from experience. We’ll start with two foundational models—Elman and Jordan networks—which, although simple in structure, introduced a game-changing idea: feedback connections. These connections allow the network to process information in a way that mimics our own sense of time and memory.
As we explore these networks, you'll see how context units store information from previous time steps, enabling the network not just to react to the current input but to adapt and predict sequences of events. This gives the network the ability to “remember” and use that memory to improve future decisions.
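To make the idea of context units concrete, here is a minimal NumPy sketch of an Elman-style step. The layer sizes, random weights, input sequence, and the name elman_step are illustrative assumptions for this example, not anything specified in the handout.

```python
import numpy as np

# A toy Elman (simple recurrent) network: the context units hold a copy of
# the previous hidden state and are fed back alongside the new input, so
# each step's computation depends on what came before.

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 3, 5, 2                            # illustrative sizes
W_xh = rng.normal(scale=0.1, size=(n_hidden, n_in))        # input  -> hidden
W_ch = rng.normal(scale=0.1, size=(n_hidden, n_hidden))    # context -> hidden
W_hy = rng.normal(scale=0.1, size=(n_out, n_hidden))       # hidden -> output

def elman_step(x, context):
    """One time step: blend the current input with the stored context."""
    h = np.tanh(W_xh @ x + W_ch @ context)   # hidden state "remembers" the past
    y = W_hy @ h                             # output for this step
    return y, h                              # h is copied into the next context

# Process a short random sequence; the context carries memory across steps.
context = np.zeros(n_hidden)
for t, x in enumerate(rng.normal(size=(4, n_in))):
    y, context = elman_step(x, context)
    print(f"step {t}: output = {np.round(y, 3)}")
```

A Jordan network differs only in what is fed back: its context units hold a copy of the previous output y rather than the previous hidden state h.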
Next, we’ll turn to the Hopfield network, a system that models associative memory through a process called energy minimization, a concept borrowed from physics. Hopfield networks help us understand that memories are not static, but can evolve and connect in dynamic ways.
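The sketch below is again only illustrative: it stores a bipolar (+1/−1) pattern with a Hebbian rule and recovers it from a corrupted cue. The function names (train_hopfield, energy, recall) and the eight-unit pattern are assumptions made for this example.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian weight matrix: W[i, j] accumulates correlations between units."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0)                 # no self-connections
    return W

def energy(W, s):
    """Hopfield energy; each update can only keep it the same or lower it."""
    return -0.5 * s @ W @ s

def recall(W, cue, steps=5, seed=0):
    """Asynchronous updates: flip one unit at a time toward lower energy."""
    rng = np.random.default_rng(seed)
    s = cue.copy()
    for _ in range(steps):
        for i in rng.permutation(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Store one pattern, then recall it from a noisy version of itself.
stored = np.array([[1, -1, 1, -1, 1, -1, 1, -1]])
W = train_hopfield(stored)
noisy = stored[0].copy()
noisy[:2] *= -1                            # corrupt the cue: flip two bits
print("energy before:", energy(W, noisy))
out = recall(W, noisy)
print("energy after: ", energy(W, out))
print("recovered stored pattern:", np.array_equal(out, stored[0]))
```

Because each asynchronous flip can only keep the energy the same or lower it, the network settles into a stable state, and the stored patterns sit at (local) minima of that energy. That is exactly the sense in which associative recall is energy minimization.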
Together, these architectures (Elman, Jordan, and Hopfield) show us that intelligence is more than just performing calculations. It’s about the continuity of thought, the context in which we experience things, and the remarkable ability to recall and reshape what we’ve learned. In many ways, they demonstrate that to truly understand intelligence, we need to explore how memory and time shape thought, action, and learning.
By the end of this topic, we hope you'll feel inspired to see how memory, context, and feedback play essential roles in not only neural networks but in the very nature of learning and intelligence itself.
Learning objectives:
Describe how Elman and Jordan networks implement memory through feedback connections.
Explain the role of context units in modeling temporal and sequential dependencies.
Illustrate the concept of Backpropagation Through Time (BPTT) as a method for training recurrent networks (a brief worked sketch follows this list).
Explain how Hopfield networks perform associative memory retrieval using energy minimization.
Discuss how local neuron interactions can yield global intelligent behavior.
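As a preview of the BPTT objective above, here is a hand-rolled sketch for a one-unit recurrent network: the forward pass unrolls the recurrence over a short sequence, and the backward pass pushes the error through every time step to accumulate gradients for the shared weights. The weights, inputs, target, and learning rate are arbitrary illustrative choices.

```python
import numpy as np

# BPTT on a tiny one-unit RNN with recurrence h_t = tanh(w*h_{t-1} + u*x_t)
# and loss 0.5*(h_T - target)^2 on the final state.

def bptt(w, u, xs, target):
    # Forward pass: unroll the recurrence, remembering every hidden state.
    hs = [0.0]
    for x in xs:
        hs.append(np.tanh(w * hs[-1] + u * x))
    loss = 0.5 * (hs[-1] - target) ** 2

    # Backward pass: walk back through the unrolled time steps.
    dw, du = 0.0, 0.0
    dh = hs[-1] - target                   # dL/dh_T
    for t in reversed(range(len(xs))):
        da = dh * (1.0 - hs[t + 1] ** 2)   # back through tanh at this step
        dw += da * hs[t]                   # gradient w.r.t. recurrent weight
        du += da * xs[t]                   # gradient w.r.t. input weight
        dh = da * w                        # error flowing back to h_{t-1}
    return loss, dw, du

xs, target = [0.5, -0.3, 0.8], 0.7
w, u = 0.4, 0.9
for _ in range(50):                        # plain gradient descent on w and u
    loss, dw, du = bptt(w, u, xs, target)
    w -= 0.1 * dw
    u -= 0.1 * du
loss, _, _ = bptt(w, u, xs, target)
print(f"loss after training: {loss:.6f}")
```

Automatic-differentiation libraries such as PyTorch perform the same unrolling implicitly when you compute a loss over a sequence and call backward() on it.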
Guide questions:
How does feedback allow recurrent networks to model context and continuity over time?
In what ways do Elman and Jordan networks differ in how they store and use memory?
What makes the Hopfield network an elegant example of collective intelligence and associative recall?
Recurrent Networks and Memory (topic handout)
When Networks Remember
Elman and Jordan Networks
The Echo Chamber: Introducing Context
Modeling Time: The Power of Feedback
Hopfield Networks and Associative Memory
Stable States: Memory as an Energy Minimum
Global Intelligence from Local Rules
Echoes and Energy