CMSC 191: Introduction to Neural Computing
Learning Rules and Adaptation
In this topic, we’ll explore the fascinating process of how neural networks learn to adjust, correct, and improve over time. We’ll start with Hebbian learning—the biological idea that "neurons that fire together, wire together." Under this rule, a connection strengthens whenever the two neurons it links are active together, so correlated activity shapes the weights and the network can pick up patterns in its data without any supervision.
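To make the rule concrete, here is a minimal Python sketch of a single linear neuron updated with the plain Hebbian rule Δw = η · y · x. The two-input setup, the variable names, and the learning rate of 0.1 are illustrative assumptions, not part of the rule itself.

```python
import numpy as np

def hebbian_update(weights, x, learning_rate=0.1):
    """One Hebbian step: strengthen each weight in proportion to the
    product of its input and the neuron's output (delta_w = eta * y * x).
    There is no teacher signal anywhere -- the rule is unsupervised."""
    y = weights @ x                    # neuron output (linear activation)
    return weights + learning_rate * y * x

# Toy demo (made-up data): two inputs that always fire with the same sign.
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=2)      # small random starting weights
for _ in range(20):
    x = rng.choice([1.0, -1.0]) * np.ones(2)   # a correlated pair of inputs
    w = hebbian_update(w, x)
print(w)  # both weights have grown together, mirroring the correlation
```

Notice that nothing in the update asks whether the output was right or wrong; the weights simply track correlation. (Pure Hebbian growth is also unbounded, which is why variants such as Oja's rule add a normalizing term.)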
Next, we’ll introduce the Delta Rule, which brings in error-based learning—this is the key to how neural networks figure out how to fix their mistakes. From there, we’ll lay the groundwork for gradient descent, the powerful method that guides the network toward better solutions by minimizing errors step by step.
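For comparison, here is the same kind of sketch for the Delta Rule, Δw = η · (t − y) · x, where t is the target supplied by a teacher. The toy target function, the names, and the learning rate are again assumptions made purely for illustration.

```python
import numpy as np

def delta_update(weights, x, target, learning_rate=0.1):
    """One Delta Rule step: delta_w = eta * (target - y) * x.
    Each weight moves in proportion to the error it helped create."""
    y = weights @ x            # the neuron's current prediction
    error = target - y         # how wrong that prediction was
    return weights + learning_rate * error * x

# Toy demo: learn the mapping t = 2*x1 - 1*x2 from random samples.
rng = np.random.default_rng(1)
w = np.zeros(2)
for _ in range(200):
    x = rng.uniform(-1.0, 1.0, size=2)
    t = 2.0 * x[0] - 1.0 * x[1]        # teacher signal
    w = delta_update(w, x, t)
print(w)  # approaches [2, -1] as the error shrinks toward zero
```

The only change from the Hebbian sketch is that the output y has been replaced by the error (t − y): each weight now moves in proportion to how wrong the neuron was, which is "learning from mistakes" in mathematical form.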
Finally, we’ll dive into the world of gradient-based optimization, where the process of learning is like a journey through an "error landscape." Along the way, you'll learn to navigate the key challenges of that journey: choosing a sensible learning rate, escaping poor local minima, and moving past saddle points, all essential to understanding how (and whether) a neural network converges to a good solution.
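Before the formal treatment, the sketch below shows gradient descent on the simplest possible error landscape, a one-dimensional quadratic bowl, under three different learning rates. The toy loss L(w) = (w − 3)², the step count, and the specific rates are all illustrative choices.

```python
def grad(w):
    """Gradient of the toy loss L(w) = (w - 3)^2, minimized at w = 3."""
    return 2.0 * (w - 3.0)

def gradient_descent(w, learning_rate, steps=25):
    """Repeatedly step downhill along the negative gradient."""
    for _ in range(steps):
        w = w - learning_rate * grad(w)
    return w

print(gradient_descent(0.0, learning_rate=0.1))   # ~3.0: converges nicely
print(gradient_descent(0.0, learning_rate=0.01))  # ~1.2: too slow to arrive
print(gradient_descent(0.0, learning_rate=1.1))   # huge: overshoots, diverges
```

A single bowl has no local minima or saddle points; those complications appear only in the high-dimensional, non-convex landscapes of real networks. The learning-rate trade-off shown here, however, carries over directly.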
Explain Hebbian learning as a biologically inspired rule of associative adaptation.
Describe the Delta Rule as a mechanism for error-based weight correction.
Define the role of the loss function in measuring and minimizing network error (a short sketch follows these objectives).
Illustrate how gradient descent operates as an optimization process in weight space.
Analyze the effects of learning rate and error surface topology on convergence behavior.
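As promised above, here is a minimal sketch of one common loss function, the mean squared error; the array values are made up purely for illustration.

```python
import numpy as np

def mse_loss(predictions, targets):
    """Mean squared error: the average squared gap between the network's
    predictions and the values the data says they should be."""
    return np.mean((predictions - targets) ** 2)

y_hat = np.array([0.9, 0.2, 0.7])   # made-up network outputs
y     = np.array([1.0, 0.0, 1.0])   # made-up targets
print(mse_loss(y_hat, y))           # ~0.047: the number learning drives down
```

Gradient descent, as in the sketch earlier, is nothing more than a procedure for driving this single number down.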
How does Hebbian learning illustrate the idea of association without explicit supervision?
In what ways does the Delta Rule transform the idea of “learning from mistakes” into mathematics?
How can visualizing the loss landscape help us understand why neural networks sometimes learn too slowly—or not at all?
Learning Rules and Adaptation (topic handout)
When Neurons Learn to Learn
Hebbian and Delta Learning Rules
The Foundation: Wiring Together
Learning from Mistakes: Introducing Error
Gradient Descent and the Learning Process
Climbing Down: The Engine of Optimization
Mapping the Path: Navigating the Error Surface
The Mathematics of Memory