CMSC 191: Introduction to Neural Computing
Unsupervised Learning and Self-Organization
In this topic, we’ll explore the exciting world of unsupervised learning—where neural networks learn to organize and make sense of data without the need for explicit labels or error signals. We’ll start with competitive learning, where neurons "compete" to represent input patterns, using a winner-take-all approach. This competition leads to specialization, helping the network distribute its resources efficiently and capture important features of the data.
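The winner-take-all idea can be sketched in a few lines. In this minimal example (the two-cluster data, unit count, and learning rate are all hypothetical choices, not part of the handout), each unit holds a weight vector, the unit closest to the input wins, and only the winner moves toward the input:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: two clusters in 2-D
data = np.vstack([
    rng.normal([0.2, 0.8], 0.05, size=(50, 2)),
    rng.normal([0.8, 0.2], 0.05, size=(50, 2)),
])

# Two competing units; initializing weights from data samples
# is a common trick to avoid "dead" units that never win
weights = data[rng.choice(len(data), size=2, replace=False)].copy()
eta = 0.1  # learning rate

for epoch in range(20):
    for x in rng.permutation(data):
        # Competition: the unit whose weights are closest to x wins
        winner = np.argmin(np.linalg.norm(weights - x, axis=1))
        # Winner-take-all update: only the winner adapts,
        # moving its weights toward the input
        weights[winner] += eta * (x - weights[winner])

# After training, each unit's weights sit near one cluster centre:
# the units have specialized, dividing the input space between them
print(weights)
```

Because losers never update, each unit ends up representing one region of input space, which is the specialization the paragraph above describes.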
Next, we’ll look at Self-Organizing Maps (SOMs), which are powerful tools that preserve the topological relationships between data points, turning complex, high-dimensional data into intuitive, easy-to-understand spatial maps. These maps help us see structure in data that might otherwise seem chaotic.
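A stripped-down SOM makes the neighborhood idea concrete. This sketch uses a 1-D chain of units and scalar inputs purely for brevity (real SOMs usually use a 2-D grid; the decay schedules and parameter values here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# A 1-D chain of 10 units mapping scalar inputs drawn from [0, 1)
n_units = 10
grid = np.arange(n_units)        # fixed positions of units on the map
weights = rng.random(n_units)    # each unit holds one scalar weight

def train(weights, n_steps=2000, eta0=0.5, sigma0=3.0):
    for t in range(n_steps):
        x = rng.random()                           # sample an input
        winner = np.argmin(np.abs(weights - x))    # best-matching unit
        # Learning rate and neighborhood width shrink over time
        frac = t / n_steps
        eta = eta0 * (1 - frac)
        sigma = sigma0 * (1 - frac) + 0.5
        # Gaussian neighborhood: units near the winner ON THE GRID
        # are also pulled toward x; this is what preserves topology
        h = np.exp(-((grid - winner) ** 2) / (2 * sigma ** 2))
        weights += eta * h * (x - weights)
    return weights

weights = train(weights)
print(np.round(weights, 2))
```

After training, neighboring units on the chain hold similar weights, so the map unfolds into a roughly monotone ordering of the input range: nearby inputs activate nearby units, which is exactly the topology preservation described above.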
Finally, we’ll revisit Hebbian learning and look at how stabilized rules like Oja’s Rule allow networks to extract principal components—a link between neural adaptation and classical statistical methods.
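Oja's Rule adds a decay term to plain Hebbian learning so the weights stay bounded instead of growing without limit. A minimal single-neuron sketch (the 2-D Gaussian data, learning rate, and sample count are hypothetical choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical correlated 2-D data; its first principal
# component lies along the direction [1, 1]
cov = np.array([[1.0, 0.9], [0.9, 1.0]])
data = rng.multivariate_normal([0.0, 0.0], cov, size=5000)

w = rng.random(2)   # weight vector of a single linear neuron
eta = 0.01          # learning rate

for x in data:
    y = w @ x                    # neuron output
    # Oja's Rule: the Hebbian term y*x is stabilized by the
    # decay term y^2 * w, which keeps ||w|| near 1
    w += eta * y * (x - y * w)

# w converges to the leading eigenvector of the data covariance,
# i.e. the first principal component, with unit norm
print(w, np.linalg.norm(w))
```

The stationary points of this update are exactly the unit-norm eigenvectors of the input covariance, which is the link between neural adaptation and PCA that the paragraph above mentions.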
Learning objectives:
Explain the principle of competitive learning and the winner-take-all mechanism.
Describe how Self-Organizing Maps (SOMs) preserve data topology through neighborhood learning.
Analyze how Hebbian learning can be stabilized for unsupervised feature extraction.
Explain how Oja’s Rule connects neural learning with Principal Component Analysis (PCA).
Discuss how self-organization and feature discovery emerge in unsupervised neural systems.
Guide questions:
Why does competition among neurons lead to more efficient and specialized representations?
How do Self-Organizing Maps transform abstract data relationships into visual, interpretable structures?
What does the connection between Oja’s Rule and PCA reveal about the universality of learning across neural and statistical systems?
Unsupervised Learning and Self-Organization (topic handout)
Order Without a Teacher
Competitive Learning and Kohonen Maps
The Battle for Representation: Winner-Take-All
Mapping the World: Self-Organizing Feature Maps
Hebbian Learning Extensions and Principal Component Networks
Hebb's Rule Revisited: Finding Important Directions
The Neural Path to Dimensionality Reduction
When Learning Finds Its Own Way