CMSC 191: Introduction to Neural Computing
Perceptrons and Linear Separability
In this topic, we’ll explore the Perceptron, one of the first practical models of a machine that learns from examples, and discover the key idea of linear separability. You’ll see how Frank Rosenblatt’s groundbreaking work in 1958 introduced the idea of learning by correction, laying the foundation for the supervised learning algorithms that power much of modern machine learning today.
As we examine the Perceptron, we’ll also confront its limitations, most famously through the XOR problem. XOR exposed a fundamental restriction of single-layer networks: no single straight line can separate its outputs, so no choice of weights can represent it. By looking at both the successes and the shortcomings of the Perceptron, you’ll understand how it shifted scientific thinking toward more powerful, multilayer architectures and the search for non-linear learning methods.
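To make these two ideas concrete, here is a minimal sketch of Rosenblatt-style error-correction learning in Python. It is written for this overview rather than taken from any particular library; the function name train_perceptron and the epoch budget are illustrative choices. The same routine converges on the linearly separable AND function but never settles on XOR.

def train_perceptron(samples, epochs=20, lr=1.0):
    """Train a two-input threshold unit with the classic error-correction rule."""
    w = [0.0, 0.0]   # one weight per input
    b = 0.0          # bias (threshold)
    for _ in range(epochs):
        errors = 0
        for (x1, x2), target in samples:
            # Step activation: fire (1) if the weighted sum clears the threshold.
            y = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            # Learning by correction: adjust weights only when the output is wrong.
            error = target - y
            if error != 0:
                w[0] += lr * error * x1
                w[1] += lr * error * x2
                b += lr * error
                errors += 1
        if errors == 0:          # converged: every sample classified correctly
            return w, b, True
    return w, b, False           # never converged within the epoch budget

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

w_and, b_and, ok_and = train_perceptron(AND)
w_xor, b_xor, ok_xor = train_perceptron(XOR)
print("AND converged:", ok_and)   # True: AND is linearly separable
print("XOR converged:", ok_xor)   # False: no single line separates XOR

Because the weights change only when the unit is wrong, each update nudges the separating line toward the misclassified point; for XOR there is no line to converge to, so the corrections cycle forever.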
By the end of this topic, you should be able to:
Describe the structure and function of the single-layer Perceptron.
Explain the Perceptron learning rule as a form of error-driven weight adjustment (a worked form of the rule appears after this list).
Define linear separability and interpret its geometric meaning in classification tasks.
Analyze why the Perceptron fails to solve non-linearly separable problems such as XOR.
Discuss how the limitations of the Perceptron inspired the development of multilayer networks and backpropagation.
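For reference, one standard way to write the learning rule and the separability condition named in the objectives above (a conventional formulation, not quoted from the course materials) is:

\[
  y = \begin{cases} 1 & \text{if } \mathbf{w}\cdot\mathbf{x} + b > 0,\\ 0 & \text{otherwise,} \end{cases}
  \qquad
  \mathbf{w} \leftarrow \mathbf{w} + \eta\,(t - y)\,\mathbf{x},
  \qquad
  b \leftarrow b + \eta\,(t - y).
\]

Here \(\mathbf{x}\) is the input vector, \(t\) the target label, \(y\) the unit’s output, and \(\eta\) the learning rate. A data set is linearly separable precisely when some weights \(\mathbf{w}\) and bias \(b\) classify every sample correctly, that is, when a single hyperplane \(\mathbf{w}\cdot\mathbf{x} + b = 0\) puts the two classes on opposite sides. XOR admits no such line in the plane, which is what the Perceptron’s failure on it comes down to.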
Questions to guide your thinking:
What makes the Perceptron both powerful and limited as a learning system?
How does the concept of linear separability shape what a neural network can learn?
Why did the failure of the Perceptron to solve XOR lead to one of the most important breakthroughs in AI history?
Perceptrons and Linear Separability (topic handout)
Lines That Learn
The Single-Layer Perceptron
The Original Classifier: Simplicity and Power
Learning by Correction: The Perceptron Rule
The Limitations of Linear Models
The XOR Challenge: An Unsolvable Line
Pushing Deeper: The Call for Multilayer Networks
Beyond Straight Lines
The semester at a glance:
Perceptrons and Linear Separability