CMSC 191: Introduction to Neural Computing
Representational Power and Universal Approximation
In this topic, we’ll take a deep dive into both the potential and the challenges of neural networks. We start with the Universal Approximation Theorem (UAT), a foundational result showing that a feed-forward network with a single hidden layer, enough neurons, and a non-linear activation function can approximate any continuous function on a compact domain to arbitrary accuracy. It’s a fundamental idea that underscores the vast capability of neural networks to model complex relationships.
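As a concrete, modest illustration of the theorem, the sketch below fits the output layer of a single-hidden-layer ReLU network to sin(x) on [-3, 3]. The target function, layer width, and the "random features" shortcut (freezing the hidden weights at random values and solving only the output weights by least squares) are all illustrative choices, not part of the theorem itself:

```python
import numpy as np

# Sketch of universal approximation: a single, sufficiently wide hidden
# layer of ReLU units approximating a continuous target on a compact
# interval. All sizes and the target f(x) = sin(x) are illustrative.
rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

n_hidden = 200                          # width of the single hidden layer
x = np.linspace(-3.0, 3.0, 400)         # inputs on a compact interval
y = np.sin(x)                           # continuous target function

# Random hidden-layer weights and biases; only the output layer is fitted,
# here in closed form by least squares (a "random features" shortcut).
w = rng.normal(size=n_hidden)
b = rng.uniform(-3.0, 3.0, size=n_hidden)
hidden = relu(np.outer(x, w) + b)                      # shape (400, 200)
features = np.column_stack([hidden, np.ones_like(x)])  # add output bias

coef, *_ = np.linalg.lstsq(features, y, rcond=None)
approx = features @ coef

max_err = np.max(np.abs(approx - y))
print(f"max |f(x) - network(x)| = {max_err:.4f}")
```

With a few hundred hidden units the piecewise-linear approximation already tracks the sine curve closely; the theorem says the error can be driven arbitrarily low by widening the layer.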
However, as we’ll see, theory and practice are often different. The topic then explores how, in real-world scenarios, depth (the number of layers) often matters more than width (the number of neurons in each layer). Understanding this distinction is crucial for building networks that are both efficient and effective.
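One classic way to see the depth-versus-width gap is a composed "tent map" construction (the specific function and sizes below are illustrative, not from the handout): each extra layer doubles the number of linear pieces in the output, so depth buys oscillations that a single hidden layer would need exponentially many units to reproduce:

```python
import numpy as np

# Why depth can beat width: composing a tiny 2-ReLU "tent" layer with
# itself k times yields a sawtooth with 2**k linear pieces using only
# about 2k hidden units, while a single hidden layer needs roughly one
# unit per piece, so its width must grow exponentially to keep up.
# This is a hand-built illustration, not a trained network.

def relu(z):
    return np.maximum(z, 0.0)

def tent(x):
    # One 2-unit ReLU layer: t(x) = 2*relu(x) - 4*relu(x - 0.5) on [0, 1].
    return 2.0 * relu(x) - 4.0 * relu(x - 0.5)

depth = 5
x = np.linspace(0.0, 1.0, 100_001)
y = x
for _ in range(depth):                    # 2 * depth ReLU units in total
    y = tent(y)

# Count linear pieces by counting sign changes of the slope, skipping the
# grid cells that straddle a kink (their slopes are blends of the two
# neighboring slopes, which are exactly +32 and -32 here).
slopes = np.diff(y) / np.diff(x)
core = slopes[np.abs(slopes) > 16.0]
changes = np.count_nonzero(np.sign(core[:-1]) != np.sign(core[1:]))
pieces = changes + 1
print(f"depth {depth}, {2 * depth} ReLU units -> {pieces} linear pieces")
```

Five composed layers (ten units) produce 32 linear pieces; a shallow ReLU network would need on the order of 32 units to match it, and the gap doubles with every added layer.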
The second half of the topic introduces the Bias-Variance Tradeoff, a critical concept that helps us balance fitting the training data closely against the risk of overfitting, where a model captures noise instead of true patterns. Together, these ideas remind us that the real power of neural networks comes not just from their mathematical capabilities, but from how they’re trained, constrained, and guided to generalize effectively to new, unseen data.
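The tradeoff is easiest to see numerically in a much simpler model family, polynomial regression; the target function, noise level, and degrees below are arbitrary illustrative choices. A low degree underfits (high bias), a very high degree chases noise (high variance), and a middle degree generalizes best:

```python
import numpy as np

# Bias-variance sketch: average test error over many resampled training
# sets for polynomial fits of increasing degree. Setup is illustrative.
rng = np.random.default_rng(0)

def true_fn(x):
    return np.sin(np.pi * x)

degrees = [1, 3, 12]
n_train, n_test, n_trials, noise = 20, 200, 50, 0.3
x_test = np.linspace(-1.0, 1.0, n_test)
y_test_clean = true_fn(x_test)          # compare to the noise-free target

test_mse = {d: 0.0 for d in degrees}
for _ in range(n_trials):
    x_tr = rng.uniform(-1.0, 1.0, n_train)
    y_tr = true_fn(x_tr) + rng.normal(0.0, noise, n_train)
    for d in degrees:
        coefs = np.polyfit(x_tr, y_tr, d)
        pred = np.polyval(coefs, x_test)
        test_mse[d] += np.mean((pred - y_test_clean) ** 2) / n_trials

for d in degrees:
    print(f"degree {d:2d}: avg test MSE = {test_mse[d]:.3f}")
```

Degree 1 is too rigid to capture the curve (bias dominates), degree 12 fits each noisy sample too faithfully (variance dominates), and degree 3 sits near the sweet spot; the same U-shaped test-error curve governs neural network capacity.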
Learning Objectives
Explain the Universal Approximation Theorem and its significance to neural network theory.
Distinguish between the theoretical sufficiency of wide networks and the practical efficiency of deep networks.
Describe the Bias-Variance Tradeoff and how it governs model generalization.
Apply practical strategies such as regularization and early stopping to manage overfitting.
Evaluate how theoretical guarantees translate—or fail to translate—into effective learning systems.
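A minimal sketch of two of the strategies named in these objectives, L2 regularization (weight decay) and early stopping on a held-out validation set, using plain gradient descent on a toy linear model (the data, model, and all hyperparameters are illustrative):

```python
import numpy as np

# Two overfitting controls in one training loop: an L2 penalty on the
# weights, and early stopping when validation error stops improving.
rng = np.random.default_rng(0)

# Noisy linear data split into train and validation sets.
n = 60
X = np.column_stack([np.ones(n), rng.uniform(-1.0, 1.0, n)])
y = 2.0 * X[:, 1] + rng.normal(0.0, 0.3, n)
X_tr, y_tr, X_val, y_val = X[:40], y[:40], X[40:], y[40:]

w = np.zeros(2)
lr, lam, patience = 0.1, 1e-3, 10
best_val, best_w, since_best = np.inf, w.copy(), 0

for step in range(1000):
    # Gradient of mean squared error plus L2 penalty lam * ||w||^2.
    grad = 2 * X_tr.T @ (X_tr @ w - y_tr) / len(y_tr) + 2 * lam * w
    w -= lr * grad

    val_mse = np.mean((X_val @ w - y_val) ** 2)
    if val_mse < best_val:
        best_val, best_w, since_best = val_mse, w.copy(), 0
    else:
        since_best += 1
        if since_best >= patience:  # early stop: no recent improvement
            break

print(f"stopped at step {step}, best validation MSE = {best_val:.4f}")
```

The penalty keeps the weights small, and training halts once the validation error plateaus rather than grinding the training error toward zero; both choices trade a little bias for a large reduction in variance.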
Guiding Questions
What does the Universal Approximation Theorem truly guarantee—and what does it leave unanswered?
Why are deep networks often more practical than single, wide-layer networks?
How can we recognize and manage the tension between bias and variance during model training?
Representational Power and Universal Approximation (topic handout)
The Promise and the Price of Power
Universal Approximation Theorem
The Mathematical Guarantee
From Theory to Practice: Deep vs. Wide
The Bias-Variance Tradeoff
The Core Balancing Act: Generalization
Tools for Model Control
Balancing Perfection and Practicality
The semester at a glance:
Representational Power and Universal Approximation