CMSC 191: Introduction to Neural Computing
Neural Network Architectures
In this topic, we’ll explore the core architectures and learning approaches that shape how neural networks process and understand information. We’ll start by comparing Feedforward Neural Networks (FFNNs), which pass inputs in one direction from input to output, with Recurrent Neural Networks (RNNs), whose feedback loops let them "remember" previous inputs, giving them the ability to capture context and sequence over time.
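The contrast can be made concrete with a minimal sketch in plain Python. The layer sizes, random weights, and helper names (`feedforward`, `recurrent`) below are hypothetical choices for illustration, not part of any particular library: the feedforward map sees each input in isolation, while the recurrent map threads a hidden state through the sequence.

```python
import math
import random

random.seed(0)
n_in, n_hidden = 3, 4  # hypothetical sizes for illustration

def randmat(rows, cols):
    """A rows x cols matrix of random weights."""
    return [[random.gauss(0, 1) for _ in range(cols)] for _ in range(rows)]

def matvec(W, x):
    """Matrix-vector product."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

# Feedforward: the signal moves one way, input -> hidden.
# Each input is processed independently of anything seen before.
W_ff = randmat(n_hidden, n_in)

def feedforward(x):
    return [math.tanh(v) for v in matvec(W_ff, x)]

# Recurrent: the hidden state h is fed back in at every step,
# so the result depends on the whole sequence, including its order.
W_x = randmat(n_hidden, n_in)
W_h = randmat(n_hidden, n_hidden)

def recurrent(xs):
    h = [0.0] * n_hidden                 # "memory" starts empty
    for x in xs:                         # walk the sequence in order
        pre = [a + b for a, b in zip(matvec(W_x, x), matvec(W_h, h))]
        h = [math.tanh(v) for v in pre]  # feedback loop: h carries context
    return h
```

Note that `feedforward` takes a single input vector while `recurrent` takes a list of them; the loop over `h` is exactly the feedback that gives RNNs their sense of time.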
Next, we’ll dive into the difference between supervised and unsupervised learning. You’ll see how labeled data (where we tell the network the "right answer") and unlabeled data (where the network must find structure on its own) both play crucial roles in helping networks learn. The two paradigms complement each other, giving machines different ways to learn and supporting smarter, more adaptive models.
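A tiny sketch in plain Python can make the contrast tangible. The data, initial guesses, and variable names below are all hypothetical, chosen only for illustration: the supervised half fits a line because every input comes with its "right answer," while the unsupervised half is given bare numbers and must group them itself (a miniature k-means, a standard clustering technique).

```python
# Supervised: labeled pairs (x, y). The labels ARE the supervision,
# so we can fit a line y = slope * x + intercept by least squares.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [2.0 * x + 1.0 for x in xs]          # labels: the "right answer"

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx

# Unsupervised: inputs only, no labels. The model must discover
# structure on its own -- here, two clusters via a tiny k-means loop.
points = [0.1, 0.2, 0.3, 9.8, 9.9, 10.0]
centers = [0.0, 10.0]                     # hypothetical starting guesses
for _ in range(5):
    groups = [[], []]
    for p in points:                      # assign each point to its nearest center
        k = 0 if abs(p - centers[0]) <= abs(p - centers[1]) else 1
        groups[k].append(p)
    centers = [sum(g) / len(g) for g in groups]  # move centers to group means
```

The supervised fit recovers the line that generated the labels; the unsupervised loop, never told what the groups are, still settles its centers near the two natural clumps in the data.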
By the end of the topic, you’ll understand that the structure of a neural network doesn’t just determine what it can learn—it shapes how it learns, whether through static snapshots of data or dynamic flows of time, and whether through direct teaching or self-guided exploration.
Learning Objectives:
Differentiate between feedforward and recurrent neural architectures.
Explain how feedback loops in RNNs enable modeling of temporal and sequential dependencies.
Describe the difference between supervised and unsupervised learning structures.
Discuss how each learning paradigm shapes the type of problems a network can solve.
Analyze how hybrid and self-supervised methods integrate both paradigms for more general intelligence.
Guide Questions:
Why does the introduction of feedback make recurrent networks capable of handling temporal sequences?
How does unsupervised learning mirror how humans naturally organize knowledge?
In what ways do hybrid or self-supervised systems bring us closer to human-like learning?
Neural Network Architectures (topic handout)
Architectures of Thought
Feedforward vs. Recurrent Architectures
The Arrow of Time: One-Way vs. Looping Signals
Modeling Sequence and Context
Supervised vs. Unsupervised Learning Structures
Learning with Labels vs. Discovery in the Dark
Towards Integrated Intelligence
From Signals to Understanding