CMSC 291: Special Topics
Advanced Neural Computing
This course continues the academic lineage of the undergraduate Special Topics course CMSC 191: Introduction to Neural Computing, but it is designed for a graduate audience whose learning now extends beyond understanding how neural systems compute, toward exploring why they work, how they can be improved, and where they lead scientific inquiry next.
This course explores the advanced principles, models, and methods in neural computing. It focuses on the theory, design, and application of complex neural architectures and learning systems, including second-order optimization methods, ensemble and recurrent models, probabilistic and generative frameworks, and biologically inspired computations. Emphasis is placed on critical reading of primary research literature, experimental replication, and innovation through design and analysis.
In this course, we will move beyond learning how neural networks compute toward understanding why they work and, more importantly, how we can make them better. If our previous Special Topics journey in Introduction to Neural Computing taught us to build neurons and connect them into networks, this course will teach us to question their behavior, their limits, and their beauty.
We will read, implement, and discuss the ideas that shaped the field of neural computation. Each topic in this course represents a milestone in that history: algorithms that learned faster, networks that remembered longer, and architectures that imagined new data. Through these papers, we will witness how scientific curiosity and mathematical rigor combine to advance what we know about learning systems.
As graduate students, our task is not to memorize but to interrogate — not merely to apply algorithms, but to understand and extend them. We will study how second-order methods accelerate convergence, how recurrent networks model time, how autoencoders reconstruct meaning, and how generative models create. Each reading will be a conversation with the minds that shaped artificial intelligence — Fahlman, Hinton, Schmidhuber, Bengio, Goodfellow, Maass, and many others.
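To ground the first of these questions, below is a minimal sketch (our own illustration, not taken from any assigned paper) of why second-order information can accelerate convergence: on a toy quadratic loss with an ill-conditioned curvature matrix, a single Newton-style step reaches the minimum that plain gradient descent is still approaching after many iterations. The matrix A, the learning rate, and the iteration count are illustrative assumptions.

import numpy as np

# Toy quadratic loss L(w) = 0.5 * w' A w with an ill-conditioned curvature matrix A.
# A minimal, illustrative comparison of a first-order and a Newton-style update.
A = np.array([[10.0, 0.0],
              [0.0, 0.5]])                  # Hessian of the toy loss
grad = lambda w: A @ w                      # gradient of the quadratic loss

w_gd = np.array([1.0, 1.0])                 # starting point for gradient descent
lr = 0.09                                   # step size kept small enough to stay stable
for _ in range(20):
    w_gd = w_gd - lr * grad(w_gd)           # plain first-order update

w_newton = np.array([1.0, 1.0])
w_newton = w_newton - np.linalg.solve(A, grad(w_newton))  # one Newton-style step

print("gradient descent after 20 steps:", w_gd)      # still far from the minimum at 0
print("Newton after a single step:", w_newton)       # lands on the minimum exactly

The same intuition, rescaling the gradient by local curvature, is what the second-order and Quickprop-style methods in our early readings approximate at scale.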
Our learning philosophy follows the principles of constructive alignment (Biggs & Tang, 2011) and active engagement (Freeman et al., 2014): we learn deeply when we question, compare, replicate, and communicate our understanding. Every reading, seminar, and experiment in this course will be an act of discovery — our own experiment in thinking.
By the end of this course, we will not only know the state of the art in neural computing; we will be ready to help define it.
At the end of this course, we should be able to:
Analyze advanced learning algorithms and neural architectures, understanding their mathematical foundations and limitations.
Critically evaluate research papers in neural computation, identifying assumptions, innovations, and opportunities for extension.
Design and implement advanced neural architectures for temporal, generative, or complex adaptive tasks.
Compare and synthesize findings from diverse learning paradigms, including symbolic, probabilistic, and biologically inspired approaches.
Communicate and defend our insights through scholarly writing and oral presentation that reflect the standards of research practice.
We will learn through a blend of guided readings, seminars, and implementation projects. Each week, we will explore 2–3 research papers, discuss their methods and results, and replicate key experiments when possible.
Below is our tentative schedule of weekly topics:
Introduction to Neural Computing (Recap)
Revisiting neuron models, backpropagation, learning rules, and network generalization — leveling the field for all students.
Advanced Learning Techniques I
Quickprop, second-order methods, and optimization history
Advanced Learning Techniques II
Cascade-correlation and RBF networks
Advanced Learning Techniques III
Complexity penalties, generalization measures, temporal difference learning
Evolutionary and Bio-inspired Learning
Learning via Genetic Algorithms and Artificial Chemistry
Recurrent Neural Networks I
From Elman networks to modern RNNs; modeling temporal dependencies (a minimal recurrent update is sketched just after this schedule)
Recurrent Neural Networks II
LSTM and GRU architectures; solving long-term dependency problems
Autoencoders and Associative Memory I
Hopfield networks and Boltzmann machines as foundations of memory-based learning
Autoencoders and Associative Memory II
Restricted Boltzmann Machines and Deep Belief Networks
Other Advanced Networks I
Deep Convolutional and Deconvolutional Networks
Other Advanced Networks II
Generative Adversarial Networks and inverse graphics
Reservoir and Liquid-State Models
Liquid State Machines, Echo State Networks
Neural Network-like Systems
Kohonen networks, Support Vector Machines, Neural Turing Machines
Reflection and Future Directions
Ethics, interpretability, and the next frontiers of neural computation
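As a concrete companion to the Recurrent Neural Networks weeks above, here is a minimal sketch of an Elman-style recurrent update, h_t = tanh(W_xh x_t + W_hh h_{t-1} + b). The layer sizes, random weights, and short input sequence are illustrative assumptions rather than details from any particular reading.

import numpy as np

# Elman-style recurrent cell: h_t = tanh(W_xh x_t + W_hh h_{t-1} + b).
# All sizes, weights, and inputs below are illustrative assumptions.
rng = np.random.default_rng(0)
n_in, n_hidden = 3, 5
W_xh = rng.normal(scale=0.1, size=(n_hidden, n_in))      # input-to-hidden weights
W_hh = rng.normal(scale=0.1, size=(n_hidden, n_hidden))  # hidden-to-hidden (recurrent) weights
b = np.zeros(n_hidden)                                   # hidden bias

def step(x_t, h_prev):
    # One recurrent update: the new state mixes the current input with the previous state.
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b)

h = np.zeros(n_hidden)                      # initial hidden state
sequence = rng.normal(size=(4, n_in))       # a short input sequence of length 4
for x_t in sequence:
    h = step(x_t, h)                        # the hidden state accumulates temporal context
print("final hidden state:", h)

Keeping a hidden state that feeds back into the next update is what lets these models carry temporal context, and it is also what makes long-range dependencies difficult, motivating the LSTM and GRU variants we will study in the second recurrent-networks week.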
We believe that knowledge grows best when shared. Collaboration is therefore encouraged, but understanding must always be personal. We may discuss papers, debate algorithms, or debug code together — but the analyses we write and the ideas we claim must be our own. If we collaborate with others, we will acknowledge them openly. If we draw from external sources, we will cite them respectfully. The real measure of integrity is not whether we worked alone, but whether the work truly represents our thinking.