CMSC 191: Special Topics
Introduction to Neural Computing
In this course, we will journey into the world of neural computing—the art and science of teaching machines to learn. We will ask how something as simple as a mathematical neuron can give rise to recognition, memory, and adaptation. As we explore these ideas, we will discover that neural networks are not only models of computation but also metaphors for learning itself—whether in machines or in us.
We will build our understanding from the ground up. We will start with the neuron as a unit of computation, then connect many neurons to form systems that can see, decide, and remember. By constructing these systems ourselves, using only basic mathematics and code, we will uncover the logic behind how networks learn from experience. This way, we will see that the intelligence we build in machines reflects the same iterative trial and error that drives human learning.
Our approach will be active and hands-on. Instead of merely listening, we will work, test, and reason together. Research shows that learning becomes deeper when we take part in the construction of knowledge rather than simply receive it (Freeman et al., 2014; Prince, 2004). When we build a neural network by hand, we are not just coding—we are thinking computationally, exploring how a system transforms numbers into knowledge.
Throughout the semester, we will treat neural computing as both theory and craft. Each network we design will represent an idea about how computation might arise from connection and cooperation. Each experiment we run will be an act of curiosity, a way of asking, “Can this system learn?” Through our collective work, we will learn not only to design and analyze neural architectures but also to see patterns in data as a scientist and as a creator.
By the end of this course, we will have learned more than how neural networks work—we will have experienced how learning itself unfolds. And when we reach that point, we will realize that neural computing is not just a branch of computer science; it is also a mirror held up to our own way of understanding the world.
Introduction to the computational principles behind artificial neural networks (ANNs) as models of learning, pattern recognition, and decision-making. Emphasis is placed on understanding neural computation from first principles — neuron modeling, network architectures, learning algorithms, and ensemble methods — leading to the design and innovation of neural systems capable of solving real-world problems.
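To give an early, concrete sense of what modeling a neuron "from first principles" looks like, here is a minimal sketch in Python of a single artificial neuron: a weighted sum of inputs plus a bias, passed through an activation function. The inputs, weights, and the choice of a sigmoid activation are illustrative assumptions for this example, not a prescribed implementation.

    import math

    def neuron(inputs, weights, bias):
        # Net input: weighted sum of the inputs plus a bias term
        z = sum(w * x for w, x in zip(weights, inputs)) + bias
        # Sigmoid activation squashes the net input into (0, 1)
        return 1.0 / (1.0 + math.exp(-z))

    # Illustrative values only: two inputs with hand-picked weights and bias
    print(neuron([0.5, 0.8], [0.4, -0.2], 0.1))

Everything we build later in the course, from multi-layer networks to ensembles, elaborates on this small unit of computation.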
At the end of this course, we should be able to:
Explain the mathematical and computational foundations of artificial neurons and neural architectures.
Implement basic neural models from first principles using structured code (see the sketch after this list).
Analyze how learning rules modify network behavior and convergence.
Design neural network architectures for classification, regression, or associative learning tasks.
Evaluate and innovate on ensemble or deep network architectures for improved performance.
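To make the second and third objectives above concrete, here is a minimal sketch of a classic learning rule, the perceptron update, training a single neuron to compute logical AND. This is an illustration under assumed choices (step activation, learning rate 0.1, 20 epochs), not the required implementation.

    # Perceptron learning rule on logical AND (illustrative sketch)
    data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
    w, b, lr = [0.0, 0.0], 0.0, 0.1

    for epoch in range(20):
        for x, target in data:
            # Step activation: the neuron fires if the weighted sum exceeds zero
            y = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            error = target - y
            # Update rule: nudge each weight toward the correct output
            w[0] += lr * error * x[0]
            w[1] += lr * error * x[1]
            b += lr * error

    print(w, b)  # weights and bias that separate AND's two classes

Because AND is linearly separable, this loop settles on correct weights after a few epochs; observing when and why it converges is exactly the kind of analysis the objectives call for.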
Preliminary Handouts:
Topic Handouts:
Additional Handouts:
Our learning in this course will culminate in a single, meaningful output — a technical paper that captures our journey of designing and implementing a neural system. This paper will not just be an assignment; it will be our story of inquiry: how we asked questions, built models, tested ideas, and learned from the results. Through this process, we will practice the skills that define true computer scientists — thinking critically, communicating clearly, and creating with purpose.
The paper will follow the same structure that scientists and engineers use when they share new discoveries. Each section will help us articulate our thoughts in an organized, logical, and reflective way. Together, these parts will form a narrative that connects our curiosity to evidence, and our results to insights.
Here’s how our paper will take shape:
ABSTRACT: In a single paragraph, we will summarize what we did, what we found, and what it means. The abstract is our entire journey compressed into a few sentences — the essence of our study distilled into clarity.
INTRODUCTION: This section answers why we pursued our topic. We will write at least five short but focused paragraphs, each responding to a guiding question:
Why is this topic interesting or important?
What have others done before us (a review of related works)?
What gaps or possibilities do we see (our motivation)?
What did we do to address the problem (a brief summary of our method)?
What did we discover (a preview of our results and insights)?
Through this, we tell the reader not just what we did, but why it matters — both scientifically and personally.
OBJECTIVES: We will clearly state one general objective and at least two specific objectives. These will serve as our compass — reminding us of what we want to achieve and helping us keep our methods and results aligned with our purpose.
METHODOLOGY: Here, we will describe how we achieved our objectives. Each specific objective will have its corresponding methodology. We will explain every design choice, from how we prepared our data to how we tuned our neural network. More than listing steps, this section shows our reasoning — how the ideas and principles we learned guided each decision we made.
RESULTS AND DISCUSSION: This is the heart of our paper — the part where we show what happened and what we learned. For every methodology, there should be a corresponding result. The discussion goes deeper: it connects these results to our objectives and helps us make sense of what they mean. This is where we show how our neural network “thought,” and how we learned to think like it — systematically, iteratively, and reflectively.
CONCLUSION AND RECOMMENDATIONS: In this section, we will draw our final insights — the truths we have uncovered from our work. We will then share our recommendations: what future researchers, engineers, or even our future selves could do next. This reminds us that every study is part of a larger conversation, and our paper is our contribution to it.
REFERENCES: We will list all the scientific books, journal papers, and proceedings that we used. This shows that our work stands on the foundation of the world’s collective knowledge. We will avoid non-scholarly sources such as tutorials or blog posts unless they serve as background context.
APPENDIX: Here, we will include our source code or a link to our GitHub repository. This section makes our work transparent and reproducible — two hallmarks of good science.
We will also list here the names of our collaborator-classmates. Please read more on this in the Collaboration Policy section below.
Milestone 1: Proposal (20%)
Design concept + problem statement
Milestone 2: Midterm Progress Report (25%)
Implementation progress + preliminary results
Milestone 3: Final Report and Presentation (45%)
Full paper + oral defense
Participation and Consultations (10%)
F2F and asynchronous engagement
Why Only One Requirement?
Because this one requirement already embodies all that we need to become independent thinkers and creators. Designing a neural architecture, implementing it, and writing about it in a structured, scientific way will challenge both our technical and communicative abilities. It will train us to think deeply, argue logically, and write with clarity — the same skills we will use in research, industry, and leadership roles.
On Punctuality and Professionalism
As future professionals, we must value time as much as correctness. A brilliant paper delivered late loses part of its brilliance, because science values not only insight but also discipline. Timeliness is an act of respect — for our work, for our peers, and for ourselves.
Why Writing Matters Here
At its heart, this course is about communication through computation. We are not just coding neural networks; we are learning to explain how and why they work. Writing is the bridge between our ideas and the world. As we learn to express our technical insights clearly, we also train ourselves to think more clearly.
In the end, this single paper will not only represent our mastery of neural computing — it will also reflect how far we have grown as thinkers, writers, and scientists.
Grading Scheme
In this course, our grades will reflect not only what we have learned but also how we have learned. Each number will represent the story of our growth — how we approached challenges, honored deadlines, and engaged with ideas. In neural computing, learning happens through feedback and adjustment. So too with us: each task, consultation, and submission becomes part of our own process of learning, evaluating, and improving.
Our overall performance will be expressed as a percentage between 0% and 100%, which will then be converted into the University’s numerical grade scale. The table below summarizes this relationship:
< 65% --> 5.00
65% - 68% --> 3.00
69% - 72% --> 2.75
73% - 76% --> 2.50
77% - 80% --> 2.25
81% - 84% --> 2.00
85% - 88% --> 1.75
89% - 92% --> 1.50
93% - 96% --> 1.25
97% - 100% --> 1.00
We will use the ceiling rule when computing grades. This means we round up — even the smallest effort that pushes a score past a boundary will be acknowledged. For example, a final percentage of 64.000001 will be raised to 65 and converted to a grade of 3.00. This reflects our belief that learning is not binary: sometimes, one extra bit of effort — one more iteration, one more revision — makes all the difference.
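For concreteness, here is a small Python sketch of how the ceiling rule combines with the conversion table above. The function name and structure are ours for illustration; the boundary values come directly from the table.

    import math

    def numerical_grade(percent):
        # Ceiling rule: round the raw percentage up before converting
        p = math.ceil(percent)
        if p < 65:
            return 5.00
        # Walk the conversion table by its upper bounds
        for bound, grade in [(68, 3.00), (72, 2.75), (76, 2.50),
                             (80, 2.25), (84, 2.00), (88, 1.75),
                             (92, 1.50), (96, 1.25), (100, 1.00)]:
            if p <= bound:
                return grade

    print(numerical_grade(64.000001))  # ceiling gives 65, which converts to 3.00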
However, this also means that punctuality and accountability matter. A late submission, no matter how brilliant, will receive at most 75% of the grade it could have earned if submitted on time. This is not a penalty; it is a lesson in professionalism. In the real world, deadlines ensure coordination, fairness, and trust. Learning to manage our time is as valuable as mastering algorithms.
Every component of our grade — the proposal, progress report, final paper, and class participation — is designed to nurture both competence and character. Technical mastery shows what we can do; discipline and integrity show who we are. Together, these make us the kind of scientists and professionals who can be trusted with both data and decisions.
We do not chase grades in this course; we chase understanding. When we engage fully — when we listen, question, design, and write with purpose — the grades will naturally follow. Each percentage point will simply mark another neuron firing in the growing network of our own learning.
Primary Texts:
Haykin, S. (1999). Neural Networks: A Comprehensive Foundation (2nd ed.). Prentice-Hall.
Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning. MIT Press.
Supplementary Materials:
Lecture slides, video recordings, and datasets available via Google Classroom.
Journal papers on neural computation and ensemble learning.
A well-documented technical paper (minimum 7 pages) that demonstrates:
A novel or adapted neural architecture.
A detailed explanation of learning rules and convergence behavior.
Evaluation on real or synthetic datasets.
Complete code repository (GitHub link).
In this course, collaboration is not only allowed — it is encouraged. Learning is most powerful when it is shared. The purpose of our projects is to give us hands-on experience in mastering the course material. Working with others allows us to exchange perspectives, test our ideas, and refine our understanding. Research in education consistently shows that students who learn together tend to achieve deeper and longer-lasting understanding than those who work alone (Johnson & Johnson, 2009; Smith et al., 2005).
When we form study or work groups, we honor the principle that two minds can indeed think better than one — but only when both minds are equally engaged. Collaboration is a partnership, not a shortcut. This means we come to every group discussion prepared, ready to contribute, and eager to listen. We share our insights generously, but we also take responsibility for our own learning. We do not simply absorb knowledge from others; we help create it.
At the same time, collaboration is different from co-authorship. While we may discuss ideas, algorithms, and design strategies with our peers, the final implementation and written report must be our own work. Each of us must independently write our own code, prepare our own figures, and craft our own explanations in our technical paper. If we collaborate with others, we must name them clearly in our document. Acknowledging our collaborators shows honesty and respect — it celebrates our shared learning rather than concealing it.
If we refer to external sources, such as research papers, books, or online materials, we must cite them properly and explain the ideas in our own words. The goal is not to copy but to comprehend — to make what we read a part of how we think. It is a violation of this policy to submit a solution that we cannot confidently explain or defend. In science, true understanding means being able to teach what we have learned.
Plagiarism and other anti-intellectual acts have no place in a university that stands for honor and excellence. At UP, we take pride in our individual accomplishments and in the integrity that sustains them. To maintain transparency, we may document our group discussions on available communication platforms, such as Discord or Facebook, as a record of our collective effort.
In essence, collaboration in this course is not just about dividing work; it is about multiplying learning. When we work together with honesty, curiosity, and respect, we not only strengthen our projects — we also strengthen each other.
Neural Computing at UPLB continues to evolve. What began as a special topic course under the academic initiative of the Institute’s faculty is now on its way to becoming a regular offering in the undergraduate curriculum. This course, currently offered as CMSC 191: Special Topics (Introduction to Neural Computing), has been formally proposed to the University of the Philippines Board of Regents (BOR) as a new degree course titled CMSC 179: Fundamentals of Neural Computing.
This transition marks an important milestone in the Institute’s effort to institutionalize the teaching of computational intelligence and to strengthen UPLB’s leadership in artificial intelligence education. The proposed course has already undergone and successfully passed the institute-level curricular review, and it is now progressing through the college-, university-, and system-level evaluation processes. Once approved, CMSC 179 will serve as the permanent foundation course in neural computation for the BS Computer Science program.
Visitors are invited to view the early draft of the CMSC 179 course proposal, which provides a preview of the course’s learning outcomes, topical outline, and assessment framework. This draft represents a synthesis of years of instructional development, research experience, and student feedback drawn from previous CMSC 191 offerings. It offers a sneak peek into the future of neural computing education at UPLB, where the study of intelligent systems will take its place as a core component of the computer science curriculum.
Freeman S, SL Eddy, M McDonough, MK Smith, N Okoroafor, H Jordt & MP Wenderoth. 2014. Active learning increases student performance in science, engineering, and mathematics. Proceedings of the National Academy of Sciences 111(23):8410–8415. https://doi.org/10.1073/pnas.1319030111
Johnson DW & RT Johnson. 2009. An educational psychology success story: Social interdependence theory and cooperative learning. Educational Researcher 38(5):365–379. https://doi.org/10.3102/0013189X09339057
Prince M. 2004. Does active learning work? A review of the research. Journal of Engineering Education 93(3):223–231. https://doi.org/10.1002/j.2168-9830.2004.tb00809.x
Smith KA, SD Sheppard, DW Johnson & RT Johnson. 2005. Pedagogies of engagement: Classroom-based practices. Journal of Engineering Education 94(1):87–101. https://doi.org/10.1002/j.2168-9830.2005.tb00831.x