Wednesday, March 4, 2026
Holding AI Accountable
In SI 376, students learn to move beyond “AI as magic” and become human-centered problem solvers in an era of autonomous systems.
In the current "vibe coding" era, where AI can generate code and content with a simple prompt, the world doesn't necessarily need more people who can just build algorithms. It needs people who can govern them. This is the core philosophy of SI 376: AI in Practice, a cornerstone of the new Human-Centered Artificial Intelligence (HCAI) minor at UMSI. Led by Associate Professor Jiayu Zhou, the course is designed to move students past the idea of "AI as magic" and into a space where they can test and deploy these tools with a critical, human-centered eye.
While a traditional computer science department might focus on the mathematical implementation of neural networks, SI 376 takes a different path. "SI 376 is less about turning students into algorithm implementers and more about turning them into AI-literate problem solvers," says Zhou. Students in the course don't just describe AI; they learn to stress-test it. Through hands-on labs and end-to-end projects, they master the art of interpreting model errors and using AI responsibly within real-world workflows.
For students entering the HCAI minor from non-technical backgrounds like the arts, public health, or business, the technical barrier is lower than they might expect. However, the intellectual bar is still high. Zhou says the biggest competitive advantage in the 2026 job market is judgment and discernment. "Students' biggest advantage is the ability to frame the right problem, ask the right questions, and recognize when AI is not the right tool." He views UMSI students as human-to-AI translators, professionals who can turn messy human needs into clear system goals while communicating the inevitable trade-offs regarding bias, privacy, and safety.
Zhou’s own research in health informatics, using AI to assist in Alzheimer’s diagnosis, heavily informs his teaching. In the medical field, clean data is a rarity, and perfect answers don't exist. This reality is reflected in how SI 376 handles uncertainty.
Students are taught that failure modes cannot be afterthoughts. By focusing on error analysis and evidence-based explanations, students learn that responsible AI means being honest about what a system can’t do. This semester, students are applying these principles to a project on fact-checking with evidence, a universal challenge whether one is in social work or software engineering.
As AI moves from simple chatbots to autonomous agents that orchestrate complex tasks, the roles students step into will evolve. Zhou envisions graduates becoming AI workflow designers, safety leads, or evaluation specialists. "The job titles will evolve," says Zhou, "but the need for professionals who can connect models, tools, and guardrails into systems that actually hold up in the real world is only going to grow."
Whether you are a data scientist or a poet, SI 376 is proving that the future of AI belongs to those who understand the human at the center of the system.