Intelligent Machines & Sociotechnical Systems Lab

  • EE360 Feedback Control Systems: This course provides an introduction to the fundamental concepts of classical control systems and the state-space representation of linear time-invariant systems. Topics include the design of linear feedback control systems for command following, disturbance rejection, stability, and dynamic-response specifications; root locus and frequency-response (Bode) design techniques; and the Nyquist stability criterion.
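The command-following and stability ideas above can be illustrated with a minimal sketch (an assumed example, not course material): proportional feedback applied to a first-order discrete-time plant, where the choice of gain sets the closed-loop pole and the steady-state tracking error.

```python
# Minimal sketch (assumed example): proportional feedback on a
# first-order discrete-time plant
#   x[k+1] = a*x[k] + b*u[k],  u[k] = K*(r - x[k])
# illustrating command following and the effect of the gain K.

def simulate_p_control(a, b, K, r, steps):
    """Simulate the closed loop and return the state trajectory."""
    x, traj = 0.0, []
    for _ in range(steps):
        u = K * (r - x)       # proportional control on the tracking error
        x = a * x + b * u     # plant update
        traj.append(x)
    return traj

traj = simulate_p_control(a=0.9, b=0.1, K=5.0, r=1.0, steps=100)
# The closed-loop pole is a - b*K = 0.4 (stable), so x converges to the
# steady state b*K*r / (1 - a + b*K) = 0.5/0.6 ≈ 0.833, exposing the
# steady-state error that a pure proportional controller leaves.
```

Raising K shrinks the steady-state error but moves the closed-loop pole, which is exactly the design trade-off that root locus techniques make visible.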

  • EE560 Linear Systems Theory: This course provides an applied introduction to linear dynamical systems theory. Dynamical systems are systems that evolve over time, possibly under external excitation. A dynamical model of a system is a set of mathematical laws explaining, in a compact and quantitative way, how the system evolves over time. A linear dynamical system (LDS) is a mathematical representation of a physical system whose dynamical model can be expressed as a set of linear differential or difference equations.
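The difference-equation form of an LDS mentioned above can be sketched in a few lines (an assumed example, not course material): the autonomous system x[k+1] = A x[k], whose stability is governed by the eigenvalues of A.

```python
# Minimal sketch (assumed example): a discrete-time linear dynamical
# system x[k+1] = A x[k], the difference-equation form of an LDS.

def step(A, x):
    """One update of the linear difference equation x[k+1] = A x[k]."""
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

# An upper-triangular 2x2 system: its eigenvalues are the diagonal
# entries 0.5 and 0.8, both inside the unit circle, so the state
# decays to the origin from any initial condition.
A = [[0.5, 0.1],
     [0.0, 0.8]]
x = [1.0, 1.0]
for _ in range(50):
    x = step(A, x)
# after 50 steps both components are driven close to zero
```

The same update with |eigenvalue| > 1 diverges, which is the discrete-time stability criterion studied in the course.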

  • EE567 Multiagent Systems: The course is about algorithmic game theory and its applications in engineering, economics, and social domains. Game theory is the study of interacting decision makers, each of whom acts in its own self-interest. The course will cover the basic framework for strategic games and its various manifestations. Topics include matrix games, extensive-form games, mixed strategies, repeated games, Bayesian games, and cooperative games. The course will continue with an application of game theory as a design tool for multiagent systems, i.e., systems that consist of a collection of programmable decision-making components. Game theory as a design tool has applications in diverse domains such as online advertisements and auctions, modeling of social phenomena, distributed optimization, resource allocation, throughput maximization in computer networks, and demand management in smart grids.
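The matrix games and self-interested decision making described above can be sketched concretely (an assumed example, not course material): a pure-strategy Nash equilibrium is a pair of actions that are mutual best responses, which can be found by direct enumeration in a small game.

```python
# Minimal sketch (assumed example): pure-strategy Nash equilibria of a
# two-player matrix game, found by checking mutual best responses.

def pure_nash(payoff_row, payoff_col):
    """Return all (i, j) where neither player gains by deviating unilaterally."""
    n, m = len(payoff_row), len(payoff_row[0])
    eq = []
    for i in range(n):
        for j in range(m):
            row_best = all(payoff_row[i][j] >= payoff_row[k][j] for k in range(n))
            col_best = all(payoff_col[i][j] >= payoff_col[i][k] for k in range(m))
            if row_best and col_best:
                eq.append((i, j))
    return eq

# Prisoner's dilemma payoffs (action 1 = defect): mutual defection is
# the unique pure-strategy equilibrium, even though (0, 0) pays both
# players more -- the hallmark of selfish decision making.
row = [[3, 0], [5, 1]]
col = [[3, 5], [0, 1]]
# pure_nash(row, col) → [(1, 1)]
```

Games such as matching pennies have no pure-strategy equilibrium, which is what motivates the mixed strategies covered in the course.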

  • EE569 Dynamic Programming and Reinforcement Learning: Dynamic programming is a framework for deriving optimal decision strategies in evolving and uncertain environments. In the first part of the course, we will cover the theoretical foundations of dynamic programming in detail. Topics include the principle of optimality in deterministic and stochastic settings, LQR control, and value and policy iteration. In the second part of the course, we will focus on approximation techniques and simulation-based methods such as online reinforcement learning, approximate dynamic programming, and model predictive control.
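The value iteration mentioned above can be sketched on a toy problem (an assumed example, not course material): repeatedly applying the Bellman optimality update to a two-state Markov decision process until the values converge to the fixed point.

```python
# Minimal sketch (assumed example): value iteration on a tiny two-state
# MDP, the fixed-point computation at the heart of dynamic programming.

def value_iteration(P, R, gamma, iters):
    """P[a][s][t] transition probabilities, R[s][a] rewards; returns state values."""
    n = len(R)
    V = [0.0] * n
    for _ in range(iters):
        # Bellman optimality update: maximize expected reward-to-go over actions.
        V = [max(R[s][a] + gamma * sum(P[a][s][t] * V[t] for t in range(n))
                 for a in range(len(P)))
             for s in range(n)]
    return V

# Two states, two actions: action 0 stays put, action 1 switches state.
P = [[[1.0, 0.0], [0.0, 1.0]],   # action 0: stay
     [[0.0, 1.0], [1.0, 0.0]]]   # action 1: switch
R = [[0.0, 1.0],                  # state 0: switching pays 1
     [2.0, 0.0]]                  # state 1: staying pays 2
V = value_iteration(P, R, gamma=0.9, iters=200)
# Fixed point: V[1] = 2 / (1 - 0.9) = 20, V[0] = 1 + 0.9 * 20 = 19,
# so the optimal policy switches from state 0 and stays in state 1.
```

Because the update is a contraction with modulus gamma, the iterates converge geometrically; the approximate methods in the second part of the course replace the exact maximization when the state space is too large to enumerate.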