“You’re the U.S. Robots’ psychologist, aren’t you?”
“Robopsychologist, please.”
“Oh, are robots so different from men, mentally?”
“Worlds different.” She allowed herself a frosty smile. “Robots are essentially decent.”

- Evidence, Isaac Asimov


How can we design efficient local algorithms for multiagent learning with global performance guarantees? 


Automation delegates many decision-making processes from humans to machines. In recent years, automation has been driven by artificial intelligence (AI), which enables trained machines to achieve expert-level performance in some tasks. Many of these tasks require humans to interact, so automating them will require AIs to interact, creating a networked environment where the decisions of one AI affect others and the data they learn from. Examples include autonomous vehicles, multi-robot systems, on-device learning, cloud computing, communication networks, the smart grid, and, more recently, AI agents based on large language models (LLMs). As game theory predicts, this interaction can lead to globally inefficient outcomes. However, machines follow programmatic objectives and protocols; unlike humans, they are constrained not by selfish interests but by their local information and access to resources. This modern setting calls for new tools and paradigms, which are the focus of my research.


My research tackles the above question by bringing together game theory, distributed control, and multiagent learning. I study networked systems such as wireless networks, autonomous vehicles, the smart grid, epidemics and social networks, and edge computing. In these systems, designing scalable and efficient algorithms requires overcoming the following challenges: