I am a Postdoctoral Fellow at the University of California, Irvine, advised by Professor Magnus Egerstedt, the Stacey Nicholas Dean of the Samueli School of Engineering. I am a participant in the Intelligence Community Postdoctoral Research Fellowship Program, administered by the Oak Ridge Institute for Science and Education (ORISE) through an interagency agreement between the U.S. Department of Energy (DOE) and the Office of the Director of National Intelligence (ODNI), with a focus on applications for the Department of Homeland Security (DHS). I earned my Ph.D. in Electrical and Computer Engineering from Purdue University in 2024, where I worked with Professor Philip E. Paré on safety guarantees for dynamic networked systems.
When not performing research, I enjoy spending time with my wife and two kids, playing and composing music on the piano and guitar, singing, and playing games with friends.
My research interests include algorithmic altruism, collaborative control, networked dynamic systems, safety-critical control, machine learning, optimization, robotics, epidemic processes, autonomous vehicles, signal processing, and social modeling.
Ph.D., Electrical and Computer Engineering, Purdue University, 2024
M. S., Computer Science, Brigham Young University, 2020
B. S., Applied Physics, Brigham Young University, 2019
Here is a copy of my full CV.
A visual overview of how theoretical foundations lead to research applications.
Autonomous agents are transforming both robotic and cyber-physical systems through their ability to amplify human capability and coordinate complex activities at scales and speeds beyond direct human supervision. In robotic domains, multi-agent coordination enables teams of autonomous vehicles, drones, and manipulators to perform tasks that exceed the capacity of any individual system. In cyber-physical systems, distributed autonomous processes can manage critical infrastructure, optimize resource flows, and adapt intelligently to dynamic environments. However, the realization of these capabilities hinges on ensuring their safety and reliability. Thus, as our technological ecosystems grow in complexity, so does our need for explainable frameworks that can facilitate autonomous collaboration reliably, intelligently, and safely.
With an emphasis on networked control systems and safety-critical control, my research focuses on bridging the gap between theory and application. I strive for research that advances our fundamental understanding of core concepts; my past and current work spans applications across multiple domains, building toward explainable, safe, and collaborative control frameworks for dynamic, networked autonomous systems.
One advantage of multi-agent systems is a natural embedding of robustness and resiliency: if one agent fails, N-1 agents remain to complete the task. If such robustness is a desired feature, one can take this a step further and ask whether there are situations in which individual agents should sacrifice themselves for the good of the team, or at least volunteer to perform high-risk maneuvers.
This notion of individual robots incurring a purposeful cost (such as risk) is reminiscent of altruism, whereby organisms perform acts that are costly to themselves in order to benefit others. Drawing on Hamilton's rule from ecology, we established conditions under which an agent should voluntarily incur a cost to enhance the productivity of other agents.
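For readers unfamiliar with the ecological inspiration, Hamilton's rule can be stated as a simple inequality (this is the classical formulation from evolutionary biology, not the specific condition derived in our work):

```latex
% Hamilton's rule: an altruistic act is favored when
\[
  r \, B > C
\]
% where
%   r : relatedness between the actor and the recipient,
%   B : fitness benefit conferred on the recipient,
%   C : fitness cost incurred by the actor.
```

In the multi-robot setting, one can loosely read the relatedness term as the importance an agent assigns to its teammates' objectives relative to its own, so that an agent accepts a cost only when the weighted benefit to the team exceeds it.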
Two example trajectories for an 8-agent system, where a star marks each agent's goal. On the left, all agents are assigned equal importance values, whereas on the right, the blue and orange agents are assigned the greatest importance.
A delivery robot (orange) navigates to delivery points identified by search robots (blue), where the delivery robot's importance is set greater than that of the search robots.