November 17, 2023, 9:00am to 12:00pm

DeNetS: Decision-making in Networked Systems

Workshop co-located with IFIP Performance 2023

Northwestern University, Chicago campus, IL

Wieboldt Hall, 339 E Chicago Ave, Chicago, IL 60611

About:

Rapid developments in digital systems, communication technologies, and sensing devices have led to the emergence of large-scale networked systems connecting massive numbers of intelligent agents. Examples of such networked systems are abundant: power systems, smart factories, sensor networks, and smart buildings. In these systems, agents are often required to jointly solve optimization, control, and learning problems so that a desirable network-wide objective is achieved. This motivates the development of distributed algorithms for large-scale networked systems.

The goal of this workshop is to bring together prominent researchers to share their ideas, methods, and results on the topic of learning, control, and optimization of networked systems.

Organizers:

Invited Speakers:

Program: 

Steven Low

Time: 9:00am to 9:30am

Title: Inverse Kron Reduction to Identify Three-phase Radial Power Grid  

Abstract: A reliable network model is critical for implementing many of the smart grid innovations on a distribution system, but is often unavailable. Unlike transmission grids, distribution grids are not well instrumented, with measurements available only at a subset of the nodes (e.g., substations and customer meters). These partial measurements can be used to estimate a Kron reduced admittance matrix (Schur complement) that is a model of a reduced network consisting of only measured nodes. In this talk, I will describe a method to compute the admittance matrix of the original three-phase radial network from its Kron reduction, under the assumption that every hidden node has a degree of at least 3.  The key idea is to show an invariant structure under iterative Kron reduction that allows one to reverse each iteration of the Kron reduction. 
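As background for the abstract above (not part of the talk itself), the Kron reduction of an admittance matrix Y, partitioned into measured (m) and hidden (h) nodes, is the Schur complement of the hidden block. A minimal NumPy sketch on a hypothetical 3-node network with illustrative line admittances:

```python
import numpy as np

# Toy 3-node network: nodes 0, 1 measured; node 2 hidden.
# Line admittances are illustrative values, not from the talk.
y01, y02, y12 = 1.0, 2.0, 3.0

# Full nodal admittance matrix (Laplacian-like structure).
Y = np.array([
    [y01 + y02, -y01,       -y02      ],
    [-y01,       y01 + y12, -y12      ],
    [-y02,      -y12,        y02 + y12],
])

m = [0, 1]   # measured nodes
h = [2]      # hidden node

# Kron reduction = Schur complement of the hidden block:
# Y_red = Y_mm - Y_mh @ inv(Y_hh) @ Y_hm
Y_red = (Y[np.ix_(m, m)]
         - Y[np.ix_(m, h)] @ np.linalg.inv(Y[np.ix_(h, h)]) @ Y[np.ix_(h, m)])
```

The reduced matrix remains symmetric with zero row sums, i.e., it is again the admittance matrix of a (smaller) network over the measured nodes; the talk concerns reversing this map for three-phase radial networks.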

Urbashi Mitra

Time: 9:30am to 10:00am

Title: Digital Cousins: Ensemble Learning for Large & Heterogeneous Networks 

Abstract: The evolution to 6G promises communication-and-compute networks that are larger in scale and with significantly heterogeneous edge devices. These modern networks challenge our ability to design and optimize efficiently. While cognitive networks promise to introduce agility and intelligence, there is a need to significantly scale up such approaches in an internet-of-everything world. We present modeling strategies that effectively capture the dynamics and heterogeneity of these modern networks; however, the models come at the price of complexity. To this end, we propose a multi-pronged approach to network design and optimization. We review strategies exploiting graph signal processing for network optimization, including new representations for network behavior. We show that the new representations allow for efficient graph reduction and enable low-complexity optimization of network control policies. An exciting consequence is that the graph representations allow for the efficient creation of related synthetic networks, or digital cousins, that accurately capture network behavior without the need for excessive trajectory sampling of the actual network. A novel online/offline Q-learning methodology is proposed, enabling ensemble learning across the digital cousins. The proposed strategy offers significantly improved convergence rates and performance versus current state-of-the-art learning methods, including those based on neural networks. Theoretical guarantees can be provided, and the proposed methods offer strong performance gains across a variety of networks. The ensemble learning can be adapted to general graphs described by Markov chains. Ongoing work analyzes these methods via a coverage analysis adapted from approaches for hybrid reinforcement learning.
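For readers unfamiliar with the Q-learning building block the abstract refers to, a minimal tabular sketch on a hypothetical 2-state, 2-action Markov decision process (the transition and reward values are invented for illustration; the talk's digital-cousin ensemble is not shown here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 2-state, 2-action MDP (illustrative, not from the talk).
P = np.array([[[0.9, 0.1], [0.2, 0.8]],   # P[s, a, s'] transition probabilities
              [[0.7, 0.3], [0.1, 0.9]]])
R = np.array([[1.0, 0.0], [0.0, 2.0]])     # R[s, a] immediate rewards
gamma, alpha = 0.9, 0.1                    # discount factor, learning rate

Q = np.zeros((2, 2))
s = 0
for _ in range(5000):
    a = rng.integers(2)                    # uniform exploration policy
    s_next = rng.choice(2, p=P[s, a])
    # Standard tabular Q-learning update.
    Q[s, a] += alpha * (R[s, a] + gamma * Q[s_next].max() - Q[s, a])
    s = s_next
```

The ensemble idea in the talk runs such updates against several synthetic "digital cousin" models rather than sampling trajectories only from the real network.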

Guannan Qu 

Time: 10:00am to 10:30am

Title: Scalable Reinforcement Learning for Multi-Agent Networked Systems  

Abstract: We study reinforcement learning (RL) in a setting with a network of agents whose states and actions interact in a local manner, where the objective is to find policies such that the (discounted) global reward is maximized. A fundamental challenge in this setting is that the state-action space size scales exponentially in the number of agents, rendering the problem intractable for large networks. In this talk, we present a framework that exploits the network structure to conduct reinforcement learning in a scalable manner. The key feature of our framework is that we prove spatial decay properties for the Q function and the policy, meaning their dependence on faraway agents decays as the distance increases. Such spatial decay properties enable approximations by truncating the Q functions and policies to local neighborhoods, hence drastically reducing the dimension and avoiding the exponential blow-up in the number of agents. Lastly, we demonstrate the effectiveness of our approach in a microgrid inverter control example, showing our approach is significantly more scalable than benchmarks.
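The truncation idea can be made concrete: each agent's Q function is approximated using only the states and actions of agents within a κ-hop neighborhood of it. A small BFS sketch on a hypothetical line graph of six agents (the graph and function names are illustrative, not the paper's code):

```python
from collections import deque

def k_hop_neighborhood(adj, i, kappa):
    """Return the set of agents within kappa hops of agent i (including i)."""
    seen = {i}
    frontier = deque([(i, 0)])
    while frontier:
        node, d = frontier.popleft()
        if d == kappa:
            continue
        for j in adj[node]:
            if j not in seen:
                seen.add(j)
                frontier.append((j, d + 1))
    return seen

# Line graph of 6 agents: 0-1-2-3-4-5 (illustrative).
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}

# Agent 2's truncated Q function depends only on this local state:
local = k_hop_neighborhood(adj, 2, kappa=1)
```

Because the local neighborhood size is independent of the total number of agents, the truncated Q function's input dimension no longer grows with the network.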

Alejandro Ribeiro  

Time: 10:30am to 11:00am

Title: Graph Neural Networks in Decentralized Control: Architectures, Stability, and Transferability

Abstract: We review success stories on the use of Graph Neural Networks (GNNs) in decentralized control of networked systems. We present GNN architectures as algebraic generalizations of convolutional neural networks (CNNs) and discuss fundamental stability and transferability properties. Stability refers to the response of a GNN to perturbations of the graph, and transferability pertains to the ability to train on smaller graphs and transfer the trained GNN to larger graphs. We explain the tradeoffs between discriminability on the one hand and stability and transferability on the other, and how multilayer nonlinear GNN architectures achieve better tradeoffs than linear architectures. Analyses are spectral and show that stable and transferable GNNs are more difficult to realize when policies depend on graph properties associated with large eigenvalues of the graph.
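The CNN generalization mentioned above can be sketched in a few lines: a GNN layer composes a polynomial graph filter in a graph shift operator S (e.g., an adjacency matrix) with a pointwise nonlinearity. A minimal NumPy illustration with invented filter taps and a hypothetical 3-node graph:

```python
import numpy as np

def graph_filter(S, x, h):
    """Polynomial graph filter: z = sum_k h[k] * S^k @ x."""
    z = np.zeros_like(x)
    Skx = x.copy()          # running S^k @ x, starting at k = 0
    for hk in h:
        z = z + hk * Skx
        Skx = S @ Skx
    return z

def gnn_layer(S, x, h):
    """One GNN layer: graph filter followed by a pointwise ReLU."""
    return np.maximum(graph_filter(S, x, h), 0.0)

# Illustrative 3-node cycle adjacency as the shift operator.
S = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])
x = np.array([1.0, -1.0, 0.0])   # graph signal (one value per node)
y = gnn_layer(S, x, h=[0.5, 0.25])
```

On a line graph, S reduces to a shift, and the filter becomes an ordinary convolution; this is the sense in which GNNs algebraically generalize CNNs. The spectral analyses in the talk study how such filters respond when S is perturbed.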

Sanjay Shakkottai 

Time: 11:00am to 11:30am

Title: Robust Multi-Agent Multi-Armed Bandits 

Abstract: We consider a multi-agent multi-armed bandit setting in which a group of honest agents collaborate over a network to minimize regret, but malicious agents can disrupt learning arbitrarily. In this talk, we discuss the impact of graph structure on learning and regret. We first discuss methods to mitigate the effects of malicious agents when the graph is complete. We then generalize beyond the complete graph, and show that the effect of malicious agents is entirely local, in the sense that only the malicious agents directly connected to an agent affect its long-term regret. 

Based on joint work with Daniel Vial and R. Srikant.

Vijay Subramanian 

Time: 11:30am to 12:00pm

Title: Cooperative Multi-Agent Constrained POMDPs: Strong Duality and Primal-Dual Reinforcement Learning with Approximate Information States 

Abstract: We study the problem of decentralized constrained POMDPs in a team setting where multiple non-strategic agents have asymmetric information. Strong duality is established for the setting of infinite-horizon expected total discounted costs when the observations lie in a countable space, the actions are chosen from a finite space, and the immediate cost functions are bounded. Following this, connections with the common-information and approximate information-state approaches are established. The approximate information states are characterized independently of the Lagrange multiplier vector (under certain assumptions), so that adaptations of the multiplier during learning will not necessitate new representations. Finally, a primal-dual multi-agent reinforcement learning (MARL) framework based on centralized training with decentralized execution (CTDE) and three-time-scale stochastic approximation is developed with the aid of recurrent and feedforward neural networks for function approximation.
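To illustrate the primal-dual mechanism underlying such frameworks (on a toy convex problem, not a POMDP; the problem and step size are invented for illustration), one alternates gradient descent on the Lagrangian in the primal variable with projected gradient ascent in the multiplier:

```python
# Toy constrained problem: minimize f(x) = x**2  subject to  g(x) = 1 - x <= 0.
# Lagrangian: L(x, lam) = x**2 + lam * (1 - x); saddle point at x* = 1, lam* = 2.
eta = 0.01          # step size (illustrative)
x, lam = 0.0, 0.0
for _ in range(20000):
    x -= eta * (2 * x - lam)               # primal: gradient descent on L
    lam = max(0.0, lam + eta * (1 - x))    # dual: projected gradient ascent
```

In the talk's MARL framework, the primal update is itself a learning procedure (CTDE with neural function approximation), and multiple time scales keep the dual adaptation slower than the primal one.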

This is joint work with Nouman Khan at the University of Michigan, Ann Arbor.