09:30 - 10:30 Yuval Emek, Uniform Distributed Graph Algorithms
10:30 - 11:00 Coffee break
11:00 - 12:00 Lélia Blin, Beyond Silence: Towards Memory-Efficient Self-Stabilization
12:00 - 13:00 Andrea Richa, Algorithmic Programmable Matter: From Local Markov Chains to "Dumb" Robots
Yuval Emek (Technion)
Uniform Distributed Graph Algorithms
The term "uniform computational model" typically refers to an algorithm designer who is oblivious to the parameters (e.g., size) of the input instances on which the algorithm is to be invoked. This is a desirable feature since it facilitates deploying algorithms in a one-size-fits-all manner. In distributed graph algorithms, uniformity has an additional angle: more often than not, the global parameters of the input graph are not readily revealed to the individual decision-making units (the nodes) even after the algorithm is invoked. Therefore, the design of uniform distributed algorithms raises certain algorithmic challenges that are less common in other domains. In this talk, we will discuss those challenges as well as some recent advances in the design of uniform algorithms for fundamental distributed tasks.
The talk will be self-contained.
Bio: Yuval Emek is a professor at the Faculty of Data and Decision Sciences at the Technion – Israel Institute of Technology, where he conducts research on algorithmic aspects of coping with uncertainty. He graduated (summa cum laude) from the Technion with a bachelor's degree in computer science (2000) and completed his master's degree (2003) and PhD (2008) in computer science and applied mathematics at the Weizmann Institute of Science under the supervision of David Peleg. He then held post-doc positions at Tel Aviv University (2008-2009), Microsoft (2009-2010), and ETH Zurich (2010-2013), before joining the Technion as a faculty member in 2013. In 2021-2022, Yuval was a visiting Associate Professor at the Department of Computer Science at the University of British Columbia.
Yuval is the author of more than 30 journal papers that were published in prestigious venues such as JACM, SICOMP, TALG, and Distributed Computing; and more than 65 conference papers that appeared in the proceedings of competitive conferences such as STOC, SODA, PODC, and ICALP. He has served on the program and steering committees of dozens of conferences and workshops, including PODC, DISC, ICALP, and SODA, and he is currently an editorial board member of Information and Computation.
In his research, Yuval focuses on combinatorial optimization problems with a special interest in network algorithms, particularly when the data is provided in an online fashion or when it is distributed across the network and requires decentralized decision making. Yuval's theoretical research is motivated by applications in diverse domains ranging from large scale telecommunication systems and transportation networks to nano-computing and distributed biological processes.
Lélia Blin (Université Paris Cité)
Beyond Silence: Towards Memory-Efficient Self-Stabilization
Self-stabilization is a fundamental property of distributed systems that enables them to recover a correct state after transient faults, without requiring external intervention. Even if the global state of the system becomes corrupted, a self-stabilizing protocol guarantees convergence to a legitimate configuration from which its behavior satisfies the intended specification. A central challenge in this area is achieving memory efficiency, since reducing the information stored and exchanged by nodes lowers communication overhead and enables more resource-efficient fault-tolerant systems.
The notion of silent algorithms, where nodes eventually stop changing state, plays an important role in the theory of self-stabilization. However, silence imposes lower bounds on memory: for example, Ω(log n) bits per node are required for tasks such as leader election and spanning tree construction. Being silent, though, is not inherently required of self-stabilizing algorithms; by relaxing this constraint, one can design protocols that drastically reduce space complexity.
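The non-silent theme can be made concrete with a classical example not taken from the talk: Dijkstra's K-state token-circulation protocol on a ring, the archetypal self-stabilizing algorithm that is deliberately never silent. A minimal simulation sketch (node indices, the fair random daemon, and the parameter choices below are illustrative assumptions):

```python
# Dijkstra's classic K-state token-circulation protocol (a standard
# textbook illustration, not material from the talk): node 0 is
# distinguished, every node holds a counter in {0, ..., K-1}, and a node
# "holds the token" exactly when its rule is enabled. For K >= n, any
# corrupted initial state converges to a single circulating token -- and
# the algorithm is NOT silent: nodes keep changing state forever.

import random

def enabled_nodes(states):
    """Nodes whose rule is enabled (the token holders); never empty."""
    n = len(states)
    nodes = [0] if states[0] == states[n - 1] else []       # root rule
    nodes += [i for i in range(1, n) if states[i] != states[i - 1]]
    return nodes

def step(states, K):
    """Fire one enabled node, chosen at random (a fair central daemon)."""
    i = random.choice(enabled_nodes(states))
    if i == 0:
        states[0] = (states[0] + 1) % K     # root increments its counter
    else:
        states[i] = states[i - 1]           # others copy the left neighbor

random.seed(0)
n, K = 5, 6                                 # K >= n guarantees stabilization
states = [random.randrange(K) for _ in range(n)]   # arbitrary corrupted start
for _ in range(1000):
    step(states, K)
print(len(enabled_nodes(states)))           # prints 1: exactly one token left
```

Note the trade-off the abstract alludes to: each node stores a counter of log K ≥ log n bits, so this classical protocol sits exactly at the Ω(log n) barrier that non-silent designs aim to beat.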
In this talk, we will discuss the issue of space complexity of self-stabilizing algorithms. We will present recent advances in the design of memory-efficient self-stabilizing algorithms for fundamental distributed tasks.
Bio: Lélia Blin is a Professor of Computer Science at Université Paris Cité, and member of Institut de Recherche en Informatique Fondamentale (IRIF). Her research focuses on fault-tolerant distributed algorithms, especially self-stabilization. She has authored numerous publications in these areas, contributing to significant advances in memory complexity for fundamental distributed computing tasks.
Andrea Richa (Arizona State University)
Algorithmic Programmable Matter: From Local Markov Chains to "Dumb" Robots
Many programmable matter systems have been developed, including modular and swarm robotics, synthetic biology, DNA tiling, and smart materials. We describe programmable matter as an abstract collection of simple computational elements (particles) with limited memory that each execute distributed, local algorithms to self-organize and solve system-wide problems such as movement, reconfiguration, and coordination. Self-organizing particle systems (SOPS) have many interesting potential applications, such as coating objects for monitoring and repair purposes and forming nano-scale devices for surgery and molecular-scale electronic structures. We describe some of our work on the algorithmic foundations of programmable matter, investigating how macro-scale system behaviors can naturally emerge from local micro-behaviors of individual particles. We utilize tools from statistical physics and Markov chain analysis to translate Markov chains defined at the system level into distributed, local algorithms for SOPS that drive the desired emergent collective behavior for the problems of compression, separation, and foraging, among others. We further establish the notion of algorithmic matter, where we leverage standard binary computation, as well as physical characteristics of the robots and interactions with the environment, to implement our micro-level algorithms in actual testbeds composed of robots that are not capable of any standard computation. We conclude by addressing full concurrency and asynchrony in SOPS.
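The Metropolis-style local chains behind compression can be sketched in a deliberately simplified toy model (an editor's illustration, not the talk's algorithm: a square lattice instead of the triangular lattice used in the actual SOPS work, and the connectivity-preserving move rules are omitted). The stationary distribution weights a configuration by lam raised to its number of adjacent pairs, so a bias lam > 1 favors compressed configurations:

```python
# Toy Metropolis chain for "compression" (simplified sketch, assumptions:
# square lattice, no connectivity constraint, single-particle moves).
# Target distribution: pi(config) proportional to lam ** edges(config).

import random

MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def edges(config):
    """Number of adjacent occupied pairs -- the compression measure."""
    return sum((x + 1, y) in config for (x, y) in config) + \
           sum((x, y + 1) in config for (x, y) in config)

def mc_step(config, lam):
    """One Metropolis move: try relocating a random particle one cell."""
    p = random.choice(sorted(config))
    dx, dy = random.choice(MOVES)
    q = (p[0] + dx, p[1] + dy)
    if q in config:
        return                                    # target occupied: no move
    before = edges(config)
    config.remove(p)
    config.add(q)
    delta = edges(config) - before
    if random.random() >= min(1.0, lam ** delta):  # Metropolis filter
        config.remove(q)                           # reject: undo the move
        config.add(p)

random.seed(1)
config = {(i, 0) for i in range(9)}     # stretched-out line: 8 adjacencies
for _ in range(20000):
    mc_step(config, lam=4.0)
print(edges(config))                    # typically well above the initial 8
```

The point mirrors the abstract: each step uses only a particle's immediate neighborhood, yet the bias lam drives a global emergent behavior (clustering toward the 12-edge maximum for 9 particles); the real SOPS chains add connectivity-preserving rules so the move set is locally checkable by the particles themselves.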
Bio: Andrea Richa is a Professor at Arizona State University's School of Computing and Augmented Intelligence (SCAI), where she has also served as SCAI's Interim Associate Director. Her main areas of expertise are distributed and network algorithms and computing in general. More recently, she has focused on developing the algorithmic foundations of what has been coined programmable matter, through her work on self-organizing particle systems (SOPS). Her work has been widely cited and includes, besides SOPS, work on bio-inspired distributed algorithms, distributed load balancing, packet routing, wireless network modeling and topology control, wireless jamming, data mule networks, underwater optical networking, and distributed hash tables (DHTs). She received the 2024 ASU Fulton Undergraduate Research Initiative Outstanding Faculty Mentor Award, the 2021 ASU Faculty Women Association Outstanding Mentor Award, and the 2017 SCAI Best Senior Researcher Award. She is currently the recipient of a DoD MURI award and has been the recipient of an NSF CAREER Award, among others; she has also served as keynote speaker and as program and general chair of several prestigious conferences. For more on her work and that of her students, please see http://sops.engineering.asu.edu.