HACDA
2nd Annual Workshop on
Highlights of Asynchronous Concurrent and Distributed Algorithms
HACDA is a one-day workshop that aims to highlight interesting research directions in asynchronous and fault-tolerant concurrent and distributed algorithms. Its program consists of invited talks targeted at the general DISC audience and designed to cover broad topics rather than a specific result. The goal of HACDA is to encourage discussion and open the field to new researchers.
HACDA 2024 will be co-located with DISC 2024 and will take place on November 1st. The workshop will be chaired by Naama Ben-David.
Schedule
9:00 - 10:00 Sergio Rajsbaum
10:00 - 11:00 Yuanhao Wei
11:00 - 11:30 Coffee Break
11:30 - 12:30 Neil Giridharan
12:30 - 14:00 Lunch
14:00 - 15:00 Gal Sela
15:00 - 16:00 Sean Ovens
Speakers
Neil Giridharan
Talk Title: Evolution of DAG BFT
Abstract: DAG BFT protocols have risen in popularity due to their ability to achieve high throughput and scalability while maintaining simplicity. Although DAG BFT protocols have seen wide adoption, they have been limited by their poor latency. Newer DAG protocols promise to retain the throughput and scalability benefits while achieving lower latency. This talk will start by covering early DAG BFT protocols and their limitations. Next, it will cover the current state-of-the-art protocols and how they achieve low latency and high throughput. Finally, it will conclude with future directions and open questions.
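To make the object under discussion concrete, here is a rough sketch of the structure that DAG BFT protocols build: in each round, every validator produces a vertex that batches transactions and references a quorum of vertices from the previous round, and consensus logic is layered on top of the resulting DAG. This is an illustrative sketch only, not taken from any specific protocol in the talk; all field names are assumptions.

```cpp
// Rough sketch of the per-round object a DAG BFT protocol builds.
// Real protocols add certificates, weak links, garbage collection,
// and an ordering rule on top of this structure.
#include <cstdint>
#include <string>
#include <vector>

struct VertexRef {            // identifies a vertex by (round, author, digest)
    uint64_t round;
    uint32_t author;
    std::string digest;
};

struct Vertex {
    uint64_t round;                   // round in which the vertex was created
    uint32_t author;                  // validator that produced it
    std::vector<std::string> block;   // batched transactions (payload)
    std::vector<VertexRef> parents;   // quorum (typically >= 2f+1) of round-1 vertices
    std::string signature;            // author's signature over the contents
};
```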
Bio: Neil Giridharan is a 5th year PhD student at UC Berkeley advised by Natacha Crooks. His research interests are in distributed systems and distributed computing with a focus on consensus algorithms.
Sean Ovens
Talk Title: Visualizing the memory layout of multithreaded applications
Abstract: The performance of a data structure in practice is affected by a range of low-level factors external to its algorithmic details. One such factor is the application's memory layout, i.e., the manner in which objects are placed in memory. Until now, determining an application's memory layout has required either carefully deducing it from a deep understanding of the memory allocator and the allocation patterns performed by the algorithm, or painstakingly recording the addresses produced by the application and reconstructing the layout by hand. This is especially complex for multithreaded applications because different thread schedules can lead to different allocation patterns. There are plenty of existing tools that can profile an application to count, for example, the number of stalled cycles, cache misses, or cache hits in a modified state (HITMs). If one of these counts is higher than expected, this may indicate the presence of memory layout issues. However, determining the actual cause can require significantly more effort. In this talk, I will discuss ongoing work to create a set of software tools for logging and visualizing memory allocations in multithreaded programs. I will also demonstrate how these tools can be used to surface subtle performance issues in several benchmarks.
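As a toy illustration of the kind of memory-layout issue such tools aim to surface (a hypothetical example, not one of the speaker's benchmarks or tools), the snippet below contrasts two counters that land on the same cache line, causing false sharing when updated by different threads, with a padded layout that separates them. The packed version is typically slower and shows elevated HITM counts in hardware profilers.

```cpp
#include <atomic>
#include <cstdio>
#include <thread>

struct Packed {               // both counters share one cache line
    std::atomic<long> a{0};
    std::atomic<long> b{0};
};

struct Padded {               // padding pushes b onto its own cache line
    std::atomic<long> a{0};
    char pad[64];
    std::atomic<long> b{0};
};

template <class Counters>
void run(const char* name) {
    Counters c;
    auto work = [](std::atomic<long>& x) {
        for (long i = 0; i < 50'000'000; ++i)
            x.fetch_add(1, std::memory_order_relaxed);
    };
    std::thread t1(work, std::ref(c.a)), t2(work, std::ref(c.b));
    t1.join();
    t2.join();
    std::printf("%s: a=%ld b=%ld\n", name, c.a.load(), c.b.load());
}

int main() {
    run<Packed>("packed (false sharing)");   // typically much slower
    run<Padded>("padded (no false sharing)");
}
```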
Bio: Sean Ovens is a postdoctoral researcher in Trevor Brown's Multicore Lab at the University of Waterloo. He earned his PhD in Summer 2023 from the University of Toronto, where he was supervised by Faith Ellen. His research interests include impossibility results and complexity lower bounds for distributed algorithms, concurrent data structures, and recoverable algorithms. Sean was the recipient of the best paper award at PODC in 2022 and 2024.
Sergio Rajsbaum
Talk Title: Distributed Computability and Continuous Maps
Abstract: The celebrated Asynchronous Computability Theorem of Herlihy and Shavit characterizes the tasks that are wait-free solvable in a shared-memory read/write system in terms of certain subdivisions relating the input and output complexes of a task. While this result is fundamental, it is not as informative as the corresponding result for colorless tasks, which directly connects wait-free solvability and topology: it states that a colorless task is solvable if and only if there is a continuous map from the input complex to the output complex respecting the input/output relation of the task. A self-contained, intuitive introduction to this topic will be presented, explaining these theorems, some of their important consequences, and the efforts that have been made to characterize general task solvability in terms of continuous maps.
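For readers unfamiliar with the terminology, the colorless characterization mentioned above can be written compactly as follows, following the abstract's own phrasing (standard notation assumed; the version for a fixed number of processes is stated on a skeleton of the input complex):

```latex
% |K| denotes the geometric realization of a complex K; the task is
% (\mathcal{I}, \mathcal{O}, \Delta) with input complex \mathcal{I},
% output complex \mathcal{O}, and input/output relation \Delta.
A colorless task $(\mathcal{I}, \mathcal{O}, \Delta)$ is wait-free solvable
if and only if there is a continuous map
\[
  f \colon |\mathcal{I}| \to |\mathcal{O}|
  \quad \text{carried by } \Delta, \text{ i.e., }
  f(|\sigma|) \subseteq |\Delta(\sigma)|
  \ \text{for every simplex } \sigma \in \mathcal{I}.
\]
```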
Bio: Sergio Rajsbaum received a degree in Computer Engineering from the Universidad Nacional Autónoma de México (UNAM) in 1985, and a PhD in Computer Science from the Technion, Israel, in 1991. Since then he has been a faculty member at the Instituto de Matemáticas at UNAM. After a postdoc at MIT, he has visited several institutions; he is currently visiting IRIF, Université Paris Cité. His research interests are in the theory of distributed computing. He published a book on the topological approach to distributed computability with Maurice Herlihy and Dmitry Kozlov. He has chaired the Program Committees of conferences such as LATIN, PODC, and SIROCCO, and has been a member of the editorial boards of IEEE Transactions on Dependable and Secure Computing, Information Processing Letters, and Computer Science Review.
Gal Sela
Talk Title: Efficient Concurrent Aggregate Queries
Abstract: Data structure queries that involve multiple elements, such as snapshots and aggregate queries, are widely used but difficult to implement efficiently in the presence of concurrent updates. Concurrent snapshots, which capture all elements of a data structure at a certain point in time, have been extensively studied. Yet aggregate queries, which provide succinct information about a range of data items, such as the number of elements or the i-th element, have only recently been explored for concurrent dynamic data structures. This talk will cover recent advances in the design of aggregate queries for concurrent data structures. Taking a snapshot to answer an aggregate query is inefficient, but ideas from the snapshot literature can be leveraged to develop more efficient approaches, which will be presented in the talk.
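As a minimal sketch of the baseline the abstract calls inefficient, the snippet below answers a size query by taking a full snapshot and scanning it. The concurrent set type and its snapshot() method are hypothetical placeholders, not an API from the talk.

```cpp
// Baseline: answer an aggregate query ("how many elements?") by taking a
// full snapshot and scanning it -- O(n) work and space for a one-number
// answer, which is the cost more direct aggregate queries aim to avoid.
#include <cstddef>

template <class ConcurrentSet>
std::size_t size_via_snapshot(ConcurrentSet& set) {
    auto snap = set.snapshot();       // assumed to return an iterable copy
    std::size_t count = 0;
    for (const auto& item : snap) {
        (void)item;                   // only the element count is needed
        ++count;
    }
    return count;
}
```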
Bio: Gal is a postdoctoral researcher in Professor Rachid Guerraoui's Distributed Computing Lab at EPFL. She recently obtained her PhD from the Technion - Israel Institute of Technology, advised by Professor Erez Petrank. She received her MSc and BSc in Computer Science from the Technion as well, both summa cum laude. She was a research intern at the VMware Research Group and a software engineer at Microsoft. Gal's research focuses on algorithms for concurrent data structures and distributed computing in general. She is the recipient of several awards, including the best student paper award at DISC'23, the Schmidt Postdoctoral Award for Women in Mathematical and Computing Sciences, and the Jacobs Fellowship for Outstanding Graduate Students.
Yuanhao Wei
Talk Title: General Techniques for Efficient Concurrent Data Structures
Abstract: Concurrent programming is becoming increasingly important as systems scale up by increasing the number of processors rather than the speed of a single processor. However, concurrent programming can be very difficult and error-prone. This talk will cover several simple, general, and efficient techniques for designing concurrent algorithms and data structures. These include techniques for achieving lock-freedom, taking consistent snapshots of shared memory, and managing memory safely. With the right abstractions and algorithms, these general techniques can be very efficient in practice, achieving performance competitive with the fastest hand-optimized data structures while also providing strong theoretical guarantees.
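As one classic textbook ingredient behind "consistent snapshots of shared memory", here is a standard double-collect sketch over version-stamped single-writer slots; it illustrates the general idea only and is not the speaker's specific technique.

```cpp
// Double-collect snapshot: each write bumps a per-slot version, and a scan
// that sees two identical version-stamped collects is a valid snapshot of
// all slots. Lock-free: if the scan keeps retrying, some writer made progress.
#include <array>
#include <atomic>
#include <cstdint>
#include <vector>

constexpr int N = 8;                            // number of shared slots
std::array<std::atomic<uint64_t>, N> slots{};   // high 32 bits: version, low 32 bits: value

// Assumes slot i is written only by its owning thread (single-writer slots).
void write_slot(int i, uint32_t value) {
    uint64_t old = slots[i].load();
    uint64_t version = (old >> 32) + 1;
    slots[i].store((version << 32) | value);
}

std::vector<uint64_t> snapshot() {
    std::vector<uint64_t> first(N), second(N);
    for (int i = 0; i < N; ++i) first[i] = slots[i].load();
    while (true) {
        for (int i = 0; i < N; ++i) second[i] = slots[i].load();
        if (first == second) return first;      // no intervening writes: consistent view
        first.swap(second);                     // keep the most recent collect and retry
    }
}
```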
Bio: Yuanhao Wei is a Postdoctoral Associate at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). He will be joining the University of British Columbia as an assistant professor next January. He obtained his PhD from Carnegie Mellon University, advised by Professor Guy Blelloch. His research interests are in developing efficient algorithms and easy-to-use abstractions for multi-core machines. His work in this area has been recognized by a Principles of Distributed Computing Doctoral Dissertation Award and a best paper award from the ACM Symposium on Principles and Practice of Parallel Programming (PPoPP'22).