Friday, October 19th, 2018 in New Orleans, Louisiana, US
Co-located with DISC 2018
Workshop Address
Hampton Inn & Suites New Orleans-Convention Center
1201 Convention Center Blvd., New Orleans Louisiana, 70130
We are excited to announce the 2nd Workshop on Storage, Control, Networking in Dynamic Systems (SCNDS). The workshop is interdisciplinary, spanning storage systems, control systems, distributed computing, and networking, and is intended as a forum where people with different backgrounds can learn from each other. We want to attract both industry-relevant papers and papers from academic researchers working on the foundations of dynamic systems.
"Dynamic system" is defined broadly. Areas of interest include, but are not strictly limited to, the following:
Storage in dynamic networks, e.g., MANET, VANET
Control in dynamic systems
Complex systems
Network science
Erasure-code based storage systems
Consistency and erasure-codes
Consistency in dynamic networks
Web caching
Storage in data-centers
Big Data in clouds for IoT
Data storage in Fog/Edge Computing
Archival storage systems
Cryptocurrency systems
Cloud storage
Gossip algorithms
Design, implementation, evaluation, and deployment of storage systems
Performance evaluation of networked storage systems
Fault-tolerant control in dynamic networks
Fault-tolerant storage in dynamic networks
New directions on networking techniques
Title: Putting Distributed Ledgers Together
Speaker: Antonio Fernández Anta, IMDEA
Abstract:
Many blockchain systems have been proposed in recent years, providing different functionality. However, little work has been done to interconnect these blockchains into a single system. In this talk, we start with our definition of individual distributed ledger objects and extend it to systems formed by multiple interconnected and interacting ledgers. We will identify the challenges and benefits that these complex systems can offer.
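At its simplest, the ledger object mentioned in the abstract can be pictured as an append-only sequence supporting two operations. The sketch below is our own illustration of that sequential behavior, not the formal definition from the talk:

```python
class Ledger:
    """A minimal sequential ledger object: an append-only list of records.

    Illustrative only -- the talk concerns *distributed* implementations
    of this kind of sequential specification, and their composition.
    """

    def __init__(self):
        self._records = []

    def append(self, record):
        # Records are only ever added at the end; nothing is modified or removed.
        self._records.append(record)

    def get(self):
        # Return the whole sequence appended so far.
        return list(self._records)


ledger = Ledger()
ledger.append({"tx": "A->B", "amount": 5})
ledger.append({"tx": "B->C", "amount": 2})
assert ledger.get()[0]["tx"] == "A->B"
```

The interesting questions, per the abstract, begin once several such ledgers must interconnect and interact consistently.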
Bio:
Dr. Antonio Fernández Anta is a Research Professor at IMDEA Networks. Previously he was a Full Professor at the Universidad Rey Juan Carlos (URJC) and was on the Faculty of the Universidad Politécnica de Madrid (UPM), where he received an award for his research productivity. He was a postdoc at MIT from 1995 to 1997, and spent sabbatical years at Bell Labs Murray Hill and MIT Media Lab. He has more than 25 years of research experience and more than 200 scientific publications. He was the Chair of the Steering Committee of DISC and has served on the TPC of numerous conferences and workshops. He received his M.Sc. and Ph.D. from the University of SW Louisiana in 1992 and 1994, respectively. He completed his undergraduate studies at the UPM, having received awards at the university and national level for his academic performance. He is a Senior Member of ACM and IEEE.
Title: Erasure Coding in Consistent Object Stores: Advantages, Challenges, Theory and Practice
Speaker: Kishori M. Konwar, MIT
Abstract:
Distributed storage systems are at the heart of storing large volumes of data in almost every sphere of life. Replication-based storage algorithms provide fault tolerance and consistency, and replication is the most commonly used method for achieving data durability in large-scale storage systems. Erasure codes, on the other hand, are increasingly being used to achieve fault tolerance, data durability, and scalability. Recently, a number of key-value and object store implementations that use erasure codes have become available; however, combining erasure codes and consistency mechanisms is challenging, and in most of these implementations the underlying algorithms are not clear. This tutorial will give an overview of the various implementations and the types of consistency they guarantee. It will also introduce concepts in consistency and erasure coding, along with algorithms and existing practical systems such as Giza and OpenStack Swift. Next, we will present some algorithms and point out the trade-offs they offer when compared with one another. We will also discuss certain challenges in designing algorithms that combine erasure codes and strong consistency. Finally, we will share our experience implementing a key-value store system and its performance results. The talk will be accessible to anyone with a basic knowledge of algorithms or programming.
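To give a flavor of the storage savings behind erasure-coded stores, here is a toy single-parity code in Python (our own illustration; the systems covered in the tutorial use stronger codes, such as Reed-Solomon, that tolerate multiple losses):

```python
def encode(blocks):
    """(k+1, k) single-parity erasure code: k data blocks plus one XOR parity.

    Toy example: it tolerates the loss of any single block while storing
    only k+1 blocks, versus 3k blocks for triple replication.
    """
    parity = bytes(len(blocks[0]))
    for b in blocks:
        parity = bytes(x ^ y for x, y in zip(parity, b))
    return blocks + [parity]


def recover(coded, lost_index):
    # XOR of all surviving blocks reconstructs the missing one.
    result = bytes(len(coded[0]))
    for i, b in enumerate(coded):
        if i != lost_index:
            result = bytes(x ^ y for x, y in zip(result, b))
    return result


data = [b"ab", b"cd", b"ef"]       # k = 3 data blocks
coded = encode(data)               # 4 blocks stored: ~1.33x overhead
assert recover(coded, 1) == b"cd"  # any one lost block is recoverable
```

The hard part the tutorial addresses is not the coding itself but keeping such coded fragments consistent under concurrent reads and writes.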
Title: Distributed Fault-Tolerant Optimization
Speaker: Nitin Vaidya, Georgetown University
Abstract:
Consider a network of agents 1 to N, wherein each agent i has a local convex cost function Fi(x). For instance, the agents may be robots, and each robot's local cost function may represent its cost of moving to location x for a rendezvous with the other robots. The objective is to identify the argument x that minimizes the total cost over all the agents. Similar problems arise in the context of machine learning as well, where the goal is to determine the optima of a sum of many cost functions. This is a special case of convex optimization that has received significant attention over the past three decades. In particular, many distributed algorithms have been developed to determine the optima without requiring any single agent to be aware of all the cost functions.
Our work addresses a fault-tolerant version of the problem, allowing some of the agents to crash during the computation or to suffer Byzantine failures (which can result in malicious behavior). For brevity, this talk will focus on the case when up to f agents may suffer Byzantine failures. The ideal goal then is to determine the optima of the sum of the local cost functions of just the fault-free agents. It is easy to show that this problem is not solvable unless the local cost functions exhibit some form of redundancy. The talk will consider the irredundant case, and show that despite the failure of f agents, it is possible to determine the optima of a global cost function obtained as a weighted average of the cost functions of just the fault-free agents, wherein all-but-f fault-free agents are assigned non-trivial weights. The talk will provide the intuition behind our results and discuss some algorithms.
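As a toy illustration of this setting (ours, not an algorithm from the talk), consider scalar agents with quadratic costs Fi(x) = (x - ai)^2 and a server that aggregates gradients with a trimmed mean, discarding extreme reports:

```python
def trimmed_mean(values, f):
    """Drop the f smallest and f largest values, then average the rest.

    A simple robust aggregator: any single wildly bogus report among
    2f+1 or more values is guaranteed to be discarded.
    """
    s = sorted(values)
    kept = s[f:len(s) - f]
    return sum(kept) / len(kept)


def minimize(targets, byzantine, f, steps=200, lr=0.1):
    # Agent i's local cost is F_i(x) = (x - targets[i])**2, so its gradient
    # at x is 2 * (x - targets[i]).  Byzantine agents report arbitrary
    # garbage; the trimmed mean filters the extremes out.
    x = 0.0
    for _ in range(steps):
        grads = [2 * (x - t) for t in targets] + byzantine
        x -= lr * trimmed_mean(grads, f)
    return x


# 5 fault-free agents, 1 Byzantine agent reporting a huge bogus gradient.
x_opt = minimize([1, 2, 3, 4, 5], byzantine=[1e9], f=1)
assert abs(x_opt - 2.5) < 1e-3
```

Note that filtering also discounts the most extreme fault-free gradient, so the answer here is the optimum of an average over only four of the five fault-free costs, echoing the abstract's "all-but-f fault-free agents are assigned non-trivial weights."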
Bio:
Nitin Vaidya is the Robert L. McDevitt, K.S.G., K.C.H.S. and Catherine H. McDevitt L.C.H.S. Chair and Department Chair of Computer Science at Georgetown University. He received his Ph.D. from the University of Massachusetts at Amherst. He previously served as a Professor of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign. He has co-authored papers that received awards at several conferences, including 2015 SSS, 2007 ACM MobiHoc, and 1998 ACM MobiCom. He is a Fellow of the IEEE. He has served as the Chair of the Steering Committee for the ACM PODC conference, as the Editor-in-Chief of the IEEE Transactions on Mobile Computing, and as the Editor-in-Chief of the ACM SIGMOBILE publication MC2R.
Title: Resilient Distributed State Estimation of Dynamical Systems
Speaker: Shreyas Sundaram, Purdue
Abstract:
Applications in environmental monitoring, surveillance, and industrial control typically require a network of (mobile) agents to collectively estimate the state of a dynamical process evolving over a region. However, these networks of mobile agents also introduce various challenges for distributed state estimation, including intermittent observations of the dynamical process, loss of communication links due to mobility and packet drops, and the potential for malicious or faulty behavior by some of the agents. In this work, we develop a resilient, fully-distributed, and provably correct state estimation algorithm that accounts for each of the above considerations, and in turn, offers a general framework for reasoning about distributed state estimation problems in dynamic and adversarial environments. Our approach revolves around a resilient filtering technique for dealing with worst-case adversarial behavior, and is capable of tolerating a potentially large number of compromised nodes. We identify conditions on the dynamical system, the network topology (induced by the communication patterns between the agents), and the attack/failure models that guarantee applicability of our proposed techniques. In particular, our scheme relies on a novel topological property termed "strong-robustness" that captures both measurement and communication redundancy; by drawing connections to bootstrap percolation theory, we show that this property can be checked in polynomial time.
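The filtering flavor underlying such resilient schemes can be illustrated with a classic W-MSR-style consensus step (a simplified, static stand-in of our own; the talk's estimator additionally tracks a dynamical process and exploits the "strong-robustness" property):

```python
def wmsr_step(values, neighbors, f):
    """One W-MSR-style resilient consensus step on scalar values.

    Each node discards up to f neighbor values above its own and up to f
    below, then averages the survivors with its own value.  Illustrative
    building block only, under the stated f-local fault assumption.
    """
    new = list(values)
    for i, nbrs in enumerate(neighbors):
        below = sorted(values[j] for j in nbrs if values[j] < values[i])
        above = sorted(values[j] for j in nbrs if values[j] > values[i])
        equal = [values[j] for j in nbrs if values[j] == values[i]]
        kept = below[f:] + equal + above[:max(len(above) - f, 0)] + [values[i]]
        new[i] = sum(kept) / len(kept)
    return new


# Complete graph on 5 nodes; node 4 is Byzantine and always reports 100.
neighbors = [[j for j in range(5) if j != i] for i in range(4)]  # fault-free nodes
vals = [1.0, 2.0, 3.0, 4.0, 100.0]
for _ in range(50):
    vals = wmsr_step(vals, neighbors, f=1)[:4] + [100.0]  # node 4 keeps lying

assert max(vals[:4]) - min(vals[:4]) < 1e-6  # fault-free nodes agree
assert 1.0 <= vals[0] <= 4.0                 # within the fault-free range
```

The fault-free nodes reach agreement on a value inside their initial range despite the persistent adversary, without ever identifying which node is faulty.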
Bio:
Shreyas Sundaram is an Associate Professor in the School of Electrical and Computer Engineering at Purdue University. He received his Ph.D. in Electrical Engineering from the University of Illinois at Urbana-Champaign in 2009, and was a Postdoctoral Researcher at the University of Pennsylvania from 2009 to 2010. He was an Assistant Professor at the University of Waterloo from 2010 to 2014. His research interests include network science, large-scale dynamical systems, fault-tolerant and secure control, game theory, linear system and estimation theory, and the application of algebraic graph theory to system analysis. Dr. Sundaram is a recipient of the NSF CAREER award, and an Air Force Research Lab Summer Faculty Fellowship. At Purdue, he received the Hesselberth Award for Teaching Excellence, and the Ruth and Joel Spira Outstanding Teacher Award. At Waterloo, he received the Department of Electrical and Computer Engineering Research Award, the University of Waterloo Outstanding Performance Award, and the Faculty of Engineering Distinguished Performance Award. He was a finalist for the Best Student Paper Award at the 2007 and 2008 American Control Conferences.
Title: Securing Distributed Machine Learning in High Dimensions
Speaker: Lili Su, MIT
Abstract:
We consider unreliable distributed learning systems in which the training data is kept confidential by possibly unreliable external workers, and the learner has to interact closely with those workers to have a model trained. In particular, we assume that there exists a system adversary that can adaptively compromise some external workers; the compromised workers deviate from their designed local specifications and behave maliciously in an arbitrary manner.
We assume that in each communication round, up to $q$ of the $m$ workers suffer Byzantine faults. Each worker keeps a local sample of size $n$, and the total sample size is $N=nm$. We propose a secured variant of the gradient descent method that can tolerate up to a constant fraction of Byzantine workers. Moreover, we show that the statistical estimation error of the iterates converges in $O(\log N)$ rounds to $O(\sqrt{q/N} + \sqrt{d/N})$. As long as $q=O(d)$, our proposed algorithm achieves the optimal error rate $O(\sqrt{d/N})$. The core of our method is to robustly aggregate the gradients computed by the workers, based on the filtering procedure proposed by Steinhardt et al. On the technical front, deviating from the existing literature on robustly estimating a finite-dimensional mean vector, we establish a uniform concentration of the sample covariance matrix of gradients, and show that the aggregated gradient, as a function of the model parameter, converges uniformly to the true gradient function. To obtain a near-optimal uniform concentration bound, we develop a new matrix concentration inequality, which may be of independent interest.
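The filtering procedure of Steinhardt et al. used in the abstract is considerably more sophisticated, but the basic idea of robust gradient aggregation can be conveyed with a coordinate-wise median (our simplified stand-in, not the talk's method):

```python
import statistics


def robust_aggregate(gradients):
    """Coordinate-wise median of worker gradients.

    A simple robust aggregator: as long as a strict majority of workers
    is honest, each coordinate of the aggregate lies within the range of
    the honest reports, no matter what the Byzantine workers send.
    """
    d = len(gradients[0])
    return [statistics.median(g[k] for g in gradients) for k in range(d)]


honest = [[0.9, -1.1], [1.0, -1.0], [1.1, -0.9]]  # noisy honest gradients
byzantine = [[1e6, -1e6]]                          # one malicious worker
agg = robust_aggregate(honest + byzantine)
assert 0.9 <= agg[0] <= 1.1 and -1.1 <= agg[1] <= -0.9
```

Coordinate-wise schemes like this lose a dimension factor in high dimensions, which is precisely the regime where the filtering-based aggregation of the abstract pays off.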
Bio:
Lili Su is a Postdoc in EECS (CSAIL) at MIT, hosted by Prof. Nancy Lynch. She received her Ph.D. from UIUC in 2017, supervised by Prof. Nitin H. Vaidya. Before that, her master's work, from Aug. 2012 to May 2014, was on ordinal data processing in the CSL Communication Group. She received her B.S. from Nankai University in 2011. Her research intersects distributed systems, brain computing, security, optimization, and learning. She was one of the three nominees for the Best Student Paper Award at the 2016 International Symposium on Distributed Computing, and received the Best Student Paper Award at the 2015 International Symposium on Stabilization, Safety, and Security of Distributed Systems. She also received the Sundaram Seshu International Student Fellowship for the academic year 2016 to 2017, conferred by UIUC. She was/is on the Program Committee for ICDCS 2018, SCNDS 2018, and SCNDS 2017.
All attendees must register through the registration website of DISC 2018 at http://www.disc-conference.org/wp/disc2018/registration/. There are several registration packages, so make sure that you choose one that covers (possibly among other activities) the workshops.
9:00~10:00 -- Keynote by Antonio Fernández Anta: Putting Distributed Ledgers Together
10:00~10:15 -- Coffee Break
10:15~11:15 -- Invited Talk by Nitin H. Vaidya: Distributed Fault-Tolerant Optimization
11:15~11:35 -- Proof of Work Without All the Work
11:35~11:55 -- TREAS-OPT: A Storage-Efficient Two-round Erasure-coded Algorithm for Atomic Storage
11:55~12:05 -- Revisiting Lightweight Approximate Distributed Top-k Queries
12:05~1:05 -- Lunch
1:05~2:05 -- Invited Talk by Shreyas Sundaram: Resilient Distributed State Estimation of Dynamical Systems
2:05~3:05 -- Invited Talk by Lili Su: Securing Distributed Machine Learning in High Dimensions
3:05~3:20 -- Coffee Break
3:20~3:40 -- Benchmarking the Performance of ABD Algorithm
3:40~3:50 -- Exploring Schemes for Efficient and Reliable Caching in Flash
3:50~4:00 -- Creating Highly Accurate Map in Bandwidth Constrained Wireless Networks
4:00~5:00 -- Tutorial by Kishori M. Konwar: Erasure Coding in Consistent Object Stores: Advantages, Challenges, Theory and Practice
We invite submissions of brief announcements and short papers describing recent results relevant to the workshop. We especially welcome work describing new insights and/or application studies, even if these are not fully formed. A major goal of the workshop is to explore new directions and approaches; thus, we encourage the submission of ongoing work. Selected contributors will be asked to present and discuss their work at the workshop.
Submissions should be written in English as a PDF file and include a title and author information. A submission is either a brief announcement or a short paper. A brief announcement must not exceed 3 pages (including references), and its title should start with "Brief Announcement:"; a short paper must not exceed 5 pages (excluding references). The material in a brief announcement or short paper may be published later in other conferences or journals.
Submissions must be formatted in accordance with the LIPIcs proceedings guidelines. Appendices are allowed. LIPIcs typesetting instructions can be found here, and the lipics.cls LaTeX style here.
Please use the following EasyChair submission link: https://easychair.org/conferences/?conf=scnds2018
Sep. 17, 2018 - Submission deadline, 23:59 Honolulu time
Sep. 30, 2018 - Decision notification (extended from Sep. 27, 2018)
Oct. 19, 2018 - Workshop
Note: The workshop will have citable proceedings. However, we welcome submissions of extended abstracts describing work that has appeared or is expected to appear in other venues. Please indicate clearly in the submission if the work has already been presented or accepted elsewhere.
Moayad Aloqaily, Gnowit Inc. (Publicity Chair)
Viveck Cadambe, PSU
Diana Gonzalez, RLE, Massachusetts Institute of Technology
Aria Hahn, University of British Columbia
Chung-Wei Lin, National Taiwan University
Kishori Konwar, Massachusetts Institute of Technology (Co-Organizer)
Derya Malak, Northeastern University
Prakash Narayana Moorthy, Massachusetts Institute of Technology
Nicolas Nicolaou, Algolysis Ltd & KIOS Research and Innovation Center of Excellence, U. of Cyprus
Muntasir Raihan Rahman, Nokia Bell Labs
Hsin-Hao Su, Boston College
Lili Su, CSAIL, Massachusetts Institute of Technology
Lewis Tseng, Boston College (Co-Organizer)