NSF Workshop on Future Directions for Parallel and Distributed Computing (SPX 2019)
Abstractions
Parallel and distributed software and hardware have become increasingly sophisticated. To bridge software and hardware, developers rely on a multi-layer software stack. This stack has accumulated layers upon layers of abstractions (e.g., SDKs, library interfaces, boundaries between languages) that limit the programming system's understanding of applications and hence its ability to optimize their execution on the hardware platform at hand.
Discussion Questions:
- How should we rethink the boundaries between layers of the software/hardware stack, especially in light of the opportunities provided by higher-level parallel languages?
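To make the cost of opaque layer boundaries concrete, here is a toy sketch (the functions are our own invention, not from the report): a computation expressed through two independent library calls must materialize an intermediate result, while an optimizer that could see across the boundary would fuse the work into a single pass.

```python
# Sketch (hypothetical functions): how an opaque library boundary hides
# optimization opportunities. Each call materializes an intermediate list,
# so no single layer can fuse the two loops into one.

def lib_scale(xs, a):          # imagine this lives behind an SDK boundary
    return [a * x for x in xs]

def lib_shift(xs, b):          # a second, independently compiled routine
    return [x + b for x in xs]

def layered(xs):
    # Two passes over the data, one intermediate allocation.
    return lib_shift(lib_scale(xs, 2.0), 1.0)

def fused(xs):
    # What a whole-stack optimizer could emit if it saw across the boundary:
    # a single pass, no intermediate.
    return [2.0 * x + 1.0 for x in xs]

assert layered([1.0, 2.0]) == fused([1.0, 2.0])  # same result, different cost
```

The two versions compute the same values; only a system that understands the application across the library boundary can choose the cheaper one.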
There is an opportunity to design new computational models, algorithms, and corresponding hardware systems that, when combined, shield users from the increasingly complicated details of parallel and distributed systems, algorithms, and programming. To what extent can we shield users (e.g., computational scientists) from those intricacies by enabling them to write high-level code that provably achieves good performance on real hardware?
Bridging the gap from high-level specifications to practical, low-level performance will require new hardware that is amenable to algorithmic modeling, general frameworks for designing algorithms for such model(s), and translators that provably map algorithms into implementations that run with performance guarantees on the hardware in question.
Discussion Questions:
- What are the fundamental computational paradigms that we should be building machines to implement?
- How will we design algorithms on top of those computational models?
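One classical example of "hardware amenable to algorithmic modeling" is a cost model that an algorithm can be analyzed against. The toy sketch below (our own construction, not the report's) runs a divide-and-conquer reduction while tracking work (total operations) and span (critical-path length), the quantities a translator would need in order to provide performance guarantees:

```python
# Toy illustration: a divide-and-conquer reduction written against an
# abstract work/span cost model. Alongside the value, we track "work"
# (total combine operations) and "span" (critical-path length).

def reduce_with_costs(xs):
    if len(xs) == 1:
        return xs[0], 0, 0                      # value, work, span
    mid = len(xs) // 2
    lv, lw, ls = reduce_with_costs(xs[:mid])    # conceptually runs in
    rv, rw, rs = reduce_with_costs(xs[mid:])    # parallel with the left half
    # One combine step: work adds up, span is the slower branch plus one.
    return lv + rv, lw + rw + 1, max(ls, rs) + 1

value, work, span = reduce_with_costs(list(range(8)))
# For n = 8: work = n - 1 = 7 combines, span = log2(n) = 3 levels.
```

Under such a model, a guarantee like "runs in O(work/p + span) time on p processors" can be stated and, in principle, preserved by a translator down to hardware.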
Heterogeneity
The computing landscape has evolved to include a wide variety of modalities and hardware platforms. These modalities include computing in the cloud and at the edge (e.g., mobile, wearable). The hardware platforms include not only the dominant platforms in each of these modalities, but also the emerging landscape of reconfigurable computing fabrics and hardware accelerators.
Discussion Questions:
- How do we develop flexible abstractions that support tools to automatically map computations to different modalities and hardware platforms with acceptable (though perhaps not optimal) performance with respect to our objectives?
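A minimal sketch of such a flexible abstraction (the names and registry are ours, not a real API): one portable interface, multiple per-platform backends, and a mapper that picks an implementation so that callers never change.

```python
# Minimal sketch: one portable operation, multiple backends,
# selected per platform behind a single interface.

BACKENDS = {}

def backend(name):
    def register(fn):
        BACKENDS[name] = fn
        return fn
    return register

@backend("cloud")
def dot_cloud(xs, ys):
    # Stand-in for an offloaded, throughput-optimized implementation.
    return sum(x * y for x, y in zip(xs, ys))

@backend("edge")
def dot_edge(xs, ys):
    # Stand-in for a memory-frugal implementation on a mobile/wearable device.
    acc = 0.0
    for x, y in zip(xs, ys):
        acc += x * y
    return acc

def dot(xs, ys, platform="cloud"):
    # The abstraction boundary: acceptable performance everywhere,
    # optimal performance nowhere guaranteed.
    return BACKENDS[platform](xs, ys)

assert dot([1, 2], [3, 4], "cloud") == dot([1, 2], [3, 4], "edge") == 11
```

The open question is how much performance such platform-agnostic interfaces must give up relative to implementations tuned for each platform.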
Domain-Specific Design
Contemporary wisdom suggests that new opportunities will come from matching the full system (algorithms, software, and hardware) to the computational abstractions of the application at hand. The goal of domain-specific design (DSD) is to develop complete algorithms-software-hardware solutions that maximize our objectives for a given domain. This stands in contrast to the Heterogeneity topic, where the goal is to design flexible abstractions that map across platforms; here we instead want domain-specific and, potentially, platform-specific abstractions.
Discussion Questions:
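To illustrate the DSD idea above with a toy of our own (not from the report): a domain-specific abstraction captures a computation as data, so the same specification could in principle be lowered to a CPU loop, an FPGA pipeline, or a custom accelerator. Here a 1-D stencil is specified by its coefficients and "lowered" only to a Python interpreter.

```python
# Hedged sketch of a domain-specific abstraction: a 3-point stencil is
# captured as data (its coefficients), leaving a DSD toolchain free to
# lower the same specification to whatever hardware the domain justifies.

def make_stencil(coeffs):
    """Build an interpreter for a 3-point stencil specification."""
    assert len(coeffs) == 3
    def apply(grid):
        out = list(grid)                     # boundary cells copied unchanged
        for i in range(1, len(grid) - 1):
            out[i] = (coeffs[0] * grid[i - 1]
                      + coeffs[1] * grid[i]
                      + coeffs[2] * grid[i + 1])
        return out
    return apply

blur = make_stencil([0.25, 0.5, 0.25])       # the domain-level "program"
print(blur([0.0, 0.0, 4.0, 0.0, 0.0]))       # → [0.0, 1.0, 2.0, 1.0, 0.0]
```

Because the abstraction is restricted to one domain, every layer below it can exploit structure (fixed neighborhood, regular access pattern) that a general-purpose stack could not assume.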
Decentralization
Many modern distributed applications derive their utility from the shared participation of many users. For example, social networks, large-scale deep learning, and shared ledgers are valuable because they align and aggregate the data and computation of many users into a productive good. At the same time, users are concerned about the security and privacy of their data in the standard regime of centralized computation. This creates demand for decentralized versions of these applications, in which users retain control of their data and sensitive computations are performed locally. Federated machine learning, confidential computing, edge/mobile computing, and blockchains are a few examples.
Discussion Questions:
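The federated machine learning example above can be sketched in a few lines (simplified far beyond any real system; the model and data are our own toy): each user's raw data stays on their device, and only model parameters travel to an aggregator, which averages them.

```python
# Sketch of the federated-averaging idea: local gradient steps on private
# data, server-side averaging of parameters only. We fit a one-parameter
# model y = w * x.

def local_step(w, data, lr=0.1):
    # Runs on the user's device; the raw (x, y) pairs never leave it.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_round(w, clients):
    # The aggregator sees only per-client parameters, which it averages.
    updates = [local_step(w, data) for data in clients]
    return sum(updates) / len(updates)

clients = [[(1.0, 2.0)], [(2.0, 4.0)]]      # both consistent with w = 2
w = 0.0
for _ in range(50):
    w = federated_round(w, clients)
# w converges toward 2.0 without any client revealing its data.
```

The open questions are exactly those of this topic: what privacy the exchanged parameters still leak, and how to coordinate such computations without a trusted center.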