Overview

Rapid innovation in datacenter software and hardware infrastructure is set to radically transform the information technology world. Of particular interest is the recent trend toward “serverless” platforms, or “Cloud 3.0” [1], which has the potential to reinvent both hardware and software architectures. With Cloud 3.0, application developers “just bring code and data” without worrying about the infrastructure: configuring and spinning up VMs, load balancing, orchestration, monitoring, logging, etc. Developers define applications as a set of simple stateless functions, or “lambdas,” that communicate with each other via simple standard API calls or via an external (local or remote) data store. The developer does not need to worry about the boundary between one server and another; that boundary is hidden behind the Cloud 3.0 abstraction. An infrastructure provider (e.g., a datacenter operator) handles deploying and scaling function instances. This model offers many benefits, including vastly improved developer velocity, rapid scale-up and scale-down under load variations, and better fault tolerance (e.g., due to statelessness). Serverless computing also enables cloud providers to offer new pricing models that bill customers for very fine-grained resource usage, in particular sub-second CPU utilization. This means that users can scale their computing footprint up (and then down) by orders of magnitude in a matter of seconds. Several large online service providers have already begun offering platforms for serverless computation.
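To make the programming model concrete, the following is a minimal sketch of what a pair of stateless functions might look like in Python. The handler(event, context) signature follows the common AWS Lambda convention; the key-value store, the event fields, and the placeholder transform are hypothetical stand-ins (not any specific platform's API) so the example is self-contained.

    # Minimal sketch of stateless "lambdas" in the Cloud 3.0 style.
    # All durable state lives in an external data store, so any instance of a
    # function can serve any request and instances can be scaled up or down freely.

    import json

    # Stand-in for an external (local or remote) data store. In a real serverless
    # deployment this would be a managed service; the functions themselves hold no state.
    _STORE: dict[str, bytes] = {}


    def kv_put(key: str, value: bytes) -> None:
        _STORE[key] = value


    def kv_get(key: str) -> bytes:
        return _STORE[key]


    def ingest_handler(event, context):
        """Stateless function: persist the uploaded payload and hand off via the store."""
        object_id = event["object_id"]
        kv_put(f"raw/{object_id}", event["payload"].encode())
        return {"statusCode": 202, "body": json.dumps({"object_id": object_id})}


    def thumbnail_handler(event, context):
        """Stateless function: read input from the store, transform it, write the result back."""
        object_id = event["object_id"]
        raw = kv_get(f"raw/{object_id}")
        thumb = raw[:16]  # placeholder transform standing in for real application logic
        kv_put(f"thumb/{object_id}", thumb)
        return {"statusCode": 200, "body": json.dumps({"object_id": object_id})}


    if __name__ == "__main__":
        # Simulate the platform invoking the two functions in sequence.
        print(ingest_handler({"object_id": "42", "payload": "example image bytes"}, None))
        print(thumbnail_handler({"object_id": "42"}, None))

Because neither function keeps state between invocations, the platform is free to run many copies in parallel and to tear them down the moment load drops, which is what makes sub-second billing and rapid scale-up/scale-down possible.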

The current approach to supporting Cloud 3.0 applications, infrastructure, and networking is to repurpose existing hardware, software, protocols, and algorithms. But in several places, e.g., in networking, existing technologies are a poor fit for Cloud 3.0. For example, at the hardware level, recent research in disaggregated network designs has the potential to further erode the hard boundaries that define servers. Disaggregated networks rely on very low-latency yet high-bandwidth interconnects that can compose individual components such as CPUs, memory, and non-volatile storage into on-demand server platforms. Coupled with function-like abstractions, the combined design can scale to very high levels of parallelism in sub-second intervals, enabling new ways of scaling services in response to user demand. Similarly, rethinking network protocols, abstractions, APIs, and algorithms may lead to substantially improved support for serverless computing.

The goal of this workshop is to bring together researchers in networked systems to develop a new research agenda for “Cloud 3.0”, focused on rethinking, from the ground up, networked systems infrastructure within the datacenter and beyond. “Cloud 3.0” offers a rich set of opportunities for core networking research as well as cross-cutting (“networking + X”) research.

Why a workshop? The industry has just begun its journey toward a highly agile future cloud. The opportunity is ripe for academics to make foundational contributions, collaboratively with industry, to shape the next ~5 years of cloud computing and, especially, cloud networking. Datacenters also provide a fertile playground for disruptive software and hardware designs.

Organizers:

Sponsors: National Science Foundation