DCC 2015 is co-located with ACM SIGMETRICS 2015 and is part of the Federated Computing Research Conference (ACM FCRC 2015). The first edition of DCC was co-located with the International Conference on Utility and Cloud Computing (IEEE/ACM UCC 2013), and the second edition was co-located with ACM SIGCOMM 2014.
Synopsis: Most of the focus in public cloud computing technology over the last 10 years has been on deploying massive, centralized data centers with thousands or hundreds of thousands of servers. These data centers are typically replicated in a few instances on a continent-wide scale, in semi-autonomous zones. This model has proven quite successful in economically scaling cloud services, but it has some drawbacks. Failure of a zone can lead to service dropout for tenants if the tenants do not replicate their services across zones. Some applications may need finer-grained control over network latency than a connection to a large centralized data center can provide, or may benefit from being able to specify location as a parameter in their deployment. Nontechnical issues, such as the availability of real estate, power, and bandwidth for a large mega data center, also enter into consideration.
Another model that may be useful in many cases is to have many micro or even nano data centers, interconnected by medium- to high-bandwidth links, together with the ability to manage these data centers and their interconnecting links as if they were one larger data center. This distributed cloud model is perhaps a better match for private enterprise clouds, which tend to be smaller than the large public mega data centers, and it is also attractive for public clouds run by telco carriers, which have facilities in geographically diverse locations with power, cooling, and bandwidth already available. It is attractive for mobile operators as well, since it provides a platform on which applications that could benefit from locality and a tighter coupling to the access network can be deployed and easily managed. Applications with latency constraints, or with too much data to backhaul to a large mega data center, can benefit from distributed processing. The two models are not mutually exclusive: for instance, a public cloud operator with many large data centers distributed internationally could manage its network of data centers like a distributed cloud. The distinguishing characteristic compared with federated clouds is that the component data centers are more tightly integrated, especially with respect to authentication and authorization, so that computation, storage, and networking resources are managed as tightly as if they were in a single large data center.
Our keynote will be given by John Wilkes (Principal Software Engineer, Google).
A video recording of the keynote is available here.
Cluster management at Google
Cluster management is the set of tools and processes that Google uses to control the computing infrastructure in our data centers and support almost all of our external services. It includes allocating resources to different applications on our fleet of computers, looking after software installations and hardware, monitoring, and many other things. Much of the talk will be about lessons we've learned from the challenges we face, driven by the scale at which we operate, an acute awareness of failures, and the drive to provide ever-better service levels while curbing complexity. We certainly don't have all the answers, but we do have some pretty impressive systems.
John Wilkes has been at Google since 2008, where he is working on cluster management and infrastructure services. Before that, he spent a long time at HP Labs, becoming an HP and ACM Fellow in 2002. He is interested in far too many aspects of distributed systems, but a recurring theme has been technologies that allow systems to manage themselves. In his spare time he continues, stubbornly, trying to learn how to blow glass.
Personal page: http://e-wilkes.com/john
9:15 - 10:30 Session 1: Distribute, Replicate and Lead! (Chair: Rick McGeer)
10:45 - 11:00 Demo session (Chair: Stefan Schmid)
11:00 - 11:20 Coffee break
11:20 - 12:35 Session 2: Dealing with Uncertainty (Chair: Kurt Tutschku)
14:00 - 15:15 Session 3: Economics and Incentives (Chair: Marco Canini)
See the full call for papers here. DCC 2015 accepts high-quality papers related to the distributed cloud, falling in one of the following categories:
Submission Guidelines: Submissions are single-blind and should not exceed 4 pages in length (in ACM format). For an accepted paper, at least one author must attend the workshop. Submissions are handled by EasyChair: submit here.
Publication: Accepted papers will appear in a Special Issue of the ACM Performance Evaluation Review (PER). Authors of accepted papers grant ACM permission to publish them in print and digital formats. PER involves no copyright transfer: authors retain the copyright of their work, with complete freedom to submit it elsewhere.
Registrations should be made directly through the FCRC registration site. The workshop is supported by the EU project BigFoot.