The 3rd Workshop on Distributed Cloud Computing (DCC 2015) is co-located with ACM SIGMETRICS 2015 and is part of the Federated Computing Research Conference (ACM FCRC 2015). The first edition of DCC was co-located with the International Conference on Utility and Cloud Computing (IEEE/ACM UCC 2013), and the second edition was co-located with ACM SIGCOMM 2014.


  • [16 October 2015] Workshop keynote and final proceedings are online.

  • [14 June 2015] Welcome to Portland! The DCC workshop will take place in room B113-B114.

  • [8 June 2015] Workshop pre-proceedings papers are online.

  • [22 May 2015] Program is online.

  • [14 April 2015] Registration information is online. Registrations should be made directly through the FCRC registration site.

  • [20 March 2015] Published keynote description.

  • [25 February 2015] Deadline extended to 13 March (AoE).

  • [20 February 2015] EasyChair submission site open.

  • [29 January 2015] John Wilkes (Google) will be our keynote speaker.

  • [29 January 2015] The workshop date has changed. The workshop will take place on 15 June 2015.

  • [12 January 2015] CfP online.

  • [6 December 2014] Website online.


  • Abstract registration due: 13 March 2015 AoE (extended!)

  • Submission due: 13 March 2015 AoE (extended!)

  • Notification: 10 April 2015

  • Camera Ready: 22 April 2015

  • Workshop: 15 June 2015  (note this has changed since the original CfP)

  • ACM SIGMETRICS: 15-19 June 2015

The Workshop

Goals: The International Workshop on Distributed Cloud Computing (DCC) is interdisciplinary, spanning distributed systems, networking, and cloud computing. It is intended as a forum where people with different backgrounds can learn from each other's fields and expertise. We want to attract both industry-relevant papers and papers from academic researchers working on the foundations of the distributed cloud.

Synopsis: Most of the focus in public cloud computing technology over the last 10 years has been on deploying massive, centralized data centers with thousands or hundreds of thousands of servers. These data centers are typically replicated in a few instances on a continent-wide scale, in semi-autonomous zones. This model has proven quite successful at scaling cloud services economically, but it has some drawbacks. Failure of a zone can lead to service dropout for tenants that do not replicate their services across zones. Some applications may need finer-grained control over network latency than a connection to a large centralized data center provides, or may benefit from being able to specify location as a deployment parameter. Nontechnical issues, such as the availability of real estate, power, and bandwidth for a large mega data center, also enter into consideration.

Another model that may be useful in many cases is to have many micro or even nano data centers, interconnected by medium- to high-bandwidth links, with the ability to manage these data centers and the interconnecting links as if they were one larger data center. This distributed cloud model is perhaps a better match for private enterprise clouds, which tend to be smaller than the large public mega data centers, and it also has attractions for public clouds run by telco carriers, which have facilities in geographically diverse locations with power, cooling, and bandwidth already available. It is attractive for mobile operators as well, since it provides a platform on which applications that benefit from locality and a tighter coupling to the access network can be deployed and easily managed. Applications with latency constraints, or with too much data to backhaul to a large mega data center, can benefit from distributed processing. The two models are not mutually exclusive: for instance, a public cloud operator with many large data centers distributed internationally could manage its network of data centers like a distributed cloud. The distinguishing characteristic relative to federated clouds is that the component data centers are more tightly integrated, especially with respect to authentication and authorization, so that computation, storage, and networking resources are managed as tightly as if they were in a single large data center.


Our keynote will be given by John Wilkes (Principal Software Engineer, Google).

A video recording of the keynote is available here.

Cluster management at Google

Cluster management is the set of tools and processes that Google uses to control the computing infrastructure in our data centers and support almost all of our external services. It includes allocating resources to different applications on our fleet of computers, looking after software installations and hardware, monitoring, and many other things. Much of the talk will be about lessons we've learned from the challenges that we face, driven by the scale at which we operate, an acute awareness of failures, and the drive to provide ever-better service levels while curbing complexity. We certainly don't have all the answers, but we do have some pretty impressive systems.

Speaker biography
John Wilkes

John Wilkes has been at Google since 2008, where he is working on cluster management and infrastructure services. Before that, he spent a long time at HP Labs, becoming an HP and ACM Fellow in 2002.  He is interested in far too many aspects of distributed systems, but a recurring theme has been technologies that allow systems to manage themselves. In his spare time he continues, stubbornly, trying to learn how to blow glass.


Program (in room B113-B114)

9:00 - 9:15   Opening

9:15 - 10:30   Session 1: Distribute, Replicate and Lead! (Chair: Rick McGeer)
10:30 - 10:45   Poster pitches (Chair: Stefan Schmid)
10:45 - 11:00   Demo session (Chair: Stefan Schmid)
  • The Ignite Distributed Collaborative Scientific Visualization System
    Sushil Bhojwani, Matt Hemmings 
    (University of Victoria), Dan Ingalls (CDG, SAP), Jens Lincke (Hasso-Plattner Institute), Robert Krahn (CDG, SAP), David Lary (UT Dallas), Rick McGeer (CDG and US Ignite), Glenn Ricart (US Ignite), Marko Roder (CDG, SAP), Yvonne Coady and Ulrike Stege (University of Victoria)

11:00 - 11:20   Coffee break

11:20 - 12:35   Session 2: Dealing with Uncertainty (Chair: Kurt Tutschku)
12:35 - 14:00   Lunch

14:00 - 15:15 Session 3: Economics and Incentives (Chair: Marco Canini)
15:15 - 16:00   Poster session & Coffee break

16:00 - 17:15   Keynote: Cluster management at Google, John Wilkes (Google) [slides]

Chairs and TPC

  • Marco Canini, Université catholique de Louvain (UCL), Belgium

  • James Kempf, Ericsson Research, Silicon Valley, USA

  • Stefan Schmid, Telekom Innovation Laboratories (T-Labs) & TU Berlin, Germany

TPC Members:
  • Theophilus Benson (Duke University)

  • Annette Bieniusa (TU Kaiserslautern)

  • Ken Birman (Cornell University)

  • Rajesh Bordawekar (IBM Research)

  • Ivona Brandic (TU Wien)

  • Marco Canini (co-Chair) (Université catholique de Louvain)

  • Paolo Costa (Microsoft Research)

  • Johan Eker (Ericsson Research)

  • Erik Elmroth (Umeå University)

  • Guy Even (Tel-Aviv University)

  • Indranil Gupta (University of Illinois at Urbana-Champaign)

  • Oliver Hohlfeld (RWTH Aachen University)

  • Aman Kansal (Microsoft Research)

  • Holger Karl (Paderborn University)

  • James Kempf (co-Chair) (Ericsson Research)

  • Ramin Khalili (Huawei)

  • Nikola Knezevic (IBM Research)

  • Rick McGeer (SAP)

  • Sanjai Narain (Applied Communication Sciences)

  • Jose Rolim (University of Geneva)

  • Stefan Schmid (co-Chair) (TU Berlin & T-Labs)

  • Don Towsley (University of Massachusetts - Amherst)

  • Kurt Tutschku (Blekinge Institute of Technology)

  • Laurent Vanbever (ETH Zurich)

  • Peter Van Roy (Université catholique de Louvain)

  • Marko Vukolic (IBM Research - Zurich)

Submission and Information

Call for Papers: See the full call for papers here. DCC 2015 accepts high-quality papers related to the distributed cloud, falling in one of the following categories:
  • Foundations and principles of distributed cloud computing

  • Experience with and performance evaluation of existing deployments and measurements (public, private, hybrid, federated environments)

  • Optimization and algorithms

  • Virtualization technology and enablers (network virtualization, software-defined networking)

  • Service and resource specification, languages, and formal verification

  • Economics and pricing

In addition to papers discussing practical problems and systems in distributed cloud computing, we also encourage theoretical and analytical submissions.

Submission Guidelines: Submissions are single-blind and should not exceed 4 pages in length (in ACM format). For an accepted paper, at least one author must attend the workshop. Submissions are handled by EasyChair: submit here.

Publication: Accepted papers will appear in a Special Issue of the ACM Performance Evaluation Review (PER). Authors of accepted papers grant ACM permission to publish them in print and digital formats. There are no copyright issues with PER, so authors retain the copyright of their work, with complete freedom to submit it elsewhere.


DCC 2015 will take place in Portland, Oregon, USA.


Registrations should be made directly through the FCRC registration site.


The workshop is supported by the EU project BigFoot.