Submission deadline extended until March 17, 2017
Held in conjunction with ACM HPDC 2017 in Washington DC, June 26-30, 2017.
Computational and Data-Driven Sciences have become the third and fourth pillars of scientific discovery, in addition to experimental and theoretical sciences. Scientific Computing has already begun to change how science is done, enabling scientific breakthroughs through new kinds of experiments that would have been impossible only a decade ago. Today's "Big Data" science is generating datasets that are increasing exponentially in both complexity and volume, making their analysis, archival, and sharing one of the "grand challenges" of the 21st century. Support for data-intensive computing is critical to advancing modern science, as the gap between storage capacity and storage bandwidth has widened more than 10-fold over the last decade, and there is a growing need for advanced techniques to manipulate, visualize, and interpret large datasets. Scientific Computing is the key to solving "grand challenges" in many domains and to providing breakthroughs in new knowledge, and it comes in many shapes and forms: high-performance computing (HPC), which is heavily focused on compute-intensive applications; high-throughput computing (HTC), which focuses on using many computing resources over long periods of time to accomplish its computational tasks; many-task computing (MTC), which aims to bridge the gap between HPC and HTC by focusing on using many resources over short periods of time; and data-intensive computing, which is heavily focused on data distribution, data-parallel execution, and harnessing data locality by scheduling computations close to the data.
The 8th workshop on Scientific Cloud Computing (ScienceCloud) provides the scientific community with a dedicated forum for discussing new research, development, and deployment efforts in running all kinds of scientific computing workloads, services, and applications on cloud computing infrastructures. ScienceCloud focuses on the use of cloud-based technologies to meet new compute-intensive and data-intensive scientific challenges that are not well served by current supercomputers, grids, and HPC clusters. The workshop aims to address questions such as: What architectural changes to current cloud frameworks (hardware, operating systems, networking, and/or programming models) are needed to support science? How can cloud technologies enable and adapt to emerging scientific approaches that rely on remote sensors, streaming data, coupled simulation, and real-time experiments? How are scientists using clouds? Are there scientific HPC/HTC/MTC workloads that are suitable candidates to take advantage of emerging cloud computing resources with high efficiency? What are the gaps in commercial cloud offerings, and how can they be adapted for running or hosting scientific applications and services? What benefits does adopting the cloud model offer over clusters, grids, or supercomputers? What factors are limiting cloud use, and what would make clouds more usable and efficient?
This workshop encourages interaction and cross-pollination between those developing applications, algorithms, software, hardware, and networking, with an emphasis on scientific computing on cloud platforms. We believe the workshop will be an excellent venue to help the community assess the current state of the field, determine future goals, and define architectures and services for future science clouds.