SC'21 SuperCompCloud: 5th Workshop on Interoperability of Supercomputing and Cloud Technologies

Held in conjunction with SC'21, The International Conference for High Performance Computing, Networking, Storage and Analysis, in cooperation with IEEE TCHPC.

Time & Location

Held as part of the SC21 conference on November 19 (Friday) in room 222 and online via the SC21 HUB platform. The event times posted below are in local St. Louis, MO, USA time (Central Standard Time, CST).

Please refer to the SC21 website for up-to-date information. You must register as an SC21 workshop participant in order to attend. However, physical attendance is not required: accepted papers can be presented remotely online.

Workshop Agenda

8.30am - 10.00am Session I

8.30am - 8.45am Welcome

8.45am - 9.15am Invited talk: "Data Integrity in HPC/Cloud-based Artificial Intelligence Research" by Beth Plale (Indiana University)

Abstract: The integrity of data and AI models that use data are critical to the trustworthiness of the outcomes of AI research. Data and AI models could be corrupted by malicious actors on one hand and could be subject to restrictions on their use on the other. Both suggest the need for care. This problem takes on unique proportions when AI research requires large-scale HPC and cloud resources. In this talk I will speak to the issue of data integrity in HPC and cloud-based AI research, touching on current research at IU and within the context of the recently funded NSF AI Institute, ICICLE, Intelligent CI with Computational Learning in the Environment.

9.15am - 9.35am Paper: "Case study of SARS-CoV-2 transmission risk assessment in indoor environments using cloud computing resources" by Kumar Saurabh (Iowa State University)

9.35am - 9.50am Q&A Session I

9.50am - 10.20am Break

10.20am - 12.00pm Session II

10.20am - 10.50am Invited talk: "Towards Integrated Hardware/Software Ecosystems for the Edge-Cloud-HPC Continuum: the Transcontinuum Initiative" by Gabriel Antoniu (Inria)

Abstract: Modern use cases such as autonomous vehicles, digital twins, smart buildings and precision agriculture, greatly increase the complexity of application workflows. They typically combine physics-based simulations, analysis of large data volumes and machine learning and require a hybrid execution infrastructure: edge devices create streams of input data, which are processed by data analytics and machine learning applications in the Cloud, and simulations on large, specialised HPC systems provide insights into and prediction of future system state. All of these steps pose different requirements for the best suited execution platforms, and they need to be connected in an efficient and secure way. This assembly is called the Computing Continuum (CC). It raises challenges at multiple levels: at the application level, innovative algorithms are needed to bridge simulations, machine learning and data-driven analytics; at the middleware level, adequate tools must enable efficient deployment, scheduling and orchestration of the workflow components across the whole distributed infrastructure; and, finally, a capable resource management system must allocate a suitable set of components of the infrastructure to run the application workflow, preferably in a dynamic and adaptive way, taking into account the specific capabilities of each component of the underlying heterogeneous infrastructure. This talk introduces TCI - the Transcontinuum Initiative - a European multidisciplinary collaborative action aiming to identify the related gaps for both hardware and software infrastructures to build CC use cases, with the ultimate goal of accelerating scientific discovery, improving timeliness, quality and sustainability of engineering artefacts, and supporting decisions in complex and potentially urgent situations.

10.50am - 11.10am Paper: "Multi-tenancy Management and Zero Downtime Upgrades using Cray-HPE Shasta Supercomputers" by Sadaf Alam (CSCS)

11.10am - 11.20am Lightning talk: "Best Practices for HPC Applications on Public Cloud Platforms" by Seetharami Seelam (IBM)

11.20am - 11.50am Q&A Session II & Closing

Workshop Abstract and Topics

Exascale computing initiatives are expected to enable breakthroughs for multiple scientific disciplines. Increasingly, these systems may utilize cloud technologies, enabling complex and distributed workflows that can improve not only scientific productivity but also the accessibility of resources to a wide range of communities. Such an integrated and seamlessly orchestrated system of supercomputing and cloud technologies is indispensable for experimental facilities that have been experiencing unprecedented data growth rates. While a subset of high performance computing (HPC) services is available within public cloud environments, petascale and beyond data and computing capabilities are largely provisioned within HPC data centres using traditional, bare-metal provisioning services to ensure performance, scaling and cost efficiencies. At the same time, the on-demand and interactive provisioning of services that is commonplace in cloud environments remains elusive for leading supercomputing ecosystems.

This workshop aims to bring together experts and practitioners from academia, national laboratories, and industry to discuss technologies, use cases and best practices in order to set a vision and direction for leveraging high performance, extreme-scale computing and on-demand cloud ecosystems. Topics of interest include tools and technologies that enable scientists to adapt scientific applications to cloud interfaces; interoperability of HPC and cloud resource management and scheduling systems; cloud and HPC storage convergence to allow a high degree of flexibility for users and community platform developers; continuous integration/deployment approaches; reproducibility of scientific workflows in distributed environments; and best practices for enabling the X-as-a-Service model at scale while maintaining a range of security constraints.

This workshop will cover topics related to the interoperability of supercomputing and cloud computing, networking and storage technologies that are being leveraged by use cases and research infrastructure providers, with the goal of improving the productivity and reproducibility of extreme-scale scientific workflows:

    • Virtualization for HPC, e.g. virtual machines and containers

    • Storage systems for HPC and cloud technologies

    • On-demand and interactive provisioning with performance, scaling and cost efficiencies

    • Resource management and scheduling systems for HPC and cloud technologies

    • Software defined infrastructure for high-end computing, storage and networking

    • Application environment, integration and deployment technologies

    • Secure, high-speed networking for integrated HPC and cloud ecosystems

    • Extreme data and compute workflows and use cases

    • Research infrastructure deployment use cases

    • Resiliency and reproducibility of complex and distributed workflows

    • Isolation and security within shared HPC environments

    • X-as-a-Service technologies with performance and scalability

    • Workflow orchestration using public cloud and HPC data centre resources

    • Authentication, authorization and accounting interoperability for HPC and cloud ecosystems

    • Workforce development for integrated HPC and cloud environments

Important Dates

    • Technical Paper/Extended Abstract Submission Deadline: September 3, 2021 (extended from August 6, 2021)

    • Author Notification: October 7, 2021 (extended from September 13, 2021)

    • Camera-ready Deadline: October 14, 2021 (hard deadline)

    • Pre-recorded Video Deadline: October 20, 2021

    • Workshop Date: Friday, 19 November 2021

Full Paper and Extended Abstract Submission

Submissions will be done through the SC submission site (powered by Linklings): https://submissions.supercomputing.org

All submissions must be in English and should fit within 6 to 8 pages using the IEEE conference format (double column, 10pt font). Two-page extended abstracts are also allowed. If an extended abstract is accepted, the full paper must be submitted by October 4. Submissions must be made as a single PDF file formatted for 8.5" x 11" (U.S. Letter) that includes all figures and references. Papers should not be submitted in parallel to any other conference or journal. For formatting templates and details, see: https://www.ieee.org/conferences/publishing/templates.html

The submitted papers will go through a peer-review process and will be evaluated according to four main criteria: the relevance of the work, its technical soundness, its originality/novelty and the quality of the presentation. The workshop proceedings will be published through IEEE TCHPC and included in the IEEE Xplore digital library.

At least one of the authors of each accepted paper must register as a participant of the workshop and present the paper at the workshop, in order to have the paper published in the proceedings. Physical attendance is NOT a requirement. Accepted papers can be presented remotely online.

Workshop Committees

Organizing Committee (supercompcloud@googlegroups.com)

    • Sadaf Alam, Swiss National Supercomputing Center

    • François Tessier, Inria

    • David Y. Hancock, Indiana University

    • Winona G. Snapp-Childs, Indiana University

    • J. Michael Lowe, Indiana University

Program Committee

    • Joshua Bowden, Inria, France

    • Tim Robinson, CSCS, Switzerland

    • Jeremy Fischer, Indiana University, USA

Committee members are currently being solicited for future events. If you would like to serve on the committee, please contact the workshop organizers at supercompcloud@googlegroups.com.