Held in conjunction with SCA'26.
To be announced.
Please refer to the SCA'26 Schedule for the most up-to-date information.
The workshop will be held in a mini-symposium format with invited speakers; the program will be communicated once established.
The call for participation for SCA'26 is open; please consider submitting and engaging with us to help identify diverse speakers.
Exascale computing initiatives are expected to enable breakthroughs across multiple scientific disciplines. Increasingly, these systems may utilize cloud technologies, enabling complex and distributed workflows that can improve not only scientific productivity but also the accessibility of resources to a wide range of communities. Such an integrated and seamlessly orchestrated system of supercomputing and cloud technologies is indispensable for experimental facilities that have been experiencing unprecedented data growth rates. While a subset of high performance computing (HPC) services have been available within public cloud environments, petascale and beyond data and computing capabilities are largely provisioned within HPC data centres using traditional, bare-metal provisioning services to ensure performance, scaling and cost efficiencies. At the same time, the on-demand and interactive provisioning of services that is commonplace in cloud environments remains elusive for leading supercomputing ecosystems.

This workshop aims to bring together a group of experts and practitioners from academia, national laboratories, and industry to discuss technologies, use cases, and best practices in order to set a vision and direction for leveraging high performance, extreme-scale computing and on-demand cloud ecosystems. Topics of interest include tools and technologies that enable scientists to adapt scientific applications to cloud interfaces, interoperability of HPC and cloud resource management and scheduling systems, cloud and HPC storage convergence to allow a high degree of flexibility for users and community platform developers, continuous integration/deployment approaches, reproducibility of scientific workflows in distributed environments, and best practices for enabling the X-as-a-Service model at scale while maintaining a range of security constraints.
This workshop will cover topics related to the interoperability of supercomputing and cloud computing, as well as the networking and storage technologies being leveraged by use cases and research infrastructure providers, with the goal of improving the productivity and reproducibility of extreme-scale scientific workflows:
Virtualization for HPC, container technologies, and multi-tenancy
Storage systems for HPC and cloud technologies (on-demand and interactive)
Resource management and scheduling systems for HPC and cloud technologies
Software-defined infrastructure for high-end computing, storage and networking
Application environment, integration and deployment technologies
Secure, high-speed networking for integrated HPC and cloud ecosystems
Use cases: Extreme data and compute workflows, research infrastructure deployment
Resiliency and reproducibility of complex and distributed workflows
Isolation and security within shared HPC environments
Workflow orchestration using public cloud and HPC data center resources
Authentication, authorization and accounting for HPC and cloud ecosystems
Workforce development for integrated HPC and cloud environments
Submission deadline: Early September 2025
Notification of Acceptance: Late September 2025
Program Published: November 2025
Organizing Committee (supercompcloud@googlegroups.com)
David Y. Hancock, Indiana University
Winona G. Snapp-Childs, Indiana University
François Tessier, Inria
Sadaf Alam, University of Bristol
Maxime Martinasso, Swiss National Supercomputing Centre
Alex Lovell-Troy, Los Alamos National Laboratory
Committee members are currently being solicited for future events. If you would like to join the committee, please contact the workshop organizers at supercompcloud@googlegroups.com.