EE HPC State of the Practice Workshop 2019
Energy Efficient HPC State of the Practice Workshop (EE HPC SOP 2019) August 5th, 2019 - Kyoto, Japan
In conjunction with ICPP 2019 https://www.hpcs.cs.tsukuba.ac.jp/icpp2019/
The facility demands for supercomputing centers (SCs) are characterized by electrical power demands for computing systems that scale to tens of megawatts (MW), with millisecond-scale power fluctuations approaching 10 MW for the largest systems. The demand for primary electrical distribution capabilities at current large-scale facilities can exceed 60 MW, comprising multiple, redundant, and diverse medium-voltage feeders. Despite significant pressure on both Moore's Law and Dennard scaling, the appetite for ever larger systems, and the consequent demand for both agile power and effective cooling for these systems, continues to grow. Computing trends — highly optimized hardware platforms that may leverage accelerators or other non-traditional components, scalable and high-performing applications, and the requirement to manage exponentially larger data sets — are driving facility demands not envisioned just a few years ago.
SC facilities must consider multiple elements, including the cost to extend or upfit existing primary distribution capabilities; the cost and consequence of both trapped and stranded capacity; ever-increasing heat densities of new systems that may render existing cooling mechanisms obsolete or ineffective; increasing mandatory use of liquid cooling for portions of the heat load; and wet weights that exceed the carrying capacities of existing raised-floor systems.
Additionally, the operational costs of these facilities must be balanced against the demand from system owners and users for high availability, high utilization, and low-impact facility maintenance and service. To achieve this balance, many SCs continue to innovate in their operational design practices and technologies. Solutions seek to improve management of both the electrical and mechanical systems and to minimize long-term facility costs through best practices associated with their design.
Some SCs are early adopters and innovators in operational practices and technologies that are geared towards improving energy and power management capabilities. This workshop will explore these operational and technological innovations that span HPC computational systems as well as buildings and building infrastructure.
The purpose of this workshop is to enable publication of practices, policies, procedures, and technologies in formal peer-reviewed papers so that the broader community can benefit from these experiences. These papers are generally descriptive in nature, with hard experiential data typically gathered through surveys, case studies, and research for practice.
Topics of interest for workshop submissions include (but are not limited to):
· Reports on experience gained with grid integration
o impact of large HPC power loads and rapid voltage swings on electrical distribution systems
o demand response and other ‘sustainability’ programs
o negotiations on contracts with electricity service providers
· Use cases, lessons learned and best practices from large-scale, production deployment of:
o integrated operational data collection and analytics
o energy and power aware job scheduling and resource management
o liquid cooling control systems for HPC facilities, systems or both
o standards and open interfaces (e.g., Power API, Redfish, GEOPM, READEX, PowerStack)
· Experiences from extending the L2/L3 power measurement methodology to other benchmarks (beyond HPL)
· Use cases, lessons learned and best practices from:
o energy and power considerations during procurement of HPC systems
o liquid cooling commissioning
o HPC facility preventative maintenance and management practices for RAS-M (reliability, availability, serviceability and maintainability)
o energy and power considerations during facility construction or improvement that supports HPC systems
o measuring and evaluating the value of ITUE (IT power usage effectiveness, analogous to PUE but measured “inside” the system) and TUE (total power usage effectiveness)
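
The relationship between the metrics in the last topic can be made concrete: PUE divides total facility energy by the energy delivered to IT equipment, ITUE divides IT-equipment energy by the energy reaching the compute components themselves, and TUE is their product. A minimal sketch follows; the numeric values are hypothetical, not measurements from any real facility:

```python
def pue(facility_energy: float, it_energy: float) -> float:
    """PUE: total facility energy / energy delivered to IT equipment."""
    return facility_energy / it_energy

def itue(it_energy: float, compute_energy: float) -> float:
    """ITUE: energy into IT equipment / energy reaching compute components
    (i.e., after internal losses such as fans and power supplies)."""
    return it_energy / compute_energy

def tue(facility_energy: float, compute_energy: float) -> float:
    """TUE: total facility energy / compute-component energy.
    Algebraically, TUE = PUE * ITUE."""
    return facility_energy / compute_energy

# Hypothetical energies over one measurement interval (MWh):
facility, it_total, compute = 12.0, 10.0, 8.0
print(pue(facility, it_total))    # 1.2
print(itue(it_total, compute))    # 1.25
print(tue(facility, compute))     # 1.5  (= 1.2 * 1.25)
```

Because TUE factors into PUE × ITUE, a facility-level improvement (lower PUE) and a system-internal improvement (lower ITUE) are captured in one end-to-end figure, which is the motivation for evaluating TUE alongside PUE.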