The workshop session will be organized as presentations and panel discussions with audience participation. The speakers will include active members of the TOP500, HPCG, MLPerf, and TeraSort communities, as well as key personnel from industry, academia, and government, who will draw engineers, scientists, and technologists into conversation. The workshop session is organized around the following themes:
How can we ensure that the next generation of supercomputers is well designed and architected to meet the needs of the community?
How can we design benchmarks that challenge and inspire computational and computer scientists and engineers, as HPL did with the TOP500?
What applications and metrics are most relevant to scientists? How can these metrics be captured for procurement decisions?
More specifically, the invited speakers and panelists will be asked to provide their viewpoints on the following questions:
Can the TOP500 evolve, e.g., to accommodate emerging workloads from data science and artificial intelligence?
How could benchmarks encourage scientific creativity to counter the end of Dennard scaling and the slowing of Moore’s Law in a post-exascale future?
How can benchmarks be inclusive of both supercomputing and cloud-computing needs and practices to stay market-relevant?
Do you see the benchmarks of the future incorporating sustainability and the responsible use of HPC?
Benchmarks have traditionally been artifacts of research. Who should own the benchmarks? The community? Industry? A consortium?
Recent draft RFPs have emphasized workflow creativity. Do existing benchmarks capture “workflow” elements?
Benchmarks play a pivotal role in the cost of innovation in chips, networks, and software, and thus in national competitiveness. How can we as a community drive innovation with benchmarks?
HPL, MLPerf HPC, and MLCommons represent three different leadership styles (academia, industry, and open source). Is there an opportunity to work together? How?
Limited to no cross-leverage between supercomputing and cloud-computing practices contributes to duplicated effort (at times reinventing concepts). How can this gap be bridged with benchmarks?
We will need the simplicity of the TOP500 and the inclusivity and openness of MLPerf/MLCommons. Please share your thoughts on best practices for bringing together the best of these worlds while evolving at the pace of technology.