Speaker: Franck Cappello (Argonne National Laboratory)
Recent advancements have positioned Large Language Models (LLMs) as transformative tools for scientific research, capable of addressing complex tasks that require reasoning, problem-solving, and decision-making. Their exceptional capabilities suggest their potential as scientific research assistants, but also highlight the need for holistic, rigorous, and domain-specific evaluation to assess effectiveness in real-world scientific applications.
This talk describes a multifaceted methodology for Evaluating AI models as scientific Research Assistants (EAIRA) developed at Argonne National Laboratory. This methodology incorporates four primary classes of evaluations: 1) Multiple Choice Questions to assess factual recall; 2) Open Response to evaluate advanced reasoning and problem-solving skills; 3) Lab-Style Experiments involving detailed analysis of capabilities as research assistants in controlled environments; and 4) Field-Style Experiments to capture researcher-LLM interactions at scale in a wide range of scientific domains and applications. These complementary methods enable a comprehensive analysis of LLM strengths and weaknesses with respect to their scientific knowledge, reasoning abilities, and adaptability. Recognizing the rapid pace of LLM advancements, we designed the methodology to evolve and adapt so as to ensure its continued relevance and applicability. This talk describes the methodology's current state. Although developed within a subset of scientific domains, the methodology is designed to generalize to a wide range of other domains.
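As a concrete illustration of the first evaluation class, the sketch below scores a model on multiple-choice questions. It is a minimal, hypothetical sketch, not the EAIRA harness itself: the question format and the `query_model` placeholder are assumptions made for the example.

```python
# Minimal sketch of a multiple-choice evaluation loop (hypothetical;
# illustrative only, not the actual EAIRA tooling).
from dataclasses import dataclass

@dataclass
class MCQItem:
    question: str
    choices: dict[str, str]  # e.g., {"A": "...", "B": "..."}
    answer: str              # ground-truth choice label, e.g., "B"

def query_model(prompt: str) -> str:
    """Placeholder for an LLM call; assumed to return a choice label."""
    raise NotImplementedError("wire this to the model under evaluation")

def evaluate_mcq(items: list[MCQItem]) -> float:
    """Return the fraction of items the model answers correctly."""
    correct = 0
    for item in items:
        options = "\n".join(f"{k}) {v}" for k, v in item.choices.items())
        prompt = (f"{item.question}\n{options}\n"
                  "Answer with the letter of the correct choice only.")
        reply = query_model(prompt).strip().upper()
        # Compare only the first character so 'B)' or 'B.' still parses.
        correct += reply[:1] == item.answer
    return correct / len(items)
```

Automatic accuracy scoring works for this class because the answers are closed-form; the other three classes need open-ended grading or live researcher interaction, which is one reason the methodology treats the four as complementary.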
R&D Lead and Senior Computer Scientist, Argonne National Laboratory
Franck Cappello received his Ph.D. from the University of Paris XI in 1994 and joined CNRS, the French National Center for Scientific Research. In 2003, he joined INRIA, where he held the position of permanent senior researcher. From 2008, as a member of the executive committee of the International Exascale Software Project, he led the roadmap and strategy efforts for projects related to resilience for exascale supercomputers. During ECP (Exascale Computing Project: https://www.exascaleproject.org/), Cappello led the development of the VeloC (checkpointing) and SZ (lossy compression) software. Cappello is now focusing on developing methods and tools to evaluate LLMs as scientific assistants. He is an IEEE Fellow and the recipient of the 2025 Secretary of Energy Honor Award (with the ECP leadership team), the 2024 IEEE CS Charles Babbage Award, the 2024 Euro-Par Achievement Award, the 2022 HPDC Achievement Award, two R&D 100 Awards (2019 and 2021), the 2018 IEEE TCPP Outstanding Service Award, and the 2021 IEEE Transactions on Computers Award for Editorial Service and Excellence.
Speaker: Katie Klymko (Lawrence Berkeley National Laboratory)
The National Energy Research Scientific Computing Center (NERSC) is the mission high performance computing facility for the U.S. Department of Energy's Office of Science. In this talk, I will provide an overview of current initiatives at NERSC designed to equip both the center and its user community for the emerging landscape of quantum computing. I will highlight three strategic focus areas:
Quantum Computing Access at NERSC: This initiative provides access to classical supercomputing resources and select quantum hardware to enable cutting-edge research at the intersection of high-performance computing (HPC), quantum information science, and quantum simulation.
NERSC's Collaboration with QuEra Computing: Our ongoing R&D partnership explores the capabilities of neutral atom quantum hardware, aiming to accelerate breakthroughs in targeted scientific applications.
NERSC's Quantum Benchmarking Efforts: We are developing robust tools and methodologies to assess the performance, capabilities, and limitations of existing and future quantum computing systems.
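As a small illustration of the kind of figure of merit such benchmarking produces, the hypothetical sketch below (not NERSC's actual tooling) scores a device's measured bitstring counts against the ideal output distribution of a known circuit using Hellinger fidelity, a metric commonly used for this purpose.

```python
# Hypothetical benchmarking sketch: compare measured bitstring counts
# from a quantum device against a known circuit's ideal distribution.
# (Illustrative only; not NERSC's actual benchmark suite.)
import math

def hellinger_fidelity(ideal: dict[str, float], counts: dict[str, int]) -> float:
    """Fidelity in [0, 1]; 1.0 means the measured distribution matches the ideal."""
    shots = sum(counts.values())
    overlap = sum(math.sqrt(p * counts.get(b, 0) / shots)
                  for b, p in ideal.items())
    return overlap ** 2

# Example: an ideal 2-qubit Bell-state circuit yields '00' and '11' with
# probability 0.5 each; the counts below mimic a noisy device.
ideal = {"00": 0.5, "11": 0.5}
counts = {"00": 480, "11": 470, "01": 30, "10": 20}
print(f"Hellinger fidelity: {hellinger_fidelity(ideal, counts):.3f}")  # ~0.950
```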
Additionally, I will discuss NERSC's vision and roadmap for further advancing quantum computing integration into NERSC's workloads in the coming years.
Computer Systems Engineer, NERSC, Lawrence Berkeley National Laboratory
Katie Klymko received her Ph.D. in 2018 from UC Berkeley, where she worked on theoretical and computational chemistry. She was a postdoc at LBL from October 2018 through September 2021, focusing on developing quantum computing algorithms for eigenvalue calculations in molecular systems as well as algorithms to explore thermodynamic properties. In October 2021, she became a staff member at NERSC, where she is developing and implementing NERSC's quantum computing strategy.
Speaker: Steve Jahnke (Altera Corporation)
With the broad rollout of AI inference capabilities, opportunities present themselves to either replace or enhance traditional DSP functionality with an AI inference function. However, most silicon and platform offerings have separate acceleration for DSP and AI, even though both operations are functionally very similar at the compute level. Options to recover and reuse an enhanced DSP block (i.e., a "Dense Compute Block") in an FPGA for AI inference with variable fixed-point data widths are explored, with practical examples highlighted in current FPGA architectures.
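The compute-level similarity is easy to see in code: a FIR filter tap loop and a neural-network neuron both reduce to the same fixed-point multiply-accumulate (MAC) pattern, which is what makes a shared Dense Compute Block plausible. The sketch below is illustrative only; the Q1.15 format is an assumed example choice, not a claim about Altera's hardware.

```python
# Illustrative sketch: a FIR filter and an inference neuron share the
# same fixed-point MAC inner loop. (Q1.15 is an assumed example format.)
Q = 15  # fractional bits (Q1.15 fixed point)

def to_fixed(x: float) -> int:
    return int(round(x * (1 << Q)))

def mac(acc: int, a: int, b: int) -> int:
    """One multiply-accumulate: full-width multiply, then rescale."""
    return acc + ((a * b) >> Q)

def fir_sample(taps: list[int], history: list[int]) -> int:
    """One FIR output sample: tap-by-tap MACs over delayed samples."""
    acc = 0
    for t, x in zip(taps, history):
        acc = mac(acc, t, x)
    return acc

def neuron(weights: list[int], inputs: list[int], bias: int) -> int:
    """One neuron pre-activation: the identical MAC loop plus a bias."""
    acc = bias
    for w, x in zip(weights, inputs):
        acc = mac(acc, w, x)
    return acc

coeffs = [to_fixed(c) for c in (0.25, 0.5, 0.25)]
data = [to_fixed(s) for s in (0.1, 0.2, 0.3)]
print(fir_sample(coeffs, data))      # same result ...
print(neuron(coeffs, data, bias=0))  # ... from the same inner loop
```

Varying the fixed-point data width in the talk's sense corresponds to changing `Q` and the operand widths here; on an FPGA, narrower operands generally let one block pack more parallel MACs.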
Principal Platform Architect, Altera Corporation
Steve Jahnke is a Principal Platform Architect at Altera, where his responsibilities include next-generation dense compute functionality, from silicon/FPGA hardware acceleration and tooling through applications. He is the lead or sole inventor on over 25 issued patents and holds a B.S. in Electrical Engineering from Northwestern University (Evanston, IL, USA) and a Master of Electrical Engineering from Rice University (Houston, TX, USA).
Speaker: Yuta Ukon (NTT Corporation)
NTT is advancing the IOWN (Innovative Optical and Wireless Network) initiative, a next-generation communications infrastructure, and is developing a computer architecture capable of processing, with high power efficiency, the large workloads of the advanced applications made possible by IOWN. Disaggregated computing is a novel architecture that pools diverse computing resources and dynamically allocates only the necessary ones for efficient application execution.
This presentation will provide an overview of Optical Disaggregated Computing and related trends. Additionally, I will introduce Hardware Function Chaining, a technology designed to enable flexible, low-latency connections between hardware accelerators within resource pools, along with an FPGA concept implementation leveraging this technology.
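As a software analogy for how such chaining might look to an application, the sketch below composes a pipeline from a pool of functions and runs data through it. The pool contents, names, and interface are hypothetical illustrations, not NTT's actual API; in the real system the stages would be hardware accelerators linked directly rather than Python callables.

```python
# Hypothetical analogy of hardware function chaining: allocate only the
# functions an application needs from a resource pool and connect them
# into a pipeline. (Illustrative only; not NTT's actual interface.)
from typing import Callable

Accelerator = Callable[[bytes], bytes]

# The pool maps function names to available accelerator handles.
pool: dict[str, Accelerator] = {
    "decode":   lambda data: data,                    # stand-in codec block
    "filter":   lambda data: data.upper(),            # stand-in DSP block
    "compress": lambda data: data[: len(data) // 2],  # stand-in compressor
}

def build_chain(names: list[str]) -> Accelerator:
    """Allocate the named functions and chain them output-to-input."""
    stages = [pool[name] for name in names]  # dynamic allocation step
    def run(data: bytes) -> bytes:
        for stage in stages:
            data = stage(data)  # in hardware: a direct accelerator link
        return data
    return run

pipeline = build_chain(["decode", "filter", "compress"])
print(pipeline(b"sensor payload"))
```

The point of the hardware version is that each hop goes accelerator-to-accelerator instead of round-tripping through host CPU memory, which is presumably where the low-latency claim comes from.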
Senior Manager, NTT Device Innovation Center, NTT Corporation
Yuta Ukon is an FPGA engineer at the NTT Device Innovation Center. Since joining the company in 2012, he has been involved in the research and development of CPU-FPGA coupled architectures and FPGA-based network traffic monitoring systems. He is currently leading the development of a disaggregated computing system. He holds a Ph.D. in engineering from Tokyo Institute of Technology (now Institute of Science Tokyo).