10:00 – 11:30
10:00 – 10:05
Michał Stęchły, PsiQuantum
10:05 – 10:35
Travis Humble, ORNL
While remarkable advances in quantum hardware have enabled quantum applications, these demonstrations typically rely on bespoke error-mitigation techniques that amount to point solutions for a given device and program. The underlying sensitivity to noise raises concerns about how to ensure quantum computing results are both reproducible and transferable as hardware and applications evolve. What does it take to make quantum computing reproducible? We review the general question of reproducibility in quantum computing and then offer some more specific ideas for how the community can manage these concerns over the coming decade.
10:35 – 11:05
Wim van Dam, Microsoft
Quantum computing promises to solve various scientifically and commercially valuable problems that are intractable for classical machines, but delivering on this promise requires large-scale quantum computers. It is an important open challenge to understand the hardware and software resources needed to compute such solutions within a practical time limit. To this end, we develop a framework that abstracts the quantum stack to estimate the resources that the different layers need to achieve the desired quantum solutions. The Azure Quantum Resource Estimator is a tool that implements this framework; it automatically assesses the physical and logical resources needed to execute an appropriately specified quantum algorithm. Using this tool, we show that achieving quantum solutions requires hundreds of thousands to millions of physical qubits to encode the logical qubits, and that the fault-tolerant logical quantum operations must run at a fast enough clock speed to complete the computation within a practical time limit. By measuring the execution of the algorithms in reliable quantum operations per second (rQOPS), we obtain a unit for fault-tolerant quantum computing that provides a crucial guideline for analyzing different quantum hardware platforms.
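To make the rQOPS figure of merit concrete, here is a minimal back-of-the-envelope sketch (not the Azure Quantum Resource Estimator itself); the qubit count, logical depth, and runtime budget are illustrative placeholders, not figures from the talk.

```python
# Back-of-the-envelope rQOPS requirement (a sketch, not the Azure Quantum
# Resource Estimator; all inputs are illustrative assumptions).

def required_rqops(logical_qubits: int, logical_depth: float,
                   max_runtime_s: float) -> float:
    """rQOPS = logical qubits x logical clock rate, where the clock rate
    must satisfy logical_depth / rate <= max_runtime_s."""
    min_clock_rate_hz = logical_depth / max_runtime_s
    return logical_qubits * min_clock_rate_hz

# Example: 2,500 logical qubits, 1e12 logical time steps, one-week budget.
print(f"{required_rqops(2500, 1e12, 7 * 24 * 3600):.1e} rQOPS")  # ~4.1e9
```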
11:05 – 11:30
Will Zeng
Ophelia Crawford
Dave Clader
Moderator: Kevin Obenland, MIT Lincoln Laboratory
11:30 – 13:00 Lunch break
13:00 – 13:00
Simon Tsang, Peraton Labs
13:00 – 13:30
Samuel Jaques, University of Oxford
How long until quantum computers break today's cryptography? This simple question is incredibly difficult to answer, and this talk will cover some of the issues encountered in this estimation. A first problem is the scale of circuits: Shor's algorithm is simple, but still requires millions of gates, so automation and verification are essential. I will discuss some valuable insights gained from Q#, but also some drawbacks of this tool. Even once we design a circuit, we need to estimate its hardware cost. Surface codes are a well-accepted choice, but this creates more difficult estimation problems in terms of layout, teleportation, and more, and it can make previous circuit estimates obsolete. Though I will focus on Shor's algorithm, the lessons and problems apply to any early fault-tolerant algorithm.
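As a rough illustration of the hardware-cost estimation problem mentioned above, the sketch below applies the common surface-code heuristic p_L ≈ A·(p/p_th)^((d+1)/2) to pick a code distance and the resulting physical-qubit overhead; the constants A and p_th and the input error rates are assumptions for illustration, not values from the talk.

```python
# Rule-of-thumb surface-code overhead (a sketch using the common heuristic
# p_L ~ A * (p/p_th)^((d+1)/2); A, p_th, and the inputs are assumptions).

def code_distance(p_phys: float, p_target: float,
                  p_th: float = 1e-2, A: float = 0.1) -> int:
    """Smallest odd distance d with A * (p_phys/p_th)**((d+1)/2) <= p_target."""
    d = 3
    while A * (p_phys / p_th) ** ((d + 1) / 2) > p_target:
        d += 2
    return d

# Example: physical error rate 1e-3, and a budget of ~1e-12 logical error
# per operation (plausible for a Shor-scale circuit with ~1e9 gates).
d = code_distance(1e-3, 1e-12)
print(d, "->", 2 * d * d, "physical qubits per logical qubit")  # 21 -> 882
```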
13:30 – 14:00
Daniel Litinski, PsiQuantum
We use Shor's algorithm for the computation of elliptic curve private keys as a case study for resource estimates in the silicon-photonics-inspired active-volume architecture. Here, a fault-tolerant surface-code quantum computer consists of modules with a logarithmic number of non-local inter-module connections, modifying the algorithmic cost function compared to 2D-local architectures. We find that the non-local connections reduce the cost per key by a factor of 300-700, depending on the operating regime. At a 10% threshold, assuming a 10-μs code cycle and non-local connections, one key can be generated every 10 minutes using 6000 modules with 1152 physical qubits each. By contrast, a device with strict 2D-local connectivity requires more qubits and produces one key every 38 hours. We also find simple architecture-independent algorithmic modifications that reduce the Toffoli count per key by up to a factor of 5. These modifications involve reusing the stored state for multiple keys and spreading the cost of the modular division operation over multiple parallel instances of the algorithm.
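For scale, the headline numbers in this abstract can be sanity-checked with simple arithmetic; the sketch below only restates figures quoted above (module count, qubits per module, keys per unit time) and derives the implied totals.

```python
# Arithmetic on the figures quoted in the abstract above (a sketch; it
# derives totals from the stated numbers rather than from the paper).

modules = 6000
qubits_per_module = 1152
total_physical_qubits = modules * qubits_per_module

minutes_per_key_active_volume = 10
hours_per_key_2d_local = 38
throughput_ratio = hours_per_key_2d_local * 60 / minutes_per_key_active_volume

print(f"{total_physical_qubits:,} physical qubits")        # 6,912,000
print(f"~{throughput_ratio:.0f}x keys/day vs. 2D-local")   # ~228x
```

Note that the ~228x throughput ratio is distinct from the stated 300-700x cost-per-key factor, since the 2D-local baseline uses a different qubit count.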
14:00 – 14:30
Thomas Alexander, IBM
Recent advances in quantum system development, marked by increased qubit counts and improved error rates, are enabling increasingly sophisticated applications and instilling optimism about achieving quantum utility. To meet and exceed the utility threshold, a symbiotic approach is crucial: applications must be co-designed and fine-tuned to harness quantum hardware capabilities effectively, and in turn, quantum hardware must be engineered to meet the requirements of quantum workloads. Applying Quantum Resource Estimation (QRE) allows us to predict quantum resource requirements (quantities like qubit counts, gate usage, and execution times) for target model applications. These requirements subsequently shape the design parameters for the overall quantum system and its constituent software and hardware subsystems, many of which are not immediately apparent from the standard QRE outputs. Our work at IBM Quantum is focused on constructing high-performance superconducting quantum computers and is guided by an ambitious scalability and functionality roadmap. This presentation delves into the symbiotic relationship between quantum resource demands, quantum software, and control systems, focusing on the challenges we are encountering in our systems engineering. We aim to underscore the essential role of QRE as a pivotal tool within the quantum systems engineering toolbox.
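As one example of how QRE outputs propagate into subsystem requirements, the sketch below turns an assumed physical-qubit count and code-cycle time into a required syndrome-readout bandwidth for the decoder; every number and the simple one-bit-per-ancilla bandwidth model are illustrative assumptions, not IBM specifications.

```python
# From QRE-style outputs to a control-system requirement (illustrative
# sketch; all values and the bandwidth model are assumptions).

physical_qubits = 100_000      # assumed QRE output for a target workload
ancilla_fraction = 0.5         # assume ~half the qubits are measured per cycle
code_cycle_time_s = 1e-6       # assumed syndrome-extraction cycle time
bits_per_measurement = 1       # one syndrome bit per measured ancilla

syndrome_bandwidth = (physical_qubits * ancilla_fraction
                      * bits_per_measurement / code_cycle_time_s)
print(f"decoder input stream: ~{syndrome_bandwidth / 1e9:.0f} Gbit/s")  # ~50
```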
14:30 – 15:00 Coffee Break
15:00 – 15:05
Peter Johnson, Zapata AI
Omar Shehab, IBM
15:45 – 16:30