Workshop on
Performance Analysis of
Quantum Computer Systems

Held at ISPASS 2023: April 23, 2023

Quantum Benchmarking and
Characterization

As quantum computers scale to more, higher-quality qubits, it is increasingly important to develop methodologies to characterize the performance of these systems. This workshop is intended to introduce the ISPASS community to current techniques for quantum benchmarking and characterization, and to facilitate cross-disciplinary discussion between experts in classical and quantum performance analysis.

The workshop will begin with an Introduction to Quantum Computing for Computer Engineers, for those who need a primer or refresher on the fundamentals of quantum computing. This will be followed by a Tutorial on Quantum Benchmarking and Characterization, introducing the concepts, techniques, and measures that have been used and proposed for contemporary and future quantum systems.

The second part of the workshop will feature contributed and invited presentations on state-of-the-art research in quantum benchmarking and characterization. We invite researchers to submit an abstract for consideration: see the Call for Presentations below.

Organizers and Program Committee

Schedule

The emergence of quantum computers as a new computational paradigm has been accompanied by speculation concerning the scope and timeline of their anticipated revolutionary changes. While quantum computing is still in its infancy, the variety of architectures used to implement quantum computations makes it difficult to reliably measure and compare performance. This problem motivates our introduction of SupermarQ, a scalable, hardware-agnostic quantum benchmark suite which uses application-level metrics to measure performance. SupermarQ is the first attempt to systematically apply techniques from classical benchmarking methodology to the quantum domain. We define a set of feature vectors to quantify coverage, select applications from a variety of domains to ensure the suite is representative of real workloads, and collect benchmark results from the IBM, IonQ, and AQT@LBNL platforms. Looking forward, we envision that quantum benchmarking will encompass a large cross-community effort built on open-source, constantly evolving benchmark suites. We introduce SupermarQ as an important step in this direction.

See the related arXiv paper.
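As an illustration of the application-level-feature idea, the sketch below scores one toy feature of a circuit. The gate-list representation and the `entanglement_ratio` feature are hypothetical stand-ins for exposition, not the actual SupermarQ API or feature definitions:

```python
def entanglement_ratio(gates):
    """Toy feature: fraction of gates that act on two qubits,
    a rough proxy for how entangling an application circuit is."""
    if not gates:
        return 0.0
    two_qubit = sum(1 for _, qubits in gates if len(qubits) == 2)
    return two_qubit / len(gates)

# A small GHZ-style circuit as (gate_name, qubits_acted_on) pairs.
circuit = [("h", (0,)), ("cx", (0, 1)), ("cx", (1, 2))]
print(entanglement_ratio(circuit))  # 2 of 3 gates act on two qubits
```

A benchmark suite built this way would compute several such features per application, forming a feature vector that can be compared across workloads to argue coverage.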

An Analysis of the Completion Time of the BB84 Protocol
Sounak Kar and Jean-Yves Le Boudec, EPFL

The BB84 QKD protocol is based on the idea that the sender and the receiver can reconcile a certain fraction of the teleported qubits to detect eavesdropping/noise and decode the rest to use as a private key. Under the present hardware infrastructure, decoherence of quantum states presents a significant challenge to conducting perfect/efficient teleportation, implying that a teleportation-based protocol has to be run multiple times to observe success. Performance analyses of such protocols thus usually consider the completion time, i.e., the time until success, rather than the duration of a single attempt. Moreover, due to decoherence, the success of an attempt is usually dependent on the durations of individual phases of that attempt, as quantum states must wait in memory while the success or failure of a generation phase is communicated to relevant parties.

In this talk, we present a performance analysis of the completion time of the BB84 protocol in a setting where the sender and the receiver are connected via a single quantum repeater and the sole quantum channel between them does not experience any eavesdropping. Assuming certain distributional forms for the generation/communication phases of teleportation, we compute a closed-form expression for the moment generating function of the completion time and subsequently derive bounds/estimates of the tail probability. We also provide an efficient simulation scheme to generate the completion time.

See the related arXiv paper.
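The dependence of an attempt's success on its phase durations can be illustrated with a toy Monte Carlo sketch of the completion time. All rates, the exponential phase-duration model, and the exponential-decay success probability below are illustrative assumptions, not the distributional forms analyzed in the paper:

```python
import math
import random

def attempt(gen_rate=1.0, comm_rate=2.0, coherence=1.0):
    """One attempt: returns (duration, success)."""
    t_gen = random.expovariate(gen_rate)    # generation phase
    t_comm = random.expovariate(comm_rate)  # classical communication phase
    # While success/failure is being communicated, the quantum state
    # waits in memory and decoheres, so the success probability
    # decays with the communication delay.
    p_success = math.exp(-t_comm / coherence)
    return t_gen + t_comm, random.random() < p_success

def completion_time():
    """Total elapsed time until the first successful attempt."""
    total = 0.0
    while True:
        duration, ok = attempt()
        total += duration
        if ok:
            return total

random.seed(42)
samples = [completion_time() for _ in range(10_000)]
print(sum(samples) / len(samples))  # empirical mean completion time
```

The paper's analytical route replaces this simulation with a closed-form moment generating function, from which tail-probability bounds follow.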

Entanglement as a benchmark for near-term quantum hardware
Kathleen Hamilton, Oak Ridge National Laboratory
Contributors: Nouamane Laanait, Akhil Francis, Sophia Economou, George Barron, Kubra Yeter-Aydeniz, Titus Morris, Harrison Cooley, Muhun Kang, Alexander Kemper, Raphael Pooser

Entanglement is a fascinating property of quantum systems. Generating and controlling these correlations is important for implementing quantum computing tasks. This talk will review the different ways that entanglement has been used to benchmark near-term quantum hardware. We will then present results from a volumetric benchmark for near-term quantum platforms based on the generation and verification of genuine entanglement across n qubits using graph states and direct stabilizer measurements. Our benchmark evaluates the robustness of multipartite and bipartite n-qubit entanglement with respect to many sources of hardware noise: qubit decoherence, CNOT and SWAP gate noise, and readout error. We demonstrate our benchmark on multiple superconducting qubit platforms available from IBM (ibmq_belem, ibmq_toronto, ibmq_guadalupe and ibmq_jakarta). Subsets of n < 10 qubits are used for graph state preparation and stabilizer measurement. Evaluating genuine and biseparable entanglement witnesses, we report observations of 5-qubit genuine entanglement, but we find that robust multipartite entanglement is difficult to generate for n > 4 qubits, and we identify two-qubit gate noise as strongly correlated with the quality of genuine multipartite entanglement.

See the related arXiv paper.
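The stabilizer measurements used in this benchmark are defined by the graph: a graph state on n qubits is stabilized by the generators K_i = X_i ∏_{j∈N(i)} Z_j, one per vertex. The sketch below builds these generators as Pauli strings (plain Python, illustrative only; not the code used in the talk):

```python
def graph_state_stabilizers(n, edges):
    """Stabilizer generators K_i = X_i * prod_{j in N(i)} Z_j
    of an n-qubit graph state, as Pauli strings."""
    neighbors = {i: set() for i in range(n)}
    for a, b in edges:
        neighbors[a].add(b)
        neighbors[b].add(a)
    generators = []
    for i in range(n):
        pauli = ["I"] * n           # identity everywhere...
        pauli[i] = "X"              # ...X on the vertex itself...
        for j in neighbors[i]:
            pauli[j] = "Z"          # ...Z on each neighbor
        generators.append("".join(pauli))
    return generators

# Linear 3-qubit graph state (path 0-1-2)
print(graph_state_stabilizers(3, [(0, 1), (1, 2)]))
# ['XZI', 'ZXZ', 'IZX']
```

Measuring the expectation values of these generators (and products of them) on hardware is what feeds the genuine and biseparable entanglement witnesses discussed in the abstract.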

Call for Presentations

We invite all interested researchers to present work in any area of benchmarking and characterization of quantum computing systems. Topics include, but are not limited to:

Submit a 150-250 word abstract via EasyChair to propose a presentation. Presenters will be required to register for ISPASS; a one-day registration option will be available. The conference is in person. (If you'd like to submit but are not sure you can attend in person, please contact the organizers.)

Students are encouraged to submit and/or attend; student travel grants for the conference may be available.

We will not publish workshop proceedings, but presentations and/or open-access papers will be posted on this site with the presenters' permission.

Abstract submission deadline: March 27, 2023 (AoE), extended from March 17, 2023
Expected decision date: March 30, 2023
Workshop: April 23, 2023