Quantum Computing and Communications

Our KUARQ research group has been focusing on developing multi-feature multidimensional quantum convolutional classification (MQCC) techniques for quantum machine learning (QML). These techniques leverage a novel multi-feature multidimensional quantum convolution operation with arbitrary filtering and unity stride, in addition to quantum pooling based on the quantum wavelet transform and quantum measurements. We have also been developing a cost-effective and reconfigurable emulation platform for performing Classical-to-Quantum (C2Q) and Quantum-to-Classical (Q2C) data conversions between classical and quantum systems and for evaluating the performance of quantum algorithms in practical real-world QML and cybersecurity applications. Our emulation platform architecture uses advanced field-programmable gate array (FPGA) technology for scalable, high-performance, high-throughput, and highly accurate emulation of quantum algorithms and systems. Compared to existing state-of-the-art FPGA emulators, this emulation framework is the most scalable and most accurate and achieves the highest throughput, as demonstrated by encouraging experimental results. Finally, we have also developed a free-space optical (FSO) communication system that combines chaotic communications with quantum key distribution (QKD) to achieve greater security and range than existing FSO techniques. Some of our work in Quantum Computing and Communications can be found below.

Leveraging Data Locality in Quantum Convolutional Classifiers

Quantum computing (QC) has opened the door to advancements in machine learning (ML) tasks that are currently implemented in the classical domain. Convolutional neural networks (CNNs) are classical ML architectures that exploit data locality and possess a simpler structure than fully-connected multi-layer perceptrons (MLPs) without compromising classification accuracy. However, the concept of preserving data locality is usually overlooked in the existing quantum counterparts of CNNs, particularly for extracting multiple features from multidimensional data. In this work, we present a multidimensional quantum convolutional classifier (MQCC) that performs multidimensional and multi-feature quantum convolution with average and Euclidean pooling, thereby adapting the CNN structure to a variational quantum algorithm (VQA). Average pooling is based on the quantum Haar transform (QHT), and Euclidean pooling is based on partial quantum measurement. The experimental work was conducted using multidimensional data to validate the correctness and demonstrate the scalability of the proposed method, utilizing both noisy and noise-free quantum simulations. We evaluated the MQCC model against reported work on state-of-the-art quantum simulators from IBM Quantum and Xanadu using a variety of standard ML datasets. The experimental results showed favorable characteristics of our proposed techniques compared to existing work with respect to a number of quantitative metrics, such as the number of training parameters, cross-entropy loss, classification accuracy, circuit depth, and quantum gate count. Click here to read the reference article.

Multidimensional Quantum Convolutional Classifier (MQCC)

Optimizing Multidimensional Pooling for Variational Quantum Algorithms

Convolutional neural networks (CNNs) have proven to be a very efficient class of machine learning (ML) architectures for handling multidimensional data by maintaining data locality, especially in the field of computer vision. Data pooling, a major component of CNNs, plays a crucial role in extracting important features of the input data and downsampling its dimensionality. Multidimensional pooling, however, is not efficiently implemented in existing ML algorithms. In particular, quantum machine learning (QML) algorithms tend to ignore data locality in higher dimensions by flattening multidimensional data into simple one-dimensional data. In this work, we propose using the quantum Haar transform (QHT) and quantum partial measurement for performing generalized pooling operations on multidimensional data. We present the corresponding decoherence-optimized quantum circuits for the proposed techniques along with their theoretical circuit-depth analysis. Our experimental work was conducted using multidimensional data, ranging from 1-D audio data to 2-D image data to 3-D hyperspectral data, to demonstrate the scalability of the proposed methods. In our experiments, we utilized both noisy and noise-free quantum simulations on a state-of-the-art quantum simulator from IBM Quantum. We also show the efficiency of our proposed techniques for multidimensional data by reporting the fidelity of results. Click here to read the reference article.

Method 1: Quantum Average Pooling via Quantum Haar Transform

Method 2: Quantum Euclidean Pooling via Partial Measurement
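As an illustrative classical sketch of the two pooling effects (the function names and data below are hypothetical, not the paper's implementation): one level of Haar-based pooling keeps the pairwise approximation coefficients, while measuring one qubit of a statevector leaves outcome probabilities that correspond to an L2 (Euclidean) pooling of adjacent amplitudes.

```python
import numpy as np

def haar_average_pool(x):
    """One Haar level: keep the approximation coefficients (a+b)/sqrt(2)
    of each adjacent pair and discard the detail coefficients."""
    x = np.asarray(x, dtype=float)
    return (x[0::2] + x[1::2]) / np.sqrt(2)

def euclidean_pool(x):
    """Partial-measurement effect: measuring one qubit leaves outcome
    probabilities |a|^2 + |b|^2 per pair, i.e. an L2 pooling of adjacent
    amplitudes (shown here for real-valued data)."""
    x = np.asarray(x, dtype=float)
    return np.sqrt(x[0::2]**2 + x[1::2]**2)

data = np.array([1.0, 3.0, 2.0, 2.0, 0.0, 4.0, 3.0, 1.0])
print(haar_average_pool(data))  # pairwise sums scaled by 1/sqrt(2)
print(euclidean_pool(data))     # pairwise L2 norms
```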

Generalized Quantum Convolution for Multidimensional Data

The convolution operation plays a vital role in a wide range of critical algorithms across various domains, such as digital image processing, convolutional neural networks, and quantum machine learning. In existing implementations, particularly in quantum neural networks, convolution operations are usually approximated by the application of filters with data strides that are equal to the filter window sizes. One challenge with these implementations is preserving the spatial and temporal localities of the input features, specifically for data with higher dimensions. In addition, the deep circuits required to perform quantum convolution with a unity stride, especially for multidimensional data, increase the risk of violating decoherence constraints. In this work, we propose depth-optimized circuits for performing generalized multidimensional quantum convolution operations with unity stride targeting applications that process data with high dimensions, such as hyperspectral imagery and remote sensing. We experimentally evaluate and demonstrate the applicability of the proposed techniques by using real-world, high-resolution, multidimensional image data on a state-of-the-art quantum simulator from IBM Quantum. Click here to read the reference article.

One-Dimensional Quantum Convolution

Multidimensional Quantum Convolution
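A minimal classical sketch of the stride distinction discussed above (illustrative code, not the paper's circuits): with unity stride a filter produces one output per input position and preserves locality, while a stride equal to the filter size produces only patch-wise outputs.

```python
import numpy as np

def conv1d(x, w, stride=1):
    """Circular 1-D convolution of x with filter w. stride=1 (unity stride)
    yields one output per input position, preserving locality; a stride
    equal to len(w), as in many quantum approximations, downsamples."""
    n, k = len(x), len(w)
    out = [sum(x[(i + j) % n] * w[j] for j in range(k))
           for i in range(0, n, stride)]
    return np.array(out)

x = np.array([1., 2., 3., 4., 5., 6., 7., 8.])
w = np.array([0.5, 0.5])           # simple averaging filter
print(conv1d(x, w, stride=1))      # 8 outputs, locality preserved
print(conv1d(x, w, stride=2))      # 4 outputs, patch-wise only
```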

Towards Complete and Scalable Emulation of Quantum Algorithms on High-Performance Reconfigurable Computers

Contemporary quantum computers face many critical challenges that limit their usefulness for practical applications. A primary limiting factor is classical-to-quantum (C2Q) data encoding, which requires specific circuits for quantum state initialization. The required state initialization circuits are often complex and violate decoherence constraints, particularly for I/O intensive applications. Existing Noisy Intermediate-Scale Quantum (NISQ) devices are noise-sensitive and have low quantum bit (qubit) counts, thus limiting the applicability of C2Q circuits for encoding large and realistic datasets. This has made the study of complete and realistic circuits that include data encoding challenging and has also led to a heavy dependency on costly and resource-intensive simulations on classical platforms. In this work, we propose a cost-effective, classical-hardware-accelerated framework for realistic and complete emulation of quantum algorithms. The emulation framework incorporates components for the critical C2Q data encoding process, as well as architectures for quantum algorithms such as the quantum Haar transform (QHT). The framework is used to investigate optimizations for C2Q and QHT algorithms, and the corresponding optimized quantum circuits are presented. The framework is implemented on a High-Performance Reconfigurable Computer (HPRC) which emulates the proposed QHT circuits combined with proposed C2Q data encoding methods. For performance benchmarks, CPU-based emulations and simulations on a state-of-the-art quantum computing simulator are also carried out. Results show that the proposed hardware-accelerated emulation framework is more efficient in terms of speed and scalability compared to CPU-based emulation and simulation. Click here to read the reference article.

Method 1: Quantum circuit for C2Q data encoding with non-unitary state synthesis for emulation

Method 2: Quantum circuit for C2Q data encoding with unitary state synthesis for physical realization
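The C2Q encoding step can be illustrated classically. The sketch below is a hypothetical helper, assuming amplitude encoding: it pads a classical vector to a power-of-two length and normalizes it into a valid statevector over the required number of qubits.

```python
import numpy as np

def c2q_amplitude_encode(data):
    """Amplitude-based C2Q encoding sketch: pad the classical vector to a
    power-of-two length and normalize to unit L2 norm so it is a valid
    statevector over ceil(log2(len(data))) qubits."""
    data = np.asarray(data, dtype=float)
    n_qubits = max(1, int(np.ceil(np.log2(len(data)))))
    padded = np.zeros(2**n_qubits)
    padded[:len(data)] = data
    state = padded / np.linalg.norm(padded)
    return state, n_qubits

state, n = c2q_amplitude_encode([3.0, 4.0])
print(state, n)   # state [0.6, 0.8] on 1 qubit
```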

Improving Quantum-to-Classical Data Decoding using Optimized Quantum Wavelet Transform

One of the challenges facing current noisy intermediate-scale quantum (NISQ) devices is achieving efficient quantum circuit measurement, or readout. The process of extracting classical data from the quantum domain, termed in this work quantum-to-classical (Q2C) data decoding, generally incurs significant overhead, since the quantum circuit needs to be sampled repeatedly to obtain useful data readout. In this paper, we propose and evaluate time-efficient and depth-optimized Q2C methods based on the multidimensional, multilevel-decomposable, quantum wavelet transform (QWT), whose packet and pyramidal forms are leveraged and optimized. We also propose a zero-depth technique that uses selective placement of measurement gates to perform the QWT operation. To demonstrate their efficiency, the proposed techniques are quantitatively evaluated in terms of their temporal complexity (circuit depth and execution time), spatial complexity (total gate count), and accuracy (fidelity/similarity) in comparison to existing Q2C techniques. Experimental evaluations of the proposed Q2C methods are performed on a 27-qubit state-of-the-art quantum computing device from IBM Quantum using real high-resolution multispectral images. The proposed QHT-based Q2C method achieved up to 15x higher space efficiency than the QFT-based Q2C method, while the proposed zero-depth method achieved up to 14% and 78% improvements in execution time compared to conventional Q2C and QFT-based Q2C, respectively. Click here to read the reference article.

General Quantum-to-Classical (Q2C) Methodology

Quantum Haar Transform (QHT) Approach
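The readout overhead that motivates this work can be sketched classically: amplitude magnitudes are estimated from repeated measurement samples, and accuracy improves with the number of shots. The helper below is illustrative only, not one of the proposed Q2C methods.

```python
import numpy as np

def q2c_readout(state, shots, seed=0):
    """Q2C decoding sketch: sample measurement outcomes of a statevector
    `shots` times and estimate amplitude magnitudes from the outcome
    frequencies. More shots -> lower readout error, which is the overhead
    that optimized Q2C methods target."""
    rng = np.random.default_rng(seed)
    probs = np.abs(state)**2
    counts = rng.multinomial(shots, probs)
    return np.sqrt(counts / shots)   # magnitude estimates only

state = np.array([0.6, 0.8])
est = q2c_readout(state, shots=100000)
print(est)   # close to [0.6, 0.8]
```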

Decoherence-Optimized Circuits for Multidimensional and Multilevel-Decomposable Quantum Wavelet Transform

The Quantum Wavelet Transform (QWT), an important area of research in quantum computing, finds application in High Energy Physics (HEP) and remote sensing hyperspectral imagery because of its significant speedup over its classical counterparts. However, QWT circuits are inherently deep, making them unsuitable for the current class of quantum hardware with its sensitivity to external noise and short coherence times. QWT circuits therefore need modification to run on current quantum hardware: optimizing them in terms of circuit depth mitigates decoherence effects, provides high fidelity, and allows efficient implementation on current NISQ processors. The d-dimensional QHT operation, for both sequential and parallel circuits, can be performed using the optimized pyramidal or packet decomposition. The generalized packet and pyramidal circuits are shown in Figure 1. In packet decomposition, each level of decomposition is applied to all the data qubits, whereas in pyramidal decomposition, each successive level of decomposition operates on fewer data qubits. Click here to read the reference article.

Quantum circuit for QWT using packet decomposition

Quantum circuit for QWT using pyramidal decomposition 
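The difference between the two decompositions can be sketched classically with a 1-D Haar step (illustrative code, not the Figure 1 circuits): pyramidal recursion transforms only the approximation half at each level, while packet recursion transforms every sub-band.

```python
import numpy as np

def haar_level(x):
    """One Haar level: (approximation, detail) halves of a vector."""
    x = np.asarray(x, dtype=float)
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

def haar_pyramidal(x, levels):
    """Pyramidal decomposition: each level recurses only on the
    approximation half, so it acts on fewer coefficients per level."""
    details = []
    for _ in range(levels):
        x, d = haar_level(x)
        details.append(d)
    return x, details

def haar_packet(x, levels):
    """Packet decomposition: every level transforms all sub-bands,
    i.e. the Haar step is applied across the full register each time."""
    bands = [np.asarray(x, dtype=float)]
    for _ in range(levels):
        bands = [half for b in bands for half in haar_level(b)]
    return bands

x = np.array([4., 2., 6., 8., 1., 3., 5., 7.])
approx, details = haar_pyramidal(x, levels=2)
print(approx)                          # 2 coarse coefficients after 2 levels
print(len(haar_packet(x, levels=2)))   # 4 sub-bands of length 2
```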

Quantum Dimension Reduction for Pattern Recognition in High-Resolution Spatio-Spectral Data

The promises of advanced quantum computing technology have driven research in the simulation of quantum computers on classical hardware, where the feasibility of quantum algorithms for real-world problems can be investigated. In domains such as High Energy Physics (HEP) and Remote Sensing Hyperspectral Imagery, classical computing systems are held back by enormous readouts of high-resolution data. A methodology has been proposed that utilizes the Quantum Haar Transform (QHT) and a modified Grover's search algorithm for time-efficient dimension reduction and dynamic pattern recognition in data sets characterized by high spatial resolution and high dimensionality. QHT is performed on the data to reduce its dimensionality while preserving spatial locality, and the modified Grover's search algorithm shown in Figure 2 is used to search for dynamically changing multiple patterns in the reduced data set. By performing search operations on the reduced data set, processing overheads are minimized. Moreover, the quantum techniques produce results in less time than classical dimension reduction and search methods. The feasibility of the proposed methodology is verified by emulating the quantum algorithms on classical hardware based on field-programmable gate arrays (FPGAs). The work also presents designs of the quantum circuits for multi-dimensional QHT and multi-pattern Grover's search. Click here to read the reference article.

Modified quantum circuit for multi-pattern Grover’s Algorithm 

Modifying Quantum Grover’s Algorithm for Dynamic Multi‐Pattern Search on Reconfigurable Hardware

Grover's search algorithm is a popular quantum algorithm for achieving potential speedup in unstructured data search. A modified version of the traditional multi-pattern quantum Grover's search algorithm is presented that is capable of processing dynamically changing input patterns (see figure below). Traditionally, two operations are performed in multiple iterations on the input state: phase inversion (the oracle) and diffusion. The oracle takes the input set, then identifies and inverts the coefficients of the search pattern(s). These inverted coefficients are amplified in the diffusion operation, while the other amplitudes are attenuated. In contrast, the modified version amplifies only as many states as there are solutions/patterns being searched for. A permutation step is then required to shift the target patterns to the target states in the output superimposed quantum state. Experimental evaluation of the modified algorithm using field-programmable gate array (FPGA) hardware demonstrates successful emulation of multi-pattern Grover's algorithm using up to 22 quantum bits, the highest and most efficient among contemporary work. Click here to read the reference article.

Modified Grover's oracle for single-pattern search

Modified Grover's oracle for multi-pattern search
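A statevector-level sketch of multi-pattern amplitude amplification (illustrative only; it omits the permutation step and the dynamic-pattern machinery described above): the oracle phase-inverts all marked indices at once, and diffusion reflects every amplitude about the mean.

```python
import numpy as np

def grover_multi_pattern(n_qubits, marked):
    """Multi-pattern Grover sketch on a statevector: the oracle flips the
    phase of every marked index, and the diffusion operator reflects all
    amplitudes about their mean, amplifying the marked states together."""
    N = 2**n_qubits
    state = np.full(N, 1/np.sqrt(N))
    M = len(marked)
    iters = int(np.floor(np.pi/4 * np.sqrt(N / M)))   # optimal iteration count
    for _ in range(iters):
        state[marked] *= -1                # oracle: phase inversion
        state = 2*state.mean() - state     # diffusion about the mean
    return state

state = grover_multi_pattern(4, marked=[3, 12])
probs = np.abs(state)**2
print(probs[3] + probs[12])   # most probability mass on the two patterns
```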

Combining Quantum Key Distribution with Chaotic Systems for Free-Space Optical Communications

In this work, we propose a free-space optical (FSO) communication system that combines chaotic communications with quantum key distribution (QKD) to achieve greater security and range compared to existing FSO techniques such as N-slit interferometers. We utilize Lorenz chaotic transmitter and receiver models, which are inherently auto-synchronizable, to generate chaotic signals used as data carriers. Data are transmitted securely over a classical channel using the Lorenz chaotic communication system, while a quantum channel is used for securely exchanging critical synchronization parameters via a combination of QKD and public-key cryptography protocols. Because FSO communications have been utilized by space agencies including NASA and ESA, we provide a concept of operations for a space mission combining chaotic communications and QKD to achieve an end-to-end encrypted deep-space optical communications link. Our experimental work includes successful real-time transmission of high-resolution single-spectral and multi-spectral images, measurement of bit-error-rate over a range of noise levels, and an evaluation of security and robustness of transmissions with dynamic reconfiguration of the chaotic systems. Click here to read the reference article.

Chaotic Communications Secured by Quantum Key Distribution (QKD) Communications
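A minimal numerical sketch of the drive-response idea behind auto-synchronization (illustrative parameters and simple Euler integration, not the paper's transmitter/receiver design): a receiver driven by the transmitter's x signal has its remaining states converge to the transmitter's, which is the property exploited to use chaotic signals as carriers.

```python
import numpy as np

def lorenz_sync(steps=20000, dt=0.001, sigma=10.0, rho=28.0, beta=8/3):
    """Drive-response Lorenz synchronization sketch: the receiver is driven
    by the transmitted x signal, and its (y, z) states converge to the
    transmitter's regardless of their initial mismatch."""
    x, y, z = 1.0, 1.0, 1.0          # transmitter state
    yr, zr = -5.0, 5.0               # receiver starts far away
    for _ in range(steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        # receiver uses the transmitted x as its drive signal
        dyr = x * (rho - zr) - yr
        dzr = x * yr - beta * zr
        x, y, z = x + dt*dx, y + dt*dy, z + dt*dz
        yr, zr = yr + dt*dyr, zr + dt*dzr
    return abs(y - yr) + abs(z - zr)

err = lorenz_sync()
print(err)   # synchronization error shrinks toward zero
```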

Efficient Computation Techniques and Hardware Architectures for Unitary Transformations in Support of Quantum Algorithm Emulation

The simulation and emulation of quantum algorithms on classical platforms have become increasingly relevant as the field of quantum computing advances. However, software simulations require large-scale, costly, and resource-hungry supercomputers, while previous hardware emulators using faster field-programmable gate array (FPGA) accelerators were limited in accuracy and scalability. By using Complex Multiply-and-Accumulate (CMAC) units (see figure below), a cost-effective FPGA-based emulation platform is proposed with improved scalability, accuracy, and throughput compared to existing FPGA-based emulators. The CMAC units can apply different computation techniques, such as look-up tables and dynamic generation, during emulation. Using the quantum Fourier transform and Grover's search as case studies, the emulation framework is prototyped on a High-Performance Reconfigurable Computing (HPRC) system, and the results show quantitative improvement over existing FPGA-based emulators. Click here to read the reference article.

Complex multiply-and-accumulate (CMAC) unit
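The core idea behind the CMAC-based model can be sketched in a few lines (illustrative code, not the FPGA architecture): fold a sequence of gate unitaries into one matrix, so the whole circuit reduces to a single vector-matrix multiply.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
X = np.array([[0, 1], [1, 0]])                 # Pauli-X gate

def combine(*gates):
    """Fold a gate sequence into a single unitary so the whole circuit
    reduces to one vector-matrix multiply, the CMAC's core operation."""
    U = np.eye(gates[0].shape[0])
    for g in gates:                 # gates listed in application order
        U = g @ U
    return U

def cmac_apply(U, state):
    """Emulate the circuit as one complex multiply-and-accumulate pass."""
    return U @ state

U = combine(H, X, H)                # apply H, then X, then H (equals Z)
state = cmac_apply(U, np.array([1.0, 0.0]))
print(state)   # |0> is unchanged, since HXH = Z and Z|0> = |0>
```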

Dimension Reduction Using Quantum Wavelet Transform on a High-Performance Reconfigurable Computer

Processing high-dimensional data, such as those generated in High Energy Physics (HEP), can be very computationally resource intensive; dimension reduction helps reduce that load. Combined with the speedups provided by quantum computers, dimension reduction can be quite beneficial to areas such as HEP. The dimension reduction technique used is the Quantum Wavelet Transform (QWT), which is similar to the Fourier transform except that it preserves the spatial locality of data. This work uses a discrete version of the QWT in the quantum domain called the Quantum Haar Transform (QHT). The input data are first decomposed using the QHT, followed by operations on the reduced-dimensionality data, and then finally reconstructed using the Inverse Quantum Haar Transform (IQHT) to recover the original image. The work also describes experiments to simulate the quantum algorithms on a reconfigurable computing platform and evaluate their performance. The experimental work was performed on a DS8 high-performance FPGA developed by DirectStream. Click here to read the reference article.

Dimension reduction using 2D-QHT and 2D-IQHT
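A classical sketch of the decompose-operate-reconstruct flow (illustrative code, not the quantum circuits): one level of a 2-D Haar transform yields the low-frequency LL quadrant as the reduced image, and keeping the detail quadrants makes the transform exactly invertible.

```python
import numpy as np

def haar2d(img):
    """One level of 2-D Haar decomposition: returns the four quadrants
    (LL approximation plus LH/HL/HH details). LL is the reduced image."""
    lo_r = (img[0::2, :] + img[1::2, :]) / np.sqrt(2)
    hi_r = (img[0::2, :] - img[1::2, :]) / np.sqrt(2)
    LL = (lo_r[:, 0::2] + lo_r[:, 1::2]) / np.sqrt(2)
    LH = (lo_r[:, 0::2] - lo_r[:, 1::2]) / np.sqrt(2)
    HL = (hi_r[:, 0::2] + hi_r[:, 1::2]) / np.sqrt(2)
    HH = (hi_r[:, 0::2] - hi_r[:, 1::2]) / np.sqrt(2)
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    """Inverse of haar2d (IQHT analogue): undo columns, then rows."""
    lo_r = np.empty((LL.shape[0], 2*LL.shape[1]))
    hi_r = np.empty_like(lo_r)
    lo_r[:, 0::2], lo_r[:, 1::2] = (LL + LH)/np.sqrt(2), (LL - LH)/np.sqrt(2)
    hi_r[:, 0::2], hi_r[:, 1::2] = (HL + HH)/np.sqrt(2), (HL - HH)/np.sqrt(2)
    img = np.empty((2*lo_r.shape[0], lo_r.shape[1]))
    img[0::2, :], img[1::2, :] = (lo_r + hi_r)/np.sqrt(2), (lo_r - hi_r)/np.sqrt(2)
    return img

img = np.arange(16.0).reshape(4, 4)
LL, LH, HL, HH = haar2d(img)
print(np.allclose(ihaar2d(LL, LH, HL, HH), img))  # True: exact reconstruction
```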

Scaling Reconfigurable Emulation of Quantum Algorithms at High Precision and High Throughput

Access to quantum hardware is limited due to the cost and technical complexity of maintaining it. Fast and efficient techniques for simulating quantum algorithms on classical hardware are therefore of great value to researchers and academics, as they make research into quantum computing much more accessible. This work proposes two methods for improving the performance of quantum algorithm simulation on classical computers. The first is a Complex Multiply-and-Accumulate (CMAC) based emulation model, which leverages the fact that quantum algorithms can be mathematically represented by a series of unitary transformations that can be combined into a single matrix. The CMAC is designed to handle complex numbers as inputs and outputs, so the circuit function of a given quantum algorithm reduces to a single vector-matrix multiplication operation computed by the CMAC. It also uses look-up tables, which store pre-computed values of complex calculations, and dynamic generation, which saves the resources needed to store complex algorithm matrices, to further speed up the emulation. The second method uses a kernel-based emulation model, which is faster than the CMAC model but applies mainly to algorithms that involve a set of repeated core operations. The core operations are modeled as a kernel, and the kernel operation is applied iteratively across all input state groups. Click here to read the reference article.
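The kernel-based model can be sketched classically (illustrative code, not the paper's hardware kernels): a single 2x2 gate acts as the kernel and is applied iteratively across all amplitude pairs that differ only in the target qubit, avoiding the full 2^n x 2^n circuit matrix entirely.

```python
import numpy as np

def apply_kernel(state, gate, target, n_qubits):
    """Kernel-based emulation sketch: apply one 2x2 gate (the kernel) to
    every amplitude pair that differs only in the target qubit, instead of
    building the full 2^n x 2^n circuit matrix."""
    state = state.reshape([2] * n_qubits)
    state = np.moveaxis(state, target, 0)
    state = np.tensordot(gate, state, axes=([1], [0]))
    return np.moveaxis(state, 0, target).reshape(-1)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
state = np.zeros(8); state[0] = 1.0        # |000> on 3 qubits
for q in range(3):                         # H kernel applied to every qubit
    state = apply_kernel(state, H, q, 3)
print(state)   # uniform superposition, 1/sqrt(8) everywhere
```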