Quantum Machine Learning

Quantum machine learning is one of the most promising potential applications of noisy intermediate-scale quantum (NISQ) devices, and it has attracted considerable attention in recent years. While the field is still young and many questions remain open, quantum models have been shown, in some scenarios, to potentially offer faster training or greater expressive power than their classical counterparts, owing to the inherent parallelism of quantum gates and the high dimensionality of quantum state space. Moreover, they can process not only classical input/output data but also quantum input data, which no classical machine can process directly. Popular architectures such as parameterized quantum circuits offer flexible model descriptions and relative ease of training, much like classical neural networks.
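To make the parameterized-circuit picture concrete, here is a minimal single-qubit sketch simulated with NumPy (all function names are hypothetical illustrations, not taken from any particular framework): a classical feature is encoded as a rotation angle, a trainable rotation follows, and the measurement probability serves as the model output. The gradient is obtained with the standard parameter-shift rule.

```python
import numpy as np

def ry(angle):
    """Single-qubit Y-rotation gate as a 2x2 real unitary matrix."""
    c, s = np.cos(angle / 2), np.sin(angle / 2)
    return np.array([[c, -s], [s, c]])

def circuit_output(x, theta):
    """Encode feature x, apply trainable rotation theta, return P(|0>)."""
    state = np.array([1.0, 0.0])   # start in |0>
    state = ry(x) @ state          # data-encoding rotation
    state = ry(theta) @ state      # trainable rotation
    return np.abs(state[0]) ** 2   # measurement probability

def gradient(x, theta):
    """Parameter-shift rule: df/dtheta = [f(theta + pi/2) - f(theta - pi/2)] / 2."""
    return (circuit_output(x, theta + np.pi / 2)
            - circuit_output(x, theta - np.pi / 2)) / 2
```

For rotation gates of this form the parameter-shift rule is exact (here `circuit_output` reduces to `cos((x + theta) / 2) ** 2`, whose derivative the shifted evaluations recover analytically), which is one reason such circuits are comparatively easy to train.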

However, just like their classical counterparts, quantum machine learning models suffer from security vulnerabilities. For instance, classifiers based on quantum circuits are susceptible to adversarial attacks, in which small perturbations imposed on input states cause misclassification. Recent works have studied concrete attacks and countermeasures [1], as well as the theoretical high dimensionality inherent to quantum models, which itself makes them vulnerable to adversarial perturbations [2]. We are interested in exploring this topic as a future direction: the security of quantum machine learning models (beyond classification alone) for both quantum and classical inputs, and potential countermeasures against such attacks, or the lack thereof, i.e., whether the vulnerability is inherent.
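As a toy illustration of this kind of attack (a hypothetical NumPy sketch, not the construction of [1] or [2]), consider a single-qubit classifier that thresholds the measurement probability: a small rotation applied to an input state near the decision boundary flips the predicted class while leaving the perturbed state nearly indistinguishable from the original, with fidelity above 0.99.

```python
import numpy as np

def ry(angle):
    """Single-qubit Y-rotation gate as a 2x2 real unitary matrix."""
    c, s = np.cos(angle / 2), np.sin(angle / 2)
    return np.array([[c, -s], [s, c]])

def classify(state):
    """Toy classifier: class 0 if P(|0>) > 0.5, else class 1."""
    return 0 if np.abs(state[0]) ** 2 > 0.5 else 1

# An input state sitting just on the class-0 side of the decision boundary.
clean = ry(np.pi / 2 - 0.05) @ np.array([1.0, 0.0])

# Adversarial "noise": a small extra rotation pushing across the boundary.
perturbed = ry(0.1) @ clean

# The two states remain almost identical, yet the predicted class flips.
fidelity = np.abs(clean @ perturbed) ** 2   # = cos(0.05)^2, about 0.9975
```

The perturbation budget here (a rotation of 0.1 radians) barely changes the state, which mirrors the worry raised in [2]: in high-dimensional state spaces, perturbations that are negligible by fidelity can still cross a classifier's decision boundary.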

[1] S. Lu, L.-M. Duan, and D.-L. Deng, "Quantum adversarial machine learning," Physical Review Research 2.3 (2020): 033212.

[2] N. Liu and P. Wittek, "Vulnerability of quantum classification to adversarial perturbations," Physical Review A 101.6 (2020): 062331.