This page collects the technical puzzles I've run into while doing my research: things I didn't know, or got stuck on.

If you are interested in quantum machine learning, it would be great to think about these questions together.

If you are interested in any of these topics and have ideas, please feel free to contact me.

Enigma 1 - Hamiltonian learning vs. QNN training vs. trainable measurement?

<Quantum Hamiltonian Learning>

<Vanilla QNN Training>

<Trainable Measurement>


In my research on measurement, I attached a rotation gate before the Pauli measurement and modeled it geometrically. For the proposed measurement, when the wires are interconnected, the resulting matrix is both unitary and Hermitian. Moreover, under the same loss function, the scheme becomes identical to parametrized Hamiltonian learning.

Since the hypothesis spaces they represent are mathematically the same, shouldn't there be almost no difference in performance among these three approaches, given that they all describe a single quantum machine learning model?
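One way to see the equivalence concretely: appending a rotation U(θ) before a Pauli-Z measurement is the same as measuring the Hermitian observable U(θ)†ZU(θ) on the unrotated state. A minimal single-qubit NumPy sketch (the RY gate and the test state are arbitrary choices for illustration):

```python
import numpy as np

# Pauli-Z observable and an RY rotation gate (single qubit)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

theta = 0.7
psi = np.array([0.6, 0.8], dtype=complex)  # arbitrary normalized state

# (1) "Trainable measurement": rotate the state, then measure Pauli-Z.
rotated = ry(theta) @ psi
exp_rotated = (rotated.conj() @ (Z @ rotated)).real

# (2) "Parametrized observable": measure H(theta) = RY(theta)^dag Z RY(theta)
#     directly on the unrotated state.
H = ry(theta).conj().T @ Z @ ry(theta)
exp_observable = (psi.conj() @ (H @ psi)).real

# Both formulations give identical expectation values.
print(np.isclose(exp_rotated, exp_observable))  # True
```

Both quantities equal ⟨ψ|U†ZU|ψ⟩, which is exactly why the expressible spaces coincide.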

Enigma 2 - How to reparameterize a quantum shot? 

A quantum shot is the result obtained each time a given quantum circuit is executed on a quantum computer. The result can vary between runs of the same circuit, because measurement of a quantum state is inherently probabilistic.

A Variational Autoencoder (VAE) is a generative model used in deep learning. A VAE encodes input data into a latent space and samples from that latent space to decode it back into the original data. During the sampling step, the reparameterization trick is used so that learning remains differentiable. Since both quantum circuits and VAEs involve a probabilistic sampling step, sampling in quantum circuits can be viewed as analogous to reparameterized sampling in VAEs.
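For reference, the VAE reparameterization trick replaces a direct draw z ~ N(μ, σ²) with z = μ + σ·ε where ε ~ N(0, 1), so the randomness is pushed into a parameter-free noise source and gradients flow through μ and σ. A minimal NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    """Sample z ~ N(mu, sigma^2) via z = mu + sigma * eps.

    The randomness lives in eps ~ N(0, 1), which does not depend on the
    parameters, so z is a differentiable function of mu and log_var.
    """
    sigma = np.exp(0.5 * log_var)
    eps = rng.standard_normal(mu.shape)
    return mu + sigma * eps

mu = np.array([0.0, 1.0])
log_var = np.array([0.0, -2.0])
z = reparameterize(mu, log_var)
print(z.shape)  # (2,)
```

The open question is what the analogous parameter-free noise source would be for a quantum shot, whose outcome distribution is categorical rather than Gaussian.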

To add learnable noise to a quantum circuit, the standard noise channels (e.g., depolarizing, amplitude damping, phase damping) can be considered.

These noise channels are mainly used to model the noise of actual quantum hardware. To add noise for learning purposes, however, the noise must be designed so that it actually assists training, which requires repeated experiments adjusting its shape and magnitude. So, how can we apply a reparameterization trick to this noise distribution?
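One candidate trick from the deep learning literature (not something the original question settles) is the Gumbel-softmax relaxation, which turns a categorical draw, such as a measurement outcome sampled via the Born rule, into a differentiable soft sample. A minimal NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(42)

def gumbel_softmax(probs, temperature=0.5):
    """Differentiable surrogate for a categorical draw from `probs`.

    Adds Gumbel noise to the log-probabilities and applies a softmax;
    as temperature -> 0 the soft sample approaches a one-hot vector.
    """
    gumbel = -np.log(-np.log(rng.uniform(size=probs.shape)))
    logits = (np.log(probs) + gumbel) / temperature
    logits -= logits.max()          # numerical stability
    soft = np.exp(logits)
    return soft / soft.sum()

# Born-rule outcome probabilities of a 2-qubit state (|amplitude|^2)
amps = np.array([0.5, 0.5, 0.5, 0.5], dtype=complex)
probs = np.abs(amps) ** 2

sample = gumbel_softmax(probs)
print(np.isclose(sample.sum(), 1.0))  # True
```

As in the VAE case, the parameters enter only through `probs`, while the randomness comes from a parameter-free Gumbel source; whether this relaxation is faithful to hardware shot noise is exactly the open question.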

Enigma 3 - How to make a tighter bound of fidelity regularization?

There's the issue of fidelity regularization. When training a quantum circuit that uses the outputs of multiple layers, a fidelity regularizer helped achieve stable convergence. However, whether training succeeded at all was determined by the regularizer's hyperparameter and the initialization of the quantum state. Analyzing this through the PAC learning framework, the fidelity is bounded by 1. If this trivial bound is used directly, the analysis relies heavily on the decay rate of the learning rate and cannot explain the phenomenon above. Therefore, when training a quantum circuit that uses the outputs of multiple layers, a tighter bound (or an additional regularization) for the fidelity regularizer is needed.
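To make the bound concrete, here is a hypothetical form such a regularizer might take (the pairwise structure and the weight `lam` are assumptions for illustration, not the original formulation): penalize 1 − F between consecutive layer states, where F = |⟨ψ_l|ψ_{l+1}⟩|² is the fidelity of pure states.

```python
import numpy as np

def fidelity(psi, phi):
    """Fidelity |<psi|phi>|^2 between two pure states; always in [0, 1]."""
    return np.abs(np.vdot(psi, phi)) ** 2

def fidelity_regularizer(layer_states, lam=0.1):
    """Penalize dissimilarity between consecutive layer outputs.

    Each term 1 - F lies in [0, 1], so the sum is trivially bounded by
    lam * (len(layer_states) - 1); this is the loose bound in question.
    """
    return lam * sum(
        1.0 - fidelity(a, b)
        for a, b in zip(layer_states, layer_states[1:])
    )

# Two-layer example with normalized single-qubit states.
states = [
    np.array([1.0, 0.0], dtype=complex),
    np.array([np.cos(0.1), np.sin(0.1)], dtype=complex),
]
reg = fidelity_regularizer(states)
print(0.0 <= reg <= 0.1)  # True
```

The trivial bound ignores how close consecutive states actually stay during training, which is why it cannot explain the sensitivity to initialization and to `lam`.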

Enigma 4 - Why does the PVM convergence bound diverge? Yet, why is its performance better than when taking an expectation?

When building a classifier from a quantum circuit, there are two possible implementations. One approach applies a softmax function to the observable's expectation values; the other computes the probability of each outcome directly from the squared absolute value of the wave function, i.e., via a projection-valued measure (PVM). When calculating the convergence bound for these methods, both approaches have a constant term in the bound. However, once a logarithm is applied to compute the bound, the softmax method remains bounded while the PVM method diverges. Despite this, why does the PVM method outperform the softmax method in experiments?
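The divergence itself is easy to illustrate (this sketch only reproduces the bounding issue, not the convergence analysis): a softmax output is strictly positive, so its log is always finite, whereas a Born-rule probability can be exactly zero, making the log-likelihood unbounded.

```python
import numpy as np

def log_softmax(x):
    x = x - x.max()
    return x - np.log(np.exp(x).sum())

# Class scores from observables: softmax outputs are strictly positive,
# so every log-probability is finite.
scores = np.array([5.0, -20.0])
print(np.all(np.isfinite(log_softmax(scores))))  # True

# Born-rule (PVM) probabilities: an amplitude can be exactly zero, so the
# negative log-likelihood diverges and the log-bound has no finite constant.
amps = np.array([1.0, 0.0], dtype=complex)
probs = np.abs(amps) ** 2
with np.errstate(divide="ignore"):
    log_probs = np.log(probs)
print(np.isinf(log_probs[1]))  # True
```

This makes the empirical result all the more puzzling: the quantity with the worse worst-case bound is the one that trains better in practice.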