9.1 Invited lectures
Four lectures by Alan Aspuru-Guzik titled “Variational Circuits and Quantum Simulation”. They were very inspirational to me in suggesting that the human brain (as a collection of chemical molecules) could also be simulated, particularly the ideas around building up, or dropping, terms of the quantum operator. I re-organized his lectures into text and added comments in another document, “2019 Alan Aspuru-Guzik - Variational Circuits and Quantum Simulation.docx”. Alan shows how NISQ (noisy intermediate-scale quantum) devices can be useful for computing molecular energies. See https://quantumcomputingreport.com/our-take/applying-moores-law-to-quantum-qubits/ for an idea of when useful quantum computers may be available, and https://www.nature.com/articles/d41586-019-02936-3 for “Beyond quantum supremacy: the hunt for useful quantum computers”.
9.2 Encoding Classical Information
The following table summarizes four ways of encoding classical information into a quantum system.
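As a small illustration of my own (not part of the course material), here is a numpy sketch of two of the commonly listed encodings, basis encoding and amplitude encoding; the example vectors and variable names are my assumptions:

import numpy as np

# Basis encoding: a bit string such as 101 is mapped directly to the
# computational-basis state |101>, i.e. a one-hot vector of length 2^n.
bits = [1, 0, 1]
index = int("".join(str(b) for b in bits), 2)   # binary 101 = 5
basis_state = np.zeros(2 ** len(bits))
basis_state[index] = 1.0

# Amplitude encoding: a real data vector is normalized and its entries
# become the amplitudes of a quantum state (length must be a power of 2).
data = np.array([0.5, 1.5, 2.0, 1.0])
amplitude_state = data / np.linalg.norm(data)

print(basis_state)       # |101> as a one-hot vector
print(amplitude_state)   # normalized amplitudes, squares sum to 1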
9.5 Assignment 09_Discrete_Optimization_and_Ensemble_Learning
This assignment is somewhat steep for me, since I have never had any formal training in classical machine learning. It seems many experts in this QML class can manage to ask great questions, even finding several deficiencies in the pre-assignment material.
An ensemble combines multiple classical neural-net models. This can be done with classical AdaBoost, but a quantum algorithm can be used to improve on the AdaBoost prediction. The referenced Neven 2008 paper certainly shows great research effort: these Google researchers demonstrated their theory by running QBoost, with binary weights over the weak learners, on a quantum computer to get a more accurate prediction result. The theory cleverly transforms a classical loss function into a quantum Ising model, showing that quantum computing is worthwhile here.
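To make the loss-to-Ising mapping concrete, here is a minimal sketch of my own (not the assignment's code) of how a QBoost-style squared loss over binary weak-learner weights can be written as a QUBO matrix; the function name and the regularization parameter lam are assumptions:

import numpy as np

def qboost_qubo(predictions, labels, lam=0.1):
    """Build a QUBO matrix for QBoost-style selection of weak learners.

    predictions: (K, N) array of weak-learner outputs in {-1, +1}
    labels:      (N,)  array of true labels in {-1, +1}
    lam:         regularization strength penalizing ensemble size
    """
    K, _ = predictions.shape
    h = predictions / K                       # scale outputs by 1/K
    Q = np.zeros((K, K))
    for k in range(K):
        for l in range(K):
            # squared-loss cross terms between weak learners k and l
            Q[k, l] += np.dot(h[k], h[l])
    for k in range(K):
        # linear terms: correlation with the labels plus sparsity penalty
        Q[k, k] += lam - 2.0 * np.dot(h[k], labels)
    return Q

# Toy usage: three weak learners, four samples (made-up values).
preds = np.array([[ 1,  1, -1, -1],
                  [ 1, -1, -1,  1],
                  [ 1,  1,  1, -1]])
y = np.array([1, 1, -1, -1])
Q = qboost_qubo(preds, y)

The binary weight vector w minimizing w^T Q w selects the ensemble, and substituting w = (1 + s)/2 with spins s in {-1, +1} turns this QUBO into the Ising model that the quantum hardware samples.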
9.5.1 MLP classifier
The pre-assignment material uses “Perceptron” (from sklearn.linear_model) and “SVC” (from sklearn.svm), while the formal assignment uses the MLP (multi-layer perceptron) classifier. The page https://scikit-learn.org/stable/modules/neural_networks_supervised.html documents sklearn's MLPClassifier. MLP can be regularized through the alpha parameter, which sets the L2 regularization strength and helps avoid overfitting by penalizing weights with large magnitudes. For Model 1, the alpha value 1e-05 is scientific notation for 10^-5, i.e. very weak regularization, and is the lowest value used. For Model 2, I used an alpha value of 10 to pass the exercise check. Here is the code:
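A minimal sketch of how the two models might be set up with sklearn's MLPClassifier, assuming toy data; the data names, hidden-layer size, and iteration count are my own illustrative choices, not necessarily the assignment's exact code:

import numpy as np
from sklearn.neural_network import MLPClassifier

# Toy data standing in for the assignment's training set (assumed names).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 2))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)

# Model 1: very weak L2 regularization (alpha = 1e-05, i.e. 10^-5).
model_1 = MLPClassifier(hidden_layer_sizes=(10,), alpha=1e-05,
                        max_iter=1000, random_state=0)
model_1.fit(X_train, y_train)

# Model 2: strong L2 regularization (alpha = 10), which shrinks the weights.
model_2 = MLPClassifier(hidden_layer_sizes=(10,), alpha=10,
                        max_iter=1000, random_state=0)
model_2.fit(X_train, y_train)

print(model_1.score(X_train, y_train), model_2.score(X_train, y_train))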