The program is implemented in Google Colaboratory, also known as Colab. This is a cloud-hosted computing environment from Google that lets users write, run, and share Python programs. Because it runs in the cloud, Colab is accessible from anywhere with an internet connection and a web browser, so users do not have to install software or configure complicated environments on their local machines. This also makes it easier to collaborate and share work on projects. One of the most useful features of Colab is its integration with other Google services, such as Google Drive. Users can quickly move data from Google Drive into Colab, which makes it easy to work with data in the cloud.
Google Colab Official Website: https://colab.google/
Youtube Video for Google Colab: https://www.youtube.com/watch?v=RLYoEyIHL6A
Getting Started with Google Colab:
Importing Libraries: The code starts by importing the necessary libraries: PennyLane (pennylane), PennyLane's wrapped NumPy (pennylane.numpy, imported as np), and Matplotlib (matplotlib.pyplot).
# Import the necessary libraries
import pennylane as qml
from pennylane import numpy as np
import matplotlib.pyplot as plt
Creating Training and Test Data: This part is tested with several of the available mathematical functions: the square function, the cosine function, and a polynomial function. Creating the training and test data works the same way for all of them; only the specific mathematical function needs to be defined properly.
Square Function: The code defines the training and test data for the square function f(x) = x^2. It creates five input datapoints (X) evenly spaced from -1 to 1 and the five corresponding squared outputs (Y). Similarly, it creates five test datapoints (X_test), slightly shifted from the training data, and computes their squared outputs (Y_test).
# Create the training and test data for the square function
X = np.linspace(-1, 1, 5) # 5 input datapoints from -1 to 1
X.requires_grad = False
Y = X**2 # The outputs for the input datapoints
X_test = np.linspace(-1.2, 1.2, 5) # 5 test datapoints, shifted from the training data
Y_test = X_test**2 # The outputs for the test datapoints
Cosine Function: The training data set is made up of five input datapoints (X) evenly spaced from 0 to 2π, representing angles in radians. These inputs are fixed and do not need to be optimised. The corresponding output values (Y) are generated by evaluating the cosine function at each input datapoint. The test data, on the other hand, consists of five test datapoints (X_test) produced by slightly shifting the training data to check generalisation. Similarly, the test output values (Y_test) are calculated by evaluating the cosine function at the test datapoints.
# Create the training and test Data for the cosine function
X = np.linspace(0, 2*np.pi, 5) # 5 input datapoints from 0 to 2pi
X.requires_grad = False
Y = np.cos(X) # The outputs for the input datapoints
X_test = np.linspace(0.2, 2*np.pi+0.2, 5) # 5 test datapoints, shifted from the training data
Y_test = np.cos(X_test) # The outputs for the test datapoints
Polynomial Function: For this case, a separate polynomial function needs to be defined before the training and test data are created. Following that, the code creates five input datapoints (X) uniformly spaced from -1 to 1, representing the polynomial function's input values. The corresponding outputs (Y) are computed by evaluating the polynomial function at each input datapoint. In addition, to evaluate the model's performance, five test datapoints (X_test) are generated that deviate slightly from the training data, and the polynomial function is used to compute the outputs (Y_test) for these test datapoints.
# Define the polynomial function f(x) = 2x^3 - x^2 + 3x - 1
def polynomial(x):
    return 2 * x**3 - x**2 + 3 * x - 1
# Create the training and test data for the polynomial function
X = np.linspace(-1, 1, 5) # 5 input datapoints from -1 to 1
X.requires_grad = False
Y = polynomial(X) # The outputs for the input datapoints
X_test = np.linspace(-1.2, 1.2, 5) # 5 test datapoints, shifted from the training data
Y_test = polynomial(X_test) # The outputs for the test datapoints
Creating Quantum Device and Defining Quantum Circuit: The code creates a quantum device using qml.device('default.qubit', wires=1), which defines a single-qubit quantum device. The @qml.qnode decorator is used to define a quantum circuit that takes a datapoint and a set of parameters as inputs. Inside the quantum circuit, the input data is encoded with an RX rotation, a rotation gate (qml.Rot) is applied based on the angles in params, and the expectation value of the Pauli-Z observable (qml.PauliZ) is measured. The @qml.qnode decorator makes it possible to run the quantum circuit on the quantum device (dev) defined earlier.
# Setting up Quantum Device
# Created the quantum device with 1 qubit
dev = qml.device('default.qubit', wires=1)
# Create the quantum circuit
@qml.qnode(dev)  # decorator binding the circuit to the device
def quantum_circuit(datapoint, params):
    # Encode the input data using an RX rotation:
    # applies an RX gate to the qubit, encoding the input datapoint into the quantum state
    qml.RX(datapoint, wires=0)
    # Create a rotation based on the angles in "params":
    # applies a rotation gate using three parameters learned during the optimization process
    qml.Rot(params[0], params[1], params[2], wires=0)
    # Return the expectation value of a measurement along the Z axis:
    # the expected value of measuring the Pauli-Z observable on the qubit
    return qml.expval(qml.PauliZ(wires=0))
Defining Loss Function and Cost Function: The code defines a loss function (loss_func) that measures the squared loss between the predicted outputs and the target outputs. It iterates over each datapoint, compares the quantum circuit's prediction to the desired output, and computes the loss as the squared difference. The function returns the sum of all the losses. The code also defines a cost function (cost_fn) that uses the quantum circuit (quantum_circuit) and the loss function (loss_func) to compute the total cost. It evaluates the quantum circuit at each point in the training set and passes the predictions to the loss function, and the cost is obtained by summing all the losses.
# Define the loss function (custom loss function)
def loss_func(predictions):  # calculates the total loss, with predictions as inputs
    total_losses = 0  # initialize total loss
    for i in range(len(Y)):
        output = Y[i]
        prediction = predictions[i]
        loss = (prediction - output)**2  # squared error for the current datapoint
        total_losses += loss  # accumulate the squared errors
    return total_losses
# Define the cost function
def cost_fn(params):  # returns the total cost (loss) for a given set of parameters
    predictions = [quantum_circuit(x, params) for x in X]
    cost = loss_func(predictions)
    return cost
Defining the Optimizer and Parameters: The code creates a gradient descent optimizer (opt) with a step size of 0.3. Gradient descent iteratively updates the parameters (params) to make the cost function as small as possible. The parameters are initialised with an initial guess; they set the angles used in the rotation gate of the quantum circuit. Because the requires_grad=True flag is set, the optimizer will update these values during the optimisation process.
# Define the optimizer
# Gradient Descent Optimizer with stepsize 0.3
opt = qml.GradientDescentOptimizer(stepsize=0.3)
# Make an initial guess for the parameters
params = np.array([0.1, 0.1, 0.1], requires_grad=True)
# Optimization loop
# Parameters are updated using GradientDescentOptimizer
# Loop performs optimization to find the best parameters to minimize the loss
for i in range(100):
    params, prev_cost = opt.step_and_cost(cost_fn, params)
    if i % 10 == 0:
        print(f'Step = {i} Cost = {cost_fn(params)}')
Manual Gradient Descent Loop: In this code, the built-in qml.GradientDescentOptimizer is replaced by a manual update rule: the gradient of the cost function is computed with qml.grad, and the parameters are updated by subtracting learning_rate times the gradient. The number of optimisation steps is now set by the num_steps variable.
# Optimization loop
# Manual gradient descent optimization loop
num_steps = 100
prev_cost = float('inf')
learning_rate = 0.3
for i in range(num_steps):
    predictions = [quantum_circuit(x, params) for x in X]
    cost = loss_func(predictions)
    # Compute the gradient of the cost function
    grad_cost = qml.grad(cost_fn)(params)
    # Update the parameters using gradient descent
    params -= learning_rate * grad_cost
    if i % 10 == 0:
        print(f'Step = {i} Cost = {cost}')
Cost function results for the various mathematical functions are shown below, both when using the GradientDescentOptimizer and when using the manual loop without it.
Results by Using Optimizer: After optimisation, the code evaluates the quantum circuit with the optimised parameters (params) at each test datapoint (X_test). It then uses Matplotlib to plot blue squares for the training outputs (X, Y), red circles for the test outputs (X_test, Y_test), and black crosses for the test predictions (X_test, test_predictions).
# Test and graph the results
test_predictions = [quantum_circuit(x_test, params) for x_test in X_test]
fig = plt.figure()
ax1 = fig.add_subplot(111)
ax1.scatter(X, Y, s=30, c='b', marker="s", label='Train outputs')
ax1.scatter(X_test, Y_test, s=60, c='r', marker="o", label='Test outputs')
ax1.scatter(X_test, test_predictions, s=30, c='k', marker="x", label='Test predictions')
plt.xlabel("Inputs")
plt.ylabel("Outputs")
plt.title("QML results")
plt.legend(loc='upper right')
plt.show()