HELLO WORLD PROGRAM IN COLAB: IMPLEMENTATION
Importing Libraries: The code starts by importing the necessary libraries used to build and train the Quantum Machine Learning model. These include PennyLane (pennylane) for constructing quantum circuits, PennyLane’s NumPy (qml.numpy) for differentiable arrays, Matplotlib (matplotlib.pyplot) for plotting results, Seaborn (seaborn) for loading a real-world dataset, and Scikit-learn tools for preprocessing and splitting the data.
# Import the necessary libraries
import pennylane as qml
from pennylane import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
Loading and Preparing the Dataset: In this Hello World example, we load a real dataset (the penguins dataset) using Seaborn. After removing missing values, we extract one numerical feature (bill_length_mm). The data is then standardized to have a mean of 0 and a variance of 1, which improves optimization stability during training.
# Load dataset and remove missing values
penguins = sns.load_dataset("penguins").dropna()
# Extract one numerical feature
X_full = penguins[["bill_length_mm"]].values
# Standardize the feature
scaler = StandardScaler()
X_full = scaler.fit_transform(X_full).flatten()
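As a quick sanity check, the sketch below verifies the effect of standardization on a small set of illustrative values (the sample numbers are made up for demonstration, not taken from the penguins dataset): after `StandardScaler`, the data has mean 0 and standard deviation 1.

```python
# Sanity-check sketch: standardized data has mean ~0 and std ~1.
# The sample values below are illustrative, not real penguin measurements.
import numpy as np
from sklearn.preprocessing import StandardScaler

data = np.array([[39.1], [39.5], [40.3], [36.7], [38.9]])
scaled = StandardScaler().fit_transform(data).flatten()

print(round(scaled.mean(), 6))  # 0.0
print(round(scaled.std(), 6))   # 1.0
```

Note that `StandardScaler` divides by the population standard deviation, so `scaled.std()` (which also defaults to the population form) comes out exactly 1.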
Creating Training and Test Data: The dataset is split into training and testing sets. Shuffling is enabled to ensure the data is randomly mixed before splitting. This prevents ordered patterns from biasing the learning process and improves convergence during optimization.
# Split into training and testing sets with shuffling
X_train, X_test = train_test_split(
    X_full, test_size=0.3, random_state=42, shuffle=True
)
X = np.array(X_train)
X.requires_grad = False
X_test = np.array(X_test)
Square Function: The code defines the outputs using the Square function f(x) = x^2. For each training datapoint, the squared value is computed. Since the quantum circuit outputs values in the range [-1, 1], the outputs are scaled to match this range. The same scaling is applied to the test data.
# Square function outputs
Y = X**2
# Scale outputs to match quantum circuit range [-1, 1]
y_max = np.max(np.abs(Y))
Y = Y / y_max
Y_test = (X_test**2) / y_max
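The scaling step can be checked in isolation. The sketch below (using illustrative inputs rather than the actual standardized feature) confirms that dividing by the maximum absolute value bounds the targets to [-1, 1], the range of a Pauli-Z expectation value.

```python
# Sketch: dividing by the maximum absolute value bounds the targets
# to [-1, 1], matching the range of a Pauli-Z expectation value.
# The inputs here are illustrative stand-ins for the standardized feature.
import numpy as np

X = np.linspace(-3.0, 3.0, 7)
Y = X**2
y_max = np.max(np.abs(Y))
Y = Y / y_max

print(Y.min() >= -1.0 and Y.max() <= 1.0)  # True
```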
Cosine Function: In this case, the outputs are generated using the Cosine function. Each training datapoint is passed through the cosine function to compute its corresponding output. The outputs are then scaled to remain within the quantum circuit’s measurable range.
# Cosine function outputs
Y = np.cos(X)
y_max = np.max(np.abs(Y))
Y = Y / y_max
Y_test = np.cos(X_test) / y_max
For the polynomial example, the function must first be defined. The polynomial used is
f(x) = 2x^3 − x^2 + 3x − 1
After defining the function, training outputs are computed and scaled similarly to the previous functions. The same polynomial is used to compute the test outputs.
# Define the polynomial function
def polynomial(x):
    return 2*x**3 - x**2 + 3*x - 1
# Polynomial outputs
Y = polynomial(X)
y_max = np.max(np.abs(Y))
Y = Y / y_max
Y_test = polynomial(X_test) / y_max
Creating a Quantum Device and Defining a Quantum Circuit: The code creates a single-qubit quantum device using PennyLane’s default simulator. The @qml.qnode decorator is used to define the quantum circuit. Each data point is encoded in the quantum state via an RX rotation. A trainable rotation gate (qml.Rot) applies three learnable parameters. Finally, the expectation value of the Pauli-Z operator is measured to produce the model’s prediction.
# Setting up Quantum Device
dev = qml.device('default.qubit', wires=1)
# Create the quantum circuit
@qml.qnode(dev)
def quantum_circuit(datapoint, params):
    # Encode input data
    qml.RX(datapoint, wires=0)
    # Trainable rotation layer
    qml.Rot(params[0], params[1], params[2], wires=0)
    # Measurement
    return qml.expval(qml.PauliZ(wires=0))
Defining Loss Function and Cost Function: The loss function calculates the squared error between predicted outputs and actual outputs. The cost function evaluates the quantum circuit across all training datapoints and computes the total loss.
# Define the loss function (Custom Loss function)
def loss_func(predictions):
    total_losses = 0
    for i in range(len(Y)):
        output = Y[i]
        prediction = predictions[i]
        loss = (prediction - output)**2
        total_losses += loss
    return total_losses
# Define the cost function
def cost_fn(params):
    predictions = [quantum_circuit(x, params) for x in X]
    cost = loss_func(predictions)
    return cost
Evaluating the Baseline Cost (No Optimization): Before any training, the parameters are initialized and the cost function is evaluated once to establish a baseline against which the optimized result can be compared.
params = np.array([0.1, 0.1, 0.1], requires_grad=True)
print("Initial Cost =", cost_fn(params))
Using GradientDescentOptimizer (With Shuffling):
Shuffling is introduced here to:
- Prevent ordered bias
- Improve convergence
- Improve generalization
- Stabilize gradient updates