Learning Objectives
Explain the main idea of Support Vector Machines (SVM).
Describe the margin, support vectors, and the role of the kernel function.
Explain how Quantum SVM (QSVM) replaces the classical kernel with a quantum kernel.
Understand that both SVM and QSVM use the same optimization idea but different ways to measure similarity between data points.
1. What is SVM?
Support Vector Machine (SVM) is a classical machine learning model for classification.
Each data point is written as a feature vector x ∈ ℝᵈ with a label y ∈ {-1, +1}.
SVM tries to find a decision function f(x) = wᵀx + b such that the two classes are separated as well as possible. The sign of f(x) gives the prediction:
if f(x) ≥ 0, predict class +1
if f(x) < 0, predict class -1
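As a minimal sketch of this rule (with a hypothetical weight vector w and bias b standing in for a trained model), the prediction step looks like this:

```python
import numpy as np

# Hypothetical trained parameters, for illustration only.
w = np.array([2.0, -1.0])
b = 0.5

def predict(x):
    """Decision rule: the sign of f(x) = w.x + b."""
    f = np.dot(w, x) + b
    return +1 if f >= 0 else -1

print(predict(np.array([1.0, 1.0])))   # f(x) = 1.5  -> +1
print(predict(np.array([-1.0, 1.0])))  # f(x) = -2.5 -> -1
```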
The key idea is to find a hyperplane that maximizes the margin, i.e., the distance between the hyperplane and the closest points from each class. In the hard-margin case, this can be written as the optimization problem

minimize (1/2)‖w‖² over w and b, subject to yᵢ(wᵀxᵢ + b) ≥ 1 for all training points i.
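To make this concrete, here is a minimal sketch using scikit-learn (an assumed choice; any SVM solver works), where a very large regularization constant C approximates the hard-margin problem on a toy separable dataset:

```python
import numpy as np
from sklearn.svm import SVC

# Toy linearly separable data with labels in {-1, +1}.
X = np.array([[0.0, 0.0], [1.0, 0.0], [3.0, 3.0], [4.0, 3.0]])
y = np.array([-1, -1, +1, +1])

# A very large C approximates the hard-margin problem.
clf = SVC(kernel="linear", C=1e6)
clf.fit(X, y)

print("support vectors:\n", clf.support_vectors_)     # the closest points
print("w =", clf.coef_[0], "b =", clf.intercept_[0])  # the hyperplane
print("margin width =", 2.0 / np.linalg.norm(clf.coef_[0]))
```

The points reported in support_vectors_ are exactly the ones sitting on the margin; they alone determine the hyperplane.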
2. The kernel trick
Many real-world datasets are not linearly separable in the original feature space. To handle this, SVM uses a kernel function

K(x, x′) = ⟨ϕ(x), ϕ(x′)⟩,

where ϕ(x) maps the data into a high-dimensional feature space.
Common choices include:
Linear kernel: K(x, x′) = xᵀx′
Polynomial kernel: K(x, x′) = (xᵀx′ + c)ᵖ
Radial Basis Function (RBF) kernel: K(x, x′) = exp(−γ‖x − x′‖²)
The important point is that we never need to compute ϕ(x) directly; SVM only needs the kernel values K(xᵢ, xⱼ). This is called the kernel trick.
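A minimal sketch of the kernel trick in practice (again assuming scikit-learn): we hand the solver a precomputed matrix of kernel values and never materialize ϕ(x):

```python
import numpy as np
from sklearn.svm import SVC

def rbf_kernel(A, B, gamma=1.0):
    """K(x, x') = exp(-gamma * ||x - x'||^2) for all pairs of rows."""
    sq = (np.sum(A**2, axis=1)[:, None]
          + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T)
    return np.exp(-gamma * sq)

rng = np.random.default_rng(0)
X_train = rng.normal(size=(20, 2))
# XOR-like labels: not linearly separable in the original space.
y_train = np.where(X_train[:, 0] * X_train[:, 1] > 0, 1, -1)

# The solver only ever sees kernel values K(x_i, x_j).
clf = SVC(kernel="precomputed")
clf.fit(rbf_kernel(X_train, X_train), y_train)

X_test = rng.normal(size=(5, 2))
print(clf.predict(rbf_kernel(X_test, X_train)))  # K(x_test, x_train)
```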
3. What is QSVM?
Quantum Support Vector Machine (QSVM) keeps the same SVM optimization framework, but uses a quantum kernel instead of a classical kernel.
We first encode each classical data point x into a quantum state |ψ(x)⟩.
Then we define the quantum kernel as a similarity between two quantum states:

K(x, x′) = |⟨ψ(x)|ψ(x′)⟩|²,

the fidelity (squared overlap) between the encoded states.
This kernel is estimated by running a quantum circuit on a simulator or quantum device.
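On a statevector simulator, a single kernel entry can be computed exactly. The sketch below assumes a deliberately simple angle-encoding feature map (one qubit per feature); real QSVM implementations typically use richer, entangling feature maps:

```python
import numpy as np

def feature_state(x):
    """Angle encoding: |psi(x)> = tensor_j (cos(x_j/2)|0> + sin(x_j/2)|1>)."""
    state = np.array([1.0])
    for xj in x:
        state = np.kron(state, np.array([np.cos(xj / 2), np.sin(xj / 2)]))
    return state

def quantum_kernel(x1, x2):
    """Fidelity kernel K(x, x') = |<psi(x)|psi(x')>|^2."""
    return abs(np.vdot(feature_state(x1), feature_state(x2))) ** 2

print(quantum_kernel(np.array([0.3, 1.2]), np.array([0.3, 1.2])))  # 1.0
print(quantum_kernel(np.array([0.3, 1.2]), np.array([2.5, 0.1])))  # < 1.0
```

On real hardware the same quantity is estimated statistically, e.g. by preparing |ψ(x)⟩, applying the inverse of the circuit that prepares |ψ(x′)⟩, and measuring the frequency of the all-zeros outcome.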
Once the full kernel matrix is built from these pairwise estimates, we plug it into the standard SVM solver, just like a classical kernel.
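Putting it together, here is a minimal end-to-end sketch (same assumed angle-encoding feature map as above, with scikit-learn as the classical solver):

```python
import numpy as np
from sklearn.svm import SVC

def feature_state(x):
    """Angle-encoding feature map, as in the previous sketch."""
    state = np.array([1.0])
    for xj in x:
        state = np.kron(state, np.array([np.cos(xj / 2), np.sin(xj / 2)]))
    return state

def kernel_matrix(A, B):
    """Gram matrix K[i, j] = |<psi(A_i)|psi(B_j)>|^2."""
    return np.array([[abs(np.vdot(feature_state(a), feature_state(b))) ** 2
                      for b in B] for a in A])

rng = np.random.default_rng(1)
X_train = rng.uniform(0, np.pi, size=(16, 2))
y_train = np.where(X_train[:, 0] > X_train[:, 1], 1, -1)  # toy labels

# Exactly the classical pipeline: a precomputed Gram matrix fed to the solver.
qsvm = SVC(kernel="precomputed")
qsvm.fit(kernel_matrix(X_train, X_train), y_train)

X_test = rng.uniform(0, np.pi, size=(4, 2))
print(qsvm.predict(kernel_matrix(X_test, X_train)))
```

The only difference from the classical precomputed-kernel example is where the numbers in the Gram matrix come from.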