$$\sum_{i=1}^n \sum_{j=1}^n c_i c_j\, k(x_i,x_j) \geq 0$$
Positive definite kernels have many properties that make them well suited to approximation. For example, each such kernel induces a reproducing kernel Hilbert space (RKHS), a Hilbert space of functions in which every point evaluation can be written as an inner product with a translate of the kernel (the reproducing property). Moreover, strictly positive definite kernels satisfy an interpolation property: for any finite set of distinct points x_1,...,x_n in X and any values f_1,...,f_n, there exists a unique function s in the span of k(.,x_1),...,k(.,x_n) such that s(x_i) = f_i for all i = 1,...,n.
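The positive definiteness condition above can be checked numerically: for any set of points, the kernel matrix of a positive definite kernel has no negative eigenvalues. A minimal NumPy sketch (the function name and parameter values here are our own, not from the text), using the Gaussian kernel:

```python
import numpy as np

def gaussian_kernel(x, y, gamma=0.5):
    """Gaussian kernel k(x, y) = exp(-gamma * (x - y)^2) in one dimension."""
    return np.exp(-gamma * (x - y) ** 2)

rng = np.random.default_rng(0)
x = rng.random(20)                            # 20 random points in [0, 1]
K = gaussian_kernel(x[:, None], x[None, :])   # kernel matrix K_ij = k(x_i, x_j)

eigvals = np.linalg.eigvalsh(K)               # eigenvalues of the symmetric matrix K
print(eigvals.min() >= -1e-8)                 # True: no (numerically) negative eigenvalues
```

Up to roundoff, the smallest eigenvalue is nonnegative, which is exactly the quadratic-form condition stated above.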
Kernel-based approximation methods use kernels to construct approximations of functions or solutions of differential equations. The basic idea is to choose a set of points x_1,...,x_n in X, called centers or nodes, together with coefficients c_1,...,c_n, and to form a linear combination of kernel evaluations:
$$s(x) = \sum_{i=1}^n c_i k(x,x_i)$$
This function s is called a kernel interpolant or a radial basis function (RBF) interpolant, and it belongs to the RKHS induced by the kernel. The coefficients c_1,...,c_n are determined from data f_1,...,f_n in various ways: for interpolation, one samples the target function at the centers and solves the linear system Kc = f involving the kernel matrix K = [k(x_i,x_j)]_{i,j=1,...,n}; alternatively, one can minimize some error measure between s and the target function.
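As a concrete illustration of this setup, the following NumPy sketch (the kernel, parameter values, and target function are our own illustrative choices) samples a target at the centers, solves the interpolation system for the coefficients, and verifies that the interpolant reproduces the data:

```python
import numpy as np

def gaussian_kernel(x, y, gamma=100.0):
    return np.exp(-gamma * (x - y) ** 2)

n = 10
x = np.linspace(0, 1, n)                      # centers
f = np.sin(2 * np.pi * x)                     # data: target sampled at the centers

K = gaussian_kernel(x[:, None], x[None, :])   # kernel matrix K_ij = k(x_i, x_j)
c = np.linalg.solve(K, f)                     # coefficients of s(x) = sum_j c_j k(x, x_j)

print(np.allclose(K @ c, f))                  # True: s matches the data at every center
```

The final check is the interpolation property in action: evaluating s at the centers is exactly the product Kc, which equals the data f by construction.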
How to implement kernel-based approximation methods using MATLAB?
MATLAB is a popular programming language and environment for numerical computing and visualization. It provides many built-in functions and toolboxes for various tasks, such as matrix manipulation, optimization, statistics, machine learning, and symbolic computation. MATLAB also supports user-defined functions and scripts that can be easily written and executed.
To implement kernel-based approximation methods using MATLAB, we need to define the kernel function k and the centers x_1,...,x_n. We can use either predefined kernels or custom kernels. Some examples of predefined kernels are:
Gaussian kernel: $$k(x,y) = \exp(-\gamma \|x-y\|^2)$$
Multiquadric kernel: $$k(x,y) = \sqrt{\|x-y\|^2 + c^2}$$
Thin-plate spline kernel: $$k(x,y) = \|x-y\|^2 \log \|x-y\|$$
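For readers without MATLAB, the three kernels above can be written directly as functions. A one-dimensional NumPy sketch (default parameter values are illustrative; the thin-plate spline needs special handling at zero distance, where r^2 log r has limit 0):

```python
import numpy as np

def gaussian(x, y, gamma=0.5):
    return np.exp(-gamma * np.abs(x - y) ** 2)

def multiquadric(x, y, c=1.0):
    return np.sqrt(np.abs(x - y) ** 2 + c ** 2)

def thin_plate_spline(x, y):
    r2 = np.abs(x - y) ** 2
    # r^2 log r = 0.5 * r^2 log(r^2); the limit as r -> 0 is 0
    return np.where(r2 > 0, 0.5 * r2 * np.log(np.where(r2 > 0, r2, 1.0)), 0.0)

print(gaussian(0.0, 0.0))           # 1.0: the Gaussian equals 1 at zero distance
print(multiquadric(0.0, 0.0))       # 1.0: the multiquadric equals c at zero distance
print(thin_plate_spline(1.0, 0.0))  # 0.0: r^2 log r vanishes at r = 1
```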
We can use a helper function `kernel` (not a built-in of base MATLAB; such a function would come from a kernel-approximation toolbox or be user-defined) to create a predefined kernel object with specified parameters. For example,
k = kernel('gaussian',0.5); % create a Gaussian kernel with gamma = 0.5
k = kernel('multiquadric',1); % create a multiquadric kernel with c = 1
k = kernel('thinplatespline'); % create a thin-plate spline kernel
We can also define our own custom kernels using anonymous functions or function handles. For example,
k = @(x,y) sin(x+y); % create a custom kernel using an anonymous function
k = @mykernel; % create a custom kernel using a function handle
where `mykernel` is a user-defined function that takes two arguments x and y and returns the kernel value k(x,y). Note that an arbitrary custom function is not automatically positive definite; the theory above applies only when it is.
Once we have the kernel function k, we need to choose the centers x_1,...,x_n. We can either generate them randomly or use some existing data points. For example,
n = 100; % number of centers
x = rand(n,1); % generate n random centers in [0,1]
x = linspace(0,1,n)'; % generate n equispaced centers in [0,1]
x = data(:,1); % use the first column of data as centers
After we have the kernel function k and the centers x_1,...,x_n, we can construct the kernel matrix K = [k(x_i,x_j)]_{i,j=1,...,n} using a helper function `kernelmatrix` (again, not part of base MATLAB). For example,
K = kernelmatrix(k,x); % construct the kernel matrix using a predefined or custom kernel object
K = kernelmatrix('gaussian',x,x,0.5); % construct the kernel matrix using a predefined kernel name and parameters
The next step is to obtain the data f_1,...,f_n that determine the coefficients of the kernel interpolant. There are several ways to do this, depending on the problem we want to solve. For example,
If we want to approximate a given function f on X, we can sample f at the centers and use the sampled values as data. For example,
f = @(x) sin(2*pi*x); % define the target function
y = f(x); % sample f at the centers
If we want to solve a boundary value problem of the form $$-u''(x) + u(x) = f(x), \quad u(0) = u(1) = 0,$$ we can use the method of collocation. Strictly speaking, the collocation matrix must contain the differential operator applied to the kernel at the interior points; the snippet below simplifies this and only illustrates assembling the right-hand side and solving a linear system. For example,
f = @(x) exp(x); % define the right-hand side function
b = f(x); % evaluate f at the centers
b([1 n]) = 0; % apply the boundary conditions
y = K \ b; % solve the linear system Ky = b
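To show what a full collocation solve looks like with the operator applied to the kernel, here is a minimal nonsymmetric (Kansa-style) collocation sketch in NumPy for -u'' + u = f with a Gaussian kernel. The manufactured solution, shape parameter, and node count are our own illustrative choices, not from the text:

```python
import numpy as np

gamma = 2500.0
k   = lambda x, y: np.exp(-gamma * (x - y) ** 2)
# second derivative in x: d^2/dx^2 k(x,y) = (4*gamma^2*(x-y)^2 - 2*gamma) * k(x,y)
kxx = lambda x, y: (4 * gamma**2 * (x - y) ** 2 - 2 * gamma) * k(x, y)

n = 60
x = np.linspace(0, 1, n)                          # collocation points = centers
u_exact = lambda x: np.sin(np.pi * x)             # manufactured solution with u(0)=u(1)=0
f = lambda x: (np.pi**2 + 1) * np.sin(np.pi * x)  # f = -u'' + u for that solution

# PDE rows at the interior points, plain kernel rows on the boundary
A = -kxx(x[:, None], x[None, :]) + k(x[:, None], x[None, :])
b = f(x)
A[0, :]  = k(x[0],  x)   # enforce u(0) = 0
A[-1, :] = k(x[-1], x)   # enforce u(1) = 0
b[0] = b[-1] = 0.0

c = np.linalg.solve(A, b)                 # coefficients of u_h(x) = sum_j c_j k(x, x_j)
u_h = k(x[:, None], x[None, :]) @ c       # numerical solution evaluated at the nodes
print(float(np.max(np.abs(u_h - u_exact(x)))))   # maximum nodal error
```

The key difference from the simplified snippet above is the matrix: interior rows apply the operator -d^2/dx^2 + 1 to the kernel, while boundary rows use plain kernel values so that the boundary conditions are enforced on u_h itself.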
If we want to fit a statistical model to some noisy data, we can use the method of least squares and minimize the sum of squared errors between the kernel interpolant and the data. For example,
x = data(:,1); % use the first column of data as centers
y = data(:,2); % use the second column of data as values
y = y + 0.1*randn(n,1); % add some noise to y
y = (K + 0.01*eye(n)) \ y; % solve the regularized system (K + lambda*I)c = y with lambda = 0.01; the coefficients c overwrite y
Finally, we can evaluate the kernel interpolant s at any point x in X using a helper function `kernelvalue` (not part of base MATLAB). For example,
x_new = 0.5; % define a new point
s_new = kernelvalue(k,x,y,x_new); % evaluate s at x_new using a predefined or custom kernel object
s_new = kernelvalue('gaussian',x,y,x_new,0.5); % evaluate s at x_new using a predefined kernel name and parameters
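Under the hood, this evaluation amounts to forming a rectangular kernel matrix between the evaluation points and the centers and multiplying it by the coefficient vector. A NumPy sketch (function and variable names are ours):

```python
import numpy as np

gamma = 100.0
k = lambda x, y: np.exp(-gamma * (x - y) ** 2)

n = 10
x = np.linspace(0, 1, n)                            # centers
f = np.sin(2 * np.pi * x)                           # data
c = np.linalg.solve(k(x[:, None], x[None, :]), f)   # interpolation coefficients

x_new = np.array([x[5], 0.3])                       # one center and one new point
K_eval = k(x_new[:, None], x[None, :])              # rectangular (m x n) kernel matrix
s_new = K_eval @ c                                  # s(x_new) = K_eval c
print(np.isclose(s_new[0], f[5]))                   # True: s reproduces the data at a center
```

Evaluating at a center recovers the original data value exactly, while new points are filled in by the kernel combination.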
Conclusion
In this article, we have introduced some basic concepts and properties of kernel-based approximation methods, and shown how to implement them using MATLAB. We have seen that kernels are positive definite functions that measure similarity or correlation between points, and that they induce a reproducing kernel Hilbert space of functions that can be evaluated by inner products with the kernel. We have also seen that kernel-based approximation methods are techniques that use kernels to construct approximations of functions or solutions of differential equations by forming linear combinations of kernel evaluations. We have demonstrated how to define kernels and centers, how to construct kernel matrices and obtain data, and how to evaluate kernel interpolants using MATLAB.
For more details and examples of kernel-based approximation methods using MATLAB, we recommend reading the book Kernel-based Approximation Methods Using MATLAB by Gregory E. Fasshauer and Michael J. McCourt, or visiting the website of the MATLAB Kernel-Based Approximation Toolbox. We hope that this article has inspired you to explore the power and beauty of kernel-based approximation methods using MATLAB.