Deconvolution

Estimating the Hubble constant from gravitational lensing requires high-resolution imaging. One of the mathematical methods used to obtain it is deconvolution, a method which helps to recover and separate the signals of individual sources.

Simply put, deconvolution is a mathematical method for improving the resolution of astronomical images.

Theory

Astronomical images are obtained with telescopes of finite size and CCD cameras of finite pixel size, so in principle every observed point object has an image of finite size. The finite image size of a point source seen through a telescope has two basic origins.

First, a point source seen through a telescope has an angular size inversely proportional to the diameter of the primary mirror; since no telescope has an infinitely large mirror, it will not give a point image.
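As an illustration (the numbers below are my own, not from the original text), the diffraction-limited angular resolution of a circular aperture of diameter D at wavelength \lambda is

\theta \approx 1.22 \frac{\lambda}{D},

so a D = 2 m mirror observing at \lambda = 550 nm resolves at best \theta \approx 1.22 \times 550 \times 10^{-9} / 2 \approx 3.4 \times 10^{-7} rad, i.e. about 0.07 arcsec.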

Second, if the telescope is ground-based, the image is additionally blurred by turbulent motion in the Earth's atmosphere.

There are two ways of improving the resolution of an image. One is technical (adaptive optics, space telescopes); the other is numerical, namely image processing (see Magain et al. 1998). Since I had no influence on the technical part of the observations, we will focus on image processing.

Mathematically speaking, the observed image is a convolution of the real light distribution with the so-called "total blurring function", or point spread function (PSF).

So the image equation has form:

d(x) = t(x) * f(x) + n(x)

where:

d(x) - the observed light distribution,

f(x) - the original light distribution,

t(x) - the total point spread function (PSF),

n(x) - the measurement errors affecting the data.
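As a minimal sketch of this equation (my own illustration, assuming NumPy and SciPy; the grid size, source positions, PSF width and noise level are all hypothetical), an observed image can be simulated as follows:

import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)

# Original light distribution f(x): two point sources on a 32x32 grid.
f = np.zeros((32, 32))
f[10, 10] = 1000.0   # intensity of source 1
f[22, 24] = 400.0    # intensity of source 2

# Total PSF t(x): a normalized Gaussian of width 2.5 pixels.
y, x = np.mgrid[-7:8, -7:8]
t = np.exp(-(x**2 + y**2) / (2 * 2.5**2))
t /= t.sum()

# Observed image: d(x) = t(x) * f(x) + n(x).
n = rng.normal(0.0, 1.0, f.shape)              # measurement noise n(x)
d = fftconvolve(f, t, mode="same") + n

The later sketches in this section reuse d and t from this simulation.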

As mentioned above, the finite pixel size of a CCD camera also has an influence on the image.

From the mathematical point of view, this means that the functions f(x), d(x), t(x), n(x) are sampled at discrete points.

The goal of deconvolution is, given an image (a sampled vector of the function d(x)) and a PSF (a sampled vector of the function t(x)), to obtain the original light distribution.

Deconvolution is an inverse problem which, in the presence of noise, has no unique solution.

The method for choosing the one solution of interest is minimization.

This method is simply based on minimizing the difference between the data and a mathematical model. It can be improved by putting restrictions on the result, for example the positivity of the signal, so that solutions with negative intensities are rejected.
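A minimal sketch of such a constrained fit (continuing the simulation above; the optimizer choice and the per-pixel noise level sigma are my own assumptions) could look like this:

from scipy.optimize import minimize

sigma = 1.0  # assumed per-pixel noise standard deviation

def chi2(f_flat):
    # Difference between the data and the convolved model, in units of sigma.
    model = fftconvolve(f_flat.reshape(d.shape), t, mode="same")
    return np.sum(((d - model) / sigma) ** 2)

# Positivity restriction: every pixel of the solution must be >= 0.
bounds = [(0.0, None)] * d.size
result = minimize(chi2, x0=np.clip(d, 0.0, None).ravel(),
                  bounds=bounds, method="L-BFGS-B")
f_rec = result.x.reshape(d.shape)   # recovered light distribution

This brute-force version is slow for large images; it is shown only to illustrate the principle of minimization under a positivity restriction.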

The general deconvolution theory described above has several realizations; most of them suffer from two basic problems: they produce artifacts in the vicinity of the objects, and the ratio of the intensities of the stars is not conserved.

Both problems are caused by a violation of the sampling theorem: a fully deconvolved point source approaches a Dirac delta function, which cannot be correctly sampled on a finite pixel grid.

One solution to these problems is, instead of using the function t(x) in the equation, to use a narrower function s(x), so that the deconvolved image has its own PSF, r(x).

The functions are related as follows: t(x) = r(x) * s(x).

The function s(x) ensures that the sampling theorem is not violated; additionally, the image shape of a point source is then precisely known: it is r(x).
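For Gaussian profiles this decomposition can be written down explicitly, since convolving Gaussians adds their variances. A small sketch (the widths sigma_t and sigma_r are my own hypothetical choices, continuing the code above):

# Decomposition t = r * s for Gaussians: sigma_t^2 = sigma_r^2 + sigma_s^2.
sigma_t, sigma_r = 2.5, 1.5
sigma_s = np.sqrt(sigma_t**2 - sigma_r**2)

def gaussian(sigma, half=7):
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return g / g.sum()

r = gaussian(sigma_r)                     # PSF of the deconvolved image
s = gaussian(sigma_s)                     # narrower function used in the equation
t_check = fftconvolve(r, s, mode="same")  # approximates t up to truncated tails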

Now the original light distribution can be written as:

f(x) = h(x) + \sum_{k=1}^{M} a_k r(x - c_k)

where:

M -- the number of point sources,

a_k -- the intensities of the point sources,

c_k -- the coordinates of the point sources,

h(x) -- the extended component (the background).
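A sketch of how such a model image can be built numerically (sub-pixel positions handled by spline interpolation; all positions and intensities below are hypothetical, and r and the grid come from the previous sketches):

from scipy.ndimage import shift

def place_psf(shape, r, cy, cx):
    # Return an image of the given shape containing r centred at (cy, cx).
    img = np.zeros(shape)
    ry, rx = r.shape
    img[:ry, :rx] = r                        # r starts centred at (ry//2, rx//2)
    return shift(img, (cy - ry // 2, cx - rx // 2), order=3)

h = np.full((32, 32), 0.5)                   # extended component h(x)
sources = [((10.3, 10.7), 950.0),            # (c_k, a_k) for k = 1
           ((22.1, 24.4), 380.0)]            # (c_k, a_k) for k = 2

f_model = h.copy()
for (cy, cx), a in sources:
    f_model += a * place_psf(h.shape, r, cy, cx)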

A smoothing term, given by the equation below, is often added to deconvolution algorithms:

H(f_1, ..., f_N) = \sum_{i=1}^{N} \lambda_i (h_i - \sum_{j=1}^{N} r_{ij} h_j)^2.

It ensures that the calculated residuals are statistically distributed with the correct standard deviation in any subpart of the image.

The normalized residual image (observed data minus convolved model, divided by \sigma) is smoothed with a function, so that the value of each pixel is replaced by a weighted mean over its vicinity. The \lambda_i are adjusted so that the residuals are close to 1 everywhere.
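A compact sketch of this smoothing term and of the residual check (the weights lam and the choice of r as the smoothing kernel are my own illustrative assumptions; h, r, d, t, f_model and sigma come from the sketches above):

def smoothing_penalty(h, kernel, lam):
    # H = sum_i lam_i * (h_i - sum_j r_ij h_j)^2, with r_ij realized as a kernel.
    h_smooth = fftconvolve(h, kernel, mode="same")
    return np.sum(lam * (h - h_smooth) ** 2)

lam = np.full(h.shape, 0.1)                  # weights lambda_i (to be tuned)
H = smoothing_penalty(h, r, lam)

# Normalized residuals, smoothed over a local vicinity; lam would then be
# adjusted until this map is close to 1 everywhere.
residual = (d - fftconvolve(f_model, t, mode="same")) / sigma
residual_map = fftconvolve(residual ** 2, r, mode="same")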

Application