# grid resampling

If a picture is not in HD resolution, we tend to think it is not good enough.

But what happens in science, or in other fields, when all we have is a binned image and we want to recover more information from it? Keep reading and you'll see.

All the code for the image processing and resampling is available on my GitHub.

1. starting from a good-resolution image, we will degrade it to create a low-res version
2. we will create two rectangular grids: one with high resolution, the other with low resolution
3. using the low-res image and the two grids, we will apply some mathematics to increase the image resolution
4. we will compare our reconstructed high-resolution image against the original high-resolution image

Using an image of the Andromeda galaxy (the closest spiral galaxy, about 2.5 million light-years away), we split it into the three color channels composing the colored version: red, green, and blue (RGB). From them we see that different features have different power in each channel.
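Splitting an image into its bands can be sketched as follows. A small random image stands in for the real Andromeda photo (substitute `Image.open(...)` for your own file; the filename is hypothetical):

```python
import numpy as np
from PIL import Image

# Synthetic stand-in for the real photo; in practice use
# Image.open("andromeda.jpg").convert("RGB")
rng = np.random.default_rng(0)
arr = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
img = Image.fromarray(arr, mode="RGB")

# Split into the three color bands; each is a single-channel image
r, g, b = img.split()
print(r.size, r.mode)
```

Each band can then be processed independently as a grayscale array.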

We then decrease the resolution by a factor of 4: we average each 4x4 block of pixels into a single pixel, leaving 1/4 of the pixels in each direction. We see that, in fact, the small structures are diffused and lost. To do this we simply use the routine `PIL.Image.resize()`.

Now the mathematics comes into the game.

We have two images: the original at high resolution, and the sub-sampled one at lower resolution. We need to create two grids to act as fixed points superposed on the low-res image: these points will allow us to calculate an interpolation that generates new values, increasing the resolution.

How many new points do we want to create inside the low-res image? As many as needed to reach the number of pixels in the original high-res image.

Why do we want to match the number of pixels in the original high-res image? So that we can compare the original image against the image reconstructed from the low-res version.

Each low-res pixel will be divided into 16 sub-pixels (4x4), as the image below shows.
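The two grids can be sketched in one dimension with `numpy` (in 2D, the same construction on both axes turns each low-res pixel into 4x4 = 16 sub-pixels); the sizes here are illustrative:

```python
import numpy as np

# Coarse grid: one coordinate per low-res pixel (8 points, illustrative)
n_lo = 8
lo_coords = np.arange(n_lo)

# Fine grid: 4 samples per low-res pixel, spanning the same coordinate
# range, so the interpolator never has to extrapolate
n_hi = 4 * n_lo
hi_coords = np.linspace(0, n_lo - 1, n_hi)

print(lo_coords.shape, hi_coords.shape)
```

The fine grid shares its endpoints with the coarse one; only interior points are new.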

With the finer grid, we use an interpolator to get the values at the newly generated positions, using the `scipy` method for grid interpolation:
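A minimal sketch of this step, assuming `scipy.interpolate.RegularGridInterpolator` (one reasonable choice; the notebook may use a different routine), with random data standing in for a real low-res band:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Stand-in for a sub-sampled band (use your real low-res data instead)
rng = np.random.default_rng(2)
lo = rng.random((16, 16))

# Coarse grid: pixel-index coordinates of the low-res image
rows = np.arange(lo.shape[0])
cols = np.arange(lo.shape[1])
interp = RegularGridInterpolator((rows, cols), lo, method="linear")

# Fine grid: 4x as many samples per axis, spanning the same range
fr = np.linspace(0, lo.shape[0] - 1, 4 * lo.shape[0])
fc = np.linspace(0, lo.shape[1] - 1, 4 * lo.shape[1])
rr, cc = np.meshgrid(fr, fc, indexing="ij")

# Evaluate at every fine-grid position: (N, M, 2) points in, (N, M) out
hi = interp(np.stack([rr, cc], axis=-1))
print(hi.shape)
```

The interpolated array has 4x the resolution per axis, matching the original high-res pixel count.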

The reconstructed image has the same number of pixels as the original high-res one. Differences at small scales pop out, but the large-scale structure is preserved!

We can calculate the difference band by band and check where our approach is less effective:
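The comparison can be sketched as a per-band residual map plus a summary statistic; random arrays stand in for the real original and reconstructed bands:

```python
import numpy as np

# Stand-in data: "reconstructed" is the original plus small noise;
# in practice these are the true high-res and interpolated RGB arrays
rng = np.random.default_rng(3)
original = rng.random((64, 64, 3))
reconstructed = original + rng.normal(0, 0.01, size=(64, 64, 3))

# Residual map and mean absolute error for each color band
for band, name in enumerate("RGB"):
    diff = original[..., band] - reconstructed[..., band]
    print(name, float(np.abs(diff).mean()))
```

Plotting each `diff` map shows where the reconstruction loses information (typically around small, sharp features).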

Our algorithm fails at the smallest scales, mainly because the sub-sampled image has already lost these small-scale structures (the stars around the Andromeda galaxy). Otherwise, the reconstruction is powerful, and it can be improved by fine-tuning the parameters toward the features we most need to recover for the problem at hand.

Take home: even with only 1/16 of the points we can reconstruct (with high confidence) the original image!

Link to the Jupyter notebook used for the above: Github