Images are usually represented with 8-bit numbers, which offer a palette of 256 shades per channel. However, if one does not have that many colours at one's disposal, one can quantize with larger steps and use a smaller palette.
If quantizeSize is the width of each quantization bin, then an 8-bit colour value col is mapped to the centre of its bin using the transformation:
quantizedColour = np.floor(col/quantizeSize)*quantizeSize + int(quantizeSize/2)
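For instance, with quantizeSize = 64 the palette has four levels, and every 8-bit value maps to the centre of its bin. A quick sketch of the formula (the sample values are arbitrary):

```python
import numpy as np

quantizeSize = 64  # 256/64 = 4 palette colours
for col in (0, 100, 200, 255):
    # map col to the centre of its quantization bin
    quantizedColour = int(np.floor(col / quantizeSize)) * quantizeSize + quantizeSize // 2
    print(col, '->', quantizedColour)
# 0 -> 32, 100 -> 96, 200 -> 224, 255 -> 224
```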
Direct quantization using the method shown above yields bands and contours in the image, which do not look good. A simple way to mitigate this is to add some noise to the image before quantizing.
Far left: Original image
Left: The top row shows naive quantization, while the bottom row shows noise dithered quantization. Going left to right, 2, 4, 8, 16, 32 and 64 colours are used.
Plot showing MSE error vs size of quantization bin
As expected, the MSE increases as the quantization bin grows (or, equivalently, as the number of colours decreases). The plot is inconclusive as to whether naive or noisy quantization gives the lower MSE, but perceptually the noise-dithered quantization looks better.
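The MSE here is assumed to be the mean of the squared per-pixel differences between the original image and its quantized version; a minimal sketch:

```python
import numpy as np

def mse(original, quantized):
    # cast to float first so uint8 subtraction does not wrap around
    diff = original.astype('float64') - quantized.astype('float64')
    return np.mean(diff ** 2)
```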
import numpy as np
def quantizeImg(image, quantizeSize): #map every pixel to the centre of its quantization bin
    return (np.floor(image/quantizeSize)*quantizeSize + quantizeSize//2).astype('uint8')
def addNoise(img, noiseAmplitude): #add uniformly distributed integer noise, clipped to stay in [0, 255]
    noisy = img.astype('int16') + np.random.randint(-noiseAmplitude, noiseAmplitude + 1, img.shape)
    return np.clip(noisy, 0, 255).astype('uint8')
naive = quantizeImg(img, quantizeSize); noisy = quantizeImg(addNoise(img, quantizeSize//4), quantizeSize)
Error-diffusion quantization is a class of algorithms that produces superior results. The image is scanned in raster order and each pixel is quantized, but the quantization error is spread to the neighbouring pixels that have not yet been processed.
Floyd-Steinberg (FS) dithering is the most common and popular algorithm in this category. For the current pixel (denoted by 'X' below), it assigns 7/16 of the quantization error to the right neighbour, 5/16 to the pixel below, 3/16 to the bottom-left and 1/16 to the bottom-right, as shown below.
X 7/16
3/16 5/16 1/16
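To make the update concrete, here is one FS step on a hypothetical 2x2 float image with a 2-colour palette (quantizeSize = 128, so the bin centres are 64 and 192):

```python
import numpy as np

img = np.array([[100., 100.],
                [100., 100.]])  # working in float so fractional error can accumulate
quantizeSize = 128              # 2-colour palette with bin centres 64 and 192

q = np.floor(img[0, 0] / quantizeSize) * quantizeSize + quantizeSize // 2  # quantized value: 64.0
error = img[0, 0] - q           # 100 - 64 = 36
img[0, 0] = q
img[0, 1] += error * 7 / 16     # right neighbour: 100 + 15.75 = 115.75
img[1, 0] += error * 5 / 16     # pixel below:     100 + 11.25 = 111.25
img[1, 1] += error * 1 / 16     # bottom-right:    100 + 2.25  = 102.25
# (the bottom-left neighbour would get 3/16, but it falls outside this 2x2 example)
```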
Other algorithms in this class work similarly but use different weight matrices for the error diffusion. The weights are normalized (divided by their total) before use. For more details see here.
Jarvis, Judice, and Ninke (JJN)
X 7 5
3 5 7 5 3
1 3 5 3 1
Stucki (S)
X 8 4
2 4 8 4 2
1 2 4 2 1
Atkinson (A)
X 1 1
1 1 1
1
Burkes (B)
X 8 4
2 4 8 4 2
Sierra (Si1)
X 5 3
2 4 5 4 2
2 3 2
Two-Row Sierra (Si2)
X 4 3
1 2 3 2 1
Sierra Lite (Si3)
X 2
1 1
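The matrices above can be expressed as the (rowOffset, colOffset) -> weight dictionaries that the ditherByDiffusion function further below expects. This is my transcription (Sierra Lite uses the standard 2/1/1 weights), with an empty dictionary for naive quantization:

```python
# (rowOffset, colOffset): weight, relative to the current pixel 'X'
weightDicts = {
    'FS':  {(0,1):7, (1,-1):3, (1,0):5, (1,1):1},
    'JJN': {(0,1):7, (0,2):5,
            (1,-2):3, (1,-1):5, (1,0):7, (1,1):5, (1,2):3,
            (2,-2):1, (2,-1):3, (2,0):5, (2,1):3, (2,2):1},
    'S':   {(0,1):8, (0,2):4,
            (1,-2):2, (1,-1):4, (1,0):8, (1,1):4, (1,2):2,
            (2,-2):1, (2,-1):2, (2,0):4, (2,1):2, (2,2):1},
    'A':   {(0,1):1, (0,2):1, (1,-1):1, (1,0):1, (1,1):1, (2,0):1},
    'B':   {(0,1):8, (0,2):4, (1,-2):2, (1,-1):4, (1,0):8, (1,1):4, (1,2):2},
    'Si1': {(0,1):5, (0,2):3,
            (1,-2):2, (1,-1):4, (1,0):5, (1,1):4, (1,2):2,
            (2,-1):2, (2,0):3, (2,1):2},
    'Si2': {(0,1):4, (0,2):3, (1,-2):1, (1,-1):2, (1,0):3, (1,1):2, (1,2):1},
    'Si3': {(0,1):2, (1,-1):1, (1,0):1},
    'Naive': {},
}
```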
Left: Left to right shows results for A, B, FS, JJN, S, Si1, Si2, Si3 and Naive. Top to bottom shows 2, 4, 8, 16, 32 and 64 colours being used.
Bottom: The 2 colour quantization is shown zoomed in for the different algorithms
The last column shows the naive quantization technique, which produces the worst result. The error-diffusion techniques produce similar-looking results. To compare them quantitatively, MSE is used, as shown in the graphs below. Naive quantization yields the worst MSE but is the fastest. Floyd-Steinberg is fairly fast, while JJN and Stucki are the slowest, since their larger neighbourhoods require the most operations per pixel. JJN produces the best results visually.
MSE vs quantization bin size for different algorithms
Mean time of different algorithms
def clamp(x, low, high): return max(min(x, high), low) #clamp x between low and high
def inRange(x, low, high): return low <= x < high #check if x is in [low, high)
def ditherByDiffusion(image, wtDict, quantizeSize): #general routine for all error diffusion algorithms
    #wtDict is a dictionary with keys as relative (row, col) index of each neighbour and values as weights
    newimg = np.copy(image).astype('float64')
    wtDictNormalized = {k:wtDict[k]/sum(wtDict.values()) for k in wtDict} #ensure the weights sum to 1
    for row in range(image.shape[0]):
        for col in range(image.shape[1]):
            oldVal = clamp(newimg[row, col], 0, 255) #current value, including error diffused from earlier pixels
            newimg[row, col] = np.floor(oldVal/quantizeSize)*quantizeSize + int(quantizeSize/2) #quantize
            error = oldVal - newimg[row, col] #error of the diffused value, not of the original pixel
            for key in wtDictNormalized: #spread the error to the neighbours
                if inRange(row+key[0], 0, image.shape[0]) and inRange(col+key[1], 0, image.shape[1]): #stay inside the image border
                    newimg[row+key[0], col+key[1]] += error*wtDictNormalized[key]
    return np.floor(newimg).astype('uint8')
#Using ditherByDiffusion to perform Floyd-Steinberg with quantization bin size 128. Other algorithms can be implemented by passing a different weight dictionary; passing an empty dictionary gives naive quantization.
ditherByDiffusion(image, {(0,1):7, (1,-1):3, (1,0):5, (1,1):1}, 128)
Python 3 is used. Python 2 will likely not yield correct results, since the / operator performs truncating integer division there.