This page describes Pixel Grouping, an algorithm for Bayer color-filter array (CFA) demosaicing that I developed and implemented on April 3rd, 2003.  I rewrote this article in June 2010, but the algorithm and the discussion still reflect my understanding of the subject in 2003.  I am sure others have come up with better algorithms in the many years since the inception of Pixel Grouping, and you should keep in mind that this article in no way represents the current state of the art.


Digital cameras see things differently than we do.  To capture color images, many digital cameras use a color-filter array, which allows different photo sensors to capture light of different colors.  Unfortunately, every sensor can capture only light of one specific color, so the image that the camera sees is incomplete (illustrated below, click to see at 300% magnification).

The goal of a demosaic algorithm is to reconstruct a full-colored image from these partial results.

Algorithm Description

I designed Pixel Grouping by combining the designs of previous demosaic algorithms.  If you already know Variable Number of Gradients and Linear Color Correction I, then Pixel Grouping should be pretty easy to understand.  Even if you have never seen a demosaic algorithm before, the description should (hopefully) be clear enough so that you can follow it without much difficulty.

Algorithm Input

The input below illustrates the data produced from a Bayer color filter array, which I will use to specify the behavior of the Pixel Grouping algorithm.  The input provides one color value at each pixel.  For example, at pixel #8, only the Green value is known, and the Red and Blue values are unknown.  At pixel #13, only the Red value is known, and the Green and Blue values are unknown.  The goal of Pixel Grouping is to compute those unknown values, which are missing from the input data.
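As a concrete reference for the pixel numbering used below, here is a small sketch (my own notation, not the reference implementation) that maps each pixel number 1 through 25 in a 5×5 window centered on a Red pixel to the color its sensor captured:

```python
def bayer_color(n):
    """Return 'R', 'G', or 'B' for pixel number n (1-25) in a 5x5
    Bayer window centered on a Red pixel.  Rows alternate
    R G R G R and G B G B G, as in the input described above."""
    row, col = divmod(n - 1, 5)       # 0-based row and column
    if row % 2 == 0:                  # R G R G R rows
        return "R" if col % 2 == 0 else "G"
    else:                             # G B G B G rows
        return "B" if col % 2 == 1 else "G"
```

For example, `bayer_color(13)` is `"R"` and `bayer_color(8)` is `"G"`, matching the two pixels discussed above.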

Computing Green Values

Pixel Grouping works in two phases.  First, it computes all the unknown Green values.  Then, it uses the input data, along with the Green values computed in the first phase, to compute all the missing Red and Blue values.  I now use the (unknown) G13 value as an example to illustrate how Pixel Grouping computes an unknown Green value.

To compute the Green value G13, Pixel Grouping first estimates the luminance gradients (ΔN, ΔE, ΔW, ΔS) in the four directions (North, East, West, South) from pixel #13.

ΔN = | R13 − R3 | × 2 + | G8 − G18 |
ΔE = | R13 − R15 | × 2 + | G12 − G14 |
ΔW = | R13 − R11 | × 2 + | G12 − G14 |
ΔS = | R13 − R23 | × 2 + | G8 − G18 |

The vertical bars denote the operator for computing an absolute value.  For example, | −3 | = | 3 | = 3.  Compare the four gradient estimates; the formula for G13 depends on which gradient is the smallest.

If ΔN is the smallest, G13 = (G8 × 3 + G18 + R13 − R3) / 4
If ΔE is the smallest, G13 = (G14 × 3 + G12 + R13 − R15) / 4
If ΔW is the smallest, G13 = (G12 × 3 + G14 + R13 − R11) / 4
If ΔS is the smallest, G13 = (G18 × 3 + G8 + R13 − R23) / 4
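The two steps above (gradient estimation, then interpolation along the smallest gradient) can be transcribed directly into code. The sketch below is my own illustration, with the pixel values passed in by name following the numbering in the text:

```python
def green_at_red(R3, R11, R13, R15, R23, G8, G12, G14, G18):
    """Estimate the unknown Green value at a Red pixel (G13 above)."""
    # Luminance gradients in the four directions from pixel #13.
    dN = abs(R13 - R3) * 2 + abs(G8 - G18)
    dE = abs(R13 - R15) * 2 + abs(G12 - G14)
    dW = abs(R13 - R11) * 2 + abs(G12 - G14)
    dS = abs(R13 - R23) * 2 + abs(G8 - G18)
    # Interpolate along the direction of the smallest gradient.
    d = min(dN, dE, dW, dS)
    if d == dN:
        return (G8 * 3 + G18 + R13 - R3) / 4
    elif d == dE:
        return (G14 * 3 + G12 + R13 - R15) / 4
    elif d == dW:
        return (G12 * 3 + G14 + R13 - R11) / 4
    else:
        return (G18 * 3 + G8 + R13 - R23) / 4
```

On a uniform patch (all Red values equal, all Green values equal) every gradient is zero and the function simply returns the shared Green value, as one would hope.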

The algorithm for computing the Green value at a Blue pixel is analogous.

Computing Red and Blue Values at Green Pixels

I now use the (unknown) B8 and R8 values (both at pixel #8) to illustrate how Pixel Grouping computes unknown Red and Blue values at a Green pixel.  The computation uses an auxiliary function called hue_transit, which I define as follows:

function hue_transit (l1, l2, l3, v1, v3) =
if (l1 < l2 and l2 < l3) or (l1 > l2 and l2 > l3)
then v1 + (v3 − v1) × (l2 − l1) / (l3 − l1)
else (v1 + v3) / 2 + (l2 × 2 − l1 − l3) / 4

The hue_transit function returns different values depending on whether the l1, l2, and l3 values indicate a slope (then branch), a ridge (else branch), or a valley (else branch).  Pixel Grouping uses hue_transit to compute B8 and R8 as follows:

B8 = hue_transit(G7, G8, G9, B7, B9)
R8 = hue_transit(G3, G8, G13, R3, R13)
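A direct Python transcription of hue_transit and of the two computations above (my sketch, not the reference implementation; variable names follow the pixel numbering in the text):

```python
def hue_transit(l1, l2, l3, v1, v3):
    # Slope: l2 lies strictly between l1 and l3, so interpolate
    # v proportionally along the luminance change.
    if (l1 < l2 < l3) or (l1 > l2 > l3):
        return v1 + (v3 - v1) * (l2 - l1) / (l3 - l1)
    # Ridge or valley: average the endpoints and add a correction
    # for how far l2 rises above (or dips below) them.
    return (v1 + v3) / 2 + (l2 * 2 - l1 - l3) / 4

# Unknown Blue and Red values at the Green pixel #8.  G3 and G13
# are Green values computed in the first phase.
def blue_at_green(G7, G8, G9, B7, B9):
    return hue_transit(G7, G8, G9, B7, B9)

def red_at_green(G3, G8, G13, R3, R13):
    return hue_transit(G3, G8, G13, R3, R13)
```

For instance, on a monotone slope `hue_transit(10, 20, 30, 100, 200)` interpolates to 150, while on the ridge `hue_transit(10, 30, 20, 100, 200)` the else branch applies instead.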

Computing Blue Values at Red Pixels

I now use the (unknown) B13 value at pixel #13 to illustrate how Pixel Grouping computes an unknown Blue value at a Red pixel.  First, estimate the luminance gradients in the NE-SW direction and in the NW-SE direction as follows:

ΔNE = | B9 − B17 | + | R5 − R13 | + | R13 − R21 | + | G9 − G13 | + | G13 − G17 |
ΔNW = | B7 − B19 | + | R1 − R13 | + | R13 − R25 | + | G7 − G13 | + | G13 − G19 |

Compare the two gradient estimates; the formula for B13 depends on which gradient is smaller.

If ΔNE is smaller, B13 = hue_transit(G9, G13, G17, B9, B17)
If ΔNW is smaller, B13 = hue_transit(G7, G13, G19, B7, B19)
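The diagonal case can be sketched in code as well (my own illustration; hue_transit is the same auxiliary function defined earlier, repeated here so the sketch is self-contained, and G13 is the Green value computed in the first phase):

```python
def hue_transit(l1, l2, l3, v1, v3):
    # Same auxiliary function as defined above.
    if (l1 < l2 < l3) or (l1 > l2 > l3):
        return v1 + (v3 - v1) * (l2 - l1) / (l3 - l1)
    return (v1 + v3) / 2 + (l2 * 2 - l1 - l3) / 4

def blue_at_red(B7, B9, B17, B19, R1, R5, R13, R21, R25,
                G7, G9, G13, G17, G19):
    """Estimate the unknown Blue value at the Red pixel #13."""
    # Luminance gradients along the two diagonals.
    dNE = (abs(B9 - B17) + abs(R5 - R13) + abs(R13 - R21)
           + abs(G9 - G13) + abs(G13 - G17))
    dNW = (abs(B7 - B19) + abs(R1 - R13) + abs(R13 - R25)
           + abs(G7 - G13) + abs(G13 - G19))
    # Interpolate along the diagonal with the smaller gradient.
    if dNE < dNW:
        return hue_transit(G9, G13, G17, B9, B17)
    return hue_transit(G7, G13, G19, B7, B19)
```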

Computing Red Values at Blue Pixels

Pixel Grouping computes a Red value at a Blue pixel with the same algorithm that it uses to compute a Blue value at a Red pixel, so I will not repeat the description here.


I performed a very rough evaluation, which applies Pixel Grouping to a test image and compares the results with the output from two other demosaic algorithms.
  • Variable Number of Gradients (top row)
  • Linear Color Correction I (middle row)
  • Pixel Grouping (bottom row)
Click on the image to see the image crops at 300% magnification.


I designed Pixel Grouping specifically to process artificial scenes with large areas of uniform colors that are separated by clear boundaries, such as the first four scenes in the evaluation.  As a result, Pixel Grouping works best with images of printed matter or manufactured objects that do not have too much fine-grained texture.  Below, I discuss three types of artifacts that are present in the evaluation images.

The zipper artifact (shown above) affects Variable Number of Gradients and Linear Color Correction I.  This artifact does not affect Pixel Grouping.  The zipper artifact is caused by inconsistencies between different parts of a demosaic algorithm.  If an algorithm computes unknown values at Green pixels in a way that is inconsistent with how it computes unknown values at Red and Blue pixels, the output image will contain residual structures from the Bayer color-filter array pattern.

The isolated-dots artifact (shown above) affects Variable Number of Gradients and Linear Color Correction I.  This artifact does not affect Pixel Grouping.  The isolated-dots artifact is caused by misguided efforts of an algorithm to produce sharp edges, which can overcompensate and produce bright (or dark) dots on the other side of a sharp edge.

The color inconsistency artifact (shown above) affects all three algorithms in the evaluation.  A correct rendering of the scene should produce a uniform metallic grey that differs only in brightness.  The color inconsistency artifact is caused by a drastic change in luminance over a small area (less than 3×3 pixels), which makes it difficult for algorithms to determine the correct hue of each pixel.

Note that this list of three artifacts is by no means exhaustive, and it is probably unavoidable that every demosaic algorithm will produce some artifact in some specific circumstances.  Hopefully, with more research, new demosaic algorithms will produce fewer artifacts in common images, or change the nature of the artifacts so that they are less visible to the human eye.


Wikipedia has excellent articles on color-filter arrays and demosaicing.
ImageCooker is a RAW image processing program for Konica Minolta digital cameras.
The reference implementation of Pixel Grouping is a patch to ImageCooker v0.3.

I released the reference implementation under the GNU General Public License Version 2.


I want to thank Ting Chen for his survey of demosaicing algorithms, which introduced me to the field.  The survey, unfortunately, seems to have disappeared from the web (as of June 2010).  I would like to thank Jeremy Rosenberger for developing ImageCooker and releasing it under an open source license.  His clean and well-organized style of programming allowed me to develop the reference implementation of Pixel Grouping in a very short amount of time.  Dalibor Jelínek provided lots of technical assistance, and his analysis of Pixel Grouping helped me understand how it behaves in many different situations.  Finally, I must thank my parents for buying me a DiMAGE 7i digital camera.  Without that camera, none of this would have happened in the first place.