Digital Photography Noise

Types of Noise

 

         Low light photography supplies the conditions for potentially noisy images. Indeed, there is no escape from the problem, because the two practical circumstances that create the most noise are high ISO settings and long exposures.

 

         Substituting handheld shooting for locked-down shooting, or vice versa, simply exchanges one cause of noise for another. And noise is a problem because, while it invites comparison with film grain, it is aesthetically much less agreeable. The aesthetics of imagery are, of course, largely a matter of taste, but to date no photographer to my knowledge has chosen to use noise in a deliberate and positive way. With film, on the other hand, many photographers used the grain pattern of high-speed emulsions, or enlargements of sections of film, for texture and grittiness.

 

         To understand why this is so, and also to help decide what actions to take in suppressing and removing noise, the purpose of these pages is to explain how noise originates and to examine its appearance in detail. Both film grain and noise are artifacts, but they have different causes. The graininess of film is related to granularity, the random visible texture in a processed photograph that comes from the black silver grains from which the image is constructed (in black-and-white film). Fast films achieve their high sensitivity partly by increasing the size of the light-sensitive grains. The image in processed black-and-white film is in fact a mosaic of black silver grain clumps in a clear matrix, and so grain clumping is integral to the image. It is for this reason that film grain has usually been tolerated and even exploited for its graphic effect. One of the very few useful effects of noise is in suppressing discretization artifacts such as banding. However, even this is better achieved, with a more agreeable result, by adding Gaussian noise, which has a clustered appearance.

 

         With digital noise, the situation is different as in principle digital images are “clean.” In practice, at low sensor sensitivity (ISO 100 to 200) and with plenty of light, the results are also clean. The appearance of noise in an image can be considered a question of the efficiency of the sensor and the way the signal is processed. A close analogy is audible noise on a sound signal—crackling over a piece of music. Put another way, a digital photograph with less noise is more true to the original image projected by the lens, while a film photograph with less visible graininess simply has a tighter structure. The availability of numerous software filters to add the effect of film grain attests to its appeal, despite the dubious underlying logic.

 

         There are several causes, and therefore several types, of digital noise, but fundamentally it comes down to sampling error. Photons of light arrive randomly, and the job of each photosite in a sensor is to convert them into electrons, which are then counted at the end of the exposure in the readout. Clearly, the more photons that fall across the sensor, the lower the sampling error will be. When it is darker, relatively few photons strike, so each one has a proportionally greater effect on the photosite it hits. The upshot is that in shadow areas there is less chance that the photosites will make an accurate representation. This kind of noise, sometimes called “photon noise,” may not be a problem if those deep shadow areas remain very dark, but opening them up in post-production will reveal it.
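         The statistics behind this are easy to demonstrate. The following sketch (my own illustration, not from the text; the photon counts are invented) models photon arrival as a Poisson process and shows that the relative error at a photosite shrinks as the photon count rises, which is exactly why shadows are noisier than highlights:

```python
import numpy as np

rng = np.random.default_rng(0)

def relative_noise(mean_photons, samples=100_000):
    """Std-dev of the photon count divided by its mean (i.e. 1/SNR)."""
    counts = rng.poisson(mean_photons, samples)
    return counts.std() / counts.mean()

shadow = relative_noise(25)       # dark area: few photons per exposure
highlight = relative_noise(2500)  # bright area: many photons

print(f"shadow noise:    {shadow:.3f}")     # roughly 1/sqrt(25)   = 0.20
print(f"highlight noise: {highlight:.3f}")  # roughly 1/sqrt(2500) = 0.02
```

A hundred times more photons gives ten times less relative noise, which is why both solutions discussed next aim at collecting more light.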

 

         There are two ways of improving the photon count from dark areas. One is to increase the sensitivity of the sensor, the other to leave the sensor exposed to the light for longer so that more photons strike and a better average can be made. These two solutions, as you can see, are why the second half of this book is divided into two chapters: handheld for higher sensitivity and locked-down for longer exposures. But each of these introduces another kind of noise.

 

           

 

         DETAIL AND NOISE

 

         The blue box indicates an area of sharp focus and relative light where it is important to check for noise.

 

         Increasing the sensitivity of a sensor is done by increasing the gain of the readout amplifier. This is what happens when you dial up the ISO setting, and as in any electrical system, more amplifier gain means more noise, like static in a radio signal. Indeed, amplifier noise is the main culprit in the readout. This kind of noise is sometimes called “amp noise” or “readout noise” or “bias noise.” Unlike photon noise, it is repeatable and predictable. As the only significant variable is the amplifier gain, this kind of noise can be treated by making a “noise profile” of a particular camera at a specific ISO setting, which we’ll see in action here.
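         Because readout noise depends almost entirely on amplifier gain, a profile can be built once per camera and ISO setting from flat test frames. This toy sketch (my own illustration; the base noise figure and the gain model are invented for demonstration) shows the idea of measuring such a profile:

```python
import numpy as np

rng = np.random.default_rng(1)
READ_NOISE_BASE = 2.0  # hypothetical read noise (in levels) at ISO 100

def simulated_flat_frame(iso, size=(256, 256), signal=100.0):
    """A uniform gray frame with readout noise that scales with gain."""
    gain = iso / 100.0
    read_noise = rng.normal(0.0, READ_NOISE_BASE * gain, size)
    return signal + read_noise

def noise_profile(isos):
    """Measure the noise std-dev of a flat frame at each ISO setting."""
    return {iso: simulated_flat_frame(iso).std() for iso in isos}

profile = noise_profile([100, 400, 1600])
```

Because the measurement is repeatable, software like Noise Ninja can ship such profiles preloaded and simply look up the right one from the image's EXIF data.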

 

         A third kind of noise is known as “random noise,” which in a sense is a catch-all term for the unpredictable and unrepeatable differences in the timing and behavior of different electronic components. It tends to increase with fast signal processing, and it is difficult to predict how much it contributes to the overall total. Advanced cameras usually take some internal steps toward suppressing it.

 

         As mentioned above, increasing the exposure time is the other solution, but this brings into play another major contributor to noise, known as “dark noise.” This is caused by imperfections in the sensor, and also by the addition of electrons to the photosite generated by heat. In principle this can be reduced by cooling the sensor (as happens in astrophotography), but this is impractical in a normal camera. As a rough guide, a temperature increase of around 6–10°C doubles the amount of dark noise. Fortunately, this kind of noise is independent of the image, meaning that if you were to follow a long exposure with an identical second exposure with the lens cap on, you would have in the second frame an exact record of the same dark noise, which for this reason is also known as “fixed-pattern noise.” This is the principle of “dark frame subtraction” which advanced cameras use. Dark noise also accumulates predictably over time, so that an exposure twice as long will have more or less twice the amount of noise. Practically, this is an issue for tripod photography rather than handheld, but it seems more convenient to include it here in this general overview of noise.
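         Dark frame subtraction itself is arithmetically trivial. A minimal sketch (assumed toy arrays rather than a real Raw file; the values are invented): because the dark noise is a fixed pattern, an identical lens-cap exposure records exactly that pattern, and subtracting it recovers the scene:

```python
import numpy as np

rng = np.random.default_rng(2)

fixed_pattern = rng.exponential(5.0, (4, 4))  # heat/hot-pixel buildup
scene = np.full((4, 4), 50.0)                 # the "true" image values

light_frame = scene + fixed_pattern  # normal long exposure
dark_frame = fixed_pattern.copy()    # same exposure time, lens cap on

corrected = light_frame - dark_frame  # fixed pattern cancels exactly
```

Note that this only removes the repeatable fixed-pattern component; photon noise and random noise remain and need the statistical methods described later.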

 

         In terms of appearance rather than cause, there are two main types of noise: luminance and chrominance. Luminance noise appears as tonal fluctuations, and is usually the most prominent. In digital imaging, color actually contributes relatively little to the content, and you can check this for yourself by taking a noisy image in Photoshop and converting the Mode to Lab (Image > Mode > Lab Color). Lab color space uses a luminance channel and two color channels: a (red/magenta to green) and b (blue to yellow). Zoom in to 100 percent on a noisy part of the image—a relatively featureless area such as sky or flesh. Go to Channels and select each of the channels in turn. You will see that most of the noise—indeed, most of the subject detail also—is in the Luminance channel. Chrominance noise is less prominent, though where it does exist it is usually more disagreeable. It is related to the Color Filter Array over the sensor, from which the color information of each pixel is interpolated; “wrong” pixel values caused by noise tend toward one of the color channels.
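         The same experiment can be approximated numerically. This sketch (my simplification: a Rec.601-style luma/chroma split rather than true Lab, with invented noise levels) models sensor noise as a large fluctuation shared by all three channels plus a small per-channel one, and confirms that the luminance channel carries most of the noise:

```python
import numpy as np

rng = np.random.default_rng(3)

# Flat gray patch: the same random fluctuation hits all three channels
# (luminance noise), plus a little independent per-channel noise
# (chrominance noise).
shared = rng.normal(0, 8.0, (64, 64, 1))       # luminance component
per_channel = rng.normal(0, 2.0, (64, 64, 3))  # chrominance component
patch = 128.0 + shared + per_channel

luma = patch @ np.array([0.299, 0.587, 0.114])  # weighted luminance
chroma_b = patch[..., 2] - luma                 # blue-yellow difference

print(luma.std(), chroma_b.std())  # luminance carries most of the noise
```

The shared fluctuation survives the weighted sum intact, while it cancels almost entirely in the color-difference channel, mirroring what you see in the Lab channels in Photoshop.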

 

         In addition to all of this, noise goes beyond physics and into perception. This is extremely important when it comes to the reduction or removal of noise, and goes back to the definition of noise as an artifact. Even if noise exists in an image, we have to recognize it as such, and how do the eye and brain do that? Noise is random texture (spatial variations of tone and color), and at 100 percent magnification the only way to distinguish it from real subject detail is perceptually. This becomes hugely complex, as the brain must use familiarity, context, and expectations to decide. What this means is that total noise removal from detailed areas of an image (where the level of detail is similar to that of the noise frequency) cannot be automated. In other words, all noise reduction needs your guidance as the user.

 

 Luminance noise

 

           

 

         Original

 

           

 

         luminance channel

 

           

 

         a channel

 

           

 

         b channel

 

 Artifacts

 

         Taken from its original meaning of a man-made object, an artifact in imaging is a visible data error introduced into the image by the process. It is thus separate in origin from the scene in front of the camera, and is a defect. Aesthetically there may be reasons for wanting to keep, alter, or enhance it, but from the point of view of the signal that generates the image it is an intruder. Both film grain and digital noise are artifacts. Analyzing the source of these artifacts for any particular image makes it possible to reduce their appearance without overly damaging the “true” image.

 

 

 Noise-Reduction Software

 

         So far, I’ve concentrated on how noise occurs, what it looks like, and the problems of detecting it. With all types of noise-reduction software, there are two principal steps.

 

         The first is detection, selecting those pixels considered to be rogue artifacts. The second, which we’ll now look at, is the procedure for replacing them. At pixel level, noise is not simply removed, because that would leave a pattern of “holes,” whatever their color. It has to be replaced with values that blend into the surroundings. The simplest way is to use a blurring filter, which has the effect of smearing the pixel values over a chosen radius. A Gaussian Blur filter is the standard, default means of digital blurring, although for noise removal it is not necessarily the best choice. It works by averaging across the selected radius and, in the case of noise, pixels that are very different in tone from their surroundings may not be blended in completely. A Median filter, however, works by selecting the most representative pixel in the group. So, for example, if in a selected area there are just black and white pixels, but slightly more of the black, a Gaussian filter will return a darker-than-average gray, while a Median filter will return black. The downside of a Median filter is that it can produce an artificial-looking “plastic” texture.
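         The black-and-white example above reduces to a one-line calculation. A toy version (my own illustration; real filters operate over a 2D neighborhood, but the arithmetic is the same):

```python
import numpy as np

# A patch of black (0) and white (255) pixels with slightly more black.
patch = np.array([0, 0, 0, 0, 0, 255, 255, 255, 255])  # 5 black, 4 white

# Averaging (what a Gaussian-style blur effectively does) invents a gray
# that matches no real pixel; the median picks the majority value.
gaussian_like = patch.mean()     # a darker-than-mid gray, ~113
median_value = np.median(patch)  # 0: pure black
```

The median's refusal to invent in-between values is what preserves edges, and also what produces its characteristic "plastic" look when applied too strongly.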

 

         In either case, blurring by its nature destroys detail and reduces resolution, one of the essential problems of noise reduction. There are other methods. One is frequency analysis, using the Fourier transform, in which the image is converted into frequencies that are then searched for structures that do not conform to standard representations of edges and texture patterns. This method is also flawed, though, as frequency analysis sometimes incorrectly assumes patterns when none are present in reality, and so overcorrects by introducing strange textures. A more advanced form of frequency analysis uses wavelet transform, which enables the noise reduction to adapt to different noise conditions in different areas of an image.
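         The core idea of the frequency approach can be sketched very simply (my simplification, not any particular product's algorithm; the image and cutoff radius are invented): noise lives mostly at high spatial frequencies, so suppressing those and transforming back smooths it away—along with fine detail, which lives there too:

```python
import numpy as np

rng = np.random.default_rng(4)

# A flat gray image contaminated with broadband noise.
image = np.full((32, 32), 100.0) + rng.normal(0, 10.0, (32, 32))

# Transform to the frequency domain and keep only low frequencies.
spectrum = np.fft.fftshift(np.fft.fft2(image))
y, x = np.ogrid[-16:16, -16:16]
keep = (x**2 + y**2) <= 6**2  # circular low-pass mask around DC

smoothed = np.fft.ifft2(np.fft.ifftshift(spectrum * keep)).real
```

The hard part, which real software adds on top, is deciding which high-frequency structures are noise and which are genuine edges and texture—a crude mask like this one cannot tell them apart.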

 

         As we saw before, noise is intimately related to de-mosaicing (the standard procedure that all Raw converters have to apply in order to reconstruct the colors from the Bayer filter over the camera’s sensor). So, better ways of de-mosaicing, meaning more sophisticated algorithms, can significantly reduce noise by doing a better job of identifying it. A good example of this is the latest version of Lightroom, in which the newly developed de-mosaicing algorithm searches for patterns and consistencies. In this way it more easily finds the inconsistencies that represent noise, and suppresses them rather than enhancing them (which is what primitive Raw converters do).

 

         The blurring problem is one aspect of a more fundamental issue that I already touched on—how to distinguish and remove noise only from the areas where it is objectionable. Ultimately this is a personal, subjective assessment. One researcher’s comment was “one person’s noise is another person’s signal.” And, as Jim Christian, Noise Ninja’s developer, explains, noise reduction is essentially heuristic, meaning the methods work largely by trial and error. In the end, whatever noise-reduction method you use, you should be able to choose where on the image it is applied. Fortunately, one common principle applies to most viewers and most images, and this is that noise is most apparent in areas of an image that are otherwise smooth and lacking in detail. This becomes obvious when you examine an image such as this, which contains the four-way contrast of in-focus detail, out-of-focus detail, in-focus featureless subject, and out-of-focus featureless subject. Well-designed software can locate otherwise detail-free areas and apply more aggressive noise reduction to these, while protecting areas of fine detail. There are limits to how efficiently this can be done, so it is important to add your own decisions to the process. Some noise-reduction software allows this (such as Lightroom via its Detail control), but even if not, you can at least use Layers or some other non-destructive method to fade or restore the filter’s effects selectively. Selective noise reduction in one form or another is essential.

 

 Noise Ninja

 

           

 

           

 

         Original image

 

           

 

         The default settings. As the window shows, the camera and ISO setting have been recognized, and the preloaded profile applied.

 

           

 

           

 

         The default settings can now be checked and tweaked according to taste and the image, using the Noise Ninja recommended procedure, as follows. Step one is to raise Strength to maximum.

 

           

 

           

 

         Smoothness is then set to zero.

 

           

 

           

 

         Smoothness is raised until just below the point where speckling becomes noticeable.

 

           

 

           

 

         Finally, Strength is lowered until the balance between detail and smoothness is to your taste.

 

           

 

           

 

         Sharpening, always intimately connected to noise, is then adjusted to taste.

 

           

 

           

 


 

         While the better programs apply noise reduction selectively, the way in which they do it is not always transparent, meaning that you are largely taking the developer’s word for it that this is being handled efficiently. Until you have settled on one program and use it regularly, it remains important to check the results of the filter, as always at 100 percent. Again, some programs facilitate this (Noise Ninja, for example, has a single button that toggles backward and forward between the uncorrected and corrected states), but if not, it may be worth running the noise reduction on a duplicate layer if the filter is available as a Photoshop plug-in, or copying the corrected image onto the uncorrected original in Photoshop. Toggling the upper layer off and on is effective at showing the results, detail by detail. Areas that you feel have been treated too aggressively can then be easily retouched by partial erasing.

 

         As a result of the many variables in the creation of noise, the different ways of reducing it, and its perceptual nature, it is hardly surprising that there are many software applications for dealing with it, each with its own characteristics. As each requires user input, making comparisons is unreliable, not to mention time-consuming. Here we show a few cross-platform programs. Do not take the comparisons shown here as absolute references—the differences have as much to do with the particular images and my own choices in applying the effects as the design of the software.

 

 Case study:

 

         ISO 1000 noise reduction

 

         This section of a shot, of a small squid fishing boat on a sandbank in Thailand, is very much on the edge of technical acceptability. Taken handheld from a moving boat well after dusk, it was one of the very few of a set of 16 frames in which the main subject was sharp. The shutter speed was 1/8 sec, which was highly optimistic shooting at 93mm EFL (even though this was a Nikon Vibration Reduction lens). The sensitivity was set to ISO 1000, which always delivers pronounced noise. To hold the color of the green fluorescent striplight (used for attracting the squid), I took the precaution of underexposing the rest of the image, which prevented clipped highlights but stored up trouble with noise for later. The Raw converter would certainly be able to pull up the shadows, but at the cost of a very noisy image, as the noise is more pronounced in the dark shadows.

 

           

 

 Lightroom

 

           

 

           

 

         Lightroom has a powerful noise-reduction algorithm, and levels of control include both Luminance and Color sliders. The Detail sliders are crucial in allowing you to apply more or less noise reduction to the all-important areas of the image that contain detail.

 

 Neat Image

 

           

 

           

 


 

           

 

           

 

         AUTO PROFILE / MANUAL SETTINGS

 

         The noise-reduction procedure in Neat Image begins with Auto Profiling. The program automatically seeks a smooth area, in this case sky. The software also provides manual levels of control to fine-tune noise reduction.

 

         Noise reduction introduces a workflow issue, because of the different approaches taken by different software, and this is particularly crucial if you are shooting Raw. If you are satisfied with the noise-reduction performance built into your primary Raw converter or image-editing program, then the decision is straightforward—simply apply it during the Raw conversion or, if the image is a JPEG, as soon as you open it. Whatever means are used by the software to judge and then reduce noise, it works more reliably when the image has been least altered.

 

         However, the several independent noise-reduction programs, as we’ve just seen, lay claim to doing the job better. Before anything else, then, it is important to try out the various choices and decide for yourself which you prefer. There is a strong element of personal preference in this, as noise reduction is not a “clean” procedure. It always carries some side effects, and there are trade-offs between effectiveness and speed. Processing images quickly in batches may be more important to you than spending more time on individual images. And, even taking both these factors into account, you are likely to find that some images respond better to one noise-reduction program than to another.

 

         As with blur repair, noise-reduction filters work in two stages. First, the noise has to be analyzed. Then a particular algorithm is applied that reduces the variation in tone and color between the noisy pixels and their neighbors. Different noise-reduction software uses different algorithms, and this more than anything accounts for differences in the final image appearance. There is an element of taste and judgment here, both on the part of the software developer and the user. Complete noise removal from a zone of the image may also give it a plastic-like, artificial texture—or rather, an absence of texture.

 

         Another side effect is softness, because reducing sharpness is inevitably part of noise reduction. Distinguishing between featureless, smooth areas and those containing detail is difficult to automate.

 

         Noise-reduction software that depends on camera-sensitivity modules prepared by the software manufacturer and loaded up in advance absolutely needs the purest image data to work on. There may well be a dilemma here, in that the noise-reduction software (Noise Ninja is one example of this kind) will work better if you do little to the image during the preceding Raw conversion, whereas Raw conversion now offers more and more attractive procedures for optimizing the image in other ways, such as lightening shadows and recovering highlights. My best advice is to run a few tests, contrasting minimum-alteration Raw processing against full alteration, both followed by noise reduction. Judge the results at high magnification, at least 100 percent. Note that “dark frame subtraction” procedures, being repeatable, are best performed either in-camera or with the Raw converter, and specific noise-reduction programs such as Noise Ninja or Neat Image work very well afterward to take care of other kinds of noise.

 

 Noise Ninja

 

           

 

         The image as opened in Noise Ninja. At this stage, the profile is either loaded from the library, or created from analyzing the image.

 

           

 

         The result following the procedure described earlier; increasing Strength to maximum, raising Smoothness just below the point at which speckling becomes obvious, then reducing Strength to taste.

 

           

 

         Uncorrected

 

           

 

         Lightroom

 

           

 

         Neat Image

 

           

 

         Noise Ninja

 

         NOISE AND DETAIL

 

         The results, compared side by side. This shows that it is less a matter of better or worse, and more a trade-off between less noise, and maintaining realistic detail.

 

 

 Adding Flash

 

         Strictly speaking, low light photography is about shooting without the aid of photographic lights, of which even a built-in camera flash is one.

 

         Nevertheless, it really depends to what extent you use it to shape the final image, and among the most common techniques is to add a relatively small amount of flash to the end of an exposure otherwise metered for the ambient conditions. This rear-curtain flash technique is well-established, and most digital SLRs have a setting that allows it. Briefly, the flash is timed to fire at the end of the exposure, as the final curtain of the focal plane shutter is closing, rather than at the beginning or during. This is an important distinction for any shot that is at a relatively slow shutter speed (say, slower than 1/30 sec), because motion blur, either from the camera, or the subject, or both, will create streaking, while the flash freezes everything sharply. Clearly, if the flash is at the start, the streaking will continue on from this, and if it is somewhere in the middle of the exposure the image will be even more confusing, with streak tracks on either side of the sharp details. The strongest case is for the flash terminating the entire exposure—most images are more legible this way.

 

         The principle of combining low light streaking with a flash-frozen punctuation is that the two quite different visual effects complement each other. Arguably the most effective balance is when the main component is the low light streaking, with the flash providing just enough exposure to “close” the image and make sense of what might otherwise be moving details that are hard to understand. Personal preference, of course, is a major factor, and the only way to decide on your ideal combination is to try out several for yourself. There are clearly many permutations of shutter speed and flash strength, and these are affected also by the situation and content. To list just a few, there is the amount of subject movement, the basic readability of the subject (some things just don’t make sense unless they are sharp and clear), whether the subject is light in tone against a dark background or the opposite, and the color balance of the ambient lighting (orange tungsten makes a strong contrast with white flash). If there is time to prepare, run a few different combinations of shutter speed and flash amount to see which appears to work best. There are usually several choices of camera setting to balance and combine these two; there’s no reason to list these here as they vary by make and model and are fully detailed in the camera’s instruction manual.

 

           

 

         REAR CURTAIN FLASH

 

         Here, the on-camera flash is used at full strength (exposure bias 0.0) and slow speed (1/13 sec), so that the figure is well exposed, while car headlights behind are bright enough to create streaking patterns from the pan. ISO 1600, f/6.3, and a wide-angle 18mm EFL.

 

           

 

         REMOTE FLASH FOR EFFECT

 

         Off-camera and triggered remotely (or as in this case held by a companion out of sight), flash can also be used as a practical light, simply for effect. Here, a single flash is used to add a little atmosphere to a moonlit shot of pyramids. Exposure time 30 sec at f/2.8 and ISO 400.

 

 Flash positioning

 

         Depending on positioning, flash units can make dramatically different-looking images. Here are some alternate results from the same shot, and plan views of the setup required.