Prime lenses (that is, those with a fixed focal length) used to be the standard for professional photography, until their status was gradually eroded by zoom lenses with excellent optical qualities and remarkable zoom ranges.
Yet one of the advantages of prime lenses is that they can be designed with very wide maximum apertures. This is not new photography technology: ultra-fast lenses have been around for a while, from the early Ermanox of 1924 with its Ernostar f/1.8 (not great by today's standards but remarkable then) to the Leica Noctilux-M 50mm f/0.95 lens of today. Extremely high resolution is a necessity because, as we'll see on the following pages, a large maximum aperture results in shallow depth of field, and this usually concentrates the focus on a tiny area, making its definition all the more crucial. The main problem for most people is the cost; good, fast primes are very expensive. Other issues are that they tend to be relatively bulky and heavy, a consequence of all that extra glass, and many, surprisingly, have manual rather than automatic focus. The decision to invest in a fast lens for low light shooting largely rests on how much of this kind of photography you intend to do. The performance advantage, however, can be considerable. Of course, unlike film photography, where the bottom line was the amount of light entering through the lens, digital shooting allows quick changes to the ISO. Nevertheless, a fast lens allows you to hold off the moment at which you dial up the sensitivity and start to lose image quality to noise.
What counts as fast in a lens? The faster the better, of course, but in the context of modern lenses the maximum aperture should be wider than f/2. The fractional numbers at this end of the aperture scale belie the actual differences: a lens of f/1.4 captures twice as much light as one of f/2, and four times as much as a "regular" f/2.8 lens. The cost of doing this is considerable in terms of precision, type of glass, weight, bulk, and money. Hidden issues are the aberrations and edge-performance failings that appear at very wide apertures. Nikon's 58mm Noct-Nikkor, for example, was designed especially to overcome sagittal coma, in which point sources of light appear crescent-shaped (or, as Nikon charmingly put it, like "a bird spreading its wings"). This aberration, disastrous for city lights and stars, disappears when the lens is stopped down just slightly, but to correct it for wide-open use the front element was ground aspherically, and highly refractive glass was used to allow all the elements to have shallower curvatures.
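A quick check on those ratios: the light admitted scales with the area of the aperture, and therefore with the inverse square of the f-number.

```latex
\frac{E_{f/1.4}}{E_{f/2}} = \left(\frac{2.0}{1.4}\right)^{2} \approx 2,
\qquad
\frac{E_{f/1.4}}{E_{f/2.8}} = \left(\frac{2.8}{1.4}\right)^{2} = 4
```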
Wide maximum apertures at extreme focal lengths are even more difficult to manufacture, which in practice restricts the choice of fast lenses to focal lengths not far from standard; the range from 35mm to 85mm is the norm. With a fast lens being a considerable investment, deciding which to choose is important. The arguments for a wider angle are coverage and less risk of camera shake (as there is less movement of the image in the frame at a wider angle), but a possible disadvantage is that wide-angle views are generally, through familiarity, expected to be more or less sharp throughout the scene. The arguments for a medium telephoto are that it makes a perfect low light portrait lens, and that the strongly blurred out-of-focus areas can enhance the sharpness of the focused zone; against this, coverage is restricted and camera shake more likely. We deal with focusing issues in more detail on the following pages.
SAGITTAL COMA
This disturbing blur shape, noticeable in sharp focus and even more so with soft focus, is a distortion effect that fast lenses are computed to avoid. The example here is particularly bad, with a slow wide-angle lens at close to full aperture—f/5.6. Note that the coma “points” toward the center of the lens and is at its worst close to the edges of the field.
LIGHTS DETAIL
The detail below shows the lights from the photograph above. Taken with a 58mm Noct-Nikkor lens at full aperture (f/1.2), there is no sign of any coma "tail." The radial flare around the various lamps is pronounced, but pleasant.
If you are used to the forgiving focusing range of a normal lens (and these days, full aperture on many zooms is f/3.5–f/5.6), then working to the precision demanded by a fast lens wide open can come as a shock.
For example, on my Nikon DSLR, using a Zeiss 85mm Planar at f/1.4 and focusing on a subject at 16.4 ft (5 m), the depth of field is around 7.4 in (19 cm). Compare this with the 30 in (77 cm) when the lens is stopped down to f/5.6. Depth of field is at best a fuzzy concept, and the calculations are not to be taken literally: they ultimately rest on the notion of "sharp enough," which involves both judgment and the definition of the circle of confusion. The circle of confusion is the largest diameter that an imaged point can be and still look like a point. If your eyesight is good, and you are examining the final photograph closely and critically, the outer limits of this depth of field may not satisfy you. This, as you might expect, tends to happen with photographers investing in and using one of these costly lenses.
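Those figures follow from the standard close-subject approximation for depth of field, in which N is the f-number, c the circle of confusion, s the subject distance, and f the focal length. Assuming a fairly strict c of 0.02mm (an assumption on my part, though the figures quoted above are consistent with it):

```latex
\mathrm{DoF} \approx \frac{2\,N\,c\,s^{2}}{f^{2}}
= \frac{2 \times 1.4 \times 0.02\,\mathrm{mm} \times (5000\,\mathrm{mm})^{2}}{(85\,\mathrm{mm})^{2}}
\approx 194\,\mathrm{mm} \approx 19\,\mathrm{cm}
```

Stopping down to f/5.6 simply scales this by 5.6/1.4 = 4, giving the 77 cm quoted.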
In almost all short-to-medium-distance scenes, only a very small part of the image can be sharply focused at a wide aperture. This is even more pronounced if you use a medium telephoto portrait lens. Basically, depth of field is not an option, and there is usually no point in trying to maximize it. The only strategy worth pursuing to get more than one point in the image sharp is alignment—re-positioning yourself or some objects in the scene so that they are at the same distance.
Typically, however, the first consideration is identifying the exact part of the image on which to focus. Usually this is obvious, though not always, and there may be a conflict of choice, as in the example opposite. Having decided, be careful when relying on the autofocus to deliver the best result. As the image illustrates, in a deep area of the scene it is easy for the focus to be slightly, but significantly, out.
With the camera steadied to remove any risk of shake, focus at full aperture with a long focal length (here 85mm at f/1.4) on a subject containing sharp detail, in this case the metal grille of an audio speaker. Using manual focus, or manual focus override, take a series of frames with very slight differences in focus—though all should be visually sharp through the viewfinder. The focus indicator in the viewfinder will also help in this. Here, examining the frames together at 100 percent shows the slight focus shift, although at the time of shooting any of these would have seemed correct. For the purposes of this demonstration, I chose this speaker grille because of its slight curvature, and marked the sharp focus point with arrows.
ALIGNMENT TO SUBJECT
When composition and camera viewpoint permit, one standard method of maximizing the shallow depth of field at full aperture is to shoot perpendicular to the main axis of the subject. In the case of this mantis, the length of the insect obviously makes this worthwhile.
The problem is compounded greatly with manual focusing. As you may have noticed, some modern fast lenses are still made without autofocus—so why would a reportage photographer deliberately make life difficult by buying one of these? I can answer this question personally, as my last lens purchase was the Zeiss 85mm f/1.4 Planar just mentioned, even though I might have chosen the Nikon model for around the same price, with the same focal length and the same maximum aperture, but with autofocus, which you might think essential in getting the focus exactly right. In many cases, it is faster and more accurate than manual focus, but it depends on the subject, and in low light photography, as we've seen, images can easily confuse automated systems of all kinds. Specular highlights and large, dense areas of shadow can easily throw autofocus out, with the added risk that you keep shooting without realizing it. Manual focus, while slower, gives more personal control, and the focusing indicators in the viewfinder frame are particularly helpful. But a more important reason is that fast lens design is at the specialized, craftsmanlike end of lens manufacture, and much attention is paid to the subtle qualities of out-of-focus appearance, contrast, and general, idiosyncratic feel, in addition to build quality. These lenses are not at the heart of mass production, and so can be treated differently. The Zeiss lens, for example, is a Planar, which means it is optimized for a flat field across the frame.
One technique, which takes up more time, but helps guarantee the result, is to shoot extra frames with very slight shifts to the framing with the autofocus on. This slight re-framing will give you a range of sharp-focus points from which to choose later. Another precaution, which also delays shooting, is to set aside several seconds to examine the image at magnification on the LCD screen. Obviously, doing either of these depends on the kind of subject—one that is not changing or moving very much.
RANGE OF SHARPNESS
The gradual shift from sharply focused to unfocused across the plane of the image is often no bad thing, and the blurring can be attractive, particularly if it creates a kind of color wash. All that’s needed is to focus on a logical point, of which there may be several. Here, with a glass restaurant wall, the first letter of the name is the logical choice.
All kinds of detailed decisions are involved in where to focus whenever there is choice. In the case of this museum shot with a medium telephoto, slight adjustments to framing and the movement of the people in shot gave two possibilities. The basic framing is trying to juxtapose the paintings of standing women with real visitors, but full depth of field is impossible. In the first shot, the woman in the middle distance seems to be the natural focus of attention, but in the second, the action of the woman in the distance, holding up a camera and framed more neatly against the strip of white wall, shifts the attention to her, and so I shifted the focus to her also.
COMPARISON
This comparison makes it easy to see the two focus points—check the well-defined belt buckle and bag strap.
Sharpness is almost always critical in low light photography, simply because the technical limits of depth of field and shutter speed are constantly being pushed.
First, however, it's important to understand that sharpness is subjective—an overall, personal assessment from a combination of factors which can include the resolution of the lens, the focus, noise, contrast, motion blur, and how closely someone examines the image. Most of the time in photography, sharpness is an ideal, or at least a desirable quality, but by no means always. There may well be other qualities that override the importance of sharpness. Consider a shot that captures a particular gesture elegantly but is ever so slightly less sharp in some respect than the following frame; yet the second frame is not quite so well timed. The content may well tip the balance towards the first. And in case we get too fixated on sharpness, let's recall Henri Cartier-Bresson's comment: "I am constantly amused by… an insatiable craving for sharpness of images. Is this the passion of an obsession?… Do these people hope… to get to closer grips with reality?"
A useful exercise is to make test shots whenever there is time, in advance of an important shot, and check them on the LCD screen at full magnification. Lettering of any kind, such as signs or vehicle number plates, is a good subject; the idea is to become familiar with what is an acceptable sharpness. It is very common, when editing a low light shoot, to find a proportion of images slightly unsharp for one reason or another (wrong focus point, subject movement, camera shake). Because these technical issues cannot be taken for granted in this kind of photography, editing takes longer. When examining the images in a browser or database, each one needs to be viewed at high magnification—full size or three-quarter size. Beware, when deleting images, of throwing away those that may still have some use once repaired.
An important perceptual consideration is that wide-aperture shooting creates two different textures in the image, equally important. One is the small in-focus area, the other the surrounding, usually much larger areas that are out of focus. The quality and “feel” of these out-of-focus zones makes an important contribution, and is the reason why many photographers tend to be partisan about specific lenses. Lens designers do indeed work toward a particular style of “out of focusness,” particularly with fast lenses. Not only are the out-of-focus zones important in their own right, but perceptually they contribute to the concentration of sharpness in the small focused details. Compositional techniques such as positioning a small sharp area against the extreme blur of a strongly defocused background aid the impression of sharpness.
Sharpness also needs to be considered in the context of digital sharpening, which we deal with shortly. At some point before final output, all digital images need sharpening, because of the way they are captured, which introduces some softness, and because of the characteristics of the printer or other output device, so that the final image has the desired crispness and sparkle. These standard needs are not always easy to distinguish from the more localized and personal issues, such as repairing or enhancing details in an image which were not focused or frozen adequately—which happens frequently in low light photography. On the following pages we’ll look at this as a separate sharpening procedure.
Stopping down from the maximum aperture is clearly one way of reducing the risk of imperfect focusing, but the cost is the noise from racking up the ISO—negating the very reason, of course, for using a fast lens. Nevertheless, this is one of the possible trade-offs discussed earlier, see here.
KEY SHARP DETAIL
For perceptual reasons, as long as one key detail in the image is sharp, the eye and brain accept and assume a greater sense of overall sharpness than may, in fact, exist. Here, a telephoto shot of men in a Burmese temple is sharply focused only on the eyebrow-to-chin profile of the further man and on the tobacco smoke he is exhaling. This is enough, however, to give a satisfactory general impression of sharpness.
Depth of field is the distance between the nearest and farthest points in a subject that appear acceptably sharp in an image. The smaller the lens aperture, the greater the depth of field. It is ultimately subjective, and influenced by the following:
Focal length
Circle of confusion
Aperture
Subject distance
Viewing distance of final image
When slightly out of focus, a point is imaged as a tiny circle. The circle of confusion (CoC) is the largest that this circle can be and still appear sharp to the eye at a normal viewing distance. This is subjective, but most lens manufacturers work to between 0.025mm and 0.033mm for a 24 × 36mm frame (that of 35mm film). Smaller sensors call for a smaller CoC.
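One common rule of thumb (my addition, not the manufacturers' figures) ties the CoC to the sensor diagonal, which is why smaller sensors demand smaller values:

```latex
c \approx \frac{\text{sensor diagonal}}{1500},
\qquad
c_{24\times36} \approx \frac{43.3\,\mathrm{mm}}{1500} \approx 0.029\,\mathrm{mm},
\qquad
c_{\mathrm{APS\text{-}C}} \approx \frac{28.3\,\mathrm{mm}}{1500} \approx 0.019\,\mathrm{mm}
```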
DEPTH OF FIELD
These berries are in sharp focus, but the wide aperture means that the closer foreground (the leaf) and the background are not.
In normal bright-light or flash photography, you might go straight to image selection on aesthetic grounds, on the reasonable assumption that the images will be technically competent.
Any serious issues, such as autofocus incorrectly targeted on the background instead of the subject, would have flagged themselves at the time of shooting. Not so in low light photography, and particularly not so with handheld. As already discussed, the combination of shutter speed, aperture, and ISO setting is, for much of the shoot, likely to put you on the edge of acceptability.
Whichever browser, database or workflow program you use for selecting which images to process, the very first step is to check for sharpness, while also keeping an eye on the worst effects of noise. Sharpness, as we’ve seen, is more complex than it at first seems, and involves a heavy dose of personal judgment; grounds for rejection are generally poor focus, camera shake, and subject motion blur. At a finer level there are your own standards for what you consider truly sharp as opposed to “just” sharp—here the expressions “pin-sharp” and “tack-sharp” spring to mind. More complex is deciding between images in which the point of sharp focus varies, as this is more of an issue with wide apertures and longer focal lengths.
As one of the key strategies for handheld shooting at slow shutter speeds is to spread the risk by taking several-to-many frames, you will often have a sequence of very similarly composed images, and this helps the technical check enormously—provided that the software you use for selection allows them to be ranged alongside each other on the screen for a detailed side-by-side comparison. The software packages here show the various alternatives, with some clearly better at comparative display.
Opinions vary on what to do once you’ve made a judgment. There are three technical categories possible: unacceptable, acceptable, and best. Some photographers prefer to bin the clearly flawed images, but given that some repair may be possible, there are valid arguments for simply tagging them and keeping them. It’s also worth bearing in mind that future software may be able to achieve more effective repair than at present. Added to this are creative or aesthetic judgments touched upon already, such as preferring the composition of one frame which may not be the absolute sharpest, but which is still acceptable technically.
DXO OPTICS PRO
In a four-up display in the Organize section of the workflow, all four images can be zoomed equally.
LIGHTROOM
Here Lightroom is shown in comparison mode. When one picture is being viewed, simply click anywhere to zoom the view to 100%.
APERTURE
Aperture has a built-in loupe which can be called up and dismissed by simply pressing the “~” (tilde) key. The zoom ratio defaults to 100%, but can be adjusted.
There are two basic kinds of blur, which is to say two basic causes of unsharpness. One is focus blur, in which point sources in the subject reproduce not as points but as discs, and the other is motion blur, in which features in the subject that should be sharp in the image reproduce as streaks or smears. And there is every possibility that both kinds of blur appear combined.
Let’s first consider focus blur, which through familiarity tends to be the standard against which other varieties can be compared. As we saw on the previous pages, focus blurring increases steadily according to the depth of the scene away from the point of focus. Most scenes shot at a wide aperture, therefore, contain many different degrees of out-of-focusness.
Perhaps the key quality of focus blur is that each point becomes a circle. We already saw this when we examined circles of confusion and depth of field. In digital terms, each pixel becomes a circle of pixels centered on the original point. This provides the basis on which specialized image-restoration software works: as the blur is isotropic (equal in all directions), knowing the blur width helps in attempting to reverse the blur toward the original sharpness. Technically, it is the Point Spread Function (PSF) that defines the amount of spreading—that is, blurring. As we'll see on the following pages, unsharp masking, which is the standard procedure for most image sharpening, including in Photoshop, is not necessarily ideal for repairing this kind of optical blur. A point to note here is that with a perfect lens (an impossibility), the PSF predicts a perfect circle, but in real life the blur shape is often different, if only slightly so. Sophisticated blur repair needs to take this into account, and as the examples on these pages show, blur shapes can differ. Unsharp masking, incidentally, assumes that the blur radially follows a Gaussian (bell-shaped) curve (see here).
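To make the PSF idea concrete, here is a minimal Python sketch (my illustration, not from this book) of the idealized disc PSF of a defocused point from a perfect lens; real lenses deviate from this shape, which is exactly why good deblurring tools let you inspect or estimate the actual blur:

```python
import numpy as np

def disc_psf(radius):
    """Uniform disc PSF: the idealized footprint of a defocused point.

    Each sharp point in the scene is spread into this filled circle;
    normalizing ensures the blur redistributes light without changing
    the total brightness of the image.
    """
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    psf = (x**2 + y**2 <= radius**2).astype(np.float64)
    return psf / psf.sum()
```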
The ideal feature in an image to use for identifying focus blur is a point that contrasts strongly with its setting, and the perfect example of this is a specular highlight. While it is impossible to predict or determine which bright points of light are indeed points and not circles to begin with, reflections of a light in a curved surface (sun on raindrops, distant electric lamp in spectacles) and refractions (light sparkling in jewelry) tend to be small enough in a typical image to have no measurable dimension. When sharply focused, these appear as points, but the more defocused they are, the larger the diameter of disc they become, as in the sequence here. More often, however, there are no such easy point-source clues, and you may have to fall back on other ways of examining the image. One clue is to find parts of the image that commonsense tells you are closer or further than other parts, and to look at magnified details to see if they differ in sharpness.
Alternatively, there is the heuristic approach, a fancy way of saying try out focus-repair procedures to see if they have any effect. In the case of the program used on the following pages for repair, Topaz InFocus, this involves letting the software analyze details, and also alternating between focus-blur repair and motion-blur repair. This is a rough and ready method, but can work.
Unsharpness due to movement has special qualities that set it apart from defocused blur, and it can be further subdivided into camera motion blur and subject motion blur.
DEFOCUSED SPECULAR
The specular highlights of the sun in the rounded chrome-metal top of a simple stapler are ideal for a focus-blur test. The image is defocused in stages, and the blur shape from a high-quality lens (a Zeiss Planar) stays circular. Note, however, that one of the characteristics of this very sharp lens is to create an annulus (outer ring) to the blur circle, in a color that derives from axial chromatic aberration.
DISTORTION INCREASING WITH BLUR
Details from three separate exposures at different focus points made of the Pudong skyline from Shanghai, on a 28mm PC shift lens. The distortion, stretched toward the center of the lens’s field of coverage, increases with the amount of defocusing.
COMA
Sagittal coma shows itself as a curved, stretched shape that points toward the center of the image, and is worst at wide apertures with wide-angle lenses near the edges of the frame, as shown here.
Motion blur differs from focus blur in that it has direction, and if you know or can calculate this direction, it is easier to restore than focus blur.
The effect is a kind of smearing, though it’s important to distinguish between camera shake and subject motion blur. In the first, the entire image is smeared by the same amount because the camera rather than part of the subject has been moved during the exposure. Subject motion blur involves relative motion—one thing moving against a background, such as a car. This kind of relative motion is more difficult to restore than camera shake, because it needs different amounts of correction across the frame.
There are further subdivisions, and ultimately a wide variety of motion-blur images. Camera shake involves both time and direction; the longer the exposure, the longer the streaks will be. Deliberate motion blur by moving the camera often involves changes of direction. A special case, more relevant to tripod shooting, is a medium-long exposure in which the camera shake, from mirror slap or a heavy finger on the shutter release, returns to the original position, creating a slightly separated double image. In subject motion blur, a major distinction is between the way a light subject moves against a dark background, and vice versa. Time and direction also matter.
Most focus blur is not only radial but its effect on the image follows a Gaussian distribution. This means, in effect, that the blurred image of a point source of light fades away from the center like the 2-D projection of a 3-D Gaussian bell-shaped curve, as shown here. You can see this in action in Photoshop by creating a white dot on a black background and then applying Gaussian blur: Filter > Blur > Gaussian Blur (to see the effect clearly, give the dot a radius of at least a few pixels).
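The same demonstration is easy to run in code. This sketch (my illustration, using NumPy and SciPy) blurs a single white dot on black and prints a cross-section through the center, which falls away in the familiar bell shape:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# A single bright dot on a black background
canvas = np.zeros((51, 51))
canvas[25, 25] = 1.0

# Apply Gaussian blur, the equivalent of Filter > Blur > Gaussian Blur
blurred = gaussian_filter(canvas, sigma=4)

# The middle row traces the 2-D projection of the Gaussian bell curve
print(np.round(blurred[25, 18:33], 5))
```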
A specular reflection in a convex surface, such as this lacquer bowl and silver containers, is probably the ideal feature in a photograph for displaying the effect of focus blur. In these example sequences of details from the same image, the first row shows progressive blurring, starting from sharp, as the lens, at full aperture (f/1.4), is focused forward (closer to the camera). The second row shows the same focus settings at the smaller aperture of f/6.3, and the third row is at the much smaller aperture of f/16. Note that the classic radial blurring of point sources is really only obvious at a wide aperture; the blurred points at smaller apertures retain more of their original shape.
f/1.4
f/6.3
f/16
f/1.4
f/6.3
f/16
UNI-DIRECTIONAL CAMERA SHAKE
Specular highlights in the metal surface are good indicators of the direction, uniformity, and length of motion blur. As it affects the entire image, this is clearly camera shake. Extremely straight blur movements like this are also typical of motion blur caused by slippage or knocks to a camera mounted on a tripod.
Before correction
After correction
TOPAZ INFOCUS
BASIC FOCUS BLUR
The problem here, as revealed by a magnified view, is an overall softening of focus, typical of focus blur.
Original
After Noise Ninja repair
SLIGHT MULTI-DIRECTIONAL CAMERA SHAKE
A magnified view of the selected area shows a consistent pattern that reveals a 13-pixel shift—just within the limits to expect repair—but with a curve at one end like a hook, which will limit the effectiveness of repair tools.
PRONOUNCED MULTI-DIRECTIONAL CAMERA SHAKE
The camera shake here is deliberate and long—and moved around the subject—but it also combines with subject motion blur. Analyzing two areas of the image shows the curved movement of the bull's horns—specific and different from other movements such as the matador's—but the area of ground at upper left shows the actual camera movement, a curve toward the lower right, with a jerk at the end.
PRONOUNCED UNI-DIRECTIONAL CAMERA SHAKE
A magnified view of the man’s head (and we can see that he appears to be standing still) shows at least two clear points of movement, in the same direction, of some 40 pixels, which is too great to attempt any repair.
PROGRESSIVE FOCUS BLUR
In most images of scenes with some depth to them, you can expect a progressive increase in focus blur. In this scene inside a Burmese temple, magnified views of three sections show a visual defocusing in the direction of the arrow. Using Focus Magic to make an analysis of the amount of blur by clicking the Detect button confirms the progression.
SIMPLE, SHORT SUBJECT MOTION BLUR
A fishing trawler entering a harbor has a predictable and known movement in one direction, and at a slow shutter speed because of the light (pre-dawn), you would expect some short motion blur. The obvious places in the image to look are narrow edges perpendicular to the expected movement (the vertical rigging) and indeed they do reveal blurring, in contrast to the horizontal rigging, which doesn’t.
COMPLEX SUBJECT MOTION BLUR
The standing figure moves predictably from right to left, by approximately 5% of the image frame, equivalent to about 200 pixels. However, different parts, such as the arms, can be expected to move in different directions and by different amounts.
In principle, sharpening can be put into three categories (though the practice may be different). The first is sharpening to overcome softness introduced during capture, of which there are two components: optical softness from the lens, and digital softness from the sensor.
Most lenses show soft spots in different parts of the image area and at different apertures. The sensor and the processing of the signal it captures also introduce softness when continuous tones are translated into pixels on a regular sampling grid, and details that are finer than the sampling frequency are averaged by interpolation. The second category is sharpening for creative or repair reasons, applied to details identified by the photographer. The third is sharpening targeted to the needs of the output device, such as a printer. Although capture-repair-output is a logical time sequence for performing sharpening, standard professional practice in photography tends to demand that images stay unsharpened until final use. Stock agencies in particular, and a few clients, are familiar with sharpening artifacts, and may well reject images that have been sharpened in such an obvious way.
ORIGINAL
Lightroom offers both procedural sharpening and brushwork options. The procedural sharpening is controlled by Amount, Radius, Detail, and Masking.
RESULT
This, however, is a contentious area, because the real reason for not wanting sharpening performed early in the workflow is the danger of artifacting. The key artifacts in question, which are always sufficient for a stock agency, for example, to reject the image, are scattered dark pixels and halos, both bordering edges. Acceptable, and often not even visible, on a printed page, they are usually obvious and objectionable in an image viewed onscreen. The reason they appear at all is that throughout photography and printing, the usual way of sharpening an image is to increase the contrast at edges within the image. This has everything to do with the psychology of perception, because we judge sharpness most in local detail, and particularly in terms of contrast.
In practice, the first two purposes are usually combined, and can usefully be kept separate from the third. Procedural sharpening passes are always best left until the last minute, and on a copy, because the amount of sharpening is partly determined by final use—the size of reproduction and the characteristics of the output device. This leaves creative/repair sharpening, which is much less amenable to procedural techniques, as we’ll now see. It also needs to be done circumspectly, paying special attention to avoiding tell-tale artifacts as evidence, for the commercial reasons just mentioned.
First, procedural repair. This means an imaging operation, such as a filter, applied algorithmically, usually across the entire image. The opposite is direct brushwork. There are two schools of thought here. The argument in favor of procedural sharpening is that if the cause of the blurring had an overall effect and can be analyzed, it can be reversed with an algorithm. The major argument against is that sharpness is perceptual, and so largely a matter of individual judgment, making sharpening of the entire image overkill. The argument in favor of brushwork applied to chosen details is that this can address just the areas that you decide are important, and do so without pixelated artifacts. Against this is the ethical point that brushwork verges on alteration and manipulation.
DXO OPTICS PRO
As one part of its overall image processing, DxO uses a painstaking analysis of specific lens-sensor combinations to calculate their exact blurring effect in each part of the frame and at every aperture setting. The algorithm reverses this blurring, with the default auto setting being the software engineer’s judgment. A slider allows the user to lessen or increase the strength. Because this deblurring is applied to known defects in the lens and sensor, it differs from ordinary USM sharpening in that it is applied selectively.
The default settings in Lens Softness using DxO Optics Pro. This applies pre-calculated corrections to a lens and camera combination that has already been analyzed and is in the software database.
A better procedural choice is dedicated deblurring software, of which there is surprisingly little available commercially. I say surprisingly because deblurring and image restoration receive a great deal of scientific attention, and are used in military applications. One program available on both Windows and Mac platforms is shown here, Topaz InFocus. It works on both focus blur and motion blur. In the former, the blur is radial outward, and the software attempts to reverse this process by moving pixel edges back toward the center of what it computes to be the original source. In the case of motion blur, the process is in principle the same, but the pixels are shifted in one direction only. As explained in the box "Convolution and deconvolution," deblurring is based on analysis of the original capture. InFocus does this by having the user examine the blur shape and put numbers to it. DxO Optics Pro, as part of its Raw conversion package, refers to lens modules that the company has already prepared by analyzing the performance of individual lenses used with particular sensors.
There are limits to what deblurring software can do, and it works best when there are plenty of linear boundaries to work on. The practical limit is effectively 20 pixels: blur-width in the case of focus blur, smear-length in the case of motion blur.
Returning to Photoshop and other image-editing programs, most of these have similar filters, and there are procedural alternatives to USM (Unsharp Mask). Smart Sharpen allows you to select sharpening aimed at optical blur via the Remove drop-down menu. Lens Blur deals with focus blur, and Motion Blur does as it says, requiring the user to set the angle. Another effective treatment, with little noise effect and reasonable control over halos, is High Pass sharpening. This is a two-step layered procedure, in which a duplicate layer is made of the image. A high pass filter (Filter > Other > High Pass) is applied to the duplicate layer, which is then changed to Hard Light mode. Overlay mode is slightly less strong in its effect, and Soft Light less strong again. The High Pass filter retains edge details where maximum color transitions occur. However, it must be said that at this time the Photoshop deblurring filters are not at all sophisticated in their operation, and fall short of the dedicated third-party software just described.
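Translated out of layer language, High Pass sharpening reduces to extracting the high-frequency detail (the image minus a blurred copy) and adding it back. A rough single-step approximation in Python (my sketch; the strength parameter loosely stands in for the choice of Hard Light, Overlay, or Soft Light):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def high_pass_sharpen(image, radius=3.0, strength=1.0):
    """Approximate the duplicate-layer High Pass technique in one pass.

    The high-pass component is strongest near edges and near zero in
    flat areas, so adding it back raises edge contrast, which is how
    all of these methods create the impression of crispness.
    """
    img = image.astype(np.float64)
    high_pass = img - gaussian_filter(img, sigma=radius)
    return np.clip(img + strength * high_pass, 0, 255).astype(np.uint8)
```

Seen this way, High Pass sharpening is a close relative of unsharp masking; the practical difference lies in the blend-mode control over how strongly the detail layer is applied.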
This software works by first estimating the blur radius in pixels, and then applying a special algorithm designed to reverse the process that created the blur. Set Blur Type to Unknown/Estimate and click on a part of the image with clearly defined edges. Some trial and error experiments at different radius settings are useful in judging the most effective one.
Processing optical blur and lens softness is not a simple procedure, as it means first identifying the amount and type of blur and then attempting to reverse the process mathematically. Lens blur is described mathematically by convolution; repairing it is deconvolution. Convolution is a mathematical operator that combines two functions to make a third, and one example in optics is a shadow, which is the convolution of two things—the shape of the light source and the object that casts the shadow. Optical blur is the convolution of the sharp image with the shape of the aperture diaphragm in the lens. In other words, apply the aperture shape at a particular f-stop in a certain way to a point in the image, and the result will be a specific shape and size of blur. Motion blur is also a convolution, of the sharp image and a linear movement.
In the example here, convolution can be thought of as a moving average applied to each pixel over a certain radius. At its simplest, a black pixel on a white background is averaged to a pale gray. A solid color, say all black or all white, is unchanged. Applying this across the entire image, pixel by pixel, has the effect of blurring sharp points and edges but leaving solid areas untouched. This is the blurring process.
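In code, the moving average described above is literally a convolution with a uniform kernel. A toy demonstration (mine, not from the book):

```python
import numpy as np
from scipy.ndimage import uniform_filter

# White canvas (1.0) with a single black pixel (0.0)
canvas = np.ones((21, 21))
canvas[10, 10] = 0.0

# A 5x5 moving average: convolution with a uniform kernel
blurred = uniform_filter(canvas, size=5)

print(blurred[10, 10])  # 0.96: the black pixel is averaged to pale gray
print(blurred[0, 0])    # 1.0: solid white areas are unchanged
```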
The aim of deconvolution is to reverse what happened, and so de-blur the image. It depends very much on being able to describe the original blurring precisely. The PSF (point spread function) describes this mathematically, but is difficult to calculate unless the software designer knows the exact performance of the lens in combination with the sensor. The shape of the blur from a single point can vary considerably from the simplest case of a round disc, depending on the characteristics of the lens and sensor, its position in the image frame, the f-stop, and so on.
Deconvolution (or deblurring) effectively sharpens the image, and the difference between this and normal sharpening is largely a matter of terminology. In a sense, deconvolution/deblurring is a form of sharpening with a specific goal, while normal sharpening is open-ended.
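One widely used deconvolution algorithm, Richardson-Lucy, is implemented in scikit-image. A minimal sketch, assuming the simplest possible PSF, a uniform disc (as noted above, real PSFs rarely match this ideal, and a mismatched PSF produces ringing rather than sharpness):

```python
import numpy as np
from skimage import restoration

def deblur_disc(image, radius, iterations=30):
    """Richardson-Lucy deconvolution with an idealized disc PSF.

    image: grayscale float array scaled to [0, 1]
    radius: assumed blur radius in pixels
    The result is only as good as the PSF estimate.
    """
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    psf = (x**2 + y**2 <= radius**2).astype(np.float64)
    psf /= psf.sum()
    return restoration.richardson_lucy(image, psf, iterations)
```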
In focus blurring, a black pixel on a white background would be convolved to gray…
…but black blurs to black…
…and white blurs to white.
Non-procedural restoration works in an entirely different way, and is more intuitive and hands-on. There are few "silver bullet" solutions here that will magically and quickly improve the image; instead there is painstaking brushwork at 100 percent and 200 percent, calling on the skills of the miniaturist rather than the computer engineer. The key steps can be summarized as follows: identify the places in the image to which the eye will travel most insistently in search of detail, then work on these areas at high magnification to enhance local contrast and edges, working with a variety of brushes. Which brushes to use is a matter of personal preference, as is the choice of brush hardness, but the Clone Stamp and Smudge tools are both useful, not least because they make use of existing pixels and pixel distribution.
The examples here tell the story best, as this kind of retouching does not lend itself to generalization. Brushwork can, of course, be combined with procedural sharpening, either to make very detailed selections before applying a filter, or to take over in key details when the filter has done as much as is realistically possible.
No correction
DxO
Lightroom
InFocus
A FOUR-WAY COMPARISON
From top: No correction, DxO, Lightroom, and InFocus. The most successful in this example is InFocus, with no unnecessary edge-sharpening with consequent halos (as happens marginally with Lightroom). DxO also handles the lettering well, but is less successful than either Lightroom or InFocus with the lace outlined against the flames.
Although an image that is completely out of focus and nowhere near sharp is usually considered a reject (though not always), gradations of sharpness are an absolutely normal part of photography.
Because of the way the eye and brain work, we are only rarely conscious of this kind of optical blur in real life, as our gaze flicks rapidly from one part of a scene to another, refocusing almost instantaneously. Nevertheless, more than a century of looking at photographs has made depth of field (see here) a normal feature of imagery. Particularly in a long-focus picture, it is completely acceptable to have blur at the front and back around a limited sharp zone, and even attractive.
Because low light shooting handheld inevitably means a wide aperture, almost all images shot with a long focal length will have very shallow depth of field. Obviously, one of the shooting strategies is to use this creatively, making compositions that work around a single sharp point, perhaps using the blurred zones to point this out or else as a color wash. At other times, we would really have preferred to have greater depth of field but have to settle for shallow.
For some distance beyond the depth of field, the soft focus can, in principle, be treated in the same way as we've just seen with overall soft-focus blur. If it's possible to isolate these zones somehow, they can be processed with a focus filter. Dividing an image according to depth is at best a fairly arbitrary procedure: done after the event, there is no direct measurement available from which to construct a depth map. The only evidence is a commonsense reconstruction from what you can see in the image, but in a case like the example here, there are no ambiguities. We know what a truck looks like, and which parts will be in front of which others. Remember also that here we're after visual effect, not precise measurement. The color-coded focus plan is included to show approximately the plan for extending the depth of field. It is not for any procedural use, just a quick analysis of the different planes of softness.
The principle is to first decide how much of the soft zones beyond the depth of field can realistically be brought back toward sharpness. Bearing in mind the 20-pixel maximum practical radius (see here) and the fact that large-radius filtering runs the risk of looking unnatural, it may be worth selecting two or three zones of increasing blur, then running the filter at different settings for each.
There are several options for doing this, but all depend on some form of manual brushing. One is to paint a mask on the image to cover a zone (Quick Mask in Photoshop), then apply the filter to that, repeating the process for other zones. Another, which allows a better overall view, is to apply the filter at a particular setting to an underlying duplicate layer, and then selectively erase the top layer. A third method is to apply the filter, then use the History Brush to restore the already-sharply focused areas to their original state.
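All three methods amount to the same compositing operation: the filtered version shows through only where a hand-painted mask allows it. A sketch of that blend (a hypothetical helper, not from the book):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blend_by_mask(original, filtered, mask, feather=5):
    """Composite a filtered layer over the original through a mask:
    the code equivalent of filtering a duplicate layer, then erasing
    the parts you want left alone.

    original, filtered: float arrays of the same shape
    mask: 1.0 inside the zone to repair, 0.0 elsewhere
    feather: softens the mask edges so the transition is invisible
    """
    soft = gaussian_filter(mask.astype(np.float64), sigma=feather)
    return original * (1.0 - soft) + filtered * soft
```

Repeating this with a different mask and filter strength for each zone reproduces the zone-by-zone plan shown opposite.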
This is the extent in front of and behind a sharply focused distance that still appears to the eye to be acceptably sharp. It depends on aperture, focal length, and the distance between camera and subject, but above all it is a subjective impression. The smaller the aperture, the shorter the focal length, and the more distant the scene from the camera, the greater the depth of field. Nevertheless, what one person deems acceptably sharp may not be the same as judged by someone else.
MOVING VEHICLE
The original image of a truck on a Canadian highway was shot from an overpass with an extremely long and fast telephoto lens. With a focal length of 600mm, and at f/4, the depth of field was extremely shallow. This is the result after extending the focus.
FOCUS BLUR ANALYSIS
Comparing different parts of the truck shows the progression of focus blurring towards the back.
FOCUS PLAN
For the purposes of this demonstration, the image is divided into focus zones, each needing different amounts of correction.
INFOCUS CORRECTION
Each zone is selected by painting in Quick Mask, and specific correction applied. This is zone 4 on the plan.
Going back to the types of blur identified earlier (see here), the key difference between focus blur and motion blur from the point of view of repairing them is not how they were caused, but rather the shape of the blur.
The Point Spread Function for focus blur gives a radial pattern—from point to disc, in other words—but motion blur is unidirectional and at a particular angle. You could say that both focus and motion blur are versions of the same effect, and for repairing them the software problems are similar.
The procedure, too, is similar. First the motion blur has to be calculated: its length in pixels and its direction, measured as an angle. Then the software has to attempt to move edge-defining pixels back to where they should have been, by exactly that many pixels and in that direction. This is no easy calculation, because it is applied to the entire image, and within this the algorithm must distinguish between edges and more amorphous zones. Photoshop has such a filter, which can be found by choosing Filter > Sharpen > Smart Sharpen and, under the Remove heading, selecting Motion Blur. However, as the example here shows, this is really just a variant on unsharp masking, and while it may improve the appearance to an extent (depending on the specific image), it is not actually reversing the motion blur.
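The length-and-angle description translates directly into a PSF: a one-pixel-wide streak. A sketch of building one (my illustration; convolving a sharp image with it reproduces straight camera shake, and deconvolving with the same kernel attempts the reverse):

```python
import numpy as np

def motion_psf(length, angle_deg):
    """Linear motion-blur PSF: a straight streak of the given length
    (pixels) and direction (degrees). Normalized so it spreads light
    without changing overall brightness."""
    size = length + (1 - length % 2)   # odd-sized kernel, centered
    psf = np.zeros((size, size))
    center = size // 2
    theta = np.deg2rad(angle_deg)
    for t in np.linspace(-(length - 1) / 2, (length - 1) / 2, 4 * length):
        r = int(round(center - t * np.sin(theta)))
        c = int(round(center + t * np.cos(theta)))
        psf[r, c] = 1.0
    return psf / psf.sum()
```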
The more effective software is InFocus, which, up to a blur distance of several pixels, can impressively reveal detail that was not visible in the original image. An acid test of the efficiency of motion-blur repair is what the software does to a distinct, simple line that is more or less perpendicular to the blur angle. Typically, it appears in the original as a thicker band instead of a line. Effective repair will restore it to a line.
NUN
Small details that contain lines or edges running in different directions are ideal for assessing direction and amount. Here, the fold in the cloth is sharper than the edge. Matching the direction of the fold by eye with the Blur Direction setting confirms the direction at 100°.
BEFORE
Having established the blur direction, using the plentiful detail in the bead jewelry, experiment with different Blur Distances. Here 1 pixel is too little, and 4.5 pixels too much. The ideal setting is 3.7 pixels. See the effect on the eyes and on the white beads.
DETAIL VIEW
As ever, the best way to see your work is to view it at 100% or a multiple thereof (200%, 300%, 400%, and so on). These scales do not require interpolation to be displayed, so are the most accurate.
Too much
Too little
Just right
AFTER
BEFORE
INFOCUS
For this image, setting Motion Angle to around 60° with a Blur Radius of just over 2 produced the best results.
AFTER
The key to this kind of correction is finding the most appropriate detail in the image. Circles and other predictable curves are good because they identify a progression. This ring also contains specular highlights, making it even easier to measure the blur. For confirmation, look at a similar frame that is sharply focused.
BEFORE
INFOCUS
Using InFocus’ Unknown/Estimate setting produced a distinctly sharper image.
AFTER
In the example here, the motion blur is so strong that it is an inextricable part of the image. In fact, it is due to a slow shutter speed and a panning action.
The immediate issue is whether or not the photograph makes it at all. Does it succeed, or rather could it succeed with post-production help? At this degree of motion blur, there are no generalizations. Everything is in the specific. Opinion also tends to be specific, and personal. Let’s say that in this case (a mistake, actually, as I had the shutter speed set slower than I had thought) I had intended the picture to be sharp and was disappointed that I had got it wrong, but that later I grew to like it. The problem, again always specific, was that there was no anchor-point of real sharpness anywhere in the frame. In particular, the eyes at least needed to be sharp to some degree. In general, the eyes are the focus of any face, and it’s almost impossible to escape the need for them to have sharp edges.
Using the techniques already described, the motion blur here can be analyzed to a distance of many more pixels than focus-repair software is capable of handling. Also, at this amount of movement, the smearing is wobbly, which lends itself even less well to procedural treatment. The only reasonable procedure is manual, in other words brushing by hand.
The interesting point here is that very few parts of the image need to be sharp in order to make it visually acceptable. What rules is a knowledge of human perception. The content, even though unfamiliar to most Westerners, is nevertheless quite understandable: a Vietnamese man wearing a military pith helmet of Vietnam War vintage is smoking a large bamboo pipe. This reads at a glance. All that is needed technically is to introduce some key sharpness at a few well-chosen points, after which the remaining blur will be completely acceptable. The details that need sharpening are precisely located but obvious, and can be summarized as follows: the eyes as always, a re-definition of the mouth, and nearby distinct edges. No more than that.
BEFORE
AFTER
SPOTTING THE BLUR
The metalwork on the pith helmet is an ideal indicator of the direction of the motion blur, which can be fixed in InFocus. But applying it to the face creates unacceptable artifacting.
AREAS FOR MANUAL CORRECTION
Instead, the key zones for manual retouching are identified as the eyes, ear, and nose.
EYE PLAN
The plan of attack for the eye is to use the Smudge tool in specific directions, followed by a darkening with the Burn tool.
NOSE AND MOUTH PLAN
The Clone tool settings chosen for sharpening the edges of the nose and mouth, working from the outside for the nose and from the inside for the lip.
EAR PLAN
The Clone tool settings chosen for sharpening the edges of the ear.