Advanced Photographic Techniques

White Balance

In most shots, I use Auto White Balance and photograph a set of DGK white-balance cards to provide a neutral-tone reference for setting the white balance later in raw processing. Although these cards are a useful aid, they have their quirks, which I will cover later. For now, I’ll just describe how I use them. When the environment is sunny or overcast, shooting a frame with the cards is straightforward. I hold them out in front of the camera, angle the glare away, and shoot without caring about focus. I’ll err towards overexposing the cards since Adobe recommends a higher midtone value for a white balance reference to avoid noise interference. If the scene has mixed lighting, such as a sun-lit background and a shaded foreground, then I shoot the cards in the shade since I don’t have any choice. If correcting for the shade’s bluish cast in post-processing exaggerates the background’s warm hue, then I independently control each color cast in Camera Raw or Photoshop using the techniques described in later chapters.

DGK White Balance Cards

Polarizers

Polarizer Woes

A polarizer is a powerful tool, but you have to be careful how you use it. A common problem is over-darkening a blue sky, sometimes to near black. Back when I used contrasty films like Fujichrome Velvia, what looked good in the viewfinder turned out too dark in the slide. Back then, and even now, I find it difficult to judge a polarizer’s effect by looking through the viewfinder. Even if the LCD displays a pleasingly saturated blue sky, it may be difficult to print due to the printer’s gamut limitation. Unless I intentionally want a dark-blue or black sky when the light is highly polarized, I take a few “bracketed” shots with the polarizer at different settings to be sure I don’t go overboard.

Another pitfall is using a polarizer with a wide-angle lens. Besides the risk of vignetting, the bigger problem is that the field-of-view spans a changing level of polarization, which results in an unnatural blue gradient (see photo below). The same problem occurs with panoramic shots. In either case, you can’t rely on the LCD display since the gradient can be difficult to notice there. Instead, I would recommend shooting bracketed levels of polarization to be safe. If there are some clouds, then a gradient may be camouflaged somewhat and is less of a problem. Nevertheless, I feel it’s best to forget the polarizer when using a wide-angle lens or doing a panoramic shot. You can selectively darken a blue sky later in post-processing.

Wide-angle Gradient: This was taken with a 17mm lens. Due to the eye’s high dynamic range, a gradient like this isn’t necessarily obvious in the viewfinder.

Using a Polarizer

My favorite use of a polarizer is during sunrise and sunset when shooting somewhat normal to the sun’s direction, or roughly north or south. It improves the appearance of the sky, especially when it’s a bit wimpy. Another handy use is as a poor-man’s ND filter. If you want a slower shutter speed to create a frothy effect in flowing water, a polarizer can buy you up to two stops. Note that if you want a really exaggerated frothy effect, you’ll need a dedicated ND filter for a much longer shutter time.

Also, don’t put your polarizer away in overcast or diffused lighting situations. Even under heavy overcast there may be glare lurking, especially on shiny or wet foliage. At times it’s not overtly noticeable, so it’s prudent to check the scene with the polarizer. If the improvement is only minor, then skip the polarizer because it is always better to shoot with as few filters as possible. Plus, there are times I want a bit of glare to add some sparkle because too much polarization can make the image a bit dull and lifeless.

Neutral-Density Graduated Filters

Neutral-density graduated filters (or grad filters for short) were and still are a traditional tool for landscape photographers. Today, though, many photographers have abandoned grad filters in favor of shooting only HDR. I feel that’s a mistake because in my experience, the grad filter is still an essential tool — even when shooting HDR. How grad filters and HDR live side-by-side will be discussed extensively in Chapter-11.

What is a Grad Filter?

The grad filter simply reduces the light from the scene’s brightest area, making it the simplest contrast control technique that doesn’t involve HDR software or complicated Photoshop processing. For example, a 2-stop grad filter holds back two stops of light at one end while allowing all the light to pass through at the other end (see figure below). The transition between the two is either gradual or abrupt (termed soft or hard respectively). A grad filter shows the final results on your camera’s LCD rather than later in post-processing. Also, it is a one-shot process, and subject movement is less of an issue than with HDR’s multi-exposure method.

ND Graduated Filter

There are also “reverse” grad filters where the maximum density starts in the middle and tapers off towards the top. The reverse filter is ideal when the light is concentrated in the middle of the frame, such as in sunrise or sunset shots. This prevents the upper sky from becoming too dark. They are available in multiple stop values, but the 3-stop (ND 0.9) is probably the most useful for sunrise and sunset shots.
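If you’re ever unsure how a filter’s marked optical density relates to stops, the conversion is simple arithmetic (each 0.3 of density is roughly one stop):

\[
\text{stops} = d \cdot \log_2 10 \approx \frac{d}{0.301}, \qquad \text{ND } 0.9 \;\Rightarrow\; \frac{0.9}{0.301} \approx 3 \text{ stops}
\]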

A popular grad filter brand is Cokin. I used their 85mm P-series filters successfully for many years, and they were the least expensive. The one drawback to the P-series grad filters was their short 4-inch height, which limited their positioning range within the frame. To resolve that, I switched to Formatt-Hitech filters that are 1/4-inch taller and cost only a little more. Pros tend to favor the premium-priced Singh-Ray or LEE brands. Because I handle grad filters a lot, I tend to drop them; and dropping a less-expensive filter is a lot less traumatic.

Vignetting

The disadvantage of 85mm filters is that the holders are not wide-angle friendly and can cause vignetting. You might get away with just hand-holding them without a holder, but that requires more coordination than I’m capable of. The best solution is to upgrade to the 100mm (4-inch) filters and use a special wide-angle lens adapter on the filter holder. The wide-angle adapter is a clever design that allows the filter holder to sit back closer to the lens. Both LEE and Formatt-Hitech offer this type of adapter. Another advantage to a 100mm grad filter is its 6-inch height. This allows the greatest positioning range of the grad filter’s transition within the field-of-view.

Below is the Formatt-Hitech 100mm holder and ring that I use. The wide-angle adapter ring attaches to the lens and then secures to the holder with a thumbscrew (not visible). Since I’m too lazy to remove my UV protection filter, I get only a hint of vignetting at 14mm (at f/22) that can be either ignored or easily fixed in post-processing.

Formatt-Hitech 100mm Filter Holder and Wide-Angle Adapter Ring: You’ll need to remove the outer filter slots to completely avoid vignetting (I use only one slot). Note that the above items have been superseded by newer models.

What Stop Value to Use

Grad filters are available from one to four stops in one-stop and, in some cases, half-stop increments. They are available in both hard and soft transitions. I mostly use a 3-stop graduated filter since it suits most sunrise and sunset lighting conditions I shoot under. The exception is photographing towards the sun’s direction during morning or evening where the 3-stop filter may not be strong enough to capture both sky and foreground detail.

Although I favor the 3-stop grad filter, you may find a 2-stop or 2.5-stop filter more to your tastes. I lean towards underexposing the highlights a bit to increase the richness of a warm-lit background during sunrise or sunset; hence my preference for a slightly stronger filter value. The drawback is the stronger 3-stop filter can be a pain to deal with for reasons I’ll soon explain.

Positioning the Grad Filter

A morning shot of distant mountains illuminated by the rising sun with the foreground still in shade is a common scenario requiring a grad filter. Typically, I would position the grad filter’s transition in the middle of the mountain range. This holds back the filter’s full value in the sky (usually the brightest area) and a little less on the sunlit mountains due to the filter’s transition. You have to judge between using a hard or soft-edge filter. Rarely is a scene’s contrast a straight line, so I mostly use a soft-edge filter. The exception is when using a telephoto lens, where a hard-edge is better due to the telephoto’s shallower depth-of-field.

Remember that the filter’s gradient narrows at smaller apertures, so use the aperture stop-down preview or live-view to position the filter. Never adjust the filter with the aperture wide open. Also, if the filter is positioned high in the frame, make sure the filter’s bottom-edge doesn’t unintentionally protrude into the lower field-of-view.

Grad filters have one annoying trait, especially with higher ND values. In scenes with uneven contrast, the filter (especially the hard-edge) is prone to over-darken shadow areas or create hot spots in the highlights near the filter’s transition. Usually, the over-darkening effect is the most noticeable problem. This problem is exacerbated when the filter is not properly aligned in the frame, which underlines the importance of stopping down the aperture when positioning the filter. To mitigate this problem, the soft-edge filter is a good choice for most shooting situations.

Dark Zone: Note the underexposed middle area where the filter’s transition was positioned too low.

Exposure

Because the grad filter is balancing the overall scene illumination, the camera’s autoexposure is reasonably accurate most of the time. Still, check the histogram and adjust the exposure compensation dial for any minor clipping at either end. Excessive clipping at one or both ends means you need a stronger filter. If you need a 4-stop grad filter, then consider HDR instead. Conversely, if you misjudged the scene and the histogram is compressed in the middle, then use a lower stop value. If you don’t, the resulting image will lack contrast that later needs correction in software. That in turn leads to image problems, like exaggerated noise if the contrast correction is large.

When Grad Filters are Impractical

The obvious situation is where there is no delineation between the contrasting areas. But the more common situation is when a foreground subject, like a tall tree, crosses through the grad filter’s transition. That results in a dark (underexposed) treetop and a normally exposed tree bottom.

Another situation is a lake or river that sits in the foreground shadow with reflections of the lit portion of the upper scene. Because the grad filter holds back light in the upper scene, the reflections appear brighter relative to what they mirror. That contradicts the natural lighting, since a reflection is one to two stops darker than the subject being reflected. This isn’t a problem with a one-stop filter, but a two- and certainly a three-stop filter is problematic.

Finally, a grad filter is a problem with strong side lighting. If the side lighting casts large adjacent shadows, the grad filter drives the shadows even darker (see figure below).

Darker Shadows: A grad filter further darkened the adjacent shadows in the upper cliff.

Software Grad Filter

You can simulate a grad filter with the help of Photoshop. Simply shoot two exposures — one for the highlights and the other for the shadows — and combine them in Photoshop to mimic the filter’s gradient.

This approach has two minor drawbacks: it requires a little Photoshop skill and you can’t preview the results in the field. The advantage is complete control over the gradient’s transition. In Photoshop, you control the transition’s size and placement, and can address hot or dark spots near the transition, including objects that cross the gradient. Inter-frame subject movement is less of an issue since the two images are joined, not blended as with HDR.

Noise Averaging

Cameras are getting better at lowering digital noise, yet it is still problematic for some depending on their camera brand, model, and age. The best defense is to use the lowest ISO setting possible and apply noise filtering in post-processing. That works to a point until the noise filtering begins to soften the image.

One way to avoid or limit the post-processing noise filter dilemma is simply to shoot the scene multiple times and average the frames in Photoshop CC. This obviously carries one major caveat: the scene must be completely static during the exposures. Below are the steps to shoot and average multiple frames in Photoshop.

Shoot two to four redundant exposures (four being ideal) using a tripod. It’s best to shoot with manual exposure and focus, but you can probably get away with using auto.

After editing the raw files in Camera Raw (and skipping noise reduction for now), select all the files and click Open.

In Photoshop, click: File → Scripts → Load Files into Stack. Then select ‘Add Open Files’ and ‘Create Smart Object after Loading Layers.’ You can optionally select ‘Attempt to Automatically Align Source Images.’

After the conversion is completed, select the new Smart Object layer. Now click: Layer → Smart Objects → Stack Mode → Mean. You can also select ‘Median,’ which uses the middle value, but the results are likely to be similar. If necessary, you can apply some noise filtering.
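If you ever want to do the averaging outside Photoshop, the same mean-stacking idea is easy to script. The sketch below is only an illustration, not the workflow described above; it assumes the bracketed frames were already exported as aligned 16-bit TIFFs (the file names are hypothetical) and uses the numpy and imageio libraries.

```python
import numpy as np
import imageio.v3 as iio

# Hypothetical, pre-aligned 16-bit TIFFs exported from the raw converter
paths = ["frame1.tif", "frame2.tif", "frame3.tif", "frame4.tif"]

# Load as float so the averaging doesn't clip or round prematurely
stack = np.stack([iio.imread(p).astype(np.float64) for p in paths])

# Pixel-wise mean across the stack; random noise averages out, improving the
# signal-to-noise ratio by roughly the square root of the frame count
mean_image = stack.mean(axis=0)
# np.median(stack, axis=0) would mimic Photoshop's Median stack mode instead

iio.imwrite("averaged.tif", mean_image.round().astype(np.uint16))
```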

Based on my testing, four averaged ISO 800 frames had nearly the same noise level as one ISO 100 frame. Even at ISO 1600, the averaged image was very low in noise. I also compared the test images to my Canon 5Ds’ in-camera frame averaging feature, and a side-by-side comparison of the noise showed nearly identical results.

Bonus Feature

Do you yearn to create those dreamy water scenes taken with extremely long shutter times? Usually, you need a strong neutral density filter to create sufficiently long exposures. But what if you don't have an ND filter? Use as long an exposure time as you can muster and then take repeated exposures spaced over time. Now, use the frame averaging in Photoshop CC and, voilà, you have a dreamy waterscape.

Photographing the Sun

I photograph the sun only when it’s near the horizon and I can comfortably look at it with my eyes. But many photographers love to include the overhead sun, and that raises the question of eye safety and damage to the camera. I have yet to find adequate answers to this question. Canon and Nikon manuals say basically the same thing: don’t point at the sun or look at the sun’s image (and not much else).

Eye Damage

This is a concern only with cameras that have optical viewfinders, such as all DSLRs. There is no question that viewing the sun through a viewfinder poses a potential risk. The biggest danger factor is the lens’ focal length. The sun’s focused power increases with the square of the focal length, doubling for every 41% increase. On the positive side, the sun’s intensity is mitigated by whatever filters are being used (such as a polarizer), lens speed, and the fact that the sun is focused on the focusing screen and not directly into your eyes as with binoculars.
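Stated as a relation (a simplification that assumes a fixed f-number, so the aperture diameter grows in proportion to the focal length):

\[
P \propto \left(\frac{f}{N}\right)^{2}, \qquad P(1.41\,f) \approx 2\,P(f)
\]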

Except for extreme precautions (such as using special solar filters), it isn’t feasible to define guaranteed safe viewing conditions for everyday shooting, so your best indicator is your eyes. Quite simply, if you can’t comfortably look at the sun’s image in the viewfinder, then obviously you shouldn’t look at it. And even that rule-of-thumb can be misleading when you fool yourself into thinking the sun isn’t that bright. The best defense is to always look at the sun’s image peripherally as you frame the composition. Blocking the sun for a rim-light effect or using the stopped-down aperture preview further reduces the risk. Also, because the risk increases with the square of the focal length, using only a wide-angle lens is your safest bet.

Camera Flambé?

A more nebulous question is if the sun can damage the camera. The biggest concern is the sensor, but other potentially vulnerable areas are the autofocus sensors, shutter curtain, focusing screen, and light meter sensor. As for the image sensor, typically the exposure is very short; but if you are using live-view or a mirrorless camera, the sun is focused on the sensor continuously.

To get a feel for how much of a threat this is, I used a 100mm lens at f/5.6 to focus the noon sun on a newspaper. The newspaper began smoldering after a few seconds. From the paper’s point of view, that meant it was heated to at least 400°F within a relatively short time.

CMOS does not take kindly to those temperatures. However, that experiment doesn’t account for the sensor’s thermal conduction, emissivity, reflectance, and so on, and all of that works in its favor. Nevertheless, when you consider the variability of typical shooting conditions, such as lens speed, exposure settings, the sun’s elevation, and prevailing weather, there is no way to define guaranteed safe conditions other than just common sense.

There is usually no concern when the sun is just cresting the horizon or it is behind clouds or thick haze. If you can stare at the light comfortably (usually when it still retains a warm color), then the conditions should be safe for the camera. But under any other conditions, and especially with long exposures using a telephoto lens or using live-view or mirror-lockup regardless of focal length, then it’s a gamble. Also, if it’s your habit to tote your tripod-mounted camera over your shoulder, keep the lens cap on to avoid unintentional “staring” into the sun.

Don’t let this issue create undue anxiety, especially when accidentally photographing the sun. Canon assured me that sun damage to a camera’s sensor is very rare. Yet, rare doesn’t mean never. During the 2017 solar eclipse event in the US, Lensrentals (a major photo rental company) received back some of its equipment in various smoldering conditions.

Long Beach Harbor, California: With a heavy haze and low-setting sun, I have no concern over damaging either my eyes or equipment.

Flare

Flare is often subtle and not overtly noticeable in the viewfinder. More often, it only reduces contrast slightly and doesn’t exhibit the classic color burst. Anytime I take a daylight shot with the sun anywhere near the shooting direction, I wave my hand or hat around the edge of the lens to check if any flare is sneaking in. As for relying on a lens hood, I don’t because it gets in the way of using a grad filter or polarizer. When there is a potential for flare, I wouldn’t trust a lens hood anyway and would still wave my hand around to make sure flare isn’t creeping in.

Sneaky Flare: This portion of a vineyard scene has reduced contrast in the lower-right due to flare. This flare was sneaky and I never noticed it in the viewfinder.

Unavoidable Flare

The previous discussion was about preventing flare, but there are times when that is impossible. For example, shooting towards the sun during sunset or sunrise — even when it’s below the horizon — can blast the image with flare. Unless you want the flare effect, there isn’t much you can do until post-processing. Still, there are a few tricks you can try to mitigate the problem.

The most effective method is to use a prime lens instead of a zoom. Barring that, remove any filters, including protective UV filters. If you’re using a grad filter, you may have to ditch that and use HDR instead. Small aperture settings can also exacerbate the problem. Use the depth-of-field preview to judge the flare’s intensity and, if necessary, open up the aperture if you are able to. If that helps but you need the extended depth-of-field, then try focus stacking (discussed later in this chapter).

Finally, shoot throughout the entire sunrise or sunset. The reason is you may capture a usable shot just before the flare is overwhelming. A small amount of flare is generally easy to remove in post-processing. In Chapter-14, I’ll cover techniques for removing flare in Photoshop.

Give flare the thumbs down

The intensity of the flare drives how difficult it is to fix in Photoshop. In extreme cases, you may have to layer two different shots and extract the usable portions. In these cases, you can make the task easier with a simple trick. Take two successive shots, but in the second shot use a finger or thumb to block the sun while covering as little of the rest of the scene as possible. That will block most of the flare in the rest of the scene and make merging the two shots easier in Photoshop. Use manual exposure to make sure both shots have the same exposure setting.

Camera Shake

There’s a “Whole Lotta Shakin' Goin' On”

All DSLRs share a common bane: mirror and shutter vibration. The mirror is the major offender, but the shutter can also contribute. The severity of the problem is a function of focal length, how the camera is supported, and of course how much of a rattletrap your camera is. The extent of blurring is also a function of shutter speed. Relatively slow shutter speeds — roughly 1/60-sec to 1-sec — cause the most problems. Then on top of all this are any external forces such as wind or accidental contact with the camera or tripod. Mileage will vary, but when using focal lengths beyond 100mm (65mm APS-C), you have to start being careful about camera vibration.

Controlling Mirror and Shutter Shock

Intuitively, to resist camera-induced shock, the support structure must be as rigid as possible. The alternative is to absorb the shock, such as with a beanbag. The latter usually lacks practicality, so the tripod is the usual means to ensure razor-sharp images; though choosing the right combination can be a challenge. The problem is that a tripod-mounted camera is a complex mechanical structure involving a lens, camera body, quick-release plate, tripod head, tripod legs, and a mix of various materials.

Heavier the better?

The common notion is that heavier is better, but different materials such as carbon fiber or wood can counter that notion. And even for similar materials, structural differences can make a bigger difference than weight. Nevertheless, acknowledging that conundrum, a reasonable rule-of-thumb is that you want as heavy a tripod as you can tolerate. Preferably, it should be carbon fiber with fewer leg sections and a head that provides the shortest and most direct connection between camera and tripod base (for example, a ball head instead of a pan head). But even after embracing the “army tank” approach, the conundrum remains where subtle factors can impact performance. For example, my heavier pan head vibrates more than either of my two lighter-weight ball heads.

Controlling the shakes

Consider the following when shooting at focal lengths that are problematic for your camera’s support system.

You must engage a DSLR’s mirror lockup. This eliminates most vibration problems. An alternative to mirror lockup is live-view if your camera supports it.

Mirrorless cameras have a double whammy: the shutter sits open for live view, so the first curtain must close and then reopen before the exposure can begin, adding its own shock. Consider enabling Electronic Front (or First)-Curtain Shutter, which starts the exposure electronically instead of with the mechanical first curtain and eliminates that vibration.

Use a cable release or the camera’s self-timer. Make sure the cable doesn’t tap the tripod or tug at the camera when firing the shutter. I often tie a loop around any convenient point on the tripod to form a strain relief that prevents accidental tugging on the camera.

Make sure the center column is completely down and firmly locked.

Keep your eye on the viewfinder before firing the shutter for any sign of wind movement. Some hang their camera bag on the tripod to steady it, or place something like a heavy beanbag on top of the camera.

Many light-weight telephoto lenses support an optional tripod mount-collar. This improves handling and balance and “should” help with vibration resistance. However, the combination may lack sufficient mass and become a less solid support than direct attachment to the tripod head, so results can go either way.

Many older-generation image-stabilized lenses can dither slightly when mounted on a tripod, so play it safe and shut the stabilization off unless the manufacturer says otherwise.

Last, test your equipment. Shoot at different focal lengths with and without mirror lockup and electronic front-curtain shutter to see when they’re needed. If you see blurring below 70mm from just normal shooting, you may want to consider a new head and/or tripod (or even a new camera).

Panoramas

Basics

In most cases, it is possible to score an excellent panorama using just your camera and an ordinary tripod head. The basics are:

Level the tripod’s base first unless the pan rotation is directly below the camera’s mounting plate.

Mount and level the camera both horizontally and vertically. The camera should be positioned directly over the rotation axis, such as on a ball head. For vertical shots, you’ll need an L-bracket to keep the camera aligned to the axis.

Switch to manual exposure so each frame is the same exposure.

Frame the scene leaving some surrounding space for any border loss created by the stitching software.

Unless you have a special panoramic tripod head, avoid close foreground objects that may cause parallax error and give the stitching software problems. Also, minimize camera tilt to avoid panning in an arc. If necessary, move further back using a longer focal length or lower your position.

Focus on the center scene and disable autofocus.

If using a zoom lens, try to favor the middle range for minimum distortion. Remember to apply the lens profile during raw processing to further straighten the image before stitching.

Shoot the required number of frames with approximately 30% overlap.
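If you like to estimate ahead of time how many frames a sweep will take, the geometry is simple. This is a rough sketch of my own, assuming portrait-orientation frames and simple angular panning; the function name and defaults are illustrative, not from any particular app.

```python
import math

def frames_needed(total_fov_deg: float, focal_length_mm: float,
                  frame_width_mm: float = 24.0, overlap: float = 0.3) -> int:
    """Rough frame count for a single-row panorama.

    frame_width_mm defaults to 24mm, i.e. a full-frame sensor turned to
    portrait orientation, which is typical for panoramas."""
    frame_fov = math.degrees(2 * math.atan(frame_width_mm / (2 * focal_length_mm)))
    if total_fov_deg <= frame_fov:
        return 1
    extra = (total_fov_deg - frame_fov) / (frame_fov * (1 - overlap))
    return math.ceil(extra) + 1

# Example: a 120-degree sweep at 45mm in portrait orientation with 30% overlap
print(frames_needed(120, 45))   # -> 6 frames
```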

White Balance Caution

This applies to when you later process the raw files. If your camera is set to Auto White Balance, the white balance may change between frames. If your raw converter’s white balance is set to “As Shot,” you may not notice that the white balance setting varied between frames. So remember to check all the white balance settings and synchronize them to a common value in the raw converter. Note that this is not an issue when you merge panoramas in Camera Raw or Lightroom since the white balance for all the frames is set by the first frame.

Tilt-Shift Lens

One way to avoid tilting the lens is to use a tilt-shift lens. This allows you to aim the camera straight and shift the lens vertically to acquire the desired composition without tilting. Shifting is made possible because the image circle of a tilt-shift lens is larger than on a conventional lens.

In addition, you can shift the lens in the horizontal direction to avoid the need to pan. This eliminates parallax error of foreground objects and maintains a rectilinear perspective for the entire panorama. For example, a 3-frame panorama using a 45mm lens with 30% overlap represents an angular horizontal field-of-view of approximately 65-degrees. When the same three frames are shot with a 45mm tilt-shift lens that is shifted to its limit at each end (11mm), it produces the same angular coverage.
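As a rough check on that figure for the shifted case (assuming a full-frame sensor that is 36mm wide and the lens’s full 11mm of shift in each direction):

\[
\text{coverage} = 36\,\text{mm} + 2(11\,\text{mm}) = 58\,\text{mm}, \qquad
\text{FOV} = 2\arctan\!\left(\frac{58/2}{45}\right) \approx 66^{\circ}
\]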

Panoramic Tripod Head


You ideally want to rotate the camera around the lens’ nodal point. That avoids parallax error with up-close objects that cause software stitching headaches. A panoramic head positions the camera’s rotation at the lens nodal point. Some even accommodate multi-row panoramas. There are a variety of styles and features available, so research carefully before buying.

Nodal Ninja 6 with Nadir Adapter: Panoramic head that accommodates multi-row panoramas.

Extending Depth-of-Field

Tilt-Shift Lenses

With a tilt-shift lens, you pick the foreground and background points you want in focus and then adjust both the focus and tilt (typically downward) until both points are in focus. The tilt-shift lens tilts the normally vertical focus plane to where both the foreground and background points are on the same focus plane. Everything between the two points will be in focus. Objects above or below the tilted focus plane will recede from focus as a function of aperture. Since the lens’ tilt-shifting mechanism rotates, you can shoot in either the horizontal or vertical position. Given the typical perspective of most landscape scenes and in combination with a small aperture, you can achieve incredible focus depth. Canon and Nikon offer focal lengths ranging from 17mm to 135mm.

Canon 24mm Tilt-Shift Lens: Canon is rumored to introduce a mirrorless tilt-shift lens with autofocus — an industry first.

When conditions are a bit windy, a tilt-shift lens allows wider apertures with correspondingly faster shutter speeds while still providing good depth-of-field. As an added bonus, you can shift the lens by moving it laterally up, down, or sideways to control perspective, especially of tall objects or structures (or to help with panoramas as previously mentioned). With my view camera, it was my practice to start with the camera level in all axes, use the shift first to frame the scene, and change the pitch axis only if necessary. This maintained the most natural perspective unless there was a reason to do otherwise. Do note that a large combination of tilt and shift movements can cause vignetting.

Drawbacks

Tilt-shift lenses do have their limits. If the close-up subject spans the top and bottom of the frame, a tilt-shift lens is of no use. Similarly, any point that is well above or below the tilted focus plane may be noticeably blurred. How much this distracts from the image depends on the content. The photograph below of a wagon wheel has the top of a post well above the tilted focus plane, and it is slightly blurred.

Bodie State Historic Park, California: Inset shows out-of-focus post top while the wagon wheel to background structures are in sharp focus.

Finally, tilt-shift lenses lack the versatility of an autofocusing zoom since they are only available in fixed focal lengths and must be manually focused. In addition, the camera’s auto exposure may only be accurate when the tilt and shift are set to zero. Beyond zero, either meter with the lens at its zero setting and apply that exposure manually, or auto-bracket.

Focus Stacking

Focus stacking is the software answer to the tilt-shift lens (and decidedly cheaper). You shoot successive frames with the focus adjusted a little further back each time and then merge the frames with special software. Some newer cameras now incorporate a focus stacking mode (sans the actual merging of the frames). The major advantage to focus stacking is, unlike the tilt-shift lens, it maintains complete fore-to-aft sharpness from top to bottom. Its biggest drawback is the image must remain static throughout all the shots. Helicon Focus and Zerene Stacker are two popular software apps for focus stacking. Photoshop has a similar feature, but it’s prone to artifacts that I’ll cover later in Chapter-15.

Number of frames

I’m not sure if there’s an easy method to determine the minimum number of frames to shoot. You want finer focus increments for the up-close objects, moving to coarser increments for the background as the depth-of-field increases. Ideally, each frame’s depth-of-field should overlap the next. I prefer to keep it simple: use a small aperture and eyeball what seem to be appropriate increments. I think four to six frames should cover most landscape situations, but err on the high side when in doubt. Macro shots usually require more frames depending on magnification.

Keep in mind that as you adjust the lens focus, the magnification may change slightly (called “focus breathing”). This is particularly noticeable in macro shots. Since the software scales all the frames to the same size, the image size is set by the frame with the largest image, usually the frame with the farthest focus point. Make sure you compose the image based on the largest image to avoid unexpected cropping after merging. Finally, to avoid brightness variation between frames, set the camera to manual exposure.

Shoot for the Moon

Including the full moon in your composition often requires celestial preplanning. Fundamental is knowing exactly where and when the moon is in a favorable position relative to your intended composition. I usually time my shots when the moon is nearest the horizon, free from obstruction, and when the sun is just setting or rising. I find this the most photogenic time, and there are fewer, if any, contrast issues with the moon’s brightness.

Once you have a location in mind, consult The Photographer’s Ephemeris (TPE) to determine the full-moon’s position and its rise or set time relative to the sun’s rise or set time. TPE also indicates the moon’s elevation at any date and time, and the elevation of any intervening obstacle. This is important to determine if the moon is clear of any obstacle at the intended shooting time and location. An additional aid is TPE 3D (discussed back in Chapter-5) that simulates a visual representation of the scene, including the moon’s position.

Two Field Examples

Joshua Tree National Park

Using TPE, I determined that at the location seen below, the setting moon was 5.4-degrees above the horizon at sunrise. This was nearly perfect with the moon still sufficiently above the horizon to compose it properly while catching the first rays of sunrise from behind me. The timing of the emerging sun provided enough light to level the contrast between the sky and moon.

Moonset, Joshua Tree NP at Sunrise

Plan ahead

Below is a full moon image taken 19-minutes before sunrise and 53-minutes before moonset. However, what I would have preferred was to capture the moon next to or near Lone Pine Peak (the foreground mountain on the right) just when the mountain peaks received the first rays of sunlight. So, to determine when and if the shot I really wanted was possible, I researched it with TPE and TPE 3D.

Lone Pine, California: I need to determine when the moon will be in this same approximate position during sunrise.

Using TPE first, I placed the red pin on my shooting location (the same location as in the above photo) and the grey pin on Lone Pine Peak, which indicated an elevation of +10.2°. I then used TPE’s Visual Search function to find when or if the full moon will be near the desired position during sunrise. With a search starting date of September 2017, the Search function yielded a hit on April 1, 2018. Next, I used TPE 3D to verify that the lighting and moon alignment were correct. As shown in the TPE 3D screenshot below, you can see that the moon alignment and mountain illumination are perfect. (Note: when the appointed time finally arrived, the entire area was socked in — oh well, c’est la vie!)

Lone Pine Moonset on April 1, 2018: TPE 3D verified that everything is in the desired alignment on that date.

Focal Length

I prefer the moon to complement the scene rather than dominate it. This means the moon has to be appropriately scaled to the scene — neither a white speck nor so large that it dominates the composition (which is fine if that’s what you want). I find that around 100mm to 200mm (full-frame) works well for many scenes. In fact, the popular 70-200mm zoom is probably the ideal lens for shooting moonscapes. The shorter range allows capturing a sufficiently expansive landscape with adequate depth-of-field while keeping the moon recognizably large. The longer range allows enough versatility to further enlarge the moon and still maintain some semblance of a landscape vista.

If your objective is a really large moon, then estimate how much of the frame you want it to take up. For example, a 6mm moon image fills 25% of a full-frame sensor’s 24mm height. Multiply the desired image size by roughly 115 and you get the required focal length, approximately 690mm in this case. The same rule applies to the sun’s image size.
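The multiply-by-115 rule follows from the moon’s angular diameter of roughly half a degree (it varies a little through the year, which is why you will also see factors closer to 110):

\[
\text{image size} \approx f \cdot \tan(0.52^{\circ}) \approx \frac{f}{110}, \qquad
f \approx 115 \times 6\,\text{mm} \approx 690\,\text{mm}
\]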

Shutter speed

As the focal length increases, so does the moon’s relative movement within the frame. Nevertheless, even with a long telephoto lens, shutter times of up to a few seconds shouldn’t be a problem. In most situations it’s unlikely you’ll ever have exposure times that exceed that.

Moon Exposure

Ideally, you want the moon to be properly exposed and not a white orb. This is why shooting when the sun is near the horizon is important because there’s sufficient ambient light to level the contrast between the moon and the rest of the image. When there is no (or very little) sunlight, you now have to figure out how to deal with a large white “dot” in your image.

Hide it

One approach is to partially hide the moon. In the example below, the moon is partially obscured by the distant mesas, which I feel is better than a free-floating and featureless white orb. Another option, if you’re lucky enough to have some clouds hanging around, is to let the moon’s ethereal backlighting provide just the right touch. Finally, you can simply avoid a full moon and shoot a crescent moon instead. A crescent moon is esthetically more interesting than a white orb.

Bryce Canyon Moonrise: A partially concealed moon adds a bit of mystery.

Two exposures

You can take two separate exposures and paste in the moon in Photoshop. To expose for just the moon, try f/16 and 1/60sec at ISO 100 as a starting point. While this sounds straightforward, it may not be a simple cut-and-paste operation but rather a skillful layer-blending procedure. This is because the longer exposure will emphasize any surrounding glow around the moon from atmospheric haze, and that may complicate superimposing the exposed moon in Photoshop.

HDR and/or graduated ND filter

If there's some ambient light and the contrast isn’t too extreme, then HDR, a grad filter, or a combination of both may work. If the moon moved between frames in the HDR series, you’ll need to use the deghost option in the HDR software.

Vignette for Seasoning

This is not a photographic technique but a post-processing consideration for your moon compositions. Vignetting is an effective way to draw more attention to the moon or sun, especially when it’s near the horizon. An example is the Mono Lake moonrise image below. It was a nice moon shot, but it needed more pizazz. So, I vignetted around the moon to emphasize its glow and to reduce the bright foreground reflection (see second image). This resulted in a darker and more saturated foreground, and made the moon appear as the main light source.

Mono Lake, California: The bright foreground competes with the moonlight.

After Vignetting: The foreground is now darker and more saturated, making the moon appear to be the major light source. It also corrected the over-bright reflected moon.

Photographing the Milky Way Galaxy

Astrophotography is a popular pursuit for many landscape photographers. While star trails are a favorite subject, I prefer shots of the Milky Way Galaxy. Shooting the Milky Way is a seasonal event, viewed largely toward the east through the south, and it is surprisingly easy to do. The galaxy is visible from March through October in various orientations ranging from vertical to a rainbow-like arc.

Basic Steps

Apps

First, I recommend you find an app that provides both the time and positioning of the galaxy. I recommend both PhotoPills and TPE 3D. They show the orientation and position of the galaxy as you advance the timeline. PhotoPills’ augmented reality feature overlays the galaxy on your phone’s camera image so you can frame the shot. TPE 3D shows the galaxy on its simulated terrain feature, which is handy when researching offsite.

Another useful app is one that shows a map of light pollution levels. While you may have seen photos of a bright Milky Way, that isn’t what the naked eye sees. Instead, you need the camera’s ability to collect light over an extended period of time, and that requires absolute darkness (preferably with no visible moon). There are many available light pollution apps, but I find it convenient just to use the included light pollution map in TPE. To access it, tap the multilayer icon and then tap the street/light-pole icon.

Lens choice

I highly recommend a fast, wide-angle lens of at least f/2.8 (ideally f/1.8 or f/1.4). The reason is to reduce shutter time and ISO to limit star trails and image noise. Also, because the galaxy spans a wide field-of-view, I recommend full-frame focal lengths roughly between 14mm and 24mm, although you can probably stretch it to 35mm.

You can first try whatever wide-angle zoom you have, but if the results aren’t satisfactory, then consider a fast prime lens. All you need is a less-expensive manual lens sans auto-focus and auto-exposure. A popular choice among astrophotographers is the Rokinon (aka Samyang) line of lenses. My choice was a Rokinon 20mm f/1.8; however, I would not recommend these lenses for general photography due to their lack of auto exposure and focusing.

Rokinon 20mm F/1.8 Prime Lens

Exposures

Once you have framed your shot with the help of PhotoPills and TPE 3D, set your lens to infinity focus and its widest aperture. Listed below are recommended starting exposures for the three most popular Rokinon/Samyang full-frame lenses for astrophotography. Shutter settings are the maximum time to limit star trails. If you need to increase the exposure, you can only increase the ISO or change to a faster lens. You may have to judge exposure mostly by the LCD’s image since the histogram may not be that informative.

14mm f/2.8: 36 sec at ISO 3200

20mm f/1.8: 25 sec at ISO 1600

24mm f/1.4: 21 sec at ISO 800

The maximum shutter times to limit star trails were determined using the traditional 500 rule, which is 500 divided by the focal length. You will find variations of this rule, including the so-called NPF rule. PhotoPills has a Spot Stars feature that calculates maximum shutter speeds using both the 500 rule and the NPF rule. When I “pixel-peeped” a 50-megapixel image shot using the 500 rule, I did see some slight star elongation, but it was completely unnoticeable at typical print sizes. I recommend you start with the 500 rule until you see a reason to deviate from it.
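If you would rather not memorize the table, the 500 rule is trivial to compute. This little sketch implements only the simple 500 rule described above (crop-sensor shooters would first multiply the focal length by the crop factor); the NPF rule needs extra camera-specific inputs, so it isn’t shown.

```python
def max_shutter_500_rule(focal_length_mm: float, crop_factor: float = 1.0) -> float:
    """Longest shutter time (seconds) before star trails become obvious,
    per the traditional 500 rule: 500 divided by the effective focal length."""
    return 500.0 / (focal_length_mm * crop_factor)

# Reproduces the full-frame starting points listed above
for fl in (14, 20, 24):
    print(f"{fl}mm: {max_shutter_500_rule(fl):.0f} sec")
# 14mm: 36 sec, 20mm: 25 sec, 24mm: 21 sec
```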

Artificial light

Since the foreground will be in total darkness, a flash or a high-intensity flashlight is useful for illuminating a foreground subject (see image below). Also, consider using flash gels for more dramatic lighting.

Milky Way Galaxy: Shot in Joshua Tree National Park. Horizon glow is from the nearby Palm Springs area.

Starry Landscape Stacker

The first thing you’ll notice with any starry night shot is a large amount of noise due to the long exposure at a high ISO setting. You can try the noise averaging technique previously described in this chapter and then tediously labor in Photoshop to align the moving stars. The better solution is to use an app that does it for you. I use Starry Landscape Stacker (Mac only), which takes several repeated shots and performs the alignment and averaging. Sequator is a similar app for Windows.

Starry Landscape Stacker Example: This is an enlarged portion of a nighttime scene. Using only four frames, the Starry Landscape Stacker result (right) is noticeably less noisy than a single frame (left).

ISO ‘Invariant’ and ISO ‘Variant’

Astrophotography usually requires high ISO levels and long shutter times, which provides an opportunity to digress to the topic of ISO invariance and ISO variance. Admittedly, this subject is more of academic interest than anything else. However, it does address an interesting question. Assume you have taken an exposure of f/8 at 1-sec and ISO 800. Would there be any difference if the same shot were repeated with the ISO set to 100 and the exposure later increased 3-stops in the raw processor?

Noise

Before answering that question, first a very simplistic overview of image noise. The two main sources of noise stem from the photo sensor and the backend signal processing. Between the sensor and backend processing is an analog amplifier that amplifies the sensor’s output signal. The level of amplification (that is, gain) is set by the ISO level. When the gain is high (for a high ISO), the sensor noise along with its signal are both amplified. Then, the backend processing adds additional noise that varies by design.

ISO invariant

There are two ways to increase the sensor’s output signal. One is through the analog amplifier and the other is by increasing the Exposure slider in the raw processor (digital multiplication).

If no photon and electrical noise were present when capturing an image, it could be argued that it doesn’t matter how you increase the signal, either with the amplifier or in the raw processor (although I’m ignoring some minor technicalities for the sake of simplicity). In this case, you could underexpose any scene at ISO 100 and later correct the exposure in Camera Raw with nearly the same results as if you had increased the camera’s ISO setting instead. This is defined as “ISO invariant.”

Of course, in the real world we have noise, but it’s still possible to be ISO invariant. In the more realistic situation where the noise is dominated by the front end, it won’t matter whether it’s the amplifier or the raw converter that amplifies the signal. That is because in either case the noise is increased equally, thus maintaining the definition of ISO invariance.

ISO variant

When there is excessive backend noise, the raw converter now sees the total of the front and backend noise. Increasing the exposure in the raw converter now amplifies both front and backend noise, which results in a noisier image. In this case, it’s better to increase the amplifier’s gain (by setting a higher ISO level) so that the sensor signal can dominate over the backend noise; or in other words, improve the signal-to-noise ratio. This is defined as “ISO variant.”
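To make the distinction concrete, here is a toy numerical sketch rather than a model of any particular camera. It treats photon (shot) noise as the front-end noise, applies the analog gain, and then adds a fixed amount of back-end read noise; all of the numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def capture(signal_e, analog_gain, read_noise_dn, n=100_000):
    """Toy pipeline: shot noise -> analog gain (ISO) -> back-end read noise."""
    photons = rng.poisson(signal_e, n).astype(float)   # front-end signal + shot noise
    raw = photons * analog_gain                        # ISO = analog amplification
    raw += rng.normal(0.0, read_noise_dn, n)           # back-end noise, added after gain
    return raw

signal = 20        # electrons captured by a pixel at this exposure (made up)
read_noise = 12    # back-end noise in raw data numbers (made up)

high_iso = capture(signal, analog_gain=8, read_noise_dn=read_noise)      # ISO raised in camera
pushed = capture(signal, analog_gain=1, read_noise_dn=read_noise) * 8    # low ISO, +3 stops in raw

# With large back-end noise the pushed file has a clearly worse signal-to-noise
# ratio (ISO variant). Set read_noise near zero and the two converge (ISO invariant).
for name, data in (("ISO raised in camera", high_iso), ("Pushed 3 stops in raw", pushed)):
    print(f"{name}: SNR = {data.mean() / data.std():.2f}")
```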

Example of ISO variant and invariant

Below is a comparison of the Canon 5Ds against the newer Canon R5. In dim light, I photographed a closeup of the edge of a small yellow box at ISO 100 with a 3-stop underexposure. In Camera Raw, I increased the Exposure slider by 3-stops. I then compared the ISO 100 “pushed” shots to the same scenes shot at ISO 800 (3-stops above ISO 100). Below is a highly magnified portion of each test shot.

ISO Variant Example: The Canon 5Ds is much noisier when pushed 3-stops from ISO 100 (left) versus shooting at ISO 800.

ISO Invariant Example: The Canon R5 3-stop pushed image (left) rivals the ISO 800 shot.

Why bother with this?

You may rightly ask why anyone should care about this, but there are a couple of reasons. First, as you increase the camera’s ISO level, you lower the dynamic range. Therefore, there is a potential benefit to shooting at a lower ISO to preserve highlights and correcting the exposure later in the raw processor. Example situations might be shooting an indoor rock concert or a nighttime fireworks display. Of course, that applies only to ISO invariant cameras, and even then, you’ll probably be limited to roughly 3-stops of exposure increase.

The other reason, which has more practical relevance, is exposure recovery in a high-contrast image. A common photo editing problem is trying to open up deep shadows. This is usually handled by adjusting the Exposure slider or using Curves along with various other tools and techniques. What often happens is you end up with a lot of noise in the recovered area. Using an ISO invariant camera allows more exposure increase before noise becomes objectionable. To determine if your camera is ISO invariant, either test it yourself or check its review at DPReview, which covers this subject.

Stopping the Wind

Some photographers take advantage of blustery winds to create surreal images of swaying trees, grass, or whatever. But if that’s not your intent, then the wind becomes a royal pain. A typical scenario is using flowers or some other wind-sensitive subject as a foreground to a background vista. You want the flowers to remain sharp. This problem is most exasperating during morning and evening, when the dim light and the small aperture needed for increased depth-of-field result in long shutter times.

You could wait for a lull, but when the wind is persistent you need a Plan-B. You can increase the ISO to achieve a suitably high shutter speed and suffer the noise penalty. But the better solution is to shoot both a low and very-high ISO shot. Then in Photoshop, you paint in the sharper foreground portions while leaving the bulk of the low-noise image intact.

Plants and flowers generally have busy detail that tends to camouflage the noise. You’ll still need to run noise reduction on the high ISO shot that may sacrifice sharpness, but that’s better than a blurry image. The ease or difficulty of merging the two images depends on the ferocity of the wind and the nature of the image. Generally it should be fairly easy, especially if there is some neutral separation between the moving elements and the rest of the image.

Wind Blur Example: Many of the foreground flowers were wind blurred. Using a higher-ISO shot, I painted out the offending flowers. The inset shows a before flower.

Tip

This method does have its limits when dealing with multi-second exposures, which are common in morning and evening shots. For example, if you need a 10-second exposure at ISO 100, increasing the ISO to 3200 still results in a 1/3-second shutter speed. To help with that, open the aperture a couple stops to further shorten the shutter time and refocus on the foreground. Losing some background sharpness doesn’t matter since you’re only interested in the foreground subject. Even if the shutter speed is still a bit long, at least now you have a better chance of catching a suitable lull in the wind.
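That 1/3-second figure is just exposure reciprocity; raising the ISO five stops (100 to 3200) divides the required shutter time by 2^5 = 32:

\[
t_{\text{new}} = t_{\text{old}} \times \frac{\text{ISO}_{\text{old}}}{\text{ISO}_{\text{new}}} = 10\,\text{s} \times \frac{100}{3200} \approx \frac{1}{3}\,\text{s}
\]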


Chapter 10: HDR Introduction

High Dynamic Range, or HDR, addresses the situation where your camera can’t capture all the highlights and shadows in one exposure. Too much light saturates the sensor and too little causes the sensor signal to be dominated by noise. HDR solves this by taking multiple exposures of a scene and merging them with special software to encompass the entire tonal range into one image.

Dynamic Range

Defining Dynamic Range

The range of light that is captured by your camera’s sensor is the camera’s dynamic range. That means, in plain talk, the brightest light at which detail is still retained and the darkest light at which detail is retained without excessive noise. Dynamic range is expressed as a ratio of the brightest area to the darkest. If the brightest reflection of a sunlit scene is 250,000 cd/m² (candela per square meter) and the darkest is 5,000 cd/m², that is a light ratio of 50. The number of times the light intensity doubles within that ratio of 50 is the number of stops of dynamic range, which calculates to 5.6 doublings, or a dynamic range of 5.6 stops. The reason to express dynamic range in stops is because the world of photography is centered on the way our brain differentiates tones, which is in stops of light and not by subtle light variations.
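Expressed as a formula, the stop count is simply the base-2 logarithm of the luminance ratio:

\[
\text{DR (stops)} = \log_2\frac{L_{\max}}{L_{\min}} = \log_2\frac{250{,}000}{5{,}000} = \log_2 50 \approx 5.6
\]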

Typical Dynamic Ranges

In conjunction with the iris, the human eye’s dynamic range can be up to 24 stops. But when fixed on a scene with a constant pupil opening, the dynamic range is closer to 10 to 14 stops. Either way, your eyes still process a much wider dynamic range than any camera. Typical cameras have dynamic ranges around 12 to 14 stops at their lowest ISO. By the way, a non-HDR computer monitor is around 10 stops and a photo print is around 5 stops.

HDR Examples

First Example

In the untouched image below, the camera captured the entire dynamic range of the scene without any highlight or shadow clipping. Nevertheless, the shadowed foreground is much darker than what our eyes would have perceived in the actual scene.

High Contrast Image: Unprocessed raw file image

To remedy the problem, you would increase the foreground exposure in Camera Raw by using the Shadow slider or the Linear Gradient tool. While Camera Raw allows considerable exposure correction on a raw file (especially with an ISO invariant camera), there’s a limit before noise and loss of detail dominate. The better solution would be to shoot bracketed shots and combine them in HDR software. In the images below, the left image demonstrates Camera Raw’s prowess in improving exposure, but the adjacent image shows the superiority of HDR.

Post Processed: Camera Raw (left) and HDR (right)

Second Example

Below is an extreme example that was shot towards the sun’s direction after sunset. The histogram inset shows the massive clipping in both the highlights and shadows, indicating a dynamic range in excess of the camera’s ability. Trying to produce a usable image with a single exposure is impossible, but by using HDR you can create the next image.

Valley of Fire, Nevada: Image with camera set to autoexposure.

Valley of Fire, Nevada: HDR version.

Third Example

During daylight shooting, you’ll likely capture the scene’s contrast with one exposure and successfully process it in Camera Raw or Lightroom’s Develop module. Unlike the first HDR example, which was dominated by a large foreground shadow, typical sunlit landscapes have dark tones dispersed throughout the scene and have a more normal appearance.

That said, I recommend shooting an HDR series for daytime scenes for two reasons. First, you have a recourse if you misjudged the contrast and later realized you need to recover some shadow or highlight detail. The other reason is more artistic than technical. Compared to HDR images, single-frame images generally look more natural when routinely processed in Lightroom or Photoshop. Other times, the image may need a bit more drama, which HDR does with its greater midtone contrast and eye-popping shadow and highlight detail. The single-exposure image below was well captured tonally and produced a good image. The HDR version in the second image added a bit more drama and lushness. Which image is better, of course, is totally subjective.

Processed Single-Exposure Scene

Same Scene Using HDR

HDR Processing

HDR File Format

To create and edit an HDR image requires a way to digitally represent the larger span of numerical data needed to define the broader luminance range. In other words, we need to represent image data differently than the traditional 8-bit JPEG and 16-bit TIFF image files. These traditional files represent only so much dynamic range before running out of “numerical steam.”

Adding more bits alone won’t solve the problem because of the way image data is stored. The software formats data to conform to the way the human eye separates tones by stop increments (called gamma correction), and merely adding more bits is an inefficient way to increase dynamic range. The solution is to change from the 16-bit integer-based file format to a larger 32-bit format using floating point. Now you have essentially an unlimited numerical range that stores all luminance values with equal precision and forgoes the gamma correction. Eventually, the file is converted back to a conventional gamma-corrected 16-bit (or 8-bit) file for editing in Photoshop.
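As a rough illustration of why simply adding integer bits is inefficient, consider what would happen if linear (un-gamma-corrected) luminance were stored in a 16-bit integer file: half of all code values would go to the single brightest stop, leaving the deep shadows with only a handful. A 32-bit float, by contrast, keeps roughly the same relative precision in every stop. This is a simplified sketch, not a description of any specific file format.

```python
# Code values available to each stop of a 14-stop scene stored as
# linear 16-bit integers (65,536 levels total)
levels = 2 ** 16
for stop in range(1, 15):
    top = levels // (2 ** (stop - 1))
    bottom = levels // (2 ** stop)
    print(f"stop {stop:2d} (from brightest): {top - bottom:5d} code values")
# Stop 1 gets 32,768 values; stop 14 gets only 4, hence the move to 32-bit
# floating point, whose precision is relative rather than absolute.
```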

To accomplish this task, we use an HDR software application that is typically separate from your image editor. Some operate as a stand-alone application or a plug-in to Photoshop or Lightroom. Adobe’s HDR is contained within Photoshop and Lightroom. The input to the software is multiple frames of the same image taken at different exposures. The software looks at the file’s exposure data to determine luminance levels and then combines the frames into a single 32-bit file that is edited within the HDR application. The image editing tools at 32-bits vary by application, but after conversion to a 16-bit file, you may need further editing in Photoshop or Lightroom to complete the image.

Feeding the HDR Software

HDR software is almost like a bottomless pit that swallows whatever dynamic range you toss into it. What gets tossed in are multiple exposures that capture both the highlights and shadows, and something in between. The exposures must not vary by more than 2 stops apart. Typically, all you need are three to five exposures, and maybe more in extreme cases; but too many exposures may cause problems for reasons I’ll explain later.

Tone Mapping

When editing is completed in the HDR application, the 32-bit data is “tone mapped” into a standard 16-bit or 8-bit file. The tone mapping process is very sophisticated and handled in different ways with often different results. Some tone mapping engines render pseudo-realistic images that Salvador Dalí fans might like, but they’re not the type of images I want for traditional fine-art landscape photography. Fortunately, most of the newer HDR applications are now more adept at rendering a natural image.

HDR’s Paranormal Behavior

With the complexity of HDR tone mapping comes the problem of artifacts. There are two common artifacts that I’ll refer to throughout this book, so I’ll define them now to avoid confusion later. They are “halos” and “ghosting” (along with “deghosting”). Halos are an artifact of the tone mapping process. Ghosting refers to subject movement between frames that appears as a blur in the composited image, and deghosting is the process of removing that blur.

Halos

While there are other types of HDR artifacts, halos are the most common. Halos occur along high-contrast borders, usually between the sky and the rest of the image, and the sky suffers most. In the image below, note the bright aura along the mountain ridge. Other times halos appear as a thin, bright line running along or around high-contrast borders. Halos can usually be repaired (a Chapter-20 subject).

Halos: A bright aura appears above the mountain ridge. Ironically for this image, most viewers would not notice it.

Ghosting and deghosting

This is where wind and HDR don’t mix well. Blowing brush or trees will obviously become a blurred mess after the frames are merged. The same problem occurs with animals or cars that move between frames. Most HDR programs have a feature called “deghosting” to address this. Basically, it attempts to identify a moving object between frames and replace it with image data from a single frame that you usually assign. Unfortunately, the software’s ability to do this correctly varies widely, and often it creates more problems than it solves.

Tip: Before enabling deghosting, you first have to determine if there is inter-frame motion. You should never enable deghosting routinely. The following is a quick method I use. In either Bridge or Lightroom, select all the bracketed frames and press the space bar for an enlarged view. Then, press and hold either the right or left arrow key on your keyboard. This will animate all the frames, making it easy to spot any inter-frame motion.
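
If you would rather script that check than eyeball it, the rough sketch below compares exposure-normalized frames and reports what fraction of pixels changed. It assumes linear frames with known exposure times; the 5% threshold is an arbitrary starting point, and clipped pixels can trigger false positives, so treat the result as a hint rather than a verdict.

import numpy as np

def motion_fraction(frames, exposure_times, threshold=0.05):
    """Estimate the fraction of pixels that move between consecutive frames
    after normalizing for exposure. A rough heuristic, not a deghosting tool."""
    norm = [f.astype(np.float32) / t for f, t in zip(frames, exposure_times)]
    moved = np.zeros(frames[0].shape[:2], dtype=bool)
    for a, b in zip(norm[:-1], norm[1:]):
        # Relative difference is less sensitive to residual exposure mismatch.
        diff = np.abs(a - b) / (np.maximum(a, b) + 1e-6)
        if diff.ndim == 3:               # collapse color channels if present
            diff = diff.mean(axis=-1)
        moved |= diff > threshold
    return float(moved.mean())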

Popular HDR Applications

My library of HDR software consists of:

HDRsoft Photomatix Pro 6

Nik's HDR Efex Pro 2

Adobe’s Photoshop CC (2022) and Lightroom CC

Skylum's Aurora HDR 2019

The first two programs were the mainstay of HDR software for several years. Adobe became a major player when it introduced its HDR Merge feature. The fourth application, Aurora 2019, is a newer product that has made quite a splash in the HDR software world. There are also many other capable applications available, such as Pinnacle's HDR Expose 3 and easyHDR.

Photomatix Pro 6

Photomatix was the gold standard in HDR software for several years and was my first primary HDR application. Unfortunately, it grew long in the tooth and began to lag behind its competitors. The last release, Version 6, stuck with its older menagerie of methods and dated interfaces.

While Photomatix Pro 6 is still a competent product with some unique features, I have nevertheless retired it and will not be covering it. The reason is that Photomatix is an assortment of tone mapping methods rather than a unified application with one common interface. Though each method’s interface is simple, the tone mapping controls interact and conflict with each other, making them frustrating to use. Also, the sparser editing controls mean that, compared to the other applications, more extensive post-editing in Photoshop or Lightroom is usually required.

HDR Efex 2

When HDR Efex Version 2 came out, it became my primary HDR application and replaced Photomatix. Eventually, as with Photomatix, it grew long in the tooth and has since been sidelined by better software. The last update by its new owner, DxO, addressed mainly stability issues. I no longer use HDR Efex and, for that reason, will not cover it in this book.

Adobe’s HDR

Both Photoshop and Lightroom offer multiple options: HDR Pro tone mapping, HDR Merge, and 32-bit editing. HDR Pro tone mapping has been around for a while and was finally made usable in Photoshop CS5, with additional improvements in CS6. In spite of those improvements, I have never produced an image with HDR Pro that was better than, or even equal to, what the other applications produce.

On the other hand, HDR Merge and 32-bit editing are a different story. Both can satisfy a reasonable portion of all your HDR needs. Of the two, I use and recommend HDR Merge over 32-bit editing.

HDR Merge is integrated within Camera Raw and Lightroom, making it fast and convenient to use. It uses the familiar and powerful Camera Raw (or Develop) interface, which makes it the easiest and most intuitive of all HDR programs. Renderings are very natural, it is relatively resistant to image artifacts, and file sizes are smaller. Unfortunately, dynamic range is more limited and, at times, highlights or shadows will not render as well as in the other programs. Note that HDR Merge uses a 16-bit (rather than 32-bit) floating-point file, which explains the smaller file size and the more limited dynamic range.

Aurora HDR 2019

Aurora HDR made quite a splash when it was introduced back in 2015. Aurora HDR 2019 is the fourth update and, though it still retains some flaws, is in my opinion the leader of the pack. Aurora is excellent at rendering highlights and shadows while still retaining a natural look. It has an extensive and intuitive Photoshop-like toolset; especially notable is the Layers feature. It is also relatively resistant to halo artifacts and noise.

More is Better

Realize that HDR software has a mercurial nature. While you can ascribe some general attributes to individual HDR applications, results are still unpredictable, and no HDR application is the best 100% of the time. If you intend to become a serious HDR user, I recommend having at least two applications. Unless you’re willing to accept less than optimum results (or spend a lot of time in Photoshop), when it comes to HDR you need more than one option.


Chapter 11: How to Shoot HDR

Capturing a proper set of frames for HDR is not just a matter of setting your camera to auto-bracketing and away you go. Lighting situations vary, and not all scenes are handled the same. In this chapter I’ll cover the steps that, over the years, I’ve found produce the best results.

Basic Camera Setup

Shoot Raw

There is only one excuse not to shoot raw and that is your camera doesn’t support it. Raw files are superior to JPEG for any type of photography, not just HDR. If your camera doesn’t support raw, set your picture style (or whatever term your camera brand uses) to as neutral a JPEG image as possible. Settings like “Landscape” or “Vivid” may create more problems in the HDR software than a neutral image. If your camera allows you to control the amount of JPEG sharpening, set it to none or the lowest possible level.

Auto Exposure Bracketing

Auto Exposure Bracketing (or auto-bracketing for short) allows the shooting of multiple frames at different exposures. Capabilities vary, but most cameras allow either three or five frames of bracketing. Some high-end cameras allow up to nine frames. In any case, check your camera’s manual to learn how to fire all the exposures automatically with a single push of the shutter. This is important when shooting HDR because speed is critical to avoid inter-frame subject motion. Furthermore, you want to limit manhandling the camera to avoid nudging it out of position.
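
For what it is worth, the arithmetic behind auto-bracketing is simple stop math. The sketch below computes the shutter times for a symmetric bracket around a metered base exposure; the 1/60-second base and 2-stop spacing are just example numbers of my own choosing.

def bracket_shutter_times(base_time, frames=3, spacing_stops=2.0):
    """Shutter times for a symmetric bracket centered on a metered base exposure.
    The total captured range spans (frames - 1) * spacing_stops."""
    half = (frames - 1) // 2
    return [base_time * 2 ** (k * spacing_stops) for k in range(-half, half + 1)]

# Example: 1/60 s base, three frames 2 stops apart -> 1/240 s, 1/60 s, 1/15 s
print(bracket_shutter_times(1 / 60, frames=3, spacing_stops=2.0))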

Important!

Set your camera to bracket by shutter time, not by aperture, or each frame will have a different depth of field. That, in turn, may cause problems when the software merges the frames.

HDR Capture Procedure

Number of Exposures

Ideally, you want to capture the dynamic range with only three exposures spaced 1 to 2 stops apart. Never bracket more than 2 stops apart, and there is no practical benefit to bracketing less than 1 stop. The goal is always to take the fewest shots possible. The reason is to limit image motion between frames, a problem I encounter in many of my landscape shots. HDR software has provisions to deal with inter-frame motion (called deghosting), but sometimes the cure is worse than the problem. It’s better to avoid the problem when taking the picture than to rely on a software cure afterwards.

Three’s a charm

Wind is almost ever-present in the great outdoors. Because of that, my objective is to shoot only three frames and, if required and circumstances allow, use a 1-stop or 2-stop grad filter to close the contrast gap rather than add frames. I realize using a grad filter seems odd, but it takes roughly one-third less time to shoot three frames than five. Also, some of the problems with grad filters are mitigated by the nature of the HDR software. Since adopting grad filters with HDR, I have reduced the need for more than three exposures and consequently have suffered fewer problems with image motion artifacts. In addition, the grad filter may allow narrower bracket spacing, which helps with noise and fine-detail rendering.
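
A quick back-of-the-envelope check of that time saving, counting only shutter-open time for a bracket spanning plus and minus 2 stops around a 1-second base exposure (shutter lag, sensor readout, and buffer delays are ignored):

base = 1.0   # seconds, the middle exposure

three_frames = sum(base * 2 ** ev for ev in (-2, 0, 2))          # 5.25 s total
five_frames = sum(base * 2 ** ev for ev in (-2, -1, 0, 1, 2))    # 7.75 s total

print(1 - three_frames / five_frames)   # ~0.32, i.e. roughly one-third less time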

How to Set the Bracket Spacing

Try starting with the fewest auto-bracketing frames your camera supports that achieve a four-stop bracketing range. Typically, that is three frames at two stops or five frames at one stop. Never go above two stops. Now, reduce the bracket spacing until you properly enclose the scene’s dynamic range. If instead you find that two stops isn’t enough, then either expose more frames or use a grad filter as I previously recommended. Now the question is: what defines a properly exposed sequence?

The most important exposures are the two that capture the brightest highlights and the darkest tones, or simply the first and last exposures. The histogram in each of those frames should be pulled in roughly one-third (but not more) from its respective end of the histogram. If your camera supports it, consider setting the bracketing resolution to one-third-stop increments rather than half-stops for finer control in positioning the histogram endpoints.

 

Typical HDR Histograms: Examples of the histograms you want in the frames capturing the darkest tones (left) and the brightest highlights (right).
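
If you want to sanity-check the one-third rule away from the camera, the sketch below finds where the meaningful data ends in a frame, expressed as a fraction of the histogram width. The 0.1% outlier clip and the one-third targets are assumptions of mine to tune, not hard rules from any manufacturer.

import numpy as np

def histogram_endpoints(image, full_scale=255, clip=0.001):
    """Return (shadow_end, highlight_end) as fractions of the histogram width,
    ignoring the extreme 0.1% of outliers. A rough aid, not a hard rule."""
    v = np.asarray(image, dtype=np.float32).ravel() / full_scale
    return float(np.quantile(v, clip)), float(np.quantile(v, 1 - clip))

# Darkest frame: its highlight end should sit roughly 1/3 in from the right (~0.67).
# Brightest frame: its shadow end should sit roughly 1/3 in from the left (~0.33).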

Recommended procedure

I have programmed two custom settings on my Canon R5 for HDR shooting. The first auto-brackets three exposures at 2 stops (a 4-stop range). The second auto-brackets five exposures at 1.5 stops (a 6-stop range). I usually start with three exposures, along with a grad filter depending on the conditions (more on that a bit later).

After I take the three exposures, I check the histograms against my one-third criterion as shown in the previous figure. If the endpoints are too far inside the histogram, I reduce the bracket spacing. If I have a grad filter installed, unless it’s overkill, I’d rather lower the spacing for better noise rendering than remove the filter.

If the endpoints aren’t pulled in enough with 2-stop spacing, I’ll add (or strengthen) a grad filter or shoot more frames. Deciding between the two depends on the scene’s stability. Unless it’s dead calm, I’ll usually favor adding a grad filter or increasing its strength. If the scene is devoid of foliage or dead calm, I’ll increase the number of exposures. In extreme cases, such as the Valley of Fire image back in Chapter-10, I may do both. For that image, I exposed five frames at 1.5-stop bracketing and used a 2-stop grad filter.

If one histogram endpoint is too far in and the other too far out, I use the exposure compensation dial to skew the bracketing and even out the endpoints. My Canon allows quick access to this adjustment and, without having to think about it, I just rotate the compensation dial in the direction I want the histogram to move.

Final step

Since it’s difficult to know if there was any subject motion (unless it’s obvious), I routinely shoot multiple series and later pick the one with the least inter-frame motion. As I’ve mentioned before, don’t rely on the HDR software to bail you out with its deghosting tool.

Shooting Tips

Shortcut

The entire process is basically trial and error, but with practice you’ll quickly get the hang of it. The only annoyance is long exposure times, which make it time consuming to determine the correct bracketing. This is common during dawn or dusk, when low light and small apertures often drive shutter times to a half-minute or more. In that case, determine the bracketing at your widest aperture to reduce the shutter time, and afterwards return the aperture to the desired setting.
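
The conversion back from the wide-open test aperture is straightforward stop arithmetic, sketched below with example numbers of my own choosing (a 2-second middle exposure found at f/4, shot at f/16):

import math

def shutter_at_aperture(test_time, test_fstop, shooting_fstop):
    """Scale a shutter time found at a wide test aperture to the shooting aperture."""
    stops = 2 * math.log2(shooting_fstop / test_fstop)   # each full stop doubles the time
    return test_time * 2 ** stops

print(shutter_at_aperture(2.0, 4, 16))   # 32.0 seconds: 4 stops less light at f/16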

The long thin line

This problem occurs more often with highlights than with darker tones, and it arises when you can’t judge a one-third point in the histogram. An example is shooting towards a bright light source, typically a rising or setting sun. There isn’t any “meat” to the highlight data from which to judge a one-third point; instead, you have a long, thin histogram line. A case in point is the histogram below, showing the underexposed frame from a three-frame sunrise shot.

Underexposed Frame: Typical histogram when pointing towards (in this case) a rising sun.

Looking at the histogram, it appeared that the bulk of the highlights was skewed excessively to the left. But in this case, the highlights of interest were within the bright sky near the sun, and they showed up only as a thin line. In this situation, I relied on seat-of-the-pants judgement. In spite of its inaccuracy for judging exposure, I used the camera’s LCD monitor to decide whether the highlights “looked” about right. The figure below is the LCD image I estimated to be about right, and after it is the completed image.

LCD Image of Underexposed Frame: This is the exposure I visually judged suitable for the highlights.

Results After HDR Processing

Using HDR with a Grad Filter

First, using grad filters, whether for HDR or on their own, has limitations. They aren’t usable when there isn’t a clear delineation between the contrasting areas; for example, an uneven horizon or trees blocking the sky. Another situation is a foreground lake reflection, where a two-stop or stronger grad filter may cause the reflected image to appear brighter than the actual scene. (Note that this isn’t true for a one-stop filter, which actually provides a natural balance in illumination.) When these situations don’t apply, I usually begin with my customary three exposures at two-stop increments and a grad filter. Which grad filter value I start with is described below.

One-stop grad filter

This is my most-used filter. It adds an extra stop to my exposure range with near-zero risk of any grad filter side effects. I use it whenever I shoot a classic sunrise or sunset scene that isn’t towards the direction of the sun. If the grad filter compresses the dynamic range a little beyond my one-third histogram rule, I reduce the bracket spacing instead of removing the filter (unless it’s definite overkill). Remember, narrower bracket spacing renders a bit better in the HDR software.

Two-stop grad filter

When the sun is just below the horizon at dawn or dusk and the shot is towards the sun’s position, the contrast is much higher, and now it’s a judgement call. If there is some light attenuation, such as haze or overhead clouds, I’ll use the one-stop grad filter. If the sky is bright or the foreground is deeply shadowed, I’ll use the two-stop grad filter. Choosing between a hard- or soft-edged filter is not particularly critical since the HDR software tends to balance out any uneven exposures within the transition. I use a soft edge for almost all my shots.

Three-stop grad filter?

It is hard to believe you would need this strong a filter, but it may be necessary when shooting towards a sun that’s above the horizon. However, now you’re more vulnerable to grad filter side effects. As an example, I once shot a late-afternoon vineyard scene towards a sun that was streaming light from behind overcast clouds. The clouded sky was so bright that it took both two-stop bracketing and a three-stop grad filter to capture the dynamic range. Unfortunately, the foreground vines extended from the bottom to the top of the frame, and the filter left the top vines underexposed.

This is an example where you would normally ditch the grad filter and expose more frames. But there was a breeze blowing, and more exposures would exacerbate the inter-frame motion problem, so I stuck with the three-stop filter. The happy ending to this dilemma was that Aurora and Adobe’s HDR had tools to resolve the problem, which I’ll demonstrate using this same vineyard scene later in Chapter-21. Nevertheless, it is best not to rely on software miracles, so avoid a three-stop filter if possible and instead increase the number of exposures with a weaker (or no) grad filter.

In-Camera HDR

In-camera HDR is becoming a standard feature in most new cameras, including smartphones. Compared to earlier in-camera HDR attempts, the newer cameras have made major improvements. So, is it usable now for critical photography? Based on my research of some newer high-end cameras and my hands-on experience with the older Canon 5Ds, my opinion is that it’s still not ready for prime time. That said, I believe it works well enough to satisfy high-volume shooting situations where you don’t want to spend days in front of a computer processing hundreds of photos.

On the other hand, when I invest time and money solely to photograph a stunning landscape, my only interest is producing the best possible image. HDR is sometimes a hit-or-miss affair, even with the best HDR software and perfectly bracketed shots. To gamble a once-in-a-lifetime shot on in-camera HDR software with limited control and only an 8-bit JPEG output would be, to me, foolhardy. I presently don’t see in-camera HDR replacing my manual capture process and the greater control I have with dedicated HDR software.