Post-Processing Preliminaries
This page is written with the assumption that the reader is familiar with the rudiments of Photoshop and Camera Raw (or Lightroom). If not, there are many comprehensive books available on Photoshop and Camera Raw and I would recommend Adobe Photoshop CC for Photographers (Focal Press) and/or The Adobe Photoshop Lightroom Classic CC Book (Adobe Press), both by Martin Evening.
Monitor
An unavoidable fact of life: you must calibrate your monitor! There are many calibrators available and some are relatively inexpensive, so cost shouldn’t be a deterrent. I use the SpyderX Pro from Datacolor and there are other excellent choices such as the ColorChecker Display by Calibrite. Of course, calibration won’t mean much unless you have a decent monitor. You should research online reviews to steer you towards a model specifically suited for photo editing. Beware that many low-cost monitors are intended for office or home use and not for serious photo editing.

Datacolor SpyderX
Calibration Tips
Most first-time users will just follow the suggested settings in the calibration software. In many cases, you are limited in what parameters you can control in the less-expensive models. Regardless, the first calibration is likely to be a major improvement, but may still fall short of an accurate display. Make several prints and compare them to how Photoshop displays them by enabling Proof Colors (located under the View menu) and setting it to your paper’s profile. View the prints under a neutral light, such as an OttLite desk lamp. Examine shadow detail, overall luminance, and color accuracy. Remember, it’s not likely to ever be perfect. If you have a hard time telling the differences, then you’re probably good to go. If it’s easy to detect hue shifts or differences in shadow density, then consider the following.
If the print is a little darker or there’s shadow detail on the monitor that’s not in the print, then you’ll have to lower your monitor’s brightness. Assuming you have reasonably subdued room lighting, you may need to set a monitor brightness between 90 and 120 cd/m2 (if the software allows it and your monitor can go that low). Don’t rely on the calibrator’s ambient light measurement if it has that feature; readings can vary widely depending on where you place or point the sensor. Instead, a little trial and error will likely find the sweet spot.
If there is a hue shift, specifically with the warmer colors, then you may need to change the monitor’s color temperature. Most calibration routines suggest using 6500K or ‘Native.' That’s fine unless you need to lower your monitor’s brightness nearer or below 120 cd/m2. Your eye’s color perception is sensitive to brightness and you need to adjust the color temperature accordingly. At lower brightness levels you may need a warmer color temperature at around 5800K instead of 6500K. Again, some trial-and-error may be necessary.
Printer Calibration
You can buy a more expensive calibration unit that, besides the monitor, also calibrates a printer for each paper type used. Nevertheless, it’s been my practice to use the profiles supplied by the printer (or paper) manufacturer instead of generating custom profiles. Based on my experience with Epson profiles, they have proven accurate enough that investing in a printer profiler would be more a luxury than a necessity. Plus, profiling each paper type can be tedious if you use several different types.
The Real World of Calibration
The $64 question is how closely the image on a good-quality calibrated monitor matches the output of a properly profiled photo printer. From my experience, the answer is close enough but not perfect. This is due in large part to the differences in color gamut, dynamic range, and profile accuracy between the monitor and printer. Most sRGB monitors can’t display many printable colors. Soft proofing in Photoshop attempts to display the image as it would look when printed, but it can’t display printable colors that fall outside the monitor’s gamut.
Besides the technical variables, there are more mundane reasons for a print not matching the monitor. Remember that a photo print is sensitive to the viewing light. Move a print back and forth between a fluorescent light and an incandescent bulb and you will see a difference in color tint. And besides matching colors, you need to match luminance as well. Many monitors can’t be calibrated down to a low enough brightness, and the resulting prints come out too dark. You need a monitor that allows brightness levels down to 90 cd/m2, although you can probably survive at 120 cd/m2.
Color space defines all the colors (the gamut range) that can be represented in the image file. The three most common are sRGB, Adobe RGB (1998), and ProPhoto RGB, in increasing order of color range. There are basically three ways to assign a color space to an image file. First, you can select it in your camera for JPEG-only files, usually a choice between sRGB and Adobe RGB. Second, you can convert between color spaces in Photoshop. Third, and most common, is to have it applied when you exit the raw converter. To be clear, while in Camera Raw or Develop, you will be working with an internal color space similar to ProPhoto RGB. When you transition to Photoshop or save the file, the color profile of choice is applied.
Choosing a color space
While opinions vary, here is my simple take on the subject. sRGB is often recommended for website use, yet since most people have uncalibrated monitors that are set to who-knows-what, I can’t get too concerned about color fidelity for web viewing. Still, it’s the best color space for JPEG files since the limited color range of sRGB is better suited for 8-bit data.
At the other extreme, ProPhoto RGB contains some colors that neither my monitor nor my printer can reproduce. You can argue that doesn’t matter because the wider gamut guarantees you will utilize all the colors your printer is capable of printing. In addition, most high-end cameras have color gamuts approximating ProPhoto RGB. Still, given that most monitors are sRGB, you won’t see those extended colors on the screen. This adds the potential for greater disparity in color and luminance matching between the display and print.
Adobe RGB to me is the sweet spot and the only color space I have used in the past. I think the average amateur who mostly does his/her own printing will be fine with Adobe RGB. Even when some extended colors are clipped, I can’t imagine that anyone but a master printer could spot the difference.
Apple’s new Wide Color
The latest 27-inch 5K iMacs, iPad Pros, and iPhone 7s (and up) have extended-gamut displays based on Apple’s Wide Color standard. Wide Color is Apple’s adaptation of the DCI-P3 (or P3 for short) color space, which was designed as a standard for digital movie projection in the film industry. This is a wider color space than sRGB, with a span approaching Adobe RGB. However, Adobe RGB and Wide Color don’t fully overlap, and each space covers certain colors better than the other.
To appreciate the difference, below is the color gamut that a particular brand of inkjet paper is capable of printing, overlaid on each of the gamut ranges for P3, Adobe RGB, and sRGB. Notice all the color clipping in sRGB, while P3 and Adobe RGB encompass most of the paper’s gamut. P3 clips a bit more in the blue-cyan region, while Adobe RGB clips more of the top-end warmer colors.

Gamut Comparison
P3’s adoption is expected to expand, but if you continue to work with an sRGB monitor, I suggest you stick with Adobe RGB. I recently updated to a new 5K iMac and have switched from Adobe RGB to P3 as my default color space. Note that P3 has been supported in Photoshop CC since 2019.
If you are shooting raw (as you should be!), you need to decide between retaining the camera’s proprietary raw format or converting to DNG, or doing both. DNG (for digital negative) is Adobe’s attempt to establish a universal raw file format. A few camera manufacturers support it, but most go their own way. On the other hand, most image-editing software supports DNG. The question is whether your proprietary raw files will be compatible with whatever image-editing software exists twenty years from now. Think of the myriad raw formats that will exist by then; will software companies continue to support them all? Maybe they will, but do you want to take the chance?
I convert all my Canon raw files to DNG. The converter can be downloaded free from Adobe and is continually updated to support new camera models. Alternately, Adobe Bridge or Lightroom performs the conversion when it imports your images. As far as I can determine, there is no change to image quality, except some camera metadata might get dropped (though I have yet to miss anything I cared about). You can elect to embed the camera’s proprietary raw file when converting to DNG, but that increases the file size accordingly. Another DNG advantage is that any edits to the DNG file are stored within the file itself and not in a separate sidecar (XMP) file.
I’m sure there are many photographers who question whether Photoshop is becoming irrelevant, at least for the average amateur. Raw converters, and even HDR software, are incorporating more editing features that appear to cover most of the bases. In fact, many photographers get by on Lightroom alone, judging from the countless published photographs I’ve seen that were processed only in Lightroom.
However, the entire question is really immaterial. First, from a practical standpoint, you can have both the latest versions of Lightroom and Photoshop for only $10 per month. That should be low enough for most not to agonize over the cost of owning Photoshop. The real issue is Photoshop’s imposing learning curve that often dissuades many from using it. If you have supernatural photographic skills where every shot is near perfect and only needs a minor touch-up, then you don’t need Photoshop. For the rest of us mortals, there will always be a certain percentage of shots that require editing that is beyond Lightroom’s capability. Unless you are comfortable with losing a percentage of your shots, I feel Photoshop is still a necessity for every photographer.
The raw file contains the most image data, which allows the greatest editing latitude. Moreover, the raw converter performs certain editing functions either better or more easily than Photoshop. For those reasons, I perform as much editing as possible in Camera Raw before transitioning to Photoshop.
Note that throughout this book, I sometimes mention only Camera Raw for brevity. All comments apply equally to Lightroom’s Develop module, which is functionally identical even though it does have some differences in appearance and certain control operations.
Color Space
Camera Raw
At the bottom-center of the Camera Raw screen is the workflow status. Click on it to bring up the Workflow Options window and set Space to ‘Adobe RGB (1998)’ or ‘ProPhoto RGB,’ and Depth to ‘16 Bits/Channel.’ For Mac users with P3 displays, select ‘Display P3’ for Space. If for some reason you later need an sRGB file, convert it in Photoshop. Adobe RGB and ProPhoto RGB are wider-gamut color spaces that convert down to sRGB more effectively than converting in the other direction.

Workflow Status
Lightroom
To make the same settings in Lightroom, go to Preferences and, under the ‘External Editing’ tab, set Color Space to ‘Adobe RGB (1998),’ ‘ProPhoto RGB,’ or ‘Display P3,’ and Bit Depth to ‘16 bits/component.’
Smart Objects
When transitioning to Photoshop from Camera Raw, consider opening the file as a Smart Object by holding down the Shift key when clicking ‘Open Image.’ Alternately, you can set Smart Objects as the default by clicking on the workflow status and enabling ‘Open in Photoshop as Smart Objects.’ The advantage of Smart Objects is that, in Photoshop, you can double-click the Smart Object layer and conveniently return to Camera Raw to re-edit the raw file. Then click Done to return to Photoshop with the new changes incorporated. The disadvantages are that the file size balloons and certain Photoshop tools don’t work on Smart Objects.
Profiles apply various scene interpretations to the image, including matching the JPEG image on the camera’s LCD. The Profile menu includes a wide assortment of profiles that can be previewed by clicking on the grid pattern next to the profile menu (or select ‘Browse’ from the Profile drop-down menu).

Profile Menu in Basic Panel: Click on the right grid pattern to view samples of all the profiles. Adobe Color is the default.
How To Pick a Profile
Generally, you will select one as your everyday working profile. I had always used Adobe Standard and then switched to the newer Adobe Color that is now the default profile. You can even create your own custom profile using the standalone DNG Profile Editor utility.
Camera brand profiles
Besides Adobe’s profiles, Camera Raw supports many camera brand profiles. For my Canon camera, the options are: Faithful, Neutral, Standard, Landscape, Portrait, and Monochrome. I avoid enhanced profiles such as Landscape or Vivid; they seem like logical choices, but in my opinion they over-enhance the image. I prefer the profile to render a relatively neutral, baseline image that best matches the original colors and tones. That gives me the most control over refining and adjusting an image to achieve the look I want. However, that doesn’t mean I prefer the other extreme of the Neutral or Faithful profiles. My preference is the Standard profile, a good general-purpose compromise that is typically the universal default and a reasonable representation of the camera’s LCD image.
Which Standard profile to use
The two main choices are Adobe Color (which superseded their Adobe Standard) or your camera brand’s Standard profile. Choosing between the two is a subjective choice. You can conveniently switch between the profiles in Camera Raw to decide. If the choice turns into a conundrum, just choose Adobe Color since it’s the new default and renders an excellent starting image. I use Adobe Color because it renders a slightly brighter image than the Canon Standard, although at the slight risk of minor highlight clipping.
The White Balance Conundrum
The first operation I perform is to set the white balance. The conventional thinking is to set the white balance on a neutral tone to eliminate any color cast and restore the true colors of the scene. In landscape photography, however, the procedure is not always that simple. The light’s color in a landscape scene is one of the artistic variables a photographer often exploits and doesn’t necessarily want neutralized. For each landscape image, you must decide whether the image will be neutral-tone accurate or true to the natural lighting. Add to that any artistic interpretation that chooses to exaggerate, rather than moderate, a color cast.
That said, the majority of my white balance adjustments are to first neutralize a gray tone and then decide where to go from there. Because the human eye compensates for a color cast to a large degree, setting the white balance to a neutral tone should get close to what the eye originally perceived. Sometimes that works, but other times after neutralizing (for example) a blue cast, the results may appear too warm. The problem is the eye’s natural perception really lies somewhere between the actual color cast and a neutral balance, so adjusting strictly to numbers doesn’t always work.
With much of landscape photography shot under a mix of warm and cold lighting during morning or evening, the white balance may rely more on artistic taste than numbers. In the end, the complex lighting associated with a magic-light scene will likely require seat-of-the-pants manual adjustment, and the white balance eyedropper will be useful only as a starting point. Furthermore, if a single white-balance adjustment doesn’t work out, then it will be necessary to perform individual color adjustments for each color cast by using Camera Raw’s Linear Gradient tool or layer masks in Photoshop.

No Neutral Tone: This scene is lit with the afterglow of dusk and there is no true neutral tone. Instead, the correct color balance lies between personal taste and stretching credibility.
Using the White Balance Tool
I default Camera Raw to start with the ‘As Shot’ white balance, which is the camera’s Auto WB setting and usually a good starting point. I never use the other settings, like Daylight or Cloudy, since they are usually too exaggerated or inaccurate. I select the image shot with my DGK white balance cards and use the White Balance Tool eyedropper to click on the gray card. Lacking a gray card, I search for whatever is closest to a neutral tone. Adobe recommends using a brighter neutral tone to avoid noise contamination, but I rarely find it a problem using a darker tone.
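The gray-card correction itself is simple arithmetic: scale the channels so the sampled card reads neutral. The sketch below is purely illustrative (`gray_card_wb` is a hypothetical helper, and Camera Raw actually performs this on the raw data in a linear space, not on display-ready values):

```python
def gray_card_wb(pixel, gray_sample):
    """Neutralize a color cast measured from a gray-card sample.

    Scales the R and B channels so the sampled gray patch ends up with
    equal R, G, B, using green as the reference channel. Illustrative
    only -- not Adobe's actual implementation.
    """
    sr, sg, sb = gray_sample
    r, g, b = pixel
    return (r * sg / sr, g, b * sg / sb)

# The sampled card itself becomes neutral after correction:
corrected = gray_card_wb((160, 180, 200), (160, 180, 200))
```

The same two scale factors are then applied to every pixel in the image, which is why one click on a truly neutral patch corrects the whole frame.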
Daylight and shade
For daylight scenes, my camera’s auto white balance is fairly accurate and the adjustments are minor, if at all. For shade or overcast, the corrections can be more severe and sometimes, as mentioned before, a neutralized blue cast appears excessively warm. This happens sometimes when I use the DGK gray card as the neutral reference in deep shade (see example below). Forcing a pure gray to neutral under such conditions may conflict with our natural perception. I might instead sample other lighter neutral tones, but more often I just manually adjust the Basic Panel’s Temperature and Tint sliders. Remember, it’s more important to make the image visually pleasing rather than achieving a neutral color cast.

Original Shot: Shot in deep shade using the camera’s auto white balance, the blue cast is pronounced.

After WB Correction: After clicking the WB eyedropper on the gray card, the background is now too warm.
Mixed lighting
The situation becomes more complicated under mixed-color lighting where there is, for example, a warm sunrise background and a cold, shaded foreground. I never neutralize a gray tone in the warm areas, since that destroys the low sun’s natural warm color. On the other hand, I normally dislike the blue cast in shadows and prefer it more neutral. The problem is that warming the foreground can oversaturate an already warm background and destroy the natural hue of a blue sky. The solution is the Linear Gradient tool (to be discussed later), which segregates the foreground for targeted Temperature and Tint control.
Texture and Clarity
Relatively new to the enhancement section of the Basic panel shown below is the Texture slider. It enhances only mid-frequency detail (which is what the eye most perceives as “texture”) and avoids increasing noise. It also works in the negative direction to soften detail, most commonly for portraits.
Basic Panel Enhancement Section
Clarity is a more powerful image booster. As with Texture, you can use negative values to enhance portraits or create a more “dreamy” landscape scene.
I view Texture as a “vernier knob” to Clarity. I suggest applying an initial low dose of Clarity and then adjust Texture for the best balance. Zoom in a bit to better judge the results. Remember, when you apply either Texture, Clarity, or both, it’s easy to get carried away with the visual razzle-dazzle, and that may result in a print resembling a bad facelift.
Vibrance
Vibrance is the primary tool to deal with color saturation (either over or under). Global saturation adjustments should be avoided unless absolutely necessary. Vibrance increases saturation in the less saturated colors and tapers off towards the more saturated colors. This makes the image naturally colorful without a phony, over-saturated appearance. Vibrance is also an effective tool to improve an anemic blue sky (but conversely, it will increase the blue cast in shadows).
A supplement to Vibrance is the HSL saturation controls in the Color Mixer panel. A little-known fact is that each HSL color saturation control works in the same fashion as Vibrance, that is, targeting the less-saturated colors (this is not the case with Photoshop’s Hue/Saturation tool).
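For readers curious about the idea behind Vibrance, here is a rough sketch of a saturation curve that boosts less-saturated colors and tapers the effect toward full saturation. The formula is purely illustrative, not Adobe’s actual algorithm:

```python
def vibrance(s, amount):
    """Illustrative vibrance-style curve (not Adobe's formula).

    s: saturation in [0, 1]; amount: strength in [0, 1].
    Less-saturated colors get the biggest boost; the effect tapers to
    zero at full saturation and leaves true grays (s = 0) untouched.
    """
    return min(1.0, s * (1.0 + amount * (1.0 - s) ** 2))

def plain_saturation(s, amount):
    # Global Saturation, by contrast, applies one multiplier everywhere,
    # pushing already-vivid colors toward clipping.
    return min(1.0, s * (1.0 + amount))
```

This tapering is why Vibrance makes an image colorful without the phony, over-saturated look of a global Saturation boost.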
Dehaze
In one direction, Dehaze works to reduce the effects of distant atmospheric haze while the other direction increases it. It does this by improving midtone contrast, saturation, and luminance. The effect is not global, but applied proportionally according to the level of local contrast. This is a useful tool particularly for daytime scenes where distant haze is most noticeable.
Don’t have haze?
Despite its name, Dehaze really shines as a good all-around enhancement tool. When enhancing contrast, the tone controls have a global effect which may work to the detriment of a portion of the image. The workaround in those cases is to use the Adjustment Brush and target specific areas. However, an alternative is to first try Dehaze for a quick image improvement. This may help achieve a nicer contrast balance and prevent highlight blowouts or blocked shadows. Afterwards, you can touch up the contrast with the standard tools.
As an example, the contrast in the image below started out rather flat and dull. Also, the grad filter’s transition caused the rock formation in the upper-right edge to be underexposed. Attempting to improve the image with the standard contrast controls produced a result that, while better than the original, was still far from perfect (see image below). A quicker cure was to apply Dehaze and then follow up with minor contrast adjustments. Now the highlights in the second image are darker and more saturated, and the underexposed rocks are clearer. Midtone contrast is also improved in some areas.

Using Traditional Tone Controls: Contrast was improved, but highlights are drab and some shadows are blocked.

With Dehaze: Improved highlights and shadows and better overall contrast with less work.
Avoid Clipping
Excessive clipping can occur in one of two ways. First, the exposure was wrong and those tone values are gone for good. A “technical” exception is Camera Raw’s highlight recovery algorithm, which takes unclipped channel data and extrapolates the missing values for the clipped channel. The other cause of clipping is self-induced. When we fiddle with the Whites/Blacks, Clarity, or increase saturation (to name a few), we can induce all sorts of clipping. If you use (for example) Clarity and Vibrance as a regular practice, then after setting the White Point, add a little of each before adjusting the exposure controls.
A textbook histogram fills nearly the entire tonal range with the three color channels converging at each end to a common value to establish pure black and white points. In practice, what’s most important is to have all three color channels retained within the histogram (that is, unclipped) and distributed appropriately for that image’s tonal range and intended appearance. That assumes, of course, all the color channels were exposed properly. Establishing a pure black and/or white point is optional, and frankly I hardly bother with it.
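Conceptually, checking a histogram for clipping just means looking for channel values pinned at either end of the tonal range. A minimal sketch (hypothetical helper, 8-bit values assumed):

```python
def clipped_channels(pixels, lo=0, hi=255):
    """Report which color channels contain values pinned at either end
    of the 8-bit range -- the same condition the histogram's clipping
    warnings flag. `pixels` is a list of (R, G, B) tuples."""
    clipped = set()
    for r, g, b in pixels:
        for name, v in zip("RGB", (r, g, b)):
            if v <= lo or v >= hi:
                clipped.add(name)
    return clipped

# Only the blue channel touches 0 here, so only it is flagged:
flagged = clipped_channels([(10, 20, 0), (120, 130, 140)])
```

This per-channel view is the point: an image can look unclipped in its composite histogram while a single channel is already pinned.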
Setting the Black and White Point
When I do set a black or white point, it’s easier in Photoshop using the black and white eyedroppers in the Curves panel. In Camera Raw, you could use the Whites or Blacks sliders to “crush” either end (until the white triangle appears in the histogram window), but that can cause uneven clipping of the color channels that may diminish detail, especially in the highlights. For example, the image below neatly occupies the entire dynamic range without clipping. Moving the Blacks slider to the left to establish a black point (see the white triangle in the second histogram) resulted in clipping the blue channel.

Average Scene: A perfectly distributed histogram with no detail loss due to clipping.

Black Point Folly: Attempting to “crush” a black point causes blue-channel clipping.
Example: setting a black point
When you need to set a black point in Camera Raw, the following is the method I recommend; it applies equally to setting a white point. The image below is a classic case where you’d want a pure black point in the background.

A Perfect Black Point: Following the procedure below created a black point with the least loss of shadow detail in the above image.
First, I used the Color Sampler eyedropper to monitor the intended black area. What I don’t recommend is just moving the Blacks slider or using the Curve panel (in Point mode) to crush all the blacks at once. Instead, move each Curve color channel (in Point mode) separately until the Color Sampler reads zero for that color. That results in a slight but still noticeable improvement in detail in the darker shadows. Below is an example adjustment for the blue channel. The bottom-left base point (circled in red) is moved to the right until the blue channel in the Color Sampler just reads zero (don’t go beyond). This is repeated for the other two channels until you reach a collective reading of RGB 0,0,0.

Blue Channel Example: Move the base right until the Blue channel reads zero.
What white/black point values to use
Convention has always been to keep the brightest non-specular highlights below a value of 255. The reason was that printer dot patterns had trouble rendering very light detail. Inkjet printers now use light inks to improve highlight rendering, making this less of an issue. Still, I bend to tradition in a somewhat compromising fashion and keep the highlights slightly below 255, usually at around 250. When I adjust the Whites slider, I don’t obsess over achieving an exact value of 250 (by using the Color Sampler eyedropper on a reference point) and instead eyeball the histogram to stop shy of 255. That, of course, applies only if the image has highlights that extend that far.
The black point is more straightforward and should always be set to zero and not slightly higher as many texts have recommended. The quick reason is Photoshop takes care of establishing the correct black level when printing.
Sharpening
I apply a standard amount of sharpening for reasons I will explain in Chapter-15. For now, I’ll point out that I use a pre-sharpening process developed by Photoshop guru Bruce Fraser in his book Real World Image Sharpening with Adobe Photoshop. As part of that process, in Camera Raw I apply a standard pre-sharpening that suits most of the images I take. The values I use are: Amount-40, Radius-0.8, Detail-50, and Masking-10.
Regardless what sharpening procedure you follow, all final output sharpening should be done in Photoshop (unless you’re using only Lightroom, which I’ll cover later). The reason is the sharpening requirement varies depending on output size, resolution, media, and several other factors.
Noise Reduction
Except for prepping images for HDR, I seldom need Noise Reduction because I shoot mostly at low ISO levels. Even if I shoot up to ISO 800, the increased noise is relatively low and I need only modest noise reduction; and the image never suffers any visible sharpness loss. My settings for both Luminance and Color rarely exceed 30. Because of this low-level noise application, I seldom need to fuss with the Detail and Contrast sliders. All of this, of course, is based on my experience with a full-frame camera. High-megapixel cameras with smaller sensors may behave differently.
What is the optimum noise reduction setting?
Unfortunately, there is no green-light indicator for when you’ve achieved optimum noise reduction. It’s basically a seat-of-the-pants decision between how much detail you’re willing to give up for as little noise as possible. If you yearn for a simple one-click solution, then consider Nik’s Dfine plug-in, which I cover in Chapter-16.
To best evaluate the noise reduction process, zoom the image to between 200% and 400%. Monitor a low-frequency area (such as a blue sky) to best judge the magnitude of the noise and the effectiveness of the noise reduction. At the same time, monitor low-contrast detail to judge how far to go before the detail is washed out. Don’t judge sharpness loss solely on high-contrast detail; the algorithm backs off on the sharper detail but will steamroll over subtle detail. Finally, don’t over-think the adjustments. A lot of tweaking is likely visible only in the monitor’s zoomed-in image and not in the actual print.
Camera Raw has expanded its masking options. Below are the options that can be used as either a separate mask or combined to form a more complex mask:
Select Subject
Select Sky
Brush
Linear Gradient (aka Gradient Tool)
Radial Gradient
Color Range
Luminance
My most commonly used masks are the Linear Gradient, Radial Gradient, and Brush tool. The Radial Gradient is my favorite tool for vignetting and the Brush tool is great for either touch-ups or to edit an existing mask. However, my most used mask is the Linear Gradient that mimics a graduated ND filter, and then some. You can overlay more than one filter to independently edit an upper and lower section. Besides exposure, you can adjust other Camera Raw controls, including color balance. You can also do this in Photoshop, except it takes more work; and remember, the more you edit in Camera Raw, the better.
I often use the Linear Gradient to fine-tune the exposure balance in shots taken originally with a grad filter. When I use a grad filter, sometimes the exposure balance between the foreground and background is too even and the image loses some of its natural contrast. The Linear Gradient is perfect for restoring the natural balance.
Linear Gradient Example
Below is a sunrise and moonset shot of Zabriskie Point in Death Valley. In this shot, the camera’s grad filter overly flattened the contrast. To correct that in Camera Raw, I applied two Linear Gradient masks: one for the illuminated upper mountaintop and sky, and a second for everything below. I was then able to edit each section independently to restore contrast, improve color balance, and add vibrancy. As a final touch-up, I used the Brush tool to subtract the moon from the upper gradient and then added a new Brush mask to tweak the moon separately.

Death Valley: Flat contrast with a cool color-cast foreground.

Death Valley Enhanced: Using two Linear Gradient filters, color balance, contrast, and vibrancy are restored.
Mask Editing
As mentioned in the above example, I used the Brush tool to extract the moon from the upper Linear Gradient. By selecting the ‘Subtract’ button, you’re presented with all the masking options (see screenshot below). Conversely, you can select the ‘Add’ button for additional masking. When editing an existing mask, the Brush tool is typically the most common choice.

Filter Brush Tool: Access the Brush tool from the dropdown menu when either ‘Add’ or ‘Subtract’ is selected.
Another mask-editing example is the classic problem of a foreground structure (such as a tree) protruding through the Linear Gradient into the masked area. You don’t want the top of the tree darker than the bottom, so you have to extract it. Using the Brush tool in ‘Subtract’ mode, you can easily paint out the tree. I suggest you enable ‘Auto Mask’ to help confine the selection within the details of the tree.
Other Masking Tools
Newly added mask options are Select Subject and Select Sky. Select Subject works well only if the subject is well delineated. Select Sky also works well, but it may not pick up the sky’s reflection in, for example, a lake scene. Then there are Color Range and Luminance Range, both potentially useful for trickier masking jobs.
Color Range
The Color Range masking option selects single or multiple colors that you define by clicking on them. You can control the sensitivity with the Refine slider. This is similar to Photoshop’s Color Range tool under the Select menu. Color Range can be an alternative to the new Select Sky option, especially when Select Sky fails to pick up the sky in a reflection.

Color Range
Luminance Range
With Luminance Range you can easily select scattered highlights or deep shadows for targeted editing. Enabling ‘Show Luminance Map’ makes the job easier. To select a luminance range, click to move the rectangle (with the small circle) to the required position. Then adjust the width of the rectangle to widen or narrow the selection, and move the two outer lines to adjust the feathering.

Luminance Range
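The rectangle-and-feather control maps naturally onto a trapezoid function: full weight inside the selected range, a linear ramp to zero across the feather zone. Here is a sketch with luminance normalized to 0–1 (illustrative only, not Adobe’s implementation):

```python
def luminance_mask(lum, lo, hi, feather):
    """Mask weight for a luminance value: 1.0 inside [lo, hi],
    ramping linearly to 0.0 over `feather` units on either side."""
    if lum < lo - feather or lum > hi + feather:
        return 0.0          # well outside the range: fully excluded
    if lum < lo:
        return (lum - (lo - feather)) / feather  # rising ramp
    if lum > hi:
        return ((hi + feather) - lum) / feather  # falling ramp
    return 1.0              # inside the rectangle: fully selected

# A midtone inside the 0.4-0.6 rectangle is fully selected:
weight = luminance_mask(0.5, 0.4, 0.6, 0.1)
```

The feathered ramps are what keep the edited region from showing a hard seam against the rest of the image.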
A Hidden Feature
A little-known feature of Camera Raw and Develop is automatic defective-pixel removal. All digital sensors have defective pixels and continue to develop more over time. There is even a 2011 IEEE paper that attempts to predict the defect rate of sensor pixels. The types of defective pixels are often defined as stuck, dead, or hot, but to keep things simple, I’ll refer to them all as defective pixels.
Your camera contains a pixel map, created during manufacturing, that identifies all the sensor’s defective pixels and substitutes surrounding pixels for them. But as your camera ages, you’ll begin to notice abnormal “dots” in your images. In my experience, the most common and observable manifestations are red and white pixels that become more prevalent during longer exposures at higher ISO settings.
Camera Raw and Lightroom identify these pixels and eliminate most of them in the raw file. If you have never noticed a bad pixel and only shoot raw, this is probably why. Otherwise, they’ll be noticeable in other file formats or in other raw processors. They are only a minor inconvenience, since you can manually eliminate them the same as any dust spot.
Fixing Defective Pixels
Widely found on the Internet is a procedure to remap defective pixels in your camera: remove the lens, install the body cap, set your camera to manual sensor cleaning, enable the cleaning mode, wait 30 or more seconds and, voilà, the bad pixels are gone. Well, maybe. I’ve found this procedure has at times appeared to work, but more often it didn’t. However, Canon’s technical support did lend some credence to this method by confirming it may reset some stuck pixels.
As a last resort, and only if your sensor has seriously degraded, you should be able to get it remapped at a qualified service center. Canon performs the remapping as part of their Extended Service maintenance when requested.