As I demonstrated in my last post, we photographers are always processing our images to narrow the gap between what our sensors record and how we envision the subjects of our photographs. This may or may not be what we actually saw—if we remember it!
There are many third-party applications that help this along (third party because the photographer is the first party and Photoshop is the second, with Lightroom in between for the first edit). One of the most powerful to enter the market recently is Aurora, which enhances the color range and detail of images. It offers itself as a High Dynamic Range (HDR) processor: it does in one step what previously took three or more exposures sandwiched together in Photoshop, obviously a time-consuming process. It adds some pizzazz to most images.
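For readers curious what an HDR merge actually does under the hood, here is a minimal conceptual sketch, not Aurora's actual algorithm (which is proprietary): several bracketed exposures of the same scene are blended pixel by pixel, weighting each exposure by how well-exposed that pixel is, so mid-tones count most and blown highlights or crushed shadows count least. The function names and the Gaussian weighting choice are illustrative assumptions.

```python
import math

def well_exposedness(v, mid=0.5, sigma=0.2):
    """Weight a pixel value in [0, 1] by its closeness to mid-gray.

    Hypothetical Gaussian weighting: values near 0.5 get weight ~1,
    values near 0 (crushed) or 1 (blown out) get weight near 0.
    """
    return math.exp(-((v - mid) ** 2) / (2 * sigma ** 2))

def fuse_exposures(exposures):
    """Blend aligned exposures (each a flat list of values in [0, 1])."""
    fused = []
    for pixels in zip(*exposures):
        weights = [well_exposedness(v) for v in pixels]
        total = sum(weights) or 1.0  # avoid division by zero
        fused.append(sum(w * v for w, v in zip(weights, pixels)) / total)
    return fused

# Three bracketed "exposures" of a four-pixel scene: under, normal, over.
under  = [0.05, 0.10, 0.20, 0.30]
normal = [0.20, 0.40, 0.60, 0.80]
over   = [0.50, 0.80, 0.95, 1.00]
print(fuse_exposures([under, normal, over]))
```

Each fused pixel lands between the darkest and brightest input, pulled toward whichever exposure rendered it best — which is why a single-step HDR tool can recover both shadow and highlight detail at once.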
However, one must also be careful not to overuse it, since it can also make a photograph look unnatural. Here is an example where this type of processing was absolutely necessary, but where the very processing that saved this image also made it look artificial in one particular area. So it was necessary to walk the image back to the original in that area by combining the processed image with the original image in separate layers, then painting in the layer of the original image to make a final image, more powerful than the original, but equally believable. Here are the stages:
- The original image, taken from a helicopter above the island of Molokai in Hawaii. Conditions are less than ideal: (a) one is constantly moving, (b) shooting through a clear but curved plastic bubble window, and (c) contending with a rotor that rotates too fast to see, but which the necessarily fast shutter stops, so that it blocks parts of the scene. Here is the original photo, as shot. Its problems are obvious: there’s a reflection in the upper right and a smaller one just above and to the right of the center, and there’s an overall dullness, much of it due to the mist.
- I cropped the image to remove the upper right reflection and cloned out the smaller one. Then I boosted the color dynamic range, adding some brightness and saturation, and I also enhanced the structure a bit, to give a better sense of detail to the trees, sand, surf and water. Here was the result: Much better, but there’s a problem: the clouds in the upper left show sharp divisions between tonal areas. The borders between darker gray and light gray tend to be jagged, betraying some overprocessing. To remedy this, I had to layer this image over the original and enlarge the original so that the processed image superimposed exactly over it (by reducing the opacity of the layer on top and adjusting its size until it overlay perfectly). Then, using a layer mask, I painted out the jagged clouds and the background rock wall on the left, to let those of the original image show through. Then I flattened the whole thing to make a realistic composite that was far more powerful than the original. In fact, by doing this, the misty area on the left helps set off the clear, saturated jungle treetops, making the composition stronger, a double win for this image. The irony is that I only noticed the problem on the third round of final revisions. On the first round I concentrated on textual errors; on the second I corrected the brightness of several images that were too dark (I had to calibrate my monitors!); and then on the third, when I thought everything would be fine, I noticed the jagged clouds and made this (hopefully) final correction.
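The layer-mask step above has a simple numerical meaning, sketched below under the assumption of grayscale pixels in [0, 1]: wherever the mask is painted in, the final pixel comes from the original layer; wherever it is left clear, the processed layer shows; partial mask values blend the two. The function name and example values are illustrative, not from any particular editor.

```python
def composite(processed, original, mask):
    """Blend two aligned images (flat lists of values in [0, 1]).

    mask = 1.0 shows the original fully (the "painted" areas),
    mask = 0.0 shows the processed layer fully,
    in-between values mix the two proportionally.
    """
    return [m * o + (1.0 - m) * p
            for p, o, m in zip(processed, original, mask)]

# Four pixels: the mask reveals the soft original clouds on the left
# while keeping the punchy processed jungle on the right.
processed = [0.9, 0.8, 0.6, 0.5]
original  = [0.4, 0.5, 0.6, 0.7]
mask      = [1.0, 0.5, 0.0, 0.0]
print(composite(processed, original, mask))
```

Flattening the layers simply bakes this per-pixel blend into a single image, which is why the final composite can be both believable (original clouds) and punchy (processed foreground).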
I keep thinking of novelist Henry James’ observation that “genius is the infinite capacity for taking pains.” So it’s not a matter of being brilliant—so many people are smarter than I am. It’s just a matter of going to all that trouble to make something perfect. It’s that (!) easy!