Cameras are not nearly as good at capturing a wide range of brightnesses as our eyes, so a high contrast scene that appears fine to our eyes often ends up with either washed-out highlights or dark, muddy lowlights, like the two shots above.
When using automatic exposure metering, most cameras default to trying to strike a balance between the two extremes, often by sampling the scene across large areas of the frame. The simplest mode picks an exposure that averages the brightness levels seen throughout the frame, but this is apparently not very useful, because it's been hidden deep in the menu structure on my camera (a Nikon D700).
More useful, but still simple, is to average across the whole frame while giving added weight to the center. That's what I used when making yesterday's shot. Frankly, with that scene, I'd expect that the center weighting didn't have much impact compared to an overall averaging, but that's just speculation.
The most specific kind of exposure metering is spot metering, where you tell the camera to calculate the exposure from one small area of the frame. This is useful when you want to ensure a proper exposure for the thing you're spotting, but at the expense of how the rest of the frame ends up.
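The three modes above boil down to different weightings of the same per-pixel brightness data. Here's a toy sketch of that idea; the 5×5 luminance grid and the weighting scheme are invented for illustration and have nothing to do with Nikon's actual math:

```python
# Toy illustration of the three metering strategies: "luma" is a tiny
# 5x5 grid of scene brightness values (0 = black, 255 = max), with
# bright edges and a dark center. All numbers are made up.

luma = [
    [250, 250, 240, 230, 220],
    [240, 120, 100,  90, 210],
    [230, 110,  40,  80, 200],
    [220, 100,  70,  60, 190],
    [210, 200, 190, 180, 170],
]

def average_meter(grid):
    """Plain average over the whole frame."""
    cells = [v for row in grid for v in row]
    return sum(cells) / len(cells)

def center_weighted_meter(grid, center_weight=4):
    """Whole-frame average, but the center cell counts extra."""
    total, weight = 0, 0
    for r, row in enumerate(grid):
        for c, v in enumerate(row):
            w = center_weight if (r, c) == (2, 2) else 1
            total += v * w
            weight += w
    return total / weight

def spot_meter(grid, r, c):
    """Only the single spot matters; the rest of the frame is ignored."""
    return grid[r][c]

print(round(average_meter(luma)))          # 168 — overall brightness
print(round(center_weighted_meter(luma)))  # 154 — pulled toward the dark center
print(spot_meter(luma, 2, 2))              # 40  — just the dark center
```

The point is that the same scene yields three different "correct" exposures depending solely on which pixels the camera decides to care about.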
Here's another example: for the lowlights shot, I put the spot marker on the lady's black coat. Black is dark to begin with (duh!), and facing away from the sunset, all the more so, so the camera really amped up the exposure to try to brighten the coat, and as a result everything else is washed out...
There's no fundamental reason that any of this is required, it's just that technology can't currently deal with these scenes very well. It's just counting photons, so how hard can it be? 🙂 I'm sure that years from now this dynamic-range problem will all be a quaint footnote in the history books, but until then it's a real, practical, problem you face pretty much every time you pick up a camera.
Some people try to use HDR techniques to combine the detail from multiple shots, but that often ends up looking horribly fake because in the end, you still have to squeeze all that dynamic range back into the highly-limiting data container called the image file. Technology is just not up to par with our eyes.
None of the metering modes I've mentioned so far are the default mode for Nikon cameras. Their default metering mode is called “3D Color Matrix Metering II”, and it involves sampling the brightness, color, and subject distance at a bazillion points across the frame, then comparing that data to a database of 30,000+ real-world scenes that Nikon must have built up by hand over the years. It often works pretty well... at least for definitions of “pretty well” that have been tempered by the aforementioned limitations in current technologies.
I used that for this shot of the sun heading toward the horizon...
Nikon D700 + Nikkor 70-200mm f/2.8 @ 200 mm — 1/640 sec, f/10, ISO 200 — map & image data — nearby photos
The thing that perhaps bugs me the most about current limitations in technology is what happens when things are “too bright”. Let's take a look at the sun as it dipped closer to the mountains, in two heavily-cropped shots taken 14 seconds apart...
The setting sun was not yellow or white, but it ends up yellow and white in the picture because its photons totally overwhelm the camera sensor. But just as three blades of grass end up at the same height after being run over with a mower – even if one might have been short, one tall, and one very tall – the parts of the sun that end up white do so because all three color channels on the sensor (red, green, and blue – RGB) are overwhelmed. Having, therefore, lost all information about the relative strengths among the color channels, we end up with full-on intensity in each channel: the digital-image definition of “white”.
The parts of the sun that are yellowish were less bright, such that only two channels were overwhelmed (red and green). It's as if you have three blades of grass, one of which is actually shorter than the mower deck: after the two taller ones are cut, you have no idea about their relative heights except that the one was shorter than the other two. In this case, “blue” being shorter than “red/green” ends up as “yellow”.
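The mower analogy maps directly onto channel clipping: each sensor channel has a ceiling, and anything brighter gets chopped down to it. A minimal sketch, with RGB intensities invented for illustration rather than measured from the actual shot:

```python
# Each channel records at most 255; anything brighter clips to that ceiling,
# just like the mower deck in the analogy above.

def clip(value, ceiling=255):
    return min(value, ceiling)

def record(r, g, b):
    """What the sensor reports for a pixel with linear intensities (r, g, b)."""
    return (clip(r), clip(g), clip(b))

# A deep-orange sun at three (made-up) intensity levels:
print(record(300, 180, 60))    # (255, 180, 60)  — only red clips, still orange-ish
print(record(900, 540, 180))   # (255, 255, 180) — red and green clip: yellow
print(record(3000, 1800, 600)) # (255, 255, 255) — all three clip: pure white
```

Once two or three channels hit the ceiling, their original ratios are gone for good, which is why no amount of post-processing can recover the sun's true color from the clipped pixels.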
In an attempt to capture the actual color of the sun, I told the camera to underexpose the image by 5.5 stops. That means that after it decided what exposure it thought would give it a nice result, I instructed it to pick an exposure that registered 45× less light. When you're talking about photons from the sun, “less” is definitely a relative term, which is why the sun still comes out quite bright in the second shot. The point of doing this was to end the exposure before the photons overwhelmed all the color channels, and to at least some extent I succeeded. It's as if I raised the mower deck to five feet off the ground: a blade of grass 10 feet high is still chopped down considerably, but even after that, relatively speaking, it's still a lot taller than a two-inch blade of grass.
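The arithmetic behind that “45×” figure is simple: each stop of exposure halves the captured light, so 5.5 stops under is a factor of 2 raised to the 5.5:

```python
# Each stop halves the light, so underexposing by 5.5 stops
# reduces the captured light by a factor of 2**5.5.
stops = 5.5
ratio = 2 ** stops
print(round(ratio, 1))  # 45.3 — hence "45x less light"
```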
Finally, I'll end with a shot that has nothing to do with the rest of these, except that it was taken at the same time. It's the guy silhouetted at the left of yesterday's shot.
Nikon D700 + Nikkor 24-70mm f/2.8 @ 62 mm — 1/160 sec, f/2.8, ISO 2000 — map & image data — nearby photos
I find something oddly appealing about this shot. I think it's the blurred background and almost complete lack of shadows: together they make it look as if I'd pasted the guy into the empty scene from a different shot. But I didn't do anything in post-processing. Nothing. All the shots on this page are rendered out of Lightroom with all default settings, except for shrinking to fit my blog, and, for the sun-closeup shots, normalizing the white balance and then cropping.