So far in this article, we've talked about how color spaces are used to represent colors as numbers within an image file. But once a color-managed application has used the color space to convert the image's colors into a device-independent understanding of color, how does the application convert that to the device-dependent information needed by your monitor or printer?
The answer lies with the other half of color management: device color profiles...
The first page of this article discusses color profiles for digital images, but there are also color profiles for devices (computer monitors, scanners, printers, paper, and ink). Like their digital-image versions, these color profiles describe how to convert between color and raw numeric data, but in the case of these real-world devices, the color is not theoretical, but actual: how much of which ink should be applied to the paper to achieve the proper color, or how much energy should be used to fire a CRT's electron beam at its screen's phosphors, etc.
With a color-managed system where all the links of the visual chain have appropriate color profiles, the device-independent color data (such as inside an image file) can be converted with certainty to device-dependent color data such as monitor voltages or printer ink amounts.
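To make the "device-independent" half of that chain concrete, here's a small sketch of my own (plain Python, not anything a real color-management system literally runs) showing how an 8-bit sRGB pixel can be converted to CIE XYZ, the device-independent "hub" space that ICC-style color management converts through. A real CMS does this using the image's embedded profile rather than hard-coded math, but the math underneath looks like this:

```python
def srgb_to_linear(v8):
    """Undo the sRGB transfer curve: 8-bit code value -> linear light (0..1)."""
    v = v8 / 255.0
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def srgb_to_xyz(r8, g8, b8):
    """Convert an 8-bit sRGB pixel to CIE XYZ (D65 white point),
    a device-independent description of the color."""
    r, g, b = (srgb_to_linear(c) for c in (r8, g8, b8))
    # Standard sRGB-to-XYZ (D65) matrix
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    return x, y, z

# sRGB white decodes to the D65 white point, with Y (luminance) normalized to 1.0
print(srgb_to_xyz(255, 255, 255))
```

The second half of the chain is what a device profile supplies: the mapping from an XYZ value back to the monitor voltages or ink amounts that actually reproduce that color, within the device's physical limits.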
Having properly-calibrated devices, controlled by software using the appropriate color profile for the device, is essential to ensure that the color you see on the screen or on paper is the proper color, or, at least, as proper a color as can be had within the physical limits of the display method. (If a monitor doesn't have the physical ability to reproduce a particular color, no color space or color profile can change that.)
It's common to see complaints of “the picture looks so different when I print it compared to what I see on my screen” on photography-related forums, and the reason is almost certainly related to the non-use or misuse of device color profiles.
Unfortunately, it's not necessarily easy having a properly color-managed system.
A color profile for a printer is dependent on the printer/ink/paper combination, so if you print photos on several different kinds of photo paper, you must create several different color profiles, and be sure to use the appropriate one when printing.
Printer manufacturers usually supply canned profiles for their printers (but only for use with their ink and their paper), but these can be of dubious usefulness. A custom printer/ink/paper color profile yields the most reliable output, but generating such a color profile can be costly.
One method involves printing an image with known real-world colors, and sending it (and some money) to a profiling service, which uses a spectrophotometer to measure the actual colors you got. From this, it can calculate what adjustments need to be made so that you get the truest colors, and that information is encapsulated into a printer/ink/paper-dependent color profile.
There are other methods as well. This page, which is part of an advertisement for one such method, gives a nice overview.
Creating a monitor-specific color profile is generally easier.
The easy & cheap method involves running a calibration program, and eyeballing answers to things like “slide the slider until the X is the same color as the background.” The wildly vague nature of subjective human color perception makes this an iffy scenario, but the result is better than doing nothing. Apple's OSX includes this with its Display Calibrator Assistant (System Preferences > Displays > Color > Calibrate).
A better solution is a calibration device that you stick on the screen so that it can meter the colors that its associated software sends. It can then calculate a profile for your specific monitor (with its color/tint/brightness/contrast settings as you have them during the test). Devices that can do this run as little as $120 or so.
It might sound like a lot of hassle to create color profiles for your monitor and printer/paper/ink, but that's only because it is. Most people don't, and even if they had the proper profiles, most software doesn't take advantage of them. If only one web browser (that I know of) even bothers to take into account an image's embedded color profile, how many do you think will then use your monitor's profile to make the second conversion (from true color to the best appropriate color your monitor can produce)?
A color-managed application like Photoshop does do this properly, but most software doesn't. Apple software for the Mac generally does, but not all software for the Mac does. (I find it really disappointing that Firefox for the Mac is not a color-managed application.)
Back to images, if an image has ever been saved as sRGB, any extra color information it might have had is lost forever, even if that sRGB image is later (re)converted to a “wider” color space. (Page 5 of this article talks about color-space “width.”)
In fact, you'll lower the quality of an sRGB image by converting it to a different color space, because the new color space's discrete encodable colors won't match up exactly with the old space's, necessarily requiring some fudging of the colors to get them to fit.
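You can see this fudging with a small experiment (my own illustration, not from a real converter, and simplified to consider only the transfer curves, not the differing primaries): re-encode each 8-bit sRGB gray value using AdobeRGB's simple 2.2 gamma, then convert back. Because the two curves spread their 256 available codes differently along the brightness range, some distinct sRGB values collide into the same AdobeRGB code and can't survive the round trip:

```python
def srgb_decode(v8):
    """8-bit sRGB code value -> linear light (0..1)."""
    v = v8 / 255.0
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def srgb_encode(lin):
    """Linear light -> 8-bit sRGB code value."""
    v = 12.92 * lin if lin <= 0.0031308 else 1.055 * lin ** (1 / 2.4) - 0.055
    return round(v * 255.0)

def adobe_encode(lin):
    """Linear light -> 8-bit code value using AdobeRGB's ~2.2 gamma."""
    return round(lin ** (1 / 2.2) * 255.0)

def adobe_decode(v8):
    """8-bit AdobeRGB-gamma code value -> linear light (0..1)."""
    return (v8 / 255.0) ** 2.2

# Round-trip every 8-bit sRGB gray through the AdobeRGB curve and back.
lost = [v for v in range(256)
        if srgb_encode(adobe_decode(adobe_encode(srgb_decode(v)))) != v]
print(f"{len(lost)} of 256 sRGB code values do not survive the round trip")
```

The losses are small (off-by-one code values), which is exactly the "fudging" described above: harmless once, but it accumulates if an image is bounced between spaces repeatedly.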
Anyway, the upshot is that while sRGB is still the de facto standard for the web, having your AdobeRGB or other "wider" color-space images converted to sRGB means that you're throwing out the ability to represent shades that you may well have the ability to see on a modern (properly color-managed) monitor or printer.
The AdobeRGB color space is a popular pro/prosumer camera alternative to sRGB. The default out-of-the-box color space for these cameras is invariably sRGB, but if supported, AdobeRGB can be selected for new images via the camera-settings menu.
AdobeRGB can encode a wider variety of colors than sRGB (in particular, richer shades of green and blue). It does this at the necessary expense of encoding all the colors with slightly less precision; more on encoding trade-offs on page 6 of this article.
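To put a rough number on "wider" (my own back-of-the-envelope sketch, using the two spaces' published primary chromaticities): comparing the triangles the primaries span on the CIE xy chromaticity diagram shows how much more chromaticity area AdobeRGB covers. The two spaces share red and blue primaries; only the green primary moves, toward a more saturated green.

```python
# CIE xy chromaticities of each space's published R, G, B primaries.
# sRGB and AdobeRGB share red and blue; AdobeRGB's green is more saturated.
SRGB  = [(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)]
ADOBE = [(0.64, 0.33), (0.21, 0.71), (0.15, 0.06)]

def gamut_area(primaries):
    """Area of the triangle the primaries span in xy (shoelace formula)."""
    (x1, y1), (x2, y2), (x3, y3) = primaries
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2.0

ratio = gamut_area(ADOBE) / gamut_area(SRGB)
print(f"AdobeRGB's chromaticity triangle is about {ratio:.0%} the area of sRGB's")
```

Keep in mind that xy area is a crude proxy (the diagram isn't perceptually uniform), but it captures the trade-off: the same 256 code values per channel get stretched over a bigger range of colors, which is where the slight loss of precision comes from.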
If you never need the richer shades of green, the slight loss of precision could theoretically hurt you, but the general consensus is that AdobeRGB is better for printing than sRGB. (sRGB is still the best for the web, of course, simply because not using it risks incorrect colors.)
If their camera supports AdobeRGB, many photographers avail themselves of it so that they maintain maximum color information. Nevertheless, many users opt to stay with sRGB because its ubiquity simplifies their workflow. To them, the slight improvement in color is not worth the inconvenience of needing to pay attention to a file's intended use.
A still more advanced option, for cameras that support it, is Raw image files. "Raw" is not a file format, but a type of data: it contains raw image-sensor data, prior to any processing (such as white balance compensation, sharpness adjustments, conversion to a device-independent color space, and the like). For reference, each camera maker has its own format for its cameras' raw files: Nikon cameras, for example, create .NEF files, while Canon cameras create .CRW or .CR2 files.
The benefits to working with raw files instead of JPG are numerous, but beyond the scope of this article. One benefit of note to us here, though, is that sensor-dependent color data in a raw file is usually much more detailed than the comparable JPG, which has had the color data reduced to a device-independent color space, typically sRGB or AdobeRGB.
With the advent of native raw workflow applications like Apple's Aperture and Adobe's Photoshop Lightroom, the photographer can work with images in a very wide color space, reducing to sRGB or the like only when required (such as when generating JPG copies for the web, or for printing, etc.).
Continued on the Next Page
The technical discussion deepens on the next page: Page 5: Chromaticity Diagrams.
However, if you'd rather avoid further technical stuff, feel free to skip directly to Page 7: Recommendations and Links.