So, what factors make a color space good? Many of the issues can be summarized with two statements, both of which describe a good color space: “bigger is better” and “smaller is better.” The mutually-exclusive nature of these goals indicates the contentious tradeoffs that must be made during color-space design.
Illustrative Example: Length
Just to give a feel for the issues and tradeoffs facing color-space designers, let's take a simplistic look at how we might encode something far simpler than color: length.
(To be clear, this example has nothing to do with photography; it's just an illustration of the tradeoffs inherent in any digital “space encoding.” Being familiar with these issues helps you understand discussions of the relative merits of color spaces.)
First, it seems prudent to note that if we had unlimited space to encode our length, we would not have to make any tradeoffs. With unlimited space, we could pick any unit (say, millimeters) and exactly, unambiguously, perfectly encode any length we wished, from the Planck length (0.000000000000000000000000000000001616241) to the guesstimated size of the visible universe (137198261174792827472661283376.9087225566893726264897726289572), down to more significant digits than can ever be known.
Modern digital images can have tens of millions of pixels, so it can be unwieldy to let the encoding space requirements expand without bound. People like fitting a lot of pictures on their memory cards and hard disks. Finding a way to encode as much information as possible in the minimum space brings us face to face with the need for tradeoffs.
Back to our length example, let's say that file-size concerns dictated that we use only three digits to encode the length, which means raw numbers from 0 to 999 for each length we wish to encode. It's up to us now to design a “length space” as best we can within that limitation.
If we select the millimeter as our unit, we could encode lengths up to 999 mm (just over a yard) with fairly fine granularity (our 1 mm units). This might be fine for encoding the lengths of some things (say, shoes and car tires), but remains woefully lacking for most things (widths of hairs and the heights of mountains).
If we choose a larger unit to apply to the raw numbers, such as a foot, then we can encode lengths up to about a third of a kilometer — a much larger gamut, so to speak. The tradeoff is that the precision, or granularity — how finely a point along the full encodable range can be specified — becomes coarser (in encoding-space lingo, the “quantization errors” are larger). We can now measure building heights fairly reasonably (to within a foot), but people's heights become iffy because everyone's height gets rounded to the nearest foot, and the loss of precision has eliminated any hope of encoding the size of marshmallows.
Regardless of the unit we pick, when we use a strictly linear approach as we have above, we run into the same tradeoff: gamut size vs. precision.
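To make the tradeoff concrete, here's a minimal Python sketch of the linear scheme described above. The function names and the out-of-gamut handling are my own choices, not anything from a standard:

```python
# A minimal sketch of a linear "length space": three digits (0-999)
# of raw values, each worth one fixed unit.

def encode_linear(length_mm, unit_mm):
    """Round to the nearest whole unit; None if the length is out of gamut."""
    raw = round(length_mm / unit_mm)
    return raw if 0 <= raw <= 999 else None

def decode_linear(raw, unit_mm):
    return raw * unit_mm

# Millimeter units: fine granularity, but the gamut tops out at 999 mm.
print(encode_linear(280, 1))         # a 280 mm shoe -> 280, encoded exactly
print(encode_linear(1200, 1))        # 1,200 mm -> None, out of gamut

# Foot units (304.8 mm): the gamut grows to ~305 m, but precision coarsens.
person = encode_linear(1752, 304.8)  # a 1,752 mm person -> raw value 6
print(decode_linear(person, 304.8))  # ~1828.8 mm: rounded off to six feet
```

Whatever unit is plugged in, the same see-saw appears: a bigger unit widens the gamut and coarsens the granularity in exact proportion.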
One idea is to shift the starting point so that the gamut lies over the area we might be interested in. Consider this:

length in millimeters = 457 + (2 × encoded value)
This allows our values from 0 through 999 to encode lengths from 457 mm through 2,455 mm (18 inches through about 8 feet) at a granularity of 2 mm, which would be a useful length gamut for encoding the heights of people. I'm not sure it would be much use for anything else, but it illustrates the point.
(Having shown an equation, I should remind you that this is all just an example to illustrate tradeoffs with encodings — I'm making this up as I go along, so there's no need to memorize or even pay any real attention to these equations!)
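In the same illustrative spirit, here's the offset scheme as code. The mapping (457 + 2 × raw) is implied by the range and granularity quoted above; the function names are mine:

```python
# Offset-linear "height space": raw values 0-999 cover 457 mm to 2,455 mm
# in 2 mm steps (length = 457 + 2 * raw).

def encode_height(length_mm):
    raw = round((length_mm - 457) / 2)
    return raw if 0 <= raw <= 999 else None

def decode_height(raw):
    return 457 + 2 * raw

raw = encode_height(1752)       # a 1,752 mm (about 5' 9") person
print(raw, decode_height(raw))  # -> 648 1753: off by just 1 mm
```

Within its narrow gamut the precision is excellent; outside it (marshmallows, buildings) nothing can be encoded at all.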
One technique to achieve better encoding performance is to bring an understanding of human perception into the equation. When considering the heights of people, an inch or two either way can be a big deal, but the same difference is generally much less relevant when considering the distance between cities. So, one approach is to use a non-linear encoding in which precision increases as the length gets shorter. Put another way, the difference between adjoining encodable lengths is smaller when the length is smaller, and larger when the length is larger. This fits the way people generally think.
For example, using this equation (which I just made up off the top of my head) in our encoding:
length in millimeters = e^(encoded value ÷ 32) ÷ 37
with values from 0 through 999 allows us to represent lengths from 0.027 millimeters to more than half a million kilometers. That's from about 1/1,000th of an inch (thinner than the width of an average human hair) to a distance beyond the moon. That's a wide length gamut.
Yet, despite the convenient width, it still allows the lengths of many things to be encoded with “reasonable” precision — to within a percent or so of their actual length. For example, it can encode the length and width of a pencil to within 0.3%, the length of my foot to within 1.2%, the length of a soccer field to within 0.6%, the height of Mt. Everest to within 0.4%, the diameters of the earth to within 0.1% and of the moon to within 0.3%, and the distance to the moon to within 0.2%.
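An exponential encoding like this can be sanity-checked in a few lines of Python. The constants below (e^(raw/32) ÷ 37) are chosen to match the range quoted above — 0.027 mm at raw 0, beyond the moon at raw 999 — and adjacent codes differ by a constant ratio (about 3.2%), so every interior length lands within roughly ±1.6% of a code:

```python
import math

# Exponential "length space": constant *ratio* between adjacent codes,
# so relative precision is roughly the same at every scale.

def encode_exp(length_mm):
    raw = round(32 * math.log(length_mm * 37))
    return min(max(raw, 0), 999)   # clamp to the encodable range

def decode_exp(raw):
    return math.exp(raw / 32) / 37

for name, mm in [("pencil", 190.0), ("soccer field", 105_000.0),
                 ("Mt. Everest", 8_848_000.0)]:
    approx = decode_exp(encode_exp(mm))
    print(f"{name}: off by {abs(approx - mm) / mm:.1%}")
```

The round trip stays within a percent or two for everything from pencils to mountains, which is the whole point of spending precision proportionally.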
That's not too bad, and if someone like me can come up with that off the top of my head, someone with real mathematical skill might be able to make one that's even better.
There are a lot of things about color that can be used to one's advantage when designing a color space:
A lot of colors look the same. Whole ranges of wavelengths appear essentially identical to most people, so those regions of color need not be encoded with much precision. An encoding space is better if it can spend the available precision where it counts the most.
The same can be said of the range of colors covered. This is one reason that the monochromatic colors are not generally included in RGB color spaces: exceedingly similar colors can be included in their place without having to extend the gamut all the way to the edge. Most people just can't tell the difference, so the smaller gamut is used to provide more precision across the encoded space.
As mentioned on the previous page, the eye's perception of brightness is not linear with the intensity of light. Thus, it's a more efficient use of the available precision if the brightness component of the color can be encoded in proportion to how the eye perceives brightness. This is usually done by applying a gamma curve.
In the end, the width-vs.-precision tradeoff is always there. If a color space is designed for a specific purpose, at least it can use features of the intended use to squeeze out extra efficiency (sRGB, for example, bothered to encode only the colors that common circa-1996 monitors could display). General-purpose color spaces are a more difficult subject; hopefully, this page has provided some insight into some of the issues.
Continued on the Next Page
This article continues on Page 7: Recommendations and Links.