Image Bit Depth

I am sure you have heard the term 'bit depth'. But what does it mean and what difference does it make to the quality of the images you produce? That is what we are going to explore in this lesson.

Reminder of what a 'bit' is
Bit depths and Photoshop
Variations in tone

Reminder of what a 'bit' is

Perhaps the place to start is to remind you what a 'bit' is. Remember that each pixel is really just a binary number that represents its colour. Remember too that to represent 256 tones in a pixel, it took a binary number 8 digits in length (11111111 being the largest, which is 255, giving the values 0 to 255). Each digit in that number is 1 bit of information, and so we came to have the colour of a pixel represented by a number that is 8 bits in size. So a grayscale pixel has 256 possible tones, represented by a binary number that is '8 bits in depth.'

Now that number does not have to be only 8 bits in depth. It can be 10, 12, 14 or 16 bits long, and each extra bit doubles the number of variations in colour it can represent.

So the 'weight' of a single grayscale pixel is 8 bits which is also known as 1 byte (8 bits = 1 byte in computer speak). But it could also be 16 bits which is 2 bytes in size.

And the weight of an 8-bit RGB pixel is 3 times that, because there is one 8-digit number for each colour channel: 1 for the Red channel, 1 for the Green channel and 1 for the Blue channel, which makes it 3 bytes in size. But that RGB image could have a bit depth of 16 bits, in which case each of the three colour channels would have a number that is 16 digits long. That is 16 bits, or 2 bytes, per colour channel, which means that each pixel of a 16-bit RGB image is 6 bytes in size.
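The per-pixel arithmetic above can be checked with a short sketch. This is just an illustration of the calculation, not anything from a real imaging library; the function name is our own.

```python
BITS_PER_BYTE = 8  # 8 bits = 1 byte, as noted above

def pixel_size_bytes(bits_per_channel, channels):
    """Return the size of one pixel in bytes."""
    return bits_per_channel * channels // BITS_PER_BYTE

print(pixel_size_bytes(8, 1))   # 8-bit grayscale pixel: 1 byte
print(pixel_size_bytes(8, 3))   # 8-bit RGB pixel: 3 bytes
print(pixel_size_bytes(16, 3))  # 16-bit RGB pixel: 6 bytes
```

Multiply the last figure by the number of pixels in the image and you have the uncompressed working size of the file.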

The real advantage of a 16-bit image over an 8-bit image, however, is not its file size, but the variations in colour that it represents.

Bit depths and Photoshop

Any image at the pre-digital stage, whether it is a scene being photographed or an analogue film being scanned (such as a 35 mm negative or transparency), has a continuous range of tones. Once digitally photographed, or scanned, however, the digital file is recorded by dividing the image into a number of tonal levels. This can be 8 bits through to 16 bits per colour channel depending on the scanner software and the limits of the hardware.
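The quantization step described above, where a continuous tone is divided into a fixed number of levels, can be sketched like this. The helper function is hypothetical, purely for illustration; real scanner and camera firmware does this in hardware.

```python
def quantize(tone, bits):
    """Map a continuous tone (0.0 to 1.0) to one of 2**bits discrete levels."""
    levels = 2 ** bits
    return min(int(tone * levels), levels - 1)

# The same mid-grey tone lands on a different level number at each bit depth.
print(quantize(0.5, 8))   # 128 out of 256 levels
print(quantize(0.5, 12))  # 2048 out of 4096 levels
print(quantize(0.5, 16))  # 32768 out of 65536 levels
```

The tone itself is unchanged; only how finely the range around it is sliced differs between bit depths.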

Figure 1 The reality we perceive with the human eye is continuous tone. When it is captured on a scanner or a digital camera, though, each pixel representing that reality has to be represented by a binary number of a certain length or 'bit depth.' Once the image is brought into Photoshop, it is treated as either an 8-bit image or a 16-bit image. Once the image has been worked on, it still needs to be output to others in 8 bits, since that is what the industry works with.

Regardless of how many levels the camera or scanner can detect and save, the file can only be in 8- or 16-bit format when it is opened in Photoshop. If the camera or scanned image is in something like 12 bits, then exporting it as a 16-bit file into Photoshop does not improve on the 12-bit scan; it just ensures that the full 12 bits are kept. A 12-bit image imported as a 16-bit file still has only 12 bits of level information in it.

The 16-bit file has twice as many binary digits for each RGB channel, but it still has the same number of pixels. Because each pixel holds twice the amount of information, the 16-bit file is also twice the working file size of the 8-bit file. In the end, whatever bit depth the images are captured and worked on at, the final result supplied to the public must be in an 8-bit format, as most image production software will not understand 16-bit images.

NOTE: There are also 1-bit images, but these would not be used for photographs as they can only hold two tones: black and white. Photoshop 9 introduced a 32-bit image format, but this is only intended for specialist uses.

Variations in tone

The real advantage of images with higher bit depth is the variations in colour that they can represent. This is not so much an advantage for the human eye, though. An 8-bit image has 256 tonal levels per channel (2^8 = 256), whereas a 16-bit image has 65,536 levels (2^16 = 65,536), way beyond what the human eye can distinguish. Nevertheless, there are some real advantages in scanning in 16-bit mode. Take the image below, for example.
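The levels-per-channel figures quoted above follow directly from the bit depth: the number of levels is 2 raised to the number of bits. A quick sketch:

```python
# Tonal levels per channel for common capture bit depths.
for bits in (8, 12, 14, 16):
    print(f"{bits}-bit: {2 ** bits:,} levels per channel")
# 8-bit: 256 levels per channel
# 12-bit: 4,096 levels per channel
# 14-bit: 16,384 levels per channel
# 16-bit: 65,536 levels per channel
```

This is also why a 12-bit capture (4,096 levels) already exceeds the 256 levels an 8-bit file can hold, and why a 16-bit container preserves it fully.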

Figure 2 Above is an image that is badly underexposed. The histogram below shows how it has been compressed into just a small part of the total tonal range.

Figure 3 Because the tonal range of the image is forced into such a small section of the total tonal range, it is going to require significant stretching. An 8-bit image, which has only 256 tones at its maximum anyway, will in all likelihood posterize, because the 80 tones currently represented in the image are going to be stretched across 256 tones. A 16-bit image, which has a possible 65,536 tonal levels, will have more than enough data to spread right across the tonal range, even if only 30% of the tones are represented.

The image above has a very flat tonal range (done deliberately for illustration purposes). The histogram statistics show that the levels run only from 0 to 80, which is under one third of the full range of 256. Even though the Histogram always shows the levels statistics in 8-bit terms, a 16-bit version of this image would have 80 x 2^8 = 20,480 levels instead, which is more than enough for the final 256 tones needed. At this stage, before the image has been corrected, the histogram will look the same for both the 8- and 16-bit images. So let's do the correction.

Figure 4 The image is corrected in the Levels tool in Photoshop.

Figure 5 The histogram for the 8-bit image (left) shows gaps where 80 tonal levels have been spread over 256 tonal levels. The histogram for the 16-bit image (right) shows no gaps in the tonal range because 20,480 tonal levels were spread over 256 tonal levels.

After the scan has been corrected by stretching the range in the Levels tool in Photoshop, the difference between the 8- and 16-bit versions reveals itself in the histograms. The 8-bit histogram is fractured: it only had 80 levels to start with, so gaps had to be added between the levels to cover the 256-tone space, and the result is still only 80 levels. The 16-bit image, on the other hand, had 20,480 levels instead of 80, so no gaps had to be added to fill the 256-tone space. When the 16-bit image is converted to 8 bits after correction, the result is a full 256 tones.
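A rough simulation shows why the stretch fractures the 8-bit histogram but not the 16-bit one. This is a simplified sketch of the idea, not how Photoshop actually computes Levels; it assumes, as in the example, that the usable tones run from level 0 to level 80.

```python
# 8-bit case: only levels 0..80 exist before correction. Stretching them
# over 0..255 still yields only 81 distinct output levels, so gaps appear.
stretched_8 = {round(v * 255 / 80) for v in range(81)}
print(len(stretched_8))   # 81 distinct levels out of a possible 256

# 16-bit case: the same tonal span holds 80 * 256 = 20,480 levels, enough
# for the stretch to land on every one of the 256 final output values.
stretched_16 = {round(v * 255 / 20480) for v in range(20481)}
print(len(stretched_16))  # all 256 output levels filled, no gaps
```

The count of distinct output levels is exactly what the two histograms in Figure 5 show: a comb of gaps for the 8-bit file, a solid range for the 16-bit one.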

When image levels are pulled apart too far, there is a very real risk of posterization when working with an 8-bit image. For most production work, however, you may not notice any real gain in image quality, so it is a matter of experimentation to see what suits your quality standards and workflow best. This is also the reason that archival-quality scans are always made at 16 bits.