Image Bit Depth, and why your pictures don’t look good in Photoshop. Episode 1

I’d like to cover today an essential aspect of imaging, one that a surprisingly large number of scope users either aren’t aware of or are only partially familiar with.

Image Structure

While we view images as two-dimensional constructs, we are always looking at a third dimension: the brightness range, or height, contained in the image. Below is a typical monochrome image.

Monochrome 12-Bit Image

In the view above we see the third dimension of the image (brightness value) as output in monitor intensity. More specifically, consider that each pixel simply holds a number. In that sense, images are similar to Excel tables: each cell, or pixel, holds some integer value. A value of 0 in the image shows as black on the screen, and a fully saturated value (in this case 4,095) shows as bright white. If we now view these values as height instead of intensity, things look a little different.

Monochrome Image above, where higher pixel values are represented as higher elevations

What we now see looks like a mountain range. The higher peaks represent larger numbers for each pixel. Note that ALL values sit between 0 and 4,095 on the height scale bar at the right side of the image above. This scale represents the dimensional limits of the image. In this case the dimensional cap, or highest number allowed in this image, is 4,095. Anything higher than this will be clipped to 4,095. Conversely, the lowest number we can have is 0. Negative numbers are not supported. Fractional numbers are not supported. Only integer values will work in this container that we are viewing as an image.
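The integer container described above can be sketched in a few lines of NumPy (the array values here are made up purely for illustration):

```python
import numpy as np

# A 12-bit monochrome image is just a grid of integers in [0, 4095].
# Out-of-range values are clipped to the cap; negatives go to 0, and
# fractional values are truncated to integers.
image = np.array([[0.0, 1000.0, 5000.0],
                  [-20.0, 2047.9, 4095.0]])

clipped = np.clip(image, 0, 4095).astype(np.uint16)
print(clipped)
# [[   0 1000 4095]
#  [   0 2047 4095]]
```

Note that `uint16` is used here because NumPy has no 12-bit integer type; the 12-bit limit is enforced by the clip, not the storage type.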

The maximum allowable value for an image is defined by the bit depth of the image. All bit values start at 0 and count up. A bit is simply a 0 or a 1, so a 1-bit image can hold a value of 0 or 1 for any given pixel (this is a “binary image”). As we increase the bit depth (the allowable height of our mountain range), the number of possible value combinations increases. So 2 bits = 0,1 + 0,1, for 4 total possible values. Bit combinations basically work like this in a 2-bit image:

Intensity of 0 = bit 1 = 0, bit 2 = 0.

Intensity of 1 = bit 1 = 1, bit 2 = 0.

Intensity of 2 = bit 1 = 0, bit 2 = 1.

Intensity of 3 = bit 1 = 1, bit 2 = 1.
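The table above can be generated directly with bitwise operations (treating bit 1 as the least-significant bit, as the table does):

```python
# Enumerate every pixel value a 2-bit image can hold, and show which
# bits are set for each intensity. Bit 1 is the least-significant bit.
for intensity in range(4):
    bit1 = intensity & 1         # least-significant bit
    bit2 = (intensity >> 1) & 1  # next bit up
    print(f"Intensity of {intensity} = bit 1 = {bit1}, bit 2 = {bit2}")
```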

Each added bit doubles the count of possible values: 2 to the power of X. So 2 to the power of 2 = 4 values (a 2-bit image), and 2 to the power of 8 = 256 values, ranging from 0 to 255 (an 8-bit image). You’ll find 12-, 14-, and even 16-bit cameras commonly used in research. Why such a high bit depth? The greater the numeric range we can collect, the more accurately we can segment small brightness changes in whatever we are imaging. So in order to generate the most accurate measurement, we want the highest bit depth.
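The relationship between bit depth and value range can be sketched as:

```python
# Number of representable intensity levels for a given bit depth:
# levels = 2 ** bits, with pixel values running from 0 to 2**bits - 1.
for bits in (1, 2, 8, 12, 14, 16):
    levels = 2 ** bits
    print(f"{bits:2d}-bit image: {levels:6d} levels (0 to {levels - 1})")
```

For the 12-bit camera discussed above, this gives 4,096 levels with a maximum value of 4,095, matching the height scale in the figure.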

So, image structure includes an image’s spatial dimensions (pixels along the X and Y axes), but it also defines the image’s bit depth dimension.

But what about my 32 bit monitor? And my 48 bit awesome SLR camera? And how does this relate to Photoshop!?!?

I’ll answer these questions, and more that you never wanted to know, in my second post on this subject, this Friday. :-)

(Update – You can find Episode 2 here, and Episode 3 here)

-Austin


Comments

2 responses to “Image Bit Depth, and why your pictures don’t look good in Photoshop. Episode 1”

  1. sam

    this is great. i look forward to the next installment. i’ll forward them to my labmates…

    one question: do you know what algorithm ImageJ and other programs use to change a 12-bit image to an 8-bit image, for instance? is it as simple as just binning intensity ranges together?

    1. Austin

      Hi Sam! So ImageJ and many others basically down-convert by whatever divisor will fit the input file range into the working memory space. So if using a 16-bit image and converting to 8-bit, you’d imagine the software would simply divide all values by 255 or 256. I found this doesn’t work as it should in ImageJ! Please give this a shot to confirm my findings, but here are the exact steps I used:

      – Open a 16-bit image (or something bigger than 8-bit) in ImageJ.
      – Draw a big square region on the image.
      – Click Analyze menu -> Measure (note the mean).
      – Click Image menu -> Type submenu, and select 8-bit.
      – Again click Analyze menu -> Measure (again, note the mean).
      What I found in my example: the starting mean was 2314.655 and the ending mean was 10.856. I’d expect the mean to equal 9.042 (2314.655 ÷ 256). BUT this basic example may not account for how many background values were reduced to zero.
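      The naive divide-by-256 conversion described above (the behavior one might expect, though, as noted, not necessarily what ImageJ actually does) can be sketched with NumPy; the pixel values here are made up for illustration:

```python
import numpy as np

# Naive 16-bit -> 8-bit down-conversion: integer-divide every pixel by 256.
# This maps the full 16-bit range [0, 65535] onto [0, 255], and the mean
# of the result tracks (original mean) / 256, minus truncation effects.
img16 = np.array([[0, 256, 4095, 65535]], dtype=np.uint16)
img8 = (img16 // 256).astype(np.uint8)
print(img8)           # [[  0   1  15 255]]
print(img16.mean() / 256, img8.mean())
```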

      So – I’m not convinced on how this works yet. GREAT question Sam! Either Friday or Monday, I’ll make a test image with some simple values, run it through ImageJ, and post the results. Curious…

      -Austin