[GIS] What is lost when converting 12-bit imagery to 8-bit

bit-depth, digital-image-processing, remote-sensing

SPOT 6/7 satellite imagery is captured with a dynamic range of 12 bits per pixel per channel (ref). However, almost all of the SPOT imagery I have seen in use has been 8 bits per channel, split into RGB natural colour (Bands 3-2-1) and near-infrared false colour (Bands 4-3-2). What information is lost in this 12-bit to 8-bit conversion?

I'm wondering if we should be altering our purchase specifications to request delivery at the full bit depth.

Although I'm referencing SPOT imagery specifically, the question is general and really applies to any satellite or sensor system.

Update: Cross-posted to the gdal-dev mailing list, http://osgeo-org.1560.x6.nabble.com/gdal-dev-What-is-lost-when-converting-12-bit-imagery-to-8-bit-tt5482829.html. Feel free to crib the good bits from that conversation and add to your answers.

Best Answer

I know that you have tried to frame this as a general question, rather than specific to SPOT 6/7, but it's really a little of both.

The naive answer to the general question "what is lost when transforming 12-bit raster data to 8-bit?" is "4 bits of precision." That answer may not be terribly useful, though, because there are different ways to stretch the data from 12 bits to 8 bits, and it also depends on what the 12-bit numbers represent. Are they in physical units, such as spectral radiance? If so, stretching to 8 bits may obliterate the physical meaning of the pixel values. Are the 12-bit numbers all in the range [0, 255] already? If so, converting from 12-bit to 8-bit won't "lose" anything, but it will make the file smaller.
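To make the "4 bits of precision" answer concrete, here is a minimal NumPy sketch (mine, not from the original thread) showing that a plain linear rescale collapses roughly 16 distinct 12-bit values into each 8-bit code:

```python
import numpy as np

# Every possible 12-bit digital number: 0..4095 (4096 distinct values).
dn12 = np.arange(4096, dtype=np.uint16)

# A simple linear stretch to the 8-bit range 0..255.
dn8 = np.round(dn12 / 4095.0 * 255.0).astype(np.uint8)

# Only 256 distinct output codes remain, so on average 4096 / 256 = 16
# different 12-bit values become indistinguishable after conversion.
unique_codes = np.unique(dn8).size          # 256
collisions = np.bincount(dn8)               # inputs per 8-bit code (~16 each)
print(unique_codes, collisions.max())
```

Any subsequent analysis can at best recover the 8-bit code, not which of the ~16 original values produced it; that is the precision lost by the stretch itself.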

In the case of SPOT 6/7, you might find it helpful to review the SPOT image user guide, particularly the descriptions of the different processing levels in section 2.3. If you're ordering Primary Products (as opposed to Standard Orthos), you might want to preserve the original 12-bit range in order to perform quantitative analysis that depends on the physical units (scaled spectral radiance at sensor, in this case), or that at least benefits from having as much of the original information as possible (such as some stereo photogrammetry pipelines). If you just want to look at colour composites (without having to stretch the images yourself), then ordering Primary Products stretched to 8 bits should be fine. It's not clear to me from the docs what stretch is applied in the 12-bit to 8-bit conversion for Primary Products. Preserving the 12 bits also gives a customer the option of defining their own stretch or other radiometric transformation.
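As an illustration of that last option, here is a hedged sketch of a custom stretch a customer could apply themselves if the full 12-bit data is delivered. The function name and the 2%/98% clip points are my own choices for illustration, not anything specified in the SPOT documentation:

```python
import numpy as np

def percentile_stretch(band, low=2.0, high=98.0):
    """Linearly stretch raw digital numbers to 8-bit using percentile
    clip points. Illustrative sketch only; `band` is any array of DNs."""
    lo, hi = np.percentile(band, [low, high])
    scaled = (band.astype(np.float64) - lo) / (hi - lo)
    return (np.clip(scaled, 0.0, 1.0) * 255.0).round().astype(np.uint8)

# Stand-in for a 12-bit band: uniform random DNs in 0..4095.
rng = np.random.default_rng(0)
band = rng.integers(0, 4096, size=(512, 512), dtype=np.uint16)
out = percentile_stretch(band)
```

Because the clip points are computed per scene, this kind of stretch is only possible when the original range is preserved; once a vendor has applied its own (undocumented) stretch to 8 bits, the choice has been made for you.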

tl;dr: Maybe nothing is lost, maybe it completely breaks a workflow. It depends on the use case.