MATLAB: How to convert pixel intensities into pixel spectrum

Tags: hyperspectral images, hyperspectral video, pixel intensities, pixel spectrum

In an experiment, some researchers used a video hyperspectral (HS) camera (a video camera + interferometer + lenses) in order to get hyperspectral images (HSI) in the 650-1100 nm range, with a resolution of 5 nm. They are trying to classify the elements in the entire HSI. One way of doing this is by analysing the spectrum of each component in the sample. Sugar gives a characteristic spectrum, different from the chlorophyll spectrum, and so on.
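For reference, the 650-1100 nm range at 5 nm steps works out to 91 spectral bands, one per image file. A quick check (the variable name is my own):

```matlab
% Spectral axis implied by the camera specs above
wavelengths = 650:5:1100;   % nm, in 5 nm steps
numel(wavelengths)          % 91 bands, one per .png file
```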
For an unknown reason, their system does not save the hypercube as say, “sample.hdr” and “sample.raw” files, as common HS systems do. Instead, the video system saves the captured data as a series of “.png” image files, i.e., photos; one for each wavelength.
I think I can read the images and create the R, G, B data matrices:
clear; clc;
files = dir('*.png');
nFiles = numel(files);
% Read the first image to get the frame size, then preallocate the cubes
a = imread(files(1).name);
dataR = zeros(size(a,1), size(a,2), nFiles);
dataG = zeros(size(a,1), size(a,2), nFiles);
dataB = zeros(size(a,1), size(a,2), nFiles);
for ii = 1:nFiles
    a = imread(files(ii).name);
    dataR(:, :, ii) = a(:, :, 1);
    dataG(:, :, ii) = a(:, :, 2);
    dataB(:, :, ii) = a(:, :, 3);
end
For instance, the cube dataR is a 1040x1392x91 double array, which occupies 1,053,911,040 bytes (1040 x 1392 x 91 elements, at 8 bytes per double).
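Once a cube like dataR exists, one pixel's 91 intensities can be pulled out with squeeze and plotted against the wavelength axis. A sketch, where the pixel indices and the 650:5:1100 axis are my assumptions:

```matlab
wavelengths = 650:5:1100;                 % nm; assumed spectral axis (91 bands)
row = 100; col = 200;                     % arbitrary example pixel
spectrumR = squeeze(dataR(row, col, :));  % 91x1 intensity vector for that pixel
plot(wavelengths, spectrumR);
xlabel('Wavelength (nm)'); ylabel('Intensity');
```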
However, the 91 values (dimension p) in each individual picture-cube do not represent any sort of continuous spectrum at all (unlike the first attached image, which shows an example of a spectrum).
They appear as some kind of squared intensities, like the two right-hand images in the second attachment (an example of a discontinuous signal or response).
I don’t know much about video, so I’m assuming that the PNG compression process binned the data and left only those rather discontinuous intensities. Am I wrong?
Is there a way to transform each of the 91 elements in the p dimension (each pixel’s individual intensity) into a real pixel’s spectral signature?
In other words, can I convert all the p intensities of a pixel into a spectrum of that pixel?
Thanks a lot for any hint.

Best Answer

PNG uses only lossless data compression, so what you are seeing is not due to PNG compression.
What you are seeing reflects the fact that the original signal varies more quickly in wavelength than the wavelength resolution of the camera can capture. It is the wavelength equivalent of aliasing in audio, akin to trying to record a 10 kHz tone on a channel sampled at 8000 samples/s, which can only resolve frequencies up to 4000 Hz (the Nyquist limit). Some portions of the signal will simply not show up, and some will show up as gross distortions.
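The audio analogy can be reproduced in a few lines: a 10 kHz tone sampled at 8000 samples/s is, sample for sample, identical to a 2 kHz tone, because 10000 - 8000 = 2000 Hz is its alias. A small demonstration:

```matlab
Fs = 8000;                  % sampling rate (samples/s); Nyquist limit = 4000 Hz
t  = (0:1/Fs:0.01)';        % 10 ms worth of samples
x10k = sin(2*pi*10000*t);   % 10 kHz tone, sampled far too slowly
x2k  = sin(2*pi*2000*t);    % 2 kHz tone at the same sample times
max(abs(x10k - x2k))        % essentially 0: the 10 kHz tone aliases to 2 kHz
```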
Wavelength is inversely proportional to frequency (f = c/lambda), so given amplitudes at particular wavelengths (potentially equally spaced) you can transform the values into amplitudes at particular frequencies (which would not generally be equally spaced), and then you could interpolate to fill in more of the spectrum. An inverse Non-Uniform FFT (NUFFT) might figure into it somewhere. But this will not create information; it will only smooth out the presentation.
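The wavelength-to-frequency step (without the NUFFT part) can be sketched as follows: map the equally spaced wavelength axis onto its non-uniform frequency counterpart, then interpolate onto a uniform frequency grid. The placeholder spectrum and the axis are my assumptions:

```matlab
c = 2.998e8;                        % speed of light, m/s
lambda = (650:5:1100)*1e-9;         % equally spaced wavelengths, m
f = c ./ lambda;                    % corresponding frequencies: NOT equally spaced
spectrum = rand(1, numel(lambda));  % placeholder for one pixel's 91 intensities
% Sort by frequency (f descends as lambda ascends), then resample onto
% a uniform, denser frequency grid for a smoother presentation
[fSorted, idx] = sort(f);
fUniform = linspace(fSorted(1), fSorted(end), 256);
spectrumUniform = interp1(fSorted, spectrum(idx), fUniform, 'pchip');
```

As noted above, this only changes the presentation; no new spectral information is created by the interpolation.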