Solved – Figuring out Signal to Noise Ratio of Spectral data

machine learning, signal detection, signal processing

I have some Raman spectral data of a chemical:

  1. Each spectrum consists of multiple tuples of (wavenumber, intensity).
  2. I have multiple instances of 1., taken at different laser powers; this causes the intensities to differ across spectra.

My goal is to evaluate which laser power provides the best reading. I would like an algorithmic way to determine this.

Assuming that I have a good characterization of my sample, i.e. I know the peak wavenumbers and their relative intensities, are there any algorithms to determine the signal to noise ratio of each read?

Best Answer

Bit late to the party, but here we go:

which laser power provides the best reading

As a Raman spectroscopist, I'd say the highest power that doesn't burn (or otherwise alter) your sample and that doesn't lead to detector saturation. The reason is that the higher your total intensity, the better your signal to noise ratio.

are there any algorithms to determine the Signal to Noise ratio of each read

Yes and no.

The most basic algorithm involves doing repeated measurements (and thus does not work for a single spectrum) and calculating the signal to noise ratio as the average of the signal intensity $I$ in question (e.g. band height or band area) divided by its standard deviation: $SNR = \frac{\bar I}{sd (I)}$; the SNR is the inverse of the relative standard deviation of that band. The SNR will be different for different bands in the same spectrum.
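A minimal sketch of that repeated-measurement estimate, assuming the repeats are stacked into a NumPy array (the array contents and the band indices below are hypothetical):

```python
# SNR of one band from repeated measurements: mean / standard deviation.
import numpy as np

rng = np.random.default_rng(0)
spectra = 1000 + 30 * rng.standard_normal((20, 500))  # hypothetical stack: 20 repeats x 500 wavenumbers

band = slice(240, 260)                        # hypothetical wavenumber indices of the band
band_intensity = spectra[:, band].sum(axis=1) # band area for each repeat

snr = band_intensity.mean() / band_intensity.std(ddof=1)
print(f"SNR of this band: {snr:.1f}")         # inverse of the relative standard deviation
```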

It is important to realize that under ideal conditions the noise on a Raman spectrum depends on the signal intensity. There are (at least) two contributions to the noise: the dark current with its noise (variance) and the shot noise (photon counting noise). You can measure the variance of the dark noise for your system and measurement parameters (laser intensity should not influence it, but the exposure time of the camera does). Shot noise should ideally be Poisson distributed, i.e. $var (I) = mean (I)$. Thus, assuming ideal conditions as a 0th approximation, the SNR of a Raman signal with intensity $I$ is at most $SNR \leq \frac{I}{\sqrt I} = \sqrt I$. A first approximation including the dark current $d$ would yield $SNR \leq \frac{I - d}{\sqrt{I + s^2 (d)}}$, where $I - d$ is the actual Raman signal (observed intensity minus dark current), assuming that the dark-current variance $s^2 (d)$ and the shot noise are independent so that their variances add up. These calculations can be done for single spectra.
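As a rough illustration of the two single-spectrum approximations above (all intensities here are made-up counts; your detector's values will differ):

```python
# Shot-noise-limited SNR estimates for a single spectrum.
import numpy as np

I_observed = 2500.0   # hypothetical observed band intensity (counts), includes dark current
d_mean = 100.0        # hypothetical dark current d (counts)
d_var = 64.0          # hypothetical dark-noise variance s^2(d)

# 0th approximation: pure Poisson shot noise, SNR <= sqrt(I)
snr_shot = np.sqrt(I_observed)

# 1st approximation: subtract the dark current, add its variance to the shot noise
snr_dark = (I_observed - d_mean) / np.sqrt(I_observed + d_var)

print(f"shot-noise limit: {snr_shot:.1f}, with dark current: {snr_dark:.1f}")
```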


What you should not do, unless you can give good reasons why it works in your particular application, is measure the noise at the baseline of the Raman spectrum and relate this to your Raman signal intensity. This approximation works only if you have such a high background that the baseline intensity $BG$ dominates the signal intensity ($\sqrt{I + BG} \approx \sqrt{BG}$, i.e. $BG \gg I$).
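To make that condition concrete, here is a small check (with hypothetical counts) of how far the baseline-only noise estimate is off when the background does and does not dominate:

```python
# The baseline noise sqrt(BG) only approximates the band noise sqrt(I + BG) when BG >> I.
import numpy as np

def relative_error(I_signal, BG):
    """Relative error of sqrt(BG) as an estimate of sqrt(I + BG)."""
    noise_true = np.sqrt(I_signal + BG)   # shot noise of band sitting on background
    noise_baseline = np.sqrt(BG)          # what a baseline measurement would give
    return abs(noise_true - noise_baseline) / noise_true

print(f"BG >> I : {relative_error(400.0, 20000.0):.1%}")  # approximation holds
print(f"BG ~  I : {relative_error(400.0, 500.0):.1%}")    # approximation breaks down
```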


There's an excellent in-depth discussion of these topics in McCreery: Raman Spectroscopy for Chemical Analysis


Of course, the signal to noise ratio for a given application is yet another matter, as e.g.

  • you may combine the information of several spectral regions in order to gain SNR
  • cross sensitivities (matrix spectra) can reduce the SNR and/or make some analyte bands useless for the particular application.