Solved – How to compare measurements and uncertainties made with different measuring instruments

Tags: distributions, measurement, measurement-error, r, repeated-measures

I have two different measuring instruments, A and B. Both measure the same physical property $x$ of an object, but with different "quality": B gives measurements with a known uncertainty, while I do not know the uncertainty of the measurements given by A.

I have $N$ distinct objects and I measure the property $x$ for all of them with A, so I get a list of measurements $L_A=\{x_{A1}, x_{A2},\ldots,x_{AN}\}$ where $x_{Ai}$ is the measurement of the property $x$ for the $i$-th object.

The objects are not labeled, so, given an object, I only know that its measurement belongs to $L_A$; I am not able to tell which element of $L_A$ it is. Furthermore, I cannot use A to measure an object that I have already measured with A in the past.

Then I randomly choose $M < N$ of the $N$ objects. I measure all $M$ objects of the sample with B and get a list of measurements $L_B=\{x_{B1}, x_{B2},\ldots,x_{BM}\}$. Please note that the subscript index is not a label for the object, so I cannot directly compare $x_{A1}$ with $x_{B1}$.

Is it possible to estimate the uncertainty in the measurements given by A with the above data?

I was thinking about comparing the empirical cumulative distribution function (ECDF) of $L_A$ with that of $L_B$, but it is just an idea and I have not been able to elaborate it further.
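For instance (a minimal sketch in R, assuming L_A and L_B are numeric vectors of measurements), the two ECDFs could be compared with a two-sample Kolmogorov-Smirnov test; note that this only asks whether the two samples look like draws from the same distribution, and does not by itself yield A's uncertainty:

# Two-sample KS test: the statistic D is the maximum vertical
# distance between the ECDF of L_A and the ECDF of L_B
ks.test(L_A, L_B)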

Are there any established standards that cover my problem? For example, I found references to "ISO 5725: Accuracy (trueness and precision) of measurement methods and results", but I do not have access to it.

Update:

I found my question similar to "How to test if readings from two devices are significantly different?", where Michael Lew's answer suggests the paper Ludbrook, J. (2002). Statistical techniques for comparing measurers and methods of measurement: a critical review. Clinical and Experimental Pharmacology and Physiology, 29(7), 527-536. Unfortunately, it seems to me that the paper requires pairing between the measurements.

Update 2:

I have written an R script to simulate my problem.

set.seed(42)

# The instrument has three parameters: the true value is transformed by a
# linear transformation described by the parameters gain and offset; then
# there is a dispersion parameter. An ideal instrument would have gain = 1,
# offset = 0 and a dispersion that approaches zero.
instrument_measurement <- function(true_value, gain, offset, dispersion)
{
  return(rnorm(length(true_value), mean = gain * true_value - offset, sd = dispersion))
}

N=1000
true_mean = 0
true_sd = 1
# I simulate the property to be measured for N objects, here the property is
# normally distributed...
true_values = rnorm(N,mean=true_mean,sd=true_sd)

# but it could also be a mixture of normal distributions:

# components <- sample(1:3,prob=c(1/7,5/7,1/7),size=N,replace=TRUE)
# mus <- c(-0.2,0,+0.3)
# sds <- sqrt(c(0.05,0.05,0.05))
# true_values <- rnorm(n=N,mean=mus[components],sd=sds[components])
# plot(density(true_values))


# The "quality" of instrument B is "good enough" to measure the true values:
gain_B = 1
offset_B = true_sd/10
dispersion_B = true_sd/10

# Instrument A has a lower "quality" than instrument B:
gain_A = 1.1 * gain_B
offset_A = -2 * offset_B
dispersion_A = 5 * dispersion_B

# I simulate the measurements made by instrument A:
L_A = instrument_measurement(true_values,gain_A,offset_A,dispersion_A)

# I draw a sample of M = 100 objects:
sample_to_measure_with_B = sample(true_values, 100, replace = FALSE)

# I simulate the measurements made by instrument B:
L_B = instrument_measurement(sample_to_measure_with_B,gain_B,offset_B,dispersion_B)

# I plot the empirical CDF of the true values, of the measurements made with
# instrument A and of the measurements made with instrument B
plot(ecdf(true_values),col="grey",main="",xlab="x, measured property",ylab="value of empirical CDF")
lines(ecdf(L_A),col="blue")
lines(ecdf(L_B),col="orange")
legend(x=(max(true_values)+mean(true_values))/2,y=.5,legend=c("true","A","B"),col=c("grey","blue","orange"),lty=c(1,1,1))
title("Empirical CDFs")

Referring to the script: is it possible to estimate gain_A, offset_A and dispersion_A from L_A and L_B? What would be the uncertainties in the estimates?

I had an inelegant idea: define a cost function and try to minimize it in the space of the parameters gain, offset and dispersion:

ecdf_distance <- function(ecdf1, ecdf2)
{
  # Returns 0 if ecdf1 is "equal" to ecdf2, otherwise a positive scalar
  # that measures the difference between ecdf1 and ecdf2; here, the
  # Kolmogorov-Smirnov statistic evaluated on the pooled jump points.
  grid <- sort(c(knots(ecdf1), knots(ecdf2)))
  return(max(abs(ecdf1(grid) - ecdf2(grid))))
}

cost <- function(parameters)
{
  L <- instrument_measurement(L_B, parameters$gain, parameters$offset, parameters$dispersion)
  return(ecdf_distance(ecdf(L_A), ecdf(L)))
}

I ran some tests keeping gain = 1, but with no luck: the cost function seems almost constant with respect to dispersion. I am afraid I lack some theory/math about the problem 🙂
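For what it is worth, here is a hedged sketch (not a working solution) of how such a minimization could be run with optim. Since instrument_measurement is stochastic, each cost evaluation is averaged over several simulated draws; cost_vec, n_rep and the starting values are illustrative choices of mine, not part of the original script:

# Cost over a parameter vector, averaged over n_rep stochastic draws
cost_vec <- function(par, n_rep = 50)
{
  # par = c(gain, offset, dispersion); reject invalid dispersions
  if (par[3] <= 0) return(Inf)
  mean(replicate(n_rep, {
    L <- instrument_measurement(L_B, par[1], par[2], par[3])
    ecdf_distance(ecdf(L_A), ecdf(L))
  }))
}

fit <- optim(c(1, 0, 0.5), cost_vec)  # Nelder-Mead by default
fit$par

Even with the averaging, the objective remains noisy, so a derivative-free optimizer and several restarts would be advisable.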

[Figure: empirical CDFs of the true values, of L_A and of L_B, as produced by the script above]

Best Answer

The model you use to simulate your problem can be used almost verbatim for Bayesian estimation of the parameters you are interested in. Here is the model I'll use (with the same notation as you):

$$
\begin{aligned}
L_{Bj} &\sim \mathrm{Normal}(\mu, \sigma) && \text{for } j \text{ from } 1 \text{ to } M \\
x_i &\sim \mathrm{Normal}(\mu, \sigma) && \text{for } i \text{ from } 1 \text{ to } N \\
L_{Ai} &\sim \mathrm{Normal}(x_i \cdot \mathrm{gain} - \mathrm{offset},\ \mathrm{dispersion}) && \text{for } i \text{ from } 1 \text{ to } N
\end{aligned}
$$

The glaring omission in this model compared to your problem is that I don't include the assumption that some of the same $x_i$s that got measured by B could then be measured again by A. This could probably be added, but I'm not completely sure how.

This model is implemented in R & JAGS below using very vague, almost flat priors; the data are the ones you generated in your question:

library(rjags)

model_string <- "model{
  for(i in 1:length(L_B)) {
    L_B[i] ~ dnorm(mu, inv_sigma2) # <- reparameterizing sigma into precision,
                                   #    needed because of the JAGS/BUGS legacy
  }
  for(i in 1:length(L_A)) {
    x[i] ~ dnorm(mu, inv_sigma2)
    L_A[i] ~ dnorm(gain * x[i] - offset , inv_dispersion2)
  }

  mu ~ dnorm(0, 0.00001)
  inv_sigma2 ~ dgamma(0.0001, 0.0001) 
  sigma <- sqrt(1 / inv_sigma2)
  gain ~ dnorm(0, 0.00001) T(0,)
  offset ~ dnorm(0, 0.00001)
  inv_dispersion2 ~ dgamma(0.0001, 0.0001)
  dispersion <- sqrt(1 / inv_dispersion2)
}"

Let's run it and see how well it does:

model <- jags.model(textConnection(model_string), list(L_A = L_A, L_B = L_B), n.chains=3)
update(model, 3000)
mcmc_samples <- coda.samples(model, c("mu", "sigma", "gain", "offset", "dispersion"), 200000, thin=100)
apply(as.matrix(mcmc_samples), 2, quantile, c(0.025, 0.5, 0.975))
##       dispersion   gain      mu   offset  sigma
## 2.5%     0.01057 0.1366 -0.3116 -0.51836 0.9365
## 50%      0.18657 1.0745 -0.1099 -0.26950 1.0675
## 97.5%    1.20153 1.2846  0.1051 -0.04409 1.2433

The resulting estimates are reasonably close to the values you used when you generated the data:

c(gain_A, offset_A, dispersion_A)
## [1]  1.1 -0.2  0.5

...except, perhaps, for dispersion. But with more data, more informative priors, and a longer MCMC run, this estimate should get better.
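Not part of the original answer, but a quick sanity check of the sampler is straightforward with the coda functions that rjags already loads (a minimal sketch):

# Gelman-Rubin diagnostic across the three chains; point estimates
# near 1 suggest the chains have mixed
gelman.diag(mcmc_samples)

# Trace and posterior density plots for a visual check
plot(mcmc_samples)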
