Solved – Estimate error of predicted value obtained by linear regression model

error-propagation, linear-regression

I have measured two parameters, r and p. Each parameter was measured in three technical replicates (n=3) per sample. r is measured directly. p is measured indirectly: the data obtained are the output voltages of a measurement instrument. To correlate these with p, I produced a calibration curve by measuring samples of known p. I then calculated a linear regression and obtained the formula
$p(V) = mV + b$, in which V is the measured voltage. However, this regression was calculated using the mean of the three measurements.
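
For concreteness, here is a minimal sketch of the fit described above, using scipy (the numbers are placeholders, not the actual calibration data):

```python
import numpy as np
from scipy import stats

# Calibration standards: known p values and the mean voltage of the
# three technical replicates for each standard (placeholder numbers).
p_known = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
V_mean  = np.array([0.10, 0.52, 0.95, 1.41, 1.83])

# Ordinary least squares on the replicate means, regressing p on V so the
# fitted line can be used directly as p(V) = m*V + b.
fit = stats.linregress(V_mean, p_known)
m, b = fit.slope, fit.intercept

def p_of_V(V):
    """Convert a measured voltage into p via the calibration line."""
    return m * V + b
```

Regarding question 1: if each calibration point has a known uncertainty, a weighted fit can take it into account. For example, `np.polyfit` accepts a `w` argument (per-point weights, conventionally 1/sigma) and `scipy.optimize.curve_fit` accepts a `sigma` argument.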

1) How do I calculate a linear regression that takes into account the error of each measurement?

To obtain values of p for my samples, I entered the measured voltages into the formula.

2) How do I calculate the error of a value predicted by linear regression?

Best Answer

I'm not sure about your first question, but the second one has many answers. I'll describe the main statistical techniques for quantifying a model's error.

  • Residuals. The simplest approach is to take the difference between the predicted value and the observed value.
  • Variance of the errors. The larger the variance, the worse the model performs.
  • Standard errors of the intercept and the beta (slope) values. I think no explanation is needed.
  • Residual standard error. A better way to summarize the residuals.
  • t-statistics of the beta values and the resulting probability that a beta value is 0. The larger that probability, the more likely it is that there is no relationship between the variables.
  • The amount of variance explained by the model ($R^2$).
  • The correlation between the model's predictions and the observed output.

Not all of the quantities described here are error rates, but they help evaluate how well the model works and may be useful to you; the sketch below shows how most of them can be obtained from a standard regression fit.
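
A minimal sketch of reading these quantities off a fitted model with statsmodels (the data arrays are placeholders; the variable names follow the calibration example in the question):

```python
import numpy as np
import statsmodels.api as sm

# Placeholder calibration data: mean voltages and known p values
V = np.array([0.10, 0.52, 0.95, 1.41, 1.83])
p = np.array([0.0, 1.0, 2.0, 3.0, 4.0])

X = sm.add_constant(V)                 # design matrix: intercept + slope
model = sm.OLS(p, X).fit()

print(model.resid)                     # residuals (observed - fitted)
print(model.bse)                       # standard errors of intercept and slope
print(model.tvalues, model.pvalues)    # t-statistics and p-values of the coefficients
print(np.sqrt(model.mse_resid))        # residual standard error
print(model.rsquared)                  # fraction of variance explained (R^2)

# For question 2: standard error of a value predicted at a new voltage.
X_new = sm.add_constant(np.array([1.20]), has_constant='add')
pred = model.get_prediction(X_new)
print(pred.predicted_mean, pred.se_mean)   # prediction and its standard error
print(pred.conf_int())                     # confidence interval for the prediction
```

Here `se_mean` is the standard error of the fitted line at the new point; `pred.conf_int(obs=True)` widens this to a prediction interval that also includes the residual scatter, which is usually what "error of a predicted value" means.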
