The name of this regression model

errors-in-variables, least-squares, regression, total-least-squares, uncertainty

I am wondering how I can map this problem to something known.
Let us start with a standard linear regression framework, and suppose we want to reconstruct an observed signal $y$ from individual known components (the columns of $\mathbf{A}$) that are mixed according to a single unknown vector of weights $\mathbf{x}$. We know $\mathbf{A}$ from reliable data and use it as a reference.
Assuming those weights are non-negative, this is the non-negative least squares (NNLS) problem:

$$
\operatorname{arg\,min}\limits_{\mathbf{x}} \|\mathbf{A}\mathbf{x}-\mathbf{y}\|_{2}^{2} \quad \text{subject to } \mathbf{x} \geq 0.
$$
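For concreteness, here is a minimal sketch of solving this NNLS problem with `scipy.optimize.nnls`; the components and signal below are synthetic placeholders, not data from the question.

```python
import numpy as np
from scipy.optimize import nnls

# Synthetic example: mix three known components with non-negative weights.
rng = np.random.default_rng(0)
A = rng.random((100, 3))                           # known reference components (columns)
x_true = np.array([0.5, 0.0, 2.0])                 # true non-negative weights
y = A @ x_true + 0.01 * rng.standard_normal(100)   # observed (noisy) signal

# Solve arg min_x ||Ax - y||_2^2 subject to x >= 0.
x_hat, residual_norm = nnls(A, y)
print(x_hat)  # should be close to x_true
```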

Now I am wondering: what if, instead of the matrix $\mathbf{A}$ itself, we know a parametric description of the individual components, say, distribution parameters for each entry? My problem is that $\mathbf{A}$ then carries only "pointwise" information about the reference values, when in reality one might be more confident in some entries than in others (the reference entries have different variances).
It reminds me of some Bayesian models, but I wonder whether there is a simpler route.

Best Answer

One model of the type you describe is Total Least Squares (TLS, https://en.wikipedia.org/wiki/Total_least_squares), although it does not have the non-negativity constraint you place on the elements of the coefficient vector $\bf{x}$. In typical usages of this nomenclature, the errors in the elements of the matrix $\bf{A}$ are independent and identically distributed (IID), so every entry has the same error standard deviation, but in general they need not be IID. See that page for more details (note that it uses $\bf{X}$ instead of $\bf{A}$).
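For reference, here is a minimal sketch of the classical SVD-based TLS solution (without the non-negativity constraint, and assuming IID errors in both $\bf{A}$ and $\bf{y}$):

```python
import numpy as np

def tls(A, y):
    """Classical total least squares via the SVD of the augmented matrix [A | y].

    Assumes IID errors in both A and y; no non-negativity constraint on x.
    """
    n = A.shape[1]
    Z = np.column_stack([A, y])   # augmented matrix [A | y]
    _, _, Vt = np.linalg.svd(Z)   # rows of Vt are right singular vectors
    v = Vt[-1]                    # direction of the smallest singular value
    return -v[:n] / v[n]          # from [A | y] @ [x; -1] ≈ 0
```

The last right singular vector gives the direction in which the augmented matrix is closest to being rank-deficient; rescaling it so its last entry is $-1$ recovers the coefficient vector $\bf{x}$.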

A slightly more general class of models is called Errors-in-Variables (EIV, https://en.wikipedia.org/wiki/Errors-in-variables_models).
