Solved – What does “fiducial” mean (in the context of statistics)

bayesian · fiducial · inference · ronald-fisher · terminology

When I Google for

"fisher" "fiducial"

…I sure get a lot of hits, but all the ones I've followed are utterly beyond my comprehension.

All these hits do seem to have one thing in common: they are all written for dyed-in-the-wool statisticians, people thoroughly steeped in the theory, practice, history, and lore of statistics. (Hence, none of these accounts bothers to explain or even illustrate what Fisher meant by "fiducial" without resorting to oceans of jargon and/or passing the buck to some classic or other of the mathematical statistics literature.)

Well, I don't belong to the select intended audience that could benefit from what I've found on the subject, and maybe this explains why every one of my attempts to understand what Fisher meant by "fiducial" has crashed against a wall of incomprehensible gibberish.

Does anyone know of an attempt to explain to someone who is not a professional statistician what Fisher meant by "fiducial"?

P.S. I realize that Fisher was a bit of a moving target when it came to pinning down what he meant by "fiducial", but I figure the term must have some "constant core" of meaning, otherwise it could not function (as it clearly does) as terminology that is generally understood within the field.

Best Answer

The fiducial argument is to interpret the likelihood as a probability distribution over the parameter. Even though the likelihood measures the plausibility of parameter values, it does not satisfy the axioms of probability measures (in particular, there is no guarantee that it integrates to 1), which is one of the reasons this concept was never very successful.

Let's give an example. Imagine that you want to estimate a parameter, say the decay rate $\lambda$ of a radioactive element (the half-life is $\ln 2 / \lambda$). You take a few measurements, say $(x_1, \ldots, x_n)$, from which you try to infer the value of $\lambda$. In the traditional or frequentist view, $\lambda$ is not a random quantity. It is an unknown constant with likelihood function $\prod_{i=1}^n \lambda e^{-\lambda x_i} = \lambda^n e^{-\lambda(x_1+\ldots+x_n)}$.
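As a minimal sketch of the frequentist view, the code below evaluates this likelihood on a small set of made-up measurements (the values are my own illustration, not from the post) and computes the point estimate that maximizes it, the MLE $n / \sum x_i$:

```python
import numpy as np

# Hypothetical measurements (observed decay times); illustrative only.
x = np.array([0.8, 1.5, 0.3, 2.1, 0.9])
n = len(x)

def likelihood(lam):
    """Exponential likelihood lam^n * exp(-lam * sum(x)) at rate lam."""
    return lam ** n * np.exp(-lam * x.sum())

# The frequentist point estimate is the maximizer of the likelihood:
# the maximum likelihood estimate n / sum(x).
lam_mle = n / x.sum()
```

Note that `likelihood` is treated here as an ordinary function of $\lambda$ to be maximized, not as a distribution; that distinction is exactly what the fiducial argument blurs.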

In the Bayesian view, $\lambda$ is a random variable with a prior distribution; the measurements $(x_1, \ldots, x_n)$ are used to deduce the posterior distribution. For instance, if my prior belief about the value of $\lambda$ is well represented by the density $2.3 \cdot e^{-2.3\lambda}$, the joint density is the product of the prior and the likelihood, i.e. $2.3 \cdot \lambda^n e^{-\lambda(2.3+x_1+\ldots+x_n)}$. The posterior is the distribution of $\lambda$ given the measurements, computed with Bayes' formula. In this case, $\lambda$ has a Gamma distribution with shape $n+1$ and rate $2.3+x_1+\ldots+x_n$.

In the view of fiducial inference, $\lambda$ is also a random variable, but it has no prior distribution, just a fiducial distribution that depends only on $(x_1, \ldots, x_n)$. To follow up on the example above, the fiducial density is $\lambda^n e^{-\lambda(x_1+\ldots+x_n)}$. This is the same as the likelihood, except that it is now interpreted as a probability density. With proper scaling, it is a Gamma distribution with shape $n+1$ and rate $x_1+\ldots+x_n$.
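A sketch of the fiducial distribution for the same hypothetical data, again using SciPy's shape/scale parameterization:

```python
import numpy as np
from scipy import stats

x = np.array([0.8, 1.5, 0.3, 2.1, 0.9])  # hypothetical measurements
n, s = len(x), x.sum()

# Normalizing the likelihood lam^n * exp(-lam * s) over lam gives
# Gamma(shape n + 1, rate s): a distribution for lambda built from the
# data alone, with no prior involved.
fiducial = stats.gamma(a=n + 1, scale=1.0 / s)

fid_mean = fiducial.mean()              # (n + 1) / sum(x)
lo, hi = fiducial.ppf([0.025, 0.975])   # 95% fiducial interval
```

Numerically this coincides with the Bayesian posterior one would get from a flat (improper) prior on $\lambda$, which is one common way of relating the two approaches; the fiducial argument simply arrives at it without positing any prior.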

These differences show up most noticeably in confidence interval estimation. A 95% confidence interval in the classical sense is a procedure that, before any data is collected, has a 95% chance of producing an interval containing the target value. For a fiducial statistician, however, the computed 95% interval is itself a set that has a 95% chance of containing the target value (which is precisely the misinterpretation that students of the frequentist approach are routinely warned against).