Solved – Handling outliers in Bayesian linear regression

bayesian, linear model, outliers, regression, robust

I am reading a post about robust linear regression in a Bayesian setting. The particular blog post can be found here:

http://twiecki.github.io/blog/2013/08/27/bayesian-glms-2/

There is a particular bit I could not follow. The author talks about robust regression and says that, in the presence of outliers, a frequentist would use a non-quadratic distance measure to evaluate the fit. He then talks about instead assuming the data are distributed according to the Student t-distribution, which has heavier tails.

The bit that has me confused is this (speaking about using a normal likelihood):

As you can see, the fit is quite skewed and we have a fair amount of uncertainty in our estimate as indicated by the wide range of different posterior predictive regression lines. Why is this? The reason is that the normal distribution does not have a lot of mass in the tails and consequently, an outlier will affect the fit strongly.

I do not follow why putting lighter weight on values far out makes the fit worse. Any hints as to why that is would be greatly appreciated.
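For concreteness, the difference in tail mass is easy to see numerically (my own quick check with scipy, not from the post; df=3 is an arbitrary choice for illustration):

```python
from scipy.stats import norm, t

# Log-density of a point 4 standard units out under each distribution.
logp_norm = norm.logpdf(4.0)      # standard normal: very little tail mass
logp_t = t.logpdf(4.0, df=3)      # Student-t, df=3: much heavier tail

print(logp_norm, logp_t)  # the normal log-density is far lower
```

So the same far-out observation is orders of magnitude less probable under the normal than under the t.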

Best Answer

Consider a set of data with no outlying observations, at a suitable value of the parameters. Now consider moving one observation far into the tail (keeping the parameter values and the remaining data constant).

If the density has thin tails, an observation far away is very unlikely (it has low relative probability given the parameters), so the likelihood would be higher if the parameters were moved substantially to accommodate it. (There's a limit to how far, of course: the more you move the parameters, the less probable the remaining data become.)

By contrast, a distribution with fat tails doesn't see that observation as unusual at all, and the fit may hardly need to change in response to it.
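A quick numerical sketch of this (my own illustration with scipy; the toy data, unit scale, and df=3 are arbitrary choices, not from the answer): fit the location parameter by maximum likelihood under a normal and under a Student-t, with a single outlier in the data.

```python
import numpy as np
from scipy.stats import t
from scipy.optimize import minimize_scalar

# Five points near zero plus one outlier at 8.
data = np.array([-0.3, -0.1, 0.0, 0.2, 0.4, 8.0])

# Under a normal likelihood (unit scale), the MLE of the location is the
# sample mean -- the outlier drags it far from the bulk of the data.
loc_norm = data.mean()

# Under a Student-t likelihood (df=3, unit scale), minimise the negative
# log-likelihood numerically; the heavy tails let the outlier sit far out
# without pulling the location much.
neg_loglik = lambda m: -t.logpdf(data - m, df=3).sum()
loc_t = minimize_scalar(neg_loglik, bounds=(-5, 5), method="bounded").x

print(loc_norm, loc_t)  # the t location stays near the bulk; the normal's does not
```

The normal location lands well above 1, while the t location stays close to 0: exactly the "hardly needs to change" behaviour described above.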