Solved – Bayesian priors in ridge regression with scikit-learn's linear model

python · regression · ridge-regression · scikit-learn

I'm using scikit-learn's linear model to do ridge regression.
Ridge regression penalizes parameters for moving away from zero. I want to penalize them for moving away from a certain prior instead, with each parameter having a different prior.

Is this possible with scikit-learn's linear model? I know there's a BayesianRidge estimator there, but I'm not sure what it does.

Best Answer

Ridge regression looks like:

$$ \min_{\beta}||Y-X\beta||^2 + \lambda ||\beta||^2 $$

If you want to instead compute

$$ \beta^* = \arg\min_{\beta}||Y-X\beta||^2 + \lambda ||\beta - \beta_0||^2 $$

I guess you could just turn this into shrinking towards zero using the new variable

$$\theta = \beta - \beta_0.$$

So you'd solve:

$$ \theta^* := \arg\min_{\theta}||Y-X\beta_0-X\theta||^2 + \lambda ||\theta||^2 $$

Then apply the change of variables again (i.e., $\beta^* := \theta^* + \beta_0$).

So to recap: if I have some black box function $\text{RidgeRegression}(Y, X, \lambda)$, I can use it to solve for an arbitrary prior $\beta_0$ simply by calling $\text{RidgeRegression}(Y-X\beta_0, X, \lambda)$ and then adding $\beta_0$ back to the resulting coefficients.
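
For concreteness, here's a minimal sketch of this trick using scikit-learn's `Ridge`; the data and the prior vector `beta0` below are made up purely for illustration:

```python
import numpy as np
from sklearn.linear_model import Ridge

# Synthetic data, purely for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
beta_true = np.array([2.0, -1.0, 0.5])
y = X @ beta_true + rng.normal(scale=0.1, size=100)

# Prior means, one per coefficient (the "beta_0" in the formulas above).
beta0 = np.array([1.5, -0.5, 0.0])

# Fit ridge on the shifted target Y - X @ beta0, which shrinks
# theta = beta - beta0 toward zero. fit_intercept=False keeps the
# problem exactly as written in the formulas.
ridge = Ridge(alpha=1.0, fit_intercept=False)
ridge.fit(X, y - X @ beta0)

# Undo the change of variables: beta* = theta* + beta0.
beta_star = ridge.coef_ + beta0
print(beta_star)
```

As a quick sanity check: setting `beta0` to all zeros recovers plain ridge regression, and as `alpha` grows, `beta_star` shrinks toward `beta0` instead of toward zero.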
