Specifically, I want to know if there is a difference between `lm(y ~ x1 + x2)` and `glm(y ~ x1 + x2, family=gaussian)`. I think that this particular case of glm is equal to lm. Am I wrong?

# Generalized Linear Model – Difference Between LM and GLM for Gaussian Family


## Best Answer

While for the specific form of model mentioned in the body of the question (i.e. `lm(y ~ x1 + x2)` vs `glm(y ~ x1 + x2, family=gaussian)`), regression and the GLM are exactly the same model, the title question asks something slightly more general: is there any difference between `lm` and a GLM with the Gaussian family? To which the answer is "Yes!".

The reason that they can be different is that you can also specify a link function in the GLM. This allows you to fit particular forms of nonlinear relationship between $y$ (or rather its conditional mean) and the $x$-variables; while you can do this in `nls` as well, there's no need for starting values, the convergence is sometimes better, and the syntax is a bit easier.

Compare, for example, these models (you have R, so I assume you can run these yourself):
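The code being compared appears to have been lost from the page; the following is a minimal sketch of what the two pairs could look like, matching the models described below. The simulated data, variable names, and `nls` starting values are my assumptions, not from the original post:

```r
# Simulated data (an assumption -- the original data is not shown)
set.seed(1)
x1 <- runif(100)
x2 <- runif(100)
y  <- exp(1 + 2 * x1 - x2) + rnorm(100, sd = 0.5)

# First pair: the same linear model,
# y_i ~ N(b0 + b1*x1 + b2*x2, sigma^2)
lm(y ~ x1 + x2)
glm(y ~ x1 + x2, family = gaussian)

# Second pair: the same nonlinear model,
# y_i ~ N(exp(b0 + b1*x1 + b2*x2), sigma^2);
# nls needs starting values, the GLM with a log link does not
nls(y ~ exp(b0 + b1 * x1 + b2 * x2),
    start = list(b0 = 0, b1 = 1, b2 = 0))
glm(y ~ x1 + x2, family = gaussian(link = "log"))
```

Within each pair, the fitted coefficients should agree to within numerical tolerance.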

Note that the first pair are the same model ($y_i \sim N(\beta_0+\beta_1 x_{1i}+\beta_2 x_{2i},\sigma^2)\,$), and the second pair are the same model ($y_i \sim N(\exp(\beta_0+\beta_1 x_{1i}+\beta_2 x_{2i}),\sigma^2)\,$); the fits are essentially the same within each pair.

So, in relation to the title question, you can fit a substantially wider variety of Gaussian models with a GLM than with ordinary regression.