Variance-stabilizing transformation on a simple linear regression

regression analysis · statistics · transformation · variance

I am currently working with the variance-stabilizing method and have read about it in my textbook. To understand it better, I would like to consider a case where I have, for instance, a simple linear regression: say I have a model with response $y$, an intercept, and an explanatory variable $x$, that is,
$$y_k=\beta_0+\beta_1x_k+\epsilon_k, \;\;\;\;\;\;\;\; k=1,…,n.$$
with the residual variance proportional to the square of $x_k$: $V(\epsilon_k)=\sigma^2x^2_k$. Then I would take the following as data transformation:
$$\tilde{y}_k:=\frac{y_k}{x_k} \text{ and } \tilde{x_k}:=\frac{1}{x_k}, \;\;\;\;\;\;\;\; k=1,…,n.$$
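To make the setup concrete, here is a small simulation sketch (with made-up values for $\sigma$ and the range of $x$) showing that errors with $V(\epsilon_k)=\sigma^2 x_k^2$ have a variance that grows with $x_k$, while the scaled errors $\epsilon_k/x_k$ have constant variance $\sigma^2$:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.5                                 # hypothetical noise scale
x = rng.uniform(1.0, 5.0, size=200_000)     # x bounded away from 0 so division is safe
eps = rng.normal(0.0, sigma * x)            # Var(eps_k) = sigma^2 * x_k^2

# Compare error variance in the low-x and high-x regions
lo, hi = x < 2.0, x > 4.0
print(np.var(eps[lo]), np.var(eps[hi]))     # noticeably different: heteroscedastic

# After scaling by x, both regions should have variance near sigma^2 = 0.25
print(np.var(eps[lo] / x[lo]), np.var(eps[hi] / x[hi]))
```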
Here is what I don't understand. First: can I show that the above is indeed a variance-stabilizing transformation? Second: can I say anything about how the parameters of the models with the original and the transformed data are related to each other?

I've found examples of variance-stabilizing transformations on Wikipedia where they use integrals and so on, but I cannot see how that approach applies to my example. I hope someone can help me better understand the variance-stabilizing transformation.

Best Answer

After the division by $x_k$, the model becomes $$ \tilde{y}_k=\beta_0\tilde{x}_k+\beta_1+\tilde{\epsilon}_k, $$ where $\tilde{\epsilon}_k:=\epsilon_k/x_k$. Assuming that the $x_k$ are non-stochastic, $$ \operatorname{Var}(\tilde{\epsilon}_k)=\operatorname{Var}(\epsilon_k/x_k)=\frac{\sigma^2x_k^2}{x_k^2}=\sigma^2, $$ so the transformed errors are homoscedastic, which answers your first question. As for the second: the parameters are the same, but their roles are swapped in the transformed model. The slope on $\tilde{x}_k$ is the original intercept $\beta_0$, and the intercept of the transformed model is the original slope $\beta_1$. This is a particular case of generalized least squares.
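A quick numerical check (a sketch with made-up parameter values, not part of the original answer): simulate the heteroscedastic model, fit ordinary least squares to the transformed data, and verify that the slope recovers $\beta_0$ and the intercept recovers $\beta_1$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical true parameters, chosen for illustration
beta0, beta1, sigma = 2.0, 3.0, 0.5
n = 10_000

x = rng.uniform(1.0, 5.0, size=n)           # design values bounded away from 0
eps = rng.normal(0.0, sigma * x)            # Var(eps_k) = sigma^2 * x_k^2
y = beta0 + beta1 * x + eps

# Transformed data: y~ = y/x, x~ = 1/x
y_t = y / x
x_t = 1.0 / x

# OLS on the transformed model y~ = beta0 * x~ + beta1 + eps~
# Design matrix columns [x~, 1]: the slope estimates beta0, the intercept estimates beta1
A = np.column_stack([x_t, np.ones(n)])
(b0_hat, b1_hat), *_ = np.linalg.lstsq(A, y_t, rcond=None)

print(b0_hat, b1_hat)   # should be close to the true (beta0, beta1)
```

Note the role reversal: after transforming, the coefficient on $\tilde{x}_k$ is an estimate of $\beta_0$, while the fitted intercept estimates $\beta_1$.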
