Solved – Question on how to normalize regression coefficient

least squares, regression, regression coefficients, self-study

I am not sure whether "normalize" is the correct word to use here, but I will do my best to illustrate what I am asking. The estimator used here is least squares.

Suppose you have $y=\beta_0+\beta_1x_1$. You can center it around the mean as $y=\beta_0'+\beta_1x_1'$, where $\beta_0'=\beta_0+\beta_1\bar x_1$ and $x_1'=x_1-\bar x_1$, so that $\beta_0'$ no longer has any influence on estimating $\beta_1$.

By this I mean that $\hat\beta_1$ in $y=\beta_1x_1'$ equals $\hat\beta_1$ in $y=\beta_0+\beta_1x_1$: we obtain a reduced equation that makes the least-squares calculation easier.
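For instance, here is a quick numerical check in R (a minimal sketch with simulated data and arbitrary coefficient values) that the two slope estimates agree:

# Check that centering x1 (and dropping the intercept) leaves the slope estimate unchanged.
set.seed(1)
x1 <- runif(20)
y <- 2 + 3 * x1 + rnorm(20, sd=0.1)
b.full <- coef(lm(y ~ x1))["x1"]                 # Slope when an intercept is included
b.centered <- coef(lm(y ~ I(x1 - mean(x1)) - 1)) # Slope on the centered x1, no intercept
c(b.full, b.centered)                            # These agree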

How do you apply this method in general? I now have the model $y=\beta_1e^{x_1t}+\beta_2e^{x_2t}$, and I am trying to reduce it to something of the form $y=\beta_1x'$.

Best Answer

Although I cannot do justice to the question here--that would require a small monograph--it may be helpful to recapitulate some key ideas.

The question

Let's begin by restating the question and using unambiguous terminology. The data consist of a list of ordered pairs $(t_i, y_i)$. Known constants $\alpha_1$ and $\alpha_2$ determine values $x_{1,i} = \exp(\alpha_1 t_i)$ and $x_{2,i} = \exp(\alpha_2 t_i)$. We posit a model in which

$$y_i = \beta_1 x_{1,i} + \beta_2 x_{2,i} + \varepsilon_i$$

for constants $\beta_1$ and $\beta_2$ to be estimated, where the $\varepsilon_i$ are random and--to a good approximation anyway--independent, with a common variance (whose estimation is also of interest).
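In R, fitting such a model is just an ordinary least-squares regression with no intercept. Here is a minimal sketch, where alpha1, alpha2, t.var, and y are placeholders for the known constants and the observed data:

# Sketch: fit y = beta1*exp(alpha1*t) + beta2*exp(alpha2*t) + error by OLS (no intercept).
# 'alpha1', 'alpha2', 't.var', and 'y' are placeholders for the known constants and data.
x1 <- exp(alpha1 * t.var)            # First matcher
x2 <- exp(alpha2 * t.var)            # Second matcher
fit <- lm(y ~ x1 + x2 - 1)           # '- 1' omits the intercept, as in the model
coef(fit)                            # Estimates of beta1 and beta2
summary(fit)$sigma                   # Estimate of the common error standard deviation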

Background: linear "matching"

Mosteller and Tukey refer to the variables $x_1$ = $(x_{1,1}, x_{1,2}, \ldots)$ and $x_2$ as "matchers." They will be used to "match" the values of $y = (y_1, y_2, \ldots)$ in a specific way, which I will illustrate. More generally, let $y$ and $x$ be any two vectors in the same Euclidean vector space, with $y$ playing the role of "target" and $x$ that of "matcher". We contemplate systematically varying a coefficient $\lambda$ in order to approximate $y$ by the multiple $\lambda x$. The best approximation is obtained when $\lambda x$ is as close to $y$ as possible. Equivalently, the squared length of $y - \lambda x$ is minimized.
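In coordinates, minimizing the squared length of $y - \lambda x$ gives the explicit solution $\lambda = \langle x, y\rangle / \langle x, x\rangle$, which is exactly the slope computed by a no-intercept regression of $y$ on $x$. A quick check in R (arbitrary simulated vectors):

# The matching coefficient is <x,y>/<x,x>: the slope of a regression through the origin.
set.seed(2)
x <- rnorm(30)
y <- 2 * x + rnorm(30)
lambda <- sum(x * y) / sum(x * x)
c(lambda, coef(lm(y ~ x - 1)))       # These agree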

One way to visualize this matching process is to make a scatterplot of $x$ and $y$ on which is drawn the graph of $x \to \lambda x$. The vertical distances between the scatterplot points and this graph are the components of the residual vector $y - \lambda x$; the sum of their squares is to be made as small as possible. Up to a constant of proportionality, these squares are the areas of circles centered at the points $(x_i, y_i)$ with radii equal to the residuals: we wish to minimize the sum of areas of all these circles.

Here is an example showing the optimal value of $\lambda$ in the middle panel:

[Figure: panels of the scatterplot of $y$ against $x$, each with the line $x \to \lambda x$ drawn for a different value of $\lambda$; the middle panel shows the optimum.]

The points in the scatterplot are blue; the graph of $x \to \lambda x$ is a red line. This illustration emphasizes that the red line is constrained to pass through the origin $(0,0)$: it is a very special case of line fitting.
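A plot like this is easy to produce; here is a short sketch (same toy data as the snippet above) showing the blue points, the red line constrained through the origin, and the residuals as vertical segments:

# Reproduce the figure: blue points, red matching line through the origin, gray residuals.
set.seed(2)
x <- rnorm(30)
y <- 2 * x + rnorm(30)
lambda <- sum(x * y) / sum(x * x)            # Optimal matching coefficient
plot(x, y, col="Blue", pch=19)
abline(c(0, lambda), col="Red", lwd=2)       # The graph of x -> lambda*x
segments(x, lambda * x, x, y, col="Gray")    # Residuals: vertical distances to the line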

Multiple regression can be obtained by sequential matching

Returning to the setting of the question, we have one target $y$ and two matchers $x_1$ and $x_2$. We seek numbers $b_1$ and $b_2$ for which $y$ is approximated as closely as possible by $b_1 x_1 + b_2 x_2$, again in the least-distance sense. Arbitrarily beginning with $x_1$, Mosteller & Tukey match the remaining variables $x_2$ and $y$ to $x_1$. Write the residuals for these matches as $x_{2\cdot 1}$ and $y_{\cdot 1}$, respectively: the $_{\cdot 1}$ indicates that $x_1$ has been "taken out of" the variable.

We can write

$$y = \lambda_1 x_1 + y_{\cdot 1}\text{ and }x_2 = \lambda_2 x_1 + x_{2\cdot 1}.$$

Having taken $x_1$ out of $x_2$ and $y$, we proceed to match the target residuals $y_{\cdot 1}$ to the matcher residuals $x_{2\cdot 1}$. The final residuals are $y_{\cdot 12}$. Algebraically, we have written

$$\begin{aligned} y_{\cdot 1} &= \lambda_3 x_{2\cdot 1} + y_{\cdot 12}; \text{ whence} \\ y &= \lambda_1 x_1 + y_{\cdot 1} = \lambda_1 x_1 + \lambda_3 x_{2\cdot 1} + y_{\cdot 12} = \lambda_1 x_1 + \lambda_3 \left(x_2 - \lambda_2 x_1\right) + y_{\cdot 12} \\ &= \left(\lambda_1 - \lambda_3 \lambda_2\right)x_1 + \lambda_3 x_2 + y_{\cdot 12}. \end{aligned}$$

This shows that the $\lambda_3$ in the last step is the coefficient of $x_2$ in a matching of $x_1$ and $x_2$ to $y$.

We could just as well have proceeded by first taking $x_2$ out of $x_1$ and $y$, producing $x_{1\cdot 2}$ and $y_{\cdot 2}$, and then taking $x_{1\cdot 2}$ out of $y_{\cdot 2}$, yielding a different set of residuals $y_{\cdot 21}$. This time, the coefficient of $x_1$ found in the last step--let's call it $\mu_3$--is the coefficient of $x_1$ in a matching of $x_1$ and $x_2$ to $y$.

Finally, for comparison, we might run a multiple regression (ordinary least squares) of $y$ against $x_1$ and $x_2$. Let those residuals be $y_{\cdot lm}$. It turns out that the coefficients in this multiple regression are precisely the coefficients $\mu_3$ and $\lambda_3$ found previously and that all three sets of residuals, $y_{\cdot 12}$, $y_{\cdot 21}$, and $y_{\cdot lm}$, are identical.
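Here is a compact numerical check of these claims (a sketch with simulated data; the full worked example appears in the Code section below):

# Check: sequential matching reproduces the multiple-regression coefficients and residuals.
set.seed(3)
x1 <- rnorm(40); x2 <- rnorm(40)
y <- 1 * x1 - 2 * x2 + rnorm(40)
res <- function(y, x) resid(lm(y ~ x - 1))          # "Take x out of y"
lambda3 <- coef(lm(res(y, x1) ~ res(x2, x1) - 1))   # Last step, starting with x1
mu3     <- coef(lm(res(y, x2) ~ res(x1, x2) - 1))   # Last step, starting with x2
rbind(sequential=c(mu3, lambda3),
      multiple=coef(lm(y ~ x1 + x2 - 1)))           # Identical rows
all.equal(as.vector(res(res(y, x1), res(x2, x1))),  # y.12
          as.vector(resid(lm(y ~ x1 + x2 - 1))))    # y.lm -- TRUE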

Depicting the process

None of this is new: it is all in Mosteller & Tukey's text. I would like to offer a pictorial analysis, using a scatterplot matrix of everything we have obtained so far.

[Figure: scatterplot matrix of $y$, the matchers $x_1$ and $x_2$, all the residuals obtained so far, and the true values.]

Because these data are simulated, we have the luxury of showing the underlying "true" values of $y$ on the last row and column: these are the values $\beta_1 x_1 + \beta_2 x_2$ without the error added in.

The scatterplots below the diagonal have been decorated with the graphs of the matchers, exactly as in the first figure. Graphs with zero slopes are drawn in red: these indicate situations where the matcher gives us nothing new; the residuals are the same as the target. Also, for reference, the origin (wherever it appears within a plot) is shown as an open red circle: recall that all possible matching lines have to pass through this point.

Much can be learned about regression through studying this plot. Some of the highlights are:

  • The matching of $x_2$ to $x_1$ (row 2, column 1) is poor. This is a good thing: it indicates that $x_1$ and $x_2$ are providing very different information; using both together will likely be a much better fit to $y$ than using either one alone.

  • Once a variable has been taken out of a target, it does no good to try to take that variable out again: the best matching line will be zero; a short numerical check appears after this list. See the scatterplots for $x_{2\cdot 1}$ versus $x_1$ or $y_{\cdot 1}$ versus $x_1$, for instance.

  • The values $x_1$, $x_2$, $x_{1\cdot 2}$, and $x_{2\cdot 1}$ have all been taken out of $y_{\cdot lm}$.

  • Multiple regression of $y$ against $x_1$ and $x_2$ can be achieved by first computing $y_{\cdot 1}$ and $x_{2\cdot 1}$. These scatterplots appear at (row, column) = $(8,1)$ and $(2,1)$, respectively. With these residuals in hand, we look at their scatterplot at $(4,3)$. These three one-variable regressions do the trick. As Mosteller & Tukey explain, the standard errors of the coefficients can be obtained almost as easily from these regressions, too--but that's not the topic of this question, so I will stop here.
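The second point above is easy to check numerically; here is a self-contained sketch with arbitrary vectors:

# Once x has been taken out, matching the residual against x again gives a zero coefficient:
# the residual of a no-intercept regression is orthogonal to its matcher.
set.seed(4)
x <- rnorm(25); y <- rnorm(25)
r <- resid(lm(y ~ x - 1))            # Take x out of y
coef(lm(r ~ x - 1))                  # Essentially zero (rounding error only)
sum(r * x)                           # Orthogonality: also essentially zero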

Code

These data were (reproducibly) created in R with a simulation. The analyses, checks, and plots were also produced with R. This is the code.

#
# Simulate the data.
#
set.seed(17)
t.var <- 1:50                                    # The "times" t[i]
x <- exp(t.var %o% c(x1=-0.1, x2=0.025) )        # The two "matchers": columns x1 and x2
beta <- c(5, -1)                                 # The (unknown) coefficients
sigma <- 1/2                                     # Standard deviation of the errors
error <- sigma * rnorm(length(t.var))            # Simulated errors
y <- (y.true <- as.vector(x %*% beta)) + error   # True and simulated y values
data <- data.frame(t.var, x, y, y.true)

par(col="Black", bty="o", lty=0, pch=1)
pairs(data)                                      # Get a close look at the data
#
# Take out the various matchers.
#
take.out <- function(y, x) {fit <- lm(y ~ x - 1); resid(fit)}  # Residuals of matching y by x (no intercept)
data <- transform(transform(data, 
  x2.1 = take.out(x2, x1),
  y.1 = take.out(y, x1),
  x1.2 = take.out(x1, x2),
  y.2 = take.out(y, x2)
), 
  y.21 = take.out(y.2, x1.2),
  y.12 = take.out(y.1, x2.1)
)
data$y.lm <- resid(lm(y ~ x - 1))               # Multiple regression for comparison
#
# Analysis.
#
# Reorder the dataframe (for presentation):
data <- data[c(1:3, 5:12, 4)]

# Confirm that the three ways to obtain the fit are the same:
pairs(subset(data, select=c(y.12, y.21, y.lm)))

# Explore what happened:
# Lower-panel function: show each no-intercept match, flagging zero slopes in red.
panel.lm <- function (x, y, col=par("col"), bg=NA, pch=par("pch"),
   cex=1, col.smooth="red",  ...) {
  box(col="Gray", bty="o")
  ok <- is.finite(x) & is.finite(y)
  if (any(ok))  {
    b <- coef(lm(y[ok] ~ x[ok] - 1))               # Matching coefficient
    col0 <- ifelse(abs(b) < 10^-8, "Red", "Blue")  # Red when the slope is (essentially) zero
    lwd0 <- ifelse(abs(b) < 10^-8, 3, 2)
    abline(c(0, b), col=col0, lwd=lwd0)            # Matching line through the origin
  }
  points(x, y, pch = pch, col="Black", bg = bg, cex = cex)
  points(matrix(c(0,0), nrow=1), col="Red", pch=1) # Mark the origin with an open red circle
}
# Diagonal-panel function: draw a histogram of each variable.
panel.hist <- function(x, ...) {
  usr <- par("usr"); on.exit(par(usr))
  par(usr = c(usr[1:2], 0, 1.5) )
  h <- hist(x, plot = FALSE)
  breaks <- h$breaks; nB <- length(breaks)
  y <- h$counts; y <- y/max(y)
  rect(breaks[-nB], 0, breaks[-1], y,  ...)
}
par(lty=1, pch=19, col="Gray")
pairs(subset(data, select=c(-t.var, -y.12, -y.21)), col="Gray", cex=0.8, 
   lower.panel=panel.lm, diag.panel=panel.hist)

# Additional interesting plots:
par(col="Black", pch=1)
#pairs(subset(data, select=c(-t.var, -x1.2, -y.2, -y.21)))
#pairs(subset(data, select=c(-t.var, -x1, -x2)))
#pairs(subset(data, select=c(x2.1, y.1, y.12)))

# Details of the variances, showing how to obtain multiple regression
# standard errors from the OLS matches.
norm <- function(x) sqrt(sum(x * x))
lapply(data, norm)
s <- summary(lm(y ~ x1 + x2 - 1, data=data))
c(s$sigma, s$coefficients["x1", "Std. Error"] * norm(data$x1.2)) # Equal
c(s$sigma, s$coefficients["x2", "Std. Error"] * norm(data$x2.1)) # Equal
c(s$sigma, norm(data$y.12) / sqrt(length(data$y.12) - 2))        # Equal
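
As one more check of the bullet point that $x_1$, $x_2$, $x_{1\cdot 2}$, and $x_{2\cdot 1}$ have all been taken out of $y_{\cdot lm}$, the corresponding inner products are all essentially zero (run this after the code above):

# All four matchers are orthogonal to the multiple-regression residuals y.lm.
with(data, c(x1=sum(y.lm * x1), x2=sum(y.lm * x2),
             x1.2=sum(y.lm * x1.2), x2.1=sum(y.lm * x2.1)))      # All essentially zero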