For completeness' sake (and to help improve the stats of this site, ha), I have to wonder whether this paper might also answer your question:
ABSTRACT:
We discuss the choice of polynomial basis for approximation of uncertainty propagation through complex simulation models with capability to output derivative information. Our work is part of a larger research effort in uncertainty quantification using sampling methods augmented with derivative information. The approach has new challenges compared with standard polynomial regression. In particular, we show that a tensor product multivariate orthogonal polynomial basis of an arbitrary degree may no longer be constructed. We provide sufficient conditions for an orthonormal set of this type to exist, a basis for the space it spans. We demonstrate the benefits of the basis in the propagation of material uncertainties through a simplified model of heat transport in a nuclear reactor core. Compared with the tensor product Hermite polynomial basis, the orthogonal basis results in a better numerical conditioning of the regression procedure, a modest improvement in approximation error when basis polynomials are chosen a priori, and a significant improvement when basis polynomials are chosen adaptively, using a stepwise fitting procedure.
Otherwise, the tensor-product basis of one-dimensional polynomials is not only the appropriate technique, but also the only one I can find for this.
I think this is a bug in the predict function (and hence my fault), which in fact nlme does not share. (Edit: this should be fixed in the most recent R-forge version of lme4.) See below for an example ...
I think your understanding of orthogonal polynomials is probably just fine. The tricky thing you need to know about them, if you are trying to write a predict method for a class of models, is that the basis for the orthogonal polynomials is defined based on a given set of data, so if you naively (like I did!) use model.matrix to try to generate the design matrix for a new set of data, you get a new basis -- which no longer makes sense with the old parameters. Until I get this fixed, I may need to put in a trap that tells people that predict doesn't work with orthogonal polynomial bases (or spline bases, which have the same property).
## Simulate grouped data: 10 groups (f), each with its own random
## intercept, linear slope, and quadratic coefficient
library(ggplot2)
d <- expand.grid(x=seq(0,1,length=50),f=LETTERS[1:10])
set.seed(1001)
u.int <- rnorm(10,sd=0.5)
u.slope <- rnorm(10,sd=0.2)
u.quad <- rnorm(10,sd=0.1)
d <- transform(d,
               ypred = (1+u.int[f])+
                   (2+u.slope[f])*x-
                   (1+u.quad[f])*x^2)
d$y <- rnorm(nrow(d),mean=d$ypred,sd=0.2)
ggplot(d,aes(x=x,y=y,colour=f))+geom_line()+
    geom_line(aes(y=ypred),linetype=2)
## Fit with lme4: raw vs. orthogonal polynomial fixed effects
library(lme4)
fm1 <- lmer(y~poly(x,2,raw=TRUE)+(1|f)+(0+x|f)+(0+I(x^2)|f),
            data=d)
fm2 <- lmer(y~poly(x,2)+(1|f)+(0+x|f)+(0+I(x^2)|f),
            data=d)
## Population-level predictions on new data (REform=NA suppresses the
## random effects; in current lme4 the argument is re.form)
newdat <- data.frame(x=unique(d$x))
plot(predict(fm1,newdata=newdat,REform=NA))
lines(predict(fm2,newdata=newdat,REform=NA),col=2)  # under the bug, fm2 no longer matches fm1
## Same fits with nlme, whose predict() does not share the bug
detach("package:lme4")
library(nlme)
fm3 <- lme(y~poly(x,2,raw=TRUE),
           random=list(~1|f,~0+x|f,~0+I(x^2)|f),
           data=d)
VarCorr(fm3)
fm4 <- lme(y~poly(x,2),
           random=list(~1|f,~0+x|f,~0+I(x^2)|f),
           data=d)
newdat <- data.frame(x=unique(d$x))
lines(predict(fm3,newdata=newdat,level=0),col=4)
lines(predict(fm4,newdata=newdat,level=0),col=5)
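For what it's worth, the underlying issue can be seen without any mixed models at all. This is a minimal sketch of my own, not part of the original example: poly() stores the coefficients that define its basis, and predict() on the poly object re-evaluates that same basis at new x values, whereas calling poly() again on the new x values constructs a different basis.
## poly() builds its orthogonal basis from the data it is given
x_old <- seq(0, 1, length = 50)
x_new <- seq(0, 0.5, length = 20)
b_old     <- poly(x_old, 2)                      # basis defined by x_old
b_naive   <- poly(x_new, 2)                      # a *new* basis -- wrong for the old coefficients
b_correct <- predict(b_old, newdata = x_new)     # the old basis evaluated at x_new
## The two differ, which is why naively rebuilding the design matrix
## with model.matrix on new data breaks the fitted coefficients:
range(b_naive - b_correct)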
Best Answer
I feel like several of these answers miss the point. Haitao's answer addresses the computational problems with fitting raw polynomials, but it's clear that OP is asking about the statistical differences between the two approaches. That is, if we had a perfect computer that could represent all values exactly, why would we prefer one approach over the other?
user5957401 argues that orthogonal polynomials reduce the collinearity among the polynomial functions, which makes their estimation more stable. I agree with Jake Westfall's critique; the coefficients in orthogonal polynomials represent completely different quantities from the coefficients on raw polynomials.
The model-implied dose-response function, $R^2$, MSE, predicted values, and the standard errors of the predicted values will all be identical regardless of whether you use orthogonal or raw polynomials. The coefficient on $X$ in a raw polynomial regression of order 2 has the interpretation of "the instantaneous change in $Y$ when $X=0$." If you performed a marginal effects procedure on the orthogonal polynomial where $X=0$, you would get exactly the same slope and standard error, even though the coefficient and standard error on the first-order term in the orthogonal polynomial regression are completely different from their values in the raw polynomial regression.
That is, when trying to get the same quantities from both regressions (i.e., quantities that can be interpreted the same way), the estimates and standard errors will be identical. Using orthogonal polynomials doesn't mean you magically have more certainty of the slope of $X$ at any given point. The stability of the models is identical. See below:
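The original answer demonstrates this with a reprex whose output isn't reproduced in this post, so here is a minimal sketch of the same comparison. Using iris with Sepal.Length as the outcome is my assumption, based only on the Petal.Width reference below.
## Raw vs. orthogonal quadratic fits of the same model
m_raw  <- lm(Sepal.Length ~ poly(Petal.Width, 2, raw = TRUE), data = iris)
m_orth <- lm(Sepal.Length ~ poly(Petal.Width, 2),             data = iris)
## Identical fits: same R^2 and identical predicted values
c(summary(m_raw)$r.squared, summary(m_orth)$r.squared)
all.equal(fitted(m_raw), fitted(m_orth))
## Same marginal effect (slope) at Petal.Width = 0: for the raw fit this is
## just the first-order coefficient; for the orthogonal fit, approximate it
## by a finite difference of predictions near 0 (a marginal-effects package
## would also give the matching standard error)
eps <- 1e-6
nd  <- data.frame(Petal.Width = c(0, eps))
unname(coef(m_raw)[2])
diff(predict(m_orth, newdata = nd)) / eps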
The marginal effect of Petal.Width at 0 from the orthogonal fit and its standard error are exactly equal to those from the raw polynomial fit (i.e., 1.1527). Using orthogonal polynomials doesn't improve the precision of estimates of the same quantity between the two models.
The key is the following: using orthogonal polynomials allows you to isolate the contribution of each term to explaining variance in the outcome, e.g., as measured by the squared semipartial correlation. If you fit an orthogonal polynomial of order 3, the squared semipartial correlation for each term represents the variance in the outcome explained by that term in the model. So, if you wanted to answer "How much of the variance in $Y$ is explained by the linear component of $X$?" you could fit an orthogonal polynomial regression, and the squared semipartial correlation on the linear term would represent this quantity. This is not so with raw polynomials. If you fit a raw polynomial model of the same order, the squared semipartial correlation on the linear term does not represent the proportion of variance in $Y$ explained by the linear component of $X$. See below.
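Again, the reprex itself isn't reproduced in this post, so here is a minimal sketch of the computation. The iris variables are my assumption, and sq_semipartial is a hypothetical helper: the squared semipartial correlation of a term is the drop in $R^2$ when that term alone is removed from the full model.
## Hypothetical helper: squared semipartial correlation of each term,
## computed as R^2(full model) - R^2(model with that term removed)
sq_semipartial <- function(y, terms, data) {
  r2 <- function(f) summary(lm(f, data = data))$r.squared
  full_r2 <- r2(reformulate(terms, response = y))
  sapply(terms, function(t) full_r2 - r2(reformulate(setdiff(terms, t), response = y)))
}
## Raw and orthogonal cubic terms for the same predictor
d  <- transform(iris, x1 = Petal.Width, x2 = Petal.Width^2, x3 = Petal.Width^3)
op <- poly(iris$Petal.Width, 3)
d$o1 <- op[, 1]; d$o2 <- op[, 2]; d$o3 <- op[, 3]
## Raw terms are highly collinear, so each term's *unique* contribution is tiny
sq_semipartial("Sepal.Length", c("x1", "x2", "x3"), d)
## Orthogonal terms partition the explained variance: the linear term's value
## is the share of variance explained by the linear component
sq_semipartial("Sepal.Length", c("o1", "o2", "o3"), d)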
The squared semipartial correlations for the raw polynomials when the polynomial of order 3 is fit are $0.001$, $0.003$, and $0.005$. When only the linear term is fit, the squared semipartial correlation is $0.927$. The squared semipartial correlations for the orthogonal polynomials when the polynomial of order 3 is fit are $0.927$, $0.020$, and $0.005$. When only the linear term is fit, the squared semipartial correlation is still $0.927$. From the orthogonal polynomial model but not the raw polynomial model, we know that most of the variance explained in the outcome is due to the linear term, with very little coming from the square term and even less from the cubic term. The raw polynomial values don't tell that story.
Now, if you value this interpretational benefit more than the benefit of being able to read the coefficients directly, then you should use orthogonal polynomials. If you would prefer to look at the coefficients and know exactly what they mean (though I doubt one typically does), then you should use the raw polynomials. If you don't care (i.e., you only want to control for confounding or generate predicted values), then it truly doesn't matter; both forms carry the same information with respect to those goals. I would also argue that orthogonal polynomials should be preferred in regularization (e.g., the lasso), because removing higher-order terms doesn't affect the coefficients of the lower-order terms (which is not true with raw polynomials), and regularization techniques often care about the size of each coefficient.
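A quick way to see that last point (a minimal sketch, again using the iris variables as a stand-in): with an orthogonal basis, refitting without the cubic term leaves the lower-order coefficients unchanged, while with raw polynomials they shift.
## Orthogonal basis: lower-order coefficients unchanged when the cubic term is dropped
coef(lm(Sepal.Length ~ poly(Petal.Width, 3), data = iris))[2:3]
coef(lm(Sepal.Length ~ poly(Petal.Width, 2), data = iris))[2:3]               # identical to the above
## Raw basis: dropping the cubic term changes the remaining coefficients
coef(lm(Sepal.Length ~ poly(Petal.Width, 3, raw = TRUE), data = iris))[2:3]
coef(lm(Sepal.Length ~ poly(Petal.Width, 2, raw = TRUE), data = iris))[2:3]   # different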