@John recently pointed out to me that R's `poly` function produces orthogonal polynomial predictors, i.e. the transformed predictors have a much lower correlation with each other than if I just mean-centered the first-order predictor before generating the nth-order predictor(s). `poly` accomplishes this using an algorithm from Kennedy, W. J. Jr and Gentle, J. E. (1980), Statistical Computing, Marcel Dekker, pp. 343-344. I checked the book out from the library, but I have to say that I am not quite following what is going on. I have also made some plots comparing these values to the raw values, and while I can sort of see what is happening, I still don't fully understand it. The data seem to be centered at their mean and then transformed using a set of stored coefficients. I can always use `predict.poly` to turn real values into `poly` values. My question is: how should I report the results of a model fit with `poly`? The raw slope coefficients refer to transformed values, so if I report them, I assume I should also report some of the coefficients from `poly` itself. Which ones? In any particular format? Is anybody really going to understand them? Is this a compelling reason to stick to raw mean-centered coefficients?
Solved – How would you report (in publication) the results of a linear model fit using the `poly` function in R
polynomial, r, regression, regression coefficients
Best Answer
With modern matrix algebra software, I feel that orthogonal polynomials get in the way more than they help. I prefer to use ordinary polynomials for this reason, for example via the `pol` function in the R `rms` package.
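One reason this choice is safe to make on reporting grounds alone: the raw and orthogonal polynomial columns span the same space, so the two parameterizations give identical fitted values and predictions; only the coefficients (and their interpretability) differ. A numerical sketch (Python rather than R, with made-up data; the raw design matrix corresponds to what `poly(x, 2, raw = TRUE)` would supply, the orthogonal one to a QR-based basis):

```python
import numpy as np

# Made-up data: a quadratic trend plus noise.
rng = np.random.default_rng(0)
x = np.array([1., 2., 3., 4., 5., 7., 9., 12., 15., 20.])
y = 0.5 + 0.3 * x - 0.02 * x**2 + rng.normal(0, 0.1, x.size)

# Ordinary (raw) polynomial design matrix: intercept, x, x^2.
X_raw = np.column_stack([np.ones_like(x), x, x**2])
# An orthogonal basis for the same column space.
Q, _ = np.linalg.qr(X_raw)

b_raw, *_ = np.linalg.lstsq(X_raw, y, rcond=None)
b_orth, *_ = np.linalg.lstsq(Q, y, rcond=None)

# Coefficients differ, but the fits are the same.
print(np.allclose(X_raw @ b_raw, Q @ b_orth))  # True
```

The raw coefficients here are directly readable as intercept, linear, and quadratic terms on the original scale of x, which is what makes them easier to report.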