Solved – importance of predictor variables in multiple linear regression

Tags: multiple-regression, r

I am running a multiple linear regression in R:

mod <- lm(varP ~ var1 + var2 + var3 + var4)

The summary output is:

Call:
lm(formula = varP ~ var1 + var2 + var3 + var4)

Residuals:
    Min      1Q  Median      3Q     Max     
-4.9262 -0.6985  0.0472  0.7319  4.3305 

Coefficients:
             Estimate Std. Error t value Pr(>|t|)    
(Intercept)  0.700823   0.084737   8.271 1.45e-15 ***
var1         1.080172   0.175348   6.160 1.59e-09 ***
var2        -0.057803   0.007777  -7.432 5.25e-13 ***
var3        -9.924772   4.268235  -2.325   0.0205 *  
var4        -0.015104   0.001290 -11.710  < 2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 1.139 on 460 degrees of freedom
Multiple R-squared:  0.657, Adjusted R-squared:  0.654 
F-statistic: 220.3 on 4 and 460 DF,  p-value: < 2.2e-16

This means that my model explains about 65.4% of the variance (adjusted R-squared).
But now, I would like to determine the importance of each predictor.

I was using lm.sumSquares() from the lmSupport package:

lm.sumSquares(mod)

Is dR-sqr relevant for interpreting this importance?

              SS       dR-sqr pEta-sqr  df        F p-value
(Intercept)   88.73054 0.0510   0.1294   1  68.4015  0.0000
var4         177.88026 0.1022   0.2296   1 137.1262  0.0000
var2          71.65234 0.0412   0.1072   1  55.2361  0.0000
var1          49.22579 0.0283   0.0762   1  37.9477  0.0000
var3           7.01377 0.0040   0.0116   1   5.4069  0.0205

Error (SSE)  596.71237     NA       NA 460       NA      NA    
Total (SST) 1739.76088     NA       NA  NA       NA      NA
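
For context, the dR-sqr column equals each predictor's sum of squares divided by the total sum of squares (e.g. 177.88 / 1739.76 ≈ 0.1022 for var4), which is the drop in R-squared when that predictor is removed from the full model. Below is a minimal sketch of how to check this; the data frame name dat and the refit with data = dat are illustrative assumptions, not part of the original code.

## Sketch: verify dR-sqr for var4, assuming a data frame dat holding varP and var1-var4
mod <- lm(varP ~ var1 + var2 + var3 + var4, data = dat)

## Drop in R-squared when var4 is removed from the full model
summary(mod)$r.squared - summary(update(mod, . ~ . - var4))$r.squared

## The same value from var4's unique sum of squares over the total SS
sst <- sum((dat$varP - mean(dat$varP))^2)
drop1(mod)["var4", "Sum of Sq"] / sst     # ~0.1022, matching dR-sqr above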

Best Answer

If you are using R, you can use the caret package, which has a built-in method for computing variable importance; see http://caret.r-forge.r-project.org/varimp.html.

You basically just have to do:

 varImp(mod, scale = FALSE)
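
For completeness, here is a minimal usage sketch, assuming mod is the lm fit from the question (the object name imp is just illustrative). For lm objects, caret's varImp() uses the absolute value of each coefficient's t-statistic as the importance score.

library(caret)

## A data frame of "Overall" importance scores
## (the absolute t values from summary(mod))
imp <- varImp(mod, scale = FALSE)
imp

## Rank predictors from most to least important
imp[order(imp$Overall, decreasing = TRUE), , drop = FALSE]

For this model, that ranking (var4, var2, var1, var3) matches the ordering by absolute t value in the summary output above.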