If you are using an extra penalty on each term, you can just fit the model and you are done (from the point of view of selection). The point of these penalties is to allow for shrinkage of the perfectly smooth functions in the spline basis expansion as well as of the wiggly functions. The results of the model fit then account for the selection/shrinkage. If you removed the insignificant terms and refitted, the inference results (say in the summary() output) would not include the effect of the previous selection.
Assuming you have a well-chosen set of covariates and can fit the full model (a model with a smooth of each covariate plus any interactions you want), you should probably just work with the resulting fit of the shrunken full model.
If a term is using effectively 0 degrees of freedom it is having no effect on the fit/predictions at all. For the non-significant terms that have positive EDFs, by keeping them in you are effectively stating that these covariates have a small but non-zero effect. If you remove these terms as you suggest, you are saying explicitly that the effect is zero.
In short, don't fit the reduced model; work with the full model to which shrinkage was applied.
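For concreteness, a minimal sketch of fitting with extra shrinkage penalties via select = TRUE (the response y, covariates x1 and x2, and data frame df are hypothetical stand-ins):

library('mgcv')
## select = TRUE adds a second penalty on the null space of each smooth,
## so whole terms can be shrunk towards ~0 EDF during smoothness selection
m <- gam(y ~ s(x1) + s(x2), data = df, method = 'REML', select = TRUE)
summary(m)  # report the EDFs and tests from the shrunken full model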
The deviance explained of the reduced model can be lower as it has fewer terms with which to explain variation in the response. It's a bit like the $R^2$ of a model increasing as you add covariates.
Most of the extra smooths in the mgcv toolbox are really there for specialist applications — you can largely ignore them for general GAMs, especially univariate smooths (you don't need a random effect spline, a spline on the sphere, a Markov random field, or a soap-film smoother if you have univariate data, for example).
If you can bear the setup cost, use thin-plate regression splines (TPRS).
These splines are optimal in an asymptotic MSE sense, but require one basis function per observation. What Simon does in mgcv is generate a low-rank version of the standard TPRS by taking the full TPRS basis and subjecting it to an eigendecomposition. This creates a new basis in which the first k basis functions retain most of the signal in the original basis, but in far fewer basis functions. This is how mgcv gets a TPRS that uses only a specified number of basis functions rather than one per observation. The eigendecomposition preserves much of the optimality of the classic TPRS basis, but at considerable computational effort for large data sets.
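A minimal sketch (y, x, and df are hypothetical; bs = 'tp' is the default basis anyway, so stating it is optional):

library('mgcv')
## k sets the dimension of the low-rank TPRS basis; the penalty then
## decides how many of those k functions are effectively used
m <- gam(y ~ s(x, bs = 'tp', k = 20), data = df, method = 'REML')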
If you can't bear the setup cost of TPRS, use cubic regression splines (CRS).
This is a quick basis to generate and hence is suited to problems with a lot of data. It is knot-based, however, so to some extent the user now needs to choose where those knots should be placed. For most problems there is little to be gained by going beyond the default knot placement (at the boundary of the data and spaced evenly in between), but if you have particularly uneven sampling over the range of the covariate, you may choose to place knots at evenly spaced sample quantiles of the covariate, for example.
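A sketch of placing the knots at sample quantiles (y, x, and df hypothetical):

library('mgcv')
## 10 knots at evenly spaced quantiles of x rather than evenly over its range
kn <- with(df, quantile(x, probs = seq(0, 1, length = 10)))
m <- gam(y ~ s(x, bs = 'cr', k = 10), data = df, method = 'REML',
         knots = list(x = unname(kn)))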
Every other smooth in mgcv is special: they are used where you want isotropic smooths of two or more covariates, are for spatial smoothing, implement shrinkage, provide random effects and random splines, or handle covariates that are cyclic or whose wiggliness varies over their range. You only need to venture this far into the smooth toolbox if you have a problem that requires special handling.
Shrinkage
There are shrinkage versions of both the TPRS and CRS in mgcv. These implement a spline in which the perfectly smooth part of the basis is also subject to the smoothness penalty. This allows the smoothness selection process to shrink a smooth back past even a linear function, essentially to zero, so the smoothness penalty can also perform feature selection.
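A sketch of the two shrinkage bases (y, x1, x2, and df hypothetical):

library('mgcv')
## 'ts' is the shrinkage TPRS and 'cs' the shrinkage CRS; either basis
## allows a term to be shrunk all the way to the zero function
m <- gam(y ~ s(x1, bs = 'ts') + s(x2, bs = 'cs'), data = df, method = 'REML')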
Duchon splines, P splines and B splines
These splines are available for specialist applications where you need to specify the basis order and the penalty order separately. Duchon splines generalise the TPRS. I get the impression that P splines were added to mgcv to allow comparison with other penalized likelihood-based approaches, and because they are the splines used by Eilers & Marx in their 1996 paper, which spurred a lot of the subsequent work in GAMs. The P splines are also useful as a base on which to build other splines, such as splines with shape constraints, and adaptive splines.
B splines, as implemented in mgcv, allow a great deal of flexibility in setting up the penalty and the knots for the splines, which can allow some extrapolation beyond the range of the observed data.
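A sketch of these bases (y, x, and df hypothetical; the m values shown for the B spline are illustrative):

library('mgcv')
m_ds <- gam(y ~ s(x, bs = 'ds'), data = df, method = 'REML')  # Duchon spline; defaults give a TPRS-like penalty
m_ps <- gam(y ~ s(x, bs = 'ps'), data = df, method = 'REML')  # P spline with a difference penalty
## B spline: m[1] = 3 gives a cubic basis, m[2] = 2 a second-derivative penalty
m_bs <- gam(y ~ s(x, bs = 'bs', m = c(3, 2)), data = df, method = 'REML')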
Cyclic splines
If the range of values for a covariate can be thought of as on a circle where the end points of the range should actually be equivalent (month or day of year, angle of movement, aspect, wind direction), this constraint can be imposed on the basis. If you have covariates like this, then it makes sense to impose this constraint.
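A sketch using day of year (y, doy, and df hypothetical):

library('mgcv')
## bs = 'cc' joins the two ends of the smooth; supplying two knots sets
## where the cycle wraps (here days 0 and 366 are treated as the same point)
m <- gam(y ~ s(doy, bs = 'cc', k = 12), data = df, method = 'REML',
         knots = list(doy = c(0, 366)))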
Adaptive smoothers
Rather than fitting a separate GAM in sections of the covariate, adaptive splines use a weighted penalty matrix, where the weights are allowed to vary smoothly over the range of the covariate. The TPRS and CRS splines, for example, assume the same degree of smoothness across the range of the covariate. If you have a relationship where this is not the case, you can end up using more degrees of freedom than expected to allow the spline to adapt to the wiggly and non-wiggly parts. A classic example in the smoothing literature is the motorcycle crash data:
library('ggplot2')
theme_set(theme_bw())
library('mgcv')
data(mcycle, package = 'MASS')
## prediction data spanning the observed range of `times`
pdata <- with(mcycle,
              data.frame(times = seq(min(times), max(times), length = 500)))
ggplot(mcycle, aes(x = times, y = accel)) + geom_point()
These data clearly exhibit periods of different wiggliness: effectively none in the first part of the series, a lot during the impact, reducing thereafter.
If we fit a standard GAM to these data,
m1 <- gam(accel ~ s(times, k = 20), data = mcycle, method = 'REML')
we get a reasonable fit, but there is some extra wiggliness at the beginning and end of the range of times, and the fit uses ~14 degrees of freedom:
plot(m1, scheme = 1, residuals = TRUE, pch = 16)
To accommodate the varying wiggliness, an adaptive spline uses a weighted penalty matrix, with the weights varying smoothly with the covariate. Here I refit the original model with the same basis dimension (k = 20), but now we have 5 smoothness parameters (the default, m = 5) instead of the original model's 1.
m2 <- gam(accel ~ s(times, k = 20, bs = 'ad'), data = mcycle, method = 'REML')
Notice that this model uses far fewer degrees of freedom (~8) and the fitted smooth is much less wiggly at the ends, whilst still being able to adequately fit the large changes in head acceleration during the impact.
What's actually going on here is that the spline has a basis for the smooth and a basis for the penalty (to allow the weights to vary smoothly with the covariate). By default both of these are P splines, but you can also use the CRS basis types (bs can only be one of 'ps', 'cr', 'cc', or 'cs').
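For example, a sketch of an adaptive smooth built from CRS bases instead of the default P splines:

## use a CRS basis for both the smooth and the penalty via xt
m3 <- gam(accel ~ s(times, k = 20, bs = 'ad', xt = list(bs = 'cr')),
          data = mcycle, method = 'REML')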
As illustrated here, the choice of whether to go adaptive or not really depends on the problem: if you assume the functional form of a relationship is smooth, but the degree of smoothness varies over the range of the covariate, then an adaptive spline can make sense. If your series has periods of rapid change and periods of low or more gradual change, that could indicate that an adaptive smooth is needed.
Best Answer
There are 14 basis functions here, not 15; one is removed when the identifiability (sum-to-zero) constraint is applied to the basis.
All of the weights (coefficients) for the basis functions are shrunk to some extent if the smooth is penalized. As there are 14 basis functions there will always be 14 coefficients associated with this smooth regardless of the amount of shrinkage. If the smoothing parameter, $\lambda$, is sufficiently high, those coefficients will be shrunk to be effectively zero.
However, there is no reason to presume that 9 or 10 of the coefficients (in your case) will be all shrunk to effective zero. The penalty is controlling the wiggliness of the estimated smooth and that can counterintuitively require non-zero values of all the coefficients to achieve a fit that uses 4 to 5 effective degrees of freedom (EDF).
The point that the help page is making is that the estimated function (the page uses the word "term") can be shrunk towards a constant function at large $\lambda$ values rather than towards a linear function.
So, there is certainly shrinkage going on; the smooth in your model uses ~4 EDF when it could have used 14 EDF:
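The comparison plot isn't reproduced here, but a minimal sketch of how to make it (the response y, covariate x, and data frame df are hypothetical stand-ins for the model in the question; k = 15 gives 14 basis functions after the sum-to-zero constraint):

library('mgcv')
m_pen   <- gam(y ~ s(x, k = 15), data = df, method = 'REML')
m_unpen <- gam(y ~ s(x, k = 15, fx = TRUE), data = df)  # fx = TRUE switches the penalty off
summary(m_pen)    # the smooth's EDF (~4 here) is in the smooth terms table
summary(m_unpen)  # the unpenalised smooth uses all 14 EDF
plot(m_pen, shade = TRUE); plot(m_unpen, shade = TRUE)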
Clearly the estimated smooth has been shrunk away from the unpenalised fit.
However, because of the parameterisation used for the smooth and the wiggliness penalty, it's not easy to see where the shrinkage has taken place in terms of the model coefficients. In an alternative parameterisation, the so-called natural parameterisation, the EDFs of the individual basis functions that comprise a smooth are on a scale that demonstrates the shrinkage, but that requires you to change (reparameterise) the basis functions.
If we draw the basis functions weighted by the model coefficients, for both models, you'll get some idea of where the shrinkage has happened:
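A sketch of how such a plot can be produced (continuing with the hypothetical m_pen, x, and df from the sketch above):

## evaluate the basis functions at the data and weight each by its coefficient
df  <- df[order(df$x), ]                                # sort for line plotting
Xp  <- predict(m_pen, newdata = df, type = 'lpmatrix')  # design (basis) matrix
idx <- grep('s\\(x\\)', colnames(Xp))                   # columns belonging to the smooth
weighted <- sweep(Xp[, idx], 2, coef(m_pen)[idx], `*`)
matplot(df$x, weighted, type = 'l', lty = 1)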
It's clear here that many of the basis functions have been shrunk towards zero functions.
Many of the coefficients are close to zero here, indicating low weights for certain basis functions.