As @Roland surmised, the jagged edges are due to you predicting multiple values of the response at each value of `x1`. You get multiple predictions because you plugged in all the values of `x6` when you created `new_data`. That you didn't get a spaghetti plot is likely down to the ordering of the values in the `expand.grid` call; if `x1` were your random effect variable then the plot could have been a visual mess and highlighted the problem more easily.
When predicting from models like this you typically need to provide a data frame of new values for all covariates used to fit the model. You can provide a single value for the covariates you are not interested in, in which case the predicted values are conditioned on those values. You could do this for the random effect also by selecting a single level of the random effect variable to predict for:
```r
new_data <- with(dat,
                 expand.grid(x1 = seq(min(x1), max(x1), length = 200),
                             x2 = median(x2),
                             x3 = median(x3),
                             x4 = median(x4),
                             x5 = median(x5),
                             x6 = factor(x6[1], levels = levels(x6)), # <- here
                             Latitude  = median(Latitude),
                             Longitude = median(Longitude)))
```
In the example I'm choosing a single level from `x6` but making sure that `new_data$x6` is still a factor with the right levels.
This would condition the predictions on that subject, which may not be what you want.
An alternative is to exclude certain terms from the predictions. Basically this is like forcing the effect $f(\mathsf{Lon}, \mathsf{Lat}) = 0$, which, given how the smooths are set up, means the average spatial effect. In terms of the random effects, because these are mean-zero i.i.d. Gaussian random effects, setting the effect of the random effect smooth to 0 is the same as generating population-level predictions.
Note: the smooths are centred about the model constant term, so if you have factor terms then the predictions will be conditioned on the reference level(s) of that/those factor(s); changing the contrasts for those factors changes the interpretation of the model constant term, so bear that in mind as you might prefer a different set of contrasts.
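For instance, a minimal sketch of switching a factor to sum-to-zero contrasts before fitting, under which the constant term represents an average over the levels rather than the reference level (the factor name `fac` is hypothetical):

```r
## hypothetical factor `fac`; sum-to-zero contrasts make the model constant
## an average over the levels of `fac` rather than its reference level
dat$fac <- factor(dat$fac)
contrasts(dat$fac) <- "contr.sum"
```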
Regardless, we can exclude those terms that we might consider superfluous or nuisance terms using the `terms` or `exclude` arguments to `predict.gam()`, which have the following descriptions:
> **terms**: if `type=="terms"` or `type="iterms"` then only results for the terms (smooth or parametric) named in this array will be returned. Otherwise any smooth terms not named in this array will be set to zero. If `NULL` then all terms are included.
>
> **exclude**: if `type=="terms"` or `type="iterms"` then terms (smooth or parametric) named in this array will not be returned. Otherwise any smooth terms named in this array will be set to zero. If `NULL` then no terms are excluded. Note that *this is the term names as it appears in the model summary*, see example. You can avoid providing the covariates for the excluded terms by setting `newdata.guaranteed=TRUE`, which will avoid all checks on `newdata`.
(emphasis added) In your case it is perhaps easier to exclude the terms you want to drop. Note the bit in italics above: you have to name the smooths in the vector you pass to `terms` or `exclude` exactly as {mgcv} knows them, so read them off the `summary()` output if you are unsure.
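If you'd rather extract the labels programmatically, they are stored on the fitted model object; a minimal sketch, assuming your fitted model is called `model`:

```r
## the exact labels {mgcv} uses for the smooths, as printed by summary()
vapply(model$smooth, function(s) s$label, character(1))
```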
If you want to exclude the effects of `s(x5)`, `s(x6)`, and `s(Latitude,Longitude)` from your predictions then you would do:
```r
excl <- c("s(x5)", "s(x6)", "s(Latitude,Longitude)")
pred <- predict(model, new_data, exclude = excl,
                type = "link", se.fit = TRUE, unconditional = TRUE)
```
Note how I removed spaces in the label for the spatial smooth.
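Because we predicted on the link scale, you'll typically want to map the fitted values and an approximate 95% interval back to the response scale; a minimal sketch using the model's inverse link function:

```r
## back-transform fitted values and a 95% interval to the response scale
ilink <- family(model)$linkinv
crit  <- qnorm(0.975)
pred_df <- transform(new_data,
                     fitted = ilink(pred$fit),
                     lower  = ilink(pred$fit - crit * pred$se.fit),
                     upper  = ilink(pred$fit + crit * pred$se.fit))
```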
You probably don't want to use `newdata.guaranteed=TRUE` here unless you are very sure that everything in the definition of your `new_data` is perfectly correct; it's very easy to mess up factor coding etc.
As a double check, confirm that your `new_data` has as many rows as you planned to plot. Here you wanted to show how the predicted value of your response changes over the range of `x1`, and you generated 200 values of `x1` at which you wanted predictions; hence your `new_data` should have 200 rows. If it doesn't, then you messed something up and need to check how you are creating `new_data`; invariably the issue is in the `expand.grid` call.
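A one-line sanity check along those lines:

```r
## one row per x1 value we want to plot
stopifnot(nrow(new_data) == 200)
```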
Note also that you will get different predictions if you predict at the median latitude and longitude than if you exclude those effects with `exclude`: the sum-to-zero constraint imposed on each smooth centres it about the mean, so 0 represents the mean spatial effect, not the effect on Y at the median location. I may be glossing over some minor implementation details there, but bear that in mind when choosing values to predict at if you are conditioning on them, as you did in your example.
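To make that concrete, a sketch of the two alternatives, differing only in whether the spatial smooth is excluded:

```r
## conditioned on the median location (the Latitude/Longitude in new_data)
p_cond <- predict(model, new_data, type = "link")

## spatial effect set to zero, i.e. the average spatial effect
p_excl <- predict(model, new_data, type = "link",
                  exclude = "s(Latitude,Longitude)")
```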
Best Answer
Q1
You probably don't want to use the `by` mechanism for this many factor levels, but any time you do use a factor `by` smooth you must include the factor in the model as a parametric term. I suspect what's happening is that, in order to model the different means, the smooths are changing shape and becoming less wiggly (using fewer EDFs) simply to model the differences in mean abundance between the genera. Your model then should be:
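(a minimal sketch, assuming hypothetical names `abund` for the response, `Time` and `Genus` for the covariates, and `df` for your data; choose a family appropriate to your abundances)

```r
## factor-by formulation: a parametric Genus term for the per-genus means
## plus a separate smooth of Time, with its own wiggliness, for each genus
library(mgcv)
m_by <- gam(abund ~ Genus + s(Time, by = Genus),
            data = df, method = "REML")
```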
Q2
A big con is that you can no longer get estimates for the genera you left out. A pro is that the low-prevalence genera might not be modellable (easily), or modelling them might require a much more complex model, to capture the differences in abundance and the varying mean-variance relationships, than if you just modelled the most abundant genera.
Whether either of these is a big deal depends on the questions you are trying to answer.
Q3
I would suggest that for something where you have 10s or more levels of a factor, you might want to rethink using the factor `by` approach. Instead I would gravitate towards the `fs` basis type, which treats the collection of smooths more like true random effects to give a "random smooth" that includes random intercepts and slopes for each genus plus random wiggly terms. The constraint on the `fs` basis that makes it feasible for data of this sort is that we only have one set of smoothing parameters to estimate instead of a separate smoothing parameter for each genus. The implication of this is that we are explicitly assuming the smooth for each genus has the same overall wiggliness, but potentially a different shape. With the factor `by` smooth formulation you are explicitly accepting that the smooth for each genus could have a different wiggliness.

As described above, a starting model using the `fs` basis would be:
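(a minimal sketch, reusing the hypothetical `abund`, `Time`, `Genus`, and `df` from Q1)

```r
## "fs" basis: one smooth of Time per level of Genus, sharing a common
## wiggliness, with per-genus random intercepts built in
m_fs <- gam(abund ~ s(Time, Genus, bs = "fs"),
            data = df, method = "REML")
```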
noting that we have dropped the parametric term `Genus`, as the `s(Time, Genus, bs = "fs")` term includes a random intercept for each level of `Genus`.

If you have such large amounts of data you should be looking at the `mgcv::gamm()` function, or better `gamm4::gamm4()`, to do the modelling, as the `fs` basis is designed to be more efficient there, or, even more likely, `mgcv::bam()` if the model and data are large enough.
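For illustration, a sketch of the same model fitted with `bam()`, again with the hypothetical names from above; fast REML and discretised covariates speed up fitting on large data:

```r
## large-data version of the fs model
m_big <- bam(abund ~ s(Time, Genus, bs = "fs"),
             data = df, method = "fREML", discrete = TRUE)
```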