If you are only interested in allowing that one variable to change with education, you can include an interaction term between expl_par and expl_ent in a single model and then check whether the p-value for the interaction term is significant. This approach constrains all other coefficients to take the same value at each level of education, but you can add further interaction terms if you have enough data or believe other variables should differ. I don't know Stata well, so I'm not sure of the exact code. Depending on the software, you typically either create a new variable equal to expl_par*expl_ent and include it in the model, or do the multiplication directly in the model statement.
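As a minimal sketch in R (the same idea carries over to Stata), using the variable names from the question; the outcome `y` and the simulated data are hypothetical placeholders:

```r
set.seed(1)
n <- 200
expl_par <- rnorm(n)
expl_ent <- factor(sample(c("low", "high"), n, replace = TRUE),
                   levels = c("low", "high"))  # e.g. an education indicator
y <- rbinom(n, 1, plogis(0.5 * expl_par))      # made-up binary outcome

## The * operator expands to the main effects plus the interaction,
## i.e. expl_par + expl_ent + expl_par:expl_ent
fit <- glm(y ~ expl_par * expl_ent, family = binomial)

## p-value for the interaction term
summary(fit)$coefficients["expl_par:expl_enthigh", "Pr(>|z|)"]
```

In R you rarely need to construct the product variable by hand; `*` (or `:` for the interaction alone) does it in the model formula.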
In a factor `by` variable smooth, like other simple smooths, the bases for the smooths are subject to identifiability constraints. If you just naively computed the basis of the required dimension, given the defaults for `s()`, you'd get 2 basis functions that are in the null space of the smoothness penalty:
- a flat, horizontal function, and
- a linear function.
Both are perfectly smooth and hence not penalised by the smoothness penalty. The flat function is the same thing as the model intercept. The identifiability issue arises because you could add any value to the estimated coefficient for the intercept (constant) term, subtract the same value from the coefficient for the flat, horizontal basis function, and get the same fit from a different model. As there is an infinite set of numbers you could add to the intercept, you have an infinity of models.
This is not good, so to alleviate the issue an identifiability constraint is applied. There are several such constraints, but the one that leads to good confidence interval coverage properties is the sum-to-zero constraint: over the range of the covariate, the smooth is constrained to sum to zero. This centres the smooth about zero, which means the flat function is removed from the basis of the smooth.
Now, in the case of factor `by` variables, because each smooth is centred about zero, the smooth itself contains no easy way to account for differences between the levels in the mean response; say samples from condition F had, on average, larger values of pr than condition G1. We'd want the spline for F to be shifted up by some constant amount relative to G1. That's what the parametric terms are for, and they come from the `+ as.factor(Abbr)` term in the model formula. The parametric terms represent the deviation of the indicated group from the mean of the reference group (in your case the level not listed, F). If you didn't include this term in the model, the smooths might become more wiggly as they tried to account for the mean shifts of the groups, which is not something you want.
The other main type of smooth you might use for this kind of model is the random factor smooth basis, `bs = "fs"`. This basis/smooth includes intercepts for each level of the grouping factor and as such doesn't need the parametric terms.
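A sketch of the `bs = "fs"` alternative on the same kind of invented data (variable names again taken from the answer):

```r
library(mgcv)

set.seed(1)
d <- data.frame(Year = rep(1:50, 2),
                Abbr = factor(rep(c("F", "G1"), each = 50)))
d$pr <- sin(d$Year / 8) + ifelse(d$Abbr == "G1", 2, 0) + rnorm(100, sd = 0.2)

## With the factor smooth basis, per-level intercepts are part of the
## smooth itself, so no separate parametric factor term is needed
m_fs <- gam(pr ~ s(Year, Abbr, bs = "fs"), data = d, method = "REML")
```

Note that the factor goes inside `s()` as a second argument here, rather than as a `by` variable.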
The approximate significance of the smooths represents a test that the indicated smooth is a flat, zero function. Put another way, it is the smooth equivalent of the t or Wald Z test of the null hypothesis that a coefficient in a linear model or GLM is equal to zero (i.e. has no effect). There is strong evidence against the null for each of your smooths, which is reflected in the strong non-linearity of the estimated smooths and in the fact that the confidence intervals for the smooths do not include 0 for most of the range of Year.
I suggest using a cumulative-link regression that predicts the education level from one binary study-status factor (dropped out/stayed in); see the ordinal package in R. If your stats package doesn't have an ordinal regression function like this, you could use a series of logistic regressions, e.g. one binary model per cumulative split of the outcome.
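The series-of-logistic-regressions idea can be sketched in R as follows. The variable names (`edu`, `dropped_out`) and the simulated data are hypothetical; the `ordinal` package's `clm()` would fit the cumulative-link model directly, but the fallback below uses only base R:

```r
## Hypothetical ordinal outcome: education level 1-4, with dropouts
## tending toward lower levels
set.seed(1)
n <- 300
dropped_out <- rbinom(n, 1, 0.5)
edu <- cut(rnorm(n, mean = -0.8 * dropped_out),
           breaks = c(-Inf, -1, 0, 1, Inf),
           labels = 1:4, ordered_result = TRUE)

## One logistic regression per cumulative split: P(edu > k) for k = 1, 2, 3
fits <- lapply(1:3, function(k) {
  glm(as.numeric(edu) > k ~ dropped_out, family = binomial)
})

## Under proportional odds, the dropped_out coefficients should be
## similar across splits (the cumulative-link model assumes exactly that)
sapply(fits, function(f) coef(f)["dropped_out"])
```

Comparing the three coefficients is also a rough informal check of the proportional-odds assumption that the single cumulative-link model makes.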
It's not appropriate to use a t-test (or a non-parametric analogue such as the Wilcoxon test) with the ordinal data you have, because the differences between the levels of your dependent variable are not reflected in the value assignment discussed in this post.