Solved – Intuition behind tensor product interactions in GAMs (MGCV package in R)

Tags: interaction, intuition, nonparametric, r, splines

Generalized additive models are those where
$$
y = \alpha + f_1(x_1) + f_2(x_2) + e_i
$$

for example. The functions are smooth and are to be estimated, usually by penalized splines. MGCV is an R package that does this, and its author (Simon Wood) has written a book about the package with R examples. Ruppert et al. (2003) have written a far more accessible book about simpler versions of the same thing.

My question is about interactions within these sorts of models. What if I want to do something like the following:
$$
y = \alpha + f_1(x_1) + f_2(x_2) + f_3(x_1\times x_2) + e_i
$$

If we were in OLS land (where each $f$ is just a beta), I'd have no problem interpreting $\hat{f}_3$. If we estimate via penalized splines, I also have no problem with interpretation in the additive context.

But gam() in the MGCV package has these things called "tensor product smooths". I google "tensor product" and my eyes immediately glaze over trying to read the explanations that I find. Either I'm not smart enough, or the math isn't explained very well, or both.

Instead of coding

normal = gam(y~s(x1)+s(x2)+s(x1*x2))

a tensor product would do the same (?) thing by

what = gam(y~te(x1,x2))

when I do

plot(what)

or

vis.gam(what)

I get some really cool output. But I have no idea what is going on inside the black box that is te(), nor how to interpret the aforementioned cool output. Just the other night I had a nightmare that I was giving a seminar. I showed everyone a cool graph, they asked me what it meant, and I didn't know.

Could anyone help both me, and posterity, by giving a bit of mechanics and intuition on what is going on underneath the hood here? Ideally by saying a bit about the difference between the normal additive interaction case and the tensor case?

Best Answer

I'll (try to) answer this in three steps: first, let's identify exactly what we mean by a univariate smooth. Next, we will describe a multivariate smooth (specifically, a smooth of two variables). Finally, I'll make my best attempt at describing a tensor product smooth.

1) Univariate smooth

Let's say we have some response data $y$ that we conjecture is an unknown function $f$ of a predictor variable $x$ plus some error $ε$. The model would be:

$$y=f(x)+ε$$

Now, in order to fit this model, we have to identify the functional form of $f$. The way we do this is by identifying basis functions, which are superposed in order to represent the function $f$ in its entirety. A very simple example is linear regression, in which the basis functions are just the constant $1$ and the linear term $x$, with coefficients $β_1$ (the intercept) and $β_2$. Applying the basis expansion, we have

$$y=β_1+β_2x+ε$$

In matrix form, we would have:

$$Y=Xβ+ε$$

Where $Y$ is an n-by-1 column vector, $X$ is an n-by-2 model matrix, $β$ is a 2-by-1 column vector of model coefficients, and $ε$ is an n-by-1 column vector of errors. $X$ has two columns because there are two terms in our basis expansion: the linear term and the intercept.

The same principle applies for basis expansion in MGCV, although the basis functions are much more sophisticated. Specifically, individual basis functions need not be defined over the full domain of the independent variable $x$. Such is often the case when using knot-based bases (the short R sketch below illustrates this). The model is then represented as the sum of the basis functions, each of which is evaluated at every value of the independent variable. However, as I mentioned, some of these basis functions take on a value of zero outside of a given interval and thus do not contribute to the basis expansion outside of that interval. As an example, consider a cubic spline basis in which each basis function is symmetric about a different value (knot) of the independent variable -- in other words, every basis function looks the same but is just shifted along the axis of the independent variable (this is an oversimplification, as any practical basis will also include an intercept and a linear term, but hopefully you get the idea).
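Here is that sketch. It is only a minimal illustration, not anything mgcv does for you in a model fit: it uses mgcv's smoothCon() helper to expose a knot-based basis, and the choice of a cubic regression spline (bs = "cr") with basis dimension k = 10 is my own illustrative assumption.

library(mgcv)

# Evaluate a knot-based univariate basis on a grid of x values.
x  <- seq(0, 1, length.out = 200)
sm <- smoothCon(s(x, bs = "cr", k = 10),   # cubic regression spline, basis dimension 10
                data = data.frame(x = x))[[1]]

dim(sm$X)   # 200 x 10: one column per basis function, evaluated at every x value

# Each column of sm$X is one basis function; plotting them shows that each one
# is tied to a single knot and contributes little far away from it.
matplot(x, sm$X, type = "l", lty = 1, xlab = "x", ylab = "basis functions")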

To be explicit, a basis expansion of dimension $i$ (an intercept, a linear term, and $i-2$ spline basis functions) could look like:

$$y=β_1+β_2x+β_3f_1(x)+β_4f_2(x)+\ldots+β_i f_{i-2}(x)+ε$$

where each function $f$ is, perhaps, a cubic function of the independent variable $x$.

The matrix equation $Y=Xβ+ε$ can still be used to represent our model. The only difference is that $X$ is now an n-by-i matrix; that is, it has a column for every term in the basis expansion (including the intercept and linear term). Since the process of basis expansion has allowed us to represent the model in the form of a matrix equation, we can use linear least squares to fit the model and find the coefficients $β$.
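To see these two steps (basis expansion, then ordinary least squares) in action, here is a small sketch. The simulated data, the cubic regression spline basis (bs = "cr") and the basis dimension k = 10 are illustrative assumptions of mine; the point is only that once we have the model matrix $X$, the fit is plain least squares.

library(mgcv)
set.seed(1)

# Simulated data: y is a smooth (here sinusoidal) function of x plus noise.
n <- 200
x <- runif(n)
y <- sin(2 * pi * x) + rnorm(n, sd = 0.3)

# Basis expansion: X has one column per basis function.
sm <- smoothCon(s(x, bs = "cr", k = 10), data = data.frame(x = x))[[1]]
X  <- sm$X

# Unpenalized least squares: beta = (X'X)^{-1} X'y
beta <- solve(crossprod(X), crossprod(X, y))

# Reconstruct the fitted curve from the basis expansion.
fhat <- X %*% beta
plot(x, y)
points(x, fhat, col = "red", pch = 16)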

This is an example of unpenalized regression, and one of the main strengths of MGCV is its smoothness estimation via a penalty matrix and smoothing parameter. In other words, instead of:

$$β=(X^TX)^{-1}X^TY$$

we have:

$$β=(X^TX+λS)^{-1}X^TY$$

where $S$ is a quadratic $i$-by-$i$ penalty matrix and $λ$ is a scalar smoothing parameter. I will not go into the specification of the penalty matrix here, but it should suffice to say that for any given basis expansion of some independent variable and definition of a quadratic "wiggliness" penalty (for example, a second-derivative penalty), one can calculate the penalty matrix $S$.

MGCV can use various means of estimating the optimal smoothing parameter $λ$. I will not go into that subject since my goal here was to give a broad overview of how a univariate smooth is constructed, which I believe I have done.
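As a sketch of the penalized version, the code below reuses the same kind of basis, pulls the penalty matrix $S$ out of smoothCon(), and applies $β=(X^TX+λS)^{-1}X^TY$ for a hand-picked $λ$. In a real analysis mgcv estimates $λ$ itself (e.g. by GCV or REML); the value 0.01, the simulated data, and the basis choices are purely illustrative.

library(mgcv)
set.seed(1)

n <- 200
x <- runif(n)
y <- sin(2 * pi * x) + rnorm(n, sd = 0.3)

sm <- smoothCon(s(x, bs = "cr", k = 10), data = data.frame(x = x))[[1]]
X  <- sm$X
S  <- sm$S[[1]]   # the quadratic (second-derivative) penalty matrix for this basis

# Penalized least squares for a hand-picked smoothing parameter lambda.
lambda   <- 0.01
beta_pen <- solve(crossprod(X) + lambda * S, crossprod(X, y))

# Larger lambda puts more weight on the wiggliness penalty and gives a smoother fit.
fhat <- X %*% beta_pen
plot(x, y)
points(x, fhat, col = "red", pch = 16)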

2) Multivariate smooth

The above explanation can be generalized to multiple dimensions. Let's now model the response $y$ as a function $f$ of two predictors, $x$ and $z$; restricting ourselves to two independent variables will prevent cluttering the explanation with arcane notation. The model is then:

$$y=f(x,z)+ε$$

Now, it should be intuitively obvious that we are going to represent $f(x,z)$ with a basis expansion (that is, a superposition of basis functions) just like we did in the univariate case of $f(x)$ above. It should also be obvious that at least one, and almost certainly many more, of these basis functions must be functions of both $x$ and $z$ (if this were not the case, then $f$ would implicitly be separable, such that $f(x,z)=f_x(x)+f_z(z)$). Visual illustrations of multidimensional spline bases (for example, thin plate spline basis functions) can be found in Simon Wood's book and course materials. A full two-dimensional basis expansion of dimension $i$ (an intercept, two linear terms, and $i-3$ bivariate spline functions) could look something like:

$$y=β_1+β_2x+β_3z+β_4f_1(x,z)+\ldots+β_i f_{i-3}(x,z)+ε$$

I think it's pretty clear that we can still represent this in matrix form with:

$$Y=Xβ+ε$$

by simply evaluating each basis function at every observed pair of $x$ and $z$ values. The solution is still:

$$β=(X^TX)^{-1}X^TY$$

Computing the second-derivative penalty matrix is very much the same as in the univariate case, except that instead of integrating the squared second derivative of each basis function with respect to a single variable, we integrate the sum of the squared second derivatives (including mixed partials) with respect to all independent variables. The details of the foregoing are not especially important: the point is that we can still construct a penalty matrix $S$ and use the same method to get the optimal value of the smoothing parameter $λ$, and given that smoothing parameter, the vector of coefficients is still:

$$β=(X^TX+λS)^{-1}X^TY$$

Now, this two-dimensional smooth has an isotropic penalty: this means that a single value of $λ$ applies in both directions. This works fine when both $x$ and $z$ are on approximately the same scale, such as in a spatial application. But what if we replace the spatial variable $z$ with a temporal variable $t$? The units of $t$ may be much larger or smaller than the units of $x$, and this can throw off the integration of our squared second derivatives because some of those derivatives will contribute disproportionately to the overall integration (for example, if we measure $t$ in nanoseconds and $x$ in light years, the numeric values of $t$ span a huge range while those of $x$ span a tiny one, so the integral of the squared second derivative with respect to $x$ may be vastly larger than the integral with respect to $t$, and thus "wiggliness" along the $t$ direction may go largely unpenalized). Simon Wood's "smooth toolbox" slides have more detail on this topic.

It is worth noting that we did not decompose the basis functions into marginal bases of $x$ and $z$. The implication here is that multivariate smooths must be constructed from bases supporting multiple variables. Tensor product smooths support construction of multivariate bases from univariate marginal bases, as I explain below.
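A quick way to see this difference is to count penalties and smoothing parameters. In the sketch below (simulated data and default basis choices, purely for illustration), the isotropic bivariate smooth s(x, z) carries a single penalty and hence a single $λ$, while the tensor product smooth te(x, z) carries one penalty per marginal basis and hence one $λ$ per variable.

library(mgcv)
set.seed(1)

n   <- 400
dat <- data.frame(x = runif(n), z = runif(n))
dat$y <- sin(2 * pi * dat$x) * cos(2 * pi * dat$z) + rnorm(n, sd = 0.3)

# Isotropic bivariate smooth (thin plate spline): one penalty matrix.
iso <- smoothCon(s(x, z), data = dat)[[1]]
length(iso$S)    # 1

# Tensor product smooth: one penalty matrix per marginal basis.
tp <- smoothCon(te(x, z), data = dat)[[1]]
length(tp$S)     # 2

# The same difference shows up in fitted models as the number of estimated
# smoothing parameters.
m_iso <- gam(y ~ s(x, z),  data = dat)
m_tp  <- gam(y ~ te(x, z), data = dat)
length(m_iso$sp) # 1
length(m_tp$sp)  # 2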

3) Tensor product smooths

Tensor product smooths address the issue of modeling responses to interactions of multiple inputs with different units. Let's suppose we have a response $y$ that is a function $f$ of spatial variable $x$ and temporal variable $t$. Our model is then:

$$y=f(x,t)+ε$$

What we'd like to do is construct a two-dimensional basis for the variables $x$ and $t$. This will be a lot easier if we can represent $f$ as:

$$f(x,t)=f_x(x)f_t(t)$$

In an algebraic / analytical sense, this is not necessarily possible. But remember, we are discretizing the domains of $x$ and $t$ (imagine a two-dimensional "lattice" defined by the locations of knots on the $x$ and $t$ axes) such that the "true" function $f$ is represented by the superposition of basis functions. Just as we assumed that a very complex univariate function may be approximated by a simple cubic function on a specific interval of its domain, we may assume that the non-separable function $f(x,t)$ may be approximated by the product of simpler functions $f_x(x)$ and $f_t(t)$ on an interval—provided that our choice of basis dimensions makes those intervals sufficiently small!

Our basis expansion, given an $i$-dimensional basis in $x$ and $j$-dimensional basis in $t$, would then look like:

\begin{align} y = &β_{1} + β_{2}x + β_{3}f_{x1}(x)+β_{4}f_{x2}(x)+\ldots+β_{i}f_{x(i-2)}(x)+ \\ &β_{i+1}t + β_{i+2}tx + β_{i+3}tf_{x1}(x)+β_{i+4}tf_{x2}(x)+\ldots+β_{2i}tf_{x(i-2)}(x)+ \\ &β_{2i+1}f_{t1}(t) + β_{2i+2}f_{t1}(t)x + β_{2i+3}f_{t1}(t)f_{x1}(x)+β_{2i+4}f_{t1}(t)f_{x2}(x)+\ldots+ \\ &β_{3i}f_{t1}(t)f_{x(i-2)}(x)+\ldots+ \\ &β_{ij}f_{t(j-2)}(t)f_{x(i-2)}(x) + ε \end{align}

Which may be interpreted as a tensor product. Imagine that we evaluated each basis function in $x$ and $t$, thereby constructing n-by-i and n-by-j model matrices $X$ and $T$, respectively. We could then compute the $n^2$-by-$ij$ Kronecker product $X \otimes T$ of these two model matrices and keep only the rows in which both factors come from the same observation (equivalently, form the row-wise tensor product, an $n$-by-$ij$ matrix), so that each column is the product of one $x$ basis function and one $t$ basis function. Recall that the marginal model matrices had $i$ and $j$ columns, respectively, corresponding to their basis dimensions. Our new two-variable basis therefore has dimension $ij$, and the same number of columns in its model matrix.
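Here is a sketch of that construction (the simulated data, the basis types, and the marginal dimensions $i = 5$, $j = 7$ are illustrative assumptions of mine). It builds the marginal model matrices with smoothCon(), forms every row-wise product of an $x$ column with a $t$ column, and compares dimensions with mgcv's own helper tensor.prod.model.matrix() (whose column ordering may differ from the hand-rolled loop).

library(mgcv)
set.seed(1)

n   <- 100
dat <- data.frame(x = runif(n), t = runif(n))

# Marginal model matrices: n-by-i in x and n-by-j in t.
sm_x <- smoothCon(s(x, bs = "cr", k = 5), data = dat)[[1]]
sm_t <- smoothCon(s(t, bs = "cr", k = 7), data = dat)[[1]]
Xm <- sm_x$X   # n x 5
Tm <- sm_t$X   # n x 7

# Row-wise products: every x basis column times every t basis column -> n x (i*j).
XT <- matrix(0, n, ncol(Xm) * ncol(Tm))
for (a in seq_len(ncol(Xm)))
  for (b in seq_len(ncol(Tm)))
    XT[, (a - 1) * ncol(Tm) + b] <- Xm[, a] * Tm[, b]

# mgcv builds this kind of matrix internally for te() smooths.
XT2 <- tensor.prod.model.matrix(list(Xm, Tm))
dim(XT)    # 100 x 35
dim(XT2)   # 100 x 35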

NOTE: I'd like to point out that since we explicitly constructed the tensor product basis functions by taking products of marginal basis functions, tensor product bases may be constructed from marginal bases of any type. They need not support more than one variable, unlike the multivariate smooth discussed above.

In reality, this process results in an overall basis expansion of dimension $ij-i-j+1$ because the full multiplication includes multiplying every $t$ basis function by the x-intercept $β_{x1}$ (so we subtract $j$) as well as multiplying every $x$ basis function by the t-intercept $β_{t1}$ (so we subtract $i$), but we must add the intercept back in by itself (so we add 1). This is known as applying an identifiability constraint.

So we can represent this as:

$$y=β_1+β_2x+β_3t+β_4f_1(x,t)+β_5f_2(x,t)+...+β_{ij-i-j+1}f_{ij-i-j-2}(x,t)+ε$$

Where each of the multivariate basis functions $f$ is the product of a pair of marginal $x$ and $t$ basis functions. Again, it's pretty clear having constructed this basis that we can still represent this with the matrix equation:

$$Y=Xβ+ε$$

Which (still) has the solution:

$$β=(X^TX)^{-1}X^TY$$

Where the model matrix $X$ has $ij-i-j+1$ columns. As for the penalties $J_x$ and $J_t$, these are constructed separately for each independent variable from the marginal penalty matrices $S_x$ and $S_t$ as follows:

$$J_x=β^T (I_j \otimes S_x) β$$

and,

$$J_t=β^T (S_t \otimes I_i) β$$

This allows for an overall anisotropic (different in each direction) penalty (Note: the penalties on the second derivative of $x$ are added up at each knot on the $t$ axis, and vice versa). The smoothing parameters $λ_x$ and $λ_t$ may now be estimated in much the same way as the single smoothing parameter was for the univariate and multivariate smooths. The result is that the overall shape of a tensor product smooth is invariant to rescaling of its independent variables.
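The scale-invariance claim is easy to check empirically. In the sketch below (simulated data, default bases, illustrative only), the same model is refitted after recording $t$ in units 1000 times smaller; the tensor product fit is essentially unchanged because $λ_t$ absorbs the rescaling, whereas the isotropic smooth changes noticeably.

library(mgcv)
set.seed(1)

n   <- 400
dat <- data.frame(x = runif(n), t = runif(n))
dat$y <- sin(2 * pi * dat$x) * cos(2 * pi * dat$t) + rnorm(n, sd = 0.3)

# The same data, with t rescaled by a factor of 1000 (a pure change of units).
dat2   <- dat
dat2$t <- dat$t * 1000

# Tensor product smooth: separate lambdas for x and t absorb the rescaling.
te1 <- gam(y ~ te(x, t), data = dat)
te2 <- gam(y ~ te(x, t), data = dat2)
max(abs(fitted(te1) - fitted(te2)))   # essentially zero (numerical noise only)

# Isotropic smooth: a single lambda must compromise between the two scales,
# so rescaling t changes the fitted surface.
iso1 <- gam(y ~ s(x, t), data = dat)
iso2 <- gam(y ~ s(x, t), data = dat2)
max(abs(fitted(iso1) - fitted(iso2))) # noticeably larger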

I recommend reading all the vignettes on the MGCV website, as well as "Generalized Additive Models: An Introduction with R." Long live Simon Wood.