[Math] Why is it called the ‘Seesaw theorem’

algebraic-geometry

I'm reading Mumford's "Abelian Varieties", where he proves the theorem of the cube using the Seesaw theorem:

Let $X$ be a complete variety, $T$ any variety and $\mathcal{L}$ a line bundle on $X\times T$. Then the set
$$
T_{1} = \{t\in T\,:\, \mathcal{L}_{X\times\{t\}} \text{ is trivial on } X\times\{t\}\}
$$

is closed in $T$, and if $p_{2}:X\times T_{1}\to T_{1}$ denotes the projection, then $\mathcal{L}|_{X\times T_{1}}\simeq p_{2}^{*}M$ for some line bundle $M$ on $T_{1}$.

But I can't understand why this theorem is called the Seesaw theorem. Could you give me some intuition for it? Thanks in advance.

Best Answer

It is just a vague metaphor: the theorem relates properties of a line bundle on $X\times Y$ to those of its restrictions to the two families of slices $X \times \{y\}$ and $\{x\}\times Y$, so one keeps looking at the bundle from the two ends, tilting back and forth between the two factors like a seesaw.

Closedness also seems natural: we are requiring that a line bundle be equal to one particular value (the trivial bundle), and an equality condition of this kind is the sort of thing one expects to be closed.

Update. Concerning the question, there are two parts to the "intuition" behind the seesaw theorem.

First, the fact that $T_1$ is closed: we are trying to construct isomorphisms between $\mathcal{L}_{X \times \{t\}}$ and $\mathcal{O}_{X \times \{t\}}$. The second sheaf is fixed, and we only let the first one vary. Since these are line bundles, locally we can certainly construct such isomorphisms---because locally all line bundles are trivial---and there is only a "global" constraint that may prevent us from building an actual isomorphism. So let me try to convince you that the vanishing of this global constraint comes down to a closed condition.

We proceed by analogy, thinking in terms of topological cohomology first---we can cover $X \times \{t\}$ with finitely many open sets on which both sheaves are trivial, hence on each open set we can build an isomorphism. For simplicity imagine that $X=S^1$, so as to have a simple picture in mind. Now we "go around our variety" and keep finding local isomorphisms between the two sheaves. When we "come back to the place we started" we have obtained an automorphism of $\mathcal{O}_{X \times \{t\}}$. How? We start by locally identifying $\mathcal{O}_{X \times \{t\}}$ with $\mathcal{L}_{X \times \{t\}}$, then we go around the variety following the isomorphisms between the restrictions of $\mathcal{L}$ to each open set in our cover, then we come back to the place we started and re-identify $\mathcal{L}_{X \times \{t\}}$ with $\mathcal{O}_{X \times \{t\}}$. In the case of $X = S^1$---if the line bundle $\mathcal{L}$ is trivial, when we go back we just have the identity. If $\mathcal{L}$ is the Möbius bundle, when we go back the automorphism we find is multiplication by $-1$, which tells us that we cannot build a global isomorphism.
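
To make the circle picture concrete, here is a minimal version of that holonomy computation for the Möbius bundle, under an ad hoc choice of cover and trivializations (two arcs $U_1, U_2$ covering $S^1$, whose intersection has two components $A$ and $B$; the signs below are one possible choice of transition functions, up to the usual conventions):

$$
S^1 = U_1\cup U_2,\qquad U_1\cap U_2 = A\sqcup B,\qquad g_{12}\big|_A = +1,\qquad g_{12}\big|_B = -1,
$$
$$
\text{automorphism picked up going once around}\;=\;(+1)\cdot(-1)\;=\;-1\;\neq\;1 .
$$

So going around the circle produces multiplication by $-1$, and no global isomorphism with $\mathcal{O}_{S^1}$ exists; for the trivial bundle one can take $g_{12}\equiv 1$, and the same loop gives $+1$.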

Okay, so now we would like this automorphism of $\mathcal{O}_{X \times \{t\}}$ to be the identity. We know that this automorphism is in any case multiplication by a scalar, because these are the only automorphisms of a line bundle on a complete variety, so---in order to construct a global isomorphism $$\mathcal{L}_{X \times \{t\}} \cong \mathcal{O}_{X \times \{t\}}$$---we are requiring that this scalar be equal to $1$. And this is certainly a closed condition---we have a cover of our variety $X$ with some open sets. Given a line bundle $\mathcal{L}$ on $X$, we build local isomorphisms between $\mathcal{L}$ and $\mathcal{O}_X$ on each open set---composing these isomorphisms along a "closed path" we obtain an automorphism of $\mathcal{O}_X$, that is, a number. This number will vary "continuously" with $\mathcal{L}$, and the condition that the isomorphism exists is that this number is equal to $1$.
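
The one fact being used here can be stated as a formula (assuming, as in Mumford, that $X$ is a complete variety over an algebraically closed field $k$, so that its only global regular functions are constants):

$$
\operatorname{Aut}\big(\mathcal{O}_{X\times\{t\}}\big) \;=\; H^0\big(X\times\{t\},\,\mathcal{O}^{*}_{X\times\{t\}}\big) \;=\; k^{*},
$$

so in the heuristic above the whole obstruction is a single scalar $\lambda(t)\in k^{*}$, and the requirement is simply $\lambda(t)=1$.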

Of course this does not quite make sense mathematically, but let us try to be a bit more precise; we will see that we get close to the actual proof of the seesaw theorem. As the reader is certainly aware, the obstruction we are mentioning is not really a "number", but rather a cohomology class, and the existence of the isomorphism means that this cohomology class is trivial. Cohomology spaces are vector spaces---and the fact that a cohomology class is trivial boils down to the fact that a certain linear system is solvable. The coefficients of this linear system vary with $t$, and---if we formally write the system as$$A(t)x=b(t)$$---the conditions that actually enter the proof are rank conditions on $A(t)$, which can be expressed as the vanishing of certain determinants, and the vanishing of algebraic functions of $t$ is certainly a closed condition.
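
To indicate the kind of closedness that survives into the real proof, here is a purely linear-algebraic illustration (with $A(t)$ an $m\times n$ matrix whose entries are regular functions of $t$): conditions on the rank, equivalently on the dimension of the kernel, are closed, because they amount to the simultaneous vanishing of finitely many minors,

$$
\{\,t \;:\; \dim\ker A(t)\ \ge\ r\,\}
\;=\;\{\,t \;:\; \operatorname{rank} A(t)\ \le\ n-r\,\}
\;=\;\{\,t \;:\; \text{every } (n-r+1)\times(n-r+1) \text{ minor of } A(t) \text{ vanishes}\,\}.
$$

Once cohomology along the fibers is computed by a finite complex of free modules, the conditions $\dim H^i \ge r$ take exactly this shape, and this is essentially where semicontinuity comes from.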

Finally, we might start to see how to turn these vague arguments into an actual proof. The statement in the previous paragraph becomes the upper semicontinuity theorem, namely:

Theorem (Semicontinuity). Let $f : X \to Y$ be a proper map and $\mathcal{F}$ a coherent sheaf on $X$ that is flat over $Y$---recall that $$\text{line bundle} \implies \text{locally free} \implies \text{flat}.$$Then the function $y \mapsto \dim H^0(X_y, \mathcal{F}_y)$ is upper semicontinuous on $Y$. In fact, the same holds for the higher cohomology---$y \mapsto \dim H^i(X_y, \mathcal{F}_y)$ is upper semicontinuous for all $i \ge 0$.
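
To see that the dimension really can jump, and only on a closed set, here is a standard example (not from the original answer, just an illustration): take $X = T = E$ an elliptic curve, fix a point $p \in E$, let $\Delta \subset E\times E$ be the diagonal, and put $\mathcal{L} = \mathcal{O}_{E\times E}(\{p\}\times E - \Delta)$, so that $\mathcal{L}_{E\times\{q\}} \cong \mathcal{O}_E(p-q)$. Then

$$
\dim H^0\big(E,\ \mathcal{O}_E(p-q)\big) \;=\;
\begin{cases}
1, & q = p \quad (\text{the bundle is trivial}),\\
0, & q \neq p \quad (\text{degree }0\text{ but nontrivial}),
\end{cases}
$$

so $h^0$ jumps up exactly on the closed point $\{p\}$, in accordance with semicontinuity; and for this family the locus $T_1$ of the seesaw theorem is precisely $\{p\}$.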

Notice that the proof of the semicontinuity theorem is not particularly easy, but I hope to have convinced the reader that the statement is at least believable. Once we have the semicontinuity theorem, the fact that $T_1$ is closed in the seesaw theorem follows immediately:

A line bundle $\mathcal{L}$ on a complete variety is trivial if and only if both $\mathcal{L}$ and $\mathcal{L}^{-1}$ have nonzero global sections. So we look at the two vector spaces$$H^0(X \times \{t\}, \mathcal{L}_{X \times \{t\}}) \text{ and }H^0(X \times \{t\}, \mathcal{L}_{X \times \{t\}}^{-1}).$$The condition that $\mathcal{L}_{X \times \{t\}}$ is trivial is that both of these vector spaces are nonzero. But for each of them, the locus of $t$ where it is nonzero is closed by the upper semicontinuity theorem, so $T_1$ is an intersection of two closed sets.
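
In symbols, writing $h^0$ for $\dim H^0$, the previous paragraph just says

$$
T_1 \;=\; \{\,t\in T \;:\; h^0\big(\mathcal{L}_{X\times\{t\}}\big)\ \ge\ 1\,\}\ \cap\ \{\,t\in T \;:\; h^0\big(\mathcal{L}^{-1}_{X\times\{t\}}\big)\ \ge\ 1\,\},
$$

and each set on the right is closed by semicontinuity, applied once to $\mathcal{L}$ and once to $\mathcal{L}^{-1}$.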

Second, the existence of $\mathcal{M}$. It is quite clear that the only possible candidate for $\mathcal{M}$ is $(p_2)_*(\mathcal{L}|_{X \times T_1})$, and the key point is that this is indeed locally free of rank one. This in turn follows from the fact that $\dim H^0$ is constant along the fibers over $T_1$---equal to $1$, by the very description of $T_1$: each restriction $\mathcal{L}_{X \times \{t\}}$ is trivial and $X$ is complete. Ultimately, the crucial statement is again one in linear algebra---with a little help from semicontinuity.
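
Spelled out a little more (this is the standard cohomology-and-base-change argument one would run here, sketched under the assumption that $T_1$ carries its reduced structure): on $T_1$ the fiberwise $h^0$ is constant,

$$
h^0\big(X\times\{t\},\ \mathcal{L}_{X\times\{t\}}\big) \;=\; h^0\big(X,\ \mathcal{O}_X\big) \;=\; 1
\qquad\text{for all } t\in T_1,
$$

so Grauert's theorem says that $\mathcal{M} := (p_2)_*\big(\mathcal{L}|_{X\times T_1}\big)$ is locally free of rank one and that its formation commutes with base change; the natural evaluation map $p_2^{*}\mathcal{M} \to \mathcal{L}|_{X\times T_1}$ is then an isomorphism, because on each fiber it is given by a nonzero section of a trivial line bundle on a complete variety, which is nowhere vanishing.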

Finally there is the matter of what the seesaw theorem "means". Its most natural application is the following---consider the problem of studying line bundles on $$V = X \times T.$$Let $\mathcal{L}$, $\mathcal{M}$ be two line bundles on $V$ and denote by $\mathcal{L}_t$, $\mathcal{M}_t$ their restrictions to $X \times \{t\}$. Then it is reasonable to ask to what extent the knowledge of $\mathcal{L}_t$, $\mathcal{M}_t$ determines $\mathcal{L}$, $\mathcal{M}$. In particular, assume that$$\mathcal{L}_t \cong \mathcal{M}_t$$for all $t$. What can we say about $\mathcal{L}$, $\mathcal{M}$? The answer is that$$\mathcal{L} \cong \mathcal{M} \otimes p_2^*(\mathcal{N})$$for some line bundle $\mathcal{N}$ on $T$. This is very natural---if the restrictions to all the fibers $X \times \{t\}$ agree, then the difference between the two line bundles is trivial on each such fiber, and therefore comes by pullback from $T$. Of course this is just the seesaw principle applied to $\mathcal{L} \otimes \mathcal{M}^{-1}$.
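
For completeness, here is the one-line derivation of this corollary from the theorem (just unwinding the statement, with $p_2 : X\times T \to T$ the projection as before):

$$
\big(\mathcal{L}\otimes\mathcal{M}^{-1}\big)\big|_{X\times\{t\}} \;\cong\; \mathcal{L}_t\otimes\mathcal{M}_t^{-1} \;\cong\; \mathcal{O}_{X\times\{t\}}
\quad\text{for all } t\in T
\;\;\Longrightarrow\;\; T_1 = T,
$$

so the seesaw theorem applies over all of $T$ and gives $\mathcal{L}\otimes\mathcal{M}^{-1} \cong p_2^{*}\mathcal{N}$ with $\mathcal{N} = (p_2)_*\big(\mathcal{L}\otimes\mathcal{M}^{-1}\big)$, that is, $\mathcal{L} \cong \mathcal{M}\otimes p_2^{*}\mathcal{N}$.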
