[Math] Is this lemma in elementary linear algebra new?

ag.algebraic-geometry, linear-algebra, projective-geometry

Is anyone familiar with the following, or anything close to it?

Lemma. Suppose $A$, $B$ are nonzero finite-dimensional vector spaces
over an infinite field $k$, and $V$ a subspace of $A\otimes_k B$
such that

(1) For every nonzero $a\in A$ there exists nonzero $b\in B$
such that $a\otimes b\in V$,

and likewise,

(2) For every nonzero $b\in B$ there exists nonzero $a\in A$
such that $a\otimes b\in V$.

Then

(3) $\dim_k(V) \geq \dim_k(A) + \dim_k(B) - 1$.
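For concreteness, the bound in (3) is attained. Here is a minimal example of an extremal $V$ (my illustration, not part of the original post), written in the matrix-space language $A \otimes B \cong M_{n,p}(F)$ used in the answer below:

```latex
% The "hook" space: matrices supported on the first row and first column.
% dim V = n + p - 1, and V satisfies both hypotheses:
%   for 0 \neq X in F^n:  X e_1^t lies in V, is rank 1, has column space FX;
%   for 0 \neq Y in F^p:  e_1 Y^t lies in V, is rank 1, has row space FY^t.
V \;=\; \{\, M \in M_{n,p}(F) \;:\; M_{ij} = 0 \text{ whenever } i > 1 \text{ and } j > 1 \,\},
\qquad \dim V = n + p - 1.
```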

Remarks: The idea of (1) and (2) is that the spaces $A$ and $B$
are minimal for "supporting" $V$; that is, if we replace
$A$ or $B$ by any proper homomorphic image, and we map $A\otimes B$ in
the obvious way into the new tensor product, then that map will
not be one-one on $V$. The result is equivalent to saying that if one is
given a finite-dimensional subspace $V$ of a tensor product $A\otimes B$
of arbitrary vector spaces, then one can replace $A$, $B$ by images
whose dimensions sum to $\leq \dim(V) + 1$ without hurting $V$.

In the lemma as stated, if we take for $A$ a dual space $C^*$, and
interpret $A\otimes B$ as $\mathrm{Hom}(C,B)$, then the hypothesis again
means that $C$ and $B$ are minimal as spaces "supporting" $V$, now as a
subspace of $\mathrm{Hom}(C,B)$; namely, that restricting to any proper
subspace of $C$, or mapping onto any proper homomorphic image of $B$,
will reduce the dimension of $V$.

In the statement of the lemma, where I assumed $k$ infinite,
I really only need its cardinality to be at least the larger
of $\dim_k A$ and $\dim_k B$.

The proof is harder than I would have thought; my write-up is 3.3K.
I will be happy to show it if the result is new.

Best Answer

This is a nice lemma: I know a good many similar results, but this one is unknown to me.

I believe it is suitable, as an answer, to give a proof that works with no restriction on the cardinality of the underlying field $F$. I will frame the answer in terms of matrix spaces. Thus, we have a linear subspace $V \subset M_{n,p}(F)$ such that, for every non-zero vector $X \in F^n$, the space $V$ contains a rank $1$ matrix with column space $FX$, and, for every non-zero vector $Y \in F^p$, the space $V$ contains a rank $1$ matrix with row space $FY^t$. Note that these assumptions are unchanged when $V$ is multiplied by an invertible matrix, be it on the left or on the right.

The proof works by induction on $p$. The case where $p=1$ or $n=1$ is obvious. Assume now that $p>1$ and $n>1$. The discussion is split into two cases, where the standard basis of $F^p$ is denoted by $(e_1,\dots,e_p)$.

Case 1: $V e_p=F^n$. Then, one writes every matrix $M$ of $V$ as $M=\begin{bmatrix} A(M) & C(M) \end{bmatrix}$ where $A(M) \in M_{n,p-1}(F)$ and $C(M) \in F^n$. With our assumptions, we find rank $1$ matrices $M_1,\dots,M_{p-1}$ in $V$ with respective row spaces $F e_1^t,\dots,F e_{p-1}^t$. These matrices are linearly independent (each $M_i$ is supported on column $i$ alone, with that column non-zero), and since their last columns vanish they all belong to the kernel of $V \ni M \mapsto C(M)$. Using the rank-nullity theorem, one deduces that $\dim V \geq (p-1)+\dim C(V)=(p-1)+n$.

Case 2: $V e_p \subsetneq F^n$. Multiplying $V$ on the left by a well-chosen invertible matrix, we lose no generality in assuming that $V e_p \subset F^{n-1} \times \{0\}$. In other words, every matrix $M$ of $V$ may be written as $$M=\begin{bmatrix} A(M) & C(M) \\ R(M) & 0 \end{bmatrix}$$ where $A(M)$ is an $(n-1) \times (p-1)$ matrix, $R(M)$ is a row matrix and $C(M)$ is a column matrix. Then, we note that $A(V)$ satisfies the same set of assumptions as $V$: indeed, if we take a non-zero row $L \in M_{1,p-1}(F)$, then we know that $V$ contains a rank $1$ matrix $M_1$ whose row space is spanned by $\begin{bmatrix} L & 1 \end{bmatrix}$, say $M_1 = U \begin{bmatrix} L & 1 \end{bmatrix}$ for some non-zero column $U \in F^n$. Since $M_1 e_p = U$ belongs to $V e_p \subset F^{n-1} \times \{0\}$, the last row of $M_1$ is zero, whence $A(M_1)$ is non-zero and its row space is $FL$. One works likewise (applying the column-space hypothesis to the vector $\begin{bmatrix} X' \\ 1 \end{bmatrix}$ for a non-zero $X' \in F^{n-1}$) to obtain the remaining part of the condition. Thus, by induction one finds $$\dim A(V) \geq (n-1)+(p-1)-1.$$ Finally, applying the column-space hypothesis to $e_n \in F^n$ yields a rank $1$ matrix $M_2 \in V$ with column space $F e_n$: only its last row is non-zero, so $A(M_2)=0$ and $C(M_2)=0$. Likewise, applying the row-space hypothesis to $e_p \in F^p$ yields a rank $1$ matrix $M_3 \in V$ with row space $F e_p^t$: only its last column is non-zero, so $A(M_3)=0$ and $R(M_3)=0$. As $R(M_2) \neq 0 = R(M_3)$ and $C(M_3) \neq 0 = C(M_2)$, the matrices $M_2$ and $M_3$ are linearly independent vectors in the kernel of $V \ni M \mapsto A(M)$. Using the rank-nullity theorem, one concludes that $$\dim V \geq 2+\dim A(V) \geq 2+(n-1)+(p-1)-1=n+p-1.$$
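Since this proof makes no cardinality assumption, the bound should hold even over the two-element field. As a sanity check, here is a small brute-force verification (my addition, not part of the argument above) that enumerates every subspace of $M_{2,2}(\mathrm{GF}(2))$ and confirms that each one satisfying both hypotheses has dimension at least $n+p-1 = 3$:

```python
# Brute-force check of the bound over GF(2) for 2x2 matrices.
# n = p = 2 and the field are my choices for illustration.
from itertools import combinations, product
from math import log2

n, p = 2, 2  # matrices are stored row-major as tuples of length n*p

def add(u, v):
    # entrywise addition over GF(2) is XOR
    return tuple(a ^ b for a, b in zip(u, v))

def span(gens):
    # span of a list of GF(2)-vectors, built incrementally
    S = {(0,) * (n * p)}
    for g in gens:
        S |= {add(s, g) for s in S}
    return frozenset(S)

def outer(x, y):
    # rank-1 matrix X Y^t as a row-major tuple (entries in {0,1})
    return tuple(x[i] & y[j] for i in range(n) for j in range(p))

nonzero_n = [v for v in product((0, 1), repeat=n) if any(v)]
nonzero_p = [v for v in product((0, 1), repeat=p) if any(v)]
nonzero_np = [v for v in product((0, 1), repeat=n * p) if any(v)]

# Every subspace of GF(2)^4 is the span of at most 4 generators.
subspaces = {span(gens) for r in range(n * p + 1)
             for gens in combinations(nonzero_np, r)}

ok = True
for V in subspaces:
    # hypothesis (1): every nonzero X is the column space of a rank 1 in V
    h1 = all(any(outer(x, y) in V for y in nonzero_p) for x in nonzero_n)
    # hypothesis (2): every nonzero Y is the row space of a rank 1 in V
    h2 = all(any(outer(x, y) in V for x in nonzero_n) for y in nonzero_p)
    if h1 and h2 and log2(len(V)) < n + p - 1:
        ok = False
print("dim V >= n + p - 1 for all qualifying V:", ok)
```

The script enumerates all 67 subspaces of $\mathrm{GF}(2)^4$ and should report `True`, in agreement with the cardinality-free proof.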
