Linear Algebra – Why No Determinant Generalization for Infinite Dimensional Vector Spaces?

Tags: determinant, linear algebra

This question is meant to deepen my understanding of why the concept of a determinant does not extend to an infinite dimensional vector space. I am already aware of a couple of facts that hint at why this is so:

  • The determinant of an endomorphism of a finite dimensional vector space of dimension $n$ can be defined in a basis-free way as the composition of the canonical maps
    $$\mathrm{End}(V)\xrightarrow{\phi} \mathrm{End}(\Lambda^n V)\xrightarrow{\psi} K,$$
    where $\phi(A)=((x_1\wedge\cdots\wedge x_n)\mapsto(Ax_1\wedge\cdots\wedge Ax_n))$ and $\psi$ is the inverse of the map $\psi^{-1}:K\rightarrow \mathrm{End}(\Lambda^n V)$ given by $\psi^{-1}(\lambda)=(x\mapsto \lambda x)$. This construction reveals why finite
    dimension is essential: if $V$ is infinite dimensional, then $\Lambda^n V$ is infinite dimensional for every $n\ge 1$, so $\mathrm{End}(\Lambda^n V)$ is never one-dimensional and the last map $\psi$ fails to exist. (A worked $2\times 2$ computation is given after this list.)
  • Another reason the determinant fails to extend to infinite dimensional spaces is that there are injective linear endomorphisms which are not invertible. Such a map has a left inverse but no right inverse. An example is the pair of right-shift and left-shift maps
    $$(x_1,x_2,\ldots)\mapsto(0,x_1,x_2,\ldots)\qquad (x_1,x_2,\ldots)\mapsto(x_2,x_3,\ldots),$$
    where the left-shift is a left inverse of the right-shift, yet neither map is invertible. A 'good' generalization of the determinant would assign a nonzero value to the injective right-shift and zero to the left-shift, whose kernel is nontrivial; but then the determinant of this one-sided inverse is not the inverse of the determinant, so multiplicativity breaks down (spelled out after the list).
  • Another way to see that the concept does not generalize: if a determinant of an operator existed, you might expect it to be the product of the eigenvalues, but a linear endomorphism of an infinite dimensional space can have infinitely many eigenvalues, and in a purely algebraic setting such an infinite product has no meaning (an example follows the list).
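
To make the first point concrete, here is the construction carried out for $n = 2$ (a standard computation, included only for illustration). With a basis $e_1, e_2$ of $V$ and $Ae_1 = ae_1 + ce_2$, $Ae_2 = be_1 + de_2$,
$$\phi(A)(e_1\wedge e_2) = Ae_1\wedge Ae_2 = (ae_1 + ce_2)\wedge(be_1 + de_2) = (ad - bc)\,e_1\wedge e_2,$$
so $\psi(\phi(A)) = ad - bc$, the familiar determinant.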
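
Spelling out the second point: writing $R$ for the right-shift and $L$ for the left-shift, we have $LR = I$ while $RL(x_1,x_2,\ldots) = (0,x_2,x_3,\ldots)$, so $RL \neq I$. Any multiplicative determinant would give
$$\det(L)\det(R) = \det(LR) = \det(I) = 1,$$
forcing both values to be nonzero, even though $\det(L)$ 'ought' to be $0$ because $L$ annihilates $(1,0,0,\ldots)$.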
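
As an example for the third point, the diagonal operator $D(x_1,x_2,x_3,\ldots) = (x_1, 2x_2, 3x_3, \ldots)$ has every positive integer as an eigenvalue, so the would-be 'product of the eigenvalues' $\prod_{j\ge 1} j$ has no meaning in a purely algebraic setting.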

The previous fact suggests that we could define the determinant on a specific subset of $\mathrm{Aut}(V)$, namely those automorphisms which fix all but finitely many 1-dimensional subspaces of $V$. But how far can this generalization be pushed? Once the determinant has been constructed for a finite dimensional vector space, it can be
interpreted as a nontrivial map which restricts to a group homomorphism from $\mathrm{Aut}(V)$ to $K^\times$ and assigns $0$ to all other endomorphisms. Can we show that there is no nontrivial homomorphism from $\mathrm{Aut}(V)$ to $K^\times$ when $V$ is infinite-dimensional, much as one can show that the sign homomorphism does not extend to $S_{\Bbb N}$?
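
Regarding the suggested restriction, here is a sketch of one way to make it precise (my own interpretation, under the assumption that "fixes all but finitely many 1-dimensional subspaces" is read as $A - I$ having finite rank). For such an $A$, pick any finite-dimensional subspace $W$ with $\operatorname{im}(A - I)\subseteq W$; then $W$ is $A$-invariant, since $Aw = w + (A-I)w \in W$, and one can set
$$\det A := \det\!\big(A|_W\big).$$
If $W \subseteq W'$, then $A$ acts as the identity on $W'/W$, so $\det(A|_{W'}) = \det(A|_W)$ and the value does not depend on the choice of $W$; one can also check that this $\det$ is multiplicative on the subgroup of such automorphisms.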

Best Answer

My suspicion is that difficulties arise if one restricts attention to purely algebraic infinite-dimensional vector spaces and tries to generalize the determinant from its algebraic construction alone.

However, there are analytic generalizations of determinants. Some of them are very deep and I am no expert on them, but there is at least one very concrete generalization I am familiar with.

Definition: Let $A$ be a finite-rank operator on a Hilbert space. Then define $$\det(I - A) = \prod_j (1 - \lambda_j(A)),$$ where the $\lambda_j(A)$ are the nonzero eigenvalues of $A$, counted with algebraic multiplicity. If $A$ is instead a trace-class operator on a Hilbert space, then define $$\det(I - A) = \lim_{k \rightarrow \infty} \det(I - A_k),$$ where $\{A_k\}$ is a sequence of finite-rank operators converging to $A$ in the trace norm, i.e. $\|A - A_k\|_{1} \rightarrow 0$.
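
To see the limiting definition in action (I believe this construction is usually called the Fredholm determinant), here is a minimal numerical sketch of my own, using NumPy and the diagonal trace-class operator $A = \operatorname{diag}(2^{-1}, 2^{-2}, 2^{-3},\ldots)$ on $\ell^2$ as the example; its truncations $A_k$ are finite rank and converge to $A$ in trace norm:

```python
import numpy as np

def truncated_det(k):
    """det(I - A_k), where A_k keeps the first k diagonal entries of A = diag(1/2, 1/4, ...)."""
    eigs = 0.5 ** np.arange(1, k + 1)   # eigenvalues 1/2, 1/4, ..., 1/2^k
    A_k = np.diag(eigs)                 # finite-rank truncation of A
    return np.linalg.det(np.eye(k) - A_k)

for k in [5, 10, 20, 40]:
    print(k, truncated_det(k))
```

The printed values stabilise quickly near $\prod_{j\ge 1}(1 - 2^{-j}) \approx 0.2887881$, illustrating that the limit over finite-rank approximations is well defined for this operator.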
