Give a basis-independent proof of positive definiteness of the transpose/adjoint

linear-algebra, positive-semidefinite

To show that the transpose (or dual map, $S^T$) of a positive operator $S$ on a finite-dimensional inner product space is positive, I could use the following two basis-dependent proofs:

One such proof is here, and another uses the fact that $S$ is self-adjoint, hence diagonalizable: since a diagonal matrix is its own transpose, the transpose of $S$ is also positive.
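(Spelled out in matrices, as a sanity check: since $S$ is self-adjoint, $S=UDU^{\dagger}$ for some unitary $U$ and diagonal $D$ with nonnegative entries, so
$$S^T=(UDU^{\dagger})^T=\overline{U}\,D\,U^T=\overline{U}\,D\,\overline{U}^{\dagger},$$
a unitary conjugate of $D$, hence again positive; in the real case $S^T=S$ outright.)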

However, I'm looking to prove this in a basis-independent manner. I've come up with the following:

Let $V$ be a finite-dimensional inner product space, $V^*$ its dual space, and $S$ a positive operator on $V$. I use $\dagger$ for the adjoint of an operator and for the complex conjugate of a field element.

$\forall \phi \in V^*, \forall v \in V$, $\langle (S^T\phi)(v), \phi(v) \rangle = \langle \phi(S(v)), \phi(v) \rangle = (\phi(v))^{\dagger}\,\phi(S(v))$.

By the Riesz representation theorem, $\exists u \in V$ s.t. $\forall v \in V$, $\phi(v) = \langle v, u \rangle$.

Thus $\langle \phi(S(v)), \phi(v) \rangle = \langle S(v), \phi^{\dagger}(\phi(v)) \rangle = \langle S(v), \phi(v)\,u \rangle = (\phi(v))^{\dagger} \langle S(v), u \rangle$.

Any hints on how I can proceed further, or on alternative methods? I don't see anything else that can be said past this point, or where the positivity of $S$ can be used.

Best Answer

A simple way to think about this is to identify $V$ and $V^*$. Then the proof is slick, but it conceals the difference between $V$ and $V^*$ and the dependence of their identification on the inner product: $$(S^Tx,x)=(x,Sx)=(Sx,x)>0\quad\text{for }x\neq0.$$
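(As a coordinate sanity check, in the real case for simplicity: $x^{T}Sx$ is a $1\times1$ matrix and so equals its own transpose, hence
$$(S^{T}x,x)=x^{T}S^{T}x=(x^{T}S^{T}x)^{T}=x^{T}Sx=(x,Sx),$$
which is the same identity in matrix form.)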

In some cases, e.g. in tensor calculus and differential geometry, it is sometimes important to make this identification explicit (it corresponds to raising and lowering indices in coordinate notation). To make it explicit, it is convenient to consider $S^*:V^*\to V^*$ instead of $S^T$, and to write $\langle\phi,x\rangle$ instead of $\phi(x)$ for the value of $\phi\in V^*$ on $x\in V$. The pairing $\langle\cdot,\cdot\rangle$ is called the canonical pairing, and it saves us from writing a lot of nested parentheses. Then $S^*$ is defined by $\langle S^*\phi,x\rangle:=\langle\phi,Sx\rangle$, and a unique functional $\phi$ corresponds to each $x$ via $\langle\phi,y\rangle=(x,y)$. There is also the induced inner product on $V^*$, with a similar definition and properties. Now the calculation goes like this: $$(S^*\phi,\phi)=\langle S^*\phi,x\rangle=\langle\phi,Sx\rangle=(x,Sx)=(Sx,x)>0\quad\text{for }\phi\neq0.$$
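(In coordinates, purely to connect with the matrix picture and not needed for the argument: if $e_1,\dots,e_n$ is a basis of $V$ with dual basis $e^1,\dots,e^n$ of $V^*$, and $Se_j=\sum_i S_{ij}e_i$, then
$$\langle S^*e^i,e_j\rangle=\langle e^i,Se_j\rangle=S_{ij},$$
so $S^*e^i=\sum_j S_{ij}e^j$, i.e. the matrix of $S^*$ in the dual basis is exactly the transpose $S^T$.)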

There is a way to make this independent of an inner product as well: consider not operators on the same space, but operators from a space to its dual, $S:V\to V^*$. Such operators are used in tensor calculus (to contract indices) and in functional analysis, e.g. by Lions et al.
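Concretely, such an operator is the same thing as a bilinear form on $V$: choosing a basis (only to see what the object is; the argument below uses none), $Sx\in V^*$ acts on $y\in V$ as
$$\langle Sx,y\rangle=y^{T}Ax$$
for some matrix $A$, and the positivity condition below says exactly that $A$ is a positive definite matrix.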

It is natural to call such an operator positive definite when $\langle Sx,x\rangle>0$ for $x\neq0$. Defining $S^*$ by $\langle S^*x,y\rangle=\langle Sy,x\rangle$, we get $S^*:V^{**}\to V^*$. Note how neither an inner product, nor a basis, nor any other extra structure has been needed so far. Moreover, $V$ can be identified with a subspace of its double dual $V^{**}$, also canonically; see injection into the double-dual. Namely, $x\mapsto\langle\cdot,x\rangle$ defines a functional on $V^*$ canonically associated to $x$. For finite-dimensional spaces, this map is onto all of $V^{**}$; in general, a space for which this happens is called reflexive. Now we recover the simplicity: $$\langle S^*x,x\rangle=\langle Sx,x\rangle>0\quad\text{for }x\neq0.$$
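(In the matrix picture above, again purely illustrative: $\langle S^*x,y\rangle=\langle Sy,x\rangle=x^{T}Ay=y^{T}A^{T}x$, so $S^*$ has matrix $A^{T}$, and the last display is just the scalar identity
$$x^{T}A^{T}x=(x^{T}Ax)^{T}=x^{T}Ax>0\quad\text{for }x\neq0.)$$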