Inner Products – Intuition of Adjoint Operator

adjoint-operators, inner-products

A linear operator $T$ on an inner product space $V$ is said to have an adjoint operator $T^*$ on $V$ if $⟨T(u),v⟩=⟨u,T^*(v)⟩$ for every $u,v\in V$

I know how to prove that this operator exists (uniquely), using the fact that

If $V$ is a finite-dimensional inner product space and $f$ is a linear functional on $V$, then there exists a unique $y\in V$ such that $f(x)=⟨x,y⟩$ for all $x\in V$.

But $\color{red}{\text{I don't have an intuitive understanding of the adjoint operator}}$. What does it actually do? And is there any relation between $T^*$ and the transpose (or the dual map)?
Thanks for your time, and thanks in advance.

Best Answer

Let me first recall the definition of the transpose. Let $T: \ X \longrightarrow Y$ be a linear operator from $X$ to $Y$. The dual $X^*$ is the space of (continuous) linear functionals on $X$, i.e., linear maps from $X$ to $\mathbb{C}$. So if $x\in X^*$ and $v \in X$, then $x(v) \in \mathbb{C}$. We may even denote the pairing of $x$ with $v$ as

$$ x(v):= (x,v). \ \ \quad \ \ (0) $$

Now the transpose of $T$ is the linear operator $ T^t : Y^* \longrightarrow X^*$ defined as

$$ (y, Tv) = ( T^t y, v) \ \ \quad \ \ (1) $$

for all $ v\in X, \ y \in Y^*$. Now take bases of $X$ and $Y$ (say $e_i, \ f_j$) and the corresponding dual bases of $X^*$ and $Y^*$ (satisfying $ (\eta_i , e_j) = \delta_{i,j}$ and so on). Assume $\dim X = n$ and $\dim Y = m$. So, for example, $(0)$ becomes

$$ (x,v) = \sum_{i} x_i v_i \ \ \quad \ \ (2) $$

The matrix of $T$ in these bases is defined as

$$ Te_i = \sum_{j=1}^m T_{j,i} f_j $$

(for $i=1,2,\ldots, n$). Using the convention above we have

$$ Tv = T\sum_i v_i e_i = \sum_{i,j} v_i T_{j,i} f_j $$

so that the components of $Tv$ are given by $ (Tv)_k = \sum_i T_{k,i} v_i$. This is the usual matrix-vector multiplication rule.
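
If it helps to see this concretely, here is a minimal NumPy sketch (an illustration only, with an arbitrary matrix and vector of my choosing) checking that the component formula $(Tv)_k = \sum_i T_{k,i} v_i$ reproduces the built-in matrix-vector product:

```python
import numpy as np

m, n = 3, 4
T = np.arange(m * n, dtype=float).reshape(m, n)   # an arbitrary m x n matrix
v = np.arange(n, dtype=float)                     # an arbitrary vector of X-components

# (T v)_k = sum_i T_{k,i} v_i, written out index by index ...
Tv_components = np.array([sum(T[k, i] * v[i] for i in range(n)) for k in range(m)])

# ... agrees with the built-in matrix-vector product T @ v
print(np.allclose(Tv_components, T @ v))   # True
```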

You can see this by plugging the expansion into $(1)$: $(y, Tv) = \sum_{j,i} y_j T_{j,i} v_i = \sum_i \Big(\sum_j T_{j,i}\, y_j\Big) v_i$, so the transpose is associated to the matrix $(T^t)_{i,j}=T_{j,i}$. Note that $T$ corresponds to an $m\times n$ matrix, while $T^t$ corresponds to an $n\times m$ one.
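
As a quick numerical sanity check of $(1)$ in these coordinates (again just a sketch with arbitrary real data; $y$ is represented by its components in the dual basis of $Y^*$):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 3, 4
T = rng.normal(size=(m, n))      # matrix of T in the chosen bases
v = rng.normal(size=n)           # components of v in X
y = rng.normal(size=m)           # components of y in Y* (dual basis)

# the defining relation (1): (y, T v) = (T^t y, v), with (x, v) = sum_i x_i v_i
lhs = y @ (T @ v)
rhs = (T.T @ y) @ v
print(np.isclose(lhs, rhs))          # True

# applying T^t is "row vector times matrix": y @ T equals T.T @ y
print(np.allclose(y @ T, T.T @ y))   # True
```

The last line anticipates the point made below: the same array of numbers acts on column vectors from the left and on row vectors from the right.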

However, what ultimately allows this to work, or alternatively, what makes definition $(1)$ work, is the following simple fact.

Once you have a matrix of $m\times n$ numbers $T_{i,j}$, you can form $m$-dimensional vectors via $\sum_{j=1}^n T_{k,j} v_j$ ($k=1,\ldots, m$), but also $n$-dimensional vectors via $\sum_{i=1}^m T_{i,k} w_i$ ($k=1,\ldots, n$).

The latter operation corresponds (using the usual matrix-vector multiplication rules) to

$$ (w_1,w_2, \ldots w_m) \left(\begin{array}{cccc} T_{1,1} & T_{1,2} & \cdots & T_{1,n}\\ T_{2,1}\\ \vdots\\ T_{m,1} & & \cdots & T_{m,n} \end{array}\right) $$

according to the usual rules (row vector times matrix). To form the adjoint we proceed in a very similar way. However, now we use the scalar product, rather than the dual pairing, to identify vectors with linear functionals. That is, instead of $(2)$ we use the scalar product $\langle \bullet, \bullet \rangle$:

$$ \langle x, v \rangle = \sum_{j=1}^n x_j^* v_j $$

Then the construction of the adjoint is the same as that of the transpose, but you pick up an additional complex conjugation.

In components, of course, the matrix associated to the adjoint satisfies:

$$ (T^*)_{i,j} = T_{j,i}^* $$
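
To tie it together, here is a short NumPy sketch of the end result (an illustration with arbitrary complex data, not part of the argument above): the matrix of the adjoint is the conjugate transpose, and it satisfies $\langle w, Tv\rangle = \langle T^* w, v\rangle$ for the scalar product $\langle x, v\rangle = \sum_j x_j^* v_j$. Note that `np.vdot` conjugates its first argument, which matches that convention.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 4
T = rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))   # T : C^n -> C^m
v = rng.normal(size=n) + 1j * rng.normal(size=n)
w = rng.normal(size=m) + 1j * rng.normal(size=m)

T_adj = T.conj().T               # (T*)_{i,j} = conj(T_{j,i}): the conjugate transpose

# np.vdot(a, b) = sum_j conj(a_j) * b_j, i.e. the scalar product <a, b> used above
lhs = np.vdot(w, T @ v)          # <w, T v>
rhs = np.vdot(T_adj @ w, v)      # <T* w, v>
print(np.isclose(lhs, rhs))      # True: the defining relation of the adjoint

# equivalently <T v, w> = <v, T* w>, the form used in the question
print(np.isclose(np.vdot(T @ v, w), np.vdot(v, T_adj @ w)))   # True
```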