Differential Geometry – Definition of a Tensor for a Manifold

differential-geometry, manifolds, tensors

While reading Nakahara's Geometry, Topology and Physics, I came across the following definition of a tensor.

A tensor $T$ of type $(p, q)$ is a multilinear map that maps
$p$ dual vectors and $q$ vectors to $\mathbb{R}$. While generalizing to manifolds, the dual vectors are replaced by elements of the cotangent space, and the vectors by elements of the tangent space. How is this definition equivalent to the definition of a tensor (which I knew before) as a linear multi-dimensional array? When I think of tensors, I think of them as generalizations of matrices to many dimensions. More specifically, how do I show that a linear map between two vector spaces is a tensor of rank (1,1). In the above definition, I can see a map to $\mathbb{R}$, while this is a map between two vector spaces.

Best Answer

To see that a linear map of vector spaces is a $(1,1)$-tensor, realize that such an object eats a vector $X$ and a covector $\omega$ (a linear form to $\mathbb R$!) and gives you a number, i.e. $$\mathbf T(X,\omega)=\sum_{i,j} X^i\omega_j\mathbf T(\partial_i,dx^j)=\sum_{i,j} \omega_jT^j_iX^i,\;\text{where }T^j_i:=\mathbf T(\partial_i,dx^j)\in\mathbb R.$$ In this formula one can see that a $(1,1)$-tensor "is" a matrix $T^j_i$ which takes a vector $X$ and gives a vector $\sum_i T^j_iX^i$, which is precisely the matrix characterization of a linear map between vector spaces, namely the linear map $\mathbf T(\_,\cdot):V\rightarrow V$ given by $X\mapsto\mathbf T(X,\cdot)$.
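The two viewpoints can be checked numerically; here is a minimal sketch (the components of $T$, $X$ and $\omega$ are illustrative, not from the question):

```python
import numpy as np

# Hypothetical 3x3 component matrix T^j_i of a (1,1)-tensor
# (row index j pairs with the covector, column index i with the vector).
T = np.array([[2.0, 0.0, 1.0],
              [1.0, 3.0, 0.0],
              [0.0, 1.0, 1.0]])

def T_as_bilinear(X, omega):
    """T(X, omega) = sum_{i,j} omega_j T^j_i X^i  -- a single number."""
    return omega @ T @ X

def T_as_linear_map(X):
    """X  |->  T(X, .)  -- the vector with components sum_i T^j_i X^i."""
    return T @ X

X = np.array([1.0, 2.0, -1.0])
omega = np.array([0.5, -1.0, 2.0])

# Feeding omega to the output vector reproduces the bilinear evaluation,
# so the matrix and the (1,1)-tensor carry the same information.
assert np.isclose(T_as_bilinear(X, omega), omega @ T_as_linear_map(X))
```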

Thinking of tensors as multi-dimensional arrays is indeed a valid conceptual picture, as long as you imagine them acting linearly on vectors. You may be interested in the long answer I gave at another question concerning the concept and construction of covectors on manifolds (which then generalizes to tensors). You can think of your multi-dimensional multilinear arrays as having entries that depend smoothly on the points of the manifold, in such a way that the whole object remains linear when acting on smooth vector fields (i.e. sections of the tangent bundle). For this to be true, their components have to transform in a particular way (generalizing the transformation law derived in the linked answer).

If covectors are smoothly varying linear forms $\omega\vert_p :T_pM\rightarrow\mathbb R$ such that $\omega (aX+bY)=a\omega(X)+b\omega(Y)\in\mathbb R$ for all $a,b\in\mathbb R$ and any smooth vector fields $X,Y\in TM$, then they are completely determined, by linearity (check!), by their action on any coordinate basis of any chart: $$\omega(\partial_i)=:\omega_i\Rightarrow \omega (X)=\sum_i X^i\omega(\partial_i)=\sum_i X^i\omega_i\,.$$ Since $X^i$ and $\omega_i$ are, by definition of vectors and covectors, smooth scalar fields on $M$, we have checked that such $\omega$ are indeed linear forms $TM\rightarrow\mathbb R$. Now, define covariant $k$-tensors similarly, generalizing from pointwise multilinear forms $\Omega\vert_p:\otimes^k T_pM\rightarrow\mathbb R$, that is to say, multilinear functionals on $k$ vector fields: $$\Omega(aX_1+bY_1,X_2,...,X_k) =a\Omega(X_1,X_2,...,X_k)+ b\Omega(Y_1,X_2,...,X_k)\,,\text{ and similarly for the other slots}.$$ Because of this multilinearity, their action on any number of vectors reduces to their action on the coordinate basis: $$\Omega(\partial_{i_1},...,\partial_{i_k}) =\Omega_{i_1...i_k}\Rightarrow \Omega(X_1,...,X_k)=\sum_{i_1,...,i_k}X^{i_1}_1\cdots X^{i_k}_k\Omega_{i_1...i_k}\,.$$ In order to extend these algebraic (co)-tensors at every point to tensor fields on the manifold, their multi-array components $\Omega_{i_1...i_k}(P)$ must be smooth functions of the points $P\in M$, i.e.
$\Omega_{i_1...i_k}:M\rightarrow\mathbb R$, but for the whole array to behave coherently and multilinearly, since vectors transform between charts by the basis transformation, their components must patch together as: $$\partial'_i=\sum_j\frac{\partial x^j}{\partial y^i}\partial_j\Rightarrow \Omega'_{i_1...i_k}:=\Omega(\partial'_{i_1},...,\partial'_{i_k})=\sum_{j_1,...,j_k}\frac{\partial x^{j_1}}{\partial y^{i_1}}\cdots\frac{\partial x^{j_k}}{\partial y^{i_k}}\Omega_{j_1...j_k}\,.$$ This is the reason for the often confusing fact that the components of a covector transform like the basis of vectors, and the components of a vector like the basis of covectors!

If you generalize this to include contravariant $k$-tensors $A\vert_p:\otimes^k T_p^*M\rightarrow\mathbb R$, it is easy to deduce that their transformation between charts involves the opposite Jacobian matrices. Finally you put all this together to define $(r,s)$-tensors $T\vert_p:\otimes^r T_pM\otimes^s T_p^*M\rightarrow\mathbb R$, multilinear objects which take $r$ vectors and $s$ covectors and give numbers; make them into tensor fields by letting their array components vary smoothly on $M$, and ensure that they patch on overlapping charts so as to preserve linearity by: $$T'\,^{i_1...i_s}_{j_1...j_r}=\sum_{l_1,...,l_s}\sum_{k_1,...,k_r}\frac{\partial x^{k_1}}{\partial y^{j_1}}\cdots\frac{\partial x^{k_r}}{\partial y^{j_r}}\cdot\frac{\partial y^{i_1}}{\partial x^{l_1}}\cdots\frac{\partial y^{i_s}}{\partial x^{l_s}}\,T^{l_1...l_s}_{k_1...k_r}\,.$$ Therefore, indeed you can think of tensors as multilinear multi-dimensional arrays of smooth real functions on every chart of your manifold, such that all of them patch together nicely on the charts' intersections (think of charts as coordinate systems, like observers in physics: each observer has a bunch of functions making up these arrays, and any two observers agree they are talking about the same array-object by checking that its action on any input is the same regardless of their coordinate transformations). So in the end you realize that you could have defined tensors as sections of the tensor product bundle of several copies of the tangent and cotangent bundles, since that is the most geometric, intrinsic and coordinate-independent definition possible. By taking only anti-symmetric covariant tensors you get the differential forms of the other answer.
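The mixed case works the same way. A sketch for a $(1,1)$-tensor at a point, with one inverse Jacobian for the upper index and one Jacobian for the lower index (components are random and illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
T = rng.standard_normal((n, n))   # mixed components T^l_k (upper l, lower k)
J = rng.standard_normal((n, n))   # J[k, j] = dx^k / dy^j (assumed invertible)
Jinv = np.linalg.inv(J)           # Jinv[i, l] = dy^i / dx^l

# (1,1) transformation law:
#   T'^i_j = sum_{l,k} (dy^i/dx^l)(dx^k/dy^j) T^l_k
T_new = np.einsum('il,kj,lk->ij', Jinv, J, T)

# Vector and covector components in the new chart:
X = rng.standard_normal(n)
omega = rng.standard_normal(n)
X_new = Jinv @ X          # contravariant: inverse Jacobian
omega_new = J.T @ omega   # covariant: Jacobian

# The scalar T(X, omega) does not depend on the chart:
assert np.isclose(np.einsum('lk,l,k->', T, omega, X),
                  np.einsum('ij,i,j->', T_new, omega_new, X_new))
```

The same `einsum` pattern extends to any $(r,s)$: one `Jinv` factor per upper index and one `J` factor per lower index, exactly as in the displayed transformation law.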
