A tensor of type $(r,s)$ on a vector space $V$ is a $C$-valued function $T$ on $V\times V\times\cdots\times V\times W\times W\times\cdots\times W$ (with $r$ copies of $V$ and $s$ copies of $W$, where $W$ is the dual space of $V$) which is linear in each argument. We take $(0,0)$-tensors to be scalars, as a matter of convention. The interpretation of $(r,0)$-tensors is straightforward, since they are just multilinear functionals (as a special case, a $(1,0)$-tensor is a covector, i.e. an element of the dual space). We can interpret $(1,1)$-tensors as follows: $A(v,f) \equiv f(Av)$. Say we have a linear operator $R$; then we can turn $R$ into a second-rank tensor $T$ by $T(v,w) \equiv v \cdot Rw$, where $\cdot$ denotes the usual dot product of vectors. If we compute the components of $T$, we find that they are the same as the components of the linear operator $R$. So far so good. But I can't understand the interpretations of other $(r,s)$-tensors. For example, I found on Wikipedia that a $(0,1)$-tensor is interpreted as a vector, a $(0,2)$-tensor as a bivector, and in general a $(0,s)$-tensor as an $s$-vector; or a $(2,1)$-tensor as the cross product, and so on. I would like to see how tensors are interpreted in general. Is it possible to show these interpretations in the same way as I did for the $(1,1)$-tensor?
[Math] Interpretation of $(r,s)$ tensor
differential-geometry, linear-algebra, tensor-rank, tensors, vector-spaces
Related Solutions
To see that a linear map of vector spaces is a $(1,1)$-tensor, realize that such an object eats a vector $X$ and a covector $\omega$ (a linear form into $\mathbb R$!) and gives you a number, i.e. $$\mathbf T(X,\omega)=\sum_{i,j} X^i\omega_j\mathbf T(\partial_i,dx^j)=\sum_{i,j} \omega_jT^j_iX^i,\;\text{where }T^j_i:=\mathbf T(\partial_i,dx^j)\in\mathbb R.$$ From the previous formula one can see that a $(1,1)$-tensor "is" a matrix $T^j_i$ which takes a vector $X$ and gives a vector $\sum_i T^j_iX^i$, which is precisely the matrix characterization of a linear map between vector spaces, namely the linear map $\mathbf T(\_\,,\cdot):V\rightarrow V$ given by $X\mapsto\mathbf T(X,\cdot)$.
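If it helps, here is a quick numerical sanity check of that formula, as a sketch in numpy (the array $T$ and the components of $X$ and $\omega$ are arbitrary placeholders, and $\partial_i$, $dx^j$ are taken to be the standard basis and its dual):

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((3, 3))   # T[i, j] plays the role of T^j_i = T(∂_i, dx^j)
X = rng.standard_normal(3)        # vector components X^i
omega = rng.standard_normal(3)    # covector components ω_j

# Tensor evaluation: T(X, ω) = Σ_{i,j} X^i ω_j T^j_i
as_tensor = np.einsum('i,j,ij->', X, omega, T)

# Same number via the associated linear map X ↦ Σ_i T^j_i X^i, then pairing with ω
as_linear_map = omega @ (T.T @ X)   # (T.T @ X)[j] = Σ_i T[i, j] X^i

print(np.allclose(as_tensor, as_linear_map))  # True
```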
Thinking of tensors as multi-dimensional arrays is indeed a good conceptual picture of what they are, as long as you imagine them acting linearly on vectors. You may be interested in this long answer I gave at another question concerning the concept and construction of covectors on manifolds (and thus generalizing to tensors). You can think of your multi-dimensional, multilinear arrays as having entries that depend smoothly on the points of the manifold, in such a way that the whole object remains linear when acting on smooth vector fields (i.e. sections of the tangent bundle). For this to be true, their components have to transform in a particular way (generalizing the transformation law derived in the linked answer above).
If covectors are smoothly varying linear forms $\omega\vert_p :T_pM\rightarrow\mathbb R$ such that $\omega (aX+bY)=a\omega(X)+b\omega(Y)\in\mathbb R$ for all $a,b\in\mathbb R$ and any smooth vector fields $X,Y\in TM$, then they are completely determined, by linearity (check!), by their action on any coordinate basis of any chart: $$\omega(\partial_i)=:\omega_i\Rightarrow \omega (X)=\sum_i X^i\omega(\partial_i)=\sum_i X^i\omega_i\,.$$ Since $X^i$ and $\omega_i$ are, by the definition of vectors and covectors, smooth scalar fields on $M$, we have checked that such $\omega$ are indeed linear forms $TM\rightarrow\mathbb R$. Now define covariant $k$-tensors by similarly generalizing the pointwise multilinear forms $\Omega\vert_p:\otimes^k T_pM\rightarrow\mathbb R$, that is to say, multilinear functionals on $k$ vector fields: $$\Omega(aX_1+bY_1,X_2,...,X_k) =a\Omega(X_1,X_2,...,X_k)+ b\Omega(Y_1,X_2,...,X_k),\text{ and similarly for the other slots}.$$ Because of this multilinearity, their action on any set of vectors reduces to their action on the coordinate basis: $$\Omega(\partial_{i_1},...,\partial_{i_k}) =\Omega_{i_1...i_k}\Rightarrow \Omega(X_1,...,X_k)=\sum_{i_1,...,i_k}X^{i_1}_1\cdots X^{i_k}_k\Omega_{i_1...i_k}\,.$$ In order to extend these algebraic (co)tensors at every point to tensor fields on the manifold, their multi-array components $\Omega_{i_1...i_k}(P)$ must be smooth functions of the points $P\in M$, i.e. $\Omega_{i_1...i_k}:M\rightarrow\mathbb R$; but for the whole array to behave coherently and multilinearly, since vectors transform between charts by the basis transformation, the components must patch together as: $$\partial'_i=\sum_j\frac{\partial x^j}{\partial y^i}\partial_j\Rightarrow \Omega'_{i_1...i_k}:=\Omega(\partial'_{i_1},...,\partial'_{i_k})=\sum_{j_1,...,j_k}\frac{\partial x^{j_1}}{\partial y^{i_1}}\cdots\frac{\partial x^{j_k}}{\partial y^{i_k}}\Omega_{j_1...j_k}\,.$$ This is the reason for the (all too often confusing) fact that the components of a covector transform like the basis of vectors, and the components of a vector like the basis of covectors!
If you generalize this to include contravariant $k$-tensors $A\vert_p:\otimes^k T_p^*M\rightarrow\mathbb R$, it is easy to deduce that their transformation between charts uses the opposite Jacobian matrices. Finally, you put all this together to define $(r,s)$-tensors $T\vert_p:\otimes^r T_pM\otimes^s T_p^*M\rightarrow\mathbb R$, multilinear objects that take $r$ vectors and $s$ covectors and give numbers; you make them into tensor fields by letting their array components vary smoothly on $M$ and ensuring that they patch on overlapping charts so as to preserve linearity: $$T'^{\,i_1...i_s}_{\;\;j_1...j_r}=\sum_{l_1,...,l_s}\sum_{k_1,...,k_r}\frac{\partial x^{k_1}}{\partial y^{j_1}}\cdots\frac{\partial x^{k_r}}{\partial y^{j_r}}\cdot\frac{\partial y^{i_1}}{\partial x^{l_1}}\cdots\frac{\partial y^{i_s}}{\partial x^{l_s}}\,T^{l_1...l_s}_{k_1...k_r}\,.$$ Therefore, you can indeed think of tensors as general multilinear multi-dimensional arrays of smooth real functions on every chart of your manifold, such that all of them patch together nicely on the charts' intersections (think of charts as coordinate systems, like observers in physics: each observer has a bunch of functions making up these arrays, and any two observers agree that they are talking about the same array-object by checking that its action on any input is the same regardless of their coordinate transformations). So actually, you end up realizing that you could have defined tensors as sections of the tensor product bundle of several copies of the tangent and cotangent bundles, since that is the most geometric, intrinsic and coordinate-independent definition possible. By taking only antisymmetric covariant tensors you get the differential forms of the other answer.
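To make the patching rule concrete, here is a sketch of how one could check it numerically with numpy at a single point, for a constant linear change of coordinates $y = Ax$ (so the Jacobian matrices are just the constant matrices $A$ and $A^{-1}$); the tensor is taken to have one contravariant and two covariant indices, $T^i{}_{jk}$, and all numerical values are placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3

# Constant linear change of coordinates y = A x, so ∂y/∂x = A and ∂x/∂y = A^{-1}.
A = rng.standard_normal((n, n)) + 3 * np.eye(n)   # comfortably invertible placeholder
Ainv = np.linalg.inv(A)

# Components T^i_{jk} of a tensor with one covector slot and two vector slots.
T = rng.standard_normal((n, n, n))

# Transformation law: T'^i_{jk} = (∂y^i/∂x^l)(∂x^m/∂y^j)(∂x^n/∂y^k) T^l_{mn}
T_new = np.einsum('il,mj,nk,lmn->ijk', A, Ainv, Ainv, T)

# Coordinate-free check: the number T(ω, X, Y) must be the same in both charts.
omega, X, Y = rng.standard_normal(n), rng.standard_normal(n), rng.standard_normal(n)
omega_new = Ainv.T @ omega          # ω'_i = (∂x^l/∂y^i) ω_l
X_new, Y_new = A @ X, A @ Y         # X'^i = (∂y^i/∂x^l) X^l

old = np.einsum('ijk,i,j,k->', T, omega, X, Y)
new = np.einsum('ijk,i,j,k->', T_new, omega_new, X_new, Y_new)
print(np.allclose(old, new))        # True
```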
Let's assume everything has finite dimension here.
1) I don't know exactly what the property $(f\otimes g)(...) = f(...)g(...)$ refers to in the second definition of a tensor. For example, if $f$ is a $(0,2)$-tensor and $g$ is a $(0,2)$-tensor, the product would be a $(0,4)$-tensor, so $(f\otimes g)(v_1,v_2,v_3,v_4) = f(v_1,v_2)g(v_3,v_4)$. To what kind of tensor in definition 2 would this be isomorphic?
A tensor product of two vector spaces $V$ and $W$ is a pair $(\mathsf{T} ,t)$, where $\mathsf{T} $ is a vector space and $t\colon V \times W \to \mathsf{T} $ is bilinear, such that if $\{{\bf v}_i\}$ and $\{{\bf w}_j\}$ are bases for $V$ and $W$, then $\{t({\bf v}_i,{\bf w}_j)\}$ spans $\mathsf{T} $, and given any bilinear map $b\colon V \times W \to Z$ (into an arbitrary vector space $Z$), there is a unique linear map $\overline{b}\colon \mathsf{T} \to Z$ such that $\overline{b}\circ t = b$. Meaning that bilinear maps $b$ factor through $\mathsf{T} $, and all the information needed is contained in a single linear map $\overline{b}$. One then proves that all tensor products of $V$ and $W$ are isomorphic, and so we adopt the usual notation $\mathsf{T} \equiv V \otimes W$, $t \equiv \otimes$, and write ${\bf v} \otimes {\bf w}$ for $t({\bf v},{\bf w})$.
An explicit construction is to take the free vector space with basis $V \times W$, and take its quotient by the subspace spanned by the elements of the form \begin{align} &({\bf v}_1+{\bf v}_2,{\bf w}) - ({\bf v}_1,{\bf w})-({\bf v}_2,{\bf w}), \\ & ({\bf v},{\bf w}_1+{\bf w}_2)-({\bf v},{\bf w}_1)-({\bf v},{\bf w}_2), \\ & (\lambda{\bf v},{\bf w}) - ({\bf v},\lambda{\bf w}).\end{align} We then denote the class of $({\bf v},{\bf w})$ by ${\bf v} \otimes {\bf w}$.
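A concrete model may help here: identifying $\Bbb R^3\otimes\Bbb R^4$ with $3\times 4$ matrices and ${\bf v}\otimes{\bf w}$ with the outer product, the defining relations of the quotient become ordinary matrix identities, and the universal property amounts to "every bilinear map is evaluation against a fixed matrix of coefficients". A small numpy sketch of this (all numerical values are placeholders):

```python
import numpy as np

rng = np.random.default_rng(2)
v1, v2, w = rng.standard_normal(3), rng.standard_normal(3), rng.standard_normal(4)
lam = 2.5

# Identify R^3 ⊗ R^4 with 3x4 matrices: v ⊗ w := outer product of v and w.
tensor = np.multiply.outer

# The relations defining the quotient hold as honest matrix identities:
print(np.allclose(tensor(v1 + v2, w), tensor(v1, w) + tensor(v2, w)))  # True
print(np.allclose(tensor(lam * v1, w), tensor(v1, lam * w)))           # True

# Universal property, concretely: a bilinear map b(v, w) = Σ_{i,j} B[i,j] v_i w_j
# factors as b(v, w) = b̄(v ⊗ w), where b̄ is the linear map M ↦ Σ_{i,j} B[i,j] M[i,j].
B = rng.standard_normal((3, 4))

def b(v, w):
    return v @ B @ w

def b_bar(M):
    return np.sum(B * M)

print(np.allclose(b(v1, w), b_bar(tensor(v1, w))))                     # True
```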
One generalizes all of this by considering more spaces, writing "multilinear maps" instead of "bilinear maps", and so on. The space $${\frak T}^{(r,s)}(V) = \{ f\colon (V^\ast)^r \times V^s \to \Bbb R \mid f \text{ is multilinear} \}$$ is isomorphic to $V^{\otimes r}\otimes (V^\ast)^{\otimes s}$, and that isomorphism does not depend on a choice of basis (so it is better than your average run-of-the-mill isomorphism). Well, to be more honest, we use a basis to define the isomorphism, but then we check that it would be the same had we started with another basis. We say that $T \in {\frak T}^{(r,s)}(V)$ is an $r$-times contravariant and $s$-times covariant tensor. We of course have an operation $$\otimes \colon {\frak T}^{(r,s)}(V) \times {\frak T}^{(r',s')}(V) \to {\frak T}^{(r+r',s+s')}(V).$$
Now that we hopefully understand a little better what a tensor product is, we can simply note that ${\frak T}^{(0,4)}(V) \cong (V^\ast)^{\otimes 4}$, and if $\{{\bf v}_i\}$ is a basis for $V$ and $\{{\bf v}^i\}$ is the dual basis, then $$f \otimes g = \sum_{i,j,k,\ell} (f\otimes g)_{ijk\ell}\, {\bf v}^i \otimes {\bf v}^j\otimes {\bf v}^k \otimes {\bf v}^\ell,$$ where $(f\otimes g)_{ijk\ell} = f({\bf v}_i,{\bf v}_j)\,g({\bf v}_k,{\bf v}_\ell)$ and ${\bf v}^i \otimes {\bf v}^j\otimes {\bf v}^k \otimes {\bf v}^\ell \in {\frak T}^{(0,4)}(V)$. It corresponds to that same expression seen as a linear combination of the ${\bf v}^i \otimes {\bf v}^j \otimes {\bf v}^k \otimes {\bf v}^\ell$, the classes of $({\bf v}^i,{\bf v}^j,{\bf v}^k,{\bf v}^\ell)$ in that quotient; that expression is an element of $(V^\ast)^{\otimes 4}$. Maybe I shouldn't have been lazy, and should have used another notation for the classes until now; I'll gladly explain it all again if you have trouble following.
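In components this is easy to check numerically; here is a small numpy sketch (with $f$ and $g$ represented by placeholder component matrices $f_{ij}=f({\bf v}_i,{\bf v}_j)$ and $g_{kl}=g({\bf v}_k,{\bf v}_l)$ in the standard basis):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3
f = rng.standard_normal((n, n))     # components f_ij of a (0,2)-tensor
g = rng.standard_normal((n, n))     # components g_kl of a (0,2)-tensor

# Components of f ⊗ g: a 4-index array (f⊗g)_{ijkl} = f_{ij} g_{kl}.
fg = np.multiply.outer(f, g)        # shape (n, n, n, n)

# Check the defining property (f⊗g)(v1, v2, v3, v4) = f(v1, v2) g(v3, v4).
v1, v2, v3, v4 = (rng.standard_normal(n) for _ in range(4))
lhs = np.einsum('ijkl,i,j,k,l->', fg, v1, v2, v3, v4)
rhs = (v1 @ f @ v2) * (v3 @ g @ v4)
print(np.allclose(lhs, rhs))        # True
```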
2) I am confused about what a tensor component is. As I understand it, tensor components are the scalars that form a linear combination of basis tensors. In definition 1, I see books defining the tensor components as $T^{a_1...a_k}_{b_1...b_k} = T(a_1,...,a_k,b_1,...,b_k)$, where $\{a_i\}$ and $\{b_i\}$ are bases of the vector space and the covector space. For definition 2, I see tensor components written as $$\sum_i\sum_j A_{ij}\, a_i\otimes b_j.$$ How do these components relate?
Components do depend on a choice of basis. The choice of notation used in your textbook was bad; one writes indices on $T$, not the basis vectors themselves. I mean, one would write $$T^{i_1...i_r}_{\qquad j_1...j_s} \stackrel{\rm def.}{=} T({\bf v}^{i_1},...,{\bf v}^{i_r},{\bf v}_{j_1},...,{\bf v}_{j_s})$$ instead. And with this notation, we'd have $$T = \sum_{i_1,...,i_r,j_1,...,j_s} T^{i_1...i_r}_{\qquad j_1...j_s} {\bf v}_{i_1}\otimes \cdots \otimes {\bf v}_{i_r}\otimes {\bf v}^{j_1}\otimes \cdots \otimes {\bf v}^{j_s}.$$ With Einstein's summation convention, we'd only write $$T = T^{i_1...i_r}_{\qquad j_1...j_s} {\bf v}_{i_1}\otimes \cdots \otimes {\bf v}_{i_r}\otimes {\bf v}^{j_1}\otimes \cdots \otimes {\bf v}^{j_s},$$ with all the summations implied (and that's why index balance is good: if the same index appears twice, once up and once down, sum over it).
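Here is a small numpy sketch relating the two descriptions for a $(1,1)$-tensor (the matrix $M$ defining the bilinear map is an arbitrary placeholder): the components obtained by evaluating $T$ on the basis and dual basis are exactly the coefficients $A_{ij}$ of the expansion in basis tensors.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3
M = rng.standard_normal((n, n))

# A (1,1)-tensor given abstractly as a bilinear map on (covector, vector) pairs.
def T(omega, X):
    return omega @ M @ X

# "Definition 1" components: evaluate T on the dual basis and the basis.
e = np.eye(n)
components = np.array([[T(e[i], e[j]) for j in range(n)] for i in range(n)])

# "Definition 2": T is the linear combination Σ T^i_j  v_i ⊗ v^j of basis tensors.
# Representing v_i ⊗ v^j by its component array, the expansion reproduces `components`,
# so the coefficients A_ij of the expansion are exactly the T^i_j defined above.
expansion = sum(components[i, j] * np.multiply.outer(e[i], e[j])
                for i in range(n) for j in range(n))
print(np.allclose(expansion, components))   # True: both sets of components coincide
print(np.allclose(components, M))           # True: they are just the matrix of the map
```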
When you ask how these things relate, the reasonable thing to consider is another basis $\{{{\bf v}_i}'\}$, the corresponding dual basis $\{{{\bf v}^i}'\}$, write $${T^{i_1...i_r}_{\qquad j_1...j_s}}' = T({{\bf v}^{i_1}}',...,{{\bf v}^{i_r}}',{{\bf v}_{j_1}}',...,{{\bf v}_{j_s}}')$$and see how we can express this in terms of the "old" components $T^{i_1...i_r}_{\qquad j_1...j_s}$.
If you're still alive after all that index juggling, you'll certainly pardon me for illustrating the relation only in the $(1,1)$ case. Write ${{\bf v}_j}' = \sum_i \alpha_{ij} {\bf v}_i$. It is an easy linear algebra exercise to check that ${{\bf v}^j}' = \sum_i \beta_{ji} {\bf v}^i$, where $(\beta_{ij})$ is the inverse matrix of $(\alpha_{ij})$. Then $${T^i_{\hspace{1ex} j}}'=T({{\bf v}^i}',{{\bf v}_j}') = \sum_{k, \ell }\beta_{ik}\alpha_{\ell j}T({{\bf v}^k},{{\bf v}_\ell}) = \sum_{k,\ell} \beta_{ik}\alpha_{\ell j}T^k_{\hspace{1ex}\ell}.$$
If we had more entries, then more $\alpha$'s and $\beta$'s would pop out. You can see that in any physics book; for instance, I like A Short Course in General Relativity, by Foster & Nightingale. By the way, it is customary to denote the entries of the inverse matrix by writing the indices upstairs. In this notation, and using Einstein's convention, we'd simply have $${T^i_{\hspace{1ex} j}}' = \alpha^{ik}\alpha_{\ell j}T^k_{\hspace{1ex} \ell}.$$
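And here is the same $(1,1)$ computation done numerically, as a numpy sketch (with a random invertible $\alpha$ as a placeholder and $\beta = \alpha^{-1}$):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 3

M = rng.standard_normal((n, n))   # placeholder matrix defining T(ω, X) = ω @ M @ X

def T(omega, X):
    return omega @ M @ X

alpha = rng.standard_normal((n, n)) + 3 * np.eye(n)   # new basis: v'_j = Σ_i α_ij v_i
beta = np.linalg.inv(alpha)                            # dual basis: v'^j = Σ_i β_ji v^i

e = np.eye(n)
new_basis = [alpha[:, j] for j in range(n)]   # old-basis components of v'_j
new_dual = [beta[i, :] for i in range(n)]     # old-dual-basis components of v'^i

# New components computed directly from the definition ...
T_new_direct = np.array([[T(new_dual[i], new_basis[j]) for j in range(n)]
                         for i in range(n)])

# ... and via the transformation rule T'^i_j = Σ_{k,l} β_ik α_lj T^k_l.
T_old = np.array([[T(e[k], e[l]) for l in range(n)] for k in range(n)])
T_new_rule = np.einsum('ik,lj,kl->ij', beta, alpha, T_old)

print(np.allclose(T_new_direct, T_new_rule))  # True
```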
3) I was trying to work out a basic example of an endomorphism $\mathbb{R}^2 \longrightarrow \mathbb{R}^2$ by using the two definitions but couldn't end up with the same set of components...
When we write components in a basis, they're real numbers, and our discussion does not quite apply if the codomain of the bilinear map isn't $\Bbb R$.
Best Answer
Any alternating $(r,s)$ tensor has a corresponding map that goes $\Lambda^r V \to \Lambda^s V$. Suppose $R \in \Lambda^r V$ and $\Sigma \in \Lambda^s V^*$. Then define $\underline T:\Lambda^r V \to \Lambda^s V$ such that
$$T(R, \Sigma) = \Sigma[ \underline T(R)]$$
The uniqueness of $\underline T$ can be proved by taking a "gradient" with respect to the vector space $\Lambda^s V^*$.
Geometrically, $\underline T$ maps an $r$-vector (which corresponds to an $r$-dimensional subspace) to an $s$-vector, and the $s$-covector $\Sigma$ allows us to extract the components of $\underline T(R)$.
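For instance, this is how the cross product mentioned in the question fits in: on $\mathbb R^3$, consider the alternating $(2,1)$-tensor $$T(u,v,\sigma) := \sigma(u\times v),\qquad u,v\in\mathbb R^3,\ \sigma\in(\mathbb R^3)^*.$$ Its associated map is $\underline T:\Lambda^2\mathbb R^3\to\Lambda^1\mathbb R^3\cong\mathbb R^3$ with $\underline T(u\wedge v)=u\times v$, and feeding in the dual basis covectors extracts the components: $(u\times v)^i = T(u,v,dx^i)$.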