The difference is all in your head. Literally.
The difference between calling the same object $A$ a "tensor over $\mathfrak{X}(M)$" and calling it "a tensor field over $M$" is that the former emphasizes that we have an algebraic object (a tensor over some module), while the latter emphasizes that underlying the module there is some manifold, and geometry is going on there.
Calling something a tensor field instead of a tensor forces you to remember that $\mathfrak{X}(M)$ is not just some arbitrary module, but that its elements can be identified with smooth sections of the tangent bundle of some manifold. These additional structures are occasionally useful.
I notice this question was asked some time ago and my answer is very likely too late, but since people might search for it, I thought providing an answer would still be a valuable contribution. Here it goes:
A common way (though maybe not the only way) to define tensor-matrix products is the $n$-mode product. It is basically an extension of the idea of bilinear forms (multiplying "from the left" and "from the right") to multiple dimensions (also "from behind" and in all other modes).
Essentially, it maps an $I_1 \times I_2 \times ... \times I_N$ tensor to an $I_1 \times ... \times J_n \times ... \times I_N$ tensor by multiplication in the $n$-th mode with a matrix $M$ of size $J_n \times I_n$. Therefore, the size of the tensor in the $n$-th mode must agree with the number of columns of the matrix (this being pure convention; one might as well have used the rows). It does so by multiplying each $n$-mode vector by $M$ from the left, where the $n$-mode vectors are the generalization of column and row vectors (i.e., the vectors you get if all indices but the $n$-th are fixed and that one runs over its range).
A common notation for this is $S = T \times_n M$, where $T \in \mathbb{R}^{I_1 \times I_2 \times ... \times I_N}$, $M \in \mathbb{R}^{J_n \times I_n}$, and $S \in \mathbb{R}^{I_1 \times ... \times J_n \times ... \times I_N}$.
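Written out in components, with the conventions above, the $n$-mode product reads

$$(T \times_n M)_{i_1 \ldots i_{n-1}\, j_n\, i_{n+1} \ldots i_N} = \sum_{i_n=1}^{I_n} M_{j_n i_n}\, T_{i_1 \ldots i_{n-1}\, i_n\, i_{n+1} \ldots i_N},$$

i.e., every $n$-mode vector of $T$ is multiplied by $M$ from the left.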
For more information about these products and their properties, I recommend consulting [*]; it is freely available if you google it.
[*] Kolda, Tamara G., and Brett W. Bader. "Tensor decompositions and applications." SIAM Review 51.3 (2009): 455–500.
Since you also asked how to implement it in Matlab without loops, it is quite easy:
order = [n:N, 1:n-1];      % bring mode n to the front
Tsize = size(T);
% unfold T along mode n, multiply by M from the left, then fold back
S = ipermute(reshape(M*reshape(permute(T,order),Tsize(n),[]),[size(M,1),Tsize(order(2:end))]),order);
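As a quick sanity check (with made-up sizes; N is the order of the tensor and n the mode to multiply in):

N = 3; n = 2;
T = randn(3,4,5);          % an I_1 x I_2 x I_3 tensor
M = randn(6,4);            % a J_n x I_n matrix, here I_n = 4
order = [n:N, 1:n-1];
Tsize = size(T);
S = ipermute(reshape(M*reshape(permute(T,order),Tsize(n),[]),[size(M,1),Tsize(order(2:end))]),order);
size(S)                    % returns 3 6 5, as expected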
To see the difference between rank-2 tensors and matrices, it is probably best to look at a concrete example. This is actually something that confused me very much back in my linear algebra course (where we didn't learn about tensors, only about matrices).
As you may know, you can specify a linear transformation $a$ between vectors by a matrix. Let's call that matrix $A$. A basis change can also be written as a linear transformation, so that if a vector has components $v$ in the old basis (written as a column vector), it has components $T^{-1}v$ in the new basis. Now you can ask which matrix describes the transformation $a$ in the new basis. Well, it's the matrix $T^{-1}AT$.
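To see where that comes from, write $v = Tv'$ for the old components in terms of the new ones. If $a$ sends $v$ to $w = Av$, then the new components of the image are

$$w' = T^{-1}w = T^{-1}Av = (T^{-1}AT)\,v',$$

so $T^{-1}AT$ is the matrix of $a$ in the new basis.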
Well, so far, so good. What I memorized back then is that under a basis change, a matrix transforms as $T^{-1}AT$.
But then we learned about quadratic forms, or more generally bilinear forms. Those are evaluated using a matrix $A$ as $u^TAv$. Still no problem, until we learned how to do basis changes there. Now, suddenly, the matrix did not transform as $T^{-1}AT$, but rather as $T^TAT$. Which confused me like hell: how could one and the same object transform differently when used in different contexts?
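The mechanics of the new law are easy to check: substituting $u = Tu'$ and $v = Tv'$ gives

$$u^TAv = (Tu')^TA(Tv') = u'^T(T^TAT)\,v',$$

so the form is represented by $T^TAT$ in the new basis. But that only confirms the computation; the puzzle of why the two laws differ remains.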
Well, the solution is: we are actually talking about different objects! In the first case, we are talking about a tensor that takes vectors to vectors. In the second case, we are talking about a tensor that takes two vectors to a scalar, or equivalently, one that takes a vector to a covector.
Now both tensors have $n^2$ components, and therefore it is possible to write those components in an $n\times n$ matrix. And since all the operations involved are linear or bilinear, the ordinary matrix-matrix and matrix-vector products, together with transposition, can be used to express the operations of the tensors. Only when looking at basis transformations do you see that the two are indeed not the same, and the course did us (well, at least me) a disservice by not telling us that we were really looking at two different objects, and not just at two different uses of the same object, the matrix.
Indeed, speaking of a "rank-2 tensor" is not really accurate: the type of a tensor is given by two numbers. The vector-to-vector mapping is given by a type-(1,1) tensor, while the quadratic form is given by a type-(0,2) tensor. There is also the type (2,0), which also corresponds to a matrix, but which maps two covectors to a number, and which again transforms differently.
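As a minimal numerical illustration (a made-up $2\times 2$ example in Matlab, in the spirit of the snippet above): the same component array $A$, read once as a type-(1,1) tensor and once as a type-(0,2) tensor, ends up as two different matrices after the same basis change.

A = [1 2; 3 4];            % one and the same array of components
T = [2 1; 0 1];            % some invertible basis change
A_map  = T\(A*T);          % type (1,1): linear map, transforms as T^{-1}*A*T
A_form = T.'*A*T;          % type (0,2): bilinear form, transforms as T'*A*T
isequal(A_map, A_form)     % false: same matrix, two different tensors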
The bottom line of this is: a matrix only stores the components of a tensor in a particular basis; which tensor it represents only shows itself in how those components transform under a basis change.

Of course, another difference between matrices and tensors is that matrices are by definition two-index objects, while tensors can have any number of indices.