It doesn't seem like there's anything here that can't be done with everyday matrix multiplication. It seems we can calculate $A«B$ as follows:
- flip $A$ right to left and call the flipped version $A_F$; do the same for $B$ to get $B_F$
- Calculate $B_FA_F$
- Take the product, flip it back
As it turns out, "flipping" a matrix from right to left is itself a matrix operation. Namely, let $K_n$ be the $n\times n$ matrix given by
$$
K_n = \pmatrix{
0 & \cdots & 0 & 1 \\
\vdots & & 1 & 0 \\
0 & 1 & & \vdots \\
1 & 0 & \cdots & 0
}
$$
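(In other words, $K_n$ is the identity matrix flipped right to left: ones on the anti-diagonal and zeros everywhere else.) Here is a minimal numpy sketch of my own, not part of the original argument, checking the two facts the derivation below relies on: right-multiplying by $K_n$ reverses columns, and left-multiplying reverses rows. The test matrix and names are just illustrative.

```python
import numpy as np

n = 4
K = np.fliplr(np.eye(n))                    # K_n: identity flipped right to left
A = np.arange(1, n * n + 1).reshape(n, n)   # an arbitrary test matrix

assert np.allclose(A @ K, np.fliplr(A))     # right-multiplication reverses columns
assert np.allclose(K @ A, np.flipud(A))     # left-multiplication reverses rows
assert np.allclose(K @ K, np.eye(n))        # flipping twice gives the identity
```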
Now, suppose $A$ is $k \times m$ and $B$ is $n \times k$. Multiplying on the right by $K_m$ reverses a matrix's columns (so $A_F = AK_m$ and $B_F = BK_k$), multiplying on the left reverses its rows, and $K_mK_m = I$, so flipping the product back is one more right-multiplication by $K_m$. Then we can calculate
$$
A«B = (B K_k)(A K_m)K_m = B K_k A
$$
I have not heard of any application for this particular set of operations.
This analysis also reveals two different ways of finding $A«B$:
- flip $B$ right to left to get $B_F$, and calculate $A«B = B_F A$
- flip $A$ top to bottom to get $A_F$, and calculate $A«B = B A_F$
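As a sanity check (mine, not from the question), here is a short numpy sketch that takes $A«B$ to mean the flip-multiply-flip-back procedure from the start of this answer and confirms it agrees with $BK_kA$, with $B_FA$, and with $BA_F$ (where $A_F$ here is $A$ flipped top to bottom). The dimensions and names are only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
k, m, n = 3, 5, 4
A = rng.integers(-5, 6, size=(k, m))    # A is k x m
B = rng.integers(-5, 6, size=(n, k))    # B is n x k
K_k = np.fliplr(np.eye(k))
K_m = np.fliplr(np.eye(m))

# The original procedure: flip both right to left, multiply in reverse order, flip back.
flipped = np.fliplr(np.fliplr(B) @ np.fliplr(A))

assert np.allclose(flipped, (B @ K_k) @ (A @ K_m) @ K_m)  # the K-matrix form
assert np.allclose(flipped, B @ K_k @ A)                  # simplified, since K_m K_m = I
assert np.allclose(flipped, np.fliplr(B) @ A)             # flip B right to left
assert np.allclose(flipped, B @ np.flipud(A))             # flip A top to bottom
```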
It has already been pointed out that you can multiply a row vector
and a matrix. In fact, the only difference between the two multiplications
below (note that the second uses the transpose of the matrix in the first)
is that the numeric values in the first result are stacked in
a column vector while the same numeric values are listed in a row
vector in the second result:
$$\pmatrix{6& -7& 10 & 1 \\ 0& 3& -1 & 4 \\ 0& 5& -7 & 5 \\ 4&1&0&-2}
\pmatrix{2\\-2\\-1\\1} = \pmatrix{17\\-1\\2\\4}$$
$$ \pmatrix{2 &-2&-1&1}
\pmatrix{6& 0&0&4\\-7& 3&5&1\\10 & -1&-7&0\\1 & 4 & 5&-2}
= \pmatrix{17&-1&2&4}$$
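For what it's worth, here is the same comparison in numpy (my own quick check, not part of the original post): the column form $Ax$ and the row form $x^{\mathsf T}A^{\mathsf T}$ contain the same four numbers.

```python
import numpy as np

A = np.array([[6, -7, 10,  1],
              [0,  3, -1,  4],
              [0,  5, -7,  5],
              [4,  1,  0, -2]])
x = np.array([2, -2, -1, 1])

print(A @ x)     # column-vector form: [17 -1  2  4]
print(x @ A.T)   # row-vector form:    [17 -1  2  4]
```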
One simple pragmatic difference between these two equations is that the
second one is a lot wider when it is fully written out.
It seems to me the first equation "fits" more neatly on the page
because we have already committed to making an equation that is four
rows tall (because of the $4\times4$ matrix, this is unavoidable),
so there is no "cost" in also making the vectors four rows tall;
and in return we get vectors that are only one column wide
instead of four columns each.
Now imagine the dimensions of the matrix were $6\times6$;
the multiplication by a column vector would still fit neatly on
this page but we might have some difficulty with the multiplication
that uses row vectors; it might not fit within the margins of this
column of text.
It's also possible that the convention is influenced by the
interpretation of the matrix as a transformation to be applied to
the vector, along with a preference for writing the names of
transformations on the left of the thing they transform
(much as we like to write a function name to the left of the
input parameters of a function, that is, $f(x) = x^2$
rather than $(x)f = x^2$).
But I'm not sure there is a more compelling reason behind this
particular observation other than collective force of habit,
and these patterns are not universal; sometimes people
write the name of the transformation on the right.
Best Answer
The physical significance depends on the matrix. The primary point is that multiplication by a matrix (in the usual sense, matrix on the left) represents the action of a linear transformation. We'll work with one basis throughout, which we'll use to represent both our matrices and our vectors. Just note that, because of linearity, if we have a vector $x = c_1e_1 + \ldots + c_ne_n$ and a linear transformation $L$, then
$$
\begin{eqnarray*}
L(x) &=& L(c_1e_1 + \ldots + c_ne_n) \\
&=& L(c_1e_1) + \ldots + L(c_ne_n) \\
&=& c_1L(e_1) + \ldots + c_nL(e_n).
\end{eqnarray*}
$$
This means that any linear transformation is uniquely determined by its effect on a basis. So to define one, we only need to define its effect on a basis. This is the matrix
$$ \left(L(e_1) \;\ldots\; L(e_n)\right) = \left( \begin{array}{ccc} a_{11} & \ldots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{n1} & \ldots & a_{nn} \end{array} \right) $$
where $a_{ij}$ is the $i$'th component of $L(e_j)$.
Let's call this matrix $M_L$. We want to define multiplication of $M_L$ and some vector $x$ so that $M_L \cdot x = L(x)$, and there's only one way to do this: because the $j$'th column of $M_L$ is just $L(e_j)$, and in light of our decomposition of the action of $L$ in terms of the $L(e_j)$, we can see that
$$ M_L \cdot x = \left( \begin{array}{ccc} a_{11} & \ldots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{n1} & \ldots & a_{nn} \end{array} \right) \cdot \left( \begin{array}{c} c_1 \\ \vdots \\ c_n \end{array} \right) $$
must equal
$$ c_1\left( \begin{array}{c} a_{11} \\ \vdots \\ a_{n1} \end{array} \right) + \ldots + c_n\left( \begin{array}{c} a_{1n} \\ \vdots \\ a_{nn} \end{array} \right) = \left( \begin{array}{c} c_1a_{11} + \ldots + c_na_{1n} \\ \vdots \\ c_1a_{n1} + \ldots + c_na_{nn} \end{array} \right) $$
which is the standard definition for a vector left-multiplied by a matrix.
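In concrete terms (a small numpy illustration of my own, not something from the answer above): the $j$'th column of $M_L$ is $M_L e_j = L(e_j)$, and $M_L \cdot x$ is exactly that linear combination of the columns with weights $c_j$. The particular matrix and coordinates below are made up.

```python
import numpy as np

M = np.array([[2.0, -1.0, 0.0],
              [1.0,  3.0, 5.0],
              [0.0,  4.0, 1.0]])   # stands in for M_L
c = np.array([1.0, -2.0, 3.0])     # coordinates of x in the basis e_1, ..., e_n
e = np.eye(3)                      # columns are the basis vectors e_j

# Column j of M is M @ e_j, i.e. L(e_j).
assert np.allclose(M @ e[:, 1], M[:, 1])

# M @ x is the combination c_1 L(e_1) + ... + c_n L(e_n).
assert np.allclose(M @ c, sum(c[j] * M[:, j] for j in range(3)))
```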
EDIT: In response to the question "Is a matrix a scalar thing?": kind of, but no.
If you consider the most basic linear equation in one variable, $y = mx$, where everything in sight is a scalar, then a matrix generalizes the role played by $m$ to higher dimensions and a vector generalizes the role played by $y$ and $x$ to higher dimensions. But matrices don't commute multiplicatively. So that's one big thing that's different. But they're strikingly similar in a lot of ways. We can define the function of matrices $f(A) = A^2$ and we can differentiate it with respect to $A$. When we do this in one variable with the map $f(x) = x^2$, we get the linear map $f_x'(h) = 2xh$ but when we do it with matrices, we get the linear map $f_A'(H) = AH + HA$. If matrices commuted, then that would just be $2AH$!
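A quick numerical check of that last claim (my own sketch, assuming numpy; the matrices, perturbation size, and tolerances are arbitrary): for a small $H$, $f(A+H)-f(A)$ agrees with $AH+HA$ to first order, but not with $2AH$.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
H = 1e-6 * rng.standard_normal((4, 4))   # a small perturbation

diff = (A + H) @ (A + H) - A @ A         # f(A + H) - f(A)

assert np.allclose(diff, A @ H + H @ A)      # matches AH + HA up to the H^2 term
assert not np.allclose(diff, 2 * A @ H)      # 2AH fails because A and H don't commute
```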
EDIT2:
My "add comment" button isn't working for some reason. The $e_j$'s are a basis, $e_1, \ldots, e_n$. I think the best thing to do would be to wait for your teacher to get around to it. I sometimes forget that people introduce matrices before vector spaces and linear transformations. It will all make much more sense then. The main point of a basis though is that it's a set of vectors so that every vector in the given space can be written as a unique linear combination of them.