My answer from the MO thread:
A matrix is just a list of numbers, and you're allowed to add and multiply matrices by combining those numbers in a certain way. When you talk about matrices, you're allowed to talk about things like the entry in the 3rd row and 4th column, and so forth. In this setting, matrices are useful for representing things like transition probabilities in a Markov chain, where each entry indicates the probability of transitioning from one state to another. You can do lots of interesting numerical things with matrices, and these interesting numerical things are very important because matrices show up a lot in engineering and the sciences.
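To make the Markov-chain example concrete, here is a small sketch (assuming NumPy, with made-up transition probabilities) of how "doing numerical things with a matrix" looks in practice: one step of the chain is a vector-matrix product, and $n$ steps is a product with the $n$-th matrix power.

```python
import numpy as np

# A hypothetical 2-state Markov chain; the probabilities are invented
# purely for illustration.  P[i, j] = probability of moving from
# state i to state j in one step.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Each row is a probability distribution, so it sums to 1.
assert np.allclose(P.sum(axis=1), 1.0)

# Start in state 0 with certainty.
v = np.array([1.0, 0.0])
after_one_step = v @ P                              # one step of the chain
after_ten_steps = v @ np.linalg.matrix_power(P, 10) # ten steps

print(after_one_step)   # [0.9 0.1]
```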
In linear algebra, however, you instead talk about linear transformations, which are not (I cannot emphasize this enough) a list of numbers, although sometimes it is convenient to use a particular matrix to write down a linear transformation. The difference between a linear transformation and a matrix is not easy to grasp the first time you see it, and most people would be fine with conflating the two points of view. However, when you're given a linear transformation, you're not allowed to ask for things like the entry in its 3rd row and 4th column because questions like these depend on a choice of basis. Instead, you're only allowed to ask for things that don't depend on the basis, such as the rank, the trace, the determinant, or the set of eigenvalues. This point of view may seem unnecessarily restrictive, but it is fundamental to a deeper understanding of pure mathematics.
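The basis-dependence point can be checked numerically. Below is a sketch (assuming NumPy; the matrices are arbitrary examples) in which the same linear transformation is written in two different bases via the change-of-basis formula $B = S^{-1} A S$: the individual entries change, but the trace, determinant, and eigenvalues do not.

```python
import numpy as np

# A linear transformation written as the matrix A in the standard basis.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# Columns of S are the new basis vectors (any invertible matrix works).
S = np.array([[1.0, 1.0],
              [1.0, 2.0]])

# The same transformation written in the new basis.
B = np.linalg.inv(S) @ A @ S

# The entries depend on the basis...
print(A)
print(B)

# ...but the basis-independent quantities agree.
assert np.isclose(np.trace(A), np.trace(B))
assert np.isclose(np.linalg.det(A), np.linalg.det(B))
assert np.allclose(np.sort(np.linalg.eigvals(A)),
                   np.sort(np.linalg.eigvals(B)))
```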
Linear algebra is so named because it studies linear functions. A linear function is one for which
$$f(x+y) = f(x) + f(y)$$
and
$$f(ax) = af(x)$$
where $x$ and $y$ are vectors and $a$ is a scalar. Roughly, this means that the function is additive and that scaling the input scales the output proportionally. We get the name 'linear' from the prototypical example of a linear function in one dimension: a straight line through the origin. However, linear functions can be more complicated than this (or indeed simpler: the function $f(x)=0$ for all $x$ is a linear function!).
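The two defining properties are easy to test numerically. Here is a sketch (assuming NumPy; the matrix and inputs are random examples) showing that multiplication by a fixed matrix satisfies both axioms, while an affine shift of it does not:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))   # an arbitrary 3x2 matrix

def f(x):
    """Linear: multiplication by a fixed matrix (note it maps R^2 to R^3)."""
    return A @ x

def g(x):
    """Not linear: the constant shift breaks both axioms."""
    return A @ x + 1.0

x = rng.standard_normal(2)
y = rng.standard_normal(2)
a = 2.5

# f satisfies both defining properties...
assert np.allclose(f(x + y), f(x) + f(y))
assert np.allclose(f(a * x), a * f(x))

# ...while g fails additivity (g(x+y) and g(x)+g(y) differ by the shift).
assert not np.allclose(g(x + y), g(x) + g(y))
```

Note that `f` sends 2-dimensional vectors to 3-dimensional ones, which anticipates the point below that $x$ and $f(x)$ need not live in the same space.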
Of course, I've brushed a lot of detail under the carpet here. For example, what kind of space are $x$ and $y$ members of? (Answer: they're elements of a vector space.) Do $x$ and $f(x)$ have to belong to the same space? (Answer: no.) If they belong to different spaces, what does it mean to write $ax$ and $af(x)$? (Answer: you need an action of the same field of scalars on each of the vector spaces.) Do the vector spaces have to be finite-dimensional? (Answer: no, and in fact a lot of really interesting linear algebra takes place in infinite-dimensional vector spaces.)
I hope that's enough to get you started.
Note: there is a big difference between the terms "matrix coefficient" and "coefficient matrix". I'll explain first what you are probably asking about:
Coefficient matrix
Suppose you have a system of equations:
$$\begin{align*} 1\cdot x_1 + 2x_2 &= 16\\ 3x_1 + 1\cdot x_2 &= 4 \\ \end{align*} \tag{1}$$
Then the coefficient matrix (in this case, with integer entries) corresponding to the system of linear equations in $(1)$ is:
$$M = \begin{bmatrix} 1 & 2\\ 3 & 1\\ \end{bmatrix} $$ where the entries in the first column represent the coefficients of $x_1$, those in the second column the coefficients of $x_2$, and so on for systems with more unknowns.
The augmented coefficient matrix $M_a$ includes a third column whose entries are the values on the right-hand side of the equals signs in $(1)$:
$$M_a = \left[\begin{array}{cc|c} 1 & 2 & 16\\ 3 & 1 & 4\\ \end{array}\right] $$
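Once the system is packaged into a coefficient matrix, it can be solved by a single call to a linear solver. A minimal sketch, assuming NumPy, for the system $(1)$ above:

```python
import numpy as np

# Coefficient matrix and right-hand side from the system (1).
M = np.array([[1.0, 2.0],
              [3.0, 1.0]])
b = np.array([16.0, 4.0])

x = np.linalg.solve(M, b)
print(x)   # [-1.6  8.8], i.e. x1 = -8/5, x2 = 44/5

# The solution satisfies both equations.
assert np.allclose(M @ x, b)
```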
Matrix coefficient
On the other hand, the coefficient matrix above contrasts with what is meant by a matrix coefficient. (Please read more at the linked Wikipedia entry; what follows is a brief excerpt from that entry.)