Note: there is a big difference between the terms "matrix coefficient" and "coefficient matrix". I'll explain first what you are probably asking about:
Coefficient matrix
Suppose you have a system of equations:
$$\begin{align*}
1\cdot x_1 + 2x_2 &= 16\\
3x_1 + 1\cdot x_2 &= 4 \\
\end{align*}
\tag{1}$$
$(I)$ The coefficient matrix (here with integer entries) corresponding to the system of linear equations in $(1)$ is:
$$M =
\begin{bmatrix}
1 & 2\\
3 & 1\\
\end{bmatrix}
$$
where the entries in the first column represent the coefficients of $x_1$, those in the second column the coefficients of $x_2$, and so on for larger systems.
The augmented coefficient matrix $M_a$ includes a third column whose entries are the values on the right-hand sides of the equals signs in $(1)$:
$$M_a =
\left[\begin{array}{cc|c}
1 & 2 & 16\\
3 & 1 & 4\\
\end{array}\right]
$$
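To make the distinction concrete, here is a short NumPy sketch (the numbers are those of system $(1)$ above): the coefficient matrix $M$ together with the right-hand-side vector determines the system, and `np.linalg.solve` recovers $x_1$ and $x_2$.

```python
import numpy as np

# Coefficient matrix M and right-hand side b for the system (1)
M = np.array([[1.0, 2.0],
              [3.0, 1.0]])
b = np.array([16.0, 4.0])

# Solving M x = b recovers the unknowns x1 and x2
x = np.linalg.solve(M, b)
print(x)  # [-1.6  8.8]

# The augmented matrix M_a simply appends b as an extra column
M_a = np.hstack([M, b.reshape(-1, 1)])
print(M_a)
```

Checking by hand: $-1.6 + 2(8.8) = 16$ and $3(-1.6) + 8.8 = 4$, so the solution is consistent with $(1)$.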
Matrix coefficient
$(II)$ On the other hand, this coefficient matrix contrasts with what is meant by a matrix coefficient. (Please read more at the linked Wikipedia entry; what follows is a brief excerpt from it.)
In mathematics, a matrix coefficient (or matrix element) is a function on a group of a special form, which depends on a linear representation of the group and additional data. For the case of a finite group, matrix coefficients express the action of the elements of the group in the specified representation via the entries of the corresponding matrices.
Isomorphisms are defined in many different contexts; but, they all share a common thread.
Given two objects $G$ and $H$ of the same type (maybe groups, or rings, or vector spaces, etc.), an isomorphism from $G$ to $H$ is a bijection $\phi:G\rightarrow H$ which, in some sense, respects the structure of the objects. Informally, an isomorphism identifies the two objects as being the same object after a renaming of the elements.
In the example that you mention (vector spaces), an isomorphism between $V$ and $W$ is a bijection $\phi:V\rightarrow W$ which respects scalar multiplication, in that $\phi(\alpha\vec{v})=\alpha\phi(\vec{v})$ for all $\vec{v}\in V$ and $\alpha\in K$, and also respects addition in that $\phi(\vec{v}+\vec{u})=\phi(\vec{v})+\phi(\vec{u})$ for all $\vec{v},\vec{u}\in V$. (Here, we've assumed that $V$ and $W$ are both vector spaces over the same base field $K$.)
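As a numerical sketch of the vector-space case: any invertible matrix $A$ gives an isomorphism $\phi(\vec{v}) = A\vec{v}$ from $\mathbb{R}^2$ to itself. The matrix below is an illustrative choice, not anything from the question; the assertions check the two structure-preserving properties and the bijectivity condition.

```python
import numpy as np

# An illustrative isomorphism phi: R^2 -> R^2 given by an invertible matrix A
A = np.array([[1.0, 1.0],
              [1.0, -1.0]])

def phi(v):
    return A @ v

v = np.array([2.0, 3.0])
u = np.array([-1.0, 5.0])
alpha = 4.0

# phi respects addition and scalar multiplication
assert np.allclose(phi(v + u), phi(v) + phi(u))
assert np.allclose(phi(alpha * v), alpha * phi(v))

# phi is a bijection: A is invertible (nonzero determinant),
# so A^{-1} gives the inverse map
assert np.linalg.det(A) != 0
assert np.allclose(np.linalg.inv(A) @ phi(v), v)
```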
Linear algebra is so named because it studies linear functions. A linear function is one for which
$$f(x+y) = f(x) + f(y)$$
and
$$f(ax) = af(x)$$
where $x$ and $y$ are vectors and $a$ is a scalar. Roughly, this means that scaling the input scales the output proportionally and that the function is additive. We get the name 'linear' from the prototypical example of a linear function in one dimension: a straight line through the origin. However, linear functions can be more complex than this (or indeed, simpler: the function $f(x)=0$ for all $x$ is a linear function!).
Of course, I've brushed a lot of detail under the carpet here. For example, what kind of space are $x$ and $y$ members of? (Answer: they're elements of a vector space.) Do $x$ and $f(x)$ have to belong to the same space? (Answer: no.) If they belong to different spaces, what does it mean to write $ax$ and $af(x)$? (Answer: you need an action of the same field of scalars on each of the vector spaces.) Do the vector spaces have to be finite-dimensional? (Answer: no, and in fact a lot of really interesting linear algebra takes place over infinite-dimensional vector spaces.)
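The two axioms are easy to test numerically on one-dimensional examples. The sketch below (the function choices are illustrative, not from the question) shows that a line through the origin and the zero function pass, while an affine shift like $x \mapsto x + 1$ fails additivity.

```python
def is_linear(f, xs=(1.0, 2.0, 3.0), ys=(-4.0, 0.5, 7.0), tol=1e-9):
    """Spot-check the two linearity axioms on a few sample points."""
    for x, y in zip(xs, ys):
        # additivity: f(x + y) = f(x) + f(y)
        if abs(f(x + y) - (f(x) + f(y))) > tol:
            return False
        for a in (-2.0, 0.0, 3.0):
            # homogeneity: f(a x) = a f(x)
            if abs(f(a * x) - a * f(x)) > tol:
                return False
    return True

print(is_linear(lambda x: 3 * x))   # True: a line through the origin
print(is_linear(lambda x: 0.0))     # True: the zero function is linear
print(is_linear(lambda x: x + 1))   # False: an affine shift breaks additivity
```

Of course, passing a finite spot-check doesn't prove linearity; it only illustrates what the axioms demand.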
I hope that's enough to get you started.