It's probably because it's not a geometrically meaningful operation: a linear transformation whose matrix in one basis is all ones has a different matrix in another basis, so adding a constant to every entry is not a basis-independent operation.
Whenever I've seen the notation $A+b$ in mathematics, it has meant $A+bI$ (where $A$ is a square matrix and $I$ is the identity matrix of the same size). Some people write $\det(A-\lambda)$ for the characteristic polynomial, for example.
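As a concrete illustration of this convention (a minimal sketch; the particular matrix and scalar are arbitrary choices), $A + b$ read as $A + bI$ adds $b$ only on the diagonal:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
b = 5.0

# Under the convention above, A + b means A + b*I: shift the diagonal by b.
shifted = A + b * np.eye(2)

# Note: plain numpy broadcasting (A + b) would instead add b to *every*
# entry -- exactly the basis-dependent operation discussed above.
```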
Fix a basis $\{e_1, \ldots, e_n\}$ of $V$, and consider the dual basis $\{f_1, \ldots, f_n \}$ of $V^\ast$. Then we have a basis
$$\{e_1\otimes f_1,\ldots, e_i \otimes f_j, \ldots, e_n \otimes f_n\}$$
for $V \otimes V^\ast$, and the matrix
$$A = (a_{ij})$$
is just a way of representing the element
$$\sum_{i=1}^n \sum_{j=1}^n a_{ij} \; e_i \otimes f_j \in V \otimes V^\ast.$$
Of course an element of $V \otimes V^\ast$ gives a linear map $V \to V$ by
$$(w \otimes f)(v) := f(v) w$$
and extending by linearity. Given two such elements, we can compose the corresponding functions:
$$(w' \otimes f')(w \otimes f)(v) = (w' \otimes f')(f(v) w) = f(v) f'(w) w' = f'(w) \; (w' \otimes f)(v)$$
so composition of linear maps is given by
$$(w' \otimes f') \circ (w \otimes f) = f'(w) \; (w' \otimes f)$$
extended by linearity. If you write your elements in the $e_i \otimes f_j$ basis and apply this operation to them, you'll see that the usual definition of matrix multiplication pops right out.
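If you want to check this numerically, here is a small numpy sketch (the dimension and the particular vectors are arbitrary choices): the matrix of a rank-one tensor $w \otimes f$ is the outer product of $w$ with $f$, and the composition rule above agrees with the matrix product.

```python
import numpy as np

rng = np.random.default_rng(0)
w, f = rng.normal(size=3), rng.normal(size=3)     # w in V, f a covector
wp, fp = rng.normal(size=3), rng.normal(size=3)   # w', f'

# w ⊗ f acts as v ↦ f(v) w, so its matrix is the outer product.
M = np.outer(w, f)     # matrix of w ⊗ f
Mp = np.outer(wp, fp)  # matrix of w' ⊗ f'

# Composition rule derived above: (w' ⊗ f') ∘ (w ⊗ f) = f'(w) (w' ⊗ f).
assert np.allclose(Mp @ M, (fp @ w) * np.outer(wp, f))
```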
Of course all the calculations with explicit tensors above can be rephrased in terms of the universal property of the tensor product if you like.
This is all assuming you want the matrix to represent an element of $V \otimes V^\ast$ rather than an element of $V \otimes V$ or $V^\ast \otimes V^\ast$. But you can work out what should happen in those cases in the same way.
Best Answer
The product of matrices is defined so that it corresponds to the composition of the corresponding linear maps, and one can derive the usual formula for matrix multiplication from this fact alone. This should be covered in every good linear algebra textbook, e.g. Axler's Linear Algebra Done Right. See also Arturo Magidin's answer here.

So your question reduces to: why is composition of maps denoted the same way as multiplication? One answer is that rings arise naturally as subrings of the ring of linear maps on their underlying additive groups (the left regular representation). This is a ring-theoretic analog of the Cayley representation of a group as a group of permutations, acting on itself by left multiplication. This allows us to view "functions" as "numbers" and to exploit operator-theoretic techniques such as factoring characteristic polynomials, differential and difference operators (recurrences), etc. The point of the common notation is to emphasize this common ring structure so that one may exploit it by reusing similar techniques where they apply.
Examples of such techniques abound. For some examples of operator algebra see here, here, here. See also here, here, where the Fibonacci recurrence is recast into linear-system form, yielding an addition formula and a fast computation algorithm by repeated squaring of the shift matrix.
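That last technique can be sketched concretely. This is a minimal illustration (not taken from the linked answers): computing Fibonacci numbers in $O(\log n)$ matrix multiplications by repeated squaring of the $2\times 2$ shift matrix, using the identity $\begin{pmatrix}1&1\\1&0\end{pmatrix}^n = \begin{pmatrix}F_{n+1}&F_n\\F_n&F_{n-1}\end{pmatrix}$.

```python
def fib(n: int) -> int:
    """Fibonacci number F(n) via repeated squaring of the shift matrix."""
    def mat_mul(X, Y):
        (a, b), (c, d) = X
        (e, f), (g, h) = Y
        return ((a*e + b*g, a*f + b*h), (c*e + d*g, c*f + d*h))

    result = ((1, 0), (0, 1))  # identity matrix
    M = ((1, 1), (1, 0))       # shift matrix
    while n:                   # binary exponentiation: square-and-multiply
        if n & 1:
            result = mat_mul(result, M)
        M = mat_mul(M, M)
        n >>= 1
    return result[0][1]        # top-right entry of the n-th power is F(n)
```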