Linear independence is linear independence is linear independence. It is defined entirely independently of matrices: instead, it is defined in terms of a vector equation, which, in finite dimensions, can be turned into a system of linear equations. As such, matrices are an excellent tool for determining linear independence.
The definition of linear independence is precisely what you wrote. We say $v_1, \ldots, v_n$ are linearly independent if the only solution of
$$a_1 v_1 + \ldots + a_n v_n = 0 \tag{$\star$}$$
for scalars $a_1, \ldots, a_n$, is the trivial solution $a_1 = a_2 = \ldots = a_n = 0$. That is, no other possible choices of scalars will make the above linear combination into the $0$ vector.
It doesn't matter whether they are row vectors, column vectors, or more abstract vectors (such as matrices, functions, graphs on a fixed vertex set, algebraic numbers, etc.). All you need to define linear independence is an (abstract) vector space.
For example, the real functions $\sin^2(x)$, $\cos^2(x)$, and the constant function $1$ are not linearly independent because
$$2 \cdot \sin^2(x) + 2 \cdot \cos^2(x) - 2 \cdot 1 \equiv 0,$$
i.e. the linear combination is exactly the $0$ function, even though the scalars aren't all $0$.
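As a quick sanity check (a sketch using Python's `sympy`, not part of the original argument), we can verify symbolically that this particular linear combination really is the zero function:

```python
import sympy as sp

x = sp.symbols('x')
# The linear combination 2*sin^2(x) + 2*cos^2(x) - 2*1 with scalars (2, 2, -2)
combo = 2 * sp.sin(x)**2 + 2 * sp.cos(x)**2 - 2 * 1
print(sp.simplify(combo))  # 0: the combination vanishes identically,
                           # so the three functions are linearly dependent
```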
On the other hand, the functions $\sin^2$ and $\cos^2$ are linearly independent, because, if we assume
$$a_1 \sin^2(x) + a_2 \cos^2(x) \equiv 0,$$
that is, is equal to $0$ for all $x$, then trying $x = 0$ yields
$$0 = a_1 \sin^2(0) + a_2 \cos^2(0) = a_2$$
and trying $x = \pi/2$ yields
$$0 = a_1 \sin^2(\pi/2) + a_2 \cos^2(\pi/2) = a_1.$$
Thus we conclude that $a_1 = a_2 = 0$, i.e. the functions are linearly independent.
So, where do matrices come in? If our vectors belong to $\Bbb{R}^m$ (or $\Bbb{C}^m$, or indeed $\Bbb{F}^m$ where $\Bbb{F}$ is a field), then equation $(\star)$ turns into a system of homogeneous linear equations. When you turn this system of linear equations into a matrix of coefficients, the columns will turn out to be precisely the vectors $v_1, \ldots, v_n$, expressed as column vectors. It doesn't matter whether $v_1, \ldots, v_n$ are expressed originally as column vectors or row vectors! Once you turn them into equations, then a matrix, they will become columns. (You should try this for yourself to convince yourself of this fact.)
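To make this concrete, here is a small illustration (with hypothetical example vectors, using `numpy`): whether the $v_i$ start life as rows or columns, equation $(\star)$ becomes the homogeneous system $A\mathbf{a} = 0$ whose coefficient matrix $A$ has the $v_i$ as columns.

```python
import numpy as np

# Hypothetical example vectors; however they were written originally,
# they become the COLUMNS of the coefficient matrix of a1*v1 + a2*v2 = 0.
v1 = np.array([1.0, 2.0, 3.0])
v2 = np.array([0.0, 1.0, 4.0])
A = np.column_stack([v1, v2])  # each vector is one column

# a1*v1 + a2*v2 = 0 is exactly A @ [a1, a2] = 0; independence means the
# only solution is the trivial one, i.e. A has full column rank.
print(np.linalg.matrix_rank(A))  # 2 == number of vectors, so independent
```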
So, if you take the rows of a given matrix, and try to figure out (by definition) whether they are linearly independent or not, you'll inevitably end up with these vectors being columns, i.e. you'll get the same matrix, just transposed.
Further, we also get a nice technique for proving linear (in)dependence of vectors in $\Bbb{F}^m$, and pruning them down to a linearly independent set: stick them as columns in a matrix $A$, row reduce to a row-echelon form $B$, and if the $i$th column of $B$ does not have a pivot in it, then the $i$th column of $A$ depends linearly on the previous columns of $A$, and hence can be removed without damaging the span.
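The pruning technique above can be sketched in a few lines of Python (`sympy`'s `rref` reports the pivot columns directly; the vectors here are hypothetical examples, with the third chosen to be the sum of the first two):

```python
import sympy as sp

# Hypothetical vectors; v3 = v1 + v2, so it should be pruned.
v1, v2, v3 = [1, 0, 1], [0, 1, 1], [1, 1, 2]
A = sp.Matrix.hstack(sp.Matrix(v1), sp.Matrix(v2), sp.Matrix(v3))

_, pivot_cols = A.rref()  # indices of the columns of A that carry a pivot
print(pivot_cols)         # (0, 1): columns 0 and 1 are independent,
                          # column 2 depends on the earlier ones

# Keeping only the pivot columns of A gives a linearly independent
# subset of the ORIGINAL vectors with the same span.
independent = [A.col(i) for i in pivot_cols]
```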
If you instead stick the vectors in as rows in a matrix and reduce as above, then this will not tell you which vectors depend on each other, in the same way that the column approach does. However, row operations preserve the span of the row vectors, hence the non-zero rows of a row-echelon form of a matrix will be a basis for the span of your vectors. This basis may have no vectors in common with your original set of vectors, however!
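For contrast, here is the row version of the same (hypothetical) example: the nonzero rows of the reduced form span the same space, but in general they need not coincide with any of the original vectors.

```python
import sympy as sp

# The same hypothetical vectors, now placed as ROWS.
A = sp.Matrix([[1, 0, 1], [0, 1, 1], [1, 1, 2]])
R, _ = A.rref()

# Row operations preserve the row space, so the nonzero rows of R form a
# basis for the span of the original vectors (though in general these
# basis vectors need not be any of the originals).
basis = [R.row(i) for i in range(R.rows) if any(R.row(i))]
print(len(basis))  # 2: the span is two-dimensional
```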
The matrix consists of $2$ rows and $2$ columns, so we write that the matrix is an element of $\mathbb{R}^{2 \times 2}$.
As for your question regarding matrix multiplication: by definition, if $C=AB$, where $A\in \mathbb{R}^{m \times p}$ and $B\in \mathbb{R}^{p \times n}$, then $C\in \mathbb{R}^{m \times n}$ with $C_{ij}=\sum_{k=1}^p A_{ik}B_{kj}$. Fixing $i$ and $j$, notice that as we increment $k$, we travel along the $i$-th row of $A$ and the $j$-th column of $B$.
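The defining formula $C_{ij}=\sum_{k=1}^p A_{ik}B_{kj}$ translates directly into a triple loop (a sketch in Python; the helper name `matmul` is just an illustration):

```python
import numpy as np

def matmul(A, B):
    """Multiply A (m x p) by B (p x n) straight from the definition
    C[i, j] = sum over k of A[i, k] * B[k, j]."""
    m, p = A.shape
    p2, n = B.shape
    assert p == p2, "inner dimensions must agree"
    C = np.zeros((m, n))
    for i in range(m):          # i-th row of A
        for j in range(n):      # j-th column of B
            for k in range(p):  # walk the row and the column together
                C[i, j] += A[i, k] * B[k, j]
    return C

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[2.], [3.]])
print(matmul(A, B))  # [[ 8.] [18.]], matching A @ B
```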
Remark: if you want to compute
$$\begin{bmatrix}1 & 2 \\ 3 & 4 \end{bmatrix}\begin{bmatrix}2 \\ 3\end{bmatrix},$$
then, from the definition, we compute $$\begin{bmatrix} \begin{bmatrix} 1 & 2 \end{bmatrix}\begin{bmatrix}2 \\ 3\end{bmatrix} \\ \begin{bmatrix} 3 & 4 \end{bmatrix}\begin{bmatrix}2 \\ 3\end{bmatrix} \end{bmatrix},$$
but you can also verify that it is equal to $$2\begin{bmatrix} 1 \\ 3\end{bmatrix} + 3\begin{bmatrix} 2 \\ 4\end{bmatrix}.$$
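The two viewpoints (row-by-column, and a linear combination of the columns) can be checked numerically in a couple of lines (a sketch using `numpy`):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
x = np.array([2, 3])

# Row-by-column definition of the product...
by_rows = A @ x
# ...agrees with the "combination of columns" view: 2*col0 + 3*col1.
by_cols = 2 * A[:, 0] + 3 * A[:, 1]
print(by_rows, by_cols)  # both are [ 8 18]
```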
Yes. The following are equivalent for a square matrix $A$:
- $A$ is non-singular;
- the rows of $A$ are linearly independent;
- the columns of $A$ are linearly independent.
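All three conditions can be tested numerically on the $2 \times 2$ example from above (a sketch using `numpy`; non-singularity is checked via the determinant, and independence via the rank):

```python
import numpy as np

A = np.array([[1., 2.], [3., 4.]])

nonsingular = not np.isclose(np.linalg.det(A), 0)      # det(A) = -2 != 0
rows_indep = np.linalg.matrix_rank(A) == A.shape[0]    # full row rank
cols_indep = np.linalg.matrix_rank(A.T) == A.shape[1]  # rank(A) == rank(A^T)
print(nonsingular, rows_indep, cols_indep)  # all three agree: True True True
```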