I will explain this with a much simpler example, namely the 2-dimensional case. Say we have the following equations:
\begin{equation}
\begin{aligned}
2x + 3y & =5&\text{ (1) } \\
x + 3y &= 4 & \text{ (2) }
\end{aligned}
\end{equation}
This system can be represented as follows:
$$\begin{pmatrix} 2 & 3 \\ 1 & 3 \end{pmatrix} \begin{pmatrix}x \\ y \end{pmatrix} = \begin{pmatrix} 5 \\ 4 \end{pmatrix} $$
When doing row reduction, I am allowed to do the following operations:
(1) Interchanging two rows.
(2) Multiplying a row by a non-zero scalar.
(3) Adding a multiple of one row to another row.
All these operations on the matrix translate to the operations we are familiar with when solving a system of linear equations. For example, subtracting equation (2) from equation (1) results in the equation $x = 1$. On the matrix, this means subtracting row 2 from row 1 on both sides (equivalently, on the augmented matrix), which gives
$$\begin{pmatrix} 1 & 0 \\ 1 & 3 \end{pmatrix}\begin{pmatrix}x \\ y \end{pmatrix} = \begin{pmatrix} 1 \\ 4 \end{pmatrix}$$ To simplify further, we can subtract row 1 from row 2, which gives
$$\begin{pmatrix} 1 & 0 \\ 0 & 3 \end{pmatrix}\begin{pmatrix}x \\ y \end{pmatrix} = \begin{pmatrix} 1 \\ 3 \end{pmatrix}$$ so $x = 1$ and $3y = 3$, that is, $y = 1$.
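If you want to see these row operations carried out mechanically, here is a minimal NumPy sketch (the library choice and names are my own additions for illustration, not part of the original argument):

```python
import numpy as np

# The system (1)-(2) in matrix form: A @ [x, y] = b.
A = np.array([[2.0, 3.0],
              [1.0, 3.0]])
b = np.array([5.0, 4.0])

# Subtract row 2 from row 1 (on both A and b, i.e. on the augmented matrix).
A[0] -= A[1]
b[0] -= b[1]

# Subtract row 1 from row 2.
A[1] -= A[0]
b[1] -= b[0]

print(A)  # [[1. 0.]
          #  [0. 3.]]
print(b)  # [1. 3.]  ->  x = 1, 3y = 3
```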
Why are we doing this? Matrices became more than just a tool for solving linear equations. They became algebraic objects in their own right, with many properties of their own. Read A. Cayley, "A memoir on the theory of matrices". Sorry, I digress.
You can also do column operations, but then the matrices have to be arranged differently:
$$\begin{pmatrix} x & y \end{pmatrix}\begin{pmatrix} 2 &1 \\ 3 & 3 \end{pmatrix} = \begin{pmatrix} 5 & 4 \end{pmatrix}$$ Why don't we represent it this way? You tell me. I didn't answer your question directly, but I think with the right motivation you will find your way.
Now coming back to linear independence, say we have the vectors $u_1 = \begin{pmatrix} 2 \\ 0 \end{pmatrix} $ and $u_2= \begin{pmatrix} 1 \\ 2 \end{pmatrix} $. As you mentioned, the vectors are linearly independent if the system of equations has only the trivial solution, that is, $xu_1 + yu_2 = 0 $ only if $x=y=0$, which means $$\begin{pmatrix} 2x \\ 0 \end{pmatrix} + \begin{pmatrix} y \\ 2y \end{pmatrix} = \begin{pmatrix} 2x +y \\ 2y \end{pmatrix} = \begin{pmatrix} 2x +y \\ 0x + 2y \end{pmatrix} = \begin{pmatrix} 2 & 1 \\ 0 & 2 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} $$ only if $$ \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} $$ Now the problem of deciding whether a set of vectors is linearly independent has been reduced to the problem of finding the solutions of a system of linear equations. It is to be noted that $$ \begin{pmatrix} 2 & 1 \\ 0 & 2 \end{pmatrix} = \begin{pmatrix} u_1 & u_2 \end{pmatrix},$$ which is just a convenient representation.
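As a quick sanity check (again a NumPy sketch of my own, not part of the original answer): for a square matrix, linear independence of the columns is equivalent to full rank, or equivalently a nonzero determinant.

```python
import numpy as np

# Columns are u1 and u2 from above.
U = np.array([[2.0, 1.0],
              [0.0, 2.0]])

# Rank 2 (full column rank) means U @ [x, y] = 0 has only the trivial
# solution, i.e. u1 and u2 are linearly independent.
print(np.linalg.matrix_rank(U))  # 2
print(np.linalg.det(U))          # 4.0, nonzero
```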
Write the general solution of the homogeneous linear ODE
$$
\tag{1}
\frac{\mathrm{d}\vec{y}}{\mathrm{d}x} = A(x) \vec{y}
$$
in the matrix form as
$$
\Phi(x, x_0) \vec{c},
$$
where $\vec{c} = \mathrm{col}(c_1, \dots, c_n)$ is a constant column vector (that is, independent of $x$), and $\Phi$ is the state-transition matrix, which satisfies $\Phi(x_0, x_0) = I$ (the identity matrix).
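For a concrete picture (my own illustration, assuming a *constant* coefficient matrix, in which case the state-transition matrix reduces to a matrix exponential):

```python
import numpy as np
from scipy.linalg import expm

# For constant A, Phi(x, x0) = expm(A * (x - x0)).
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
x0 = 0.0

def Phi(x):
    return expm(A * (x - x0))

print(Phi(x0))            # identity matrix: Phi(x0, x0) = I
c = np.array([1.0, 2.0])  # an arbitrary constant vector
print(Phi(1.5) @ c)       # the solution Phi(x, x0) c evaluated at x = 1.5
```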
Now we are looking for a general solution of the nonhomogeneous linear ODE
$$
\tag{2}
\frac{\mathrm{d}\vec{y}}{\mathrm{d}x} = A(x) \vec{y} + \vec{b}(x)
$$
in the form
$$
\Phi(x, x_0) \vec{c}(x).
$$
Incidentally, here is the source of the term "variation of constants". We look for conditions for the above function of $x$ to be a solution of $(2)$, that is, to have
$$
\frac{\mathrm{d}}{\mathrm{d}x}(\Phi(x, x_0) \vec{c}(x)) = A(x) (\Phi(x, x_0) \vec{c}(x)) + \vec{b}(x).
$$
But
$$
\frac{\mathrm{d}}{\mathrm{d}x}(\Phi(x, x_0) \vec{c}(x)) = \frac{\mathrm{d}}{\mathrm{d}x} \Phi(x, x_0) \cdot \vec{c}(x) + \Phi(x, x_0) \cdot \vec{c}'(x)
$$
and
$$
\frac{\mathrm{d}}{\mathrm{d}x} \Phi(x, x_0) = A(x) \Phi(x, x_0),
$$
so after cancellation we obtain
$$
\Phi(x, x_0) \cdot \vec{c}'(x) = \vec{b}(x),
$$
which is, for each fixed $x$, a Cramer system (because $\det{\Phi(x, x_0)} = W(x, x_0) \ne 0$) of $n$ linear equations in $n$ unknowns, $c'_1(x), \dots, c'_n(x)$. The unknowns are given by Cramer's rule. We have thus obtained the derivative of the vector function $\vec{c}(x)$, so we have to integrate that derivative. And that is the source of the integral.
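To make the last step concrete, here is a sketch of the scalar case $y' = ay + b(x)$, where $\Phi(x, x_0) = e^{a(x - x_0)}$ and Cramer's rule is trivial (the specific $a$ and $b(x)$ below are my own sample choices):

```python
import sympy as sp

x, s, x0 = sp.symbols('x s x0')
a = 2                      # sample constant coefficient
b = sp.exp(x)              # sample inhomogeneity b(x)

Phi = sp.exp(a * (x - x0))  # 1x1 state-transition "matrix"
# Phi * c'(x) = b(x)  =>  c'(x) = b(x) / Phi(x, x0); integrate to get c(x):
c = sp.integrate(b.subs(x, s) / Phi.subs(x, s), (s, x0, x))
y = sp.expand(Phi * c)     # particular solution with y(x0) = 0

# Verify that y' = a*y + b(x):
print(sp.simplify(sp.diff(y, x) - a * y - b))  # 0
```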
The "Wrońskian" determinant was named after a nineteenth-century Polish philosopher and mathematician, Józef Maria Hoene-Wroński.
Your Einstein notation is correct. Looking at the indices, you can see that $\mathbf{x}$ and $\mathbf{c}$ are both contracted over the same index. In terms of matrix-vector notation, you can write this as $\mathbf{c}^{\text{T}}\mathbf{x}$.
Note that after performing this operation, the result is just a number; call it $k$. You can see from your Einstein notation that none of the other quantities depend on the same indices, so your modified matrix equation could really just be written as $k\mathbf{A}\mathbf{u}=\mathbf{b}$.
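In code the index bookkeeping is easy to check (a NumPy sketch of my own; the shapes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
c = rng.standard_normal(3)
x = rng.standard_normal(3)
A = rng.standard_normal((4, 3))
u = rng.standard_normal(3)

# c_i x_i is a full contraction: the result is a scalar k.
k = np.einsum('i,i->', c, x)        # same as c @ x, i.e. c^T x
print(np.allclose(k, c @ x))        # True

# What remains is just k * (A @ u) = b.
b = np.einsum(',ij,j->i', k, A, u)
print(np.allclose(b, k * (A @ u)))  # True
```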