Let me explain this with a much simpler example, namely the two-dimensional case. Say we have the following equations:
\begin{equation}
\begin{aligned}
2x + 3y & =5&\text{ (1) } \\
x + 3y &= 4 & \text{ (2) }
\end{aligned}
\end{equation}
This system can be represented as follows:
$$\begin{pmatrix} 2 & 3 \\ 1 & 3 \end{pmatrix} \begin{pmatrix}x \\ y \end{pmatrix} = \begin{pmatrix} 5 \\ 4 \end{pmatrix} $$
When doing row reduction, I am allowed to do the following operations:
(1) Interchanging two rows.
(2) Multiplying a row by a non-zero scalar.
(3) Adding a multiple of one row to another row.
All these operations on the matrix translate to the operations we are familiar with when solving a system of linear equations. For example, subtracting equation $(2)$ from $(1)$ results in the equation $x = 1$. On the matrix, this means subtracting row $2$ from row $1$ on both sides (equivalently, on the augmented matrix), which gives
$$\begin{pmatrix} 1 & 0 \\ 1 & 3 \end{pmatrix}\begin{pmatrix}x \\ y \end{pmatrix} = \begin{pmatrix} 1 \\ 4 \end{pmatrix}$$ To simplify further, we can subtract row $1$ from row $2$, which gives
$$\begin{pmatrix} 1 & 0 \\ 0 & 3 \end{pmatrix}\begin{pmatrix}x \\ y \end{pmatrix} = \begin{pmatrix} 1 \\ 3 \end{pmatrix}$$
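If you want to check these row operations mechanically, here is a minimal sketch using SymPy's `rref` (SymPy is simply my choice of tool here, not something the argument depends on):
```python
from sympy import Matrix

# Augmented matrix of the system 2x + 3y = 5, x + 3y = 4
aug = Matrix([[2, 3, 5],
              [1, 3, 4]])

# rref() carries out the row operations (swap, scale, add a multiple of a row)
# and returns the reduced row echelon form together with the pivot columns.
reduced, pivots = aug.rref()
print(reduced)  # Matrix([[1, 0, 1], [0, 1, 1]]), i.e. x = 1 and y = 1
```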
Why are we doing this? Matrices became more than just a tool for solving linear equations; they became algebraic objects in their own right, with many properties of their own. Read A. Cayley, "A Memoir on the Theory of Matrices". Sorry, I digress.
You can also do column operations, but then the system has to be written differently:
$$\begin{pmatrix} x & y \end{pmatrix}\begin{pmatrix} 2 &1 \\ 3 & 3 \end{pmatrix} = \begin{pmatrix} 5 & 4 \end{pmatrix}$$ Why don't we represent it this way? You tell me. I didn't answer your question directly, but I think with the right motivation you will find your way.
Now coming back to linear independence, say we have the vectors $u_1 = \begin{pmatrix} 2 \\ 0 \end{pmatrix} $ and $u_2= \begin{pmatrix} 1 \\ 2 \end{pmatrix} $. As you mentioned, the vectors are linearly independent if the system of equations has only the trivial solution, that is, $xu_1 + yu_2 = 0 $ only when $x=y=0$, which means $$\begin{pmatrix} 2x \\ 0 \end{pmatrix} + \begin{pmatrix} y \\ 2y \end{pmatrix} = \begin{pmatrix} 2x +y \\ 2y \end{pmatrix} = \begin{pmatrix} 2x +y \\ 0x + 2y \end{pmatrix} = \begin{pmatrix} 2 & 1 \\ 0 & 2 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} $$ holds only when $$ \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}. $$ Now the problem of deciding whether a set of vectors is linearly independent has been reduced to the problem of solving a system of linear equations. It is to be noted that $$ \begin{pmatrix} 2 & 1 \\ 0 & 2 \end{pmatrix} = \begin{pmatrix} u_1 & u_2 \end{pmatrix},$$ which is just a convenient representation.
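In this small case, the conclusion can be checked with a determinant; here is a minimal SymPy sketch (again, SymPy is just my choice):
```python
from sympy import Matrix

# Columns are u1 = (2, 0) and u2 = (1, 2)
U = Matrix([[2, 1],
            [0, 2]])

# A nonzero determinant means U*(x, y)^T = 0 forces x = y = 0,
# i.e. u1 and u2 are linearly independent.
print(U.det())        # 4
print(U.nullspace())  # []  (only the trivial solution)
```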
Since the set of continuous functions over $\mathbb{R}$ is infinite dimensional, it doesn't really make sense to try to create a matrix to determine linear dependence. However, here is a heuristic approach you can try: since equality as functions means equality over every point of the domain, you can try to create a matrix of values of your functions and see if the columns are linearly independent. If the columns are linearly independent, then there is no nontrivial linear combination of your functions that yields the zero function, and your functions are linearly independent. If the columns are not linearly independent, it's not guaranteed that your functions are linearly dependent, although it does put some restrictions on what linear combinations of your functions could yield the zero function.
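If you want to experiment with this heuristic, here is a minimal Python/SymPy sketch (the names `sample_matrix` and `columns_independent` are mine, not standard library functions):
```python
from sympy import Matrix

def sample_matrix(funcs, points):
    """Matrix whose (i, j) entry is funcs[j] evaluated at points[i]."""
    return Matrix([[f(x) for f in funcs] for x in points])

def columns_independent(funcs, points):
    """True guarantees the functions are linearly independent;
    False is inconclusive on its own (see the discussion below)."""
    return sample_matrix(funcs, points).rank() == len(funcs)
```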
For example, let's consider your functions $f_1(x) = 2\sin^2 x$, $f_2(x) = 3\cos^2 x$, and $f_3(x) = \cos(2x)$. Let's evaluate them at $x_1 = 0$, $x_2 = \pi/4$, and $x_3 = \pi/2$ (for ease of computation). Let $A = (a_{ij})$ be the matrix where $a_{ij} = f_j(x_i)$. Then
$$ A = \left(\begin{matrix} 0 & 3 & 1 \\ 1 & \frac{3}{2} & 0 \\ 2 & 0 & -1\end{matrix}\right) $$
Now, if your functions $\{f_1,f_2,f_3\}$ were linearly dependent, then there would exist $a_1,a_2,a_3\in\mathbb{R}$, not all zero, such that $a_1f_1(x)+a_2f_2(x)+a_3f_3(x)=0$ for all $x$. In particular, it would be true for our choices of $x_i$, which leads to the equation
$$ \left(\begin{matrix} 0 & 3 & 1 \\ 1 & \frac{3}{2} & 0 \\ 2 & 0 & -1\end{matrix}\right)\left(\begin{matrix} a_1 \\ a_2 \\ a_3 \end{matrix}\right) = 0 $$
Now, if it happened that $\det A\ne 0$, then the above equation has no nontrivial solutions, and we could then conclude that your set was linearly independent. However, you can check that indeed $\det A = 0$, and that the null space of $A$ is a one-dimensional space spanned by $\left(\begin{matrix} 3 \\ -2 \\ 6 \end{matrix}\right)$.
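If you would rather not compute the determinant and null space by hand, here is a minimal SymPy sketch of the same computation (any exact linear algebra tool would do):
```python
from sympy import Matrix, sin, cos, pi

f1 = lambda x: 2*sin(x)**2
f2 = lambda x: 3*cos(x)**2
f3 = lambda x: cos(2*x)

points = [0, pi/4, pi/2]
A = Matrix([[f(x) for f in (f1, f2, f3)] for x in points])

print(A.det())        # 0
print(A.nullspace())  # [Matrix([[1/2], [-1/3], [1]])], a multiple of (3, -2, 6)
```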
So that tells us that if $a_1f_1(x)+a_2f_2(x)+a_3f_3(x) = 0$ for all $x$, then $\left(\begin{matrix} a_1 \\ a_2 \\ a_3 \end{matrix}\right)$ would have to be a multiple of $\left(\begin{matrix} 3 \\ -2 \\ 6 \end{matrix}\right)$. It doesn't automatically tell us that such values will definitely work, since your domain has more than three points. You will have to verify that separately, as the other answers have done. But it does give you a way to find the linear combinations that work.
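That separate verification can also be done symbolically. Here is a minimal SymPy sketch, assuming `trigsimp` applies the identity $\cos 2x = \cos^2 x - \sin^2 x$:
```python
from sympy import symbols, sin, cos, trigsimp

x = symbols('x')
combo = 3*(2*sin(x)**2) - 2*(3*cos(x)**2) + 6*cos(2*x)

# The double-angle identity collapses the expression to zero for every x,
# not just at the three sample points.
print(trigsimp(combo))  # 0
```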
tl;dr: You can plug the values of the functions into a matrix to test for linear independence. If the matrix has full column rank (in particular, if it is square and invertible), you know the set is linearly independent. If not, you know which linear combinations are possible, and you can just test those combinations directly.
EDIT: This is a response to @dc3rd's question:
Keep in mind that ultimately you want to determine the linear independence of a subset whose elements are functions. The space of functions isn't like Euclidean space; it is more abstract, and the "vectors" aren't tuples of real numbers. The linear independence of a subset of functions doesn't depend on the "test point(s)" chosen: choosing different test points can't possibly affect whether the functions are linearly independent. What you could do is take a number, plug it into your set of functions, and see if the resulting set of numbers is linearly independent, but that's kind of silly, because $\mathbb{R}$ is one-dimensional, so having more than one function automatically results in linear dependence. You could also take a (finite) set of numbers, create for each function a vector whose entries are the function evaluated at those numbers, and see if the resulting set of vectors is linearly independent. That's a bit less silly, and it's what I'm advocating in this answer. But hopefully you see that doing this is not the same thing as evaluating the linear independence of your functions, although it helps.
Sorry for the long spiel, but to answer your questions:
1) If, for a given $x$ (actually a set of $x$'s; hopefully you see why) you find that all coefficients work, then all that means is you haven't really found useful information to conclude either linear independence or dependence. (BTW, the only way that happens is if all of your entries end up being zero.)
2) The linear independence of your set of functions doesn't depend on the points you choose to test. However, the linear independence of the vectors (of numbers) that you create by evaluating your functions at your chosen points may indeed depend on the points you choose. For example, consider the set $\{\sin x, \cos x, \sin x\cos x\}$. See what happens when you test them at $\{0,\pi/4,\pi/2\}$ and what happens at $\{0,2\pi,4\pi\}$, as the sketch below shows.
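Here is a minimal SymPy sketch of that comparison (again, the setup is my own, not a standard routine):
```python
from sympy import Matrix, sin, cos, pi

funcs = [sin, cos, lambda t: sin(t)*cos(t)]

def sample(points):
    return Matrix([[f(x) for f in funcs] for x in points])

print(sample([0, pi/4, pi/2]).rank())   # 3: columns independent at these points
print(sample([0, 2*pi, 4*pi]).rank())   # 1: columns dependent at these points
```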
Best Answer
Suppose $a\cos t + b\sin t +ce^t \equiv 0.$ Then $c= 0$; otherwise the left side is unbounded as $t \to \infty,$ hardly the behavior of the zero function. With $c=0$ we are left with $a\cos t + b\sin t\equiv 0.$ Plug in some simple values of $t$ ($t=0$ and $t=\pi/2$, say) to see $a = b = 0.$