Every vector space has (among its defining axioms) three key properties:

The space must contain a zero element. For example, the set of 2-element vectors with real-valued entries is $\mathbb{R}^2$, which contains the zero vector $\langle 0, 0 \rangle$.
The space must be closed under vector addition: for any two elements $a, b$ in the space, the element $a + b$ must also be in the space.
The space must be closed under scalar multiplication: for any element $a$ in the space and any scalar $c$ in the set of real numbers $\mathbb{R}$, the element $ca$ must also be in the space.
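To make the three properties concrete, here is a minimal Python sketch (not from the original answer) that checks them for $\mathbb{R}^2$, with vectors represented as plain tuples:

```python
# A minimal sketch checking the three properties for R^2,
# with vectors represented as plain tuples of floats.

def add(a, b):
    """Vector addition in R^2."""
    return (a[0] + b[0], a[1] + b[1])

def scale(c, a):
    """Scalar multiplication in R^2."""
    return (c * a[0], c * a[1])

zero = (0.0, 0.0)
a, b = (1.0, 2.0), (-3.0, 0.5)

assert add(a, zero) == a             # the zero vector acts as an identity
assert add(a, b) == (-2.0, 2.5)      # closure under addition: still a pair of reals
assert scale(2.0, a) == (2.0, 4.0)   # closure under scalar multiplication
```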
No doubt you'd agree that $\cos(\omega t)$ is a function and $\sin(\omega t)$ is also a function. Then we can see that $y(t) = c_1\cos(\omega t) + c_2\sin(\omega t)$ is a linear combination of the two functions $\cos(\omega t)$ and $\sin(\omega t)$, which we can regard separately.
Functions are the basic elements of the function space we're concerned with (much as vectors are the basic elements of vector spaces), so there must be some way to combine elements of the space to form other elements, as with the vector spaces we're more familiar with. We know this as closure under vector addition, or in this case function addition. For example, if you imagine two polynomials of degree $n$, say $A = c_1x^n + c_2x^{n-1} + \dots$ and $B = r_1x^n + r_2x^{n-1} + \dots$, you could add these together to get $A + B = (c_1 + r_1)x^n + (c_2 + r_2)x^{n-1} + \dots$
We also know that closure under scalar multiplication exists, since for some scalar $c$ in $\mathbb{R}$ and a polynomial function $f(x) = x^n + x^{n-1} + \dots$, we could form $cf(x) = cx^n + cx^{n-1} + \dots$ to get another function in the space.
Finally, a zero element exists in the function space: we'd certainly say that $ f(x) = 0 $ is a function, right?
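As a quick illustration of these three checks for polynomials, here is a sketch using NumPy's standard polynomial helpers; the particular coefficients chosen for $A$, $B$, and $c$ are arbitrary:

```python
import numpy as np

# Represent a degree-n polynomial by its coefficient vector
# (highest power first), as numpy.polyadd/polyval expect.

A = np.array([1.0, 2.0, 3.0])   # x^2 + 2x + 3
B = np.array([4.0, -1.0, 0.5])  # 4x^2 - x + 0.5
c = 2.5

A_plus_B = np.polyadd(A, B)     # coefficients add term by term: still a polynomial
cA = c * A                      # scaling every coefficient: still a polynomial

x = 1.7  # spot-check the identities pointwise at an arbitrary x
assert np.isclose(np.polyval(A_plus_B, x), np.polyval(A, x) + np.polyval(B, x))
assert np.isclose(np.polyval(cA, x), c * np.polyval(A, x))
```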
If you look at your initial function $y(t) = c_1\cos(\omega t) + c_2\sin(\omega t)$, you'll see that the set of functions of this form is closed under scalar multiplication: for some $r$ in the reals $\mathbb{R}$ and a function $x(t) = c_1\cos(\omega t) + c_2\sin(\omega t)$ in the space, $r\,x(t) = rc_1\cos(\omega t) + rc_2\sin(\omega t)$ is of the same form, and hence also in that space.
Similarly, two elements of the space, $A = c_1\cos(\omega t) + c_2\sin(\omega t)$ and $B = d_1\cos(\omega t) + d_2\sin(\omega t)$ with the $c_i, d_i$ in $\mathbb{R}$, can be combined with function addition to yield $A + B = c_1\cos(\omega t) + c_2\sin(\omega t) + d_1\cos(\omega t) + d_2\sin(\omega t)$, or $(c_1 + d_1)\cos(\omega t) + (c_2 + d_2)\sin(\omega t)$, which is of the same form as your initial $y(t) = c_1\cos(\omega t) + c_2\sin(\omega t)$.
Finally, we already know that the zero function $y(t) = 0$ is an element of the space if we set $c_1 = c_2 = 0$, so we arrive at the conclusion that the set of functions of the form $y(t) = c_1\cos(\omega t) + c_2\sin(\omega t)$ is indeed a vector space.
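Since an element $c_1\cos(\omega t) + c_2\sin(\omega t)$ is completely determined by the pair $(c_1, c_2)$, all three properties can be spot-checked numerically. A minimal sketch, assuming an arbitrary fixed frequency $\omega = 2$ (the helper `y` is illustrative, not from the answer):

```python
import numpy as np

w = 2.0  # an arbitrary fixed frequency

def y(coeffs, t):
    """Evaluate c1*cos(w*t) + c2*sin(w*t) for a coefficient pair."""
    c1, c2 = coeffs
    return c1 * np.cos(w * t) + c2 * np.sin(w * t)

A, B, r = (1.0, -2.0), (0.5, 3.0), 4.0
t = np.linspace(0.0, 1.0, 50)

# closure under addition: summed coefficient pairs give the sum function
assert np.allclose(y(A, t) + y(B, t), y((A[0] + B[0], A[1] + B[1]), t))
# closure under scaling: the scaled coefficient pair gives the scaled function
assert np.allclose(r * y(A, t), y((r * A[0], r * A[1]), t))
# zero element: the pair (0, 0) is the zero function
assert np.allclose(y((0.0, 0.0), t), 0.0)
```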
Hope this helps.
This is a very general phenomenon which is unrelated to even or odd functions. Let $V$ be a (say, real or complex) vector space and $T\colon V\to V$ be a linear operator, not equal to ${\rm Id}_V$, but with $T^2={\rm Id}_V$. Then the minimal polynomial of $T$ divides $p(t) = t^2-1$. Since this polynomial splits into distinct linear factors as $(t-1)(t+1)$, we necessarily have that $T$ is diagonalizable, with eigenvalues among $1$ and $-1$, and $V = V_+ \oplus V_-$, where $V_+$ is the eigenspace associated to $1$ and $V_-$ is the one associated to $-1$. What's more, we can actually say what the decomposition of any $v\in V$ relative to this direct sum is: $$v = \frac{v+Tv}{2} + \frac{v-Tv}{2}.$$

If one cannot guess the above, derive it systematically: write $v= v_+ + v_-$ with $Tv_+ = v_+$ and $Tv_- = -v_-$. Then $$\begin{cases} v=v_++v_- \\ Tv = v_+ - v_-\end{cases} \implies v_+ =\frac{v+Tv}{2}\quad\mbox{and}\quad v_-=\frac{v-Tv}{2}.$$
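If it helps to see this concretely, here is a small numerical sketch using the swap matrix on $\mathbb{R}^2$ as an illustrative involution $T$; the maps $P_\pm = \frac{1}{2}({\rm Id} \pm T)$ are exactly the projections onto $V_\pm$:

```python
import numpy as np

T = np.array([[0.0, 1.0],
              [1.0, 0.0]])   # the swap matrix: T^2 = Id, T != Id
I = np.eye(2)

P_plus = (I + T) / 2         # projection onto V_+
P_minus = (I - T) / 2        # projection onto V_-

v = np.array([3.0, -1.0])
v_plus, v_minus = P_plus @ v, P_minus @ v

assert np.allclose(v_plus + v_minus, v)     # direct sum decomposition
assert np.allclose(T @ v_plus, v_plus)      # eigenvector for +1
assert np.allclose(T @ v_minus, -v_minus)   # eigenvector for -1
```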
Examples:
$V$ is the vector space of all functions $W \to \mathbb{R}$, where $W$ is any non-trivial vector space (in particular, we may take $W=\mathbb{R}$), and let $T\colon V\to V$ be defined as $T(f)(x) = f(-x)$. This fits the bill and one may write $$f(x) = \frac{f(x)+f(-x)}{2} + \frac{f(x)-f(-x)}{2},$$for all $f\in V$ and $x\in W$. So any function $W\to \mathbb{R}$ is uniquely expressed as the sum of an even function with an odd function. This works for complex-valued functions too.
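For instance, applying this to $f(x) = e^x$ recovers $\cosh$ and $\sinh$ as the even and odd parts; a quick numerical sketch (the choice of $e^x$ is just for illustration):

```python
import numpy as np

f = np.exp
x = np.linspace(-2.0, 2.0, 41)

even = (f(x) + f(-x)) / 2   # the even part of f
odd = (f(x) - f(-x)) / 2    # the odd part of f

assert np.allclose(even, np.cosh(x))
assert np.allclose(odd, np.sinh(x))
assert np.allclose(even + odd, f(x))
```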
$V$ is the space of all real $n\times n$ matrices, and $T\colon V\to V$ is the transposition $T(A) = A^T$. Then $$A = \frac{A+A^T}{2} + \frac{A-A^T}{2}$$says that every real $n\times n$ matrix can be uniquely expressed as the sum of a symmetric matrix with a skew-symmetric matrix.
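A quick numerical check of this decomposition (the random test matrix is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

S = (A + A.T) / 2   # symmetric part
K = (A - A.T) / 2   # skew-symmetric part

assert np.allclose(S, S.T)      # S is symmetric
assert np.allclose(K, -K.T)     # K is skew-symmetric
assert np.allclose(S + K, A)    # they sum back to A
```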
$V$ is the space of all complex $n\times n$ matrices (regarded as a real vector space), and $T\colon V\to V$ is the conjugate-transposition $T(A) = A^\dagger = \overline{A^T}$ (warning: this is $\mathbb{R}$-linear but not $\mathbb{C}$-linear). Then $$A = \frac{A+A^\dagger}{2} + \frac{A-A^\dagger}{2}$$says that every complex $n\times n$ matrix can be uniquely expressed as the sum of a hermitian matrix with a skew-hermitian matrix.
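And the same check in the complex case (again with an arbitrary random matrix):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

H = (A + A.conj().T) / 2   # hermitian part
N = (A - A.conj().T) / 2   # skew-hermitian part

assert np.allclose(H, H.conj().T)    # H is hermitian
assert np.allclose(N, -N.conj().T)   # N is skew-hermitian
assert np.allclose(H + N, A)         # they sum back to A
```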
Take two functions which are in that particular form. Their "sum" (there is an obvious way to define the sum of functions) is again of that same form; the set forms an abelian group under this sum.

Also, for any real number $k$ one can define $k$ times such a function: it will again be of the same form.

This sum, defined on this set of functions, satisfies all the properties that your "standard vector spaces over the real numbers" satisfy. For example, with $f(x) = c_1 + c_2\sin^2 x + c_3\cos^2 x$ and $g(x) = c_1' + c_2'\sin^2 x + c_3'\cos^2 x$, we have $k(f(x) + g(x)) = kf(x) + kg(x)$. Look at all the axioms of abstract vector spaces: all hold here. The scalars are still real numbers (or, equally well, complex numbers).
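A brief numerical sketch of these claims, identifying each such function with its coefficient triple $(c_1, c_2, c_3)$ (the helper `h` and the particular coefficients are illustrative):

```python
import numpy as np

def h(coeffs, x):
    """Evaluate c1 + c2*sin^2(x) + c3*cos^2(x) for a coefficient triple."""
    c1, c2, c3 = coeffs
    return c1 + c2 * np.sin(x) ** 2 + c3 * np.cos(x) ** 2

f_c, g_c, k = (1.0, 2.0, -0.5), (0.3, -1.0, 4.0), 3.0
x = np.linspace(0.0, np.pi, 30)

# closure under sum: coefficientwise addition gives the sum function
sum_c = tuple(a + b for a, b in zip(f_c, g_c))
assert np.allclose(h(f_c, x) + h(g_c, x), h(sum_c, x))
# distributivity: k(f + g) = kf + kg, checked pointwise
assert np.allclose(k * (h(f_c, x) + h(g_c, x)),
                   k * h(f_c, x) + k * h(g_c, x))
```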