Informally you can recover the coefficient by averaging both sides against $\sin mx$:
$$\frac{1}{\pi}\int_{-\pi}^{\pi}\left( a_0 + \sum_{n=1}^\infty \big(a_n\sin(nx) + b_n\cos(nx)\big)\right)\sin(mx)\, dx =\frac{1}{\pi}\int_{-\pi}^\pi 4\sin x \sin (mx)\, dx, $$
because the left-hand side then becomes
$$a_0 \frac{1}{\pi}\int_{-\pi}^\pi\sin(mx)\, dx+ \sum_{n=1}^\infty\left( a_n\frac{1}{\pi}\int_{-\pi}^\pi\sin(nx)\sin(mx)\, dx+b_n\frac{1}{\pi}\int_{-\pi}^\pi\cos(nx)\sin(mx)\, dx\right) =a_m, $$
while the right-hand side is $4\delta_{1m}$. This procedure, similar to Fourier's original reasoning, requires interchanging the integral and the sum on the left-hand side, which is justified if the series converges uniformly. However, that requirement is excessively restrictive: it turns out that convergence in the $L^2$ sense suffices, which can be proven by Hilbert space methods.
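As a quick numerical illustration of this averaging argument (a sketch of my own, not from the original), one can check that integrating $f(x)=4\sin x$ against $\sin(mx)$ on $[-\pi,\pi]$ picks out exactly the coefficient $4\delta_{1m}$:

```python
import numpy as np

# Numerical sanity check: recover the sine coefficients of f(x) = 4 sin(x)
# by averaging against sin(mx) over [-pi, pi].
x = np.linspace(-np.pi, np.pi, 200001)
dx = x[1] - x[0]
f = 4 * np.sin(x)

coeffs = {}
for m in range(1, 4):
    # (1/pi) * integral of f(x) sin(mx) dx, approximated by a Riemann sum
    # (the integrand vanishes at the endpoints, so this is accurate here)
    coeffs[m] = np.sum(f * np.sin(m * x)) * dx / np.pi

print(coeffs)  # only m = 1 survives: a_1 = 4, the rest are (numerically) 0
```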
Yes! And these types of expansions occur in a variety of applications, e.g., solving the heat or wave equation on a rectangle with prescribed boundary and initial data.
As a specific example, we can think of the following expansion as a two dimensional Fourier sine series for $f(x,y)$ on $0<x<a$, $0<y<b$:
$$
f(x,y)=\sum_{n=1}^\infty \sum_{m=1}^\infty c_{nm}\sin\left({n\pi\, x\over a}\right)\sin\left({m\pi\, y\over b}\right), \quad 0<x<a,\ 0<y<b,
$$
where the coefficients (obtained from the same type of orthogonality argument as in the 1D case) are given by
\begin{align}
c_{nm}&={\int_0^b \int_0^a f(x,y)\sin\left({n\pi\, x\over a}\right)\sin\left({m\pi\, y\over b}\right)\,dx\,dy\over \int_0^b \int_0^a \sin^2\left({n\pi\, x\over a}\right)\sin^2\left({m\pi\, y\over b}\right)\,dx\,dy}\\
&={4\over a b}\int_0^b \int_0^a f(x,y)\sin\left({n\pi\, x\over a}\right)\sin\left({m\pi\, y\over b}\right)\,dx\,dy, \quad n,m=1,2,3,\dots
\end{align}
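As a sanity check on this coefficient formula (again, my own sketch rather than part of the original answer), take a function that is already a single product mode; the double integral should then recover its coefficient exactly:

```python
import numpy as np

# For f(x,y) = sin(pi x/a) sin(2 pi y/b) the expansion is one mode,
# so c_{1,2} = 1 and every other c_{n,m} = 0.
a, b = 1.0, 2.0
x = np.linspace(0.0, a, 401)
y = np.linspace(0.0, b, 401)
X, Y = np.meshgrid(x, y, indexing="ij")
F = np.sin(np.pi * X / a) * np.sin(2 * np.pi * Y / b)
dx, dy = x[1] - x[0], y[1] - y[0]

def c(n, m):
    """Coefficient c_{n,m} from the double-integral formula above."""
    integrand = F * np.sin(n * np.pi * X / a) * np.sin(m * np.pi * Y / b)
    # the integrand vanishes on the boundary, so a plain Riemann sum works
    return (4.0 / (a * b)) * np.sum(integrand) * dx * dy

print(c(1, 2), c(1, 1), c(2, 2))  # approximately 1, 0, 0
```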
For example, the picture below shows the surface $$f(x,y)=30x y^2 (1-x)(1-y)\cos(10x)\cos(10y), \quad 0<x<1,\ 0<y<1,$$ (left) and a plot of its two-dimensional Fourier sine series (right) truncated to $n,m=1,\dots,5$:
![Mathematica graphics](https://i.stack.imgur.com/JGmAb.png)
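A sketch of the computation behind such a plot (my own Python rendition, with $a=b=1$): compute the coefficients $c_{nm}$ for $n,m\le N$, form the partial sum, and measure the relative $L^2$ error, which can only shrink as more modes are kept:

```python
import numpy as np

# The function from the figure, on the unit square (a = b = 1).
x = np.linspace(0.0, 1.0, 301)
X, Y = np.meshgrid(x, x, indexing="ij")
F = 30 * X * Y**2 * (1 - X) * (1 - Y) * np.cos(10 * X) * np.cos(10 * Y)
dx = x[1] - x[0]

def partial_sum(N):
    """Truncated 2D sine series with n, m = 1, ..., N."""
    S = np.zeros_like(F)
    for n in range(1, N + 1):
        for m in range(1, N + 1):
            sn, sm = np.sin(n * np.pi * X), np.sin(m * np.pi * Y)
            c_nm = 4.0 * np.sum(F * sn * sm) * dx * dx  # coefficient formula
            S += c_nm * sn * sm
    return S

def rel_err(N):
    S = partial_sum(N)
    return np.sqrt(np.sum((F - S) ** 2) / np.sum(F**2))

print(rel_err(2), rel_err(5))  # the error decreases as modes are added
```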
Finally, keep in mind that we are not limited just to double sums of the form sine-sine. We could have any combination we like so long as they form a complete orthogonal family on the domain under discussion.
Here's how I would motivate the solution:
First, the problem becomes VASTLY simpler if you replace $\cos(nx)$ with $\tfrac{1}{2}\left(e^{inx}+e^{-inx}\right)$ and $\sin(nx)$ with $\tfrac{1}{2i}\left(e^{inx}-e^{-inx}\right)$. Once the two series are expressed in terms of complex exponentials, multiplying Fourier terms becomes easy.
That's because the coefficient of the $m$-th term in the product collects contributions from every pair of terms in the two factor series whose indices add to $m$: $(m-2,2)$, $(m-1,1)$, $(m,0)$, $(m+1,-1)$, $(m+2,-2)$, and so on, where the first index in each pair refers to the first series and the second to the second. This is exactly where the convolution on indices comes from.
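The index convolution can be sketched concretely (my own illustration, with made-up helper names): represent each series by a dictionary mapping the integer index $n$ to the coefficient of $e^{inx}$, and multiply by convolving the dictionaries.

```python
# Represent a trigonometric polynomial by its complex exponential
# coefficients {n: c_n}, e.g. cos(x) -> {1: 1/2, -1: 1/2} and
# sin(x) = (e^{ix} - e^{-ix}) / (2i) -> {1: -i/2, -1: i/2}.

def convolve(A, B):
    """Coefficients of the product series: C_m = sum over j+k = m of A_j * B_k."""
    C = {}
    for j, aj in A.items():
        for k, bk in B.items():
            C[j + k] = C.get(j + k, 0) + aj * bk
    return C

cos1 = {1: 0.5, -1: 0.5}       # cos(x)
sin1 = {1: -0.5j, -1: 0.5j}    # sin(x)

prod = convolve(cos1, sin1)
# cos(x) sin(x) = sin(2x)/2, whose coefficients are {2: -i/4, -2: i/4}
print(prod)
```

This is the same convolution-on-indices structure described above, just made executable for one small product.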