According to the answer by Korman to this question, every irreducible representation is found inside a tensor product of fundamental representations, so there can't be any missing representations.
"The usual tensor constructions" do not include taking direct summands. They include, for example, taking duals, taking tensor products, taking direct sums, and applying Schur functors. In other words, you can apply all functors that come from functors $\text{Vect} \times ... \times \text{Vect} \to \text{Vect}$ (possibly contravariant in some variables).
Is there a one-to-one correspondence between representations of double covers and projective representations of the original group?
Yes, in this case, because the double cover is the universal cover. Both are in turn identified with representations of the Lie algebra.
This is a question that touches on many issues. On the one hand, things are indeed easier to deal with in the language of the complexification. For $SL(2,\mathbb{C})$, there is a specific element $H$ in the Lie algebra $\mathfrak{sl}(2,\mathbb{C})$ which acts diagonalizably on any finite-dimensional representation. The eigenvalues are called weights, and in the case of an irreducible representation, they are all distinct. Specifically, the $n$-dimensional irreducible representation $\mathbf{n}$ admits a basis of weight vectors of weights $-n+1,-n+3,\dots,n-3,n-1$. Now in a tensor product, the tensor product of two eigenvectors for $H$ is an eigenvector whose eigenvalue is the sum of the eigenvalues of the two factors. There is a general result saying that a weight vector of the highest possible weight always generates an irreducible subrepresentation, and that there is an invariant complement to this subrepresentation. This allows you to compute decompositions in an elementary way.
Take the example of $\mathbf{3} \otimes \mathbf{3}$. In $\mathbf{3}$ you have weights $-2$, $0$, and $2$, so in the tensor product the possible weights are $-4$, $-2$, $0$, $2$, and $4$, and the dimensions of the corresponding eigenspaces are $1$, $2$, $3$, $2$, and $1$. The weight vector of weight $4$ generates a subrepresentation $\mathbf{5}$, which has one weight vector for each of the listed weights. So in the complement, you get weights $-2$, $0$, and $2$ with multiplicities $1$, $2$, and $1$. The highest of these gives you a subrepresentation $\mathbf{3}$, again containing one weight vector for each of these weights, so there is just one copy of the trivial representation $\mathbf{1}$ left. To describe the result in terms of symmetry, you observe that the weight vector of the maximal weight $4$ is the tensor product of a highest weight vector with itself, so it sits in $S^2\mathbf{3}$. Either by counting dimensions or by direct analysis of the weights you can see that $S^2\mathbf{3} \cong \mathbf{5} \oplus \mathbf{1}$ and $\Lambda^2\mathbf{3} \cong \mathbf{3}$.
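The "peel off the highest weight" procedure above is easy to mechanize: track weight multiplicities, then repeatedly remove the string of weights belonging to the highest remaining weight. A minimal sketch in Python (the function names are my own, just for illustration):

```python
from collections import Counter

def weights(n):
    """Weight multiset of the n-dimensional irreducible: -n+1, -n+3, ..., n-1."""
    return Counter(range(-n + 1, n, 2))

def tensor(a, b):
    """Weights of a tensor product are sums of weights of the two factors."""
    out = Counter()
    for wa, ma in a.items():
        for wb, mb in b.items():
            out[wa + wb] += ma * mb
    return out

def decompose(w):
    """Repeatedly peel off the irreducible generated by the highest weight."""
    w = Counter(w)
    parts = []
    while +w:                       # unary + drops zero counts
        top = max(k for k, m in w.items() if m > 0)
        parts.append(top + 1)       # highest weight n-1 <-> dimension n
        w -= weights(top + 1)
    return sorted(parts, reverse=True)

print(decompose(tensor(weights(3), weights(3))))   # [5, 3, 1]
```

The printed result reproduces $\mathbf{3} \otimes \mathbf{3} \cong \mathbf{5} \oplus \mathbf{3} \oplus \mathbf{1}$.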
This works similarly for general tensor products $\mathbf{n} \otimes \mathbf{m}$ with $n \geq m$: the product is isomorphic to $\mathbf{n+m-1} \oplus \dots \oplus \mathbf{n-m+1}$, with the dimension decreasing by two in each step (so there are always $m$ summands). For $n=m$, things look similar to the special case above: $S^2\mathbf{n} = \mathbf{2n-1} \oplus \mathbf{2n-5} \oplus \dots$ and $\Lambda^2\mathbf{n} = \mathbf{2n-3} \oplus \mathbf{2n-7} \oplus \dots$, with dimensions going down in steps of $4$.
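These closed formulas can be sanity-checked by counting dimensions: the summands of $\mathbf{n} \otimes \mathbf{m}$ must add up to $nm$, and the symmetric and alternating pieces must add up to $\binom{n+1}{2}$ and $\binom{n}{2}$ respectively. A short sketch (function names are mine):

```python
def clebsch_gordan(n, m):
    """Summands of n (x) m for n >= m: n+m-1, n+m-3, ..., n-m+1."""
    return list(range(n + m - 1, n - m, -2))

def sym2(n):
    """Summands of S^2(n): 2n-1, 2n-5, ..."""
    return list(range(2 * n - 1, 0, -4))

def alt2(n):
    """Summands of Lambda^2(n): 2n-3, 2n-7, ..."""
    return list(range(2 * n - 3, 0, -4))

for n in range(1, 8):
    for m in range(1, n + 1):
        assert sum(clebsch_gordan(n, m)) == n * m          # dim of n (x) m
    assert sum(sym2(n)) == n * (n + 1) // 2                # dim of S^2(n)
    assert sum(alt2(n)) == n * (n - 1) // 2                # dim of Lambda^2(n)
    # S^2 and Lambda^2 together exhaust n (x) n
    assert sorted(sym2(n) + alt2(n)) == sorted(clebsch_gordan(n, n))
print("all dimension checks pass")
```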
You can always construct higher tensor powers step by step, say $\mathbf{2}\otimes \mathbf{2} \otimes \mathbf{2} \cong (\mathbf{3} \oplus \mathbf{1})\otimes \mathbf{2} \cong (\mathbf{3} \otimes \mathbf{2}) \oplus \mathbf{2}$, and then the first summand splits as $\mathbf{4} \oplus \mathbf{2}$. However, this is just one possible way to “decompose according to symmetry”, since it distinguishes the first two factors. In fact, the canonical way to decompose higher tensor products is just into so-called isotypical components. Here this would be $\mathbf{2} \otimes \mathbf{2} \otimes \mathbf{2} \cong \mathbf{4} \oplus W$, where $W$ is an invariant subspace isomorphic to a direct sum of two copies of $\mathbf{2}$. However, there are various possible realizations of this isomorphism, none of which is canonical. The systematic way to deal with this is to decompose simultaneously as a representation of $\mathfrak{sl}(2,\mathbb{C})$ and of the permutation group $\mathfrak{S}_3$. This indeed leads towards Young diagrams, which are also needed to deal with $SL(n,\mathbb{C})$ and hence with $SU(n)$. If you are looking for literature in that direction, I would recommend the book of Fulton and Harris on representation theory.
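The step-by-step procedure can be iterated mechanically: absorb one tensor factor at a time using the two-factor rule. This gives the list of irreducible summands with multiplicities, though (as noted above) not a canonical realization of them. A sketch, with names of my own choosing:

```python
def clebsch_gordan(n, m):
    """n (x) m = (n+m-1) + (n+m-3) + ... + (|n-m|+1)."""
    n, m = max(n, m), min(n, m)
    return list(range(n + m - 1, n - m, -2))

def tensor_power(n, k):
    """Decompose the k-th tensor power of the n-dimensional irrep,
    absorbing one factor at a time."""
    summands = [n]
    for _ in range(k - 1):
        summands = [d for s in summands for d in clebsch_gordan(s, n)]
    return sorted(summands, reverse=True)

print(tensor_power(2, 3))   # [4, 2, 2]: one copy of 4, two copies of 2
```

The output matches $\mathbf{2} \otimes \mathbf{2} \otimes \mathbf{2} \cong \mathbf{4} \oplus \mathbf{2} \oplus \mathbf{2}$.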
Best Answer
I think I can help with this one, too, though I'm only going to give a sketch. I think the details are worked out somewhere in the lecture notes I linked to in my comment on your last question.
First you should understand why the corresponding result (that every irrep occurs inside a tensor power of the fundamental representation) for $SU(2)$ is true. To begin, note that any representation of $SU(2)$ restricts to a representation of a $U(1)$ subgroup, so by decomposing an $SU(2)$ irrep into $U(1)$ irreps we see that every irrep of $SU(2)$ restricts on this $U(1)$ to $e^{i\theta k_1} \oplus \ldots \oplus e^{i \theta k_n}$. Here the $k_i$'s are integers, and conjugating with the matrix
$$\left(\begin{array}{cc} 0 & 1 \\ -1 & 0 \end{array}\right)$$
shows that $-k_i$ is an exponent whenever $k_i$ is. If the chosen $U(1)$ subgroup is generated by the basis vector $X_1$ for the Lie algebra of $SU(2)$, then it turns out that the eigenvalues of $X_1$ are simply $k_i/2$ (the $2$ comes from the fact that $SU(2)$ is a simply connected Lie group with the same Lie algebra as $SO(3)$, and $SU(2)$ double covers $SO(3)$). Using the other two directions in the Lie algebra of $SU(2)$, one can construct operators $X_+$ and $X_-$ (the "raising and lowering" or "creation and annihilation" operators) such that $X_+$ sends any $k/2$-eigenvector of $X_1$ to either $0$ or a $(k/2 + 1)$-eigenvector of $X_1$, and similarly for $X_-$.
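In the fundamental representation these operators are just $2 \times 2$ matrices (strictly speaking $X_\pm$ live in the complexified Lie algebra), and the commutation relations behind the raising/lowering behavior can be checked directly. A sketch, taking $X_1 = \tfrac{1}{2}\,\mathrm{diag}(1,-1)$ and the standard nilpotent matrices for $X_\pm$ (my own normalization):

```python
import numpy as np

# X1 generates the chosen U(1); its eigenvalues here are k_i/2 = ±1/2.
X1 = np.array([[0.5, 0.0], [0.0, -0.5]])
Xp = np.array([[0.0, 1.0], [0.0, 0.0]])   # raising operator X_+
Xm = np.array([[0.0, 0.0], [1.0, 0.0]])   # lowering operator X_-

comm = lambda a, b: a @ b - b @ a

# [X1, X_pm] = ±X_pm: if X1 v = λ v, then X1 (X_+ v) = (λ+1) X_+ v,
# so X_+ shifts an X1-eigenvalue up by 1 (or kills the vector).
assert np.allclose(comm(X1, Xp), Xp)
assert np.allclose(comm(X1, Xm), -Xm)
assert np.allclose(comm(Xp, Xm), 2 * X1)

# Explicitly: X_+ kills the top eigenvector and raises the bottom one.
v_top, v_bot = np.array([1.0, 0.0]), np.array([0.0, 1.0])
assert np.allclose(Xp @ v_top, 0)
assert np.allclose(Xp @ v_bot, v_top)
print("sl(2) commutation relations verified")
```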
This tells you exactly what irreps of $SU(2)$ have to look like: pick an eigenvector $v$ of $X_1$ whose eigenvalue $n/2$ is as large as possible (a "highest weight" vector), and repeatedly apply $X_-$. You'll generate a list of eigenvectors of $X_1$ with eigenvalues $-n/2, -n/2 + 1, \ldots, n/2-1, n/2$. None of this depended on which $X_1$ we started with, so in fact this sequence of numbers determines the irrep up to isomorphism. Moreover, any such sequence can be obtained from tensor powers of the fundamental representation (a direct calculation), so we're done.
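The highest-weight construction can be carried out explicitly in any dimension: start from the top eigenvector and let $X_-$ generate the rest of the basis. A sketch with my own (unnormalized) basis conventions:

```python
import numpy as np

def irrep(d):
    """Matrices X1, X_+, X_- of the d-dimensional irrep, where d = n + 1.
    Basis: v_0 (highest weight), v_1 = X_- v_0, v_2 = X_- v_1, ..."""
    n = d - 1
    X1 = np.diag([n / 2 - j for j in range(d)])   # eigenvalues n/2, ..., -n/2
    Xm = np.zeros((d, d))
    Xp = np.zeros((d, d))
    for j in range(n):
        Xm[j + 1, j] = 1.0                  # X_- v_j = v_{j+1}
        Xp[j, j + 1] = (j + 1) * (n - j)    # X_+ v_{j+1} = (j+1)(n-j) v_j
    return X1, Xp, Xm

comm = lambda a, b: a @ b - b @ a
for d in range(1, 7):
    X1, Xp, Xm = irrep(d)
    assert np.allclose(comm(X1, Xp), Xp)
    assert np.allclose(comm(X1, Xm), -Xm)
    assert np.allclose(comm(Xp, Xm), 2 * X1)
print("irreps of dimensions 1..6 constructed")
```

The coefficient $(j+1)(n-j)$ is forced by the commutation relation $[X_+, X_-] = 2X_1$, which is why the string of eigenvectors terminates at $-n/2$.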
If you believe all of this works for $SU(2)$, then $SO(3)$ is easy. Every $SO(3)$ irrep gives rise to an $SU(2)$ irrep via the double cover $SU(2) \to SO(3)$, so we have already constrained the list of possible $SO(3)$ representations dramatically. The list can be restricted further by noting that a representation of $SU(2)$ descends to a representation of $SO(3)$ if and only if $-1$ goes to the identity, and this holds precisely when the largest eigenvalue $n/2$ is an integer, i.e. when the dimension of the representation is odd.
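The parity criterion is immediate from the weights: by the $U(1)$ description above, $-1 \in SU(2)$ (the point $\theta = \pi$ on the chosen $U(1)$) acts on the weight-$k/2$ line by $e^{i\pi k} = (-1)^k$, and all the exponents $k$ share the parity of $n$. A sketch of the check, under that description of the action:

```python
def minus_one_action(d):
    """Eigenvalues of -1 in SU(2) on the d-dimensional irrep: (-1)^k on each
    weight line, where k runs over -(d-1), -(d-1)+2, ..., d-1."""
    return [(-1) ** abs(k) for k in range(-(d - 1), d, 2)]

for d in range(1, 8):
    descends = all(x == 1 for x in minus_one_action(d))
    # -1 acts trivially (so the irrep descends to SO(3)) exactly when d is odd
    assert descends == (d % 2 == 1)
print("an SU(2) irrep descends to SO(3) iff its dimension is odd")
```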