I am guessing that your question is: What is $\bigwedge^k (V \oplus W)$? (This is the wedge product, not the tensor product.)
And also the same question for $S^k = Sym^k$, the symmetric power.
Extended hint:
The thing to observe is that $V \oplus W$ has a natural basis: take a basis for $V$ together with a basis for $W$ (viewing both as subspaces of $V \oplus W$); their union is a basis for the direct sum.
Let's call our basis for $V$ $\{v_1, \ldots, v_n\}$ and our basis for $W$ $\{w_1, \ldots, w_m\}$.
Now, given a basis for a vector space $X$, there is a natural basis for $\bigwedge^k X$ and for $Sym^k X$:
Suppose that $\{x_1, \ldots, x_r\}$ is a basis for $X$.
1) Then a basis for $\bigwedge^k X$ is given by all the products $x_{i_1} \wedge \ldots \wedge x_{i_k}$ for all $i_1 < \ldots < i_k$, $i_j \in \{1, 2, \ldots, r\}$.
2) A natural basis for $Sym^k X$ is given similarly, only now you are allowed to take repeated vectors (so $<$ is replaced by $\leq$). Another description is the set of monomials of degree $k$ in $\mathbb{Z}[x_1, \ldots, x_r]$; a small worked case follows this list.
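For instance, with $r = 3$ and $k = 2$, the two recipes give
$$\bigwedge^2 X:\quad x_1 \wedge x_2,\ x_1 \wedge x_3,\ x_2 \wedge x_3 \qquad \Big(\dim = \tbinom{3}{2} = 3\Big),$$
$$Sym^2 X:\quad x_1^2,\ x_1 x_2,\ x_1 x_3,\ x_2^2,\ x_2 x_3,\ x_3^2 \qquad \Big(\dim = \tbinom{4}{2} = 6\Big).$$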
Finally: Given that $\{x_1, \ldots, x_r\} = \{v_1, \ldots, v_n, w_1, \ldots, w_m\}$ is a basis for $V \oplus W = X$, you can now play a combinatorial game to divide up $\bigwedge^k X$ into a direct sum of tensor products of smaller exterior powers of $V$ and $W$. Similarly for the symmetric power. Do you see how to proceed? Please feel free to ask if you have questions.
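For reference, sorting each basis wedge by how many of its factors come from $V$ should lead to the decompositions
$$\bigwedge^k (V \oplus W) \;\cong\; \bigoplus_{j=0}^{k} \Big(\bigwedge^{j} V\Big) \otimes \Big(\bigwedge^{k-j} W\Big), \qquad Sym^k (V \oplus W) \;\cong\; \bigoplus_{j=0}^{k} Sym^{j} V \otimes Sym^{k-j} W,$$
and the dimension count in the exterior case is exactly Vandermonde's identity, $\binom{n+m}{k} = \sum_{j=0}^{k} \binom{n}{j}\binom{m}{k-j}$.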
Best Answer
Take two subspaces $S,T$ of your vector space $V$. One can form the vector space $V'=S\times T$ consisting of pairs $(s,t)$ with coordinatewise addition and scalar multiplication. We can define a linear map $\eta:S\times T\to V$ that sends $(s,t)\mapsto s+t$. This has image $S+T$ and kernel $\{(x,-x):x\in S\cap T\}\simeq S\cap T$ (prove this!), so it is an isomorphism of vector spaces if, and only if, $S+T=V$ and $S\cap T=0$.

So we have two seemingly different concepts, the "external" and the "internal" direct sum, but essentially they are the same: in both cases we have a vector space $V$ and two subspaces $S,T$ such that $S\cap T=0$ and $S+T=V$. In the first case, $V=S\times T$, $S=\{(s,0):s\in S\}$ and $T=\{(0,t):t\in T\}$; in the second, $V$ is an "arbitrary" vector space and $S,T$ satisfy the conditions just mentioned.
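A minimal concrete instance of both pictures: take $V = \mathbb{R}^2$ with $S$ the $x$-axis and $T$ the $y$-axis, so $S \cap T = 0$ and $S + T = V$. Then
$$\eta\big((a,0),(0,b)\big) = (a,b), \qquad (a,b) = (a,0) + (0,b),$$
so $\eta$ is an isomorphism $S \times T \simeq V$, and every vector of $V$ splits into its $S$ and $T$ components.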
Essentially, this allows us to think of every vector of $V$ as decomposed, uniquely, into an $S$ component and a $T$ component, which is very useful for understanding vector spaces and their linear transformations. A great example is the Jordan canonical form, or the rational canonical form, both particular cases of the structure theorem for modules over a PID. It also allows us to work inductively on a problem about a finite dimensional vector space by chopping off a one dimensional subspace; consider, for example, the proof that every orthogonal transformation is a composition of rotations and reflections.
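The uniqueness here is the standard one-line check: if $s + t = s' + t'$ with $s, s' \in S$ and $t, t' \in T$, then
$$s - s' = t' - t \in S \cap T = 0,$$
so $s = s'$ and $t = t'$.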