In Lee's 'Intro to Smooth Manifolds', $\Lambda^k(V)$ refers to the space of alternating $k$-tensors on a vector space $V$, as you mentioned. However, the space $\Omega^k(M)$ is the space of smooth $k$-forms on a smooth manifold $M$. That is, an element $\omega \in \Omega^k(M)$ is a smooth map $M \to \Lambda^k(T^* M)$ (called a smooth section of the bundle $\Lambda^k(T^* M)$), so for each point $x \in M$ we get an alternating $k$-tensor $\omega(x) \in \Lambda^k(T_x^* M)$. This space is often written as $\Omega^k(M) = \Gamma(\Lambda^k(T^* M))$.
I'm not entirely sure, however, what $\Omega^k(V)$ means when $V$ is just a vector space.
I assume that $e_1,\dots,e_n$ is a basis of $V$ and that $e^1,\dots,e^n$ is the associated dual basis of $V^*$.
First, let's consider the case of arbitrary (not necessarily symmetric) tensors. We note that, by linearity,
$$
T(v^{(1)}, \dots, v^{(k)}) =
T\left( \sum_{i=1}^n v^{(1)}_i e_i, \dots, \sum_{i=1}^n v^{(k)}_i e_i \right) =
T\left( \sum_{i_1=1}^n v^{(1)}_{i_1} e_{i_1}, \dots, \sum_{i_k=1}^n v^{(k)}_{i_k} e_{i_k} \right) = \\
\sum_{i_1=1}^n \cdots \sum_{i_k=1}^n v^{(1)}_{i_1} \cdots v^{(k)}_{i_k} T\left(e_{i_1}, \dots, e_{i_k} \right)
$$
Now, define the tensor $\tilde T$ by
$$
\tilde T = \sum_{i_1=1}^n \cdots \sum_{i_k=1}^n T\left(e_{i_1}, \dots, e_{i_k} \right) e^{i_1} \otimes \cdots \otimes e^{i_k}
$$
Prove that $\tilde T(v^{(1)},\dots,v^{(k)}) = T(v^{(1)},\dots,v^{(k)})$ for any $v^{(1)},\dots,v^{(k)}$. That is, $\tilde T = T$. We've thus shown that any (not necessarily symmetric) $k$-tensor can be written as a linear combination of $e^{i_1} \otimes \cdots \otimes e^{i_k}$.
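As a quick numerical sketch of this identity, we can store the components $T(e_{i_1},\dots,e_{i_k})$ of a random $k$-tensor on $\Bbb R^n$ in a NumPy array and check that the expansion formula reproduces $T(v^{(1)},\dots,v^{(k)})$ (the helper names here are illustrative, not from the text):

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, k = 3, 2
# components T(e_{i_1}, ..., e_{i_k}) of an arbitrary k-tensor
T = rng.standard_normal((n,) * k)

def apply_tensor(T, vectors):
    """Evaluate T(v1, ..., vk) by contracting one slot at a time."""
    out = T
    for v in vectors:
        out = np.tensordot(out, v, axes=([0], [0]))
    return float(out)

# tilde T evaluated on arbitrary vectors: expand each v in the basis
# and sum components, exactly as in the displayed equation.
vs = [rng.standard_normal(n) for _ in range(k)]
tilde = sum(
    np.prod([vs[j][idx[j]] for j in range(k)]) * T[idx]
    for idx in itertools.product(range(n), repeat=k)
)
assert np.isclose(tilde, apply_tensor(T, vs))
```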
The same applies to symmetric tensors. However, if $T$ is symmetric, then
$$
T\left(e_{i_1}, \dots, e_{i_k} \right) =
T\left(e_{i_{\sigma(1)}}, \dots, e_{i_{\sigma(k)}} \right)
$$
for any permutation $\sigma \in S_k$ (note that $\sigma$ permutes the $k$ slots, not the index values). Thus, we may regroup the above sum as
$$
T = \tilde T = \sum_{i_1=1}^n \cdots \sum_{i_k=1}^n T\left(e_{i_1}, \dots, e_{i_k} \right) e^{i_1} \otimes \cdots \otimes e^{i_k} =
\\
\sum_{1 \leq i_1 \leq \cdots \leq i_k \leq n} \;
\frac 1{\alpha(i_1,\dots,i_k)}\sum_{\sigma \in S_k} T\left(e_{i_{\sigma(1)}}, \dots, e_{i_{\sigma(k)}} \right)
e^{i_{\sigma(1)}} \otimes \cdots \otimes e^{i_{\sigma(k)}} =
\\
\sum_{1 \leq i_1 \leq \cdots \leq i_k \leq n} \;
\frac 1{\alpha(i_1,\dots,i_k)}\sum_{\sigma \in S_k} T\left(e_{i_1}, \dots, e_{i_k} \right)
e^{i_{\sigma(1)}} \otimes \cdots \otimes e^{i_{\sigma(k)}} =
\\
\sum_{1 \leq i_1 \leq \cdots \leq i_k \leq n}
\frac 1{\alpha(i_1,\dots,i_k)}
T\left(e_{i_1}, \dots, e_{i_k} \right)
\underbrace{\sum_{\sigma \in S_k} e^{i_{\sigma(1)}} \otimes \cdots \otimes e^{i_{\sigma(k)}}}_{\text{basis element of } \operatorname{Sym}^k(V)}
$$
Thus, we have expressed $T$ as a linear combination of the desired basis elements.
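The regrouped sum can be checked numerically: build a random symmetric tensor, then reconstruct it from its components on nondecreasing index tuples, weighted by $1/\alpha$ and multiplied by the symmetrized basis tensors. This is a sketch for $k = 2$, $n = 3$ with NumPy (helper names are mine, not from the text):

```python
import itertools
from collections import Counter
from math import factorial

import numpy as np

n, k = 3, 2
rng = np.random.default_rng(1)
A = rng.standard_normal((n,) * k)
T = (A + A.T) / 2  # a random symmetric k-tensor, k = 2

def alpha(idx):
    """alpha(i_1,...,i_k) = m_1! * ... * m_n! (product over multiplicities)."""
    return np.prod([factorial(m) for m in Counter(idx).values()])

def basis_sym(idx):
    """Sum over sigma in S_k of e^{i_sigma(1)} x ... x e^{i_sigma(k)}."""
    B = np.zeros((n,) * k)
    for sigma in itertools.permutations(range(k)):
        B[tuple(idx[s] for s in sigma)] += 1.0
    return B

# sum over nondecreasing tuples 1 <= i_1 <= ... <= i_k <= n
recon = sum(
    T[idx] / alpha(idx) * basis_sym(idx)
    for idx in itertools.combinations_with_replacement(range(n), k)
)
assert np.allclose(recon, T)
```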
${\alpha(i_1,\dots,i_k)}$ counts the number of times each tuple $(i_{\sigma(1)},\dots,i_{\sigma(k)})$ appears in the summation over $\sigma \in S_k$. As the comment below points out, we have
$$
\alpha(i_1,\dots,i_k) = m_1! \cdots m_n!
$$
where $m_j$ is the multiplicity of $j \in \{1,\dots,n\}$ in the tuple $(i_1,\dots,i_k)$.
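This multiplicity formula is easy to sanity-check: among all $k!$ permutations of a tuple, each distinct rearrangement occurs exactly $m_1! \cdots m_n!$ times. A small illustrative check (the tuple chosen here is my own example):

```python
from collections import Counter
from itertools import permutations
from math import factorial

idx = (1, 1, 2, 3, 3, 3)  # multiplicities: 1 appears twice, 2 once, 3 three times

# alpha = product of m_j! over the multiplicities m_j
alpha = 1
for m in Counter(idx).values():
    alpha *= factorial(m)
# here alpha == 2! * 1! * 3! == 12

# each distinct rearrangement of idx occurs exactly alpha times
# among the k! = 6! permutations of its slots
arrangements = Counter(permutations(idx))
assert all(count == alpha for count in arrangements.values())
```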
Best Answer
Take in $\Bbb R^2$, for instance, any tensor represented by a matrix which is neither symmetric nor skew. Say, the matrix $$\begin{pmatrix} 1 & 0 \\ 1 & 0 \end{pmatrix}$$fits the bill and, with the convention $T_{ij} = T(e_i, e_j)$, provides $T = e^1\otimes e^1 + e^2\otimes e^1 = (e^1+e^2)\otimes e^1$, where $\{e_1,e_2\}$ is the standard basis and $\{e^1,e^2\}$ its dual basis.
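A quick sketch confirming both claims for this matrix, with $T_{ij} = T(e_i, e_j)$ stored as a NumPy array (an illustration of the example, not code from the answer):

```python
import numpy as np

T = np.array([[1.0, 0.0],
              [1.0, 0.0]])  # T_{ij} = T(e_i, e_j)

# neither symmetric nor skew-symmetric
assert not np.allclose(T, T.T) and not np.allclose(T, -T.T)

# rank-one factorization: T = (e^1 + e^2) (x) e^1
u = np.array([1.0, 1.0])  # coefficients of e^1 + e^2
v = np.array([1.0, 0.0])  # coefficients of e^1
assert np.allclose(T, np.outer(u, v))
```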