I think putting tensors in the right context will clear much of this up. I'll stick with $\mathbb R^3$ since that's the example you use. Rank $2$ tensors are elements of the so-called tensor product of $\mathbb R^3$ with itself, which is denoted by $\mathbb R^3 \otimes \mathbb R^3$. This space consists of all linear combinations of expressions of the form $u \otimes v$ under the stipulations that:
$$u\otimes(v + w) = u\otimes v + u\otimes w,$$
$$(u+v)\otimes w = u\otimes w + v\otimes w, \text{and}$$
$$u\otimes (cv) = c(u \otimes v) = (cu) \otimes v$$
where $u,v,w$ are vectors and $c$ is a scalar.
Taking the standard basis $e_1,e_2,e_3$ of $\mathbb R^3$, any rank $2$ tensor can then be written as a linear combination of the $9$ "pure" tensors $e_i \otimes e_j$ for $i,j = 1,2,3$. The $9$ scalars you take as coefficients in such a linear combination make up the $3 \times 3$ matrix which "represents" that rank $2$ tensor. A rank $3$ tensor, an element of the tensor product $\mathbb R^3 \otimes \mathbb R^3 \otimes \mathbb R^3$, would then consist of linear combinations of the $27$ pure tensors:
$$e_i \otimes e_j \otimes e_k$$
where $i,j,k=1,2,3$. The $27$ coefficients in such a linear combination make up the $3 \times 3 \times 3$ array you mention.
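If it helps to experiment, here's a minimal numpy sketch of this correspondence (my own illustration, not part of the construction above; `u`, `v`, `w` are arbitrary coefficient vectors):

```python
import numpy as np

# A pure tensor u ⊗ v: its coefficient matrix in the basis e_i ⊗ e_j
# is the outer product, with (i, j) entry u_i * v_j.
u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])
rank2 = np.outer(u, v)                    # 3x3 matrix of coefficients
assert rank2[0, 2] == u[0] * v[2]

# A pure rank-3 tensor u ⊗ v ⊗ w is a 3x3x3 array of coefficients.
w = np.array([7.0, 8.0, 9.0])
rank3 = np.einsum('i,j,k->ijk', u, v, w)  # 27 coefficients
assert rank3.shape == (3, 3, 3)
assert rank3[0, 1, 2] == u[0] * v[1] * w[2]
```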
Tensor multiplication is then just given by the good ol' distributive property. For instance, the product of the rank $1$ tensor $2e_1+ 3e_2$ and the rank $2$ tensor $-2(e_1 \otimes e_2) + 2(e_2 \otimes e_3)$ is:
$$[2e_1 + 3e_2] \otimes [-2(e_1 \otimes e_2) + 2(e_2 \otimes e_3)] = \\
-4(e_1 \otimes e_1 \otimes e_2)+4(e_1\otimes e_2 \otimes e_3)-6(e_2 \otimes e_1\otimes e_2)+6(e_2 \otimes e_2\otimes e_3).$$
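You can verify this expansion numerically with a short numpy sketch (my own check; the vector `a` and matrix `B` hold the coefficients of the two tensors above, with 0-based indices):

```python
import numpy as np

# Rank-1 tensor 2 e1 + 3 e2 as a coefficient vector.
a = np.array([2.0, 3.0, 0.0])

# Rank-2 tensor -2 (e1 ⊗ e2) + 2 (e2 ⊗ e3) as a coefficient matrix.
B = np.zeros((3, 3))
B[0, 1] = -2.0
B[1, 2] = 2.0

# Their tensor product is the 3x3x3 array with entries a_i * B_jk.
P = np.einsum('i,jk->ijk', a, B)

# The four nonzero coefficients match the distributive expansion
# (0-based indices, so e1 ⊗ e1 ⊗ e2 is P[0, 0, 1], etc.).
assert P[0, 0, 1] == -4.0   # -4 (e1 ⊗ e1 ⊗ e2)
assert P[0, 1, 2] == 4.0    #  4 (e1 ⊗ e2 ⊗ e3)
assert P[1, 0, 1] == -6.0   # -6 (e2 ⊗ e1 ⊗ e2)
assert P[1, 1, 2] == 6.0    #  6 (e2 ⊗ e2 ⊗ e3)
assert np.count_nonzero(P) == 4
```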
For two rank $1$ tensors
$$ae_1+be_2+ce_3 \text{ and } xe_1+ye_2+ze_3,$$
tensor multiplication gives a rank $2$ tensor whose coefficient matrix (i.e. the matrix whose entries are the coefficients of the $e_i \otimes e_j$ terms) is the product of the matrices
$$\begin{pmatrix}a\\b\\c\end{pmatrix} \text{ and } \begin{pmatrix}x&y&z\end{pmatrix},$$
as you alluded to in your question. However, in general there is no simple relation between tensor multiplication and matrix multiplication.
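For a quick numerical confirmation of the column-times-row observation (my own sketch; the coefficient values are arbitrary):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])   # a e1 + b e2 + c e3
x = np.array([4.0, 5.0, 6.0])   # x e1 + y e2 + z e3

# Coefficient matrix of the tensor product of the two rank-1 tensors ...
tensor = np.outer(a, x)

# ... equals the matrix product of a column with a row.
column_times_row = a.reshape(3, 1) @ x.reshape(1, 3)
assert np.array_equal(tensor, column_times_row)
```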
I assume that $e_1,\dots,e_n$ is a basis of $V$ and that $e^1,\dots,e^n$ is the associated dual basis of $V^*$.
First, let's consider the case of arbitrary (not necessarily symmetric) tensors. We note that, by linearity,
$$
T(v^{(1)}, \dots, v^{(k)}) =
T\left( \sum_{i=1}^n v^{(1)}_i e_i, \dots, \sum_{i=1}^n v^{(k)}_i e_i \right) =
T\left( \sum_{i_1=1}^n v^{(1)}_{i_1} e_{i_1}, \dots, \sum_{i_k=1}^n v^{(k)}_{i_k} e_{i_k} \right) = \\
\sum_{i_1=1}^n \cdots \sum_{i_k=1}^n v^{(1)}_{i_1} \cdots v^{(k)}_{i_k} T\left(e_{i_1}, \dots, e_{i_k} \right)
$$
Now, define the tensor $\tilde T$ by
$$
\tilde T = \sum_{i_1=1}^n \cdots \sum_{i_k=1}^n T\left(e_{i_1}, \dots, e_{i_k} \right) e^{i_1} \otimes \cdots \otimes e^{i_k}
$$
One can check, using multilinearity and the fact that $e^i(e_j) = \delta^i_j$, that $\tilde T(v^{(1)},\dots,v^{(k)}) = T(v^{(1)},\dots,v^{(k)})$ for all $v^{(1)},\dots,v^{(k)}$; that is, $\tilde T = T$. We've thus shown that any (not necessarily symmetric) $k$-tensor can be written as a linear combination of the tensors $e^{i_1} \otimes \cdots \otimes e^{i_k}$.
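To see this concretely, here's a quick numpy sanity check for the case $k = 2$, $n = 3$ (my own sketch; the bilinear form `T` and matrix `M` are arbitrary choices for illustration):

```python
import numpy as np

# Take k = 2, n = 3: a bilinear form T(u, v) = u^T M v for some matrix M.
rng = np.random.default_rng(0)
M = rng.normal(size=(3, 3))
T = lambda u, v: u @ M @ v

# Coefficients T(e_i, e_j) of T-tilde: evaluate T on the standard basis.
E = np.eye(3)
coeffs = np.array([[T(E[i], E[j]) for j in range(3)] for i in range(3)])

# T-tilde(u, v) = sum_ij coeffs[i, j] * u_i * v_j = u^T coeffs v,
# which agrees with T on arbitrary inputs.
u, v = rng.normal(size=3), rng.normal(size=3)
assert np.isclose(T(u, v), u @ coeffs @ v)
```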
The same applies to symmetric tensors. However, if $T$ is symmetric, then
$$
T\left(e_{i_1}, \dots, e_{i_k} \right) =
T\left(e_{i_{\sigma(1)}}, \dots, e_{i_{\sigma(k)}} \right)
$$
for any permutation $\sigma \in S_k$.
for any permutation $\sigma$. Thus, we may regroup the above sum as
$$
T = \tilde T = \sum_{i_1=1}^n \cdots \sum_{i_k=1}^n T\left(e_{i_1}, \dots, e_{i_k} \right) e^{i_1} \otimes \cdots \otimes e^{i_k} =
\\
\sum_{1 \leq i_1 \leq \cdots \leq i_k \leq n} \;
\frac 1{\alpha(i_1,\dots,i_k)}\sum_{\sigma \in S_k} T\left(e_{i_{\sigma(1)}}, \dots, e_{i_{\sigma(k)}} \right)
e^{i_{\sigma(1)}} \otimes \cdots \otimes e^{i_{\sigma(k)}} =
\\
\sum_{1 \leq i_1 \leq \cdots \leq i_k \leq n} \;
\frac 1{\alpha(i_1,\dots,i_k)}\sum_{\sigma \in S_k} T\left(e_{i_1}, \dots, e_{i_k} \right)
e^{i_{\sigma(1)}} \otimes \cdots \otimes e^{i_{\sigma(k)}} =
\\
\sum_{1 \leq i_1 \leq \cdots \leq i_k \leq n}
\frac 1{\alpha(i_1,\dots,i_k)}
T\left(e_{i_1}, \dots, e_{i_k} \right)
\underbrace{\sum_{\sigma \in S_k} e^{i_{\sigma(1)}} \otimes \cdots \otimes e^{i_{\sigma(k)}}}_{\text{basis element for } \operatorname{Sym}^k(V)}
$$
Thus, we have expressed $T$ as a linear combination of the desired basis elements.
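As a concrete check of this regrouping, here's a sketch of my own for $k = 2$, $n = 3$, with the symmetric coefficient array `S` standing in for the values $T(e_i, e_j)$:

```python
import numpy as np
from itertools import permutations, combinations_with_replacement
from math import factorial
from collections import Counter

n, k = 3, 2
rng = np.random.default_rng(1)
S = rng.normal(size=(n, n))
S = S + S.T                      # coefficient array of a symmetric 2-tensor

# Rebuild S from the symmetrized basis elements with 1/alpha weights.
rebuilt = np.zeros((n, n))
for idx in combinations_with_replacement(range(n), k):   # i1 <= ... <= ik
    alpha = np.prod([factorial(m) for m in Counter(idx).values()])
    for sigma in permutations(range(k)):
        term = np.zeros((n, n))
        term[idx[sigma[0]], idx[sigma[1]]] = 1.0         # e^{i_sigma(1)} ⊗ e^{i_sigma(2)}
        rebuilt += (S[idx] / alpha) * term

assert np.allclose(rebuilt, S)   # the regrouped sum recovers T
```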
$\alpha(i_1,\dots,i_k)$ counts the number of times any given arrangement $(i_{\sigma(1)},\dots,i_{\sigma(k)})$ appears in the summation over $\sigma \in S_k$. Explicitly,
$$
\alpha(i_1,\dots,i_k) = m_1! \cdots m_n!
$$
where $m_j$ is the multiplicity of $j \in \{1,\dots,n\}$ in the tuple $(i_1,\dots,i_k)$.
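For example, for $k = 3$ and $(i_1,i_2,i_3) = (1,1,2)$ we have $m_1 = 2$ and $m_2 = 1$, so $\alpha = 2!\,1! = 2$: each of the three distinct arrangements $(1,1,2)$, $(1,2,1)$, $(2,1,1)$ occurs exactly twice among the $3! = 6$ permutations $\sigma \in S_3$.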
(Why are you saying that a rank-2 tensor is always symmetric? That's certainly not true. Maybe you are not referring to symmetry in the sense of $A_{ij} = A_{ji}$?)
The choice to write a matrix as rows and columns, or to think of a rank-3 tensor as a cube, is purely notational; nothing about the objects themselves requires it.
When thinking of a matrix as a linear transformation $U \rightarrow V$, it is an element of $U^* \otimes V$, where $U^*$ is the dual space of $U$; this is the usual meaning of a mixed index placement like $A_i{}^j$. But sometimes you find yourself with a rank-2 tensor in $U \otimes V$ or $U^* \otimes V^*$, where neither or both factors are dual spaces, and I find these more natural to think of as 'columns of columns' or 'rows of rows', like this:
$$A_{ij} = \big( (A_{11}, A_{12}, A_{13}), (A_{21}, A_{22}, A_{23}), (A_{31}, A_{32}, A_{33}) \big)$$
This lets one maintain the notation that an inner product is always a contraction of a row with a column.
Anyway, a mental model for a rank-3 tensor could be a cube, or a matrix whose entries are themselves rows or columns, or a column of columns of columns. Whatever you want. Once you're dealing with $>2$ dimensions, it tends to be a lot easier to use Einstein notation (https://en.wikipedia.org/wiki/Einstein_notation) rather than trying to figure out how to write the object out as a matrix-like thing. If you really want a visualization, though, I suggest a matrix whose components are columns or rows, like this:
$$A_{ij}^k = \begin{pmatrix} \begin{pmatrix} A_{11}^1 & A_{12}^1 \end{pmatrix} & \begin{pmatrix} A_{21}^1 & A_{22}^1 \end{pmatrix} \\ \begin{pmatrix} A_{11}^2 & A_{12}^2 \end{pmatrix} & \begin{pmatrix} A_{21}^2 & A_{22}^2 \end{pmatrix} \\ \end{pmatrix}$$
Easier to write. You have to be careful when multiplying it with anything to keep track of which index is which -- $A_{ij}^k v^i \neq A_{ij}^k v^j$, as the former multiplies the outer dimension but the latter multiplies the inner one.
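If you want to see that distinction concretely, here's a small numpy/einsum check (my own sketch; storing the upper index $k$ first in the array is an arbitrary convention):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(2, 2, 2))   # A[k, i, j]: upper index k, then lower i, j
v = rng.normal(size=2)

# Contracting v against the i index vs. the j index gives different tensors.
against_i = np.einsum('kij,i->kj', A, v)   # A_{ij}^k v^i
against_j = np.einsum('kij,j->ki', A, v)   # A_{ij}^k v^j
assert not np.allclose(against_i, against_j)
```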