Let $B$ and $B'$ be two bases of $E$, and denote by $A$ the change-of-basis matrix from $B$ to $B'$. First, there are the induced dual bases on $E^\ast = T_1(E)$, denoted $B^\ast$ and $B'^\ast$. The change-of-basis matrix from $B^\ast$ to $B'^\ast$ is $A^\ast := {}^t A^{-1}$, as you can easily check.
Now you have induced bases $\tilde{B}$ and $\widetilde{B'}$ on $T_2(E) := E^\ast \otimes E^\ast$, in your notation. The change-of-basis matrix from $\tilde{B}$ to $\widetilde{B'}$ is the Kronecker product of $A^\ast$ with itself.
Look at the Wikipedia page if you don't know about Kronecker products.
I think that the best way to understand this stuff is to learn tensor products of vector spaces and not only tensor products of maps.
Edit: In order to prove these things, you have to write the change-of-basis matrices carefully. Write $B = (e_1,\dots,e_n)$ and $B' = (f_1,\dots,f_n)$. By definition of $A$, $f_i = \sum_{j=1}^n A_i^j e_j$ (you can also adopt the other convention). Now you want the matrix from $B^\ast$ to $B'^\ast$. This matrix $C$ should satisfy $f_i^\ast = \sum_{j=1}^n C_i^j e_j^\ast$. Evaluating at $f_k$, you find $\delta_i^k = \sum_{j=1}^n C_i^j e_j^\ast(f_k)$. But $f_k = \sum_{l=1}^n A_k^l e_l$, so $e_j^\ast(f_k) = A_k^j$ and hence $\delta_i^k = \sum_{j=1}^n C_i^j A^j_k$. These equations are satisfied by $C = {}^t A^{-1}$, which proves the first part.
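As a quick sanity check, here is a small numerical verification (a sketch in Python with NumPy, using the convention above that row $i$ of $A$ holds the coordinates of $f_i$) that $C = {}^t A^{-1}$ satisfies the defining equations $\delta_i^k = \sum_j C_i^j A_k^j$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Rows of A give the new basis in old coordinates: f_i = sum_j A[i, j] e_j
A = rng.normal(size=(n, n))
assert abs(np.linalg.det(A)) > 1e-9  # A must be invertible

# Claimed change-of-basis matrix for the dual bases: C = transpose of A^{-1}
C = np.linalg.inv(A).T

# f_i^* = sum_j C[i, j] e_j^*; evaluating on f_k = sum_l A[k, l] e_l gives
# f_i^*(f_k) = sum_j C[i, j] A[k, j] = (C @ A.T)[i, k], which must be delta_ik.
print(np.allclose(C @ A.T, np.eye(n)))  # True
```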
For the tensor product, you have to understand how to write $f_{j_1}^\ast \otimes f_{j_2}^\ast$ in terms of the $e_{i_1}^\ast \otimes e_{i_2}^\ast$. We have $f_{j_1}^\ast \otimes f_{j_2}^\ast = \left(\sum_{i_1} C_{j_1}^{i_1} e_{i_1}^\ast\right) \otimes \left(\sum_{i_2} C_{j_2}^{i_2} e_{i_2}^\ast\right) = \sum_{i_1, i_2} C_{j_1}^{i_1} C_{j_2}^{i_2}\, e_{i_1}^\ast \otimes e_{i_2}^\ast$.
Compare these coefficients with the definition of the Kronecker product; you find the same thing (with a suitable ordering of the basis of $T_2(E)$).
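The Kronecker product claim can also be checked numerically (a sketch assuming the basis of $T_2(E)$ is ordered so that the pair $(j_1, j_2)$ becomes row $j_1 n + j_2$):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
A = rng.normal(size=(n, n))      # f_i = sum_j A[i, j] e_j

# Dual basis in old dual coordinates: row Phi[j] satisfies f^j(f_i) = delta_ij,
# i.e. Phi @ A.T = I, so Phi = transpose of A^{-1}  (this is C from the text).
Phi = np.linalg.inv(A).T

# Claimed change-of-basis matrix on T_2(E): the Kronecker product Phi (x) Phi.
K = np.kron(Phi, Phi)

# The components of f^{j1} (x) f^{j2} in the e^{i1} (x) e^{i2} basis are
# C[j1, i1] * C[j2, i2], i.e. the flattened outer product of rows of Phi.
for j1 in range(n):
    for j2 in range(n):
        assert np.allclose(K[j1 * n + j2], np.outer(Phi[j1], Phi[j2]).ravel())
print("Kronecker product matches")
```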
I assume that $e_1,\dots,e_n$ is a basis of $V$ and that $e^1,\dots,e^n$ is the associated dual basis of $V^*$.
First, let's consider the case of arbitrary (not necessarily symmetric) tensors. We note that, by linearity,
$$
T(v^{(1)}, \dots, v^{(k)}) =
T\left( \sum_{i=1}^n v^{(1)}_i e_i, \dots, \sum_{i=1}^n v^{(k)}_i e_i \right) =
T\left( \sum_{i_1=1}^n v^{(1)}_{i_1} e_{i_1}, \dots, \sum_{i_k=1}^n v^{(k)}_{i_k} e_{i_k} \right) = \\
\sum_{i_1=1}^n \cdots \sum_{i_k=1}^n v^{(1)}_{i_1} \cdots v^{(k)}_{i_k} T\left(e_{i_1}, \dots, e_{i_k} \right)
$$
Now, define the tensor $\tilde T$ by
$$
\tilde T = \sum_{i_1=1}^n \cdots \sum_{i_k=1}^n T\left(e_{i_1}, \dots, e_{i_k} \right) e^{i_1} \otimes \cdots \otimes e^{i_k}
$$
Prove that $\tilde T(v^{(1)},\dots,v^{(k)}) = T(v^{(1)},\dots,v^{(k)})$ for any $v^{(1)},\dots,v^{(k)}$. That is, $\tilde T = T$. We've thus shown that any (not necessarily symmetric) $k$-tensor can be written as a linear combination of $e^{i_1} \otimes \cdots \otimes e^{i_k}$.
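The identity $\tilde T = T$ is easy to verify numerically for a concrete case (a sketch in Python with NumPy for $n = k = 3$; any $k$-linear map is encoded by the array of its values on basis tuples):

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 3, 3

# An arbitrary k-linear map on R^n, encoded by its values on basis tuples:
# M[i1, ..., ik] = T(e_{i1}, ..., e_{ik}).
M = rng.normal(size=(n,) * k)

def T(*vs):
    # Multilinear map: fully contract M against the coordinate vectors.
    out = M
    for v in vs:
        out = np.tensordot(v, out, axes=(0, 0))
    return float(out)

# tilde_T has components T(e_{i1}, ..., e_{ik}), which is exactly M, so
# tilde_T(v1, ..., vk) = sum M[i1..ik] v1[i1] ... vk[ik] must equal T(v1, ..., vk).
vs = [rng.normal(size=n) for _ in range(k)]
tilde = np.einsum('ijk,i,j,k->', M, *vs)   # the sum above, for k = 3
print(np.isclose(tilde, T(*vs)))  # True
```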
The same applies to symmetric tensors. However, if $T$ is symmetric, then
$$
T\left(e_{i_1}, \dots, e_{i_k} \right) =
T\left(e_{i_{\sigma(1)}}, \dots, e_{i_{\sigma(k)}} \right)
$$
for any permutation $\sigma \in S_k$. Thus, we may regroup the above sum as
$$
T = \tilde T = \sum_{i_1=1}^n \cdots \sum_{i_k=1}^n T\left(e_{i_1}, \dots, e_{i_k} \right) e^{i_1} \otimes \cdots \otimes e^{i_k} =
\\
\sum_{1 \leq i_1 \leq \cdots \leq i_k \leq n} \;
\frac 1{\alpha(i_1,\dots,i_k)}\sum_{\sigma \in S_k} T\left(e_{i_{\sigma(1)}}, \dots, e_{i_{\sigma(k)}} \right)
e^{i_{\sigma(1)}} \otimes \cdots \otimes e^{i_{\sigma(k)}} =
\\
\sum_{1 \leq i_1 \leq \cdots \leq i_k \leq n} \;
\frac 1{\alpha(i_1,\dots,i_k)}\sum_{\sigma \in S_k} T\left(e_{i_1}, \dots, e_{i_k} \right)
e^{i_{\sigma(1)}} \otimes \cdots \otimes e^{i_{\sigma(k)}} =
\\
\sum_{1 \leq i_1 \leq \cdots \leq i_k \leq n}
\frac 1{\alpha(i_1,\dots,i_k)}
T\left(e_{i_1}, \dots, e_{i_k} \right)
\underbrace{\sum_{\sigma \in S_k} e^{i_{\sigma(1)}} \otimes \cdots \otimes e^{i_{\sigma(k)}}}_{\text{basis element for } Sym^k(V)}
$$
Thus, we have expressed $T$ as a linear combination of the desired basis elements.
Here ${\alpha(i_1,\dots,i_k)}$ counts the number of times any given tuple $(i_{\sigma(1)},\dots,i_{\sigma(k)})$ appears in the summation over $\sigma \in S_k$. As the comment below points out, we have
$$
\alpha(i_1,\dots,i_k) = m_1! \cdots m_n!
$$
where $m_j$ is the multiplicity of $j \in \{1,\dots,n\}$ in the tuple $(i_1,\dots,i_k)$.
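This counting formula is easy to confirm for a concrete index tuple (a sketch in Python; the tuple `idx` is just an example with a repeated index):

```python
import itertools
import math
from collections import Counter

n, k = 3, 4
idx = (1, 1, 2, 3)   # an example nondecreasing index tuple with a repeat

# All rearrangements (i_{sigma(1)}, ..., i_{sigma(k)}) for sigma in S_k,
# with how many permutations produce each one.
arrangements = Counter(tuple(idx[s] for s in sigma)
                       for sigma in itertools.permutations(range(k)))

# Every distinct rearrangement occurs the same number of times, namely
# alpha = m_1! * ... * m_n!, where m_j is the multiplicity of j in idx.
alpha = math.prod(math.factorial(idx.count(j)) for j in range(1, n + 1))
print(set(arrangements.values()) == {alpha})  # True
print(alpha)  # 2  (m_1 = 2, m_2 = m_3 = 1)
```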
Best Answer
Let's apply the definitions a bit more carefully. Define $v_1,\dots,v_p$ to be the vectors $e_{i_1},\dots,e_{i_p}$. In other words: for $1 \leq k \leq p$, $v_k = e_{i_k}$. We then have $$ \begin{align} (\sigma T)_{i_1,\dots,i_p}&=(\sigma T)(e_{i_1},\dots,e_{i_p}) = (\sigma T)(v_1,\dots,v_p) = T(v_{\sigma(1)},\dots,v_{\sigma(p)}) \\ & = T(e_{i_{\sigma(1)}},\dots,e_{i_{\sigma(p)}}) = T_{i_{\sigma(1)},\dots,i_{\sigma(p)}}. \end{align} $$
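The component identity at the end can be verified numerically by evaluating $\sigma T$ on basis vectors (a sketch in Python with NumPy for $n = p = 3$; the permutation is written 0-indexed):

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)
n, p = 3, 3
M = rng.normal(size=(n,) * p)   # components: M[i1, ..., ip] = T(e_{i1}, ..., e_{ip})

def T(*vs):
    # Multilinear map: fully contract M against the coordinate vectors.
    out = M
    for v in vs:
        out = np.tensordot(v, out, axes=(0, 0))
    return float(out)

sigma = (2, 0, 1)               # a permutation of the slots {0, 1, 2}

def sigma_T(*vs):
    # (sigma T)(v_1, ..., v_p) = T(v_{sigma(1)}, ..., v_{sigma(p)})
    return T(*(vs[s] for s in sigma))

# Check (sigma T)_{i_1, ..., i_p} = T_{i_{sigma(1)}, ..., i_{sigma(p)}}
# on every tuple of basis vectors.
e = np.eye(n)
ok = all(np.isclose(sigma_T(*(e[i] for i in idx)),
                    M[tuple(idx[s] for s in sigma)])
         for idx in itertools.product(range(n), repeat=p))
print(ok)  # True
```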