Let $X$ be the square matrix all of whose entries are $1$.
(Is there canonical notation for this?)
This is a symmetric matrix.
$\DeclareMathOperator{\tr}{tr}$
The sum of elements of $A$ is $s(A)=\tr(AX)$.
For symmetric $A$ and $B$ the sum of elements in the product is
$$
s(AB)=\tr(ABX)=\tr((ABX)^T)=\tr(X^TB^TA^T)=\tr(XBA)=\tr(BAX)=s(BA).
$$
You already knew this, but I wanted to offer a new point of view on the fact.
Similarly, it's easy to check that $s(A^T)=s(A)$ for any matrix $A$.
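Assuming NumPy is available, these identities are easy to sanity-check numerically (a sketch; `s` here is just the entry sum):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
X = np.ones((n, n))  # the all-ones matrix X

# Build random symmetric matrices A and B
M = rng.standard_normal((n, n))
A = M + M.T
M = rng.standard_normal((n, n))
B = M + M.T

def s(A):
    """Sum of all entries of A."""
    return A.sum()

# s(A) = tr(AX) holds for any matrix A
assert np.isclose(s(A), np.trace(A @ X))
# s(A^T) = s(A)
assert np.isclose(s(A.T), s(A))
# For symmetric A, B: s(AB) = s(BA)
assert np.isclose(s(A @ B), s(B @ A))
```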
Given the commutativity properties of the trace — and more importantly, lack thereof — I don't think there will be a nice identity to allow you to treat arbitrary permutations nicely.
Matrices don't commute with $X$ in general.
It's hard to prove non-existence of useful things to say, but perhaps the trace helps clarify your thoughts.
For instance, in your example you can take $A=\begin{pmatrix}0&1\\0&0\end{pmatrix}$ and $B=X$.
Then all terms containing $A^2$ vanish (since $A^2=0$), and $Symm(A^3B^2)=ABABA=A$.
This $A$ is not symmetric; the point is just to emphasize that the order of matrices can have big effects (also on trace: typically $\tr(ABC)\neq\tr(BAC)$), and this is true with or without symmetry.
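A quick numerical check of this example (assuming NumPy):

```python
import numpy as np

A = np.array([[0., 1.], [0., 0.]])
X = np.ones((2, 2))  # B = X, the all-ones matrix

assert np.allclose(A @ A, 0)              # A^2 = 0, so any word containing AA vanishes
assert np.allclose(A @ X @ A @ X @ A, A)  # the surviving word ABABA equals A

# The order matters for the entry sum:
print((A @ A @ A @ X @ X).sum(), (A @ X @ A @ X @ A).sum())  # 0.0 1.0
```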
In general, the order of the matrices will have a significant effect on the sum of elements of the product, but it's hard to say more than that.
If you have a more specific question, I can try to think of a specific answer.
One can also prove this using non-elementary tools (which do not rely on $\pi$ or geometry!), in the following way.
You already checked that $|c(x)| =1$ for all $x\in\mathbb R$. Now let $S=\{z\in S^1\mid z\in c(\mathbb R)\}$.
The plan is to prove that $S$ is open and closed in $S^1$, and that $S^1$ is connected.
First, to show that $S$ is open: this is where I would use differential geometry, although there are probably more elementary solutions.
Indeed, one can check from its defining equation that $S^1$ is a $1$-dimensional submanifold of $\mathbb R^2$. Computing the differential of $c$ and using the fact that $\exp$ never vanishes, one sees that this differential is nonzero (therefore invertible, because we are in dimension $1$), so $c$ is a local diffeomorphism. It follows that $c$ has an open image.
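To spell out the differential computation (a sketch, assuming as elsewhere in this answer that $c'=ic$):

$$
dc_x(v)=v\,c'(x)=iv\,c(x),\qquad |c'(x)|=|i|\,|c(x)|=1\neq 0,
$$

so $dc_x\colon\mathbb R\to T_{c(x)}S^1$ is a nonzero linear map between $1$-dimensional spaces, hence invertible.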
Now, to show that the image is closed: this is where I would use differential calculus, although again there are probably more elementary solutions.
Indeed, $c$ satisfies the differential equation $(E)\,\, y'=iy$ for functions $\mathbb R\to\mathbb C$. Moreover, for any $z\in S^1$, considering $zc$ shows that $(E)$ has a solution with $z$ in its image.
Let $U:=zc(\mathbb R)$, which is an open neighbourhood of $z$. If $z\in\overline{S}$, then there is $y\in\mathbb R$ with $c(y)\in U$, say $c(y)=zc(x)$ for some $x\in\mathbb R$. By uniqueness of solutions of first-order linear differential equations with a given initial condition, $t\mapsto zc(t+x-y)$ and $c$ agree on a neighbourhood of $y$, so we can glue them to obtain a solution $d$ of $(E)$ that agrees with $c$ at $0$ and has $z$ in its image.
But that solution must be $c$! Therefore $z$ is in the image of $c$, so $S$ is closed.
Therefore $S$ is clopen in $S^1$.
Finally, to show that $S^1$ is connected, note that the usual retraction from algebraic topology $\mathbb R^2\setminus \{0\}\to S^1$, $x\mapsto \frac{x}{||x||}$, and the proof of its continuity don't rely in any way on $\pi$. It's quite easy to show that $\mathbb R^2\setminus\{0\}$ is path-connected, therefore so is $S^1$.
Since $1=c(0)\in S$, it follows that $S=S^1$, so $c:\mathbb R\to S^1$ is surjective. Let $x$ be such that $c(x) = -1$ (in complex notation). Then $c(2x) = c(x)^2 = 1$ and $2x\neq 0$, so $c$ is $2x$-periodic; now by standard arguments we find a smallest period $T$, which we call $2\pi$. And the rest, as they say, is history.
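The step $c(2x)=1$ uses the addition law for $c$, which follows from the same uniqueness argument as above: for fixed $s$, both $t\mapsto c(s+t)$ and $t\mapsto c(s)c(t)$ solve $(E)$ with value $c(s)$ at $t=0$, hence

$$
c(s+t)=c(s)c(t)\quad\text{for all }s,t\in\mathbb R,\qquad\text{so}\quad c(2x)=c(x)^2=(-1)^2=1.
$$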
This solution has the disadvantage of using heavier artillery (differential calculus, differential geometry), but the end result is more conceptual and somewhat cleaner than the alternative computational solution offered by LutzL (which, to be honest, I didn't read in detail); their solution is also instructive, in that it is quite elementary and readable with less background.
It is very likely that the "local diffeomorphism" bit of the argument can be made more elementary, so the "$S$ is open" bit should be doable elementarily. I'm less certain about the "$S$ is closed" bit. As to connectedness of $S^1$, I mentioned algebraic topology, but in fact of course the argument is completely elementary.
Proposition: Let $E=\{A\in M_n(\mathbb{C})\mid A\text{ has algebraic entries}\}$. Then the exponential map is injective on $E$.
Proof: Assume that $e^A=e^B$, where $A,B$ have algebraic entries. According to Wermuth, "Two remarks on matrix exponentials" (in free access at http://www.sciencedirect.com/science/article/pii/0024379589905545), $AB=BA$. Thus $A,B$ are simultaneously triangularizable over $\mathbb{C}$, with diagonals $(\lambda_j)_j,(\mu_j)_j$. Necessarily $e^{\lambda_j}=e^{\mu_j}$, that is, $\lambda_j=\mu_j+2k_j\pi i$ for some $k_j\in\mathbb Z$. Since $\lambda_j$ and $\mu_j$ are algebraic and $\pi i$ is transcendental, this forces $k_j=0$, and therefore $\lambda_j=\mu_j$.
$A$ is similar over $\mathbb{C}$ to a matrix of the form $A_1=\operatorname{diag}(\lambda_1I_{i_1}+N_1,\cdots)$, where the $\lambda_j$ are distinct and the $N_j$ are nilpotent; let $B_1$ be the image of $B$ under the same change of basis.
EDIT: Since $AB=BA$, $B_1$ has the form $B_1=\operatorname{diag}(U_1,\cdots)$ with $U_jN_j=N_jU_j$. The previous reasoning and $e^{U_j}=e^{\lambda_j}e^{N_j}$ imply that $U_j$ has $\lambda_j$ as its sole eigenvalue. Then $B_1=\operatorname{diag}(\lambda_1 I_{i_1}+ M_1,\cdots)$ with $M_j$ nilpotent and $N_jM_j=M_jN_j$;
in particular, $M_j-N_j$ is nilpotent. Moreover, $e^{\lambda_jI_{i_j}+N_j}=e^{\lambda_jI_{i_j}+M_j}$, that is, $e^{N_j-M_j}=I_{i_j}$. The exponential map is a bijection between nilpotent matrices and unipotent ones; therefore $N_j-M_j=0$, so $A_1=B_1$ and finally $A=B$.
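To see where the algebraicity hypothesis matters, here is a quick numerical sketch (assuming NumPy and SciPy are available): the matrices $0$ and $\operatorname{diag}(2\pi i,-2\pi i)$ have equal exponentials, but their entries are transcendental, so the proposition is not contradicted.

```python
import numpy as np
from scipy.linalg import expm

# Two distinct matrices with equal exponentials; their nonzero entries
# (+-2*pi*i) are transcendental, so injectivity on E is not violated.
A = np.zeros((2, 2), dtype=complex)
B = np.diag([2j * np.pi, -2j * np.pi])

assert np.allclose(expm(A), np.eye(2))
assert np.allclose(expm(B), np.eye(2))  # e^{2*pi*i} = e^{-2*pi*i} = 1
assert not np.allclose(A, B)
```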