Define some variables for convenience:
$$\eqalign{
P &= {\rm Diag}(\beta) \cr
B &= cP^{-1} \cr
b &= {\rm diag}(B) \cr
S &= A+B \cr
M &= AS^{-1}A \cr
}$$
all of which are symmetric matrices (given that $A$ is symmetric), except for $b$, which is a vector.
Then the function and its differential can be expressed in terms of the Frobenius (:) product as
$$\eqalign{
f &= {\rm tr}(M) \cr
&= A^2 : S^{-1} \cr\cr
df &= A^2 : dS^{-1} \cr
&= -A^2 : S^{-1}\,dS\,S^{-1} \cr
&= -S^{-1}A^2S^{-1} : dS \cr
&= -S^{-1}A^2S^{-1} : dB \cr
&= -S^{-1}A^2S^{-1} : c\,dP^{-1} \cr
&= c\,S^{-1}A^2S^{-1} : P^{-1}\,dP\,P^{-1} \cr
&= c\,P^{-1}S^{-1}A^2S^{-1}P^{-1} : dP \cr
&= c\,P^{-1}S^{-1}A^2S^{-1}P^{-1} : {\rm Diag}(d\beta) \cr
&= {\rm diag}\big(c\,P^{-1}S^{-1}A^2S^{-1}P^{-1}\big)^T d\beta \cr
}$$
So the derivative is
$$\eqalign{
\frac{\partial f}{\partial\beta} &= {\rm diag}\big(c\,P^{-1}S^{-1}A^2S^{-1}P^{-1}\big) \cr
&= \frac{1}{c}\,{\rm diag}\big(BS^{-1}A^2S^{-1}B\big) \cr
&= \Big(\frac{b\circ b}{c}\Big)\circ{\rm diag}\big(S^{-1}A^2S^{-1}\big) \cr\cr
}$$
which uses the Hadamard ($\circ$) product in the final expression. This agrees with joriki's result, just with the intermediate steps spelled out.
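As a sanity check, here is a minimal NumPy sketch (the values of $A$, $\beta$, and $c$ are hypothetical test data, not from the original problem) that compares the closed-form gradient against central finite differences:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)); A = A + A.T       # symmetric A
beta = rng.uniform(1.0, 2.0, n)                    # hypothetical beta > 0
c = 1.7

def f(beta):
    S = A + c * np.diag(1.0 / beta)                # S = A + B,  B = c P^{-1}
    return np.trace(A @ np.linalg.solve(S, A))     # f = tr(A S^{-1} A)

# closed-form gradient:  (b o b / c) o diag(S^{-1} A^2 S^{-1})
S = A + c * np.diag(1.0 / beta)
Sinv = np.linalg.inv(S)
b = c / beta                                       # b = diag(B)
grad = (b * b / c) * np.diag(Sinv @ A @ A @ Sinv)

# central finite differences, one coordinate of beta at a time
eps = 1e-6
fd = np.array([(f(beta + eps * e) - f(beta - eps * e)) / (2 * eps)
               for e in np.eye(n)])
print(np.allclose(grad, fd, rtol=1e-4, atol=1e-6))  # expect True
```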
You can say a lot more about the matrix you presented. Let's define the following two vectors
$$
u=\begin{pmatrix}1\\1\\\vdots\\1\end{pmatrix}
\:{\rm and}\:v=\begin{pmatrix}a_{1}\\a_{2}\\\vdots\\a_{n}\end{pmatrix}
$$
Then your matrix is exactly
$$A=uv^{T}$$
Assume $v\neq 0$ (otherwise $A$ is the zero matrix, a trivial case).
First Case: $\sum_{i=1}^{n}a_{i}\neq 0$.
You can easily see that
- The eigenvalue $\lambda=0$ has multiplicity $n-1$, with $n-1$ linearly independent eigenvectors given by any basis of the subspace ${\rm Span}\left\{v\right\}^{\perp}$: for any $x\perp v$ we have $Ax=u\left(v^{T}x\right)=0$, and ${\rm Span}\left\{v\right\}^{\perp}$ has dimension $n-1$ because ${\rm Span}\left\{v\right\}\oplus{\rm Span}\left\{v\right\}^{\perp}=\mathbb{R}^{n}$ and ${\rm Span}\left\{v\right\}$ has dimension $1$.
- The eigenvalue $\lambda=\sum_{i=1}^{n}a_{i}$ corresponds to the eigenvector $u$, since
$$Au=uv^{T}u=\left(v^{T}u\right)u=\left(\sum_{i=1}^{n}a_{i}\right)u$$
Therefore, in this case you have $n$ linearly independent eigenvectors and the matrix is diagonalizable.
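For a concrete illustration, here is a small NumPy check (the $a_i$ values are hypothetical) confirming that the spectrum consists of $n-1$ zeros plus the eigenvalue $\sum_{i}a_{i}$ with eigenvector $u$:

```python
import numpy as np

a = np.array([3.0, -1.0, 2.0, 0.5])    # hypothetical a_i, sum = 4.5 != 0
n = a.size
u = np.ones(n)
A = np.outer(u, a)                     # A = u v^T with v = a

print(np.sort_complex(np.linalg.eigvals(A)))  # n-1 zeros and one eigenvalue sum(a)
print(A @ u, a.sum() * u)                     # A u = (sum a_i) u
```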
Second Case: $\sum_{i=1}^{n}a_{i}=0$.
In this case the first point still applies, but the matrix is not diagonalizable, since it has only $n-1$ linearly independent eigenvectors (you can't have $n$ linearly independent eigenvectors for the eigenvalue $\lambda=0$ unless the matrix is zero). It does, however, admit the following Jordan canonical form
$$A\sim\begin{pmatrix}J_{2}\left(0\right)&0_{2\times\left(n-2\right)}\\0_{\left(n-2\right)\times2}&0_{\left(n-2\right)\times\left(n-2\right)}\end{pmatrix}=J_{2}\left(0\right)\oplus 0_{\left(n-2\right)\times\left(n-2\right)}$$
where
$$J_{2}\left(0\right)=\begin{pmatrix}0&1\\0&0\end{pmatrix}$$
is a $2\times 2$ Jordan block for the eigenvalue $\lambda=0$. Indeed, $A^{2}=u\left(v^{T}u\right)v^{T}=\left(\sum_{i=1}^{n}a_{i}\right)A=0$, so $A$ is nilpotent of index $2$, and since ${\rm rank}(A)=1$ there is exactly one Jordan block of size $2$; the remaining blocks are $1\times 1$.
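The same kind of check works here; this sketch (again with hypothetical $a_i$, now summing to zero) confirms that $A^{2}=0$ while ${\rm rank}(A)=1$:

```python
import numpy as np

a = np.array([3.0, -1.0, -2.5, 0.5])    # hypothetical a_i with sum = 0
n = a.size
A = np.outer(np.ones(n), a)             # A = u v^T

print(np.allclose(A @ A, 0))            # True: A^2 = (v^T u) A = 0
print(np.linalg.matrix_rank(A))         # 1 => exactly one 2x2 Jordan block
```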
No, each row permutation commutes with each column permutation, and together they generate a group isomorphic to $S_m \times S_n$, which (at least when $m,n>1$) is a proper subgroup of $S_{mn}$.
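To see the size gap concretely, here is a short brute-force sketch (with small, hypothetical $m$ and $n$) that enumerates the group generated by row and column permutations of an $m\times n$ grid:

```python
# Since row and column permutations commute, every product of the
# generators reduces to (row permutation) * (column permutation), so
# enumerating those pairs already gives the whole generated group.
from itertools import permutations
from math import factorial

m, n = 2, 3
cells = [(i, j) for i in range(m) for j in range(n)]

def induced(row_perm, col_perm):
    # permutation of the mn cells induced by permuting rows and columns
    return tuple(cells.index((row_perm[i], col_perm[j])) for (i, j) in cells)

group = {induced(rp, cp)
         for rp in permutations(range(m))
         for cp in permutations(range(n))}

print(len(group))           # 12 = 2! * 3!, the order of S_m x S_n
print(factorial(m * n))     # 720, the order of S_mn: a proper subgroup
```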