Yes, you are: if $A$ is a symmetric matrix, then $A$ is diagonalizable (semisimple). Hence the minimal polynomial $m_A(x)$ of $A$ is a product of distinct linear factors, each of multiplicity $1$, i.e. $m_A(x)=(x-\lambda_1)\cdot...\cdot(x-\lambda_k)$, where $\lambda_1,...,\lambda_k$ are the distinct eigenvalues of $A$.
Now, you know that $I,A,\ldots,A^d$ are linearly independent, which implies that no nonzero polynomial of degree $\leq d$ annihilates $A$ (applying any such polynomial to $A$ gives a nontrivial linear combination of $I,A,\ldots,A^d$). Since $m_A(x)$ annihilates $A$, it follows that $d<\deg(m_A(x))=k$. Hence $k\geq d+1$.
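As a concrete sanity check (with a hypothetical $3\times 3$ symmetric matrix of my own choosing, not one from the question), the factored minimal polynomial annihilates $A$ and the bound $k\geq d+1$ holds:

```python
from sympy import Matrix, eye, zeros

# A symmetric matrix whose distinct eigenvalues are {1, 3}, so k = 2.
A = Matrix([[2, 1, 0],
            [1, 2, 0],
            [0, 0, 3]])

# m_A(x) = (x - 1)(x - 3): the product over the distinct eigenvalues
# annihilates A, as diagonalizability guarantees.
m_of_A = (A - 1 * eye(3)) * (A - 3 * eye(3))
print(m_of_A == zeros(3, 3))  # True

# I and A are linearly independent (A is not scalar), so the largest d
# with I, A, ..., A^d independent is d = 1, and indeed k = 2 >= d + 1.
```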
This is an approach different from yours. Let's make a few things precise.
Definition Let $G$ be a group and $F$ be a field.
- A character from $G$ to $F$ is a group homomorphism $\sigma:G\to F^\ast$, where $F^\ast$ denotes the multiplicative group of units of $F$.
- We say that a finite set of characters $\{\sigma_1,\ldots,\sigma_n\}$ is dependent if there exist scalars $a_1,\ldots,a_n\in F$, not all $0$, such that
$$\sum_{j=1}^n a_j \sigma_j(x) = 0\quad \forall x\in G.$$
- A finite set of characters is independent if it is not dependent.
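A small example to anchor the definitions (my own illustration, not from the original): for $G=(\Bbb Z,+)$ and $F=\Bbb Q$, each nonzero rational $a$ gives a character $n\mapsto a^n$:

```python
from fractions import Fraction

# Illustrative characters of G = (Z, +) into Q*: sigma_a(n) = a**n, a != 0.
def sigma(a):
    return lambda n: Fraction(a) ** n

s2 = sigma(2)
# Homomorphism property: sigma_a(m + n) = sigma_a(m) * sigma_a(n).
assert s2(3 + 4) == s2(3) * s2(4)
# Values land in the units Q* (never zero), even at negative arguments.
assert s2(-2) == Fraction(1, 4)
```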
Theorem Let $G$ be a group and $F$ be a field. For any $n\in \Bbb N$, any set $\{\sigma_1,\ldots,\sigma_n\}$ of $n$ characters from $G$ to $F$ is independent.
Proof. Proceed by induction.
If $n=1$ and $a\in F$ then $$a\sigma(x) =0\quad \forall x\in G$$ implies $a=0$ because $\sigma(G)\subseteq F^\ast$.
Suppose that the theorem holds for every $k\in\{1,\ldots,n-1\}$; this is our induction hypothesis.
Arguing by contradiction, suppose that there is a set $\{\sigma_1,\ldots,\sigma_n\}$ of $n$ characters from $G$ to $F$ for which there exist $a_1,\ldots,a_n\in F$, not all $0$, such that
$$\sum_{j=1}^n a_j\sigma_j(x) = 0\quad\forall x\in G. \tag{1}$$
Notice that if some $a_j$ were $0$, we would have a dependent set of fewer than $n$ characters. By the induction hypothesis this cannot happen, so all the $a_j$ are nonzero.
Dividing (1) by $a_n$, we may assume that $a_n=1$. So we have
$$0=a_1\sigma_1(x)+\cdots+a_{n-1}\sigma_{n-1}(x) + \sigma_n(x)\quad \forall x\in G.\tag{2}$$
Now, $\sigma_1\neq \sigma_n$ (otherwise $\{\sigma_1,\ldots,\sigma_n\}$ would have fewer than $n$ elements), so there is some $g\in G$ with $\sigma_1(g)\neq \sigma_n(g)$. Equation (2) holds for every element of $G$; in particular it holds for elements of the form $gx$ with $x\in G$, so we get
$$0=a_1\sigma_1(g)\sigma_1(x)+\cdots+a_{n-1}\sigma_{n-1}(g)\sigma_{n-1}(x)+\sigma_n(g)\sigma_n(x)\quad\forall x\in G.$$
Divide this last equation by $\sigma_n(g)$:
$$0=a_1\frac{\sigma_1(g)}{\sigma_n(g)}\sigma_1(x)+\cdots+a_{n-1}\frac{\sigma_{n-1}(g)}{\sigma_n(g)}\sigma_{n-1}(x)+\sigma_n(x)\quad\forall x\in G.$$
Subtracting the equation (2) from this last one, we get
$$0=a_1\left[\frac{\sigma_1(g)}{\sigma_n(g)}-1\right]\sigma_1(x)+\cdots+a_{n-1}\left[\frac{\sigma_{n-1}(g)}{\sigma_n(g)}-1\right]\sigma_{n-1}(x)\quad\forall x\in G.$$
Thanks to the independence of $\{\sigma_1,\ldots,\sigma_{n-1}\}$ (the induction hypothesis again), all the coefficients vanish; in particular
$$a_1\left[\frac{\sigma_1(g)}{\sigma_n(g)}-1\right]=0,$$
and since $a_1\neq 0$, this implies $\sigma_1(g)=\sigma_n(g)$, contradicting the choice of $g$.
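A quick numerical illustration of the theorem (my own, with $G=(\Bbb Z,+)$, $F=\Bbb Q$, and the three characters $n\mapsto 2^n,\,3^n,\,5^n$): evaluating any relation $\sum_j a_j\sigma_j=0$ at $n=0,1,2$ yields a Vandermonde system, and its nonzero determinant forces $a_1=a_2=a_3=0$.

```python
from sympy import Matrix

# Characters sigma_j(n) = b_j**n of (Z, +) into Q*, for distinct bases.
bases = [2, 3, 5]

# Evaluate each character at n = 0, 1, 2: rows are sample points,
# columns are characters -- a Vandermonde matrix in the bases.
M = Matrix([[b ** n for b in bases] for n in range(3)])

# A nonzero determinant means the only solution of M * a = 0 is a = 0,
# i.e. these three characters are linearly independent.
print(M.det())  # (3-2)*(5-2)*(5-3) = 6
```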
Best Answer
Suppose there are nontrivial linear relations between the maps $f_1,\dots,f_n$ seen as elements of the vector space $K^G$; among them choose one with the minimum number of nonzero coefficients. Upon a reordering, we can assume it is $$ \alpha_1f_1+\dots+\alpha_kf_k=0 $$ with all $\alpha_i\ne0$. This means that, for every $x\in G$, $$ \alpha_1f_1(x)+\dots+\alpha_kf_k(x)=0 $$ Note that $k>1$: otherwise $\alpha_1f_1(x)=0$ for all $x$, which is impossible since $\alpha_1\ne0$ and $f_1$ takes values in $K^\ast$.
Fix $y\in G$; then also $$ \alpha_1f_1(yx)+\dots+\alpha_kf_k(yx)=0 $$ and, since the maps are homomorphisms, $$ \alpha_1f_1(y)f_1(x)+\dots+\alpha_kf_k(y)f_k(x)=0\tag{1} $$ for every $x\in G$. Multiplying the original relation through by $f_1(y)$ gives $$ \alpha_1f_1(y)f_1(x)+\dots+\alpha_kf_1(y)f_k(x)=0\tag{2} $$ By subtracting $(2)$ from $(1)$ we get $$ \alpha_2(f_2(y)-f_1(y))f_2(x)+\dots+\alpha_k(f_k(y)-f_1(y))f_k(x)=0 $$ for all $x$, hence $$ \alpha_2(f_2(y)-f_1(y))f_2+\dots+\alpha_k(f_k(y)-f_1(y))f_k=0 $$ This would be a shorter nontrivial relation unless every coefficient vanishes; by minimality they all do, so $$ f_2(y)=f_1(y),\quad \dots,\quad f_k(y)=f_1(y) $$ Since $f_1\ne f_2$, choose $y$ such that $f_1(y)\ne f_2(y)$ and you have your contradiction.
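The key cancellation step, subtracting $(2)$ from $(1)$, can be checked symbolically (a sketch with $k=3$; the free symbols `fx_i`, `fy_i` stand in for the values $f_i(x)$, $f_i(y)$ and are my own notation):

```python
from sympy import symbols, expand

# Free symbols for alpha_i, f_i(x), f_i(y), with k = 3.
a1, a2, a3 = symbols('a1 a2 a3')
fx1, fx2, fx3 = symbols('fx1 fx2 fx3')
fy1, fy2, fy3 = symbols('fy1 fy2 fy3')

# (1): the relation at yx, using f_i(yx) = f_i(y) * f_i(x).
eq1 = a1*fy1*fx1 + a2*fy2*fx2 + a3*fy3*fx3
# (2): the original relation at x, multiplied through by f_1(y).
eq2 = a1*fy1*fx1 + a2*fy1*fx2 + a3*fy1*fx3

# (1) - (2) kills the f_1 term and leaves the shorter relation.
shorter = a2*(fy2 - fy1)*fx2 + a3*(fy3 - fy1)*fx3
assert expand(eq1 - eq2 - shorter) == 0
print("cancellation verified")
```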