This is not a reference, but a short proof.
We use the following (probably known, but see later) lemma on representing a symmetric tensor as a linear combination of rank-1 symmetric tensors.
Lemma. Let $A$ be a finite set, $K$ an infinite field. Denote by $\mathcal S$ the set of symmetric functions $p:A^n\to K$. Then $\mathcal S$ is the $K$-span of rank-one functions, that is, the functions of the type $h(x_1)h(x_2)\ldots h(x_n)$, where $h:A\to K$.
Proof. Note that the product of two rank-one functions is again a rank-one function. Thus the linear span $\mathcal T$ of the rank-one functions coincides with the $K$-algebra they generate.
We may suppose that $A\subset K$. For $k=0,1,\ldots,n$ denote by $e_k(x_1,\ldots,x_n)$ the elementary symmetric polynomials, so that $\varphi_t(x_1,\ldots,x_n):=\prod_i(1+tx_i)=\sum_{k=0}^n t^ke_k$. We identify $e_k$ with the corresponding element of $\mathcal S$. Choosing $n+1$ distinct values $t_1,\ldots,t_{n+1}\in K$ (possible since $K$ is infinite) and solving the corresponding (Vandermonde) linear system of equations, we represent each $e_k$ as a linear combination of the functions $\varphi_{t_i}\in \mathcal T$. Thus $e_k\in \mathcal T$ for all $k=0,1,\ldots,n$. It is well known that the $e_k$'s generate the algebra of symmetric polynomials (over any field). Thus any symmetric polynomial function belongs to $\mathcal T$. It remains to note that any symmetric function $f\in \mathcal S$ may be represented by a symmetric polynomial. Indeed, a symmetric function $f$ may be written as $F(e_1,e_2,\ldots,e_n)$ for a certain function $F$ defined on the corresponding finite set (because the values of $e_1,\ldots,e_n$ determine the values of $x_1,\ldots,x_n$ up to a permutation), and $F$ in turn coincides with a polynomial function on this finite set. $\square$
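Here is a small numeric sketch of the Vandermonde step of the proof, with arbitrarily chosen sample values (the set $A$, the point `xs`, and the parameters `ts` are my own illustrative choices, not from the original argument): since $\varphi_t=\sum_k t^k e_k$, solving $\sum_i c_i t_i^j=\delta_{jk}$ recovers each $e_k$ as a combination of the $\varphi_{t_i}$.

```python
import itertools
import numpy as np

n = 3
ts = [0.0, 1.0, 2.0, 3.0]   # n+1 distinct parameters t_1,...,t_{n+1}
xs = (2.0, 3.0, 5.0)        # an arbitrary point of A^n, here with A a subset of R

def phi(t, xs):
    """The rank-one function phi_t(x_1,...,x_n) = prod_i (1 + t*x_i)."""
    out = 1.0
    for x in xs:
        out *= 1.0 + t * x
    return out

def e(k, xs):
    """Elementary symmetric polynomial e_k(x_1,...,x_n)."""
    return float(sum(np.prod(c) for c in itertools.combinations(xs, k)))

# Since phi_t = sum_k t^k e_k, requiring sum_i c_i t_i^j = delta_{jk}
# (a Vandermonde system) expresses e_k as a combination of the phi_{t_i}.
V = np.array([[t ** j for t in ts] for j in range(n + 1)])  # V[j][i] = t_i^j
max_err = 0.0
for k in range(n + 1):
    rhs = np.zeros(n + 1)
    rhs[k] = 1.0
    c = np.linalg.solve(V, rhs)
    recovered = sum(ci * phi(t, xs) for ci, t in zip(c, ts))
    max_err = max(max_err, abs(recovered - e(k, xs)))

assert max_err < 1e-8  # each e_k is reproduced by the combination of phi_{t_i}
```

Of course this only checks the identity at one point; the algebraic argument above gives it for all of $A^n$.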
Now we may prove your theorem for finitely supported functions $i\mapsto p_i$. By the Lemma and linearity, we may suppose that $p$ has the form $p_i=\prod_{k=1}^n H(i_k)$ for a certain finitely supported function $H$ on $\mathbb{N}$ (like the OP, I write $\mathbb{N}=\{0,1,\ldots\}$). In this case both sides of your identity are equal to $\det\left(\sum_m H(m)A^m\right)$.
Comment. The Lemma does not hold for finite fields. For example, let $A=K=\{0,1\}$ (the field $\mathbb F_2$). Then the function $x+y+z$ is not a linear combination of the rank-one functions $1$, $xyz$, $(x+1)(y+1)(z+1)$: if $x+y+z=a+bxyz+c(x+1)(y+1)(z+1)$, then substituting $y=0$, $z=1$, $x=a$ gives $a+1=a$, that is, $1=0$. I should warn that in the related paper "Symmetric tensors and symmetric tensor rank" by Pierre Comon, Gene Golub, Lek-Heng Lim, and Bernard Mourrain (SIAM Journal on Matrix Analysis and Applications, 2008, 30 (3), pp. 1254-1279) this statement, after equation (1.1), is stated for an arbitrary field, although it is proved only for the complex numbers, and the proof uses the fact that a non-zero polynomial has a non-zero value.
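The counterexample can also be verified by brute force: over $\mathbb F_2$ there are only four maps $h:\{0,1\}\to\mathbb F_2$, hence only a handful of rank-one functions $h(x)h(y)h(z)$, and one can enumerate their entire $\mathbb F_2$-span (a quick sanity check, not part of the proof):

```python
import itertools

pts = list(itertools.product([0, 1], repeat=3))  # the cube {0,1}^3

def table(f):
    """Value table of a function on {0,1}^3, reduced mod 2."""
    return tuple(f(x, y, z) % 2 for (x, y, z) in pts)

# All maps h: {0,1} -> F_2, and the rank-one functions h(x)h(y)h(z) they induce
hs = [lambda v: 0, lambda v: 1, lambda v: v, lambda v: 1 - v]
rank_one = sorted({table(lambda x, y, z, h=h: h(x) * h(y) * h(z)) for h in hs})

target = table(lambda x, y, z: x + y + z)

# Enumerate every F_2-linear combination of the rank-one value tables
span = set()
for coeffs in itertools.product([0, 1], repeat=len(rank_one)):
    combo = tuple(sum(c * t[i] for c, t in zip(coeffs, rank_one)) % 2
                  for i in range(len(pts)))
    span.add(combo)

in_span = target in span
assert not in_span  # x+y+z is NOT in the F_2-span of rank-one functions
```

Every function in the span is constant on the six points with mixed coordinates, while $x+y+z$ is not, which is exactly why the check fails.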
In any case, you may always enlarge the ground field and safely think that it is infinite.
Best Answer
Yes, provided you assume Hermiticity. Then even more is true.
Take $A$ to be a Hermitian (or real symmetric, if you like) matrix. As for $B$, it can be any positive semidefinite matrix (including your rank-1 case, and without regard to the eigenvectors of $A$). Then your assertion follows from Weyl's theorem on the eigenvalues of a sum of Hermitian matrices. This is actually stated as Problem 1 on page 198 of Horn & Johnson's "Matrix Analysis".
Here's a Google Books link to it:
http://books.google.ie/books?id=PlYQN0ypTwEC&pg=PA198&dq=interlacing+horn+johnson&hl=en&sa=X&ei=2HZ0T_DrIZOAhQeeqKSmBQ&redir_esc=y#v=onepage&q&f=false
Since $B$ is positive semidefinite, $\lambda_{1}(B) \geq 0$.
As you can see there, you can even bound the $d_{i}$'s from above.