Perhaps the following elementary argument using sparseness will help. Assume that $|A_{i,j}|\le c$ for all $i,j$, and let $\tau\in[0,1]$ be the proportion of non-zero entries of the matrix, so that exactly $\tau n^2$ entries are non-zero. Then
$
\max_{k}{|\lambda_k|} \le \sqrt{\mathrm{tr}(A^*A)}= \sqrt{\sum_{i,j=1}^{n}{|A_{i,j}|^2}} \le \sqrt{\tau n^2 c^2} = \sqrt{\tau}n c.
$
For small $\tau$ this beats Zhan's estimate, which in this setting is roughly $nc$.
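As a quick numerical sanity check (my sketch, not part of the answer above; the size $n$, bound $c$, and density $\tau$ are arbitrary choices), one can compare the three quantities for a random sparse matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
n, c = 200, 1.0
tau = 0.05                                   # target proportion of non-zero entries

mask = rng.random((n, n)) < tau              # sparsity pattern
A = np.where(mask, rng.uniform(-c, c, (n, n)), 0.0)

spectral = np.abs(np.linalg.eigvals(A)).max()    # max_k |lambda_k|
frobenius = np.sqrt((np.abs(A) ** 2).sum())      # sqrt(tr(A* A))
bound = np.sqrt(mask.mean()) * n * c             # sqrt(tau) n c, with the realized tau
```

One should see `spectral <= frobenius <= bound`, with a large gap when $\tau$ is small.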
The Gershgorin circle theorem is useful for matrices whose off-diagonal entries are small.
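For illustration (my addition), the Gershgorin discs are cheap to compute, and every eigenvalue lies in their union; with small off-diagonal entries the discs are tight around the diagonal:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
# Large diagonal, small off-diagonal entries, so the discs are small.
A = np.diag(rng.uniform(5.0, 10.0, n)) + 0.1 * rng.random((n, n))

centers = np.diag(A)
radii = np.abs(A).sum(axis=1) - np.abs(centers)      # off-diagonal row sums

eigs = np.linalg.eigvals(A)
in_discs = all(np.any(np.abs(l - centers) <= radii + 1e-9) for l in eigs)
```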
Consider the simplex of nonzero diagonal matrices $W$ with nonnegative entries, up to scaling, and the simplex of nonzero vectors $V$ with nonnegative entries, up to scaling.
There is a map $V=\max(WX'XW\mathbf 1,0)$ from the first simplex to the second, with $\max(a,0)$ interpreted entrywise. This is well-defined because $WX'XW\mathbf 1$ always has a positive entry: the sum of its entries is $\mathbf 1'WX'XW\mathbf 1>0$, since $W\mathbf 1$ is a nonzero vector and $X'X$ is positive-definite.
This map clearly sends each $k$-cell of the first simplex into the corresponding $k$-cell of the second, since whenever some coordinates of $W$ are $0$, the corresponding coordinates of $V$ are $0$.
Every such map of simplices must be surjective: the induced map between the boundary spheres of the two simplices has degree one, since, by induction on dimension, any such cell-preserving map is homotopic, through boundary-preserving maps, to the standard isomorphism between the simplices.
So there is some $W$ with $\max(WX'XW\mathbf 1,0)=\mathbf 1$, and hence $WX'XW\mathbf 1=\mathbf 1$.
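The topological argument is non-constructive, but for a concrete $X$ one can look for such a $W$ numerically. The following is only a sketch: it hands the equations $w_i\,(X'Xw)_i=1$ to a generic root finder, and nothing guarantees the solver lands on the nonnegative solution whose existence the argument establishes.

```python
import numpy as np
from scipy.optimize import fsolve

rng = np.random.default_rng(0)
X = rng.random((4, 4)) + 0.1       # generic X; X'X is positive-definite
M = X.T @ X

# With W = diag(w), the equation W X'X W 1 = 1 reads w_i * (M w)_i = 1 for all i.
def residual(w):
    return w * (M @ w) - 1.0

w = fsolve(residual, np.ones(4))   # Newton-type solve from the all-ones start
```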
Best Answer
This is false. Let $c$ be a quadratic nonresidue modulo $p$. Our matrix will be $(p^2-1) \times (p^2-1)$, with rows and columns indexed by pairs $(x,y) \in \mathbb{F}_p^2 \setminus \{ (0,0) \}$.
Our matrix is defined by $$A_{(x_1,y_1) \ (x_2, y_2)} = x_1 x_2 - c y_1 y_2.$$ This is obviously symmetric. Since $c$ is a nonresidue, we have $x^2-cy^2 \neq 0$ for $(x,y) \in \mathbb{F}_p^2 \setminus \{ (0,0) \}$, so the diagonal entries are nonzero.
Each column of this matrix, viewed as a function of the row index $(x_1,y_1)$, is a linear function on $\mathbb{F}_p^2$. So every vector in the image of this matrix is the restriction of a linear function $\mathbb{F}_p^2 \longrightarrow \mathbb{F}_p$; any such function either is identically zero or vanishes on a line through $0$, so in either case it takes the value $0$ at some nonzero point.
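This can be checked exhaustively for a small prime (a verification sketch; $p=3$ and $c=2$ are my choices, with $2$ a nonresidue mod $3$):

```python
import numpy as np
from itertools import product

p, c = 3, 2                        # squares mod 3 are {0, 1}, so 2 is a nonresidue

pts = [(x, y) for x in range(p) for y in range(p) if (x, y) != (0, 0)]
A = np.array([[(x1 * x2 - c * y1 * y2) % p for (x2, y2) in pts]
              for (x1, y1) in pts])          # (p^2-1) x (p^2-1), symmetric

diag_nonzero = all(A[i, i] != 0 for i in range(len(pts)))
# Every vector in the image of A over F_p has some zero coordinate.
image_has_zero = all(0 in (A @ np.array(v)) % p
                     for v in product(range(p), repeat=len(pts)))
```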
We could make a smaller $(p+1) \times (p+1)$ example by just taking one point $(x,y)$ on each line through $0$ in $\mathbb{F}_p^2$.
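The same check works for the $(p+1) \times (p+1)$ version; as representative points one can take $(1,y)$ for $y \in \mathbb{F}_p$ together with $(0,1)$ (again my sketch with $p=3$, $c=2$):

```python
import numpy as np
from itertools import product

p, c = 3, 2                                    # 2 is a quadratic nonresidue mod 3
pts = [(1, y) for y in range(p)] + [(0, 1)]    # one point per line through 0
A = np.array([[(x1 * x2 - c * y1 * y2) % p for (x2, y2) in pts]
              for (x1, y1) in pts])            # (p+1) x (p+1)

small_diag_nonzero = all(A[i, i] != 0 for i in range(p + 1))
small_image_has_zero = all(0 in (A @ np.array(v)) % p
                           for v in product(range(p), repeat=p + 1))
```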
Moreover, I claim that $(p+1) \times (p+1)$ is optimal. In other words, if $A$ is an $n \times n$ matrix with $n \leq p$ and nonzero entries on the diagonal, then some vector in the image of $A$ has all coordinates nonzero. Interestingly, I don't need the symmetry hypothesis.
Let $W$ be the image of $A$. Note that $W$ is not contained in any coordinate hyperplane: the $i$th column of $A$ lies in $W$, and its $i$th coordinate $A_{ii}$ is nonzero.
Let $\vec{u}= (u_1,\ u_2, \ \ldots,\ u_n)$, among all elements of $W$, have the fewest zero entries. Suppose for the sake of contradiction that some $u_i$ is $0$. Then there is some other $\vec{v}$ in $W$ with $v_i \neq 0$. Consider the points $(u_j : v_j)$ in $\mathbb{P}^1(\mathbb{F}_p)$ as $j$ ranges over all indices where $(u_j, v_j) \neq (0,0)$. There are fewer than $p+1$ such $j$ (there are at most $n \leq p$ of them), so some point of $\mathbb{P}^1(\mathbb{F}_p)$ is not hit; call it $(a:b)$. Then $-b \vec{u} + a \vec{v}$ is in $W$, and its $j$th entry $-b u_j + a v_j$ vanishes only when $(u_j, v_j) = (0,0)$. In particular its $i$th entry $a v_i$ is nonzero, since the point $(0:1) = (u_i : v_i)$ is hit and hence $a \neq 0$. So $-b \vec{u} + a \vec{v}$ has fewer zero entries than $\vec{u}$, a contradiction.
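One can also probe the optimality claim empirically (my sketch): for random $n \times n$ matrices over $\mathbb{F}_p$ with $n \le p$ and nonzero diagonal, a brute-force search over the image always finds a vector with all coordinates nonzero.

```python
import numpy as np
from itertools import product

p, n = 5, 4                                  # n <= p, so such a vector must exist
rng = np.random.default_rng(2)

def has_full_support_image_vector(A):
    """Search the image of A over F_p for a vector with no zero coordinate."""
    return any(np.all((A @ np.array(v)) % p != 0)
               for v in product(range(p), repeat=n))

all_found = True
for _ in range(25):
    A = rng.integers(0, p, size=(n, n))
    A[np.arange(n), np.arange(n)] = rng.integers(1, p, size=n)   # nonzero diagonal
    all_found = all_found and has_full_support_image_vector(A)
```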