Geometrically, it's probably best to think about self-adjoint operators in terms of their eigenspaces. An operator on a finite-dimensional inner product space is self-adjoint if and only if its eigenvalues are real and its eigenspaces are orthogonal and sum (directly) to the whole space.
That the eigenvalues are real means, roughly, that there can't be any kind of rotation happening in any plane: along each of the orthogonal eigenspaces, the operator can only stretch, shrink, and/or reflect.
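For instance, here is a quick numerical sanity check (a numpy sketch; the symmetric matrix $S$ is an arbitrary example of mine): `np.linalg.eigh` returns real eigenvalues and an orthonormal basis of eigenvectors.

```python
import numpy as np

# An arbitrary real symmetric (hence self-adjoint) matrix.
S = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

eigenvalues, Q = np.linalg.eigh(S)       # columns of Q are unit eigenvectors
print(eigenvalues)                       # all real
print(np.allclose(Q.T @ Q, np.eye(3)))  # True: eigenvectors are orthonormal
```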
Here are some examples, with geometric reasoning for why each one is or is not self-adjoint:
Rotations in a plane
As stated before, there can't really be any rotation while remaining self-adjoint: a rotation by an angle that isn't a multiple of $\pi$ produces non-real eigenvalues (of modulus $1$, in fact).
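To see this numerically (a numpy sketch; the angle $\pi/3$ is an arbitrary choice):

```python
import numpy as np

theta = np.pi / 3  # rotation by 60 degrees
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

print(np.linalg.eigvals(R))          # 0.5 + 0.866j and 0.5 - 0.866j
print(np.abs(np.linalg.eigvals(R)))  # both eigenvalues have modulus 1
print(np.allclose(R, R.T))           # False: R is not symmetric
```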
Projections onto a line/plane/subspace by least distance
Yep! These are self-adjoint. In essence, we are decomposing the space into the subspace we are projecting onto (the range) and its orthogonal complement (the kernel). We leave the vectors in the range alone (i.e. multiply them by $1$), and shrink the vectors in the kernel to nothing (i.e. multiply them by $0$).
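As a concrete check (a numpy sketch; the matrix $A$ is an arbitrary choice whose columns span the subspace being projected onto), the standard least-distance projection formula $P = A(A^\top A)^{-1}A^\top$ produces a symmetric matrix with eigenvalues $0$ and $1$:

```python
import numpy as np

# Orthogonal (least-distance) projection onto the column space of A.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])
P = A @ np.linalg.inv(A.T @ A) @ A.T

print(np.allclose(P, P.T))             # True: P is self-adjoint
print(np.sort(np.linalg.eigvalsh(P)))  # eigenvalues 0, 1, 1
```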
Reflections, by least distance
Also self-adjoint. Rather than shrinking the orthogonal complement to nothing, we reflect it, multiplying those vectors by $-1$. The map is still self-adjoint, but it is no longer positive-(semi)definite.
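Concretely, the reflection across the hyperplane orthogonal to a vector $v$ is the Householder matrix $I - 2vv^\top/(v^\top v)$; a numpy sketch with an arbitrary $v$:

```python
import numpy as np

# Reflection across the plane orthogonal to v (a Householder reflection).
v = np.array([1.0, 2.0, 2.0])
H = np.eye(3) - 2.0 * np.outer(v, v) / (v @ v)

print(np.allclose(H, H.T))             # True: H is self-adjoint
print(np.sort(np.linalg.eigvalsh(H)))  # eigenvalues -1, 1, 1
```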
Projections onto one subspace, along a complementary subspace
This is a more general type of projection, which won't generally be self-adjoint, as the complementary subspace need not be orthogonal to the subspace we are projecting onto.
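For instance, projecting $\Bbb R^2$ onto the $x$-axis along the line spanned by $(1,-1)$ sends $(x, y)$ to $(x+y, 0)$, and the resulting matrix is visibly not symmetric (a quick numpy check of my chosen example):

```python
import numpy as np

# Oblique projection onto the x-axis along the line spanned by (1, -1):
# (x, y) maps to (x + y, 0).
P = np.array([[1.0, 1.0],
              [0.0, 0.0]])

print(np.allclose(P @ P, P))  # True: P is a projection
print(np.allclose(P, P.T))    # False: P is not self-adjoint
```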
Hope that helps!
EDIT: Regarding orthogonal eigenspaces, suppose that $T : V \to V$ is self-adjoint, and $v_1, v_2$ are eigenvectors for distinct eigenvalues $\lambda_1, \lambda_2$. We simply need to show $\langle v_1, v_2 \rangle = 0$.
To prove this, consider
\begin{align*}
\lambda_1 \langle v_1, v_2 \rangle &= \langle \lambda_1 v_1, v_2 \rangle \\
&= \langle Tv_1, v_2 \rangle \\
&= \langle v_1, Tv_2 \rangle \\
&= \langle v_1, \lambda_2 v_2 \rangle \\
&= \overline{\lambda_2} \langle v_1, v_2 \rangle \\
&= \lambda_2 \langle v_1, v_2 \rangle,
\end{align*}
where the last line uses the fact that $\lambda_2$ is real. Thus, we have
$$(\lambda_1 - \lambda_2)\langle v_1, v_2 \rangle = 0 \implies \langle v_1, v_2 \rangle = 0$$
since $\lambda_1 - \lambda_2 \neq 0$.
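(For completeness, the fact that the eigenvalues of a self-adjoint operator are real comes from the same trick with a single eigenvector: if $Tv = \lambda v$ with $v \neq 0$, then
$$\lambda \langle v, v \rangle = \langle Tv, v \rangle = \langle v, Tv \rangle = \overline{\lambda} \langle v, v \rangle,$$
and dividing by $\langle v, v \rangle \neq 0$ gives $\lambda = \overline{\lambda}$.)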
Best Answer
Yes: the characteristic polynomial of a linear operator is always equal to the characteristic polynomial of its matrix representation under any choice of basis $\mathcal B$. This is a direct consequence of the structure theorem for finitely generated modules over a PID (which presumably you have seen if you are doing anything with "elementary divisors"). In particular, there is a direct correspondence between the elementary divisors of an operator and the elementary divisors of the associated matrix.
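More directly: if $A$ and $P^{-1}AP$ represent the same operator relative to two different bases, then
$$\det(xI - P^{-1}AP) = \det\bigl(P^{-1}(xI - A)P\bigr) = \det(P^{-1})\det(xI - A)\det(P) = \det(xI - A),$$
so the characteristic polynomial is unchanged under change of basis.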
For a more thorough discussion, see the Wikipedia page on the structure theorem for finitely generated modules over a principal ideal domain.
Regarding your second question: if our vector space is over the field $\Bbb R$ and the bilinear form is positive definite, then "symmetric operators" will correspond to "symmetric (real) matrices", which means that the spectral theorem applies and the roots of the characteristic polynomial must be real. For other bilinear forms, we can make no such guarantee.
Here is an example of a "symmetric" operator that fails to have real eigenvalues. Consider the bilinear form over $\Bbb R^2$ given by $$ (x,y) = x_1y_2 + x_2 y_1. $$ Relative to this bilinear form, we find that the operator $x \mapsto Ax$, where $$ A = \pmatrix{0&-1\\1&0}, $$ is "symmetric", but has no real eigenvalues.
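A quick numpy check of this example, together with the positive-definite case for contrast (the symmetric matrix $S$ below is an arbitrary choice of mine): symmetry with respect to a bilinear form with Gram matrix $B$ amounts to the matrix identity $A^\top B = BA$.

```python
import numpy as np

# Gram matrix of the bilinear form (x, y) = x1*y2 + x2*y1.
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])

# "Symmetric" w.r.t. the form means (Ax, y) = (x, Ay), i.e. A^T B == B A.
print(np.allclose(A.T @ B, B @ A))  # True: A is symmetric for this form
print(np.linalg.eigvals(A))         # 1j and -1j: no real eigenvalues

# With the standard (positive definite) inner product, B = I, symmetry
# is ordinary matrix symmetry and the eigenvalues must be real.
S = np.array([[2.0, 1.0],
              [1.0, 3.0]])
print(np.linalg.eigvals(S))         # real eigenvalues
```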