You get no information whatsoever about non-zero eigenvalues/eigenvectors from $[T]_{B,C}$ unless you know $B$ and $C$. Of course, knowing the matrix for an operator with respect to known bases does allow you to reconstruct the operator, and hence information such as eigenvectors/eigenvalues. But if you just have a matrix with respect to two unknown bases, you have essentially no information.
You do get a bit of information about eigenvalues/eigenvectors of $0$. Recall that the eigenvectors corresponding to $0$ are precisely the non-zero vectors in the kernel of $T$. The nullspace of the matrix $[T]_{B, C}$ is always the image of the kernel of $T$ under the coordinate vector map with respect to $B$. That is,
$$p \in \operatorname{ker} T \iff [p]_B \in \operatorname{null} [T]_{B, C}.$$
As the coordinate map is injective, this makes the two spaces isomorphic, and hence of the same dimension.
This map therefore takes eigenvectors of $T$ to eigenvectors of $[T]_{B, C}$, each corresponding to $0$. Note that this is the usual correspondence we get when considering eigenvectors of $T$ and eigenvectors of $[T]_{B, B}$ (for more general eigenvalues). In other words, the situation is largely unchanged for the eigenvalue $0$.
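This kernel correspondence can be checked numerically. Here is a minimal numpy sketch; the operator and the two bases are my own illustrative choices (given as columns of invertible matrices), not anything from the discussion above:

```python
import numpy as np

# A rank-deficient operator T on R^3, written in the standard basis.
A = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [1., 1., 1.]])   # rank 2, so ker T is 1-dimensional

# Two hypothetical bases B and C, given as the columns of invertible matrices.
B = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [0., 0., 1.]])
C = np.array([[2., 0., 0.],
              [1., 1., 0.],
              [0., 0., 3.]])

# [T]_{B,C} sends B-coordinates of v to C-coordinates of Tv.
T_BC = np.linalg.inv(C) @ A @ B

def nullity(M, tol=1e-10):
    # Dimension of the null space, counted via near-zero singular values.
    return int(np.sum(np.linalg.svd(M, compute_uv=False) < tol))

# The nullity is basis-independent, even though the eigenvalues are not.
assert nullity(A) == nullity(T_BC) == 1

# A concrete kernel vector p of T: its B-coordinates lie in null([T]_{B,C}).
p = np.array([1., -2., 1.])          # A @ p == 0
assert np.allclose(A @ p, 0)
p_B = np.linalg.solve(B, p)          # coordinates of p with respect to B
assert np.allclose(T_BC @ p_B, 0)
```

The assertions confirm both directions of the displayed equivalence: $p \in \ker T$ exactly when $[p]_B$ lies in the nullspace of $[T]_{B,C}$.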
However, it's also worth noting that generalised eigenvectors corresponding to $0$ are fair game. While the geometric multiplicity (the dimension of the eigenspace) is fixed, the algebraic multiplicity (the dimension of the generalised eigenspace, a.k.a. the exponent of the factor $\lambda$ in the characteristic polynomial) can definitely change. These extra dimensions can become new non-zero eigenvalues, be absorbed by other non-zero eigenvalues, or still contribute to the $0$ eigenvalue (possibly changing the structure of the Jordan blocks corresponding to $0$).
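Here's a small numpy illustration of the algebraic multiplicity changing while the geometric multiplicity stays fixed; the nilpotent operator and the second basis are my own choices for the sake of the example:

```python
import numpy as np

# T in the standard basis: a single nilpotent Jordan block, so the
# eigenvalue 0 has geometric multiplicity 1 but algebraic multiplicity 2.
A = np.array([[0., 1.],
              [0., 0.]])

# Keep B as the standard basis, but use the hypothetical basis C whose
# vectors are the columns of C_mat.
C_mat = np.array([[1., 0.],
                  [1., -1.]])

T_BC = np.linalg.inv(C_mat) @ A     # [T]_{B,C} with B = standard basis

# The representation now has eigenvalues 0 and 1: the algebraic
# multiplicity of 0 dropped from 2 to 1 ...
assert np.allclose(np.sort(np.linalg.eigvals(T_BC).real), [0., 1.])

# ... while the geometric multiplicity (the nullity) is still 1.
svals = np.linalg.svd(T_BC, compute_uv=False)
assert int(np.sum(svals < 1e-10)) == 1
```

So one "extra" dimension of the generalised eigenspace of $0$ has been converted into a new non-zero eigenvalue, exactly as described above.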
I've given no examples of this, but here's something that may help. Pick your favourite invertible map $T : V \to V$, where $V$ is finite-dimensional. Pick your favourite basis $B = (b_1, b_2, \ldots, b_n)$ of $V$. Then $C = (Tb_1, Tb_2, \ldots, Tb_n)$ is also a basis! Further, using these bases, it's straightforward to show that
$$[T]_{B, C} = I_{n \times n},$$
i.e. the identity matrix. So, any invertible map, with any array of eigenvalues and eigenvectors, can become totally homogenised to the point of being the identity map. Any subtleties about the structure of the eigenspaces (e.g. diagonalisability) are totally gone, and now the whole space is one big, undifferentiated eigenspace corresponding to the single eigenvalue $1$.
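The construction above is easy to verify numerically. A minimal numpy sketch, with an invertible map and a basis of my own choosing:

```python
import numpy as np

# Any invertible map T, here written in the standard basis of R^3.
A = np.array([[2., 1., 0.],
              [0., 1., 1.],
              [1., 0., 1.]])   # det = 3, so T is invertible

# B: any basis, given as the columns of an invertible matrix.
B_mat = np.array([[1., 1., 0.],
                  [0., 1., 1.],
                  [0., 0., 1.]])

# C: the image basis (T b_1, T b_2, T b_3), i.e. the columns of A @ B_mat.
C_mat = A @ B_mat

# [T]_{B,C} = C_mat^{-1} A B_mat collapses to the identity, erasing all
# spectral information about T.
T_BC = np.linalg.inv(C_mat) @ A @ B_mat
assert np.allclose(T_BC, np.eye(3))
```

Algebraically this is immediate: $[T]_{B,C} = (AB)^{-1} A B = I$, whatever invertible $A$ and $B$ you start from.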
In other words, if you care about eigenvalues/eigenvectors, definitely consider $[T]_{B, B}$ or $[T]_{C, C}$. I hope that helps!
Best Answer
For your first question, I want to cite @Theo Bendit's comment and generally add: Gaussian elimination changes many properties of a matrix if you're not careful with it, including the eigenvalues and, for example, the determinant (at least when row operations are applied without restriction).
For your second question: No, the eigenvalues of the operator $T$ do not depend on the basis chosen, if you calculate them as the roots of $p_A$ for $A$ a corresponding matrix representation:
Let $A$ and $B$ be similar, that is, they represent the same endomorphism w.r.t. different bases, i.e. $B=CAC^{-1}$ for some invertible $C$ (which you might call the change-of-basis matrix). Then
$$B-xI=CAC^{-1}-xI=CAC^{-1}-xCIC^{-1}=C(A-xI)C^{-1}$$
Thus, as the determinant is multiplicative, we have
$$p_B=\mathrm{det}(B-xI)=\mathrm{det}(C(A-xI)C^{-1})=\mathrm{det}(C)\cdot\mathrm{det}(A-xI)\cdot\mathrm{det}(C^{-1})=\mathrm{det}(C)\cdot\mathrm{det}(A-xI)\cdot\mathrm{det}(C)^{-1}=\mathrm{det}(A-xI)=p_A$$
The last steps follow from the elementary property of determinants that $\mathrm{det}(C^{-1})=\mathrm{det}(C)^{-1}$ for invertible $C$.
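This invariance is easy to sanity-check numerically. A minimal numpy sketch, using a triangular matrix with known eigenvalues and a change-of-basis matrix of my own choosing:

```python
import numpy as np

# A with known eigenvalues 1, 2, 3 (read off the diagonal, as A is
# upper triangular).
A = np.array([[1., 5., 0.],
              [0., 2., 7.],
              [0., 0., 3.]])

# A hypothetical change-of-basis matrix C (det = 2, so invertible),
# and the similar matrix B = C A C^{-1}.
C = np.array([[1., 1., 0.],
              [0., 1., 1.],
              [1., 0., 1.]])
B = C @ A @ np.linalg.inv(C)

# Same characteristic polynomial, hence the same eigenvalues.
assert np.allclose(np.sort(np.linalg.eigvals(B).real), [1., 2., 3.])

# The determinant identity used in the last step of the proof.
assert np.isclose(np.linalg.det(np.linalg.inv(C)), 1 / np.linalg.det(C))
```

Note that $B$ itself looks nothing like $A$ entrywise; only the spectrum (and the characteristic polynomial) survives the similarity transformation.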
EDIT: Note that it thus makes sense to define the characteristic polynomial of an endomorphism, i.e. to define $p_T$, just as it made sense to define the determinant for endomorphisms rather than only for matrices.