1) No. Any choice of basis determines an endomorphism that the matrix corresponds to, and in general different choices of basis will give different endomorphisms. (They are, of course, all similar.)
2) No. Consider the case that $A$ is not diagonal. Two matrices have the same entries if and only if they represent the same linear transformation with respect to a fixed basis.
3) No. This is the same question as 1).
4) To see this concretely, write out the condition $AS = SD$ explicitly for $D$ a diagonal matrix. To see this abstractly, let $T$ be a linear transformation on a vector space with basis $e_i$, and suppose it has a basis $v_i$ of eigenvectors with eigenvalues $\lambda_i$. Then $T$ is diagonal with respect to the basis $v_i$. If $S$ denotes the linear transformation which sends $v_i$ to $e_i$, then
$$STS^{-1}(e_i) = ST v_i = S \lambda_i v_i = \lambda_i S v_i = \lambda_i e_i$$
so $STS^{-1}$ is diagonal with respect to the basis $e_i$. Now, the above is a statement about linear transformations which is independent of basis. Writing everything above in terms of the basis $e_i$ gives you a corresponding statement about matrices, and in that statement $S^{-1}$ is more or less by definition the matrix whose columns are the coordinates of the $v_i$ (with respect to the basis $e_i$).
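If you want to see this happen numerically, here is a minimal NumPy sketch; the matrix $A$ below is an arbitrary diagonalizable example of my own, not one from the question. The columns of `S_inv` play the role of the eigenvectors $v_i$, and conjugating recovers the diagonal matrix of eigenvalues.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])   # an arbitrary diagonalizable example

# np.linalg.eig returns the eigenvalues and a matrix whose columns are
# the eigenvectors v_i; in the notation above that matrix is S^{-1}.
eigenvalues, S_inv = np.linalg.eig(A)
S = np.linalg.inv(S_inv)

# S A S^{-1} is diagonal, with the eigenvalues (here 2 and 3, up to
# ordering) on the diagonal.
D = S @ A @ S_inv
print(np.round(D, 10))
```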
I have often thought that elementary linear algebra would be less confusing if it were made explicit that when changing bases one is really working with two vector spaces; first the original vector space $V$ one cares about and second the concrete vector space $\mathbb{C}^n$ where $n = \dim V$ with its distinguished basis. A basis for $V$ is then equivalent to the choice of an isomorphism $f : \mathbb{C}^n \to V$ and changing bases corresponds to changing the choice of this map. Crucially, there are two natural ways to do this: either precompose with an automorphism $\mathbb{C}^n \to \mathbb{C}^n$ or postcompose with an automorphism $V \to V$. The two give the same result, but the notion of sameness here is itself dependent on the choice of $f$.
In category theory, one says that the finite-dimensional vector space (over a fixed field) of a given dimension is unique up to isomorphism, but not unique up to unique isomorphism, and so when identifying different vector spaces one must keep track of the identifications one is using or else risk getting hopelessly lost.
In other words, when dealing with objects that are isomorphic but for which the isomorphism is not unique, it is better to behave as if they are different objects even if they are in some sense "the same."
It might help to think of the same linear transformation with respect to two different bases as operating on two different data types. That is, with respect to a given basis $\mathcal{B}$, the corresponding matrix should perhaps be thought of as a function which accepts and spits out "$\mathcal{B}$-type" vectors, and with respect to a different basis $\mathcal{C}$ accepts and spits out $\mathcal{C}$-type vectors. These operations are compatible but to specify the compatibility you need to typecast from $\mathcal{B}$-type vectors to $\mathcal{C}$-type vectors and back again, and this is exactly what the change-of-basis matrix is supposed to do.
The change-of-basis matrix therefore has different input and output types.
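To make the typecasting picture concrete, here is a small Python sketch; the function names and the particular matrix are illustrative choices of mine, not anything canonical:

```python
import numpy as np

# Hypothetical change-of-basis matrix: converts B-type coordinate
# columns into C-type coordinate columns.
C_from_B = np.array([[1.0, 1.0],
                     [0.0, 1.0]])

def cast_B_to_C(v_B):
    """Typecast a B-type coordinate column to a C-type one."""
    return C_from_B @ v_B

def cast_C_to_B(v_C):
    """The inverse typecast."""
    return np.linalg.solve(C_from_B, v_C)

v_B = np.array([2.0, 3.0])   # one vector, expressed as a B-type column
v_C = cast_B_to_C(v_B)       # the same vector as a C-type column
assert np.allclose(cast_C_to_B(v_C), v_B)
```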
To understand this, let's take (WLOG, in two dimensions) a basis change given by
$$b_1={B^1}_1e_1+{B^2}_1e_2$$
$$b_2={B^1}_2e_1+{B^2}_2e_2$$
where $\{e_1,e_2\}$ is an old basis and $\{b_1,b_2\}$ is the new.
This relation can be succinctly expressed as $b_i={B^s}_ie_s$ (here we see how the basis vectors transform covariantly).
The matrix collecting this data is
$$[B]=\begin{bmatrix}
{B^1}_1 & {B^1}_2\\
{B^2}_1 & {B^2}_2\\
\end{bmatrix}$$
Then, to get the new components of a vector
$v=v^1e_1+v^2e_2$,
you will see that
$$v_b=[B]^{-1}v_e$$
(here we see how the components transform contravariantly),
where $v_e$ is the column of old components and $v_b$ is the column of the new components of the very same vector $v$.
Unfolded, this reads
$$
\begin{bmatrix}
w^1\\
w^2\\
\end{bmatrix}
\ =\
\begin{bmatrix}
{B^1}_1 & {B^1}_2\\
{B^2}_1 & {B^2}_2\\
\end{bmatrix}^{-1}
\begin{bmatrix}
v^1\\
v^2\\
\end{bmatrix}$$
such that $v=w^1b_1+w^2b_2$ in the new basis.
Let's take an explicit example to illuminate this further:
Let
$$b_1=e_1+2e_2,$$
$$b_2=e_1+3e_2,$$
be a basis change. Its change-of-basis matrix is
$$[B]=
\begin{bmatrix}
1 & 1\\
2 & 3\\
\end{bmatrix}.$$
Now solving for the $e_i$ we get
$$e_1=3b_1-2b_2,$$
$$e_2=-b_1+b_2.$$
Substituting these into $v$ gives:
$$v=v^1(3b_1-2b_2)+v^2(-b_1+b_2).$$
This simplifies to
$$v=(3v^1-v^2)b_1+(-2v^1+v^2)b_2.$$
Now compare this with the product $[B]^{-1}v_e$:
$$
\begin{bmatrix}
3&-1\\
-2&1\\
\end{bmatrix}
\begin{bmatrix}
v^1\\
v^2\\
\end{bmatrix}
=
\begin{bmatrix}
3v^1-v^2\\
-2v^1+v^2\\
\end{bmatrix}.
$$
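For a mechanical check of the example, here is a short NumPy verification; the old components $v^1 = 5$, $v^2 = 7$ are an arbitrary choice of mine:

```python
import numpy as np

# Columns of B hold the components of b_1 and b_2 in the old basis.
B = np.array([[1.0, 1.0],
              [2.0, 3.0]])

v_e = np.array([5.0, 7.0])   # old components: v = 5 e_1 + 7 e_2

w = np.linalg.solve(B, v_e)  # w = [B]^{-1} v_e, the new components
print(w)                     # [ 8. -3.]  i.e. v = 8 b_1 - 3 b_2

# Sanity check: reassembling w^1 b_1 + w^2 b_2 recovers the old components.
assert np.allclose(B @ w, v_e)
```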
In this example, $\Phi$ is bijective (since the rank is $3$ and the kernel is $0$). But it's exactly as you said: "endomorphism" just means a linear map from a vector space to itself; there is no assumption about whether or not it is bijective (it might be, but it might not).
That's right: it means both the domain and range are written in the standard basis. If we call them $$e_1 = \begin{pmatrix}1\\0\\0\end{pmatrix}, \quad e_2 = \begin{pmatrix} 0\\1\\0 \end{pmatrix}, \quad e_3 = \begin{pmatrix}0\\0\\1\end{pmatrix} $$ then saying $A_\Phi$ represents $\Phi$ in the standard basis means $$ \begin{align*} \Phi(e_1) &= e_1+e_2+e_3 \\ \Phi(e_2) &= e_1-e_2+e_3 \\ \Phi(e_3) &= e_3 \end{align*} $$ This is just obtained from looking at the columns of $A_\Phi$ (the coefficients of $e_1,e_2,e_3$ on the right-hand sides are the columns of the matrix).
Now, if we write the new basis as $$ v_1 = \begin{pmatrix}1\\1\\1\end{pmatrix}, \quad v_2 = \begin{pmatrix}1\\2\\1 \end{pmatrix}, \quad v_3 = \begin{pmatrix} 1\\0\\0\end{pmatrix} $$ then the new matrix $\widetilde{A}_\Phi$ for $\Phi$ in the new basis is $$ \widetilde{A}_\Phi = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix} $$ where these coefficients satisfy $$ \begin{align*} \Phi(v_1) &= a_{11}v_1 + a_{21}v_2 + a_{31}v_3 \\ \Phi(v_2) &= a_{12}v_1 + a_{22}v_2 + a_{32} v_3 \\ \Phi(v_3) &= a_{13}v_1 + a_{23}v_2 + a_{33}v_3 \end{align*} $$
So part $(b)$ is asking you to find these coefficients $a_{ij}$ of this matrix $\widetilde{A}_\Phi$.
The formula you gave is the correct one: $\widetilde{A}_\Phi = T^{-1} A_\Phi T$, where $T$ is the matrix whose columns are $v_1,v_2,v_3$: $$ T = \begin{pmatrix} 1&1&1\\1&2&0\\1&1&0 \end{pmatrix} $$
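If it helps, here is a short NumPy sketch of that computation; $A_\Phi$ is read off from the columns described above:

```python
import numpy as np

# Columns of A_Phi are Phi(e_1), Phi(e_2), Phi(e_3) in the standard basis.
A_Phi = np.array([[1.0,  1.0, 0.0],
                  [1.0, -1.0, 0.0],
                  [1.0,  1.0, 1.0]])

# Columns of T are the new basis vectors v_1, v_2, v_3.
T = np.array([[1.0, 1.0, 1.0],
              [1.0, 2.0, 0.0],
              [1.0, 1.0, 0.0]])

# A_tilde = T^{-1} A_Phi T  (solve avoids forming the inverse explicitly).
A_tilde = np.linalg.solve(T, A_Phi @ T)
print(np.round(A_tilde, 10))
# [[ 6.  9.  1.]
#  [-3. -5.  0.]
#  [-1. -1.  0.]]
```

As a sanity check, the first column says $\Phi(v_1) = 6v_1 - 3v_2 - v_3$, which you can verify directly: $\Phi(v_1) = (2,0,3)^T = 6(1,1,1)^T - 3(1,2,1)^T - (1,0,0)^T$.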