Let's first show that $\text{ker } \varphi$ is a two-sided ideal of $\mathcal{L}(V)$.
By definition, $$\text{ker } \varphi = \{E \in \mathcal{L}(V) : \varphi(E) = 0\}$$
Consider any $S \in \text{ker } \varphi$ and $T \in \mathcal{L}(V)$. Since $S \in \text{ker } \varphi \subseteq \mathcal{L}(V)$, the multiplicative property of $\varphi$ gives $\varphi(ST) = \varphi(S)\varphi(T)$. But $\varphi(S) = 0$, which implies
$$
\varphi(ST) = 0\varphi(T) = 0
$$
This means $ST \in \text{ker } \varphi$. I have shown in my previous response that $\varphi(ST) = \varphi(TS)$. This implies that $TS \in \text{ker } \varphi$ as well. Thus, $\text{ker } \varphi$ is a two-sided ideal of $\mathcal{L}(V)$.
The only two-sided ideals of $\mathcal{L}(V)$ are $\{0\}$ and $\mathcal{L}(V)$. Let's now show $\text{ker } \varphi \neq \{0\}$ by showing there exists a non-zero mapping $S \in \text{ker } \varphi$.
Let $v_1,\ldots, v_n$ be a basis of $V$. Define a linear mapping $S: V \to V$ on the basis of $V$ as follows:
If $n$ is even:
$$
Sv_j =
\begin{cases}
v_{j+1}, & j=1, 3, \ldots, n-1 \\
0, & j=2, 4, \ldots, n
\end{cases}
$$
If $n$ is odd:
$$
Sv_j =
\begin{cases}
v_{j+1}, & j=1, 3, \ldots, n-2 \\
0, & j=2, 4, \ldots, n-1, n
\end{cases}
$$
In either case, we'll have that $\text{range }S \subseteq \text{null }S$, which implies $S^2 = 0$, and hence $\varphi(S^2) = \varphi(0) = 0$. But by definition of $\varphi$, $\varphi(S^2) = \varphi(S)\varphi(S)$, so $\varphi(S)^2 = 0$. Since a field has no zero divisors, this forces $\varphi(S) = 0$, and so $S \in \text{ker }\varphi$. Since $S$ is non-zero, we have that $\text{ker }\varphi \neq \{0\}$. This must mean that $\text{ker }\varphi = \mathcal{L}(V)$, which means that for every linear map $T \in \mathcal{L}(V)$, we have $\varphi(T) = 0$, as was to be proven.
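As a quick numerical sanity check of the construction above (a sketch using NumPy; `nilpotent_shift` is a hypothetical helper that builds the matrix of $S$ with respect to $v_1, \ldots, v_n$), one can verify that $S$ is non-zero yet $S^2 = 0$ for several values of $n$, covering both parities:

```python
import numpy as np

def nilpotent_shift(n):
    """Matrix of S in the basis v_1, ..., v_n: S v_j = v_{j+1} for odd j
    with j + 1 <= n, and S v_j = 0 otherwise (0-indexed internally)."""
    S = np.zeros((n, n))
    for j in range(1, n, 2):   # rows 2, 4, ... in 1-indexed terms
        S[j, j - 1] = 1.0      # column j sends v_j to v_{j+1} (1-indexed)
    return S

for n in [2, 3, 4, 5]:
    S = nilpotent_shift(n)
    assert np.any(S != 0)                        # S is non-zero
    assert np.allclose(S @ S, np.zeros((n, n)))  # S^2 = 0
```

Both the even and odd cases of the definition reduce to the same pattern: odd-indexed basis vectors shift forward, everything else maps to zero.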
Plan.
We will prove this by showing
$$
(a) \implies (b) \implies (c) \implies (d) \implies (e) \implies (a)
$$
This is sufficient to show that the above statements are equivalent to each other, due to the transitivity of implication.
In the proofs below, we will make use of definitions and results from the text up to this exercise. Any other statements that aren't justified in the proofs below should already be well known by the time the reader comes to this exercise in the text.
Also, the proofs below assume $\mathcal{M}(T)$ is denoted as
$$
\begin{pmatrix}
A_{1,1} & \cdots & A_{1,n} \\
\vdots & & \vdots \\
A_{n,1} & \cdots & A_{n,n}
\end{pmatrix}
$$
Proofs.
$(a) \implies (b)$
Suppose $T$ is injective. This implies $\text{null }T = \{0\}$; that is, $v=0$ is the only vector satisfying $Tv=0$. To show this implies $(b)$, we must show that
$$
\lambda_1 \begin{pmatrix} A_{1,1} \\ \vdots \\ A_{n,1} \end{pmatrix} + \cdots + \lambda_n \begin{pmatrix} A_{1,n} \\ \vdots \\ A_{n,n} \end{pmatrix} = 0 \tag{1}
$$
is true only when $\lambda_1 = \cdots = \lambda_n = 0$.
By definition $\color{gray}{3.31}$, we have
$$
Tv_k = A_{1,k}u_1 + \cdots + A_{n,k}u_n \tag{2}
$$
for $k=1,\ldots,n$. This implies that
$$
\lambda_kTv_k = \lambda_kA_{1,k}u_1 + \cdots + \lambda_kA_{n,k}u_n \tag{3}
$$
for $k=1,\ldots,n$.
Summing equation $(3)$ over $k=1,\ldots,n$ and using the linearity of $T$ gives
$$
T(\lambda_1v_1 + \cdots + \lambda_nv_n) = (\lambda_1A_{1,1} + \cdots + \lambda_nA_{1,n})u_1 + \cdots + (\lambda_1A_{n,1} + \cdots + \lambda_nA_{n,n})u_n
$$
The coefficients on the right are exactly the entries of the left-hand side of equation $(1)$. So if equation $(1)$ holds, every coefficient above is $0$, and hence $T(\lambda_1v_1 + \cdots + \lambda_nv_n) = 0$. Since $\text{null }T = \{0\}$, this forces $\lambda_1v_1 + \cdots + \lambda_nv_n = 0$, and since $v_1,\ldots,v_n$ is linearly independent, $\lambda_1 = \cdots = \lambda_n = 0$. This shows the columns of $\mathcal{M}(T)$ are linearly independent. We have thus shown $(a) \implies (b)$.
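This direction can be spot-checked numerically (a NumPy sketch, identifying $T$ with its matrix in fixed bases): an injective operator's matrix has full column rank, while a matrix with dependent columns has a non-trivial null space and so cannot come from an injective map:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# A random square matrix is almost surely invertible, hence the matrix
# of an injective operator.
A = rng.standard_normal((n, n))
assert np.linalg.matrix_rank(A) == n  # columns are linearly independent

# Conversely, forcing a dependence among the columns destroys injectivity.
B = A.copy()
B[:, 1] = 2 * B[:, 0]  # column 2 is now a multiple of column 1
assert np.linalg.matrix_rank(B) < n
```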
$(b) \implies (c)$
Suppose the columns of $\mathcal{M}(T)$ are linearly independent in $\mathbf{F}^{n,1}$. There are $n$ of these columns. Also, $\text{dim }\mathbf{F}^{n,1}=n$ by $\color{blue}{3.40}$. Thus, by $\color{blue}{2.38}$, the columns of $\mathcal{M}(T)$ form a basis of $\mathbf{F}^{n,1}$, and hence span $\mathbf{F}^{n,1}$. We have thus shown $(b) \implies (c)$.
$(c) \implies (d)$
Suppose the columns of $\mathcal{M}(T)$ span $\mathbf{F}^{n,1}$. There are $n$ of these columns. Also, $\text{dim }\mathbf{F}^{n,1}=n$ by $\color{blue}{3.40}$. Thus, by $\color{blue}{2.42}$, the columns of $\mathcal{M}(T)$ form a basis of $\mathbf{F}^{n,1}$. Hence, the column rank of $\mathcal{M}(T)$ equals $n$ by definition $\color{gray}{3.52}$. But the row rank of $\mathcal{M}(T)$ equals the column rank of $\mathcal{M}(T)$ by $\color{blue}{3.57}$. Since $\text{dim }\mathbf{F}^{1,n}=n$, the row rank of $\mathcal{M}(T)$ equals $n$, and the span of the rows of $\mathcal{M}(T)$ is a subspace of $\mathbf{F}^{1,n}$, we have, by $\color{blue}{2.39}$, that the rows of $\mathcal{M}(T)$ span $\mathbf{F}^{1,n}$. We have thus shown $(c) \implies (d)$.
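The equality of row rank and column rank used here ($\color{blue}{3.57}$) is easy to observe numerically; the NumPy sketch below checks it on a deliberately rank-deficient matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
A[:, 4] = A[:, 0] + A[:, 1]  # force a dependence: column rank < 5

# Row rank and column rank always agree; the rank of A.T is the row rank of A.
col_rank = np.linalg.matrix_rank(A)
row_rank = np.linalg.matrix_rank(A.T)
assert col_rank == row_rank
assert col_rank < 5
```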
$(d) \implies (e)$
Suppose the rows of $\mathcal{M}(T)$ span $\mathbf{F}^{1,n}$. Since $\text{dim }\mathbf{F}^{1,n}=n$ and since the row rank of $\mathcal{M}(T)$ equals $n$, we must have, by $\color{blue}{2.42}$, that the rows of $\mathcal{M}(T)$ form a basis of $\mathbf{F}^{1,n}$, and hence are linearly independent. We have thus shown $(d) \implies (e)$.
$(e) \implies (a)$
The approach here will be to investigate the columns of $\mathcal{M}(T)$ to show $T$ is injective rather than the rows of $\mathcal{M}(T)$ because definition $\color{gray}{3.31}$ is with respect to the columns.
Suppose the rows of $\mathcal{M}(T)$ are linearly independent in $\mathbf{F}^{1,n}$. This implies the rows of $\mathcal{M}(T)$ form a basis of $\mathbf{F}^{1,n}$. Thus, the row rank of $\mathcal{M}(T)$ equals $n$ by definition $\color{gray}{3.52}$. This implies the column rank of $\mathcal{M}(T)$ equals $n$ by $\color{blue}{3.57}$; this means the $n$ columns of $\mathcal{M}(T)$ span $\mathbf{F}^{n,1}$, implying they form a basis of $\mathbf{F}^{n,1}$ and hence are linearly independent.
Now suppose $Tv = 0$ for some $v \in V$, and write $v = c_1v_1 + \cdots + c_nv_n$. By definition $\color{gray}{3.31}$ and the linearity of $T$,
$$
Tv = c_1Tv_1 + \cdots + c_nTv_n = (c_1A_{1,1} + \cdots + c_nA_{1,n})u_1 + \cdots + (c_1A_{n,1} + \cdots + c_nA_{n,n})u_n = 0
$$
Since $u_1, \ldots, u_n$ is linearly independent, every coefficient above is $0$; that is,
$$
c_1 \begin{pmatrix} A_{1,1} \\ \vdots \\ A_{n,1} \end{pmatrix} + \cdots +
c_n \begin{pmatrix} A_{1,n} \\ \vdots \\ A_{n,n} \end{pmatrix} = 0 \tag{4}
$$
Since the columns of $\mathcal{M}(T)$ are linearly independent, equation $(4)$ forces $c_1 = \cdots = c_n = 0$, and hence $v = 0$. Thus $\text{null }T = \{0\}$, which means $T$ is injective. We have thus shown $(e) \implies (a)$.
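The step from independent columns to injectivity can also be spot-checked (a NumPy sketch, identifying $T$ with its matrix $\mathcal{M}(T)$; a linear map is injective exactly when its matrix has a trivial null space):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
A = rng.standard_normal((n, n))   # almost surely has independent columns
assert np.linalg.matrix_rank(A) == n

# If Tv = 0, the coordinate vector c of v solves A c = 0. With independent
# columns, rank-nullity gives a null space of dimension 0.
null_dim = n - np.linalg.matrix_rank(A)
assert null_dim == 0              # so c = 0, i.e. v = 0 and T is injective
```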
Let $M = \max_{i,j} \vert T_{ij}\vert.$
Let $x$ be a vector in $V$ whose coordinates with respect to $v_1,\ldots,v_n$ are $(x_1,\ldots,x_n)'$.
Then
$$ (Tx)_j = \sum_{k = 1 }^n T_{jk} x_k.$$
By the triangle inequality and the definition of $M$, we obtain
$$ \vert (Tx)_j \vert \leq \sum_{k = 1}^n \vert T_{jk} x_k \vert \leq M \sum_{k = 1}^n \vert x_k \vert,$$
which holds for any $j = 1,\ldots,n$. As suggested in the comments, consider the special case where $j$ maximises $\vert x_j \vert$; then we have $\sum_{k = 1}^n \vert x_k \vert \leq n \vert x_j \vert$. Hence
$$ \vert (Tx)_j \vert \leq n M \vert x_j \vert.$$
In the particular case where $x \neq 0$ is an eigenvector for the eigenvalue $\lambda$ then $Tx = \lambda x$ so we have
$$ \vert \lambda \vert \vert x_j \vert \leq n M \vert x_j \vert.$$
If $\vert x_j \vert \neq 0$ we can divide to obtain the inequality we wanted to prove.
If $\vert x_j \vert = 0$, then, since $\vert x_j \vert$ is the largest of the coordinates' absolute values, every coordinate of $x$ is $0$, so $x = 0$, a contradiction.
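The bound $\vert \lambda \vert \leq nM$ can be checked numerically; the sketch below (using NumPy, with a random real matrix standing in for the matrix of $T$) verifies it for every eigenvalue:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
T = rng.standard_normal((n, n))
M = np.max(np.abs(T))                 # M = max_{i,j} |T_ij|
eigenvalues = np.linalg.eigvals(T)    # may be complex for a real matrix

# Every eigenvalue satisfies |lambda| <= n * M.
assert np.all(np.abs(eigenvalues) <= n * M)
```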