$\newcommand{\ab}[1]{\langle #1 \rangle}$ $\newcommand{\mr}{\mathscr}$ $\newcommand{\mc}{\mathcal}$
$\newcommand{\bw}{\bigwedge}$
$\newcommand{\tr}{\text{trace}}$
Below I give the definitions using exterior powers, and I prove some theorems so that readers to whom this approach is new can gain some familiarity with it.
Trace
Let $V$ be an $n$-dimensional vector space over a field $F$ and $T$ be a linear operator on $V$.
Define a map $f:V^n\to \bw^n V$ as
$$
f(v_1 , \ldots, v_n)=\sum_{i=1}^n v_1\wedge \cdots \wedge v_{i-1}\wedge Tv_i\wedge v_{i+1}\wedge \cdots \wedge v_n
$$
Then $f$ is an alternating multilinear map.
Therefore, by the universal property of the exterior power, $f$ induces a unique linear map $\theta:\bw^n V\to \bw^n V$ satisfying $\theta(v_1 \wedge \cdots \wedge v_n)=f(v_1, \ldots, v_n)$.
Since $\dim(\bw^n V)=1$, there is a unique $c\in F$ such that $\theta(v_1 \wedge \cdots \wedge v_n)=c(v_1 \wedge \cdots \wedge v_n)$ for all $v_1 , \ldots, v_n\in V$.
Definition.
This unique element $c\in F$ is called the trace of $T$ and is written $\tr(T)$.
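In coordinates, a wedge of $n$ vectors corresponds to the determinant of the matrix having those vectors as columns, so the definition can be checked numerically. Here is a minimal sketch using NumPy and a random operator (the matrix $T$ and the dimension are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
T = rng.standard_normal((n, n))

# f(e_1, ..., e_n) = sum_i  e_1 ^ ... ^ T e_i ^ ... ^ e_n.
# Each wedge of n vectors equals the determinant of the matrix whose
# columns are those vectors, so trace(T) becomes a sum of determinants.
total = 0.0
for i in range(n):
    M = np.eye(n)
    M[:, i] = T[:, i]          # replace e_i by T e_i
    total += np.linalg.det(M)

assert np.isclose(total, np.trace(T))
```

Each determinant in the loop is just the $(i,i)$ entry of $T$, which is how the usual "sum of diagonal entries" formula drops out of this definition.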
Theorem 1.
Let $V$ be a finite dimensional vector space and $T$ be a linear operator on $V$.
Let $\ab{\cdot, \cdot}$ be a non-degenerate symmetric bilinear form on $V$ that admits an orthonormal basis $\mc B=(e_1 , \ldots, e_n)$ of $V$.
Then
$$
\text{trace}(T) = \sum_{i=1}^n\ab{Te_i, e_i}
$$
Proof.
Write $Te_i=\sum_{j=1}^n \ab{Te_i, e_j}\, e_j$ and substitute into $e_1\wedge \cdots \wedge Te_i \wedge \cdots \wedge e_n$: every term with $j\neq i$ repeats a basis vector and hence vanishes, leaving $\ab{Te_i, e_i}\, e_1\wedge \cdots \wedge e_n$. Summing over $i$ and comparing with the definition of trace gives the claim.
$\blacksquare$
Going back to the definition of trace, more generally, given an $n$-tuple $(v_1, \ldots, v_n)$ of vectors in $V$ and an increasing $k$-tuple $I=(i_1, \ldots , i_k)$ of integers between $1$ and $n$, write $v_{I, j}$ to denote $Tv_j$ if $j$ appears in $I$ and simply $v_j$ otherwise. Further, write $v_I$ to denote $v_{I, 1}\wedge \cdots \wedge v_{I, n}$.
Define $f_k:V^n\to \bw^n V$ as
$$
f_k(v_1, \ldots, v_n)= \sum_{I \text{ an increasing }k\text{-tuple}}v_I
$$
Then $f_k$ is an alternating multilinear map, so it induces a unique linear map $\bw^n V\to \bw^n V$. Again, this linear map is multiplication by a constant, which we call the $k$-th trace of $T$ and denote $\text{trace}_k(T)$.
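The same coordinate translation (wedge of $n$ vectors $\leftrightarrow$ determinant of the matrix of columns) lets us compute $\text{trace}_k(T)$ numerically. The sketch below also cross-checks it against the characteristic polynomial, since $\text{trace}_k(T)$ is the $k$-th elementary symmetric polynomial of the eigenvalues; the matrix and indices are illustrative:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
n, k = 4, 2
T = rng.standard_normal((n, n))

# trace_k(T): for each increasing k-tuple I, wedge e_1, ..., e_n with
# e_j replaced by T e_j for j in I; in coordinates this is a determinant.
trace_k = 0.0
for I in combinations(range(n), k):
    M = np.eye(n)
    M[:, list(I)] = T[:, list(I)]
    trace_k += np.linalg.det(M)

# Cross-check: det(xI - T) = x^n - trace_1 x^{n-1} + trace_2 x^{n-2} - ...,
# so trace_k is (-1)^k times the k-th characteristic-polynomial coefficient.
coeffs = np.poly(T)            # coefficients of det(xI - T), leading term first
assert np.isclose(trace_k, (-1) ** k * coeffs[k])
```

Each determinant in the loop reduces to a $k\times k$ principal minor of $T$, so $\text{trace}_k(T)$ is the sum of the $k\times k$ principal minors.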
Now we define determinants; the reader should convince themselves that the determinant is the $n$-th trace (and the trace defined above is the first).
Determinants
Definition.
Let $V$ be a finite dimensional vector space and $T$ a linear operator on $V$.
Writing $\dim V=n$, the operator $T$ induces a linear map $\bw^n T:\bw^n V\to \bw^n V$ determined by $\bw^n T(v_1 \wedge \cdots \wedge v_n)=Tv_1 \wedge \cdots \wedge Tv_n$. Since $\dim(\bw^n V)=1$, there is a unique constant $c\in F$ such that
$$
\textstyle{\bw^n T(v_1 \wedge \cdots \wedge v_n)} =c\cdot(v_1 \wedge \cdots \wedge v_n)
$$
for all $v_1 , \ldots, v_n\in V$.
This constant $c$ is called the determinant of $T$ and is written $\det T$.
In particular, $\det T$ is defined without any choice of basis.
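Since $v_1 \wedge \cdots \wedge v_n$ corresponds in coordinates to the determinant of the matrix with columns $v_1, \ldots, v_n$, the constant $c$ can be recovered as a ratio of two such determinants. A minimal sketch with random $T$ and random $v_i$ (all names illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
T = rng.standard_normal((n, n))
V = rng.standard_normal((n, n))    # columns: arbitrary vectors v_1, ..., v_n

# (^n T)(v_1 ^ ... ^ v_n) = Tv_1 ^ ... ^ Tv_n corresponds to det(T @ V),
# and v_1 ^ ... ^ v_n corresponds to det(V); the scalar c is their ratio.
c = np.linalg.det(T @ V) / np.linalg.det(V)
assert np.isclose(c, np.linalg.det(T))
```

The same ratio comes out for any choice of linearly independent $v_i$, illustrating that $c$ depends only on $T$.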
Theorem 2.
Let $T$ and $S$ be linear operators on $V$.
Then $\det(TS)=(\det T)(\det S)$.
Proof.
Say $\dim V=n$.
We have
$$
\begin{array}{rcl}
(\det(TS))\, v_1\wedge \cdots \wedge v_n &=& T(Sv_1)\wedge \cdots \wedge T(Sv_n)\\
&=& (\det T)\, Sv_1 \wedge \cdots \wedge Sv_n\\
&=& (\det T)(\det S)\, v_1\wedge \cdots\wedge v_n
\end{array}
$$
Since this is true for all $v_1 , \ldots, v_n\in V$, we must have $\det(TS)=(\det T)(\det S)$.
$\blacksquare$
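Theorem 2 is easy to spot-check numerically. A quick NumPy sketch with random operators (dimension and seed illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
T = rng.standard_normal((n, n))
S = rng.standard_normal((n, n))

# Multiplicativity of the determinant, as in Theorem 2.
assert np.isclose(np.linalg.det(T @ S),
                  np.linalg.det(T) * np.linalg.det(S))
```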
It is true because, for any two $n\times n$ matrices $A$ and $B$, $\operatorname{tr}(AB)=\operatorname{tr}(BA)$.
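The cyclic identity $\operatorname{tr}(AB)=\operatorname{tr}(BA)$ can likewise be verified numerically (both sides equal $\sum_{i,j} A_{ij}B_{ji}$); a quick NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# trace(AB) = trace(BA): both equal sum over i, j of A[i, j] * B[j, i].
assert np.isclose(np.trace(A @ B), np.trace(B @ A))
```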