Let's consider $M_n(\mathbb K)$ and $\mathbb K$ as monoids under multiplication. Here are fairly complete answers to your technical questions:
- Every monoid homomorphism $f$ from $M_n(\mathbb{K})$ to $\mathbb{K}$ is of the form $f=\phi \circ \text{det}$ for some monoid endomorphism $\phi:\mathbb{K}\to\mathbb{K}$. In other words, the determinant has the universal property that every monoid homomorphism from $M_n(\mathbb{K})$ to $\mathbb{K}$ factors through it. To make things more concrete, take, for example, $\mathbb{K}=\mathbb{R}$: if we restrict to continuous homomorphisms, the result implies that the only such homomorphisms are $f(A)=(\text{sgn}(\text{det} A))^{\varepsilon}\cdot|\text{det} A|^r$ for $\varepsilon\in\{0,1\}$ and $r\in\mathbb{R}$.
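As a quick numerical sanity check (a sketch only, not part of the proof; the exponent choices $\varepsilon=1$, $r=1/2$ are arbitrary), one can test multiplicativity on random real matrices:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(A, eps=1, r=0.5):
    # Candidate homomorphism f(A) = sgn(det A)^eps * |det A|^r
    d = np.linalg.det(A)
    sign = np.sign(d) if eps else 1.0
    return sign * abs(d) ** r

A, B = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))
# Multiplicativity f(AB) = f(A) f(B) is inherited from det(AB) = det(A) det(B)
assert np.isclose(f(A @ B), f(A) * f(B))
```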
Here is how we can prove this: Let $f : M_n(\mathbb{K}) \to \mathbb{K}$ be a monoid homomorphism.
First, we claim that either $f(A)=1$ for all $A$, or $f(A)=0$ for all non-invertible $A$: We have $f(0)=f(00)=f(0)^2$, so either $f(0)=0$ or $f(0)=1$. If $f(0)=1$, then for all $A\in M_n(\mathbb{K})$, $f(A)=f(A)f(0)=f(A0)=f(0)=1$. So assume $f(0)=0$. Let $J$ be a nilpotent matrix of rank $n-1$, e.g., take $J$ to be a single Jordan block with zeros on the diagonal. Then $J^n=0$, so $f(J)^n=f(J^n)=f(0)=0$, hence $f(J)=0$. Now, any matrix $A$ of rank $n-1$ may be written $A=SJT$ for suitable invertible matrices $S$ and $T$, and it follows that $f(A)=f(S)f(J)f(T)=0$. Finally, since any non-invertible matrix $A$ may be written as a product of several rank $n-1$ matrices, we obtain $f(A)=0$ for all non-invertible $A$, which proves the claim.
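The two properties of $J$ used here (rank $n-1$ and $J^n=0$) are easy to confirm numerically, e.g. for $n=4$:

```python
import numpy as np

n = 4
# Single Jordan block with zeros on the diagonal: ones on the superdiagonal
J = np.diag(np.ones(n - 1), k=1)
assert np.linalg.matrix_rank(J) == n - 1             # rank n-1
assert np.allclose(np.linalg.matrix_power(J, n), 0)  # J^n = 0
```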
Thus, if we ignore the trivial case $f=1$ and restrict attention to the case where $f(A)=0$ for all non-invertible $A$, then $f$ is completely determined by its restriction to $GL_n(\mathbb{K})$, which is a group homomorphism into the multiplicative group $\mathbb{K}^*$, whose kernel is a normal subgroup $N \unlhd GL_n(\mathbb{K})$. If we exclude the exceptional situation where $n=2$ and $\mathbb{K}=\mathbb{F}_2$ or $\mathbb{F}_3$, then either $N$ is a subgroup of the scalar matrices or $N$ contains $SL_n(\mathbb{K})$. In the former case, $GL_n(\mathbb{K})/N$ has the group $PGL_n(\mathbb{K})$ as a homomorphic image, which is non-abelian (for $n\geq 2$), contradicting the fact that the First Isomorphism Theorem applied to $f$ provides an embedding of $GL_n(\mathbb{K})/N$ into the abelian group $\mathbb{K}^*$. Therefore, $N$ must contain $SL_n(\mathbb{K})$, which is the kernel of the determinant homomorphism $\text{det} : GL_n(\mathbb{K}) \to \mathbb{K}^*$. It follows that $f$ factors through the determinant homomorphism, i.e., for $A\in GL_n(\mathbb{K})$ we have $f(A) = \phi(\text{det} A)$ for some group endomorphism $\phi$ of $\mathbb{K}^*$. If we extend $\phi$ to a monoid endomorphism of $\mathbb{K}$ by setting $\phi(0)=0$, then the equation $f(A)=\phi(\text{det} A)$ also holds for non-invertible matrices $A$, since in this case both $f(A)=0$ and $\phi(\text{det}(A))=\phi(0)=0$. Thus $f = \phi \circ \text{det}$.
In the exceptional situation that $n=2$ and $\mathbb K=\mathbb F_2$ or $\mathbb F_3$, it is straightforward to check that although there are some additional normal subgroups of $GL_2(\mathbb K)$, none of them give rise to any new group homomorphisms from $GL_2(\mathbb K)$ into $\mathbb K^*$.
- There are no ring homomorphisms $M_n(\mathbb K) \to \mathbb K$ except for $n=1$, in which case the ring homomorphisms are just the field endomorphisms of $\mathbb K$.
This is just a consequence of the fact that $M_n(\mathbb K)$ is a simple ring: the kernel of a ring homomorphism $M_n(\mathbb K) \to \mathbb K$ is a two-sided ideal, hence either $0$ or all of $M_n(\mathbb K)$. The kernel cannot be everything, since a unital homomorphism sends $1$ to $1$, and for $n\geq 2$ it cannot be $0$ either, since the noncommutative ring $M_n(\mathbb K)$ does not embed into the commutative ring $\mathbb K$.
Since $AA'$ is symmetric positive semidefinite, it is diagonalizable, and in the eigendecomposition $AA' = S^{-1}JS$ the diagonal matrix $J$ has only nonnegative entries on the diagonal, hence
$$\det(AA') = \det(S^{-1}JS) = \det(S^{-1})\det(J)\det(S)=\det(J) \geq 0$$
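A numerical illustration (reading $A'$ as the transpose of a real matrix $A$; the random $4\times 4$ example is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))
G = A @ A.T                        # symmetric positive semidefinite
eigs = np.linalg.eigvalsh(G)       # real eigenvalues, sorted ascending
assert np.all(eigs >= -1e-10)      # all nonnegative, up to roundoff
assert np.linalg.det(G) >= -1e-10  # hence det(AA') >= 0
```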
Friedberg is not wrong, at least from a historical standpoint, as I am going to try to show.
Determinants were discovered "as such" in the second half of the 18th century by Cramer, who used them in his celebrated rule for the solution of a linear system (in terms of quotients of determinants). They spread rather rapidly among the mathematicians of the next two generations, who discovered properties of determinants that we now, with our modern vision, mostly express in terms of matrices.
Cauchy gave two important results about determinants, as explained in the very nice article by Hawkins referenced below:
- Around 1815, Cauchy discovered the multiplication rule (rows times columns) for two determinants. This is typical of a result that has been completely revamped: nowadays, this rule defines the multiplication of matrices, and the multiplication of determinants is restated as the homomorphism rule $\det(A \times B)= \det(A)\det(B)$.
- Around 1825, he discovered the eigenvalues "associated with a symmetric determinant" and established the important result that these eigenvalues are real. This discovery has its roots in astronomy, in connection with Sturm, explaining the term "secular values" he attached to them: see for example this.
Matrices made a shy appearance in the mid-19th century (in England); "matrix" is a term coined by Sylvester (see here). I strongly advise taking a look at his elegant style in his Collected Papers.
He and his friend Cayley can rightly be named the founding fathers of linear algebra, with determinants as a permanent reference. Here is a major quote of Sylvester's:
"I have in previous papers defined a "Matrix" as a rectangular array of terms, out of which different systems of determinants may be engendered as from the womb of a common parent".
Many important polynomials are either generated by, or advantageously expressed as, determinants:
- the characteristic polynomial (of a matrix), expressed as the famous $\det(A-\lambda I)$;
- the theory of orthogonal polynomials, mainly developed at the end of the 19th century, which can in great part be expressed with determinants;
- the "resultant" of two polynomials, invented by Sylvester (giving a condition for these polynomials to have a common root); etc.
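Both constructions are easy to reproduce with a computer algebra system; here is a small sympy sketch (the particular matrix and polynomials are arbitrary examples):

```python
import sympy as sp

x, lam = sp.symbols('x lambda')

# Characteristic polynomial as the famous det(A - lambda*I)
A = sp.Matrix([[2, 1], [1, 2]])
p = sp.expand((A - lam * sp.eye(2)).det())
assert p == lam**2 - 4*lam + 3           # eigenvalues 1 and 3

# Sylvester's resultant: zero exactly when the polynomials share a root
f, g = x**2 - 1, x**2 - 3*x + 2          # common root x = 1
assert sp.resultant(f, g, x) == 0
assert sp.resultant(f, x**2 - 3*x + 3, x) != 0   # no common root
```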
Let us repeat it: for a mid-19th-century mathematician, a square array of numbers necessarily has a value (its determinant); it cannot have any other meaning. If it is a rectangular array, the numbers attached to it are the determinants of the submatrices that can be "extracted" from it.
The identification of "Linear Algebra" as an integral (and new) part of mathematics is mainly due to the German school (say, from 1870 until the 1930s). I won't cite the names; there are too many of them. An example among many others of this German domination: the German-English hybrid word "eigenvalue". The word "kernel" could well have remained the German word "Kern", which appears around 1900 (see this site).
The triumph of linear algebra is rather recent (mid-20th century), "triumph" meaning that linear algebra has now found a very central place. And determinants in all that? Maybe the biggest blade in this Swiss army knife, but no more; another invariant (a term that would deserve a long paragraph by itself), the trace, would be another blade, and not the smallest.
In the 19th century, geometry was still at the heart of mathematical education; therefore, the connection between geometry and determinants was essential in the development of linear algebra. Some cornerstones:
A side remark: this kind of reflection was decisive in the Bourbaki team's decision to avoid all figures and to adopt the extreme view of reducing geometry to linear algebra (see the "Down with Euclid" of J. Dieudonné in the sixties).
Different examples of the emergence of new trends:
a) the concept of rank: for example, a pair of distinct straight lines is a degenerate conic whose matrix has rank 2 (rank 1 for a double line). The "rank" of a matrix used to be defined in an indirect way as the size of the largest nonzero determinant that can be extracted from the matrix. Nowadays, the rank is defined in a more straightforward way as the dimension of the range space... at the cost of a little more abstraction.
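The historical definition is easy to implement directly and compare against the modern one; a brute-force sketch (the helper `rank_via_minors` is of course just illustrative, and exponentially slow):

```python
import numpy as np
from itertools import combinations

def rank_via_minors(A, tol=1e-10):
    # Historical definition: the size of the largest nonzero minor
    m, n = A.shape
    for k in range(min(m, n), 0, -1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                if abs(np.linalg.det(A[np.ix_(rows, cols)])) > tol:
                    return k
    return 0

A = np.array([[1., 2., 3.], [2., 4., 6.], [1., 1., 1.]])  # rank 2 example
assert rank_via_minors(A) == np.linalg.matrix_rank(A) == 2
```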
b) the concept of linear transformations and of duality, arising from geometry: the map $X=(x,y,t)^T\mapsto U=MX=(u,v,w)^T$ between points $(x,y)$ and straight lines with equations $ux+vy+w=0$. More precisely, the tangential description, i.e., the constraint on the coefficients $U^T=(u,v,w)$ of the tangent lines to the conic, was recognized as associated with $M^{-1}$ ($M$ being symmetric, and assuming $\det(M) \neq 0$!), due to the relationship
$$X^TMX=X^TMM^{-1}MX=(MX)^T M^{-1}(MX)=U^TM^{-1}U=\begin{pmatrix}u&v&w\end{pmatrix}\begin{pmatrix}A & B & D \\ B & C & E \\ D & E & F \end{pmatrix}\begin{pmatrix}u \\ v \\ w \end{pmatrix}=0$$
(where $A,\dots,F$ denote the entries of $M^{-1}$)
whereas, in the 19th century, it was usual to write the previous quadratic form as the determinant of the matrix $M$ "bordered" precisely by $U$:
$$\det \begin{pmatrix}M&U\\U^T&0\end{pmatrix}=\begin{vmatrix}a&b&d&u\\b&c&e&v\\d&e&f&w\\u&v&w&0\end{vmatrix}=0,$$
where $a,\dots,f$ are the entries of $M$ itself; expanding this bordered determinant yields $-\det(M)\,U^TM^{-1}U$, so the two equations agree whenever $M$ is invertible
(see the excellent lecture notes: http://www.maths.gla.ac.uk/wws/cabripages/conics/conics0.html). It should be said that the idea of linear transformations, especially orthogonal transformations, arose even earlier, in the framework of number theory (representations by quadratic forms).
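Bordered determinants of this kind can be checked symbolically against the Schur-complement identity $\det\begin{pmatrix}M&U\\U^T&0\end{pmatrix} = -U^T\,\mathrm{adj}(M)\,U$, where $\mathrm{adj}(M)=\det(M)\,M^{-1}$ is the adjugate; a sympy sketch, with the classical symmetric parametrization of $M$:

```python
import sympy as sp

a, b, c, d, e, f, u, v, w = sp.symbols('a b c d e f u v w')
M = sp.Matrix([[a, b, d], [b, c, e], [d, e, f]])  # matrix of the conic
U = sp.Matrix([u, v, w])                          # line coefficients

# Border M by U, as in the 19th-century notation, and take the determinant
bordered = M.row_join(U).col_join(U.T.row_join(sp.zeros(1, 1)))
lhs = bordered.det()
# adj(M) = det(M) * M^{-1}, so this is -det(M) * U^T M^{-1} U, kept polynomial
rhs = -(U.T * M.adjugate() * U)[0, 0]
assert sp.expand(lhs - rhs) == 0
```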
Remark: the identities above are written using matrix notations and rules that were unknown in the 19th century, with the notable exception of Grassmann's "Ausdehnungslehre", whose ideas were too far ahead of their time (1844) to have a real influence.
c) the concept of eigenvector/eigenvalue, initially motivated by the determination of "principal axes" of conics and quadrics.
d) The concept of the "companion matrix" of a polynomial $P$, which could be considered a mere tool but is more fundamental than that (https://en.wikipedia.org/wiki/Companion_matrix). It can be presented and "justified" as a "nice determinant"; in fact, it has much more to say, with a natural interpretation, for example, in the framework of $\mathbb{F}_p[X]$ (polynomials with coefficients in a finite field): the companion matrix of $P$ is the matrix of multiplication by $X$ in the quotient ring $\mathbb{F}_p[X]/(P)$ (https://glassnotes.github.io/OliviaDiMatteo_FiniteFieldsPrimer.pdf), giving rise to matrix representations of finite fields. Another remarkable application of companion matrices: the main numerical method for obtaining the roots of a polynomial is to compute the eigenvalues of its companion matrix using a Francis "QR" iteration (see https://math.stackexchange.com/q/68433).
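The root-finding application is easy to sketch: build the companion matrix and ask for its eigenvalues (numpy's own `roots` works essentially this way; the cubic below, with roots 1, 2, 3, is just an illustrative example):

```python
import numpy as np

# Monic polynomial p(x) = x^3 - 6x^2 + 11x - 6, with roots 1, 2, 3
coeffs = [1, -6, 11, -6]             # leading coefficient first
n = len(coeffs) - 1
C = np.zeros((n, n))
C[1:, :-1] = np.eye(n - 1)           # ones on the subdiagonal
C[:, -1] = -np.array(coeffs[:0:-1])  # last column: -a_0, -a_1, ..., -a_{n-1}
roots = np.sort(np.linalg.eigvals(C).real)
assert np.allclose(roots, [1.0, 2.0, 3.0])
```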
References:
I recently discovered a rather similar question with a very complete answer by Denis Serre, a specialist in the domain of matrices: https://mathoverflow.net/q/35988/88984
The article by Thomas Hawkins : "Cauchy and the spectral theory of matrices", Historia Mathematica 2, 1975, 1-29.
See also: http://www.mathunion.org/ICM/ICM1974.2/Main/icm1974.2.0561.0570.ocr.pdf
An important bibliography is to be found at http://www-groups.dcs.st-and.ac.uk/history/HistTopics/References/Matrices_and_determinants.html
See also a good paper by Nicholas Higham: http://eprints.ma.man.ac.uk/954/01/cay_syl_07.pdf
For conic sections and projective geometry, see this excellent chapter of lecture notes (see the other chapters as well): https://www-m10.ma.tum.de/foswiki/pub/Lehre/WS0809/GeometrieKalkueleWS0809/ch10.pdf, as well as http://www.maths.gla.ac.uk/wws/cabripages/conics/conics0.html
Don't miss the following very interesting paper about various kinds of useful determinants : https://arxiv.org/pdf/math/9902004.pdf
Very interesting details about determinants can be found in this text and in these answers.
A fundamental work on "The Theory of Determinants" in 4 volumes was written by Thomas Muir: http://igm.univ-mlv.fr/~al/Classiques/Muir/History_5/VOLUME5_TEXT.PDF (years 1906, 1911, 1922, 1923) for the last volumes, or, for all of them, https://ia800201.us.archive.org/17/items/theoryofdetermin01muiruoft/theoryofdetermin01muiruoft.pdf. It is very interesting to pick random pages and see how important the determinant mania was, especially in the second half of the 19th century. Matrices appear in some places, with the double-bar convention that lasted a very long time; they are mentioned here and there, rarely to their advantage...
Many historical details about determinants and matrices can be found here.