One of my favourites is the odd-town puzzle.
A town with $n$ inhabitants has $m$ clubs such that
- Each club has an odd number of members
- Any two clubs have an even number of common members (zero included)
Show that $m \le n$.
It becomes easy once you treat each club as a 0–1 membership vector in $\mathbb{F}_{2}^{n}$. The conditions say that $v_i \cdot v_i = 1$ (odd size) and $v_i \cdot v_j = 0$ for $i \neq j$ (even intersections), which forces the vectors to be linearly independent over $\mathbb{F}_{2}$, so $m \le n$.
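As a sanity check, here is a small sketch of the argument in code. The town, clubs, and memberships below are made-up examples, and `rank_mod2` is a hand-rolled Gaussian elimination over $\mathbb{F}_{2}$:

```python
import itertools

def rank_mod2(rows):
    """Gaussian elimination over F_2; returns the rank of the row set."""
    rows = [r[:] for r in rows]
    rank, n = 0, len(rows[0])
    for col in range(n):
        # Find a pivot row with a 1 in this column.
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

# n = 5 inhabitants, m = 3 clubs as 0/1 membership vectors.
clubs = [
    [1, 1, 1, 0, 0],  # 3 members (odd)
    [0, 1, 1, 1, 0],  # 3 members (odd)
    [0, 0, 0, 0, 1],  # 1 member (odd)
]

# Check the hypotheses: odd club sizes, even pairwise intersections.
assert all(sum(c) % 2 == 1 for c in clubs)
assert all(sum(a * b for a, b in zip(c1, c2)) % 2 == 0
           for c1, c2 in itertools.combinations(clubs, 2))

# The conditions force linear independence over F_2, so rank == m <= n.
print(rank_mod2(clubs))  # 3
```

The point is that full rank is not an accident of this example: the dot-product conditions make any dependence relation collapse.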
Personally, I feel that intuition isn't something which is easily explained. Intuition in mathematics is synonymous with experience and you gain intuition by working numerous examples. With my disclaimer out of the way, let me try to present a very informal way of looking at eigenvalues and eigenvectors.
First, let us forget about principal component analysis for a little bit and ask ourselves exactly what eigenvectors and eigenvalues are. A typical introduction to spectral theory presents eigenvectors as vectors which are fixed in direction under a given linear transformation. The scaling factor of these eigenvectors is then called the eigenvalue. Under such a definition, I imagine that many students regard this as a minor curiosity, convince themselves that it must be a useful concept and then move on. It is not immediately clear, at least to me, why this should serve as such a central subject in linear algebra.
Eigenpairs are a lot like the roots of a polynomial. It is difficult to describe why the concept of a root is useful, not because there are few applications but because there are too many. If you tell me all the roots of a polynomial, then mentally I have an image of how the polynomial must look. For example, all monic cubics with three real roots look more or less the same. So one of the most central facts about the roots of a polynomial is that they ground the polynomial. A root literally roots the polynomial, limiting its shape.
Eigenvectors are much the same. If you have a line or plane which is invariant, then there is only so much you can do to the surrounding space without breaking that constraint. So in a sense eigenvectors are not important because they themselves are fixed, but rather because they limit the behavior of the linear transformation. Each eigenvector is like a skewer which helps to hold the linear transformation in place.
Very (very, very) roughly then, the eigenvalues of a linear mapping are a measure of the distortion induced by the transformation, and the eigenvectors tell you how that distortion is oriented. It is precisely this rough picture which makes PCA very useful.
Suppose you have a set of data which is distributed as an ellipsoid oriented in $3$-space. If this ellipsoid is very flat in some direction, then in a sense we can recover much of the information that we want even if we ignore the thickness of the ellipsoid. This is what PCA aims to do. The eigenvectors tell you how the ellipsoid is oriented and the eigenvalues tell you where the ellipsoid is distorted (where it's flat). If you choose to ignore the "thickness" of the ellipsoid then you are effectively discarding the eigenvector in that direction; you are projecting the ellipsoid onto the most informative plane to view it from. To quote wiki:
> PCA can supply the user with a lower-dimensional picture, a "shadow" of this object when viewed from its (in some sense) most informative viewpoint
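The flat-ellipsoid picture can be sketched numerically. Here is a minimal PCA illustration (the data, seed, and axis scales are made up for the example): a cloud that is long in one direction, medium in another, and very flat in a third, whose principal axes we recover from the eigendecomposition of the covariance matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ellipsoid-shaped cloud: long, medium, and very flat axes, then rotated
# so the principal axes are not aligned with the coordinate frame.
scales = np.array([5.0, 2.0, 0.1])
points = rng.standard_normal((1000, 3)) * scales
theta = np.pi / 6
rotation = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                     [np.sin(theta),  np.cos(theta), 0.0],
                     [0.0,            0.0,           1.0]])
data = points @ rotation.T

# Eigenvalues of the covariance measure spread along each principal axis.
cov = np.cov(data, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)  # ascending order

# The smallest eigenvalue marks the "flat" direction; dropping it projects
# the cloud onto its most informative 2-D viewpoint -- the "shadow".
top2 = eigvecs[:, -2:]
projected = data @ top2
print(np.sqrt(eigvals))  # roughly [0.1, 2.0, 5.0], recovering the scales
```

The square roots of the eigenvalues recover the axis lengths of the ellipsoid, and the projection throws away only the direction in which the cloud was nearly flat.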
Best Answer
I am surprised both by the approach of your textbook (you don't need determinants to introduce the distinction between singular and non-singular matrices, nor to solve linear systems), and by the fact that you qualify this approach as abstract. I would qualify a don't-ask-questions-just-compute attitude as concrete rather than abstract. Maybe you use "abstract" to mean "hard to grasp", but it is not the same thing; for me, often the things hardest to grasp are complicated but very concrete systems (in biochemistry, for instance). In mathematics (and elsewhere, I suppose) it is often asking conceptual questions that leads to abstraction, and I sense that what you would like is a more conceptual, and therefore more abstract, approach.
But abstraction is present in many fields of mathematics, like linear algebra, for a more important reason as well, namely for the sake of economy and generality. Linear algebra arose as a set of common techniques that apply to problems in very diverse areas of mathematics, and only by an abstract formulation can one express them in such a way that they can be applied wherever needed, without having to reformulate them in each concrete situation. It would be motivating to have seen at least one such concrete application area before entering the abstraction of the subject, and I think that would be a sound approach. However, this would involve introducing many details that in the end are independent of the methods of linear algebra, and I guess there is often just not the time to go into such preparations.
So, to answer your questions:
Linear algebra is an abstract subject, so it should not surprise you that freshmen feel it is so. But it is not abstract because of determinants, which are just a concrete tool that allows certain things to be expressed more explicitly than without them. Saying a linear map is invertible is a more abstract formulation than saying $\det A\neq0$, where $A$ is a matrix of that linear map in some basis.
Yes, geometric insight helps in understanding linear algebra, and you should have some geometric intuition for notions such as subspaces, span, kernels, images, and eigenvalues. But determinants are somewhat different; while you certainly should have some geometric intuition for determinants in terms of volume when doing calculus, there is not much to gain from this in purely algebraic situations, and in fact I know of no geometric interpretation at all of the determinant of a complex matrix, or of the determinant that defines the characteristic polynomial.
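For the real case, the volume intuition is easy to check numerically. A small sketch (the matrix below is an arbitrary example): the unit square maps to a parallelogram whose signed area equals $\det A$.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.5, 3.0]])

# Images of the unit square's edge vectors e1 and e2 are the columns of A.
u, v = A[:, 0], A[:, 1]

# Signed area of the parallelogram spanned by u and v (2-D cross product).
area = u[0] * v[1] - u[1] * v[0]
print(area, np.linalg.det(A))  # both 5.5
```

The same identity holds in $n$ dimensions with volumes of parallelepipeds, which is exactly the intuition that goes missing for complex matrices.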
To understand linear algebra better, you should try to go beyond concrete computational questions and try to obtain a more conceptual understanding of what is being done.
As for the mysteries of determinants, you may want to have a deeper understanding than just that they exist and magically solve certain problems (like determining which square matrices are invertible). For that I would refer to this question.