In my opinion, what you should stress in a course on linear algebra depends more on what the particular students in your class want and/or need, and less on what you can "see beyond the course." However, since you asked this on MathOverflow, you are presumably asking for some insight into how professional mathematicians think about linear algebra, so I will try to address that question.
I would say that one of the main hallmarks of those who have truly mastered linear algebra is that they can see how linear algebra applies in situations where the less well-trained cannot. They are able to detect the presence of the "abstract structure" of linear algebra lying beneath the surface, even when it is not immediately evident from the statement of the problem.
Here are some examples.
Sound waves can be decomposed into a weighted sum of pure tones. "Weighted sum" signals "linear algebra" to the cognoscenti. It doesn't matter that what you're adding together are functions and not finite sequences of numbers, and it doesn't matter that there are infinitely many possible pure tones. What matters is that you can take weighted sums, and that there is a precise sense in which different pure tones are "orthogonal" to each other. That means that linear algebra is applicable, and the concepts of eigenvalues and eigenvectors (or eigenfunctions) are applicable.
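As a minimal numerical sketch of this point (the signal and frequencies below are made up for illustration): distinct pure tones have inner product zero, so each weight in the sum can be read off by projection, exactly as one reads off a coordinate in an orthogonal basis.

```python
import numpy as np

# A signal built as a weighted sum of two pure tones (weights 3.0 and 0.5).
t = np.linspace(0.0, 1.0, 1000, endpoint=False)  # one second, 1000 samples
signal = 3.0 * np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)

# Distinct pure tones are orthogonal: their inner product over a full
# period is zero.  So each weight is recovered by projecting the signal
# onto the corresponding tone.
def weight(signal, freq):
    tone = np.sin(2 * np.pi * freq * t)
    return np.dot(signal, tone) / np.dot(tone, tone)

print(weight(signal, 5))   # close to 3.0
print(weight(signal, 12))  # close to 0.5
print(weight(signal, 7))   # close to 0.0 -- this tone is absent
```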
The Netflix Prize competition asked for an algorithm to recommend new movies based on your ratings of movies you've already seen. Where's the linear algebra? Start by writing down a large matrix with rows representing people, columns representing movies, and entries representing ratings. Experienced mathematicians know that the largest singular values of this matrix, together with the corresponding singular vectors, capture most of the relevant information in it, and provide a good start to constructing the desired algorithm.
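Here is a toy sketch of that idea (the dimensions and noise level are made up): a ratings matrix generated by a few hidden "taste factors" plus noise is well approximated by keeping only its largest singular values.

```python
import numpy as np

# A synthetic ratings matrix: rows are people, columns are movies.
# It is (low-rank structure) + noise, mimicking the idea that a few
# hidden taste factors explain most ratings.
rng = np.random.default_rng(0)
tastes = rng.normal(size=(20, 2))       # each person: 2 hidden taste factors
qualities = rng.normal(size=(2, 15))    # each movie: score on each factor
ratings = tastes @ qualities + 0.1 * rng.normal(size=(20, 15))

# Keep only the two largest singular values: this gives the best rank-2
# approximation (Eckart--Young), which recovers most of the structure.
U, s, Vt = np.linalg.svd(ratings, full_matrices=False)
rank2 = U[:, :2] * s[:2] @ Vt[:2, :]

err = np.linalg.norm(ratings - rank2) / np.linalg.norm(ratings)
print(f"relative error of rank-2 approximation: {err:.3f}")  # small
```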
An old Putnam problem asked whether two matrices $A$ and $B$ with the property that $ABAB=0$ must also satisfy $BABA=0$. The obvious approach is to start playing around with examples, and there's nothing wrong with that. However, a more insightful approach is to build an abstract vector space with the basis $e_\emptyset, e_A, e_{BA}, e_{ABA}, e_{BABA}$ and define the linear transformations $Ae_S := e_{AS}$ and $Be_S := e_{BS}$, where $S$ is any string of $A$'s and $B$'s, $AS$ and $BS$ denote concatenation, and $e_{S} = 0$ if $S$ is not one of the strings $\emptyset$, $A$, $BA$, $ABA$, $BABA$. This is admittedly a very clever proof and even professional mathematicians might not think of it right away, but this example underlines the power of understanding that anything can be used as the basis of a vector space, even strings of symbols.
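The abstract construction can be checked concretely: realizing $A$ and $B$ as $5 \times 5$ matrices in the basis above, one verifies that $ABAB = 0$ while $BABA \neq 0$.

```python
import numpy as np

# Order the basis vectors e_empty, e_A, e_BA, e_ABA, e_BABA as 0..4.
# A maps e_empty -> e_A and e_BA -> e_ABA; everything else to 0.
# B maps e_A -> e_BA and e_ABA -> e_BABA; everything else to 0.
A = np.zeros((5, 5), dtype=int)
B = np.zeros((5, 5), dtype=int)
A[1, 0] = 1  # A e_empty = e_A
A[3, 2] = 1  # A e_BA    = e_ABA
B[2, 1] = 1  # B e_A     = e_BA
B[4, 3] = 1  # B e_ABA   = e_BABA

ABAB = A @ B @ A @ B
BABA = B @ A @ B @ A
print(np.all(ABAB == 0))   # True:  ABAB = 0
print(np.all(BABA == 0))   # False: BABA e_empty = e_BABA != 0
```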
Suppose you have a large system of polynomial equations in $x$, $y$, and $z$, containing equations such as $xyz + 4x^2y - z^3 + 7 = 0$ and $y^2z^2 - xyz + 3 = 0$ and many others. At first glance we might think that linear algebra does not help here because we have variables multiplied together, and multiplication is nonlinear. However, if we have enough equations, and if the same terms appear often enough (in this example, $xyz$ appears in both equations), then we might be able to solve the system using linear algebra: treat each term as a separate variable and think of the system as a giant system of linear equations in a much larger number of variables. This might seem like a hopelessly optimistic approach, but in fact it is the basis for a general technique for solving systems of polynomial equations. Again, my point is that with a practiced eye, you can learn to see an entity such as $xyz$ not only as a product of three variables, but as a basis vector in a very large vector space.
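A tiny made-up example of this "treat each monomial as a variable" trick (not the system above, which has too few equations to determine its monomials): three equations in $x$ and $y$ involving only the monomials $xy$, $x$, and $y$ become a $3 \times 3$ linear system in $u = xy$, $v = x$, $w = y$.

```python
import numpy as np

# The toy system (coefficients invented so that x = 2, y = 3 works):
#      u +  v +  w = 11
#      u -  v + 2w = 10
#     2u +  v -  w = 11
# where u = xy, v = x, w = y are treated as independent unknowns.
M = np.array([[1.0, 1.0, 1.0],
              [1.0, -1.0, 2.0],
              [2.0, 1.0, -1.0]])
rhs = np.array([11.0, 10.0, 11.0])
u, v, w = np.linalg.solve(M, rhs)

print(v, w)                  # recovers x and y
print(np.isclose(u, v * w))  # the nonlinear relation u = xy checks out
```

In general the linear solution need not respect the relations among the monomials, but with enough equations it often does, and that observation underlies the general technique mentioned above.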
These examples may not translate directly into useful material for your teaching. However, I do believe that they give a good taste of how mathematicians think about linear algebra. They have internalized what "linear structure" means in the abstract and are able to detect it everywhere, to their advantage. Ideally, one would like to train students to think the same way. Of course, that may be more easily said than done.
This is false. Let $c$ be a quadratic nonresidue modulo $p$. Our matrix will be $(p^2-1) \times (p^2-1)$, with rows and columns indexed by pairs $(x,y) \in \mathbb{F}_p^2 \setminus \{ (0,0) \}$.
Our matrix is defined by
$$A_{(x_1,y_1) \ (x_2, y_2)} = x_1 x_2 - c y_1 y_2.$$
This is obviously symmetric. Since $c$ is a nonresidue, we have $x^2-cy^2 \neq 0$ for $(x,y) \in \mathbb{F}_p^2 \setminus \{ (0,0) \}$, so the diagonal entries are nonzero.
Each column of this matrix is a linear function of the row index $(x_1,y_1)$. So every vector in the image of this matrix is the restriction of a linear function $\mathbb{F}_p^2 \longrightarrow \mathbb{F}_p$; such a function vanishes on a line through the origin, and hence takes the value $0$ somewhere on $\mathbb{F}_p^2 \setminus \{ (0,0) \}$.
We could make a smaller $(p+1) \times (p+1)$ example by just taking one point $(x,y)$ on each line through $0$ in $\mathbb{F}_p^2$.
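The $(p+1) \times (p+1)$ construction can be verified exhaustively for a small prime; here is a sketch for $p = 3$, $c = 2$, checking symmetry, the nonzero diagonal, and that every vector in the image has a zero coordinate.

```python
from itertools import product

p, c = 3, 2   # c = 2 is a quadratic nonresidue mod 3 (the squares are {0, 1})

# One representative point on each of the p+1 lines through 0 in F_p^2.
points = [(1, 0), (0, 1), (1, 1), (1, 2)]
n = len(points)   # p + 1 = 4

# The matrix A_{(x1,y1),(x2,y2)} = x1*x2 - c*y1*y2 mod p.
A = [[(x1 * x2 - c * y1 * y2) % p for (x2, y2) in points]
     for (x1, y1) in points]

# Symmetric, with nonzero diagonal:
assert all(A[i][j] == A[j][i] for i in range(n) for j in range(n))
assert all(A[i][i] != 0 for i in range(n))

# Every vector in the column span over F_p has a zero coordinate:
cols = list(zip(*A))
for coeffs in product(range(p), repeat=n):
    image_vec = [sum(k * col[i] for k, col in zip(coeffs, cols)) % p
                 for i in range(n)]
    assert 0 in image_vec
print("verified for p =", p)
```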
Moreover, I claim that $(p+1) \times (p+1)$ is optimal. In other words, if $A$ is an $n \times n$ matrix with $n \leq p$ and nonzero entries on the diagonal, then some vector in the image of $A$ has all coordinates nonzero. Interestingly, I don't need the symmetry hypothesis.
Let $W$ be the image of $A$. Note that $W$ is not contained in any of the coordinate hyperplanes: the $i$-th column of $A$ lies in $W$ and has nonzero $i$-th entry.
Let $\vec{u}= (u_1,\ u_2, \ \ldots,\ u_n)$, among all elements of $W$, have the fewest $0$ entries. Suppose for the sake of contradiction that some $u_i$ is $0$. Then there is some other $\vec{v}$ in $W$ with $v_i \neq 0$. Consider the points $(u_j : v_j)$ in $\mathbb{P}^1(\mathbb{F}_p)$ as $j$ ranges over all indices where $(u_j, v_j) \neq (0,0)$. There are fewer than $p+1$ such $j$, so some point of $\mathbb{P}^1(\mathbb{F}_p)$ is not hit; call it $(a:b)$. Then $-b \vec{u} + a \vec{v}$ is in $W$, its entries vanish only where $(u_j, v_j) = (0,0)$, and its $i$-th entry $a v_i$ is nonzero, so it has fewer zero entries than $\vec{u}$, a contradiction.
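The proof is constructive, and the procedure it describes can be run directly. The sketch below (with an arbitrary made-up matrix over $\mathbb{F}_5$, not symmetric, since symmetry isn't needed) repeatedly replaces an image vector having a zero entry by a combination with strictly fewer zeros.

```python
# Find a vector in the image of A with all coordinates nonzero, over F_p.
p = 5
A = [[1, 0, 2],
     [3, 4, 0],
     [0, 2, 1]]
n = len(A)   # n = 3 <= p, so such a vector must exist by the argument above

def proj_point(x, y):
    """Normalize (x, y) != (0, 0) to a canonical representative of (x:y)."""
    if x % p != 0:
        inv = pow(x, p - 2, p)          # x^{-1} mod p, by Fermat
        return (1, (y * inv) % p)
    return (0, 1)

cols = [[A[i][j] for i in range(n)] for j in range(n)]
u = cols[0][:]                           # some vector in the image W
while 0 in u:
    i = u.index(0)
    v = cols[i]                          # i-th column: in W, with v[i] != 0
    hit = {proj_point(u[j], v[j]) for j in range(n)
           if (u[j] % p, v[j] % p) != (0, 0)}
    all_pts = [(1, t) for t in range(p)] + [(0, 1)]
    a, b = next(pt for pt in all_pts if pt not in hit)  # exists since n <= p
    u = [(-b * u[j] + a * v[j]) % p for j in range(n)]

print(u)  # a vector in the image of A with every coordinate nonzero
```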
Google will find V. Prasolov's Problems and Theorems in Linear Algebra for you; it has many beautiful problems, ranging from moderate to quite hard.