Your first question is hard to answer without going through the others, so I'll come back to it.
Your second question:
The "normal" vectors are vectors perpendicular to the lines in question. These are often used because, in an N-dimensional vector space, an equation of the form $\sum_i^N c_i x_i = d$ describes an N-1 dimensional object--in 3d, it describes a plane; in 4d, it describes a 3d hypersurface, and so on. Normal vectors--the vectors orthogonal to these objects--generalize well to any case in arbitrary dimensions.
In essence, when you add the equations of two lines (talking about the 2d case), you get another line, whose normal vector is simply the sum of the two original normal vectors.
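As a small sketch of this (the specific lines here are my own made-up example, using numpy): two lines in normal form $\mathbf{n} \cdot (x, y) = d$, whose summed equation is another line with normal $\mathbf{n}_1 + \mathbf{n}_2$, passing through their common point.

```python
import numpy as np

# Two lines in the plane in normal form  n . (x, y) = d.
n1, d1 = np.array([1.0, 2.0]), 4.0   # x + 2y = 4
n2, d2 = np.array([3.0, -1.0]), 5.0  # 3x - y = 5

# Adding the two equations yields another linear equation --
# another line -- whose normal is simply n1 + n2.
n3, d3 = n1 + n2, d1 + d2            # 4x + y = 9

# The intersection of the first two lines also lies on the new line.
p = np.linalg.solve(np.vstack([n1, n2]), np.array([d1, d2]))
print(p)                        # the intersection point (2, 1)
print(np.isclose(n3 @ p, d3))   # True: p satisfies the summed equation
```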
Otherwise, I think your understanding is correct.
For your third question: As we've seen, it's not possible to characterize an arbitrary line from its normal vector alone. Rather, you need a third degree of freedom to capture the line's offset from the origin. This is the motivation for homogeneous coordinates and projective geometry, which I'll touch on in just a moment.
For your fourth question: You use an "alternating bilinear form" $\Psi$. This is a natural time to talk about exterior algebra and wedge products. Define the wedge product to be antisymmetric: $e_1 \wedge e_2 = - e_2 \wedge e_1$ (this holds for any two vectors, not just orthogonal ones). In two dimensions, a linear operator $\underline T$ obeys $\underline T(e_1) \wedge \underline T(e_2) = \lambda\, e_1 \wedge e_2$, where $\lambda = \det \underline T$ (the left-hand side is also taken as the definition of $\underline T(e_1 \wedge e_2)$, which generalizes to higher dimensions). Anyway, the notation is different, but we're talking about the same math.
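A quick numerical sketch of that identity (the matrix $\underline T$ below is an arbitrary example of my own): in 2d a bivector has a single component, the coefficient of $e_1 \wedge e_2$, so the scalar $\lambda$ can be computed directly and compared with the determinant.

```python
import numpy as np

def wedge2(u, v):
    # In 2d a bivector has one component: the coefficient of e1 ^ e2.
    # Antisymmetric by construction: wedge2(u, v) == -wedge2(v, u).
    return u[0] * v[1] - u[1] * v[0]

T = np.array([[2.0, 1.0],
              [0.5, 3.0]])
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# T(e1) ^ T(e2) = lambda * (e1 ^ e2), and lambda is det(T).
lam = wedge2(T @ e1, T @ e2)
print(lam, np.linalg.det(T))   # both 5.5
```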
One way to interpret Cramer's rule is through the projective geometry I spoke of. As I said, lines with an arbitrary offset from the origin require 3 degrees of freedom to describe: a point that the line goes through (two degrees) and a direction (one degree). The natural way to work with this is in a three-dimensional space. In $ax + by = c$, let $c$ go along the third axis; then the vector $a e_1 + b e_2 + c e_3$ is normal to a plane through the origin. Take another such vector, $u e_1 + v e_2 + w e_3$, and form the cross product.
$$(a e_1 + b e_2 + c e_3) \times (u e_1 + v e_2 + w e_3) = (bw - cv) e_1 + (cu - aw) e_2 + (av - bu) e_3$$
Note that this is a homogeneous representation, so the result may be rescaled by any nonzero factor. Dividing through by the coefficient of $e_3$ generates exactly the terms of Cramer's rule (up to an overall sign, which is immaterial for a homogeneous representative), and the geometric interpretation is simple: we have found the common line of the two planes (planes which, in this space, represent lines in the original space), whose direction must of course be perpendicular to both planes' normal vectors. Where the common line in the projective space intersects the projective plane is the ordinary point of intersection.
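Here is a minimal numerical sketch of that construction. I use the sign convention that the line $ax + by = c$ is encoded as the homogeneous vector $(a, b, -c)$, so that dividing the cross product by its $e_3$ coefficient gives the intersection point directly; the example lines are my own.

```python
import numpy as np

def intersect(l1, l2):
    # Lines a*x + b*y = c encoded homogeneously as (a, b, -c).
    # With this convention, the cross product of two line vectors is
    # the homogeneous intersection point; divide by the e3 coefficient.
    p = np.cross(l1, l2)
    return p[:2] / p[2]

l1 = np.array([1.0, 1.0, -2.0])   # x + y = 2
l2 = np.array([1.0, -1.0, 0.0])   # x - y = 0
print(intersect(l1, l2))          # [1. 1.]

# Cramer's rule produces the same point.
a, b, c = 1.0, 1.0, 2.0
u, v, w = 1.0, -1.0, 0.0
D = a * v - b * u
print((c * v - b * w) / D, (a * w - c * u) / D)   # 1.0 1.0
```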
There is a way to extend this to arbitrary dimensions (where the cross product doesn't exist), and you're on the right track in talking about alternating forms.
I'm afraid I'm not really able to follow from this point (or even to answer your original question), but I think I can probe at your difficulty as follows: the identification of a system of linear equations with a linear map is, indeed, quite arbitrary--you can order the equations against the components however you like, and add and subtract equations at will. It doesn't have a neat geometric interpretation, as far as I can see. Homogeneous coordinates and projective geometry, on the other hand, admit very clean geometric interpretations, something you can understand intuitively (or at least that I can).
I won't call this a complete answer, but perhaps delving into projective geometry and homogeneous coordinates will give you further insight into the problem of finding the intersections between lines (or the common lines between planes, etc.). It's the only method I use anymore. In particular, I highly recommend the geometric (Clifford) algebra approach to this material. Doran and Lasenby or Dorst, Fontijne, and Mann give excellent descriptions of projective geometry (and conformal geometry) using that formalism.
Is the point of math and math classes to learn the big-picture concepts of how to apply mathematical tools or is the point to learn the details to the ground level?
Both. One is difficult without the other. How are you going to solve equations that Maple can't solve? How are you going to solve them--exactly, or numerically? What's the best way to solve something numerically? How can you simplify the problem to get an approximate answer? How are you going to interpret Maple's output, and diagnose any issues with its solution? How can you simplify the answer it gives you? What if you are only interested in the problem for a particular set of values/parameters/in a particular range? What happens if a parameter is small? How many solutions are there? Does a solution even exist?
Using a CAS without knowing the background maths behind the problems you're trying to solve is like punching the buttons on a calculator without knowing what numbers are, what the operations mean or what the order of operations might be.
By asking "What is important about homogeneous equations?" You come pretty close to asking "Why is linear algebra important?" Most every problem in linear algebra no matter how abstract at some point boils down to solving linear systems.
In general a linear system can be written in the form $A{\bf x} = {\bf b}$. If you take any two solutions ${\bf x}_1$ and ${\bf x}_2$, then ${\bf x}_1-{\bf x}_2$ is a solution of the corresponding homogeneous system $A{\bf x}={\bf 0}$. This in turn implies that if you find one solution ${\bf x}_p$ (a particular solution) of $A{\bf x} = {\bf b}$ and then find the general solution of the corresponding homogeneous system $A{\bf x}={\bf 0}$, say ${\bf x}_h$, then ${\bf x}={\bf x}_p+{\bf x}_h$ is the general solution of $A{\bf x}={\bf b}$. So in some sense the homogeneous solutions account for all of the redundant solutions of $A{\bf x}={\bf b}$ once you've found a particular solution.
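This structure is easy to see numerically. Below is a small sketch (the matrix is a made-up underdetermined example): a particular solution from least squares, a homogeneous solution from the null space, and a check that ${\bf x}_p + t\,{\bf x}_h$ solves $A{\bf x}={\bf b}$ for any $t$.

```python
import numpy as np

# An underdetermined system (rank 2, three unknowns): infinitely many solutions.
A = np.array([[1.0, 2.0, 1.0],
              [2.0, 0.0, 1.0]])
b = np.array([4.0, 3.0])

# A particular solution (least squares picks one of the many).
x_p, *_ = np.linalg.lstsq(A, b, rcond=None)

# A homogeneous solution A x = 0, read off from the SVD: right singular
# vectors past the rank span the null space.
_, s, Vt = np.linalg.svd(A)
x_h = Vt[2]                       # here A has rank 2, nullity 1

# x_p + t * x_h solves A x = b for every scalar t.
for t in (-1.0, 0.0, 2.5):
    print(np.allclose(A @ (x_p + t * x_h), b))   # True each time
```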
If you have a linear transformation, say $T:V \to W$, then the kernel (or nullspace) of $T$ is the subspace $\mathrm{Ker}(T)=\{ v \in V \;|\; T(v)={\bf 0} \}$ (everything in $V$ that maps to the zero vector in $W$). If your linear transformation is $T(v)=Av$ for some matrix $A$, then the kernel of $T$ is nothing more than the null space of the matrix $A$. The range of $T$ is all of the vectors of $W$ that get mapped to: $\mathrm{Range}(T) = T(V) = \{ T(v) \;|\; v\in V\}$. Again if your transformation is $T(v)=Av$, then the range of $T$ is nothing more than the column space of the matrix $A$.
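To make the kernel/null-space and range/column-space correspondence concrete, here is a short sketch with sympy (the rank-2 matrix is my own example, not from the question):

```python
from sympy import Matrix

# T(v) = A v for a 3x3 matrix of rank 2.
A = Matrix([[1, 2, 3],
            [4, 5, 6],
            [7, 8, 9]])

# Ker(T) = null space of A: here spanned by the vector (1, -2, 1)^T.
print(A.nullspace())

# Range(T) = column space of A: here spanned by the first two columns.
print(A.columnspace())
```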
Again the kernel (a set of solutions of a homogeneous linear system) accounts for redundancies. If $T(v_1)=w=T(v_2)$ (i.e. $v_1$ and $v_2$ both map to the same output $w$), then $v_1-v_2 \in \mathrm{Ker}(T)$. So if $w \in T(V)$ (the range of $T$) and $v_p \in V$ is a vector such that $T(v_p)=w$, then $T(v_p+k)=w$ for any $k \in \mathrm{Ker}(T)$. Briefly, let $K=\mathrm{Ker}(T)$ and $v_p+K=\{v_p+k\;|\; k\in K\}$. Then $v_p+K$ is the set of all vectors which map to $w$.
In general, each element in the range of $T$ corresponds to a set of the form $v+K=v+\mathrm{Ker}(T)$ (these are called cosets of the kernel). So if we take $V$ and quotient out $K$ (whatever that means), denoted $V/K$, we are left with a collection of sets which correspond exactly (i.e., are isomorphic) to the range $T(V)$. This is written: $V/\mathrm{Ker}(T) \cong \mathrm{Range}(T)$. This result is known as the first isomorphism theorem. A quick consequence is that "rank plus nullity equals the dimension of the domain".
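That rank-nullity consequence can be verified directly; a minimal sketch with sympy, using an arbitrary rank-2 example matrix of my own:

```python
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [4, 5, 6],
            [7, 8, 9]])

rank = A.rank()                 # dim Range(T), here 2
nullity = len(A.nullspace())    # dim Ker(T), here 1

# First isomorphism theorem consequence: rank + nullity = dim of the domain.
print(rank + nullity == A.cols)   # True: 2 + 1 == 3
```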
I know that doesn't completely answer your question, but maybe it'll get you started.