The "precalculus" topics are probably not hard prerequisites to the "discrete mathematics" book -- it looks like one that starts from square one and doesn't assume much beyond elementary arithmetic. But most of them are part of what one would generally assume as "mathematical literacy" in science and engineering.
Therefore, more advanced computer science texts will often assume that you're already familiar with the notation and methods of computation presented in "precalculus" whenever they happen to be relevant to a CS problem. If your goal is just CS, you can probably get away with not remembering any proofs from the precalculus material, but many of the methods may be needed sporadically.
So sooner or later you will have to learn enough of it that you can skim that table of contents and think "I know all that, so I don't have to read the book". But there's no reason to insist on mastering it before you dig into the discrete mathematics.
The difference between countable and uncountable sets is well formalized and there is never any doubt. These are two different "sizes of infinity". You can read this page for more information on why countable is not the same as uncountable: http://en.wikipedia.org/wiki/Cantor%27s_diagonal_argument
But I think one intuition which is really helpful, and also linking this with computer science, is the fact that a countable set is a set whose elements are finitely describable.
For instance each integer can be written on a piece of paper, so the set of integers is countable. This makes integers manageable by computers: since you can completely describe an integer in a finite way, you can always pass it to a computer, as a finite sequence of $0$'s and $1$'s.
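To make this concrete, here is a small sketch (the helper names `to_bits` and `from_bits` are mine, not standard) of how any non-negative integer round-trips through a finite string of $0$'s and $1$'s:

```python
def to_bits(n: int) -> str:
    """Encode a non-negative integer as a finite string of 0's and 1's."""
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        bits.append(str(n & 1))  # peel off the lowest-order bit
        n >>= 1
    return "".join(reversed(bits))

def from_bits(s: str) -> int:
    """Decode the finite bit string back to the original integer."""
    return int(s, 2)

n = 42
encoded = to_bits(n)              # "101010" -- a finite description
assert from_bits(encoded) == n    # the description determines the integer
```

The key point is that the description is always finite, so it can be handed to a computer whole.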
This is also why the reals are not countable: writing down a real number may require all of its decimals, which is an infinite sequence. This makes "continuous mathematics" not well-suited for automatic treatment by computers.
Of course this is very schematic and can be further detailed, but this intuition is very important. It is possible to formally prove: "every element of $E$ contains a finite amount of information $\implies$ $E$ is countable".
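One can see the idea behind that proof computationally: if every element of $E$ has a finite description, then listing all finite descriptions in order of length enumerates $E$, which is exactly what "countable" means. A sketch (the generator name is mine):

```python
from itertools import count, islice, product

def all_finite_bitstrings():
    """Enumerate every finite 0/1 string exactly once, shortest first.
    Any set whose elements have finite descriptions is covered by
    some such listing -- that is what makes it countable."""
    yield ""
    for length in count(1):
        for bits in product("01", repeat=length):
            yield "".join(bits)

first_seven = list(islice(all_finite_bitstrings(), 7))
# ['', '0', '1', '00', '01', '10', '11']
```

Every finite string shows up at some finite position in this list, so the descriptions (and hence the described elements) can be put in correspondence with the natural numbers.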
Following this intuition, the rationals are countable, because a rational $r$ can be given by two integers $a,b$ with $r=a/b$. This does not prevent the rationals from being an important tool of analysis, because the reals can be approximated arbitrarily closely by rationals (we say $\mathbb Q$ is *dense* in $\mathbb R$). Indeed, most computer algorithms dealing with "arbitrary" numbers actually deal only with rational numbers.
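Python's standard library illustrates both halves of this remark: a rational is stored as a pair of integers, and density lets us approximate a real like $\pi$ as closely as we want. (Note that `math.pi` is itself a rational, being a finite-precision float, which reinforces the point.)

```python
from fractions import Fraction
import math

# Each approximation is just a pair of integers (numerator, denominator),
# yet by allowing larger denominators we get as close to pi as we like.
for max_den in (10, 100, 10000):
    q = Fraction(math.pi).limit_denominator(max_den)
    print(q, "error:", abs(float(q) - math.pi))
# 22/7      error: ~1.3e-3
# 311/99    error: ~1.8e-4
# 355/113   error: ~2.7e-7
```

`limit_denominator` returns the best rational approximation with denominator below the given bound, so the error shrinks as the bound grows.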
As for the classification of maths into "discrete" and "continuous", the frontiers are really not well-defined, and everything interacts with everything else, so it is almost impossible to give a sound definition. A big part of it is subjective. At best, you have a "flavour" in some fields that is mostly discrete (like graph theory) or continuous (analysis), but in both cases you might also need to consider the other side in order to get a good understanding (like using probability theory in graph theory).
Best Answer
I think a lot of it comes from the fact that discrete structures are easy to operate on in computing, and that computations themselves are usually modeled by discrete concepts.
For example, strictly speaking, formal language theory is a branch of discrete mathematics. It deals with finite structures (strings) drawn from a countably-infinite universe. Modelling a yes/no problem as a set of strings allows for a large degree of formal reasoning about computations, so knowing the math and theory behind it is useful for a computer scientist.
Likewise, lots of problems can be reduced to discrete problems. For example, say you want to write a program which schedules a large set of classes so that no student has two classes at the same time. This is in fact a graph theory problem, known as "graph coloring". By knowing the theory behind graph coloring, a programmer realizes first that the problem is NP-complete, so a fast general solution is unlikely, but can also use pre-defined solutions for special types of graphs (chordal, comparability, planar, etc.)
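The scheduling-as-coloring reduction can be sketched in a few lines: classes are vertices, an edge joins two classes that share a student, and colors are time slots. Greedy coloring always produces a valid (though not necessarily minimal) schedule; the class names and function below are made up for illustration.

```python
def greedy_coloring(conflicts):
    """conflicts: dict mapping each class to the set of classes it clashes with.
    Returns a dict mapping each class to a time-slot number, such that
    no two conflicting classes get the same slot."""
    slot = {}
    for cls in conflicts:
        # slots already used by neighbours that have been scheduled
        taken = {slot[n] for n in conflicts[cls] if n in slot}
        s = 0
        while s in taken:  # pick the smallest free slot
            s += 1
        slot[cls] = s
    return slot

conflicts = {
    "math":    {"physics", "cs"},
    "physics": {"math"},
    "cs":      {"math", "art"},
    "art":     {"cs"},
}
slots = greedy_coloring(conflicts)
# No two conflicting classes share a time slot:
assert all(slots[a] != slots[b] for a in conflicts for b in conflicts[a])
```

Finding a schedule with the *minimum* number of slots is the NP-complete part; the greedy heuristic above just gives some valid assignment quickly.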
Also, computer science and discrete math share a lot of ground. Both are heavily based on the concepts of recursion and induction.
Perhaps a more relevant question is why continuous mathematics isn't widely applied to computer science?