The unstated (or, rather, very vaguely stated) assumption in the problem is that the probabilities of observing a car during any given non-overlapping time intervals of equal length are equal and independent.
(Of course, this assumption can't really be true in practice, even if "observing a car" is taken to be a point event — for example, if the road has $n$ lanes and you observe a different car within each of $n$ consecutive 1 millisecond intervals, you're not going to observe another one within the next millisecond — but it can be a fairly good approximation if the intervals are of moderate length and the road not very busy.)
This assumption (almost; see the comments) implies that the arrival of cars follows a Poisson process. In particular, it implies that the probability of no cars arriving is the same for every 10 minute interval. Since the probability of no cars arriving within a 30 minute interval equals the product of the probabilities of no cars arriving in each of the three consecutive 10 minute intervals that make it up, the answer follows.
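As a quick sanity check of that product rule, here is a minimal Python sketch. The only assumption beyond the answer itself is the Poisson-process formula $\mathrm{Pr}[\text{no arrivals in } t \text{ minutes}] = e^{-\lambda t}$; the rate $\lambda$ is derived from the given $\mathrm{Pr}[\text{no car in 30 min}] = 0.05$.

```python
import math

# Under a Poisson process with rate lam (cars per minute), the probability
# of no arrivals in t minutes is exp(-lam * t).  Here lam is chosen so that
# the given Pr[no car in 30 min] = 0.05 holds exactly.
lam = -math.log(0.05) / 30          # exp(-30 * lam) == 0.05

p_none_10 = math.exp(-lam * 10)     # Pr[no cars in a 10 minute interval]
p_none_30 = math.exp(-lam * 30)     # Pr[no cars in a 30 minute interval]

# The 30 minute no-car probability is exactly the cube of the 10 minute one.
print(p_none_30, p_none_10 ** 3)
```

This confirms the key identity $e^{-30\lambda} = (e^{-10\lambda})^3$ used implicitly above.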
To be specific, let $A$, $B$ and $C$ denote the events "no cars are observed within the first / second / third 10 minutes" respectively. Then we have
$$ \mathrm{Pr}[A \text{ and } B \text{ and } C] = \mathrm{Pr}[A] \cdot \mathrm{Pr}[B \mid A] \cdot \mathrm{Pr}[C \mid A \text{ and } B].$$
Since the events $A$, $B$ and $C$ are independent by assumption, we get
$$ \mathrm{Pr}[A \text{ and } B \text{ and } C] = \mathrm{Pr}[A] \cdot \mathrm{Pr}[B] \cdot \mathrm{Pr}[C],$$
and, since by assumption $\mathrm{Pr}[A] = \mathrm{Pr}[B] = \mathrm{Pr}[C]$,
$$ \mathrm{Pr}[A \text{ and } B \text{ and } C] = \mathrm{Pr}[A]^3.$$
We know that $\mathrm{Pr}[A \text{ and } B \text{ and } C] = 0.05$, and we want to solve for $\mathrm{Pr}[A]$ (which, by assumption, equals the a priori probability of observing no cars within any given 10 minute interval), so we take the cube root of both sides and get
$$ \mathrm{Pr}[A] = \sqrt[3]{\mathrm{Pr}[A \text{ and } B \text{ and } C]} = \sqrt[3]{0.05} \approx 0.3684.$$
Subtract that from one to get $\mathrm{Pr}[\text{not } A] \approx 0.6316$.
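The final arithmetic can be reproduced in a couple of lines of Python, using only the numbers given in the answer above:

```python
# Pr[A and B and C] = 0.05 is the given probability of no car in 30 minutes.
p_abc = 0.05
p_a = p_abc ** (1 / 3)       # Pr[A]: no car in any given 10 minute interval
p_not_a = 1 - p_a            # Pr[not A]: at least one car in 10 minutes

print(round(p_a, 4), round(p_not_a, 4))
```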
The short answer is that Inferential Statistics simply cannot exist without probability.
Every major result in Inferential Statistics has a rigorous underpinning in Probability/measure theory.
The Laws of Large Numbers state seemingly obvious things, such as "the sample mean converges in probability (or almost surely) to the true population mean", but how on earth would you prove this without formal probability axioms?
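To make the law of large numbers concrete, here is a small simulation sketch. The choice of distribution (uniform on $[0,1]$, true mean $0.5$) and the sample size are arbitrary illustrative choices, not part of any particular textbook statement:

```python
import random

# Law of large numbers demo: the sample mean of many i.i.d. uniform(0, 1)
# draws should be close to the true mean, 0.5.
random.seed(0)
n = 100_000
draws = [random.random() for _ in range(n)]
sample_mean = sum(draws) / n

print(abs(sample_mean - 0.5))  # small deviation from the true mean
```

Of course, a simulation only illustrates the convergence; proving that it must happen is exactly where the measure-theoretic machinery comes in.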
But then go further: what about completely counterintuitive results like the central limit theorem? Why on earth should we expect the sample mean and sample variance to be asymptotically normal?
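The central limit theorem can also be glimpsed by simulation. The sketch below uses fair coin flips (a decidedly non-normal distribution); the sample size, replication count, and the "68% within one standard deviation" check are illustrative choices based on the normal distribution, not a formal test:

```python
import random

# CLT demo: sample means of Bernoulli(0.5) draws look approximately normal.
random.seed(1)
n, reps = 200, 5000
means = [sum(random.choice([0, 1]) for _ in range(n)) / n for _ in range(reps)]

# For Bernoulli(0.5), the sample mean has mean 0.5 and sd 0.5 / sqrt(n).
# If the CLT holds, roughly 68% of sample means fall within one sd of 0.5.
sd = 0.5 / n ** 0.5
within_one_sd = sum(abs(m - 0.5) <= sd for m in means) / reps
print(within_one_sd)  # close to 0.68
```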
Finally, consider Bayesian statistics. How could you possibly undertake Bayesian inference without a proper understanding of conditional probability?
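A minimal sketch of what "a proper understanding of conditional probability" buys you: a single Bayes' rule update. The disease-testing numbers below are made up purely for illustration:

```python
# Bayes' rule: Pr[disease | positive] =
#   Pr[positive | disease] * Pr[disease] / Pr[positive].
# All three input probabilities are hypothetical illustrative values.
p_disease = 0.01            # prior: Pr[disease]
p_pos_given_d = 0.95        # sensitivity: Pr[positive | disease]
p_pos_given_not_d = 0.05    # false positive rate: Pr[positive | no disease]

# Law of total probability for the denominator, then Bayes' rule.
p_pos = p_pos_given_d * p_disease + p_pos_given_not_d * (1 - p_disease)
posterior = p_pos_given_d * p_disease / p_pos

print(round(posterior, 3))  # ≈ 0.161: still quite unlikely despite the positive test
```

Getting the (often surprising) posterior right hinges entirely on handling the conditional probabilities correctly.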
On the surface, inferential statistics may seem like a collection of common-sense results. However, a good chunk of those results require extensive and rigorous proofs, and that is where probability theory comes in. Without probability theory, statistical inference would have none of the important results used every day.
I believe there is a Schaum's Outline of Probability, which contains a bunch of problems with full solutions. I'm not sure if this would be above or below the level you are looking for...