It is not necessary to write your Ph.D. dissertation as a direct continuation of your master's thesis. I will not be writing mine as such a continuation.
You could study more model theory on the side, or you could study more pure logic, or you could expand into another area. Then when the time comes to write your Ph.D. you could make a much better decision. Furthermore, I know several people who were set to solve one problem in their Ph.D. and gave up halfway only to switch to an unrelated problem.
Some universities even support external advising (especially for Ph.D. students) which means that you have a local advisor, and another advisor (often the actual advisor) to work with on your problems. You might also find it easier just to switch universities, if that's a viable option.
Besides that, it is true that the easiest thing is simply to continue your master's research into a Ph.D. dissertation, but the main purpose of a master's thesis is to serve as "research training wheels": it gives you a taste of doing mathematical (or other) research. At the university where I did my M.Sc., you are not even expected to produce original research or publish papers by the end of your master's. You are only expected to write a thesis showing that you know, a bit better, how to research a problem in mathematics.
The important thing is to do what you love. Writing a thesis, especially a good one, takes a lot of time and effort. Spending so much energy on something you dislike is not a good idea.
Let me share one experience from my master's degree. I set out to research topics related to the axiom of choice, and I actually dragged my advisor into the subject. I came up with most of the questions and problems, and I made him curious enough that we studied the material together. Certainly, had I stayed there for a Ph.D. with him, we would have continued to study together, even though my advisor's main interests are proper forcing and order theory.
All of your criticisms are equally valid when applied to... well, anything. How does a football coach know what a "formation" is, and whether it really applies to football? How does a software engineer know the difference between a "program" and the instructions executed by a computer? How does a dog know that a "frisbee" is something that you can catch in your mouth? How does a general use little flags to signify troop positions, when they are really just flags?
None of this is to say that these are not interesting questions—I personally find them quite fascinating. But saying that they are reasons not to take something seriously is rather antisocial. If a lover stares into your eyes on a moonlit night and professes his or her adoration, do you start measuring oxytocin concentrations?
I do think that many mathematicians are a bit too attached to the Cantorian or Platonist views, and have incorrectly made mathematics out to be about things which are more than what they are—and that starts many arguments unnecessarily (for example, when someone claims that a theorem is true "in all possible universes", as if that meant anything). In my opinion, topos theory provides a better foundation for mathematics in this sense, because it is easier to understand the relationship between semantics, syntax, and the ever-elusive ontology. One speaks of this topos or that topos (or "topic", if you prefer), and never needs to worry about whether something "is" this or "is" that.
One relatively recent paper that I think has helped advance this more enlightened way of thinking is the quantum mechanics paper What is a Thing? (heavily inspired by the philosophical work of Heidegger). There it is argued that set theory has not quite succeeded in providing the proper background for interpreting the world as it appears to us. The "state space" of physics professes to arrange possible worlds into a set, and runs headfirst into various paradoxes as we realize that our experimental equipment itself changes what is being measured, blurring our picture of how things really work and necessitating the continual introduction of new concepts and interpretations.
In short: perhaps truth, in the pragmatic sense, is more sheaf-like than set-like. But I digress.
If anybody tells you that you should take math seriously because it has figured out, once and for all, the correct way to divide the abstract from the concrete, and has firmly established the foundations for rational thought, then they are too caught up in their subject and you really shouldn't pay attention to them. And, if you really want, you can simply walk away, shaking your head in disappointment that mathematicians have failed to live up to their promise.
But, however seriously you take it, mathematics remains a powerful force in the world. While we're not particularly better than anybody else at explaining what we're talking about, what we are good at is bringing disparate things together under the same semantic umbrella—to a large extent, precisely because we are given the freedom not to explain ourselves. Measure theory, for example, has allowed us to shuttle insights between discrete phenomena and continuous phenomena. Algebra has, for hundreds of years, improved our speed of numerical reasoning a billion-fold, by knowing when to compute and when to encode. Algebraic geometry has provided a language that is equally at home with basic arithmetic, encryption, signal processing, causality, and phylogenetic trees. And so people keep finding it useful, however many students stand up angrily in our classes and insist that they don't think it could possibly be useful because something something.
In short, mathematics saves time for certain kinds of projects. If you don't do any of those projects, then of course you don't need to take it seriously. But it's under no obligation to explain itself, particularly not to somebody who thinks he is entitled to answers and "justice". If you find the foundations lacking, then we would love for you to come make a career of improving them. If you are mostly complaining, however, then pardon us while we focus on our other students.
Like Florian, I really like Gowers' definition of obvious. Of course this is a very personal definition. A proof that instantly springs to mind for one person may not spring to mind for another. I am not really sure what there is to say at this level of generality beyond that.
Really, phrases like "it is obvious that..." and "clearly..." are bad habits. In a mathematical argument, they are the first places you should look for possible errors.
Perhaps another story will be illuminating: a professor of mine once made an assertion in lecture that I didn't quite see instantly. I asked him "is that obvious?" and he replied "yes." I asked him "is it obvious that that's obvious?" and, after a short pause, he replied "no."