You might want to look at the applied sciences. Weather forecasting is a nice example of how a rather simple question leads to high-dimensional vector spaces.
Suppose you want to predict the temperature for tomorrow. Obviously you need to take today's temperature into account, so you start with a function $$f:\mathbb R\rightarrow\mathbb R,~x\mapsto f(x),$$ where $x$ is the current temperature and $f(x)$ your prediction. But there is more than just the current temperature to consider. The humidity is important as well, so you modify your function and get $$\tilde{f}:\mathbb R^2\rightarrow\mathbb R,~(x,y)\mapsto \tilde{f}(x,y),$$ where $y$ is the current humidity. Now, the barometric pressure is important too, so you modify again and get $$\hat{f}:\mathbb R^3\rightarrow\mathbb R,~(x,y,z)\mapsto \hat{f}(x,y,z),$$ where $z$ is the current barometric pressure. Already this function cannot be visualized, as it would take a 4-dimensional coordinate system to graph it. When you now take into account that there are many more factors to consider (e.g. wind speed, wind direction, amount of rainfall), you easily get a domain with 5, 6, 7 or more dimensions.
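The growing-domain idea above can be sketched in code: each weather factor becomes one coordinate of the input, so adding a factor raises the dimension of the domain by one. This is only a toy linear predictor; the function name and the weights are made up for illustration, not a real forecasting model.

```python
# Hypothetical linear predictor: the input is a point in R^n, where
# n grows as we add more weather factors. Weights are invented for
# illustration only.
def predict_temperature(features, weights, bias):
    """Predict tomorrow's temperature from today's measurements."""
    return sum(f * w for f, w in zip(features, weights)) + bias

# f: R -> R  (temperature only)
print(predict_temperature([20.0], [0.9], 1.0))  # 19.0

# A function on R^3 (temperature, humidity, pressure) has the same
# shape; only the dimension of the domain changed.
print(predict_temperature([20.0, 0.65, 1013.0], [0.9, -2.0, 0.01], 1.0))
```

Nothing about the formula changes as the dimension grows, which is exactly why vector-space language is the natural setting for such problems.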
I guess that you are half right, assuming I have understood you correctly.
You are right when you say that one can do mathematics inside the logic of set theory (although one could consider a different theory, but I do not want to go into this now).
Indeed, model theory fits this way of doing things: it is done in set theory, its objects (interpretations, languages, theories, etc.) are made out of sets, and its results follow from the axioms of set theory (for instance, the compactness theorem requires the axiom of choice, or at least dependent choice).
Nevertheless, we can observe that much of the work done in algebra is actually model theory in disguise; indeed, many results in model theory are clever generalizations of algebraic techniques.
Finally, we can say that model theory provides techniques for proving independence results (i.e. results showing that certain statements cannot be proven from a given set of axioms) which, to the best of my knowledge, cannot be proven without it.
Edit: looking at the comments below, I think I should probably add a few more things.
It seems to me that you see syntax and semantics in logic as two ways to prove statements. In the appropriate technical context of formal logic this could be true, but it seems to me that you are thinking of something different.
Allow me to explain. When "doing" mathematics we have only one way to prove statements inside mathematics, and that is by writing down proofs.
Both proof theory and model theory, like any other field of mathematics, prove their results via inference rules.
From this point of view mathematics is just a syntactic activity. But of course, in choosing the axioms and in applying the inference rules, we are guided by our intuition about the "intended models" of our formulas. So mathematics is a syntactic activity, but its formulas are nevertheless not just meaningless strings (despite what some may think): they are statements about structures of some sort.
With this in mind we can say that the actual practice of mathematics, finding proofs by reasoning about structures that satisfy some axioms, is much closer to the underlying idea of model theory than proof theory.
Indeed, syntactic methods are those where you find proofs of theorems by reasoning about the proofs themselves rather than about the mathematical objects the formulas describe, and I would argue that people rarely do real mathematics this way.
So one could say that mathematics is more model theoretic than proof theoretic.
Of course in various fields of maths one does not necessarily have to use all the results of model theory, but it is also true that the same holds for the results of proof theory.
I hope this helps.
It's closer to true that all the questions in finite-dimensional linear algebra that can be asked in an introductory course can be answered in an introductory course. This is wildly far from true in most other areas. In number theory, algebraic topology, geometric topology, set theory, and theoretical computer science, for instance, here are some questions you could ask within a week of beginning the subject: Are there infinitely many pairs of primes separated by two? How many homotopically distinct maps are there between two given spaces? How can we tell apart two four-dimensional manifolds? Are there sets strictly between a given set and its power set in cardinality? Are various natural complexity classes actually distinct?
None of these questions is close to completely answered; only one is really anywhere close, one is in fact known to be unanswerable in general, and partial answers to each of them have earned massive accolades from the mathematical community. No such phenomenon can be observed in finite-dimensional linear algebra, where in a matter of a few lectures we can classify all possible objects, give explicit algorithms to determine when two examples are isomorphic, determine precisely the structure of the spaces of maps between two vector spaces, and understand in great detail how to represent morphisms in various ways amenable to computation. Thus linear algebra becomes both the archetypal example of a completely successful mathematical field, and a powerful tool when other mathematical fields can be reduced to it.
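To illustrate how decidable such questions are, here is a minimal sketch of one classical fact: two real $m\times n$ matrices represent the same linear map up to a change of bases in the domain and codomain if and only if they have the same rank. The helper names `rank` and `equivalent` are mine, and the rank is computed by a plain Gaussian elimination over floats with a tolerance, so this is an illustration rather than a numerically robust implementation.

```python
# Sketch: rank via Gaussian elimination; matrices as lists of row lists.
def rank(matrix, eps=1e-9):
    m = [row[:] for row in matrix]  # work on a copy
    rows, cols = len(m), len(m[0])
    r = 0  # index of the next pivot row
    for c in range(cols):
        # find a pivot in column c at or below row r
        pivot = next((i for i in range(r, rows) if abs(m[i][c]) > eps), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        # eliminate entries below the pivot
        for i in range(r + 1, rows):
            factor = m[i][c] / m[r][c]
            for j in range(c, cols):
                m[i][j] -= factor * m[r][j]
        r += 1
    return r

def equivalent(a, b):
    """Same shape and same rank <=> equal up to change of bases."""
    return (len(a), len(a[0])) == (len(b), len(b[0])) and rank(a) == rank(b)

print(equivalent([[1, 0], [0, 0]], [[2, 2], [1, 1]]))  # True: both have rank 1
```

That a one-screen program settles the classification question completely is exactly the contrast with the open problems listed above.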
This is an extended explanation of the claim that linear algebra is "thoroughly understood." That doesn't mean "totally understood," as you claim!