The question is broad, so I'll just try to address one of the sub-questions. I'll change it a little bit to allow for the possibility of pluralism:
What is the meaning of finding a "foundation of mathematics"?
As an example, I'll try to explain why set theory is a foundation of mathematics, and what it means for it to be one. This isn't intended to exclude other possible foundations, although they tend to be subsumed by set theory in a sense discussed below.
Based on one of the OP's comments above, I'll treat the mention of computability in the question figuratively rather than literally. Roughly speaking, a Turing-complete system of computation is one that can simulate any Turing machine, and so by the Church–Turing thesis, can simulate any other (realistic) system of computation. Therefore a Turing-complete system of computation can be taken as a "foundation" for computation.
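As a toy illustration of "simulating a system of computation" (my own sketch, not anything from the question; the function name and the example machine are invented for illustration), here is a minimal Turing-machine interpreter, i.e. a program in one model of computation hosting machines of another:

```python
# A minimal Turing-machine simulator: a sketch of the idea that one
# system of computation can simulate another. The machine is given as
# a transition table; the tape is stored sparsely as position -> symbol.

def run_tm(transitions, tape, state="start", blank="_", max_steps=10_000):
    """Run a Turing machine given as a dict:
    (state, symbol) -> (new_state, written_symbol, move in {-1, +1}).
    Halts when no transition applies; returns the final tape contents."""
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        symbol = tape.get(head, blank)
        if (state, symbol) not in transitions:
            break  # halt: no applicable transition
        state, written, move = transitions[(state, symbol)]
        tape[head] = written
        head += move
    if not tape:
        return ""
    lo, hi = min(tape), max(tape)
    return "".join(tape.get(i, blank) for i in range(lo, hi + 1)).strip(blank)

# Example machine: walk right, flipping 0s and 1s, halting at the first blank.
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
}
```

For instance, `run_tm(flip, "0110")` yields `"1001"`. Any Turing-complete language can host such an interpreter, which is the sense in which it can serve as a "foundation" for computation.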
As far as this question is concerned, I think that under the appropriate analogy between computation and mathematical logic, the notion of simulation of one computation by another corresponds to the notion of interpretation of one theory in another. By an interpretation of a theory $T_1$ in a theory $T_2$ I mean an effective translation procedure which, given a statement $\varphi_1$ in the language of $T_1$, produces a corresponding statement $\varphi_2$ in the language of $T_2$ such that $\varphi_1$ is a theorem of $T_1$ if and only if $\varphi_2$ is a theorem of $T_2$. So a mathematician working in the theory $T_1$ is essentially (modulo this translation procedure) working in some "part" of the theory $T_2$.
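For a standard concrete instance of such a translation (my illustration, not part of the definition above): Peano arithmetic $\mathsf{PA}$ can be interpreted in $\mathsf{ZFC}$ by reading the natural numbers as the finite von Neumann ordinals,

$$0 \mapsto \varnothing, \qquad S(n) \mapsto n \cup \{n\}, \qquad n < m \mapsto n \in m,$$

so that every arithmetic statement becomes a statement about the set $\omega$, and every theorem of $\mathsf{PA}$ translates to a theorem of $\mathsf{ZFC}$.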
In these terms, a foundation of mathematics could be reasonably described as a (consistent) mathematical theory that can interpret every other (consistent) mathematical theory. However, it follows from the incompleteness theorems that no such "universal" theory can exist. But remarkably, the set theory $\mathsf{ZFC}$ can interpret almost all of "ordinary" (non-set-theoretic) mathematics.
For example, if I'm working in analysis and I prove some theorem about continuous functions, then my theorem and its proof can in principle be translated into a theorem of $\mathsf{ZFC}$ and its proof in $\mathsf{ZFC}$. (Under this translation, a continuous function is a set of ordered pairs, which are themselves sets of sets, satisfying a certain set-theoretic property, etc.)
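To make one layer of this explicit (using the standard Kuratowski coding, which is the conventional choice for such translations): ordered pairs and functions are coded as

$$(a,b) := \{\{a\},\{a,b\}\}, \qquad f \colon X \to Y \iff f \subseteq X \times Y \text{ and } \forall x \in X\, \exists! y\, \bigl((x,y) \in f\bigr),$$

and continuity of $f \colon \mathbb{R} \to \mathbb{R}$ at $x$ unwinds to $\forall \varepsilon > 0\, \exists \delta > 0\, \forall x'\, \bigl(|x - x'| < \delta \rightarrow |f(x) - f(x')| < \varepsilon\bigr)$, where the reals, $<$, and $|\cdot|$ have themselves been coded as sets (e.g. reals as Dedekind cuts of rationals).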
This means that set theory can serve as a foundation for most of mathematics. For set theory itself, no one theory (e.g. $\mathsf{ZFC}$) can serve as a universal foundation. But we can informally define a hierarchy of set theories ($\mathsf{ZFC}$, $\mathsf{ZFC} + {}$"there is an inaccessible cardinal", $\mathsf{ZFC} + {}$"there is a measurable cardinal", etc.) called the large cardinal hierarchy, strictly increasing in interpretability strength, which seems to be practically universal in the sense that every "natural" mathematical theory, set-theoretic or otherwise, can be interpreted in one of these theories. (The reason this doesn't violate the incompleteness theorem is that the large cardinal hierarchy is defined informally in an open-ended way, so we can't just take its union and get a universal theory.)
Your last point about the continuum hypothesis raises an important question: what if the various candidate set theories branch off in many different directions rather than lining up in a neat hierarchy? So far, it turns out that the natural theories we consider do line up in the interpretability hierarchy, even when they do not line up in the stronger sense of inclusion. For example, the methods of inner models and forcing used to prove the independence of $\mathsf{CH}$ from $\mathsf{ZFC}$ also give interpretations of the two candidate extensions $\mathsf{ZFC} + \mathsf{CH}$ and $\mathsf{ZFC} + \neg \mathsf{CH}$ in one another (and in $\mathsf{ZFC}$ itself), showing that they occupy the same place in the interpretability hierarchy. If you believe $\mathsf{CH}$ and I believe $\neg\mathsf{CH}$, we can still interpret each other's results, rather than dismissing them as meaningless.
Many students (myself, some years ago, included) are often confused about the aim of logic. It is certainly not the aim of logic to provide a rigorous framework for doing all of mathematics in some pure manner, without invoking any form of intuition or axiomatisation. In particular, it is not the aim of logic to define the meaning of such phrases as *for all* or *there exists* at any level other than the intuitively obvious one. It is simply of little interest to do so (that is actually a big fat lie, as there are various flavours of logic in which the meaning of, e.g., *there exists* is defined differently from the classical one, and some mathematicians object to the classical definition for many plausible reasons; such issues obviously fall under the umbrella of foundations, since they concern different variants at a very low level, the very basic building blocks of all arguments).
So, what is logic about then? Well, logic comprises several fields, so let's just look at one: model theory. Model theory can be described, roughly, as the study of the expressive power of formal languages.

For a long time, mathematicians debated the correctness of certain notions subjectively. Axioms were treated as evident truths, meaning they were indisputably true. Famously, Euclid's fifth postulate in geometry was debated, and argued by many to be an evident axiom of geometry, while they were literally walking on a counterexample. Developments in analysis and other realms dictated a much more intricate understanding of the role of axioms, definitions, and the structure of proof. The old axiomatic, platonistic approach was replaced by a formalist approach: the axioms are just sentences chosen to hold, rather than somehow being externally obvious, and the practice of mathematics is the study of what flows from the axioms. Of course, when Hilbert said that mathematics is just a formal game, he (I presume) did not mean that when we actually do mathematics, we just randomly choose some axioms and see what happens. That would be futile. Instead, the formality is there, practised informally by human beings. However, it cannot be disputed that it is a formal game, and that formal aspect can be studied.

So, along comes model theory: a formal definition of language, predicates, statements, etc., together with its semantics. This sets the ground for studying the relationship between the language we use in order to study something and the actual properties of that something. Few things are more foundational than understanding the strengths and limitations of the language we use in our quest to understand mathematical objects.
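As a toy example of this syntax/semantics relationship (my example, not the answer's): in the language with one binary relation symbol $<$, the sentence

$$\forall x\, \exists y\, (x < y)$$

is true in the structure $(\mathbb{N}, <)$ but false in every finite linear order, since a finite order has a maximal element. Whether a sentence holds is a joint property of the sentence and the structure, and model theory studies exactly this relationship.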
Gödel's completeness and incompleteness theorems are remarkable examples of amazing answers to such foundational questions as the following. If I found a proof of a property from some axioms, does it mean that the property holds in every model of the axioms? (Answer: yes, by a simple induction following the definitions of the semantics; this is the soundness theorem.) If a property is true (semantics!) in every model of some axioms, does it follow that there exists a proof (a syntactic thing!) of the property from the axioms? (Answer: yes; this is the completeness theorem.) Given a model of interest, is it possible to capture its semantics (i.e., to capture precisely all of the statements that hold in it) as precisely the theorems provable from a recognisable set of axioms? (Answer: it depends on the language, but if the model is interesting enough and the language is rich enough, then the answer is no; this is the incompleteness theorem.)
I will stop here, though there is much more to say. Perhaps the discussion can continue in the comments, if you have further questions.
Best Answer
As much as I can confirm that (the introduction and the type theory chapter of) the homotopy type theory book is well written, I have the impression that there are shorter texts focusing more directly on (the history of) type theory as foundations. The Stanford Encyclopedia of Philosophy (SEP) is an excellent place where one can find such shorter texts. A good place to start might be the section on typed theories in Randall Holmes' SEP entry on Alternative Axiomatic Set Theories. A more complete picture including $\lambda$-calculus can be found in Thierry Coquand's SEP entry on Type Theory. The latest revision of that entry even contains a section on univalent foundations (homotopy type theory).
I have to admit that the SEP articles are quite broad and contain many historical and philosophical references. For more mathematically minded readers, a more focused account like William Farmer's The Seven Virtues of Simple Type Theory might be better suited as an introduction to type theories (as foundations). One problem with type theory as a foundation is that there are so many different (non-equiconsistent) type theories. For set theory, this problem can be avoided by using ZF (or NBG) as the "canonical" set theory. Saunders Mac Lane (1986) tried to address this problem by advocating "bounded Zermelo set theory" as a foundation for mathematics.
In an answer to another question, I tried to explain how TST is related to other type theories, and how it derives its consistency strength from non-circular impredicative quantification.