- What exactly is the role of type theory in creating higher-order logics? Same goes with category theory/model theory, which I believe is an alternative.
Don't think of type theory, categorical logic, and model theory as alternatives to one another. Each step in that progression forgets more structure, and whether that structure is essence or clutter depends on the problem you are trying to solve. Roughly speaking, the two poles are type theory and model theory, which focus on proofs and on provability, respectively.
To a model theorist, two propositions are the same if they have the same provability/truth value. To a type theorist, equivalence means only that we have a proof of the biimplication, which is obviously not the same thing as the propositions being the same. (In fact, even the right notion of equivalence for proofs is still not settled to type theorists' satisfaction.)
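To make the contrast concrete, here is a minimal Haskell sketch via Curry-Howard (names are illustrative): the proposition $A \to A \to A$ is simply "provable" to a model theorist, but it has two genuinely different proofs.

```haskell
-- Under Curry-Howard, the proposition A -> (A -> A) has (at least)
-- two distinct proofs: return the first hypothesis, or the second.
-- A model theorist sees one provable proposition; a type theorist
-- sees two terms that are not beta-eta-equal.
proof1 :: a -> a -> a
proof1 x _ = x

proof2 :: a -> a -> a
proof2 _ y = y
```

Both terms inhabit the same type, yet they compute differently, which is exactly the structure that provability-based semantics forgets.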
Categorical logicians tend to move between these two poles; on the one hand, gadgets like Lawvere's hyperdoctrines and topoi are essentially model-theoretic, since they are provability models. On the other hand, gadgets like cartesian closed categories give models of proofs, up to $\beta\eta$-equivalence.
- Is extending a) natural deduction, b) sequent calculus, or c) some other formal system the best way to go for creating higher-order logics?
It depends on what you are doing. If you are building a computerized tool, then typically either natural deduction or sequent calculus is the way to go, because these calculi both line up with human practice and help constrain proof search in ways helpful to computers. It makes sense to cook up a sequent calculus or natural deduction system even if the theory you want to use (e.g., set theory) is not normally cast in these terms.
On the other hand, model theory has been spectacularly successful in applications to mathematics, and this is in part because it does not have a built-in notion of proof system -- so there is simply less machinery you need to reinterpret before you can apply it to a mathematical problem. (The corresponding use of type theory is much less developed; homotopy theorists are in the very earliest stages of turning dependent type theory into ordinary mathematics.)
- Where does typed lambda calculus come into proof verification?
Every well-behaved intuitionistic logic has a corresponding typed lambda calculus. See Sørensen and Urzyczyn's *Lectures on the Curry-Howard Isomorphism* for (many) more details.
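A quick illustration of the correspondence in Haskell (names here are illustrative, not from any particular text): each well-typed term is a proof of its type read as a proposition.

```haskell
-- Modus ponens: from A -> B and A, conclude B.
modusPonens :: (a -> b) -> a -> b
modusPonens f x = f x

-- Transitivity of implication is function composition:
-- (A -> B) -> (B -> C) -> (A -> C).
transImp :: (a -> b) -> (b -> c) -> (a -> c)
transImp f g = g . f

-- Conjunction is the pair type; this term proves A /\ B -> B /\ A.
andComm :: (a, b) -> (b, a)
andComm (x, y) = (y, x)
```

Proof verification then reduces to type checking: if the term type-checks, the proposition is proved.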
- Are there any other approaches than higher order logic to proof verification?
Yes and no. If you're interested in actual, serious mathematics, then there is no alternative to HOL or the moral equivalent (such as dependent type theory or set theory) because mathematics deals intrinsically with higher-order concepts.
However, large portions of any development involve no logically or conceptually complex arguments: they are just symbol management, often involving decidable theories. This is often amenable to automation, if the problems in question are not stated in unnaturally higher-order language. (Sometimes, as in the case of the Kepler conjecture, there is an artificial way of stating the problem in a simple theory. This is essentially the reason why Hales' proof relies so heavily on computers: he very carefully reduced the Kepler conjecture to a collection of machine-checkable statements about real closed fields.)
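As a toy instance of automating a decidable theory, here is a brute-force tautology checker for classical propositional logic in Haskell (a minimal sketch; real tools use far better algorithms than truth tables):

```haskell
import Data.List (nub)
import Data.Maybe (fromMaybe)

-- Formulas of classical propositional logic over integer-named variables.
data Form = Var Int | Not Form | And Form Form | Or Form Form | Imp Form Form

vars :: Form -> [Int]
vars (Var i)   = [i]
vars (Not p)   = vars p
vars (And p q) = vars p ++ vars q
vars (Or p q)  = vars p ++ vars q
vars (Imp p q) = vars p ++ vars q

-- Evaluate a formula under a truth assignment.
eval :: (Int -> Bool) -> Form -> Bool
eval v (Var i)   = v i
eval v (Not p)   = not (eval v p)
eval v (And p q) = eval v p && eval v q
eval v (Or p q)  = eval v p || eval v q
eval v (Imp p q) = not (eval v p) || eval v q

-- Decide validity by checking every assignment: possible precisely
-- because the theory is decidable.
tautology :: Form -> Bool
tautology f = all holds assignments
  where
    vs          = nub (vars f)
    assignments = mapM (\v -> [(v, False), (v, True)]) vs
    holds asgn  = eval (\i -> fromMaybe False (lookup i asgn)) f
```

For example, Peirce's law `Imp (Imp (Imp p q) p) p` comes out valid, while a bare variable does not; no logical insight is needed, only symbol management.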
- What are the limitations/shortcomings of existing proof verification systems (see below)?
The main difficulty with these tools is finding the right balance between automation and abstraction. Basically, the more expressive the logic, the harder automated proof search becomes, and the more easily and naturally you can define abstract theories that can be used in many different contexts.
Here are some further possibilities:
Propositional logic: can be interpreted in any Boolean algebra (assuming you are talking about classical logic here), but perhaps your notion of truth assignment allows for truth values as elements of Boolean algebras. Alternatively, via the Curry-Howard correspondence propositions may be interpreted as types and proofs as terms of those types.
Predicate logic (first-order and higher-order): can in general be interpreted in suitable categories or fibrations. For example, higher-order logic can be interpreted in a topos (which must be Boolean if your logic is classical).
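As a concrete instance of the Boolean-algebra interpretation mentioned above, here is a sketch in Haskell using the four-element algebra $\{0,1\}^2$ with componentwise operations (the names are illustrative):

```haskell
-- A Boolean algebra other than {True, False}: pairs of truth values
-- with all operations defined componentwise.
type B2 = (Bool, Bool)

andB, orB :: B2 -> B2 -> B2
andB (a, b) (c, d) = (a && c, b && d)
orB  (a, b) (c, d) = (a || c, b || d)

notB :: B2 -> B2
notB (a, b) = (not a, not b)

topB :: B2
topB = (True, True)

-- Excluded middle, p \/ ~p, evaluates to the top element for every
-- element of the algebra, as it must in any Boolean algebra.
lem :: B2 -> B2
lem p = p `orB` notB p
```

Classical tautologies evaluate to top in every Boolean algebra, which is exactly the soundness of this interpretation.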
In general, if you are looking for interpretations of various kinds of logic in ways other than first-order model theory, you should look at categorical logic. There, the various possibilities (multi-sorted vs. single-sorted, intuitionistic vs. classical, etc.) are considered systematically. Also, the ad-hoc distinction between "full" vs. "Henkin-style" semantics (which depends on the idea that there is a "true" set theory in the background, as far as I can tell) is replaced with much more refined notions (such as properties of the fibrations that are used to interpret logic). A place to start looking would be Bart Jacobs's *Categorical Logic and Type Theory*, although it is not easy reading. There should be some lecture notes on the internet that are more accessible; perhaps someone can suggest them?
I am not sure what you mean by "valid interpretation". If you just mean soundness (provable things are valid), then all of the possibilities mentioned are "valid". But I suspect you mean something else.
Best Answer
There is no single answer to this question, because of the high degree of nondeterminism inherent in the sequent calculus -- to get a computational interpretation, you need to resolve the ambiguity, and each way of doing so leads to different operational interpretations. Furthermore, the answers differ somewhat for classical and intuitionistic sequent calculi.
Proof term assignments for classical sequent calculi correspond closely to continuation calculi, such as Parigot's $\lambda\mu$ calculus. Now, continuation transforms correspond to double-negation translations of classical logic into intuitionistic logic, and there are many different double-negation translations possible. Each choice of double-negation translation gives rise to a different evaluation order, which explains why these calculi are so valuable for the study of abstract machines (as in supercooldave's link) -- you can get the benefits of studying different evaluation orders via CPS transforms, without having to code everything up in functions.
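The connection between classical logic and continuations can be sketched in Haskell by reading $\neg A$ as `a -> r`, so that $\neg\neg A$ becomes the CPS type `(a -> r) -> r` (the names below are illustrative):

```haskell
-- The intuitionistically valid direction A -> ~~A is just "feed the
-- value to the current continuation":
toCPS :: a -> ((a -> r) -> r)
toCPS x k = k x

-- Peirce's law, ((A -> B) -> A) -> A, is not provable
-- intuitionistically, but its continuation reading is: it is the
-- type of call/cc, written out over raw continuations.
callCC :: ((a -> (b -> r) -> r) -> (a -> r) -> r) -> (a -> r) -> r
callCC f k = f (\x _ -> k x) k
```

The escape continuation `\x _ -> k x` discards its local continuation and jumps back to `k`, which is exactly the non-local control that a classical proof of Peirce's law requires.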
Now, you can also try to eliminate the nondeterminism in another way: by fiat. This relies upon Andreoli's idea of focusing for linear logic. In operational terms, the idea is to define logical connectives either in terms of their values, or their elimination forms (this is called polarity). After picking one, the other is determined by the need to satisfy cut-elimination. For linear logic, each of the connectives has a unique polarity. For classical logic, every connective can be given either polarity (i.e., every connective can be defined either in terms of values or continuations), and the choice of polarities for the connectives essentially determines the evaluation order. This is very well explained in Noam Zeilberger's PhD thesis, http://www.cs.cmu.edu/~noam/thesis.pdf.
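A rough illustration of the two polarities of the product, sketched in Haskell (with the caveat that laziness blurs the distinction that a focused calculus makes precise; the names are illustrative):

```haskell
-- Positive product: defined by its value constructor, and consumed
-- by pattern matching on that constructor.
data PPair a b = PPair a b

pFst :: PPair a b -> a
pFst (PPair x _) = x

-- Negative product: defined by its two observations (projections);
-- a value is given by saying how it answers each projection.
data NPair a b = NPair { nFst :: a, nSnd :: b }

mkN :: a -> b -> NPair a b
mkN x y = NPair { nFst = x, nSnd = y }
```

In a strict language the positive pair forces both components at construction, while the negative pair computes a component only when it is observed; fixing a polarity for each connective is what pins down the evaluation order.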
For intuitionistic sequent calculi, the proof term assignments still have a lot of nondeterminism in them. When you resolve this through focalization, you find that the resulting calculi are pattern matching calculi. A beautiful abstract take on this is also in Noam's thesis, and I wrote a very concrete paper about this for POPL 2009, "Focusing on Pattern Matching", which showed how to explain Haskell/ML style pattern matching in terms of proof term assignments for focused sequent calculi.
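A small taste of the pattern-matching reading in Haskell: distributivity of conjunction over disjunction, $A \wedge (B \vee C) \to (A \wedge B) \vee (A \wedge C)$, is proved by a single pattern-matching phase that decomposes the hypotheses and then chooses an injection.

```haskell
-- Decomposing the pair and the sum corresponds to the invertible
-- left rules of a focused sequent calculus; picking Left or Right
-- corresponds to the subsequent focused right phase.
distrib :: (a, Either b c) -> Either (a, b) (a, c)
distrib (x, Left y)  = Left (x, y)
distrib (x, Right z) = Right (x, z)
```

The Haskell/ML-style clauses are, under this reading, exactly the proof terms of the focused derivation.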