- What exactly is the role of type theory in creating higher-order logics? Same goes with category theory/model theory, which I believe is an alternative.
Don't think of type theory, categorical logic, and model theory as alternatives to one another. They form a progression in which each step forgets more structure, and whether that structure is essence or clutter depends on the problem you are trying to solve. Roughly speaking, the two poles are type theory and model theory, which focus on proofs and provability, respectively.
To a model theorist, two propositions are the same if they have the same provability/truth value. To a type theorist, equivalence means that we have a proof of the biimplication, which is obviously not the same thing as the propositions being the same. (In fact, even the right notion of equivalence for proofs is still not settled to type theorists' satisfaction.)
Categorical logicians tend to move between these two poles; on the one hand, gadgets like Lawvere's hyperdoctrines and topoi are essentially model-theoretic, since they are models of provability. On the other hand, gadgets like cartesian closed categories give models of proofs, up to $\beta\eta$-equivalence.
- Is extending a) natural deduction, b) sequent calculus, or c) some other formal system the best way to go for creating higher order logics?
It depends on what you are doing. If you are building a computerized tool, then typically either natural deduction or sequent calculus is the way to go, because these calculi both line up with human practice and help constrain proof search in ways helpful to computers. It makes sense to cook up a sequent calculus or natural deduction system even if the theory you want to use (e.g., set theory) is not normally cast in these terms.
On the other hand, model theory has been spectacularly successful in applications to mathematics, and this is in part because it does not have a built-in notion of proof system -- so there is simply less machinery you need to reinterpret before you can apply it to a mathematical problem. (The corresponding use of type theory is much less developed; homotopy theorists are in the very earliest stages of turning dependent type theory into ordinary mathematics.)
- Where does typed lambda calculus come into proof verification?
Every well-behaved intuitionistic logic has a corresponding typed lambda calculus. See Sørensen and Urzyczyn's Lectures on the Curry-Howard Isomorphism for (many) more details.
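To make the correspondence concrete, here is a minimal sketch in Haskell (ignoring nontermination, which keeps Haskell from being a consistent logic): the function arrow is implication, pairs are conjunction, and a total program inhabiting a type is a proof of the corresponding proposition. The names `andComm` and `compose` are mine.

```haskell
-- Propositions as types: a total term of a type is a proof
-- of the proposition that the type denotes.

-- A proof of (A /\ B) -> (B /\ A): conjunction is commutative.
andComm :: (a, b) -> (b, a)
andComm (x, y) = (y, x)

-- A proof of (A -> B) -> (B -> C) -> (A -> C): implication composes.
compose :: (a -> b) -> (b -> c) -> (a -> c)
compose f g = g . f
```

A proof checker for such a logic is, in essence, a type checker for the corresponding lambda calculus.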
- Are there any other approaches than higher order logic to proof verification?
Yes and no. If you're interested in actual, serious mathematics, then there is no alternative to HOL or the moral equivalent (such as dependent type theory or set theory) because mathematics deals intrinsically with higher-order concepts.
However, large portions of any development involve no logically or conceptually complex arguments: they are just symbol management, often involving decidable theories. This is often amenable to automation, if the problems in question are not stated in unnaturally higher-order language. (Sometimes, as in the case of the Kepler conjecture, there is an artificial way of stating the problem in a simple theory. This is essentially the reason why Hales' proof relies so heavily on computers: he very carefully reduced the Kepler conjecture to a collection of machine-checkable statements about real closed fields.)
- What are the limitations/shortcomings of existing proof verification systems (see below)?
The main difficulty with these tools is finding the right balance between automation and abstraction. Basically, the more expressive the logic, the harder automated proof search becomes, and the more easily and naturally you can define abstract theories that can be used in many different contexts.
You have not really encoded the sum type $\alpha \sqcup \beta$, but made a "virtual embedding" of the sum type into $(\rho^{\rho^\beta})^{\rho^\alpha}$. You cannot encode sum types in such a calculus, simply because sum types are not expressible in it (you may know this result in a different form --- disjunctions are not definable in the implicational fragment of intuitionistic propositional logic). To see how your embedding works, let me assume for a moment that our calculus has both sum and product types. Then:
$$(\rho^{\rho^\beta})^{\rho^\alpha} \approx \rho^{\rho^\alpha \times \rho^\beta} \approx \rho^{\rho^{\alpha \sqcup \beta}}$$
Therefore, your embedding may be thought of as the canonical morphism:
$$\alpha \sqcup \beta \rightarrow \rho^{\rho^{\alpha \sqcup \beta}}$$
Actually, $\rho^{\rho^{(-)}}$ is the continuation monad, and the embedding $\tau \overset{\mathit{ret}}\rightarrow \rho^{\rho^{\tau}}$ lifts types to the continuation semantics, where, as you have figured out, coproducts "virtually exist". By the way, I have said "embedding", but $\mathit{ret}$ generally is not an embedding --- it is a monomorphism if and only if $\rho$ is an internal cogenerator --- which means that the internal contravariant hom-functor $\rho^{(-)}$ is faithful (see also this answer).
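The virtual sum can be written down directly in Haskell, with `r` playing the role of $\rho$ (the names `Sum`, `inl`, `inr`, `match` are mine; fixing the answer type `r` instead of quantifying over it keeps the encoding within the implicational fragment discussed above):

```haskell
-- The "virtual" sum: a curried inhabitant of ((a -> r) -> (b -> r) -> r),
-- i.e. of r^(r^a x r^b) ~ r^(r^(a |_| b)).
type Sum r a b = (a -> r) -> (b -> r) -> r

-- The two injections feed the value to the matching continuation.
inl :: a -> Sum r a b
inl x = \f _ -> f x

inr :: b -> Sum r a b
inr y = \_ g -> g y

-- Case analysis is just application of the continuations.
match :: Sum r a b -> (a -> r) -> (b -> r) -> r
match s = s
```

For example, `match (inl 3) show boolCase` applies the first continuation and produces `"3"`.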
As pointed out by Peter in the comments, you may similarly "embed" product types:
$$\alpha \times \beta \rightarrow \rho^{\rho^{\alpha \times \beta}} \approx \rho^{(\rho^{\beta})^\alpha}$$
Then, instead of working with $\alpha \times \beta$, work with the continuation type $\rho^{(\rho^{\beta})^\alpha}$. For example, you may define the first projection $\rho^{(\rho^{\beta})^\alpha} \rightarrow \rho^{\rho^\alpha}$ as:
$$\lambda p \colon \rho^{(\rho^{\beta})^\alpha} . \lambda f \colon \rho^\alpha . p (\lambda a \colon \alpha . \lambda b \colon \beta . f a \colon \rho)$$
and in the same manner the second projection $\rho^{(\rho^{\beta})^\alpha} \rightarrow \rho^{\rho^\beta}$ as:
$$\lambda p \colon \rho^{(\rho^{\beta})^\alpha} . \lambda g \colon \rho^\beta . p (\lambda a \colon \alpha . \lambda b \colon \beta . g b \colon \rho)$$
However, it is worth saying that you do not need the above construction to emulate product types --- functions from a product type, $\alpha \times \beta \rightarrow \tau$, are tantamount to functions $\alpha \rightarrow \tau^\beta$, and functions to a product type, $\tau \rightarrow \alpha \times \beta$, are tantamount to pairs of functions $\tau \rightarrow \alpha$ and $\tau \rightarrow \beta$; therefore, in both cases, product types are (in some sense) redundant.
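Both equivalences can be witnessed directly in Haskell (a sketch; the names `curryF`, `uncurryF`, `pairF`, and `split` are mine, though the first two are just the Prelude's `curry` and `uncurry`):

```haskell
-- (a, b) -> c  is tantamount to  a -> b -> c  (currying).
curryF :: ((a, b) -> c) -> (a -> b -> c)
curryF f = \a b -> f (a, b)

uncurryF :: (a -> b -> c) -> ((a, b) -> c)
uncurryF g = \(a, b) -> g a b

-- t -> (a, b)  is tantamount to a pair of functions t -> a and t -> b.
pairF :: (t -> a) -> (t -> b) -> (t -> (a, b))
pairF f g = \t -> (f t, g t)

split :: (t -> (a, b)) -> (t -> a, t -> b)
split h = (fst . h, snd . h)
```

These four maps are mutually inverse in the obvious way, which is exactly the sense in which product types add nothing new.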
For completeness, here is my GHC session:
pair :: Integer -> Bool -> (Integer -> Bool -> Bool) -> Bool
pair a b = \c -> c a b
first :: ((Integer -> Bool -> Bool) -> Bool) -> (Integer -> Bool) -> Bool
first p = \f -> p (\a b -> f a)
second :: ((Integer -> Bool -> Bool) -> Bool) -> (Bool -> Bool) -> Bool
second p = \g -> p (\a b -> g b)
*Main> let p = pair 2 True in first p ((<)1)
True
*Main> let p = pair 2 True in first p ((<)3)
False
*Main> let p = pair 2 True in second p ((||) False)
True
*Main> let p = pair 2 False in second p ((||) False)
False
Although to you category theory may be merely an inefficient framework for encoding facts about logic and programming languages, to mathematicians working in areas like algebraic geometry and algebraic topology, categories are truly essential. Some of our most basic notions make no sense and look extremely awkward until you phrase them in terms of categories and universal properties of objects and morphisms (and 2-morphisms, etc.); in fact, some of these notions predate category theory, and no one really knew what was intuitively wanted until we started using categories. Additionally, category theory helps us see what various types of mathematical objects have in common, and how they relate via functors and natural transformations.
As far as recasting algebraic topology and algebraic geometry in terms of the lambda calculus goes, I would very much like to see someone manage to, say, give an alternative definition of a stack, a gerbe, or a model structure that is more intuitive without using categories, groupoids, and the like.