I'm in a hurry but will try to answer a couple of your questions:
Is it correct that the omission of LEM is the distinguishing characteristic of constructive logic as opposed to classical?
Yes!
Is it correct that the Curry-Howard correspondence is conditioned on the omission of LEM?
In a basic version of the Curry–Howard correspondence, you are quite right: there is no program of the type corresponding to LEM. Such a program would be a kind of oracle which is able, for any type $A$, to either produce a value of $A$ or else produce a procedure $A \to \bot$. In particular, with such an oracle there would be an algorithm deciding the halting problem.
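To make the oracle reading concrete, here is a hypothetical Python sketch. The `lem` argument, the encoding of propositions as tags, and `decide_halting` are all invented for illustration; no computable `lem` of this type exists, which is exactly the point.

```python
# Hypothetical sketch: "lem" stands for an oracle that, for ANY proposition P
# (encoded here as an opaque tag), returns either ("proof", evidence) that P
# holds or ("refutation", evidence) that P implies absurdity.  No computable
# oracle of this type exists -- if one did, halting would be decidable:

def decide_halting(lem, program, argument):
    # Instantiate the oracle at the proposition "program halts on argument".
    verdict, _evidence = lem(("halts", program, argument))
    return verdict == "proof"
```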
Astoundingly, there are more refined versions of the Curry–Howard correspondence which interpret LEM just fine. Every constructive proof gives rise to a pure program, while every classical proof gives rise to a program with side effects. Exactly which side effects depends on the strength of the classical axioms used: for interpreting LEM, continuations as side effects are enough; for interpreting countable or dependent choice, other side effects are required. You can find some references in these slides.
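A small sketch of the continuation trick (code and names are mine): writing $\neg A$ as $A \to R$ for an arbitrary answer type $R$, the *double negation* of LEM is constructively provable, and the program witnessing it is the well-known continuation-juggling term.

```python
# not_not_lem : ((A + (A -> R)) -> R) -> R
# Constructively we cannot hand the caller "A or not-A" outright, but we CAN
# satisfy any continuation k consuming such a disjunct: first claim "not A"
# by passing k a refutation; if the caller ever applies that refutation to an
# actual witness a, we "change our mind" and restart k with "A holds via a".
def not_not_lem(k):
    return k(("inr", lambda a: k(("inl", a))))
```

Running `not_not_lem` against a continuation that tries to use the refutation makes the backtracking visible: the continuation gets restarted with the very witness it supplied.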
Is it correct that, were LEM used at the heart of Coq and Lean, these proof checkers could in turn not be proven to halt?
Literally speaking: No. Coq and Lean support LEM just fine as an unproven assumption and still their proof checking terminates.
Is it therefore correct to conclude that it is LEM that is at the heart of the halting problem?
Yes in one sense, and no in another.
Yes: Without LEM, it becomes possible for all functions $\mathbb{N} \to \mathbb{N}$ to be computable (by a Turing machine or, equivalently, by an untyped lambda term). The halting function, which maps a number $n$ to $0$ or $1$ depending on whether the $n$-th Turing machine terminates, is no longer a total function $\mathbb{N} \to \mathbb{N}$ in such a world. (Instead, its domain of definition is the subset consisting of those numbers $n$ such that the $n$-th Turing machine terminates or does not terminate.) Accordingly, the question of whether there is a Turing machine computing this no-longer-existing gadget loses a bit of its relevance. There are lots of worlds like this, for instance the "effective topos"; I tried to give an introduction here.
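The classical diagonal argument lurking behind all this can be sketched in Python (representing machines as no-argument thunks; names are mine):

```python
# Suppose "halts" were a total computable predicate on thunks.  Then we could
# build a thunk that does the opposite of whatever "halts" predicts about it,
# so no such predicate can be correct on every input.
def make_diagonal(halts):
    def diag():
        if halts(diag):
            while True:      # "halts" says we stop, so loop forever
                pass
        # "halts" says we loop, so halt immediately
    return diag
```

Feeding `make_diagonal` any candidate decider yields a concrete thunk on which that decider is wrong.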
No: There is a metatheorem stating that every PA-proof of termination of a Turing machine can be mechanically transformed into a HA-proof, and similarly with ZFC and IZF. (PA is Peano arithmetic, HA is its constructive cousin without LEM; ZFC is Zermelo–Fraenkel set theory with the axiom of choice, IZF is its constructive cousin without LEM and without choice.) So LEM will never be crucial in proving termination of a Turing machine. LEM might be crucial for abstract tools helping with termination proofs, but in the end LEM can always be eliminated from concrete termination results. Keywords to look up are "Friedman's trick", "continuation trick" and "Barr's theorem".
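For orientation, here is the shape of Friedman's trick, sketched from memory (details vary by presentation). Fix a formula $F$ and translate every formula $A$ to $A^F$ by replacing absurdity with $F$ and weakening each atom:

$$\begin{aligned}
\bot^F &:= F, & P^F &:= P \lor F \quad (P \text{ atomic}),\\
(A \land B)^F &:= A^F \land B^F, & (A \to B)^F &:= A^F \to B^F,\\
(A \lor B)^F &:= A^F \lor B^F, & (\forall x\, A)^F &:= \forall x\, A^F,\\
(\exists x\, A)^F &:= \exists x\, A^F. &&
\end{aligned}$$

One shows that HA-provability is preserved: if $\mathrm{HA} \vdash A$ then $\mathrm{HA} \vdash A^F$. Taking $F$ to be the $\Sigma^0_1$ termination statement itself and combining this with the double-negation translation of the classical proof then lets one extract a genuinely constructive termination proof.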
Best Answer
One (extreme) perspective on Curry-Howard (as a general principle) is that it states that proof theorists and programming language theorists/type theorists are simply using different words for the exact same things. From this (extreme) perspective, the answer to your first question is trivially "yes". You simply call the types of your imperative language "formulas", you call the programs "proofs", you call the reduction rules of an operational semantics "proof transformations", and you call (the type mapping part of a) denotational semantics an "interpretation" (in the model-theoretic sense), et voilà, we have a proof system.
Of course, for most programming languages, this proof system will have horrible meta-theoretic properties, be inconsistent1, and probably won't have an obvious connection to (proof systems for) usual logics. Interesting Curry-Howard-style results usually connect a pre-existing proof system to a pre-existing programming language/type theory. For example, the archetypal Curry-Howard correspondence connects the natural deduction presentation of intuitionistic propositional logic to (a specific presentation of) the simply typed lambda calculus. A sequent calculus presentation of intuitionistic propositional logic would (most directly) connect to a different presentation of the simply typed lambda calculus. For example, in the presentation corresponding to natural deduction (and assuming we have products), you'd have terms $\pi_i:A_1\times A_2\to A_i$ and $\langle M,N\rangle : A_1\times A_2$ where $M:A_1$ and $N:A_2$. In a sequent calculus presentation, you wouldn't have the projections but would instead have a pattern-matching $\mathsf{let}$, e.g. $\mathsf{let}\ \langle x,y\rangle = T\ \mathsf{in}\ U$ where $T:A_1\times A_2$ and $U:B$, and $x$ and $y$ can occur free in $U$ with types $A_1$ and $A_2$ respectively. As a third example, a Hilbert-style presentation of minimal logic (the implication-only fragment of intuitionistic logic) gives you the SKI combinatory logic. (My understanding is that this third example is closer to what Curry did.)
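The two presentations of products can be mimicked in Python (untyped, so this captures only the term-level shape; the names are mine):

```python
# Natural-deduction style: products come with pairing and projections.
def pair(m, n):
    return (m, n)

def pi1(p):
    return p[0]

def pi2(p):
    return p[1]

# Sequent-calculus style: no projections; instead a pattern-matching "let"
# that consumes the whole pair at once and binds both components in a body.
def let_pair(t, body):
    x, y = t
    return body(x, y)
```

With `let_pair`, the projection `pi1` is definable as `let_pair(p, lambda x, y: x)` and vice versa, mirroring how the two proof systems prove the same theorems by different routes.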
Another way of using Curry-Howard is to leverage tools and intuitions from proof theory for computational purposes. Atsushi Ohori's Register Allocation by Proof Transformation is an example of a proof system built around a simple imperative language (meant to be low-level). This system is not intended to correspond to any pre-existing logic. Going the other way, computational interpretations can shed light on existing proof systems or suggest entirely new logics and proof systems. Things like the Concurrent Logical Framework clearly leverage the interplay of proof theory and programming.
Let's say we start with a typed lambda calculus, for familiarity. What happens when we add various effects? We actually know in many cases (though usually the effects need to be carefully controlled, more so than they usually are in typical programming languages). The most well-known example is `callcc`, which gives us a classical logic (though, again, a lot of care is needed to produce a well-behaved proof system). Slide 5 (page 11) of this slide set mentions several others and a scheme for encoding them into Coq. (You may find the entire slide set interesting, as well as other work by Pierre-Marie Pédrot.) Adding effects tends to produce powerful new proof rules and non-trivial extensions of what's provable (often to the point of inconsistency when done cavalierly). For example, adding exceptions (in the context of intuitionistic dependent type theory) allows us to validate Markov's rule. One way of understanding exceptions is via a monad-style translation, and the logical view of this is Friedman's translation. If you take a constructive reading of Markov's rule, it's actually pretty obvious how you could implement it given exceptions. (What's less obvious is whether adding exceptions causes any other problems, which, naively, it clearly does, since we can "prove" anything by simply throwing an exception, but if we require all exceptions to be caught...)

Daniel Schepler mentioned Hoare logic (a more modern version would be Hoare Type Theory), and it is worth mentioning how this connects. A Hoare logic is itself a deductive system (one that is usually defined in terms of a traditional logic). The "formulas" are the Hoare triples, i.e. the programs annotated with pre- and post-conditions, so this is a different way of connecting logic and computation than Curry-Howard2.
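To illustrate the "pretty obvious" implementation: read a proof of $\neg\neg\exists n.\,P(n)$ in continuation style as a function expecting a refutation of the existence claim. Feeding it a fake refutation that *escapes with an exception* carrying the alleged witness realizes Markov's rule. A Python sketch (the code and names are mine, loosely echoing the effectful interpretation, not any particular formal system):

```python
class Found(Exception):
    """Escape hatch carrying the witness handed to the fake refutation."""
    def __init__(self, n):
        self.n = n

def markov(not_not_proof):
    # "not_not_proof" is a continuation-style proof of not-not-(exists n. P n):
    # it takes a refutation of the existence claim and derives absurdity.
    # We feed it a "refutation" that, upon receiving an alleged witness,
    # escapes with an exception instead of refuting anything.
    def fake_refutation(witness):
        raise Found(witness)
    try:
        not_not_proof(fake_refutation)
        raise AssertionError("a proof of absurdity returned normally")
    except Found as e:
        return e.n
```

Any honest constructive proof of the double negation must eventually hand the refutation a witness, at which point the exception smuggles it out.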
A less extreme perspective on Curry-Howard would add some conditions on what constitutes an acceptable proof system and, going the other way, what constitutes an acceptable programming language. (It's quite possible to specify proof systems that don't give rise to an obvious notion of computation.) Also, the traditional way of using Curry-Howard to produce a computational system identifies computation with proof normalization, but there are other ways of thinking about computation. Regardless of perspective, the well-behaved proof system/rules are often the most interesting. Certainly from a logical side, but even from the programming side, good meta-theoretical properties of the proof system usually correspond to properties that are good for program comprehension.
1 Traditional discussions of logic tend to dismiss inconsistent logics as completely uninteresting. This is an artifact of proof-irrelevance. Inconsistent logics can still have interesting proof theories! You just can't accept arbitrary proofs. Of course, it is still reasonable to focus on consistent theories.
2 I don't mean to suggest that Daniel Schepler meant otherwise.