One (extreme) perspective on Curry-Howard (as a general principle) is that it states that proof theorists and programming language theorists/type theorists are simply using different words for the exact same things. From this (extreme) perspective, the answer to your first question is trivially "yes". You simply call the types of your imperative language "formulas", you call the programs "proofs", you call the reduction rules of an operational semantics "proof transformations", and you call (the type-mapping part of a) denotational semantics an "interpretation" (in the model-theoretic sense), et voilà, we have a proof system.
Of course, for most programming languages, this proof system will have horrible meta-theoretic properties, be inconsistent¹, and probably won't have an obvious connection to (proof systems for) usual logics. Interesting Curry-Howard-style results usually connect a pre-existing proof system to a pre-existing programming language/type theory. For example, the archetypal Curry-Howard correspondence connects the natural deduction presentation of intuitionistic propositional logic to (a specific presentation of) the simply typed lambda calculus. A sequent calculus presentation of intuitionistic propositional logic would (most directly) connect to a different presentation of the simply typed lambda calculus. For example, in the presentation corresponding to natural deduction (and assuming we have products), you'd have terms $\pi_i:A_1\times A_2\to A_i$ and $\langle M,N\rangle : A_1\times A_2$ where $M:A_1$ and $N:A_2$. In a sequent calculus presentation, you wouldn't have the projections but instead a pattern-matching $\mathsf{let}$, e.g. $\mathsf{let}\ \langle x,y\rangle = T\ \mathsf{in}\ U$ where $T:A_1\times A_2$ and $U:B$ and $x$ and $y$ can occur free in $U$ with types $A_1$ and $A_2$ respectively. As a third example, a Hilbert-style presentation of minimal logic (the implication-only fragment of intuitionistic logic) gives you SKI combinatory logic. (My understanding is that this third example is closer to what Curry did.)
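To make the contrast concrete, here is a small Haskell sketch of all three presentations; the names (`pi1`, `swapND`, `swapSC`, `k`, `s`, `i`) are mine, and Haskell happens to support both product styles directly:

```haskell
-- Natural-deduction style products: pairing plus projections.
pi1 :: (a, b) -> a
pi1 = fst

pi2 :: (a, b) -> b
pi2 = snd

swapND :: (a, b) -> (b, a)
swapND p = (pi2 p, pi1 p)

-- Sequent-calculus style: no projections, only a pattern-matching let.
swapSC :: (a, b) -> (b, a)
swapSC p = let (x, y) = p in (y, x)

-- Hilbert-style minimal logic: the axioms K and S as combinators.
k :: a -> b -> a                           -- axiom K: A -> (B -> A)
k x _ = x

s :: (a -> b -> c) -> (a -> b) -> a -> c   -- axiom S
s f g x = f x (g x)

i :: a -> a                                -- the theorem A -> A, proved as S K K
i = s k k
```

The two `swap`s compute the same function; the point is which primitives each presentation makes available.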
Another way of using Curry-Howard is to leverage tools and intuitions from proof theory for computational purposes. Atsushi Ohori's Register Allocation by Proof Transformation is an example of a proof system built around a simple imperative language (meant to be low-level). This system is not intended to correspond to any pre-existing logic. Going the other way, computational interpretations can shed light on existing proof systems or suggest entirely new logics and proof systems. Things like the Concurrent Logical Framework clearly leverage the interplay of proof theory and programming.
Let's say we start with a typed lambda calculus, for familiarity. What happens when we add various effects? We actually know in many cases (though usually the effects need to be more carefully controlled than they typically are in programming languages). The most well-known example is callcc, which gives us classical logic (though, again, a lot of care is needed to produce a well-behaved proof system). Slide 5 (page 11) of this slide set mentions several others and a scheme for encoding them into Coq. (You may find the entire slide set, and other work by Pierre-Marie Pédrot, interesting.) Adding effects tends to produce powerful new proof rules and non-trivial extensions of what's provable (often to the point of inconsistency when done cavalierly). For example, adding exceptions (in the context of intuitionistic dependent type theory) allows us to validate Markov's rule. One way of understanding exceptions is via a monad-style translation, and the logical view of this is Friedman's translation. If you take a constructive reading of Markov's rule, it's actually pretty obvious how you could implement it given exceptions. (What's less obvious is whether adding exceptions causes any other problems, which, naively, it clearly does, since we can "prove" anything by simply throwing an exception; but if we require all exceptions to be caught...)
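For concreteness, callcc's type is a continuation-passing form of Peirce's law $((A\to B)\to A)\to A$, which is classically but not intuitionistically valid. Here is a minimal hand-rolled sketch in Haskell (the `mtl` library packages the same operator in `Control.Monad.Cont`; the `safeDiv` example is mine):

```haskell
-- A computation returning a, with answer type r, in continuation-passing style.
type Cont r a = (a -> r) -> r

-- callCC's type is a CPS'd Peirce's law: ((a -> b) -> a) -> a.
-- The captured continuation, when invoked, discards the local continuation.
callCC :: ((a -> Cont r b) -> Cont r a) -> Cont r a
callCC f k = f (\x _ -> k x) k

-- Early exit via the captured continuation: aborts with Nothing on
-- division by zero instead of running the rest of the computation.
safeDiv :: Int -> Int -> Cont r (Maybe Int)
safeDiv x y = callCC $ \exit ->
  if y == 0 then exit Nothing
            else \done -> done (Just (x `div` y))
```

Running `safeDiv 10 2 id` yields `Just 5`, while `safeDiv 1 0 id` escapes with `Nothing`.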
Daniel Schepler mentioned Hoare logic (a more modern version would be Hoare Type Theory), and it is worth mentioning how this connects. A Hoare logic is itself a deductive system (usually defined in terms of a traditional logic). The "formulas" are the Hoare triples, i.e. programs annotated with pre- and post-conditions, so this is a different way of connecting logic and computation than Curry-Howard².
A less extreme perspective on Curry-Howard would add some conditions on what constitutes an acceptable proof system and, going the other way, what constitutes an acceptable programming language. (It's quite possible to specify proof systems that don't give rise to an obvious notion of computation.) Also, the traditional way of using Curry-Howard to produce a computational system identifies computation with proof normalization, but there are other ways of thinking about computation. Regardless of perspective, the well-behaved proof systems/rules are often the most interesting. Certainly from the logical side, but even from the programming side, good meta-theoretical properties of the proof system usually correspond to properties that are good for program comprehension.
¹ Traditional discussions of logic tend to dismiss inconsistent logics as completely uninteresting. This is an artifact of proof-irrelevance. Inconsistent logics can still have interesting proof theories! You just can't accept arbitrary proofs. Of course, it is still reasonable to focus on consistent theories.
² I don't mean to suggest that Daniel Schepler meant otherwise.
Assuming you are interested only in Curry-Howard-Lambek-style interpretations of logic in categories: in order to interpret first-order logic, cartesian closed categories are not sufficient.
The problem lies in the fact that you need a category whose objects represent not only propositions (i.e. closed formulas) but also predicates (i.e. formulas with free variables).
To do this one needs some sort of fibered category (or indexed category, if you prefer).
We assume we have a logical signature made up of an algebraic signature $\Sigma$, containing the operation symbols, and a predicate signature $\Pi$, containing the predicate symbols.
To start, one considers the category $T(\Sigma)$ whose objects are sets of free variables, and for every pair of such sets $X$ and $Y$ we have
$$T(\Sigma)[X,Y]=\mathbf{Set}[X,T_\Sigma(Y)]$$
where $T_\Sigma(Y)$ is the set of terms with variables in $Y$. So morphisms of this category are substitutions for the logic signature considered.
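As a sanity check, the morphisms of $T(\Sigma)$ and their (Kleisli-style) composition can be sketched in a few lines of Haskell; the particular signature here (one binary symbol `F` and a constant `C`) is a made-up example:

```haskell
-- Terms over a signature, with variables drawn from v; Term y plays
-- the role of T_Sigma(Y).
data Term v = Var v | F (Term v) (Term v) | C
  deriving (Eq, Show)

-- A morphism X -> Y of T(Sigma): a function assigning to each variable
-- in X a term with variables in Y, i.e. a substitution.
type Subst x y = x -> Term y

-- Applying a substitution to a term (the bind of the term monad).
subst :: Subst x y -> Term x -> Term y
subst s (Var v) = s v
subst s (F t u) = F (subst s t) (subst s u)
subst _ C       = C

-- Composition in T(Sigma): apply the first substitution, then
-- substitute again inside the resulting terms.
compose :: Subst y z -> Subst x y -> Subst x z
compose s2 s1 = subst s2 . s1

-- The identity morphism sends each variable to itself, seen as a term.
identity :: Subst x x
identity = Var
```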
To each $X \in T(\Sigma)$ we can associate the poset $P(X)$ of formulas with free variables in $X$, with the order relation given by $\varphi \leq \psi$ iff $\varphi \vdash \psi$.
Also for every morphism/substitution $\sigma \in T(\Sigma)[X,Y]$ we have a monotone map $$P(\sigma)\colon P(X) \to P(Y)$$
defined as $P(\sigma)(\varphi)=\sigma(\varphi)$, that is the formula obtained by applying the substitution $\sigma$ to the formula $\varphi$.
These assignments define a functor $P\colon T(\Sigma) \to \mathbf{Cat}$, where we regard posets as categories and monotone functions as functors.
In this setup we can characterize quantifiers as adjoint functors; let us see how.
For every set of variables $X$ and a new variable $x \notin X$ we have a morphism $\pi_x^X \in T(\Sigma)[X,X\cup\{x\}]$ given by the substitution
$$\pi_x^X \colon X \longrightarrow X\cup\{x\}$$
$$\pi_x^X(y)=y$$
for every $y \in X$.
The induced functor/monotone map $P(\pi_x^X)$ sends every proposition of $P(X)$ to itself, seen as a member of $P(X\cup\{x\})$.
A left adjoint to $P(\pi_x^X)$ would be a functor $L_x^X \colon P(X \cup\{x\}) \to P(X)$ that takes formulas with free variables in $X\cup\{x\}$ and sends them to formulas with free variables in $X$ (much like the quantifiers, which eliminate free variables).
For $L_x^X$ to be an adjoint we need to have a natural isomorphism
$$P(X)[L_x^X(\varphi),\psi]\cong P(X\cup\{x\})[\varphi,P(\pi_x^X)(\psi)]$$
and since we are dealing with posets this amounts to requiring that
$$L_x^X(\varphi) \leq \psi \iff \varphi \leq P(\pi_x^X)(\psi)=\psi$$
This is equivalent to requiring that $L_x^X(\varphi)$ be a proposition whose logical consequences $\psi$ with free variables in $X$ are exactly those formulas that are implied by $\varphi$ in the extended context of $P(X\cup\{x\})$.
With a little effort one can show that such $L_x^X(\varphi)$ must be $\exists x \varphi$.
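Unpacking the adjunction makes this plausible: reading $L_x^X(\varphi)$ as $\exists x\,\varphi$, the two directions of the equivalence are exactly the familiar sequent rules for the existential (the side condition $x \notin \mathrm{FV}(\psi)$ just records that $\psi$ lives in $P(X)$):
$$\frac{\varphi \vdash \psi}{\exists x\,\varphi \vdash \psi} \qquad\qquad \frac{\exists x\,\varphi \vdash \psi}{\varphi \vdash \psi}$$
Instantiating $\psi = \exists x\,\varphi$ in the right-hand rule (the unit of the adjunction) gives the introduction axiom $\varphi \vdash \exists x\,\varphi$.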
Similarly, a right adjoint $R_x^X \colon P(X\cup\{x\}) \to P(X)$ would be a functor such that there is a natural isomorphism
$$P(X\cup\{x\})[P(\pi_x^X)(\varphi),\psi]\cong P(X)[\varphi,R_x^X(\psi)]$$
Again, since we are dealing with posets, this amounts to requiring that $R_x^X(\psi)$ be the formula such that
$\varphi \leq R_x^X(\psi)$ if and only if $\varphi \leq \psi$; that is, $R_x^X(\psi)$ is a logical consequence of $\varphi$ if and only if $\psi$ is a logical consequence of $\varphi$.
By playing a little bit with logic one can show that the only possibility is for $R_x^X(\psi)$ to be $\forall x \psi$.
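Again one can unpack the adjunction: reading $R_x^X(\psi)$ as $\forall x\,\psi$, the equivalence becomes the familiar rules for the universal (with side condition $x \notin \mathrm{FV}(\varphi)$, since $\varphi$ lives in $P(X)$):
$$\frac{\varphi \vdash \psi}{\varphi \vdash \forall x\,\psi} \qquad\qquad \frac{\varphi \vdash \forall x\,\psi}{\varphi \vdash \psi}$$
Instantiating $\varphi = \forall x\,\psi$ in the right-hand rule (the counit of the adjunction) gives the axiom $\forall x\,\psi \vdash \psi$, i.e. instantiation at the generic variable $x$.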
If you are interested in some pointers, I suggest reading Bart Jacobs' Categorical Logic and Type Theory, which deals with these subjects in detail.
Note: authors usually use notation different from the one I used here; for instance, they usually work with the opposite category $T(\Sigma)^{\mathrm{op}}$. Nevertheless, if you got the general idea, I think you can easily adjust for the differences in notation.
I hope I was able to give you the rough idea; if you need any additional clarification, please ask.
For book length works, I recommend the following:
Lambek & Scott's Introduction to Higher Order Categorical Logic. It has a nice brief introduction to category theory, and discusses typed lambda calculi and their relation to cartesian closed categories, as well as a higher order type theory for toposes. It's a bit older, but still one of the best of its kind. It doesn't address dependent type theories of the sort Coq is based on, but it's a nice place to get started on thinking categorically about simple type theories.
A great book that covers a wide range of logics and type theories is Bart Jacobs' Categorical Logic and Type Theory. It's a little more demanding, and much longer, than Lambek & Scott, but it does give a very nice general account of type dependency, as well as spending a lot of time on realizability interpretations. The running theme of the book is the use of various forms of fibred category, a topic on which it is also one of the best resources I'm aware of.
The latter might be a bit overpowered for what you need, but reading the first chapter, skimming the examples of how fibrations are used to model logics, and getting the notion of a comprehension category would probably give you the main conceptual tools of the book.
If you need an additional category theory reference (the one in Lambek & Scott is adequate, but quick), a good and cheap one is Leinster's Basic Category Theory.