I think one reason is that often one does not want to remember what the variable names really represent.
As an example, when we choose to talk about the matrix $(a_{ij})$ instead of the matrix $(\mathrm{TransitionProbability}_{ij})$, this expresses the important fact that once we have formulated our problem in terms of matrices, it is perfectly safe to forget where the problem came from originally -- in fact, remembering what the matrix "really" describes might only be unnecessary psychological baggage that prevents us from applying all linear-algebraic tools at our disposal.
(As an aside, have you ever seen code written by a mathematician? It very often looks exactly like your first example.)
What is mathematics? One answer is that mathematics is a collection of definitions, theorems, and proofs of them. But the more realistic answer is that mathematics is what mathematicians do. (And partly, that's a social activity.) Progress in mathematics consists of advancing human understanding of mathematics.
What is a proof for? Often we pretend that the reason for a proof is so that we can be sure that the result is true. But actually what mathematicians are looking for is understanding.
I encourage everyone to read the article On Proof and Progress in Mathematics by the Fields Medalist William Thurston. He says (on page 2):
The rapid advance of computers has helped dramatize this point, because computers and people are very different. For instance, when Appel and Haken completed a proof of the 4-color map theorem using a massive automatic computation, it evoked much controversy. I interpret the controversy as having little to do with doubt people had as to the veracity of the theorem or the correctness of the proof. Rather, it reflected a continuing desire for human understanding of a proof, in addition to knowledge that the theorem is true.
On a more everyday level, it is common for people first starting to grapple with computers to make large-scale computations of things they might have done on a smaller scale by hand. They might print out a table of the first 10,000 primes, only to find that their printout isn't something they really wanted after all. They discover by this kind of experience that what they really want is usually not some collection of "answers" -- what they want is understanding.
Some people may claim that there is doubt about a proof when it has been produced by a computer, but I think human proofs leave more room for error. The real issue is that long computer proofs (as opposed to something simple, like checking a numerical value on a calculator) are hard to keep in your head.
Compare these quotes from Gian-Carlo Rota's Indiscrete Thoughts, where he describes the mathematicians' quest for understanding:
“eventually every mathematical problem is proved trivial. The quest for ultimate triviality is characteristic of the mathematical enterprise.” (p.93)
“Every mathematical theorem is eventually proved trivial. The mathematician’s ideal of truth is triviality, and the community of mathematicians will not cease its beaver-like work on a newly discovered result until it has shown to everyone’s satisfaction that all difficulties in the early proofs were spurious, and only an analytic triviality is to be found at the end of the road.” (p. 118, in The Phenomenology of Mathematical Truth)
Are there definitive proofs?
It is an article of faith among mathematicians that after a new theorem is discovered, other simpler proofs of it will be given until a definitive one is found. A cursory inspection of the history of mathematics seems to confirm the mathematician’s faith. The first proof of a great many theorems is needlessly complicated. “Nobody blames a mathematician if the first proof of a new theorem is clumsy”, said Paul Erdős. It takes a long time, from a few decades to centuries, before the facts that are hidden in the first proof are understood, as mathematicians informally say. This gradual bringing out of the significance of a new discovery takes the appearance of a succession of proofs, each one simpler than the preceding. New and simpler versions of a theorem will stop appearing when the facts are finally understood. (p.146, in The Phenomenology of Mathematical Proof).
In my opinion, there is nothing wrong with, or doubtful about, a proof that relies on a computer. However, such a proof is still in the intermediate stage described above: it has not yet been rendered trivial enough to be held in a mathematician's head, and so the theorem it proves should be considered a work in progress.
Best Answer
I work in software and have an amateur interest in mathematics, and from what I can tell, many theoretical mathematicians don't have much use for computers because of the domains they work in. For a mathematician to make use of a computer, the problem must either require a lot of computation or already be formalized enough for a proof assistant or theorem prover to attack.
If you need one closed-form solution to an easy differential equation, using what you learned in calculus is easier than figuring out how to use software that does the integration for you, much as you wouldn't turn to a calculator to multiply six by nine. Computers are useful when you need to produce many closed-form solutions (computer algebra systems), perform numerical integration or integral transforms, or run other computation-intensive tasks, such as computing the class number (or other properties) for a large number of number fields.
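To make the "computation, not insight" point concrete, here is a minimal sketch of the kind of numerical task a computer handles well: integrating a function by composite Simpson's rule. The function, interval, and step count are purely illustrative.

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule on [a, b]; n must be even."""
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        # Interior points alternate weights 4, 2, 4, 2, ...
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

# Numerically integrate sin(x) over [0, pi]; the exact value is 2.
print(simpson(math.sin, 0, math.pi))
```

No understanding of why the integral equals 2 is produced here, which is exactly the distinction Thurston draws: the machine delivers an answer, not insight.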
Many of these problems are tedious and not particularly interesting to theorists, even if they're quite useful for applied math; further, solving them often requires writing software, since theorists tend to work in unexplored areas. A theorist typically wants to prove a statement, and that calls for a proof assistant. While proof assistants can be useful, the foundations for most graduate-level mathematics aren't formalized in first-order logic, and where they are, they're usually far too unwieldy compared to more informal reasoning.
Further complicating the picture, the traditional method of formalizing mathematics is built on the foundation of set theory in first-order logic, which is often incompatible with (or at least unwieldy for) metamathematical reasoning about categories, nonstandard models, and other foundational issues. So if mathematicians working in these fields tried to use computers, they'd spend more time writing software and formalizing existing mathematics than doing new work.
As an example, very little of Wiles's proof has been formalized in a form that can be verified by mechanized reasoning, because most of the branches of mathematics it rests on have yet to be formalized. This may change in the coming decades as theorem provers and proof assistants become more advanced, but for the time being computers are most useful in the mature, well-formalized areas of mathematics, which are largely the domain of physics and applied math.
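To give a sense of what "formalized enough for a proof assistant" means, here is a toy sketch in Lean 4. It states and proves a statement that the core library has already made trivial; the gap between this and the machinery behind a result like Wiles's proof is exactly the formalization gap described above. (The theorem name is illustrative.)

```lean
-- A trivially formalized fact: addition of natural numbers is commutative.
-- The proof is a one-line appeal to a lemma already in Lean's core library.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

Statements like this check instantly because the surrounding theory is fully formalized; for most research-level mathematics, that surrounding theory simply does not yet exist in machine-checkable form.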