I'd suggest looking into Python: it's a modern language with an easy syntax.
Also, for mathematical applications, you could look at Sage. It is basically an extension of Python that provides an interface to many mathematical programs (Maxima, Octave, GAP, R, among others).
This is also convenient because you get a "built-in" representation of mathematical objects. For example, if you want to manipulate polynomials, you don't need to define a class to represent them. You can do something like this:
def gcd(P, Q):
    # Euclidean algorithm: works for integers and polynomials alike
    F, G = P, Q
    while G != 0:
        F, G = G, F % G
    return F

print(gcd(5, 6))  # compute the gcd of integers

R.<x> = PolynomialRing(QQ)
P, Q = x^7 - 1, x^14 - 1
print(gcd(P, Q))  # compute the gcd of polynomials
Disclaimer: there are several proof assistants for HOL and Isabelle/HOL is the one that I am least familiar with. The different proof assistants all take slightly different lines on the design of their mathematical libraries, but have the same basic mechanisms for defining the objects in those libraries. The principle here is common to all of them.
As discussed in the blog article that you reference, a feature of HOL is that the mechanism for introducing a new constant is not restricted to simple equational definitions, but allows you to give implicit and possibly loose specifications of the constant. So, for example, you could introduce $\pi$ with the defining property:
$$ 0 < \pi \land \forall x\cdot \bigl(\sin x = 0 \Leftrightarrow \exists m:\Bbb{Z}\cdot x = m \times \pi\bigr)$$
(There is a small price to pay: a proof obligation is imposed that ensures that the defining property is consistent.)
The above is an implicit specification, but can be shown not to be loose: i.e., you can prove that there is exactly one real number $\pi$ that satisfies the above property. As a conceivable example of a loose specification, after defining the complex numbers as pairs of reals with the usual definition of the arithmetic operators, you could define $\mathbf{i}$ by:
$$ \mathbf{i}^2 = -1 $$
(The above example is not very likely in practice as it is more convenient to be less abstract and to require $\mathbf{i} = (0, 1)$. The blog article contains other relevant examples, e.g., see its discussion of $\mathtt{fromSome}$.)
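To see concretely why $\mathbf{i}^2 = -1$ is a loose specification, here is a minimal Python sketch (the function name `cmul` is my own, purely illustrative) of complex numbers as pairs of reals with the usual multiplication: both $(0, 1)$ and $(0, -1)$ satisfy the defining property, so the specification does not pin down a unique witness.

```python
def cmul(z, w):
    """Multiply complex numbers represented as pairs (real, imaginary)."""
    (a, b), (c, d) = z, w
    return (a * c - b * d, a * d + b * c)

# Two distinct pairs whose square is -1, i.e. (-1, 0):
print(cmul((0, 1), (0, 1)))    # (-1, 0)
print(cmul((0, -1), (0, -1)))  # (-1, 0)
```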
$\mathtt{undefined}$ in Isabelle/HOL is an extreme example of loose specification: its defining property is just $\mathbf{true}$. It has a polymorphic type, so it can be instantiated at any type. Its instance at any type has all the properties that are common to all elements of that type. So you have properties like:
$$
(\mathtt{undefined} : \tau) = (\mathtt{undefined} : \tau) \\
\lnot (\mathtt{undefined} : \Bbb{N}) < 0 \\
(\mathtt{undefined} : \Bbb{R}) < 0.0 \lor (\mathtt{undefined} : \Bbb{R}) = 0.0 \lor (\mathtt{undefined} : \Bbb{R}) > 0.0
$$
where $\tau$ can be any type. $\mathtt{undefined}$ has no other special properties: In particular, $f(x) = \mathtt{undefined}$ does not imply that $x$ is not in the domain of $f$. The definition of $\mathtt{division}$ is just saying that $\mathtt{division} \, a \, 0$ is equal to $\mathtt{undefined}$, some unspecified, but perfectly ordinary natural number. As the blog article suggests, it would perhaps have been more appropriate to call it $\mathtt{unspecified}$, rather than $\mathtt{undefined}$.
There are logics that do explicitly model undefined terms. Tennant's natural logic is one example. See also this article on free logics. HOL is not one of these logics.
As regards the conventions for the use of $\infty$ in ordinary mathematics, good texts will make it quite clear what conventions are being used. I don't think equations involving $\infty$, such as $\infty = \infty$, are considered problematic. E.g., Rudin's Principles of Mathematical Analysis states the convention that $x + \infty = \infty$ when $x$ is finite.
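Incidentally, IEEE-754 floating-point arithmetic, as exposed in Python, implements essentially the convention Rudin states, and treats $\infty$ as an ordinary value that compares equal to itself:

```python
inf = float('inf')

print(inf + 5 == inf)  # x + inf = inf for finite x
print(inf == inf)      # inf = inf is unproblematic here
```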
Best Answer
On the one hand, there has been a lot of success recently in creating fully formalized and computer-verified proofs of nontrivial theorems, including Hales' theorem, the prime number theorem, the Jordan curve theorem, and Gödel's incompleteness theorems. The sense I get from experts in the field is that the main challenge is time, not theory. It takes a long time to formalize human-readable proofs into computer-verifiable proofs with the present systems, but there should be no theoretical obstacles to formalizing any theorem one wishes to study.
On the other hand, there are other reasons that nothing explicitly like Principia Mathematica has been developed. The first of these is that Principia is virtually unreadable. As a means of conveying mathematical information from one person to another, fully formal or even mostly formal proofs (the kind that a proof assistant can verify) are not as efficient as ordinary natural-language proofs. This means that few mathematicians have a desire to work with any "new" system of this kind. We already realize that virtually all mathematical theorems can be formalized in ZFC set theory, but instead we write proofs in a way that tries to convey the mathematical insight more than the technical details of a formal system, unless the technical details are somehow important.
There has been a lot of recent work on a different foundational system called "homotopy type theory", which could be used instead of ZFC to formalize theorems. It remains to be seen, however, whether this new system ends up being widely adopted. There are other foundational systems, such as second-order arithmetic, which could also be used to fully formalize large parts of mathematics. I believe that a significant number of mathematicians don't really worry much about the foundational system they use, because the objects they deal with are sufficiently concrete that the foundations make little difference.
The other goal of Principia was to support the logicist program that all of mathematics can be reduced to logic. The idea that mathematics can be formalized and presented in full detail is no longer in question, as it might have been at the time. But the idea that the axioms of a foundational theory would all be purely logical is far from clear; in fact, it is generally considered false, because axioms such as the axiom of infinity or the axiom of replacement do not seem purely "logical" to many mathematicians.