How can one feel comfortable with non-solvable algebraic numbers?
The nice thing about solvable numbers is that they have a formula. You can manipulate the formula as if it were actually a number, using algebraic rules you have probably been comfortable with for a while. For instance $\sqrt{3+\sqrt{6}}+2$ is an algebraic number. What do you get if you add it to $7$? Well $\left(\sqrt{3+\sqrt{6}}+2\right)+7=\sqrt{3+\sqrt{6}}+9$ seems like a decent answer. As a side note: there are actually some reasonably hard algorithmic questions along these lines, but I'll assume they don't worry you. :-)
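Just to make that comfort concrete, here is a minimal sketch with sympy (my choice of tool, not anything the argument depends on) that pushes the formula around exactly as you would by hand:

```python
# A minimal sketch, assuming sympy is available: nested radicals are just
# formulas we can manipulate symbolically.
from sympy import sqrt, simplify

a = sqrt(3 + sqrt(6)) + 2      # a solvable algebraic number
print(a + 7)                   # sqrt(sqrt(6) + 3) + 9
print(simplify((a - 2)**2))    # sqrt(6) + 3, recovering the inner radical
```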
We'd like to be able to manipulate other algebraic numbers with similar comfort. The first method I was taught is pretty reasonable:
Kronecker's construction: If $x$ really is an algebraic number, then it is the root of some irreducible polynomial $x^n - a_{n-1} x^{n-1} - \ldots - a_1 x - a_0$. But how do we manipulate $x$? It's almost silly: we treat it just like the symbol $x$, and add and multiply as usual, except that $x \cdot x^{n-1}$ needs to be replaced by $a_{n-1} x^{n-1} + \ldots + a_1 x + a_0$, and division is handled by replacing $1/x$ with $( x^{n-1} - a_{n-1} x^{n-2} - \ldots - a_2 x - a_1)/a_0$. This is very similar to "integers mod $n$", where you replace big numbers by their remainder mod $n$. In fact this is just replacing a polynomial in $x$ with its remainder mod $x^n - a_{n-1} x^{n-1} - \ldots - a_1 x - a_0$.
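In code the whole construction really is just "reduce mod $f$". A sketch with sympy (an assumption on my part), using the irreducible quintic $x^5 - x - 1$, whose roots are non-solvable:

```python
# Kronecker's construction, sketched with sympy (assumed): arithmetic in
# Q[x]/(f) is polynomial arithmetic followed by taking the remainder mod f.
from sympy import symbols, rem, invert

x = symbols('x')
f = x**5 - x - 1              # irreducible over Q, non-solvable Galois group

def mul(p, q):
    """Multiply two 'numbers' (polynomials in x), reducing mod f."""
    return rem(p * q, f, x)

print(mul(x**3 + 1, x**4))    # x**4 + x**3 + x**2: the x**7 got reduced
print(invert(x, f, x))        # 1/x = x**4 - 1, matching the formula above
```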
I found it somewhat satisfying, but in many ways it is very mysterious. We use the same symbol for many different algebraic numbers; each time we have to keep track of the defining polynomial floating in the background. It also raises deep questions about how to tell two algebraic numbers apart. Luckily, more or less all of these questions have clean algorithmic answers, described in Cohen's textbook CCANT (A Course in Computational Algebraic Number Theory, Henri Cohen, 1993).
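To give a flavor of those algorithmic questions, here is a sketch with sympy (my tooling, not Cohen's) of one of them: deciding whether two different-looking formulas name the same algebraic number.

```python
# Two formulas, one number: (1 + sqrt(2))**2 = 3 + 2*sqrt(2).
from sympy import sqrt, sqrtdenest

a = sqrt(3 + 2*sqrt(2))   # looks irreducibly nested...
b = 1 + sqrt(2)
print(sqrtdenest(a))      # 1 + sqrt(2): same number, different formula
print(a.equals(b))        # True
```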
Companion matrices: But years later, it still bugged me. Then I studied splitting fields of group representations. The crazy thing about these fields is that they are subrings of matrix rings. So “numbers” were actually matrices. You've probably seen some tricks like this $$\mathbb{C} = \left\{ \begin{bmatrix} a & b \\ -b & a \end{bmatrix} : a,b \in \mathbb{R} \right\}$$ where we can make a bigger field out of matrices over a smaller field. It turns out this is always possible: if $K \leq F$ are fields, then $F$ is a $K$-vector space, and the function $f:F \to M_n(K) : x \mapsto ( y \mapsto xy )$ is an injective homomorphism of fields, so that $f(F)$ is a field isomorphic to $F$ but whose “numbers” are just $n \times n$ matrices over $K$, where $n$ is the dimension of $F$ as a $K$-vector space (and yes, $n$ could be infinite if you want, but here it isn't).
That might seem a little complicated, but $f$ just answers "what does multiplying look like?" For instance, if $\mathbb{C} = \mathbb{R} \oplus \mathbb{R} i$, then multiplying by $a+bi$ sends $1$ to $a+bi$ and $i$ to $-b+ai$. The first row is $[a,b]$ and the second row $[-b,a]$. Too easy.
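Here is that computation as a sanity check, in numpy (an assumption; any matrix library works): the map $f$ really does turn complex multiplication into matrix multiplication.

```python
# The left-multiplication map f for C = R + Ri, sketched with numpy:
# f(a+bi) is the matrix [[a, b], [-b, a]] acting on the basis {1, i}.
import numpy as np

def f(z):
    """Matrix of 'multiply by z' with respect to the basis {1, i}."""
    a, b = z.real, z.imag
    return np.array([[a, b], [-b, a]])

z, w = 2 + 3j, 1 - 1j
print(np.allclose(f(z) @ f(w), f(z * w)))   # True: f is a ring homomorphism
```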
Ok, fine, but that assumes you already know how to multiply, and perhaps you are not yet comfortable enough to multiply non-solvable algebraic numbers! Again we use the polynomial $x^n - a_{n-1} x^{n-1} - \ldots - a_1 x - a_0$, but this time as a matrix. We use the same rule, viewing $F=K \oplus Kx \oplus Kx^2 \oplus \ldots \oplus Kx^{n-1}$, and ask what $x$ does to each basis element: well, $x^i$ is typically sent to $x^{i+1}$. It's only with the last one that things get funny:
$$f(x) = \begin{bmatrix} 0 & 1 & 0 & 0 & \ldots & 0 & 0 \\ 0 & 0 & 1 & 0 & \ldots & 0 & 0 \\
0 & 0 & 0 & 1 & \ldots & 0 & 0 \\
& & & & \ddots & & \\
0 & 0 & 0 & 0 & \ldots & 1 & 0 \\
0 & 0 & 0 & 0 & \ldots & 0 & 1 \\
a_0 & a_1 & a_2 & a_3 & \ldots & a_{n-2} & a_{n-1}
\end{bmatrix}$$
So this fancy “number” $x$ just becomes a matrix, most of whose entries are $0$. For instance $x^2 - (-1) = x^2 + 1$ (so $a_0 = -1$, $a_1 = 0$) gives the matrix $i = \left[\begin{smallmatrix} 0 & 1 \\ -1 & 0 \end{smallmatrix}\right]$.
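As a sketch (numpy again assumed), we can build the companion matrix straight from the coefficient list and check the $i$ example:

```python
# Companion matrix of x^n - a_{n-1}x^{n-1} - ... - a_1 x - a_0,
# in the row convention used above.
import numpy as np

def companion(a):
    """a = [a_0, ..., a_{n-1}]; rows give the action of x on 1, x, ..., x^{n-1}."""
    n = len(a)
    M = np.zeros((n, n))
    M[:-1, 1:] = np.eye(n - 1)   # x sends x^i to x^{i+1} for i < n-1
    M[-1, :] = a                 # x sends x^{n-1} to a_0 + a_1 x + ...
    return M

i = companion([-1, 0])           # x^2 - (-1): a_0 = -1, a_1 = 0
print(i)                         # [[0, 1], [-1, 0]]
print(i @ i)                     # [[-1, 0], [0, -1]], i.e. the number -1
```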
The nice part here is that different algebraic numbers actually get different matrix representations. The dark part is making sure that two unrelated algebraic numbers actually multiply together like elements of a field. You see, $M_n(K)$ has many subfields, but is not itself a field, so you have to choose matrices that both lie within a single subfield. For splitting fields and centralizer fields and all sorts of handy dandy fancy fields, you absolutely can make sure everything you care about comes from one field. Starting from just a bunch of polynomials, though, you need to be careful and find a single polynomial that works for both numbers at once; that this is always possible is the primitive element theorem.
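A sketch of the primitive element theorem in action, with sympy and numpy (both assumed): $\sqrt{2}+\sqrt{3}$ has a single minimal polynomial, and its companion matrix is one $4\times 4$ “number” whose arithmetic contains both radicals.

```python
# sqrt(2)+sqrt(3) generates a field containing sqrt(2) and sqrt(3).
import numpy as np
from sympy import sqrt, minimal_polynomial, symbols

x = symbols('x')
print(minimal_polynomial(sqrt(2) + sqrt(3), x))   # x**4 - 10*x**2 + 1

M = np.zeros((4, 4))
M[:-1, 1:] = np.eye(3)
M[-1] = [-1, 0, 10, 0]           # last row: [a_0, a_1, a_2, a_3]
print(np.allclose(np.linalg.matrix_power(M, 4)
                  - 10 * np.linalg.matrix_power(M, 2) + np.eye(4), 0))
# True: this 4x4 matrix satisfies x**4 - 10*x**2 + 1 = 0
```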
This also lets you see the difference between eigenvalues in the field $K$ and eigenvalues (“numbers”) in the field $F$: the former are actual numbers (in matrix land, multiples of the identity matrix), while the latter are full-fledged matrices that happen to lie in a subfield. If you ever studied the “real form” of the eigenvalue decomposition with $2\times 2$ blocks, those $2 \times 2$ blocks are exactly the $\begin{bmatrix}a&b\\-b&a\end{bmatrix}$ complex numbers.
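If you want to watch those blocks appear, scipy's real Schur form (an assumed tool, not something from the discussion above) refuses to diagonalize non-real eigenvalues over $\mathbb{R}$ and leaves them as $2\times 2$ blocks:

```python
# A matrix with eigenvalues 1 +/- 2i cannot be diagonalized over R, so its
# real Schur form keeps the conjugate pair together as one 2x2 block.
import numpy as np
from scipy.linalg import schur

A = np.array([[3., -2.], [4., -1.]])   # eigenvalues 1 + 2i and 1 - 2i
T, Z = schur(A, output='real')
print(T)   # a full 2x2 block with 1's on the diagonal, not a diagonal matrix
```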
From my perspective, the key to all such proofs is the completeness of the real numbers.
At the most basic level, the defining property of $\mathbb{R}$ is that it is a complete ordered field. Now $\mathbb{Q}$ is also an ordered field, so you are not going to be able to prove the existence of irrationals by pure algebra or inequalities. Somewhere you will have to use completeness. Indeed, the existence of irrationals is equivalent to the fact that $\mathbb{R}$ is complete but $\mathbb{Q}$ isn't.
It's interesting to look where completeness is used in the other usual proofs:
$\sqrt{2}$ is irrational: How do you know that $2$ has a square root at all? There are plenty of number systems in which lots of numbers just don't have square roots (e.g. integers, modular arithmetic; even in $\mathbb{R}$ the negative numbers don't have square roots). So what's so special about $\mathbb{R}$ that guarantees $2$ has a square root? Completeness, of course. You can construct a Cauchy sequence of rationals such that if it has a limit, the square of the limit must be 2 (a code sketch of this appears after this list). Or you can look at the Dedekind cut $\{x : x^2 < 2\}$. Or you can prove the intermediate value theorem (using completeness!) and consider the continuous function $x^2$, which takes values both above and below 2.
$\log_2 3$: Likewise, how do you know that $3$ has a log base 2? There are lots of other number systems in which logs don't exist: in the rationals, for example, there just isn't any number $x$ such that $2^x = 3$.
Continued fractions: How do you know that a continued fraction actually converges to something?
$\mathbb{Q}$ is countable but $\mathbb{R}$ is not: you have to use completeness somewhere in proving that $\mathbb{R}$ is uncountable. (The Baire category theorem is a neat way to see it made explicit.)
0.1010010001...: Why does every sequence of decimal digits actually represent a number?
$e$: You have to first give a rigorous definition of $e$. Whatever definition you choose, proving that there exists a number satisfying that definition will require using completeness.
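As promised above, here is a sketch of the Cauchy-sequence argument for $\sqrt 2$, done with exact rational arithmetic (Python's fractions module; the choice of tool is mine). The sequence lives entirely in $\mathbb{Q}$, and completeness is exactly the axiom that hands it a limit in $\mathbb{R}$.

```python
# Bisection on {x : x**2 < 2}, entirely inside Q. If the sequence has a
# limit, that limit must square to 2 -- but no rational does, so inside Q
# the sequence has nowhere to converge.
from fractions import Fraction

lo, hi = Fraction(1), Fraction(2)
for _ in range(20):
    mid = (lo + hi) / 2
    if mid * mid < 2:
        lo = mid
    else:
        hi = mid
print(lo, float(lo))    # an exact rational, approximately 1.4142...
print(float(lo * lo))   # close to 2, but never exactly 2 for a rational
```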
Best Answer
Transcendental numbers are numbers that cannot be defined in the language of algebra. Their existence shows that the basic concepts of arithmetic are not enough to fully describe all of the phenomena that occur in the real numbers.
Polynomials are precisely the formulae in one variable that can be written down using only addition, subtraction, and multiplication. Yes, they can be written as a sum of monomials, and this is a useful canonical form, but that makes for a poor definition, despite the fact that it's repeated as such endlessly by high school teachers and even most university-level sources. Thus, a polynomial equation with rational coefficients is just any equation that can be written down using the rational numbers and the $+$ and $\cdot$ signs. Of course by using negative coefficients, this also lets us use $-$ if we like. Furthermore, an equation that also uses division can always be reduced to the form $\frac {P(x)} {Q(x)}=0$, where $P$ and $Q$ are polynomials, and from there to $P(x)=0$, so if $x$ solves an equation involving division, it also solves an equation without division. And finally, any rational number can always be written using the four arithmetic operations and the numbers $0$ and $1$.
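A quick illustration of that reduction, sketched with sympy (my tooling assumption): an equation full of division collapses to $\frac{P(x)}{Q(x)}=0$, and the solutions are already visible in $P$.

```python
# Clearing denominators: expr uses division freely, yet expr = 0 holds
# exactly when P = 0 (wherever Q is nonzero).
from sympy import symbols, together, fraction

x = symbols('x')
expr = 1/(x + 1) + x/(x - 2) - 3     # an equation using division
P, Q = fraction(together(expr))      # expr == P/Q with P, Q polynomials
print(P.expand())                    # a polynomial equation in disguise
```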
Thus, one definition for an algebraic number is a number which satisfies a formula written using only $+, -,\cdot,\div, 0, 1$, and $=$. If we consider this alphabet (and associated grammar) to be the "language of algebra", then such a formula can be taken as a definition of that number written in that language. $\sqrt 2$ can be given such a definition. $\pi$ is usually defined by making reference to geometry, and what its transcendence means is that we need geometry (or at least something bigger than algebra) to define it. A transcendental number is one such that the only predicates written in the language of algebra that the number verifies are the trivial predicates verified by all numbers, like $x+x=2x$.
You might object that something like $x^2=2$ doesn't really define $\sqrt 2$, since after all that equation is also true for $-\sqrt 2$. This is true, and in fact this insight eventually leads to Galois theory. The numbers $\sqrt 2$ and $-\sqrt 2$ cannot be distinguished using algebra and the rational numbers, in much the same way that $\pi$ cannot be defined using algebra and the rational numbers. In Galois theory we have the notion of conjugate numbers over a given field $F$: numbers that cannot be distinguished "from the point of view of $F$". This means that any sentence written in the "language of $F$" is either true for both elements, or true for neither. It turns out that there's always a fundamental "minimal sentence" - the minimal polynomial - such that the conjugates of $a$ are precisely all of the numbers making that sentence true. Thus we cannot do better than the minimal polynomial as a definition for $a$ in the language of $F$: it is the sentence true for $a$ which is true for the fewest other elements.
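To make conjugates concrete, a sketch with sympy (assumed): the conjugates of $\sqrt[3]{2}$ over $\mathbb{Q}$ are exactly the roots of its minimal polynomial, one real and two non-real.

```python
# The "minimal sentence" for the cube root of 2, and the numbers that
# satisfy it -- its conjugates, indistinguishable from the point of view of Q.
from sympy import Rational, minimal_polynomial, roots, symbols

x = symbols('x')
alpha = 2**Rational(1, 3)
p = minimal_polynomial(alpha, x)
print(p)             # x**3 - 2
print(roots(p, x))   # the real cube root of 2 and two complex conjugates
```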
I remember reading somewhere that there's a general notion in logic called a "transcendental element" over a language or a formal system, or something like that, which is basically exactly what I outlined above: an element which verifies no sentences in the language other than the tautologies. Someone who knows more might leave a comment or an answer.