1) There is no algorithm that can compute arbitrarily many digits of $\Omega$. If we had some procedure for checking whether a given string of bits constitutes the first $n$ digits of $\Omega$'s binary expansion, we could simply brute-force our way through all numbers with denominators $2^k$, $k=1,2,3,\dots$, which would provide us with more and more digits of $\Omega$ - a contradiction.
However, all is not lost. First off, the set of rational numbers less than $\Omega$ is computably enumerable, so there exists an algorithm that lists them off one by one. More directly, we can compute successively better lower bounds by running programs and adding $2^{-|p|}$ for each program $p$ we discover to halt (we never know how close we are, though, or otherwise we could extract digits). If the aliens give us a number less than Chaitin's constant, then in theory we could confirm that. In practice, though, we might well not have enough time and resources to check their number if it's too close to the real value, so we wouldn't be sure (and of course, we would be guaranteed to fail if their number is $>\Omega$).
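This dovetailing idea can be sketched in Python for a toy prefix-free "machine" (entirely hypothetical, standing in for a real universal machine): programs have the self-delimiting form $1^k0\,x_1\cdots x_k$, and the body is decoded as a starting value for the Collatz iteration, with "halting" meaning that iteration reaches $1$ within a step budget.

```python
from fractions import Fraction

def collatz_halts(n, budget):
    """Run up to `budget` Collatz steps; return True if we reach 1."""
    steps = 0
    while n != 1 and steps < budget:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return n == 1

def programs(max_len):
    """Enumerate programs of the toy prefix-free machine: 1^k 0 then k bits."""
    for k in range(0, max_len):
        if k + 1 + k > max_len:
            break
        for x in range(2 ** k):
            body = format(x, f'0{k}b') if k else ''
            yield '1' * k + '0' + body

def omega_lower_bound(max_len, budget):
    """Dovetail: sum 2^-|p| over all programs observed to halt within budget."""
    total = Fraction(0)
    for p in programs(max_len):
        k = p.index('0')
        # Decode the body as a Collatz starting value >= 2 (arbitrary toy choice).
        n = int(p[k + 1:], 2) + 2 if k else 2
        if collatz_halts(n, budget):
            total += Fraction(1, 2 ** len(p))
    return total
```

Each program found to halt contributes $2^{-|p|}$, so the returned value is a monotonically improving lower bound on this toy machine's halting probability - but, exactly as above, nothing in the computation tells us how close to the true value we are.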
Second, just because no algorithm computes infinitely many digits doesn't mean we can't soundly prove the first $n$ digits for some fixed $n$ (and a fortuitous choice of Turing machine). MathWorld provides two examples of Chaitin's constant being computed, for two different TMs:
$$\Omega_U= 0.0000001000000100000110\cdots_2$$
$$\Omega_{U'}=0.0001000000010000101001110111000011111010\cdots_2 $$
For more on these, you'll have to see Calude (2002) and Calude, Dinneen (2007). Using these, we can do a rudimentary check on the alien's number by confirming or disconfirming that it agrees with what we have. But of course, this is extremely limited - and highly dependent on choice of Turing machine.
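Such a rudimentary check is just a prefix comparison. A minimal sketch, using the published prefix quoted above (valid only for that specific machine; the constant and function names here are my own):

```python
# Known prefix of Omega_U for one specific universal machine
# (the fractional digits of the first expansion quoted above).
KNOWN_OMEGA_U_BITS = '0000001000000100000110'

def consistent_with_known_prefix(candidate: str) -> bool:
    """Check that a candidate bit string agrees with the known digits,
    as far as the shorter of the two strings reaches."""
    n = min(len(candidate), len(KNOWN_OMEGA_U_BITS))
    return candidate[:n] == KNOWN_OMEGA_U_BITS[:n]
```

Agreement proves very little, since only finitely many digits are known - but disagreement is a definitive refutation, for that machine.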
Third, the digits of $\Omega$ are algorithmically random, i.e. the Kolmogorov complexity of the first $n$ digits (taken alone) is at least $n-O(1)$. Thus it might be possible to investigate the plausibility of candidate digits via approximations of Kolmogorov complexity or other statistical tests. However, this may require a very large number of candidate digits to obtain a useful indicator, possibly more than is feasible to work with in practice.
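One such crude test approximates Kolmogorov complexity from above with an off-the-shelf compressor: algorithmically random digits should be essentially incompressible, while structured digits compress well. A sketch (the seeded pseudo-random bits are only a stand-in for what genuine $\Omega$ digits ought to look like statistically):

```python
import random
import zlib

def compressed_size(bits: str) -> int:
    """Bytes of the zlib-compressed ASCII bit string: a crude computable
    upper bound on Kolmogorov complexity, up to encoding overhead."""
    return len(zlib.compress(bits.encode(), 9))

random.seed(0)  # deterministic, so the comparison below is reproducible
random_bits = ''.join(random.choice('01') for _ in range(4096))
periodic_bits = '01' * 2048

# The periodic string compresses drastically; random-looking digits do not.
```

Note the severe limitation: a compressor can only detect *some* kinds of structure, so compressing well rules a candidate out, while compressing badly proves nothing.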
2) Whether the answer to a general mathematics problem is "of any use" is outside the scope of computability theory. To the extent that proving or disproving a conjecture is inherently useful, calculating its truth or falsity from $\Omega$ would be useful. If an existence claim turned out false, we could stop searching computationally; if a universal claim turned out false, we could start searching computationally for a counterexample, knowing the search must succeed. Precisely what the implications of a conjecture are depends on the problem itself, e.g. the Riemann hypothesis affects the distribution of prime numbers and subsequent arithmetic bounds in number theory.
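For instance, if an $\Omega$-based oracle told us that a universal claim like Goldbach's conjecture were false, a brute-force search would be guaranteed to terminate on the counterexample. The search itself is elementary (a sketch; the function names are my own):

```python
def is_prime(n):
    """Trial-division primality test (fine for small n)."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def goldbach_counterexample(limit):
    """Search the even numbers 4..limit for one that is NOT a sum of two
    primes; return it, or None if every one checks out."""
    for n in range(4, limit + 1, 2):
        if not any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1)):
            return n
    return None
```

Without the oracle, this loop can only ever confirm finitely many cases; with a (hypothetical) guarantee of falsity, running it long enough would settle the matter.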
However, a simple computation of this sort won't lend any insight into why a problem turns out the way it does. Normally in mathematics, famous unsolved problems require whole new theories, perspectives, or directions to resolve, but we won't get that from an $\Omega$-based calculation, because none of the "reasoning" is found in the digits, nor in the checking of the digits. Chaitin's constant encodes only the answers, not the whys behind them.
Chaitin's number $\Omega$ is computably approximable from below (it is the limit of a computable increasing sequence of rationals), and for particular machines the first several digits have actually been determined.
Please check the paper Computing a Glimpse of Randomness by Cristian S. Calude, Michael J. Dinneen, and Chi-Kou Shu (2002), in which they calculate the first 64 bits of $\Omega_U$, where $U$ is a particular universal register machine.
In summary, the first 64 exact bits of $\Omega_U$ are:
0000001000000100000110001000011010001111110010111011101000010000
The value depends on the machine $U$.
The thesis Truth and Light: Physical Algorithmic Randomness by Michael A. Stay is also well worth reading. In it, the author gives all the background and several universal prefix-free machines suitable for this kind of study.
Chaitin's number is uncomputable, and therefore transcendental (every algebraic number is computable). From the viewpoint of number systems, all transcendental numbers are pretty much the same.
Assuming $c$ is the base, you're looking at the numbers of the form
$$a_n c^n + \dots + a_1 c + a_0 + a_{-1} c^{-1} + \dots + a_{-k} c^{-k}$$
where all digits $a_i$ are rationals (you could limit that further to integers, or just integers in a fixed range).
Because $c$ is transcendental, two numbers of this form are equal if and only if all of their coefficients are equal. For example, the only way to write $5$ is $a_0 = 5$ with all other $a_i$ equal to $0$. If you limit the $a_i$ to a range, say $\{0,1,2,3\}$, then the set will not be closed under addition: e.g. $2 + 3$ has no representation.
Assuming you allow arbitrary rational (or arbitrary integer) digits, you can add and multiply numbers of this form in much the same way you operate on polynomials. They form a ring isomorphic to the Laurent polynomials.
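This polynomial-style arithmetic is easy to make concrete: represent $\sum_i a_i c^i$ as a sparse map from exponents to rational coefficients (a sketch; the representation and function names are my own):

```python
from collections import defaultdict
from fractions import Fraction

# A number sum_i a_i c^i in "base" c, with c transcendental, is faithfully
# represented by its exponent -> coefficient map, since coefficients are unique.

def add(f, g):
    """Add two base-c numbers coefficient-wise, like polynomials."""
    h = defaultdict(Fraction)
    for poly in (f, g):
        for e, a in poly.items():
            h[e] += a
    return {e: a for e, a in h.items() if a}

def mul(f, g):
    """Multiply two base-c numbers by convolving coefficients, like polynomials."""
    h = defaultdict(Fraction)
    for e1, a1 in f.items():
        for e2, a2 in g.items():
            h[e1 + e2] += a1 * a2
    return {e: a for e, a in h.items() if a}

# Example: (c + 2)(c^{-1} + 3) = 3c + 7 + 2c^{-1}
x = {1: Fraction(1), 0: Fraction(2)}
y = {-1: Fraction(1), 0: Fraction(3)}
```

Note that no carrying ever happens: since $c$ is transcendental, coefficients at distinct exponents can never interact, which is exactly the isomorphism with Laurent polynomials.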
This set is not closed under division. To be able to divide, you can consider expressions of the form $f(c)/g(c)$ where $f,g$ are polynomials with rational coefficients. The arithmetic on those expressions will follow the same laws as arithmetic on rational functions; the field $\mathbb Q(c)$ is isomorphic to the set of rational functions $\mathbb Q(x)$.
This is the reason why we don't use transcendental numbers as a base - doing so really amounts to studying polynomials and rational functions.
All of the above applies to any transcendental number, e.g. you could use $c=\pi$ instead of $c=\text{Chaitin's constant}$. Unlike $\pi$, however, Chaitin's number is noncomputable: it is transcendental over the computable numbers, not just over the rationals. Therefore, for Chaitin's constant you could allow the $a_i$ to be computable, not just rational, and the same reasoning would apply.