1) There is no algorithm that can compute arbitrarily many digits of $\Omega$. If we were in some fashion capable of checking whether a given string of bits formed the first $n$ digits of $\Omega$'s binary expansion, we could simply brute-force our way through all numbers with denominators $2^k,$ $k=1,2,3,\dots$, which would provide us with more and more digits of $\Omega$ - a contradiction.
However, all is not lost. First off, the set of rational numbers less than $\Omega$ is computably enumerable, so there exists an algorithm that lists them off one by one. More directly, we can compute successively better lower bounds by running more and more programs for more and more steps and adding the contribution $2^{-|p|}$ of each program $p$ discovered to halt (we never know how close we are, though, or otherwise we could extract digits). If the aliens give us a number less than Chaitin's constant, then in theory we could confirm that. In practice, though, we might well not have enough time and resources to check their number if it's too close to the real value, so we wouldn't be sure (and of course, we would be guaranteed failure if their number is $>\Omega$).
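To make that lower-bound enumeration concrete, here is a minimal Python sketch under some loud assumptions: `halts_within` is a hypothetical placeholder for a simulator of whatever prefix-free universal machine defines our $\Omega$ (filled in here with a trivial toy machine just so the script runs end to end), and the dovetailing over lengths and step budgets is my own illustrative choice rather than any canonical procedure.

```python
from fractions import Fraction

def halts_within(program: str, step_budget: int) -> bool:
    """Hypothetical stand-in for: run program p on the chosen prefix-free
    universal machine for step_budget steps and report whether it halted.
    This toy version simply declares that the prefix-free family of programs
    0, 10, 110, 1110, ... halts, so that the script can actually run."""
    return (program.endswith("0")
            and set(program[:-1]) <= {"1"}
            and step_budget >= len(program))

def omega_lower_bound(max_len: int, step_budget: int) -> Fraction:
    """Dovetail over every program of length <= max_len, run each for
    step_budget steps, and add 2^(-|p|) for each program p seen to halt.
    The result is always a valid lower bound on Omega for that machine;
    it creeps upward as the budgets grow, but we never learn how close it is."""
    bound = Fraction(0)
    for length in range(1, max_len + 1):
        for n in range(2 ** length):
            program = format(n, "b").zfill(length)
            if halts_within(program, step_budget):
                bound += Fraction(1, 2 ** length)
    return bound

for budget in (2, 4, 8):
    print(budget, omega_lower_bound(max_len=budget, step_budget=budget))
```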
Second, just because no algorithm computes infinitely many digits doesn't mean we can't soundly prove the first $n$ digits for some fixed $n$ (and a fortuitous choice of Turing machine). MathWorld provides two examples of Chaitin's constant having been computed, one for each of two universal TMs $U_1$ and $U_2$:
$$\Omega_{U_1} = 0.0000001000000100000110\cdots_2$$
$$\Omega_{U_2} = 0.0001000000010000101001110111000011111010\cdots_2$$
For more on these, see Calude (2002) and Calude and Dinneen (2007). Using these, we can do a rudimentary check on the aliens' number by confirming or disconfirming that it agrees with what we have. But of course, this is extremely limited - and highly dependent on the choice of Turing machine.
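As a sketch of that rudimentary check (my own illustration; the helper name is made up, and which proven digits apply depends entirely on the machine the aliens and we agree on), the comparison is just a prefix test over whatever overlap we have:

```python
def agrees_with_known_prefix(candidate_bits: str, known_bits: str) -> bool:
    """Compare the candidate against the finitely many digits actually proved
    for the chosen machine.  Only the overlap can be tested, and agreement
    there says nothing about the digits beyond it."""
    overlap = min(len(candidate_bits), len(known_bits))
    return candidate_bits[:overlap] == known_bits[:overlap]

# digits proved for one particular machine: the first MathWorld value quoted above
known = "0000001000000100000110"
print(agrees_with_known_prefix("00000010000001000001110101", known))  # False: disagrees inside the overlap
```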
Third, the digits of $\Omega$ are algorithmically random, i.e. the first $n$ digits (taken alone) have Kolmogorov complexity at least $n-O(1)$: no description substantially shorter than $n$ bits can produce them. Thus it might be possible to investigate the plausibility of candidate digits via approximations of Kolmogorov complexity or other statistical tests. However, this may require a very large number of candidate digits in order to obtain a useful indicator, possibly more than is feasible to realistically work with.
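As a very rough illustration of that idea (my own sketch, using a general-purpose compressor as a crude and admittedly poor stand-in for Kolmogorov complexity), one can pack the candidate bits into bytes and see whether they compress; a genuinely random prefix should not:

```python
import os
import zlib

def compression_ratio(bit_string: str) -> float:
    """Crude plausibility probe based on incompressibility: pack the candidate
    bits into bytes and see how well a general-purpose compressor does on them.
    A genuinely algorithmically random prefix should not compress, while an
    obviously patterned candidate will.  This is only a weak necessary
    condition, it needs a large number of bits to say anything at all, and
    zlib is a very poor substitute for true Kolmogorov complexity."""
    padded = bit_string + "0" * (-len(bit_string) % 8)   # pad to whole bytes
    raw = bytes(int(padded[i:i + 8], 2) for i in range(0, len(padded), 8))
    return len(zlib.compress(raw, 9)) / max(len(raw), 1)

print(compression_ratio("01" * 4000))                     # patterned: ratio far below 1
random_bits = "".join(format(b, "08b") for b in os.urandom(1000))
print(compression_ratio(random_bits))                     # random-looking: ratio about 1 or above
```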
2) Whether the answer to a general mathematics problem is "of any use" is outside the scope of computability theory. To the extent that proving or disproving a conjecture is inherently useful, calculating its truth or falsity from $\Omega$ would be useful. If an existence claim turned out false, we could stop searching computationally for a witness; if a universally quantified claim turned out false, we could start searching computationally for a counterexample. Precisely what the implications of a conjecture are depends on the problem itself, e.g. the Riemann hypothesis affects the distribution of prime numbers and subsequent arithmetic bounds in number theory.
However, a simple computation of this sort won't lend any insight into why a problem turns out the way it does. Normally in mathematics, famous unsolved problems require whole new theories, perspectives, or directions to resolve, but we won't get that from an $\Omega$-based calculation, because none of the "reasoning" is found in the digits or in the checking of the digits. Chaitin's constant encodes only the answers, not the whys behind them.
Best Answer
A CRC computation is as follows. You have a data polynomial $d(x) = \sum_{i=0}^{n-1} d_ix^i$ where the $d_i \in \{0, 1\}$ are $n$ bits to be transmitted (or recorded). What is actually transmitted (or recorded) is $$t(x) = x^{16}d(x) + p(x)$$ where $p(x)$ is the *remainder polynomial* when $x^{16}d(x)$ is divided by the CRC polynomial $C(x) = x^{16} + x^{12} + x^5 + 1$. Note that polynomial division yields $$ x^{16}d(x) = q(x)C(x) + p(x) $$ where the quotient $q(x)$ is discarded. But the above equation can be re-arranged as $$ q(x)C(x) = x^{16}d(x) - p(x) = x^{16}d(x) + p(x) = t(x) $$ (because arithmetic on the polynomial coefficients is done modulo $2$ and subtraction is thus the same as addition), and thus $t(x)$ is a multiple of the CRC polynomial $C(x)$. Since $p(x)$ is of degree $15$ or less, the high-order coefficients of the polynomial $t(x)$ are the data bits while the low-order coefficients are the so-called CRC bits or parity bits, that is $$ t(x) = d_{n-1}x^{n-1+16} + d_{n-2}x^{n-2+16} + \cdots + d_0x^{16} + p_{15}x^{15} + p_{14}x^{14} + \cdots + p_0. $$
In computing $p(x)$ via polynomial long division using paper and pencil (a short code sketch of the same division follows this list), you need to remember that
(i) $\quad \quad$ $d(x)$ is multiplied by $x^{16}$ before beginning the long division
(ii) $\quad \quad$ the subtractions of polynomial coefficients that occur in the long division process are all done modulo $2$ and thus are the same as additions modulo $2$ (that is, the XOR operation)
(iii) $\quad \quad$ the long division continues until the $d_0x^{16}$ term has been processed and the remainder is a polynomial of degree $15$ or less.
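To tie the recipe together, here is a minimal Python sketch of this plain division (polynomial $x^{16} + x^{12} + x^5 + 1$, register starting at zero, none of the practical bells and whistles mentioned below); the function names and the bits-as-lists representation are my own choices for illustration, not any standard API.

```python
def crc16_remainder(data_bits, poly=0x1021):
    """Remainder p(x) of x^16 * d(x) divided by C(x) = x^16 + x^12 + x^5 + 1
    (0x1021 holds the low 16 coefficients of C).  data_bits lists the
    coefficients d_{n-1}, ..., d_0 highest degree first; the 16 appended zero
    bits play the role of the multiplication by x^16."""
    reg = 0
    for bit in list(data_bits) + [0] * 16:
        carry = (reg >> 15) & 1            # coefficient about to reach degree 16
        reg = ((reg << 1) & 0xFFFF) | bit  # shift the next dividend bit in
        if carry:
            reg ^= poly                    # "subtract" C(x): XOR, since coefficients are mod 2
    return reg                             # p_15, ..., p_0 packed into a 16-bit integer

def transmitted_bits(data_bits):
    """t(x) = x^16 d(x) + p(x): the data bits followed by the 16 parity bits."""
    p = crc16_remainder(data_bits)
    return list(data_bits) + [(p >> k) & 1 for k in range(15, -1, -1)]

frame = transmitted_bits([1, 0, 1, 1, 0, 0, 1, 0])   # one example data byte d_7 ... d_0
print(frame)
```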
Practical CRC systems often have bells and whistles (such as complementing the high-order $8$ bits of $x^{16}d(x)$ before the division process begins, etc.) which I have not included above because they are likely not of interest in this forum. However, ignoring such details in your paper-and-pencil computations will make your results differ from the ones your machine is giving you.
At the receiving (or reading) end, what you have is a polynomial $r(x)$ which might be *slightly* different from $t(x)$ if transmission (or read) errors changed some of the bits. The CRC error detection process merely divides $r(x)$ by $C(x)$; there is no need to multiply $r(x)$ by $x^{16}$ first. If the remainder is nonzero, then $r(x)$ is *not* a multiple of $C(x)$ and so the receiver knows that transmission (or read) errors have occurred. But if the remainder is $0$, then $r(x)$ is a multiple of $C(x)$, and with high probability is the same as $t(x)$. The low-order $16$ bits in $r(x)$ are discarded and the high-order $n$ bits are accepted as error-free data. When the remainder is $0$, there is a small probability that the difference between $r(x)$ and $t(x)$ is a nonzero multiple of $C(x)$. Such errors are referred to as undetected errors. The probability of undetected error decreases as the degree of $C(x)$ increases. For this reason, many modern systems use CRC polynomials of degree $32$.
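Continuing the sketch above (and reusing the `frame` built there), the receiver-side test is the same shift-and-XOR loop applied to $r(x)$ directly, with the frame accepted only when the remainder comes out zero:

```python
def crc16_accepts(received_bits, poly=0x1021):
    """Divide r(x) itself by C(x), with no x^16 shift, and report whether the
    remainder is zero, i.e. whether r(x) is a multiple of C(x)."""
    reg = 0
    for bit in received_bits:
        carry = (reg >> 15) & 1
        reg = ((reg << 1) & 0xFFFF) | bit
        if carry:
            reg ^= poly
    return reg == 0

print(crc16_accepts(frame))      # True: an unchanged frame divides evenly
corrupted = list(frame)
corrupted[3] ^= 1                # flip one bit "in transit"
print(crc16_accepts(corrupted))  # False: the nonzero remainder flags the error
```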