Are there countably many symbols (in the context of computation theory)?

computability, computer-science, elementary-set-theory, turing-machines

I've seen this kind of argument for the countability of Turing-recognizable languages in several places:

For any Turing machine $M$, consider its encoding as a string $\langle M \rangle$. This encoding lies in $\Sigma^*$ for some alphabet $\Sigma$, say the alphabet of the English language or simply $\{0,1\}$. Since the set of all strings in $\Sigma^*$ is countable, so is the set of all Turing machines, and therefore so is the set of all Turing-recognizable languages.
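
Spelled out, the countability step here is the usual one: for a finite (or countable) alphabet $\Sigma$,
$$\Sigma^* \;=\; \bigcup_{n \ge 0} \Sigma^n,$$
and each $\Sigma^n$ is finite (respectively countable), so $\Sigma^*$ is a countable union of countable sets and hence countable. Since every Turing machine is determined by its encoding $\langle M \rangle \in \Sigma^*$, the map $M \mapsto \langle M \rangle$ is injective, and the set of Turing machines is countable as well.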

I believe this argument requires the set of all symbols to be countable, since if there were uncountably many symbols, then there would be uncountably many Turing machines, e.g. the machines $M_{\sigma}$ that accept exactly the one-symbol string $\sigma$. Only if the set of symbols is countable can we ensure that the set of all of its finite subsets is countable, as also mentioned in a comment here.
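
For completeness, a sketch of that last claim: if $S = \{s_1, s_2, \dots\}$ is countable, then its set of finite subsets is
$$[S]^{<\omega} \;=\; \bigcup_{n \ge 0} [S]^{n},$$
where $[S]^{n}$, the set of $n$-element subsets, injects into $S^n$ and is therefore countable; a countable union of countable sets is countable. If instead $S$ were uncountable, already the singletons $\{\sigma\}$ would form an uncountable family.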

So my question is: do we always take the set of tape symbols $\Gamma$ of a Turing machine to be a subset of some fixed mother set which is countable? In my books (Introduction to Languages and the Theory of Computation by John Martin and Introduction to the Theory of Computation by Michael Sipser) no such thing is mentioned; I would also appreciate references.

Best Answer

Yes, the general assumption is that the number of symbols is countable. Remember that with his definition of Turing-machines, Turing tried to capture what we humans would be able to effectively compute. And he argued that the human mind would only be able to deal with a finite number of symbols. One reason for this is that if we have to write symbols on a piece of paper (which is how humans compute), and we assume that there is only some finite space to do so (this of course corresponds to a 'square'), then we are only able to perceptually/visually discriminate between a finite number of symbols.

Here is how he put it himself in his original paper:

I shall also suppose that the number of symbols which may be printed is finite. If we were to allow an infinity of symbols, then there would be symbols differing to an arbitrarily small extent.

Now, we don't want to put an exact finite number $n$ as the upper limit on the number of symbols a Turing-machine can have: maybe some computer (whether human or mechanical) could deal with more than $n$ symbols. And saying that the number is finite, though arbitrary, certainly makes the theoretical work a lot easier, and it provides a nice upper bound on what can be computed. So this is why we typically assume that any Turing-machine uses a finite subset of some infinite, but still countable, set of symbols.

Still, as @user1729 points out, why not say that any Turing-machine uses a finite subset of some uncountable set of symbols? That would still lead to an uncountable number of Turing-machines. Well, again following our concept of computation, it should be clear that whether a machine uses the symbols $0$ and $1$, or the symbols $A$ and $B$, makes no difference as far as the scope and limits of computation go: whatever can be computed using $\{0,1\}$ can be computed using $\{A,B\}$, and vice versa. So we might as well say that we have a countable set of symbols $s_1, s_2, \ldots$, out of which any specific Turing-machine uses some finite number.
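
To make the relabeling point precise, here is a sketch: if $\pi \colon \Gamma \to \Gamma'$ is a bijection between alphabets and $M$ is a Turing-machine with tape alphabet $\Gamma$, let $M^{\pi}$ denote the machine obtained by replacing every symbol $a$ occurring in $M$'s transition table by $\pi(a)$. Then for every input $w \in \Gamma^*$,
$$M \text{ accepts } w \iff M^{\pi} \text{ accepts } \pi(w),$$
since the two machines have identical computations up to the renaming of symbols; hence $L(M^{\pi}) = \pi(L(M))$, and nothing about the scope or limits of computation depends on which particular symbols are used.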

Surely your textbooks go into this conceptual underpinning of computation, right? I mean, they don't just throw out some mathematical definition of a Turing-machine without explaining what this definition is supposed to capture?