Any language "generated" by a Turing machine that always halts the way you mentioned (i.e., $s\in S$ iff $T(x)=s$ for some input $x$ to the fixed Turing machine $T$) will be Turing recognisable.
The way to recognise the language $S$ is to use the Turing machine which does the following:
Machine A: Turing machine to recognise the language $S=\{s\mid \exists x : T(x)=s\}$, where $T$ always halts
- given an input $s$ (to check whether $s\in S$)
- for each possible input string $x$, in some enumeration:
  - simulate $T(x)$
  - if $T(x)=s$, then accept
If $s\in S$, then the above machine will eventually halt and accept when it comes to a string $x$ such that $T(x)=s$; if $s\notin S$ then this machine will run forever.
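As a concrete sketch, here is Machine A in Python. The machine $T$ is a toy stand-in (a total function that sorts its input; any always-halting machine would do), and the alphabet is an assumption for illustration:

```python
from itertools import count, product

ALPHABET = "ab"

def T(x):
    # Toy "generator" machine for illustration: T always halts and
    # outputs the sorted version of its input.
    return "".join(sorted(x))

def recognize(s):
    # Machine A: enumerate every input string x (length 0, 1, 2, ...)
    # and run T on it; accept as soon as T(x) == s.
    # Since T always halts, each simulation finishes, so the search
    # halts iff s is in S = { T(x) : x a string }.
    for n in count():
        for chars in product(ALPHABET, repeat=n):
            x = "".join(chars)
            if T(x) == s:
                return True
```

Note that `recognize` returns only when $s\in S$; for $s\notin S$ the outer loop runs forever, exactly as described above.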
Nonempty Turing recognisable languages can also be "generated" by a Turing machine in this way.
Suppose $S$ is a nonempty language recognised by the Turing machine $T$, so that we have some known element $s_0\in S$, and consider the following algorithm:
Machine B: Turing machine which always halts to generate the nonempty language $S\ni s_0$ recognised by $T$
- take as input a string $s$ and a number $n$
- simulate $T(s)$
- if $T(s)$ halts within $n$ steps and accepts, then print $s$
- otherwise, print $s_0$
The language generated by the above machine is exactly $S$: for any $s\in S$, if $T$ accepts $s$ within $n$ steps, then the above machine prints $s$ on input $(s,n)$.
If $s\notin S$, then no possible input will make the above machine print $s$.
However, the Turing recognisable language $\varnothing$ cannot be generated by a Turing machine that always halts: such a machine must print some string on every input, so the language it generates is necessarily nonempty.
If we are more flexible and consider languages generated by a Turing machine which is allowed to run forever, then we can modify the machines to fix this.
Machine A': Turing machine to recognise the language $S = \{s \mid \exists x:T(x)=s\}$ where $T$ may not halt
- given an input $s$
- for each pair $(x,n)$ in some enumeration of all pairs, where $x$ is a string and $n$ is a number:
  - simulate $T(x)$ for $n$ steps
  - if $T(x)$ halts within $n$ steps and outputs $s$, then accept
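Machine A' can be sketched with the same generator trick; the dovetailing over pairs $(x,n)$ is what keeps a non-halting $T(x)$ from blocking the search. The machine `T` below (which loops forever on inputs containing `b`, and otherwise outputs its input doubled) is purely an illustrative assumption:

```python
from itertools import count, product

ALPHABET = "ab"

def T(x):
    # Toy machine that may not halt: loops forever on inputs
    # containing "b"; otherwise takes one step per character and
    # then halts with output x + x.  (Assumptions for illustration.)
    if "b" in x:
        while True:
            yield
    yield from x
    return x + x

def run_bounded(machine, n):
    # Run a generator-style machine for at most n steps.
    # Returns (True, output) if it halted within n steps,
    # else (False, None).
    for _ in range(n):
        try:
            next(machine)
        except StopIteration as stop:
            return True, stop.value
    return False, None

def recognize_prime(s):
    # Machine A': dovetail over pairs (x, n).  Each simulation is
    # cut off after n steps, so a non-halting T(x) never blocks the
    # enumeration.  Halts iff s is an output of T.
    for n in count():
        for length in range(n + 1):
            for chars in product(ALPHABET, repeat=length):
                x = "".join(chars)
                halted, out = run_bounded(T(x), n)
                if halted and out == s:
                    return True
```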
Machine B': Turing machine, which may not always halt, that generates a language $S$ recognised by $T$
- takes as input a string $s$
- simulate $T(s)$
- if $T(s)$ eventually halts and accepts, print $s$
- if $T(s)$ halts and rejects, loop forever
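A minimal sketch of Machine B', taking the recognizer as a parameter (here an ordinary Python predicate, with genuine non-halting modelled by an infinite loop):

```python
def machine_B_prime(s, recognizer):
    # Machine B': simulate the recognizer on s to completion.
    # If it accepts, print (return) s; if it rejects, loop forever,
    # so a rejected s is never printed.  This machine may not halt.
    if recognizer(s):
        return s
    while True:
        pass

# Example with a toy recognizer (accepts even-length strings --
# an assumption for illustration):
printed = machine_B_prime("ab", lambda s: len(s) % 2 == 0)
```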
Therefore, in summary:
Nonempty Turing recognisable languages are equivalently sets of outputs generated by some Turing machine that always halts
Turing recognisable languages are equivalently sets of outputs generated by some Turing machine that may not halt on every input
Yes, the general assumption is that the number of symbols is countable. Remember that with his definition of Turing-machines, Turing tried to capture what we humans would be able to effectively compute. And he argued that the human mind would only be able to deal with a finite number of symbols. One reason for this is that if we have to write symbols on a piece of paper (which is how humans compute), and we assume that there is only some finite space to do so (this of course corresponds to a 'square'), then we are only able to perceptually/visually discriminate between a finite number of symbols.
Here is how he put it himself in his original paper:
> I shall also suppose that the number of symbols which may be printed is finite. If we were to allow an infinity of symbols, then there would be symbols differing to an arbitrarily small extent.
Now, we don't want to put an exact finite number $n$ as the upper limit on the number of symbols a Turing-machine can have: maybe some computer (whether human or mechanical) could deal with more than $n$ symbols. And saying that the number is finite, though arbitrary, certainly makes the theoretical work a lot easier, and provides a nice upper bound as to what can be computed. So this is why we typically assume that any Turing-machine uses a finite subset of some infinite, but still countable, set of symbols.
Still, as @user1729 points out, why not say that any Turing-machine uses a finite subset of some uncountable set of symbols? That would still lead to an uncountable number of Turing-machines. Well, again following our concept of computation, it should be clear that whether a machine uses symbols $0$ and $1$, or whether it uses $A$ and $B$, makes no difference as far as the scope and limits of computation go: whatever one can compute using $\{0,1\}$ is exactly what one can compute using $\{A,B\}$. So we might as well say that we have a countable set of symbols $s_1, s_2, \ldots$, out of which any specific Turing-machine uses some finite number.
Surely your textbooks go into this conceptual underpinning of computation, right? I mean, they don't just throw out some mathematical definition of a Turing-machine without explaining what this definition is supposed to capture?
Best Answer
From a slightly different perspective: for theoretical purposes it's often better to work with computable and uncomputable sets rather than languages; indeed, if your language is over a finite alphabet $A$, then there's the classic natural correspondence between $A^*$ and $\mathbb{N}$ given by 'base-$|A|$ representation', and it shouldn't be hard to convince yourself that a language $\mathcal{L}\subseteq A^*$ is Turing recognizable iff the set $S_{\mathcal{L}}\subseteq\mathbb{N}$ given by the correspondence is computably enumerable (and decidable iff $S_{\mathcal{L}}$ is computable).
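The 'base-$|A|$ representation' here can be taken to be bijective base-$|A|$ numeration; a quick sketch of the bijection between $A^*$ and $\mathbb{N}$ (the function names are ad hoc):

```python
def string_to_nat(s, alphabet):
    # Bijective base-|A| numeration: a bijection A* <-> N.
    # For alphabet "ab": "" -> 0, "a" -> 1, "b" -> 2, "aa" -> 3, ...
    k = len(alphabet)
    n = 0
    for ch in s:
        n = n * k + alphabet.index(ch) + 1
    return n

def nat_to_string(n, alphabet):
    # Inverse of string_to_nat.
    k = len(alphabet)
    out = []
    while n > 0:
        n, r = divmod(n - 1, k)
        out.append(alphabet[r])
    return "".join(reversed(out))
```

Both directions are clearly computable, which is what makes the transfer between languages and subsets of $\mathbb{N}$ harmless.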
This lets us use a standard topology on subsets of $\mathbb{N}$; for instance, we can use a metric like $d(A,B)=2^{-n}$, where $n$ is the smallest element of $A\Delta B$ (and $d(A,B)=0$ when $A=B$). Under this topology, the computable sets are indeed 'dense' in the set of all subsets of $\mathbb{N}$; in fact, the finite sets (all of which are computable) are already dense. (This is exactly analogous to the dyadic rationals being dense in the reals, although the 'natural' mapping $S\subseteq\mathbb{N}\mapsto \sum_{s\in S} 2^{-s}$ of subsets of the naturals into $[0,1]$ doesn't quite work, since it's not 1-to-1: for instance, $\{0\}$ and $\{1,2,3,\dots\}$ both map to $1$.)
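To see the metric and the density claim concretely, here is a small sketch (it checks agreement only up to a finite bound, which suffices for comparing a set against its finite truncations):

```python
def dist(A, B, limit=1000):
    # d(A, B) = 2^(-n), where n is the smallest element of the
    # symmetric difference A Δ B, and 0 if the sets agree.
    # (Agreement is only checked up to `limit` in this sketch.)
    for n in range(limit):
        if (n in A) != (n in B):
            return 2.0 ** -n
    return 0.0

# Finite truncations of the even numbers converge to the even numbers,
# illustrating that finite sets are dense:
evens = set(range(0, 100, 2))
truncations = [set(range(0, k, 2)) for k in (2, 6, 12)]
distances = [dist(evens, t) for t in truncations]
```

Each longer truncation agrees with `evens` on a longer initial segment of $\mathbb{N}$, so the distances shrink geometrically.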