Let $(S, \mathcal{F}, P)$ be a probability triplet: $S$ is the sample space, $\mathcal{F}$ is a sigma algebra on $S$ (containing all the events), and $P:\mathcal{F}\rightarrow \mathbb{R}$ is a probability measure.
Suppose $\{X_i\}_{i=1}^{\infty}$ is a sequence of mutually independent and identically distributed (i.i.d.) random elements. In particular
$$X_i:S\rightarrow \{H, T\}$$
is a measurable map from the sample space to the set $\{H,T\}$, so for each $i \in \{1, 2, 3,...\}$ we have
$$\{\omega \in S: X_i(\omega) = H\} \in \mathcal{F}$$
Assume $P[X_i=H]=P[X_i=T]=1/2$.
Claim 1: For each sequence $\{h_i\}_{i=1}^{\infty}$ with $h_i \in \{H,T\}$, we have
$$\bigcap_{i=1}^{\infty} \{X_i=h_i\}= \bigcap_{i=1}^{\infty} \{\omega \in S: X_i(\omega)=h_i\} \in \mathcal{F}$$
Proof: Since $\mathcal{F}$ is a sigma algebra, the countable intersection of events in $\mathcal{F}$ is in $\mathcal{F}$. $\Box$
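A complementary remark worth making explicit (not needed for Claim 1 itself): although each such intersection is an event, independence forces its probability to be zero, since for every $n$
$$P\Big[\bigcap_{i=1}^{\infty}\{X_i=h_i\}\Big] \le P\Big[\bigcap_{i=1}^{n}\{X_i=h_i\}\Big] = \prod_{i=1}^{n} P[X_i=h_i] = 2^{-n}$$
and letting $n \to \infty$ gives probability $0$. This is why throwing away a single sequence, as in part (c) below, costs nothing.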
Claim 2: It is possible to construct $(S, \mathcal{F}, P)$, for which such i.i.d. random elements $\{X_i\}$ exist, in these example cases:
a) $S = \{red, blue\} \cup [0,1)$
b) $S = A$, where $A=\{(h_1, h_2, h_3, ...) : h_i \in \{H,T\}\quad \forall i \in \{1, 2, 3,...\}\}$.
c) $S = A \setminus \{(T, T, T, T, ...)\}$.
In particular, examples (a) and (c) can be viewed as "counter-examples" to your claim that the sample space must contain all binary H/T sequences. Example (c) contains every binary H/T sequence except the all-Tails sequence $(T, T, T, T,...)$: just start with the probability triplet in part (b) and throw away the probability-0 outcome of all Tails.
Quick justifications for (a)-(c):
a) Take $\mathcal{F}$ to consist of all sets $B \cup C$ with $B$ a Borel subset of $[0,1)$ and $C \subseteq \{red, blue\}$. Set $P[\{red\}]=P[\{blue\}]=0$ and let $P$ agree with Lebesgue measure on the Borel subsets of $[0,1)$. Write $\omega\in [0,1)$ as
$$ \omega = \sum_{i=1}^{\infty} \omega_i 2^{-i}$$
where $\{\omega_i\}$ is the unique binary expansion that does not contain an infinite tail of 1s. Define for each $i \in \{1,2,3,...\}$
$$X_i(red)=X_i(blue)=H$$
For $\omega \in [0,1)$ define
$$X_i(\omega) = \begin{cases}
H & \text{if } \omega_i=1\\
T & \text{otherwise}
\end{cases}$$
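As a numerical sanity check (not part of the argument), construction (a) can be sketched in Python; the helper name `X` and the Monte Carlo check are my own illustration, using the fact that the $i$-th binary digit of $\omega$ is $\lfloor \omega \cdot 2^i \rfloor \bmod 2$:

```python
import random

def X(i, omega):
    """i-th coin flip in construction (a): H iff the i-th binary digit
    of omega is 1. int(omega * 2**i) % 2 extracts the digit from the
    terminating expansion, i.e. the one with no infinite tail of 1s."""
    if omega in ("red", "blue"):  # the two extra points both map to H
        return "H"
    return "H" if int(omega * 2**i) % 2 == 1 else "T"

# Under the uniform (Lebesgue) measure on [0,1), each digit should be
# 1 about half the time, so P[X_i = H] should be close to 1/2:
random.seed(0)
n = 100_000
freq = sum(X(3, random.random()) == "H" for _ in range(n)) / n
```

The floating-point digit extraction is reliable only for moderate $i$ (a `float` carries about 52 binary digits), which is enough to illustrate the construction.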
b) This is the standard construction: equip $A = \{H,T\}^{\infty}$ with the product sigma algebra generated by the cylinder sets, let $P$ be the product measure giving each coordinate probability $1/2$, and let $X_i$ be the $i$-th coordinate map.
c) Just start with (b) and throw away the outcome $(T,T,T,...)$, which has probability 0; restrict $\mathcal{F}$ and $P$ to subsets of the new $S$.
Best Answer
Let $X$ be the trial on which the $k$'th success occurs. The $k$'th success occurs at the $i$'th trial exactly when there are $k-1$ successes in the first $i-1$ trials and the $i$'th trial is a success. Writing $B(n,p)$ for a binomial count over $n$ trials, $$P(X=i) = p \cdot P\big(B(i-1,p)=k-1\big) = p \binom{i-1}{k-1}p^{k-1}(1-p)^{i-k}$$
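A quick numerical check of this pmf (the function name `neg_binom_pmf` and the truncated summation range are my own choices): the probabilities over $i = k, k+1, \dots$ should sum to $1$.

```python
import math

def neg_binom_pmf(i, k, p):
    """P(k-th success occurs on trial i) = C(i-1, k-1) * p^k * (1-p)^(i-k)."""
    if i < k:
        return 0.0  # need at least k trials for k successes
    return math.comb(i - 1, k - 1) * p**k * (1 - p)**(i - k)

# Truncating at i = 199 leaves a negligible tail for k = 3, p = 0.5,
# so the partial sum should be essentially 1:
total = sum(neg_binom_pmf(i, 3, 0.5) for i in range(3, 200))
```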