It sometimes is a useful exercise to separate
the random from the non-random pieces of the puzzle.
Let's build up a stopping time, starting without randomness,
along the lines of your intuition. Suppose that the
observations $X_j$ take values in the space $S$, and
let $S^\mathbb{N}$ be the space of $S$-valued sequences.
For any strategy or stopping policy, and any $0\leq n<\infty$,
we may define a two-valued map $\phi_n:S^\mathbb{N}\to\{\mbox{GO},\mbox{STOP}\}$
which tells me what to do at time $n$ if I were to observe $s=(s_0,s_1,\dots)$. We require that $\phi_n(s)$ depend only on the initial segment $(s_0,s_1,\dots,s_n)$.
That is, the decision to stop at time $n$ must only depend on the
observations up to time $n$. No peeking into the future!
Now define $\phi(s)=\inf\{n\geq 0: \phi_n(s)=\mbox{STOP}\}$, where the infimum
of the empty set is $\infty$. This gives a map
$\phi:S^\mathbb{N}\to \mathbb{N}\cup \{\infty\}$ which expresses our policy,
by telling us when to stop.
Finally we can put probability back into the picture by defining
$\tau:\Omega\to \mathbb{N}\cup \{\infty\}$ by
$$\tau(\omega)=\phi(X_0(\omega), X_1(\omega), X_2(\omega), \dots ).$$
This random variable is the stopping strategy applied to the
random sequence $(X_0(\omega), X_1(\omega), X_2(\omega), \dots)$.
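The construction above can be sketched in Python. The threshold policy here is a hypothetical stand-in for $\phi_n$; the point is that each $\phi_n$ looks only at the prefix $(s_0,\dots,s_n)$, and $\phi$ takes the infimum:

```python
import random

# A hypothetical policy: STOP the first time the running sum of
# observations reaches at least 3.  phi_n only inspects the prefix
# (s_0, ..., s_n) -- no peeking into the future.
def phi_n(n, s):
    return "STOP" if sum(s[: n + 1]) >= 3 else "GO"

def phi(s):
    # phi(s) = inf{n >= 0 : phi_n(s) = STOP}; the infimum of the
    # empty set is infinity
    for n in range(len(s)):
        if phi_n(n, s) == "STOP":
            return n
    return float("inf")

# Put probability back in: apply the deterministic policy phi to a
# random sequence X_0, X_1, ... of {0,1}-valued observations.
random.seed(0)
X = [random.randint(0, 1) for _ in range(100)]
tau = phi(X)  # the stopping time evaluated on this sample path
print(tau)
```

Here `phi` is purely deterministic; all the randomness enters through the sample path `X`, exactly as in the definition of $\tau(\omega)$.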
Every stopping time $\tau$ can be expressed like this for some such $\phi$.
For $0\leq n< \infty$, by the Doob-Dynkin lemma, there is a measurable map
$\varphi_n:(S^\mathbb{N},{\cal G}_n) \to \{0,1\}$ so that $1_{[\tau=n]}=\varphi_n(X_0,X_1,X_2,\dots)$.
Here ${\cal G}_n$ is the $\sigma$-field generated by the coordinate maps $s_j$ for $0\leq j\leq n$. Now let $\phi(s)=\inf\{n\geq 0: \varphi_n(s)=1\}$.
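As a small sanity check on this prefix-measurability, here is a Python experiment (using a hypothetical threshold-3 hitting rule as the stopping time): on the event $[\tau=n]$, scrambling the coordinates after time $n$ never changes the value of $\tau$, which is exactly the statement that $1_{[\tau=n]}$ is ${\cal G}_n$-measurable.

```python
import random

# A hypothetical hitting-time: first n at which the running sum of the
# sequence reaches 3.
def tau_of(s):
    total = 0
    for n, x in enumerate(s):
        total += x
        if total >= 3:
            return n
    return float("inf")

random.seed(3)
ok = True
for _ in range(1000):
    s = [random.randint(0, 1) for _ in range(20)]
    t = tau_of(s)
    if t == float("inf"):
        continue
    n = int(t)
    # Keep the prefix (s_0, ..., s_n) and scramble the tail: the
    # event {tau = n} must be unaffected, since varphi_n only sees
    # the first n+1 coordinates.
    s2 = s[: n + 1] + [random.randint(0, 1) for _ in range(19 - n)]
    ok = ok and (tau_of(s2) == n)
print(ok)  # True
```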
The expected value of $T$ is not necessarily given by $\log_c \frac ba$, and in fact does not need to be finite. For example, let $Y_n$ be iid random variables with $\mathbb{P}(Y_n=2b) = \frac{c}{2b}$, $\mathbb{P}(Y_n=0) = 1-\frac{c}{2b}$. I'm assuming $c < 2b$ here, but the example can be easily modified if that is not the case.
Note that $\mathbb{E}[Y_n] = c$, so if we let $X_1 = a$ and $X_i := \prod_{n=2}^i Y_n$ then $X_i$ satisfies your properties by independence. However, we can directly compute the distribution of your stopping time since $X_i$ either passes $b$ immediately or never passes $b$:
\begin{align*}
\mathbb{P}(T = 2) &= \frac{c}{2b} \\
\mathbb{P}(T = \infty) &= 1-\frac{c}{2b}.
\end{align*}
Since $T = \infty$ with positive probability, $\mathbb{E}[T] = \infty$.
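A quick Monte Carlo simulation confirms this computation. The values $a=1$, $b=4$, $c=2$ below are illustrative choices with $c<2b$:

```python
import random

a, b, c = 1.0, 4.0, 2.0
p = c / (2 * b)  # P(Y_n = 2b) = 0.25 for these values

def sample_T(max_steps=50):
    X = 1.0
    for i in range(2, max_steps + 1):
        Y = 2 * b if random.random() < p else 0.0
        X *= Y            # X_i = Y_2 * ... * Y_i
        if X > b:
            return i
    return float("inf")   # once X hits 0 it can never pass b

random.seed(1)
trials = 100_000
results = [sample_T() for _ in range(trials)]
frac_T2 = sum(r == 2 for r in results) / trials

print(frac_T2)  # close to c/(2b) = 0.25
print(all(r in (2, float("inf")) for r in results))  # True: T is 2 or infinity
```

The empirical distribution matches $\mathbb{P}(T=2)=\frac{c}{2b}$ and $\mathbb{P}(T=\infty)=1-\frac{c}{2b}$, so the sample mean of $T$ diverges as expected.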
Indeed $\tau$ need not be independent of the process $\{X_n\}$. In the example where $\mathbb P(X_1=1)=1-\mathbb P(X_1=-1)=p$ and $$\tau=\inf\{n>0:X_n=1\}, $$ we have $$\tau = \sum_{n=1}^\infty n\,\mathsf 1_{\{X_n=1\}\cap\{X_{n-1}=\cdots=X_1=-1\}}, $$ so $\tau$ is a measurable function of $\{X_n\}$ and hence $\sigma(\tau)\subset\sigma(X_1,X_2,\dots)$. (Its distribution is geometric: $\mathbb P(\tau=n)=p(1-p)^{n-1}$.)
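A short simulation makes the dependence concrete: $\tau$ is computed deterministically from the path, and its empirical distribution matches $\mathbb P(\tau=n)=p(1-p)^{n-1}$. The value $p=0.3$ is an illustrative choice:

```python
import random

p = 0.3

def tau_of_path(path):
    # tau = inf{n > 0 : X_n = 1}, with 1-based indexing -- a
    # deterministic (measurable) function of the path
    for n, x in enumerate(path, start=1):
        if x == 1:
            return n
    return float("inf")

random.seed(2)
trials = 100_000
counts = {}
for _ in range(trials):
    path = []
    while True:
        path.append(1 if random.random() < p else -1)
        if path[-1] == 1:
            break
    t = tau_of_path(path)
    counts[t] = counts.get(t, 0) + 1

# Compare empirical frequencies with p * (1-p)^(n-1)
for n in (1, 2, 3):
    print(n, counts.get(n, 0) / trials, p * (1 - p) ** (n - 1))
```

Since `tau_of_path` takes the path as its only input, $\sigma(\tau)$ sits inside the $\sigma$-field generated by the $X_n$, in line with the display above.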
In general, a nonnegative integer-valued random variable $T$ is a stopping time with respect to the filtration $\{\mathcal F_n\}$ if $\{T= n\}\in\mathcal F_n$ for all $n$. Since $\sigma(T)$ is generated by the events $\{T=n\}$, this implies $$\sigma(T)=\sigma\big(\{T=n\}: n\geq 0\big)\subset\sigma\left(\bigcup_{n=0}^\infty \mathcal F_n\right).$$ When $\{\mathcal F_n\}$ is the natural filtration of $\{X_n\}$, this means $\sigma(T)\subset\sigma(X_0,X_1,\dots)$, so if $T$ were independent of $\{X_n\}$ it would be independent of itself. Hence $T$ is independent of $\{X_n\}$ if and only if $\mathbb P(A)\in\{0,1\}$ for all $A\in\sigma(T)$, i.e. $\mathbb P(T=n)=1$ for some $n$: the only such stopping times are the deterministic ones.