Lemma 1: Exactly one of the following three statements holds:
- $\lim_{x \uparrow 1} f(x)=\infty$ a.s.
- $\lim_{x \uparrow 1} f(x) = - \infty$ a.s.
- $\limsup_{x \uparrow 1} f(x) = \infty$ and $\liminf_{x \uparrow 1} f(x) = -\infty$ a.s.
Proof: Denote by $\mu$ the distribution of $\limsup_{x \uparrow 1} f(x)$, i.e.
$$\mu(B) := \mathbb{P} \left( \limsup_{x \uparrow 1} \sum_{n=1}^{\infty} \epsilon_n x^n \in B \right).$$
If we set
$$\tau := \inf\{N \in \mathbb{N}; \sum_{n=1}^N \epsilon_n = 1\}$$
then $\tau< \infty$ almost surely (the simple random walk $S_N := \sum_{n=1}^N \epsilon_n$ is recurrent, so it reaches level $1$ eventually) and
$$\xi_n := \epsilon_{n + \tau(\omega)}, \qquad n \geq 1$$
defines, by the strong Markov property, a sequence of iid random variables with the same distribution as the $\epsilon_n$. In particular, $(\xi_n)_{n \in \mathbb{N}}$ equals in distribution $(\epsilon_n)_{n \in \mathbb{N}}$, and so
$$\mu(B) = \mathbb{P} \left( \limsup_{x \uparrow 1} \sum_{n=1}^{\infty} \xi_n x^n \in B \right) \tag{1}$$
for all $B$. Moreover, we have
\begin{align*} \limsup_{x \uparrow 1} \sum_{n =1}^{\infty} \xi_n x^n &= \limsup_{x \uparrow 1} x^{-\tau} \sum_{n=\tau+1}^{\infty} \epsilon_n x^n = \limsup_{x \uparrow 1} \sum_{n=\tau+1}^{\infty} \epsilon_n x^n \\ &= - \sum_{n=1}^{\tau} \epsilon_n + \limsup_{x \uparrow 1} \sum_{n=1}^{\infty} \epsilon_n x^n \\ &= -1 + \limsup_{x \uparrow 1} \sum_{n=1}^{\infty} \epsilon_n x^n; \end{align*}
here the factor $x^{-\tau(\omega)}$ can be dropped because it tends to $1$ as $x \uparrow 1$, and the finite sum $\sum_{n=1}^{\tau} \epsilon_n x^n$ converges to $\sum_{n=1}^{\tau} \epsilon_n = 1$ by the definition of $\tau$.
Combining this with $(1)$ we get
$$\mu(B) = \mu(B+1)$$
for any Borel set $B$. The only finite measure on $\mathcal{B}(\mathbb{R})$ which is invariant under a non-trivial translation is the zero measure, so $\mu(\mathbb{R})=0$; that is, $\limsup_{x \uparrow 1} f(x) \in \{-\infty, +\infty\}$ almost surely. The same reasoning works for $\liminf_{x \uparrow 1} f(x)$ (by symmetry). Finally, changing finitely many of the $\epsilon_n$ alters $f$ only by a polynomial, which stays bounded as $x \uparrow 1$; hence $\{\limsup_{x \uparrow 1} f(x) = \infty\}$ and $\{\liminf_{x \uparrow 1} f(x) = -\infty\}$ are tail events and, by Kolmogorov's $0$-$1$ law, each has probability $0$ or $1$. Since $\liminf \leq \limsup$, only the three alternatives in the statement remain, and this finishes the proof of the lemma.
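As a quick numerical sanity check of the shift construction (a simulation sketch of my own, not part of the argument; the cap `MAX_STEPS` and all variable names are assumptions of the sketch), one can verify that $\tau$ is almost always finite in practice and that the first post-$\tau$ sign $\xi_1 = \epsilon_{\tau+1}$ is again a fair $\pm 1$ sign:

```python
import random

random.seed(0)

TRIALS = 20000
MAX_STEPS = 1000  # cap on the walk length; P(tau > 1000) is only a few percent

found = 0    # trials in which tau was reached within the cap
sum_xi1 = 0  # running sum of the observed xi_1 = epsilon_{tau+1}
for _ in range(TRIALS):
    s = 0
    for n in range(1, MAX_STEPS + 1):
        s += random.choice((-1, 1))     # epsilon_n
        if s == 1:                      # n = tau: partial sum hits 1 for the first time
            found += 1
            sum_xi1 += random.choice((-1, 1))  # the next sign, epsilon_{tau+1} = xi_1
            break

frac_found = found / TRIALS
mean_xi1 = sum_xi1 / found
print(f"tau <= {MAX_STEPS} in {frac_found:.1%} of trials; "
      f"empirical mean of xi_1: {mean_xi1:+.4f}")
```

The empirical mean of $\xi_1$ should be close to $0$, consistent with $\xi_1$ being a fair sign independent of the stopped past.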
Lemma 2: $\limsup_{x \uparrow 1} f(x) = \infty$ and $\liminf_{x \uparrow 1} f(x)= - \infty$ almost surely.
Proof: The sequence $(-\epsilon_n)_{n \in \mathbb{N}}$ equals in distribution $(\epsilon_n)_{n \in \mathbb{N}}$, and therefore the random variables
$$\limsup_{x \uparrow 1} \sum_{n =1}^{\infty} \epsilon_n x^n$$
and
$$\limsup_{x \uparrow 1} \sum_{n=1}^{\infty} (-\epsilon_n) x^n = - \liminf_{x \uparrow 1} \sum_{n=1}^{\infty} \epsilon_n x^n$$
have the same distribution. In particular, $\liminf_{x \uparrow 1} f(x)$ equals $-\limsup_{x \uparrow 1} f(x)$ in distribution, which rules out the first two alternatives of Lemma 1 (e.g. if $\lim_{x \uparrow 1} f(x) = \infty$ a.s., then $\liminf_{x \uparrow 1} f(x) = \infty$ a.s., whereas the identity in distribution would force $\liminf_{x \uparrow 1} f(x) = -\infty$ a.s.). Hence the third alternative of Lemma 1 holds, which is the assertion.
Corollary: $f$ has infinitely many zeros in $(0,1)$ with probability $1$.
Proof: As already noted by the OP, it suffices to show that for any $c \in (0,1)$ there exists with probability $1$ some $x^* \in (c,1)$ such that $f(x^*)=0$. Fix $c \in (0,1)$. By Lemma 2, we can find (with probability $1$) some $x_1, x_2 \in (c,1)$ such that $f(x_1)>1$ and $f(x_2)<-1$. Since $f$ is continuous on $(0,1)$, the intermediate value theorem yields some $x^*$ strictly between $x_1$ and $x_2$, hence $x^* \in (c,1)$, such that $f(x^*)=0$.
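The oscillation in Lemma 2 and the resulting zeros are easy to see numerically. The sketch below is my own illustration (the truncation level `N` and the grid in $(0.9, 0.995)$ are arbitrary choices; because the series is truncated, $x$ cannot be pushed all the way to $1$): it samples sign sequences and looks for sign changes of $f$ along the grid, each of which forces a zero by the intermediate value theorem.

```python
import random

random.seed(1)

N = 2000                                    # truncation level of the series
xs = [0.90 + 0.002 * k for k in range(48)]  # grid in (0.9, 0.995)
TRIALS = 50

def f_truncated(eps, x):
    """Evaluate the partial sum sum_{n=1}^{N} eps_n x^n by Horner's scheme."""
    acc = 0.0
    for e in reversed(eps):
        acc = x * (acc + e)
    return acc

trials_with_zero = 0
for _ in range(TRIALS):
    eps = [random.choice((-1, 1)) for _ in range(N)]
    values = [f_truncated(eps, x) for x in xs]
    # a sign change between adjacent grid points forces a zero of f in between
    if any(v1 * v2 < 0 for v1, v2 in zip(values, values[1:])):
        trials_with_zero += 1

print(f"{trials_with_zero} of {TRIALS} sampled sign sequences "
      f"change sign on (0.9, 0.995)")
```

A substantial fraction of the sampled paths already change sign on this modest window; pushing the window closer to $1$ (with a larger `N`) makes sign changes ever more frequent, as the corollary predicts.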
Remark: In this paper you can find some more general statements on the behaviour of random series.
Best Answer
Here is one argument that would work, assuming that one already knows the following facts:
1. For any $s>0$, the process $(B_{s+t}-B_s)_{t \geq 0}$ is a Brownian motion.
2. The time inversion $(tB_{\frac{1}{t}})_{t>0}$ of a Brownian motion is a Brownian motion (defined to start at $0$ at $t=0$).
3. $\limsup_{t \to \infty} B_t >0$ almost surely (in fact, the $\limsup$ is $+\infty$ almost surely).
There exist very succinct proofs of each of these statements, each derived directly from the basic axioms of Brownian motion. See Theorem 1.9, Proposition 1.23, and Theorem 2.3 in the book by Mörters and Peres, for example.
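Statement $(2)$ can be sanity-checked numerically: if $X_t = tB_{1/t}$ is again a Brownian motion, its covariance must satisfy $\mathrm{Cov}(X_s, X_t) = \min(s,t)$. The Monte Carlo sketch below is my own illustration (the choice of the two time points $t=2,4$ and the sample size are arbitrary):

```python
import math
import random

random.seed(2)

N = 20000  # number of simulated Brownian paths

# Sample (B_{1/4}, B_{1/2}) for each path: B_{1/4} = sqrt(1/4) * Z1 and
# B_{1/2} = B_{1/4} + sqrt(1/4) * Z2, with independent standard normals Z1, Z2.
x2_vals, x4_vals = [], []
for _ in range(N):
    b_quarter = math.sqrt(0.25) * random.gauss(0.0, 1.0)
    b_half = b_quarter + math.sqrt(0.25) * random.gauss(0.0, 1.0)
    x2_vals.append(2.0 * b_half)     # X_2 = 2 * B_{1/2}
    x4_vals.append(4.0 * b_quarter)  # X_4 = 4 * B_{1/4}

def mean(v):
    return sum(v) / len(v)

var_x2 = mean([v * v for v in x2_vals]) - mean(x2_vals) ** 2
var_x4 = mean([v * v for v in x4_vals]) - mean(x4_vals) ** 2
cov = mean([a * b for a, b in zip(x2_vals, x4_vals)]) - mean(x2_vals) * mean(x4_vals)

# For a Brownian motion: Var(X_2) = 2, Var(X_4) = 4, Cov(X_2, X_4) = min(2, 4) = 2
print(f"Var(X_2) ~ {var_x2:.2f}, Var(X_4) ~ {var_x4:.2f}, Cov(X_2, X_4) ~ {cov:.2f}")
```

The estimates should come out close to $2$, $4$, and $2$ respectively, matching $\mathrm{Cov}(X_s,X_t)=\min(s,t)$.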
We can use these three things to get a short proof of your statement:
Fix $t_0 \geq 0$.
By $(1)$, we know that the process $Y_t = B_{t+t_0}-B_{t_0}$ is a Brownian motion.
By $(2)$, we know that the process $(X_t)$, defined by $X_t = tY_{\frac{1}{t}}$ when $t>0$ (and $X_0=0$), is a Brownian motion.
Now convince yourself that the following inclusion of events is true: $$\{ t_0 \text{ is a local maximum of } (B_t)\} \subseteq \{ \limsup_{t\to \infty} X_t \leq 0\}$$ Indeed, if $t_0$ is a local maximum of $(B_t)$, then $Y_t \leq 0$ for all small enough $t>0$, and thus $X_t = tY_{\frac{1}{t}} \leq 0$ for all large enough $t$.
By statement $(3)$, we see that the event on the right has probability $0$, and so the event on the left has probability $0$.
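The pathwise step in the inclusion ($Y \leq 0$ near $0$ forces $X \leq 0$ near $\infty$) is elementary; it can be checked on a toy increment function (a made-up stand-in for $Y$, not an actual Brownian path):

```python
# Toy stand-in for Y_t = B_{t + t0} - B_{t0} when t0 is a local maximum:
# y(t) <= 0 for all t in (0, delta], with delta = 0.1 here.
def y(t):
    return -t * (0.1 - t) if t <= 0.1 else (t - 0.1)

def x(t):
    """Time inversion X_t = t * Y_{1/t} (for t > 0)."""
    return t * y(1.0 / t)

# y <= 0 on (0, delta]  ==>  x(t) <= 0 for all t >= 1/delta = 10
assert all(x(t) <= 0 for t in [10, 50, 100, 1000, 10**6])
print("X_t <= 0 for all sampled t >= 1/delta")
```

This is just the change of variables $t \mapsto 1/t$: the bad behaviour of $Y$ at small times becomes bad behaviour of $X$ at large times, where statement $(3)$ applies.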
(I do concede, however, that there may be a shorter proof relying on fewer facts. But I don't know one off the top of my head...)