By the Lévy zero-one law, the question is equivalent to deciding whether $A$ and $\theta$ are measurable with respect to $\mathcal F_\infty$, the $\sigma$-algebra generated by all the $\mathcal F_t$. The answer is positive.
For $\alpha>0,\beta\ge 0$, consider
$$Z_n=Z_n(\alpha,\beta):=Y_{\beta+(2n+1)\alpha }-Y_{\beta+(2n-1)\alpha }=U_n+X_n$$
where, for $n=0,1,2,\ldots$, the sequence
$$U_n:=A\sin\Bigl(\theta (\beta+(2n+1)\alpha)\Bigr)-A\sin\Bigl(\theta (\beta+(2n-1)\alpha)\Bigr)=2A\sin(\alpha \theta) \cdot \cos(\theta\beta+2n\theta\alpha)$$
is an almost periodic sequence obtained as a function of the (zero entropy) rotation by angle $2\alpha\theta$, and
$$X_n:=W_{\beta+(2n+1)\alpha }-W_{\beta+(2n-1)\alpha }$$
is a sequence of i.i.d. Gaussian variables with mean 0 and variance $2\alpha$.
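The closed form for $U_n$ is just the product-to-sum identity $\sin x-\sin y=2\cos\frac{x+y}{2}\sin\frac{x-y}{2}$; written out:

```latex
% With x = \theta(\beta+(2n+1)\alpha) and y = \theta(\beta+(2n-1)\alpha):
\[
\frac{x-y}{2} = \theta\alpha, \qquad \frac{x+y}{2} = \theta\beta + 2n\theta\alpha,
\]
\[
U_n = A\sin x - A\sin y = 2A\sin(\theta\alpha)\,\cos(\theta\beta + 2n\theta\alpha).
\]
```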
In [1], H. Furstenberg defined disjointness of dynamical systems and proved in Theorem I.2 that i.i.d. processes and zero-entropy processes are disjoint.
In Theorem 1.5 of the same paper, he showed that if $(U_n)_{n \ge 1}$ and $(X_n)_{n \ge 1}$ are sequences
of integrable real random variables which define two disjoint stationary
processes, then the sum sequence $(U_n+X_n)_{n \ge 1}$ determines $(U_n)_{n \ge 1}$.
A generalization is in Theorem 7 of [2].
In particular, from the sequence $\{Z_n(\alpha,\beta)\}_{n \ge 1}$ we can recover
$(U_n)_{n \ge 1}=(U_n(\alpha,\beta))_{n \ge 1}$ for every $\alpha$ and a.e. $\beta$, and, by continuity, for all $\beta$. Then $\theta=\pi/(4\alpha_*)$, where
$\alpha_*=\min\{\alpha>0: U_1(\alpha,0)=0\}$, and $A$ is then easy to determine.
Theorem 1.6 in [3] gives another proof.
For an elementary direct argument,
define the deterministic function $f_0(\alpha)=\lim_n \frac{1}{n}\sum_{k=1}^n Z_k(\alpha,0)^2$
and observe that $f_0(\alpha)-2\alpha$ is nonnegative and vanishes exactly when $\sin(\alpha\theta)=0$, so its smallest positive zero
is $\alpha_{min}=\pi/\theta$. Moreover, $$f_0(\alpha_{min}/2)=4A^2+\alpha_{min}$$
which yields $A$.
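Since $f_0(\alpha)=4A^2c(\alpha)\sin^2(\theta\alpha)+2\alpha$, where $c(\alpha)\in\{1/2,1\}$ is the Cesàro average of $\cos^2(2n\theta\alpha)$ (equal to $1/2$ for generic $\alpha$, and to $1$ when $2\theta\alpha\in\pi\mathbb Z$), both parameters can be estimated numerically. The sketch below, with hypothetical values $A=1.5$, $\theta=2$ and with $X_n$ sampled directly as i.i.d. $N(0,2\alpha)$ variables, locates the smallest zero of $f_0(\alpha)-2\alpha$ on a grid to get $\theta$, then solves $f_0(\alpha)-2\alpha=2A^2\sin^2(\theta\alpha)$ at a generic $\alpha$ for $A$.

```python
import numpy as np

rng = np.random.default_rng(0)
A_true, theta_true = 1.5, 2.0   # hypothetical parameters, to be recovered

def f0_hat(alpha, n_samples):
    """Monte Carlo estimate of f_0(alpha) = lim (1/n) sum_k Z_k(alpha, 0)^2."""
    n = np.arange(1, n_samples + 1)
    U = 2 * A_true * np.sin(alpha * theta_true) * np.cos(2 * n * theta_true * alpha)
    X = rng.normal(0.0, np.sqrt(2 * alpha), n_samples)  # i.i.d. N(0, 2*alpha)
    return np.mean((U + X) ** 2)

# Step 1: f_0(alpha) - 2*alpha = 4 A^2 c(alpha) sin^2(theta*alpha) >= 0 vanishes
# exactly at alpha = k*pi/theta; locate its smallest zero on a grid.
alphas = np.arange(0.5, 2.5, 0.01)
vals = np.array([f0_hat(a, 20_000) for a in alphas]) - 2 * alphas
alpha_min = alphas[np.argmin(vals)]       # ~ pi/theta
theta_hat = np.pi / alpha_min

# Step 2: at a generic (non-resonant) alpha the Cesaro factor is c = 1/2, so
# A^2 = (f_0(alpha) - 2*alpha) / (2 sin^2(theta*alpha)).
a0 = 0.6
A_hat = np.sqrt(max(f0_hat(a0, 200_000) - 2 * a0, 0.0)
                / (2 * np.sin(theta_hat * a0) ** 2))
print(theta_hat, A_hat)  # close to the true values 2.0 and 1.5
```

Evaluating at the resonant point $\alpha_{min}/2$ (where $c=1$) is exact in the limit but numerically fragile, since $c$ drops to $1/2$ at any nearby non-resonant $\alpha$; using a generic $\alpha$ avoids this.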
[1] H. Furstenberg, Disjointness in ergodic theory, minimal sets, and a problem in Diophantine approximation, Math. Systems Theory 1 (1967), 1-49. https://mathweb.ucsd.edu/~asalehig/F_Disjointness.pdf
[2] H. Furstenberg, Y. Peres and B. Weiss, Perfect filtering and double disjointness, Ann. Inst. H. Poincaré Probab. Statist. 31 (1995), no. 3, 453-465. http://www.numdam.org/article/AIHPB_1995__31_3_453_0.pdf
[3] N. Lev, R. Peled and Y. Peres, Separating signal from noise, Proc. London Math. Soc. 110 (2015), no. 4, 883-931. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.749.2820&rep=rep1&type=pdf
If $\xi$ belongs to the Sobolev–Watanabe space $\mathbb D^{1,2}$, then the Clark–Ocone formula gives
$$\xi=E[\xi]+\int_0^T E[D_s\xi|\mathcal F_s]dW_s$$
where $D_s$ is the Malliavin derivative. For $s\in [0,T]$ we may write $\xi=\xi 1_{\{\tau > s\}}+\xi 1_{\{\tau \leq s\}}$. Then
\begin{align*}
\xi&=E[\xi]+\int_0^T E[D_s(\xi 1_{\{\tau > s\}}+\xi 1_{\{\tau \leq s\}})|\mathcal F_s]dW_s\\
&=E[\xi]+\int_0^T E[D_s(\xi 1_{\{\tau > s\}})|\mathcal F_s]dW_s+\int_0^T E[D_s(\xi 1_{\{\tau \leq s\}})|\mathcal F_s]dW_s
\end{align*}
Since $\xi 1_{\{\tau \leq s\}}$ is $\mathcal F_s$-measurable, $D_s (\xi 1_{\{\tau \leq s\}})=0$. Moreover, for $s>\tau$ we have $\xi 1_{\{\tau > s\}}=0$, so $D_s (\xi 1_{\{\tau > s\}})=0$, while for $s<\tau$ we have $\xi 1_{\{\tau > s\}}=\xi$. Hence
$$\xi=E[\xi]+\int_0^\tau E[D_s\xi|\mathcal F_s]dW_s.$$
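As a quick consistency check of the Clark–Ocone formula itself (no stopping, i.e. $\tau \equiv T$), take $\xi = W_T^2$:

```latex
% D_s(W_T^2) = 2 W_T, and E[2 W_T | \mathcal F_s] = 2 W_s, so Clark-Ocone gives
\[
W_T^2 \;=\; \mathbb E[W_T^2] + \int_0^T \mathbb E[D_s(W_T^2)\mid\mathcal F_s]\,dW_s
       \;=\; T + 2\int_0^T W_s\,dW_s,
\]
% in agreement with Ito's formula  d(W_t^2) = 2 W_t\,dW_t + dt.
```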
Best Answer
Yes. Since $W$ and $Z$ are independent, we may write
$$X_t = \int_{\mathbb R} \mathbb E[g(\xi, z)| \mathcal F^W_t] \, d\mu_Z(z),$$
as can be seen by, say, taking the regular conditional probability with respect to $Z$, or by working directly on the product space $(C[0, T] \times \mathbb R, \mu_W \times \mu_Z, \mathcal B_{C[0, T]} \otimes \mathcal B_{\mathbb R}).$
We recognize that for every $z$, $Y^z_t: = \mathbb E[g(\xi, z)| \mathcal F_t^W]$ is a closable martingale with respect to the Brownian filtration. Since every martingale of a Brownian filtration admits a continuous version (by the martingale representation theorem), we may take $Y^z_t$ to be continuous for every $z$. By the boundedness of $g$, $Y^z_t$ is also uniformly bounded.
Now the rest of the proof is analysis: we claim that $X_t$, being an average of continuous, uniformly bounded functions, is continuous almost surely.
To see this, let
$$\phi(z, \delta, \omega) := \sup_{s, t \in [0,T];\, |s - t| < \delta} |Y_t^z (\omega) - Y_s^z (\omega)|$$
be a uniform modulus of continuity for $Y^z$, and let $M > 0$ be a uniform bound for $|g|$. By continuity of $Y^z_t$, we have for $\mu_Z \times \mathbb P$-a.e. $(z, \omega)$ that $\lim_{\delta\to 0} \phi(z, \delta, \omega) = 0$.
In other words, writing $E_{\varepsilon, \delta, \omega} := \{z \, | \, \phi(z, \delta, \omega) \leq \varepsilon\}$, we have (by Fubini and dominated convergence) for every $\varepsilon > 0$ and $\mathbb P$-a.e. $\omega$ that $\mu_Z(E_{\varepsilon, \delta, \omega}) \to 1$ as $\delta \to 0^+$.
If for each $\varepsilon > 0$, we write $N_{\varepsilon}$ for the $\mathbb P$-null set of exceptions to the above statement, we may set $N := \cup_{n \in \mathbb Z_+} N_{1/n}$ to find that for all $\omega$ in the $\mathbb P$-full measure set $\Omega \setminus N$, we have $\mu_Z(E_{\varepsilon, \delta, \omega}) \to 1$ as $\delta \to 0^+$, for every $\varepsilon > 0$.
Now let $\varepsilon > 0$ be arbitrary, and fix $\omega \in \Omega \setminus N$. Since $|Y^z| \leq M$, the integrand below is bounded by $2M$, so pick $\delta$ such that $\mu_Z (E_{\varepsilon/2, \delta, \omega}) > 1 - \frac{\varepsilon}{4M}$.
We then compute, for all $s, t$ with $|s - t| < \delta$,
$$|X_t (\omega) - X_s (\omega)| = \Bigl|\int_{\mathbb R} Y^z_t (\omega) - Y^z_s (\omega)\, d\mu_Z(z)\Bigr| \leq \int_{\mathbb R}|Y^z_t (\omega) - Y^z_s (\omega)| \, d\mu_Z(z)$$
$$ = \int_{E_{\varepsilon/2, \delta, \omega}} |Y^z_t (\omega) - Y^z_s (\omega)| \, d\mu_Z (z) + \int_{E^c_{\varepsilon/2, \delta, \omega}} |Y^z_t (\omega) - Y^z_s (\omega)| \, d\mu_Z(z)$$
$$ < \frac{\varepsilon}{2} + 2M \cdot \frac{\varepsilon}{4M} = \varepsilon,$$
and since $\varepsilon$ was arbitrary, $t \mapsto X_t(\omega)$ is continuous, as claimed.
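The splitting argument can be seen in action on a toy family (a hypothetical stand-in for $Y^z$, not the conditional expectations themselves): take $\mu_Z$ to be a Gaussian law and $Y^z_t = \sin(z t)$ on $[0,1]$, so $M = 1$ and the modulus bound $\phi(z,\delta) \le \min(2, |z|\delta)$ degrades as $|z|$ grows, which is exactly why the sets $E_{\varepsilon,\delta,\omega}$ are needed.

```python
import numpy as np

rng = np.random.default_rng(1)
M = 1.0                                  # uniform bound: |sin| <= 1
z = rng.normal(1.0, 1.0, 20_000)         # empirical stand-in for mu_Z
t = np.linspace(0.0, 1.0, 501)           # time grid on [0, 1]

# X_t = average over z of the continuous, uniformly bounded family Y^z_t = sin(z t).
X = np.array([np.mean(np.sin(z * ti)) for ti in t])

eps = 0.1
delta = 0.015
# phi(z, delta) <= |z| delta, so z lies in E_{eps/2, delta} whenever |z| delta <= eps/2.
# delta is chosen so that mu_Z(E^c) = P(|z| > eps/(2 delta)) < eps/(4M).
frac_bad = np.mean(np.abs(z) > eps / (2 * delta))
assert frac_bad < eps / (4 * M)

# Conclusion of the splitting argument: |X_t - X_s| < eps whenever |t - s| < delta.
step = t[1] - t[0]
k = int(delta / step)                    # grid offsets with |t - s| <= k*step < delta
osc = max(np.max(np.abs(X[j:] - X[:-j])) for j in range(1, k + 1))
assert osc < eps
```

Here the empirical measure on the sampled $z$'s plays the role of $\mu_Z$; the same two-set estimate goes through verbatim.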