Filtering Period and Amplitude of Noisy Sine Wave – Stochastic Processes

pr.probability, stochastic-calculus, stochastic-filtering, stochastic-processes

Let $W$ be a standard Brownian motion and $\mathcal F_t$ its natural filtration.

Suppose $\theta$ and $A$ are positive $L^1$ random variables independent of $W$ (equivalently, of every $\mathcal F_t$).

Let $Y_t$ be the process

$$Y_t := A \sin \, (\theta t) + W_t $$

and denote by $\mathcal Y_t$ its natural filtration.

Question: Is it true that $\mathbb E[A \mid \mathcal Y_t] \to A$ and $\mathbb E[\theta \mid \mathcal Y_t] \to \theta$ almost surely as $t \to \infty$?
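For concreteness, here is a minimal NumPy sketch of the observation model. The priors on $A$ and $\theta$, the grid step, and the (long) horizon are arbitrary illustrative choices, not part of the question.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative priors: any positive, integrable A and theta independent of W would do.
A = rng.uniform(1.0, 3.0)
theta = rng.uniform(0.5, 2.0)

# Sample Y_t = A*sin(theta*t) + W_t on a grid; the long horizon is only so that
# the ergodic averages used further below have enough terms.
dt, T = 0.01, 8000.0
t = np.arange(0.0, T, dt)
W = np.zeros(t.size)
W[1:] = np.cumsum(rng.normal(0.0, np.sqrt(dt), t.size - 1))
Y = A * np.sin(theta * t) + W
```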

Best Answer

By the Lévy zero-one law, the question is equivalent to deciding whether $A$ and $\theta$ are measurable with respect to $\mathcal Y_\infty$, the common refinement of all the $\mathcal Y_t$. The answer is positive.

For $\alpha>0,\beta\ge 0$, consider $$Z_n=Z_n(\alpha,\beta):=Y_{\beta+(2n+1)\alpha }-Y_{\beta+(2n-1)\alpha }=U_n+X_n$$ where, for $n=1,2,\ldots$, the sequence $$U_n:=A\sin\Bigl(\theta (\beta+(2n+1)\alpha)\Bigr)-A\sin\Bigl(\theta (\beta+(2n-1)\alpha)\Bigr)=2A\sin(\alpha \theta) \cdot \cos(\theta\beta+2n\theta\alpha)$$ is an almost periodic sequence obtained as a function of the (zero-entropy) rotation by angle $2\alpha\theta$, and $$X_n:=W_{\beta+(2n+1)\alpha }-W_{\beta+(2n-1)\alpha }$$ is a sequence of i.i.d. Gaussian variables with mean $0$ and variance $2\alpha$ (the increments are taken over disjoint intervals).
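On a sampled path this decomposition can be checked numerically. A sketch (the helper names `Z` and `U` are mine; it assumes a path `Y` observed on a time grid `t`, e.g. as simulated above):

```python
import numpy as np

def Z(Y, t, alpha, beta, n_max):
    """Z_n(alpha, beta) = Y_{beta+(2n+1)alpha} - Y_{beta+(2n-1)alpha} for n = 1..n_max,
    read off the sampled path at the nearest grid times (which must lie within the horizon)."""
    n = np.arange(1, n_max + 1)
    hi = np.searchsorted(t, beta + (2 * n + 1) * alpha)
    lo = np.searchsorted(t, beta + (2 * n - 1) * alpha)
    return Y[hi] - Y[lo]

def U(A, theta, alpha, beta, n_max):
    """Signal part U_n = 2*A*sin(alpha*theta)*cos(theta*beta + 2*n*theta*alpha), n = 1..n_max."""
    n = np.arange(1, n_max + 1)
    return 2 * A * np.sin(alpha * theta) * np.cos(theta * beta + 2 * n * theta * alpha)
```

On the simulated path, `Z(Y, t, alpha, beta, n_max) - U(A, theta, alpha, beta, n_max)` should look like i.i.d. $N(0,2\alpha)$ samples, up to discretization error.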

In [1], H. Furstenberg defined disjointness of dynamical systems and proved in Theorem I.2 that i.i.d. processes and zero-entropy processes are disjoint. In Theorem 1.5 of the same paper, he showed that if $(U_n)_{n \ge 1}$ and $(X_n)_{n \ge 1}$ are sequences of integrable real random variables which define two disjoint stationary processes, then the sum sequence $(U_n+X_n)_{n \ge 1}$ determines $(U_n)_{n \ge 1}$. A generalization is in Theorem 7 of [2].

In particular, from the sequence $(Z_n(\alpha,\beta))_{n \ge 1}$ we can recover $(U_n)_{n \ge 1}=(U_n(\alpha,\beta))_{n \ge 1}$ for every $\alpha$ and a.e. $\beta$, and, by continuity in $\beta$, for all $\beta$. Then $\theta=\pi/(4\alpha_*)$, where $\alpha_*=\min\{\alpha>0: U_1(\alpha,0)=0\}$; indeed $U_1(\alpha,0)=2A\sin(\alpha\theta)\cos(2\alpha\theta)$ first vanishes when $2\alpha\theta=\pi/2$. Once $\theta$ is known, $A$ is easy to determine.
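For instance (one concrete choice, assuming $\theta$ has already been recovered): taking $\alpha=\pi/(2\theta)$ and $\beta=0$ makes the rotation trivial, so
$$U_n\Bigl(\tfrac{\pi}{2\theta},0\Bigr)=2A\sin\Bigl(\tfrac{\pi}{2}\Bigr)\cos(n\pi)=2A(-1)^n,\qquad\text{hence}\quad A=\tfrac12\Bigl|U_1\Bigl(\tfrac{\pi}{2\theta},0\Bigr)\Bigr|.$$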

Theorem 1.6 in [3] gives another proof.

For an elementary direct argument, define $f_0(\alpha):=\lim_n \frac{1}{n}\sum_{k=1}^n Z_k(\alpha,0)^2$. By the strong law of large numbers and equidistribution of the rotation (the noise and cross terms average out), this limit exists almost surely and equals $$f_0(\alpha)=4A^2\sin^2(\alpha\theta)\,c(\alpha)+2\alpha,\qquad c(\alpha):=\lim_n\frac{1}{n}\sum_{k=1}^n\cos^2(2k\theta\alpha)\in\Bigl\{\tfrac12,1\Bigr\},$$ with $c(\alpha)=1$ exactly when $2\alpha\theta$ is an integer multiple of $\pi$. Hence $f_0(\alpha)\ge 2\alpha$, with equality precisely when $\alpha$ is a positive multiple of $\pi/\theta$, so the smallest positive $\alpha$ at which $\bigl(f_0(\alpha)-2\alpha\bigr)\alpha^{-2}$ vanishes is $\alpha_{\min}=\pi/\theta$; this recovers $\theta$. Moreover, since $\sin(\theta\alpha_{\min}/2)=1$ and $c(\alpha_{\min}/2)=1$, $$f_0(\alpha_{\min}/2)=4A^2+\alpha_{\min},$$ which yields $A$.
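A rough numerical illustration of this argument, reusing the illustrative path `Y`, grid `t`, and helper `Z` from the sketches above; the $\alpha$-grid, the number of terms, and the detection threshold are ad hoc choices, and the path must cover times up to $(2n_{\max}+1)\alpha_{\max}$.

```python
import numpy as np

def f0(Y, t, alpha, n_max):
    """Empirical surrogate for f_0(alpha): average of Z_k(alpha, 0)^2 over k = 1..n_max."""
    return np.mean(Z(Y, t, alpha, 0.0, n_max) ** 2)

n_max = 500
alphas = np.arange(0.1, 7.0, 0.01)   # requires T >= (2*n_max + 1)*alphas.max()
excess = np.array([(f0(Y, t, a, n_max) - 2 * a) / a**2 for a in alphas])

# (f_0(alpha) - 2*alpha)/alpha^2 starts near 2*A^2*theta^2 and first vanishes at
# alpha = pi/theta; detect the first near-zero with an ad hoc threshold.
alpha_min = alphas[np.flatnonzero(excess < 0.3)[0]]
theta_hat = np.pi / alpha_min
A_hat = np.sqrt(max(f0(Y, t, alpha_min / 2, n_max) - alpha_min, 0.0) / 4)
print(f"theta: {theta:.3f} vs {theta_hat:.3f},  A: {A:.3f} vs {A_hat:.3f}")
```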

[1] Furstenberg, Hillel. "Disjointness in ergodic theory, minimal sets, and a problem in Diophantine approximation." Mathematical Systems Theory 1 (1967): 1-49. https://mathweb.ucsd.edu/~asalehig/F_Disjointness.pdf

[2] Furstenberg, Hillel, Yuval Peres, and Benjamin Weiss. "Perfect filtering and double disjointness." Annales de l'IHP Probabilités et statistiques 31, no. 3 (1995): 453-465. http://www.numdam.org/article/AIHPB_1995__31_3_453_0.pdf

[3] Lev, Nir, Ron Peled, and Yuval Peres. "Separating signal from noise." Proceedings of the London Mathematical Society 110, no. 4 (2015): 883-931. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.749.2820&rep=rep1&type=pdf