Behaviour of total variation function

analysis, bounded-variation, functional-analysis, general-topology, real-analysis

Question
Let $D=\left\{0=t_0<t_1<\cdots<t_K=T\right\}$ be a dissection of $[0, T]$. Assume $x^n, x$ are $\mathbb R^d$-valued paths of bounded variation (i.e. finite in the $\|\cdot\|_\infty+\|\cdot\|_{\text{var}}$-norm, where $\|\cdot\|_{\text{var}}$ denotes the 1-variation) such that $x^n \rightarrow x$ pointwise. My book then claims:
$$
\sum_{i=0}^{K-1} d\left(x_{t_i}, x_{t_{i+1}}\right) =\liminf _{n \rightarrow \infty} \sum_i d\left(x_{t_i}^n, x_{t_{i+1}}^n\right)
$$

But not necessarily:

$$
\sum_{i=0}^{K-1} d\left(x_{t_i}, x_{t_{i+1}}\right) =\lim _{n \rightarrow \infty} \sum_i d\left(x_{t_i}^n, x_{t_{i+1}}^n\right)
$$

Presumably this is because the RHS may fail to exist. But the distance function (which I interpret here as induced by the norm on the space) is continuous, so for a fixed dissection each term $d\left(x_{t_i}^n, x_{t_{i+1}}^n\right)$ converges to $d\left(x_{t_i}, x_{t_{i+1}}\right)$, and the second equality should make sense. What am I missing?
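A quick numerical sanity check of this point (my own sketch; the paths, the dissection, and the helper `dissection_sum` are invented for illustration): for a *fixed* dissection, pointwise convergence of the paths does force convergence of the dissection sums, exactly because each summand is a continuous function of the two endpoint values.

```python
import math

def dissection_sum(x, D):
    """Sum of |x(t_{i+1}) - x(t_i)| over the dissection D (real-valued paths)."""
    return sum(abs(x(b) - x(a)) for a, b in zip(D, D[1:]))

D = [0.0, 0.25, 0.5, 0.75, 1.0]                       # a fixed dissection of [0, 1]
x = lambda t: t                                       # the pointwise limit
x_n = lambda n: (lambda t: t + math.sin(5 * t) / n)   # x^n -> x pointwise (in fact uniformly)

limit_sum = dissection_sum(x, D)                      # equals 1 for x(t) = t
for n in (10, 100, 1000):
    print(n, abs(dissection_sum(x_n(n), D) - limit_sum))  # gap shrinks toward 0
```

This only confirms the first equality in the proof; the subtlety in the lemma lies elsewhere, in passing to the supremum over dissections.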

Background

I am trying to prove the following lemma from the book “Multidimensional Stochastic Processes as Rough Paths” by Peter Friz and Nicolas Victoir. The total variation, or 1-variation, of a continuous path over an interval of the reals is defined as the supremum, over all partitions of the interval, of the sum of the absolute values of the increments of the path over the partition points.

In particular, I don’t understand why we can’t take limits, as opposed to lim infs, in the first equality of the proof of Lemma 1.18 shown below. For the exercise underneath it, I was able to find an example given by tent-like functions (one tent peaking at height $1/2$ for $n=1$, two tents peaking at height $1/4$ for $n=2$, and so forth), but this example works with limits as opposed to lim infs.

“Lemma $1.18$ Assume $\left(x^n\right)$ is a sequence of paths from $[0, T] \rightarrow E$ of finite 1-variation. Assume $x^n \rightarrow x$ pointwise on $[0, T]$. Then
$$
|x|_{1-\mathrm{var} ;[0, T]} \leq \liminf _{n \rightarrow \infty}\left|x^n\right|_{1-\mathrm{var} ;[0, T]} .
$$

Proof. Let $D=\left\{0=t_0<t_1<\cdots<t_K=T\right\}$ be a dissection of $[0, T]$. By assumption, $x^n \rightarrow x$ pointwise and so
$$
\begin{aligned}
\sum_{i=0}^{K-1} d\left(x_{t_i}, x_{t_{i+1}}\right) &=\liminf _{n \rightarrow \infty} \sum_i d\left(x_{t_i}^n, x_{t_{i+1}}^n\right) \\
& \leq \liminf _{n \rightarrow \infty}\left|x^n\right|_{1-\mathrm{var} ;[0, T]}
\end{aligned}
$$

Taking the supremum over all dissections of $[0, T]$ finishes the 1-variation estimate.

In general, the inequality in Lemma 1.18 can be strict. The reader is invited to construct an example in the following exercise.

Exercise 1.19 Construct $\left(x^n\right) \subset C^{1-\mathrm{var}}([0,1], \mathbb{R})$ such that $\left|x^n\right|_{\infty ;[0,1]} \leq 1 / n$ but so that $\left|x^n\right|_{1-\mathrm{var}}=1$ for all $n$. Conclude that the inequality in Lemma 1.18 can be strict.”
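For what it’s worth, here is a numerical check of my tent-function reading of Exercise 1.19 (the triangle-wave construction and the helpers `triangle` and `one_var` are my own sketch, not the book’s): $x^n$ is a triangle wave with $n$ teeth of height $1/(2n)$, so $\|x^n\|_{\infty} = 1/(2n) \leq 1/n$ while $|x^n|_{1-\mathrm{var}} = 1$ for every $n$. The pointwise limit is $x \equiv 0$, whose 1-variation is $0 < 1 = \liminf$, so the inequality in Lemma 1.18 is strict.

```python
def triangle(n):
    """Distance to the nearest multiple of 1/n: n tents of height 1/(2n) on [0, 1]."""
    return lambda t: min(abs(t - k / n) for k in range(n + 1))

def one_var(x, D):
    """1-variation of a piecewise-linear path whose kinks all lie in D."""
    return sum(abs(x(b) - x(a)) for a, b in zip(D, D[1:]))

for n in (1, 2, 5, 50):
    x_n = triangle(n)
    D = [j / (2 * n) for j in range(2 * n + 1)]      # contains every kink of x_n
    sup = max(abs(x_n(t)) for t in D)
    print(n, sup, one_var(x_n, D))                   # sup norm 1/(2n), 1-variation 1
```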

Best Answer

It is indeed the case that $\lim_{n \to \infty} \sum_{i=0}^{K-1} d\left(x_{t_i}^n, x_{t_{i+1}}^n\right)$ exists under the assumption that $x^n \to x$ pointwise. The authors could equally have written
$$
\begin{aligned}
\sum_{i=0}^{K-1} d\left(x_{t_i}, x_{t_{i+1}}\right) &=\lim _{n \rightarrow \infty} \sum_i d\left(x_{t_i}^n, x_{t_{i+1}}^n\right) \\
& \le \liminf _{n \rightarrow \infty}\left|x^n\right|_{1-\mathrm{var} ;[0, T]}.
\end{aligned}
$$

This means the same thing as what was written, though, since whenever the limit exists, the $\liminf$, $\limsup$ and $\lim$ all coincide.

Note that you cannot replace the $\liminf$ with a $\lim$ on the final line, since you do not know that $\left|x^n\right|_{1-\mathrm{var} ;[0, T]}$ converges.
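To illustrate that last point, here is a sketch (my own construction, rescaling the tents of Exercise 1.19 by an alternating factor $c_n$, which is 2 for odd $n$ and 1 for even $n$): $x^n \to 0$ pointwise, but $|x^n|_{1-\mathrm{var}} = c_n$ oscillates between 1 and 2 and has no limit, while its $\liminf$ is 1.

```python
def triangle(n):
    """Distance to the nearest multiple of 1/n: n tents of height 1/(2n) on [0, 1]."""
    return lambda t: min(abs(t - k / n) for k in range(n + 1))

def one_var(x, D):
    """1-variation of a piecewise-linear path whose kinks all lie in D."""
    return sum(abs(x(b) - x(a)) for a, b in zip(D, D[1:]))

variations = []
for n in range(1, 9):
    c = 2 if n % 2 else 1                            # alternating scale factor
    x_n = lambda t, n=n, c=c: c * triangle(n)(t)     # sup norm c/(2n) -> 0
    D = [j / (2 * n) for j in range(2 * n + 1)]      # contains every kink of x_n
    variations.append(round(one_var(x_n, D), 9))

print(variations)        # alternates 2, 1, 2, 1, ... : no limit
print(min(variations))   # the liminf, 1
```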
