## Setting

Let $\mathbb N_0 = \mathbb Z_{\geq 0}$ be the natural numbers including zero. Let $(X, \Sigma, \mu)$ be a measure space and $(p_i)_{i=0}^\infty$ be a sequence of measurable maps $p_i\colon X \to [0,1]$ satisfying

$$\forall x \in X\colon ~\sum_{i=0}^\infty p_i(x) = 1, \tag{1}$$

and

$$\forall i,j\in \mathbb N_0\colon~ \int_X p_i(x)p_j(x) \mathop{}\!\mathrm{d}\mu(x) = \begin{cases}

1/2&\text{if } i \neq j\\

1&\text{if } i = j.

\end{cases}\tag{2}$$

## Questions

- Can we prove whether or not a pair $\left[(X, \Sigma, \mu), (p_i)_{i=0}^\infty \right]$ satisfying (1) and (2) exists?
- If it can exist, can we construct an example?
- If it can exist, can the space be $\sigma$-finite, or can we prove that it must not be $\sigma$-finite?
- If it can exist, can we say anything about the uniform integrability of $p_i$?
- Any other interesting properties we can say about the pair?
- More generally, is this question even well-posed? Hopefully it's not trivial 🙂

## Attempts

- By summing (2) over finitely many indices $i, j,$ we find that $\mu(X)$ cannot be finite: since $0 \leq \sum_{i < N} p_i \leq 1$ by (1), we get $\mu(X) \geq \int_X \left(\sum_{i<N} p_i\right)^2 \mathop{}\!\mathrm{d}\mu(x) = N + \frac{N(N-1)}{2}$ for every $N.$
- Using the $L^p$ Dominated Convergence Theorem, we can prove that, for example, for a fixed $i$,

$$\lim_{j \to \infty}\int_X \left(p_i(x) p_j(x)\right)^2 \mathop{}\!\mathrm{d}\mu(x) = 0,$$

but this doesn't (at least, as far as I can see) tell us that

$$\lim_{j \to \infty} \int_X p_i(x) p_j(x) \mathop{}\!\mathrm{d}\mu(x) = 0,$$

which would prove nonexistence.
- This theorem can almost be applied, but not quite, because I see no reason why $p_i(x)p_j(x)$ cannot be zero for some $x$.
- I'm very unsure about this one, but looking at Vitali's convergence theorem and this paper, it seems like we can possibly apply them to say that either the measure space is not $\sigma$-finite or $p_i$ is not uniformly integrable?
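For what it's worth, the growth estimate behind the first bullet can be sanity-checked numerically. In this sketch (assuming `numpy`), the matrix `G` is a hypothetical finite truncation of the Gram matrix that condition (2) prescribes; summing its entries reproduces the lower bound on $\mu(X)$:

```python
import numpy as np

# Gram matrix prescribed by condition (2) for N of the p_i:
# 1 on the diagonal, 1/2 off it.  (Hypothetical finite truncation.)
N = 50
G = np.full((N, N), 0.5) + 0.5 * np.eye(N)

# Summing (2) over i, j < N computes 1^T G 1 = int (sum_{i<N} p_i)^2 dmu.
# Since 0 <= sum_{i<N} p_i <= 1 by (1), this integral is at most mu(X).
lower_bound = np.ones(N) @ G @ np.ones(N)

# Closed form N + N(N-1)/2, which is unbounded in N, so mu(X) = infinity.
assert np.isclose(lower_bound, N + N * (N - 1) / 2)
```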

## Thanks!

Any solutions or suggestions for how to proceed would be greatly appreciated 🙂 Even a list of potentially relevant theorems and lemmas in measure theory would also be helpful.

## Best Answer

Such a pair $\left[(X, \Sigma, \mu), (p_i)_{i=0}^\infty \right]$ does not exist.

It is a bit more convenient to write the proof if subscripts are taken to run from $1,$ rather than $0.$

Let $H$ be the Hilbert space $\mathscr{L}^2(X, \Sigma, \mu, \mathbb{R}),$ for arbitrary $X, \Sigma,$ and $\mu.$

I now quote three propositions without proof. The reference I have for the first two is Donald L. Cohn, *Measure Theory* (2nd ed., 2014), Propositions 3.1.3 and 3.1.5, and a remark on page 97. Unlike Proposition 1, Proposition 2 isn't stated explicitly by Cohn in the form given here; he merely remarks on it in passing, and it was indeed straightforward to fill in the details. If I can find a more explicit reference to this presumably standard theorem, I shall give it in a comment, or in an edit to this answer if it needs editing for another reason.

**Proposition 1.** Let $(X, \mathscr{A}, \mu)$ be a measure space, and let $f$ and $f_1, f_2, \ldots$ be real-valued $\mathscr{A}$-measurable functions on $X.$ If $\{f_n\}$ converges to $f$ in measure, then there is a subsequence of $\{f_n\}$ that converges to $f$ almost everywhere.

**Proposition 2.** Let $(X, \mathscr{A}, \mu)$ be a measure space, let $p$ satisfy $1 \leq p < +\infty,$ and let $f$ and $f_1, f_2, \ldots$ belong to $\mathscr{L}^p(X, \mathscr{A}, \mu, \mathbb{R}).$ If $\{f_n\}$ converges to $f$ in $p$th mean, then $\{f_n\}$ converges to $f$ in measure.

The next proposition, at least, is certainly very well known, and easy to prove, but it is hard to find a statement of it that contains just what we need here and no more. The following version amalgamates I. J. Maddox, *Elements of Functional Analysis*, Theorem 6.5, with Béla Bollobás, *Linear Analysis* (2nd ed.), Theorem 10.5.

**Proposition 3.** Let $(x_k)$ be an orthogonal sequence in $H.$ Then $\sum x_k$ converges if and only if $\sum\|x_k\|^2 < +\infty$; and in that case, $\lVert\sum x_k\rVert^2 = \sum\|x_k\|^2.$

(Some other standard theorems are quoted in the proof below as they are needed, with less fuss.)
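For a finite orthogonal family, the norm identity in Proposition 3 is just the Pythagorean theorem, which can be checked directly. A small numerical sketch, assuming `numpy` and using an arbitrary rescaled orthogonal family:

```python
import numpy as np

rng = np.random.default_rng(0)

# An arbitrary finite orthogonal (not necessarily orthonormal) family in
# R^20: rows of a random orthogonal matrix, rescaled by nonzero factors.
Q, _ = np.linalg.qr(rng.standard_normal((20, 20)))
xs = rng.uniform(0.5, 2.0, size=20)[:, None] * Q

# Finite case of Proposition 3: ||sum x_k||^2 == sum ||x_k||^2.
total = xs.sum(axis=0)
lhs = total @ total
rhs = sum(x @ x for x in xs)
assert np.isclose(lhs, rhs)
```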

Suppose now that condition (2) is satisfied.

That is, there exist vectors $p_1, p_2, \ldots \in H$ (for the moment, we abuse language by ignoring the distinction between functions $X \to \mathbb{R}$ and equivalence classes of functions that are equal $\mu$-almost everywhere) such that $\|p_n\| = 1$ for all $n$ and $(p_m, p_n) = \frac12$ whenever $m \ne n.$

It turns out that these equations have, in a sense, a unique solution.

**Proposition 4.** The vectors $p_1, p_2, \ldots$ are linearly independent, and there exists an orthonormal sequence of vectors $q_1, q_2, \ldots$ such that, for all $n \geqslant 1,$ \begin{equation} \label{4283111:eq:3}\tag{3} p_n = \sum_{k=1}^{n-1}\frac1{\sqrt{2k(k+1)}}q_k + \sqrt{\frac{n+1}{2n}}q_n. \end{equation}

*Proof.* We use induction on $n,$ starting by defining $q_1 = p_1.$ Clearly, \eqref{4283111:eq:3} holds when $n = 1.$ Suppose that $n \geqslant 2$ and that \eqref{4283111:eq:3} is satisfied with $i$ in place of $n$ for $i = 1, \ldots, n-1.$ (We could even have allowed the empty case $n = 1,$ but we might as well suppose that $n \ne 1.$)

Let $K$ be the subspace of $H$ spanned by $p_1, \ldots, p_{n-1}.$ As a finite-dimensional subspace of a normed space, $K$ is closed (J. Dieudonné, *Foundations of Modern Analysis*, (5.9.2)), and as a closed subspace of the Hilbert space $H,$ it has a (unique) orthogonal complement, $K^\perp$ (Dieudonné, (6.3.1)), so $p_n$ is the sum of a vector in $K$ and a vector in $K^\perp.$ Because $H$ isn't finite-dimensional, $K^\perp$ is nontrivial, so the $K^\perp$-component of $p_n,$ even if it is $0$ (it isn't, but we don't know that yet), is a scalar multiple of some unit vector $q_n \in K^\perp.$

That is, there are real numbers $\kappa_1, \ldots, \kappa_{n-1}$ and $\lambda_n$ such that $$ p_n = \kappa_1p_1 + \cdots + \kappa_{n-1}p_{n-1} + \lambda_nq_n. $$ For each $i = 1, \ldots, n-1,$ we have $2(p_i, p_n) = 1,$ therefore: $$ 2\kappa_i + \sum_{j\ne i}\kappa_j = 1 \qquad (i = 1, \ldots, n-1). $$ Summing over $i$ and dividing by $n,$ we obtain $$ \kappa_1 + \cdots + \kappa_{n-1} = 1 - \frac1n, $$ whence $$ \kappa_i = \frac1n \qquad (i = 1, \ldots, n-1). $$ That is: \begin{equation} \label{4283111:eq:4}\tag{4} p_n = s_n + \lambda_nq_n, \text{ where } \ s_n = \frac{p_1 + \cdots + p_{n-1}}n. \end{equation} But $$ \|s_n\|^2 = \frac1{n^2}\left(\sum_{i=1}^{n-1}\|p_i\|^2 + \sum_{i\ne j}(p_i, p_j)\right) = \frac1{n^2}\left(n - 1 + \frac{(n-1)(n-2)}2\right) = \frac{n-1}{2n}, $$ therefore $$ \lambda_n = \sqrt{\|p_n\|^2 - \|s_n\|^2} = \sqrt{\frac{n+1}{2n}}. $$ Because $\lambda_n \ne 0,$ and because $p_1, \ldots, p_{n-1}$ are linearly independent by the inductive hypothesis, $p_1, \ldots, p_n$ are linearly independent.
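As a sanity check on the computations above, one can solve the linear system for the $\kappa_i$ and evaluate $\|s_n\|^2$ and $\lambda_n$ through the Gram matrix numerically. A small sketch assuming `numpy`, with a hypothetical sample value $n = 7$:

```python
import numpy as np

n = 7  # hypothetical sample value; any n >= 2 works
m = n - 1

# The system 2*kappa_i + sum_{j != i} kappa_j = 1 (i = 1, ..., n-1)
# has coefficient matrix I + (all-ones matrix).
A = np.eye(m) + np.ones((m, m))
kappa = np.linalg.solve(A, np.ones(m))
assert np.allclose(kappa, 1 / n)  # kappa_i = 1/n, as derived

# ||s_n||^2 via the Gram matrix of p_1, ..., p_{n-1}: equals (n-1)/(2n).
G = np.full((m, m), 0.5) + 0.5 * np.eye(m)
s_norm_sq = (np.ones(m) @ G @ np.ones(m)) / n**2
assert np.isclose(s_norm_sq, (n - 1) / (2 * n))

# lambda_n = sqrt(||p_n||^2 - ||s_n||^2) = sqrt((n+1)/(2n)).
lam = np.sqrt(1 - s_norm_sq)
assert np.isclose(lam, np.sqrt((n + 1) / (2 * n)))
```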

In conjunction with \eqref{4283111:eq:4}, the inductive hypothesis also implies: \begin{align*} p_n - \lambda_nq_n & = \frac1n\sum_{k=1}^{n-1}\left( \sqrt{\frac{k+1}{2k}} + \sum_{j=k+1}^{n-1}\frac1{\sqrt{2k(k+1)}}\right)q_k \\ & = \frac1n\sum_{k=1}^{n-1} \frac{(k + 1) + (n - k - 1)}{\sqrt{2k(k+1)}}q_k, \end{align*} and this gives \eqref{4283111:eq:3}, completing the proof of Proposition 4.

The converse is worth mentioning (although it's not needed here): for any $(X, \Sigma, \mu),$ and any orthonormal sequence $q_1, q_2,\ldots$ in $H,$ equation \eqref{4283111:eq:3} defines a sequence $p_1, p_2, \ldots$ that satisfies condition (2) of the question. Thus, define: $$ a_n = \frac1{\sqrt{2n(n + 1)}} \text{ and } \ b_n = \sqrt{\frac{n + 1}{2n}} \quad (n \geqslant 1). $$ Then, for all $n \geqslant 1,$ $$ \|p_n\|^2 = \sum_{k=1}^{n-1}a_k^2 + b_n^2 = \frac12\sum_{k=1}^{n-1}\left(\frac1k - \frac1{k + 1}\right) + \frac{n + 1}{2n} = \frac12\left(1 - \frac1n\right) + \frac12\left(1 + \frac1n\right) = 1, $$ and if $m > n,$ then $$ (p_m, p_n) = \sum_{k=1}^{n-1}a_k^2 + a_nb_n = \frac12\left(1 - \frac1n\right) + \frac1{2n} = \frac12. $$
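The converse lends itself to a direct numerical check: take the $q_k$ to be the standard basis vectors of $\mathbb{R}^N$ (abstract orthonormal vectors, not functions satisfying (1)), build $p_1, \ldots, p_N$ from equation (3), and verify that their Gram matrix is exactly the one prescribed by condition (2). A sketch assuming `numpy`:

```python
import numpy as np

N = 30  # number of p_n's to build; q_k = standard basis of R^N

def a(k):  # coefficient of q_k for k < n
    return 1 / np.sqrt(2 * k * (k + 1))

def b(n):  # coefficient of q_n
    return np.sqrt((n + 1) / (2 * n))

# Row n-1 of P holds the coordinates of p_n in the basis (q_k).
P = np.zeros((N, N))
for n in range(1, N + 1):
    for k in range(1, n):
        P[n - 1, k - 1] = a(k)
    P[n - 1, n - 1] = b(n)

# Gram matrix: 1 on the diagonal, 1/2 off it, as condition (2) requires.
gram = P @ P.T
expected = np.full((N, N), 0.5) + 0.5 * np.eye(N)
assert np.allclose(gram, expected)
```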

Returning to the argument: by Proposition 3, the series $$ s_\infty = \sum_{k=1}^\infty\frac1{\sqrt{2k(k+1)}}q_k $$ converges in $H,$ and $$ \|s_\infty\|^2 = \sum_{k=1}^\infty\frac1{2k(k+1)}=\frac12. $$ From \eqref{4283111:eq:3} and the definition of the sequence $(s_n)_{n \geqslant 1}$ in \eqref{4283111:eq:4}, we have $s_n = \sum_{k=1}^{n-1}a_kq_k,$ so, in the norm of $H,$ $$ \lim_{n\to\infty}\frac{p_1 + p_2 + \cdots + p_{n-1}}n = s_\infty. $$

We now begin to distinguish functions $X \to \mathbb{R}$ from their equivalence classes ("vectors") in the space $H.$ Applying Proposition 2, followed by Proposition 1, we deduce that for some strictly increasing sequence of positive integers $(n_k)_{k\geqslant1},$ \begin{equation} \label{4283111:eq:5}\tag{5} \lim_{k\to\infty}\frac{p_1(x) + p_2(x) + \cdots + p_{n_k-1}(x)}{n_k} = s_\infty(x) \ \text{ for almost all } x \in X. \end{equation}

Therefore, if condition (2) is satisfied, condition (1) cannot also be satisfied. Indeed, it cannot even be true that: \begin{equation} \label{4283111:eq:1p}\tag{$1'$} \sum_{n=1}^\infty p_n(x) = 1 \ \text{ for almost all } x \in X. \end{equation} For, we have $$ \int s_\infty^2\,d\mu = \|s_\infty\|^2 = \frac12 > 0, $$ therefore $s_\infty$ is nonzero on some set of strictly positive measure. On the other hand, because the functions $p_n$ are nonnegative, equation \eqref{4283111:eq:1p} forces the Cesàro means $\frac{p_1(x) + \cdots + p_{n-1}(x)}n$ to tend to $0$ for almost all $x,$ so, in conjunction with \eqref{4283111:eq:5}, which we have shown to be a consequence of condition (2) of the question, it would imply that $s_\infty(x) = 0$ for almost all $x \in X.$
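Both halves of the final contradiction can be illustrated numerically (assuming `numpy`): the telescoping series for $\|s_\infty\|^2$ sums to $\frac12,$ while the Cesàro means of any nonnegative sequence summing to $1$ tend to $0$, shown here with the hypothetical choice $p_n(x) = 2^{-n}$:

```python
import numpy as np

# Partial sums of sum_k 1/(2k(k+1)) telescope to (1 - 1/(K+1))/2 -> 1/2.
K = 10**5
ks = np.arange(1, K + 1)
partial = np.sum(1 / (2 * ks * (ks + 1)))
assert np.isclose(partial, 0.5 * (1 - 1 / (K + 1)))

# Pointwise: if p_n(x) >= 0 and sum_n p_n(x) = 1, then the Cesaro means
# (p_1(x) + ... + p_{n-1}(x)) / n tend to 0 -- here with p_n(x) = 2^-n.
p = 0.5 ** np.arange(1, 60)
cesaro = np.cumsum(p) / np.arange(2, 61)
assert cesaro[-1] < 0.02  # tends to 0, contradicting ||s_inf||^2 = 1/2 > 0
```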