Consider $X=\{0,1,\ldots,k-1\}^\mathbb Z$ equipped with the $\sigma$-algebra $\mathcal C$ generated by the cylinder sets $$\{x\in X:x_m=i_m,\ldots,x_n=i_n\}\tag{$m\leq n$}.$$ It is well known that for a probability vector $(p_0,\ldots,p_{k-1})$ the $p$-Bernoulli measure $\mu$, defined by $$\mu\{x\in X:x_m=i_m,\ldots,x_n=i_n\}=p_{i_m}\cdots p_{i_n},$$ is $T$-invariant (and ergodic) for the left shift $T:X\to X:x\mapsto y$, where $y_n=x_{n+1}$. However, I was wondering whether there are any other $T$-invariant (not necessarily ergodic) measures. If I understand it correctly, this post seems to imply that no other $T$-invariant measure exists, but I don't see why another such measure would be impossible.
Are product measures the only $T$-invariant measures if $T$ is the left shift on $\{0,1,\ldots,k-1\}^\mathbb Z$?
ergodic-theory, measure-theory
Related Solutions
Let's start by noting that the measure $\mu_{X}$ we are about to construct is well defined on all Borel sets. Take
$$ X : \Omega \longrightarrow \mathbb{R}, \qquad X=\sum\limits_{i}\frac{X_{i}}{2^{i}}. $$ Note that the partial sums $\sum\limits_{i=1}^{N}\frac{X_{i}}{2^{i}}$ are random variables on $\Omega$, so their limit is in fact a random variable. We can then define a measure $\mu_{X}$ on all Borel sets by setting $\mu_{X}(B)=\mu(X^{-1}(B))$. We can also get a bit more intuition about how this measure works by looking at how it assigns measure to dyadic intervals.
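A quick numerical sketch of this construction (the helper name `sample_X` is made up for illustration): approximate $X$ by truncating the binary series after finitely many i.i.d. Bernoulli($p$) bits.

```python
import random

def sample_X(p, n_bits=53, rng=random):
    """Approximate X = sum_{i>=1} X_i / 2^i with n_bits i.i.d. Bernoulli(p)
    bits; 53 bits already exhaust double precision."""
    return sum((1 if rng.random() < p else 0) / 2**i
               for i in range(1, n_bits + 1))

rng = random.Random(0)
xs = [sample_X(0.3, rng=rng) for _ in range(1000)]
# X always lands in [0, 1], and E[X] = sum_i p / 2^i = p
assert all(0.0 <= x <= 1.0 for x in xs)
```

The sample mean of many such draws hovers near $p$, consistent with $\mathbb{E}[X]=\sum_i p/2^i = p$.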
Take any dyadic interval $I=[\frac{b}{2^{n}},\frac{b+1}{2^{n}}]$ of level $n$. By writing $b$ in base $2$ (padded to $n$ digits) we may identify all dyadic intervals of level $n$ with strings of length $n$ in $\{0,1\}^{n}$. Now for any string $\sigma$ of length $n$ write $I_{\sigma}$ for the respective dyadic interval and $C_{\sigma}$ for the respective cylinder. We'll show by induction on $n$ that $X^{-1}(I_{\sigma})=C_{\sigma}\cup D_{\sigma}$ where $D_{\sigma}$ is a finite set (and thus of measure $0$ in $\Omega$).
The base case $n=1$ is clear if we notice that $X \in [0,\frac{1}{2}]=I_{0}$ if and only if either $X_{1}=0$, or ($X_{1}=1$ and $X_{i}=0$ for all $i > 1$); similarly for $I_{1}=[\frac{1}{2},1]$.
Now suppose we have proved our assertion for $n$. Take $\sigma$ any string of length $n+1$. If $\sigma=0\sigma'$ for $\sigma'$ some string of length $n$, we have that $X \in I_{\sigma}$ if and only if $2X \in I_{\sigma'}$ and $X_{1}=0$. Write $\Omega_{0}=\{\omega \in \Omega: X_{1}(\omega)=0\}$. The map that prepends a $0$, $$S(\omega)_{n}=\begin{cases} 0 & n=1 \\ \omega_{n-1} & n>1, \end{cases}$$
is a bijection from $\Omega$ to $\Omega_{0}$, furthermore, $S(C_{\rho})=C_{0\rho}$ for any string $\rho$. Now
$$2X|_{\Omega_{0}}=\sum\limits_{i\geq 2}\frac{X_{i}}{2^{i-1}}=X \circ S^{-1}$$ (the $i=1$ term vanishes on $\Omega_{0}$).
So $$X^{-1}(I_{\sigma})=\left(2X|_{\Omega_{0}}\right)^{-1}(I_{\sigma'})=(X \circ S^{-1})^{-1}(I_{\sigma'})=S(X^{-1}(I_{\sigma'}))=S(C_{\sigma'}\cup D_{\sigma'})=C_{\sigma} \cup S(D_{\sigma'})$$
Here we used the inductive hypothesis on $X^{-1}(I_{\sigma'})$. The case where $\sigma=1\sigma'$ can be done similarly by instead noting that $X^{-1}(I_{\sigma})=\left((2X-1)|_{\Omega_{1}}\right)^{-1}(I_{\sigma'})$ where $\Omega_{1}=\{\omega \in \Omega : X_{1} = 1\}$. This completes the induction.
It is now clear that $\mu_{X}(I_\sigma)=p^{\sum_i\sigma_{i}}(1-p)^{\sum_i (1-\sigma_{i})}$.
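This formula is easy to sanity-check numerically; the sketch below (the helper name `mu_dyadic` is made up) verifies that the level-$n$ values sum to $1$ and that the measure of $I_\sigma$ splits correctly between its two children.

```python
from itertools import product

def mu_dyadic(sigma, p):
    """mu_X of the dyadic interval I_sigma: one factor p per 1 in sigma,
    one factor (1 - p) per 0."""
    ones = sum(sigma)
    return p ** ones * (1 - p) ** (len(sigma) - ones)

p, n = 0.3, 6
# the 2^n level-n dyadic intervals cover [0, 1], so the values sum to 1
total = sum(mu_dyadic(s, p) for s in product((0, 1), repeat=n))
assert abs(total - 1.0) < 1e-12

# consistency: I_sigma is the union of I_{sigma 0} and I_{sigma 1}
sigma = (1, 0, 1)
assert abs(mu_dyadic(sigma, p)
           - mu_dyadic(sigma + (0,), p)
           - mu_dyadic(sigma + (1,), p)) < 1e-12
```

These two checks are exactly the finite-additivity conditions Carathéodory's theorem needs in the "within $\mathbb{R}$" approach mentioned next.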
There is also a more "within $\mathbb{R}$" approach to finding the same measure through Carathéodory's extension theorem: by assigning this probability to each dyadic interval, we can construct exactly the same measure. Moreover, in the special case $p=\frac{1}{2}$, the uniqueness of the extension guarantees $\mu_{X}=\mathcal{L}$, the Lebesgue measure. Sadly, that is as far as we can go in establishing relationships with $\mathcal{L}$.
For parameters $p$ and $q$, write $\mu_{p}$ and $\mu_{q}$ for the measures constructed as above from Bernoulli random variables with parameters $p$ and $q$ respectively. One can prove that if $p \neq q$ then $\mu_{p}$ and $\mu_{q}$ are mutually singular; in particular, if $p \neq \frac{1}{2}$ then $\mu_{p}$ and $\mathcal{L}$ are mutually singular, so the Radon–Nikodym theorem tells us little about $\mu_{p}$.
For a real number $x$, write $x_{i}$ for the $i$-th digit in its binary expansion. By looking carefully at how $\mu_{X}$ assigns probability to dyadic intervals, we can show that the functions $f_{i}(x)=x_{i}$ are independent Bernoulli random variables with parameter $p$ with respect to $\mu_{p}$, and with parameter $q$ with respect to $\mu_{q}$. Thus, by the law of large numbers, the sets $A_{p}=\{ x \in \mathbb{R} : \lim\limits_{n \to +\infty}\frac{1}{n}\sum_{i=1}^{n}x_{i}=p\}$ and $A_{q}=\{ x \in \mathbb{R} : \lim\limits_{n \to +\infty}\frac{1}{n}\sum_{i=1}^{n}x_{i}=q\}$ fulfill
$$ \mu_{p}(A_{p})=1 $$ $$ \mu_{q}(A_{q})=1 $$
Since $A_{p} \cap A_{q} = \emptyset$ when $p \neq q$, the measures $\mu_{p}$ and $\mu_{q}$ are mutually singular.
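A small simulation illustrates the law-of-large-numbers argument: digits sampled under $\mu_{p}$ have empirical frequency near $p$, so typical points for $\mu_{p}$ and $\mu_{q}$ live in disjoint sets. (The sketch below is illustrative; `digit_freq` is a made-up name.)

```python
import random

def digit_freq(p, n, rng):
    """Empirical frequency of 1s among n i.i.d. Bernoulli(p) digits,
    i.e. the first n binary digits of a mu_p-typical point."""
    return sum(1 for _ in range(n) if rng.random() < p) / n

rng = random.Random(0)
n = 10_000
f_p = digit_freq(0.3, n, rng)   # typical for mu_{0.3}
f_q = digit_freq(0.7, n, rng)   # typical for mu_{0.7}
# the frequencies concentrate near p and q respectively, so the
# full-measure sets A_{0.3} and A_{0.7} are disjoint
assert abs(f_p - 0.3) < 0.05 and abs(f_q - 0.7) < 0.05
```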
There is a little bit of discussion of the characteristics of these measures and of the sets $A_{p}$ from a fractal-geometry perspective in the first chapter of:
Bishop, Christopher J. and Peres, Yuval, *Fractals in Probability and Analysis*, Cambridge Studies in Advanced Mathematics 162, Cambridge University Press, 2017. ISBN 978-1-107-13411-9 (hbk), 978-1-316-46023-8 (ebook). ZBL1390.28012.
Best Answer
There are many, many other shift-invariant measures. One very easy example is something like $\mu = \frac12 \delta_{x} + \frac12 \delta_{Tx}$, where $x$ is the periodic sequence $010101\cdots$.
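One can check this invariance directly on cylinders: $T^{-1}$ of the cylinder fixing coordinates $m,\ldots,m+\ell-1$ is the cylinder fixing $m+1,\ldots,m+\ell$, so invariance of $\mu$ amounts to its cylinder values not depending on the starting index. A sketch (helper names are made up):

```python
from itertools import product

def x_per(n):      # the periodic point ...010101..., indexed over Z
    return n % 2

def Tx(n):         # its shift: (Tx)_n = x_{n+1}
    return (n + 1) % 2

def in_cyl(point, m, word):
    """Does the bi-infinite sequence `point` lie in the cylinder fixing
    coordinates m, ..., m + len(word) - 1 to `word`?"""
    return all(point(m + i) == w for i, w in enumerate(word))

def mu(m, word):
    """mu = (1/2) delta_x + (1/2) delta_{Tx} evaluated on that cylinder."""
    return 0.5 * in_cyl(x_per, m, word) + 0.5 * in_cyl(Tx, m, word)

# T^{-1}(cylinder at m) = cylinder at m + 1, so invariance says
# mu(m, word) == mu(m + 1, word) for every m and every word
for m in range(-3, 3):
    for k in range(1, 4):
        for word in product((0, 1), repeat=k):
            assert mu(m, word) == mu(m + 1, word)
```

Shifting the index by one simply swaps the roles of $x$ and $Tx$ (since $T^2x = x$), which is why the two-point average is invariant.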
Another natural example is the Markov measure generated by transition probabilities $(P_{ij})_{0 \leq i,j \leq k-1}$ and the corresponding stationary distribution $(\pi(0), \dots, \pi(k-1))$. The Markov measure $\mu$ is defined on cylinders by $$ \mu\{x \in X : x_m \cdots x_{n} = i_{m} \dots i_n\} = \pi(i_m) P_{i_m i_{m+1}} \cdots P_{i_{n-1} i_n}. $$ This example is called a Markov measure because it's just the dynamical systems way of talking about the joint distribution for sample paths of a finite state Markov chain.
Also, you can check that the first example is ergodic, and it's a classical theorem that the Markov measure is ergodic if the Markov chain is irreducible.
There are plenty of other examples besides these. For shift systems, the set of invariant measures and the set of ergodic invariant measures are both huge.