A full answer, though a little sloppy, using Markov chains.
Consider the Markov chain with states $0,1,\dots,m$, starting state $0$, and transition matrix:
$$T=\begin{pmatrix}p&(1-p)&0&\cdots&0\\ p&0&1-p&\cdots&0\\
\vdots&\vdots&\vdots&\ddots&\vdots\\p&0&0&\cdots&1-p\\p&(1-p)&0&\cdots&0\end{pmatrix}$$
Then we want to know how often we visit state $m,$ which happens each time the number of heads in the current chain of consecutive heads reaches another multiple of $m$.
The expected number of visits to state $m$ is
$$\begin{pmatrix}1&0&0&\cdots&0\end{pmatrix}\left(\sum_{i=1}^{n} T^i\right)\begin{pmatrix}0\\0\\\vdots\\0\\1\end{pmatrix},$$
which is the expected number of non-overlapping sequences of $m$ heads.
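Here is a quick Python sketch one could use to sanity-check this numerically (the helper names `transition_matrix` and `expected_hits` are just illustrative; nothing in the argument depends on this):

```python
import numpy as np

def transition_matrix(p, m):
    """(m+1)x(m+1) matrix T above: a tail (prob. p) resets to state 0,
    a head (prob. 1-p) moves state i to i+1, and state m back to 1."""
    T = np.zeros((m + 1, m + 1))
    T[:, 0] = p                  # first column: a tail always returns to state 0
    for i in range(m):
        T[i, i + 1] = 1 - p      # a head extends the current run of heads
    T[m, 1] = 1 - p              # a head after a full block of m restarts the count at 1
    return T

def expected_hits(p, m, n):
    """sum_{i=1}^{n} (e_0^T T^i e_m): expected visits to state m in n tosses."""
    T = transition_matrix(p, m)
    Ti, total = np.eye(m + 1), 0.0
    for _ in range(n):
        Ti = Ti @ T
        total += Ti[0, m]
    return total

# e.g. expected number of non-overlapping runs of 3 heads in 100 tosses, p = 0.5
print(expected_hits(0.5, 3, 100))
```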
When $m=2,$ Wolfram Alpha gives me:
$$\begin{pmatrix}1&0&0\end{pmatrix} T^i\begin{pmatrix}0\\0\\1\end{pmatrix}=\frac{(1-p)^2}{2-p}\left(1-(p-1)^{i-1}\right)$$
So the sum is $$n\frac{(1-p)^2}{2-p}-\frac{(1-p)^2}{(2-p)^2}\left(1-(p-1)^n\right)$$
The limit of $E(X_2)/n$ then is $\frac{(1-p)^2}{2-p}.$
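A quick numerical check of that $m=2$ sum (a throwaway sketch; nothing below depends on it):

```python
import numpy as np

p, n = 0.3, 200
T = np.array([[p, 1 - p, 0.0],
              [p, 0.0, 1 - p],
              [p, 1 - p, 0.0]])

# sum_{i=1}^{n} (T^i)[0, 2], computed directly from matrix powers
Ti, total = np.eye(3), 0.0
for _ in range(n):
    Ti = Ti @ T
    total += Ti[0, 2]

# the closed form for the sum derived above
closed = n * (1 - p)**2 / (2 - p) - (1 - p)**2 / (2 - p)**2 * (1 - (p - 1)**n)
print(total, closed)   # the two numbers should agree up to floating-point error
```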
When $m=3,$ I think the limit is:
$$\frac{(1-p)^3}{p^2-3p+3}$$
Conjecture
It is possible that the limit is:
$$\frac{(1-p)^m}{1+(1-p)+\cdots+(1-p)^{m-1}}=\frac{p(1-p)^m}{1-(1-p)^m}=\frac{p}{(1-p)^{-m}-1}$$
As $p\to 0$ we'd expect a limit of $\frac1m,$ and we do get that, for $f(x)=(1-x)^{-m},$ $$\lim_{p\to 0} \frac{p}{f(p)-f(0)}=\frac{1}{f'(0)}=\frac{1}{m}.$$
I think the characteristic polynomial of $T$ is:
$$\lambda(\lambda-1)\frac{\lambda^{m}-(1-p)^m}{\lambda -(1-p)},$$ just from looking at $m=1,2,3,4.$
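One can at least test this guess numerically: the conjectured spectrum is $0$, $1$, and the $m$-th roots of $(1-p)^m$ other than $1-p$ itself. A rough sketch of such a check:

```python
import numpy as np

def transition_matrix(p, m):
    T = np.zeros((m + 1, m + 1))
    T[:, 0] = p
    for i in range(m):
        T[i, i + 1] = 1 - p
    T[m, 1] = 1 - p
    return T

p = 0.4
for m in range(1, 7):
    actual = np.poly(transition_matrix(p, m))           # characteristic polynomial of T
    # conjectured roots: 0, 1, and (1-p) times the nontrivial m-th roots of unity
    roots = np.concatenate(([0.0, 1.0],
                            (1 - p) * np.exp(2j * np.pi * np.arange(1, m) / m)))
    guess = np.poly(roots)                               # monic polynomial with those roots
    print(m, np.allclose(actual, guess))
```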
Attempt to understand $T$
If we write $T=pA_0+(1-p)B$ with $$A_0=\begin{pmatrix}1\\1\\\vdots\\1\end{pmatrix}\begin{pmatrix}1&0&0&\cdots&0\end{pmatrix},\\B=\begin{pmatrix}0&1&0&\cdots&0\\0&0&1&\cdots&0\\
\vdots&\vdots&\vdots&\ddots&\vdots\\
0&0&0&\cdots&1\\
0&1&0&\cdots&0
\end{pmatrix},$$
then $$A_0B=A_1=\begin{pmatrix}1\\1\\\vdots\\1\end{pmatrix}\begin{pmatrix}0&1&0&\cdots&0\end{pmatrix}.$$
Here $A_i$ denotes the all-ones column times the standard basis row vector $e_i^T$, so every row of $A_i$ equals $e_i^T$. For $i=0,\dots,m-1,$ $A_iB=A_{i+1},$ and $A_mB=A_1.$
Also, $BA_i=A_i.$
Also: $A_iA_j=A_j.$
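These identities are easy to confirm numerically, e.g. with a throwaway sketch like this (where `A(i)` is just my ad-hoc name for $A_i$):

```python
import numpy as np

m = 4

def A(i):
    """A_i: all-ones column times the standard basis row vector e_i^T."""
    M = np.zeros((m + 1, m + 1))
    M[:, i] = 1.0
    return M

B = np.zeros((m + 1, m + 1))      # the shift matrix B above
for i in range(m):
    B[i, i + 1] = 1.0
B[m, 1] = 1.0

assert all(np.array_equal(A(i) @ B, A(i + 1)) for i in range(m))    # A_i B = A_{i+1}
assert np.array_equal(A(m) @ B, A(1))                               # A_m B = A_1
assert all(np.array_equal(B @ A(i), A(i)) for i in range(m + 1))    # B A_i = A_i
assert all(np.array_equal(A(i) @ A(j), A(j))                        # A_i A_j = A_j
           for i in range(m + 1) for j in range(m + 1))
print("all identities hold for m =", m)
```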
So $$T^k=(pA_0+(1-p)B)^k=(1-p)^kB^k+\sum_{i=0}^{m} c(k,i)A_i$$ for some real $c(k,i).$ You get:
$$\begin{align} T^{k+1}&=(pA_0+(1-p)B)T^k\\&=p(1-p)^{k}A_0B^{k}+p\sum_i c(k,i)A_i+(1-p)^{k+1}B^{k+1}+(1-p)\sum_{i}c(k,i)A_i\\&=(1-p)^{k+1}B^{k+1}+p(1-p)^kA_0B^k+\sum_i c(k,i)A_i
\end{align}$$
So since $A_0B^k=A_i$ where $i=1,2,\dots,m$ has $i\equiv k\pmod m,$ you get:
$$c(k+1,i)=\begin{cases}c(k,i)+p(1-p)^k&i\neq0, i\equiv k\pmod m\\c(k,i)&\text{otherwise}\end{cases}$$
Then $$(1,0,0,\dots,0)T^k(0,0,\dots,1)^T=(1-p)^{k}(1,0,\dots,0)B^k(0,0,\dots,1)^T+c(k,m)$$
You start with $c(1,0)=p$ and $c(1,i)=0$ for $i\neq 0.$
And $$(1,0,\dots,0)B^k(0,0,\dots,1)^T=\begin{cases}1&m\mid k\\0&\text{otherwise}\end{cases}$$
So the expected value is:
$$\sum_{j=1}^{\lfloor n/m\rfloor}(1-p)^{mj}+m\left(\sum_{j=0}^{\lfloor n/m\rfloor-1}c(mj+1,m)\right)+(n\bmod m)c(n,m)$$
And $$c(mj+1,m)=\sum_{i=1}^j p(1-p)^{mi}=\frac{p(1-p)^{m}\left(1-(1-p)^{mj}\right)}{1-(1-p)^m}$$
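As a sanity check on this closed form, $(T^k)_{0,m}$ should equal $(1-p)^k$ when $m\mid k$, plus $c(k,m)$; a quick sketch comparing the two (same assumptions as the earlier sketches):

```python
import numpy as np

p, m = 0.3, 3
q = 1 - p

T = np.zeros((m + 1, m + 1))      # the transition matrix T above
T[:, 0] = p
for i in range(m):
    T[i, i + 1] = q
T[m, 1] = q

Tk = np.eye(m + 1)
for k in range(1, 40):
    Tk = Tk @ T
    c_from_T = Tk[0, m] - (q**k if k % m == 0 else 0.0)   # strip the (1-p)^k B^k part
    j = (k - 1) // m            # c(k, m) is constant on each block of m consecutive steps
    c_closed = p * q**m * (1 - q**(m * j)) / (1 - q**m)
    assert abs(c_from_T - c_closed) < 1e-12, (k, c_from_T, c_closed)
print("closed form for c(k, m) matches the matrix powers for k = 1, ..., 39")
```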
You get $$E(X_{m,n})=n\frac{p(1-p)^m}{1-(1-p)^m}+O(1).\tag1$$
From this we get my conjecture, with $q=1-p$ the probability of heads:
$$\frac{E(X_{m,n})}{n}\to\frac{1-q}{q^{-m}-1}$$
I think there is a constant $C,$ depending on $p$ and $m,$ such that, more specifically:
$$E(X_{m,n})=n\frac{p(1-p)^m}{1-(1-p)^m}+C+o(1).$$
My brain is exhausted, so I'm sure I'd get it wrong if I computed $C.$ You can probably replace $o(1)$ with the more specific $O\left((1-p)^n\right).$
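For what it's worth, a rough Monte Carlo check of $(1)$ (just a sketch; I count greedy non-overlapping runs of $m$ heads, with heads probability $1-p$, which is my reading of the setup):

```python
import numpy as np

rng = np.random.default_rng(0)
p, m, n, trials = 0.3, 3, 2000, 2000
q = 1 - p                          # probability of heads

total_hits = 0
for _ in range(trials):
    heads = rng.random(n) < q
    run = 0
    for h in heads:
        run = run + 1 if h else 0
        if run == m:               # completed another non-overlapping block of m heads
            total_hits += 1
            run = 0

print("simulated E[X]/n :", total_hits / (trials * n))
print("predicted limit  :", p * q**m / (1 - q**m))
```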
Best Answer
Each of the $8$ possible outcomes of $3$ consecutive independent tosses is equiprobable.
In $n$ tosses, there are $n-2$ such subsequences of $3$ consecutive tosses.
Thus, by linearity of expectation, $\Bbb {E}[X] = \frac{n-2}{8}$.
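A tiny simulation consistent with this (assuming, as I read the question, a fair coin and counting all length-$3$ windows, overlaps allowed, that match one fixed pattern such as HHH):

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 20, 200_000

tosses = rng.integers(0, 2, size=(trials, n)).astype(bool)   # True = heads, fair coin
# windows of 3 consecutive heads, overlaps allowed
windows = tosses[:, :-2] & tosses[:, 1:-1] & tosses[:, 2:]
print("simulated E[X]:", windows.sum(axis=1).mean())
print("(n - 2) / 8   :", (n - 2) / 8)
```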