The shift transformation $\theta$ on $\Omega^{\Bbb N}$ is ergodic. Indeed, it is enough to show that for all cylinder sets $A$ and $B$, we have
$$\frac 1n\sum_{k=0}^{n-1}\mu(\theta^{-k}A\cap B)\to \mu(A)\mu(B),$$
where $\mu$ is the measure on the product $\sigma$-algebra.
If $A=\prod_{j=0}^NA_j\times \Omega\times\dots$ and $B=\prod_{j=0}^NB_j\times \Omega\times\dots$, we have for $k>N$
\begin{align}
\theta^{-k}A\cap B&=\{(x_j)_{j\geq 0} : (x_{j+k})_{j\geq 0}\in A \text{ and } (x_j)_{j\geq 0}\in B\}\\
&=\{(x_j)_{j\geq 0} : x_{j+k}\in A_j \text{ and } x_j\in B_j \text{ for } 0\leq j\leq N\}\\
&=B_0\times \dots\times B_N\times \underbrace{\Omega\times\dots\times \Omega}_{k-N-1\text{ factors}}\times A_0\times\dots\times A_N\times \Omega\times\dots,
\end{align}
and the product measure $\mu$ factorizes over such cylinders, so $\mu(\theta^{-k}A\cap B)=\mu(A)\mu(B)$ for every $k>N$; the finitely many terms with $k\leq N$ do not affect the Cesàro average.
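As a quick numerical sketch of this factorization, take the illustrative choice $\Omega=\{0,1\}$ with fair coin flips, $A=\{x: x_0=1, x_1=1\}$ and $B=\{x: x_0=0, x_1=1\}$ (so $N=1$), and compute the Cesàro average of $\mu(\theta^{-k}A\cap B)$ exactly: the names `mu` and `mu_intersection` below are just helpers for this example, not anything from the original argument.

```python
from fractions import Fraction

# Sketch: Omega = {0,1}, fair coin, product measure mu.
# A = {x : x_0 = 1, x_1 = 1},  B = {x : x_0 = 0, x_1 = 1}   (N = 1).
A = {0: 1, 1: 1}
B = {0: 0, 1: 1}
half = Fraction(1, 2)

def mu(constraints):
    """Measure of a cylinder given as {coordinate: required value}."""
    return half ** len(constraints)

def mu_intersection(shifted, k, other):
    """mu(theta^{-k} A  intersect  B): theta^{-k} A constrains coordinate j + k."""
    merged = dict(other)
    for j, v in shifted.items():
        if merged.get(j + k, v) != v:   # conflicting constraints -> empty set
            return Fraction(0)
        merged[j + k] = v
    return mu(merged)

n = 200
cesaro = sum(mu_intersection(A, k, B) for k in range(n)) / n
print(cesaro, mu(A) * mu(B))   # Cesàro average vs. mu(A) * mu(B) = 1/16
```

For $k>N$ each term is exactly $\mu(A)\mu(B)$, so only the first few summands perturb the average.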
Since $\theta$ is ergodic, $\mathcal J_{\theta}$ consists only of events of measure $0$ or $1$. The conditional expectation with respect to such a $\sigma$-algebra is necessarily constant.
Let $S_r = \sum_{k=1}^r X_k$ and, for easier notation, put $S_0 = 0$.
Note that
\begin{align}
\sum_{k=1}^n \frac{X_k}{k} &= \sum_{k=1}^n \frac{S_k-S_{k-1}}{k} = \sum_{k=1}^n \frac{S_k}{k} - \sum_{k=1}^n \frac{S_{k-1}}{k}\\
&= \frac{S_n}{n} + \sum_{k=1}^{n-1} \frac{S_k}{k} - \sum_{k=2}^n \frac{S_{k-1}}{k} = \frac{S_n}{n} + \sum_{k=1}^{n-1} \frac{S_k}{k} - \sum_{k=1}^{n-1} \frac{S_k}{k+1}\\
&= \frac{S_n}{n} + \sum_{k=1}^{n-1}S_k\left(\frac{1}{k} - \frac{1}{k+1}\right) = \frac{S_n}{n} + \sum_{k=1}^{n-1} \frac{S_k}{k}\cdot\frac{1}{k+1}
\end{align}
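The summation-by-parts identity above is purely algebraic, so it can be checked on any sequence; here is a minimal sketch with arbitrary Gaussian inputs (an illustrative choice, not part of the proof).

```python
import random

# Numerical check of the identity
#   sum_{k=1}^n X_k/k = S_n/n + sum_{k=1}^{n-1} (S_k/k) * 1/(k+1)
# for an arbitrary sequence (it holds for any numbers X_1, ..., X_n).
random.seed(0)
n = 50
X = [random.gauss(0, 1) for _ in range(n)]   # X[k-1] plays the role of X_k
S = [0.0]
for x in X:
    S.append(S[-1] + x)                      # S[k] = X_1 + ... + X_k, S[0] = 0

lhs = sum(X[k - 1] / k for k in range(1, n + 1))
rhs = S[n] / n + sum(S[k] / k / (k + 1) for k in range(1, n))
print(abs(lhs - rhs))   # agrees up to floating-point rounding
```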
Put $W_r = \frac{S_r}{r}$, so that we have $$\frac{1}{\ln(n)} \sum_{k=1}^n \frac{X_k}{k} = \frac{W_n}{\ln(n)} + \frac{1}{\ln(n)} \cdot \sum_{k=1}^{n-1} \frac{W_k}{k+1}$$
Call $Y_n = \frac{W_n}{\ln(n)}$. By the SLLN, $W_n$ tends to $\mu$ a.s., so $Y_n$ tends to $0$ a.s.
Now, as we said, $W_n$ tends to $\mu$ a.s., so there is a set $\Omega_0$ with $\mathbb P(\Omega_0) = 1$ such that $W_n(\omega) \to \mu$ for every $\omega \in \Omega_0$.
Fix such an $\omega \in \Omega_0$. It is sufficient to show that $\frac{1}{\ln(n)}\sum_{k=1}^{n-1} \frac{W_k(\omega)}{k+1}$ tends to $\mu$.
We need one standard fact from analysis: for any $N \in \mathbb N$,
$$ \lim_{n \to \infty} \frac{\sum_{k=N}^n \frac{1}{k}}{\ln(n)} = 1$$
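A short sketch checking this fact numerically (with the illustrative choice $N=2$; note the convergence is slow, of order $1/\ln n$):

```python
import math

# The standard fact: for fixed N, (sum_{k=N}^n 1/k) / ln(n) -> 1 as n -> infinity.
N = 2
for n in (10**2, 10**4, 10**6):
    ratio = sum(1.0 / k for k in range(N, n + 1)) / math.log(n)
    print(n, ratio)   # approaches 1
```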
Now, take any $\epsilon>0$ and choose $N \in \mathbb N$ such that $|W_n(\omega) - \mu| <\epsilon$ for all $n \geq N$.
Then $$ \frac{1}{\ln(n)} \sum_{k=1}^n \frac{W_k(\omega)}{k+1} = \frac{1}{\ln(n)}\sum_{k=1}^{N-1} \frac{W_k(\omega)}{k+1} + \frac{1}{\ln(n)} \sum_{k=N}^n \frac{W_k(\omega)}{k+1}$$
The first sum has only finitely many terms, so it tends to $0$ as $n \to \infty$ (note that $N$ is fixed).
For the second one, we can bound it from below and above:
$$ \frac{\mu - \epsilon}{\ln(n)} \sum_{k=N}^n \frac{1}{k+1} \le \frac{1}{\ln(n)}\sum_{k=N}^n \frac{W_k(\omega)}{k+1} \le \frac{\mu + \epsilon}{\ln(n)}\sum_{k=N}^n \frac{1}{k+1}$$
Now use our fact from analysis (shifting the index from $\frac1k$ to $\frac1{k+1}$ changes the sum by a bounded amount, so the limit is still $1$) to conclude that for our $\omega \in \Omega_0$, we have:
$$ \mu -\epsilon \le \liminf \frac{1}{\ln(n)} \sum_{k=N}^n \frac{W_k(\omega)}{k+1} \le \limsup \frac{1}{\ln(n)} \sum_{k=N}^n \frac{W_k(\omega)}{k+1} \le \mu+\epsilon$$
Since $\epsilon >0$ was arbitrary, we have $$\lim \frac{1}{\ln(n)} \sum_{k=1}^n \frac{W_k(\omega)}{k+1} = \mu.$$

Again, since $\omega \in \Omega_0$ was arbitrary, we have $$\lim \frac{1}{\ln(n)} \sum_{k=1}^n \frac{W_k}{k+1} = \mu \quad a.s.$$
Using everything we proved above, we conclude: $$ \lim_{n \to \infty} \frac{1}{\ln(n)} \sum_{k=1}^n \frac{X_k}{k} = \mu \quad a.s. $$
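The conclusion can be illustrated by a Monte Carlo sketch; the choice of $X_k \sim \mathrm{Exponential}(1)$ (so $\mu = 1$) is purely for illustration, and the log-average converges slowly, so only rough agreement should be expected.

```python
import math
import random

# Monte Carlo sketch of the conclusion: for i.i.d. X_k with mean mu,
#   (1 / ln n) * sum_{k=1}^n X_k / k  ->  mu   a.s.
# Illustrative choice: X_k ~ Exponential(1), so mu = 1.
random.seed(1)
n = 10**6
log_avg = sum(random.expovariate(1.0) / k for k in range(1, n + 1)) / math.log(n)
print(log_avg)   # roughly mu = 1 (convergence is of order 1/ln n)
```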
I'm not confident in the case where the distributions of the $X_{n,i}$ do not depend on $n$.
Assume that the variances $\operatorname{Var}(X_{n,i}) = \sigma_n^2$ tend to zero fast enough that $$\sum_n \frac{\sigma_n^2}{n} < \infty.$$ In particular, it suffices that $\sigma_n$ decays like any negative power of $n$. Then, by independence within each row, $$\operatorname{Var}\left(\sum_{i=1}^n \frac{X_{n,i} - \mu_n}{n}\right) = \frac{n\sigma_n^2}{n^2} = \frac{\sigma_n^2}{n}.$$ Thus, by Chebyshev's inequality, for every $c>0$, $$\mathbf{P}\left(\left|\sum_{i=1}^n \frac{X_{n,i} - \mu_n}{n}\right| > c\right) \leq \frac{\sigma_n^2/n}{c^2}.$$ By our earlier assumption, these probabilities are summable in $n$, so by Borel-Cantelli only finitely many of these events occur a.s.; taking $c = 1/m$ over $m \in \mathbb N$ shows that the centered row sums converge a.s. to $0$.
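The variance computation in the middle step can be sanity-checked by simulation; everything below ($\sigma_n = n^{-1/2}$, $\mu_n = 0$, Gaussian rows) is an illustrative assumption, not part of the argument.

```python
import random

# Monte Carlo check of the row-variance computation: for independent
# X_{n,i}, i = 1..n, with variance sigma_n^2 and mean mu_n,
#   Var( sum_i (X_{n,i} - mu_n) / n ) = sigma_n^2 / n.
# Illustrative choices: n = 100, sigma_n = n^{-1/2}, mu_n = 0, Gaussian rows.
random.seed(2)
n = 100
sigma_n = n ** -0.5
reps = 10000
samples = []
for _ in range(reps):
    row = [random.gauss(0.0, sigma_n) for _ in range(n)]  # centered row
    samples.append(sum(row) / n)
emp_var = sum(s * s for s in samples) / reps
print(emp_var, sigma_n**2 / n)   # empirical variance vs. sigma_n^2 / n = 1e-4
```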