In an arbitrary dimension $d$:
Let $\vec{R}$ be the end-to-end distance vector of a random walk of fixed step length $|\vec{r}_i| = l$. $\vec{R}$ can then be expressed as $\displaystyle \vec{R} = \sum_{i=1}^N \vec{r}_i$, where $\vec{r}_i$ is the vector of the $i$-th step. The root-mean-square (RMS) end-to-end distance is given by $\textrm{RMS}=\sqrt { \langle R^2 \rangle }$. Since the steps are mutually independent, the covariance of two steps $\vec{r}_i$ and $\vec{r}_j$ is zero if $i\neq j$, and $\textrm{Cov}(\vec{r}_i, \vec{r}_j)= \textrm{Var}(\vec{r}_i)$ if $i=j$. The variance of $\vec{r}_i$ can be expressed as $\textrm{Var}(\vec{r}_i)= \langle \vec{r}_i \cdot \vec{r}_i \rangle - \langle \vec{r}_i \rangle \cdot \langle \vec{r}_i \rangle$. By symmetry $\langle \vec{r}_i \rangle=\vec{0}$, and therefore the variance of $\vec{r}_i$ is simply $\textrm{Var}(\vec{r}_i)= \langle \vec{r}_i \cdot \vec{r}_i \rangle = |\vec{r}_i|^2 = l^2$. Altogether, the covariance of $\vec{r}_i$ and $\vec{r}_j$ equals $\textrm{Cov}(\vec{r}_i, \vec{r}_j)=\delta_{ij}l^2$. The covariance can also be expressed as $\textrm{Cov}(\vec{r}_i, \vec{r}_j) = \langle \vec{r}_i \cdot \vec{r}_j \rangle - \langle \vec{r}_i \rangle \cdot \langle \vec{r}_j \rangle$. Combining the two expressions and using $\langle \vec{r}_i \rangle=\vec{0}$ gives $\langle \vec{r}_i \cdot \vec{r}_j \rangle =\delta_{ij}l^2$. This result can be used to determine the RMS:
$$\textrm{RMS}=\sqrt { \langle R^2 \rangle } = \sqrt { \langle \vec{R} \cdot \vec{R} \rangle }
=\sqrt { \Big\langle \sum_{i=1}^N \vec{r}_i \cdot \sum_{j=1}^N \vec{r}_j \Big\rangle }
=\sqrt { \sum_{i=1}^N \sum_{j=1}^N \langle \vec{r}_i \cdot \vec{r}_j \rangle }$$
$$=\sqrt { \sum_{i=1}^N \sum_{j=1}^N l^2 \delta_{ij}}=\sqrt { \sum_{i=1}^N l^2}=\sqrt { N l^2}=l\sqrt { N }.$$
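For anyone who wants to verify this numerically, here is a minimal Monte Carlo sketch, assuming a simple hypercubic-lattice walk (the function name `rms_end_to_end` and all parameter values are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def rms_end_to_end(N, d, l=1.0, walks=100_000):
    """Monte Carlo RMS end-to-end distance of an N-step lattice walk in d dimensions."""
    axes = rng.integers(0, d, size=(walks, N))        # which coordinate each step moves
    signs = rng.choice([-1.0, 1.0], size=(walks, N))  # positive or negative direction
    R = np.zeros((walks, d))
    for i in range(d):
        R[:, i] = l * np.sum(signs * (axes == i), axis=1)  # net displacement along axis i
    return np.sqrt(np.mean(np.sum(R**2, axis=1)))

N, d, l = 400, 3, 1.0
print(rms_end_to_end(N, d, l))  # Monte Carlo estimate
print(l * np.sqrt(N))           # theory: RMS = l*sqrt(N), independent of d
```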
Let $Z_i$ denote the $i$-th coordinate of the end-to-end distance vector $\vec{R}$ after $N$ steps, and let $X_i$ and $Y_i$ denote the number of steps taken in the $i$-th dimension in the positive and negative direction, respectively. The $2d$ random variables $\{X_i, Y_i\}_{i=1}^d$ then follow a multinomial distribution with parameters $N$ and $\displaystyle p_i=\frac{1}{2d}$. For sufficiently large values of $N$, $\{X_i, Y_i\}_{i=1}^d$ are approximately iid (independent and identically distributed) Poisson random variables with parameter $\displaystyle \lambda = \frac{N}{2d}$. For $\lambda > 20$, i.e. $N>40d$, $\textrm{Po}(\lambda) \approx \textrm{N}(\lambda, \lambda)$. Since $Z_i = l(X_i - Y_i)$ is $l$ times the difference of two approximately independent normal variables, $\displaystyle Z_i \sim \textrm{N}\big(l(\lambda - \lambda),\, l^2(\lambda+\lambda)\big)=\textrm{N}(0, 2l^2\lambda)=\textrm{N}\left(0, \frac{l^2N}{d}\right)$.
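A quick way to check the claimed distribution of $Z_i$ is to sample the step counts from the multinomial directly and compare the empirical moments of $Z_1 = l(X_1 - Y_1)$ with $\textrm{N}(0, l^2N/d)$. A sketch (parameter values are illustrative; the variance $l^2N/d$ is in fact exact under the multinomial, so the check is clean):

```python
import numpy as np

rng = np.random.default_rng(0)

# Columns ordered (X_1, Y_1, X_2, Y_2, ..., X_d, Y_d), each with probability 1/(2d).
N, d, l, walks = 400, 3, 1.0, 200_000
counts = rng.multinomial(N, [1.0 / (2 * d)] * (2 * d), size=walks)

Z1 = l * (counts[:, 0] - counts[:, 1])  # Z_1 = l*(X_1 - Y_1)
print(Z1.mean(), Z1.var())              # empirical moments
print(0.0, l**2 * N / d)                # predicted: mean 0, variance l^2*N/d
```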
$\displaystyle \langle R \rangle
= \langle \sqrt{R^2} \rangle
= \left\langle \sqrt{ \sum_{i=1}^d Z_i^2} \right\rangle$. The square root of the sum of squares of $k$ independent $\textrm{N}(0, 1)$-distributed random variables follows the chi distribution, $\chi_k$. Since each $\displaystyle Z_i\sqrt{\frac{d}{l^2N}}$ is approximately $\textrm{N}(0,1)$, the quantity $\displaystyle \sqrt{ \sum_{i=1}^d \frac{dZ_i^2}{l^2N}}$ is approximately $\chi_d$-distributed for large values of $N$. The expected value of a $\chi_k$-distributed random variable is $\displaystyle \sqrt{2}\, \frac{\Gamma \left(\frac{k+1}{2}\right)}{\Gamma \left( \frac{k}{2}\right)}$.
Hence $\displaystyle \langle R \rangle
=\left\langle\sqrt{ \sum_{i=1}^d Z_i^2}\right\rangle
=\left\langle l \sqrt{\frac{N}{d}} \sqrt{ \sum_{i=1}^d \frac{dZ_i^2}{l^2N} }\right\rangle
= l \sqrt{\frac{2N}{d}}\, \frac{\Gamma \left(\frac{d+1}{2}\right)}{\Gamma \left( \frac{d}{2}\right)}$. For example, in $d=3$ this gives $\displaystyle \langle R \rangle = l\sqrt{\frac{2N}{3}}\cdot\frac{2}{\sqrt{\pi}} = l\sqrt{\frac{8N}{3\pi}} \approx 0.921\, l\sqrt{N}$, slightly below the RMS value $l\sqrt{N}$, as it must be since $\langle R \rangle \leq \sqrt{\langle R^2 \rangle}$ by Jensen's inequality.
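Putting the pieces together, a short sketch comparing this prediction with a direct simulation of the lattice walk (again, all names and parameter values are illustrative):

```python
import math
import numpy as np

rng = np.random.default_rng(0)

N, d, l, walks = 400, 3, 1.0, 100_000

# Predicted mean end-to-end distance: l*sqrt(2N/d) * Gamma((d+1)/2) / Gamma(d/2).
predicted = l * math.sqrt(2 * N / d) * math.gamma((d + 1) / 2) / math.gamma(d / 2)

# Simulate via the multinomial step counts, columns ordered (X_1, Y_1, ..., X_d, Y_d).
counts = rng.multinomial(N, [1.0 / (2 * d)] * (2 * d), size=walks)
Z = l * (counts[:, 0::2] - counts[:, 1::2])  # Z_i = l*(X_i - Y_i) for each axis
R = np.sqrt(np.sum(Z**2, axis=1))            # end-to-end distance per walk

print(predicted, R.mean())                   # ~0.921*l*sqrt(N) in d=3 vs simulation
```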
Best Answer
Assume that the $k$th step is $x_k$ in $\mathbb R^n$ with $E[x_k]=0$, $\|x_k\|=1$ almost surely, and that the steps $(x_k)$ are independent. Let $s_N=\|x_1+\cdots+x_N\|^2$.
Then $s_N=N+\sum\limits_{k\ne\ell} x_k\cdot x_\ell$. By independence, $E[x_k\cdot x_\ell]=E[x_k]\cdot E[x_\ell]=0$ for every $k\ne\ell$, hence
$$ E[s_N]=N. $$
Likewise, $s_N^2=N^2+2N\sum\limits_{k\ne\ell} x_k\cdot x_\ell+t_N$ with
$$ t_N=\left(\sum\limits_{k\ne\ell} x_k\cdot x_\ell\right)^2=\sum\limits_{k\ne\ell}\sum\limits_{i\ne j}(x_k\cdot x_\ell)(x_i\cdot x_j). $$
The only terms in the summation whose mean is not zero are those with $\{k,\ell\}=\{i,j\}$. There are $2N(N-1)$ such terms, hence $E[t_N]=2N(N-1)\alpha$ with $\alpha=E[(x_1\cdot x_2)^2]$. Finally,
$$ \mathrm{var}(s_N)=E[t_N]=2N(N-1)\alpha. $$
This is an exact formula, valid for every $N\geqslant1$. The value of the parameter $\alpha$ depends on the specifics of the distribution of the displacement $x_1$: writing $x_1=(x_1^{(i)})_{1\leqslant i\leqslant n}$ and expanding, $E[(x_1\cdot x_2)^2]=\sum\limits_{i,j}E[x_1^{(i)}x_1^{(j)}]^2$, which, when the coordinates of $x_1$ are uncorrelated, reduces to
$$ \alpha=\sum\limits_{i=1}^n E\left[(x_1^{(i)})^2\right]^2. $$
If the coordinates of $x_1$ are identically distributed, then $E[(x_1^{(i)})^2]=\frac1n$ for every $i$ by symmetry (since $\|x_1\|=1$ almost surely forces $\sum_i E[(x_1^{(i)})^2]=1$), hence $\alpha=\frac1n$ and $\mathrm{var}(s_N)=\frac2nN(N-1)\sim\frac2nN^2$ as $N\to\infty$.
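As a sanity check of the exact formula, here is a minimal sketch for steps drawn uniformly from the unit sphere in $\mathbb R^n$, a case where the coordinates are identically distributed and uncorrelated, so $\alpha=\frac1n$ (all parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Check E[s_N] = N and var(s_N) = 2N(N-1)/n for unit steps uniform on the sphere.
N, n, walks = 50, 3, 200_000

steps = rng.standard_normal((walks, N, n))
steps /= np.linalg.norm(steps, axis=2, keepdims=True)  # normalize: ||x_k|| = 1 a.s.
s = np.sum(np.sum(steps, axis=1)**2, axis=1)           # s_N = ||x_1 + ... + x_N||^2

print(s.mean(), N)                       # E[s_N] vs N
print(s.var(), 2 * N * (N - 1) / n)      # var(s_N) vs 2N(N-1)/n
```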