Assuming you mean Leonid Kovalev's interpretation, the distribution of the hitting time in the $N = 1$ case is the same as the distribution of the number of cycles of a random permutation of $[n]$.
To be more specific, I'll change coordinates. Let $X_0 = (x_0^1, \ldots, x_0^N) = (S, S, \ldots, S)$. Let $X_1 = (x_1^1, \ldots, x_1^N)$, where $x_1^j$ is chosen uniformly at random from $0, 1, \ldots, x_0^j-1$. Define $X_2$ similarly in terms of $X_1$, and so on.
Then the terms of the sequence $(x_0^1, x_1^1, x_2^1, x_3^1, \ldots)$ are as follows:
- $x_0^1 = S$, of course.
- $x_1^1$ is uniformly distributed on $\{ 0, \ldots, S-1 \}$.
- $x_2^1$ is uniformly distributed on $\{ 0, \ldots, x_1^1-1 \}$.
and so on...
In particular, the distribution of $x_1^1$ is the distribution of the number of elements of a random permutation of $S$ elements which are {\it not} in the cycle containing 1; more generally, the distribution of $x_k^1$ is the distribution of the number of elements which are {\it not} in any of the $k$ cycles with the smallest minima.
So the distribution of the smallest $k$ for which $x_k^1 = 0$ is exactly the distribution of the number of cycles of a random permutation of $\{ 1, \ldots, S \}$; its expectation is $H_S = 1 + 1/2 + \cdots + 1/S \sim \log S + \gamma$, where $\gamma = 0.577\ldots$ is the Euler-Mascheroni constant.
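This is easy to check numerically. Here is a quick Monte Carlo sketch (mine, not part of the original answer; the function names are arbitrary) of the chain $x \mapsto \mathrm{Uniform}\{0, \ldots, x-1\}$ started from $S$; the sample mean of the hitting time of $0$ should track $H_S$:

```python
import random

def hitting_time(S, rng):
    """Steps until the chain x -> Uniform{0, ..., x-1} first hits 0, starting from S."""
    x, steps = S, 0
    while x > 0:
        x = rng.randrange(x)  # uniform on {0, ..., x-1}
        steps += 1
    return steps

def harmonic(S):
    """H_S = 1 + 1/2 + ... + 1/S."""
    return sum(1.0 / i for i in range(1, S + 1))

rng = random.Random(0)
S, trials = 50, 20000
avg = sum(hitting_time(S, rng) for _ in range(trials)) / trials
# avg should be close to H_50 = 1 + 1/2 + ... + 1/50
```

With $S = 50$ and 20{,}000 trials the sample mean lands within a few hundredths of $H_{50} \approx 4.5$.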
In the higher-dimensional cases, the time to reach $(0, \ldots, 0)$ is just the {\it maximum} of the number of cycles of $N$ permutations chosen uniformly at random; this should still have expectation $\log S$ plus a constant depending on $N$.
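The higher-dimensional claim can be sketched the same way (again my own code, with invented names and parameters): the expected maximum over $N$ independent copies grows with $N$ but stays within a bounded amount of $H_S$:

```python
import random

def hitting_time(S, rng):
    # steps for x -> Uniform{0, ..., x-1} to reach 0, starting from S
    x, steps = S, 0
    while x > 0:
        x = rng.randrange(x)
        steps += 1
    return steps

def mean_max_hitting(S, N, trials, rng):
    # Monte Carlo estimate of E[max of N independent hitting times]
    return sum(max(hitting_time(S, rng) for _ in range(N))
               for _ in range(trials)) / trials

rng = random.Random(0)
S, trials = 50, 20000
m1 = mean_max_hitting(S, 1, trials, rng)
m2 = mean_max_hitting(S, 2, trials, rng)
m4 = mean_max_hitting(S, 4, trials, rng)
# m1 < m2 < m4, but all remain H_S plus an O(1) amount
```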
The problem is equivalent to choosing $n-1$ points at random on the unit interval and considering the length of the longest resulting subinterval. Given that, the distribution of the maximum and its expected value are in my answer to this recent math.SE question on Average Length of the Longest Segment. It's closely related to the one given in the comments above by Shai Covo. I'll reproduce the answer here (which is given in terms of making $n-1$ cuts randomly chosen on a rope of unit length).
If $X_1, X_2, \ldots, X_{n-1}$ denote the positions on the rope where the cuts are made, let $V_i = X_i - X_{i-1}$, where $X_0 = 0$ and $X_n = 1$. So the $V_i$'s are the lengths of the resulting pieces of rope.
The key idea is that the probability that any particular $k$ of the $V_i$'s simultaneously have lengths longer than $c_1, c_2, \ldots, c_k$, respectively (where $\sum_{i=1}^k c_i \leq 1$), is $$(1-c_1-c_2-\ldots-c_k)^{n-1}.$$ This is proved formally in David and Nagaraja's Order Statistics, p. 135. Intuitively, the idea is that in order to have pieces of size at least $c_1, c_2, \ldots, c_k$, all $n-1$ of the cuts have to occur in intervals of the rope of total length $1 - c_1 - c_2 - \ldots - c_k$. For example, $P(V_1 > c_1)$ is the probability that all $n-1$ cuts occur in the interval $(c_1, 1]$, which, since the cuts are randomly distributed in $[0,1]$, is $(1-c_1)^{n-1}$.
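As a sanity check on the key idea (my own sketch, not part of the proof), one can simulate $n-1$ uniform cuts and estimate $P(V_1 > c_1,\ V_2 > c_2)$; it should match $(1 - c_1 - c_2)^{n-1}$:

```python
import random

def piece_lengths(n, rng):
    """Lengths V_1, ..., V_n after n-1 uniform cuts of a unit rope."""
    cuts = sorted(rng.random() for _ in range(n - 1))
    pts = [0.0] + cuts + [1.0]
    return [b - a for a, b in zip(pts, pts[1:])]

rng = random.Random(1)
n, trials = 5, 200000
c1, c2 = 0.2, 0.1
hits = 0
for _ in range(trials):
    v = piece_lengths(n, rng)
    if v[0] > c1 and v[1] > c2:
        hits += 1
est = hits / trials
theory = (1 - c1 - c2) ** (n - 1)  # (0.7)^4 = 0.2401
```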
If $V_{(n)}$ denotes the largest piece of rope, then
$$P(V_{(n)} > x) = P(V_1 > x \text{ or } V_2 > x \text{ or } \cdots \text{ or } V_n > x).$$ This calls for the principle of inclusion/exclusion. Thus we have, using the "key idea" above,
$$P(V_{(n)} > x) = n(1-x)^{n-1} - \binom{n}{2} (1 - 2x)^{n-1} + \cdots $$
$$+ (-1)^{k-1} \binom{n}{k} (1 - kx)^{n-1} + \cdots,$$
where the sum includes only the terms with $kx < 1$.
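The truncated alternating sum is straightforward to code (a sketch of mine; the function names are not from the answer), and it agrees with a direct simulation of the longest piece:

```python
from math import comb
import random

def p_longest_exceeds(n, x):
    """P(V_(n) > x) via inclusion-exclusion; only terms with k*x < 1 contribute."""
    return sum((-1) ** (k - 1) * comb(n, k) * (1 - k * x) ** (n - 1)
               for k in range(1, n + 1) if k * x < 1)

def longest_piece(n, rng):
    """Longest of the n pieces after n-1 uniform cuts of a unit rope."""
    cuts = sorted(rng.random() for _ in range(n - 1))
    pts = [0.0] + cuts + [1.0]
    return max(b - a for a, b in zip(pts, pts[1:]))

rng = random.Random(3)
n, x, trials = 4, 0.4, 200000
est = sum(longest_piece(n, rng) > x for _ in range(trials)) / trials
theory = p_longest_exceeds(n, x)  # 4*(0.6)^3 - 6*(0.2)^3 = 0.816
```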
Therefore,
$$E[V_{(n)}] = \int_0^{\infty} P(V_{(n)} > x) dx = \sum_{k=1}^n \binom{n}{k} (-1)^{k-1} \int_0^{1/k} (1 - kx)^{n-1} dx $$ $$= \sum_{k=1}^n \binom{n}{k} (-1)^{k-1} \frac{1}{nk} = \frac{1}{n} \sum_{k=1}^n \frac{\binom{n}{k}}{k} (-1)^{k-1} = \frac{H_n}{n},$$
where the last step applies the binomial sum identity $\sum_{k=1}^n \binom{n}{k} \frac{(-1)^{k-1}}{k} = H_n$.
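That identity, and hence $E[V_{(n)}] = H_n/n$, can be verified exactly in rational arithmetic; a quick sketch (function names are mine):

```python
from fractions import Fraction
from math import comb

def expected_longest(n):
    """E[V_(n)] = (1/n) * sum_{k=1}^n C(n,k) (-1)^(k-1) / k, computed exactly."""
    return Fraction(1, n) * sum(Fraction(comb(n, k) * (-1) ** (k - 1), k)
                                for k in range(1, n + 1))

def harmonic(n):
    """H_n as an exact rational."""
    return sum(Fraction(1, i) for i in range(1, n + 1))

# e.g. expected_longest(3) == Fraction(11, 18) == H_3 / 3
```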
For much more on this problem, see Section 6.4 ("Random Division of an Interval") in David and Nagaraja's Order Statistics, pages 133-137, and the corresponding exercises on pages 153-155.
As a side note, the distribution of $V_{(n)}$ was apparently first obtained by Ronald Fisher in "Tests of Significance in Harmonic Analysis," Proceedings of the Royal Society of London, Series A, 1929, pp. 54-59.
Define $S_i$, $i=1,\ldots,2n$ to be the number of $1$s in the interval $[0,i]$, minus $i/2$. Then, with your notation, $Y_i:=X_{i+1}-n/2=S_{n+i}-S_i$. You are interested in $M^*_n:=\max_{i=1}^n |Y_i|/\sqrt{n}$. Since $\{S_{[nt]}\}/\sqrt{n}$, $t\geq 0$, converges to one half times a Brownian motion $\{W_t\}$, it follows that $M_n^*$ converges to the random variable $X^*=0.5 \max_{t\in [0,1]} |W_{t+1}-W_t|$. Since this random variable is of order $1$, the expectation you are after is of order $\sqrt{n}$.
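A simulation sketch (all names and parameter choices are mine) of $M_n^*$ for a couple of values of $n$: the normalized averages stay of order $1$ as $n$ grows, consistent with $E[\max_i |Y_i|] = \Theta(\sqrt{n})$.

```python
import random

def m_star(n, rng):
    """M_n^* = max_{i=1..n} |S_{n+i} - S_i| / sqrt(n) for a random 0/1 string of length 2n."""
    bits = [rng.randrange(2) for _ in range(2 * n)]
    # S[i] = (# of 1s among the first i bits) - i/2
    S = [0.0]
    for b in bits:
        S.append(S[-1] + b - 0.5)
    return max(abs(S[n + i] - S[i]) for i in range(1, n + 1)) / n ** 0.5

rng = random.Random(2)
trials = 2000
m100 = sum(m_star(100, rng) for _ in range(trials)) / trials
m400 = sum(m_star(400, rng) for _ in range(trials)) / trials
# m100 and m400 are both order-1 constants
```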