First, note that $E_1 = E(\tau_{B_L^x \cup \{o\}})$, i.e., it is just the expected hitting time.
I expect this can depend a lot on the graph. Take, for example, just $\mathbb Z$. Recall that the expected return time to the origin (for a standard symmetric SRW) is infinite. On the other hand, the expected exit time of $[-L,L]$ is order $L^2$. Given that the walk exits $[-L,L]$ before returning to $0$, it hits precisely $L$ distinct vertices (namely $1, \dots, L$ or $-1, \dots, -L$).
Now, a slight caveat: I've said "expected return time to the origin is $\infty$" and "expected exit time of $[-L,L]$ is order $L^2$". These are both correct statements, but they don't, a priori, imply that the expected exit time of $[-L,L]$, given that the origin is not returned to, is order $L^2$. However, this should be pretty easy to prove: first walk from $0$ to $L/2$ directly (this is the worst that conditioning on not returning to $0$ can do for the first $L/2$ steps); now hitting $0$ or $L$ is the same as exiting the interval $[L/2 - L/2, L/2 + L/2]$, which takes order $(L/2)^2$, i.e., order $L^2$, steps. I'm sure one can make this rigorous.
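As a quick sanity check (this simulation is my addition, not part of the original argument), one can estimate the expected exit time by Monte Carlo; for the SRW started at $0$, the expected time to hit $\pm L$ is exactly $L^2$:

```python
import random

def exit_time(L, rng):
    """Number of steps for a symmetric SRW started at 0 to hit -L or L."""
    x, t = 0, 0
    while -L < x < L:
        x += 1 if rng.random() < 0.5 else -1
        t += 1
    return t

rng = random.Random(0)   # fixed seed, so the run is reproducible
L, trials = 20, 2000
est = sum(exit_time(L, rng) for _ in range(trials)) / trials
print(f"estimated exit time {est:.1f}; exact value is L^2 = {L * L}")
```

The conditioned version could be checked the same way, at the cost of discarding the runs that return to $0$ before exiting (by gambler's ruin, all but a $1/L$ fraction of them).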
Using the fact that an SRW on $\mathbb Z^2$ is just a pair of independent SRWs on $\mathbb Z$ (in continuous time), one can likely extend this result to $\mathbb Z^2$ without too much change or additional ideas.
One approach - I don't know if it's the intended one - is to replace the simple random walk by another irreducible and recurrent Markov chain. For this problem, let's take the state space to be $\{0,1,2,\dots\}$, where each state in $\{1,2,\dots\}$ acts like the simple random walk on $\mathbb Z$ (moving to either neighbour with probability $\frac12$), but state $0$ transitions to state $k$ with probability $1$.
We can check that the following measure is stationary for this random walk:
$$\mu(x) = \begin{cases}
1 & x=0, \\
2x & 1 \le x \le k, \\
2k & x > k.
\end{cases}$$
To find this, start with $\mu(0)=1$. Stationarity at $0$ says $\mu(0) = \frac12\mu(1)$, so $\mu(1)=2$. Stationarity at $1$ says $\mu(1) = \frac12 \mu(2)$, so $\mu(2) = 4$. Stationarity at $2$ says $\mu(2) = \frac12\mu(1) + \frac12\mu(3)$, so $\mu(3) = 6$ (these last two use $k > 2$, so that state $0$ feeds none of these small states). From there, we can induct, or just guess the formula and verify it -- noting that stationarity at $k$ itself picks up the extra term $\mu(0)$, since $0$ always jumps to $k$.
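The verification can be mechanized. A short exact-arithmetic sketch of the balance check (my addition; the choice $k=4$ is arbitrary), using the transition kernel described above:

```python
from fractions import Fraction

k = 4  # an arbitrary choice of the jump target for this check

def mu(x):
    """The candidate stationary measure: 1 at 0, 2x for 1 <= x <= k, then flat 2k."""
    return Fraction(1) if x == 0 else Fraction(2 * min(x, k))

def inflow(j):
    """Total mass sum_i mu(i) P(i, j) flowing into state j in one step."""
    total = Fraction(0)
    if j == k:            # state 0 jumps to k with probability 1
        total += mu(0)
    for i in (j - 1, j + 1):
        if i >= 1:        # states i >= 1 move to i - 1 or i + 1 with probability 1/2
            total += mu(i) / 2
    return total

# stationarity mu P = mu is a per-state condition; beyond state k + 1 the
# balance equation repeats verbatim, so checking an initial segment suffices
assert all(inflow(j) == mu(j) for j in range(50))
print("mu P = mu holds at states 0..49")
```

Using `Fraction` keeps the check exact, so a passing run is a genuine verification of the balance equations at those states rather than a floating-point approximation.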
On the other hand, consider the Theorem 5.5.7 measure $\mu_0$ for this random walk. It has $\mu_0(0) = 1$, and $\mu_0(x)$ is the expected number of visits to $x$, starting from $0$, before returning to $0$. By Theorem 5.5.9, since $\mu_0(0) = \mu(0)=1$, we get $\mu_0(k) = \mu(k) = 2k$.
Because the first step of this random walk from $0$ is always to $k$, $\mu_0(k)=2k$ is also the expected number of visits to $k$, starting from $k$, before hitting $0$.
Because this random walk agrees with the simple random walk on $\mathbb Z$ as long as it stays in the states $\{1,2,\dots\}$, $\mu_0(k)=2k$ is also the expected number of visits to $k$, starting from $k$, before hitting $0$ for the simple random walk on $\mathbb Z$.
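As a sanity check (my addition; the parameters $k=5$, $M=200$ are arbitrary), one can estimate this expected visit count by simulation. Since the hitting time of $0$ has infinite mean, I add an absorbing barrier at a large $M$ to keep every run finite; by gambler's ruin this changes the exact answer from $2k$ to $2k(M-k)/M$ (here $9.75$ rather than $10$):

```python
import random

def visits_to_k(k, M, rng):
    """Visits to k (including time 0) by an SRW started at k, stopped on
    hitting 0; the absorbing barrier at M just keeps every run finite."""
    x, visits = k, 1
    while 0 < x < M:
        x += 1 if rng.random() < 0.5 else -1
        if x == k:
            visits += 1
    return visits

rng = random.Random(1)   # fixed seed, so the run is reproducible
k, M, trials = 5, 200, 3000
est = sum(visits_to_k(k, M, rng) for _ in range(trials)) / trials
print(f"mean visits {est:.2f}; prediction 2k = {2 * k}")
```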
Best Answer
WLOG, assume $k>0$.
When $k = 1$: every time the walk visits $1$, it steps to $0$ with probability $\frac12$ (which ends the count) and to $2$ with probability $\frac12$, from which it returns to $1$ almost surely by recurrence. The number of visits to $1$ is therefore geometric, and the expectation is $1 + \frac{1}{2} + \frac{1}{2^2} + \cdots = 2$.

For general $k$, the same geometric structure applies. From $k$, the walk hits $0$ before returning to $k$ precisely when it first steps down to $k-1$ (probability $\frac12$) and then hits $0$ before $k$, which by gambler's ruin has probability $\frac1k$. So each visit to $k$ is the last one with probability $\frac{1}{2k}$, the number of visits is geometric with that success probability, and the expectation is $2k$, matching $\mu_0(k) = 2k$.
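The answer can also be cross-checked in exact arithmetic: the gambler's ruin probabilities are characterized by harmonicity plus boundary values, and the visit count is geometric. A short sketch (my addition; the level $k=7$ is an arbitrary choice):

```python
from fractions import Fraction

k = 7  # an arbitrary level for the check

# Gambler's ruin on {0, ..., k}: h[j] = 1 - j/k should be the probability,
# from j, of hitting 0 before k.  It has the right boundary values and is
# harmonic at interior points, which characterizes the hitting probability.
h = [1 - Fraction(j, k) for j in range(k + 1)]
assert h[0] == 1 and h[k] == 0
assert all(h[j] == (h[j - 1] + h[j + 1]) / 2 for j in range(1, k))

# a visit to k is the last one before hitting 0 iff the walk steps down
# (probability 1/2) and then, from k - 1, hits 0 before k (probability h[k-1])
p_escape = Fraction(1, 2) * h[k - 1]
assert p_escape == Fraction(1, 2 * k)

# a geometric number of visits with success probability p has mean 1/p
expected_visits = 1 / p_escape
print(expected_visits)  # prints 14, i.e. 2k
```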