This is a classic stopping-time problem: you can define a martingale on the process and use the optional sampling theorem (OST) to calculate quantities such as the expectation of the process at step $n$, the expected number of steps it takes to reach $0$, and so on.
To give you an idea, write the process as $X_{n+1}=X_n+V_n$, where $X_n$ is the position on the real line at time $n$ and, of course, $X_0=A$. Thus $X_n$ depends on the previous position and a random increment $V_n\in\{b,-c\}$, i.i.d. with $P(V_n=b)=p$ and $P(V_n=-c)=1-p$. It is then a matter of finding a suitable martingale and showing that $T=\inf\{n\in\mathbb{N}:X_n=0\}$ satisfies the hypotheses of the OST. This can be tricky, because you first have to check that $0$ is actually reachable from the initial position $A$ by combining steps of $+b$ and $-c$. For example, if $A=1$, $b=2$ and $c=2$, every position stays odd, so you will never reach $0$. If you cannot show that $T$ is bounded, or at least that it has finite expectation (or some other condition that makes the dominated/monotone convergence theorem work), you may not be able to use the OST to easily calculate, for example, the expected number of steps needed to reach $0$.
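As a quick sanity check on this setup, here is a minimal simulation sketch; the parameter values $A=2$, $b=c=1$, $p=0.4$ are my own illustrative choices. With downward drift the walk hits $0$ almost surely, and the OST (via Wald's identity) gives $E[T]=A/\big((1-p)c-pb\big)$.

```python
import random

def hitting_time(A, b, c, p, max_steps=10_000):
    """Simulate X_{n+1} = X_n + V_n with V_n = b w.p. p and V_n = -c w.p. 1-p,
    starting from X_0 = A. Return the first n with X_n = 0, or None if 0 is
    not hit within max_steps (T can be infinite for unreachable targets)."""
    x, n = A, 0
    while x != 0 and n < max_steps:
        x += b if random.random() < p else -c
        n += 1
    return n if x == 0 else None

# Illustrative parameters: A=2, b=c=1, p=0.4, a downward-drifting integer walk.
# The OST/Wald identity predicts E[T] = A / ((1-p)c - pb) = 2 / 0.2 = 10.
random.seed(0)
times = [hitting_time(2, 1, 1, 0.4) for _ in range(20_000)]
hits = [t for t in times if t is not None]
est = sum(hits) / len(hits)
print(est)  # close to 10
```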
I hope this helps.
Let $L_d = \{1, ..., d\}^2$ be the domain of the random walk. I assume that when OP says
> a 2-D random walk that, at any point, has an equal probability of going to any of the adjacent points
they mean that at the boundaries, any "accessible" point is selected with equal probability. Thus, at the point $x = (1, 3)$ and $d = 5$, the points $(1, 2), (1, 4), (2, 3)$ exhaust all the next possible states, each with probability 1/3.
This Markov chain is irreducible on a finite state space, hence it has a unique stationary distribution $\pi$. (It is not aperiodic: the grid is bipartite, so the chain has period $2$; the long-run frequency of visits still converges to $\pi$.) In general, the transition kernel of a finite-state Markov chain can be encoded in a matrix $P$ called the transition matrix. The stationary distribution is then the row vector satisfying $\pi P = \pi$. In this case, $P$ would be a $d^2\times d^2$ matrix. You can read more about stationary distributions of Markov chains anywhere you like with Google.
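To make this concrete, here is a sketch (in Python/NumPy, with names of my own choosing) that builds $P$ for this walk. A useful fact: for a random walk that picks an accessible neighbour uniformly at random, $\pi$ is proportional to vertex degree, which we can verify directly against $\pi P = \pi$.

```python
import numpy as np

def transition_matrix(d):
    """Transition matrix of the nearest-neighbour walk on the d x d grid:
    from each point, move to one of its accessible neighbours with equal
    probability. Also return the degree (neighbour count) of each state."""
    P = np.zeros((d * d, d * d))
    deg = np.zeros(d * d)
    for i in range(d):
        for j in range(d):
            nbrs = [(i + a, j + b) for a, b in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if 0 <= i + a < d and 0 <= j + b < d]
            deg[i * d + j] = len(nbrs)
            for a, b in nbrs:
                P[i * d + j, a * d + b] = 1 / len(nbrs)
    return P, deg

d = 5
P, deg = transition_matrix(d)
pi = deg / deg.sum()            # random walk on a graph: pi(x) proportional to degree(x)
print(np.allclose(pi @ P, pi))  # True: pi is stationary
print(pi.reshape(d, d))         # corners 0.025, edges 0.0375, interior 0.05
```

The degree-proportional form already shows the distribution cannot be uniform: corners have 2 neighbours, edges 3, interior points 4.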
Owing to the asymmetry of the proposal distribution of the Markov chain at the boundary, the stationary distribution will not be uniform. You can see this by running a longer simulation; I ran $10^6$ steps. Below is a heat map of the probabilities.
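The simulation itself can be sketched as follows (a minimal version with my own function names; it prints visit frequencies rather than drawing the heat map):

```python
import random
from collections import Counter

def visit_frequencies(d, steps, seed=0):
    """Estimate long-run visit frequencies of the grid walk by simulating
    `steps` transitions from the centre of the d x d grid."""
    rng = random.Random(seed)
    i = j = d // 2
    counts = Counter()
    for _ in range(steps):
        nbrs = [(i + a, j + b) for a, b in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= i + a < d and 0 <= j + b < d]
        i, j = rng.choice(nbrs)
        counts[(i, j)] += 1
    return {s: c / steps for s, c in counts.items()}

freq = visit_frequencies(d=5, steps=10**6)
# The corner is visited noticeably less often than the centre.
print(freq[(0, 0)], freq[(2, 2)])
```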
Finally, you ask "why" would this be so. If you started from a random location on the grid and ran 1000 steps of the random walk, do you think you could identify with high confidence where you started? No. Each state that's not on the boundary eventually experiences similar "inflow" and "outflow", regardless of where the chain started, so the eventual frequency of visits should be equal for all such states.
I'm assuming those moves have equal probability. Let me write the $n$th step as $s_n$, so $x_n = \sum_{j=1}^n s_j$. Then $E[x_n] = 0$ and $E[\|x_n\|^2] = n$. If $x_n = (X_n, Y_n)$, then $X_n$ and $Y_n$ both have mean $0$ and their covariance matrix is $\pmatrix{n/2 & 0\cr 0 & n/2\cr}$. You could try Olkin and Pratt's multivariate version of Chebyshev's inequality (see the Wikipedia article on Chebyshev's inequality), but for your purposes it may be enough to use the more elementary $$ P(|X_n| \le k\ \text{and}\ |Y_n| \le k) \ge 1 - P(|X_n| > k) - P(|Y_n| > k) \ge 1 - \frac{n}{2k^2} - \frac{n}{2k^2} = 1 - \frac{n}{k^2}$$
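A quick Monte Carlo sketch of this bound, under my reading of the setup that the four unit moves $(\pm1,0),(0,\pm1)$ each occur with probability $1/4$ (the values of $n$, $k$ and the trial count are illustrative):

```python
import random

def walk(n, rng):
    """One 2-D walk of n steps, each uniformly one of (+-1,0), (0,+-1)."""
    x = y = 0
    for _ in range(n):
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x += dx
        y += dy
    return x, y

rng = random.Random(1)
n, k, trials = 100, 20, 5_000
hits = 0
for _ in range(trials):
    x, y = walk(n, rng)
    if abs(x) <= k and abs(y) <= k:
        hits += 1

# Empirical P(|X_n| <= k and |Y_n| <= k) versus the Chebyshev-style bound.
print(hits / trials, 1 - n / k**2)
```

The empirical probability comfortably exceeds the bound $1 - n/k^2$, as expected: Chebyshev is loose for sums of this many steps, which are close to Gaussian.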