The correspondence between the two cases comes from considering the generator equation:
$$(Lu)(x)=-1 \quad x \not \in A \\
u(x)=0 \quad x \in A$$
where $A$ is the set you're talking about hitting and $u$ is the expected time to hit $A$ started at $x$. In the discrete time case, $L=P-I$ where $P$ is the transition probability matrix. In the continuous time case, $L$ is the generator matrix, which is really the basic object for describing a continuous time discrete space Markov chain. If you are not familiar, for $i \neq j$, $L_{ij}$ is the rate of transitions from $i$ to $j$, and $L_{ii}=-\sum_{j \neq i} L_{ij}$.
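To make the generator equation concrete, here is a minimal sketch in the discrete-time case, on a toy chain of my own choosing (not from the question): simple symmetric random walk on $\{0,\dots,4\}$ with $A=\{0,4\}$. We solve $(Lu)(x)=-1$ for $x \notin A$ with $L = P - I$, keeping $u = 0$ on $A$.

```python
import numpy as np

# Toy example (my own choice): symmetric random walk on {0,...,4},
# hitting set A = {0, 4}. Expected hitting time u solves (P - I)u = -1
# off A, with u = 0 on A.
n = 5
P = np.zeros((n, n))
for i in range(1, n - 1):
    P[i, i - 1] = P[i, i + 1] = 0.5
P[0, 0] = P[n - 1, n - 1] = 1.0        # states in A made absorbing

A = [0, n - 1]
free = [i for i in range(n) if i not in A]

L = P - np.eye(n)                      # discrete-time generator L = P - I
u = np.zeros(n)
# Solve (L u)(x) = -1 restricted to states outside A; u = 0 on A already.
u[free] = np.linalg.solve(L[np.ix_(free, free)], -np.ones(len(free)))
print(u)  # matches the known closed form u[i] = i * (4 - i)
```

The same code works for any finite chain once `P` and `A` are filled in; only the restriction of `L` to the non-hitting states enters the solve.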
Intuitively, the continuous version is the same as the discrete version, except that instead of conditioning on the state $1$ time step into the future, you condition on the state an arbitrarily small time $h$ into the future, using the identities
$$P[X_{t+h}=i \mid X_t=i]=1+L_{ii} h + o(h) \\
P[X_{t+h}=j \mid X_t=i]=L_{ij} h+o(h).$$
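A quick numerical check of these short-time identities, on a two-state chain (rates chosen arbitrarily for illustration). For rates $a$ ($0 \to 1$) and $b$ ($1 \to 0$) the transition probability has the exact closed form $P[X_h = 0 \mid X_0 = 0] = \frac{b}{a+b} + \frac{a}{a+b}e^{-(a+b)h}$, which we compare against $1 + L_{00}h$:

```python
import numpy as np

# Two-state chain with rates a (0 -> 1) and b (1 -> 0); generator L.
# Exact: P[X_h = 0 | X_0 = 0] = b/(a+b) + a/(a+b) * exp(-(a+b) h).
a, b = 2.0, 1.0
L = np.array([[-a, a],
              [b, -b]])
h = 1e-4
p00 = b / (a + b) + a / (a + b) * np.exp(-(a + b) * h)
print(p00)              # exact short-time transition probability
print(1 + L[0, 0] * h)  # the approximation 1 + L_ii h, correct up to o(h)
```

The two printed values agree to within $O(h^2)$, which is exactly the $o(h)$ error in the identities above.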
As mentioned in the edit, this can be represented as a Markov chain - in particular, an absorbing discrete-time Markov chain on a finite state space.
For much of the answer below, my reference is Grinstead and Snell's *Introduction to Probability*, currently available at https://math.dartmouth.edu/~prob/prob/prob.pdf
For such a Markov chain, the states which are not absorbing are called transient. If there are $t$ transient states (for the posted problem, $t=N^2 - m$) and $r$ absorbing states (for the posted problem, $r=m$), it is common to order the states so that the transient states are first, so that the probability transition matrix is in block form as:
$$
P = \begin{bmatrix} Q & R\\ 0 & I_r \end{bmatrix}
$$
Here $Q$ is $t \times t$, $R$ is $t \times r$, $0$ is the $r \times t$ zero matrix, and $I_r$ is the $r \times r$ identity matrix.
It is known that the $i,j$ entry of the "fundamental" matrix $N = (I_t - Q)^{-1}$ is the expected number of times that the chain will visit transient state $j$ if it started in transient state $i$.
Therefore, the sum of the $i^\mathrm{th}$ row of $N$ is the expected number of steps before being absorbed, if starting in transient state $i$.
The probability of being absorbed into absorbing state $j$, $1\le j \le r$, if the chain started in transient state $i$, is the $i,j$ entry of $B = NR$.
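Here is a hedged sketch of these fundamental-matrix computations, again on a toy chain of my own choosing: gambler's ruin on $\{0,\dots,4\}$, with transient states $\{1,2,3\}$ ordered first and absorbing states $\{0,4\}$ last.

```python
import numpy as np

# Toy absorbing chain (my own example): symmetric walk on {0,...,4},
# transient states 1,2,3 and absorbing states 0,4.
Q = np.array([[0.0, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.0]])       # transient -> transient block
R = np.array([[0.5, 0.0],
              [0.0, 0.0],
              [0.0, 0.5]])            # transient -> absorbing block
N = np.linalg.inv(np.eye(3) - Q)      # fundamental matrix: expected visit counts
steps = N.sum(axis=1)                 # expected steps before absorption
B = N @ R                             # absorption probabilities
print(steps)  # [3, 4, 3]
print(B)      # rows [0.75 0.25], [0.5 0.5], [0.25 0.75]
```

Each row of `B` sums to one, since the chain is absorbed somewhere with probability $1$; this is a useful sanity check on `Q` and `R`.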
As for "the expected number of steps for each barrier": if the chain starts in a transient state $i$ from which there is a positive probability of never hitting a particular absorbing state $j$, then the "expected number of steps to hit that barrier" is infinite, since with positive probability the chain never hits that barrier at all.
But, if we are conditioning on the event that the chain is absorbed into barrier $j$, starting from state $i$, then to find the "expected number of steps before being absorbed", you proceed as above with two modifications:
First, you would consider a chain consisting only of the transient states with positive probability of being absorbed into absorbing state $j$, together with absorbing state $j$ itself (so $r=1$). Any transient state that cannot be absorbed into state $j$ (such as a transient state surrounded by other barriers) would not be part of this new chain; neither would the other absorbing states.
Second, you would use conditional transition probabilities in the probability transition matrix for this chain, so that the rows still sum to one. The correct conditional probabilities are given by the Doob $h$-transform: if $h(x)$ denotes the probability of being absorbed into state $j$ starting from $x$ (which can be read off from the column of $B = NR$ above), then the conditioned chain has transition probabilities $\tilde p_{xy} = p_{xy}\,\frac{h(y)}{h(x)}$. Note that it is not enough to simply delete the forbidden barrier and renormalize uniformly over the remaining neighbors: the conditioning biases the walk toward neighbors from which absorption into $j$ is more likely.
Then you would have
$$P' = \begin{bmatrix} Q' & R'\\ 0 & 1 \end{bmatrix}$$
You would solve for $N' = (I - Q')^{-1}$, and the expected number of steps before being absorbed, starting from transient state $i$, is the sum of the $i^\mathrm{th}$ row of $N'$.
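Assuming the conditional transition probabilities are taken to be the Doob $h$-transform $\tilde p_{xy} = p_{xy}\,h(y)/h(x)$ (which makes the conditioned process a Markov chain whose rows sum to one), here is a sketch on the same toy walk on $\{0,\dots,4\}$, conditioning on absorption at barrier $4$; there $h(x) = x/4$.

```python
import numpy as np

# Toy example (my own choice): symmetric walk on {0,...,4}, conditioned
# on absorption at barrier 4. h(x) = P(absorbed at 4 | start at x) = x/4,
# read off from the matrix B = NR of the unconditioned chain.
h = np.array([1.0, 2.0, 3.0, 4.0]) / 4.0   # h at states 1, 2, 3, 4
Q = np.array([[0.0, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.0]])            # transient part, states 1, 2, 3
R = np.array([0.0, 0.0, 0.5])              # one-step probabilities into barrier 4
# Doob h-transform: p'(x, y) = p(x, y) * h(y) / h(x).
Qp = Q * h[None, :3] / h[:3, None]
Rp = R * h[3] / h[:3]
print(Qp.sum(axis=1) + Rp)      # rows of the conditioned chain sum to 1
Np = np.linalg.inv(np.eye(3) - Qp)
print(Np.sum(axis=1))           # conditional expected steps: [5, 4, 7/3]
```

The conditioned walk is biased toward barrier $4$ (e.g. from state $1$ it moves to state $2$ with probability $1$), which is why these conditional expected times differ from a naive uniform renormalization over the remaining neighbors.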
Best Answer
Since it is not specified in the question, I will assume that the random walk is a discrete-time random walk that jumps to a uniformly chosen neighbouring site at each time step (with no "stay-back", i.e. the walker never remains in place).
Suppose the walker starts from a site $v \notin A$. At any given time step, if the walker is not on a vertex in $A$, then in the next step it jumps to some vertex in $A$ with probability $\frac{3}{n-1}$, or to a vertex outside $A$ with probability $\frac{n-4}{n-1}$. The computation of the hitting time can therefore be interpreted as a sequence of Bernoulli trials, so the mean time taken to hit $A$ is $\frac{n-1}{3}$. But you can easily go beyond the mean and calculate the *full distribution* of hitting times. Let the random variable $T$ denote the hitting time. Then the probability that $T$ takes the value $t$ is given by
$$\Pr(T = t) = \Big( \frac{n-4}{n-1}\Big)^{t-1}\cdot\frac{3}{n-1},$$ so the hitting time is geometrically distributed with success probability $\frac{3}{n-1}$.
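A quick numerical sanity check of this geometric distribution, assuming (as the probabilities above suggest) that each non-$A$ vertex has $n-1$ neighbours of which $3$ lie in $A$; the concrete value $n = 10$ is my own choice for illustration.

```python
import numpy as np

# Assumed setting: from any vertex outside A, each step lands in A with
# probability p = 3/(n-1), independently of the past, so T ~ Geometric(p).
n = 10
p = 3 / (n - 1)
t = np.arange(1, 2000)               # truncation; the tail beyond is negligible
pmf = (1 - p) ** (t - 1) * p
print(pmf.sum())                     # ~ 1: a valid probability distribution
print((t * pmf).sum())               # ~ (n - 1)/3 = 3, the stated mean
```

The second printed value confirms the mean hitting time $\frac{n-1}{3}$ claimed above.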
Even though I assumed in this answer that the random walk takes place in discrete time and that there are no "stay-back" events, I expect it will be straightforward to generalize this solution to those other settings.