There are two conditions here: $\pi P=\pi$ and $\sum_i \pi_i=1$. Together they give eight equations in seven unknowns, but the seven equations from $\pi P=\pi$ are always redundant (summing them gives a triviality, since the rows of $P$ sum to $1$). So omit one of the equations you get from $\pi P=\pi$ and solve the remaining linear system.
$$P=\left[\begin{array}{ccccccc}l&m&n&0&0&0&0\\ o&p&0&0&0&0&0\\ o&p&0&0&0&0&0\\ 1&0&0&0&0&0&0\\ 1&0&0&0&0&0&0\\ 1&0&0&0&0&0&0\\ 1&0&0&0&0&0&0\end{array}\right]$$
The columns of $P$ for states $3$ through $6$ are all zero, so their balance equations just read $0=\pi_3$, $0=\pi_4$, $0=\pi_5$, $0=\pi_6$. The remaining linear equations you need to solve are:
$$l\pi_0+o(\pi_1+\pi_2)+\pi_3+\pi_4+\pi_5+\pi_6=\pi_0\\
m\pi_0+p(\pi_1+\pi_2)=\pi_1\\
n\pi_0=\pi_2\\
\pi_0+\pi_1+\pi_2+\pi_3+\pi_4+\pi_5+\pi_6=1$$
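If you want to check the algebra numerically, here's a minimal numpy sketch; the values $l=0.2$, $m=0.5$, $n=0.3$, $o=0.4$, $p=0.6$ are hypothetical (any choice with $l+m+n=1$ and $o+p=1$ makes $P$ stochastic). It drops one redundant balance equation and appends the normalization:

```python
import numpy as np

# Hypothetical values for the symbolic entries: any choice with
# l + m + n = 1 and o + p = 1 makes P a stochastic matrix.
l, m, n, o, p = 0.2, 0.5, 0.3, 0.4, 0.6

P = np.array([
    [l, m, n, 0, 0, 0, 0],
    [o, p, 0, 0, 0, 0, 0],
    [o, p, 0, 0, 0, 0, 0],
    [1, 0, 0, 0, 0, 0, 0],
    [1, 0, 0, 0, 0, 0, 0],
    [1, 0, 0, 0, 0, 0, 0],
    [1, 0, 0, 0, 0, 0, 0],
])

# pi P = pi is equivalent to (P^T - I) pi^T = 0.  Drop one redundant
# balance equation and append the normalization sum(pi) = 1.
A = np.vstack([(P.T - np.eye(7))[:-1], np.ones(7)])
b = np.zeros(7)
b[-1] = 1

pi = np.linalg.solve(A, b)
print(pi)           # the stationary distribution
print(pi @ P - pi)  # ~ 0, sanity check
```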
Let $m_{ij}$ denote the expected number of steps in the Markov chain, starting from state $i$, to get to state $j$. (In particular, let $m_{ii}$ denote the expected number of steps to leave $i$ and return.)
These can also be computed with a system of equations: to find $m_{1j}, m_{2j}, \dots, m_{nj}$ all at once, the equations are
$$
m_{ij} = 1 + \sum_{k \ne j} P_{ik} m_{kj}
$$
for $i=1, \dots, n$. The idea is that to find $m_{ij}$, you take one step in the Markov chain, then average $m_{kj}$ over all states you could be in next, weighted by the probability of ending up in them.
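To make the first-step idea concrete, here is a minimal numpy sketch that just iterates these equations to a fixed point; the $3\times 3$ chain is hypothetical, and the matrix form derived below gives the direct solve you'd use in practice:

```python
import numpy as np

# Hypothetical 3-state chain; expected number of steps to hit state j = 2.
P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.5, 0.5, 0.0]])
j = 2

h = np.zeros(3)        # h[i] approximates m_ij; h[j] is pinned to 0
for _ in range(500):
    h = 1 + P @ h      # take one step, then average over landing states
    h[j] = 0.0         # from j itself the remaining hitting time is 0

m_jj = 1 + P[j] @ h    # expected return time to j (leave j, then come back)
print(h, m_jj)
```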
A theorem about Markov chains says that if the Markov chain is irreducible, then there is a unique stationary distribution $\vec{\pi}$, and $\pi_i$ (the stationary probability of state $i$) is $\frac1{m_{ii}}$. Therefore, if state $i$ can change its transition probabilities, it maximizes $\pi_i$ by minimizing $m_{ii}$.
The equation we wrote down earlier for $m_{ii}$ is
$$
m_{ii} = 1 + \sum_{j \ne i} P_{ij} m_{ji}
$$
so to minimize $m_{ii}$, it's best for state $i$ to set $P_{ij}=1$ for the state $j$ which has the least expected time $m_{ji}$.
Let $\vec{m}$ be the vector $(m_{ji})_{j \ne i}$ of all these expected times. In terms of your transition matrix $P$, let $P'$ be the matrix with row and column $i$ removed. Then the system of equations we wrote down earlier can be summarized as
$$
\vec{m} = \vec{1} + P'\vec{m}
$$
or $(I - P')\vec{m} = \vec{1}$.
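In code this is a single linear solve. Here is a minimal numpy sketch of the $(I-P')\vec m = \vec 1$ computation (the helper name `hitting_times_to` is mine, not standard):

```python
import numpy as np

def hitting_times_to(P, j):
    """Expected steps m_ij to reach state j from each i != j.

    Solves (I - P') m = 1, where P' is P with row and column j removed.
    Also returns the expected return time m_jj = 1 + sum_{k != j} P_jk m_kj.
    """
    n = P.shape[0]
    keep = [k for k in range(n) if k != j]
    P_minor = P[np.ix_(keep, keep)]                 # the matrix P'
    m = np.linalg.solve(np.eye(n - 1) - P_minor, np.ones(n - 1))
    m_jj = 1 + P[j, keep] @ m
    return m, m_jj
```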
(Compare this with the stationary equations $\vec{\pi}(I - P) = \vec{0}$. Among other differences, the stationary vector $\vec{\pi}$ is a row vector multiplying $I-P$ from the left, while $\vec{m}$ is a column vector multiplied on the left by $I-P'$.)
In your example, we have
$$
P' = \begin{bmatrix}0 & 0.5 & 0 \\ 0.7 & 0 & 0.3 \\ 0.5 & 0.1 & 0 \end{bmatrix} \qquad I-P' = \begin{bmatrix}1 & -0.5 & 0 \\ -0.7 & 1 & -0.3 \\ -0.5 & -0.1 & 1\end{bmatrix}
$$
and the vector $\vec{m}$ is $(I-P')^{-1}\vec{1} \approx (2.97248, 3.94495, 2.88073)$.
This tells us that Delta should send all its probability to the state Gamma, which returns to Delta in $m_{\gamma,\delta} \approx 2.88073$ steps. This will make the expected return time to Delta $m_{\delta,\delta} \approx 3.88073$, and the stationary probability of Delta will be $\pi_{\delta} \approx \frac1{3.88073} \approx 0.257683$.
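Plugging the example's numbers into that solve reproduces everything above (assuming the rows of $P'$ are ordered Alpha, Beta, Gamma):

```python
import numpy as np

# P with Delta's row and column removed; rows ordered Alpha, Beta, Gamma.
P_minor = np.array([[0.0, 0.5, 0.0],
                    [0.7, 0.0, 0.3],
                    [0.5, 0.1, 0.0]])

m = np.linalg.solve(np.eye(3) - P_minor, np.ones(3))
print(m)                # [2.97248 3.94495 2.88073]

best = m.argmin()       # index 2, i.e. Gamma
m_dd = 1 + m[best]      # 3.88073: return time if Delta always jumps to Gamma
print(m_dd, 1 / m_dd)   # stationary probability of Delta ~ 0.257683
```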
Best Answer
Let $X:=\{X_n:n=0,1,\ldots\}$ be the original Markov chain and $Y:=\{Y_n:n=0,1,\ldots\}$ the process obtained by adding the transition times. $Y$ is not a Markov chain, because at time $n$, the $(n+1)^{\mathrm{th}}$ transition time depends on the $(n+1)^{\mathrm{th}}$ state. So we cannot take the approach of finding a stationary distribution for a transition matrix.
Let $E$ be the state space of $X$; then $Y$ takes values in $E\times\mathbb N$. That is, for each $n$ we have $Y_n=(X_n, J_n)$, where $J_n$ is the $n^{\mathrm{th}}$ jump time. For example, if $X_0 = i_0$, $X_1 = i_1$, and $T_{i_0,i_1} = 2$, then $Y_0 = (i_0,2)$ and
$$ \mathbb P(Y_1 = (i_1, k)\mid X_1 = i_1) = \sum_{j\in E} P_{i_1,j}\cdot \mathsf 1_k(T_{i_1,j}). $$
$Y$ is a Markov renewal process, a generalization of Markov chains (and of renewal processes!). For each $n$, let
$$\nu_{i,n}=\left(\frac1{1 + \sum_{m=0}^n J_m}\right)\sum_{m=0}^nJ_m\cdot \mathsf 1_i(X_m)$$
be the fraction of time spent in state $i$ after $n$ jumps. We want to find $\nu_i :=\lim_{n\to\infty} \nu_{i,n}$.
I believe (though it remains to be proven) that $$ \nu_i = \pi_i \cdot \left(\frac{\sum_{j\in E} T_{i,j}}{\sum_{k\in E}\sum_{l\in E} T_{k,l}} \right). $$ That is, the limiting fraction of time $Y$ spends in state $i$ is the limiting fraction of time $X$ spends in state $i$, multiplied by the ratio of the total transition time out of state $i$ to the total transition time over all pairs of states.
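One way to probe this conjecture is to simulate: run the embedded chain, charge the jump time $T_{X_m,X_{m+1}}$ to state $X_m$ as in the definition of $\nu_{i,n}$, and compare the empirical fractions to the formula. A minimal sketch; the matrices $P$ and $T$ here are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-state example: embedded transition matrix P and
# integer transition times T (T[i, j] = duration of a jump i -> j).
P = np.array([[0.0, 0.5, 0.5],
              [0.3, 0.0, 0.7],
              [0.6, 0.4, 0.0]])
T = np.array([[1, 2, 3],
              [2, 1, 4],
              [3, 2, 1]])

time_in = np.zeros(3)
x = 0
for _ in range(100_000):
    y = rng.choice(3, p=P[x])   # next state of the embedded chain X
    time_in[x] += T[x, y]       # J_m = T[X_m, X_{m+1}] is charged to X_m
    x = y

print(time_in / time_in.sum())  # empirical long-run fractions nu_i
```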