First part:
Let $a,b,c$ represent $3$ consecutive days. Since we are in state $1$, we have the sequence $(a,b) = \text{(no rain, rain)}$. To move to state $0$, we need $(b,c) = \text{(rain, rain)}$, which gives the sequence $(a,b,c) = \text{(no rain, rain, rain)}$. According to the assumptions, starting from $(a,b)$, day $c$ is rainy with probability $p=0.5$, so $P_{10}=0.5$.
Also, $P_{11} = 0$. Why? With the same $3$ consecutive days $a,b,c$, we would need both $(a,b) = \text{(no rain, rain)}$ and $(b,c) = \text{(no rain, rain)}$; the first requires $b$ to be rainy while the second requires $b$ to be non-rainy, which can't happen.
Second part:
Notice that we start from state $0$, thus $\pi(0) = \begin{bmatrix} 1& 0 & 0 & 0\end{bmatrix}$, and we compute:
$$\pi(0)\cdot P^2 = \begin{bmatrix} 0.49 & 0.12 & 0.21 & 0.18\end{bmatrix}. $$
Thus, the probability that it rains on Thursday is $p=0.49+ 0.12 = 0.61$ (see part $3$ for why these are the two entries to add).
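As a quick numerical check, here is a sketch in NumPy. Only some entries of $P$ are quoted in this answer; the remaining entries (in particular the row for state $3$) are assumptions taken from the usual statement of this exercise.

```python
import numpy as np

# States: 0 = (rain, rain), 1 = (no rain, rain),
#         2 = (rain, no rain), 3 = (no rain, no rain).
# Entries not quoted in the text above are assumed from the
# standard statement of the exercise.
P = np.array([
    [0.7, 0.0, 0.3, 0.0],
    [0.5, 0.0, 0.5, 0.0],
    [0.0, 0.4, 0.0, 0.6],
    [0.0, 0.2, 0.0, 0.8],
])

pi0 = np.array([1.0, 0.0, 0.0, 0.0])     # start in state 0
pi2 = pi0 @ np.linalg.matrix_power(P, 2)  # distribution two steps later
print(pi2)                 # ≈ [0.49, 0.12, 0.21, 0.18]
print(pi2[0] + pi2[1])     # P(rain on Thursday) ≈ 0.61
```

States $0$ and $1$ are exactly the states whose second coordinate is "rain", which is why their probabilities are summed.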
Third part:
From part $2$ it is known that the initial state is state $0$, i.e. the chain starts from (rain, rain). Suppose we have the sequence $(a,b,c,d)$, with $a$ corresponding to the first day (Monday) and $d$ to the last day (Thursday). Then we want the following to hold:
$$(a,b,c,d) = \text{(rain, rain, x, rain)}.$$ Here $x$ can be either a rainy or a non-rainy day, so there are $2$ paths:
1: $(a,b,c,d) = \text{(rain, rain, rain, rain)}$
2: $(a,b,c,d) = \text{(rain, rain, no rain, rain)}$
Thus, $(c,d)$ is either (rain, rain), which indeed corresponds to state $0$, or (no rain, rain), which corresponds to state $1$.
In terms of states, the first $4$-tuple corresponds to the path $0\to 0\to 0$, so its probability is $p_{00}\cdot p_{00}= 0.7^2=0.49$, and the second $4$-tuple corresponds to the path $0\to 2\to 1$, so its probability is $p_{02}\cdot p_{21} = 0.3 \cdot 0.4 = 0.12$. Adding the two probabilities gives the answer to the second part.
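The two-path decomposition can be checked against the matrix-power computation (a sketch; the entries of $P$ not quoted in this answer are assumed from the usual statement of the exercise):

```python
import numpy as np

# Same state ordering as above; unquoted entries are assumptions.
P = np.array([
    [0.7, 0.0, 0.3, 0.0],
    [0.5, 0.0, 0.5, 0.0],
    [0.0, 0.4, 0.0, 0.6],
    [0.0, 0.2, 0.0, 0.8],
])

# The two paths from state 0 that end with rain on the last day:
path1 = P[0, 0] * P[0, 0]   # 0 -> 0 -> 0 : (rain, rain, rain, rain)
path2 = P[0, 2] * P[2, 1]   # 0 -> 2 -> 1 : (rain, rain, no rain, rain)
print(path1, path2)          # ≈ 0.49 and 0.12

# Same number as the sum of the two "rainy" entries of pi(0) P^2:
P2 = np.linalg.matrix_power(P, 2)
print(P2[0, 0] + P2[0, 1])   # ≈ 0.61
```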
Your idea to use a conditional probability derived from the computed absorption probabilities seems good at first glance: we only care about states 1 and 4, so we throw out the probabilities for 2 and 5. However, the system will never reach 1 or 4 if it enters 5, so you’ve effectively added the condition that the system never visit 5 on its way to the states of interest. That’s not a condition that’s understood in “starting from 6, 4 will be visited before 1.”
Mechanically, if you use the same method that you used to solve part 4, you end up with the same matrix $P_{T,T}$: neither state 2 nor state 5 is a transient state. If you do include them, you’ll find that $I-P_{T,T}$ is singular.
Drilling down a bit, these matrix manipulations compute the solution to a system of linear equations. Ian has a nice summary of them in this answer. We can compare the systems for questions 4 and 5 to see why the answer is the same for both.
For question 4, let $u(x)$ be the probability that, starting from state $x$, 4 is visited before any of $\{1,2,5\}$. Conditioning on the first step as usual, we get the system $$\begin{align} u(1) &= 0 \\
u(2) &= 0 \\
u(3) &= P_{3,3}\,u(3)+P_{3,4}\,u(4)+P_{3,5}u(5)+P_{3,6}\,u(6) = P_{3,3}\,u(3)+P_{3,6}\,u(6)+P_{3,4} \\
u(4) &= 1 \\
u(5) &= 0 \\
u(6) &= P_{6,1}\,u(1)+P_{6,3}\,u(3)+P_{6,6}\,u(6) = P_{6,3}\,u(3)+P_{6,6}\,u(6). \end{align}$$
Solving this system gives $u(6)\approx0.516$.
For question 5, we let $u(x)$ be the probability of visiting 4 before 1, starting at $x$. The system of equations is similar, but has fewer zeros to begin with: $$\begin{align} u(1) &= 0 \\
u(2) &= P_{2,2}\,u(2)+P_{2,5}\,u(5) \\
u(3) &= P_{3,3}\,u(3)+P_{3,4}\,u(4)+P_{3,5}\,u(5)+P_{3,6}\,u(6) = P_{3,3}\,u(3)+P_{3,5}\,u(5)+P_{3,6}\,u(6)+P_{3,4} \\
u(4) &= 1 \\
u(5) &= P_{5,2}\,u(2)+P_{5,5}\,u(5) \\
u(6) &= P_{6,1}\,u(1)+P_{6,3}\,u(3)+P_{6,6}\,u(6) = P_{6,3}\,u(3)+P_{6,6}\,u(6). \end{align}$$ If you solve this system, you end up with either $u(5)$ or $u(2)$ as a free variable. However, $\{2,5\}$ is a recurrent class that traps the system: once it hits 5 it never leaves this class, which means that $\Pr(X_k=4 \mid X_{<k}\in\{2,5\})=\Pr(X_k=4 \mid X_0\in\{2,5\})=0$, and so $u(5)=u(2)=0$, giving the same system of equations as before.
Your method happens to work for question 6 because keeping 2 as an absorbing state makes 5 transient, so it doesn't “bleed off” any probability as it does in the previous question. You can, of course, verify your computation by using the same method that you used for question 4.
Best Answer
The general recipe: the probability to hit a set $A$ before hitting a set $B$ starting at a point $x$ is a function $u(x)$ which satisfies the system of equations
$$(Lu)(x)=0, \quad x \not \in A \cup B \\ u(x)=1, \quad x \in A \\ u(x)=0, \quad x \in B$$
where $L=P-I$ and $P$ is the transition matrix (taken in the row-stochastic convention, which is the convention you are using). You can prove this using the total probability formula, by conditioning on the outcome of the first step. For instance, in your case, $A=\{ 5 \}$, $B=\{ 3 \}$, and the first equation is:
$$u(1)=u(2)/2+u(3)/2=u(2)/2+0=u(2)/2.$$
The $L$ here is usually called the generator of the chain. The solution $u$ is sometimes called the committor of the chain (with respect to $(A,B)$).
In discrete space this is just a system of $|S|$ linear equations in $|S|$ unknowns, where $S$ is the state space.
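As a sketch, the recipe can be implemented directly by assembling that linear system row by row. For a concrete instance I reuse the four-state weather chain from the first answer (its unquoted entries are assumptions from the usual statement of that exercise), with the illustrative choice $A=\{0\}$, $B=\{3\}$:

```python
import numpy as np

def committor(P, A, B):
    """Probability u(x) of hitting the set A before the set B:
    (Lu)(x) = 0 for x outside A ∪ B, u = 1 on A, u = 0 on B,
    where L = P - I is the generator (row-stochastic convention)."""
    n = P.shape[0]
    L = P - np.eye(n)
    M = np.zeros((n, n))
    rhs = np.zeros(n)
    for x in range(n):
        if x in A:
            M[x, x] = 1.0
            rhs[x] = 1.0          # boundary condition u(x) = 1 on A
        elif x in B:
            M[x, x] = 1.0         # boundary condition u(x) = 0 on B
        else:
            M[x] = L[x]           # interior equation (Lu)(x) = 0
    return np.linalg.solve(M, rhs)

# Illustrative chain (weather example; unquoted rows are assumed):
P = np.array([[0.7, 0.0, 0.3, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.4, 0.0, 0.6],
              [0.0, 0.2, 0.0, 0.8]])
u = committor(P, A={0}, B={3})
print(u)   # u[0] = 1, u[3] = 0; u[1] ≈ 0.625, u[2] ≈ 0.25
```

Note that this only makes sense when no recurrent class outside $A \cup B$ can trap the chain; otherwise $M$ becomes singular, which is exactly the issue with states $\{2,5\}$ in question 5 above.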