(a) I think your method fails because you can't make state $3$ absorbing: it has a path to state $2$ that prevents that.
If you want to use this kind of method with the fundamental matrix, you could split state $0$ into, say, $0$ and $4$, with state $3$ going to $4$ instead of $0$. Then the required probability is the $(1,4)$ entry of the matrix $FR$ based on the new transition probability matrix.
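If it helps, here is a sketch of that split-state computation in Python/NumPy. The transition probabilities are the ones implied by the first-step equations in part (a) below, so treat the specific numbers as my reading of the chain rather than as given.

```python
import numpy as np

# After the split, states 0 and 4 are absorbing and 1, 2, 3 are transient.
# Q: transient -> transient, R: transient -> absorbing (columns: state 0, state 4).
Q = np.array([[0,   1/2, 0  ],    # from 1: 1/2 -> 2 (and 1/2 -> 0)
              [1/3, 0,   1/3],    # from 2: 1/3 -> 1, 1/3 -> 3 (and 1/3 -> 0)
              [0,   1/3, 0  ]])   # from 3: 1/3 -> 2 (and 2/3 -> 4)
R = np.array([[1/2, 0  ],
              [1/3, 0  ],
              [0,   2/3]])

F = np.linalg.inv(np.eye(3) - Q)   # fundamental matrix F = (I - Q)^{-1}
B = F @ R                          # absorption probabilities

print(B[0, 1])                     # P(absorbed in 4 | start in 1) = 2/13 ≈ 0.1538
```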
However, a common way to solve this kind of problem uses first-step analysis (i.e. conditioning):
Let $p_i = P(\text{Exit from state $3$ starting from state $i$}),\; i=1,2,3$. So we want $p_1$. Conditioning on the next step:
\begin{align}
p_1 &= \dfrac{1}{2} p_2 \\
p_2 &= \dfrac{1}{3} p_1 + \dfrac{1}{3} p_3 \\
p_3 &= \dfrac{2}{3} + \dfrac{1}{3} p_2
\end{align}
Solving this system of equations gives:
$$p_1 = \dfrac{2}{13}.$$
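For instance, substituting the first and third equations into the second gives
$$p_2 = \dfrac{1}{3}\left(\dfrac{1}{2} p_2\right) + \dfrac{1}{3}\left(\dfrac{2}{3} + \dfrac{1}{3} p_2\right) = \dfrac{5}{18}\,p_2 + \dfrac{2}{9} \;\implies\; p_2 = \dfrac{4}{13}, \qquad p_1 = \dfrac{1}{2}\,p_2 = \dfrac{2}{13}.$$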
(b) I think there is some validity to that method, but it would need some justification, probably based on the reversibility property of random walks on a graph. An argument based only on the number of exits isn't sufficient, IMO.
An easier method is just to continue the above solution for (a). That system of equations also gives
$$p_2 = \dfrac{4}{13}, \qquad p_3 = \dfrac{10}{13}.$$
Now, there are $2$ paths into state $3$ and $1$ path each into states $1$ and $2$. So the probabilities of entry via these states are:
\begin{align}
P(\text{Entry via } 1) &= \dfrac{1}{4} \\
P(\text{Entry via } 2) &= \dfrac{1}{4} \\
P(\text{Entry via } 3) &= \dfrac{1}{2}.
\end{align}
So,
\begin{align}
P(\text{Exit via state $3$}) &= P(\text{Entry via } 1)p_1 + P(\text{Entry via } 2)p_2 + P(\text{Entry via } 3)p_3 \\
&= \dfrac{1}{4}\cdot \dfrac{2}{13} + \dfrac{1}{4}\cdot \dfrac{4}{13} + \dfrac{1}{2}\cdot \dfrac{10}{13} \\
&= \dfrac{1}{2}.
\end{align}
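As a sanity check, a quick simulation agrees. The adjacency list below is the multigraph I infer from the equations above (vertex $0$ joined to $1$ and $2$ by one edge each and to $3$ by two parallel edges, plus an edge between $1$ and $2$ and one between $2$ and $3$); treat that structure as my assumption rather than part of the original problem.

```python
import random

# Inferred multigraph: parallel edges are repeated entries in the adjacency list.
adj = {0: [1, 2, 3, 3],
       1: [0, 2],
       2: [0, 1, 3],
       3: [0, 0, 2]}

def exit_state():
    """Walk from 0 until returning to 0; report the last state visited before that."""
    prev, cur = 0, random.choice(adj[0])
    while cur != 0:
        prev, cur = cur, random.choice(adj[cur])
    return prev

trials = 100_000
hits = sum(exit_state() == 3 for _ in range(trials))
print(hits / trials)   # should be close to 1/2
```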
Best Answer
First of all, you will need to understand how to interpret these matrices. They are Boolean matrices where entry $M_{ij}=1$ if $(i,j)$ is in the relation and $0$ otherwise. As an example, the relation $R$ is \begin{align*} R=\{(0,3),(2,1),(3,2)\}. \end{align*}
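As a quick illustration, here is a sketch of building $M_R$ directly from the pairs (in Python/NumPy, assuming the underlying set is $\{0,1,2,3\}$):

```python
import numpy as np

n = 4                          # underlying set {0, 1, 2, 3} (assumption)
R = {(0, 3), (2, 1), (3, 2)}

M_R = np.zeros((n, n), dtype=int)
for i, j in R:
    M_R[i, j] = 1              # entry (i, j) is 1 exactly when (i, j) is in R

print(M_R)
```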
Question 1
For the inverse relation, try writing out the pairs contained in $R^{-1}$ and representing this in matrix form. How does this matrix relate to $M_R$?
The intersection of the relations is defined such that $(a,b)\in R\cap S$ if both $(a,b)\in R$ and $(a,b)\in S$. How does this translate to matrices? Consider the entry $(i,j)$ in $M_{R\cap S}$: When it is $1$, what must be true about the entries $(i,j)$ in $M_R$ and $M_S$?
As for the composition, it can be shown that it corresponds to the Boolean product of the matrices.
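If you want to check your answers numerically, here is a sketch in NumPy. The matrix $M_S$ below is a placeholder to be filled in from your own problem, and you should check which composition order ($S\circ R$ versus $R\circ S$) your textbook pairs with which product order.

```python
import numpy as np

# Placeholder matrices: substitute the M_R and M_S from your own problem.
M_R = np.array([[0, 0, 0, 1],
                [0, 0, 0, 0],
                [0, 1, 0, 0],
                [0, 0, 1, 0]])        # R = {(0,3), (2,1), (3,2)} from above
M_S = np.zeros((4, 4), dtype=int)     # fill in from S

M_R_inv = M_R.T                              # inverse relation: transpose
M_cap   = M_R & M_S                          # intersection: entrywise AND
M_comp  = (M_R @ M_S > 0).astype(int)        # Boolean product of the matrices
```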
Question 2
First, you have to find the matrix $M_{R\cup S}$. In the case of the intersection, the entries of both matrices $M_R$ and $M_S$ had to be $1$, since both relations should contain the pair. In the union, only one of the relations needs to contain the pair. How does this translate to the matrices?
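Continuing the NumPy sketch above (same placeholder matrices), the union is just an entrywise OR:

```python
M_cup = M_R | M_S    # union: entry (i, j) is 1 iff the pair is in R or in S (or both)
```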
For $n\times n$ matrices, Warshall's algorithm consists of $n$ steps. First, we fix $k=1$ and construct a matrix $W$ where $W_{ij}=M_{ij}\vee (M_{ik}\wedge M_{kj})$ (here $M$ denotes the matrix representation of the relation). Consider a small example:
\begin{align*} M=\left[\begin{array}{c c c} 1 & 0 & 0\\ 0 & 1 & 1\\ 1 & 0 & 0 \end{array}\right] \end{align*}
Fix $k=1$. We can transfer the first row and column immediately. Further, all $1$'s can be transferred. For the remaining entries, we check the corresponding entries in the $k$'th row and column -- here marked with a box. If these are both $1$, we write $1$, otherwise $0$. \begin{align*} W_1=\left[\begin{array}{ccc} 1 & 0 & 0\\ 0 & 1 & 1\\ 1 & ? & ? \end{array}\right]=\left[\begin{array}{ccc} 1 & \boxed{0} & 0\\ 0 & 1 & 1\\ \boxed{1} & \mathbf{0} & ? \end{array}\right]=\left[\begin{array}{ccc} 1 & 0 & \boxed{0}\\ 0 & 1 & 1\\ \boxed{1} & 0 & \mathbf{0} \end{array}\right]. \end{align*}
Next, fix $k=2$ and repeat the process on $W_1$. Again transfer the $k$'th row and column and any entries containing $1$. \begin{align*} W_2=\left[\begin{array}{ccc} 1 & 0 & ?\\ 0 & 1 & 1\\ 1 & 0 & ? \end{array}\right]=\left[\begin{array}{ccc} 1 & \boxed{0} & \mathbf{0}\\ 0 & 1 & \boxed{1}\\ 1 & 0 & ? \end{array}\right]=\left[\begin{array}{ccc} 1 & 0 & 0\\ 0 & 1 & \boxed{1}\\ 1 & \boxed{0} & \mathbf{0} \end{array}\right]. \end{align*}
Finally, fix $k=3$ to find the last entries. \begin{align*} W_3=\left[\begin{array}{ccc} 1 & ? & 0\\ ? & 1 & 1\\ 1 & 0 & 0 \end{array}\right]=\left[\begin{array}{ccc} 1 & \mathbf{0} & \boxed{0}\\ ? & 1 & 1\\ 1 & \boxed{0} & 0 \end{array}\right]=\left[\begin{array}{ccc} 1 & 0 & 0\\ \mathbf{1} & 1 & \boxed{1}\\ \boxed{1} & 0 & 0 \end{array}\right]. \end{align*}
This final matrix $W_3$ is the transitive closure.
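For completeness, here is a short sketch of the same $n$-step procedure in Python, run on the example matrix above; nothing beyond the update rule $W_{ij}\leftarrow W_{ij}\vee(W_{ik}\wedge W_{kj})$ is assumed.

```python
import numpy as np

def warshall(M):
    """Transitive closure of a 0/1 matrix via Warshall's algorithm."""
    W = M.copy()
    n = len(W)
    for k in range(n):                            # one pass per intermediate vertex k
        for i in range(n):
            for j in range(n):
                W[i, j] = W[i, j] | (W[i, k] & W[k, j])
    return W

M = np.array([[1, 0, 0],
              [0, 1, 1],
              [1, 0, 0]])
print(warshall(M))
# [[1 0 0]
#  [1 1 1]
#  [1 0 0]]   matches W_3 above
```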