In univariate interval estimation, the set of possible actions is the set of ordered pairs specifying the endpoints of the interval. Let an element of that set be represented by $(a, b)$ with $a \le b$.
Highest posterior density intervals
Let the posterior density be $f(\theta)$. The highest posterior density intervals correspond to the loss function that penalizes an interval that fails to contain the true value and also penalizes intervals in proportion to their length:
$L_{HPD}(\theta, (a, b); k) = I(\theta \notin [a, b]) + k(b - a), \quad 0 < k \le \max_{\theta} f(\theta)$,
where $I(\cdot)$ is the indicator function. This gives the expected posterior loss
$\tilde{L}_{HPD}((a, b); k) = 1 - \Pr(a \le \theta \le b|D) + k(b - a)$.
Setting $\frac{\partial}{\partial a}\tilde{L}_{HPD} = \frac{\partial}{\partial b}\tilde{L}_{HPD} = 0$, i.e. $f(a) - k = 0$ and $k - f(b) = 0$, yields the necessary condition for a local optimum in the interior of the parameter space: $f(a) = f(b) = k$, which is exactly the rule for HPD intervals, as expected.
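As a quick numerical check, here is a sketch that minimizes $\tilde{L}_{HPD}$ directly and verifies that $f(a) \approx f(b) \approx k$ at the optimum; the Beta(2, 5) posterior, the value $k = 0.5$, and the use of scipy's Nelder-Mead optimizer are my own illustrative choices, not part of the argument above.

```python
# Sketch: find the HPD interval by minimizing the expected posterior loss
# 1 - Pr(a <= theta <= b | D) + k*(b - a), assuming a Beta(2, 5) posterior.
import numpy as np
from scipy import stats, optimize

posterior = stats.beta(2, 5)   # illustrative posterior
k = 0.5                        # length penalty; here max_theta f(theta) ~ 2.46, so k is admissible

def expected_loss(endpoints):
    a, b = np.sort(endpoints)  # enforce a <= b
    return 1 - (posterior.cdf(b) - posterior.cdf(a)) + k * (b - a)

res = optimize.minimize(expected_loss, x0=[0.05, 0.6], method="Nelder-Mead")
a, b = np.sort(res.x)
print(f"interval: [{a:.3f}, {b:.3f}]")
print(f"f(a) = {posterior.pdf(a):.3f}, f(b) = {posterior.pdf(b):.3f}, k = {k}")
# At the minimizer, f(a) and f(b) should both be approximately equal to k.
```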
The form of $\tilde{L}_{HPD}((a, b); k)$ gives some insight into why HPD intervals are not invariant to a monotone increasing transformation $g(\theta)$ of the parameter. The $\theta$-space HPD interval transformed into $g(\theta)$ space is different from the $g(\theta)$-space HPD interval because the two intervals correspond to different loss functions: the $g(\theta)$-space HPD interval corresponds to a transformed length penalty $k(g(b) - g(a))$.
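To see the non-invariance concretely, the sketch below uses the equivalent characterization of an HPD interval as the shortest interval with a given posterior coverage (valid for a unimodal posterior) and compares the $\theta$-space interval, mapped through $g$, with the interval computed directly in $g(\theta)$ space. The Beta(2, 5) posterior, the 90% coverage level, and the choice $g(\theta) = \log\theta$ are illustrative assumptions.

```python
# Sketch: HPD intervals are not invariant to monotone transformations.
import numpy as np
from scipy import stats, optimize

posterior = stats.beta(2, 5)   # illustrative posterior
coverage = 0.90

def shortest_interval(length_of):
    """Slide the left tail probability alpha to minimize the interval 'length'."""
    res = optimize.minimize_scalar(
        lambda alpha: length_of(posterior.ppf(alpha), posterior.ppf(alpha + coverage)),
        bounds=(1e-6, 1 - coverage - 1e-6), method="bounded")
    return posterior.ppf(res.x), posterior.ppf(res.x + coverage)

# HPD in theta space: minimize b - a.
a, b = shortest_interval(lambda lo, hi: hi - lo)
# HPD in g(theta) = log(theta) space: minimize log(b) - log(a).
a_g, b_g = shortest_interval(lambda lo, hi: np.log(hi) - np.log(lo))

print("theta-space HPD, mapped to log scale:", np.log(a), np.log(b))
print("log-scale HPD:                       ", np.log(a_g), np.log(b_g))
# The two intervals differ because they minimize different length penalties.
```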
Quantile-based credible intervals
Consider point estimation with the loss function
$L_q(\theta, \hat{\theta};p) = (1-p)(\hat{\theta} - \theta)I(\theta < \hat{\theta}) + p(\theta - \hat{\theta})I(\theta \ge \hat{\theta}), \quad 0 \le p \le 1$.
The posterior expected loss is
$\tilde{L}_q(\hat{\theta};p)=(1-p)\Pr(\theta < \hat{\theta}|D)\,(\hat{\theta}-\text{E}(\theta|\theta < \hat{\theta}, D)) + p\,\Pr(\theta \ge \hat{\theta}|D)\,(\text{E}(\theta | \theta \ge \hat{\theta}, D)-\hat{\theta})$.
Setting $\frac{d}{d\hat{\theta}}\tilde{L}_q = (1-p)\Pr(\theta < \hat{\theta}|D) - p\,\Pr(\theta \ge \hat{\theta}|D) = 0$ yields the implicit equation
$\Pr(\theta < \hat{\theta}|D) = p$,
that is, the optimal $\hat{\theta}$ is the $(100p)$% quantile of the posterior distribution, as expected.
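As a sanity check, the sketch below minimizes a Monte Carlo estimate of $\tilde{L}_q$ and compares the minimizer with the posterior $p$-quantile; the Normal(1, 2²) posterior, $p = 0.75$, and the scipy-based optimization are my own illustrative choices.

```python
# Sketch: the minimizer of the expected check loss is the posterior p-quantile.
import numpy as np
from scipy import stats, optimize

posterior = stats.norm(loc=1.0, scale=2.0)            # illustrative posterior
p = 0.75
theta = posterior.rvs(size=200_000, random_state=0)   # posterior draws

def expected_check_loss(theta_hat):
    under = theta < theta_hat
    return np.mean((1 - p) * (theta_hat - theta) * under
                   + p * (theta - theta_hat) * ~under)

res = optimize.minimize_scalar(expected_check_loss, bounds=(-10, 10), method="bounded")
print("loss-minimizing estimate:", res.x)
print("posterior 75% quantile:  ", posterior.ppf(p))
```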
Thus to get quantile-based interval estimates, the loss function is
$L_{qCI}(\theta, (a,b); p_L, p_U) = L_q(\theta, a;p_L) + L_q(\theta, b;p_U)$.
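For example (the quantile levels here are my own illustrative choice), a central 95% credible interval corresponds to $p_L = 0.025$ and $p_U = 0.975$. Because the expected loss separates into a term involving only $a$ and a term involving only $b$,

$$\tilde{L}_{qCI}((a,b); p_L, p_U) = \tilde{L}_q(a; p_L) + \tilde{L}_q(b; p_U),$$

the two endpoints can be optimized independently, giving $\Pr(\theta < a|D) = 0.025$ and $\Pr(\theta < b|D) = 0.975$.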
Mutually exclusive events cannot occur together: if one of them occurs, none of the others can. Therefore, for mutually exclusive events $A_1, \dots, A_n$ with $n \in \mathbb{N} \setminus \{1\}$, the intersection is empty, $\bigcap_{i=1}^n A_i = \emptyset$, which implies $P\left[\bigcap_{i=1}^n A_i\right] = P[\emptyset] = 0$. In general, the probability of the union of two events is $P[B \cup C] = P[B] + P[C] - P[B \cap C]$. Hence, for mutually exclusive events, $P\left[\bigcup_{i=1}^n A_i\right] = \sum_{i=1}^n P[A_i]$. Knowing this, you can apply it to your tasks:
a) $P[A \cup B] = P[A] + P[B] = 0.3 + 0.5 = 0.8$
b) The occurrence of $A$ implies the non-occurrence of $B$, so the answer is $P[A] = 0.3$.
c) $P[A \cap B] = P[\emptyset] = 0$.
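If you want to convince yourself numerically, here is a small simulation sketch; assigning the leftover probability $0.2$ to "neither $A$ nor $B$" follows from the given numbers, and the use of numpy is my own choice.

```python
# Sketch: simulate mutually exclusive events A (prob 0.3) and B (prob 0.5);
# the remaining 0.2 is the probability that neither occurs.
import numpy as np

rng = np.random.default_rng(0)
outcome = rng.choice(["A", "B", "neither"], size=200_000, p=[0.3, 0.5, 0.2])

print(np.mean((outcome == "A") | (outcome == "B")))   # a) P[A or B]  ~ 0.8
print(np.mean(outcome == "A"))                        # b) A occurs (so B does not) ~ 0.3
print(np.mean((outcome == "A") & (outcome == "B")))   # c) P[A and B] = 0 by construction
```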
As was already suggested in the comments, the solutions in your textbook are not correct for this kind of task.
Moreover, in your calculation for a) you assumed that $A$ and $B$ are independent and interpreted "either" as "and". Note that in probability theory "or" corresponds to the union of events and "and" to the intersection. Therefore:
$P[A \text{ or } B] \ge P[A \text{ and } B] \iff P[A \cup B] \ge P[A \cap B]$.
Best Answer
Let's look at this problem in terms of utility. Let $U(x \vert \theta)$ be the utility of taking action $x \in \{A, B\}$ when the state of the world/hypothesis is $\theta \in \{H_1, H_2\}$.
Because there are two possible hypotheses and two possible actions, we can enumerate our utilities:
$$ U(A \vert H_1) = 1 $$
$$ U(A \vert H_2) = 0 $$
$$ U(B \vert H_1) = 2 $$
$$ U(B \vert H_2) = -1 $$
We are also given the posterior probability of one of the hypotheses, $P(H_1) = p$. Since the hypotheses are mutually exclusive (and exhaustive), $P(H_2) = 1 - P(H_1)$. Using this information, we can compute the expected utility for each action, which I will denote $E(U(x))$.
$$E(U(A)) = U(A\vert H_1)P(H_1) + U(A \vert H_2)P(H_2) = p$$
$$E(U(B)) = U(B\vert H_1)P(H_1) + U(B \vert H_2)P(H_2) = 2p - (1-p) = 3p-1$$
You are indifferent between the two actions when they have the same expected utility, so solve $3p - 1 = p$, which gives $p = 0.5$. This makes sense: when there is an equal chance of either hypothesis being true, both actions have the same expected payoff (namely $0.5$).
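Here is a small sketch (my own illustration) that tabulates the two expected utilities over a grid of $p$ and shows the crossing at $p = 0.5$.

```python
# Sketch: expected utilities of actions A and B as a function of p = P(H_1),
# using the utilities listed above.
import numpy as np

def expected_utility_A(p):
    return 1 * p + 0 * (1 - p)      # = p

def expected_utility_B(p):
    return 2 * p + (-1) * (1 - p)   # = 3p - 1

for p in np.linspace(0, 1, 11):
    print(f"p={p:.1f}  E[U(A)]={expected_utility_A(p):+.2f}  E[U(B)]={expected_utility_B(p):+.2f}")
# The two lines cross where 3p - 1 = p, i.e. p = 0.5; prefer B for p > 0.5, A for p < 0.5.
```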