In univariate interval estimation, the set of possible actions is the set of ordered pairs specifying the endpoints of the interval. Let an element of that set be represented by $(a, b),\text{ } a \le b$.
Highest posterior density intervals
Let the posterior density be $f(\theta)$. The highest posterior density intervals correspond to the loss function that penalizes an interval that fails to contain the true value and also penalizes intervals in proportion to their length:
$L_{HPD}(\theta, (a, b); k) = I(\theta \notin [a, b]) + k(b - a), \text{ } 0 < k \le \max_{\theta} f(\theta)$,
where $I(\cdot)$ is the indicator function. This gives the expected posterior loss
$\tilde{L}_{HPD}((a, b); k) = 1 - \Pr(a \le \theta \le b|D) + k(b - a)$.
Writing $\Pr(a \le \theta \le b|D)$ as $F(b|D) - F(a|D)$, where $F$ is the posterior CDF, setting $\frac{\partial}{\partial a}\tilde{L}_{HPD} = \frac{\partial}{\partial b}\tilde{L}_{HPD} = 0$ gives $f(a) - k = 0$ and $k - f(b) = 0$. The necessary condition for a local optimum in the interior of the parameter space is therefore $f(a) = f(b) = k$, exactly the rule for HPD intervals, as expected.
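As an illustration, here is a minimal numerical sketch (my own, not from the original; it assumes a hypothetical $\text{Gamma}(3, 1)$ posterior and $k = 0.1$) that minimizes $\tilde{L}_{HPD}$ directly and then checks the $f(a) = f(b) = k$ condition:

```python
# Minimize the HPD expected posterior loss for an assumed Gamma(3, 1)
# posterior, then verify the first-order condition f(a) = f(b) = k.
import numpy as np
from scipy import stats, optimize

post = stats.gamma(a=3)   # hypothetical posterior for theta
k = 0.1                   # length penalty; 0 < k <= max_theta f(theta) ~ 0.27

def expected_loss(x):
    a, b = x
    if a > b:
        return np.inf     # enforce a <= b
    return 1.0 - (post.cdf(b) - post.cdf(a)) + k * (b - a)

res = optimize.minimize(expected_loss, x0=[1.0, 5.0], method="Nelder-Mead")
a, b = res.x
print(f"HPD interval: ({a:.3f}, {b:.3f})")
print(f"f(a) = {post.pdf(a):.3f}, f(b) = {post.pdf(b):.3f}, k = {k}")
```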
The form of $\tilde{L}_{HPD}((a, b); k)$ gives some insight into why HPD intervals are not invariant to a monotone increasing transformation $g(\theta)$ of the parameter. The $\theta$-space HPD interval transformed into $g(\theta)$ space is different from the $g(\theta)$-space HPD interval because the two intervals correspond to different loss functions: the $g(\theta)$-space HPD interval corresponds to a transformed length penalty $k(g(b) - g(a))$.
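To see this numerically, note that at a fixed posterior mass the HPD interval is the shortest interval containing that mass (for a unimodal posterior, with $k$ acting as a Lagrange multiplier on coverage), and "shortest" is not preserved by a monotone transformation. A sketch under the same assumed $\text{Gamma}(3, 1)$ posterior, with $g(\theta) = \log\theta$:

```python
# Non-invariance sketch (assumed Gamma(3, 1) posterior; g(theta) = log(theta)).
# At fixed mass, the HPD interval is the shortest interval with that coverage,
# so we minimize interval width over the lower tail probability.
import numpy as np
from scipy import stats, optimize

post = stats.gamma(a=3)   # hypothetical posterior for theta
mass = 0.95               # fixed posterior coverage

def hpd_fixed_mass(ppf, mass):
    """Shortest interval with the given posterior mass (unimodal case)."""
    def width(lo):        # lo = probability in the lower tail
        return ppf(lo + mass) - ppf(lo)
    res = optimize.minimize_scalar(width, bounds=(1e-9, 1 - mass - 1e-9),
                                   method="bounded")
    return ppf(res.x), ppf(res.x + mass)

a, b = hpd_fixed_mass(post.ppf, mass)            # HPD for theta
log_ppf = lambda q: np.log(post.ppf(q))          # quantile function of log(theta)
c, d = hpd_fixed_mass(log_ppf, mass)             # HPD for log(theta)

print("theta-space HPD mapped to log scale:", np.log(a), np.log(b))
print("log(theta)-space HPD:               ", c, d)
```

The two printed intervals differ: the $\theta$-space interval minimizes $b - a$, while the $\log\theta$-space interval minimizes $\log b - \log a$, i.e. the ratio $b/a$.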
Quantile-based credible intervals
Consider point estimation with the asymmetric linear ("check") loss function
$L_q(\theta, \hat{\theta};p) = (1-p)(\hat{\theta} - \theta)I(\theta < \hat{\theta}) + p(\theta - \hat{\theta})I(\theta \ge \hat{\theta}), \text{ } 0 \le p \le 1$,
which penalizes underestimation with weight $p$ and overestimation with weight $1-p$.
The posterior expected loss is
$\tilde{L}_q(\hat{\theta};p) = (1-p)\Pr(\theta < \hat{\theta}|D)\,(\hat{\theta} - \text{E}(\theta|\theta < \hat{\theta}, D)) + p\Pr(\theta \ge \hat{\theta}|D)\,(\text{E}(\theta | \theta \ge \hat{\theta}, D) - \hat{\theta})$.
Differentiating gives $\frac{d}{d\hat{\theta}}\tilde{L}_q = (1-p)\Pr(\theta < \hat{\theta}|D) - p\Pr(\theta \ge \hat{\theta}|D)$; setting this to zero yields the implicit equation
$\Pr(\theta < \hat{\theta}|D) = p$,
that is, the optimal $\hat{\theta}$ is the $(100p)$% quantile of the posterior distribution, as expected.
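As a quick numerical sanity check, here is a minimal sketch (assuming a hypothetical standard normal posterior and $p = 0.9$; the names are mine, not from the original) that minimizes the expected check loss and compares the result to the posterior quantile:

```python
# Minimizing the expected check loss recovers the 90% posterior quantile
# for an assumed standard normal posterior.
import numpy as np
from scipy import stats, optimize, integrate

post = stats.norm()   # hypothetical posterior
p = 0.9

def expected_loss(t):
    # (1 - p) * E[(t - theta); theta < t] + p * E[(theta - t); theta >= t]
    over = integrate.quad(lambda th: (t - th) * post.pdf(th), -np.inf, t)[0]
    under = integrate.quad(lambda th: (th - t) * post.pdf(th), t, np.inf)[0]
    return (1 - p) * over + p * under

res = optimize.minimize_scalar(expected_loss)
print(f"argmin of expected loss: {res.x:.4f}")   # ~1.2816
print(f"90% posterior quantile:  {post.ppf(p):.4f}")
```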
Thus, to get quantile-based interval estimates, combine two such losses, one for each endpoint:
$L_{qCI}(\theta, (a,b); p_L, p_U) = L_q(\theta, a;p_L) + L_q(\theta, b;p_U)$.
Because the expected loss is the sum of a term involving only $a$ and a term involving only $b$, the minimizing interval has endpoints at the $p_L$ and $p_U$ posterior quantiles; for example, $p_L = 0.025$ and $p_U = 0.975$ give the central 95% credible interval.
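As a usage example, a minimal sketch (again assuming a standard normal posterior; the choice of $p_L$ and $p_U$ is mine):

```python
# Since the expected loss separates in a and b, the optimal endpoints are
# simply the p_L and p_U posterior quantiles, here the central 95% interval.
from scipy import stats

post = stats.norm()          # hypothetical posterior
p_L, p_U = 0.025, 0.975
a, b = post.ppf(p_L), post.ppf(p_U)
print(f"quantile-based 95% credible interval: ({a:.3f}, {b:.3f})")  # (-1.960, 1.960)
```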
Best Answer
The Bayesian approach is based on determining the probability of a hypothesis under a model, using an "a priori" probability that is then updated based on data. By contrast, classical hypothesis testing does not admit assigning a probability to the null hypothesis; one can only reject it or fail to reject it. The Type I error is the probability of wrongly rejecting the null hypothesis when it is true. Thus, it is something completely different from the Bayesian logic (since the probability refers to making a mistake, not to the hypothesis itself).
EDIT: I stressed the fact that the Bayesian approach is based on assigning a probability to a hypothesis because this is a crucial difference with respect to the classical approach, which maintains that parameters are "assigned by Nature" and thus are not random variables, so you cannot make probability statements about them directly. However, after you get your a posteriori probability, you can of course take action, either by choosing the hypothesis with the higher probability or the one minimizing a given cost function. See, for example, here: https://www.probabilitycourse.com/chapter9/9_1_8_bayesian_hypothesis_testing.php
To sum up, I'd say the difference is: "a posteriori probability vs p-value", not "making vs not making a decision".