Normal distribution:
Take a normal distribution with known variance. We can take this variance to be 1 without losing generality (by simply dividing each observation by the square root of the variance). This has sampling distribution:
$$p(X_{1}...X_{N}|\mu)=\left(2\pi\right)^{-\frac{N}{2}}\exp\left(-\frac{1}{2}\sum_{i=1}^{N}(X_{i}-\mu)^{2}\right)=A\exp\left(-\frac{N}{2}(\overline{X}-\mu)^{2}\right)$$
Where $A$ is a constant which depends only on the data. This shows that the sample mean is a sufficient statistic for the population mean. If we use a uniform prior, then the posterior distribution for $\mu$ will be:
$$(\mu|X_{1}...X_{N})\sim Normal\left(\overline{X},\frac{1}{N}\right)\implies \left(\sqrt{N}(\mu-\overline{X})|X_{1}...X_{N}\right)\sim Normal(0,1)$$
So a $1-\alpha$ credible interval will be of the form:
$$\left(\overline{X}+\frac{1}{\sqrt{N}}L_{\alpha},\overline{X}+\frac{1}{\sqrt{N}}U_{\alpha}\right)$$
Where $L_{\alpha}$ and $U_{\alpha}$ are chosen such that a standard normal random variable $Z$ satisfies:
$$Pr\left(L_{\alpha}<Z<U_{\alpha}\right)=1-\alpha$$
Now we can start from this "pivotal quantity" for constructing a confidence interval. The sampling distribution of $\sqrt{N}(\mu-\overline{X})$ for fixed $\mu$ is a standard normal distribution, so we can substitute this into the above probability:
$$Pr\left(L_{\alpha}<\sqrt{N}(\mu-\overline{X})<U_{\alpha}\right)=1-\alpha$$
Then re-arrange to solve for $\mu$, and the confidence interval will be the same as the credible interval.
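This agreement is easy to check by simulation. The following sketch (my own addition; the choices of $\mu$, $N$, the seed, and the use of numpy/scipy are arbitrary) estimates the coverage of the equal-tailed 95% interval $\overline{X}\pm z_{\alpha/2}/\sqrt{N}$, which should match its 95% credibility:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
mu, N, alpha, reps = 2.0, 25, 0.05, 100_000
z = norm.ppf(1 - alpha / 2)   # for the equal-tailed choice, U_alpha = -L_alpha = z

# Draw `reps` samples of size N with unit variance and compute each sample mean
xbar = rng.normal(mu, 1.0, size=(reps, N)).mean(axis=1)

# Fraction of intervals (xbar - z/sqrt(N), xbar + z/sqrt(N)) that contain mu
covered = (xbar - z / np.sqrt(N) < mu) & (mu < xbar + z / np.sqrt(N))
print(covered.mean())   # close to 0.95
```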
Scale parameters:
For scale parameters, the pdfs have the form $p(X_{i}|s)=\frac{1}{s}f\left(\frac{X_{i}}{s}\right)$. As an example, take $(X_{i}|s)\sim Uniform(0,s)$, which corresponds to $f(t)=1$ for $0<t<1$. The joint sampling distribution is:
$$p(X_{1}...X_{N}|s)=s^{-N}\;\;\;\;\;\;\;0<X_{1}...X_{N}<s$$
Since this density is nonzero only when $X_{max}<s$, where $X_{max}$ is the maximum of the observations, $X_{max}$ is a sufficient statistic. We now find its sampling distribution:
$$Pr(X_{max}<y|s)=Pr(X_{1}<y,X_{2}<y...X_{N}<y|s)=\left(\frac{y}{s}\right)^{N}\;\;\;\;\;\;\;0<y<s$$
Now we can make this independent of the parameter by taking $y=qs$. This means our "pivotal quantity" is given by $Q=s^{-1}X_{max}$ with $Pr(Q<q)=q^{N}$ which is the $beta(N,1)$ distribution. So, we can choose $L_{\alpha},U_{\alpha}$ using the beta quantiles such that:
$$Pr(L_{\alpha}<Q<U_{\alpha})=1-\alpha=U_{\alpha}^{N}-L_{\alpha}^{N}$$
And we substitute the pivotal quantity:
$$Pr(L_{\alpha}<s^{-1}X_{max}<U_{\alpha})=1-\alpha=Pr(X_{max}L_{\alpha}^{-1}>s>X_{max}U_{\alpha}^{-1})$$
And there is our confidence interval. For the Bayesian solution with the Jeffreys prior $p(s)\propto 1/s$ (so the posterior is proportional to $s^{-N-1}$, supported on $s>X_{max}$), we have:
$$p(s|X_{1}...X_{N})=\frac{s^{-N-1}}{\int_{X_{max}}^{\infty}r^{-N-1}dr}=N (X_{max})^{N}s^{-N-1}$$
$$\implies Pr(s>t|X_{1}...X_{N})=N (X_{max})^{N}\int_{t}^{\infty}s^{-N-1}ds=\left(\frac{X_{max}}{t}\right)^{N}$$
We now plug in the confidence interval and calculate its credibility:
$$Pr(X_{max}L_{\alpha}^{-1}>s>X_{max}U_{\alpha}^{-1}|X_{1}...X_{N})=\left(\frac{X_{max}}{X_{max}U_{\alpha}^{-1}}\right)^{N}-\left(\frac{X_{max}}{X_{max}L_{\alpha}^{-1}}\right)^{N}$$
$$=U_{\alpha}^{N}-L_{\alpha}^{N}=Pr(L_{\alpha}<Q<U_{\alpha})$$
And presto, we have $1-\alpha$ credibility and coverage.
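The same kind of simulation confirms this. In the sketch below (my own addition; $s$, $N$, the seed, and the equal-tailed quantile choice are arbitrary), the $Beta(N,1)$ quantiles have the closed form $F^{-1}(p)=p^{1/N}$ since $F(q)=q^{N}$:

```python
import numpy as np

rng = np.random.default_rng(0)
s, N, alpha, reps = 3.0, 10, 0.05, 100_000

# Equal-tailed Beta(N,1) quantiles: F(q) = q^N, so F^{-1}(p) = p**(1/N)
L = (alpha / 2) ** (1 / N)
U = (1 - alpha / 2) ** (1 / N)

# Draw samples from Uniform(0, s) and form the interval (X_max/U, X_max/L)
xmax = rng.uniform(0, s, size=(reps, N)).max(axis=1)
covered = (xmax / U < s) & (s < xmax / L)

print(covered.mean())   # coverage, close to 0.95
print(U**N - L**N)      # credibility, exactly 0.95 by construction
```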
Best Answer
Many frequentist confidence intervals (CIs) are based on the likelihood function. If the prior distribution is truly non-informative, then the Bayesian posterior has essentially the same information as the likelihood function. Consequently, in practice, a Bayesian probability interval (or credible interval) may be very similar numerically to a frequentist confidence interval. [Of course, even if numerically similar, there are philosophical differences in interpretation between frequentist and Bayesian interval estimates.]
Here is a simple example, estimating binomial success probability $\theta.$ Suppose we have $n = 100$ observations (trials) with $X = 73$ successes.
Frequentist: The traditional Wald interval uses the point estimate $\hat \theta = X/n = 73/100 = 0.73.$ And the 95% CI is of the form $$\hat \theta \pm 1.96\sqrt{\frac{\hat \theta(1-\hat \theta)} {n}},$$ which computes to $(0.643,\,0.817).$
This form of CI assumes that relevant binomial distributions can be approximated by normal ones and that the margin of error $\sqrt{\theta(1-\theta)/n}$ is well approximated by $\sqrt{\hat\theta(1-\hat\theta)/n}.$ Particularly for small $n,$ these assumptions need not be true. [The cases where $X = 0$ or $X = n$ are especially problematic.]
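The Wald computation above can be reproduced in a couple of lines (a sketch of my own; only the standard library is needed):

```python
import math

n, x = 100, 73
theta_hat = x / n                                          # point estimate 0.73
me = 1.96 * math.sqrt(theta_hat * (1 - theta_hat) / n)     # margin of error
print(round(theta_hat - me, 3), round(theta_hat + me, 3))  # 0.643 0.817
```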
The Agresti-Coull CI has been shown to have more accurate coverage probability. This interval 'adds two Successes and two Failures' as a trick to get a coverage probability nearer to 95%. It begins with the point estimate $\tilde \theta = (X+2)/\tilde n,$ where $\tilde n = n + 4.$ Then a 95% CI is of the form $$\tilde \theta \pm 1.96\sqrt{\frac{\tilde \theta(1-\tilde \theta)} {\tilde n}},$$ which computes to $(0.635, 0.807).$ For $n > 100$ and $0.3 < \tilde \theta < 0.7,$ the difference between these two styles of confidence intervals is nearly negligible.
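The same calculation with the 'add two Successes and two Failures' adjustment (again a minimal sketch of my own):

```python
import math

n, x = 100, 73
n_t = n + 4                                          # adjusted n-tilde = 104
theta_t = (x + 2) / n_t                              # adjusted estimate 75/104
me = 1.96 * math.sqrt(theta_t * (1 - theta_t) / n_t)
print(round(theta_t - me, 3), round(theta_t + me, 3))  # 0.635 0.807
```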
Bayesian: One popular noninformative prior in this situation is $\mathsf{Beta}(1,1) \equiv \mathsf{Unif}(0,1).$ The likelihood function is proportional to $\theta^x(1-\theta)^{n-x}.$ Multiplying the kernels of the prior and likelihood we have the kernel of the posterior distribution $\mathsf{Beta}(x+1,\, n-x+1).$
Then a 95% Bayesian interval estimate uses quantiles 0.025 and 0.975 of the posterior distribution to get $(0.635, 0.807).$ When the prior distribution is 'flat' or 'noninformative' the numerical difference between the Bayesian probability interval and the Agresti-Coull confidence interval is slight.
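The posterior quantiles are available directly from scipy (a sketch, assuming the $\mathsf{Beta}(1,1)$ prior described above):

```python
from scipy.stats import beta

n, x = 100, 73
post = beta(x + 1, n - x + 1)       # posterior Beta(74, 28) under a Unif(0,1) prior
lo, hi = post.ppf([0.025, 0.975])   # equal-tailed 95% interval
print(round(lo, 3), round(hi, 3))   # approximately (0.635, 0.807)
```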
Notes: (a) In this situation, some Bayesians prefer the noninformative prior $\mathsf{Beta}(.5, .5).$ (b) For confidence levels other than 95%, the Agresti-Coull CI uses a slightly different point estimate. (c) For data other than binomial, there may be no available 'flat' prior, but one can choose a prior with a huge variance (small precision) that carries very little information. (d) For more discussion of Agresti-Coull CIs, graphs of coverage probabilities, and some references, perhaps also see this Q & A.