First off, there is no single accepted way to "analyze" a ROC curve: it is merely a graphic that portrays the discriminative ability of a classification model. You can certainly summarize a ROC curve using its $c$-statistic (the AUC), and calculating confidence intervals and performing inference with $c$-statistics is well understood thanks to their relation to the Wilcoxon–Mann–Whitney U-statistic.
It's generally fairly well accepted that you can estimate the variability in ROC curves using the bootstrap (cf. Pepe, Etzioni, and Feng). This is a nice approach because the ROC curve is an empirical estimate and the bootstrap is non-parametric. Parameterizing anything in such a fashion introduces assumptions and complications such as "is a flat prior really noninformative?", and I am not convinced that it is here.
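As a concrete sketch of that bootstrap approach: resample cases with replacement and recompute the summary (here the AUC) on each resample. All data and settings below are made-up toy choices, not anything from the references.

```python
# Sketch: nonparametric bootstrap for AUC variability (toy data, arbitrary settings).
import numpy as np

rng = np.random.default_rng(0)

def auc(scores, labels):
    """AUC via the Mann-Whitney statistic: the fraction of (negative, positive)
    pairs where the positive score is higher (ties count one half)."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    diff = pos[:, None] - neg[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

# Simulated scores: positives shifted upward by one standard deviation.
n = 200
labels = rng.integers(0, 2, size=n)
scores = rng.normal(loc=labels.astype(float), scale=1.0)

# Resample cases with replacement and recompute the AUC each time.
boot = []
for _ in range(500):
    idx = rng.integers(0, n, size=n)
    if labels[idx].min() == labels[idx].max():
        continue  # skip degenerate resamples containing only one class
    boot.append(auc(scores[idx], labels[idx]))

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"AUC = {auc(scores, labels):.3f}, 95% bootstrap interval = ({lo:.3f}, {hi:.3f})")
```

The same loop gives pointwise bands for the ROC curve itself if you store the whole curve per resample instead of just its AUC.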
Lastly, there's the issue of pseudo-likelihood. You can induce variability in the ROC curve by putting a prior on $\theta$, which, in typical ROC usage, is the one quantity not treated as a random variable. But you have then assumed that the variability in TPR and the variability in FPR induced by $\theta$ are independent. They are not; in fact they are completely dependent. It is rather like computing a Bayesian posterior for your own weight in kilograms and another in pounds and claiming the two do not depend on each other.
Take, as an example, a model with perfect discrimination. Using your method, you will find that the confidence bands cover the unit square. They should not: there is no sampling variability in a model with perfect discrimination, and a bootstrap will show you exactly that.
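A quick toy check of that claim (the perfectly separated scores below are simulated for illustration): every bootstrap resample of a perfectly discriminating model still has every positive outranking every negative, so the resampled AUC never moves.

```python
# Toy check: bootstrap of a model with perfect discrimination has zero variability.
import numpy as np

rng = np.random.default_rng(1)

def auc(scores, labels):
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    diff = pos[:, None] - neg[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

# Perfectly separated scores: all negatives below 0.4, all positives above 0.6.
labels = np.array([0] * 50 + [1] * 50)
scores = np.concatenate([rng.uniform(0.0, 0.4, 50), rng.uniform(0.6, 1.0, 50)])

aucs = []
for _ in range(200):
    idx = rng.integers(0, 100, size=100)
    if labels[idx].min() == labels[idx].max():
        continue  # resample must contain both classes
    aucs.append(auc(scores[idx], labels[idx]))

print(set(aucs))  # every bootstrap AUC is exactly 1.0
```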
If one were to approach the issue of ROC "analysis" from a Bayesian perspective, it would perhaps be most useful to address the problem of model selection by putting a prior on the space of models used for analysis. That would be a very interesting problem.
Best Answer
There are actually multiple ways to do this.
Remember that the AUC is a normalized form of the Mann–Whitney U statistic, which is computed from the sum of the ranks of the scores in one of the classes. This means that maximizing the AUC is the problem of ordering all scores $s_1,\ldots,s_N$ so that the scores in one class are higher than those in the other.
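That equivalence is easy to verify by hand. A small illustration with toy scores (the double-`argsort` rank trick assumes no tied scores):

```python
# AUC as the normalized Mann-Whitney U statistic (toy scores, no ties assumed).
import numpy as np

scores = np.array([0.1, 0.4, 0.35, 0.8, 0.7])
labels = np.array([0, 0, 1, 1, 1])

ranks = scores.argsort().argsort() + 1.0   # rank 1 = smallest score
n1 = labels.sum()                          # number of positives
n0 = len(labels) - n1                      # number of negatives
U = ranks[labels == 1].sum() - n1 * (n1 + 1) / 2
auc = U / (n0 * n1)
print(auc)  # 5 of the 6 (negative, positive) pairs are correctly ordered: 5/6
```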
This can be framed, for example, as a linear programming problem that is computationally infeasible at scale but can be solved heuristically with suitable relaxations. A method that interests me more, though, is to construct approximate gradients of the AUC so that we can optimize it with stochastic gradient descent.
There's plenty to read about this; here is a naive approach:
Using $[\cdot]$ as the Iverson bracket, another way to state the sought ordering over the scores is that
$[s_i\leq s_j]=1$ for all $i,j$ where the responses are $y_i=0$ and $y_j=1$.
So if the scores are a function of inputs and parameters, $s_i = f(x_i,\theta)$,
we want to maximize $$M^*=\max_\theta \sum_{i,j}[s_i\leq s_j]$$ where the sum runs over pairs with $y_i=0$ and $y_j=1$.
Consider the relaxation $\tanh(\alpha(s_j-s_i)) \leq [s_i\leq s_j]$ for $\alpha>0$ (the tanh is below $1$ when $s_j\geq s_i$ and negative when $s_j<s_i$).
So $$M^*\geq \sum_{i,j}\tanh(\alpha(s_j-s_i))$$ for any $\theta$, and we can maximize this smooth lower bound instead.
And we could then sample $i$ from the negative class and $j$ from the positive class to get stochastic contributions $\nabla_\theta \tanh(\alpha(s_j-s_i))$ to the full gradient.
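Putting the pieces together, here is a minimal sketch of this surrogate-gradient scheme for a linear scorer $s_i = x_i^\top\theta$. The data, $\alpha$, step size, and iteration count are all arbitrary choices for illustration, not anything prescribed by the method.

```python
# Sketch: stochastic gradient ascent on the tanh surrogate of the AUC
# for a linear scorer s_i = x_i @ theta (all settings are toy choices).
import numpy as np

rng = np.random.default_rng(2)

# Toy 2-D data: class 1 is shifted along the first coordinate.
n = 400
y = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, 2)) + np.outer(y, [2.0, 0.0])

theta = np.zeros(2)
alpha, lr = 2.0, 0.1

pos_idx = np.flatnonzero(y == 1)
neg_idx = np.flatnonzero(y == 0)

for _ in range(2000):
    i = rng.choice(neg_idx)   # y_i = 0
    j = rng.choice(pos_idx)   # y_j = 1
    d = X[j] - X[i]
    s = theta @ d             # s_j - s_i for a linear scorer
    # grad of tanh(alpha * (s_j - s_i)) wrt theta: alpha * (1 - tanh^2) * (x_j - x_i)
    grad = alpha * (1.0 - np.tanh(alpha * s) ** 2) * d
    theta += lr * grad        # ascent: we are maximizing the surrogate

def auc(scores, labels):
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    return (pos[:, None] > neg[None, :]).mean()

print(auc(X @ theta, y))  # should approach the best linear AUC on this data
```

In practice one would average the gradient over a minibatch of (negative, positive) pairs rather than a single pair, and anneal $\alpha$ to sharpen the relaxation toward the true step function.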