You can use corrected resampled t-tests (Nadeau & Bengio, 2003). They account for the lack of independence between the performance estimates in repeated cross-validation. They are of course less powerful than ordinary t-tests, but they are probably the best option available to you.
You also need the number of outer folds multiplied by the number of repetitions to be at least $30$, because the usual classification performance metrics cannot be assumed to be normally distributed. As you have $100$, you are on the safe side.
Use one-sample t-tests, since you have paired data: the algorithms have all been trained and tested on the same folds. First, compute all performance differences between two algorithms on the same fold $k\in \{1,\ldots , K\}$ in the same repetition $r\in \{1,\ldots , R\}$:
$$d_{kr}$$
The sample mean and variance are computed the usual way:
$$\hat{\mu}_d= \frac{1}{K\times R} \sum_{k=1}^K \sum_{r=1}^R d_{kr}$$
$$ \hat{\sigma}_d^2=\frac{1}{(K\times R) - 1} \sum_{k=1}^K \sum_{r=1}^R (d_{kr}-\hat{\mu}_d)^2 $$
The following adjusted test statistic should be compared against standard Student's t tables with $(K\times R) -1$ degrees of freedom:
$$T = \hat{\mu}_d\left/\sqrt{\left(\frac{1}{K\times R}+\frac{1/K}{1-1/K}\right)\hat{\sigma}_d^2}\right.$$
It replaces the usual test statistic, which does not correct for the non-independent samples:
$$T = \hat{\mu}_d\left/\sqrt{\frac{\hat{\sigma}_d^2}{K\times R}}\right.$$
If you want to split hairs, you can replace the estimate $\frac{1/K}{1-1/K}$ in the correction factor with the actual number of records in the current outer (test) fold divided by the actual number of records in all other outer folds. That is only necessary for small datasets, though.
In your case, $\frac{1}{K\times R}=0.01$ and the correction factor $\frac{1/K}{1-1/K}=\frac{1}{K-1}=0.25$. So your standard error has been inflated compared to a normal t-test by a factor of:
$$\frac{\sqrt{0.01 + 0.25}}{\sqrt{0.01}}\approx 5.10$$
Had you done 10 times repeated 10-fold CV instead (which Weka does by default, for example), the standard error would have been inflated only by a factor of:
$$\frac{\sqrt{0.01 + 1/9}}{\sqrt{0.01}}=3.48$$
For a given number $K \times R$ of sample points in the t-test, the correction is harsher the more repetitions, and thereby the fewer folds, you have. A 100-fold CV without repetition would inflate the standard error only by a factor of 1.42. But you need huge datasets if you want performance metrics computed on 1% of your records to behave like interval variables, so you cannot always do that.
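The inflation factors discussed here can be reproduced with a few lines of Python (the helper name `inflation_factor` is my own, not from any package):

```python
import math

def inflation_factor(k, r):
    """Ratio of the corrected to the naive standard error
    for K-fold cross-validation repeated R times."""
    n = k * r
    naive = 1.0 / n                                   # usual 1/(K*R) term
    corrected = naive + (1.0 / k) / (1.0 - 1.0 / k)   # add test/train ratio 1/(K-1)
    return math.sqrt(corrected) / math.sqrt(naive)
```

For example, `inflation_factor(5, 20)` gives about 5.10, `inflation_factor(10, 10)` about 3.48, and `inflation_factor(100, 1)` about 1.42.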
For the confidence intervals, keep using the same correction factor as before, i.e. basically the same corrected standard error:
$$\hat{\mu}_d \pm t_{\alpha/2}^{(K\times R) -1} \times \sqrt{\left(\frac{1}{K\times R}+\frac{1/K}{1-1/K}\right)\hat{\sigma}_d^2}$$
I'm not sure whether they are implemented in an R package, but honestly it only takes a couple of lines to code this yourself.
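For instance, a minimal sketch in Python (the function name is mine, and `scipy` is assumed to be available purely for the t-distribution):

```python
import math
from scipy import stats  # assumed available; used only for the t-distribution

def corrected_resampled_ttest(diffs, k, r, alpha=0.05):
    """Corrected resampled t-test (Nadeau & Bengio, 2003).

    diffs: the K*R per-fold performance differences d_kr, flattened.
    k: number of outer folds; r: number of repetitions.
    Returns the corrected t statistic, the two-sided p-value, and the
    (1 - alpha) confidence interval for the mean difference.
    """
    n = k * r
    assert len(diffs) == n, "expected one difference per fold and repetition"
    mu = sum(diffs) / n                                # sample mean
    var = sum((d - mu) ** 2 for d in diffs) / (n - 1)  # sample variance
    # corrected standard error: 1/(K*R) plus the test/train ratio (1/K)/(1 - 1/K)
    se = math.sqrt((1.0 / n + (1.0 / k) / (1.0 - 1.0 / k)) * var)
    t = mu / se
    p = 2.0 * stats.t.sf(abs(t), df=n - 1)
    half_width = stats.t.ppf(1.0 - alpha / 2.0, df=n - 1) * se
    return t, p, (mu - half_width, mu + half_width)
```

The same corrected standard error `se` serves both the test statistic and the confidence interval, which is exactly the point of the correction.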
Don't forget to correct for multiple comparisons (due to multiple algorithms being compared) afterwards.
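As a sketch of one such correction, here is a plain Holm–Bonferroni step-down procedure (the function name is my own; `statsmodels.stats.multitest.multipletests` offers this and more if you prefer a library):

```python
def holm_bonferroni(pvals, alpha=0.05):
    """Holm's step-down correction for multiple comparisons.
    Returns a list of reject/accept decisions, one per p-value."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices, smallest p first
    reject = [False] * m
    for rank, i in enumerate(order):
        if pvals[i] <= alpha / (m - rank):  # threshold shrinks from alpha/m upward
            reject[i] = True
        else:
            break  # once one comparison fails, all larger p-values fail too
    return reject
```

For example, `holm_bonferroni([0.04, 0.01, 0.03])` rejects only the second hypothesis at $\alpha = 0.05$.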
As Fawcett explains in 'An Introduction to ROC Analysis', ROC averaging can be done simply by pooling the scores from multiple sets $T_1, ..., T_k$, as you suggested in method (2). This is preferable to method (1) because it is quite hard to average actual ROC curves: the false-positive-rate (x-axis) values of the points are expected to differ between curves, so you would need to do a lot of interpolation. Another advantage is that the curve resulting from method (2) is smoother and approximates the AUC better, since a low number of scores tends to underestimate the AUROC (at least when it is calculated via the trapezoidal rule).
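A minimal sketch of method (2) in Python (the function name and input layout are my own assumptions): pool the per-fold (score, label) pairs and compute the AUC directly as the Mann–Whitney rank statistic, i.e. the probability that a random positive is scored above a random negative, with ties counting half.

```python
def pooled_auc(fold_results):
    """Pool (score, label) pairs from all test folds, then compute the AUC
    as the Mann-Whitney statistic. Labels are 1 (positive) or 0 (negative)."""
    pooled = [pair for fold in fold_results for pair in fold]
    pos = [s for s, y in pooled if y == 1]
    neg = [s for s, y in pooled if y == 0]
    wins = sum(1.0 if p > n else (0.5 if p == n else 0.0)
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

This avoids any interpolation between curves, at the cost of losing the per-fold AUC estimates that method (1) would give you.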
However, one should note that an advantage of method (1) is that it enables you to estimate the variance of the AUC.