There can be no single state-of-the-art for goodness of fit (for example no UMP test across general alternatives will exist, and really nothing even comes close -- even highly regarded omnibus tests have terrible power in some situations).
In general when selecting a test statistic you choose the kinds of deviation that it's most important to detect and use a test statistic that is good at that job. Some tests do very well at a wide variety of interesting alternatives, making them decent default choices, but that doesn't make them "state of the art".
The Anderson-Darling is still very popular, and with good reason. The Cramer-von Mises test is much less used these days (to my surprise, because it's usually better than the Kolmogorov-Smirnov but simpler than the Anderson-Darling, and often has better power than the Anderson-Darling at picking up differences "in the middle" of the distribution).
All of these tests suffer from bias against some kinds of alternatives, and it's easy to find cases where the Anderson-Darling does much worse (terribly, really) than the other tests. (As I suggest, it's more 'horses for courses' than one test to rule them all). There's often little consideration given to this issue (what's best at picking up the deviations that matter the most to me?), unfortunately.
You may find some value in some of these posts:
Is Shapiro–Wilk the best normality test? Why might it be better than other tests like Anderson-Darling?
2 Sample Kolmogorov-Smirnov vs. Anderson-Darling vs Cramer-von-Mises (about two-sample tests, but many of the statements carry over)
Motivation for Kolmogorov distance between distributions (more theoretical discussion but there are several important points about practical implications)
I don't think you'll be able to form a confidence band for the cdf from the Cramer-von Mises and Anderson-Darling statistics, because those criteria are based on all of the deviations rather than just the largest one.
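To make the contrast concrete: the Kolmogorov-Smirnov statistic is a supremum of deviations, so it inverts directly into a simultaneous confidence band for the cdf (e.g. via the Dvoretzky-Kiefer-Wolfowitz inequality), which is exactly what the quadratic CvM/AD criteria don't give you. A minimal sketch in Python (the function name is my own):

```python
import numpy as np

def dkw_band(x, alpha=0.05):
    """Simultaneous (1 - alpha) confidence band for the cdf, from the DKW
    inequality: P(sup_t |F_n(t) - F(t)| > eps) <= 2 exp(-2 n eps^2)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    eps = np.sqrt(np.log(2 / alpha) / (2 * n))
    ecdf = np.arange(1, n + 1) / n          # empirical cdf at the order statistics
    lower = np.clip(ecdf - eps, 0.0, 1.0)
    upper = np.clip(ecdf + eps, 0.0, 1.0)
    return x, lower, upper
```

Because the CvM and AD statistics integrate (weighted) squared deviations over the whole range, no single band half-width falls out of them in this way.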
The same considerations apply as to the distribution of the Kolmogorov–Smirnov test statistic discussed here. The Anderson–Darling test statistic (for a given sample size) has a distribution that (1) doesn't depend on the null-hypothesis distribution when all parameters are known, & (2) depends only on the functional form of the null-hypothesis distribution when location & scale parameters are estimated. I don't know of an R implementation of the A–D test specifically for the exponential distribution with estimated rate parameter, but you could quickly make a function to calculate the test statistic by adapting the `ad.test` function from the `nortest` package: change the distribution function from the best-fit normal, `pnorm((x - mean(x))/sd(x))`, to the best-fit exponential, `pexp(x/mean(x))`. Then get critical values for any desired significance level & sample size by simulation.
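The same recipe is easy to reproduce outside R too. Here is a sketch in Python of the idea just described: the usual A² formula applied to `pexp(x/mean(x))`, i.e. the best-fit exponential with rate estimated by `1/mean(x)`, with critical values obtained by simulation (function names are my own):

```python
import numpy as np

def ad_stat_exp(x):
    """Anderson-Darling statistic A^2 against the best-fit exponential,
    rate estimated as 1/mean(x) (the pexp(x/mean(x)) recipe above)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    z = 1.0 - np.exp(-x / x.mean())           # fitted exponential cdf at the data
    i = np.arange(1, n + 1)
    # A^2 = -n - (1/n) * sum (2i-1) [ln z_(i) + ln(1 - z_(n+1-i))]
    return -n - np.mean((2 * i - 1) * (np.log(z) + np.log1p(-z[::-1])))

def ad_exp_critical(n, alpha=0.05, nsim=10_000, rng=None):
    """Simulated (1 - alpha) critical value of the statistic for sample size n.
    The null distribution doesn't depend on the true rate, so Exp(1) suffices."""
    rng = np.random.default_rng(rng)
    sims = np.array([ad_stat_exp(rng.exponential(size=n)) for _ in range(nsim)])
    return np.quantile(sims, 1 - alpha)
```

Reject at level `alpha` when `ad_stat_exp(x)` exceeds `ad_exp_critical(len(x), alpha)`.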
As to the "best" test, note that different tests are more powerful against different kinds of departure from the null-hypothesis distribution. If you have a quite specific alternative in mind, e.g. a Weibull distribution with shape parameter greater than one, a likelihood ratio test will be more powerful than a general-purpose goodness-of-fit test. For more vaguely specified alternatives it might be helpful to compare the power of various tests against a rogues' gallery, following the approach of Stephens (1974), "EDF statistics for goodness of fit and some comparisons", JASA, 69, 347.
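As an illustration of the kind of comparison Stephens makes, a small simulation estimates the power of two EDF tests against one entry in such a gallery. This Python sketch uses `scipy.stats`; the choice of a fully specified Exp(1) null and a Weibull shape-1.5 alternative is mine, purely for illustration:

```python
import numpy as np
from scipy import stats

def rejection_rate(sampler, n=50, nsim=500, alpha=0.05, seed=0):
    """Fraction of simulated samples in which the KS and CvM tests
    reject a fully specified Exp(1) null at level alpha."""
    rng = np.random.default_rng(seed)
    ks = cvm = 0
    for _ in range(nsim):
        x = sampler(rng, n)
        ks += stats.kstest(x, stats.expon.cdf).pvalue < alpha
        cvm += stats.cramervonmises(x, stats.expon.cdf).pvalue < alpha
    return ks / nsim, cvm / nsim

# Size: the data really are Exp(1).
size_ks, size_cvm = rejection_rate(lambda rng, n: rng.exponential(size=n))
# Power: a Weibull(shape = 1.5) alternative.
pow_ks, pow_cvm = rejection_rate(
    lambda rng, n: stats.weibull_min.rvs(1.5, size=n, random_state=rng))
```

Repeating this over a range of alternatives (heavier tails, contamination, shifts) is exactly how the power rankings in such comparisons are produced, and it makes the "horses for courses" point visible: the ranking of the tests changes with the alternative.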
Best Answer
Package `adk` was replaced by package `kSamples`. Try: