The generality of the KS test (e.g. its usefulness as a nonparametric test) comes from the definition of the test statistic under the assumption that the CDF is continuous.
We define the KS statistic as
$$D_n(F) = \max\left(D_n^+(F), D_n^-(F)\right)$$
$$D_n^+(F) = \sup_{x \in \mathbb{R}} [F_n(x) - F(x)]$$
$$D_n^-(F) = \sup_{x \in \mathbb{R}} [F(x) - F_n(x)]$$
Then, writing $X_{(1)} \le \cdots \le X_{(n)}$ for the order statistics, under the null $D_n^+(F) = \max_{1 \le i \le n} \left( \tfrac{i}{n} - F(X_{(i)}) \right)$.
Recall that under the null each $F(X_i)$ is uniform on $(0,1)$ (the probability integral transform), so the null distribution of $D_n$ does not depend on $F$.
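As a sketch of the order-statistic form above (the standard normal null and the use of scipy are my illustrative choices, not from the original):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = np.sort(rng.normal(size=50))
n = len(x)
u = stats.norm.cdf(x)  # F(X_(i)) under the null: Uniform(0,1) order statistics

# One-sided statistics computed from the order statistics
d_plus = np.max(np.arange(1, n + 1) / n - u)   # D_n^+
d_minus = np.max(u - np.arange(0, n) / n)      # D_n^-
d = max(d_plus, d_minus)

# Matches scipy's two-sided KS statistic for the same null
print(d, stats.kstest(x, 'norm').statistic)
```

Because the `u` values are just ordered Uniform(0,1) draws under the null, the same computation gives the same null distribution whatever continuous $F$ you test against.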
So you can create your own KS-like test for any discrete distribution, but its null distribution will depend on that distribution; it won't be distribution-free.
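For instance, here is one way such a KS-like test for a discrete null could be built, with the null distribution of the statistic obtained by simulation (the `ks_discrete` helper and the Poisson(3) setup are illustrative assumptions, not a standard recipe):

```python
import numpy as np
from scipy import stats

def ks_discrete(x, cdf, support):
    """sup |F_n - F| over the support of a discrete distribution."""
    fn = np.array([(x <= k).mean() for k in support])
    return np.max(np.abs(fn - cdf(support)))

rng = np.random.default_rng(1)
lam, n = 3.0, 200
support = np.arange(0, 30)       # covers essentially all Poisson(3) mass
cdf = lambda k: stats.poisson.cdf(k, lam)

x = rng.poisson(lam, size=n)
d_obs = ks_discrete(x, cdf, support)

# The null distribution depends on the Poisson(3) null, so the usual
# KS tables do not apply; simulate it instead.
d_null = np.array([ks_discrete(rng.poisson(lam, size=n), cdf, support)
                   for _ in range(2000)])
p = (d_null >= d_obs).mean()
print(d_obs, p)
```

The simulation step is exactly what you lose relative to the continuous case: the table of critical values has to be rebuilt for every discrete null.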
Reference: Shao (2010), Mathematical Statistics.
There can be no single state-of-the-art for goodness of fit (for example no UMP test across general alternatives will exist, and really nothing even comes close -- even highly regarded omnibus tests have terrible power in some situations).
In general when selecting a test statistic you choose the kinds of deviation that it's most important to detect and use a test statistic that is good at that job. Some tests do very well at a wide variety of interesting alternatives, making them decent default choices, but that doesn't make them "state of the art".
The Anderson-Darling test is still very popular, and with good reason. The Cramer-von Mises test is much less used these days (to my surprise, because it's usually better than the Kolmogorov-Smirnov but simpler than the Anderson-Darling, and it often has better power than the Anderson-Darling at detecting differences "in the middle" of the distribution).
All of these tests suffer from bias against some kinds of alternatives, and it's easy to find cases where the Anderson-Darling does much worse (terribly, really) than the other tests. (As I suggest, it's more 'horses for courses' than one test to rule them all). There's often little consideration given to this issue (what's best at picking up the deviations that matter the most to me?), unfortunately.
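Both tests mentioned above are available in scipy, if you want to try them side by side; a minimal sketch (the t-distributed sample is just one illustrative heavy-tailed alternative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.standard_t(df=5, size=100)   # heavier-tailed than normal

# Anderson-Darling test of normality (location/scale estimated internally)
ad = stats.anderson(x, dist='norm')
print(ad.statistic, ad.critical_values)

# Cramer-von Mises test against a fully specified N(0,1) null
cvm = stats.cramervonmises(x, 'norm')
print(cvm.statistic, cvm.pvalue)
```

Note the two functions treat parameters differently: `anderson` estimates them and uses adjusted critical values, while `cramervonmises` assumes the null CDF is fully specified.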
You may find some value in some of these posts:
Is Shapiro–Wilk the best normality test? Why might it be better than other tests like Anderson-Darling?
2 Sample Kolmogorov-Smirnov vs. Anderson-Darling vs Cramer-von-Mises (about two-sample tests, but many of the statements carry over)
Motivation for Kolmogorov distance between distributions (more theoretical discussion but there are several important points about practical implications)
I don't think you'll be able to form a confidence band for the CDF from the Cramer-von Mises or Anderson-Darling statistics, because those criteria are based on all of the deviations rather than just the largest one.
Best Answer
I think the question asks for a precise statistical test, not for a histogram comparison. When the Kolmogorov-Smirnov test is used with estimated parameters, the null distribution of the test statistic depends on the hypothesized distribution, unlike the case with no estimated parameters. For instance, running the test in R with parameters estimated from the sample, and again with those parameters treated as fixed, gives markedly different results for the same sample x.
for the same sample x. The significance level or the p-value thus have to be determined by Monte Carlo simulation under the null, producing the distribution of the Kolmogorov-Smirnov statistics from samples simulated under the estimated distribution (with a slight approximation in the result given that the observed sample comes from another distribution, even under the null).