You are absolutely right that you don't need to go through Bayes' formula to calculate the relative frequencies, and this is the critical point: if you do as you suggest, you are NOT calculating probabilities, and they are not even "posterior"; they are just "conditional" relative frequencies (not the same thing).
Your question is equivalent to asking "Is descriptive statistics the same thing as inferential statistics?" I know you didn't use the term "statistics" in your question, but you cannot escape it.
Naturally, since you have the data, you can easily compute the relative frequencies, be they joint, marginal, or conditional, of the events that have occurred. What does that tell you? That for past events, the relative frequencies of the events you are interested in were so and so. These are not probabilities yet. In order for them to be treated as probabilities, you have to make additional assumptions.
Why? Because probabilities are used to describe (and hopefully manage) uncertainty, and uncertainty relates to the unknown (usually the future, but not necessarily; it may refer to events that have happened but whose outcome you don't know). So in order to move from the known and certain (the empirical frequencies you have calculated, which is what descriptive statistics is all about) to the unknown (the probabilities), it is obvious that you have to make additional assumptions to somehow use relative frequencies in place of probabilities; they are not automatically equivalent.
And here is where "frequentists" and "Bayesians" part ways. Tailored to your question (and oversimplifying, of course):
The frequentist would make the following assumption: "I assume that my sample (the data from games played) is representative of what happens 'in general' with this team, so next season will be approximately the same. Given this assumption, I can use the relative frequencies obtained from this sample as approximate estimates of the probabilities of what will happen next season." He would then go on and calculate $P(team\ wins | team\ scores\ 100)$ directly from the contingency tables.
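To make the frequentist route concrete, here is a minimal sketch. The game records are entirely hypothetical (the original question gives no data), and I am assuming "scores 100" means "scores at least 100 points":

```python
# Hypothetical game records as (points_scored, won) pairs.
# These numbers are illustrative only, not from the original question.
games = [(102, True), (95, False), (110, True), (98, True),
         (100, False), (105, True), (89, False), (101, True)]

# Frequentist estimate of P(team wins | team scores 100):
# the relative frequency of wins among games with >= 100 points,
# read straight off the contingency table.
wins_given_100 = [won for points, won in games if points >= 100]
p_win_given_100 = sum(wins_given_100) / len(wins_given_100)
```

The whole inference rests on the representativeness assumption: this ratio is taken as an estimate of a probability for games not yet played.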
The Bayesian would object as follows: "Your assumption that your sample is 'representative' is unfounded. Either you have other samples available, in which case bring them forth and show that your current sample is representative, or you don't have other samples, in which case you cannot proceed as you said; your inference is unreliable." The Bayesian would then say: "If we don't have other samples, the best we can do is to accept our ignorance, start somewhere (the prior = 'before the data'), and let the data modify our possibly ad hoc starting point, leading us to the posterior (= 'after the data')."
This means that in the Bayes' formula that appears in your question, the magnitude $P(team\ wins)$ is NOT calculated from the sample you have; it is assigned a value a priori as the prior. Then you go to the sample to calculate $P(team\ scores\ 100 | team\ wins)$ and $P(team\ scores\ 100)$, and now you see why you have to go through Bayes' formula to arrive at something that can legitimately be called the "posterior" probability $P(team\ wins | team\ scores\ 100)$: it is not calculated only from the sample at hand, which is what you need if you want to use your calculations to say something about games that have not yet been played.
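The mechanics of this route can be sketched as follows. All the numbers here are hypothetical, chosen only to show which quantity comes from where:

```python
# Prior P(team wins): ASSIGNED a priori, not read off the sample.
p_win = 0.5

# Likelihoods: these ARE estimated from the sample (hypothetical values here).
p_100_given_win = 0.6    # P(scores 100 | wins)
p_100_given_loss = 0.2   # P(scores 100 | loses)

# Marginal P(scores 100) via the law of total probability.
p_100 = p_100_given_win * p_win + p_100_given_loss * (1 - p_win)

# Posterior P(wins | scores 100) by Bayes' formula:
# it mixes the assigned prior with the sample-based likelihoods.
posterior = p_100_given_win * p_win / p_100
```

The resulting number is a genuine "posterior" precisely because a prior went into it; change the prior and the posterior changes, even with the same sample.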
With either approach, you are now in the realm of inferential statistics.
Note: The fact that we do all this through statistical distributions and not as point probabilities (as has already been mentioned) is because we want a fuller picture of the structure of uncertainty that surrounds the future outcomes, and also because we want to quantify the uncertainty/error in our estimated probabilities.
Best Answer
You are mixing up two different "levels" of probability. You have an unknown distribution for the bag of numbers; let $H$ denote the proportion of mixed-digit numbers in the bag. Your prior probability should be a distribution over the possibilities for $H$ (which is some number between $0$ and $1$; you don't even know how many numbers are in the bag!). So, you assumed a prior that asserts with certainty that the proportion of mixed-digit numbers in the bag is $0.72$, i.e. your prior distribution is $P(H=0.72) = 1$ and $P(H\neq 0.72) = 0$.
This doesn't seem like a good starting guess (and if you applied the update rule anyway, you would get the same distribution back: deterministically $0.72$). Maybe you should start with $H$ uniformly distributed on $[0,1]$?

Then the update rule will update the distribution over $H$. With the given observation, you'd expect the new distribution of $H$ to become more concentrated towards $0.80$ after this one update. In this case you'll have to compute conditional densities.
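As a sketch of how that continuous update works: a uniform prior on $[0,1]$ is the Beta$(1,1)$ distribution, and with a binomial observation the posterior is again a Beta distribution. The observation counts below (8 mixed-digit numbers out of 10 drawn, matching a 0.80 sample proportion) are my hypothetical stand-in for whatever the question actually observed:

```python
# Conjugate Beta-Binomial update sketch.
# Uniform prior on H over [0,1] is Beta(alpha=1, beta=1).
alpha, beta = 1, 1

# Hypothetical observation: 8 of 10 drawn numbers were mixed-digit.
mixed, total = 8, 10

# Posterior is Beta(alpha + successes, beta + failures).
alpha += mixed
beta += total - mixed

posterior_mean = alpha / (alpha + beta)   # pulled from 0.5 towards 0.8
```

The posterior mean lands between the prior mean (0.5) and the sample proportion (0.8), and the density is now concentrated near the observed proportion rather than flat.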
Alternatively, if you want to work with a discrete distribution, you could assume a prior under which $H$ can only be one of $\{0, 0.25, 0.5, 0.75, 1\}$, each with equal probability (also probably not a good prior: in the limit, the updates will concentrate on one of these values, even though the bag may well have a proportion other than these), and after one update at least $0$ and $1$ will be eliminated. The point of the question, I guess, is to get a feel for how updates work.
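The discrete version can be sketched in a few lines. Again the observation (8 mixed-digit out of 10 drawn) is hypothetical; any sample containing both mixed and non-mixed numbers eliminates $H=0$ and $H=1$:

```python
from math import comb

# Discrete prior: H restricted to five candidate proportions, equal mass.
candidates = [0.0, 0.25, 0.5, 0.75, 1.0]
prior = {h: 1 / len(candidates) for h in candidates}

# Hypothetical data, not from the original question.
mixed, total = 8, 10

# Binomial likelihood of the data under each candidate value of H.
likelihood = {h: comb(total, mixed) * h**mixed * (1 - h)**(total - mixed)
              for h in candidates}

# Bayes update: posterior proportional to prior * likelihood.
unnorm = {h: prior[h] * likelihood[h] for h in candidates}
z = sum(unnorm.values())
posterior = {h: p / z for h, p in unnorm.items()}
# H = 0 and H = 1 now carry zero posterior probability, since the
# data contained both mixed and non-mixed numbers.
```

After this one update the mass has shifted towards 0.75, the candidate closest to the observed proportion, and the impossible endpoints are gone for good.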