Quick thoughts:
1) The key issue is what applied question you are trying to answer for your audience, because that determines what information you want from your statistical analysis. In this case, it seems to me that you want to estimate the magnitude of differences between groups (or perhaps the magnitude of ratios of the groups, if that is the measure more familiar to your audience). The magnitude of differences is not directly provided by the analyses you presented in the question. But it is straightforward to get what you want from the Bayesian analysis: you want the posterior distribution of the differences (or ratios). Then, from the posterior distribution of the differences (or ratios), you can make a direct probability statement such as this:
"The 95% most credible differences fall between [low 95% HDI limit] and [high 95% HDI limit]" (here I'm using the 95% highest density interval [HDI] as the credible interval, and because those are by definition the highest density parameter values they are glossed as 'most credible')
A medical-journal audience would intuitively and correctly understand that statement, because it's what the audience typically thinks is the meaning of a frequentist confidence interval (even though that is not the meaning of a frequentist confidence interval).
How do you get the differences (or ratios) from Stan or JAGS? Merely by post-processing of the completed MCMC chain. At each step in the chain, compute the relevant differences (or ratios), then examine the posterior distribution of the differences (or ratios). Examples are given in DBDA2E https://sites.google.com/site/doingbayesiandataanalysis/ for MCMC generally in Figure 7.9 (p. 177), for JAGS in Figure 8.6 (p. 211), and for Stan in Section 16.3 (p. 468), etc.!
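The post-processing step is simple enough to sketch. Here is a minimal illustration in Python/NumPy, using simulated draws (`mu_a`, `mu_b`) as stand-ins for the per-step MCMC values you would extract from a fitted Stan or JAGS object; the `hdi` helper is one common way to compute a highest density interval from samples of a unimodal posterior:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior draws for two group means, standing in for values
# extracted from a fitted Stan/JAGS object (one value per MCMC step, chains
# concatenated).
mu_a = rng.normal(5.0, 0.4, size=20_000)
mu_b = rng.normal(4.2, 0.4, size=20_000)

# Post-processing: the posterior distribution of the difference is just the
# step-by-step difference of the chains.
diff = mu_a - mu_b

def hdi(samples, mass=0.95):
    """Narrowest interval containing `mass` of the samples (unimodal case)."""
    s = np.sort(samples)
    n_in = int(np.ceil(mass * len(s)))
    widths = s[n_in - 1:] - s[:len(s) - n_in + 1]
    i = np.argmin(widths)
    return s[i], s[i + n_in - 1]

lo, hi = hdi(diff)
print(f"95% HDI of the difference: [{lo:.2f}, {hi:.2f}]")
```

The same trick works for ratios (`mu_a / mu_b`) or any other derived quantity: compute it at each step of the chain, then summarize the resulting distribution.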
2) If you are compelled by tradition to make a statement about whether or not a difference of zero is rejected, you have two Bayesian options.
2A) One option is to make probability statements regarding intervals near zero, and their relation to the HDI. For this, you set up a region of practical equivalence (ROPE) around zero, which is merely a decision threshold appropriate for your applied domain --- how big of a difference is trivially small? Setting such boundaries is routinely done in clinical non-inferiority testing, for example. If you have an 'effect size' measure in your field, there might be conventions for 'small' effect size, and the ROPE limits could be, say, half of a small effect. Then you can make direct probability statements such as these:
"Only 1.2% of the posterior distribution of differences is practically equivalent to zero"
and
"The 95% most credible differences are all not practically equivalent to zero (i.e., the 95% HDI and ROPE do not overlap) and therefore we reject zero." (notice the distinction between the probability statement from the posterior distribution, versus the subsequent decision based on that statement)
You can also accept a difference of zero, for practical purposes, if the 95% most credible values are all practically equivalent to zero.
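Both kinds of probability statements fall out of the posterior draws directly. A short sketch, again with simulated draws (`diff`) standing in for the MCMC output of the difference, and an arbitrary illustrative ROPE of ±0.1 (in practice the ROPE limits come from your applied domain):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical posterior draws of the difference between groups.
diff = rng.normal(0.8, 0.25, size=20_000)

# ROPE: a domain-specific band of differences considered trivially small.
rope = (-0.1, 0.1)

# Direct probability statement: posterior mass inside the ROPE.
p_in_rope = np.mean((diff > rope[0]) & (diff < rope[1]))
print(f"{100 * p_in_rope:.1f}% of the posterior is practically equivalent to zero")

# Decision rule: reject zero if the 95% interval lies entirely outside the
# ROPE (an equal-tailed interval is used here as a simple stand-in for the HDI).
lo, hi = np.percentile(diff, [2.5, 97.5])
if hi < rope[0] or lo > rope[1]:
    print("95% interval and ROPE do not overlap: reject a difference of zero")
elif lo > rope[0] and hi < rope[1]:
    print("95% interval falls inside the ROPE: accept zero for practical purposes")
```

Note the two separate outputs mirror the two separate things in the text: the probability statement (`p_in_rope`) and the subsequent decision based on the interval.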
2B) A second Bayesian option is Bayesian null hypothesis testing. (Notice that the method above was not called "hypothesis testing"!) Bayesian null hypothesis testing does a Bayesian model comparison of a prior distribution that assumes the difference can only be zero against an alternative prior distribution that assumes the difference could be some diffuse range of possibilities. The result of such a model comparison (usually) depends very strongly on the particular choice of alternative distribution, and so careful justification must be made for the choice of alternative prior. It is best to use at-least-mildly-informed priors for both the null and alternative so that the model comparison is genuinely meaningful. Note that the model comparison provides different information than estimation of differences between groups, because the model comparison is addressing a different question. Thus, even with a model comparison, you will still want to provide the posterior distribution of the magnitude of differences between groups, because your audience will want to know the magnitude of the difference and its uncertainty (credible interval) regardless of whether you decide to reject or accept a difference of zero.
There might be ways to do a Bayesian null hypothesis test from the Stan/JAGS/MCMC output, but I do not know in this case. For example, one could try a Savage-Dickey approximation to a Bayes factor, but that would rely on knowing the prior density on the differences, which would require some mathematical analysis or some additional MCMC approximation from the prior.
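To make the Savage-Dickey idea concrete, here is a rough sketch under strong simplifying assumptions: the difference is assumed to have had a known Normal(0, 2) prior, `diff` is a simulated stand-in for the posterior MCMC draws, and the posterior density at zero is crudely estimated by counting draws in a small window. This is only an illustration of the density-ratio idea, not a recommended workflow:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: suppose the difference had a known Normal(0, prior_sd)
# prior, and `diff` holds posterior draws of the difference from MCMC output.
prior_sd = 2.0
diff = rng.normal(0.8, 0.25, size=50_000)

# Savage-Dickey density ratio: BF01 = posterior density at 0 / prior density at 0.
# Crude posterior density estimate at zero: fraction of draws in a small window.
h = 0.05
post_at_zero = np.mean(np.abs(diff) < h) / (2 * h)
prior_at_zero = 1.0 / (prior_sd * np.sqrt(2 * np.pi))  # Normal(0, 2) pdf at 0
bf01 = post_at_zero / prior_at_zero
print(f"Approximate BF01 (evidence for a zero difference): {bf01:.3f}")
```

As the text warns, this hinges on actually knowing the prior density of the difference at zero; when the difference is a derived quantity, that density is usually not available analytically and would itself need to be approximated (e.g., by sampling from the prior).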
The two methods for deciding about null values are discussed in Ch. 12 of DBDA2E https://sites.google.com/site/doingbayesiandataanalysis/. But I really don't want this discussion to get side-tracked by a debate about the "proper" way to assess null values; they're just different and they provide different information. The main point of my reply is point 1, above: Look at the posterior distribution of the differences between groups.
Best Answer
This is a good question. I've spent a good amount of time during my Ph.D in Biostatistics consulting for academic physicians and their research. If you (and moderators) will allow for an opinion based answer then I'm happy to give it.
Medicine for some reason has created a culture in which the physician is intended to do everything themselves: study design, data collection, analysis, writing, and on top of that their clinical duties and learning more about their specialty. These include the responsibilities of an epidemiologist, a data architect, and a statistician, just to name a few. Personally, I think that is a ridiculous onus to put on a researcher. This also might explain why medical research seems to be a copy-paste affair with bad statistics. Statistics is hard to learn, medicine is hard to learn, so learning both tends to mean taking shortcuts on one or the other or both (and understandably, it is the statistical rigour that is sacrificed).
Rather than succumb to these expectations it might be wiser to, as whuber notes, befriend a biostatistician. Collaboration is a good way to learn, because you get consistent advice tailored to your specific situation as opposed to a mishmash of approaches from different courses with different learning goals. I'm not saying to defer all statistical work to a statistician, nor am I saying you should not learn about statistics independently, but I think rushing to learn all these things while also being a physician will lead to poorer work than if you were patient and collaborative.
The question is then "How do I meet/befriend a biostatistician?" Your medical school is likely attached to a university, in which there may or may not be an epidemiology department. Epidemiologists focus very carefully on how to do quality studies in a medical setting. They should be well versed enough in statistics to help you out with design, data collection, and analysis. If you don't have an epidemiology department, there may be someone in a stats/math department, or in the sociology department (sociology is not exactly like biostatistics, but the difference between an epidemiologist and a sociologist grows smaller and smaller).
EDIT:
EdM makes a good point about the fundamentals of probability and statistics. I'm not prepared to give a list of topics to learn and places to learn them. I think any undergraduate curriculum in science can give you enough to get started.
That being said, if pressed to offer one resource on the fundamentals of probability and statistics, I would recommend Introduction to Medical Statistics by Martin Bland. The book is geared towards medical students and in the introduction states
The book, however, does not cover probability, so you're free to pick up most introductory texts on the matter to cover that base. I agree with Bland that this book provides a good basis for reading academic medical literature critically, and it should be an excellent jumping-off point for learning more about statistics in medicine.