Although it may appear that the mean of the log-transformed variables is preferable (since this is how the log-normal is typically parameterised), from a practical point of view the log of the mean is typically much more useful.
This is particularly true when your model is not exactly correct; to quote George Box: "All models are wrong, but some are useful."
Suppose some quantity is log-normally distributed, blood pressure say (I'm not a medic!), and we have two populations, men and women. One might hypothesise that the average blood pressure is higher in women than in men. This corresponds exactly to asking whether the log of the average blood pressure is higher in women than in men. It is not the same as asking whether the average of log blood pressure is higher in women than in men.
Don't get confused by the textbook parameterisation of a distribution - it doesn't have any "real" meaning. The log-normal distribution is parameterised by the mean and standard deviation of the log ($\mu_{\ln}$ and $\sigma_{\ln}$) because of mathematical convenience, but we could equally choose to parameterise it by its actual mean and variance:
$\mu = e^{\mu_{\ln} + \sigma_{\ln}^2/2}$
$\sigma^2 = (e^{\sigma^2_{\ln}} -1)e^{2 \mu_{\ln} + \sigma_{\ln}^2}$
Obviously, doing so makes the algebra horribly complicated, but it still works and means the same thing.
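As a quick sanity check on those two formulas, here is a minimal R simulation (the parameter values are purely illustrative) comparing the empirical mean and variance of log-normal draws to the theoretical values:

```r
set.seed(42)
mu_ln    <- 1.0   # mean of the log
sigma_ln <- 0.5   # standard deviation of the log

x <- rlnorm(1e6, meanlog = mu_ln, sdlog = sigma_ln)

# Theoretical values from the formulas above
mu_theory  <- exp(mu_ln + sigma_ln^2 / 2)
var_theory <- (exp(sigma_ln^2) - 1) * exp(2 * mu_ln + sigma_ln^2)

c(empirical_mean = mean(x), theoretical_mean = mu_theory)
c(empirical_var  = var(x),  theoretical_var  = var_theory)
```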
Looking at the first formula above, we can see an important difference between transforming the variables and transforming the mean: taking logs gives $\ln(\mu) = \mu_{\ln} + \sigma_{\ln}^2/2$, so the log of the mean, $\ln(\mu)$, increases as $\sigma^2_{\ln}$ increases, while the mean of the log, $\mu_{\ln}$, doesn't.
This means that women could, on average, have higher blood pressure than men, even though the mean parameter of the log-normal distribution ($\mu_{\ln}$) is the same, simply because the variance parameter is larger. This fact would get missed by a test that used log(Blood Pressure).
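To make this concrete, here is a small R sketch (the numbers are made up for illustration, not real blood-pressure data): two log-normal populations with the same $\mu_{\ln}$ but different $\sigma_{\ln}$, so their actual means differ while a test on the logs sees no difference.

```r
set.seed(1)
# Same mean of the log, different sd of the log
men   <- rlnorm(1e5, meanlog = 4.8, sdlog = 0.1)
women <- rlnorm(1e5, meanlog = 4.8, sdlog = 0.3)

# The actual means differ...
mean(men)    # ~ exp(4.8 + 0.1^2/2)
mean(women)  # ~ exp(4.8 + 0.3^2/2), which is larger

# ...but a t-test on the logs finds no difference in the mean of the log
t.test(log(men), log(women))
```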
So far, we have assumed that blood pressure genuinely is log-normal. If the true distributions are not quite log-normal, then transforming the data will (typically) make things even worse than above, since we won't quite know what our "mean" parameter actually means; that is, we won't know whether the two equations for the mean and variance given above are correct. Using them to transform back and forth will then introduce additional errors.
I sense two areas of confusion here.
One is the logarithmic transformation of predictor variables (like mapping Time to TimeLog) versus the logarithmic link function used in a generalized linear model. The former has to do with the predictor variables, the latter with the response variable and its relationship to the linear part of the model.
In ordinary least-squares linear regression, it is standard practice to transform predictor variables as necessary to meet desirable characteristics like linearity, constant variance of the residuals (the differences between predicted and observed outcome values), and so on. So a log transform of time (as a predictor variable) might be called for regardless of the type of linear model you are pursuing. The linear regression provides, for any case of interest, a single linear predictor that is a linear combination of all the (potentially transformed) predictor-variable values for that case.
A generalized linear model allows such linear modeling of outcome variables that might not be adequately handled without further transformation of a linear predictor, which in principle could provide predicted values over all of $(-\infty,\infty)$. The link function in a generalized linear model has to do with mapping between the linear predictor and the response variable; it doesn't directly care whether the original predictor variables were somehow transformed before they were combined into the overall linear predictor. So from that perspective you don't have to worry.
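A minimal R sketch of the distinction (the data frame `df` and its columns are hypothetical, for illustration only): the log transform of a predictor and the log link of the GLM are independent choices and can be combined freely.

```r
# Transformed predictor: affects only how Time enters the linear predictor
fit1 <- glm(Mutations ~ log(Time), family = poisson(link = "log"), data = df)

# Untransformed predictor, same log link: the link maps the linear
# predictor to the response scale regardless of how Time was handled
fit2 <- glm(Mutations ~ Time, family = poisson(link = "log"), data = df)
```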
The second area of confusion is in your formulation of the generalized linear mixed model. As Isabella Ghement and Dimitris Rizopoulos have both mentioned, there are two problems here. First, unless you are dealing with such large numbers of mutations that they effectively have a continuous distribution, count data should be modeled as count data with Poisson or negative-binomial generalized linear models. Second, the way you have treated your time variable as a random effect (you say "fixed effects" in the question but you evidently meant "random effects" from the formulation of your model) would only rarely make sense. Please make sure that you fully understand the implications of treating time as a random effect in the way that you have, as others have noted. Did you perhaps intend to treat time as a fixed effect but with a different slope versus time for different individuals? If so, please consult the lmer cheat sheet for the correct way to code that.
In response to comment:
The best way to capture a change of Mutations with Time is to include Time as a fixed effect. (Including Time, however transformed, as a random effect as in your model doesn't accomplish that in any useful way that I see.) The regression coefficient for Time then gives a direct measure of the rate of increase of Mutations with Time. (For simplicity, I'm assuming Mutations to increase linearly over Time, and ignoring for now the link function of the generalized model.) Your model doesn't presently include a fixed effect for Time in any way.
If you think that Medication will affect the rate of increase of Mutations with Time, as opposed to simply affecting the number of Mutations at Time=0, then you also need to include an interaction term between the two fixed effects of Medication and Time. The intercept of the model (under default R handling) is then the value of Mutations at Time=0 for whatever Medication you have specified as the reference category.
Your `(1|Sample)` term then allows that intercept to differ among Samples. For the rate of change of Mutations also to differ among Samples (beyond any effects due to Medication differences among samples), add a term involving `(Time|Sample)`. That's precisely how the web page you linked in your comment allowed Time to contribute to a random-effect term even though it is a fixed effect. This answer on the lmer cheat sheet shows how to specify such a term depending on the assumptions that you are willing to make.
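Putting those pieces together, a model along these lines might be what you want (a sketch only, assuming a Poisson count outcome, the lme4 package, and the variable names from the question):

```r
library(lme4)

# Fixed effects: Medication, Time, and their interaction (does Medication
# change the rate of increase of Mutations with Time?).
# Random effects: intercept and Time slope varying by Sample.
fit <- glmer(Mutations ~ Medication * Time + (Time | Sample),
             family = poisson(link = "log"), data = df)
```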
Everything is on the log (expected rate) scale. However, note that standard errors don't transfer easily to the exponentiated scale, because a sampling distribution that is symmetric on the log scale isn't symmetric on the exponentiated scale. So, if you want something like a confidence interval constructed as estimate $\pm$ normal quantile $\times$ SE, you calculate it on the log scale and exponentiate the confidence interval limits. Similarly, if you want to use the random effects (either the specific estimate for a unit in the data used to fit the model, or a new, previously unseen unit for which we draw from the random-effects distribution), you add those on the log scale and then exponentiate.
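A brief R sketch of that back-transformation (assuming a fitted lme4 model called `fit` with a `Time` coefficient, as in the sketch earlier; the names are illustrative):

```r
# Wald-type confidence interval on the log scale, then exponentiate
est <- fixef(fit)["Time"]
se  <- sqrt(vcov(fit)["Time", "Time"])

ci_log <- est + c(-1, 1) * qnorm(0.975) * se
exp(ci_log)   # confidence limits on the exponentiated (rate) scale

# Unit-specific predictions: add the random effect on the log scale
# first, and only then exponentiate
re <- ranef(fit)$Sample   # per-Sample deviations, on the log scale
```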