You should be wary of deciding whether or not to transform variables on statistical grounds alone. You must also consider interpretation. Is it reasonable that your response is linear in $x$? Or is it more probably linear in $\log(x)$? To discuss that, we would need to know your variables... Just as an example: independent of model fit, I wouldn't believe mortality to be a linear function of age!
Since you say you have "large data", you could look into splines, to let the data speak about transformations; see, for instance, the package mgcv in R. But even using such technology (or other methods to search for transformations automatically), the ultimate test is to ask yourself what makes scientific sense. What do other people in your field do with similar data?
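A minimal sketch of that idea with mgcv, assuming a response `y` and a positive predictor `x` in a data frame `mydata` (all hypothetical names):

```r
# Let a penalized spline suggest the shape of the relationship,
# rather than committing to log(x) (or no transformation) up front.
library(mgcv)

fit <- gam(y ~ s(x), data = mydata)  # s(x): penalized smooth of x
summary(fit)                         # edf near 1 suggests a linear effect;
                                     # edf well above 1 suggests curvature
plot(fit, shade = TRUE)              # if the fitted curve looks logarithmic,
                                     # that supports log(x) statistically
```

Even then, the estimated curve is only evidence; whether $\log(x)$ makes scientific sense is still your call.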
If logarithms of predictors, generically $x$, are helpful, and centring variables on their mean is helpful, would it help to centre before transforming?
Once you have subtracted the mean from a variable, then necessarily at least one value is now negative, and logarithms can't (usefully) be calculated (setting aside complex analysis).
Even if you discard the specific suggestion of $\log(x - \bar{x})$ (writing $\bar{x}$ for the mean of $x$) on those grounds, the more general idea of transforming $(x - \bar{x})$ still

- requires a transformation that will work with positive, zero and negative values; there are some (cube root, asinh, ...), but they won't usually help you in any situation in which logarithms are being contemplated seriously (illustrated below);
- implies that the mean of the untransformed data is in some sense a natural, or even a convenient, origin for the transformed scale, which I think is usually not the case.

So it's a no-go generally for your [1], in my view.
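To make the first point concrete, a toy sketch in base R with made-up positive data:

```r
# Made-up positive data, as might be contemplated for a log transformation.
x <- c(0.5, 1, 2, 5, 20)

log(x - mean(x))    # NaNs with a warning: centred values <= 0 have no (real) log
asinh(x - mean(x))  # defined for negative, zero and positive values, but roughly
                    # linear near zero and log-like only far out in the tails
log(x)              # by contrast, the plain logarithm is fine for positive x
```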
By all means centre variables, transformed or not, in presenting regression results; it's the same regression, and it's a matter of convenience how you explain it. So on your [2]: I don't think centring changes model interpretation at all; it's just a matter of convenience whether you report centred results.
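A small sketch of that equivalence, again with hypothetical names `y`, `x` and `mydata`:

```r
# Centring a predictor changes the intercept's meaning, not the model.
fit1 <- lm(y ~ x, data = mydata)
fit2 <- lm(y ~ I(x - mean(x)), data = mydata)

coef(fit1)["x"]                        # same slope in both fits
coef(fit2)["I(x - mean(x))"]
all.equal(fitted(fit1), fitted(fit2))  # TRUE: identical fitted values
# Only the intercept differs: in fit2 it is the predicted y at the mean of x.
```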
By the way, there is no "of course" about using $\log(x+1)$, even if $x \ge 0$. That's an ad hoc fudge that some people use, especially, it seems, in some branches of biology, but there is no standard or accepted logic to it.
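One symptom of that arbitrariness, assuming made-up data: unlike $\log(x)$, $\log(x+1)$ is not well behaved under a change of units. Rescaling $x$ merely shifts $\log(x)$ by a constant, but it changes the shape of $\log(x+1)$, so results can depend on whether you measured in grams or kilograms. A quick check in R:

```r
x <- c(0, 2, 10, 50)               # made-up data including a zero

log(1000 * x[-1]) - log(x[-1])     # constant log(1000): a unit change merely
                                   # shifts log(x), which regression absorbs
log(1000 * x + 1) - log(x + 1)     # not constant: with log(x + 1), results
                                   # depend on the units of measurement
```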
There's no requirement for the control variables to have any particular distribution. Indeed, the marginal distribution of the IVs, or even of the DV, is not an issue at all -- you're checking something that is not related to any regression assumption whatever. (Many posts on this site discuss regression assumptions in detail.)
There's also no reason to expect that a transformation of an IV that makes the relationship more linear would not make the distributional shape of that IV less normal. Since the distributional shape of an IV is of no consequence, achieving the first at the expense of the second should not worry you in and of itself.
However, if you're trying transformations and choosing the one with the highest $R^2$, and then using the same data for hypothesis tests, confidence intervals, or prediction intervals (i.e. using the same data on which you chose a model to evaluate the model or predict from it), then the statistical procedures don't have the properties they're intended to have: among other things, p-values are too low and standard errors are too small.
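A hedged simulation sketch of that effect: under a null with no real relationship, pick the transformation with the highest $R^2$, then read off its p-value from the same data (the candidate set and all other choices here are illustrative assumptions):

```r
# Under the null, p-values should be uniform on (0, 1); selecting the
# best-R^2 transformation first makes the reported p-values too small.
set.seed(1)

pick_best_p <- function(n = 50) {
  x <- rexp(n) + 0.1                 # made-up positive predictor
  y <- rnorm(n)                      # no true relationship with x
  fits <- list(lm(y ~ x), lm(y ~ log(x)),
               lm(y ~ sqrt(x)), lm(y ~ I(1/x)))
  r2   <- sapply(fits, function(f) summary(f)$r.squared)
  best <- fits[[which.max(r2)]]      # transformation chosen by R^2
  summary(best)$coefficients[2, 4]   # p-value of the chosen slope
}

p <- replicate(2000, pick_best_p())
mean(p < 0.05)   # noticeably above the nominal 0.05
```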