Actually, I'd say just the opposite. Multicollinearity is often scoffed at as a concern. The only time it is a real issue is when one variable can be written as an exact linear function of the others in the model (a male dummy variable is exactly equal to a constant/intercept term minus a female dummy variable, so you can't have all three in your model). A prime example of the skepticism is Goldberger's comparison of multicollinearity to "micronumerosity" (having too few observations).
Perfect multicollinearity means that your model cannot be estimated; imperfect (near) multicollinearity often leads to large standard errors but no bias or other real problems; heteroskedasticity means that your standard errors are incorrect and your estimates are inefficient.
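The dummy-variable trap above can be sketched numerically. This is a minimal illustration with synthetic data; the variable names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
male = rng.integers(0, 2, n)          # 1 = male, 0 = female (synthetic)
female = 1 - male                     # exact linear function of intercept and male

# Design matrix with intercept, male, AND female: perfectly collinear,
# since intercept = male + female for every observation.
X_bad = np.column_stack([np.ones(n), male, female])
print(np.linalg.matrix_rank(X_bad))   # rank 2 < 3 columns, so (X'X) is singular

# Dropping one dummy restores full column rank, so OLS can be estimated.
X_ok = np.column_stack([np.ones(n), male])
print(np.linalg.matrix_rank(X_ok))    # full rank (2 columns, rank 2)
```

With `X_bad`, any OLS routine either fails outright or silently drops a column, which is exactly why you include at most one of the two dummies alongside an intercept.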
First, I would build a model that yields the parameter estimates in the form I want to interpret them (level change, percent change, etc.), using logs as appropriate. Then I would test for heteroskedasticity. The most accepted option is simply to use robust standard errors, which gives you correct standard errors while leaving the parameter estimates inefficient. Alternatively, you can use weighted least squares to get efficient estimates, but this has become less common unless you actually know the relationship between the variances of your observations (e.g., each depends on the size of the observation, like the population of a country). Indeed, in cross-section econometrics with a data set of any real size, robust standard errors have become required irrespective of the outcome of a Breusch–Pagan (BP) test; they are applied almost automatically.
There isn't a good test for endogeneity. Your real problem is that the regressor is correlated with the error, but OLS forces the regressors to be uncorrelated with the residuals, so you won't find any correlation there. Endogeneity is what makes econometrics hard and is a whole topic unto itself.
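The point that OLS mechanically zeroes out the regressor–residual correlation, even when the regressor is badly correlated with the structural error, can be shown with a simulation (synthetic data; the coefficients are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
u = rng.normal(size=n)
x = 0.8 * u + rng.normal(size=n)   # regressor correlated with the structural error
y = 1.0 + 2.0 * x + u              # true slope is 2

# OLS via least squares: beta_hat = argmin ||y - X b||
X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta

print(beta[1])                       # biased upward, noticeably above 2
print(np.corrcoef(x, resid)[0, 1])   # essentially zero, by construction of OLS
```

The slope estimate is inconsistent (its probability limit here is about 2.49, since cov(x, u)/var(x) = 0.8/1.64), yet the sample correlation between the regressor and the residual is numerically zero, so inspecting residuals tells you nothing about the endogeneity.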
IMO (as neither a logician nor a formally trained statistician), one shouldn't take any of this language too seriously. Even rejecting a null when p < .001 doesn't make the null false beyond doubt. What's the harm, then, in "accepting" the alternative hypothesis in a similarly provisional sense? It strikes me as a safer interpretation than "accepting the null" in the opposite scenario (i.e., a large, insignificant p), because the alternative hypothesis is so much less specific. E.g., given $\alpha=.05$, if p = .06, there's still a 94% chance that future studies would find an effect at least as different from the null,* so accepting the null isn't a smart bet even if one cannot reject it. Conversely, if p = .04, one can reject the null, which I've always understood to imply favoring the alternative. Why not "accepting" it? The only reason I can see is that one could be wrong, but the same applies when rejecting.
The alternative isn't a particularly strong claim, because, as you say, it covers the whole "space". To reject your null, one must find a reliable effect on either side of it such that the confidence interval (CI) doesn't include the null value. Given such a CI, the alternative hypothesis is true of it: all values within it are unequal to the null. The alternative hypothesis is also true of values outside the CI that are even more different from the null than the most extreme value inside it (e.g., if $\rm CI_{95\%}=[.6,.8]$, it wouldn't even be a problem for the alternative hypothesis if $\mathbb P(\rm head)=.9$). If you can get a CI like that, then again, what's not to accept about it, let alone the alternative hypothesis?
There might be some argument of which I'm unaware, but I doubt I'd be persuaded. Pragmatically, it might be wise not to write that you're accepting the alternative if there are reviewers involved, because success with them (as with people in general) often depends on not defying expectations in unwelcome ways. There's not much at stake anyway if you're not taking "accept" or "reject" too strictly as the final truth of the matter. I think that's the more important mistake to avoid in any case.
It's also important to remember that the null can be useful even if it's probably untrue. In the first example I mentioned, where p = .06, failing to reject the null isn't the same as betting that it's true, but it's basically the same as judging it scientifically useful. Rejecting it is basically the same as judging the alternative to be more useful. That seems close enough to "acceptance" to me, especially since the alternative isn't much of a hypothesis to accept.
BTW, this is another argument for focusing on CIs: if you can reject the null using Neyman–Pearson-style reasoning, then it doesn't matter how much smaller than $\alpha$ the p value is for the purpose of rejecting the null. It may matter by Fisher's reasoning, but if you can reject the null at a level of $\alpha$ that works for you, it might be more useful to carry that $\alpha$ forward into a $\rm CI_{(1-\alpha)}$ instead of just rejecting the null more confidently than you need to (a sort of statistical "overkill"). If you have chosen a comfortable error rate $\alpha$ in advance, try using that error rate to describe what you think the effect size could be within a $\rm CI_{(1-\alpha)}$. This is probably more useful than accepting a vaguer alternative hypothesis for most purposes.
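As a sketch of carrying a pre-chosen $\alpha$ into an interval estimate, here is a normal-approximation CI for a coin-flip proportion; the counts are hypothetical and the Wald interval is used only for simplicity:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical data: 70 heads in 100 flips; alpha chosen in advance.
heads, n, alpha = 70, 100, 0.05
p_hat = heads / n

# Wald (normal-approximation) interval at the pre-chosen error rate.
z = norm.ppf(1 - alpha / 2)
se = np.sqrt(p_hat * (1 - p_hat) / n)
lo, hi = p_hat - z * se, p_hat + z * se
print(f"{1 - alpha:.0%} CI: [{lo:.3f}, {hi:.3f}]")
```

With these numbers the interval is roughly [.61, .79], which excludes the null value of .5 and, in the spirit of the answer, says much more about the plausible effect size than "reject the null" alone.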
* Another important point about interpreting this example p value is that it represents this chance only under the scenario in which the null is given to be true. If the null is untrue, as the evidence would seem to suggest in this case (albeit not persuasively enough for conventional scientific standards), then that chance is even greater. In other words, even if the null is true (but one doesn't know this), it wouldn't be wise to bet on it in this case, and the bet is even worse if it's untrue!
To check for the presence or absence of heteroskedasticity, you should not rely on a single test. There are several others (e.g., the White test, an F test, etc.), and you should also use plots to check the homoskedasticity assumption of OLS (e.g., a residuals-vs-fitted-values plot).
An accurate and comprehensive analysis combines several of these procedures. If at least one test or plot indicates the presence of heteroskedasticity, you should reject the null hypothesis of homoskedasticity.
For R specifically, check the documentation or other sources, e.g.: https://www.statology.org/breusch-pagan-test-r/