Impossible to know because you don't know the base rate at which H0 is actually true or false. That being said...
You are already 'protected' against Type I error when you restrict yourself to doing post hoc analyses only on elements of models that were statistically significant overall. In addition, post hoc correction procedures tend towards draconian criteria for evidence. Therefore, I'd tend to think you'd be drifting towards a notable increase in your risk of Type II error. So, I'd suggest that (if your peers will allow it) you look at the magnitude of the effects from your overall analysis and not fiddle around with post hocs.
The idea that only non-orthogonal comparisons require adjustment is a myth. See section 6.1 of Frane (2015): http://jrp.icaap.org/index.php/jrp/article/view/514/417
In general, computing several alternate statistics and picking the one that gives you the answer you like best is a bad policy and can cause error inflation (as it's a form of multiple comparisons in itself). It's best to have a statistical plan before you look at your data.
Bonferroni is less powerful than Holm. Holm is less powerful than some other procedures that require more assumptions. Sidak is only a tiny bit more powerful than Bonferroni and requires the assumption of non-negative dependence. If you just want to compare each treatment to control, and not compare the different treatments to each other, you can use Dunnett's procedure (which is designed for that purpose).
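For concreteness, here's a minimal R sketch of those options (the data frame d, response y, and factor group are hypothetical names; Dunnett's procedure assumes the control is the first factor level):

library(multcomp)  # for glht() and mcp()
fit <- aov(y ~ group, data = d)
summary(glht(fit, linfct = mcp(group = "Dunnett")))  # each treatment vs. control only
p <- c(0.010, 0.020, 0.030, 0.040)                   # made-up p-values
p.adjust(p, method = "bonferroni")                   # multiplies each p by the number of tests
p.adjust(p, method = "holm")                         # step-down; at least as powerful as Bonferroni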
Not sure what you mean by "post hoc." Unfortunately, different people use that term in different ways.
Multiplicity applies any time you conduct more than one comparison.
See 5.
If you're not interested in the omnibus result, there's no reason to perform the omnibus test. As you observed, you can just go straight to the individual tests, adjusted for multiplicity (though it may be advisable to use the omnibus error term for those tests, which can provide more power in some cases). Some people perform the omnibus test and then use Fisher's LSD method (i.e. do the individual comparisons without adjustment), but that doesn't generally control the familywise error rate and may thus be hard to justify.
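In R, for example, skipping the omnibus test and going straight to the individual comparisons might look like this sketch (hypothetical data frame d with response y and factor group; the default pooled SD is in the spirit of using the omnibus error term):

pairwise.t.test(d$y, d$group, p.adjust.method = "holm")  # familywise error controlled
pairwise.t.test(d$y, d$group, p.adjust.method = "none")  # unadjusted, Fisher's-LSD-style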
I don't see why the significance of a main effect should inherently affect whether you adjust the other tests.
Response to @Sophocole's reply from Aug 5, 2016 to @Bonferroni's answer from Aug 3, 2016.
I don't know who you talked to at IBM, but SPSS has several ways to control the familywise error rate, including Bonferroni, Tukey, and Dunnett tests (just google "multiple comparisons in SPSS" and you'll see). The same goes for any other reputable statistical package, including SAS and R. And if you're using a simple method like Bonferroni, you can probably do the adjustment in your head.
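To illustrate the in-your-head arithmetic: with m comparisons and a familywise alpha of 0.05, Bonferroni just tests each comparison at 0.05/m (or, equivalently, multiplies each p-value by m). In R, with made-up numbers:

alpha <- 0.05
m <- 4
alpha / m                                              # 0.0125 per-comparison cutoff
p.adjust(c(0.01, 0.03), method = "bonferroni", n = m)  # 0.04, 0.12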
Regarding doing multiple tests of a single comparison and choosing the one that gives you the answer you like best, it's pretty straightforward to see what the problem with that is. If you try one method that produces error at a rate of 5%, but then you get a second, third, and fourth chance with alternative methods, obviously the error rate is going to be bigger than 5%. That's like playing darts and setting up a second, third, and fourth bull's eye in slightly different positions on the dart board--obviously, you're increasing your chances of getting lucky.
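If you want to see the inflation directly, here's a quick simulation sketch (my own illustration, not part of the original answer): run both a t test and a Wilcoxon test on null data and declare a "finding" whenever either one is significant. The two tests are highly correlated, so the inflation is modest, but the error rate still lands above the nominal 5%.

set.seed(1)
hits <- replicate(10000, {
  x <- rnorm(20); y <- rnorm(20)       # H0 is true: both groups from the same distribution
  p1 <- t.test(x, y)$p.value           # first bull's eye
  p2 <- wilcox.test(x, y)$p.value      # second bull's eye
  min(p1, p2) < 0.05                   # claim success if either hits
})
mean(hits)                             # somewhat above 0.05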
If you're in a very early stage of your research where you're just exploring around and error rates aren't a big concern, then by all means, test your heart out and don't bother with adjustments--you could even just look at the plots and mean differences and not do any formal testing at all if that suits your needs. But if you're trying to publish a claim or sell a treatment based on your results, you likely need statistical rigor. And if you're trying to get a drug approved by the FDA, you can forget about playing loose with error control!
By the way, you may want to read that Nakagawa article again. It seems he is not arguing for getting rid of multiplicity adjustments altogether. He apparently thinks Bonferroni and Holm are generally too conservative for behavioral ecology research, but he does endorse false discovery rate control.
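If you do go the false discovery rate route, the Benjamini-Hochberg adjustment is one line in R (made-up p-values):

p.adjust(c(0.001, 0.008, 0.020, 0.041, 0.300), method = "BH")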
Best Answer
If you specify adjust = "mvt" in the lsmeans call, you'll get exactly the same results as the glht call (except for minor differences due to the fact that the computations are simulation-based). The difference would come if you summarize the tests in the glht object with some option other than the one-step method (which is the default). The one-step method protects the error rate for simultaneous confidence intervals, which is stronger (and hence more conservative) than the step-down methods. The mvt method is the exact one-step method when the distributional assumptions hold.
Also, in a nicely balanced experiment with homogeneous errors, there is no difference between the Tukey method and the mvt method. That is, the Tukey method is the mvt method for the particular covariance structure encountered in such a balanced design.