The question seems to rely on a mistaken notion$^\dagger$.
A generalized linear model (GLM) does not in general assume constant variance.
Instead, there's an assumed variance function, $v(\mu)$, relating the variance to the mean via $\text{Var}(Y_i)= \phi\,v(\mu_i)$; the form of $v$ is determined by the particular family chosen from the exponential-family class of distributions.
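To make this concrete, R's GLM family objects carry exactly this $v(\mu)$; a quick sketch (expected output in comments):

```r
# Each GLM family object in R carries its variance function v(mu):
poisson()$variance(2)     # 2    -> v(mu) = mu
binomial()$variance(0.3)  # 0.21 -> v(mu) = mu * (1 - mu)
Gamma()$variance(2)       # 4    -> v(mu) = mu^2
gaussian()$variance(2)    # 1    -> v(mu) = 1; constant variance is just this special case
```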
So in the generalized linear model, interest focuses not on constant variance, but on a correctly specified variance function*.
* As in any model, George Box's famous aphorism ("all models are wrong, but some are useful") applies: we don't generally believe a variance function to be exactly correct, just a close enough description that the resulting inferences will be good enough for our particular purposes.
As a result, a formal test of a correctly specified variance function doesn't really make sense, since it's answering a question we already know the answer to (no, it's not exactly correct), and any sufficiently large sample would tell us so.
Further, and more practically: even when the variance function is misspecified badly enough for the effect to be substantial, choosing your procedure on the basis of a formal test of assumptions may be less advisable than simply not making an assumption you're not comfortable with. In the case of normal models at least, a number of papers indicate that it's better to simply use a procedure that doesn't assume constant variance than to decide via a preliminary test.
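As a sketch of that advice in R (the three groups and their deliberately unequal spreads below are simulated purely for illustration), one can default to a Welch-type procedure rather than pre-testing:

```r
set.seed(42)
# Three groups with deliberately unequal standard deviations
d <- data.frame(
  group = factor(rep(c("A", "B", "C"), each = 30)),
  value = c(rnorm(30, 0, 1), rnorm(30, 0.5, 2), rnorm(30, 1, 3))
)

# Welch's ANOVA: does not assume constant variance (the default here)
oneway.test(value ~ group, data = d)

# Classical one-way ANOVA for contrast: assumes constant variance
oneway.test(value ~ group, data = d, var.equal = TRUE)
```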
$\dagger$ (or, just possibly, a difference from the usual terminology, in which case your intent should be made more explicit)
Yes, it is possible for the omnibus ANOVA test statistic (testing the null hypothesis that the data arise from groups with the same mean) to be non-significant, while individual tests (allowing for multiple comparisons) are significant.
This is because the individual tests can have greater statistical power to detect a particular difference than the omnibus test: the omnibus F statistic spreads its power across every possible pattern of differences among the groups, whereas a focused pairwise comparison concentrates it on a single contrast. As such, you can report the results of the individual tests with an explanatory note.
The advice to only run post hoc tests if the omnibus test is significant is due to Fisher, whose (protected) Least Significant Difference test requires that the global null hypothesis be rejected first. Modern tests such as Dunnett's are stand-alone and do not depend on the omnibus result.
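For instance, here is a sketch of Dunnett's test in R using the multcomp package (the data frame, with a control and two treatment groups, is simulated purely for illustration):

```r
library(multcomp)  # provides glht() and mcp() for multiple-comparison procedures

set.seed(1)
# Simulated data: a control group and two treatments
d <- data.frame(
  group = factor(rep(c("control", "t1", "t2"), each = 20)),
  value = c(rnorm(20, 10), rnorm(20, 11), rnorm(20, 10.5))
)

fit <- aov(value ~ group, data = d)

# Dunnett contrasts: each treatment vs the first (reference) level,
# with multiplicity handled internally -- no omnibus test required
summary(glht(fit, linfct = mcp(group = "Dunnett")))
```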
As a general point I would advise less reliance on p-values and more on effect sizes.
After reporting the significance and type of your ANOVA test, you can create a figure showing the difference between the means and the confidence interval for each pair of groups with a significant difference, providing the p-value for each pairwise comparison. In addition, it is advisable to correct for multiple testing if the number of groups is large, and to filter out results with small effect sizes.
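If you have the vector of pairwise p-values, the correction itself is a one-liner in R; a small sketch (the p-values below are made up for illustration):

```r
pvals <- c(0.001, 0.012, 0.030, 0.048, 0.210, 0.740)  # hypothetical pairwise p-values
p.adjust(pvals, method = "BH")          # Benjamini-Hochberg false discovery rate control
p.adjust(pvals, method = "bonferroni")  # stricter family-wise error control
```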
Below is a possible representation in which the difference in mean proportions of genes belonging to Glycolysis pathways is tested across 6 different groups. Each row represents a pairwise comparison, and the columns show, from left to right: the comparison performed (e.g. group 5 vs group 6), the mean proportion for each group (bar plot), the difference in mean proportions (confidence interval), and the p-value (uncorrected for multiple testing here; if you do correct, it is advisable to say "corrected" in the label).
EDIT
In this post it is shown how to use the library multcompView to perform a Tukey test and to generate a representation similar to the one I showed above (which was created with STAMP and hence is limited to a particular type of data). For completeness, the gist of the code proposed in that post is sketched below.
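A minimal, self-contained sketch (I simulate the data and keep the plotting in base R; the post's own code may differ in the details):

```r
library(multcompView)  # provides multcompLetters4() for compact letter displays

set.seed(7)
# Simulated example: one response measured in 6 groups
d <- data.frame(
  group = factor(rep(paste0("g", 1:6), each = 15)),
  value = c(rnorm(15, 10), rnorm(15, 10), rnorm(15, 11),
            rnorm(15, 12), rnorm(15, 12), rnorm(15, 14))
)

# One-way ANOVA followed by Tukey's HSD for all pairwise comparisons
fit   <- aov(value ~ group, data = d)
tukey <- TukeyHSD(fit)

# Compact letter display: groups sharing a letter are not significantly different
print(multcompLetters4(fit, tukey))

# Pairwise differences in means with their family-wise confidence intervals
plot(tukey, las = 1)
```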