At first, my data were not normally distributed, so I applied a log10 transformation, and the transformed data looked like a good normal distribution.
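The kind of check described can be sketched in Python with scipy (the data here are hypothetical log-normal draws, and the Shapiro–Wilk test is used as one possible normality check):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical right-skewed (log-normal) measurements
raw = rng.lognormal(mean=2.0, sigma=0.5, size=200)

# Shapiro-Wilk on the raw data: small p-value suggests non-normality
w_raw, p_raw = stats.shapiro(raw)

# log10 transform: the log of log-normal data is normally distributed
transformed = np.log10(raw)
w_log, p_log = stats.shapiro(transformed)

print(f"raw:   W={w_raw:.3f}, p={p_raw:.2e}")
print(f"log10: W={w_log:.3f}, p={p_log:.3f}")
```

On data like this, the raw sample fails the normality check while the log10-transformed sample does not.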
Note that this automatically transforms the effects as well. See this question on the same transformation. You should also discuss whether you would rather use a generalized linear model (not to be confused with a "general linear model").
It is generally not a good idea to transform the data just to meet the distributional assumptions. If the assumptions are not met, your type I error rate will be off, which is bad, but this is often healed by large sample sizes. On the other hand, if you transform the data in a way that makes the effects uninterpretable in terms of your experiment, that is worse.
Variance homogeneity (homoskedasticity) is usually not such a big problem any more, as there are procedures that can deal with heteroskedasticity (I think some are even implemented in SPSS). In your case, the data are even perfectly balanced (N=23 per group), so there is even an exact procedure for it. I would generally suggest using a procedure that is robust to heteroskedasticity. If your choice of method depends on the result of a variance homogeneity test, you are in fact using a two-stage procedure that behaves differently from the single-stage procedures.
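One heteroskedasticity-robust procedure available off the shelf is the Alexander–Govern test in scipy (version 1.7 or later); the groups below are hypothetical, constructed with equal means but deliberately unequal variances:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Three hypothetical balanced groups (n=23 each) with equal means
# but unequal variances
g1 = rng.normal(10.0, 1.0, size=23)
g2 = rng.normal(10.0, 3.0, size=23)
g3 = rng.normal(10.0, 5.0, size=23)

# Classical one-way ANOVA assumes equal variances ...
f_stat, p_classic = stats.f_oneway(g1, g2, g3)

# ... whereas the Alexander-Govern test does not
res = stats.alexandergovern(g1, g2, g3)
print(f"classical ANOVA p={p_classic:.3f}")
print(f"Alexander-Govern p={res.pvalue:.3f}")
```

This is only one of several such procedures; Welch's ANOVA is another common choice.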
If the assumption of normality for one-way ANOVA does not hold, you can turn to a nonparametric analog of the one-way ANOVA: the Kruskal-Wallis test. Just as the unpaired t test's normality assumption may not be met, motivating the use of the rank sum test, one can then use Dunn's test, or the more powerful (but less well known) Conover-Iman test, to conduct post hoc pairwise tests if one rejects the omnibus Kruskal-Wallis test's null hypothesis.
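The omnibus Kruskal-Wallis test is a one-liner in scipy (the three groups here are hypothetical, with the third shifted upward so the test has something to detect):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Three hypothetical groups; the third is shifted upward by 2 units
a = rng.normal(0.0, 1.0, size=30)
b = rng.normal(0.0, 1.0, size=30)
c = rng.normal(2.0, 1.0, size=30)

# Kruskal-Wallis works on the ranks of the pooled data,
# so no normality assumption is needed
h_stat, p_value = stats.kruskal(a, b, c)
print(f"H={h_stat:.2f}, p={p_value:.2e}")
```

A small p-value here would then motivate post hoc pairwise tests such as Dunn's or Conover-Iman (available in R and Stata packages, as noted below).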
In their most general form the nonparametric tests (Kruskal Wallis, rank sum, Dunn's, etc.) do not assume equal variances among groups. Instead, they test:
$$H_{0}:P(X_{A}>X_{B})=0.5$$
with
$$H_{a}:P(X_{A}>X_{B})\ne0.5$$
Or in words: the null hypothesis is that the probability that a randomly selected observation from group A is greater than a randomly selected observation from group B equals one half, and the alternative is that this probability is not one half. For the Kruskal-Wallis test, the null hypothesis is that, for every pair of groups, the probability that a randomly selected value from one group is greater than a randomly selected value from the other equals one half; the alternative is that there is at least one group for which that probability differs from one half.
One can interpret these as tests of location shift, median difference, or mean difference if the variances of all groups are equal and the shapes of the distributions are the same (a pretty stringent requirement!), but the nonparametric tests themselves do not require such assumptions.
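The probability $P(X_A > X_B)$ in the hypotheses above can be estimated directly from the Mann-Whitney U statistic, since $U/(n_A n_B)$ is the proportion of (A, B) pairs in which the A observation exceeds the B observation (ties counted as one half). A minimal sketch with hypothetical data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Hypothetical samples; group A is shifted up by one standard deviation
x_a = rng.normal(1.0, 1.0, size=50)
x_b = rng.normal(0.0, 1.0, size=50)

# U counts pairs (i, j) with x_a[i] > x_b[j], ties counted as 0.5
u_stat, p_value = stats.mannwhitneyu(x_a, x_b, alternative="two-sided")
p_a_gt_b = u_stat / (len(x_a) * len(x_b))
print(f"estimated P(X_A > X_B) = {p_a_gt_b:.3f}, test p = {p_value:.4f}")
```

With a shift of one standard deviation, the estimated probability lands well above one half, and the test rejects the null of one half.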
I have published software packages to perform Dunn's test for R (dunn.test) and for Stata (dunntest), and to perform the Conover-Iman test for R (conover.test) and for Stata (conovertest). Both correct for ties, and implement an array of familywise error rate and false discovery rate adjustments for multiple comparisons.
References
Dunn, O. J. (1964). Multiple comparisons using rank sums. Technometrics, 6(3):241–252.
Conover, W. J. (1999). Practical Nonparametric Statistics. Wiley, Hoboken, NJ, 3rd edition.
Conover, W. J. and Iman, R. L. (1979). On multiple-comparisons procedures. Technical Report LA-7677-MS, Los Alamos Scientific Laboratory.
Best Answer
If I am interpreting your question correctly, there are 5 treatment levels. One group is control, one group is, say, standard treatment, and 3 groups are investigational treatment at 3 different doses. You have a sample of fish that have been randomized 1:2:1:1:1 in this experiment for the 5 groups. You are interested in whether the investigational treatment is better than standard treatment and/or control.
If your experiment is indeed as I've described, you ought to break this out into multiple comparisons. There is no need for a Bonferroni correction, since the substantive question "Is the treatment better?" is approximately the same in all possible comparisons. T-tests do indeed handle unequal variances (Welch's version). However, just as ordinary linear regression with a binary predictor is an analogue of the equal-variance t-test, generalized estimating equations are an analogue of the t-test with unequal variances. You can correct for heteroscedasticity by calculating robust standard errors, which is either easily supplied as an option in SPSS, or else SPSS is horribly outdated.
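The pairwise comparisons described above can be run as Welch t-tests, which do not assume equal variances. The group names, sizes, and effect sizes below are hypothetical stand-ins for the fish experiment:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Hypothetical outcomes, randomized 1:2:1:1:1 across the 5 groups
groups = {
    "control":   rng.normal(10.0, 2.0, size=20),
    "standard":  rng.normal(12.0, 2.0, size=40),
    "dose_low":  rng.normal(12.5, 3.0, size=20),
    "dose_mid":  rng.normal(13.0, 3.0, size=20),
    "dose_high": rng.normal(14.0, 4.0, size=20),
}

# Compare each investigational dose against standard treatment
# using Welch's t-test (equal_var=False allows unequal variances)
results = {}
for name in ("dose_low", "dose_mid", "dose_high"):
    t, p = stats.ttest_ind(groups[name], groups["standard"], equal_var=False)
    results[name] = p
    print(f"{name} vs standard: p={p:.3f}")
```

Each comparison answers the same substantive question, which is the reasoning above for skipping a Bonferroni correction.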
This just goes to show that when you test your assumptions, the meaning of the resulting p-value becomes very difficult to interpret. The assumption was one problem; you find the assumptions aren't met, and now you have two problems. Better to have no assumptions at all.