Can I use GLM normal distribution with LOG link function on a DV that has already been log transformed?
Yes, if the assumptions are satisfied on that scale.
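For concreteness, here is a minimal sketch (Python with statsmodels, entirely made-up data; none of it comes from the original question) of how a Gaussian GLM with a log link could be specified for an already log-transformed DV. Note the log link models the log of the mean of the logged DV, so that mean has to stay positive for this to make sense.

    import numpy as np
    import statsmodels.api as sm

    # Illustrative data only: a positive raw DV that then gets log-transformed
    rng = np.random.default_rng(0)
    x = rng.uniform(1, 5, size=200)
    y = np.exp(1.0 + 0.5 * x + rng.normal(scale=0.3, size=200))
    log_y = np.log(y)  # the "already log-transformed" dependent variable

    # Gaussian family with a log link, applied to the logged DV;
    # assumes the mean of log_y is positive (true for this fake data)
    X = sm.add_constant(x)
    res = sm.GLM(log_y, X, family=sm.families.Gaussian(link=sm.families.links.Log())).fit()
    print(res.summary())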
Is the variance homogeneity test sufficient to justify using normal distribution?
Why would equality of variance imply normality?
Is the residual checking procedure correct to justify choosing the link function model?
You should be wary of relying on either histograms or goodness-of-fit tests to check the suitability of your assumptions:
1) Beware using the histogram for assessing normality. (Also see here)
In short, depending on something as simple as a small change in your choice of binwidth, or even just the location of the bin boundaries, it's possible to get quite different impressions of the shape of the data: two histograms of the same data set can look quite different. Using several different binwidths can be useful in seeing whether the impression is sensitive to that choice.
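As a small illustration of the binwidth point (simulated data, not the data from the original answer), the same sample with the same binwidth can look quite different once the bin boundaries shift:

    import numpy as np
    import matplotlib.pyplot as plt

    # Same simulated sample, same binwidth, only the bin boundaries move
    rng = np.random.default_rng(1)
    data = rng.normal(size=100)

    fig, axes = plt.subplots(1, 2, figsize=(8, 3))
    axes[0].hist(data, bins=np.arange(-4.0, 4.5, 0.5))
    axes[0].set_title("binwidth 0.5, edges at -4.0, -3.5, ...")
    axes[1].hist(data, bins=np.arange(-4.25, 4.5, 0.5))
    axes[1].set_title("binwidth 0.5, edges at -4.25, -3.75, ...")
    plt.tight_layout()
    plt.show()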
2) Beware using goodness of fit tests for concluding that the assumption of normality is reasonable. Formal hypothesis tests don't really answer the right question.
e.g. see the links under item 2. here
About the variance: that was mentioned in some papers using similar datasets ("because distributions had homogeneous variances a GLM with a Gaussian distribution was used"). If this is not correct, how can I justify or decide on the distribution?
In normal circumstances, the question isn't 'are my errors (or conditional distributions) normal?' They won't be; we don't even need to check. A more relevant question is 'how badly does the degree of non-normality that's present impact my inferences?'
I suggest a kernel density estimate or a normal QQ plot (a plot of residuals vs normal scores). If the distribution looks reasonably normal, you have little to worry about. In fact, even when it's clearly non-normal it still may not matter very much, depending on what you want to do (normal prediction intervals really will rely on normality, for example, but many other things will tend to work at large sample sizes).
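A sketch of those two checks (here on simulated stand-in "residuals"; with a fitted statsmodels model you would substitute something like its deviance residuals):

    import numpy as np
    import scipy.stats as stats
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(2)
    resid = rng.normal(size=300)  # stand-in for model residuals (e.g. res.resid_deviance)

    fig, axes = plt.subplots(1, 2, figsize=(8, 3))
    stats.probplot(resid, dist="norm", plot=axes[0])  # normal QQ plot
    grid = np.linspace(resid.min(), resid.max(), 200)
    axes[1].plot(grid, stats.gaussian_kde(resid)(grid))  # kernel density estimate
    axes[1].set_title("KDE of residuals")
    plt.tight_layout()
    plt.show()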
Funnily enough, at large samples, normality becomes generally less and less crucial (apart from PIs as mentioned above), but your ability to reject normality becomes greater and greater.
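To illustrate that last point with simulated data: draws from a t-distribution with 30 df are only trivially non-normal, yet a formal test such as the D'Agostino-Pearson test will tend to reject once n is large, even though the departure is practically irrelevant.

    import numpy as np
    from scipy import stats

    # t with 30 df is only trivially different from normal
    rng = np.random.default_rng(3)
    for n in (100, 1000, 100_000):
        x = rng.standard_t(df=30, size=n)
        stat, p = stats.normaltest(x)  # D'Agostino-Pearson test
        print(n, round(p, 4))
    # the p-value tends to shrink as n grows, even though the departure is negligible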
Edit: the point about equality of variance is that it really can impact your inferences, even at large sample sizes. But you probably shouldn't assess that by hypothesis tests either. Getting the variance assumption wrong is an issue whatever your assumed distribution.
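One simple alternative to a formal test is to look at a residuals-vs-fitted plot for a fan or funnel shape; a toy sketch (simulated data, illustrative names only):

    import numpy as np
    import statsmodels.api as sm
    import matplotlib.pyplot as plt

    # Simulated data where the error spread grows with x (heteroscedastic)
    rng = np.random.default_rng(4)
    x = rng.uniform(0, 10, size=300)
    y = 2 + 0.5 * x + rng.normal(scale=0.2 + 0.2 * x, size=300)

    res = sm.OLS(y, sm.add_constant(x)).fit()
    plt.scatter(res.fittedvalues, res.resid, s=10)  # fan shape = non-constant variance
    plt.axhline(0, linestyle="--")
    plt.xlabel("fitted values")
    plt.ylabel("residuals")
    plt.show()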
I read that the scaled deviance should be around N-p for a good fit, right?
When you fit a normal model it has a scale parameter, in which case your scaled deviance will be about N-p even if your distribution isn't normal.
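A quick numerical illustration of that point (simulated, clearly non-normal errors; all names here are made up for the sketch): because the Gaussian GLM's estimated scale soaks up the residual variability, the deviance divided by the estimated scale comes out at about N-p regardless.

    import numpy as np
    import statsmodels.api as sm

    # Simulated data with skewed (clearly non-normal) errors
    rng = np.random.default_rng(5)
    n = 500
    x = rng.uniform(0, 5, size=n)
    y = np.exp(0.2 + 0.3 * x) + rng.exponential(1.0, size=n) - 1.0

    X = sm.add_constant(x)
    res = sm.GLM(y, X, family=sm.families.Gaussian(link=sm.families.links.Log())).fit()
    # scaled deviance = deviance / estimated scale; compare with N - p
    print(res.deviance / res.scale, res.df_resid)  # both about 498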
In your opinion, is the normal distribution with log link a good choice?
In the continued absence of knowing what you're measuring or what you're using the inference for, I still can't judge whether to suggest another distribution for the GLM, nor how important normality might be to your inferences.
However, if your other assumptions are also reasonable (linearity and equality of variance should at least be checked, and potential sources of dependence considered), then in most circumstances I'd be very comfortable doing things like constructing CIs and performing tests on coefficients or contrasts. There's only a very slight impression of skewness in those residuals which, even if it's a real effect, should have no substantive impact on those kinds of inference.
In short, you should be fine.
(While another distribution and link function might do a little better in terms of fit, only in restricted circumstances would they be likely to also make more sense.)
I don't think beta regression, as suggested by @O_Devinyak, will work well for this case as there are exact 0s and 1s in the data and the beta distribution only works for values between, but not including, 0 and 1.
A solution that has become more popular in economics is the so-called fractional logit model, which economists tend to attribute to Papke and Wooldridge (1996), though the basic idea can be traced back to at least Wedderburn (1974). Nowadays it is fairly easy to estimate such models. For example, in Stata (the statistical program I know best) you would use the glm program in combination with the link(logit) family(binomial) vce(robust) options.
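If you work in Python rather than Stata, a rough statsmodels analogue of that call (illustrative, simulated data) is a GLM with a binomial family, the default logit link, and a robust sandwich covariance:

    import numpy as np
    import statsmodels.api as sm

    # Simulated fractional response in [0, 1] (exact 0s and 1s are allowed here)
    rng = np.random.default_rng(6)
    x = rng.normal(size=500)
    frac = 1 / (1 + np.exp(-(0.3 + 0.8 * x))) + rng.normal(scale=0.05, size=500)
    frac = np.clip(frac, 0, 1)

    X = sm.add_constant(x)
    # Binomial family (logit link by default) + heteroscedasticity-robust covariance
    res = sm.GLM(frac, X, family=sm.families.Binomial()).fit(cov_type="HC1")
    print(res.summary())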
Wedderburn, R. W. 1974. Quasi-likelihood functions, generalized linear models, and the Gauss-Newton method. Biometrika, 61(3): 439-447.
Papke, Leslie E. and Jeffrey M. Wooldridge. 1996. Econometric methods for fractional response variables with an application to 401(k) Plan participation rates. Journal of Applied Econometrics, 11(6): 619-632.
Best Answer
I was facing a similar problem with probability of loss as my dependent variable (bounded between 0% and 100%), and I was about to use a logit transformation as the smoothing function (to make it unbounded) and then use OLS to estimate the parameters on my macroeconomic regressors.
First, you have to ensure that the plot of the transformed dependent variable against your predictors is reasonably linear. Second, you need to check that the response errors are normally distributed (otherwise the OLS estimator is suboptimal). Third, if the error variance is heteroscedastic, you need some weighting technique to keep your OLS estimator BLUE.
You will need another smoothing function if the first condition does not hold, and a maximum likelihood estimator if the second or third does not hold.
If I were you, I would use R-squared as the goodness-of-fit indicator, since this approach uses OLS rather than MLE. And instead of deviance residuals, you could look at Cook's D in this setting.
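A minimal sketch of that route (Python/statsmodels; simulated data and illustrative names only, with the DV clipped away from exact 0s and 1s because the logit transform needs values strictly inside (0, 1)):

    import numpy as np
    import statsmodels.api as sm

    # Simulated bounded DV, clipped strictly inside (0, 1) for the logit transform
    rng = np.random.default_rng(7)
    x = rng.normal(size=200)
    p = 1 / (1 + np.exp(-(0.5 + x))) + rng.normal(scale=0.05, size=200)
    p = np.clip(p, 0.01, 0.99)

    logit_p = np.log(p / (1 - p))  # the "smoothing" (unbounding) transform
    res = sm.OLS(logit_p, sm.add_constant(x)).fit()

    print(res.rsquared)  # goodness-of-fit indicator under OLS
    cooks_d = res.get_influence().cooks_distance[0]  # influence diagnostic
    print(cooks_d[:5])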