I think the more important point is suggested in @whuber's comment. Your whole approach is ill-founded because, by taking logarithms, you effectively throw out of the dataset any students with zero missing days in either 2010 or 2011. It sounds like there are enough of these students to be a problem, and I am confident that results based on this approach will be wrong.
Instead, you need to fit a generalized linear model with a Poisson response. SPSS can't do this unless you have paid for the appropriate module, so I'd suggest upgrading to R.
You will still have the problem of interpreting coefficients, but this is secondary to the importance of having a model that is basically appropriate.
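Since neither the data nor any code appears in the question, here is a rough sketch of the kind of model being recommended: a Poisson regression with a log link, fit by iteratively reweighted least squares on synthetic data. It is written in Python with NumPy rather than R, and all variable names and numbers are invented for illustration.

```python
import numpy as np

def fit_poisson(X, y, iters=25):
    """Fit a Poisson GLM with a log link by iteratively reweighted
    least squares (Fisher scoring). X must include an intercept column."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)          # current fitted means
        W = mu                         # Poisson variance equals the mean
        # Newton/Fisher update: beta += (X' W X)^{-1} X' (y - mu)
        beta = beta + np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - mu))
    return beta

# Synthetic example: counts (e.g., missed days) driven by one covariate.
rng = np.random.default_rng(0)
x = rng.uniform(0, 2, size=1000)
X = np.column_stack([np.ones_like(x), x])
true_beta = np.array([0.5, 0.8])
y = rng.poisson(np.exp(X @ true_beta))

beta_hat = fit_poisson(X, y)
print(beta_hat)  # should recover approximately (0.5, 0.8)
```

The point of the sketch is that zero counts pose no problem at all for this model, whereas they are fatal to a log transformation.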
The best solution is, at the outset, to choose a re-expression that has a meaning in the field of study.
(For instance, when regressing body weights against independent factors, it's likely that either a cube root ($1/3$ power) or square root ($1/2$ power) will be indicated. Noting that weight is a good proxy for volume, the cube root is a length representing a characteristic linear size. This endows it with an intuitive, potentially interpretable meaning. Although the square root itself has no such clear interpretation, it is close to the $2/3$ power, which has dimensions of surface area: it might correspond to total skin area.)
The fourth root is sufficiently close to the logarithm that you ought to consider using the log instead, whose interpretation is well understood. But sometimes we really do find that a cube root, a square root, or some other fractional power does a great job yet has no obvious interpretation. Then we must do a little arithmetic.
The regression model shown in the question involves a dependent variable $Y$ ("Collections") and two independent variables $X_1$ ("Fees") and $X_2$ ("DIR"). It posits that
$$Y^{1/4} = \beta_0 + \beta_1 X_1 + \beta_2 X_2 +\varepsilon.$$
The code estimates $\beta_0$ as $b_0=2.094573355$, $\beta_1$ as $b_1=0.000075223$, and $\beta_2$ as $b_2=0.000022279$. It also presumes $\varepsilon$ are iid normal with zero mean and it estimates their common variance (not shown). With these estimates, the fitted value of $Y^{1/4}$ is
$$\widehat{Y^{1/4}} = b_0 + b_1 X_1 + b_2 X_2.$$
"Interpreting" regression coefficients normally means determining what change in the dependent variable is suggested by a given change in each independent variable. These changes are the derivatives $dY/dX_i$, which the Chain Rule tells us are equal to $4\beta_iY^3$. We would plug in the estimates, then, and say something like
The regression estimates that a unit change in $X_i$ will be associated with a change in $Y$ of $4b_i\widehat{Y}^{3/4}$ = $4b_i\left(b_0+b_1X_1+b_2X_2\right)^3$.
The dependence of the interpretation on $X_1$ and $X_2$ is not simply expressed in words, unlike the situations with no transformation of $Y$ (a one-unit change in $X_i$ is associated with a change of $b_i$ in $Y$) or with the logarithm (a one-unit change in $X_i$ is associated with an approximate $100\,b_i$ percent change in $Y$). However, by keeping the first form of the interpretation and computing $4b_1$ = $4\times 0.000075223$ = $0.000301$, we might state something like
A unit change in fees is associated with a change in collections of $0.000301$ times the cube of the fourth root of the current collections; for instance, if the fourth root of the current collections is $10$ (collections of $10{,}000$), then a unit increase in fees is associated with an increase of $0.301$ in collections, and if the fourth root is $20$ (collections of $160{,}000$), the same unit increase in fees is associated with an increase of $2.41$ in collections.
When taking roots other than the fourth--say, when using $Y^p$ as the response rather than $Y$ itself, with $p$ nonzero--simply replace all appearances of "$4$" in this analysis by "$1/p$".
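To make the arithmetic concrete, here is a small sketch in Python for the general power transformation $Y^p$. By the chain rule, $dY/dX_i = (1/p)\,b_i\,Y^{1-p}$; the function below takes the fitted value on the transformed scale, and the coefficient is the estimate for fees quoted above (everything else is illustrative).

```python
def marginal_effect(b_i, fitted_transformed, p):
    """Approximate change in Y per unit change in X_i when the model is
    Y^p = b0 + b1*X1 + ...  `fitted_transformed` is the fitted value of Y^p.
    By the chain rule, dY/dX_i = (1/p) * b_i * Y^(1-p), with
    Y = fitted_transformed**(1/p)."""
    y_hat = fitted_transformed ** (1.0 / p)
    return (1.0 / p) * b_i * y_hat ** (1.0 - p)

b1 = 0.000075223  # estimated coefficient for Fees from the question

# Fourth-root model (p = 1/4): effect = 4 * b1 * (fitted fourth root)^3
print(round(marginal_effect(b1, 10.0, 0.25), 3))  # 0.301
print(round(marginal_effect(b1, 20.0, 0.25), 3))  # 2.407, i.e. about 2.41
```

This reproduces the two numbers in the worked example and makes it easy to try other powers $p$.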
The model is
$$\frac{1}{\sqrt{Y}} = \beta_0 + \beta_1 X_1 + \cdots + \beta_p X_p + \varepsilon$$
where $Y$ is the original outcome, the $X_i$ are the explanatory variables, the $\beta_i$ are the coefficients, and $\varepsilon$ are iid, mean-zero error terms. Writing $b_i$ for the estimated value of $\beta_i$, we see that a one-unit change in $X_i$ adds $b_i$ to the right hand side. Starting from any baseline set of values $(x_1, \ldots, x_p)$, this induces a change in predicted values from $\widehat{1/\sqrt{y}} = b_0 + b_1 x_1 + \cdots + b_p x_p$ to $\widehat{1/\sqrt{y'}} = b_0 + b_1 x_1 + \cdots + b_p x_p + b_i$. Subtracting the first equation from the second gives
$$\frac{1}{\sqrt{\hat{y'}}} - \frac{1}{\sqrt{\hat{y}}} = b_i.$$
Solving for $\hat{y'}$ gives
$$\hat{y'} = \frac{\hat{y}}{1 + 2b_i\sqrt{\hat{y}} + b_i^2 \hat{y}}.$$
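The algebra can be checked numerically. The sketch below (Python; the values of $b_i$ and $\hat y$ are made up) confirms that the closed form agrees with transforming, shifting by $b_i$, and back-transforming directly:

```python
import math

def y_prime_closed_form(y_hat, b_i):
    """New prediction after adding b_i on the 1/sqrt(Y) scale,
    via the expanded denominator."""
    return y_hat / (1 + 2 * b_i * math.sqrt(y_hat) + b_i**2 * y_hat)

def y_prime_direct(y_hat, b_i):
    """Same quantity computed by inverting the transformation directly."""
    return 1.0 / (1.0 / math.sqrt(y_hat) + b_i) ** 2

y_hat, b_i = 25.0, 0.03   # hypothetical baseline prediction and coefficient
print(y_prime_closed_form(y_hat, b_i))
print(y_prime_direct(y_hat, b_i))   # the two agree
```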
One may stop here, but often we seek simpler expressions: the behavior of this one might not be any easier to understand than the original model. Simplification can be achieved provided $b_i$ is very small. If necessary, we can contemplate a tiny change in $X_i$, say by an amount $\delta$, which would replace $b_i$ in the preceding equation by $\delta b_i$. Using a sufficiently small value of $\delta$ will assure the denominator is close to $1$. When it is,
$$\frac{\hat{y}}{1 + 2\delta b_i\sqrt{\hat{y}} + \delta^2 b_i^2 \hat{y}} \approx \hat{y}(1 - 2\delta b_i\sqrt{\hat{y}} - \delta^2 b_i^2 \hat{y}),$$
whence the change in predicted values is
$$\hat{y'} - \hat{y} \approx -\delta\,\hat{y}\left(2b_i\sqrt{\hat{y}} + \delta b_i^2 \hat{y}\right) = -\delta\left(2b_i\hat{y}^{3/2} + \delta b_i^2 \hat{y}^{2}\right).$$
Taking $\delta$ to be so small that $\delta b_i^2 \hat{y}^{2} \ll 2 b_i\hat{y}^{3/2}$ allows us to drop the second term on the right hand side. That is, for very tiny changes, the predicted outcome changes by approximately $-2b_i\hat{y}^{3/2}$ times the amount of change in $x_i$.
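A quick numerical check of this first-order approximation (Python; $b_i$ and $\hat y$ are again made up): as $\delta$ shrinks, the exact change in the prediction approaches $-2 b_i \hat{y}^{3/2}\,\delta$.

```python
import math

def exact_change(y_hat, b_i, delta):
    """Exact change in the prediction when x_i moves by delta."""
    new = 1.0 / (1.0 / math.sqrt(y_hat) + b_i * delta) ** 2
    return new - y_hat

def linear_change(y_hat, b_i, delta):
    """First-order approximation: -2 * b_i * y_hat^(3/2) * delta."""
    return -2.0 * b_i * y_hat ** 1.5 * delta

y_hat, b_i = 25.0, 0.03
for delta in (0.1, 0.01, 0.001):
    print(delta, exact_change(y_hat, b_i, delta), linear_change(y_hat, b_i, delta))
```

Running this shows the two columns converging as $\delta$ decreases, which is exactly the sense in which the interpretation holds "for very tiny changes."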
Comments
The appearance of the negative sign indicates that an increase in $X_i$ will decrease $Y$ when $b_i$ is positive and increase $Y$ when $b_i$ is negative. Normally, we avoid this (potentially confusing) sign reversal by using $-1/\sqrt{Y}$ instead of $1/\sqrt{Y}$ when making a reciprocal square root transformation (or any other transformation that reverses the order of numbers).
This solution method is always applicable no matter how $Y$ is re-expressed, but it can lead to complicated algebra for other transformations of $Y$. Those who know the basics of differential calculus will recognize that all we're doing here is approximating the change in $\hat{y}$ to first order using its derivative with respect to $x_i$, so they will be able to avoid most of the algebraic manipulations.
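The derivative shortcut can be written once and for all: if the model is $g(Y) = \beta_0 + \beta_1 X_1 + \cdots + \beta_p X_p$, then $d\hat{y}/dx_i = b_i / g'(\hat{y})$. A hedged sketch (Python; the transformation and numbers are purely illustrative) comparing this rule with a finite-difference computation:

```python
import math

def marginal_effect(b_i, y_hat, g_prime):
    """General rule: if g(Y) is linear in the X's, then dY/dx_i = b_i / g'(Y)."""
    return b_i / g_prime(y_hat)

# Reciprocal square root: g(y) = y^(-1/2), so g'(y) = -(1/2) y^(-3/2),
# and the effect is b_i / g'(y) = -2 * b_i * y^(3/2).
g_prime = lambda y: -0.5 * y ** -1.5

b_i, y_hat = 0.03, 25.0
analytic = marginal_effect(b_i, y_hat, g_prime)

# Finite-difference check: perturb x_i by a small delta and invert g.
delta = 1e-6
numeric = (1.0 / (1.0 / math.sqrt(y_hat) + b_i * delta) ** 2 - y_hat) / delta
print(analytic, numeric)  # both near -7.5
```

Swapping in a different `g_prime` handles any other monotonic re-expression of $Y$ without redoing the algebra.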