Since you are using a bootstrap approach, I assume that you do not adhere to Baron & Kenny's (1986) mediation procedure. As you most likely know, anyone following Baron & Kenny's procedure would have stopped their analysis upon finding that there is no significant total effect. In such a case we would conclude that there is simply nothing there to be explained by mediation.
More modern approaches to mediation (e.g., Shrout & Bolger, 2002, who advocated bootstrapping) do not require a significant total effect in order to speak of mediation. A significant indirect effect would be considered sufficient. What is more, the distinction between partial and full mediation makes much less sense in this paradigm than it did for Baron & Kenny. If we no longer require a significant total effect, then what becomes of the idea that we have obtained partial mediation if the direct effect is still substantial when controlling for the mediator? It no longer seems to make sense - we never required the total effect to be substantial before introducing the mediator, so why require a substantial direct effect after the mediator has been added to the model?
And what is a substantial remainder anyhow? A significant one? That does not seem right: our remainder can be statistically non-significant (our Null is that there is no effect, after all) and still be substantial.
We are increasingly told not to rely on p-values alone to judge and describe effects. Accordingly, relying on p-values to arrive at a binary classification as "partial" or "full" mediation seems neither necessary nor advisable. In the modern view, mediation is essentially a set of SEM scenarios. I therefore suggest you report your mediation as such, if possible, stating the coefficients and confidence intervals for all relevant paths and avoiding the notion of partial or full mediation altogether.
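To make the bootstrap approach concrete, here is a minimal sketch of a percentile bootstrap for the indirect effect a*b, using simulated data (the sample size, coefficients, and variable names are illustrative assumptions, not anything from the question):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: X -> M -> Y with a true indirect effect of 0.5 * 0.6
n = 200
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)             # a path
y = 0.6 * m + 0.1 * x + rng.normal(size=n)   # b path plus a small direct effect c'

def ols_coefs(pred, outcome):
    """Least-squares coefficients with an intercept prepended."""
    X = np.column_stack([np.ones(len(outcome)), pred])
    beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    return beta

def indirect_effect(x, m, y):
    a = ols_coefs(x, m)[1]                         # X -> M
    b = ols_coefs(np.column_stack([m, x]), y)[1]   # M -> Y, controlling for X
    return a * b

# Percentile bootstrap: resample cases, re-estimate a*b each time
boot = np.empty(2000)
for i in range(boot.size):
    idx = rng.integers(0, n, n)
    boot[i] = indirect_effect(x[idx], m[idx], y[idx])

lo, hi = np.percentile(boot, [2.5, 97.5])
est = indirect_effect(x, m, y)
print(f"indirect effect: {est:.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```

In this framework, the evidence for mediation is the bootstrap confidence interval for a*b excluding zero, reported alongside the coefficients for the individual paths - no total-effect gatekeeping and no partial/full label required.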
If you have not read it already, Rucker et al. (2011) demonstrate how a significant indirect effect can arise in the absence of a significant direct or total effect (e.g., because of differences in the statistical power available to detect effects on different paths).
I am not an SPSS expert, but from what I can read in your output you have two models, Model 1 (top) and Model 2 (bottom). Model 2 has an added predictor and a lower F statistic (5.765 --> 2.651). Additionally, Model 2 has an R square of 0.067, which is very low, indicating that the model explains little of the variance.
Also, note that for Model 2, your t-statistics are 2.004 (p-val: 0.47) and 1.628 (p-val: 0.106). Neither parameter is significant at the 0.95 confidence level, hence you cannot report a significant effect for either.
From the reference I've read (http://web.pdx.edu/~newsomj/da2/ho_mediation.pdf):
If X is no longer significant when M is controlled, the finding supports full mediation.
If X is still significant (i.e., both X and M significantly predict Y), the finding supports partial mediation.
Your mediation factor (the added predictor) is not significant in Model 2. Given that, I would not conclude there is a significant mediation effect. In order to conclude mediation, both factors have to be significant. Hence, you have two choices:
1. Rule out the effect of mediation.
2. Lower your confidence level to 0.90, and then you can conclude mediation.
I'd be very cautious with option 2, as you are now tailoring your analysis toward obtaining significant results rather than staying as truthful to the data (and the null hypothesis) as possible.
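For concreteness, the quoted full/partial decision rule can be sketched as a small helper (a hypothetical function for illustration; the parameter names and the alpha default are assumptions):

```python
def classify_mediation(p_x_given_m, p_m, alpha=0.05):
    """Apply the quoted rule, given two-sided p-values from the model
    that includes both X and the mediator M as predictors of Y.

    p_x_given_m: p-value for X's coefficient, controlling for M
    p_m:         p-value for M's coefficient
    """
    if p_m >= alpha:
        return "no mediation"      # the mediator itself is not significant
    if p_x_given_m >= alpha:
        return "full mediation"    # X drops out once M is controlled
    return "partial mediation"     # both X and M remain significant

print(classify_mediation(0.30, 0.01))  # X not significant, M significant
print(classify_mediation(0.02, 0.01))  # both significant
```

Note this helper only encodes the significance-based rule under discussion; as the first answer argues, a classification built purely on p-value thresholds is exactly what one may want to avoid.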
If I've misinterpreted something in the output, let me know and I'll adjust my answer.
Thanks
Best Answer
If your a path (from predictor to mediator) is not significant, then you DO NOT have a mediation effect. The proposed mediator has an effect on the outcome variable, and your proposed predictor also has an effect on the outcome variable, just NOT through the mediator you proposed. And yes, PROCESS is a plug-in you can install in SPSS that implements the most current methodology for examining mediating effects, so it is the best tool you can use at the moment.
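As a minimal illustration of checking the a path before making any mediation claim, here is an ordinary slope t-test on simulated data (the sample size and coefficient are assumptions for the sketch, not values from the question):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated data with a clear a path: X predicts the proposed mediator M
n = 200
x = rng.normal(size=n)
m = 0.8 * x + rng.normal(size=n)

# Two-sided t-test on the OLS slope of M regressed on X (the a path)
slope, intercept, r, p, se = stats.linregress(x, m)
print(f"a path: b = {slope:.3f}, t = {slope / se:.2f}, p = {p:.4f}")
```

If this p-value were not below your alpha level, the a path would be unsupported and, per the logic above, there would be no mediation effect to report regardless of the other paths.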