Your main problem with the initial calculation is that there's no good reason why $e^{\text{sd}(\log(Y))}$ should behave like $\text{sd}(Y)$. The two are generally quite different.
In some situations, you can compute a rough approximation of $\text{sd}(Y)$ from $\text{sd}(\log(Y))$ via a first-order Taylor expansion (the delta method):
$$\text{Var}(g(X))\approx \left(g'(\mu_X)\right)^2\sigma^2_X\,.$$
If we take $X$ to be the random variable on the log scale here, then $g(X)=\exp(X)$ and $g'(\mu_X)=\exp(\mu_X)$, so
$$\text{Var}(\exp(X))\approx \exp(\mu_X)^2\sigma_X^2$$
and hence
$$\text{sd}(\exp(X))\approx \exp(\mu_X)\,\sigma_X\,.$$
These notions carry across to sampling distributions.
This tends to work reasonably well if the standard deviation is really small compared to the mean, as in your example.
> mean(y)
[1] 10
> sd(y)
[1] 0.03
> lm=mean(log(y))
> ls=sd(log(y))
> exp(lm)*ls
[1] 0.0300104
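The same comparison can be made reproducible by simulating data with a small spread relative to the mean (the numbers below mimic the example above, but the data themselves are simulated, so the exact values are assumptions):

```r
# Simulate data that are normal on the log scale, with mean about 10 and
# sd about 0.03 on the original scale (so sd(log(y)) is about 0.003)
set.seed(1)
y <- exp(rnorm(1e5, mean = log(10), sd = 0.003))

# Direct sd on the original scale
sd_direct <- sd(y)

# Delta-method approximation: sd(exp(X)) ~ exp(mu_X) * sigma_X
mu_log <- mean(log(y))
s_log  <- sd(log(y))
sd_approx <- exp(mu_log) * s_log

# With s_log this small, the two agree to several digits
c(sd_direct, sd_approx)
```

Because the log-scale standard deviation is tiny, the first-order approximation is very accurate here; with larger spread the two values drift apart.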
If you want to transform a CI for a parameter, that works by transforming the endpoints: since $\exp$ is monotonic, exponentiating the limits of a log-scale interval preserves its coverage.
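As a small sketch (the data are simulated; `x` plays the role of $\log(Y)$):

```r
# t-based interval for the mean on the log scale, then exponentiate endpoints
set.seed(2)
x <- rnorm(50, mean = 2, sd = 0.5)   # stands in for log(Y)
ci_log  <- t.test(x)$conf.int        # interval for the log-scale mean mu_X
ci_orig <- exp(ci_log)               # interval for exp(mu_X) on the original scale
ci_orig
```

Note that this gives an interval for $\exp(\mu_X)$ (for lognormal data, the median), not for the original-scale mean, which is where the bias adjustment comes in.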
If you're trying to transform back to obtain a point estimate and interval for the mean on the original (unlogged) scale, you will also want to unbias the estimate of the mean (see the above link): $E(\exp(X))\approx \exp(\mu_X)\cdot (1+\sigma_X^2/2)$. So a (very) rough large-sample interval for the mean might be $(c\cdot\exp(L),\,c\cdot\exp(U))$, where $L$ and $U$ are the lower and upper limits of a log-scale interval, and $c$ is some consistent estimate of $1+\sigma_X^2/2$.
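A sketch of that rough interval, with simulated lognormal data (variable names are mine):

```r
# Rough large-sample interval for the original-scale mean, using the
# correction factor c = 1 + s^2/2 estimated on the log scale
set.seed(3)
y <- rlnorm(200, meanlog = 1, sdlog = 0.4)
x <- log(y)

ci_log <- t.test(x)$conf.int       # log-scale interval (L, U)
c_hat  <- 1 + var(x) / 2           # consistent estimate of 1 + sigma_X^2/2
ci_mean <- c_hat * exp(ci_log)     # (c*exp(L), c*exp(U))
ci_mean
```

This is only a crude large-sample device; the lognormal-specific intervals mentioned below are preferable when the log-normality assumption is reasonable.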
If your data are approximately normal on the log scale, you may want to treat it as a problem of producing an interval for a lognormal mean.
(There are other approaches to unbiasing mean estimates across transformations; e.g., see Duan, N. (1983), "Smearing estimate: A nonparametric retransformation method," JASA, 78, 605–610.)
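A minimal sketch of Duan's smearing estimator in a log-linear regression (the model and variable names here are my own illustration):

```r
# Smearing estimate: fit on the log scale, then retransform predictions
# by multiplying exp(fitted values) by the mean of exp(residuals)
set.seed(4)
n <- 300
x <- runif(n)
y <- exp(1 + 2 * x + rnorm(n, sd = 0.5))   # multiplicative (lognormal-type) errors

fit   <- lm(log(y) ~ x)
smear <- mean(exp(residuals(fit)))         # Duan's smearing factor

# Naive retransformation vs. smeared predictions of E[Y | x]
pred_naive <- exp(fitted(fit))
pred_smear <- smear * exp(fitted(fit))
```

The smearing factor exceeds 1 here (by Jensen's inequality, naive retransformation underestimates the mean), and it requires no distributional assumption on the residuals.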
The question of statistical significance requires knowledge of the variability surrounding those numbers. If those numbers were population parameters, no statistical test would be necessary (they are simply different). However, if those numbers are sample means, the extent to which they differ depends on how variable the raw data are. For example, if I have means of 5 and 6, each with a standard deviation of 0.001, they look very different from means of 5 and 6 with a standard deviation of 100.
So you see, statistical significance is meaningless without variability estimates. If you have this information, you can conduct tests.
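The point about the 5-vs-6 example can be made concrete with two quick t-tests on simulated data (the samples and sizes below are my own illustration):

```r
# Same difference in means (5 vs 6), very different variability
set.seed(5)
a_tight <- rnorm(30, mean = 5, sd = 0.001)
b_tight <- rnorm(30, mean = 6, sd = 0.001)
a_noisy <- rnorm(30, mean = 5, sd = 100)
b_noisy <- rnorm(30, mean = 6, sd = 100)

p_tight <- t.test(a_tight, b_tight)$p.value  # essentially zero: clearly different
p_noisy <- t.test(a_noisy, b_noisy)$p.value  # typically large: the difference of 1
                                             # is swamped by the noise
c(p_tight, p_noisy)
```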
I am rather confused by your text saying that you have the population's scores. If you do, and you are not looking at a sample from a population, then you already have your answer, since you have the mean and the standard deviation. If instead you have a sample from a population about which you want to make decisions, then I would advise conducting further analysis.
Since the data appear to be negatively skewed, I would advise you to conduct a non-parametric test on them. However, you can also reach the same conclusion with a confidence interval: check whether the interval for the mean includes the value 0. If it does not, you can conclude that the population mean is different from 0.
When you have this sort of data, you should always check the skewness and kurtosis values, and if they are far from what you would expect under normality, I would conduct a non-parametric test, unless you have a large enough sample for normal-theory methods to be safe, which you do not.
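A sketch of both checks on a skewed sample (the data here are simulated for illustration; base R has no built-in skewness function, so it is computed by hand):

```r
# A negatively skewed sample (illustration only)
set.seed(6)
scores <- 0.5 - rexp(40)

# Rough sample skewness: third central moment over sd^3
skew <- mean((scores - mean(scores))^3) / sd(scores)^3

# Non-parametric test of the null that the location is 0
w <- wilcox.test(scores, mu = 0)

# t-based confidence interval: does it contain 0?
ci <- t.test(scores)$conf.int
list(skewness = skew, wilcox_p = w$p.value, ci = ci)
```

With pronounced skewness and a small sample, the Wilcoxon test is the safer of the two, which is the point being made above.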