It looks like you want to make a confidence interval. If that's not what you are trying to do, and you just want to report the standard deviation, then it is fine to write
$1.23 \pm 0.52$
without the $\sigma$, as long as you make it clear that you are reporting the standard deviation and not a confidence interval, since many readers will assume this notation refers to a confidence interval. I've never seen the notation
$1.23 \pm 0.52\sigma $
used for this before, though it may be conventional in your field. Unlikely, though.
Reporting standard deviations instead of variances
I think you are right that the standard deviation of each PC can be a more reasonable, or at least more intuitive (for some), measure of its "influence" than its variance. In fact, it even has a clear mathematical interpretation: the variances of the PCs are the eigenvalues of the covariance matrix, whereas their standard deviations are the singular values of the centered data matrix [only scaled by $1/\sqrt{n-1}$].
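To illustrate that relationship, here is a small sketch in Python with NumPy; the synthetic data matrix is my own assumption (the identity holds for any data matrix). The squared singular values of the centered data, divided by $n-1$, match the eigenvalues of the sample covariance matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4)) @ rng.normal(size=(4, 4))  # toy data: n=100 rows, 4 variables
Xc = X - X.mean(axis=0)                                  # center each column

# PC variances: eigenvalues of the sample covariance matrix (descending order)
eigvals = np.linalg.eigvalsh(np.cov(Xc, rowvar=False))[::-1]

# PC standard deviations: singular values of the centered data, scaled by 1/sqrt(n-1)
sing = np.linalg.svd(Xc, compute_uv=False) / np.sqrt(len(X) - 1)

print(np.allclose(np.sqrt(eigvals), sing))  # True
```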
So yes, it is completely fine to report it. Moreover, R, for example, reports the standard deviations of PCs rather than their variances. Running this simple code:
irispca <- princomp(iris[-5])
summary(irispca)
results in this:
Importance of components:
Comp.1 Comp.2 Comp.3 Comp.4
Standard deviation 2.0494032 0.49097143 0.27872586 0.153870700
Proportion of Variance 0.9246187 0.05306648 0.01710261 0.005212184
Cumulative Proportion 0.9246187 0.97768521 0.99478782 1.000000000
There are standard deviations here, but not variances.
Explained variance
A PC that contains 95% of the data variance might contain only 80% of the variation in the data as measured in standard deviations: isn't the latter a better descriptor?
However, note that after presenting standard deviations, R does not display a "proportion of standard deviation", but instead a proportion of variance. And there is a very good reason for that.
Mathematically, the total variance (being the trace of the covariance matrix) is preserved under rotations. This means that the sum of the variances of the original variables equals the sum of the variances of the PCs. For the same Fisher iris dataset, this sum is $4.57$, so we can say that PC1, with a variance of $2.05^2=4.20$, explains $92\%$ of the total variance.
But the sum of standard deviations is not preserved! The sum of standard deviations of original variables is $3.79$. The sum of standard deviations of PCs is $2.98$. They are not equal! So if you want to say that PC1 with standard deviation $2.05$ explains $x\%$ of the "total standard deviation", what would you take as this total? There is no answer, because it simply does not make sense.
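Both claims are easy to check numerically. The sketch below uses a random matrix standing in for the iris data (an assumption, not the original dataset): the trace of the covariance matrix equals the sum of the PC variances, while the sums of standard deviations differ.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(150, 4)) @ rng.normal(size=(4, 4))  # toy data standing in for iris
Xc = X - X.mean(axis=0)

cov = np.cov(Xc, rowvar=False)
pc_var = np.linalg.eigvalsh(cov)                         # variances of the PCs

# The total variance (trace) is invariant under the rotation to PCs...
print(np.isclose(cov.trace(), pc_var.sum()))             # True

# ...but the sum of standard deviations is not preserved.
print(np.isclose(np.sqrt(np.diag(cov)).sum(), np.sqrt(pc_var).sum()))  # False (generically)
```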
The bottom line is that it is completely fine to look at the standard deviation of each PC, and even to compare them with each other, but if you want to talk about "explained" something, then only "explained variance" makes sense.
Best Answer
If you report the mean, then it is more appropriate to report the standard deviation, as it is expressed in the same units. Think about dimensional homogeneity in physics.
Moreover, it is easier for the reader to form confidence intervals (for large $n$, invoking the Central Limit Theorem and a normal approximation) if the standard deviation is provided rather than the variance.
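As a sketch of that use, a normal-approximation 95% interval for the mean needs only the sample standard deviation; the data below are simulated purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(loc=10.0, scale=2.0, size=500)    # hypothetical large sample

mean = x.mean()
sd = x.std(ddof=1)                               # sample standard deviation
n = len(x)

# Approximate 95% CI for the mean via the normal approximation (large n, CLT)
half_width = 1.96 * sd / np.sqrt(n)
ci = (mean - half_width, mean + half_width)
print(ci)
```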
However, you may consider reporting the variance if you are interested in comparing variance and bias, or in giving the different variance components, since the total variance is the sum of the intra-group and inter-group variances, while the standard deviations do not add up.
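That decomposition can also be checked numerically. The sketch below (with hypothetical equal-sized groups, using population-style `ddof=0` variances, under which the identity is exact) verifies that the total variance equals the mean within-group variance plus the variance of the group means, while the corresponding standard deviations do not add up:

```python
import numpy as np

rng = np.random.default_rng(3)
groups = [rng.normal(loc=m, size=50) for m in (0.0, 1.0, 3.0)]  # three hypothetical groups
x = np.concatenate(groups)

total_var = x.var()                                  # total variance (ddof=0)
within = np.mean([g.var() for g in groups])          # mean intra-group variance (equal sizes)
between = np.var([g.mean() for g in groups])         # variance of the group means

print(np.isclose(total_var, within + between))       # True: variances decompose

# Standard deviations do not: sqrt(within) + sqrt(between) != sqrt(total)
print(np.isclose(np.sqrt(total_var), np.sqrt(within) + np.sqrt(between)))  # False
```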