The standard deviation is the square root of the variance.
The standard deviation is expressed in the same units as the mean, whereas the variance is expressed in squared units. For describing a distribution you can use either, as long as you are clear about which one you are reporting. For example, a Normal distribution with mean = 10 and sd = 3 is exactly the same distribution as a Normal distribution with mean = 10 and variance = 9.
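To make the equivalence concrete, here is a quick numerical check in Python (using NumPy for illustration): samples drawn with sd = 3 have, up to sampling noise, a variance of 9.

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw from a Normal with mean 10 and sd 3 (equivalently, variance 9)
samples = rng.normal(loc=10, scale=3, size=100_000)

print(round(samples.mean(), 2))  # close to 10
print(round(samples.std(), 2))   # close to 3
print(round(samples.var(), 2))   # close to 9 = 3**2
```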
Reporting standard deviations instead of variances
I think you are right: the standard deviation of each PC can be a more reasonable, or for some a more intuitive, measure of its "influence" than its variance. And it even has a clear mathematical interpretation: the variances of the PCs are the eigenvalues of the covariance matrix, while their standard deviations are the singular values of the centered data matrix [only scaled by $1/\sqrt{n-1}$].
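This identity is easy to verify numerically. A minimal sketch in Python with NumPy, using a stand-in random data matrix: the square roots of the covariance eigenvalues match the singular values of the centered data divided by $\sqrt{n-1}$.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(150, 4))   # stand-in data matrix, n = 150 observations
Xc = X - X.mean(axis=0)         # center each column

# Variances of the PCs = eigenvalues of the covariance matrix (descending order)
eigvals = np.linalg.eigvalsh(np.cov(Xc, rowvar=False))[::-1]

# Standard deviations of the PCs = singular values of the centered data,
# scaled by 1/sqrt(n-1)
n = X.shape[0]
sds = np.linalg.svd(Xc, compute_uv=False) / np.sqrt(n - 1)

print(np.allclose(np.sqrt(eigvals), sds))  # True
```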
So yes, it is completely fine to report it. Moreover, R, for example, reports the standard deviations of the PCs rather than their variances. Running this simple code:
irispca <- princomp(iris[-5])
summary(irispca)
results in this:
Importance of components:
                          Comp.1     Comp.2     Comp.3      Comp.4
Standard deviation     2.0494032 0.49097143 0.27872586 0.153870700
Proportion of Variance 0.9246187 0.05306648 0.01710261 0.005212184
Cumulative Proportion  0.9246187 0.97768521 0.99478782 1.000000000
There are standard deviations here, but not variances.
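A similar summary can be obtained in Python; the sketch below assumes scikit-learn is available for the iris data and the PCA. One caveat: `princomp` divides by $n$ while scikit-learn's `PCA` divides by $n-1$, so the standard deviations differ slightly from the R output above, although the proportions of variance agree.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X = load_iris().data
pca = PCA().fit(X)

# scikit-learn reports explained variances; take square roots for the sds
sds = np.sqrt(pca.explained_variance_)
print(np.round(sds, 4))
print(np.round(pca.explained_variance_ratio_, 4))
```

The first entry of `explained_variance_ratio_` is about 0.925, matching the "Proportion of Variance" row of the R summary.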
Explained variance
A PC that contains 95% of the data variance might contain only 80% of the variation in the data as measured in standard deviations: isn't the latter a better descriptor?
However, note that after presenting standard deviations, R does not display a "proportion of standard deviation", but instead a proportion of variance. And there is a very good reason for that.
Mathematically, the total variance (the trace of the covariance matrix) is preserved under rotations. This means that the sum of the variances of the original variables equals the sum of the variances of the PCs. For the same Fisher iris dataset this sum equals $4.57$, so we can say that PC1, having variance $2.05^2=4.20$, explains $92\%$ of the total variance.
But the sum of standard deviations is not preserved! The sum of the standard deviations of the original variables is $3.79$; the sum of the standard deviations of the PCs is $2.98$. They are not equal! So if you wanted to say that PC1, with standard deviation $2.05$, explains $x\%$ of the "total standard deviation", what would you take as this total? There is no answer, because the quantity simply does not make sense.
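Both facts are easy to demonstrate on synthetic data. The Python sketch below (assumed setup: a random mixing matrix to create correlated variables) shows that the total variance survives the rotation to PCs while the sum of standard deviations does not.

```python
import numpy as np

rng = np.random.default_rng(2)
Z = rng.normal(size=(200, 4))
X = Z @ rng.normal(size=(4, 4))   # mix columns to get correlated variables
Xc = X - X.mean(axis=0)

var_orig = Xc.var(axis=0, ddof=1)                        # variances of the variables
pc_vars = np.linalg.eigvalsh(np.cov(Xc, rowvar=False))   # variances of the PCs

# Total variance (the trace of the covariance matrix) is preserved...
print(np.isclose(var_orig.sum(), pc_vars.sum()))         # True

# ...but the sum of standard deviations changes under the rotation
print(np.sqrt(var_orig).sum(), np.sqrt(pc_vars).sum())
```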
The bottom line is that it is completely fine to look at the standard deviation of each PC and even compare them between each other, but if you want to talk about "explained" something, then only "explained variance" makes sense.
Best Answer
The standard deviation is the square root of the variance, which is why the standard deviation is usually written as $\sigma$ and the variance as $\sigma^2$.
Here's a basic explanation comparing the two and showing how each can be used in simple situations.
In more advanced statistics, variance is used in many ways, such as the amount of variance explained (for example, to calculate $R^2$, or the intra-class correlation coefficient (ICC), which is the amount of variance explained by each level in a multilevel experiment or model).
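As one concrete instance of "variance explained", $R^2$ in a simple regression can be computed as one minus the ratio of residual variance to total variance. A minimal sketch in Python (synthetic data; the true signal-to-noise ratio implies $R^2 \approx 0.8$):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=200)
y = 2.0 * x + rng.normal(size=200)   # signal plus unit-variance noise

# Fit a least-squares line; R^2 = proportion of the variance of y it explains
slope, intercept = np.polyfit(x, y, 1)
resid = y - (slope * x + intercept)
r2 = 1 - resid.var() / y.var()
print(round(r2, 3))
```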