Given your new information, that all the entries of the positive-definite matrix are positive, it becomes easy. The result follows directly from the Perron-Frobenius theorem (which is valid for square matrices with non-negative entries, symmetric or not), but in the symmetric case there is a much simpler argument.
Let the positive-definite matrix be $S$. The eigenvector corresponding to the largest eigenvalue is the vector $x$ attaining the maximum in the following problem:
$$
\lambda_{\mathrm{max}} = \max_{\{x \colon \| x\|=1\}} x^T S x
$$
(that is, $x$ is the "argmax"), where $\lambda_{\mathrm{max}}$ is the largest eigenvalue.
Suppose, for the sake of contradiction, that $x_1$ is negative while the other components of $x$ are non-negative. We can write
$$
x^T S x = s_{11} x_1^2 + 2 x_1 \sum_{j=2}^m s_{1j} x_j + \sum_{i=2}^m \sum_{j=2}^m x_i s_{ij} x_j
$$
Note that the first and third terms are positive and unchanged if we flip the sign of $x_1$, while the second term is negative and becomes positive after the flip. So switching the sign of $x_1$ gives a strictly larger value of $x^T S x$ while still respecting the restriction on the norm, which is the contradiction you need. A similar argument can be written for any other pattern of negative/positive signs.
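A quick numerical check of this claim (a sketch with an arbitrary illustrative matrix, not taken from the question):
# Build a symmetric positive-definite matrix with all entries positive
set.seed(1)
A <- matrix(runif(9, min = 0.1, max = 1), 3, 3)
S <- A %*% t(A) + diag(3)      # symmetric, positive definite, every entry > 0
e <- eigen(S, symmetric = TRUE)
v <- e$vectors[, 1]            # eigenvector for the largest eigenvalue
all(v > 0) || all(v < 0)       # TRUE: all components share one sign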
You get the coefficients (the rotation) from PCA. Your observation matrix is multiplied by these coefficients to obtain the components. So, multiply the new observation matrix by the same rotation instead. Don't forget to center it.
Here's the code.
Run PCA and see how the score matrix is obtained from the original data and the rotation. Note that I'm NOT centering, and you probably should.
> x=matrix(c(1,2,3,2,4,5.5),3,2)
> x
[,1] [,2]
[1,] 1 2.0
[2,] 2 4.0
[3,] 3 5.5
> r=prcomp(x,retx=TRUE,center=FALSE)
> r$rotation
PC1 PC2
[1,] -0.4666132 0.8844615
[2,] -0.8844615 -0.4666132
> r$x
PC1 PC2
[1,] -2.235536 -0.04876479
[2,] -4.471072 -0.09752958
[3,] -6.264378 0.08701220
> x %*% r$rotation
PC1 PC2
[1,] -2.235536 -0.04876479
[2,] -4.471072 -0.09752958
[3,] -6.264378 0.08701220
Now, apply the same rotation to different data (again, note that I am NOT centering).
> y=matrix(c(1,2,3,2,4,6.5),3,2)
> y
[,1] [,2]
[1,] 1 2.0
[2,] 2 4.0
[3,] 3 6.5
> y %*% r$rotation
PC1 PC2
[1,] -2.235536 -0.04876479
[2,] -4.471072 -0.09752958
[3,] -7.148839 -0.37960095
Note how similar the new scores are: the first two rows of y match those of x, so their scores are identical, and only the changed third row gets different scores.
By the way, this is used a lot in forecasting with PCA. We obtain the rotation on historical data, then apply it to new data.
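If you do center (as recommended above), the same projection can be done with predict() on the prcomp object. Here is a minimal sketch using the x and y defined above (not part of the original transcript):
r2 <- prcomp(x, center = TRUE)             # centering is the prcomp default
predict(r2, newdata = y)                   # subtracts r2$center, then applies r2$rotation
sweep(y, 2, r2$center) %*% r2$rotation     # the same projection done by hand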
Best Answer
While it is not a direct answer (it is about pointwise mutual information), look at the paper relating word2vec to a singular value decomposition of the PMI matrix: