Here is a general guideline for $2 \times 2$ orthogonal matrices.
They have one of the two forms
$$\text{Either} \ \ R = \begin{bmatrix}
a &-b\\[0.3em]
b & \ \ \ a\\[0.3em]
\end{bmatrix} \ \ \ \ \text{or} \ \ \ \ S = \begin{bmatrix}
a & \ \ \ b\\[0.3em]
b & -a\\[0.3em]
\end{bmatrix}$$
with norm-$1$ column vectors (thus $a^2+b^2=1$), the first case with $\det(R)=a^2+b^2=1$, the second with $\det(S)=-(a^2+b^2)=-1$.
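As a quick numerical check (a sketch in plain Python; `det2` and `is_orthogonal` are ad-hoc helper names, not a library API), one can verify that both forms are orthogonal and have the stated determinants:

```python
import math

# For any a, b with a^2 + b^2 = 1, check that both forms are orthogonal
# (Q^T Q = I) and that det(R) = +1 while det(S) = -1.
t = 0.7                      # arbitrary parameter; a = cos t, b = sin t
a, b = math.cos(t), math.sin(t)

R = [[a, -b], [b, a]]        # det = a^2 + b^2 = +1 (rotation)
S = [[a, b], [b, -a]]        # det = -(a^2 + b^2) = -1 (reflection)

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def is_orthogonal(M, tol=1e-12):
    # Compute Q^T Q entrywise and compare against the identity.
    qtq = [[sum(M[k][i] * M[k][j] for k in range(2)) for j in range(2)]
           for i in range(2)]
    return all(abs(qtq[i][j] - (1.0 if i == j else 0.0)) < tol
               for i in range(2) for j in range(2))

assert is_orthogonal(R) and abs(det2(R) - 1.0) < 1e-12
assert is_orthogonal(S) and abs(det2(S) + 1.0) < 1e-12
```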
More precisely, they have the form (you have cited the first one; the second one is less well known):
$$R_{\theta} = \begin{bmatrix}
\cos(\theta) & -\sin(\theta)\\[0.3em]
\sin(\theta) & \ \ \ \cos(\theta)\\[0.3em]
\end{bmatrix} \ \ \ \ \ \ \text{or} \ \ \ \ \ \ S_{\alpha}=\begin{bmatrix}
\cos(2 \alpha) & \ \ \ \sin(2 \alpha)\\[0.3em]
\sin(2 \alpha) & -\cos(2 \alpha)\\[0.3em]
\end{bmatrix} $$
where $\theta$ is the rotation angle, of course, and $\alpha$ is the polar angle of the axis of symmetry, i.e., the angle between one of its directing vectors and the $x$-axis.
Thus, for your question, once you have recognized that a matrix is a symmetry matrix, it suffices to read off the upper-left coefficient $\cos(2 \alpha)$ and identify the possible $\alpha$s, with the ambiguity resolved by the knowledge of $\sin(2 \alpha)$.
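This recipe can be sketched in a few lines of plain Python (`axis_angle` is an illustrative name, not a library function); `atan2` on the pair $(\sin 2\alpha, \cos 2\alpha)$ performs exactly the disambiguation described above:

```python
import math

# Given a reflection matrix S = [[cos 2a, sin 2a], [sin 2a, -cos 2a]],
# recover the axis angle a. atan2 uses both entries, so the quadrant
# of 2a is determined unambiguously.
def axis_angle(S):
    cos2a, sin2a = S[0][0], S[1][0]     # upper-left and lower-left entries
    return math.atan2(sin2a, cos2a) / 2.0

alpha = 0.3
S = [[math.cos(2 * alpha), math.sin(2 * alpha)],
     [math.sin(2 * alpha), -math.cos(2 * alpha)]]
assert abs(axis_angle(S) - alpha) < 1e-12
```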
Given an oriented 2D subspace $\mathsf{\Pi}$ of a real inner product space $V$ and any angle $\theta$, there exists a rotation $R(\mathsf{\Pi},\theta)$ which acts as a rotation by $\theta$ when restricted to $\mathsf{\Pi}$ and acts as the identity map when restricted to the orthogonal complement $\mathsf{\Pi}^\perp$. Since $V=\mathsf{\Pi}\oplus\mathsf{\Pi}^\perp$ is a(n orthogonal) direct sum, every vector is (uniquely) expressible as sum of a vector in $\mathsf{\Pi}$ and a vector in $\mathsf{\Pi}^\perp$, and using linearity this definition allows us to apply $R(\mathsf{\Pi},\theta)$ to any vector. Picking any two orthogonal unit vectors within $\mathsf{\Pi}$ compatible with the orientation and conjoining that with any basis for $\mathsf{\Pi}^\perp$ yields a basis for $V$ with respect to which $R(\mathsf{\Pi},\theta)$ is block diagonal, with the usual $2\times 2$ rotation matrix as one block and the identity matrix of the appropriate dimensions as the other block.
These are called plane rotations. In 3D we usually think of rotations as happening around a rotation axis; however, this kind of thinking doesn't generalize to higher dimensions, whereas the plane-of-rotation idea does. Indeed, any rotation $R\in\mathrm{SO}(V)$ is expressible as
$$R=\prod_i R(\mathsf{\Pi}_i,\theta_i) $$
for some oriented, two-dimensional, mutually orthogonal subspaces $\mathsf{\Pi}_1,\cdots,\mathsf{\Pi}_\ell$ and angles $\theta_1,\cdots,\theta_\ell$. (Obviously $\ell\le(\dim V)/2$.) As the $\mathsf{\Pi}$s are orthogonal to each other, the factors in the above product all commute, which is why no order needs to be specified in the product.
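The block-diagonal picture and the commutativity claim can be illustrated concretely in $\mathbb{R}^4$ (a sketch in plain Python; `plane_rotation` and `matmul` are ad-hoc helpers, with matrices as lists of lists):

```python
import math

# Plane rotations in the coordinate planes Pi_1 = span(e1, e2) and
# Pi_2 = span(e3, e4) of R^4: identity everywhere except a 2x2
# rotation block in coordinates i, j.
def plane_rotation(n, i, j, theta):
    M = [[1.0 if r == c else 0.0 for c in range(n)] for r in range(n)]
    c, s = math.cos(theta), math.sin(theta)
    M[i][i], M[i][j], M[j][i], M[j][j] = c, -s, s, c
    return M

def matmul(A, B):
    n = len(A)
    return [[sum(A[r][k] * B[k][c] for k in range(n)) for c in range(n)]
            for r in range(n)]

# The planes are mutually orthogonal, so the factors commute.
R1 = plane_rotation(4, 0, 1, 0.5)
R2 = plane_rotation(4, 2, 3, 1.2)
AB, BA = matmul(R1, R2), matmul(R2, R1)
assert all(abs(AB[r][c] - BA[r][c]) < 1e-12
           for r in range(4) for c in range(4))
```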
Is the set of planes $\{\mathsf{\Pi}_1,\cdots,\mathsf{\Pi}_\ell\}$ an invariant of $R$? Not necessarily. For instance, consider multiplication by $i$ on $\mathbb{C}^2$. Any complex one-dimensional subspace of $\mathbb{C}^2$ (there is a $\mathbb{CP}^1$ worth of them) is a real two-dimensional stable subspace. However, it turns out that if the angles $\theta_1,\cdots,\theta_\ell$ are all distinct mod $2\pi$ up to sign, then $\{\mathsf{\Pi}_1,\cdots,\mathsf{\Pi}_\ell\}$ is an invariant.
Indeed, notice that $R^{-1}$ acts the same way but with opposite angles. With a simple picture we can see that $R+R^{-1}$ acts as the scalar $2\cos(\theta_i)$ on $\mathsf{\Pi}_i$. Therefore, $\mathsf{\Pi}_i$ is precisely the $2\cos(\theta_i)$-eigenspace of $R+R^{-1}$. This may not be computationally practical; perhaps more useful for finding the $\theta$-associated stable subspace would be finding the span of the $e^{i\theta}$ and $e^{-i\theta}$ eigenspaces of the complexification $V\otimes_{\mathbb{R}}\mathbb{C}$ and intersecting with $V$. (I don't really think about linear algebra from the practical side, though, so this may be unhelpful.)
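The eigenspace claim can be checked numerically (a plain-Python sketch; helper names are illustrative, and for orthogonal $R$ we use $R^{-1}=R^{\mathsf T}$):

```python
import math

def plane_rotation(n, i, j, theta):
    # Identity on R^n except for a 2x2 rotation block in coordinates i, j.
    M = [[1.0 if r == c else 0.0 for c in range(n)] for r in range(n)]
    c, s = math.cos(theta), math.sin(theta)
    M[i][i], M[i][j], M[j][i], M[j][j] = c, -s, s, c
    return M

def matmul(A, B):
    n = len(A)
    return [[sum(A[r][k] * B[k][c] for k in range(n)) for c in range(n)]
            for r in range(n)]

def matvec(M, v):
    return [sum(M[r][c] * v[c] for c in range(len(v)))
            for r in range(len(v))]

t1, t2 = 0.5, 1.2                      # distinct angles mod 2*pi up to sign
R = matmul(plane_rotation(4, 0, 1, t1), plane_rotation(4, 2, 3, t2))
Rt = [[R[c][r] for c in range(4)] for r in range(4)]   # R^{-1} = R^T
Ssum = [[R[r][c] + Rt[r][c] for c in range(4)] for r in range(4)]

# (R + R^{-1}) acts as the scalar 2 cos(t1) on Pi_1 = span(e1, e2)
# and as 2 cos(t2) on Pi_2 = span(e3, e4).
e1 = [1.0, 0.0, 0.0, 0.0]
e3 = [0.0, 0.0, 1.0, 0.0]
v1, v3 = matvec(Ssum, e1), matvec(Ssum, e3)
assert all(abs(v1[k] - 2 * math.cos(t1) * e1[k]) < 1e-12 for k in range(4))
assert all(abs(v3[k] - 2 * math.cos(t2) * e3[k]) < 1e-12 for k in range(4))
```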
The vector $(1,0)$ along the $x$ axis is rotated into $\frac1{\sqrt2}(1,1)$ in the first quadrant. Thus $A$ rotates counterclockwise, which is by convention associated with positive angles; it represents a rotation by $+\frac\pi4$.
In a wider sense, one might also say that $A$ rotates by an angle of $\frac\pi4$ even if, strictly speaking, it rotated by $-\frac\pi4$. The distinction between the two is only meaningful in $\mathbb R^2$; in $\mathbb R^3$ the rotation by $-\frac\pi4$ around an axis is the rotation by $\frac\pi4$ around the reversed axis. Since we tend to think of rotations in three dimensions, this reduction to positive rotation angles is sometimes also applied when talking about $\mathbb R^2$.
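The signed-angle convention can be sketched in plain Python: the image of $(1,0)$ is the first column of the matrix, and `atan2` on its coordinates yields the counterclockwise (positive) angle. Here $A$ is the $\frac\pi4$-rotation discussed above:

```python
import math

# A = rotation by pi/4: first column is the image of (1, 0).
s = 1 / math.sqrt(2)
A = [[s, -s], [s, s]]

x, y = A[0][0], A[1][0]        # image of (1, 0) under A
theta = math.atan2(y, x)       # signed angle, counterclockwise positive
assert abs(theta - math.pi / 4) < 1e-12
```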