The vector $(1,0)$ along the $x$ axis is rotated into $\frac1{\sqrt2}(1,1)$ in the first quadrant. Thus $A$ rotates counterclockwise, which is by convention associated with positive angles; it represents a rotation by $+\frac\pi4$.
In a wider sense, one might also say that $A$ rotates by an angle $\frac\pi4$ even if, strictly speaking, it rotates by $-\frac\pi4$. The distinction between the two signs is meaningful only in $\mathbb R^2$: in $\mathbb R^3$, the rotation by $-\frac\pi4$ around an axis is the same as the rotation by $\frac\pi4$ around the reversed axis. Since we tend to think of rotations in three dimensions, this reduction to positive rotation angles is sometimes carried over into talk about $\mathbb R^2$.
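The orientation convention above can be checked numerically. The following Python sketch (the helper name `rotate` is just for illustration) applies the rotation by $+\frac\pi4$ to $(1,0)$ and confirms the image lands in the first quadrant:

```python
import math

def rotate(theta, v):
    """Apply the 2x2 rotation matrix R_theta to the vector v = (x, y)."""
    x, y = v
    return (math.cos(theta) * x - math.sin(theta) * y,
            math.sin(theta) * x + math.cos(theta) * y)

# Rotating (1, 0) by +pi/4: both coordinates equal 1/sqrt(2).
x, y = rotate(math.pi / 4, (1, 0))
print(x, y)

# A positive y-coordinate of the image of (1, 0) means the turn
# was counterclockwise, matching the positive-angle convention.
print(y > 0)  # True
```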
Here is a general guideline for $2 \times 2$ orthogonal matrices.
They have one of the two forms
$$\text{Either} \ \ R = \begin{bmatrix}
a &-b\\[0.3em]
b & \ \ \ a\\[0.3em]
\end{bmatrix} \ \ \ \ \text{or} \ \ \ \ S = \begin{bmatrix}
a & \ \ \ b\\[0.3em]
b & -a\\[0.3em]
\end{bmatrix}$$
with norm-$1$ column vectors (thus $a^2+b^2=1$); in the first case $\det(R)=a^2+b^2=1$, in the second $\det(S)=-(a^2+b^2)=-1$.
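A small sketch makes the determinant dichotomy concrete: parametrizing $(a,b)$ as a point on the unit circle (the parameter value below is arbitrary) and computing the $2 \times 2$ determinants by hand gives $+1$ for the first form and $-1$ for the second.

```python
import math

def det2(m):
    """Determinant of a 2x2 matrix given as ((m00, m01), (m10, m11))."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

# Any point on the unit circle supplies entries with a^2 + b^2 = 1.
t = 1.234  # arbitrary parameter
a, b = math.cos(t), math.sin(t)

R = ((a, -b), (b, a))   # first form: rotation
S = ((a, b), (b, -a))   # second form: reflection

print(det2(R))  # a^2 + b^2 = 1, up to rounding
print(det2(S))  # -(a^2 + b^2) = -1, up to rounding
```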
More precisely, they have the form (you have cited the first one; the second is less well known):
$$R_{\theta} = \begin{bmatrix}
\cos(\theta) & -\sin(\theta)\\[0.3em]
\sin(\theta) & \ \ \ \cos(\theta)\\[0.3em]
\end{bmatrix} \ \ \ \ \ \ \text{or} \ \ \ \ \ \ S_{\alpha}=\begin{bmatrix}
\cos(2 \alpha) & \ \ \ \sin(2 \alpha)\\[0.3em]
\sin(2 \alpha) & -\cos(2 \alpha)\\[0.3em]
\end{bmatrix} $$
where $\theta$ is the rotation angle, of course, and $\alpha$ is the polar angle of the axis of symmetry, i.e., the angle one of its direction vectors makes with the $x$-axis.
Thus, for your question, once you have recognized that a matrix is a symmetry (reflection) matrix, it suffices to read off the upper-left coefficient $\cos(2\alpha)$ and identify the possible values of $\alpha$, using the knowledge of $\sin(2\alpha)$ to disambiguate.
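The recipe just described can be sketched in Python. Using `atan2` on the pair $(\cos 2\alpha, \sin 2\alpha)$ performs the disambiguation automatically (the helper names are for illustration only); as a sanity check, applying $S_\alpha$ twice should give the identity, as any reflection is an involution.

```python
import math

def reflection(alpha):
    """S_alpha: reflection across the line at polar angle alpha."""
    c, s = math.cos(2 * alpha), math.sin(2 * alpha)
    return ((c, s), (s, -c))

def recover_alpha(S):
    """Read 2*alpha off the first column (cos 2a, sin 2a) via atan2,
    which picks the correct quadrant, then halve."""
    return math.atan2(S[1][0], S[0][0]) / 2

alpha = 0.3
S = reflection(alpha)
print(recover_alpha(S))  # 0.3, up to rounding

def matmul(A, B):
    """Product of two 2x2 matrices stored as nested tuples."""
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

print(matmul(S, S))  # the identity matrix, up to rounding
```

Note that `atan2` returns $2\alpha \in (-\pi, \pi]$, so the recovered angle is the representative of the axis in $(-\frac\pi2, \frac\pi2]$; the axis itself is only defined modulo $\pi$.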
I'll pick things up from the partial progress in the comments. As you have established, if $A = \pmatrix{a&b\\c&d}$, then $A^TA = I$ implies that $$ a^2 + c^2 = b^2 + d^2 = 1, \quad ab + cd = 0. $$ On the other hand, $AA^T = I$ implies that $a^2 + b^2 = c^2 + d^2 = 1$. With that, we have $$ a^2 + c^2 = c^2 + d^2 \implies a^2 = d^2 \implies d = a \quad \text{or} \quad d = -a. $$ Similarly, $a^2 + b^2 = a^2 + c^2$ implies that $b^2 = c^2$, so that either $c = b$ or $c = -b$.

Now, from the determinant of $A$, we have $$ ad - bc = 1 \implies a(\pm a) - b(\pm b) = 1 = a^2 + b^2. $$ Argue that this equality can only hold if $d = a$ and $c = -b$. Thus, $A$ is a matrix of the form $$ A = \pmatrix{a & b\\ -b & a}, $$ where $a^2 + b^2 = 1$. From there, you should be able to argue that such a $\theta$ exists and is unique.
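The conclusion can be spot-checked numerically: taking any $a, b$ with $a^2 + b^2 = 1$ (the parameter $0.9$ below is arbitrary), the matrix $\pmatrix{a & b\\ -b & a}$ satisfies all the derived constraints, and the unique angle comes from $\cos\theta = a$, $\sin\theta = -b$.

```python
import math

def constraints(A):
    """The conditions derived from A^T A = I and A A^T = I:
    unit columns, orthogonal columns, unit rows."""
    (a, b), (c, d) = A
    return (a * a + c * c, b * b + d * d,   # columns have norm 1
            a * b + c * d,                  # columns are orthogonal
            a * a + b * b, c * c + d * d)   # rows have norm 1

# Any a, b on the unit circle gives a matrix of the derived form.
a, b = math.cos(0.9), math.sin(0.9)
A = ((a, b), (-b, a))
print(constraints(A))  # (1, 1, 0, 1, 1) up to rounding

# The unique theta in (-pi, pi] with A = R_theta: cos(theta) = a, sin(theta) = -b.
theta = math.atan2(-b, a)
print(math.cos(theta) - a, math.sin(theta) + b)  # both ~ 0
```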