For the case over $\mathbb R^2$ it's quite simple. The images of $e_x$ and $e_y$ are the columns of the matrix. According to the requirement, these have to be of unit length and orthogonal; therefore the matrix has to be orthogonal (i.e. have orthonormal columns).
In addition, to preserve orientation, the image of $e_y$ has to be the image of $e_x$ rotated by $\pi/2$ counterclockwise (given a positively oriented coordinate system). That is, if the image of $e_x$ is $(u,v)$, then the image of $e_y$ has to be $(-v, u)$, which means that the matrix has positive determinant.
The converse is just as easily seen to be true.
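Concretely, writing the image of $e_x$ as $(u,v)$ with $u^2+v^2=1$, these conditions say the matrix has the form $$\begin{pmatrix} u & -v \\ v & u \end{pmatrix} = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix},$$ whose columns are orthonormal and whose determinant is $u^2+v^2=1$; these are exactly the usual $2\times 2$ rotation matrices.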
In higher dimensions you will eventually run into the fact that the very definition of orientation depends on the determinant. However, one could, for example, define rotations as compositions of "elementary" rotations in the coordinate planes; in that case one can see, in a similar way, that we end up with exactly those matrices as well.
Given an oriented 2D subspace $\mathsf{\Pi}$ of a real inner product space $V$ and any angle $\theta$, there exists a rotation $R(\mathsf{\Pi},\theta)$ which acts as a rotation by $\theta$ when restricted to $\mathsf{\Pi}$ and acts as the identity map when restricted to the orthogonal complement $\mathsf{\Pi}^\perp$. Since $V=\mathsf{\Pi}\oplus\mathsf{\Pi}^\perp$ is an (orthogonal) direct sum, every vector is uniquely expressible as a sum of a vector in $\mathsf{\Pi}$ and a vector in $\mathsf{\Pi}^\perp$, and by linearity this definition allows us to apply $R(\mathsf{\Pi},\theta)$ to any vector. Picking any two orthogonal unit vectors within $\mathsf{\Pi}$ compatible with the orientation and combining them with any basis for $\mathsf{\Pi}^\perp$ yields a basis for $V$ with respect to which $R(\mathsf{\Pi},\theta)$ is block diagonal, with the usual $2\times 2$ rotation matrix as one block and the identity matrix of the appropriate dimensions as the other block.
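As an illustration (not part of the original answer), here is a small numpy sketch of this construction; the function name `plane_rotation` is my own, and it uses the rank-two formula $R(\mathsf{\Pi},\theta) = I + (\cos\theta - 1)(uu^T + vv^T) + \sin\theta\,(vu^T - uv^T)$ for orthonormal $u,v$ spanning $\mathsf{\Pi}$:

```python
import numpy as np

def plane_rotation(u, v, theta):
    """Rotation by theta in the plane spanned by u and v (hypothetical helper);
    acts as the identity on the orthogonal complement."""
    u = u / np.linalg.norm(u)
    v = v - (u @ v) * u              # Gram-Schmidt: make v orthonormal to u
    v = v / np.linalg.norm(v)
    P = np.outer(u, u) + np.outer(v, v)   # orthogonal projector onto the plane
    K = np.outer(v, u) - np.outer(u, v)   # rotation generator within the plane
    n = len(u)
    return np.eye(n) + (np.cos(theta) - 1) * P + np.sin(theta) * K

# Example in R^4: rotate by pi/3 in the plane spanned by e1 and e3
R = plane_rotation(np.array([1., 0., 0., 0.]), np.array([0., 0., 1., 0.]), np.pi / 3)
assert np.allclose(R.T @ R, np.eye(4)) and np.isclose(np.linalg.det(R), 1.0)
```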
These are called plane rotations. In 3D we usually think of rotations as happening around a rotation axis, but this kind of thinking doesn't generalize to higher dimensions, whereas the plane-of-rotation idea does. Indeed, any rotation $R\in\mathrm{SO}(V)$ is expressible as
$$R=\prod_i R(\mathsf{\Pi}_i,\theta_i) $$
for some oriented, two-dimensional, mutually orthogonal subspaces $\mathsf{\Pi}_1,\cdots,\mathsf{\Pi}_\ell$ and angles $\theta_1,\cdots,\theta_\ell$. (Obviously $\ell\le(\dim V)/2$.) As the $\mathsf{\Pi}$s are orthogonal to each other, the factors in the above product all commute, which is why no order needs to be specified in the product.
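Continuing the numpy sketch above (same hypothetical `plane_rotation` helper), one can check numerically that rotations in mutually orthogonal planes commute, so a product like the one above needs no specified order:

```python
# Two mutually orthogonal planes in R^4: span{e1, e2} and span{e3, e4}
R1 = plane_rotation(np.array([1., 0., 0., 0.]), np.array([0., 1., 0., 0.]), 0.7)
R2 = plane_rotation(np.array([0., 0., 1., 0.]), np.array([0., 0., 0., 1.]), 2.1)

# The factors commute, and their product is again a rotation
assert np.allclose(R1 @ R2, R2 @ R1)
R = R1 @ R2
assert np.allclose(R.T @ R, np.eye(4)) and np.isclose(np.linalg.det(R), 1.0)
```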
Is the set of planes $\{\mathsf{\Pi}_1,\cdots,\mathsf{\Pi}_\ell\}$ an invariant of $R$? Not necessarily. For instance, consider multiplication by $i$ on $\mathbb{C}^2$. Any complex one-dimensional subspace of $\mathbb{C}^2$ (there is a $\mathbb{CP}^1$'s worth of them) is a real two-dimensional stable subspace. However, it turns out that if the angles $\theta_1,\cdots,\theta_\ell$ are all distinct mod $2\pi$ up to sign, then $\{\mathsf{\Pi}_1,\cdots,\mathsf{\Pi}_\ell\}$ is an invariant.
Indeed, notice that $R^{-1}$ acts the same way but with opposite angles. With a simple picture we can see that $R+R^{-1}$ acts as the scalar $2\cos(\theta_i)$ on $\mathsf{\Pi}_i$. Therefore, $\mathsf{\Pi}_i$ is precisely the $2\cos(\theta_i)$-eigenspace of $R+R^{-1}$. This may not be computationally practical; perhaps more useful for finding the $\theta$-associated stable subspace would be to find the span of the $e^{i\theta}$- and $e^{-i\theta}$-eigenspaces of the complexification $V\otimes_{\mathbb{R}}\mathbb{C}$ and intersect it with $V$. (I don't really think about linear algebra from the practical side, though, so this may be unhelpful.)
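A small numerical illustration of the first recipe, continuing the sketch above (the specific matrix $R$ and angles are from that sketch; for a rotation matrix $R^{-1}=R^T$, so $R+R^{-1}$ is symmetric and `eigh` applies):

```python
# Recover the invariant planes of R via the symmetric matrix S = R + R^{-1} = R + R^T
S = R + R.T
eigvals, eigvecs = np.linalg.eigh(S)

# Each eigenvalue 2*cos(theta_i) has a 2-dimensional eigenspace spanning Pi_i
# (assuming the angles are distinct mod 2*pi up to sign).
print(np.round(eigvals, 6))                                  # pairs 2*cos(2.1), 2*cos(0.7)
print(np.round(np.arccos(np.clip(eigvals / 2, -1.0, 1.0)), 6))  # recovers 2.1 and 0.7
```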
Best Answer
I too am unfamiliar with the term "orientation matrix". My guess is that this term is used to mean very different (unrelated) things in different contexts.
Also, as Gerry mentions, this is not a rotation matrix. To see this: A rotation matrix is a type of orthogonal matrix. A matrix $A$ is orthogonal if $A^TA=I_n$ (i.e. its transpose is its inverse). This implies that the rows and columns of the matrix form an orthonormal basis for $\mathbb{R}^n$. This in turn implies that each row and column is a unit vector and clearly $[1\;1\;1]^T$ is not a unit vector (its length is $\sqrt{3}$). So it's definitely not a rotation matrix.
Now what is the deal with this matrix? It's not obvious from the definition (to me anyway) what's going on, but with a slight change in perspective the purpose becomes clear.
Consider $A=(x_A,y_A)$, $B=(x_B,y_B)$, and $C=(x_C,y_C)$. Then $\vec{BA}=A-B=(x_A-x_B,y_A-y_B)$ and $\vec{BC}=C-B=(x_C-x_B,y_C-y_B)$. Treating these as vectors in $\mathbb{R}^3$ with zero $z$-component, the cross product $\vec{BA}\times\vec{BC}$ is $\vec{0}$ if $A$, $B$, $C$ are collinear. The cross product points "upward" if the points are arranged clockwise and downward if they are arranged counter-clockwise. So if we dot the cross product with the vector $\vec{k}=[0\;0\;1]^T$, we get a positive answer if clockwise and a negative one if counter-clockwise.
Now keep in mind that dotting the cross product with $\vec{k}$ is the same as computing the determinant of the $3\times 3$ matrix $$ \begin{bmatrix} 0 & 0 & 1 \\ x_A-x_B & y_A-y_B & 0 \\ x_C-x_B & y_C-y_B & 0 \end{bmatrix}$$
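As a quick sanity check (the concrete points here are my own choice, not from the original answer), this identity can be verified numerically:

```python
import numpy as np

A, B, C = np.array([1.0, 0.0]), np.array([0.0, 0.0]), np.array([0.0, 1.0])  # a clockwise example

BA = np.append(A - B, 0.0)   # embed in R^3 with z = 0
BC = np.append(C - B, 0.0)
k = np.array([0.0, 0.0, 1.0])

triple = k @ np.cross(BA, BC)
M = np.array([k, BA, BC])                  # rows: k, BA, BC
assert np.isclose(triple, np.linalg.det(M))
print(triple)   # positive here, matching the clockwise arrangement of A, B, C
```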
How does this connect back to the so-called orientation matrix? Let's work with its transpose (since the matrix and its transpose have the same determinant this doesn't matter): $$\begin{bmatrix} 1 & 1 & 1 \\ x_A & x_B & x_C \\ y_A & y_B & y_C \end{bmatrix} \sim \begin{bmatrix} 0 & 1 & 1 \\ x_A-x_B & x_B & x_C \\ y_A-y_B & y_B & y_C \end{bmatrix} \sim \begin{bmatrix} 0 & 1 & 0 \\ x_A-x_B & x_B & x_C-x_B \\ y_A-y_B & y_B & y_C-y_B \end{bmatrix} \sim$$ $$\begin{bmatrix} 0 & 1 & 0 \\ x_A-x_B & 0 & x_C-x_B \\ y_A-y_B & y_B & y_C-y_B \end{bmatrix} \sim \begin{bmatrix} 0 & 1 & 0 \\ x_A-x_B & 0 & x_C-x_B \\ y_A-y_B & 0 & y_C-y_B \end{bmatrix}$$
So far I've just added multiples of rows and columns to other rows and columns (this does not change the determinant). Now I need to swap columns (this changes the sign of the determinant): $$\begin{bmatrix} 0 & 0 & 1 \\ x_A-x_B & x_C-x_B & 0 \\ y_A-y_B & y_C-y_B & 0 \end{bmatrix}$$
Finally, we can replace the $2 \times 2$ submatrix in the lower left-hand corner with its transpose; this doesn't change the determinant, and it produces exactly the matrix from the dot/cross product computation above.
In the end, because of the single column swap along the way, we have that if the determinant of the orientation matrix is positive, then the determinant of my original matrix (i.e. $\vec{k}$ dotted with the cross product) is negative, and vice versa.
Thus the points are arranged in a clockwise fashion if the determinant of the orientation matrix is negative and counter-clockwise if it's positive.
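Putting it together, here is a small helper (reusing numpy from the snippet above; the name `orientation` is my own) that classifies three points by the sign of the determinant of the orientation matrix:

```python
def orientation(A, B, C):
    """Sign of det [[1, x_A, y_A], [1, x_B, y_B], [1, x_C, y_C]]:
    positive -> counter-clockwise, negative -> clockwise, zero -> collinear."""
    M = np.array([[1.0, A[0], A[1]],
                  [1.0, B[0], B[1]],
                  [1.0, C[0], C[1]]])
    return np.sign(np.linalg.det(M))

print(orientation((1, 0), (0, 0), (0, 1)))   # -1.0: the clockwise example above
print(orientation((0, 1), (0, 0), (1, 0)))   # +1.0: same points traversed counter-clockwise
```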
There's probably a more efficient route to the answer but this is what came to mind first. I hope it helps! :)