Recall that two vectors are orthogonal if and only if their inner product is zero. You are incorrect in asserting that if the columns of $Q$ are orthogonal to each other then $QQ^T = I$: this follows if the columns of $Q$ form an orthonormal basis for $\mathbb{R}^n$; pairwise orthogonality alone is not sufficient. In particular, "$Q$ is an orthogonal matrix" is not equivalent to "the columns of $Q$ are pairwise orthogonal".
With that clarification, the answer is that if you only ask that the columns be pairwise orthogonal, then the rows need not be pairwise orthogonal. For example, take
$$A = \left(\begin{array}{ccc}1& 0 & 0\\0& 0 & 1\\1 & 0 & 0\end{array}\right).$$
The columns are orthogonal to each other: the middle column is orthogonal to everything (being the zero vector), and the first and third columns are orthogonal. However, the rows are not orthogonal, since the first and third rows are equal and nonzero.
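A quick numerical check of this counterexample (a sketch using NumPy):

```python
import numpy as np

# The counterexample: columns are pairwise orthogonal, rows are not.
A = np.array([[1, 0, 0],
              [0, 0, 1],
              [1, 0, 0]])

# Gram matrix of the columns, A^T A: its off-diagonal entries are the
# pairwise inner products of the columns, and they are all zero here.
col_gram = A.T @ A
print(col_gram)

# Gram matrix of the rows, A A^T: the (1,3) entry is the inner product
# of the first and third rows, which is 1, so the rows are not orthogonal.
row_gram = A @ A.T
print(row_gram[0, 2])  # 1, not 0
```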
On the other hand, if you require that the columns of $Q$ form an orthonormal set (pairwise orthogonal, and the inner product of each column with itself equals $1$), then it does follow, precisely as you argue. For a square matrix, that condition is equivalent to "$Q$ is orthogonal". Since $I = Q^TQ = QQ^T$ and $(Q^T)^T = Q$, if $Q$ is orthogonal then so is $Q^T$, and hence the columns of $Q^T$ (i.e., the rows of $Q$) form an orthonormal set as well.
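To see the orthonormal case numerically (a sketch; the rotation matrix here is chosen only for illustration, since any rotation matrix has orthonormal columns):

```python
import numpy as np

# A 2x2 rotation matrix: its columns are an orthonormal set.
theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Q^T Q = I says the columns are orthonormal;
# Q Q^T = I then says the rows are orthonormal as well.
print(np.allclose(Q.T @ Q, np.eye(2)))  # True
print(np.allclose(Q @ Q.T, np.eye(2)))  # True
```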
Hint:
Let's say you have a linear transformation $T:\mathbb{R}^n \rightarrow \mathbb{R}^n$.
For convenience, denote $\mathbf{x}_i = T(\mathbf{e}_i)$, where $\mathbf{e}_i$ is the $i$th standard basis vector. So, for example, $\mathbf{e}_1 = \langle 1, 0, 0 \rangle$ in $\mathbb{R}^3$. Likewise, $\mathbf{e}_2 = \langle 0, 1, 0 \rangle$, etc.
Then $T$ is encoded by the $n \times n$ matrix $[\mathbf{x}_1, \mathbf{x}_2, \cdots, \mathbf{x}_n]$.
As a simple example, consider a $90^\circ$ counterclockwise rotation about the origin in $\mathbb{R}^2$. Note that $\langle 1, 0 \rangle \mapsto \langle 0, 1 \rangle$, and further $\langle 0, 1 \rangle \mapsto \langle -1, 0 \rangle$. So our linear transformation is encoded by the matrix $\left[ \begin{array}{cc} 0 & -1 \\ 1 & 0 \end{array} \right]$.
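The same recipe in code (a sketch: apply the rotation to each standard basis vector and stack the images as columns):

```python
import numpy as np

# 90-degree counterclockwise rotation about the origin in R^2.
def rotate90(v):
    x, y = v
    return np.array([-y, x])

# The columns of the matrix are the images of the standard basis vectors.
e1, e2 = np.array([1, 0]), np.array([0, 1])
M = np.column_stack([rotate90(e1), rotate90(e2)])
print(M)
# [[ 0 -1]
#  [ 1  0]]
```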
Now applying this to solve your problem will take a bit of work, but if I'm not mistaken, the standard basis vectors lie at the midpoints of edges of this tetrahedron, and edges map to edges. It will be your task to figure out precisely where they map.
Another approach:
Let $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3$, and $\mathbf{v}_4$ denote the vertices of your tetrahedron. Let $T$ be one of the given linear transformations. Then $T(\mathbf{v}_1) = \mathbf{v}_i$ for some $i$, and so forth. In this manner you can arrive at a system of equations whose solution yields the entries of the matrix encoding $T$.
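A sketch of that system-solving approach. The vertex coordinates and the particular symmetry below are my assumptions for illustration (a regular tetrahedron centered at the origin, with the transformation that swaps $\mathbf{v}_1 \leftrightarrow \mathbf{v}_2$ and $\mathbf{v}_3 \leftrightarrow \mathbf{v}_4$); your tetrahedron's coordinates may differ:

```python
import numpy as np

# Assumed vertices of a regular tetrahedron centered at the origin.
v1, v2, v3, v4 = (np.array([1, 1, 1]), np.array([1, -1, -1]),
                  np.array([-1, 1, -1]), np.array([-1, -1, 1]))

# Suppose T swaps v1 <-> v2 and v3 <-> v4. Three linearly independent
# vertices determine the matrix M of T: solve M @ V = W, where the
# columns of V are v1, v2, v3 and the columns of W are their images.
V = np.column_stack([v1, v2, v3])
W = np.column_stack([v2, v1, v4])
M = W @ np.linalg.inv(V)
print(np.round(M).astype(int))
# [[ 1  0  0]
#  [ 0 -1  0]
#  [ 0  0 -1]]

# Consistency check: the remaining vertex must also land where it should.
assert np.allclose(M @ v4, v3)
```

The result is a $180^\circ$ rotation about the $x$-axis, which indeed realizes that vertex swap.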
Best Answer
Usually the term "orthogonal matrix" is reserved for matrices whose columns are not only mutually perpendicular, but also unit vectors. So if you were to divide each entry in your matrix above by $2$, it would be an orthogonal matrix.
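Since the matrix from the question is not shown here, the following sketch uses a hypothetical stand-in: a $4 \times 4$ matrix whose columns are mutually perpendicular with norm $2$, so that dividing every entry by $2$ yields an orthogonal matrix:

```python
import numpy as np

# Hypothetical matrix with mutually perpendicular columns, each of norm 2
# (a 4x4 Hadamard matrix; a stand-in for the matrix in the question).
H = np.array([[1,  1,  1,  1],
              [1, -1,  1, -1],
              [1,  1, -1, -1],
              [1, -1, -1,  1]])

# Columns are orthogonal but not unit vectors: H^T H = 4 I, not I.
print(np.allclose(H.T @ H, 4 * np.eye(4)))  # True

# Dividing each entry by 2 normalizes the columns, so Q is orthogonal.
Q = H / 2
print(np.allclose(Q.T @ Q, np.eye(4)))      # True
```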
There appears to be no standard term for a matrix whose columns are just orthogonal without any restriction on their norms.