I was just writing up notes for this in connection with a section on Smith Normal Form in my abstract algebra course. I'll summarize the results here.
Row operations on the matrix change the input basis and column operations on the matrix change the output basis.
First, let $\{u_1, u_2, \ldots, u_n\}$ denote the input basis.
Row operation: $r_i \to r_i + k r_j$. Effect: Replace $u_j$ with $u_j - k u_i$. (Note that the $i$ and $j$ switch, and there is a sign change.)
Row operation: $r_i \leftrightarrow r_j$. Effect: Swap $u_i$ and $u_j$.
Row operation: $r_i \to k r_i$, where $k$ is a unit. Effect: Replace $u_i$ with $k^{-1} u_i$.
Next, let $\{v_1, v_2, \ldots, v_m\}$ be the output basis.
Column operation: $c_i \to c_i + k c_j$. Effect: Replace $v_i$ with $v_i - k v_j$. (Here there's just a sign change.)
Column operation: $c_i \leftrightarrow c_j$. Effect: Swap $v_i$ and $v_j$.
Column operation: $c_i \to k c_i$, where $k$ is a unit. Effect: Replace $v_i$ with $k^{-1} v_i$.
Incidentally, if you know the Smith normal form and either the final input basis or the final output basis, you can recover the other basis easily. All of this assumes you're working in the ``easy'' case of a Euclidean domain; over a general PID you need a fourth type of operation.
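As a numerical sanity check (a hypothetical Python sketch, not part of the original notes), one can verify the first rule above: if the columns of $U$ hold the input basis vectors and each column of $A$ holds coordinates with respect to that basis, then the actual elements represented are the columns of $UA$, and the row operation $r_i \to r_i + k\,r_j$ paired with the replacement $u_j \mapsto u_j - k\,u_i$ leaves $UA$ unchanged.

```python
import numpy as np

# Check the rule: r_i -> r_i + k r_j  pairs with  u_j -> u_j - k u_i.
# Columns of U are the input basis vectors (in some ambient coordinates);
# column c of A holds coordinates with respect to that basis, so the
# represented elements are the columns of U @ A.

rng = np.random.default_rng(0)
n, m, k = 3, 4, 5
U = rng.integers(-3, 4, size=(n, n))
A = rng.integers(-3, 4, size=(n, m))

i, j = 0, 2                     # apply r_i -> r_i + k * r_j (0-indexed)
E = np.eye(n, dtype=int)
E[i, j] = k
A_new = E @ A                   # the row operation on the matrix

U_new = U.copy()
U_new[:, j] = U[:, j] - k * U[:, i]   # replace u_j with u_j - k u_i

# The represented elements are unchanged, since U_new = U E^{-1}:
assert np.array_equal(U_new @ A_new, U @ A)
```

Note the index switch and sign change are forced by $U' A' = (U E^{-1})(E A) = UA$, where $E = I + k\,e_{ij}$ is the elementary matrix of the row operation.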
Short answer: you are changing basis ... twice!
The transformation $T$ is described geometrically in terms of the standard basis $\mathcal{E} = \{e_1, e_2, e_3\}$ for $\mathbb{R}^3$, so the matrix is
$$ \big[\,T\,\big]_{\mathcal{E}} = \left[ \begin{array}{*{3}{c}} 1&0&0 \\ 0&1&0 \\ 0&0&-1 \end{array} \right].$$
To obtain the matrix in $\mathcal{B}$-coordinates, you must first change basis to standard coordinates, then apply the transformation there, and finally change basis back to $\mathcal{B}$. This is called conjugation (or a similarity transformation, as you likely know):
$$ \big[\,T\,\big]_{\mathcal{B}} = \underset{\mathcal{B} \leftarrow \mathcal{E}}{\mathcal{P}} \; \big[\,T\,\big]_{\mathcal{E}} \; \underset{\mathcal{E} \leftarrow \mathcal{B}}{\mathcal{P}}$$
The trick is that when you write a vector $b_1$, you are implicitly expressing it in terms of the standard basis, so $b_1 = [\,b_1\,]_{\mathcal{E}}$, which is why the change-of-basis matrix from $\mathcal{B}$ to $\mathcal{E}$ is
$$\underset{\mathcal{E} \leftarrow \mathcal{B}}{\mathcal{P}} = \big[ \big[\,b_1\,\big]_{\mathcal{E}} \;\; \big[\,b_2\,\big]_{\mathcal{E}} \;\; \big[\,b_3\,\big]_{\mathcal{E}} \big] = \big[ b_1 \;\; b_2 \;\; b_3 \big].$$
Now apply $T$ to each basis vector and express the result in $\mathcal{B}$-coordinates, and you obtain
$$
\big[\,T\,\big]_{\mathcal{B}} = \big[ \big[\,T(b_1)\,\big]_{\mathcal{B}} \;\; \big[\,T(b_2)\,\big]_{\mathcal{B}} \;\; \big[\,T(b_3)\,\big]_{\mathcal{B}} \big]
.$$
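To make this concrete, here is a small numerical sketch (the basis $\mathcal{B}$ below is made up for illustration, not taken from the question): the columns of $P$ are the $\mathcal{B}$-basis vectors in standard coordinates, and conjugating $[T]_{\mathcal{E}}$ by $P$ produces $[T]_{\mathcal{B}}$.

```python
import numpy as np

# T reflects through the xy-plane, so in the standard basis it is diag(1, 1, -1).
T_E = np.diag([1.0, 1.0, -1.0])

# Columns of P are the B-basis vectors in standard coordinates,
# i.e. P is the change-of-basis matrix  E <- B.  (This basis is hypothetical.)
b1, b2, b3 = [1, 0, 1], [0, 1, 1], [1, 1, 0]
P = np.column_stack([b1, b2, b3]).astype(float)

# [T]_B = P^{-1} [T]_E P : convert B-coords to E-coords, apply T, convert back.
T_B = np.linalg.inv(P) @ T_E @ P

# Sanity check: for a vector x_B given in B-coordinates, both routes agree.
x_B = np.array([2.0, -1.0, 3.0])
assert np.allclose(P @ (T_B @ x_B), T_E @ (P @ x_B))
```

Since $[T]_{\mathcal{B}}$ is similar to $[T]_{\mathcal{E}}$, invariants such as the determinant ($-1$ here) and the eigenvalues are unchanged by the conjugation.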
As mentioned in the comments, such a transpose map on $W$ and $V$ is coordinate-dependent.
If you want a coordinate independent map, you'll have to stick with the map from $W^*$ to $V^*$.
For a finite dimensional vector space, the double dual space of $V$ (i.e. $V^{**}$) is naturally isomorphic to $V$ (natural here has a technical definition, but essentially means this isomorphism does not depend on coordinates).
On the other hand, even though the dual space of $V$ (i.e. $V^*$) and $V$ are isomorphic, we do not have that they are naturally isomorphic. Picking an isomorphism is nearly the same thing as selecting an inner product (i.e. adding geometry to $V$).
Given a linear map, $A:V \to W$, we can define $A^T:W^* \to V^*$ naturally via $A^T(f)(v)=f(A(v))$. [$f:W \to \mathbb{R}$ is a linear functional on $W$, $A^T(f):V \to \mathbb{R}$ is a linear functional on $V$.]
If we want to define $A^T$ as a map from $W$ to $V$, we'll need to pass (somehow) from $W^*$ to $W$ and $V^*$ to $V$. One way to do this is via inner products.
Suppose that $\langle v_1,v_2 \rangle_V$ is an inner product on $V$ and $\langle w_1,w_2 \rangle_W$ is an inner product on $W$. Then $v \mapsto \langle v, \cdot \rangle_V$ gives an isomorphism from $V$ to $V^*$. Let's give this a name: $\varphi_V(v)=\langle v, \cdot \rangle_V$. Likewise, $w \mapsto \langle w,\cdot \rangle_W$ gives an isomorphism from $W$ to $W^*$, denoted $\varphi_W(w)=\langle w,\cdot \rangle_W$.
Then we have: $A^T(\varphi_W(w))(v) = (\varphi_W(w))(A(v)) = \langle w, A(v) \rangle_W$. If we then use $\varphi_V^{-1}$ we can turn the linear map $A^T(\varphi_W(w))$ into a vector in $V$. Thus composing maps as follows: $\varphi_V^{-1} \circ A^T \circ \varphi_W$ gives us your desired transpose map: $$\varphi_V^{-1} \circ A^T \circ \varphi_W:W \to V$$ If we call this $\widetilde{A^T} = \varphi_V^{-1} \circ A^T \circ \varphi_W$, we have $$ \langle \widetilde{A^T}(w),v \rangle_V = \langle w, A(v) \rangle_W$$ for all $v \in V$ and $w \in W$. Some texts would just use the above equality as the definition of the transpose map (but, of course, this depends on those pesky inner products).
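In coordinates this is easy to check numerically (a hypothetical sketch, with made-up data): representing the inner products by symmetric positive-definite Gram matrices $G_V$, $G_W$, so that $\langle x, y \rangle_V = x^T G_V\, y$, the composite $\varphi_V^{-1} \circ A^T \circ \varphi_W$ works out to the matrix $G_V^{-1} A^T G_W$, which reduces to the ordinary transpose $A^T$ when both inner products are the standard one.

```python
import numpy as np

# Represent <x, y>_V = x^T G_V y with a symmetric positive-definite G_V,
# and similarly for W.  Then phi_V^{-1} o A^T o phi_W is G_V^{-1} A^T G_W.

rng = np.random.default_rng(1)
n, m = 3, 4                          # dim V = 3, dim W = 4

def random_spd(k):
    M = rng.standard_normal((k, k))
    return M @ M.T + k * np.eye(k)   # SPD, hence a valid Gram matrix

G_V, G_W = random_spd(n), random_spd(m)
A = rng.standard_normal((m, n))      # a linear map V -> W

A_adj = np.linalg.inv(G_V) @ A.T @ G_W   # the transpose map W -> V

# Verify  <A_adj(w), v>_V = <w, A(v)>_W  for random v, w:
v, w = rng.standard_normal(n), rng.standard_normal(m)
lhs = (A_adj @ w) @ G_V @ v
rhs = w @ G_W @ (A @ v)
assert np.isclose(lhs, rhs)
```

Changing either Gram matrix changes `A_adj`, which is exactly the coordinate (inner-product) dependence discussed above.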
The issue is that turning the transpose map into a map on $W$ and $V$ requires us to pick isomorphisms between $V$ and its dual as well as $W$ and its dual. A special case of this is selecting an inner product.
Picking isomorphisms between vector spaces and their duals is very closely related to selecting bases. [Given a basis for $V$, you get a dual basis for $V^*$, and bam, you have an isomorphism.] Likewise, picking an inner product is closely related to selecting a basis. [Pick a basis, declare it to be orthonormal, and bam, you have an inner product.]
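Both constructions can be seen in coordinates (a hypothetical sketch with made-up bases): if the columns of $B$ form a basis of $\mathbb{R}^n$, then "declare $B$ orthonormal" means using the Gram matrix $G = (BB^T)^{-1}$, since then $b_i^T G\, b_j = \delta_{ij}$, and the same $G$ is the matrix of the dual-basis isomorphism $V \to V^*$. Different bases give different $G$, so the identification really is basis-dependent.

```python
import numpy as np

# Columns of B form a basis of R^n.  Declaring that basis orthonormal
# yields the Gram matrix G = (B B^T)^{-1}, which also represents the
# dual-basis isomorphism V -> V* in standard coordinates.

B1 = np.eye(2)                        # the standard basis
B2 = np.array([[1.0, 1.0],
               [0.0, 1.0]])           # a different (made-up) basis

def gram(B):
    return np.linalg.inv(B @ B.T)

G1, G2 = gram(B1), gram(B2)

# Each declared basis really is orthonormal for its own inner product...
assert np.allclose(B2.T @ G2 @ B2, np.eye(2))
# ...but the two resulting isomorphisms V -> V* disagree:
assert not np.allclose(G1, G2)
```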
In the end, you really can't get the transpose map to be "coordinate free" on the original vector spaces in the way you're looking for. It all comes down to the fact that a vector space and its dual are not naturally isomorphic.