I believed that we only had to ensure that [condition for adjointness]. But this reasoning would imply that this operator (and indeed any linear operator on a real vector space) is self-adjoint.
Why? Not every linear operator satisfies that condition. The one you gave is an example. Let $x=(1,0,0)$ and $y=(0,1,0)$, and let the transformation act on the right of row vectors.
Then $\langle Tx, y\rangle=\langle (-1,1,0),(0,1,0)\rangle=1$ but
$\langle x, Ty\rangle=\langle (1,0,0),(0,5,0)\rangle=0$.
Thus this operator is not self-adjoint.
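For a quick numerical sanity check, here is the computation above in NumPy. The question only pins down the action of the transformation on $(1,0,0)$ and $(0,1,0)$, so the third row of the matrix below is a filler of zeros (it plays no role in the check):

```python
import numpy as np

# Hypothetical 3x3 matrix consistent with the action described above:
# acting on the right of row vectors, it sends (1,0,0) to (-1,1,0)
# and (0,1,0) to (0,5,0). The third row is not determined by the
# example, so we fill it with zeros.
T = np.array([[-1, 1, 0],
              [ 0, 5, 0],
              [ 0, 0, 0]])

x = np.array([1, 0, 0])
y = np.array([0, 1, 0])

lhs = np.dot(x @ T, y)  # <Tx, y> : x @ T picks out the first row (-1,1,0)
rhs = np.dot(x, y @ T)  # <x, Ty> : y @ T picks out the second row (0,5,0)

assert lhs == 1 and rhs == 0   # the two pairings disagree
```

Since $\langle Tx, y\rangle \neq \langle x, Ty\rangle$, the operator cannot be self-adjoint.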
Let me first recall the definition of the transpose. Let $T: \ X \longrightarrow Y$ be a linear operator from $X$ to $Y$. The dual $X^*$ is the space of (continuous) linear functionals on $X$, i.e., linear operators from $X$ to $\mathbb{C}$. So if $x\in X^*$ and $v \in X$, then $x(v) \in \mathbb{C}$. We may denote the evaluation of $x$ at $v$ as
$$
x(v):= (x,v). \ \ \quad \ \ (0)
$$
Now the transpose of $T$ is the linear operator $ T^t : Y^* \longrightarrow X^*$ defined as
$$
(y, Tv) = ( T^t y, v) \ \ \quad \ \ (1)
$$
for all $ v\in X, \ y \in Y^*$. Now take bases of $X,Y$ (say $e_i, \ f_j$) and the corresponding dual bases of $X^*,Y^*$ (satisfying $ (\eta_i , e_j) = \delta_{i,j}$ and so on). Assume $\dim X = n$ and $\dim Y = m$.
So for example $(0)$ becomes
$$
(x,v) = \sum_{i} x_i v_i \ \ \quad \ \ (2)
$$
The matrix of $T$ in these bases is defined as
$$
Te_i = \sum_{j=1}^m T_{j,i} f_j
$$
(for $i=1,2,\ldots, n$). Using the convention above we have
$$
Tv = T\sum_i v_i e_i = \sum_{i,j} v_i T_{j,i} f_j
$$
so that the components of $Tv$ are given by $ (Tv)_k = \sum_i T_{k,i} v_i$. This is the usual matrix-vector multiplication rule.
You can see that the transpose is associated to the matrix $(T^t)_{i,j}=T_{j,i}$. Note that $T$ corresponds to an $m\times n$ matrix, while $T^t$ corresponds to an $n\times m$ matrix.
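These component formulas are easy to check numerically. The sketch below (with an arbitrary random matrix, just for illustration) verifies both the matrix-vector rule and the index swap defining the transpose:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 3
T = rng.standard_normal((m, n))   # an m x n matrix, as in the text
v = rng.standard_normal(n)

# (Tv)_k = sum_i T[k, i] * v[i] -- the matrix-vector rule written out
Tv = np.array([sum(T[k, i] * v[i] for i in range(n)) for k in range(m)])
assert np.allclose(Tv, T @ v)

# The transpose swaps the indices: (T^t)[i, j] = T[j, i],
# and it is an n x m matrix.
assert T.T.shape == (n, m)
assert all(T.T[i, j] == T[j, i] for i in range(n) for j in range(m))
```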
However, what ultimately allows this to work, or alternatively, what makes definition $(1)$ work, is the following simple fact.
Once you have a matrix of $m\times n$ numbers $T_{i,j}$, you can form $m$-dimensional vectors via $\sum_{j=1}^n T_{k,j} v_j$ ($k=1,\ldots, m$), but also $n$-dimensional vectors via $\sum_{i=1}^m T_{i,k} w_i$ ($k=1,\ldots, n$).
The latter operation corresponds (using the usual matrix-vector multiplication rules) to
$$
(w_1,w_2, \ldots w_m) \left(\begin{array}{cccc}
T_{1,1} & T_{1,2} & \cdots & T_{1,n}\\
T_{2,1}\\
\vdots\\
T_{m,1} & & \cdots & T_{m,n}
\end{array}\right)
$$
according to the usual rules (row-vector times matrix).
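In NumPy terms, the row-vector-times-matrix operation is exactly the action of the transpose, which is a one-line check (random data here is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 4, 3
T = rng.standard_normal((m, n))   # an m x n matrix
w = rng.standard_normal(m)        # an m-dimensional (row) vector

# w @ T computes sum_i T[i, k] * w[i] for k = 1, ..., n, i.e. the
# n-dimensional vector obtained by acting with the transpose on w.
assert np.allclose(w @ T, T.T @ w)
```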
To form the adjoint we proceed in a very similar way. However, now we use the scalar product to identify linear functionals with vectors. That is, instead of $(2)$
we use the scalar product $\langle \bullet, \bullet \rangle$:
$$
\langle x, v \rangle = \sum_{j=1}^n x_j^* v_j
$$
Then the construction of the adjoint is the same as that of the transpose, but you pick up additional complex conjugates.
In components, of course, the matrix associated to the adjoint satisfies:
$$
(T^*)_{i,j} = T_{j,i}^*
$$
The point of the definition is to extend the notion of the "conjugate transpose" so that it makes sense on an arbitrary inner product space. I'm not sure what you mean by "does that definition follow from the definition of an inner product space". However, I think it might be helpful to see why if $V = \Bbb C^n, W = \Bbb C^m$ with the usual inner product and $T:V \to W$ is the operator defined by $T(x) = Ax$, then the adjoint operator $T^*: W \to V$ is $T^*(x) = A^*x$. In other words, taking the adjoint is "the same as" taking the conjugate transpose.
Let $A'$ denote the conjugate-transpose of $A$. Recall that the usual inner product on $\Bbb C^n$ is given by $$ \langle x,y\rangle = y'x = \sum_{k=1}^n x_k \bar y_k. $$ If we define $T(x) = Ax$ and $S(x) = A'x$, then we find that for $x \in V$ and $y \in W$, we have $$ \langle T(x),y \rangle = y'(Ax) = (y'A)x = (A'y)'x = \langle x,S(y) \rangle. $$ So, $S$ is indeed the adjoint operator to $T$.
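The chain of equalities above is easy to confirm numerically. Note that this answer uses the opposite convention (conjugate-linear in the *second* slot, $\langle x,y\rangle = \sum_k x_k \bar y_k$), so the check is written with an explicit helper; the random data is illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 4, 3
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
y = rng.standard_normal(m) + 1j * rng.standard_normal(m)

def inner(u, w):
    # <u, w> = w'u = sum_k u_k * conj(w_k), the convention of this answer
    return np.sum(u * np.conj(w))

Ap = A.conj().T   # A', the conjugate transpose

# <T(x), y> = <x, S(y)>  with  T(x) = Ax  and  S(y) = A'y
assert np.isclose(inner(A @ x, y), inner(x, Ap @ y))
```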