I think you may be putting too much constraint on what the inner product looks like. While the Euclidean dot product can be computed as $\langle x,y\rangle = x^Ty$, inner products on $\mathbb{R}^n$ in general look like $\langle x,y\rangle = x^TAy$, where $A$ is a symmetric positive-definite matrix. If you impose the Euclidean dot product in one basis and then change the basis, two vectors that were orthogonal remain orthogonal with respect to that inner product, but the matrix representing it changes. In other words, the dot product is the special case $A=I$, and changing the basis changes this matrix.
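To make this concrete, here is a small sketch (my own example; the matrix $A$ and the vectors are assumptions for illustration) of two vectors that are orthogonal with respect to $\langle x,y\rangle = x^TAy$ but not with respect to the ordinary dot product:

```python
def inner(x, y, A):
    """Compute x^T A y for a 2x2 matrix A and length-2 vectors x, y."""
    return sum(x[i] * A[i][j] * y[j] for i in range(2) for j in range(2))

# Symmetric with eigenvalues 1 and 3, hence positive-definite (assumed example).
A = [[2.0, 1.0],
     [1.0, 2.0]]

x = [1.0, 0.0]
y = [1.0, -2.0]  # chosen so that x^T A y = 2*1 + 1*(-2) = 0

print(inner(x, y, A))                         # 0.0: orthogonal w.r.t. A
print(inner(x, y, [[1.0, 0.0], [0.0, 1.0]]))  # 1.0: not orthogonal w.r.t. I
```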
I’d say that the motivation for defining orthogonality in terms of the inner product is to uncover the algebraic/computational ramifications of the geometric picture. As far as computability goes, it’s very difficult to verify that two vectors are orthogonal from a picture (how do you know the picture is exact?), whereas the artifact of a zero inner product is quite easy to verify. It’s often a general goal in all areas of mathematics to abstract the computational device that gives rise to a qualitative feature.
The inner product definition of orthogonality also generalizes to other areas of mathematics. For instance, in analysis a common inner product between two functions defined on the interval $[0,1]$ is
$$
\langle f,g\rangle = \int_0^1 f(x)g(x)dx.
$$
This gives us a way to adapt linear algebraic concepts to functions in a way that the geometric picture does not. Most of the time functions that are orthogonal with respect to this inner product don’t “look” perpendicular in the same way that vectors do.
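As a sanity check, here is a short numerical sketch of this inner product (my own example; the functions $\sin(2\pi x)$ and $\cos(2\pi x)$ are assumptions chosen for illustration):

```python
import math

def inner_product(f, g, n=10_000):
    """Midpoint-rule approximation of the integral of f(x) g(x) over [0, 1]."""
    h = 1.0 / n
    return sum(f((k + 0.5) * h) * g((k + 0.5) * h) for k in range(n)) * h

f = lambda x: math.sin(2 * math.pi * x)
g = lambda x: math.cos(2 * math.pi * x)

print(inner_product(f, g))  # ~0: f and g are orthogonal in this inner product
print(inner_product(f, f))  # ~0.5: yet f is certainly not the zero function
```

Nothing about the graphs of these two waves "looks" perpendicular, yet the inner product detects their orthogonality immediately.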
When we write a cross-product as a determinant, we are really abusing notation. If you look at the expression,
\begin{align}
\mathbf{a\times b}
&= \begin{vmatrix}
\mathbf{i}&\mathbf{j}&\mathbf{k}\\
a_1&a_2&a_3\\
b_1&b_2&b_3\\
\end{vmatrix}
\\
&=
(a_2b_3 - a_3b_2)\mathbf{i} -(a_1b_3 - a_3b_1)\mathbf{j} +(a_1b_2 - a_2b_1)\mathbf{k}
\\
&=\begin{pmatrix}a_2b_3-a_3b_2\\a_3b_1-a_1b_3\\a_1b_2-a_2b_1\end{pmatrix},
\end{align}
you can see that in each term we have a product of two dimensionful `length' coordinates, giving an interpretation as area.
When we say that it is numerically the area of a parallelogram, we are talking about the norm of this vector,
\begin{align}
\left\lVert
\mathbf{a\times b}
\right\rVert
&=
\sqrt{(a_2b_3 - a_3b_2)^2 + (a_1b_3 - a_3b_1)^2 + (a_1b_2 - a_2b_1)^2}.
\end{align}
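A quick numerical check of the formulas above, with made-up example vectors, confirms both claims: the cross product is orthogonal to its inputs, and its norm equals the parallelogram area $\lVert\mathbf{a}\rVert\,\lVert\mathbf{b}\rVert\sin\theta$:

```python
import math

def cross(a, b):
    """Component formula for a x b."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def norm(u):
    return math.sqrt(dot(u, u))

a = (1.0, 2.0, 3.0)   # assumed example vectors
b = (4.0, 5.0, 6.0)
c = cross(a, b)

print(dot(c, a), dot(c, b))  # both 0: a x b is orthogonal to a and to b
theta = math.acos(dot(a, b) / (norm(a) * norm(b)))
print(norm(c), norm(a) * norm(b) * math.sin(theta))  # equal: parallelogram area
```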
Counting the dimensions is easiest if we just consider
\begin{align}
&\mathbf{a} = \begin{pmatrix} a_1 \\ 0 \\ 0\end{pmatrix}
&\mathbf{b} = \begin{pmatrix} 0 \\ b_2 \\ 0\end{pmatrix},
\end{align}
then
\begin{align}
\left\lVert
\mathbf{a\times b}
\right\rVert
&=
\sqrt{(a_1b_2)^2}
\\
&=
\left|a_1 b_2\right|,
\end{align}
i.e. an area.
Why do we understand a determinant as corresponding to volume? Consider instead the scalar triple product,
\begin{align}
\left(\mathbf{a\times b}\right) \cdot \mathbf{c}
&=
\left( a_2b_3-a_3b_2 \right) c_1 +
\left( a_3b_1-a_1b_3 \right) c_2 +
\left( a_1b_2-a_2b_1 \right) c_3
\\
&=
\begin{vmatrix}
c_1&c_2&c_3\\
a_1&a_2&a_3\\
b_1&b_2&b_3\\
\end{vmatrix}.
\end{align}
Then if we count the dimensions, e.g. by choosing
\begin{align}
&\mathbf{a} = \begin{pmatrix} a_1 \\ 0 \\ 0\end{pmatrix}
&\mathbf{b} = \begin{pmatrix} 0 \\ b_2 \\ 0\end{pmatrix}
&\mathbf{c} = \begin{pmatrix} 0 \\ 0 \\ c_3\end{pmatrix},
\end{align}
to simplify the algebra,
we get
$$
\left(\mathbf{a\times b}\right) \cdot \mathbf{c}
=
a_1 b_2 c_3,
$$
a product of three lengths and hence a volume.
Final comment. Why do we abuse notation to write a cross-product as a determinant?
Really, we are interested in the totally antisymmetric symbol in three dimensions (unique up to an overall scale), called the Levi-Civita, or alternating, symbol $\varepsilon_{ijk}$, defined as
$$
\varepsilon_{ijk} =
\begin{cases}
+1 & \text{if } (i,j,k) \text{ is } (1,2,3), (2,3,1), \text{ or } (3,1,2), \\
-1 & \text{if } (i,j,k) \text{ is } (3,2,1), (1,3,2), \text{ or } (2,1,3), \\
\;\;\,0 & \text{if } i = j, \text{ or } j = k, \text{ or } k = i
\end{cases}.
$$
Both the cross-product and determinant are properly defined using this, with
$$
\mathbf{a}\times\mathbf{b} = \sum_{i,j,k=1}^3 \mathbf{e}_i \;\varepsilon_{ijk} \, a_j b_k
$$
and
$$
\det A
=
\begin{vmatrix}
A_{11}&A_{12}&A_{13}\\
A_{21}&A_{22}&A_{23}\\
A_{31}&A_{32}&A_{33}\\
\end{vmatrix}
=
\sum_{i,j,k=1}^3
\varepsilon_{ijk} \,
A_{1i}
A_{2j}
A_{3k}
$$
which allows us to use our mnemonic for calculating the latter (cofactor expansion along the first row, in terms of the determinants of the $2\times 2$ submatrices, the minors) to remember how to calculate the former.
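For completeness, here is a direct (if inefficient) transcription of these two $\varepsilon_{ijk}$ formulas into code, checked against the usual component results; the example inputs are my own, and the indices run over $0,1,2$ rather than $1,2,3$:

```python
def eps(i, j, k):
    """Levi-Civita symbol, with indices 0, 1, 2 instead of 1, 2, 3."""
    if (i, j, k) in {(0, 1, 2), (1, 2, 0), (2, 0, 1)}:
        return 1
    if (i, j, k) in {(2, 1, 0), (0, 2, 1), (1, 0, 2)}:
        return -1
    return 0  # any repeated index

def cross(a, b):
    # (a x b)_i = sum_{j,k} eps_{ijk} a_j b_k
    return tuple(sum(eps(i, j, k) * a[j] * b[k]
                     for j in range(3) for k in range(3))
                 for i in range(3))

def det3(A):
    # det A = sum_{i,j,k} eps_{ijk} A_{0i} A_{1j} A_{2k}
    return sum(eps(i, j, k) * A[0][i] * A[1][j] * A[2][k]
               for i in range(3) for j in range(3) for k in range(3))

print(cross((1, 2, 3), (4, 5, 6)))               # (-3, 6, -3)
print(det3([(1, 2, 3), (4, 5, 6), (7, 8, 10)]))  # -3
```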
Best Answer
Once you have found the three vertices, which I'll name $A$, $B$ and $C$, form the following vectors: $\overrightarrow{AB}=B-A$ and $\overrightarrow{AC}=C-A$. By doing this, you effectively 'move' the triangle $\Delta ABC$ to one with a vertex at the origin $O$, namely $\Delta O(B-A)(C-A)$, which of course has the same area.
The area of the parallelogram spanned by these two vectors is given by the magnitude of their cross product. The area of the triangle is exactly half of the area of that parallelogram, so: $$\mbox{Area(triangle)} = \frac{\left| \overrightarrow{AB}\times\overrightarrow{AC}\right|}{2} = \frac{\left| \left(B-A\right)\times\left(C-A\right)\right|}{2}$$ You should find $\sqrt{19} \approx 4.36$.
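In code, the recipe looks like the sketch below; since the original problem's vertices aren't reproduced here, the vertices in the example are made up (a right triangle with legs 1 and 2, so the area should be exactly 1):

```python
import math

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def triangle_area(A, B, C):
    """Half the magnitude of AB x AC, for vertices given as 3-tuples."""
    AB = tuple(b - a for a, b in zip(A, B))
    AC = tuple(c - a for a, c in zip(A, C))
    n = cross(AB, AC)
    return math.sqrt(sum(x * x for x in n)) / 2

# Made-up vertices: a right triangle with legs 1 and 2 in the xy-plane.
print(triangle_area((0, 0, 0), (1, 0, 0), (0, 2, 0)))  # 1.0
```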