You've defined $z_{1} \cdot z_{2} = \operatorname{Re}(\overline{z_{1}} z_{2})$, and you want to show that $z_{1} \cdot z_{2} = 0 \iff z_{1} \perp z_{2}$. Without loss of generality, assume $\arg(z_{1}) \geq \arg(z_{2})$, and note that $z_{1} \perp z_{2}$ if, and only if, $\arg(z_{1}) - \arg(z_{2}) = \frac{\pi}{2}$ or $\arg(z_{1}) - \arg(z_{2}) = \frac{3\pi}{2}$.
Next, write the two complex numbers in polar form: $z_{1} = |z_{1}| e^{i \arg(z_{1})}$ and $z_{2} = |z_{2}| e^{i \arg(z_{2})}$. Then $\overline{z_{1}} z_{2} = |z_{1}||z_{2}| e^{i(\arg(z_{2}) - \arg(z_{1}))}$, so taking the real part gives
$$z_{1} \cdot z_{2} = |z_{1}||z_{2}| \cos(-\arg(z_{1}))\cos(\arg(z_{2})) - |z_{1}| |z_{2}| \sin(-\arg(z_{1})) \sin(\arg(z_{2})).$$
Because cosine is even and sine is odd, we have
$$z_{1} \cdot z_{2} = |z_{1}||z_{2}| \cos(\arg(z_{1}))\cos(\arg(z_{2})) + |z_{1}| |z_{2}| \sin(\arg(z_{1})) \sin(\arg(z_{2})).$$
By the angle-difference identity for cosine, this is equal to
$$z_{1} \cdot z_{2} = |z_{1}| |z_{2}| \cos(\arg(z_{1}) - \arg(z_{2})).$$
Now suppose $z_{1} \cdot z_{2} = 0$. Since $|z_{1}| \neq 0$ and $|z_{2}| \neq 0$ (otherwise $z_{1}$ and $z_{2}$ are trivially perpendicular), we must have $\cos(\arg(z_{1}) - \arg(z_{2})) = 0$, which can only happen if $\arg(z_{1}) - \arg(z_{2}) \in \{\frac{\pi}{2}, \frac{3\pi}{2}\}$.
For the converse, just work backwards: if $\arg(z_{1}) - \arg(z_{2}) \in \{\frac{\pi}{2}, \frac{3\pi}{2}\}$, then $z_{1} \cdot z_{2} = |z_{1}| |z_{2}| \cos( \arg(z_{1}) - \arg(z_{2})) = 0$.
So $z_{1} \cdot z_{2} = 0 \iff z_{1} \perp z_{2}$.
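If you want to check this numerically, here is a minimal Python sketch of the definition above (the helper name `dot` is just for illustration):

```python
def dot(z1: complex, z2: complex) -> float:
    # Re(conj(z1) * z2), the dot product defined above
    return (z1.conjugate() * z2).real

z1 = 3 + 4j
z2 = z1 * 1j          # multiplying by i = e^{i pi/2} rotates z1 a quarter turn
print(dot(z1, z2))    # 0.0, since the rotated copy is perpendicular to z1
print(dot(z1, z1))    # 25.0 = |z1|^2, nonzero since z1 is parallel to itself
```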
Given $m$ vectors $v_1, v_2, \ldots, v_m$ in $\mathbb R^n$, a vector orthogonal to all of them is any vector $x$ that solves the matrix equation
$$\begin{pmatrix}v_1^T \\ v_2^T \\ \vdots \\ v_m^T\end{pmatrix} x = 0.$$
To put this a bit more concretely, suppose
$$v_1 = \begin{pmatrix}v_{11} \\ v_{12} \\ \vdots \\ v_{1n}\end{pmatrix},\quad
v_2 = \begin{pmatrix}v_{21} \\ v_{22} \\ \vdots \\ v_{2n}\end{pmatrix},\quad
\ldots,\quad
v_m = \begin{pmatrix}v_{m1} \\ v_{m2} \\ \vdots \\ v_{mn}\end{pmatrix},\quad
\mbox{and}\quad
x = \begin{pmatrix}x_{1} \\ x_{2} \\ \vdots \\ x_{n}\end{pmatrix}$$
where the numbers $v_{ij} \in \mathbb R$ are all known
and the numbers $x_i \in \mathbb R$ are all unknown.
Then the matrix equation above can also be written
$$
\begin{pmatrix}
v_{11} & v_{12} & \cdots & v_{1n} \\
v_{21} & v_{22} & \cdots & v_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
v_{m1} & v_{m2} & \cdots & v_{mn}
\end{pmatrix}
\begin{pmatrix}x_1 \\ x_2 \\ \vdots \\ x_n\end{pmatrix}
= \begin{pmatrix}0 \\ 0 \\ \vdots \\ 0\end{pmatrix}.$$
This is equivalent to the system of linear equations
$$
\begin{array}{ccccccccl}
v_{11}x_1 &+& v_{12}x_2 &+& \cdots &+& v_{1n}x_n &=& 0, \\
v_{21}x_1 &+& v_{22}x_2 &+& \cdots &+& v_{2n}x_n &=& 0, \\
\vdots&&\vdots&&\ddots&&\vdots&&\vdots \\
v_{m1}x_1 &+& v_{m2}x_2 &+& \cdots &+& v_{mn}x_n &=& 0.
\end{array}$$
That is, you need to solve a linear system of $m$ equations with $n$ unknowns.
This is something you can do using row reduction.
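For instance, here is a small sketch using sympy (assumed available; its `nullspace` method performs exactly this row reduction and returns a basis of solutions; the sample entries are made up):

```python
from sympy import Matrix

# Rows are the given vectors v_1, ..., v_m (here m = 2, n = 3, sample data).
A = Matrix([[1, 0, 2],
            [0, 1, -1]])

# nullspace() row-reduces A and returns a basis of {x : A x = 0};
# every vector in that set is orthogonal to all the rows of A.
for x in A.nullspace():
    print(x.T)  # Matrix([[-2, 1, 1]])
```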
A nonzero solution, when one exists, is never unique: if the vector $x$ is a solution,
then so is $cx$ for any scalar constant $c$.
If the $m$ vectors include fewer than $n-1$ linearly independent vectors, the solution is not even unique up to a scalar constant:
there are vectors in different directions that are all orthogonal
to the given vectors.
If $m \geq n$ there may be no nonzero solution at all, because
the $m$ vectors may span $\mathbb R^n$.
(For example, the standard basis vectors $e_1, \ldots, e_n$ span $\mathbb R^n$, and only $x = 0$ is orthogonal to all of them.)
There will, however, be nonzero solutions as long as the given vectors do not include
$n$ linearly independent vectors.
In your particular case, if you are not aware that the cross product
of two linearly independent vectors in $\mathbb R^3$ is orthogonal to each of those vectors,
you have
$$v_1 = \begin{pmatrix}v_{11}\\v_{12}\\v_{13}\end{pmatrix}
= \begin{pmatrix}-1\\1\\1\end{pmatrix} \quad \mbox{and} \quad
v_2 = \begin{pmatrix}v_{21}\\v_{22}\\v_{23}\end{pmatrix}
= \begin{pmatrix}\sqrt{2}\\1\\-1\end{pmatrix},$$
so you could solve the system of equations
$$\begin{array}{ccl}
-1\cdot x_1 + 1\cdot x_2 + 1 \cdot x_3 &=& 0, \\
\sqrt{2}\cdot x_1 + 1\cdot x_2 - 1 \cdot x_3 &=& 0.
\end{array}$$
Blindly applying the methods I was taught in high school, I find this is equivalent to
$$\begin{array}{ccccccl}
x_1 &-& x_2 &-& x_3 &=& 0, \\
&&\left(1+\sqrt{2}\right)x_2 &+&\left(-1+\sqrt{2}\right) x_3 &=& 0.
\end{array}$$
At this point we can make an arbitrary choice for $x_3$ and proceed to solve the
equations as a system of two equations in two unknowns.
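For instance, taking $x_3 = 1$, the second equation gives $x_2 = \frac{1-\sqrt{2}}{1+\sqrt{2}} = 2\sqrt{2} - 3$, and the first then gives $x_1 = x_2 + x_3 = 2\sqrt{2} - 2$; any scalar multiple of $\left(2\sqrt{2}-2,\; 2\sqrt{2}-3,\; 1\right)$ is also a solution. A quick sympy check of that arithmetic (again assuming sympy is available):

```python
from sympy import Matrix, expand, sqrt

v1 = Matrix([-1, 1, 1])
v2 = Matrix([sqrt(2), 1, -1])
x  = Matrix([2*sqrt(2) - 2, 2*sqrt(2) - 3, 1])

# Both dot products reduce exactly to 0, so x is orthogonal to v_1 and v_2.
print(expand(v1.dot(x)), expand(v2.dot(x)))  # 0 0
```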
Best Answer
No (unless one of the vectors is the zero vector). In general, the dot product of two nonzero vectors is zero if and only if the two vectors are perpendicular.
One way to see why is to think of the dot product of two vectors $\vec a$ and $\vec b$ as the product of the length of $\vec a$ and the (signed) length of the projection of $\vec b$ onto $\vec a$. If the two vectors are not perpendicular, then the signed length of the projection of $\vec b$ onto $\vec a$ (call it $B$) is a nonzero scalar (it can be negative), and since the length of $\vec a$ (call it $A$) is positive, the product $AB$ of two nonzero numbers cannot be zero.
On the other hand, if the vectors are perpendicular, then the projection of $\vec b$ onto $\vec a$ is a single point, with length $B = 0$, and therefore $AB = A \cdot 0 = 0$.
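In symbols, this is the familiar identity
$$\vec a \cdot \vec b = |\vec a|\,|\vec b|\cos\theta = A\,B, \qquad B = |\vec b|\cos\theta,$$
where $\theta$ is the angle between the two vectors: the product vanishes precisely when $\cos\theta = 0$, that is, when $\theta = \frac{\pi}{2}$.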