This is known as orthogonal diagonalisation, and the process is outlined on Wikipedia.
First write $q(x, y)$ as
$$q(x, y) = [x\quad y]A\left[\begin{array}{c} x\\ y\end{array}\right]$$
where
$$A = \left[\begin{array}{cc} 0 & 3\\ 3 & 0\end{array}\right].$$
Note that $A$ is symmetric. As such, it is orthogonally diagonalisable: there is an orthogonal matrix $P$ (i.e. an invertible matrix satisfying $P^T = P^{-1}$) such that $P^TAP$ is diagonal. To find $P$, find an orthonormal basis of eigenvectors; these form the columns of $P$. The eigenvalues of $A$ are $\lambda_1 = 3$ and $\lambda_2 = -3$, with corresponding eigenvectors $v_1 = [1\quad 1]^T$ and $v_2 = [-1\quad 1]^T$. Since eigenvectors of a symmetric matrix corresponding to distinct eigenvalues are orthogonal, normalising $v_1$ and $v_2$ yields an orthonormal basis $\left\{\frac{1}{\sqrt{2}}v_1, \frac{1}{\sqrt{2}}v_2\right\}$. Therefore
$$P = \frac{1}{\sqrt{2}}\left[\begin{array}{cc} 1 & -1\\ 1 & 1\end{array}\right].$$
Note that
$$P^TAP = \frac{1}{\sqrt{2}}\left[\begin{array}{cc} 1 & 1\\ -1 & 1\end{array}\right]\left[\begin{array}{cc} 0 & 3\\ 3 & 0\end{array}\right]\frac{1}{\sqrt{2}}\left[\begin{array}{cc} 1 & -1\\ 1 & 1\end{array}\right] = \left[\begin{array}{cc} 3 & 0\\ 0 & -3\end{array}\right]$$
which is diagonal.
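As a sanity check, the diagonalisation can be verified numerically. The following is a small NumPy sketch (variable names are mine):

```python
import numpy as np

# The symmetric matrix of the quadratic form q(x, y) = 6xy.
A = np.array([[0.0, 3.0],
              [3.0, 0.0]])

# P built from the normalised eigenvectors above.
P = (1 / np.sqrt(2)) * np.array([[1.0, -1.0],
                                 [1.0,  1.0]])

# P is orthogonal: P^T P = I, so P^T = P^{-1}.
print(np.allclose(P.T @ P, np.eye(2)))   # True

# P^T A P is the diagonal matrix diag(3, -3).
print(np.round(P.T @ A @ P, 10))
```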
Now make the variable substitution
$$\left[\begin{array}{c} \hat{x}\\ \hat{y}\end{array}\right] = P^{-1}\left[\begin{array}{c} x\\ y\end{array}\right].$$
Then we have
\begin{align*}
q(x, y) &= 6xy\\
&= [x\quad y]A\left[\begin{array}{c} x\\ y\end{array}\right]\\
&= \left[\begin{array}{c} x\\ y\end{array}\right]^TA\left[\begin{array}{c} x\\ y\end{array}\right]\\
&= \left(P\left[\begin{array}{c} \hat{x}\\ \hat{y}\end{array}\right]\right)^TA\left(P\left[\begin{array}{c} \hat{x}\\ \hat{y}\end{array}\right]\right)\\
&= [\hat{x}\quad \hat{y}]P^TAP\left[\begin{array}{c} \hat{x}\\ \hat{y}\end{array}\right]\\
&= [\hat{x}\quad \hat{y}]\left[\begin{array}{cc} 3 & 0\\ 0 & -3\end{array}\right]\left[\begin{array}{c} \hat{x}\\ \hat{y}\end{array}\right]\\
&= 3\hat{x}^2 - 3\hat{y}^2.
\end{align*}
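The substitution can likewise be spot-checked at random points. Here is a small sketch, assuming the $P$ from above (names are mine):

```python
import numpy as np

P = (1 / np.sqrt(2)) * np.array([[1.0, -1.0],
                                 [1.0,  1.0]])

rng = np.random.default_rng(0)
for _ in range(5):
    x, y = rng.standard_normal(2)
    # Since P is orthogonal, P^{-1} = P^T, so the hat variables
    # are obtained by multiplying [x, y]^T by P^T.
    xh, yh = P.T @ np.array([x, y])
    assert np.isclose(6 * x * y, 3 * xh**2 - 3 * yh**2)
print("substitution checks out")
```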
Geometrically, $P$ corresponds to a rotation of the coordinate system by $\frac{\pi}{4}$: it is precisely the rotation matrix $\left[\begin{array}{cc} \cos\theta & -\sin\theta\\ \sin\theta & \cos\theta\end{array}\right]$ with $\theta = \frac{\pi}{4}$.
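This identification is easy to confirm numerically (a sketch; `theta` is my name for the angle):

```python
import numpy as np

theta = np.pi / 4
# Standard 2x2 rotation matrix for angle theta.
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
P = (1 / np.sqrt(2)) * np.array([[1.0, -1.0],
                                 [1.0,  1.0]])
print(np.allclose(R, P))   # True
```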
There can't be an orthonormal basis for which the quadratic form has canonical diagonal form, because if there were such a basis $\{v_1,v_2,v_3\}$, then for any unit vector $u = \sum_i \lambda_i v_i$ we would have $$q(u) = \left(\sum_i \lambda_i v_i\right)^T A \left(\sum_i \lambda_i v_i\right) = \begin{bmatrix}\lambda_1 & \lambda_2 & \lambda_3\end{bmatrix} \begin{bmatrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 0
\end{bmatrix}
\begin{bmatrix}\lambda_1\\ \lambda_2 \\ \lambda_3\end{bmatrix} \leq \sum_i \lambda_i^2 = 1$$
But this is false: from your definition of $q$, we see that e.g. $q(0,1,0) = 8 > 1$.
By writing $A$ as a symmetric matrix, you can find an orthonormal basis that diagonalizes it (a symmetric matrix can always be diagonalized by a rotation matrix), but the diagonal entries will then be the eigenvalues of $A$, not $1$.
Best Answer
Your quadratic form is represented by a unique symmetric matrix $S$ in the sense that $F(x)=x\cdot Sx$. In this case, the matrix $S$ is $$ S=\left[\begin{array}{rrr} 17 & -8 & 4 \\ -8 & 17 & -4 \\ 4 & -4 & 11 \end{array}\right] $$ To construct this $S$, note that the $(i, j)$th entry is half the coefficient of $x_ix_j$ in $F(x)$ if $i\neq j$ and equal to the coefficient of $x_ix_j$ if $i=j$.
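To double-check that $S$ encodes $F$ correctly, you can spot-check $F(x) = x\cdot Sx$ numerically. In the sketch below, the coefficients of $F$ are read off from $S$ using the rule just stated (the helper names are mine):

```python
import numpy as np

S = np.array([[17.0, -8.0,  4.0],
              [-8.0, 17.0, -4.0],
              [ 4.0, -4.0, 11.0]])

def F(x1, x2, x3):
    # Diagonal entries of S give the x_i^2 coefficients; each off-diagonal
    # entry is half a cross-term coefficient, so e.g. the x1*x2 term is -16.
    return (17*x1**2 + 17*x2**2 + 11*x3**2
            - 16*x1*x2 + 8*x1*x3 - 8*x2*x3)

rng = np.random.default_rng(1)
for _ in range(5):
    x = rng.standard_normal(3)
    assert np.isclose(F(*x), x @ S @ x)
print("F(x) == x.Sx at all sampled points")
```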
The process of putting $F$ in "canonical form" is sometimes referred to as "completing the square." The strategy here is to diagonalize $S$ as $S=QDQ^\top$ where $Q$ is orthogonal ($Q^\top=Q^{-1}$). We then "change variables" by defining $y=Q^\top x$. Our quadratic form can then be written as $$ F(x) = x\cdot Sx = x^\top QDQ^\top x = (Q^\top x)^\top D (Q^\top x) = y^\top D y = \lambda_1\cdot y_1^2 + \lambda_2\cdot y_2^2 + \lambda_3\cdot y_3^2 $$ where the $\lambda_i$'s are the diagonal entries of $D$ (which are also the eigenvalues of $S$).
To go about this process, we need to find orthonormal bases for the eigenspaces of $S$. It's not terribly difficult to show that this $S$ has two eigenvalues $\lambda_1=27$ and $\lambda_2=9$. Bases for the eigenspaces are given by \begin{align*} E_{27} &= \operatorname{Null}(S-27\cdot I_3)=\operatorname{Span}\{\left\langle2,\,-2,\,1\right\rangle\} \\ E_{9} &= \operatorname{Null}(S-9\cdot I_3) = \operatorname{Span}\{\left\langle1,\,0,\,-2\right\rangle, \left\langle0,\,1,\,2\right\rangle\} \end{align*} Note, however, that these bases are not orthonormal. To make them orthonormal, we can apply the Gram-Schmidt algorithm. Can you carry out this algorithm and finish "completing the square"?
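If you want to check your hand computation along the way, the whole process can be sketched in NumPy. Here `eigvalsh` confirms the eigenvalues, and a single Gram-Schmidt step orthogonalises the $E_9$ basis (variable names and the ordering of columns in $Q$ are my choices):

```python
import numpy as np

S = np.array([[17.0, -8.0,  4.0],
              [-8.0, 17.0, -4.0],
              [ 4.0, -4.0, 11.0]])

# Eigenvalues of S: 9, 9, 27 (eigvalsh returns them in ascending order).
print(np.round(np.linalg.eigvalsh(S)))

# Gram-Schmidt on the E_9 basis {(1, 0, -2), (0, 1, 2)}.
u1 = np.array([1.0, 0.0, -2.0])
w2 = np.array([0.0, 1.0,  2.0])
u2 = w2 - (w2 @ u1) / (u1 @ u1) * u1     # subtract projection onto u1
q1 = u1 / np.linalg.norm(u1)
q2 = u2 / np.linalg.norm(u2)
q3 = np.array([2.0, -2.0, 1.0]) / 3.0    # normalised E_27 eigenvector

Q = np.column_stack([q3, q1, q2])        # orthogonal Q with S = Q D Q^T
D = Q.T @ S @ Q
print(np.round(D, 10))                   # diag(27, 9, 9)
```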