Add the last row multiplied by $-1$ to all other rows; this gives
$$\det V=\left|\begin{array}{ccccc}
0 & x_1-x_n & x_1^2-x_n^2 & \cdots & x_1^{n-1}-x_n^{n-1} \\
0 & x_2-x_n & x_2^2-x_n^2 & \cdots & x_2^{n-1}-x_n^{n-1} \\
\vdots & \vdots & \vdots & & \vdots \\
0 & x_{n-1}-x_n & x_{n-1}^2-x_n^2 & \cdots & x_{n-1}^{n-1}-x_n^{n-1} \\
1 & x_n & x_n^2 & \cdots & x_n^{n-1} \\
\end{array}\right|$$
Expanding along the first column (the single $1$ sits in position $(n,1)$, so its cofactor carries the sign $(-1)^{n+1}$), this equals
$$(-1)^{n+1}\left|\begin{array}{cccc}
x_1-x_n & x_1^2-x_n^2 & \cdots & x_1^{n-1}-x_n^{n-1} \\
x_2-x_n & x_2^2-x_n^2 & \cdots & x_2^{n-1}-x_n^{n-1} \\
\vdots & \vdots & & \vdots \\
x_{n-1}-x_n & x_{n-1}^2-x_n^2 & \cdots & x_{n-1}^{n-1}-x_n^{n-1} \\
\end{array}\right|$$
Now factor $x_i-x_n$ out of the $i$th row, using $x^m-y^m=(x-y)\sum\limits_{k=0}^{m-1}x^{m-1-k}y^{k}$:
$$(-1)^{n+1}\prod\limits_{k=1}^{n-1}(x_k-x_n)\left|\begin{array}{ccccc}
1 & x_1+x_n & x_1^2+x_1x_n+x_n^2 & \cdots & \sum\limits_{k=0}^{n-2}x_1^{n-2-k}x_n^{k} \\
1 & x_2+x_n & x_2^2+x_2x_n+x_n^2 & \cdots & \sum\limits_{k=0}^{n-2}x_2^{n-2-k}x_n^{k} \\
\vdots & \vdots & \vdots & & \vdots \\
1 & x_{n-1}+x_n & x_{n-1}^2+x_{n-1}x_n+x_n^2 & \cdots & \sum\limits_{k=0}^{n-2}x_{n-1}^{n-2-k}x_n^{k} \\
\end{array}\right|
=\prod\limits_{k=1}^{n-1}(x_n-x_k)\,\det V_1,$$
since $(-1)^{n+1}\prod_{k=1}^{n-1}(x_k-x_n)=(-1)^{n+1}(-1)^{n-1}\prod_{k=1}^{n-1}(x_n-x_k)=\prod_{k=1}^{n-1}(x_n-x_k)$.
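As a quick sanity check of this factoring step (not part of the proof), the $n=3$ case can be verified symbolically, e.g. with sympy:

```python
import sympy as sp

# n = 3 case: det V = (x3 - x1)(x3 - x2) * det V1,
# where V1 has rows (1, x_i + x3) for i = 1, 2.
x1, x2, x3 = sp.symbols('x1 x2 x3')
V = sp.Matrix([[1, x1, x1**2],
               [1, x2, x2**2],
               [1, x3, x3**2]])
V1 = sp.Matrix([[1, x1 + x3],
                [1, x2 + x3]])
lhs = sp.expand(V.det())
rhs = sp.expand((x3 - x1) * (x3 - x2) * V1.det())
assert sp.simplify(lhs - rhs) == 0  # the two sides agree identically
```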
In $V_1$, first add the 1st column multiplied by $-x_n$ to the 2nd column, the 1st column multiplied by $-x_n^2$ to the 3rd column, ..., and the 1st column multiplied by $-x_n^{n-2}$ to the $(n-1)$st column. This gives
$$\det V_1=\left|\begin{array}{ccccc}
1 & x_1 & x_1^2+x_1x_n & \cdots & \sum\limits_{k=0}^{n-3}x_1^{n-2-k}x_n^{k} \\
1 & x_2 & x_2^2+x_2x_n & \cdots & \sum\limits_{k=0}^{n-3}x_2^{n-2-k}x_n^{k} \\
\vdots & \vdots & \vdots & & \vdots \\
1 & x_{n-1} & x_{n-1}^2+x_{n-1}x_n & \cdots & \sum\limits_{k=0}^{n-3}x_{n-1}^{n-2-k}x_n^{k} \\
\end{array}\right|$$
Then add the 2nd column multiplied by $-x_n$ to the 3rd column, ..., and the 2nd column multiplied by $-x_n^{n-3}$ to the $(n-1)$st column. This gives
$$\det V_1=\left|\begin{array}{ccccc}
1 & x_1 & x_1^2 & \cdots & \sum\limits_{k=0}^{n-4}x_1^{n-2-k}x_n^{k} \\
1 & x_2 & x_2^2 & \cdots & \sum\limits_{k=0}^{n-4}x_2^{n-2-k}x_n^{k} \\
\vdots & \vdots & \vdots & & \vdots \\
1 & x_{n-1} & x_{n-1}^2 & \cdots & \sum\limits_{k=0}^{n-4}x_{n-1}^{n-2-k}x_n^{k} \\
\end{array}\right|$$
Repeating this process (each step clears one more power of $x_n$ from the remaining columns) and applying the induction hypothesis, we have
$$\det V_1=\left|\begin{array}{ccccc}
1 & x_1 & x_1^2 & \cdots & x_1^{n-2} \\
1 & x_2 & x_2^2 & \cdots & x_2^{n-2} \\
\vdots & \vdots & \vdots & & \vdots \\
1 & x_{n-1} & x_{n-1}^2 & \cdots & x_{n-1}^{n-2} \\
\end{array}\right|=\prod_{1\le i<j\le n-1}(x_j-x_i).$$
So finally
$$\det V=\prod\limits_{k=1}^{n-1}(x_n-x_k)\,\det V_1=\prod_{1\le i<j\le n}(x_j-x_i).$$
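The final identity is easy to sanity-check numerically; a minimal sketch with numpy (the sample points below are arbitrary):

```python
import numpy as np

# Sanity check: det V = prod_{1 <= i < j <= n} (x_j - x_i) for the n x n
# Vandermonde matrix V with rows (1, x_i, x_i^2, ..., x_i^(n-1)).
x = np.array([2.0, -1.0, 3.0, 0.5])   # arbitrary distinct sample points
n = len(x)
V = np.vander(x, increasing=True)      # columns 1, x, x^2, ..., x^(n-1)
det_V = np.linalg.det(V)
prod = np.prod([x[j] - x[i] for i in range(n) for j in range(i + 1, n)])
assert np.isclose(det_V, prod)         # here both equal -67.5
```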
What you're trying to prove is unfortunately not true. A counterexample will be given at the end; I'll explain how I got there.
First of all, the statement makes sense in a world where all matrices are normal:
Normal case
For simplicity, we work over the complex numbers. Let $M^*$ (resp. $x^*$) denote the Hermitian adjoint of a matrix $M$ (resp. column vector $x$). Recall that, by the spectral theorem, a matrix $M$ is normal if and only if it is unitarily diagonalizable; that is, there exist a unitary matrix $U$ and a diagonal matrix $D$ such that $M = U^*DU$.
Proposition. If $M \in \mathbb{C}^{n\times n}$ is normal, then $\text{Re}(x^* M x) \geq 0$ for all $x\in \mathbb{C}^n$ if and only if all eigenvalues of $M$ lie in the closed right half-plane $\{z \in \mathbb{C} \, : \, \text{Re}(z) \geq 0\}$.
Proof. Note that
$$ 2\cdot\text{Re}(x^*Mx) = x^*Mx + (x^*Mx)^* = x^*Mx +x^*M^*x = x^*(M + M^*) x. $$
Therefore we have $\text{Re}(x^*Mx) \geq 0$ for all $x \in \mathbb{C}^n$ if and only if $M + M^*$ is positive semidefinite. Let $M = U^*DU$; then $M + M^* = U^* (D + D^*) U$, so $M + M^*$ is positive semidefinite if and only if all eigenvalues of $M$ (i.e. the entries of $D$) lie in the closed right half-plane. $\quad\Box$
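A small numerical illustration of the proposition (a sketch; the unitary $U$ and the eigenvalues below are arbitrary choices):

```python
import numpy as np

# For a normal M with all eigenvalues in the closed right half-plane,
# M + M* = U*(D + D*)U is positive semidefinite: its eigenvalues are
# 2*Re(eigenvalues of M) >= 0.
rng = np.random.default_rng(0)
Z = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
U, _ = np.linalg.qr(Z)                       # a random unitary matrix
D = np.diag([1 + 2j, 0.5 - 1j, 3j, 2.0])     # eigenvalues, all with Re >= 0
M = U.conj().T @ D @ U                       # normal by construction
assert np.allclose(M @ M.conj().T, M.conj().T @ M)   # M is indeed normal
eigs = np.linalg.eigvalsh(M + M.conj().T)    # eigenvalues of the Hermitian part
assert np.all(eigs >= -1e-10)                # all nonnegative (up to rounding)
```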
If $M$ is real and $x,y\in\mathbb{R}^n$, then $(x - iy)^\top M (x + iy) = x^\top M x + y^\top M y + ix^\top My - iy^\top M x$, so $\text{Re}((x + iy)^* M (x + iy)) = x^\top M x + y^\top M y$. Therefore:
Corollary. If $M \in \mathbb{R}^{n\times n}$ is normal, then $x^\top Mx \geq 0$ for all $x \in \mathbb{R}^n$ if and only if all eigenvalues of $M$ lie in the closed right half-plane $\{z \in \mathbb{C} \, : \, \text{Re}(z) \geq 0\}$.
Now for a solution in a world where all matrices are normal. As you observed, all eigenvalues of $L$ lie in the closed right half-plane, so if $L$ is normal then $x^\top L x \geq 0$ for all $x \in \mathbb{R}^n$. Let $Q^{1/2}$ denote the unique positive semidefinite square root of $Q$. Then for all $y \in \mathbb{R}^n$ we have
$$ y^\top Q^{1/2} L Q^{1/2}y = (Q^{1/2}y)^\top L \, Q^{1/2}y \geq 0, $$
so if $Q^{1/2} L Q^{1/2}$ is normal then all eigenvalues of $Q^{1/2} L Q^{1/2}$ lie in the closed right half-plane. Now we use that $AB$ and $BA$ have the same eigenvalues (where $A = Q^{1/2}L$ and $B = Q^{1/2}$), so we find that the eigenvalues of $QL = Q^{1/2} Q^{1/2} L$ also lie in the closed right half-plane. Thus, if $QL$ is also normal, the conclusion follows.
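The key reduction here, that $Q^{1/2}LQ^{1/2}$ and $QL$ share their spectrum, can be checked numerically; the sketch below builds $Q^{1/2}$ from an eigendecomposition and compares characteristic polynomials (the particular $L$ and $Q$ are arbitrary illustrative choices):

```python
import numpy as np

# AB and BA have the same eigenvalues. With A = Q^{1/2} L and B = Q^{1/2},
# Q^{1/2} L Q^{1/2} and Q L = Q^{1/2} (Q^{1/2} L) share their spectrum.
L = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
Q = np.array([[2.0, 1.0],
              [1.0, 2.0]])               # symmetric positive definite
w, P = np.linalg.eigh(Q)
Q_half = P @ np.diag(np.sqrt(w)) @ P.T   # the unique PSD square root of Q
c1 = np.poly(Q_half @ L @ Q_half)        # characteristic polynomial coefficients
c2 = np.poly(Q @ L)
assert np.allclose(c1, c2)               # same char. poly => same eigenvalues
```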
The crucial property of normal matrices that we used can be formulated in terms of the numerical range: if $M$ is normal, then the numerical range of $M$ is the convex hull of the eigenvalues of $M$. However, this is not always true if $M$ is not normal, and if $n \leq 4$ then this fails for all non-normal matrices (see [MM55] and [Joh76]). In particular, this means that we may have $x^\top Mx < 0$ even if all eigenvalues have positive real part.
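To see concretely how this fails for non-normal matrices, here is a (hypothetical, illustrative) $2\times 2$ example: both eigenvalues equal $1$, yet the quadratic form takes negative values:

```python
import numpy as np

# A non-normal matrix whose eigenvalues both have positive real part,
# yet x^T M x < 0 for some real x.
M = np.array([[1.0, -10.0],
              [0.0,   1.0]])
assert np.allclose(np.linalg.eigvals(M), [1.0, 1.0])  # both eigenvalues are 1
x = np.array([1.0, 1.0])
print(x @ M @ x)  # -> -8.0
```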
Matrices that arise from directed graphs are not typically normal. Furthermore, even if $L$ is normal, the same might not be true for $Q^{1/2}LQ^{1/2}$ or $QL$.¹ So we should start looking for counterexamples. Indeed, after trying a few small cases, I found the following counterexample:
$$ A = \begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix},\quad L = \begin{pmatrix} 1 & 0 & 1 \\ 1 & 2 & 1 \\ 0 & 0 & 0 \end{pmatrix},\quad Q = \begin{pmatrix} 2 & 1 & 1 \\ 1 & 2 & 1 \\ 1 & 1 & 2 \end{pmatrix},\quad x = \begin{pmatrix} 2 \\ 1 \\ -4 \end{pmatrix}. $$
Then $x^\top QL x = -2$.
¹: The product of normal matrices is not necessarily normal. In fact, every square matrix is a product of two normal matrices, via the polar decomposition (a unitary matrix times a positive semidefinite one).
References.
[MM55]: B. N. Moyls, M. D. Marcus, Field convexity of a square matrix, Proceedings of the American Mathematical Society, vol. 6 (1955), issue 6, pp. 981–983. DOI: 10.1090/S0002-9939-1955-0075921-5
[Joh76]: Charles R. Johnson, Normality and the numerical range, Linear Algebra and Its Applications, vol. 15 (1976), issue 1, pp. 89–94. DOI: 10.1016/0024-3795(76)90080-X
Best Answer
Since I found a reference which answers this question (and no one has provided any other solution), I will give a short answer to my own question.
This type of integral has been studied in the context of two-matrix models. The integral in question is discussed in arXiv:0804.0873 (section 3), while a larger class of integrals is discussed in arXiv:0512056 [math-ph] (appendix A).