If the graph is bipartite, choose a bipartition and let $P = P^{-1}$ be the diagonal matrix with $+1$ or $-1$ in the diagonal entries depending on which side of the bipartition we're on.
Because diagonal matrices commute, $PDP^{-1} = PP^{-1}D = D$.
On the other hand, $PAP^{-1} = PAP = -A$: left-multiplying by $P$ negates all the rows corresponding to the $-1$ side of the bipartition, and right-multiplying by $P$ negates all the columns corresponding to the $-1$ side of the bipartition. Each $1$ entry in $A$ corresponds to an edge from the $+1$ side to the $-1$ side, so it gets negated once: either due to its row, or due to its column.
Therefore $P(D+A)P^{-1} = D-A$, and the two matrices are similar.
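This similarity is easy to check numerically. The sketch below uses the 6-cycle as an assumed example of a bipartite graph (the bipartition alternates around the cycle):

```python
import numpy as np

# The 6-cycle as an assumed example of a bipartite graph;
# the bipartition alternates around the cycle.
n = 6
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
D = np.diag(A.sum(axis=1))

# Signature matrix P = P^{-1}: +1 on one side of the bipartition, -1 on the other.
P = np.diag([(-1) ** i for i in range(n)])

assert np.allclose(P @ D @ P, D)    # diagonal matrices commute
assert np.allclose(P @ A @ P, -A)   # every edge gets negated exactly once
assert np.allclose(P @ (D + A) @ P, D - A)

# Similar matrices share a spectrum:
print(np.allclose(np.linalg.eigvalsh(D + A), np.linalg.eigvalsh(D - A)))  # True
```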
For the reverse direction, you already have the idea, but I just want to point out that if $$\sum_{(i,j) \in E(G)} (x_i+x_j)^2 = 0$$ then we must have $x_i + x_j = 0$ for all $(i,j) \in E(G)$.
This forces each $x_i$ to be equal to either $x_1$ or $-x_1$, by applying this identity along a path from vertex $1$ to vertex $i$; in particular, we get $x_1$ if the path has even length and $-x_1$ if the path has odd length. We can take the vertices $i$ with $x_1 = x_i$ to be one side of the bipartition, and the other vertices to be the other side; since $x_i + x_j = 0$ for all $(i,j) \in E(G)$, all edges go from one side to the other.
(Alternatively, we can use the $x_i + x_j = 0$ condition to show that if any vertex $i$ is contained in a closed walk of odd length, then $x_i = 0$; if the graph is not bipartite, every vertex is contained in such a walk, so $x=0$ and we don't get an eigenvector.)
The negative of the graph Laplacian, $-L$, is the generator of a continuous-time Markov chain in which you leave each node at a rate equal to its degree, jumping to a neighboring node chosen uniformly at random. Given an initial distribution $p$ as a row vector, the distribution at time $t$ is $p e^{-Lt}$. Now the small eigenvalues of $L$ correspond to the (relatively) large eigenvalues of $e^{-Lt}$.
There is no probability current into or out of a connected component in a random walk on an undirected graph, so one eigenvector of the whole chain is going to be the invariant distribution concentrated on that connected component, which has eigenvalue $1$ in $e^{-Lt}$ and thus eigenvalue $0$ in $L$. The small but nonzero eigenvalues of $L$ correspond to slow probability currents, e.g. between two sets of vertices, both "large", which are connected by some sort of "chokepoint".
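This correspondence can be made concrete with a small numerical sketch; the graph below (two triangles joined by a single bridge edge) is my own assumed example of a "chokepoint":

```python
import numpy as np

# Two triangles joined by a single "chokepoint" edge (an assumed example).
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
n = 6
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1
L = np.diag(A.sum(axis=1)) - A

# L is symmetric, so e^{-Lt} has the same eigenvectors, with eigenvalues e^{-lambda*t}.
lam = np.linalg.eigvalsh(L)      # ascending order
t = 1.0
heat_eigs = np.exp(-lam * t)     # descending order, since e^{-x} is decreasing

# The zero eigenvalue (one connected component) gives eigenvalue 1 of e^{-Lt} ...
assert np.isclose(lam[0], 0) and np.isclose(heat_eigs[0], 1)
# ... and the small positive eigenvalue coming from the chokepoint gives the
# second-largest eigenvalue of e^{-Lt}, i.e. the slowest-decaying mode.
assert np.all(heat_eigs[:-1] >= heat_eigs[1:])   # the ordering is exactly reversed
```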
There is an analogous line of thinking for discrete time, but then you need to use the random walk normalized Laplacian instead in order to make the comparison quantitative.
tl;dr version: it is because $x \mapsto e^{-x}$ is a decreasing function.
Best Answer
If I haven't made an error in my code, the answer is no. Here is a counterexample:
$L=\begin{bmatrix} 1 & 0 & 0 & -1 & 0 & 0\\ 0 & 3 & 0 & -1 & -1 & -1\\ 0 & 0 & 2 & 0 & -1 & -1\\ -1 & -1 & 0 & 2 & 0 & 0\\ 0 & -1 & -1 & 0 & 2 & 0\\ 0 & -1 & -1 & 0 & 0 & 2\\ \end{bmatrix} $
This graph is connected, and its Laplacian has eigenvalues $\left((5 + \sqrt{17})/2,\ 3,\ 2,\ 2,\ (5 - \sqrt{17})/2,\ 0\right)$.
Now reweight the same edges; the resulting weighted Laplacian is:
$L_w= \begin{bmatrix} 1 & 0 & 0 & -1 & 0 & 0\\ 0 & 1 & 0 & 2 & -4 & 1\\ 0 & 0 & -3 & 0 & 4 & -1\\ -1 & 2 & 0 & -1 & 0 & 0\\ 0 & -4 & 4 & 0 & 0 & 0\\ 0 & 1 & -1 & 0 & 0 & 0\\ \end{bmatrix} $
This has eigenvalues $\left(-6.82741,\ 5.88255,\ -2.55556,\ 1.50042,\ 0,\ 0\right)$ (numerically): $0$ is now a repeated eigenvalue even though the graph is still connected.
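A quick numerical check of the two matrices above (my own verification sketch):

```python
import numpy as np

# The unweighted and reweighted Laplacians from the answer above.
L = np.array([[ 1,  0,  0, -1,  0,  0],
              [ 0,  3,  0, -1, -1, -1],
              [ 0,  0,  2,  0, -1, -1],
              [-1, -1,  0,  2,  0,  0],
              [ 0, -1, -1,  0,  2,  0],
              [ 0, -1, -1,  0,  0,  2]], dtype=float)
L_w = np.array([[ 1,  0,  0, -1,  0,  0],
                [ 0,  1,  0,  2, -4,  1],
                [ 0,  0, -3,  0,  4, -1],
                [-1,  2,  0, -1,  0,  0],
                [ 0, -4,  4,  0,  0,  0],
                [ 0,  1, -1,  0,  0,  0]], dtype=float)

# The unweighted Laplacian of a connected graph has nullity 1 ...
print(np.linalg.matrix_rank(L))    # 5
# ... but this reweighting has nullity 2: 0 is a repeated eigenvalue.
print(np.linalg.matrix_rank(L_w))  # 4
```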
For completeness, here is the Python code I used to find the counterexample:
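(The script itself is missing from the answer as posted; the following is a self-contained sketch of the kind of random search that can find such weightings — an assumed reconstruction, not the original code.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Edge list of the graph above, with vertices numbered 0..5.
edges = [(0, 3), (1, 3), (1, 4), (1, 5), (2, 4), (2, 5)]
n = 6

def weighted_laplacian(weights):
    """Laplacian with (possibly negative) edge weights: entry (i, j) is
    -w_ij for an edge, and each diagonal entry is the sum of incident weights."""
    L = np.zeros((n, n))
    for (i, j), w in zip(edges, weights):
        L[i, j] -= w
        L[j, i] -= w
        L[i, i] += w
        L[j, j] += w
    return L

# Draw random nonzero integer weights until the Laplacian is rank-deficient
# beyond the usual all-ones kernel, i.e. has nullity >= 2.
nonzero = [w for w in range(-5, 6) if w != 0]
for _ in range(10000):
    w = rng.choice(nonzero, size=len(edges))
    if np.linalg.matrix_rank(weighted_laplacian(w)) < n - 1:
        print(w)   # a weighting whose Laplacian has nullity >= 2
        break
```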
Edit
My example has a diagonal entry equal to $0$ by accident (some weights cancel out), but this is not necessary: one can adapt the code accordingly and still get counterexamples.
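For instance (a sketch with my own weights, verified numerically, not taken from the original code), the same graph admits a nullity-2 weighting with every diagonal entry nonzero:

```python
import numpy as np

# Same graph, different weights, chosen so that no diagonal entry cancels.
# These specific weights are my own example, not from the original code.
weights = {(0, 3): 1, (1, 3): 2, (1, 4): 2, (1, 5): -3, (2, 4): 6, (2, 5): -3}
n = 6
L = np.zeros((n, n))
for (i, j), w in weights.items():
    L[i, j] -= w
    L[j, i] -= w
    L[i, i] += w
    L[j, j] += w

assert np.all(np.diag(L) != 0)   # no accidental cancellation on the diagonal
print(np.linalg.matrix_rank(L))  # 4, i.e. nullity 2 despite connectivity
```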