Intuitively, @D.Thomine already explained why the increments are neither independent nor stationary. Another way to see it is by noting that the Ornstein-Uhlenbeck process is characterized by its mean-reversion property: when $V_t$ is far from zero, the drift term $-\beta V_t\,\mathrm dt$ tends to pull the process back toward $0$, since (again heuristically) $\mathbb E[\Delta V_t\mid V_t]=-\beta V_t\,\Delta t$. It thus seems clear, at least intuitively, that the increments are not independent.
We can make the calculations explicit. I am not $100\%$ sure of every step, so do check if something seems off.
The unique solution of the Ornstein-Uhlenbeck stochastic differential equation is
$$
V_t=\nu e^{-\beta t}+\sigma\int_0^te^{-\beta(t-u)}\,\mathrm dW_u,
$$
since an application of Itô's formula yields
$$
\mathrm dV_t=-\beta\nu e^{-\beta t}\,\mathrm dt-\sigma\beta e^{-\beta t}\left(\int_0^te^{\beta u}\,\mathrm dW_u\right)\,\mathrm dt+\sigma\,\mathrm dW_t
=-\beta V_t\,\mathrm dt+\sigma\,\mathrm dW_t.
$$
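As a sanity check on this closed-form solution, one can simulate the SDE by Euler-Maruyama and compare the sample mean and variance at time $t$ with the values it implies, $\mathbb E[V_t]=\nu e^{-\beta t}$ and $\mathrm{Var}(V_t)=\frac{\sigma^2}{2\beta}\left(1-e^{-2\beta t}\right)$. A minimal sketch (the parameter values are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
beta, sigma, nu, t = 1.5, 0.8, 2.0, 1.0
n_steps, n_paths = 1000, 50_000
dt = t / n_steps

# Euler-Maruyama discretization of dV = -beta V dt + sigma dW, V_0 = nu
V = np.full(n_paths, nu)
for _ in range(n_steps):
    V += -beta * V * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)

# Moments implied by V_t = nu e^{-beta t} + sigma int_0^t e^{-beta(t-u)} dW_u
mean_th = nu * np.exp(-beta * t)
var_th = sigma**2 * (1 - np.exp(-2 * beta * t)) / (2 * beta)
print(V.mean(), mean_th)  # both close to 0.446
print(V.var(), var_th)    # both close to 0.203
```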
As you hint in the OP, $(V_t)$ is a Gaussian process, so it has independent increments if and only if $\mathrm{Cov}\,(V_{t+s}-V_s,V_s)=0$ for all $t,s\ge 0$. Dropping the harmless factor $\sigma^2$, this happens if and only if
$$
\mathbb E\left[\left(\int_0^{t+s}e^{-\beta(t+s-u)}\,\mathrm dW_u-\int_0^se^{-\beta(s-u)}\,\mathrm dW_u\right)\left(\int_0^se^{-\beta(s-u)}\,\mathrm dW_u\right)\right]=0.
$$
Each of the terms can be calculated according to the Itô isometry formula:
\begin{align*}
&\mathbb E\left[\left(\int_0^{t+s}e^{-\beta(t+s-u)}\,\mathrm dW_u\right)\left(\int_0^se^{-\beta(s-u)}\,\mathrm dW_u\right)\right]\\
&=e^{-\beta(t+2s)}\mathbb E\left[\left(\int_0^se^{\beta u}\,\mathrm dW_u\right)^2\right]+\mathbb E\left[\left(\int_s^{t+s}e^{-\beta(t+s-u)}\,\mathrm dW_u\right)\left(\int_0^se^{-\beta(s-u)}\,\mathrm dW_u\right)\right]\\
&=e^{-\beta(t+2s)}\frac1{2\beta}\left(e^{2\beta s}-1\right),
\end{align*}
(the second expectation vanishes, since the two stochastic integrals involve increments of $W$ over disjoint time intervals and are therefore independent), and similarly,
$$
\mathbb E\left[\left(\int_0^se^{-\beta(s-u)}\,\mathrm dW_u\right)\left(\int_0^se^{-\beta(s-u)}\,\mathrm dW_u\right)\right]=e^{-2\beta s}\frac1{2\beta}\left(e^{2\beta s}-1\right).
$$
Thus,
$$
\mathrm{Cov}\,(V_{t+s}-V_s,V_s)=-\frac{\sigma^2}{2\beta}\left(1-e^{-2\beta s}\right)\left(1-e^{-\beta t}\right).
$$
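This formula is easy to verify numerically: splitting the stochastic integral at time $s$ gives the decomposition $V_{t+s}=e^{-\beta t}V_s+(\text{Gaussian independent of }V_s)$, which lets us sample the pair $(V_s,V_{t+s})$ exactly. A sketch in Python (parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
beta, sigma, nu = 1.0, 1.0, 0.5
s, t = 0.7, 1.3
n_paths = 400_000

# Exact two-point law via V_{t+s} = e^{-beta t} V_s + independent Gaussian
var_s = sigma**2 * (1 - np.exp(-2 * beta * s)) / (2 * beta)
V_s = nu * np.exp(-beta * s) + np.sqrt(var_s) * rng.standard_normal(n_paths)
var_step = sigma**2 * (1 - np.exp(-2 * beta * t)) / (2 * beta)
V_ts = np.exp(-beta * t) * V_s + np.sqrt(var_step) * rng.standard_normal(n_paths)

emp = np.cov(V_ts - V_s, V_s)[0, 1]
th = -sigma**2 / (2 * beta) * (1 - np.exp(-2 * beta * s)) * (1 - np.exp(-beta * t))
print(emp, th)  # both close to -0.274
```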
At this point, we note that the covariance is negative, as expected from mean reversion. Moreover, for fixed $t$, as $s$ goes to infinity, the covariance converges to $-\frac{\sigma^2}{2\beta}\left(1-e^{-\beta t}\right)$, which is nonzero and depends only on $t$.
Thus, the increments are negatively correlated, and remain so even as $s$ goes to infinity.
Lastly, note that if we start from an initial value $\nu\sim N\left(0,\frac{\sigma^2}{2\beta}\right)$ which is independent of the Wiener process $(W_t)$, then $(V_t)$ is stationary: $V_t\sim N\left(0,\frac{\sigma^2}{2\beta}\right)$ for all $t\ge 0$. In this case,
\begin{align*}
\mathrm{Cov}\,(V_{t+s}-V_s,V_s)&=\mathbb E\left[\left(\nu e^{-\beta(t+s)}-\nu e^{-\beta s}\right)\nu e^{-\beta s}\right]-\frac{\sigma^2}{2\beta}\left(1-e^{-2\beta s}\right)\left(1-e^{-\beta t}\right)\\
&=\frac{\sigma^2}{2\beta}\left(e^{-\beta(t+2s)}-e^{-2\beta s}\right)-\frac{\sigma^2}{2\beta}\left(1-e^{-2\beta s}\right)\left(1-e^{-\beta t}\right)\\
&=-\frac{\sigma^2}{2\beta}\left(1-e^{-\beta t}\right),
\end{align*}
so here also the increments have negative covariance, and incidentally this is exactly the limiting covariance of the initially considered process. Even when the distribution of the process is stationary, the increments remain negatively correlated.
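The stationary case can be checked the same way, sampling the process through its exact Gaussian transition; note that the empirical covariance should now be (approximately) the same for every $s$. A sketch (arbitrary parameters):

```python
import numpy as np

rng = np.random.default_rng(2)
beta, sigma, t = 1.0, 1.0, 1.3
n_paths = 400_000

def step(V, h):
    # Exact OU transition: V_{r+h} = e^{-beta h} V_r + N(0, sigma^2(1-e^{-2 beta h})/(2 beta))
    v = sigma**2 * (1 - np.exp(-2 * beta * h)) / (2 * beta)
    return np.exp(-beta * h) * V + np.sqrt(v) * rng.standard_normal(V.shape)

th = -sigma**2 / (2 * beta) * (1 - np.exp(-beta * t))  # no dependence on s
for s in (0.5, 3.0):
    # Stationary start: V_0 ~ N(0, sigma^2/(2 beta)), independent of W
    V0 = np.sqrt(sigma**2 / (2 * beta)) * rng.standard_normal(n_paths)
    V_s = step(V0, s)
    V_ts = step(V_s, t)
    print(s, np.cov(V_ts - V_s, V_s)[0, 1], th)  # covariances close to -0.364
```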
We can also look at the distribution of $V_{t+s}-V_s$, still under the stationary initial condition. It is clearly Gaussian with mean $0$, and its variance is given by
\begin{align*}
&\mathrm{Var}\,(V_{t+s}-V_s)\\
&=\frac{\sigma^2}{2\beta}\left(\left(e^{-\beta(t+s)}-e^{-\beta s}\right)^2+\left(\left(1-e^{-2\beta(t+s)}\right)+\left(1-e^{-2\beta s}\right)-2e^{-\beta(t+2s)}\left(e^{2\beta s}-1\right)\right)\right)\\
&=\frac{\sigma^2}{\beta}\left(1-e^{-\beta t}\right).
\end{align*}
Hence the increment distribution depends only on $t$: the increments are stationary when the distribution of the process is stationary, and
$$
V_{t+s}-V_s\sim N\left(0,\frac{\sigma^2}{\beta}\left(1-e^{-\beta t}\right)\right).
$$
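A quick numerical check of this increment law under the stationary initial condition (sketch; parameters are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
beta, sigma, s, t = 1.0, 1.0, 0.7, 1.3
n_paths = 400_000

def step(V, h):
    # Exact OU transition over a time step h
    v = sigma**2 * (1 - np.exp(-2 * beta * h)) / (2 * beta)
    return np.exp(-beta * h) * V + np.sqrt(v) * rng.standard_normal(V.shape)

V0 = np.sqrt(sigma**2 / (2 * beta)) * rng.standard_normal(n_paths)  # stationary start
V_s = step(V0, s)
inc = step(V_s, t) - V_s

var_th = sigma**2 / beta * (1 - np.exp(-beta * t))  # independent of s
print(inc.mean(), inc.var(), var_th)  # mean close to 0, variances close to 0.727
```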
Best Answer
I will try to summarize the derivation found in Harrison (1985)*:
Let $X_t$ be standard Brownian motion, i.e. $\mu=0$ and $\sigma=1$, and let $M_t\equiv \sup \left\{ X_s : 0 \le s \le t \right\}$ denote its running maximum.
The derivation is accomplished in two steps.
The first step goes like this. For $x\le y$ and $y\ge 0$ we have:
$$ \begin{align} F_t(x,y) & = P \left\{X_t\leq x,M_t\leq y\right\} \\ & =P \left\{X_t\leq x\right\}-P \left\{X_t\leq x,M_t>y\right\}\\ & =\Phi\left(\frac{x}{\sqrt{t}}\right)-P \left\{X_t\leq x,M_t>y\right\} \end{align} $$ with $\Phi(\cdot)$ denoting the standard normal distribution function.
Now, the term $P\left\{X_t\le x, M_t > y \right\}$ can be calculated heuristically using the so-called reflection principle (note that the restriction $\mu=0$ is critical here): for every sample path of $X$ that hits level $y$ before time $t$ but finishes below level $x$ at time $t$, there is an equally probable reflected path (depicted by the dotted line in Harrison's Figure 1) that hits $y$ before $t$ and then travels upward at least $y - x$ units, finishing above level $y + (y - x) = 2y - x$ at time $t$.
Thus $$ \begin{align} P\left\{X_t\le x, M_t > y \right\} & = P\left\{X_t\ge 2y-x \right\} \\ & = P\left\{X_t\le x-2y \right\} = \Phi\left(\frac{x-2y}{\sqrt{t}}\right), \end{align} $$ where the second equality uses the symmetry $X_t\overset{d}{=}-X_t$.
(This argument can be made rigorous using the strong Markov property.)
So we have:
$$ P \left\{X_t\leq x,M_t\leq y\right\}=\Phi\left(\frac{x}{\sqrt{t}}\right)-\Phi\left(\frac{x-2y}{\sqrt{t}}\right) $$
and as a corollary:
$$ P \left\{X_t\in d x,M_t\le y\right\}=g_t(x,y) d x $$ where $g_t(x,y)=\frac{1}{\sqrt{t}}\left(\phi\left(\frac{x}{\sqrt{t}}\right)-\phi\left(\frac{x-2y}{\sqrt{t}}\right)\right)$ and $\phi(\cdot)$ is the standard normal density.
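The reflection-principle formula can be checked by brute-force simulation of discretized Brownian paths. Note the discrete-time maximum slightly undershoots the true running maximum, so a small positive bias is expected; the time step below is fine enough that it is negligible. A sketch with arbitrary $t$, $x$, $y$:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(4)
t, x, y = 1.0, 0.3, 1.0
n_steps, n_paths = 2000, 40_000
dt = t / n_steps

Phi = lambda z: 0.5 * (1 + erf(z / sqrt(2)))  # standard normal cdf

# Simulate standard Brownian paths, tracking the running maximum
X = np.zeros(n_paths)
M = np.zeros(n_paths)
for _ in range(n_steps):
    X += sqrt(dt) * rng.standard_normal(n_paths)
    np.maximum(M, X, out=M)

emp = np.mean((X <= x) & (M <= y))
th = Phi(x / sqrt(t)) - Phi((x - 2 * y) / sqrt(t))
print(emp, th)  # both close to 0.573
```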
With this result in hand, we can proceed to the second step, where we calculate the joint distribution of $X_t$ and $M_t$ for general values of $\mu$ and $\sigma$. For this we will need to perform a change of measure:
$$ P\left\{X_t\in d x,M_t\le y\right\}=f_t(x,y) d x $$ where $f_t(x,y)$ is obtained from $g_t$ through a Girsanov change of measure (the Radon-Nikodym derivative introduces the drift) together with a rescaling of space by $\sigma$:
$$ f_t(x,y)=\frac{1}{\sigma}\exp\left(\frac{\mu x}{\sigma^2}-\frac{\mu^2 t}{2\sigma^2} \right) g_t\left(\frac{x}{\sigma},\frac{y}{\sigma}\right) $$ and $g_t(\cdot,\cdot)$ being defined above.
After some algebraic simplification, $f_t(x,y)$ takes the form of the density from the question.
From there we can perform the integration to get the following result:
$$ F_t(x,y) = \int_{-\infty}^xf_t(z,y) dz = \Phi\left(\frac{x-\mu t}{\sigma\sqrt{t}}\right)-e^{2\mu y/\sigma^2}\Phi\left(\frac{x-2y-\mu t}{\sigma \sqrt{t}}\right) $$
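The drifted formula can be verified the same way as the driftless one, by simulating discretized $(\mu,\sigma)$-Brownian paths and their running maximum (sketch; the values of $\mu$, $\sigma$, $x$, $y$ are arbitrary):

```python
import numpy as np
from math import erf, sqrt, exp

rng = np.random.default_rng(5)
mu, sigma, t, x, y = 0.4, 1.2, 1.0, 0.5, 1.0
n_steps, n_paths = 2000, 40_000
dt = t / n_steps

Phi = lambda z: 0.5 * (1 + erf(z / sqrt(2)))  # standard normal cdf

# Simulate (mu, sigma) Brownian paths, tracking the running maximum
X = np.zeros(n_paths)
M = np.zeros(n_paths)
for _ in range(n_steps):
    X += mu * dt + sigma * sqrt(dt) * rng.standard_normal(n_paths)
    np.maximum(M, X, out=M)

emp = np.mean((X <= x) & (M <= y))
th = (Phi((x - mu * t) / (sigma * sqrt(t)))
      - exp(2 * mu * y / sigma**2) * Phi((x - 2 * y - mu * t) / (sigma * sqrt(t))))
print(emp, th)  # both close to 0.434
```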
It is straightforward to adapt this to a process starting at a level $\alpha$: replace $x$ by $x-\alpha$ and $y$ by $y-\alpha$.
If we define $\tau_y$ as the first $t$ at which $X_t=y$ (possibly $+\infty$ if $\mu<0$), then obviously $\tau_y > t \iff M_t < y$. Letting $x \nearrow y$ gives:
$$ \begin{align} P\left\{\tau_y > t\right\} &= P \left\{M_t < y \right\} = F_t(y, y)\\ &= \Phi\left(\frac{y-\mu t}{\sigma\sqrt{t}}\right)-e^{2\mu y/\sigma^2}\Phi\left(\frac{-y-\mu t}{\sigma \sqrt{t}}\right) \end{align} $$
for $y>0$.
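As a consistency check, for $\mu<0$ this survival probability does not vanish as $t\to\infty$: both $\Phi$ terms tend to $1$, so $P\left\{\tau_y=+\infty\right\}=1-e^{2\mu y/\sigma^2}$, the classical probability that a negatively drifted Brownian motion never reaches the level $y>0$. Numerically (arbitrary parameters):

```python
from math import erf, sqrt, exp

Phi = lambda z: 0.5 * (1 + erf(z / sqrt(2)))  # standard normal cdf

def survival(t, y, mu, sigma):
    # P{tau_y > t} for y > 0, from the formula above
    return (Phi((y - mu * t) / (sigma * sqrt(t)))
            - exp(2 * mu * y / sigma**2) * Phi((-y - mu * t) / (sigma * sqrt(t))))

mu, sigma, y = -0.3, 1.0, 0.8
# The survival probability decreases in t toward 1 - e^{2 mu y / sigma^2}
for t in (1.0, 10.0, 100.0):
    print(t, survival(t, y, mu, sigma))
print(1 - exp(2 * mu * y / sigma**2))  # limiting value, close to 0.381
```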
* Harrison, J. M. (1985). Brownian motion and stochastic flow systems (pp. 89-91). New York: Wiley.