When $X$ is normally distributed with known mean $M_1$ and covariance $\Sigma_1$ or with mean $M_2$ and covariance $\Sigma_2$, as indicated in comments to the question, then $V^{\ '}X$ is normally distributed either with mean $\mu_1 = V^{\ '} M_1$ and variance $\sigma_1^2 = V^{\ '} \Sigma_1 V$ or with mean $\mu_2 = V^{\ '} M_2$ and variance $\sigma_2^2 = V^{\ '} \Sigma_2 V$; assume $\mu_2 \gt \mu_1$. We might then care to optimize the chance of correct classification. This can be done provided we stipulate a prior distribution for the two classes. Let $\pi_1$ be the prior probability of class 1, $\pi_2$ that of class 2, and $\phi$ the standard normal pdf. The posterior probabilities of the classes are equal (and therefore $x$ is at the threshold) when
$$f(x) = \frac{\pi_1}{\sigma_1}\,\phi\!\left(\frac{x - \mu_1}{\sigma_1}\right) - \frac{\pi_2}{\sigma_2}\,\phi\!\left(\frac{x - \mu_2}{\sigma_2}\right) = 0.$$
There will be at most one zero of $f$ between $x = \mu_1$ and $x = \mu_2$. (When the zeros lie outside this interval we might question the utility of this classifier.) Assuming one exists and choosing $v_0$ to be the negative of this zero gives a linear classifier $X \to V^{\ '}X + v_0$ that, when negative, indicates class 1 is more likely than class 2 and, when positive, indicates class 2 is more likely than class 1.
A simple case arises when the two classes are taken to be equally likely, $\pi_1 = \pi_2 = 1/2,$ and the variances are equal, $\sigma_1 = \sigma_2$, for then it is clear from the symmetry and unimodality of $\phi$ that $v_0 = -(\mu_1 + \mu_2)/2$. Note, though, that in general it is not the case that the zero equals $\pi_1 \mu_1 + \pi_2 \mu_2$ (although that might be a good starting guess in a systematic search for the zero).
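Such a systematic search is easy to carry out numerically. Here is a minimal sketch (the function names and the example parameters are mine, not from the question) that bisects for the zero of $f(x) = \pi_1 f_1(x) - \pi_2 f_2(x)$, where $f_i$ is the $N(\mu_i, \sigma_i^2)$ density of $V^{\ '}X$ under class $i$:

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def threshold(mu1, sigma1, mu2, sigma2, pi1=0.5, pi2=0.5, tol=1e-12):
    """Bisect for the zero of f(x) = pi1*f1(x) - pi2*f2(x) on [mu1, mu2].

    Assumes mu1 < mu2 and that f changes sign on the interval; when it
    does not, the zero lies outside [mu1, mu2] and the assert fires.
    """
    f = lambda x: (pi1 * normal_pdf(x, mu1, sigma1)
                   - pi2 * normal_pdf(x, mu2, sigma2))
    lo, hi = mu1, mu2
    assert f(lo) > 0 > f(hi), "no sign change on [mu1, mu2]"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Equal priors and equal variances: the zero is the midpoint (mu1 + mu2)/2.
print(threshold(0.0, 1.0, 2.0, 1.0))  # prints a value very close to 1.0
```

With unequal variances the zero inside $(\mu_1, \mu_2)$ is still found by the same sign-change argument, but it is no longer the midpoint.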
Define a random variable $C\in\{1,2\}$ with prior distribution $\mu_C$ given by
$$
\mu_C(A) = P\{C\in A\} = \frac{1}{2} I_A(1) + \frac{1}{2} I_A(2) \, ,
$$
where $A$ is any subset of $\{1,2\}$.
Use the notation $X=(X_1,X_2)$ and $x=(x_1,x_2)$. Suppose that
$$X\mid C=1\sim N(\mu_1,\Sigma_1)\, ,$$
$$X\mid C=2\sim N(\mu_2,\Sigma_2)\, ,$$
where $\mu_1=(2, 2)^\top$, $\Sigma_1=\textrm{diag}(2,1)$, $\mu_2=(2,4)^\top$ and $\Sigma_2=\textrm{diag}(4,2)$.
Now, study this
http://en.wikipedia.org/wiki/Multivariate_normal_distribution
to understand that
$$
f_{X\mid C}(x\mid 1) = \frac{1}{2\pi\sqrt{2}} \exp\left(-\frac{1}{2}\left(\frac{(x_1-2)^2}{2} + \frac{(x_2-2)^2}{1} \right)\right) \, ,
$$
$$
f_{X\mid C}(x\mid 2) = \frac{1}{4\pi\sqrt{2}} \exp\left(-\frac{1}{2}\left(\frac{(x_1-2)^2}{4} + \frac{(x_2-4)^2}{2} \right)\right) \, .
$$
Using Bayes' theorem, we have
$$
P\{C=1\mid X=x\} = \frac{\int_{\{1\}} f_{X\mid C}(x\mid c) \,d\mu_C(c)}{\int_{\{1,2\}} f_{X\mid C}(x\mid c)\, d\mu_C(c)} = \frac{\frac{1}{2} f_{X\mid C}(x\mid 1)}{\frac{1}{2} f_{X\mid C}(x\mid 1) + \frac{1}{2} f_{X\mid C}(x\mid 2)} \, .
$$
The idea is to decide in favor of class $1$ if
$$
P\{C=1\mid X=x\} = \frac{1}{1+\frac{f_{X\mid C}(x\mid 2)}{f_{X\mid C}(x\mid 1)}} > \frac{1}{2} \, ,
$$
which is equivalent to
$$
\frac{f_{X\mid C}(x\mid 2)}{f_{X\mid C}(x\mid 1)} < 1 \, ,
$$
or
$$
\log f_{X\mid C}(x\mid 2) - \log f_{X\mid C}(x\mid 1) < 0 \, ,
$$
which gives us
$$
\log \frac{1}{2} - \frac{(x_1-2)^2}{8} - \frac{(x_2-4)^2}{4} + \frac{(x_1-2)^2}{4} + \frac{(x_2-2)^2}{2} < 0 \, . \qquad (*)
$$
Collecting terms (note that $2(x_2-2)^2 - (x_2-4)^2 = x_2^2 - 8$), you decide that the point $x$ belongs to classification $1$ if it is inside the ellipse defined by
$$
\frac{(x_1-2)^2}{8(2+\log 2)} + \frac{x_2^2}{4(2+\log 2)} = 1 \, ;
$$
otherwise, you decide for classification $2$.
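As a numerical sanity check, one can compare the posterior rule directly with the quadratic boundary obtained from the two log-densities. A minimal sketch (the function names are mine; the densities and parameters are the ones above):

```python
import math

def f1(x1, x2):
    """Density of N((2,2), diag(2,1)) -- class 1."""
    return (math.exp(-0.5 * ((x1 - 2)**2 / 2 + (x2 - 2)**2 / 1))
            / (2 * math.pi * math.sqrt(2)))

def f2(x1, x2):
    """Density of N((2,4), diag(4,2)) -- class 2."""
    return (math.exp(-0.5 * ((x1 - 2)**2 / 4 + (x2 - 4)**2 / 2))
            / (4 * math.pi * math.sqrt(2)))

def posterior_class1(x1, x2):
    """P(C=1 | X=x) under equal priors."""
    return f1(x1, x2) / (f1(x1, x2) + f2(x1, x2))

def ellipse(x1, x2):
    """Quadratic form from log f2 - log f1 < 0, normalized so the
    decision boundary is the level set == 1: < 1 means class 1."""
    c = 2 + math.log(2)
    return (x1 - 2)**2 / (8 * c) + x2**2 / (4 * c)

# The two rules agree: posterior > 1/2 exactly when the quadratic form is < 1.
for x in [(2, 2), (2, 4), (0, 0), (5, 1), (2, -2)]:
    assert (posterior_class1(*x) > 0.5) == (ellipse(*x) < 1)
```

The agreement holds at every point, since both rules are monotone transforms of the same log-density difference.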
Best Answer
For a cost matrix
$$L = \begin{bmatrix} 0 & 0.5 \\ 1 & 0 \end{bmatrix},$$
whose rows index the predicted class ($c_1$, $c_2$) and whose columns index the true class ($c_1$, $c_2$),
the loss of predicting class $c_1$ when the truth is class $c_2$ is $L_{12} = 0.5$, and the loss of predicting class $c_2$ when the truth is class $c_1$ is $L_{21} = 1$. There is no loss for correct predictions, $L_{11} = L_{22} = 0$. The conditional risk $R$ of predicting class $k \in \{c_1, c_2\}$ is then
$$ \begin{align} R(c_1|x) &= L_{11} \Pr (c_1|x) + L_{12} \Pr (c_2|x) = L_{12} \Pr (c_2|x) \\ R(c_2|x) &= L_{22} \Pr (c_2|x) + L_{21} \Pr (c_1|x) = L_{21} \Pr (c_1|x) \end{align} $$ For a reference see these notes on page 15.
In order to minimize the risk/loss you predict $c_1$ if the cost from the mistake of doing so (that's the loss of the wrong prediction times the posterior probability that the prediction is wrong $L_{12} \Pr (c_2|x)$) is smaller than the cost of wrongfully predicting the alternative,
$$ \begin{align} L_{12} \Pr (c_2|x) &< L_{21} \Pr (c_1|x) \\ L_{12} \Pr (x|c_2) \Pr (c_2) &< L_{21} \Pr (x|c_1) \Pr (c_1) \\ \frac{L_{12} \Pr (c_2)}{L_{21} \Pr (c_1)} &< \frac{\Pr (x|c_1)}{ \Pr (x|c_2)} \end{align} $$ where the second line uses Bayes' rule $\Pr (c_2|x) \propto \Pr (x|c_2) \Pr (c_2)$. Given equal prior probabilities $\Pr (c_1) = \Pr (c_2) = 0.5$ you get $$\frac{1}{2} < \frac{\Pr (x|c_1)}{ \Pr (x|c_2)}$$
so you choose to classify an observation as $c_1$ if the likelihood ratio exceeds this threshold. Now it is not clear to me whether you wanted to know the "best threshold" in terms of the likelihood ratio or in terms of the attribute $x$. The answer changes according to the cost function. Using the Gaussian densities in the inequality with $\sigma_1 = \sigma_2 = \sigma$, $\mu_1 = 0$ and $\mu_2 = 1$, $$ \begin{align} \frac{1}{2} &< \frac{\frac{1}{\sqrt{2\pi}\sigma}\exp \left[ -\frac{1}{2\sigma^2}(x-\mu_1)^2 \right]}{\frac{1}{\sqrt{2\pi}\sigma}\exp \left[ -\frac{1}{2\sigma^2}(x-\mu_2)^2 \right]} \\ \log \left(\frac{1}{2}\right) &< \log \left(\frac{1}{\sqrt{2\pi}\sigma}\right) -\frac{1}{2\sigma^2}(x-0)^2 - \left[ \log \left(\frac{1}{\sqrt{2\pi}\sigma}\right) -\frac{1}{2\sigma^2}(x-1)^2 \right] \\ \log \left(\frac{1}{2}\right) &< -\frac{x^2}{2\sigma^2} + \frac{x^2}{2\sigma^2} - \frac{2x}{2\sigma^2} + \frac{1}{2\sigma^2} \\ \frac{x}{\sigma^2} &< \frac{1}{2\sigma^2} - \log \left(\frac{1}{2}\right) \\ x &< \frac{1}{2} - \log \left(\frac{1}{2}\right) \sigma^2 \end{align} $$ so, for equal priors, the decision threshold in terms of $x$ is $x < \frac{1}{2} - \sigma^2 \log \left( \frac{L_{12}}{L_{21}} \right)$. It reduces to the midpoint rule $x_0 = \frac{1}{2}$ exactly when the losses from false predictions are equal, i.e. $L_{12} = L_{21}$, because only then is $\log \left( \frac{L_{12}}{L_{21}} \right) = \log (1) = 0$.
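This threshold algebra is easy to verify numerically. A small sketch (equal priors, $x \mid c_1 \sim N(0, \sigma^2)$, $x \mid c_2 \sim N(1, \sigma^2)$; the helper names are mine) checks that the lower-risk prediction flips exactly at $x_0 = \frac{1}{2} - \sigma^2 \log(L_{12}/L_{21})$:

```python
import math

def risk_threshold(L12, L21, sigma):
    """x below which predicting c1 has lower conditional risk
    (equal priors, x|c1 ~ N(0, sigma^2), x|c2 ~ N(1, sigma^2))."""
    return 0.5 - math.log(L12 / L21) * sigma**2

def normal_pdf(x, mu, sigma):
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

def risks(x, L12, L21, sigma):
    """Conditional risks (R(c1|x), R(c2|x)) under equal priors."""
    p1 = normal_pdf(x, 0.0, sigma)
    p2 = normal_pdf(x, 1.0, sigma)
    pr1, pr2 = p1 / (p1 + p2), p2 / (p1 + p2)
    return L12 * pr2, L21 * pr1

# Symmetric losses: the threshold is the midpoint 1/2, whatever sigma is.
assert risk_threshold(1.0, 1.0, 2.0) == 0.5

# Asymmetric losses L12 = 0.5, L21 = 1 shift the threshold to 1/2 + sigma^2 log 2;
# just below it predicting c1 is cheaper, just above it predicting c2 is cheaper.
t = risk_threshold(0.5, 1.0, 1.0)
r1, r2 = risks(t - 1e-6, 0.5, 1.0, 1.0)
assert r1 < r2
r1, r2 = risks(t + 1e-6, 0.5, 1.0, 1.0)
assert r1 > r2
```

The shift away from the midpoint is toward the class whose misclassification is cheaper: with $L_{12} < L_{21}$ the threshold moves up, enlarging the region classified as $c_1$.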