Define a random variable $C\in\{1,2\}$ with prior distribution $\mu_C$ given by
$$
\mu_C(A) = P\{C\in A\} = \frac{1}{2} I_A(1) + \frac{1}{2} I_A(2) \, ,
$$
where $A$ is any subset of $\{1,2\}$.
Use the notation $X=(X_1,X_2)$ and $x=(x_1,x_2)$. Suppose that
$$X\mid C=1\sim N(\mu_1,\Sigma_1)\, ,$$
$$X\mid C=2\sim N(\mu_2,\Sigma_2)\, ,$$
where $\mu_1=(2, 2)^\top$, $\Sigma_1=\textrm{diag}(2,1)$, $\mu_2=(2,4)^\top$ and $\Sigma_2=\textrm{diag}(4,2)$.
Now, consulting the multivariate normal density at
http://en.wikipedia.org/wiki/Multivariate_normal_distribution
and using the fact that both covariance matrices are diagonal, we obtain
$$
f_{X\mid C}(x\mid 1) = \frac{1}{2\pi\sqrt{2}} \exp\left(-\frac{1}{2}\left(\frac{(x_1-2)^2}{2} + \frac{(x_2-2)^2}{1} \right)\right) \, ,
$$
$$
f_{X\mid C}(x\mid 2) = \frac{1}{4\pi\sqrt{2}} \exp\left(-\frac{1}{2}\left(\frac{(x_1-2)^2}{4} + \frac{(x_2-4)^2}{2} \right)\right) \, .
$$
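As a quick numerical sanity check (not part of the original derivation; a plain-Python sketch with illustrative helper names), the two closed-form densities above agree with the general diagonal-covariance Gaussian density, whose normalizing constants use $\det\Sigma_1 = 2$ and $\det\Sigma_2 = 8$:

```python
import math

def diag_gauss_pdf(x, mu, var):
    """Density of N(mu, diag(var)) at a 2-d point x."""
    det = var[0] * var[1]
    quad = sum((xi - mi) ** 2 / vi for xi, mi, vi in zip(x, mu, var))
    return math.exp(-0.5 * quad) / (2 * math.pi * math.sqrt(det))

def f1(x):
    # Closed form for f_{X|C}(x | 1); det(Sigma_1) = 2
    return math.exp(-0.5 * ((x[0] - 2) ** 2 / 2 + (x[1] - 2) ** 2)) / (2 * math.pi * math.sqrt(2))

def f2(x):
    # Closed form for f_{X|C}(x | 2); det(Sigma_2) = 8 and sqrt(8) = 2*sqrt(2)
    return math.exp(-0.5 * ((x[0] - 2) ** 2 / 4 + (x[1] - 4) ** 2 / 2)) / (4 * math.pi * math.sqrt(2))

for x in [(0.0, 0.0), (2.0, 2.0), (1.5, 3.7)]:
    assert math.isclose(f1(x), diag_gauss_pdf(x, (2, 2), (2, 1)))
    assert math.isclose(f2(x), diag_gauss_pdf(x, (2, 4), (4, 2)))
```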
Using Bayes' theorem, we have
$$
P\{C=1\mid X=x\} = \frac{\int_{\{1\}} f_{X\mid C}(x\mid c) \,d\mu_C(c)}{\int_{\{1,2\}} f_{X\mid C}(x\mid c)\, d\mu_C(c)} = \frac{\frac{1}{2} f_{X\mid C}(x\mid 1)}{\frac{1}{2} f_{X\mid C}(x\mid 1) + \frac{1}{2} f_{X\mid C}(x\mid 2)} \, .
$$
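Because the priors are equal, the $\frac{1}{2}$ factors cancel and the posterior is just the normalized likelihood. A small sketch of this computation (the density helpers below are written out from the formulas above):

```python
import math

def f1(x1, x2):  # f_{X|C}(x | 1)
    return math.exp(-0.5 * ((x1 - 2) ** 2 / 2 + (x2 - 2) ** 2)) / (2 * math.pi * math.sqrt(2))

def f2(x1, x2):  # f_{X|C}(x | 2)
    return math.exp(-0.5 * ((x1 - 2) ** 2 / 4 + (x2 - 4) ** 2 / 2)) / (4 * math.pi * math.sqrt(2))

def posterior_c1(x1, x2):
    # P{C=1 | X=x} with equal priors: the 1/2 prior weights cancel
    return f1(x1, x2) / (f1(x1, x2) + f2(x1, x2))

# At the class-1 mean the posterior favours class 1,
# at the class-2 mean it favours class 2.
assert posterior_c1(2.0, 2.0) > 0.5
assert posterior_c1(2.0, 4.0) < 0.5
```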
The idea is to decide for class $1$ if
$$
P\{C=1\mid X=x\} = \frac{1}{1+\frac{f_{X\mid C}(x\mid 2)}{f_{X\mid C}(x\mid 1)}} > \frac{1}{2} \, ,
$$
which is equivalent to
$$
\frac{f_{X\mid C}(x\mid 2)}{f_{X\mid C}(x\mid 1)} < 1 \, ,
$$
or
$$
\log f_{X\mid C}(x\mid 2) - \log f_{X\mid C}(x\mid 1) < 0 \, ,
$$
which gives us
$$
\log \frac{1}{2} - \frac{(x_1-2)^2}{8} - \frac{(x_2-4)^2}{4} + \frac{(x_1-2)^2}{4} + \frac{(x_2-2)^2}{2} < 0 \, . \qquad (*)
$$
Collecting terms, $\frac{(x_1-2)^2}{4}-\frac{(x_1-2)^2}{8}=\frac{(x_1-2)^2}{8}$ and $\frac{(x_2-2)^2}{2}-\frac{(x_2-4)^2}{4}=\frac{x_2^2-8}{4}$, so $(*)$ becomes
$$
\frac{(x_1-2)^2}{8} + \frac{x_2^2}{4} < 2 + \log 2 \, .
$$
Therefore, you decide that the point $x$ belongs to class $1$ if it is inside the ellipse defined by
$$
\frac{(x_1-2)^2}{8(2+\log 2)} + \frac{x_2^2}{4(2+\log 2)} = 1 \, ,
$$
otherwise, you decide for class $2$.
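The whole decision rule can be checked numerically. A minimal sketch (helper names are illustrative) verifying that the log-density rule agrees with the quadratic form $\frac{(x_1-2)^2}{8} + \frac{x_2^2}{4} < 2 + \log 2$ obtained by collecting the terms of $(*)$:

```python
import math, random

def log_f1(x1, x2):  # log f_{X|C}(x | 1)
    return -math.log(2 * math.pi * math.sqrt(2)) - 0.5 * ((x1 - 2) ** 2 / 2 + (x2 - 2) ** 2)

def log_f2(x1, x2):  # log f_{X|C}(x | 2)
    return -math.log(4 * math.pi * math.sqrt(2)) - 0.5 * ((x1 - 2) ** 2 / 4 + (x2 - 4) ** 2 / 2)

def decide_class(x1, x2):
    # Class 1 iff log f(x|2) - log f(x|1) < 0
    return 1 if log_f2(x1, x2) - log_f1(x1, x2) < 0 else 2

def inside_ellipse(x1, x2):
    # Quadratic form obtained by collecting the terms of (*)
    return (x1 - 2) ** 2 / 8 + x2 ** 2 / 4 < 2 + math.log(2)

# The two criteria agree everywhere (boundary points have measure zero)
random.seed(0)
for _ in range(1000):
    x1, x2 = random.uniform(-10, 10), random.uniform(-10, 10)
    assert (decide_class(x1, x2) == 1) == inside_ellipse(x1, x2)
```

Note that class $2$, having the larger variances, wins far from the centre: the ellipse interior is the class-$1$ region.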
Best Answer
Your comments above lead me to believe that you are asking about the distribution of $X$ marginalized over the classes. This marginal distribution is Gaussian in the first dimension but not in the second.
Since the first and second dimensions are independent, they can be treated separately. The means and variances conditional on class in the first dimension are the same for both classes, so the mean and variance marginalized over class are the same as those conditional on class $(\mu_1 = 1,\,\sigma_{1}^2 = 2)$. In the second dimension the marginal distribution is a mixture of Gaussians, which is not itself Gaussian:
$p(x_2) = \frac{1}{2}\text{N}(1,1) + \frac{1}{2}\text{N}(3,1),$
in which $\text{N}(\mu, \sigma^2)$ is the probability density of the Gaussian (aka normal) distribution with mean $\mu$ and variance $\sigma^2$.
The mean of the second dimension variable is
$\text{E}(X_2) = \Pr(C_1)\text{E}(X_2|C_1) + \Pr(C_2) \text{E}(X_2|C_2)$
$\text{E}(X_2) = \frac{1}{2} \cdot 1 + \frac{1}{2} \cdot 3 = 2.$
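This law-of-total-expectation step is trivial to sketch in code (using the answer's second-dimension class means of $1$ and $3$ with equal priors):

```python
# E(X2) = P(C1) E(X2|C1) + P(C2) E(X2|C2)
p = [0.5, 0.5]       # equal class priors
means = [1.0, 3.0]   # E(X2|C1), E(X2|C2) as used in the answer

e_x2 = sum(pi * mi for pi, mi in zip(p, means))
assert e_x2 == 2.0
```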
And now the variance. In general,
$\text{Var}(Y) = \text{E}(Y^2) - \left[\text{E}(Y)\right]^2.$
We'll find it useful to rearrange this as
$\text{E}(Y^2) = \text{Var}(Y) + \left[\text{E}(Y)\right]^2.$
So
$\text{E}(X_2^2) = \Pr(C_1)\text{E}(X_2^2|C_1) + \Pr(C_2) \text{E}(X_2^2|C_2)$
$\text{E}(X_2^2) = \frac{1}{2}\left(\text{Var}(X_2|C_1) + \left[\text{E}(X_2|C_1)\right]^2 + \text{Var}(X_2|C_2) + \left[\text{E}(X_2|C_2)\right]^2\right)$
$\text{E}(X_2^2) = \frac{1}{2}(1 + 1^2 + 1 + 3^2) = 6.$
Now we can get
$\text{Var}(X_2) = \text{E}(X_2^2) - \left[\text{E}(X_2)\right]^2 = 6 - 2^2 = 2.$
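The variance computation above can be sketched the same way (same assumed class parameters as before: means $1$ and $3$, within-class variances $1$, equal priors):

```python
p = [0.5, 0.5]           # class priors
means = [1.0, 3.0]       # E(X2|C_k)
variances = [1.0, 1.0]   # Var(X2|C_k)

# E(X2^2) via E(Y^2) = Var(Y) + [E(Y)]^2, applied within each class
e_x2_sq = sum(pi * (vi + mi ** 2) for pi, mi, vi in zip(p, means, variances))
e_x2 = sum(pi * mi for pi, mi in zip(p, means))
var_x2 = e_x2_sq - e_x2 ** 2

assert e_x2_sq == 6.0
assert var_x2 == 2.0
```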
Here's what the distribution of $X_2$ looks like, along with a Gaussian distribution with the same mean and variance.
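The figure is not reproduced here, but the mismatch is easy to see numerically: at $x_2 = 2$ the mixture density sits below the moment-matched Gaussian, because the mixture spreads its mass around its two component means. A sketch comparing the two densities:

```python
import math

def norm_pdf(x, mu, var):
    """Univariate N(mu, var) density."""
    return math.exp(-((x - mu) ** 2) / (2 * var)) / math.sqrt(2 * math.pi * var)

def mixture_pdf(x):
    # 0.5 * N(1, 1) + 0.5 * N(3, 1): the marginal of X2 in the answer
    return 0.5 * norm_pdf(x, 1.0, 1.0) + 0.5 * norm_pdf(x, 3.0, 1.0)

# Moment-matched Gaussian: same mean (2) and variance (2) as the mixture,
# yet the densities differ, so the mixture is not Gaussian.
assert mixture_pdf(2.0) < norm_pdf(2.0, 2.0, 2.0)
```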