Define a random variable $C\in\{1,2\}$ with prior distribution $\mu_C$ given by
$$
\mu_C(A) = P\{C\in A\} = \frac{1}{2} I_A(1) + \frac{1}{2} I_A(2) \, ,
$$
where $A$ is any subset of $\{1,2\}$.
Use the notation $X=(X_1,X_2)$ and $x=(x_1,x_2)$. Suppose that
$$X\mid C=1\sim N(\mu_1,\Sigma_1)\, ,$$
$$X\mid C=2\sim N(\mu_2,\Sigma_2)\, ,$$
where $\mu_1=(2, 2)^\top$, $\Sigma_1=\textrm{diag}(2,1)$, $\mu_2=(2,4)^\top$ and $\Sigma_2=\textrm{diag}(4,2)$.
From the multivariate normal density (see http://en.wikipedia.org/wiki/Multivariate_normal_distribution), it follows that
$$
f_{X\mid C}(x\mid 1) = \frac{1}{2\pi\sqrt{2}} \exp\left(-\frac{1}{2}\left(\frac{(x_1-2)^2}{2} + \frac{(x_2-2)^2}{1} \right)\right) \, ,
$$
$$
f_{X\mid C}(x\mid 2) = \frac{1}{4\pi\sqrt{2}} \exp\left(-\frac{1}{2}\left(\frac{(x_1-2)^2}{4} + \frac{(x_2-4)^2}{2} \right)\right) \, .
$$
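As a numerical sanity check (my own sketch, not part of the original derivation), the two closed-form densities can be compared against a generic multivariate normal pdf built from the stated means and covariances:

```python
import numpy as np

def mvn_pdf(x, mu, Sigma):
    # Generic multivariate normal density:
    # (2*pi)^(-d/2) |Sigma|^(-1/2) exp(-0.5 (x-mu)' Sigma^(-1) (x-mu))
    d = len(mu)
    diff = x - mu
    quad = diff @ np.linalg.inv(Sigma) @ diff
    return np.exp(-0.5 * quad) / np.sqrt((2.0 * np.pi) ** d * np.linalg.det(Sigma))

def f1(x):
    # f_{X|C}(x | 1) as written above: normalizing constant 1/(2*pi*sqrt(2))
    return np.exp(-0.5 * ((x[0] - 2) ** 2 / 2 + (x[1] - 2) ** 2 / 1)) / (2 * np.pi * np.sqrt(2))

def f2(x):
    # f_{X|C}(x | 2): normalizing constant 1/(4*pi*sqrt(2)) since det(Sigma2) = 8
    return np.exp(-0.5 * ((x[0] - 2) ** 2 / 4 + (x[1] - 4) ** 2 / 2)) / (4 * np.pi * np.sqrt(2))

x = np.array([1.3, 2.7])
print(np.isclose(f1(x), mvn_pdf(x, np.array([2.0, 2.0]), np.diag([2.0, 1.0]))))  # True
print(np.isclose(f2(x), mvn_pdf(x, np.array([2.0, 4.0]), np.diag([4.0, 2.0]))))  # True
```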
Using Bayes' theorem, we have
$$
P\{C=1\mid X=x\} = \frac{\int_{\{1\}} f_{X\mid C}(x\mid c) \,d\mu_C(c)}{\int_{\{1,2\}} f_{X\mid C}(x\mid c)\, d\mu_C(c)} = \frac{\frac{1}{2} f_{X\mid C}(x\mid 1)}{\frac{1}{2} f_{X\mid C}(x\mid 1) + \frac{1}{2} f_{X\mid C}(x\mid 2)} \, .
$$
The idea is to decide in favor of class $1$ if
$$
P\{C=1\mid X=x\} = \frac{1}{1+\frac{f_{X\mid C}(x\mid 2)}{f_{X\mid C}(x\mid 1)}} > \frac{1}{2} \, ,
$$
which is equivalent to
$$
\frac{f_{X\mid C}(x\mid 2)}{f_{X\mid C}(x\mid 1)} < 1 \, ,
$$
or
$$
\log f_{X\mid C}(x\mid 2) - \log f_{X\mid C}(x\mid 1) < 0 \, ,
$$
which gives us
$$
\log \frac{1}{2} - \frac{(x_1-2)^2}{8} - \frac{(x_2-4)^2}{4} + \frac{(x_1-2)^2}{4} + \frac{(x_2-2)^2}{2} < 0 \, . \qquad (*)
$$
Collecting terms (note that $\frac{(x_2-2)^2}{2} - \frac{(x_2-4)^2}{4} = \frac{x_2^2-8}{4}$), $(*)$ is equivalent to $\frac{(x_1-2)^2}{8} + \frac{x_2^2}{4} < 2 + \log 2$. Therefore, you decide that the point $x$ belongs to class $1$ if it lies inside the ellipse defined by
$$
\frac{(x_1-2)^2}{8(2+\log 2)} + \frac{x_2^2}{4(2+\log 2)} = 1 \, ,
$$
otherwise, you decide for class $2$.
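The decision rule can be checked numerically by comparing the two log-densities directly (a sketch with my own function names; the equal priors cancel, so only the likelihoods matter):

```python
import numpy as np

def log_f1(x):
    # log f_{X|C}(x | 1) from the density stated above
    return -np.log(2 * np.pi * np.sqrt(2)) - 0.5 * ((x[0] - 2) ** 2 / 2 + (x[1] - 2) ** 2)

def log_f2(x):
    # log f_{X|C}(x | 2) from the density stated above
    return -np.log(4 * np.pi * np.sqrt(2)) - 0.5 * ((x[0] - 2) ** 2 / 4 + (x[1] - 4) ** 2 / 2)

def classify(x):
    # Decide class 1 exactly when log f(x|2) - log f(x|1) < 0,
    # i.e. when P{C=1 | X=x} > 1/2 under the uniform prior.
    return 1 if log_f2(x) - log_f1(x) < 0 else 2

print(classify(np.array([2.0, 2.0])))  # 1: the class-1 mean is classified as class 1
print(classify(np.array([2.0, 4.0])))  # 2: the class-2 mean is classified as class 2
```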
For a copula that corresponds to a known multivariate distribution, you can simulate from that distribution and then transform the margins to be uniform (e.g. the Gaussian copula or t-copula).
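For the Gaussian copula case, the "simulate, then make the margins uniform" recipe might look like this (the correlation value and function names are illustrative assumptions):

```python
import math
import numpy as np

def std_normal_cdf(z):
    # Phi(z) via the error function; applying the marginal CDF to each
    # coordinate turns a normal margin into a U(0,1) margin.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def gaussian_copula_sample(n, rho, seed=0):
    # Simulate from a bivariate normal with unit variances and correlation rho,
    # then push each coordinate through its own CDF.
    rng = np.random.default_rng(seed)
    cov = [[1.0, rho], [rho, 1.0]]
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    return np.array([[std_normal_cdf(zij) for zij in row] for row in z])

u = gaussian_copula_sample(5000, rho=0.8)
```

The resulting pairs have uniform margins but retain the dependence structure of the underlying normal.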
More generally, if you can work out the conditional distribution (either $C(u\mid v)$ or the density $c(u\mid v)$), you can simulate $V$ from a uniform and then draw $U$ from the conditional: via the inverse CDF if you know $C(u\mid v)$, or via accept-reject (possibly adaptive accept-reject, or some version of the ziggurat) if you only know $c(u\mid v)$.
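As an illustration of the inverse-CDF route, the Clayton copula has a closed-form inverse for its conditional $C(u\mid v)$; the parameter value below is my own choice:

```python
import numpy as np

def clayton_conditional_sample(n, theta, seed=0):
    # V ~ U(0,1); then, given V = v, invert the conditional CDF
    # C(u|v) = dC(u,v)/dv at a fresh uniform w. For the Clayton copula
    # C(u,v) = (u^(-theta) + v^(-theta) - 1)^(-1/theta), this inversion gives
    # u = (v^(-theta) * (w^(-theta/(1+theta)) - 1) + 1)^(-1/theta).
    rng = np.random.default_rng(seed)
    v = rng.uniform(size=n)
    w = rng.uniform(size=n)
    u = (v ** -theta * (w ** (-theta / (1.0 + theta)) - 1.0) + 1.0) ** (-1.0 / theta)
    return u, v

u, v = clayton_conditional_sample(10000, theta=2.0)
```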
In the case of bivariate Archimedean copulas, following Nelsen (1999) or Embrechts et al. (2001), we have the following mechanism for generating from them. Suppose $(U_1,U_2)$ has a two-dimensional Archimedean copula with generator $\phi$. Then:
1. Simulate two independent $U(0,1)$ random variables, $v_1$ and $v_2$.
2. Set $t=K_C^{-1}(v_2)\,$, where $K_C(t)=t-\phi(t)/\phi'(t)$.
3. The desired simulated values are $u_1=\phi^{-1}(v_1\,\phi(t))$ and $u_2=\phi^{-1}((1-v_1)\,\phi(t))$.
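The three steps above can be sketched for the Clayton generator $\phi(t)=(t^{-\theta}-1)/\theta$ (my choice of generator and parameter; $K_C^{-1}$ is obtained numerically by bisection):

```python
import numpy as np

theta = 2.0  # illustrative Clayton parameter

def phi(t):
    return (t ** -theta - 1.0) / theta

def phi_inv(s):
    return (1.0 + theta * s) ** (-1.0 / theta)

def K_C(t):
    # K_C(t) = t - phi(t)/phi'(t); for Clayton, phi'(t) = -t^(-theta-1),
    # so K_C(t) = t + t * (1 - t^theta) / theta.
    return t + t * (1.0 - t ** theta) / theta

def K_C_inv(y, iters=60):
    # K_C increases from 0 to 1 on (0,1), so bisection suffices.
    lo, hi = 1e-12, 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if K_C(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def sample_pair(rng):
    v1, v2 = rng.uniform(), rng.uniform()   # step 1
    t = K_C_inv(v2)                         # step 2
    u1 = phi_inv(v1 * phi(t))               # step 3
    u2 = phi_inv((1.0 - v1) * phi(t))
    return u1, u2

rng = np.random.default_rng(0)
pairs = np.array([sample_pair(rng) for _ in range(4000)])
```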
There are other methods; for example, in some cases it might be practical to use some version of bivariate accept-reject, or to transform to a convenient bivariate distribution to which accept-reject can be applied.
Assuming that the only thing you have is an empirical distribution, the simplest way to go is to draw values from $\hat F$ and reject the draw if $X_1 \leq c_1$ and $X_2 \leq c_2$. A less naive implementation would be to subset the $(X_1, X_2)$ values so as to drop the values below the thresholds and draw from $\hat F_\text{trunc}$, the same way you would with any other discrete distribution. A simple example of this approach in R can be found below.
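The R example itself is not reproduced here; a Python sketch of the same two approaches (the data and thresholds are made up for illustration) might look like:

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in for the observed (X1, X2) sample that defines the empirical F-hat.
data = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.5], [0.5, 1.0]], size=1000)
c1, c2 = 0.0, 0.0  # truncation thresholds (illustrative)

def draw_reject(n):
    # Naive approach: resample rows of the data, rejecting any draw
    # with X1 <= c1 and X2 <= c2.
    out = []
    while len(out) < n:
        row = data[rng.integers(len(data))]
        if not (row[0] <= c1 and row[1] <= c2):
            out.append(row)
    return np.array(out)

# Less naive approach: subset once, then resample from F-hat-trunc directly.
truncated = data[~((data[:, 0] <= c1) & (data[:, 1] <= c2))]

def draw_subset(n):
    return truncated[rng.integers(len(truncated), size=n)]

sample = draw_subset(500)
```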
The above example assumes that you have the full data; however, if the only thing you have is an empirical distribution table with probabilities for the $(x_1,x_2)$ pairs, then the procedure is the same, but you draw the $(x_1,x_2)$ pairs with probabilities $\hat F(x_1,x_2)$, as in the example below.
Drawing from a bivariate distribution does not differ here from drawing from a univariate distribution: the values to be drawn are pairs, or more precisely, indexes of those pairs.
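With only a probability table, the draw reduces to sampling pair indexes with the given weights (the table values here are invented for illustration):

```python
import numpy as np

# Hypothetical empirical probability table over (x1, x2) pairs.
pairs = np.array([(0, 0), (0, 1), (1, 0), (1, 1)])
probs = np.array([0.1, 0.2, 0.3, 0.4])

rng = np.random.default_rng(0)
idx = rng.choice(len(pairs), size=20000, p=probs)  # draw indexes, not values
draws = pairs[idx]                                 # map indexes back to pairs

# Empirical frequencies should approximate the table probabilities.
freq = np.bincount(idx, minlength=len(pairs)) / len(idx)
```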