Suppose we have a continuous function $f(x,y)$ for all $(x,y)\in\mathbb{R}^2$, and we want to compute the finite integral of $f$:
$$\int_{a}^{b}{\int_{p}^{q}{f(x,y) \,dx \,dy}}$$
and we evaluate the integral using Gaussian quadrature, e.g. an $n$-point quadrature in each direction:
$$\int_{a}^{b}{\int_{p}^{q}{f(x,y) \,dx \,dy}} \simeq {\frac{(b-a)(q-p)}{4} \sum_{i=1}^{n}{\sum_{j=1}^{n}{w_i w_j f\left(\frac{q-p}{2} \xi_i + \frac{p+q}{2},\,\frac{b-a}{2}\xi_j+\frac{a+b}{2}\right)}}}.$$
Of course the integration has a numerical error. For a one-dimensional integral, $n$-point Gauss-Legendre quadrature has degree of exactness $2n-1$; what is the degree of accuracy of the $n$-point (per direction) quadrature in this multidimensional case?
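As a quick numerical sanity check of the one-dimensional claim (my own sketch, using NumPy's `leggauss`): the $n$-point rule handles monomials up to degree $2n-1$ but fails at degree $2n$.

```python
import numpy as np

# Check: the n-point Gauss-Legendre rule on [-1, 1] is exact for
# polynomials of degree up to 2n - 1, but not for degree 2n.
n = 4
xi, w = np.polynomial.legendre.leggauss(n)

approx6 = np.sum(w * xi**6)   # exact value of the integral over [-1, 1]: 2/7
approx7 = np.sum(w * xi**7)   # exact value: 0 (odd integrand)
approx8 = np.sum(w * xi**8)   # exact value 2/9; degree 2n, no longer exact
print(approx6 - 2/7, approx7, approx8 - 2/9)
```

The first two differences come out at machine precision, while the degree-$2n$ monomial shows a visible quadrature error.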
Order/Degree of accuracy for multidimensional integration using Gauss-Legendre quadrature
Related Solutions
Quadrature for polynomials
Let's focus on quadratures with $2n$ nodes that integrate polynomials (up to some degree) exactly.
$$ \int_{-1}^1 f(x) dx \approx \sum_{i=1}^{2n} w_i f(x_i), \quad |x_i| < \Delta = 0.85. $$
Due to symmetry one can take $x_{i} = -x_{2n+1 - i}$ and $w_i = w_{2n+1-i}$, so the quadrature becomes $$ \int_{-1}^1 f(x) dx \approx \sum_{i=1}^{n} w_i \left[f(x_i) + f(-x_i)\right], \quad 0 \leq x_i < \Delta = 0.85. $$
Assume the quadrature is precise for polynomials of degree up to $2p + 1$, then $$ \frac{1}{2}\int_{-1}^1 x^{2k} dx = \frac{1}{2k+1} = \sum_{i=1}^{n} w_i x_i^{2k}, \qquad k = 0, 1, \dots, p. $$ Introduce $\xi_i = \frac{x_i^2}{\Delta^2} \in [0, 1]$. Then we have a problem for $\xi_i, w_i$ $$ \sum_{i=1}^n w_i \xi_i^k = \frac{\Delta^{-2k}}{(2k+1)}, \quad k = 0, 1, \dots, p $$
Let's investigate if this problem admits solutions with strictly positive $w_i$.
Case when all $w_i > 0$
If this problem has a solution, then the step function $$ \mu(\xi) = \sum_{i=1}^n w_i \theta(\xi - \xi_i) $$ is a solution of the Hausdorff moment problem $$ \int_0^1 \xi^k d\mu(\xi) = m_k, \quad k = 0, 1, \dots $$ with $$ m_k = \begin{cases} \displaystyle \frac{\Delta^{-2k}}{(2k+1)}, &k = 0, 1, \dots, p\\ \displaystyle \int_0^1 \xi^k d\mu(\xi), &k > p \end{cases}, $$ thus $m_k$ should be completely monotonic. $$ \begin{array}{c|c} k & m_k\\\hline 0 & 1\\ 1 & 0.461361\\ 2 & 0.383137\\ 3 & 0.378781\\ 4 & 0.407761\\ \end{array} $$ It can easily be seen that for $p > 3$ the sequence is not even monotonic (the values increase from $k = 3$ to $k = 4$), so there is no solution with positive $w_i$ for $p > 3$. The solution (one of) for $p = 3$ is simply the 4-point Gaussian quadrature rule $$ w_{1,2} = \frac{18 \pm \sqrt{30}}{36}\\ \xi_{1,2} = \Delta^2 \left(\frac{3}{7} \mp \frac{2}{7}\sqrt\frac{6}{5}\right). $$
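The moment table is easy to reproduce (a small check of mine on the stated values and on the failure of monotonicity):

```python
# Reproduce the table above: m_k = Delta^(-2k) / (2k + 1) for k = 0, ..., 4.
Delta = 0.85

m = [Delta**(-2 * k) / (2 * k + 1) for k in range(5)]
for k, mk in enumerate(m):
    print(f"m_{k} = {mk:.6f}")

# A completely monotonic sequence is in particular non-increasing,
# but m_4 > m_3, so complete monotonicity already fails here.
print(m[4] > m[3])
```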
Case when some $w_i < 0$
In this case one can simply solve the moment system $$ \sum_{i=1}^n w_i \xi_i^k = \frac{\Delta^{-2k}}{(2k+1)}, \quad k = 0, 1, \dots, p. $$ Note that it has $p + 1$ equations and $2n$ unknowns, so one may expect a solution to exist for $2n \geq p + 1$. The case $2n = p + 1$ gives simply the Gaussian quadrature rules, for which the $\xi_i$ lie outside $[0, 1]$ when $p > 3$. When $2n > p + 1$ the solution is not unique, so one may try to minimize the truncation error, which is proportional to $$ \delta_{p+1} = \sum_{i=1}^n w_i \xi_i^{p+1} - \frac{\Delta^{-2p-2}}{(2p+3)}. $$
This leads us to a nonlinear programming problem $$\begin{aligned} \operatorname*{minimize}_{w_i, \xi_i} &&&\delta_{p+1}\\ \operatorname*{subject to} &&&0 \leq \xi_1\\ &&&\xi_1 \leq \xi_2\\ &&&\xi_2 \leq \xi_3\\ &&&\vdots\\ &&&\xi_{n-1} \leq \xi_n\\ &&&\xi_n \leq 1\\ &&&\delta_{k} = 0, \quad k = 0, 1, \dots, p \end{aligned} $$
Also one may try to minimize the absolute values of the weights, since the previous approach may lead to large negative weights $$\begin{aligned} \operatorname*{minimize}_{w_i, \xi_i} &&&\sum_{i=1}^n |w_i|\\ \operatorname*{subject to} &&&0 \leq \xi_1\\ &&&\xi_1 \leq \xi_2\\ &&&\xi_2 \leq \xi_3\\ &&&\vdots\\ &&&\xi_{n-1} \leq \xi_n\\ &&&\xi_n \leq 1\\ &&&\delta_{k} = 0, \quad k = 0, 1, \dots, p \end{aligned} $$
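A simplified sketch of the weight-minimization idea (mine, not the original GAMS/BARON setup): if the nodes are held fixed, here at the $\xi$ values of the $p = 4$ rule reported below, then minimizing $\sum_i |w_i|$ subject to the moment equations becomes a linear program via the standard split $w = u - v$ with $u, v \geq 0$.

```python
import numpy as np
from scipy.optimize import linprog

# With fixed nodes, "minimize sum |w_i| subject to the moment equations"
# is a linear program in (u, v), where w = u - v and u, v >= 0.
Delta = 0.85
p = 4
xi = np.array([1.0, 0.85880349620038677732, 0.52084678062073430276, 0.0])

A = np.vander(xi, p + 1, increasing=True).T        # A[k, i] = xi_i ** k
b = np.array([Delta**(-2 * k) / (2 * k + 1) for k in range(p + 1)])

c = np.ones(2 * len(xi))                           # objective: sum(u) + sum(v)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b, bounds=(0, None))
w = res.x[:len(xi)] - res.x[len(xi):]
print(w)
```

Since these moment equations determine the weights uniquely for this node set, the LP simply recovers the weights of the $p = 4$ rule; the hard (nonconvex) part of the original problem is the simultaneous optimization over the nodes $\xi_i$, which is why a global solver such as BARON was used.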
Using GAMS/BARON I've found some quadratures of this kind (you may also use the NEOS server to solve GAMS models).
For $p = 4$
The problem seems to be infeasible for $n \leq 3$. The $n = 4$ quadrature is $$\begin{aligned} \xi_{1,7}&=1.0000000000000000000\\ \xi_{2,6}&=0.85880349620038677732\\ \xi_{3,5}&=0.52084678062073430276\\ \xi_{4}&=0\\ x_{1,7}&=\pm 0.85000000000000000000\\ x_{2,6}&=\pm 0.78770903638639276855\\ x_{3,5}&=\pm 0.61344258003376366028\\ x_{4}&=0\\ w_{1,7}&=0.83593508805857758387\\ w_{2,6}&=-0.88789837944004855297\\ w_{3,5}&=0.74485659488394778770\\ w_{4}&=0.30710669649752318141\\ \end{aligned} $$
For $p = 5$
The problem seems to be infeasible for $n \leq 3$. The $n = 4$ quadrature is $$ \begin{aligned} \xi_{1,8}&=1.0000000000000000000\\ \xi_{2,7}&=0.95880851739522516021\\ \xi_{3,6}&=0.86778546156594895262\\ \xi_{4,5}&=0.090906828432672730508\\ x_{1,8}&=\pm 0.85000000000000000000\\ x_{2,7}&=\pm 0.83230953005360342218\\ x_{3,6}&=\pm 0.79181752694759044356\\ x_{4,5}&=\pm 0.25628145376247194956\\ w_{1,8}&=5.8486928393996124204\\ w_{2,7}&=-8.6490143669092829039\\ w_{3,6}&=3.2951628882441291627\\ w_{4,5}&=0.50515863926554132074\\ \end{aligned} $$ I suggest not using this quadrature due to the large negative weight $w_{2,7}$.
The $n = 5$ quadrature is $$ \begin{aligned} \xi_{1,10}&=1.0000000000000000000\\ \xi_{2,9}&=0.90991052436123162672\\ \xi_{3,8}&=0.67405321511028038136\\ \xi_{4,7}&=0.38251754788591463719\\ \xi_{5,6}&=0.14666023863496339182\\ x_{1,10}&=\pm 0.85000000000000000000\\ x_{2,9}&=\pm 0.81080845694343238659\\ x_{3,8}&=\pm 0.69785632326230130357\\ x_{4,7}&=\pm 0.52570802575914068441\\ x_{5,6}&=\pm 0.32551808308258552196\\ w_{1,10}&=1.8511750976308568442\\ w_{2,9}&=-2.6071695839739403098\\ w_{3,8}&=1.7504677140719977735\\ w_{4,7}&=-0.84051661731816264992\\ w_{5,6}&=0.84604338958924834195\\ \end{aligned} $$
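The $n = 5$ rule can be verified directly (a check of mine): with the symmetric-pair convention from the start of this answer, $\sum_{i=1}^n w_i x_i^{2k}$ must equal $\frac{1}{2k+1}$ for $k = 0, \dots, 5$, i.e. the rule integrates polynomials up to degree $2p + 1 = 11$ exactly.

```python
import numpy as np

# Nodes (positive representatives of the symmetric pairs) and pair weights
# of the n = 5, p = 5 quadrature above.
x = np.array([0.85000000000000000000, 0.81080845694343238659,
              0.69785632326230130357, 0.52570802575914068441,
              0.32551808308258552196])
w = np.array([1.8511750976308568442, -2.6071695839739403098,
              1.7504677140719977735, -0.84051661731816264992,
              0.84604338958924834195])

# Even moments: sum_i w_i x_i^(2k) should equal 1/(2k+1) for k = 0, ..., 5.
for k in range(6):
    print(k, np.sum(w * x**(2 * k)), 1 / (2 * k + 1))
```

Odd-degree monomials are integrated exactly (to zero) by symmetry, so the even moments are the only nontrivial conditions.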
In case you would like to reproduce the results, here is my Mathematica script.
Quadrature for polynomials with weight
One of the possibilities is to build a special quadrature rule with weight $\omega(x) = \exp\left(-\frac{x^2}{2\sigma^2}\right)$.
The case $\sigma = \infty$ brings us back to the Gauss-Legendre quadrature, while for $\sigma \to 0$ the quadrature nodes cluster around $x = 0$.
To make a weighted quadrature, we need to build an orthogonal polynomial family with respect to the inner product $$ \langle f,g\rangle = \int_{-1}^{1} f(x) g(x) \omega(x) dx. $$
A quick and dirty way is to form the Gram matrix $$ G_{ij} = \int_{-1}^{1} x^{i+j} \omega(x) dx, $$ perform its Cholesky decomposition $$ G = LL^\top\\ M = L^{-1} $$ and the polynomials are $$ P_i(x) = \sum_{j=0}^{n} M_{ij} x^j. $$ Note that the matrix $M$ actually depends on $\sigma$. The nodes of the quadrature are the roots of the polynomial $P_n(x)$, so we want to find the smallest $\sigma$ for which $$ P_n(0.85) = 0. $$ Thus $\pm 0.85$ will be the outermost nodes.
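The Gram-matrix construction can be sketched in Python (my own double-precision transcription; the author used Mathematica, and all names here are mine):

```python
import numpy as np
from scipy.integrate import quad

# Build the Gram matrix of the monomials with respect to the weighted inner
# product, Cholesky-factor it, and read off the orthonormal polynomials P_i.
n = 6
sigma = 0.431946                       # the value reported below for n = 6

def omega(x):
    return np.exp(-x**2 / (2 * sigma**2))

G = np.array([[quad(lambda x, k=i + j: x**k * omega(x), -1, 1)[0]
               for j in range(n + 1)] for i in range(n + 1)])

L = np.linalg.cholesky(G)              # G = L L^T
M = np.linalg.inv(L)                   # P_i(x) = sum_j M[i, j] x^j

# Quadrature nodes are the roots of P_n.
Pn = np.polynomial.Polynomial(M[n])    # coefficients in increasing degree
nodes = np.sort(Pn.roots().real)
print(nodes)
```

To find the $\sigma$ for which $P_n(0.85) = 0$, one would wrap this computation in a scalar root-finder over $\sigma$ (e.g. `scipy.optimize.brentq`); for large $n$ the monomial Gram matrix becomes badly conditioned, so this really is only the quick and dirty version.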
For $n = 6$, solving the equation numerically gives $\sigma = 0.431946$, which corresponds to the quadrature $$ \int_{-1}^1 f(x) \omega(x) dx \approx \sum_{j=1}^6 w_j f(x_j)\\ x_{1,6} = \pm 0.8500000000\\ x_{2,5} = \pm 0.5142302766\\ x_{3,4} = \pm 0.1706206462\\ w_{1,6} = 0.006369511915\\ w_{2,5} = 0.0836555022 \\ w_{3,4} = 0.292371290. $$ Introducing $g(x) = f(x)\omega(x)$ $$ \int_{-1}^1 g(x) dx \approx \sum_{j=1}^6 w_j \frac{g(x_j)}{\omega(x_j)} = \sum_{j=1}^6 \tilde w_j g(x_j)\\ \tilde w_{1,6} = 0.306100114\\ \tilde w_{2,5} = 0.345153691\\ \tilde w_{3,4} = 0.341741013. $$
Note that this quadrature is not particularly good when $g(x)$ itself is a polynomial, but it is exact when $\frac{g(x)}{\omega(x)}$ is a polynomial of degree at most $11$.
Appendix. For $n = 10$ the same method gives $\sigma \approx 0.2565088$ and $$ x_{1,10} = \pm 0.85000000\\ x_{2,9} = \pm 0.63553066\\ x_{3,8} = \pm 0.44273486\\ x_{4,7} = \pm 0.26175323\\ x_{5,6} = \pm 0.086652210\\ \tilde w_{1,10} = 0.22829400\\ \tilde w_{2,9} = 0.20160757\\ \tilde w_{3,8} = 0.18564405\\ \tilde w_{4,7} = 0.17726595\\ \tilde w_{5,6} = 0.17359644. $$
Best Answer
I hope this still helps you: the 2-d quadrature rule you describe can be constructed by applying the 1-d rule first in the $x$-direction, then in the $y$-direction:
Assume $f$ to be sufficiently smooth. Applying the 1-d rule to the inner integral (over $x \in [p,q]$) and then to the outer one gives $$ \int_a^b \int_p^q f(x,y)\,dx\,dy = \frac{q-p}2\int_a^b \sum_{i=1}^n\omega_i f\left(\frac{q-p}2\xi_i+\frac{q+p}2,\,y\right)dy+\mathcal O\left((q-p)^{2n+1}\right)\\ = \frac{(b-a)(q-p)}4\sum_{j=1}^n \sum_{i=1}^n\omega_i\omega_j f\left(\frac{q-p}2\xi_i+\frac{q+p}2,\,\frac{b-a}2\xi_j+\frac{a+b}2\right)+\mathcal O\left((b-a)^{2n+1}\right) + \mathcal O\left((q-p)^{2n+1}\right). $$ This means that you keep the one-dimensional degree of accuracy: the tensor-product rule is exact whenever $f$ is a polynomial of degree at most $2n-1$ in each variable separately.
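This conclusion can be illustrated numerically (my own sketch with NumPy; the monomial and the integration limits are chosen arbitrarily):

```python
import numpy as np

# With n = 3 points per direction, the tensor-product Gauss-Legendre rule
# integrates x^5 * y^5 (degree 2n - 1 = 5 in each variable) exactly.
n = 3
p, q = -1.0, 1.5        # x-limits
a, b = 0.0, 2.0         # y-limits
xi, w = np.polynomial.legendre.leggauss(n)

X = 0.5 * (q - p) * xi + 0.5 * (q + p)    # nodes mapped to [p, q]
Y = 0.5 * (b - a) * xi + 0.5 * (a + b)    # nodes mapped to [a, b]

f = lambda x, y: x**5 * y**5
approx = 0.25 * (b - a) * (q - p) * np.sum(
    np.outer(w, w) * f(X[:, None], Y[None, :]))
exact = (q**6 - p**6) / 6 * (b**6 - a**6) / 6
print(approx, exact)
```

The two values agree to machine precision, while raising either exponent to $2n = 6$ introduces a visible quadrature error.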