The number of constraints does not matter here, as long as there are fewer constraints than variables. If the objective function is strictly concave and the constraints are strictly convex in the variable $\theta$, then there is an easy uniqueness result. (There is also a more complicated result, where you only look at the behavior of these functions along the admissible set, and which involves what is called the bordered Hessian.)
Let $f(\theta)$ be the function you want to maximize, where $\theta=(\theta_1,\dots,\theta_n)$, and let the constraints be $g_1(\theta)=b_1$ and $g_2(\theta)=b_2$. Introduce the Lagrangian function
$$L=L(\theta,\alpha)=f(\theta)-\alpha_1(g_1(\theta)-b_1)-\alpha_2(g_2(\theta)-b_2)$$
The Lagrange conditions are
$$\frac{\partial L}{\partial \theta_i}=0,\qquad i=1,\dots,n,$$
$$\frac{\partial L}{\partial \alpha_j}=0,\qquad j=1,2,$$
which means that the optimal solution $(\theta^*,\alpha^*)$ should be a stationary point for $L$.
Cases where the gradient vectors $\nabla g_1(\theta)$ and $\nabla g_2(\theta)$ are linearly dependent at some admissible point must be examined separately, since Lagrange's method is not guaranteed to find the optimum there.
If $\nabla g_1(\theta)$ and $\nabla g_2(\theta)$ are linearly independent at all admissible points (this is often the case), then the Lagrange conditions above are necessary at an optimal point, and $\alpha^*$ will be uniquely determined.
Now, if $f(\theta)$ is strictly concave, each $g_j(\theta)$ is convex, and the multipliers satisfy $\alpha_j^*\ge 0$ (so that each term $-\alpha_j^*(g_j(\theta)-b_j)$ is concave), then $L(\cdot,\alpha^*)$ is strictly concave in $\theta$, which means it can have at most one stationary point $\theta^*$. Hence the solution, if it exists, is unique.
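As a concrete sketch (not taken from the original question), take the strictly concave objective $f(\theta)=-\|\theta\|^2$ with two linear constraints; linear constraints keep $L(\cdot,\alpha^*)$ strictly concave regardless of the sign of the multipliers, and here the $n+2$ stationarity conditions form a linear system:

```python
import numpy as np

# Maximize f(theta) = -(theta1^2 + theta2^2 + theta3^2)  (strictly concave)
# subject to g1: theta1 + theta2 = 1 and g2: theta2 + theta3 = 1 (linear).
# Setting all five partial derivatives of
#   L = f - alpha1*(g1 - 1) - alpha2*(g2 - 1)
# to zero gives a linear system in (theta1, theta2, theta3, alpha1, alpha2):
A = np.array([
    [-2,  0,  0, -1,  0],   # dL/dtheta1 = -2*theta1 - alpha1          = 0
    [ 0, -2,  0, -1, -1],   # dL/dtheta2 = -2*theta2 - alpha1 - alpha2 = 0
    [ 0,  0, -2,  0, -1],   # dL/dtheta3 = -2*theta3 - alpha2          = 0
    [ 1,  1,  0,  0,  0],   # dL/dalpha1 = 0  <=>  theta1 + theta2 = 1
    [ 0,  1,  1,  0,  0],   # dL/dalpha2 = 0  <=>  theta2 + theta3 = 1
], dtype=float)
b = np.array([0, 0, 0, 1, 1], dtype=float)

theta1, theta2, theta3, alpha1, alpha2 = np.linalg.solve(A, b)
print(theta1, theta2, theta3)  # 1/3, 2/3, 1/3
```

The system has exactly one solution, $\theta^*=(1/3,\,2/3,\,1/3)$, matching the uniqueness argument above.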
The question is ill-posed.
One of the first things you should do before applying Lagrange's method is to verify
that a solution actually exists. In this case, none does.
Fix $z=-1$, then look for solutions of $x+y-1-{1 \over xy} = 4$. Multiplying through by $x$ and rearranging gives
$x^2+x (y-5) -{1 \over y} = 0$, which has the (minus-sign) root
$x= {1 \over 2} \left(5-y -\sqrt{(y-5)^2+{4 \over y}}\right).$
Letting $y \to \infty$, this gives feasible points $(x,y,-1)$ with $x \approx 5-y$, so the distance to the origin is unbounded along the constraint set.
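A quick numerical check of this argument, using the minus-sign root for $x$ derived above:

```python
import math

# With z = -1 fixed, the constraint reduces to x + y - 1 - 1/(x*y) = 4.
# The quadratic formula gives a feasible x for every large y:
def x_of_y(y):
    return 0.5 * (5 - y - math.sqrt((y - 5)**2 + 4 / y))

for y in (10.0, 100.0, 1000.0):
    x = x_of_y(y)
    residual = x + y - 1 - 1 / (x * y) - 4   # should be ~0 at feasible points
    dist = math.sqrt(x*x + y*y + 1)          # distance of (x, y, -1) to origin
    print(y, residual, dist)                 # residual ~0, dist grows with y
```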
The Lagrange multiplier rule holds whenever the constraints satisfy a constraint qualification. In the general case the conditions are called the Karush-Kuhn-Tucker (KKT) conditions, and whenever any of the regularity conditions listed on Wikipedia holds, the KKT conditions hold at an optimum. In your case, with just one active constraint, all those regularity conditions are equivalent: whenever the constraint has a nonzero derivative, the KKT conditions hold. What should you do when the derivative is zero? After testing all KKT points, also test whether some feasible point where the derivative of the constraint vanishes gives a smaller functional value. See the discussion here. In your case, I'll tell you in advance that the derivative never vanishes at feasible points.
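As a hypothetical illustration of what goes wrong when the derivative does vanish (this is not the constraint from your question), consider minimizing $f(x,y)=x$ subject to $g(x,y)=y^2-x^3=0$. The feasible set is a cusp whose leftmost point, the minimizer, is $(0,0)$:

```python
# Minimize f(x, y) = x subject to g(x, y) = y**2 - x**3 = 0 (a cusp).
# The constrained minimizer is (0, 0), but grad g vanishes there, so no
# multiplier lam can satisfy grad f = lam * grad g: the multiplier rule
# misses this optimum, and the point must be examined separately.
def grad_f(x, y):
    return (1.0, 0.0)

def grad_g(x, y):
    return (-3.0 * x**2, 2.0 * y)

x_star, y_star = 0.0, 0.0
print(grad_f(x_star, y_star))   # (1.0, 0.0): nonzero
print(grad_g(x_star, y_star))   # (0.0, 0.0): constraint qualification fails
```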