An optimization problem whose objective function and constraints are multilinear in the decision variables (i.e., sums of products, $f(x_1,\ldots,x_n)=\sum_{I\subseteq\{1,\ldots,n\}} a_I \prod_{i\in I} x_i$, $a_I \in \mathbb{R}$; for example, $f(x_1,x_2,x_3)=3x_1x_2-2x_2x_3+x_1x_2x_3+5$) is difficult to solve, as such problems are non-convex in general and there is no known way to convexify them. Thus the general problem cannot (to my knowledge) be transformed into a linear program, for instance. However, as demonstrated in the earlier post by user p.s., in some special cases this is possible.
There exist algorithms for solving the general problem. One in particular is the Reformulation-Linearization Technique (RLT) of Hanif Sherali and Cihan Tuncbilek (1992), which solves the problem as a sequence of linear programs. The method is based on relaxing the product terms: if there is a product term $x_1 x_2$, it is replaced by a new variable $X_{1,2}$, which is then constrained using the explicit lower and upper bounds on the original variables.
If the original problem has bounds $0 \leq x_1\leq 1$ and $1 \leq x_2\leq 2$, then one can generate a constraint on the product variable $X_{1,2}$ by noting that $(x_1-0)(x_2-1)\geq0$, which in expanded form yields $x_1 x_2 -x_1\geq0$; substituting $x_1 x_2\rightarrow X_{1,2}$ then gives $X_{1,2}-x_1\geq 0$. By using all the bounds of the variables in this way, one can derive further constraints on the product variable, which gives a tighter relaxation of the original problem.
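Carrying this out for all four products of bound factors gives the complete set of constraints on $X_{1,2}$ for this example:
$$\begin{align*}
(x_1-0)(x_2-1)\geq 0 &\;\Rightarrow\; X_{1,2} \geq x_1, \\
(x_1-0)(2-x_2)\geq 0 &\;\Rightarrow\; X_{1,2} \leq 2x_1, \\
(1-x_1)(x_2-1)\geq 0 &\;\Rightarrow\; X_{1,2} \leq x_1 + x_2 - 1, \\
(1-x_1)(2-x_2)\geq 0 &\;\Rightarrow\; X_{1,2} \geq 2x_1 + x_2 - 2.
\end{align*}$$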
The reformulated problem is linear with respect to the original variables $x_i$ and the product variables $X_{i,j}$, and can be solved with a linear programming algorithm. If the solution of the relaxation does not yield a feasible solution to the original problem, then it can be used to divide the search space in two by branching on a variable. Continuing the previous example, if at the optimum of the relaxation $x_1=0.5$, then one could branch by considering two subproblems that are the same as the original except that in the first $0\leq x_1\leq0.5$ and in the second $0.5\leq x_1\leq 1$. Clearly, the optimal solution to the original problem lies in one of these branches. Because the bounds of the variable $x_1$ have changed, the constraints on the product variables change as well, and it can be shown that the product variables converge towards the corresponding product terms as the iteration continues in this fashion.
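To make the relaxation concrete, here is a minimal sketch of the root relaxation only (not the full RLT branch-and-bound algorithm; the toy objective of maximizing $x_1 x_2$ and the choice of scipy.optimize.linprog are mine for illustration):

```python
# Root LP relaxation (sketch): replace x1*x2 by X and add the four
# bound-factor constraints derived above, then solve as a plain LP.
from scipy.optimize import linprog

# Decision vector [x1, x2, X]; toy objective: maximize x1*x2, i.e. maximize X.
c = [0, 0, -1]  # linprog minimizes, so negate

# Bound-factor constraints rewritten in A_ub @ [x1, x2, X] <= b_ub form:
#   X >= x1              ->   x1      - X <= 0
#   X >= 2x1 + x2 - 2    ->  2x1 + x2 - X <= 2
#   X <= 2x1             ->  -2x1     + X <= 0
#   X <= x1 + x2 - 1     ->  -x1 - x2 + X <= -1
A_ub = [[ 1,  0, -1],
        [ 2,  1, -1],
        [-2,  0,  1],
        [-1, -1,  1]]
b_ub = [0, 2, 0, -1]

bounds = [(0, 1), (1, 2), (None, None)]  # 0<=x1<=1, 1<=x2<=2, X free
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
x1, x2, X = res.x
print(x1, x2, X, X - x1 * x2)  # gap is 0 here: the relaxation is exact
```

Here the gap happens to be zero because, for a single bilinear term, these four constraints (the McCormick envelope) are exact at the corners of the box; in general the LP value only bounds the true optimum, and the branching described above tightens it.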
Although conceptually simple, the RLT method is not that straightforward to implement. Thus, for small problem instances, it may be worthwhile to try a commercial numerical non-linear solver (e.g., BARON) or even the Maximize command in Mathematica, which solves the problem symbolically.
References
Sherali, Hanif D., and Cihan H. Tuncbilek. "A global optimization algorithm for polynomial programming problems using a reformulation-linearization technique." Journal of Global Optimization 2.1 (1992): 101-112.
Maybe it's worthwhile to talk through where the dual comes from. This will take a while, but hopefully the dual won't seem so mysterious when we're done.
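For reference, here is the primal we'll be working with (its pieces can be read off from the steps below):
$$\begin{align*}
\text{Maximize }\:\:\:\:\: 5x_1 - 6x_2& \\
\text{subject to }\:\:\:\:\: 2x_1 - x_2& = 1 \\
x_1 + 3x_2& \leq 9 \\
x_1& \geq 0,
\end{align*}$$
with $x_2$ unrestricted in sign.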
Suppose we want to use the primal's constraints as a way to find an upper bound on the optimal value of the primal. If we multiply the first constraint by $9$, the second constraint by $1$, and add them together, we get $9(2x_1 - x_2) + 1(x_1 +3 x_2)$ for the left-hand side and $9(1) + 1(9)$ for the right-hand side. Since the first constraint is an equality and the second is an inequality, this implies $$19x_1 - 6x_2 \leq 18.$$
But since $x_1 \geq 0$, it's also true that $5x_1 \leq 19x_1$, and so $$5x_1 - 6x_2 \leq 19x_1 - 6x_2 \leq 18.$$
Therefore, $18$ is an upper bound on the optimal value of the primal problem.
Surely we can do better than that, though. Instead of just guessing $9$ and $1$ as the multipliers, let's let them be variables. Thus we're looking for multipliers $y_1$ and $y_2$ to force $$5x_1 - 6x_2 \leq y_1(2x_1-x_2) + y_2(x_1 + 3x_2) \leq y_1(1) + y_2(9).$$
Now, in order for this pair of inequalities to hold, what has to be true about $y_1$ and $y_2$? Let's take the two inequalities one at a time.
The first inequality: $5x_1 - 6x_2 \leq y_1(2x_1-x_2) + y_2(x_1 + 3x_2)$
We have to track the coefficients of the $x_1$ and $x_2$ variables separately. First, we need the total $x_1$ coefficient on the right-hand side to be at least $5$. Getting exactly $5$ would be great, but since $x_1 \geq 0$, anything larger than $5$ would also satisfy the inequality for $x_1$. Mathematically speaking, this means that we need $2y_1 + y_2 \geq 5$.
On the other hand, to ensure the inequality for the $x_2$ variable we need the total $x_2$ coefficient on the right-hand side to be exactly $-6$. Since $x_2$ could be positive, we can't go lower than $-6$, and since $x_2$ could be negative, we can't go higher than $-6$ (as the negative value for $x_2$ would flip the direction of the inequality). So for the first inequality to work for the $x_2$ variable, we've got to have $-y_1 + 3y_2 = -6$.
The second inequality: $y_1(2x_1-x_2) + y_2(x_1 + 3x_2) \leq y_1(1) + y_2(9)$
Here we have to track the $y_1$ and $y_2$ variables separately. The $y_1$ variable comes from the first constraint, which is an equality constraint. It doesn't matter if $y_1$ is positive or negative; the equality constraint still holds. Thus $y_1$ is unrestricted in sign. However, the $y_2$ variable comes from the second constraint, which is a less-than-or-equal-to constraint. If we were to multiply the second constraint by a negative number, that would flip its direction and change it to a greater-than-or-equal-to constraint. To keep with our goal of upper-bounding the primal objective, we can't let that happen. So the $y_2$ variable can't be negative; thus we must have $y_2 \geq 0$.
Finally, we want to make the right-hand side of the second inequality as small as possible, as we want the tightest upper bound possible on the primal objective. So we want to minimize $y_1 + 9y_2$.
Putting all of these restrictions on $y_1$ and $y_2$ together, we find that the problem of using the primal's constraints to find the best upper bound on the optimal primal objective entails solving the following linear program:
$$\begin{align*}
\text{Minimize }\:\:\:\:\: y_1 + 9y_2& \\
\text{subject to }\:\:\:\:\: 2y_1 + y_2& \geq 5 \\
-y_1 + 3y_2& = -6\\
y_2 & \geq 0.
\end{align*}$$
And that's the dual.
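If you want a numerical sanity check, here is a short sketch (using scipy.optimize.linprog, with the constraint directions and variable bounds encoded exactly as derived above) confirming that both problems have the same optimal value:

```python
# Sanity check (sketch): solve the primal and dual above with scipy and
# confirm their optimal values agree (strong duality).
from scipy.optimize import linprog

# Primal: maximize 5x1 - 6x2  ->  linprog minimizes, so negate the objective.
primal = linprog(
    c=[-5, 6],
    A_eq=[[2, -1]], b_eq=[1],          # 2x1 - x2  = 1
    A_ub=[[1, 3]],  b_ub=[9],          # x1 + 3x2 <= 9
    bounds=[(0, None), (None, None)],  # x1 >= 0, x2 free
)

# Dual: minimize y1 + 9y2.
dual = linprog(
    c=[1, 9],
    A_ub=[[-2, -1]], b_ub=[-5],        # 2y1 + y2 >= 5, flipped for linprog
    A_eq=[[-1, 3]],  b_eq=[-6],        # -y1 + 3y2 = -6
    bounds=[(None, None), (0, None)],  # y1 free, y2 >= 0
)

print(-primal.fun, dual.fun)  # both print 6.0: the bound is tight
```

Both optima are $6$, attained at $(x_1,x_2)=(0,-1)$ and $(y_1,y_2)=(6,0)$, so the best upper bound the multipliers can produce is exactly the primal optimum: strong duality at work.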
It's probably worth summarizing the implications of this argument for all possible forms of the primal and dual. The following table is taken from p. 214 of Introduction to Operations Research, 8th edition, by Hillier and Lieberman. They refer to this as the SOB method, where SOB stands for Sensible, Odd, or Bizarre, depending on how likely one would be to find that particular constraint or variable restriction in a maximization or minimization problem.
             Primal Problem               Dual Problem
             (or Dual Problem)            (or Primal Problem)
             Maximization                 Minimization

Sensible     <= constraint          paired with   nonnegative variable
Odd           = constraint          paired with   unconstrained variable
Bizarre      >= constraint          paired with   nonpositive variable

Sensible     nonnegative variable   paired with   >= constraint
Odd          unconstrained variable paired with    = constraint
Bizarre      nonpositive variable   paired with   <= constraint
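For instance, applying this to the example above: the primal is a maximization whose first constraint is an equality (odd), so its multiplier $y_1$ is unrestricted, while the second constraint is $\leq$ (sensible), so $y_2 \geq 0$. Likewise, the sensible variable restriction $x_1 \geq 0$ pairs with the sensible dual constraint $2y_1 + y_2 \geq 5$, and the odd unconstrained $x_2$ pairs with the odd dual equality $-y_1 + 3y_2 = -6$, exactly as we derived.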
Best Answer
gt6989b's work isn't correct. The dual problem involves minimizing over the Lagrange multipliers, not maximizing over $x$. Furthermore, to construct the Lagrangian dual problem, you need Lagrange multipliers not just for the quadratic constraint but also for the two nonnegativity constraints.
Note that most texts that talk about convex duality assume the primal problem is a minimization. So the derivations below are the negatives of what you'd do if you constructed the Lagrangian from the equivalent minimization problem (with the negative objective $-3x_1-4x_2$).
The Lagrangian is $$L(x_1,x_2,\lambda_1,\lambda_2,\lambda_3) = 3x_1+4x_2+\lambda_1(25-x_1^2-x_2^2)+\lambda_2x_1+\lambda_3x_2.$$ The Lagrange multipliers $\lambda_1$, $\lambda_2$, and $\lambda_3$ are all nonnegative. The dual function is $$g(\lambda_1,\lambda_2,\lambda_3) = \max_{x_1,x_2} 3x_1+4x_2+\lambda_1(25-x_1^2-x_2^2)+\lambda_2x_1+\lambda_3x_2$$ and the dual problem is $$\begin{array}{ll} \text{minimize} & g(\lambda_1,\lambda_2,\lambda_3) \\ \text{subject to} & \lambda_1,\lambda_2,\lambda_3 \geq 0. \end{array}$$ To complete the construction of the dual problem, you'll want to eliminate $x_1$ and $x_2$ from the dual function $g(\cdot)$ using simple calculus. This will give you an expression for $g$ that is independent of $x_1$ and $x_2$. There might be a couple of division-by-zero issues that you'll want to be careful with.
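In case it helps, here is a sketch of that elimination (my own working; valid when $\lambda_1 > 0$): setting $\partial L/\partial x_1 = 3 - 2\lambda_1 x_1 + \lambda_2 = 0$ and $\partial L/\partial x_2 = 4 - 2\lambda_1 x_2 + \lambda_3 = 0$ gives $x_1 = \frac{3+\lambda_2}{2\lambda_1}$ and $x_2 = \frac{4+\lambda_3}{2\lambda_1}$, and substituting back into $L$ yields $$g(\lambda_1,\lambda_2,\lambda_3) = 25\lambda_1 + \frac{(3+\lambda_2)^2+(4+\lambda_3)^2}{4\lambda_1}.$$ When $\lambda_1 = 0$ the inner maximization is unbounded (the coefficient $3+\lambda_2$ is always positive), so $g = +\infty$ there; that is the division-by-zero issue to watch.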
Alternatively, instead of constructing the full dual problem, you can determine the correct values of $\lambda_1$, $\lambda_2$, and $\lambda_3$ from complementary slackness conditions. This requires solving for the optimal values of $x_1$ and $x_2$, and examining which constraints are active at the solution.
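As a quick check of that route (my own working, assuming the primal is to maximize $3x_1+4x_2$ subject to $x_1^2+x_2^2 \leq 25$ and $x_1,x_2 \geq 0$, which is what the Lagrangian above encodes): the optimum is at $(x_1,x_2)=(3,4)$, where the quadratic constraint is active ($3^2+4^2=25$) and both nonnegativity constraints are inactive. Complementary slackness then forces $\lambda_2=\lambda_3=0$, and stationarity $3-2\lambda_1 x_1 = 0$ gives $\lambda_1 = 1/2$. Indeed $g(1/2,0,0) = 25\cdot\tfrac12 + \tfrac{9+16}{4\cdot 1/2} = 12.5 + 12.5 = 25 = 3(3)+4(4)$, so there is no duality gap.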