As barrycarter suggests, just set the two partial derivatives to 0:
$$\frac {\partial f}{\partial x} = 0, \qquad \frac {\partial f}{\partial y} = 0$$
This gives you a pair of linear equations in $x, y$ that you can solve. The solution will be rational functions of $a, b, c, d, e$ that may or may not satisfy $0 \le x \le 1,\ \ 0 \le y \le 1$, depending on the values. If not, then in turn set $x = 0$, then $x = 1$, then $y = 0$, then $y = 1$. In each case, minimize the function with respect to the other variable. Then select the lowest overall value among the four edge candidates.
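For concreteness, here is a minimal MATLAB sketch of this recipe, assuming (for illustration only) the form $f(x, y) = a x^2 + b y^2 + c x y + d x + e y$ with $a, b > 0$ so that each edge slice is convex; the sample coefficients are made up:

a = 3; b = 2; c = 1; d = -4; e = -3;          % example coefficients (assumed)
f = @(x, y) a*x.^2 + b*y.^2 + c*x.*y + d*x + e*y;

% Interior candidate: solve grad f = 0, a 2x2 linear system
M  = [2*a, c; c, 2*b];
xy = M \ [-d; -e];

% Collect candidates: interior stationary point (if feasible) and the 4 edges
cands = [];
if all(xy >= 0 & xy <= 1)
    cands = [cands, xy];
end
for t = [0, 1]
    % Fix x = t: minimize the 1-D quadratic in y, clamp to [0, 1]
    y = min(max(-(c*t + e) / (2*b), 0), 1);
    cands = [cands, [t; y]];
    % Fix y = t: minimize the 1-D quadratic in x, clamp to [0, 1]
    x = min(max(-(c*t + d) / (2*a), 0), 1);
    cands = [cands, [x; t]];
end

vals = f(cands(1, :), cands(2, :));
[minVal, minIdx] = min(vals);
xyOpt = cands(:, minIdx);                     % minimizer over the unit square

Clamping the 1-D edge minimizers to $[0, 1]$ also covers the corners, so no separate corner check is needed.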
I would use the Accelerated Gradient Descent method.
This way you stay with matrix multiplications only (and exploiting the fact that each $ {A}_{i} $ is symmetric can help with that).
But since the problem is constrained, you actually need the "Projected Sub Gradient" approach, namely projecting each iterate onto the domain (a sketch of the accelerated variant appears after the code below).
The Projection onto the Constraint
Let's define a vector $ z = \left[ {{x}_{1}}^{T}, {{x}_{2}}^{T}, \ldots, {{x}_{K}}^{T} \right]^{T} $.
The constraint is equivalent to:
$$ S z = \boldsymbol{1}_{n}, \qquad S = \underbrace{\left[ {I}_{n}, {I}_{n}, \ldots, {I}_{n} \right]}_{K \text{ times}} $$
Basically, the matrix $ S $ sums the $ i $-th elements of all the $ x $ vectors.
Now, given a vector $ y \in \mathbb{R}^{nK} $, its projection onto the set $ \mathcal{S} = \left\{ x \mid S x = \boldsymbol{1}_{n} \right\} $ is given by:
$$ \arg \min_{x \in \mathcal{S}} \frac{1}{2} \left\| x - y \right\|_{2}^{2} = y - {S}^{T} \left( S {S}^{T} \right)^{-1} \left( S y - \boldsymbol{1}_{n} \right) $$
Due to the special structure of $ S $ (indeed $ S {S}^{T} = K {I}_{n} $), one can see this is equivalent to:
$$ {x}_{i} - \frac{\sum_{j = 1}^{K} {x}_{j} - \boldsymbol{1}_{n}}{K}, \quad i = 1, 2, \ldots, K $$
Namely, spread the deviation equally over all the vectors.
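As a quick numeric sanity check (a sketch with random data), the general projection formula and the closed form agree:

n = 4; K = 3;                                % small example sizes
S = repmat(eye(n), 1, K);                    % S = [I_n, I_n, ..., I_n]
y = randn(n * K, 1);
% General affine projection formula
xGen = y - (S.' * ((S * S.') \ ((S * y) - ones(n, 1))));
% Closed form: spread the deviation equally over the K blocks
mY = reshape(y, n, K);
mXcf = mY - ((sum(mY, 2) - 1) / K);          % implicit expansion (R2016b+)
disp(norm(xGen - mXcf(:)));                  % ~1e-15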
Here is the code:
%% Solution by Projected Gradient Descent
% Projection onto {z : S z = 1_n}: spread the deviation equally over the
% K columns (the closed form derived above)
hProjEquality = @(mX) mX - ((sum(mX, 2) - 1) / numCols);

mX = zeros(numRows, numCols);
for ii = 1:numIterations
    for jj = 1:numCols
        % Gradient step on x_j: the gradient is 2 * A_j * x_j + lambda * c_j
        mX(:, jj) = mX(:, jj) - (stepSize * ((2 * tA(:, :, jj) * mX(:, jj)) + (paramLambda * mC(:, jj))));
    end
    mX = hProjEquality(mX); % Project back onto the constraint set
end

% Evaluate the objective at the solution
objVal = 0;
for ii = 1:numCols
    objVal = objVal + (mX(:, ii).' * tA(:, :, ii) * mX(:, ii)) + (paramLambda * mC(:, ii).' * mX(:, ii));
end

disp([' ']);
disp(['Projected Gradient Solution Summary']);
disp(['The Optimal Value Is Given By - ', num2str(objVal)]);
disp(['The Optimal Argument Is Given By - [ ', num2str(mX(:).'), ' ]']);
disp([' ']);
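For the Accelerated Gradient Descent mentioned at the top, a minimal sketch would add FISTA-style momentum on top of the same projected step (the momentum schedule below is a standard choice, not taken from the repository code):

mX = zeros(numRows, numCols);
mY = mX;                                         % extrapolated point
tPrev = 1;
for ii = 1:numIterations
    mXPrev = mX;
    for jj = 1:numCols
        % Gradient step taken at the extrapolated point mY
        mX(:, jj) = mY(:, jj) - (stepSize * ((2 * tA(:, :, jj) * mY(:, jj)) + (paramLambda * mC(:, jj))));
    end
    mX = hProjEquality(mX);                      % Project onto the constraint
    tCurr = (1 + sqrt(1 + (4 * tPrev ^ 2))) / 2; % FISTA momentum schedule
    mY = mX + (((tPrev - 1) / tCurr) * (mX - mXPrev));
    tPrev = tCurr;
end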
The full code is in my Mathematics Q2199546 GitHub Repository (Specifically in Q2199546.m).
The code is validated using CVX.
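The CVX reference might look along these lines (a sketch only, assuming each $ {A}_{i} $ is positive semidefinite so that quad_form applies; the actual validation script is the one in the repository):

cvx_begin
    variable mX(numRows, numCols)
    expression obj
    for ii = 1:numCols
        obj = obj + quad_form(mX(:, ii), tA(:, :, ii)) + (paramLambda * mC(:, ii).' * mX(:, ii));
    end
    minimize(obj)
    subject to
        sum(mX, 2) == ones(numRows, 1);
cvx_end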
There are specialized algorithms for binary quadratic programming problems, but you can also linearize the objective by replacing each square $x_j^2$ with $x_j$ and introducing a new variable $y_{ij}$ to replace each product $x_i x_j$, together with linear constraints \begin{align} y_{ij} &\le x_i \\ y_{ij} &\le x_j \\ y_{ij} &\ge x_i + x_j - 1 \\ y_{ij} &\ge 0 \end{align} Depending on your constraints $Ax=b$, other linearizations might be possible.
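To make the linearization concrete, here is a minimal MATLAB sketch using intlinprog; the $3$-variable objective and the stand-in constraint $\sum_j x_j = 2$ (playing the role of $Ax = b$) are made up for illustration:

% Minimize x.' * Q * x + c.' * x over binary x, via the linearization above
Q = [2, -1, 0; -1, 3, -2; 0, -2, 1];   % symmetric cost matrix (example)
c = [-1; 0; 1];
n = size(Q, 1);

% One variable y_ij for each product x_i * x_j with i < j
[iIdx, jIdx] = find(triu(true(n), 1));
m = numel(iIdx);

% Objective over z = [x; y]: x_j^2 -> x_j, x_i * x_j -> y_ij
% (Q symmetric, so the coefficient of y_ij is 2 * Q(i, j))
f = [diag(Q) + c; 2 * Q(sub2ind([n, n], iIdx, jIdx))];

% Linearization constraints in A * z <= b form
A = zeros(3 * m, n + m);
b = zeros(3 * m, 1);
for k = 1:m
    A((3 * k) - 2, [iIdx(k), n + k]) = [-1, 1];          % y_ij <= x_i
    A((3 * k) - 1, [jIdx(k), n + k]) = [-1, 1];          % y_ij <= x_j
    A(3 * k, [iIdx(k), jIdx(k), n + k]) = [1, 1, -1];    % y_ij >= x_i + x_j - 1
    b(3 * k) = 1;
end

Aeq = [ones(1, n), zeros(1, m)];       % stand-in for Ax = b: sum(x) = 2
beq = 2;

lb = zeros(n + m, 1);
ub = ones(n + m, 1);                   % includes y >= 0 (and y <= 1)
intcon = 1:n;                          % only x must be integer

z = intlinprog(f, intcon, A, b, Aeq, beq, lb, ub);
x = round(z(1:n));

Note that once $x$ is binary, the four inequalities force each $y_{ij}$ to equal $x_i x_j$ exactly, so the $y$ variables can stay continuous.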