This isn't pretty, but it's direct. We proceed by the usual method: find the critical points of the objective in the interior, then the critical points on the boundary. The maximum occurs at one of these points.
To find the critical points of the objective function, differentiate:
$$ df(x) = (1/2) (dx)^T A x + (1/2) x^T A dx + b^T dx = (dx)^T (Ax + b) $$
so the critical points are the solutions to $Ax = -b$ (taking $A$ symmetric, so that $x^T A \, dx = (dx)^T A x$).
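A quick numerical sanity check of this gradient formula; the random symmetric $A$, vector $b$, and test point below are stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
A = (A + A.T) / 2            # symmetrize, as the derivation assumes
b = rng.standard_normal(n)

def f(x):
    # the objective (1/2) x^T A x + b^T x
    return 0.5 * x @ A @ x + b @ x

x0 = rng.standard_normal(n)
grad = A @ x0 + b            # gradient from the derivation above

# central finite differences should agree with the formula closely
eps = 1e-6
fd = np.array([(f(x0 + eps * e) - f(x0 - eps * e)) / (2 * eps)
               for e in np.eye(n)])
print(np.max(np.abs(grad - fd)))
```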
Using Lagrange multipliers, the critical points on the boundary of your constraint $x^T x = 1$ occur at solutions $(x, \mu)$ of the system of equations
$$ (A + 2 \mu I) x = -b \qquad \qquad \qquad x^T x = 1$$
Most solutions can be obtained by solving the equation
$$ || (A + 2 \mu I)^{-1} b ||_2^2 = 1 $$
for $\mu$. However, we must separately consider those values of $\mu$ for which $-2\mu$ is an eigenvalue of $A$.
For each of these $\mu$, obtain the general solution to
$$ (A + 2 \mu I) x = -b $$
and search the solution space for unit vectors. Typically there won't be any solutions at all! But if there are, finding the unit vectors amounts to solving a polynomial equation in $n$ variables, where $n$ is the multiplicity of the eigenvalue $-2\mu$.
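In the generic case (where $b$ is not orthogonal to the top eigenvector of $A$), the global maximizer comes from the branch $2\mu < -\lambda_{\max}(A)$, on which $\| (A + 2\mu I)^{-1} b \|_2^2$ increases monotonically from $0$ toward $\infty$, so bisection applies. A minimal numpy sketch; the random $A$ and $b$ are stand-ins:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n))
A = (A + A.T) / 2
b = rng.standard_normal(n)

def g(mu):
    # squared norm of the candidate x(mu) = -(A + 2 mu I)^{-1} b
    x = np.linalg.solve(A + 2 * mu * np.eye(n), -b)
    return x @ x

# For the maximizer, 2*mu must lie below -lambda_max(A); on that
# interval g increases from 0 toward infinity (generic case).
lam_max = np.linalg.eigvalsh(A)[-1]
hi = -lam_max / 2 - 1e-6
lo = hi - 1.0
while g(lo) > 1:          # march left until g(lo) <= 1
    lo -= 1.0
for _ in range(100):      # bisection on g(mu) = 1
    mid = (lo + hi) / 2
    if g(mid) < 1:
        lo = mid
    else:
        hi = mid
mu = (lo + hi) / 2
x = np.linalg.solve(A + 2 * mu * np.eye(n), -b)
```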
I get essentially the same thing from the eigen-decomposition. Write everything in terms of an orthonormal basis of eigenvectors. Then, you are trying to maximize
$$ \sum_i \left( \frac{1}{2} \lambda_i x_i^2 + b_i x_i \right) $$
subject to the constraint
$$ \sum_i x_i^2 = 1 $$
The critical points of the objective function are the solutions to
$$ x_i \lambda_i + b_i = 0$$
and Lagrange multipliers gives us a system of equations
$$ \sum_i x_i^2 = 1 \qquad \qquad
x_i( \lambda_i + 2 \mu) + b_i = 0 $$
and the solution method is effectively the same as in the matrix version.
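As a check that the two formulations agree: in the eigenbasis the linear system decouples into the scalar equations above. A small numpy sketch, with a random $A$, $b$, and an arbitrary admissible $\mu$ as stand-ins:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
A = rng.standard_normal((n, n))
A = (A + A.T) / 2
b = rng.standard_normal(n)

lam, Q = np.linalg.eigh(A)      # A = Q diag(lam) Q^T
beta = Q.T @ b                  # b expressed in the eigenbasis

mu = -lam[-1] / 2 - 1.0         # any mu making A + 2 mu I invertible

# componentwise solution x_i = -beta_i / (lam_i + 2 mu), mapped back
x_eig = Q @ (-beta / (lam + 2 * mu))
x_mat = np.linalg.solve(A + 2 * mu * np.eye(n), -b)
print(np.max(np.abs(x_eig - x_mat)))
```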
This is (a variant of) the trust-region problem, and the solution can be computed rather easily, although not in closed form.
See, e.g., https://www8.cs.umu.se/kurser/5DA001/HT07/lectures/trust-handouts.pdf
I'm reparameterizing your problem slightly; you can easily change your model to $\max x^TQx + c^Tx + b$ subject to $x^Tx\leq 1$.
At optimality you will be on the boundary of the feasible region, and the constraint gradient will point in the same direction as the objective gradient (draw this geometrically!), i.e., you have $2Qx + c = 2\lambda x$ for some unknown $\lambda$. Solve for $x$ to obtain $x = -(2Q - 2\lambda I)^{-1}c$. To find $\lambda$, solve $x^Tx = 1$, which is an equation in $\lambda$ alone (possibly tricky; numerical methods such as line search or bisection may be required).
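A minimal bisection sketch along these lines; the random symmetric $Q$ and vector $c$ are stand-ins, and it assumes the generic case where $c$ has a component along the top eigenvector of $Q$, so that $\lambda$ can be bracketed above $\lambda_{\max}(Q)$:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
Q = rng.standard_normal((n, n))
Q = (Q + Q.T) / 2
c = rng.standard_normal(n)

def x_of(lam):
    # stationarity 2 Q x + c = 2 lam x  =>  x = -(2Q - 2 lam I)^{-1} c
    return np.linalg.solve(2 * Q - 2 * lam * np.eye(n), -c)

# For a maximizer, lam must exceed the top eigenvalue of Q; there
# ||x(lam)||^2 decreases from +inf to 0, so bisect on ||x(lam)||^2 = 1.
lo = np.linalg.eigvalsh(Q)[-1] + 1e-6
hi = lo + 1.0
while x_of(hi) @ x_of(hi) > 1:    # march right until ||x|| <= 1
    hi += 1.0
for _ in range(100):
    mid = (lo + hi) / 2
    if x_of(mid) @ x_of(mid) > 1:
        lo = mid
    else:
        hi = mid
lam = (lo + hi) / 2
x = x_of(lam)
```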
Best Answer
Say that, instead of solving for $A$ directly, we solve for the columns of $A$ one by one:
$$ \max_u \; u^T B u \qquad \text{such that} \qquad u^T u = 1 $$
Construct the Lagrangian, then differentiate to get
$$ \frac{dL}{du} = 2Bu - 2\lambda u. $$
Setting this to zero gives $Bu = \lambda u$: an eigendecomposition problem for $B$.
Now pick the eigenvector that corresponds to the largest eigenvalue. If you want more solutions (more columns of $A$), pick the second and third eigenvectors, corresponding to the second- and third-largest eigenvalues, and so on.
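A minimal numpy sketch of this recipe; the random symmetric $B$ is a stand-in. `np.linalg.eigh` returns eigenvalues in ascending order, so the last column holds the top eigenvector:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 6
B = rng.standard_normal((n, n))
B = (B + B.T) / 2              # symmetric, so eigh applies

w, V = np.linalg.eigh(B)       # ascending eigenvalues, orthonormal columns
u = V[:, -1]                   # unit eigenvector of the largest eigenvalue

# u attains the maximum of u^T B u over unit vectors, with value w[-1]
print(u @ B @ u, w[-1])
```

For further columns, `V[:, -2]`, `V[:, -3]`, and so on give the next-best mutually orthogonal unit directions, matching the recipe above.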