This is not a convex optimization problem as written, because $x^Hx=c$ is not a convex constraint. It describes the surface of a hypersphere, which is not convex.
The constraint $x^Hx\leq c$ would be convex, on the other hand. If that relaxation is acceptable to you, then just solve that instead; it's a simple quadratically constrained quadratic program. If you need equality, then one thing you can do is solve the convex form first. If the global minimum happens to achieve $x^Hx=c$, then you're done; the solution is the same for your problem and for the convex, relaxed problem. Otherwise, you need to do something else...
What else can you do? Well, it turns out that the $x^Hx \leq c$ version of the problem can be solved globally even if $A$ is not positive semidefinite, in which case the problem is non-convex. It is known as the trust-region subproblem, so it has been studied extensively; Google is your friend here. So I think what you may be able to do is adapt the standard Lagrange multiplier technique, which is what is used for the trust-region subproblem, to your case. The Lagrangian is
$$L(x,\lambda) = x^HAx + 2\Re{b^Hx} - \lambda( c - x^H x )$$
where $\lambda$ is the Lagrange multiplier. The optimality conditions are
$$\begin{gathered}
2 ( A + \lambda I ) x + 2 b = 0 \\
x^H x = c
\end{gathered}$$
So when $A+\lambda I$ is invertible,
$$x=-(A+\lambda I)^{-1}b, \quad x^Hx = \| (A+\lambda I)^{-1} b \|_2^2$$
Now you're reduced to searching over the scalar variable $\lambda$ for the value which achieves $x^Hx=c$. Once you have that, then you have your $x$.
Now just to be clear, I don't have an existence proof for you. If you successfully find a $\lambda$, then you should be good. I just don't know if this will always work. For the standard trust region subproblem, it does.
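For concreteness, here is a minimal numerical sketch of that scalar search (illustrative names and random data, not from the question; it assumes the "hard case" of the trust-region subproblem, where $b$ is orthogonal to the eigenspace of the smallest eigenvalue of $A$, does not occur):

```python
import numpy as np

rng = np.random.default_rng(0)
n, c = 5, 2.0
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (M + M.conj().T) / 2          # Hermitian, possibly indefinite
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)

d, V = np.linalg.eigh(A)          # A = V diag(d) V^H
w = np.abs(V.conj().T @ b) ** 2   # squared coefficients of b in the eigenbasis

def phi(lam):
    # ||x(lam)||^2 - c with x(lam) = -(A + lam I)^{-1} b
    return np.sum(w / (d + lam) ** 2) - c

# For lam > -min(d), phi decreases monotonically from +inf toward -c,
# so a simple bisection brackets and finds the root.
lo = -d.min() + 1e-10
hi = lo + 1.0
while phi(hi) > 0:
    hi = lo + 2.0 * (hi - lo)
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if phi(mid) > 0 else (lo, mid)
lam = 0.5 * (lo + hi)
x = -np.linalg.solve(A + lam * np.eye(n), b)
print(abs(np.vdot(x, x).real - c))  # ~0: the constraint x^H x = c holds
```

Restricting the search to $\lambda > -\lambda_{\min}(A)$ keeps $A+\lambda I$ positive definite, which is the branch on which the stationary point is a global minimizer.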
EDIT: I realized that perhaps you're daunted by the complex variables here. Well, there's no need to be; this is easily translated to a real equivalent:
$$\begin{array}{ll}
\text{minimize}_{\bar{x}} & \bar{x}^T \bar{A} \bar{x} + 2 \bar{b} ^T \bar{x} \\
\text{subject to} & \bar{x}^T\bar{x} = c
\end{array}$$
where
$$\bar{x}\triangleq\begin{bmatrix} \Re{x} \\ \Im{x} \end{bmatrix} \quad
\bar{A}\triangleq\begin{bmatrix} \Re{A} & -\Im{A} \\ \Im{A} & \Re{A} \end{bmatrix} \quad
\bar{b}\triangleq\begin{bmatrix} \Re{b} \\ \Im{b} \end{bmatrix} $$
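A quick numerical check of this translation (illustrative, with random data; note that for Hermitian $A$ the quadratic forms match under the sign convention $\bar{A} = \bigl[\begin{smallmatrix}\Re A & -\Im A\\ \Im A & \Re A\end{smallmatrix}\bigr]$):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (M + M.conj().T) / 2   # Hermitian
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Real embedding of the complex data
Abar = np.block([[A.real, -A.imag], [A.imag, A.real]])
bbar = np.concatenate([b.real, b.imag])
xbar = np.concatenate([x.real, x.imag])

complex_obj = (x.conj() @ A @ x).real + 2 * (b.conj() @ x).real
real_obj = xbar @ Abar @ xbar + 2 * bbar @ xbar
print(np.isclose(complex_obj, real_obj))   # objectives agree
print(np.isclose((x.conj() @ x).real, xbar @ xbar))  # constraint agrees
```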
Consider the optimization problem
$$\begin{array}{ll} \text{minimize} & f(x) + g(y)\\ \text{subject to} & xy \ge a\\ & x \ge 0\\ & y \ge 0\end{array}$$
where both $f$ and $g$ are convex quadratic functions and $a > 0$. The feasible region is convex and, since $a > 0$, also LMI-representable
$$\{ (x,y) \in \mathbb R^2 : x \geq 0 \land y \geq 0 \land x y \geq a \} = \left\{ (x,y) \in \mathbb R^2 : \begin{bmatrix} x & \sqrt{a}\\ \sqrt{a} & y\end{bmatrix} \succeq \mathrm O_2 \right\}$$
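A spot check of this LMI representation on a few sample points (illustrative, with an arbitrary $a = 2$): the $2\times 2$ matrix is positive semidefinite exactly when the three scalar inequalities hold.

```python
import numpy as np

a = 2.0

def in_region(x, y):
    # The original constraint set: x >= 0, y >= 0, x*y >= a
    return x >= 0 and y >= 0 and x * y >= a

def lmi_psd(x, y):
    # The LMI representation: [[x, sqrt(a)], [sqrt(a), y]] PSD
    M = np.array([[x, np.sqrt(a)], [np.sqrt(a), y]])
    return np.linalg.eigvalsh(M).min() >= -1e-12

for x, y in [(1.0, 3.0), (2.0, 1.0), (0.5, 0.5), (4.0, 4.0), (-1.0, 5.0)]:
    assert in_region(x, y) == lmi_psd(x, y)
print("equivalence holds on sample points")
```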
Hence, the original optimization problem can be rewritten as follows
$$\begin{array}{ll} \text{minimize} & f(x) + g(y)\\ \text{subject to} & \begin{bmatrix} x & \sqrt{a}\\ \sqrt{a} & y\end{bmatrix} \succeq \mathrm O_2\end{array}$$
Introducing optimization variables $s, t \in \mathbb R$, we rewrite the optimization problem in epigraph form
$$\begin{array}{ll} \text{minimize} & s + t\\ \text{subject to} & f(x) \leq s\\ & g(y) \leq t\\ & \begin{bmatrix} x & \sqrt{a}\\ \sqrt{a} & y\end{bmatrix} \succeq \mathrm O_2\end{array}$$
Let $f$ and $g$ be
$$f (x) := f_0 + f_1 x + f_2 x^2 \qquad\qquad\qquad g (y) := g_0 + g_1 y + g_2 y^2$$
where $f_2, g_2 > 0$ (to ensure convexity). Inequality constraints $f(x) \leq s$ and $g(y) \leq t$ can be written in LMI form, as follows
$$\begin{bmatrix} 1 & \sqrt{f_2} \, x\\ \sqrt{f_2} \, x & s - f_0 - f_1 x\end{bmatrix} \succeq \mathrm O_2$$
$$\begin{bmatrix} 1 & \sqrt{g_2} \, y\\ \sqrt{g_2} \, y & t - g_0 - g_1 y\end{bmatrix} \succeq \mathrm O_2$$
These LMIs introduce inequalities $s - f_0 - f_1 x \geq 0$ and $t - g_0 - g_1 y \geq 0$, which are redundant. Note that lines $s = f_0 + f_1 x$ and $t = g_0 + g_1 y$ are tangent to the graphs of $f$ and $g$, respectively.
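The Schur-complement reasoning behind these LMIs can be spot-checked numerically (illustrative coefficients): since the $(1,1)$ entry is $1 > 0$, the matrix is PSD exactly when the Schur complement $(s - f_0 - f_1 x) - f_2 x^2 = s - f(x)$ is nonnegative.

```python
import numpy as np

# Hypothetical convex quadratic: f(x) = 0.5 - x + 2 x^2
f0, f1, f2 = 0.5, -1.0, 2.0
f = lambda x: f0 + f1 * x + f2 * x ** 2

for x, s in [(0.0, 1.0), (1.0, 1.0), (1.0, 2.0), (-0.5, 0.0)]:
    M = np.array([[1.0, np.sqrt(f2) * x],
                  [np.sqrt(f2) * x, s - f0 - f1 * x]])
    psd = np.linalg.eigvalsh(M).min() >= -1e-12
    assert psd == (f(x) <= s)   # LMI <=> epigraph inequality
print("LMI matches the epigraph inequality on sample points")
```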
Hence, we obtain a semidefinite program (SDP) in variables $x, y, s, t \in \mathbb R$
$$\begin{array}{lc} \\ \text{minimize} & s + t\\\\ \text{subject to} & \begin{bmatrix} 1 & \sqrt{f_2} \, x & & & \\ \sqrt{f_2} \, x & s - f_0 - f_1 x & & & \\ & & 1 & \sqrt{g_2} \, y & \\ & & \sqrt{g_2} \, y & t - g_0 - g_1 y & \\ & & & & x & \sqrt{a}\\ & & & & \sqrt{a} & y\end{bmatrix} \succeq \mathrm O_6\\\\\end{array}$$
which can be solved numerically using any SDP solver.
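As a sanity check on the final SDP, here is a toy instance (hypothetical coefficients: $f(x) = 1 + x^2$, $g(y) = y^2$, $a = 1$). By AM-GM the optimum is $(x,y) = (1,1)$ with value $f(1) + g(1) = 3$, and the $6\times 6$ LMI is indeed PSD at that point with the epigraph variables tight:

```python
import numpy as np

f0, f1, f2 = 1.0, 0.0, 1.0   # f(x) = 1 + x^2
g0, g1, g2 = 0.0, 0.0, 1.0   # g(y) = y^2
a = 1.0

def lmi(x, y, s, t):
    # Assemble the block-diagonal 6x6 LMI from the three 2x2 blocks
    B1 = np.array([[1, np.sqrt(f2) * x], [np.sqrt(f2) * x, s - f0 - f1 * x]])
    B2 = np.array([[1, np.sqrt(g2) * y], [np.sqrt(g2) * y, t - g0 - g1 * y]])
    B3 = np.array([[x, np.sqrt(a)], [np.sqrt(a), y]])
    M = np.zeros((6, 6))
    M[:2, :2], M[2:4, 2:4], M[4:, 4:] = B1, B2, B3
    return M

x, y = 1.0, 1.0
s, t = f0 + f1 * x + f2 * x**2, g0 + g1 * y + g2 * y**2  # tight epigraph
M = lmi(x, y, s, t)
assert np.linalg.eigvalsh(M).min() >= -1e-12  # feasible for the SDP
print(s + t)  # 3.0, matching the known optimum
```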
Sure, in any problem with a single inequality constraint $$\min_x\ f(x) \quad \mathrm{s.t.} \quad g(x) \leq 0,$$ whether convex or not, the solutions will be of two types:
1. unconstrained minimizers of $f(x)$ with the inequality constraint inactive, $g(x)<0$;
2. constrained minimizers with the inequality constraint active, $g(x)=0$.
The geometry here is fairly intuitive: if $g(x)<0$ and yet you're not at an unconstrained minimum of $f(x)$, you can descend along $-\nabla f$ until you either hit the boundary $g=0$ or reach a minimizer of $f(x)$. This argument can be made rigorous with enough regularity assumptions on $f$ and $g$, etc.
If you can prove there are no solutions of type (1) then all must be of type (2).
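A toy illustration of this dichotomy (hypothetical functions, not from the question): take $f(x) = (x-2)^2$ and $g(x) = x^2 - 1$. The unconstrained minimizer $x=2$ violates $g \leq 0$, so there is no type-(1) solution and the minimizer must lie on the boundary $g = 0$.

```python
f = lambda x: (x - 2.0) ** 2   # objective
g = lambda x: x ** 2 - 1.0     # constraint g(x) <= 0

assert g(2.0) > 0              # unconstrained minimum is infeasible
boundary = [-1.0, 1.0]         # roots of g(x) = 0
xstar = min(boundary, key=f)   # best point on the active boundary
print(xstar, f(xstar))         # 1.0 1.0
```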