My short answer would be that they are essentially the same thing.
Explanation
Riccati equations can be derived from the Hamilton-Jacobi-Bellman (HJB) equation in the particular case of the LQR problem, an optimal control problem where the dynamics are linear and the cost is quadratic.
Consider the finite-horizon LQR problem of minimizing
$$ \int_0^T \left( x^T Q x + u^T R u \right) dt + x(T)^T Q_f x(T) $$
subject to $\dot x = Ax + Bu$. (For simplicity I set the cross-weighting term $N = 0$.) In this particular case, the Hamilton-Jacobi-Bellman equation reads
$$ \partial_t V(x,t) + \min_u \left\{ \partial_x V(x,t) \cdot (Ax+Bu) + x^T Q x + u^T Ru \right\} = 0 $$
with the terminal condition
$$ V(x,T) = x^T Q_f x . $$
Now we look for solutions of the form $V(x,t) = x^T P(t) x$, where $P(t)$ is a symmetric matrix for each $t \in [0,T]$. If we substitute this expression in the (HJB) equation, we get
$$ x^T P'(t) x + \min_u \left\{ 2 P(t)x \cdot (Ax+Bu) + x^T Q x + u^T Ru \right\} = 0 . $$
We can find the minimum of the expression inside the curly brackets explicitly. For a given pair $(t,x)$, let us define $\Phi$ as
$$ \Phi(u) = 2 P(t)x \cdot (Ax+Bu) + x^T Q x + u^T Ru .$$
The minimum is attained where $\nabla \Phi(u) = 0$, that is, when
$$ 2B^T P(t) x + 2 Ru = 0,$$
so the optimal control is
$$ u^*(t,x) = -R^{-1} B^T P(t)x ,$$
with
$$ \Phi(u^*(t,x)) = 2 x^T P(t) \left(Ax-BR^{-1} B^T P(t)x \right) + x^T Q x + x^T P(t) B R^{-1} B^T P(t)x . $$
So we can rewrite the (HJB) equation without the minimization term:
$$ x^T P'(t) x + 2 x^T P(t) \left(Ax-BR^{-1} B^T P(t)x \right) + x^T Q x + x^T P(t) B R^{-1} B^T P(t)x = 0 , $$
and by grouping the $x^T$ and $x$ terms (using the fact that $P(t)$ is symmetric) we get
$$ x^T \left( P'(t) + 2 P(t) A - P(t) BR^{-1} B^T P(t) + Q \right) x = 0 . $$
Since $x^T P(t) A x$ is a scalar, it equals its own transpose $x^T A^T P(t) x$, so the term $2 x^T P(t) A x$ can be symmetrized into $x^T \left( A^T P(t) + P(t) A \right) x$:
$$ x^T \left( P'(t) + A^T P(t) + P(t) A - P(t) BR^{-1} B^T P(t) + Q \right) x = 0 . $$
The matrix inside the parentheses is now symmetric, and a symmetric matrix whose quadratic form vanishes for every $x$ must be the zero matrix. The above equation is therefore equivalent to the matrix differential equation
$$ P'(t) + A^T P(t) + P(t) A - P(t) BR^{-1} B^T P(t) + Q = 0 , $$
which is precisely the (differential) Riccati equation. Finally, in order to satisfy the terminal condition, we need
$$ P(T) = Q_f .$$
Comments:
- This is not a rigorous proof that the two equations are equivalent, but it shows that the Riccati equation is what the HJB equation reduces to for the LQR problem.
- The symmetrization step is essential: naively grouping terms gives $2 P(t) A$ instead of $A^T P(t) + P(t) A$, and $x^T M x = 0$ for all $x$ only forces the symmetric part of $M$ to vanish, which is why we symmetrize before dropping the $x$'s.
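As a sanity check, the derivation can be verified numerically: integrate the matrix Riccati ODE backward in time from $P(T) = Q_f$ and confirm that $P$ stays symmetric and, for a long horizon, approaches the solution of the algebraic Riccati equation. This is only a sketch assuming NumPy and SciPy are available; the system matrices below are made-up illustrative values, not from the question.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import solve_continuous_are

# Made-up 2-state, 1-input system (illustrative values only)
A = np.array([[0.0, 1.0],
              [0.0, -1.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
Qf = np.eye(2)
T = 20.0

def riccati_rhs(t, p_flat):
    # P'(t) = -(A^T P + P A - P B R^{-1} B^T P + Q)
    P = p_flat.reshape(2, 2)
    dP = -(A.T @ P + P @ A - P @ B @ np.linalg.solve(R, B.T) @ P + Q)
    return dP.ravel()

# Integrate backward in time from the terminal condition P(T) = Qf
sol = solve_ivp(riccati_rhs, (T, 0.0), Qf.ravel(), rtol=1e-9, atol=1e-11)
P0 = sol.y[:, -1].reshape(2, 2)

print(np.allclose(P0, P0.T, atol=1e-6))   # symmetric, as the ansatz assumes

# For a long horizon, P(0) should be close to the algebraic Riccati solution
P_inf = solve_continuous_are(A, B, Q, R)
print(np.allclose(P0, P_inf, atol=1e-3))
```

The second check reflects the usual infinite-horizon limit: as $T \to \infty$, the finite-horizon $P(0)$ converges to the stabilizing solution of $A^T P + P A - P B R^{-1} B^T P + Q = 0$.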
The Hamiltonian is convex in $p$ because it is affine (linear plus constant) with respect to this variable. The Hamiltonian is always affine with respect to the dual variables, regardless of the problem at hand.
Edit 1:
The supremum w.r.t. $u$ is inconsequential for the dependence on $p$. Saying "the supremum w.r.t. $u$" is equivalent to saying "for fixed $u$".
Edit 2:
You don't need to take the supremum; you can ignore it. Just take the partial derivative w.r.t. $p$. Is this derivative constant with respect to $p$? If so, then the function is affine with respect to $p$. And if a function is affine with respect to a variable, then it is convex with respect to that variable. It is a direct implication.
In the case of the Hamiltonian, the partial derivative with respect to the costates is always the right-hand side of the equations of motion, which is independent of the costates. Therefore the Hamiltonian is affine with respect to the costates, and hence convex with respect to them. Always, no exception.
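This can be checked mechanically on a concrete example. The dynamics $f$ and running cost $L$ below are made-up (a pendulum-like system, not from the question), and the sketch assumes NumPy: the Hamiltonian $H(x,p,u) = p \cdot f(x,u) + L(x,u)$ satisfies the midpoint identity that characterizes affine functions of $p$, and its increment in $p$ reproduces $f(x,u)$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical pendulum-like dynamics: xdot = f(x, u)
def f(x, u):
    return np.array([x[1], -np.sin(x[0]) + u])

# Hypothetical quadratic running cost
def L(x, u):
    return x @ x + u ** 2

# Hamiltonian H(x, p, u) = p . f(x, u) + L(x, u)
def H(x, p, u):
    return p @ f(x, u) + L(x, u)

x = rng.standard_normal(2)
u = 0.7
p1, p2 = rng.standard_normal(2), rng.standard_normal(2)

# Affine in p: the value at the midpoint equals the midpoint of the values
lhs = H(x, (p1 + p2) / 2, u)
rhs = (H(x, p1, u) + H(x, p2, u)) / 2
print(np.isclose(lhs, rhs))

# The slope of H in the first costate direction is f(x, u)[0], independent of p
print(np.allclose(H(x, p1 + np.array([1.0, 0.0]), u) - H(x, p1, u), f(x, u)[0]))
```

Note that $u$ is held fixed throughout, which is exactly the "for fixed $u$" reading described in Edit 1.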
Best Answer
For reference, I am re-posting the answer from MO here:
On the role of the verification theorem: it addresses the existence and uniqueness of solutions in the classical sense for the HJB PDE. In applying the verification theorem, we set such issues aside, guess the structure of a smooth value function, formally verify (by substitution) that the guessed structural form satisfies the HJB PDE under consideration, and then use Bellman's principle of optimality to compute the optimal control. Whether such verification is valid remains contingent on the existence and uniqueness of a sufficiently smooth classical solution (at least $C^1$ in the deterministic case and $C^2$ in the stochastic case) of the HJB PDE.
On deterministic versus stochastic: the above verification/classical-solution issue arises in both the deterministic and the stochastic case. For example, see Ch. 4, Sec. 2 of [1], which specifically discusses verification theorems for first-order HJB PDEs in deterministic optimal control. Example 2.3 there is a 1D deterministic optimal control problem whose HJB PDE does not admit any $C^{1}([0,T],\mathbb{R})$ solution. Ch. 4 and Ch. 5 of that book discuss the verification theorems for both deterministic and stochastic optimal control problems in detail, as well as viscosity solutions.
[1] J. Yong and X.Y. Zhou, Stochastic Controls: Hamiltonian Systems and HJB Equations, vol. 43, Springer, New York, 1999.