Suppose we are given a convex quadratic program (QP) in $\mathrm x \in \mathbb R^n$
$$\begin{array}{ll} \text{minimize} & \mathrm x^\top \mathrm Q \, \mathrm x + \mathrm r^{\top} \mathrm x + s\\ \text{subject to} & \mathrm A \mathrm x \leq \mathrm b\end{array}$$
where $\mathrm Q \in \mathbb R^{n \times n}$ is symmetric and positive semidefinite, $\mathrm r \in \mathbb R^n$, $s \in \mathbb R$, $\mathrm A \in \mathbb R^{m \times n}$ and $\mathrm b \in \mathbb R^m$.
The original QP can be rewritten in epigraph form as the following problem in $\mathrm x \in \mathbb R^n$ and $t \in \mathbb R$ (strictly a QCQP, since the quadratic now appears in a constraint)
$$\begin{array}{ll} \text{minimize} & t\\ \text{subject to} & \mathrm x^\top \mathrm Q \, \mathrm x + \mathrm r^{\top} \mathrm x + s \leq t\\ & \mathrm A \mathrm x \leq \mathrm b\end{array}$$
Let $\rho := \mbox{rank} (\mathrm Q) \leq n$. Since $\mathrm Q$ is symmetric and positive semidefinite, there is a rank-$\rho$ matrix $\mathrm P \in \mathbb R^{\rho \times n}$ such that $\mathrm Q = \mathrm P^{\top} \mathrm P$. Using the Schur complement test for positive semidefiniteness, the (convex) quadratic inequality $\mathrm x^\top \mathrm Q \, \mathrm x + \mathrm r^{\top} \mathrm x + s \leq t$ can be rewritten as the following linear matrix inequality (LMI)
$$\begin{bmatrix} \mathrm I_{\rho} & \mathrm P \, \mathrm x \\ \mathrm x^{\top} \mathrm P^\top & t - s - \mathrm r^{\top} \mathrm x\end{bmatrix} \succeq \mathrm O_{\rho+1}$$
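Such a factor $\mathrm P$ with $\mathrm Q = \mathrm P^{\top} \mathrm P$ can be obtained from an eigendecomposition of $\mathrm Q$, keeping only the strictly positive eigenvalues. A minimal NumPy sketch (the matrix $\mathrm Q$ below is made up for illustration):

```python
import numpy as np

# Example PSD matrix Q with rank 2 (made-up data for illustration)
Q = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 0.0]])

# Eigendecomposition Q = V diag(lam) V^T (eigh returns ascending eigenvalues)
lam, V = np.linalg.eigh(Q)

# Keep strictly positive eigenvalues; rho = rank(Q)
pos = lam > 1e-10
P = (np.sqrt(lam[pos]) * V[:, pos]).T   # P has shape (rho, n)

assert P.shape == (2, 3)                # rho = 2, n = 3 here
assert np.allclose(P.T @ P, Q)          # Q = P^T P
```

A Cholesky factorization would also work when $\mathrm Q$ is positive definite; the eigendecomposition route handles the rank-deficient case as well.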
and the (convex) linear inequality $\mathrm A \mathrm x \leq \mathrm b$ can be written as the following LMI
$$\mbox{diag} ( \mathrm b - \mathrm A \mathrm x ) \succeq \mathrm O_m$$
Thus, the convex QP can be written as the semidefinite program (SDP) in $\mathrm x \in \mathbb R^n$ and $t \in \mathbb R$
$$\begin{array}{ll} \text{minimize} & t\\ \text{subject to} & \begin{bmatrix} \mathrm I_{\rho} & \mathrm P \, \mathrm x & \mathrm O_{\rho \times m}\\ \mathrm x^{\top} \mathrm P^\top & t - s - \mathrm r^{\top} \mathrm x & \mathrm 0_m^\top\\ \mathrm O_{m \times \rho} & \mathrm 0_m & \mbox{diag} ( \mathrm b - \mathrm A \mathrm x )\end{bmatrix} \succeq \mathrm O_{\rho + 1 + m}\end{array}$$
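As a numerical sanity check of the block structure (an illustrative NumPy sketch with made-up data, not part of the original derivation), one can assemble the block matrix at a feasible $(\mathrm x, t)$ and confirm it is positive semidefinite:

```python
import numpy as np

# Small made-up instance (assumed data for illustration)
Q = np.array([[2.0, 0.0], [0.0, 1.0]])
r = np.array([1.0, -1.0])
s = 0.5
A = np.array([[1.0, 1.0]])        # one linear constraint: x1 + x2 <= 2
b = np.array([2.0])
m, n = A.shape

# Factor Q = P^T P (Q has full rank here, so rho = n)
lam, V = np.linalg.eigh(Q)
P = (np.sqrt(lam) * V).T
rho = P.shape[0]

# Pick a feasible x and a t strictly above the objective value
x = np.array([0.5, 0.5])
t = x @ Q @ x + r @ x + s + 0.1

# Assemble the SDP block matrix
S = np.zeros((rho + 1 + m, rho + 1 + m))
S[:rho, :rho] = np.eye(rho)
S[:rho, rho] = P @ x
S[rho, :rho] = P @ x
S[rho, rho] = t - s - r @ x
S[rho+1:, rho+1:] = np.diag(b - A @ x)

# The block matrix is PSD exactly when the QP constraints hold at (x, t)
assert np.min(np.linalg.eigvalsh(S)) >= -1e-9
```

Shrinking $t$ below $\mathrm x^\top \mathrm Q \, \mathrm x + \mathrm r^{\top} \mathrm x + s$, or violating $\mathrm A \mathrm x \leq \mathrm b$, makes the smallest eigenvalue go negative.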
Let $X_0 = \sum_{i=1}^n \lambda_i v_i v_i^T$ be the eigenvalue decomposition of the matrix $X_0$, where the eigenvectors $v_i$ are orthonormal and $\lambda_1 \geq \lambda_2 \geq \dots \geq \lambda_n$. Assume $\lambda_n < 0$, since otherwise the projection of $X_0$ onto the positive semidefinite cone would be $X_0$ itself. Then we have
\begin{equation}
v_i^T X_0 v_i = \sum_{j=1}^n \lambda_j v_i^T v_j v_j^T v_i = \lambda_i
\end{equation}
and
\begin{equation}
||X_0||_2 = \max\{\lambda_1, -\lambda_n\} = \max\{v_1^T X_0 v_1, -v_n^T X_0 v_n\} = \max\{\sup_{||v||_2=1} v^T X_0 v, -\inf_{||v||_2=1} v^T X_0 v\}
\end{equation}
Now let $X$ be any symmetric positive semidefinite matrix. We have
\begin{equation}
||X - X_0||_2 \geq \sup_{||v||_2 = 1} v^T (X - X_0) v \geq v_n^T (X - X_0) v_n = v_n^T X v_n - v_n^TX_0v_n \geq -\lambda_n
\end{equation}
Now if we define $X = \sum_{i=1}^n \max\{\lambda_i, 0\} v_i v_i^T$, then $X$ is positive semidefinite and $||X - X_0||_2 = -\lambda_n$, which attains the lower bound above. Therefore, the matrix so defined is the projection of $X_0$ onto the positive semidefinite cone.
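The argument above can be checked numerically. An illustrative NumPy sketch (the matrix $X_0$ is made up for the example):

```python
import numpy as np

# A symmetric matrix with a negative eigenvalue (made-up data)
X0 = np.array([[1.0, 2.0], [2.0, -1.0]])   # eigenvalues are +/- sqrt(5)

# Eigendecomposition X0 = sum_i lam_i v_i v_i^T
lam, V = np.linalg.eigh(X0)

# Projection onto the PSD cone: clip negative eigenvalues to zero
X = V @ np.diag(np.maximum(lam, 0.0)) @ V.T

# X is PSD and attains the lower bound ||X - X0||_2 = -lambda_min
assert np.min(np.linalg.eigvalsh(X)) >= -1e-12
assert np.isclose(np.linalg.norm(X - X0, 2), -lam.min())
```

Here `np.linalg.norm(. , 2)` is the spectral norm, matching the $||\cdot||_2$ used in the derivation.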
Best Answer
I presume that $P$ and $M$ are input matrices. Then this is a linear SDP (a.k.a. LMI) which is convex. Because of the (positive) semidefinite constraint, it is not a quadratic program.
More specifically, don't square the norm in the objective. The unsquared norm can then be converted to a Second Order Cone constraint via epigraph reformulation. So the problem will have one Second Order Cone constraint and one linear SDP constraint. It can be formulated via CVX, YALMIP, CVXPY, CVXR, or a similar tool, and solved with a (linear) SDP solver such as Mosek, SDPT3, or SeDuMi, among others.
CVX code (automatically does epigraph reformulation):
This formulation allows X to be complex. If you want X to be real symmetric, use symmetric instead of hermitian in the variable declaration.