Suppose we are given a convex quadratic program (QP) in $\mathrm x \in \mathbb R^n$
$$\begin{array}{ll} \text{minimize} & \mathrm x^\top \mathrm Q \, \mathrm x + \mathrm r^{\top} \mathrm x + s\\ \text{subject to} & \mathrm A \mathrm x \leq \mathrm b\end{array}$$
where $\mathrm Q \in \mathbb R^{n \times n}$ is symmetric and positive semidefinite, $\mathrm r \in \mathbb R^n$, $s \in \mathbb R$, $\mathrm A \in \mathbb R^{m \times n}$ and $\mathrm b \in \mathbb R^m$.
The original QP can be rewritten in epigraph form as the following QP in $\mathrm x \in \mathbb R^n$ and $t \in \mathbb R$
$$\begin{array}{ll} \text{minimize} & t\\ \text{subject to} & \mathrm x^\top \mathrm Q \, \mathrm x + \mathrm r^{\top} \mathrm x + s \leq t\\ & \mathrm A \mathrm x \leq \mathrm b\end{array}$$
Let $\rho := \mbox{rank} (\mathrm Q) \leq n$. Since $\mathrm Q$ is symmetric and positive semidefinite, there is a rank-$\rho$ matrix $\mathrm P \in \mathbb R^{\rho \times n}$ such that $\mathrm Q = \mathrm P^{\top} \mathrm P$. Using the Schur complement test for positive semidefiniteness, the (convex) quadratic inequality $\mathrm x^\top \mathrm Q \, \mathrm x + \mathrm r^{\top} \mathrm x + s \leq t$ can be rewritten as the following linear matrix inequality (LMI)
$$\begin{bmatrix} \mathrm I_{\rho} & \mathrm P \, \mathrm x \\ \mathrm x^{\top} \mathrm P^\top & t - s - \mathrm r^{\top} \mathrm x\end{bmatrix} \succeq \mathrm O_{\rho+1}$$
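The Schur-complement equivalence above can be checked numerically. The sketch below (my own illustration, not part of the original derivation; it assumes numpy and a random small instance) factors $\mathrm Q = \mathrm P^\top \mathrm P$, assembles the $(\rho+1)\times(\rho+1)$ block, and verifies that the block is positive semidefinite exactly when the scalar inequality $\mathrm x^\top \mathrm Q \mathrm x + \mathrm r^\top \mathrm x + s \leq t$ holds:

```python
import numpy as np

rng = np.random.default_rng(0)

# Small illustrative instance: Q = P^T P with P of full row rank rho.
n, rho = 4, 2
P = rng.standard_normal((rho, n))        # rank rho with probability 1
Q = P.T @ P
r = rng.standard_normal(n)
s = 0.5
x = rng.standard_normal(n)

def lmi_block(t):
    """Assemble the (rho+1) x (rho+1) Schur-complement block."""
    Px = P @ x
    return np.block([[np.eye(rho), Px[:, None]],
                     [Px[None, :], np.array([[t - s - r @ x]])]])

qval = x @ Q @ x + r @ x + s             # quadratic function value at x

# The quadratic inequality holds at t iff the block is PSD.
t_ok, t_bad = qval + 1.0, qval - 1.0
assert np.linalg.eigvalsh(lmi_block(t_ok)).min() > -1e-9   # PSD
assert np.linalg.eigvalsh(lmi_block(t_bad)).min() < 0      # not PSD
```

Since $\mathrm I_\rho \succ 0$, the Schur complement test says the block is PSD iff $t - s - \mathrm r^\top \mathrm x - \|\mathrm P \mathrm x\|^2 \geq 0$, which is exactly the quadratic inequality.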
and the (convex) linear inequality $\mathrm A \mathrm x \leq \mathrm b$ can be written as the following LMI
$$\mbox{diag} ( \mathrm b - \mathrm A \mathrm x ) \succeq \mathrm O_m$$
Thus, the convex QP can be written as the semidefinite program (SDP) in $\mathrm x \in \mathbb R^n$ and $t \in \mathbb R$
$$\begin{array}{ll} \text{minimize} & t\\ \text{subject to} & \begin{bmatrix} \mathrm I_{\rho} & \mathrm P \, \mathrm x & \mathrm O_{\rho \times m}\\ \mathrm x^{\top} \mathrm P^\top & t - s - \mathrm r^{\top} \mathrm x & \mathrm 0_m^\top\\ \mathrm O_{m \times \rho} & \mathrm 0_m & \mbox{diag} ( \mathrm b - \mathrm A \mathrm x )\end{bmatrix} \succeq \mathrm O_{\rho + 1 + m}\end{array}$$
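As a sanity check on the combined LMI (again my own numerical sketch with made-up problem data, not from the original text), one can assemble the full $(\rho+1+m)$-dimensional block-diagonal matrix and confirm that it is PSD precisely when both the epigraph inequality and $\mathrm A \mathrm x \leq \mathrm b$ hold:

```python
import numpy as np

rng = np.random.default_rng(1)

n, rho, m = 4, 2, 3
P = rng.standard_normal((rho, n))
r = rng.standard_normal(n)
s = 0.0
A = rng.standard_normal((m, n))
x = rng.standard_normal(n)
b = A @ x + 1.0                          # chosen so A x <= b holds strictly

def sdp_block(x, t, b):
    """Assemble the (rho+1+m) x (rho+1+m) block-diagonal LMI."""
    Px = P @ x
    upper = np.block([[np.eye(rho), Px[:, None]],
                      [Px[None, :], np.array([[t - s - r @ x]])]])
    Z = np.zeros((rho + 1, m))
    return np.block([[upper, Z], [Z.T, np.diag(b - A @ x)]])

# A feasible t makes the whole block PSD...
t_ok = (P @ x) @ (P @ x) + r @ x + s + 1.0
assert np.linalg.eigvalsh(sdp_block(x, t_ok, b)).min() > -1e-9

# ...while violating one row of A x <= b breaks positive semidefiniteness.
b_bad = b.copy()
b_bad[0] = A[0] @ x - 1.0
assert np.linalg.eigvalsh(sdp_block(x, t_ok, b_bad)).min() < 0
```

Because the matrix is block diagonal, it is PSD iff each diagonal block is PSD, which recovers the two original constraints simultaneously.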
Best Answer
$\def\tr{\operatorname{tr}}\def\sc#1#2{\langle#1,#2\rangle}\def\scc#1{\sc{#1}{#1}}$I'm not sure whether the transpose in the trace was meant to imply that these are real matrices; I'll work with complex matrices using $\sc AB= \tr(AB^*)$, and the result for real matrices follows. I'll assume distinct non-zero singular values to simplify things; the same argument should go through for degenerate and zero singular values with a bit more attention to detail.
You can use the singular value decompositions of the two matrices and show that the singular vectors must be the same; it then follows that the singular values of $Z$ are as you say. So let $X=U\Sigma V^*$ and $Z=S\Lambda T^*$ with $U$, $V$, $S$, $T$ unitary, $\Sigma$, $\Lambda$ diagonal and real, $\Sigma,\Lambda\ge0$ and $\Lambda\le1$. Then
$$ \begin{align} \scc{X-Z} &=\scc{U\Sigma V^*-S\Lambda T^*} \\ &= \scc{U\Sigma V^*}+\scc{S\Lambda T^*}-\sc{S\Lambda T^*}{U\Sigma V^*}-\sc{U\Sigma V^*}{S\Lambda T^*} \\ &= \tr U\Sigma V^*V\Sigma^* U^*+\tr S\Lambda T^*T\Lambda^* S^*-\tr S\Lambda T^*V\Sigma^* U^*-\tr U\Sigma V^*T\Lambda^* S^* \\ &= \tr\Sigma\Sigma^*+\tr \Lambda\Lambda^*-2\Re\tr S\Lambda T^*V\Sigma^* U^*\;. \end{align} $$
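This expansion is easy to verify numerically. The snippet below (my own check, assuming numpy; random complex matrices stand in for $X$ and $Z$) confirms that $\scc{X-Z} = \tr\Sigma\Sigma^* + \tr\Lambda\Lambda^* - 2\Re\tr S\Lambda T^*V\Sigma^* U^*$:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3

def rand_complex(n):
    return rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

X, Z = rand_complex(n), rand_complex(n)

U, sig, Vh = np.linalg.svd(X)       # X = U Sigma V*, Vh = V*
S, lam, Th = np.linalg.svd(Z)       # Z = S Lambda T*, Th = T*
Sigma, Lam = np.diag(sig), np.diag(lam)
V = Vh.conj().T

# <X-Z, X-Z> with <A, B> = tr(A B*) is the squared Frobenius norm.
lhs = np.linalg.norm(X - Z, 'fro') ** 2

# tr(Sigma Sigma*) + tr(Lambda Lambda*) - 2 Re tr(S Lambda T* V Sigma* U*)
cross = np.trace(S @ Lam @ Th @ V @ Sigma @ U.conj().T)
rhs = np.sum(sig**2) + np.sum(lam**2) - 2 * np.real(cross)

assert np.isclose(lhs, rhs)
```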
Let $M=U^*S$ and $N=T^*V$ (so $M$ and $N$ are unitary), let $M=\mathrm e^{\mathrm iH}$ and $N=\mathrm e^{\mathrm iK}$ with $H$ and $K$ Hermitian, and vary $H$ and $K$ in $\Re\tr M\Lambda N\Sigma^*$. For each free parameter in $H$ we get an equation of the form $\Re\tr\mathrm iP_{kl}M\Lambda N\Sigma^*=0$, where $P_{kl}$ is an elementary Hermitian matrix, and similarly $\Re\tr\mathrm iP_{kl}\Sigma^*M\Lambda N=0$ from varying $K$. Forming linear combinations of these and using the assumption that the singular values in $\Sigma$ are distinct and non-zero, we can deduce from $\Re\tr\mathrm i(P_{kl}\Sigma^*\pm\Sigma^*P_{kl})M\Lambda N=0$ that $M\Lambda N$ is real and diagonal. Then it follows from the uniqueness of the singular value decomposition that $M\Lambda N=\Lambda$, so $M$ and $N$ contain only phase factors, and thus $S=U$ and $T=V$ up to phase factors that cancel in $Z$. Then
$$ \scc{X-Z} =\scc{U\Sigma V^*-U\Lambda V^*}=\scc{\Sigma-\Lambda}\;, $$
and it's clear that this is minimized by your correct guess.
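Assuming the guess referred to is $Z = U\min(\Sigma, \mathrm I)V^*$, i.e. clipping the singular values of $X$ at $1$ (the original question is not quoted here, so this is my reading), the conclusion can be spot-checked numerically against random feasible $Z$ with singular values in $[0,1]$:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
U, sig, Vh = np.linalg.svd(X)

# Conjectured minimizer: keep the singular vectors of X, clip the
# singular values at 1.  (U * v) @ Vh scales the columns of U by v.
Z_star = (U * np.minimum(sig, 1.0)) @ Vh
best = np.linalg.norm(X - Z_star, 'fro')

# No random feasible Z (all singular values <= 1) should do better.
for _ in range(200):
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    S, _, Th = np.linalg.svd(A)
    Z = (S * rng.uniform(0.0, 1.0, n)) @ Th
    assert np.linalg.norm(X - Z, 'fro') >= best - 1e-9
```

This agrees with the closing formula: once $S=U$ and $T=V$, the distance reduces to $\scc{\Sigma-\Lambda}$, which is minimized entrywise by $\lambda_i = \min(\sigma_i, 1)$ under the constraint $\Lambda \le 1$.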