Thanks to those who have already responded; you were very helpful. Here is the solution I have come up with, with more exposition than some of the other responses provide.
First we need the following lemma:
Lemma: $\lim_{\|x\| \to \infty} f(x) = \infty$. Some authors refer to this as $f$ being coercive.
Proof: Since $f$ is $m$-strongly convex, write $f(x) = g(x) + \frac{m}{2}\|x\|^2$ with $g$ convex. Let $x_0 \in \mathbb{R}^n$ and let $v$ be a subgradient of $g$ at $x_0$, i.e. $v \in \partial g(x_0)$. By equivalence of norms on finite-dimensional vector spaces, there exists a constant $c > 0$ such that $\|x\|_2 \leq c \|x\|$ for all $x \in \mathbb{R}^n$. By Cauchy-Schwarz and the triangle inequality, for $\|x\| > 0$ we have
$$
\begin{align*}
\frac{| v^T(x - x_0) |}{\frac{m}{2}\|x\|^2}
&\leq \frac{\|v\|_2 \|x - x_0\|_2}{\frac{m}{2}\|x\|^2} \\
&\leq \frac{\|v\|_2 \|x\|_2 + \|v\|_2 \|x_0\|_2}{\frac{m}{2}\|x\|^2} \\
&\leq \frac{c\|v\|_2 \|x\| + \|v\|_2 \|x_0\|_2}{\frac{m}{2}\|x\|^2} \\
&= \frac{2c\|v\|_2}{m} \frac{1}{\|x\|} + \frac{2\|v\|_2 \|x_0\|_2}{m} \frac{1}{\|x\|^2}
\end{align*}
$$
The far right-hand side of this inequality $\to 0$ as $\|x\| \to \infty$, so $|v^T(x - x_0)|$ is eventually dominated by $\frac{m}{2}\|x\|^2$; hence $v^T(x - x_0) + \frac{m}{2} \|x\|^2 \to \infty$ as $\|x\| \to \infty$. Now we use the definition of a subgradient:
$$
\begin{align*}
v^T(x - x_0) &\leq g(x) - g(x_0) \\
v^T(x - x_0) + \frac{m}{2}\|x\|^2 &\leq g(x) + \frac{m}{2}\|x\|^2 - g(x_0) \\
v^T(x - x_0) + \frac{m}{2}\|x\|^2 + g(x_0) &\leq f(x)
\end{align*}
$$
The left hand side of this $\to \infty$ as $\|x\| \to \infty$, so we conclude that $f(x) \to \infty$ as $\|x\| \to \infty$. $\square$
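As a quick numerical sanity check of the lemma (illustrative only: $g$, $m$, and the ray below are made up), one can evaluate $f = g + \frac{m}{2}\|x\|_2^2$ along a random direction and watch it blow up:

```python
import numpy as np

# Illustrative check of the lemma: f = g + (m/2)||x||_2^2 with g convex.
# The made-up g below is NOT coercive on its own (it is unbounded below
# along the first coordinate axis), yet f still blows up as ||x|| -> oo.
m = 0.5

def g(x):
    return np.abs(x).sum() - 3.0 * x[0]   # convex, decreases along +e_1

def f(x):
    return g(x) + 0.5 * m * np.dot(x, x)

rng = np.random.default_rng(0)
direction = rng.normal(size=4)
direction /= np.linalg.norm(direction)

values = [f(r * direction) for r in (10.0, 100.0, 1000.0)]
assert values[0] < values[1] < values[2]  # f grows along the ray
print(values)
```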
On to the main result. First, assume that $A$ is unbounded. If it were bounded, it would be closed and bounded, hence compact, and the result would follow immediately (a continuous function attains its minimum on a compact set, and strict convexity gives uniqueness). There are two mutually exclusive possibilities:
Case 1: $f$ has a minimizer on $A$, in which case it is unique (see this thread).
Case 2: $f$ does not have a minimizer on $A$.
Assume we are in Case 2. Let $f^\star := \inf_{x \in A} f(x)$; note $f^\star < \infty$ since $A$ is nonempty. Let $(x_k)$ be a sequence in $A$ such that $f(x_k) \to f^\star$. We now have two mutually exclusive subcases:
Subcase 2.1: $\sup_k \|x_k\| = d < \infty$. Define $B_d := \{ x \in \mathbb{R}^n \ : \ \|x\| \leq d\}$. Then $x_k \in A \cap B_d$ for all $k$; this set is closed and bounded and hence compact. Therefore $(x_k)$ has a subsequence converging in $A \cap B_d$, say $x_{k_j} \to x^\star$ with $x^\star \in A$. Continuity of $f$ then implies $f(x^\star) = \lim_j f(x_{k_j}) = f^\star$, so $x^\star$ is a minimizer, which is a contradiction.
Subcase 2.2: $\sup_k \|x_k\| = \infty$. Then some subsequence satisfies $\|x_{k_j}\| \to \infty$, and by the Lemma $f(x_{k_j}) \to \infty$. But $f(x_{k_j}) \to f^\star$, which contradicts $f^\star < \infty$.
Thus we conclude that Case 2 cannot occur, and therefore Case 1 must occur.
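The result can be illustrated numerically (a sketch with made-up data, not part of the proof): minimize the strongly convex $f(x) = \|x - a\|_2^2$ over the unbounded closed convex set $A = \{x : x_1 \ge 1\}$ by projected gradient descent; the unique minimizer is the Euclidean projection of $a$ onto $A$.

```python
import numpy as np

# Projected gradient descent for f(x) = ||x - a||_2^2 over the closed,
# convex, unbounded set A = { x : x[0] >= 1 }.  The theorem guarantees
# a unique minimizer; here it is the projection of a onto A.
a = np.array([-2.0, 3.0, 0.5])            # made-up data

def grad(x):
    return 2.0 * (x - a)

def project(x):
    y = x.copy()
    y[0] = max(y[0], 1.0)                 # projection onto half-space x[0] >= 1
    return y

x = np.zeros(3)
for _ in range(200):
    x = project(x - 0.25 * grad(x))       # step size 1/4 contracts geometrically

x_star = project(a)                       # closed-form minimizer: clip a into A
assert np.allclose(x, x_star)
print(x)
```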
EDIT: After writing all of this out, it is clear that strong convexity is a stronger assumption than we require. $f$ strictly convex and coercive is sufficient for $f$ to have a unique global minimizer on the closed convex set $A$.
I am not sure if this is what you are looking for:
Take $X=Y=[-1,1]$, $f(x,y) = (x-y)^2$.
$\max_x \min_y (x-y)^2 = \max_x 0 = 0 $,
$\min_y \max_x (x-y)^2 = \min_y (1+|y|)^2 = 1$.
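The gap between the two values can be confirmed on a grid (a quick numerical sketch, not a proof):

```python
import numpy as np

# Grid check of the max-min / min-max gap for f(x, y) = (x - y)^2 on [-1, 1]^2.
grid = np.linspace(-1.0, 1.0, 401)
F = (grid[:, None] - grid[None, :]) ** 2   # F[i, j] = (x_i - y_j)^2

max_min = F.min(axis=1).max()   # max over x of min over y: y = x gives 0
min_max = F.max(axis=0).min()   # min over y of max over x: (1 + |y|)^2 at y = 0

assert abs(max_min - 0.0) < 1e-12
assert abs(min_max - 1.0) < 1e-12
print(max_min, min_max)
```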
$v \mapsto v^\top C v$ is a strictly convex function since its Hessian $2C$ is positive definite.
The map $u \mapsto (u-u^*)^\top C (u-u^*)$ is then strictly convex, because it is a composition of a linear map $u \mapsto u-u^*$ with the previous strictly convex map.
Since $u \mapsto \lambda_0 |u_j|$ is convex for each $j$ (it is the composition of a linear map $u \mapsto \lambda_0 u_j$ and a convex map $x \mapsto |x|$), your function $A$ is the sum of a strictly convex function with $p$ convex functions, so it is strictly convex.
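One way to sanity-check strict convexity of $A$ numerically is the strict midpoint inequality $A(\frac{u+v}{2}) < \frac{1}{2}(A(u) + A(v))$ for $u \ne v$; the sketch below uses made-up values for $C$ (random positive definite), $u^*$, and $\lambda_0$:

```python
import numpy as np

# Midpoint-inequality check of strict convexity of
#   A(u) = (u - u_star)^T C (u - u_star) + lam0 * ||u||_1
# with illustrative data: C random positive definite, u_star and lam0 made up.
rng = np.random.default_rng(1)
p = 5
M = rng.normal(size=(p, p))
C = M @ M.T + p * np.eye(p)      # positive definite by construction
u_star = rng.normal(size=p)
lam0 = 0.3

def A(u):
    d = u - u_star
    return d @ C @ d + lam0 * np.abs(u).sum()

for _ in range(100):
    u, v = rng.normal(size=p), rng.normal(size=p)
    assert A(0.5 * (u + v)) < 0.5 * (A(u) + A(v))   # strict for u != v
print("strict midpoint inequality holds on 100 random pairs")
```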
Since $A$ is coercive (its strongly convex quadratic part dominates) and continuous, it has a minimizer whenever $\mathcal{U}$ is closed, convex, and nonempty. Uniqueness follows from strict convexity of $A$. See this question.
[I think you meant to write $Y_i$ instead of $Z_i$.] You can rewrite $B_n$ as $$B_n(u) = \| Y - Wu\|^2_2 + \frac{\lambda_n}{n} \|u\|_1.$$ As mentioned already, $u \mapsto \frac{\lambda_n}{n} \|u\|_1$ is convex since it is the sum of $p$ convex functions $u \mapsto \frac{\lambda_n}{n} |u_j|$.
$\|Y-Wu\|_2^2$ is the composition of a linear map $u \mapsto Y-Wu$ and a strictly convex map $v \mapsto \|v\|_2^2=\sum_{i=1}^n v_i^2$.
Therefore, it is strictly convex on $\mathbb{R}^p$ as long as $W$ has full column rank, i.e. as long as $u \mapsto Wu$ is injective. [This is to exclude the case where $u \mapsto Y-Wu$ collapses some direction: if $Wu = Wv$ for distinct $u, v$, then $u \mapsto \|Y-Wu\|_2^2$ is constant along the line through $u$ and $v$, hence not strictly convex.]
More generally, if you only need strict convexity on $\mathcal{U}$, this condition relaxes to "$Wu \ne Wv$ for all distinct $u, v \in \mathcal{U}$."
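A small numerical illustration of the role of injectivity of $u \mapsto Wu$ (with made-up $Y$ and $W$): for a full-column-rank $W$ the midpoint inequality is strict, while along a null-space direction of a rank-deficient $W$ the least-squares term is flat.

```python
import numpy as np

# Strict convexity of u -> ||Y - W u||_2^2 hinges on injectivity of u -> W u,
# i.e. on W having full column rank.  Y and W below are made up.
rng = np.random.default_rng(2)
n, p = 8, 3
Y = rng.normal(size=n)

def lsq(W, u):
    r = Y - W @ u
    return r @ r

# Full column rank: strict midpoint inequality for u != v.
W_full = rng.normal(size=(n, p))          # full column rank w.p. 1
u, v = rng.normal(size=p), rng.normal(size=p)
assert lsq(W_full, 0.5 * (u + v)) < 0.5 * (lsq(W_full, u) + lsq(W_full, v))

# Rank deficient: constant along a null-space direction, so not strictly convex.
W_def = np.outer(rng.normal(size=n), np.array([1.0, 0.0, 0.0]))  # rank 1
null_dir = np.array([0.0, 1.0, 0.0])      # W_def @ null_dir == 0
assert np.isclose(lsq(W_def, u), lsq(W_def, u + 5.0 * null_dir))
print("full-rank case strictly convex; rank-deficient case flat along null space")
```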
To conclude, under the corresponding condition the full map $B_n$ is strictly convex on $\mathbb{R}^p$ or on $\mathcal{U}$, respectively.
Again, a minimizer exists if $\mathcal{U}$ is closed, convex, and nonempty (for unbounded $\mathcal{U}$, note $B_n$ is coercive whenever $\lambda_n > 0$), and it is unique whenever $B_n$ is strictly convex.