Now that I understand the context better, I am going to promote my own comments to an answer.
There is no general principle for artificially restricting the domain of a function when computing its convex conjugate. In fact, you should not do so unless you have a compelling and mathematically sound reason in your specific application. The domain of the conjugate function---that is, the set of $y$ for which the supremum is finite---is an important part of the conjugate itself, and should not be discarded. For example, when computing the dual of a convex optimization problem with an objective $f(x)$, the domain information imposes implicit constraints on the dual variables. If you remove that information, the dual problem is incorrect; it no longer provides bounds for the primal problem.
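To make this concrete, here is a quick numeric sketch (my own, not from the answer above; Python with NumPy assumed). It brute-forces the conjugate of $f(x)=e^x$: the supremum is finite only for $y\geq 0$, and discarding that fact would discard the implicit dual constraint $y\geq 0$.

```python
import numpy as np

# Hedged sketch: brute-force conjugate of f(x) = exp(x) on a grid, to see
# that dom f* = { y : y >= 0 } is real information. Closed form:
# f*(y) = y*log(y) - y for y > 0, f*(0) = 0, and f*(y) = +inf for y < 0.

def conjugate(y, lo=-50.0, hi=50.0, n=200001):
    x = np.linspace(lo, hi, n)
    return np.max(x * y - np.exp(x))

# y = 2: the supremum is finite and matches the closed form 2*log(2) - 2.
print(abs(conjugate(2.0) - (2 * np.log(2) - 2)) < 1e-6)   # True

# y = -1: the grid "supremum" keeps growing as the grid widens (-> +inf),
# i.e. y = -1 lies outside dom f*: an implicit constraint in any dual.
print(conjugate(-1.0, lo=-50.0) < conjugate(-1.0, lo=-500.0))  # True
```

The widening-grid comparison is how an unbounded supremum shows up numerically: no finite grid certifies $+\infty$, but the trend does.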
So what's going on in Chapter 5 of Ms. Fazel's thesis, or in the case of the so-called $\ell_0$ norm? Note that the interest is not in the conjugate of these functions, but rather in their convex envelopes. It just so happens that the convex envelope of a function can be computed using the conjugate of the conjugate. Unfortunately, the convex envelopes of $f(x)=\mathop{\textbf{card}}(x)$ or $g(X)=\mathop{\textbf{rank}}(X)$ are not very interesting---in fact, they are identically zero. On the other hand, the modified, extended-valued functions
$$\bar{f}(x) = \begin{cases} \mathop{\textbf{card}}(x) & \|x\|_\infty \leq 1 \\ +\infty &\|x\|_\infty > 1 \end{cases}, \qquad
\bar{g}(X) = \begin{cases} \mathop{\textbf{rank}}(X) & \|X\|_2 \leq 1 \\ +\infty &\|X\|_2 > 1 \end{cases}$$
have non-trivial convex envelopes.
As for why one would want these convex envelopes, it is because they provide some theoretical justification for why $\bar{f}^{**}(x)=\|x\|_1$ and $\bar{g}^{**}(X)=\|X\|_*$ are effective convex proxies for their non-convex counterparts. As you know, Ms. Fazel's thesis is entitled Matrix Rank Minimization with Applications, and it makes heavy use of trace minimization and nuclear norm minimization to find low-rank matrices that satisfy the modeling conditions. Section 5.1 is devoted to providing justifications for the convex heuristics that she uses throughout the work.
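The one-dimensional case of $\bar{f}^{**}(x)=\|x\|_1$ can be checked numerically. The sketch below (mine, not from the thesis; Python with NumPy assumed) brute-forces the double conjugate of $\mathop{\textbf{card}}$ restricted to $|x|\leq 1$ on a grid and compares it against $|x|$:

```python
import numpy as np

# Hedged 1-d sketch: the double conjugate of card(x) restricted to
# |x| <= 1 should equal |x| on that interval.
x_grid = np.linspace(-1, 1, 401)            # the restricted domain
card = (x_grid != 0).astype(float)          # card(x): 0 at 0, else 1
y_grid = np.linspace(-20, 20, 2001)

# fbar*(y)  = max_{|x|<=1} x*y - card(x)   (equals max(0, |y| - 1))
f_star = np.max(np.outer(y_grid, x_grid) - card, axis=1)
# fbar**(x) = max_y x*y - fbar*(y), evaluated back on |x| <= 1
f_star2 = np.max(np.outer(x_grid, y_grid) - f_star, axis=1)

print(np.max(np.abs(f_star2 - np.abs(x_grid))) < 1e-6)  # True
```

The maximizers here ($x\in\{0,\pm 1\}$, $y=\pm 1$) all land on grid points, so the grid computation is essentially exact.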
Added to clarify: you have expressed an interest in a comment above in finding the "tightest" convex envelope of a function. It's very important to note that if you truly want the tightest convex envelope for the entire function, you cannot restrict the domain in any way before taking the double conjugate. When you impose a domain restriction, you are computing the convex envelope of a different function. It will no longer serve as a lower bound for the original function. It will, however, be a tighter envelope over the domain you have selected.
You can see this for yourself: look at $f(x)=\mathop{\textbf{card}}(x)$ and $g(x)=\|x\|_1$. Clearly, there are values of $x$ for which $f(x)<g(x)$. So $g$ cannot be the convex envelope for $f$. It is, however, the convex envelope if you restrict the domain of $f$ to $\{x\,|\,\|x\|_\infty \leq 1\}$ (as we did with $\bar{f}$ above).
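A two-line numeric version of that counterexample (my own sketch, NumPy assumed):

```python
import numpy as np

# Hedged sketch: card(x) < ||x||_1 once an entry exceeds 1 in magnitude,
# so ||x||_1 cannot be a global underestimator (hence not the envelope)
# of card over all of R^n.
x = np.array([3.0, 0.0])
card = np.count_nonzero(x)        # f(x) = card(x) = 1
ell1 = np.sum(np.abs(x))          # g(x) = ||x||_1 = 3.0
print(card < ell1)                # True
```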
Since the function is radially symmetric, so is its conjugate, so you may as well consider the one-dimensional problem, with $f(x)=|x|+x^2/2$. Recall that the gradient of the conjugate function is the inverse of the gradient of $f$.
Again by symmetry, it suffices to consider $x>0$ only.
For $x>0$ we have $f'(x)=1+x$, so the inverse is defined only for arguments $v\ge 1$. Imagine that the discontinuity of $f'$ at the origin "stretches" the origin into the interval $[-1,1]$, which the gradient of the conjugate $f^*$ collapses back into a point. So $(f^*)'(v)=(v-1)^+$, which integrates to
$$f^*(v) = \frac12((\|v\|-1)^+)^2 $$
As usual, $a^+=\max(a,0)$.
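That closed form is easy to sanity-check by brute force in one dimension (my own sketch, NumPy assumed): compute $\sup_x\, vx - f(x)$ on a fine grid and compare against $\tfrac12((|v|-1)^+)^2$.

```python
import numpy as np

# Hedged sketch: grid check of f*(v) = ((|v|-1)^+)^2 / 2 for the 1-d
# function f(x) = |x| + x^2/2 discussed above.
x = np.linspace(-50, 50, 200001)
f = np.abs(x) + x**2 / 2

def conj(v):
    """Brute-force f*(v) = sup_x v*x - f(x) over the grid."""
    return np.max(v * x - f)

def closed_form(v):
    return 0.5 * max(abs(v) - 1.0, 0.0) ** 2

vs = np.linspace(-5, 5, 101)
err = max(abs(conj(v) - closed_form(v)) for v in vs)
print(err < 1e-6)  # True
```

For $|v|\leq 5$ the maximizer sits at $x=\operatorname{sign}(v)(|v|-1)^+$, well inside the grid, so the discretization error is negligible.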
Best Answer
This problem will be made simpler by translating to the language of convex sets.
Consider $\operatorname{epi} f$ and $\operatorname{epi} f^{**}$, the epigraphs of $f$ and $f^{**}$.
To start, we have that both epigraphs are convex because $f$ and $f^{**}$ are closed and convex. To see that $f^{**}$ is closed and convex, note that its epigraph is the intersection of the (closed, convex) epigraphs of the affine functions $x \mapsto x^{T} y - f^{*}(y)$, one for each $y \in \mathbb{R}^n$, because taking a supremum of a family of functions corresponds to intersecting their epigraphs.
We have that $f^{**} \leq f$. From its definition, $$f^{**}(x) = \sup_{y}\, x^{T} y - f^{*}(y)$$ $$= \sup_{y} \{ x^{T} y - \sup_{z} \{ y^{T} z - f(z) \} \}$$ $$= \sup_{y} \, \inf_{z} \,y^{T}(x-z) + f(z) \leq \inf_{z} \, \sup_{y} \,y^{T}(x-z) + f(z) = f(x)$$ where the inequality comes from exchanging the infimum and supremum (you may also recall this maneuver from the proof of weak duality), and the final equality holds because the inner supremum is $+\infty$ unless $z = x$.
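The inequality $f^{**}\leq f$ can be watched in action on a nonconvex example (my own sketch, not part of the proof; NumPy assumed). Take the double well $f(x)=(x^2-1)^2$, whose convex envelope is $0$ on $[-1,1]$, and compute the double conjugate by brute force on a grid:

```python
import numpy as np

# Hedged sketch: grid check of f** <= f for the nonconvex f(x) = (x^2-1)^2.
x = np.linspace(-3, 3, 1201)
f = (x**2 - 1)**2
y = np.linspace(-60, 60, 2401)

f_star = np.max(np.outer(y, x) - f, axis=1)        # f*(y)  = sup_x xy - f(x)
f_star2 = np.max(np.outer(x, y) - f_star, axis=1)  # f**(x) = sup_y xy - f*(y)

# f** is a global underestimator of f (up to grid discretization error)...
print(np.all(f_star2 <= f + 1e-2))                           # True
# ...and strictly smaller where f fails to be convex, e.g. at x = 0,
# where f(0) = 1 but the envelope value is 0.
i0 = int(np.argmin(np.abs(x)))
print(f_star2[i0] < f[i0])                                   # True
```

The small tolerance absorbs grid error: the discrete inner maximum slightly underestimates $f^*$, which can push the discrete $f^{**}$ slightly above the true one.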
Now assume for contradiction that $f \neq f^{**}$. From our just-derived inequality, this means that there exists $x$ with $f^{**}(x) < f(x)$. By the closed/compact version of the hyperplane separation theorem (note that $\operatorname{epi} f$ is closed because $f$ is closed), there must be a hyperplane in $\mathbb{R}^{n+1}$ that strictly separates $\operatorname{epi} f$ from the point $(x, f^{**}(x))$.
This hyperplane cannot be vertical and strictly separate $\operatorname{epi} f$ from $(x, f^{**}(x))$, so we can normalize the normal vector of the hyperplane to be $1$ in the vertical component. This strict separation gives, for some $\epsilon > 0$ and non-vertical component $y \in \mathbb{R}^n$ of our hyperplane, $$f(z) - \epsilon \geq y^T(z-x) + f^{**}(x) \quad \forall z \in \mathbb{R}^{n}.$$ Some manipulations give $$y^{T}x - f^{**}(x) - \epsilon \geq y^{T} z - f(z) \quad \forall z$$ and taking the supremum in $z$ yields $$y^{T} x - f^{**}(x) - \epsilon \geq f^{*}(y).$$ Another manipulation gives $$y^{T} x - f^{*}(y) - \epsilon \geq f^{**}(x).$$
Expanding the definition of $f^{**}$, we have just shown that $$y^{T} x - f^{*}(y) - \epsilon \geq \sup_{v}\, v^{T} x - f^{*}(v).$$ But the left-hand side is one of the terms over which the supremum on the right is taken, diminished by $\epsilon > 0$, so it cannot dominate that supremum. This is our contradiction.