I wrote the post you linked to so I'll take a stab at this.
Short answer: In terms of $\phi$, the dual problem is to maximize $-\phi^*(0,z)$ with respect to $z$. But
\begin{align*}
-\phi^*(0,z) &= - \sup_{x,y} \, \langle 0,x \rangle + \langle z, y \rangle - \phi(x,y) \\
&= \inf_x \, \inf_y \, \phi(x,y) - \langle z, y \rangle.
\end{align*}
The "Lagrangian" is the function
\begin{equation*}
L(x,z) = \inf_y \, \phi(x,y) - \langle z, y \rangle.
\end{equation*}
The "dual function" $G(z) = -\phi^*(0,z)$ is obtained by minimizing $L(x,z)$ with respect to $x$, as we are accustomed to.
This expression for the Lagrangian appears on p. 54 of Ekeland and Temam.
That's what the Lagrangian looks like in the general setting. The significance of the Lagrangian is that for convex problems, under mild assumptions, $x$ and $z$ are primal and dual optimal if and only if $(x,z)$ is a saddle point of the Lagrangian. We sometimes think of convex optimization problems as coming in pairs (primal and dual problem), but actually they come in triples if we remember to include the saddle point problem. If we only knew about the primal problem and the dual problem, but not the saddle point problem, then we'd be missing $1/3$ of the picture.
Longer answer:
The function $\phi$ is the minimal notation we need to describe the perturbation viewpoint. The Lagrangian only appears later, when we write the dual problem more explicitly.
The perturbation viewpoint provides one possible explanation (my favorite explanation) of where the Lagrangian comes from, where the dual problem comes from, and why we expect strong duality to hold for convex problems. In the thread you linked to, note that the Lagrangian appears naturally at the very end. (The appearance of the Lagrangian should have been highlighted more explicitly.)
I'll repeat myself a bit here, but I will add something new -- a derivation of the Lagrangian when we have equality constraints in addition to inequality constraints. I've been meaning to write this down anyway. I'll work in finite-dimensional spaces rather than an abstract real vector space, and I'll use slightly different notation.
The primal problem is to minimize $\phi(x,0)$ with respect to $x$ (where $\phi:\mathbb R^n \times \mathbb R^m \to \mathbb R$). The perturbed problems are to minimize $\phi(x,y)$ with respect to $x$. We should also introduce the "value function" $v(y) = \inf_x \, \phi(x,y)$. ($v$ is called $h$ in the other thread.) So the primal problem is to evaluate $v(0)$. Now, using some highly intuitive facts about the convex conjugate, we note that $v(0) \geq v^{**}(0)$, and that when $\phi$ is convex we typically have equality. Thus, the dual problem is to evaluate $v^{**}(0)$. This is a very clear way of understanding what the dual problem is and why we care about it. And we can express this dual problem in terms of $\phi$: the dual problem is to maximize $-\phi^*(0,z)$ with respect to $z$. (Here $\phi^*$ is the convex conjugate of $\phi$. See other thread for details.)
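To make the value-function picture concrete, here is a small numerical check on a toy instance of my own (not from the thread): minimize $x^2$ subject to $1 - x \le 0$, with the perturbed constraint $1 - x + y \le 0$. Then $v(y) = \max(1+y,0)^2$, and the dual function works out to $g(z) = \inf_x \, x^2 + z(1-x) = z - z^2/4$ for $z \ge 0$, whose maximum $g(2) = 1$ equals $v(0)$:

```python
# Toy instance (my own, not from the post): minimize x**2 subject to 1 - x <= 0,
# perturbed to 1 - x + y <= 0.
def v(y):
    # Value function: minimize x**2 over x >= 1 + y.
    return max(1.0 + y, 0.0) ** 2

def g(z):
    # Dual function for z >= 0: inf_x x**2 + z*(1 - x) = z - z**2/4.
    return z - z * z / 4.0

primal = v(0.0)                                   # = 1
dual = max(g(z / 100.0) for z in range(0, 1000))  # grid search over z in [0, 10)
assert primal == 1.0
assert abs(dual - primal) < 1e-3  # v(0) = v**(0): convex problem, Slater holds
```

The point of the check is that evaluating $v^{**}(0)$ (the dual optimum) really does recover $v(0)$ here, exactly as the conjugacy argument predicts.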
We haven't yet mentioned the Lagrangian. But the Lagrangian will appear when we work these ideas out in detail for the optimization problem
\begin{align*}
\text{minimize} & \quad f_0(x) \\
\text{subject to} & \quad f(x) \leq 0,\\
& \quad h(x) = 0,
\end{align*}
where $f:\mathbb R^n \to \mathbb R^m$ and $h:\mathbb R^n \to \mathbb R^p$.
The inequality $f(x) \leq 0$ should be interpreted component-wise.
We can perturb this problem as follows:
\begin{align*}
\text{minimize} & \quad f_0(x) \\
\text{subject to} & \quad f(x) + y_1 \leq 0,\\
& \quad h(x) +y_2= 0.
\end{align*}
This perturbed problem can be expressed as minimizing $\phi(x,y_1,y_2)$ with respect to $x$, where
\begin{align*}
\phi(x,y_1,y_2) =
\begin{cases}
f_0(x) & \quad \text{if } f(x) + y_1 \leq 0 \text{ and } h(x) + y_2 = 0, \\
\infty & \quad \text{otherwise}.
\end{cases}
\end{align*}
To find the dual problem, we need to evaluate $-\phi^*(0,z_1,z_2)$, which is a relatively straightforward calculation.
\begin{align*}
-\phi^*(0,z_1,z_2) &= - \sup_{\substack{x,\,y_1,\,y_2 \\ f(x) + y_1 \leq 0 \\ h(x) + y_2 = 0}} \langle 0,x\rangle + \langle z_1,y_1 \rangle + \langle z_2,y_2 \rangle - f_0(x) \\
&= -\sup_{x,\, q \geq 0} \, \langle z_1,-f(x) - q \rangle + \langle z_2,-h(x)\rangle - f_0(x) \\
&= \begin{cases}
\inf_x \, f_0(x) + \langle z_1,f(x) \rangle + \langle z_2, h(x) \rangle
& \quad \text{if } z_1 \geq 0 \\
-\infty & \quad \text{otherwise}.
\end{cases}
\end{align*}
In the final step, the Lagrangian
\begin{equation}
L(x,z_1,z_2) = f_0(x) + \langle z_1,f(x) \rangle + \langle z_2,h(x) \rangle
\end{equation}
has appeared, as has the usual description of the dual function (where you minimize the Lagrangian with respect to $x$). Up until this point, we didn't know what the Lagrangian was or why it would be relevant.
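To see this dual numerically, here is a sketch on a toy instance of my own choosing (not from the post), with $f_0(x) = x_1^2 + x_2^2$, $f(x) = x_1 + x_2 + 1$, and $h(x) = x_1 - x_2$; the closed-form dual $g(z_1,z_2) = -z_1^2/2 - z_2^2/2 + z_1$ comes from setting the gradient of the Lagrangian to zero:

```python
# Toy instance: minimize x1**2 + x2**2
# subject to x1 + x2 + 1 <= 0 and x1 - x2 = 0.
# Lagrangian: L(x, z1, z2) = x1**2 + x2**2 + z1*(x1 + x2 + 1) + z2*(x1 - x2).
def dual_g(z1, z2):
    # The inf over x is attained at x1 = -(z1+z2)/2, x2 = -(z1-z2)/2
    # (set the gradient in x to zero), giving the closed form below.
    return -z1**2 / 2 - z2**2 / 2 + z1

primal_opt = 0.5  # attained at x1 = x2 = -1/2
# Weak duality: every dual value with z1 >= 0 lower-bounds the primal optimum.
for i in range(0, 50):
    for j in range(-25, 25):
        assert dual_g(i / 10.0, j / 10.0) <= primal_opt + 1e-12
# Strong duality holds here (convex problem, Slater): g(1, 0) = 1/2.
assert abs(dual_g(1.0, 0.0) - primal_opt) < 1e-12
```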
If strong duality does not hold, then we have no reason to believe there must exist Lagrange multipliers that, together with a primal solution, satisfy the KKT conditions. Here are some counter-examples.
${\bf counter-example 1}$
If one drops the convexity condition on the objective function, then strong duality can fail even when a relative interior condition holds.
The counter-example is the same as the next one.
${\bf counter-example 2}$
For a non-convex problem where strong duality does not hold, a primal-dual optimal pair may fail to satisfy the KKT conditions.
Consider the optimization problem
\begin{align}
\operatorname{minimize} & \quad e^{-x_1x_2} \\
\text{subject to} & \quad x_1\le 0.
\end{align}
The domain of the problem is $D = \{ (x_1,x_2) : x_1 \ge 0, \, x_2 \ge 0 \}$. The problem is not convex, as one can check by computing the Hessian of the objective. Clearly, any point with $x_1 = 0$ and $x_2 \in \mathbb R_+$ is a primal optimal solution, with primal optimal value $1$.
The Lagrangian is
$$
L(x_1,x_2,\lambda) = e^{-x_1x_2} + \lambda x_1.
$$
The dual function is
\begin{align}
G(\lambda) &= \inf_{(x_1,x_2) \in D} L(x_1,x_2,\lambda) =
\begin{cases}
0 & \lambda\ge 0\\
-\infty & \lambda < 0
\end{cases}
\end{align}
(for $\lambda \ge 0$ both terms of $L$ are nonnegative on $D$, and taking $x_1 = \varepsilon$, $x_2 = 1/\varepsilon^2$ with $\varepsilon \to 0^+$ drives $L$ to $0$; for $\lambda < 0$, take $x_2 = 0$ and let $x_1 \to \infty$).
Thus, any $\lambda \geq 0$ is a dual optimal solution, with dual optimal value $0$; the duality gap is $1$ and strong duality fails. As for the KKT conditions, remember the domain is $D = \{ (x_1,x_2) : x_1 \ge 0, \, x_2 \ge 0 \}$:
\begin{align*}
&\lambda-x_2e^{-x_1x_2}=0\\
&-x_1e^{-x_1x_2}=0\\
&x_1\le 0\\
&\lambda\ge 0\\
&\lambda x_1=0
\end{align*}
Pick any primal-dual optimal pair with $x_1 = 0$, $x_2 \ge 0$, $\lambda \ge 0$, and $\lambda \neq x_2$: the stationarity equation $\lambda - x_2 e^{-x_1 x_2} = 0$ fails, so the KKT conditions do not hold.
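A quick numerical illustration of the gap (the sequence $x_1 = \varepsilon$, $x_2 = 1/\varepsilon^2$ is my choice of a minimizing sequence; any sequence with $x_1 \to 0$ and $x_1 x_2 \to \infty$ works):

```python
import math

lam = 0.5  # an arbitrary lambda >= 0, chosen just for this check

def L(x1, x2, lam):
    return math.exp(-x1 * x2) + lam * x1

# Primal optimum: x1 = 0 with any x2 >= 0 gives objective value 1.
assert L(0.0, 3.0, lam) == 1.0

# G(lam) = 0 is only approached, e.g. along x1 = eps, x2 = 1/eps**2:
vals = [L(eps, 1.0 / eps**2, lam) for eps in (1e-1, 1e-2, 1e-3)]
assert vals == sorted(vals, reverse=True)  # decreasing toward 0
assert vals[-1] < 1e-3                     # duality gap = 1 - 0 = 1
```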
${\bf counter-example 3}$
For a non-convex problem, even when strong duality holds, a solution of the KKT conditions need not be a primal-dual optimal pair.
Consider the optimization problem on $\mathbb R$
\begin{align}
\operatorname{minimize} & \quad x^3 \\
\text{subject to} & \quad -x^3-1\le 0.
\end{align}
The objective function is not convex, as its second derivative $6x$ changes sign. Clearly, $x=-1$ is the unique primal optimal solution, with primal optimal value $-1$.
The Lagrangian is
$$
L(x,\lambda) = x^3 - \lambda (x^3+1).
$$
The dual function is
\begin{align}
G(\lambda) &= \inf_x L(x,\lambda) =
\begin{cases}
-1 & \lambda = 1\\
-\infty & \text{otherwise}
\end{cases}
\end{align}
since $L(x,\lambda) = (1-\lambda)x^3 - \lambda$ is unbounded below in $x$ unless $\lambda = 1$.
Thus, $\lambda = 1$ is the dual optimal solution, with dual optimal value $-1$; the duality gap is $0$ and strong duality holds. Meanwhile, the KKT conditions are
\begin{align*}
&3x^2(1-\lambda)=0\\
&-x^3-1\le 0\\
&\lambda\ge 0\\
&\lambda (-x^3-1)=0\\
\end{align*}
The solutions of the KKT conditions are $x=-1, \lambda=1$ and $x=0, \lambda=0$. Notice that $x=0, \lambda=0$ satisfies the KKT conditions but is not a primal-dual optimal pair.
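A short script that checks both KKT points (plain Python, just evaluating the conditions above):

```python
def check_kkt(x, lam):
    stationary = abs(3 * x**2 * (1 - lam)) < 1e-12  # dL/dx = 3x^2(1 - lam) = 0
    feasible = -x**3 - 1 <= 0                       # primal feasibility
    dual_feasible = lam >= 0                        # dual feasibility
    comp_slack = abs(lam * (-x**3 - 1)) < 1e-12     # complementary slackness
    return stationary and feasible and dual_feasible and comp_slack

assert check_kkt(-1.0, 1.0)   # the primal-dual optimal pair
assert check_kkt(0.0, 0.0)    # satisfies KKT but is NOT optimal:
assert 0.0**3 > (-1.0)**3     # its objective value 0 exceeds the optimum -1
```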
The discussion indicates that for non-convex problems, the KKT conditions may be neither necessary nor sufficient for primal-dual optimality.
${\bf counter-example 4}$
For a convex problem, even when strong duality holds, the KKT conditions may have no solution, and hence no Lagrange multipliers exist.
Consider the optimization problem on domain $\mathbb R$
\begin{align}
\operatorname{minimize} & \quad x \\
\text{subject to} & \quad x^2\le 0.
\end{align}
Obviously, the problem is convex, with unique primal optimal solution $x=0$ and optimal value $0$. The feasible set is $\{0\}$, so Slater's condition fails.
The Lagrangian is
$$
L(x,\lambda) = x + \lambda x^2.
$$
The dual function is
\begin{align}
G(\lambda) &= \inf_x L(x,\lambda) =
\begin{cases}
-\infty & \lambda\le 0\\
-\frac{1}{4\lambda} & \lambda > 0
\end{cases}
\end{align}
(for $\lambda > 0$ the infimum is attained at $x = -\frac{1}{2\lambda}$).
Thus, the dual optimal value is $\sup_{\lambda > 0} -\frac{1}{4\lambda} = 0$; the duality gap is $0$ and strong duality holds. However, the dual optimum is not attained: the value $0$ is approached only as $\lambda \to \infty$, so there is no dual optimal solution. As for the KKT conditions,
\begin{align*}
&1+2\lambda x=0\\
&x^2\le 0\\
&\lambda\ge 0\\
&\lambda x^2=0\\
\end{align*}
The KKT conditions have no solution: stationarity $1 + 2\lambda x = 0$ forces $x \neq 0$, while primal feasibility $x^2 \le 0$ forces $x = 0$.
This is a convex problem where the dual optimum is not attained and the KKT conditions have no solution, yet the primal problem is simple to solve.
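A small check that the dual values $-\frac{1}{4\lambda}$ climb toward $0$ without ever attaining it:

```python
def L(x, lam):
    return x + lam * x**2

def G(lam):
    # For lam > 0 the infimum of L over x is attained at x = -1/(2*lam).
    return L(-1.0 / (2.0 * lam), lam)

# G(lam) = -1/(4*lam), which increases toward 0 but never reaches it:
for lam in (0.5, 1.0, 10.0, 1000.0):
    assert abs(G(lam) - (-1.0 / (4.0 * lam))) < 1e-12
assert G(1.0) < G(10.0) < G(1000.0) < 0.0
```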
${\bf counter-example 5}$
For a differentiable convex problem, the KKT conditions may have no solution even though both primal and dual optimal solutions exist; in that case, strong duality must fail.
Consider the optimization problem on domain $D:=\{(x,y):y>0\}$
\begin{align}
\operatorname{minimize} & \quad e^{-x} \\
\text{subject to} & \quad \frac{x^2}{y}\le 0.
\end{align}
The problem can be shown to be convex ($x^2/y$ is convex on $y > 0$), with primal optimal solutions $x=0$, $y>0$ and optimal value $1$. The feasible set is $\{(0,y): y > 0\}$, so Slater's condition fails.
The Lagrangian is
$$
L(x,y,\lambda) = e^{-x} + \lambda \frac{x^2}{y}.
$$
After some careful calculation, the dual function is
\begin{align}
G(\lambda) &= \inf_{(x,y) \in D} L(x,y,\lambda) =
\begin{cases}
0 & \lambda\ge 0\\
-\infty & \lambda < 0
\end{cases}
\end{align}
(for $\lambda \ge 0$, take $x = t$, $y = t^3$, and let $t \to \infty$; for $\lambda < 0$, fix $x \neq 0$ and let $y \to 0^+$).
Thus, any $\lambda \ge 0$ is dual optimal, with dual optimal value $0$; the duality gap is $1$ and strong duality fails. We can pick a primal-dual optimal pair, say $x=0$, $y=1$, $\lambda=2$. As for the KKT conditions,
\begin{align*}
&-e^{-x}+\frac{2\lambda x}{y}=0\\
&-\frac{\lambda x^2}{y^2}=0\\
&\frac{x^2}{y}\le0\\
&\lambda\ge 0\\
&\lambda \frac{x^2}{y}=0
\end{align*}
with $y>0$. The constraint $\frac{x^2}{y} \le 0$ forces $x = 0$, but then the first equation reads $-1 = 0$; hence the KKT conditions have no solution.
This counter-example warns us that we have to be careful about the strong duality condition even for differentiable convex problems.
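A quick numerical check of the gap, using the minimizing sequence $x = t$, $y = t^3$ (my choice; any sequence with $x \to \infty$ and $x^2/y \to 0$ works):

```python
import math

lam = 2.0  # the dual component of the pair (x, y, lambda) = (0, 1, 2) above

def L(x, y, lam):
    return math.exp(-x) + lam * x**2 / y

# Primal optimal value is 1 (x = 0, any y > 0):
assert L(0.0, 1.0, lam) == 1.0

# G(lam) = 0 is approached along x = t, y = t**3 as t grows:
vals = [L(t, t**3, lam) for t in (10.0, 100.0, 1000.0)]
assert vals == sorted(vals, reverse=True)  # decreasing toward 0
assert vals[-1] < 1e-2                     # duality gap = 1 - 0 = 1
```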
Best Answer
In general, the dual function $g$ is
$g(\lambda, \nu)=\inf_{x} L(x,\lambda,\nu)$
This is a function of $\lambda$ and $\nu$, not a function of $x$ (correcting a misstatement in the question). The dual problem is then
$\max_{\lambda\ge 0,\nu} g(\lambda,\nu)$
There's no completely general approach to finding the inf of $L(x,\lambda,\nu)$, but in those cases where $L$ is differentiable and convex with respect to $x$, any $x$ with $\nabla_{x} L(x,\lambda,\nu)=0$ will attain the inf. In cases where $L$ isn't differentiable with respect to $x$, you'll have to find some other way to determine the inf.
In practice, it is often the case that the inf is $-\infty$ for values of $\lambda$ and $\nu$ that don't satisfy some constraint. This gives an implicit constraint on the values of $\lambda$ and $\nu$ that provide a useful lower bound on the optimal primal objective value. Such a constraint is typically made explicit in the dual problem.
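A minimal sketch of this recipe on a toy problem of my own (not from the question): minimize $x^2$ subject to $x = 1$. Here $L(x,\nu) = x^2 + \nu(x-1)$ is convex and differentiable in $x$, so $\nabla_x L = 2x + \nu = 0$ gives $x = -\nu/2$ and hence $g(\nu) = -\nu^2/4 - \nu$:

```python
def g(nu):
    # L(x, nu) = x**2 + nu*(x - 1) is convex and differentiable in x,
    # so the inf is attained where dL/dx = 2x + nu = 0, i.e. x = -nu/2.
    x = -nu / 2.0
    return x**2 + nu * (x - 1.0)  # closed form: -nu**2/4 - nu

# The dual maximum is at nu = -2 with g(-2) = 1, matching the primal
# optimal value x**2 = 1 at x = 1.
assert abs(g(-2.0) - 1.0) < 1e-12
for nu in (-4.0, -3.0, -1.0, 0.0, 2.0):
    assert g(nu) <= 1.0 + 1e-12   # weak duality: g(nu) <= primal optimum
```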
To answer your specific questions:
Yes. For problems with linear equality constraints it's also possible to use the Fenchel conjugate to find the Lagrangian dual problem, but that's a bit more advanced.
It's irrelevant whether there are multiple $x$ values at which the inf is attained, since only the value of the inf matters in $g(\lambda,\nu)$.
Not really. We've already discussed the case where $L$ is differentiable with respect to $x$.