The spectral theorem states that the symmetric matrix $A$ can be factored as $A = V \Lambda V^T$, where $\Lambda \in \mathbb R^{n \times n}$ is diagonal and $V \in \mathbb R^{n \times n}$ is orthogonal.
\begin{align}
\inf_{d \neq 0} \, \frac{d^T A d}{d^T d} &= \inf_{d \neq 0} \, \frac{d^T V \Lambda V^T d}{d^T d} \\
&= \inf_{d \neq 0} \, \frac{d^T V \Lambda V^T d}{d^T V V^T d} \tag{$\heartsuit$}\\
&= \inf_{y \neq 0} \, \frac{y^T \Lambda y}{y^T y} \tag{$\spadesuit$}\\
&= \inf_{\|y\|=1} \, y^T \Lambda y.
\end{align}
(In step ($\heartsuit$), we used the fact that $V V^T = I$. In step ($\spadesuit$), we made the change of variable $y = V^T d$.)
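To see these two steps numerically, here's a quick NumPy sketch (the symmetric matrix $A$ below is a random example of my own, not anything from the discussion above):

```python
import numpy as np

rng = np.random.default_rng(0)

# A random symmetric test matrix (hypothetical data, not from the text).
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2

# Spectral factorization: A = V diag(lam) V^T with V orthogonal.
# Note: np.linalg.eigh returns eigenvalues in ascending order.
lam, V = np.linalg.eigh(A)
assert np.allclose(A, V @ np.diag(lam) @ V.T)
assert np.allclose(V @ V.T, np.eye(4))   # V V^T = I, used in step (heartsuit)

# The change of variable y = V^T d preserves the Rayleigh quotient,
# since multiplying by V^T preserves norms (step spadesuit).
d = rng.standard_normal(4)
y = V.T @ d
assert np.isclose(d @ A @ d / (d @ d), y @ np.diag(lam) @ y / (y @ y))
```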
Let the diagonal entries of $\Lambda$ be
$\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n$. Note that
\begin{align}
y^T \Lambda y &= \lambda_1 y_1^2 + \cdots + \lambda_n y_n^2 \\
&\geq \lambda_n y_1^2 + \cdots + \lambda_n y_n^2 \\
&= \lambda_n (y_1^2 + \cdots + y_n^2) \\
&= \lambda_n,
\end{align}
assuming that $\|y\| = 1$.
Moreover, when $y = \begin{bmatrix} 0 & \cdots & 0 & 1 \end{bmatrix}^T$,
we have that $y^T \Lambda y = \lambda_n$.
Thus,
\begin{equation}
\inf_{d \neq 0} \, \frac{d^T A d}{d^T d} = \lambda_n.
\end{equation}
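If you want to convince yourself of this conclusion numerically, here's a small sketch (again with a random symmetric matrix of my own; since `np.linalg.eigvalsh` sorts eigenvalues in ascending order, the text's $\lambda_n$ is `lam[0]`):

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((5, 5))
A = (B + B.T) / 2                      # random symmetric test matrix

lam = np.linalg.eigvalsh(A)            # ascending order: lam[0] is lambda_n
D = rng.standard_normal((100_000, 5))  # random nonzero directions d

num = np.einsum('ij,jk,ik->i', D, A, D)   # d^T A d for each sample
den = np.einsum('ij,ij->i', D, D)         # d^T d for each sample
quotients = num / den

# No sample dips below lambda_n, and the sampled minimum sits just above it.
assert quotients.min() >= lam[0] - 1e-12
print(quotients.min(), lam[0])
```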
Here is a slightly different way to look at it. Let $v_1,\ldots, v_n$ be an orthonormal basis of eigenvectors of $A$, with corresponding eigenvalues $\lambda_1 \geq \cdots \geq \lambda_n$.
Given $d \in \mathbb R^n, d \neq 0$, decompose $d$ as
$d = c_1 v_1 + \cdots + c_n v_n$. Then
\begin{align}
\frac{d^T A d}{d^T d} &= \frac{(c_1 v_1 + \cdots + c_n v_n)^T(c_1 \lambda_1 v_1 + \cdots + c_n \lambda_n v_n)}{c_1^2 + \cdots + c_n^2} \\
&= \frac{\lambda_1 c_1^2 + \cdots + \lambda_n c_n^2}{c_1^2 + \cdots + c_n^2} \\
&\geq \frac{\lambda_n c_1^2 + \cdots + \lambda_n c_n^2}{c_1^2 + \cdots + c_n^2} \\
&= \lambda_n.
\end{align}
It follows that
\begin{equation}
\inf_{d \neq 0} \,\frac{d^T A d}{d^T d} \geq \lambda_n.
\end{equation}
Moreover, we have equality when $d = v_n$.
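This decomposition is also easy to check in code: the coefficients $c_i$ are just the entries of $V^T d$, and the quotient formula above matches the Rayleigh quotient directly (a sketch with a random symmetric matrix, using the same ascending-order convention as before):

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2

lam, V = np.linalg.eigh(A)   # columns of V: orthonormal eigenvectors
d = rng.standard_normal(4)

# Coefficients of d in the eigenbasis: d = sum_i c_i v_i, i.e. c = V^T d.
c = V.T @ d
assert np.isclose(d @ A @ d / (d @ d), (lam * c**2).sum() / (c**2).sum())

# Equality in the bound holds at d = v_n (here V[:, 0], ascending order).
v_n = V[:, 0]
assert np.isclose(v_n @ A @ v_n / (v_n @ v_n), lam[0])
```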
Best Answer
It sounds like you're interested in why "equation problems" $Ax=b$ are called linear while "eigenvalue problems" $Ax=\lambda\cdot x$ are called nonlinear. There are likely many reasons to use this language, but here's one that invokes the notion of linear combinations.
Suppose that $x_1$ and $x_2$ are two solutions to "equation problems" $Ax_1=b_1$ and $Ax_2=b_2$. Consider an arbitrary linear combination $x$ of $x_1$ and $x_2$, so $x=c_1\cdot x_1+c_2\cdot x_2$. If we define $b=c_1\cdot b_1+c_2\cdot b_2$, then $$ Ax = A(c_1\cdot x_1+c_2\cdot x_2) = c_1\cdot Ax_1+c_2\cdot Ax_2 = c_1\cdot b_1+c_2\cdot b_2 = b $$ This illustrates that "linear combinations of solutions to equation problems are solutions to equation problems."
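For a concrete check of this superposition property, here's a minimal sketch (the matrix and right-hand sides are arbitrary illustrative data of my own):

```python
import numpy as np

# Arbitrary illustrative data (not from the text above).
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b1 = np.array([1.0, 0.0])
b2 = np.array([0.0, 1.0])

x1 = np.linalg.solve(A, b1)   # A x1 = b1
x2 = np.linalg.solve(A, b2)   # A x2 = b2

c1, c2 = 2.5, -4.0
x = c1 * x1 + c2 * x2
b = c1 * b1 + c2 * b2

# The linear combination of solutions solves the combined system.
assert np.allclose(A @ x, b)
```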
Now, suppose that $x_1$ and $x_2$ are two solutions to "eigenvalue problems" $Ax_1=\lambda_1\cdot x_1$ and $Ax_2=\lambda_2\cdot x_2$. Again consider an arbitrary linear combination $x=c_1\cdot x_1+c_2\cdot x_2$. This linear combination $x$ solves an eigenvalue problem if $Ax=\lambda\cdot x$ for some $\lambda$. However, we have $$ Ax = A(c_1\cdot x_1+c_2\cdot x_2) = c_1\cdot Ax_1+c_2\cdot Ax_2 = c_1\cdot\lambda_1\cdot x_1+c_2\cdot\lambda_2\cdot x_2 \overset{?}{=} \lambda\cdot(c_1\cdot x_1+c_2\cdot x_2)=\lambda\cdot x $$ The $?$ in this equation indicates that a suitable $\lambda$ might not exist. Indeed, it isn't difficult to find examples where such a $\lambda$ does not exist. For instance, consider the data \begin{align*} A &= \left[\begin{array}{rr} 23 & 32 \\ -16 & -25 \end{array}\right] & \lambda_1 &= 7 & x_1 &= \left[\begin{array}{r} 2 \\ -1 \end{array}\right] & \lambda_2 &= -9 & x_2 &= \left[\begin{array}{r} 1 \\ -1 \end{array}\right] \end{align*} Note that $Ax_1=\lambda_1\cdot x_1$ and $Ax_2=\lambda_2\cdot x_2$. However, for $x=x_1+x_2$, we have $$ \overset{A}{\left[\begin{array}{rr} 23 & 32 \\ -16 & -25 \end{array}\right]}\overset{x}{\left[\begin{array}{r} 3 \\ -2 \end{array}\right]} = \left[\begin{array}{r} 5 \\ 2 \end{array}\right] \neq \lambda\cdot x $$ for any scalar $\lambda$, since $\left[\begin{smallmatrix} 5 \\ 2 \end{smallmatrix}\right]$ is not a scalar multiple of $\left[\begin{smallmatrix} 3 \\ -2 \end{smallmatrix}\right]$.
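You can verify all of this data directly, for instance with NumPy:

```python
import numpy as np

A = np.array([[23, 32],
              [-16, -25]])
x1 = np.array([2, -1])
x2 = np.array([1, -1])

assert np.allclose(A @ x1, 7 * x1)     # lambda_1 = 7
assert np.allclose(A @ x2, -9 * x2)    # lambda_2 = -9

x = x1 + x2                            # = [3, -2]
Ax = A @ x                             # = [5, 2]

# A x is not a scalar multiple of x: the 2x2 determinant |Ax  x| is nonzero.
assert Ax[0] * x[1] - Ax[1] * x[0] != 0
```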
Side Note. As mentioned in the comments, there are other reasons to call the eigenvalue problem $Ax=\lambda\cdot x$ nonlinear. For example, if one uses the characteristic polynomial $\chi_A(t)=\det(t\cdot I_n-A)$ to solve for the eigenvalues of $A$, then one ends up factoring an $n$th degree polynomial, which is a nonlinear problem.
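For instance, NumPy exposes exactly this route: `np.poly` builds the coefficients of the monic characteristic polynomial of a square matrix, and `np.roots` then factors that polynomial (the nonlinear step), shown here on the same matrix $A$ as above:

```python
import numpy as np

A = np.array([[23, 32],
              [-16, -25]])

# Coefficients of the monic characteristic polynomial chi_A(t) = det(t I - A).
coeffs = np.poly(A)             # here: [1, 2, -63], i.e. t^2 + 2t - 63
eigenvalues = np.roots(coeffs)  # factoring the polynomial: the nonlinear step

print(np.sort(eigenvalues))     # approximately [-9.  7.]
```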
It's also worth noting that taking any linear combination of two eigenvectors $x_1$ and $x_2$ corresponding to the same eigenvalue $\lambda$ does yield a solution to an eigenvalue problem. This is precisely the statement that eigenvectors corresponding to an eigenvalue $\lambda$ are organized into the eigenspace $E_\lambda=\operatorname{Null}(\lambda\cdot I_n-A)$.
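A tiny sketch illustrating this closure property (the diagonal matrix is a toy example of my own, chosen so that $\lambda = 2$ has a two-dimensional eigenspace):

```python
import numpy as np

# A toy matrix where lambda = 2 has a 2-D eigenspace E_2,
# spanned by the first two standard basis vectors.
A = np.diag([2.0, 2.0, 5.0])
x1 = np.array([1.0, 0.0, 0.0])
x2 = np.array([0.0, 1.0, 0.0])

# Any linear combination of x1 and x2 stays inside E_2.
x = 3.0 * x1 - 1.5 * x2
assert np.allclose(A @ x, 2.0 * x)
```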