As to the depth and significance of the existence of eigenvectors for finite-dimensional linear operators over $\mathbb{C}$, I think you have already identified the core algebraic reason: it is that the field $\mathbb{C}$ is algebraically closed. I think it is fair to call this a "deep fact" -- the task of giving a rigorous proof was the topic of Gauss's thesis work, and in fact by modern standards Gauss's (first) proof is still not completely rigorous. (Some feel this way, anyway -- there is not universal agreement here.) Whole books have been written on various proofs of this result.
Of course the depth here lies in the fact that the definition of the complex field is topological / analytic, ultimately relying on the completeness of $\mathbb{R}$. If you start with a field $K$, the following is not very deep:
Proposition: For a field $K$, the following are equivalent:
(i) Every linear endomorphism of a finite dimensional $K$-vector space has an eigenvector.
(ii) $K$ is algebraically closed.
To see (i) ⇒ (ii), use the fact that every monic polynomial of degree $n$ over $K$ is both the characteristic and the minimal polynomial of its companion matrix; an eigenvector of the companion matrix then yields a root of the polynomial in $K$.
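As a quick numerical illustration (a sketch of my own in NumPy, not part of the argument above), one can build the companion matrix of $z^2 + 1$, which has no real root, and check that its eigenvalues are exactly the roots $\pm i$:

```python
import numpy as np

def companion(coeffs):
    """Companion matrix of the monic polynomial
    z^n + c_{n-1} z^{n-1} + ... + c_0, where coeffs = [c_0, ..., c_{n-1}]."""
    n = len(coeffs)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)       # ones on the subdiagonal
    C[:, -1] = -np.asarray(coeffs)   # last column holds -c_0, ..., -c_{n-1}
    return C

# p(z) = z^2 + 1 has no real root; its companion matrix therefore has
# no eigenvector over R, while over C its eigenvalues are the roots +/- i.
C = companion([1.0, 0.0])
eigvals = np.linalg.eigvals(C)
```

Over $\mathbb{R}$ the matrix `C` has no eigenvector at all, which is the (i) ⇒ (ii) direction in miniature.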
To answer your second question: no, there are linear operators on infinite dimensional $\mathbb{C}$-vector spaces without eigenvalues, even bounded linear operators. A fundamental example is the Hilbert space $L^2([0,1])$ of square integrable $\mathbb{C}$-valued functions on the unit interval. The multiplication operator $M: f \mapsto xf$ has norm $1$ but is easily seen to have no eigenvectors: for any $\lambda \in \mathbb{C}$, the equation $xf = \lambda f$ forces $f$ to vanish almost everywhere on $[0,1] \setminus \{\lambda\}$, hence almost everywhere.
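To get a feel for this numerically, here is a hedged NumPy sketch (the discretization is my own choice): sampling $M$ at $n$ grid points gives the diagonal matrix $\operatorname{diag}(x_1,\dots,x_n)$, whose eigenvectors are coordinate vectors, i.e. discrete "delta functions" that collapse to point masses as the grid is refined and so have no $L^2$ limit:

```python
import numpy as np

# Discretize M: f |-> x*f at n sample points of [0,1]: the operator
# becomes the diagonal matrix diag(x_1, ..., x_n).
n = 8
x = np.linspace(0.0, 1.0, n)
M = np.diag(x)

# Its eigenvalues are the sample points (filling out [0,1] as n grows,
# consistent with the spectrum of M being [0,1]), but its eigenvectors
# are the coordinate vectors e_i: discrete point masses.
eigvals, eigvecs = np.linalg.eigh(M)
```

So the finite models have plenty of eigenvectors, but they degenerate in the limit, which is one way to see why $M$ itself has none.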
In functional analysis, there is a suitable generalization of the notion of the set of eigen*values* of a linear operator on certain infinite dimensional spaces, namely the spectrum.
Addendum: inspired by damiano's comment, here is what is in some sense the simplest possible example of a linear operator on an infinite dimensional space without an eigenvector: let $K$ be any field, and let $V$ be a vector space of countably infinite dimension with basis $\{e_n\}_{n=1}^{\infty}$. Then consider the shift operator $T$
on $V$, i.e., the unique linear operator such that for all $n \in \mathbb Z^+$, $T(e_n) = e_{n+1}$. $T$ is very simple and easy to visualize: it just happens never to "cycle back" on itself. Indeed, if $v = \sum_{n=1}^{\infty} a_n e_n \in V$ (with $a_n = 0$
for all but finitely many $n$), then assuming that $Tv = \lambda v$ gives
$\sum_{n=1}^{\infty} \lambda a_n e_n = \sum_{n=2}^{\infty} a_{n-1} e_n$.
In particular $\lambda a_1 = 0$. It is clear that the kernel of $T$ is $0$, so if $v \neq 0$ then $\lambda \neq 0$; thus $a_1 = 0$, and then comparing coefficients in the above equation gives, by induction, $a_n = 0$ for all $n$, i.e., $v = 0$. (If $V = K[x]$ is the space of polynomials with $K$-coefficients, then with respect to the natural basis $e_n = x^{n-1}$, multiplication by $x$ is the shift operator $T$.)
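For what it's worth, the coefficient comparison can be checked mechanically. The following small Python sketch (my own illustration, with polynomials stored as low-to-high coefficient lists) verifies exhaustively over a tiny search space that $Tv = \lambda v$ has no nonzero solutions:

```python
from itertools import product

def shift(v):
    """T on K[x] in the basis e_n = x^(n-1): multiplication by x,
    i.e. prepend a zero (coefficients stored low degree to high)."""
    return [0] + list(v)

def scale(lam, v):
    return [lam * a for a in v]

def eq(u, w):
    """Compare coefficient lists, ignoring trailing zeros."""
    n = max(len(u), len(w))
    return list(u) + [0] * (n - len(u)) == list(w) + [0] * (n - len(w))

# Exhaustive check on a tiny search space: no nonzero v with
# coefficients in {-1, 0, 1} and length <= 3 satisfies Tv = lam * v
# for any integer lam in {-2, ..., 2}.
counterexamples = [
    (lam, v)
    for lam in range((-2), 3)
    for v in product((-1, 0, 1), repeat=3)
    if any(v) and eq(shift(v), scale(lam, v))
]
```

Of course this checks only finitely many cases; the degree argument in the text is what rules out all of them at once.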
Let $C[0,1]$ be the linear space of continuous functions on $[0,1]$; this linear space is a complete normed linear space (i.e., a Banach space) when given the max norm $\|f\|=\max_{t \in [0,1]}|f(t)|$. The Volterra operator
$$
(Vf)(x) = \int_{0}^{x}f(t)dt
$$
maps continuous functions to continuous functions. $V$ has trivial null space because $(Vf)(x)=0$ for all $x$ implies $f(x)=0$ for all $x$ by the Fundamental Theorem of Calculus. So $V$ does not have $0$ as an eigenvalue. If $Vf=\lambda f$ for a non-zero $\lambda$, then $f=\frac{1}{\lambda}Vf$ is continuously differentiable with $f'=\frac{1}{\lambda} f$ and $f(0)=0$, which has only the trivial solution $f\equiv 0$.
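As a numerical aside (the discretization here is my own choice, not part of the argument above): a left-endpoint Riemann-sum discretization of $V$ is a strictly lower-triangular matrix, hence nilpotent with all eigenvalues $0$. This matches the fact that $V$ is quasinilpotent, i.e. its spectrum is $\{0\}$, even though, as shown above, $0$ is not an eigenvalue of $V$ itself:

```python
import numpy as np

# Left-endpoint discretization of (Vf)(x) = int_0^x f(t) dt on an
# n-point grid: (V_n f)_i = h * sum_{j < i} f_j, a strictly
# lower-triangular matrix. All of its eigenvalues (its diagonal
# entries) are 0, and it is nilpotent: V_n ** n = 0 exactly.
n = 20
h = 1.0 / n
Vn = h * np.tril(np.ones((n, n)), k=-1)
nilpotent = not np.any(np.linalg.matrix_power(Vn, n))
```

The nilpotency is exact here because the structural zeros of a strictly lower-triangular matrix propagate through matrix multiplication.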
Why does the finite dimension of $V$ imply that such a polynomial exists?
Say $V$ has dimension $n$. The space of endomorphisms of $V$ has finite dimension $m$ (namely $m=n^2$). Let $A$ be an endomorphism. Then $\{I, A, A^2, \dots, A^{m}\}$ is linearly dependent, since it consists of $m+1$ elements of an $m$-dimensional space. So some nontrivial linear combination of these is zero; that is, there is a nonzero polynomial $f$ with $f(A) = O$.
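This dimension count can be carried out numerically; the sketch below (my own illustration in NumPy) stacks the flattened powers $I, A, \dots, A^{m}$ as columns and extracts a null-space coefficient vector, i.e. an annihilating polynomial:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
A = rng.standard_normal((n, n))
m = n * n

# Flatten I, A, ..., A^m into the columns of an m x (m+1) matrix.
# m + 1 vectors in an m-dimensional space must be linearly dependent.
P = np.column_stack(
    [np.linalg.matrix_power(A, k).ravel() for k in range(m + 1)]
)

# Since P has more columns than rows, its null space is nontrivial;
# the last row of Vt is a unit vector c with P @ c ~= 0, i.e. the
# coefficients of a nonzero polynomial f (degree <= m) with f(A) = O.
_, s, Vt = np.linalg.svd(P)
c = Vt[-1]
fA = sum(ck * np.linalg.matrix_power(A, k) for k, ck in enumerate(c))
```

(In practice the minimal polynomial has degree at most $n$, by Cayley–Hamilton, but the crude bound $m+1 = n^2+1$ is all the argument needs.)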
Why is there an eigenvector?
Suppose $f(A) = O$, and the degree of the polynomial $f$ is as small as possible, say $q$. WLOG, $f$ is monic. The complex polynomial $f(z)$ factors as $(z-a_1)(z-a_2)\cdots(z-a_q)$ [possibly with repeated factors]. That is, $(A-a_1I)(A-a_2I)\cdots(A-a_qI)=O$. Because $q$ is as small as possible, the shorter product $(A-a_2I)\cdots(A-a_qI)$ is not $O$, so there is a vector $v$ with $u := (A-a_2I)\cdots(A-a_qI)v \ne 0$. But then $(A-a_1I)u = 0$, so $a_1$ is an eigenvalue of $A$ with eigenvector $u$.
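The proof is effectively an algorithm. The following NumPy sketch (my own illustration, using the characteristic polynomial as the annihilating polynomial $f$, which is legitimate by Cayley–Hamilton) peels off linear factors one at a time until a factor kills the running vector, exhibiting an eigenpair:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# By Cayley-Hamilton, the characteristic polynomial annihilates A, so
# its roots a_1, ..., a_n can play the role of the a_i in the proof.
roots = np.roots(np.poly(A))

# Apply the factors (A - a*I) one at a time to a generic vector: the
# first factor that (numerically) kills the running vector u exhibits
# u as an eigenvector with eigenvalue a.
u = rng.standard_normal(n) + 1j * rng.standard_normal(n)
eigenpair = None
for a in roots:
    w = (A - a * np.eye(n)) @ u
    if np.linalg.norm(w) <= 1e-6 * np.linalg.norm(u):
        eigenpair = (a, u)
        break
    u = w
```

Since the product of all the factors annihilates every vector, some step in the loop must produce a (near-)zero image, exactly as in the minimality argument above.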