Briefly, holomorphic functions are immensely useful because, on the one hand, they are surprisingly common (since any power series, for example, whose coefficients grow reasonably slowly defines a holomorphic function), and on the other hand one can prove very strong theorems about them. There is a web of results, including Cauchy's integral formula and the identity theorem, which assert that holomorphic functions are astonishingly rigid: given information about a holomorphic function in a very small part of its domain, one can extract information about the function's behavior in other, a priori unrelated, parts of its domain (and this is what allows things like contour integration to work).
For that reason, holomorphic functions are a powerful tool to apply to a problem when they do apply. For example, analytic number theorists frequently construct holomorphic or meromorphic functions which carry number-theoretic information, such as the Riemann zeta function, to prove theorems like the prime number theorem. Since you say you have a background in combinatorics, you might enjoy reading Flajolet and Sedgewick's Analytic Combinatorics, a thorough exposition of (among other things) ways to use complex analysis to provide asymptotics for combinatorial sequences.
Here is a simple example. Let $f_n$ denote the number of ways that $n$ horses can finish a race, allowing ties. It turns out that this sequence has the exponential generating function
$$F(z) = \sum_{n \ge 0} \frac{f_n}{n!} z^n = \frac{1}{2 - e^z}.$$
This function is meromorphic with poles at $z = \log 2 + 2 \pi i k, k \in \mathbb{Z}$, each of which has residue $-\frac{1}{2}$. In fact, it turns out that $F(z)$ admits an infinite partial fraction decomposition
$$F(z) = \frac{1}{4} + \sum_{k \in \mathbb{Z}} \frac{1}{2(\log 2 + 2 \pi i k - z)},$$
where the conditionally convergent sum is taken symmetrically (pairing $k$ with $-k$); the constant $\frac{1}{4}$ affects only the $n = 0$ coefficient, so it plays no role in the asymptotics below.
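As a quick sanity check (my own numerical sketch, not part of the original argument), one can confirm that each of these poles is simple with residue $-\frac{1}{2}$ by evaluating $(z - z_0) F(z)$ just off a pole:

```python
import cmath

def F(z):
    # the generating function F(z) = 1 / (2 - e^z)
    return 1 / (2 - cmath.exp(z))

log2 = cmath.log(2)
eps = 1e-7
# near a simple pole z0, F(z) ~ res / (z - z0), so eps * F(z0 + eps) ~ res
for k in (0, 1, -1, 3):
    z0 = log2 + 2j * cmath.pi * k
    res = eps * F(z0 + eps)
    print(k, res)  # each ≈ -0.5
```

Equivalently, since $z_0$ is a simple zero of $2 - e^z$, the residue is $-1/e^{z_0} = -\frac{1}{2}$.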
Expanding each term on the RHS in a geometric series gives an asymptotic expansion for $\frac{f_n}{n!}$ with leading term $\frac{1}{2 (\log 2)^{n+1}}$. In other words,
$$f_n \sim \frac{n!}{2 (\log 2)^{n+1}}.$$
The pole at $z = \log 2$ dominates the asymptotic expansion: the leading term in the error of the above expression is given by the other poles nearest the origin, which occur at $z = \log 2 \pm 2 \pi i$. Because these poles have nonzero imaginary part, if you plot the error in the above approximation you'll find that it oscillates. It is not so easy to explain why this should be the case without complex analysis.
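To watch the asymptotic emerge numerically, here is a small Python sketch; it uses the recurrence $f_n = \sum_{k=1}^{n} \binom{n}{k} f_{n-k}$ (condition on the set of horses tied for first place), which is standard but not stated above:

```python
from math import comb, factorial, log

def fubini(n_max):
    # f_n = number of outcomes of an n-horse race with ties allowed
    # (ordered set partitions), via f_n = sum_{k=1}^n C(n,k) f_{n-k}
    f = [1]
    for n in range(1, n_max + 1):
        f.append(sum(comb(n, k) * f[n - k] for k in range(1, n + 1)))
    return f

f = fubini(12)  # [1, 1, 3, 13, 75, 541, ...]
for n in range(1, 13):
    approx = factorial(n) / (2 * log(2) ** (n + 1))
    print(n, f[n], f[n] / approx)  # ratio -> 1 very quickly
```

The ratio converges rapidly because the next poles are at distance $|\log 2 \pm 2\pi i| \approx 6.32$ from the origin, much farther than $\log 2 \approx 0.69$.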
A famous example is Hardy and Ramanujan's asymptotic formula for the partition function
$$p(n) \sim \frac{1}{4n \sqrt{3}} e^{ \pi \sqrt{ \frac{2n}{3} } }$$
which is proven using a much more sophisticated version of the above argument.
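To get a feel for how the Hardy–Ramanujan formula performs, here is a sketch that computes $p(n)$ exactly via Euler's pentagonal-number recurrence (my choice for this illustration, not part of their proof) and compares:

```python
from math import exp, pi, sqrt

def partitions(n_max):
    # p(n) via Euler's pentagonal number recurrence:
    # p(n) = sum_{k>=1} (-1)^(k-1) [ p(n - k(3k-1)/2) + p(n - k(3k+1)/2) ]
    p = [1] + [0] * n_max
    for n in range(1, n_max + 1):
        total, k = 0, 1
        while True:
            g1 = k * (3 * k - 1) // 2  # pentagonal numbers
            g2 = k * (3 * k + 1) // 2
            if g1 > n:
                break
            sign = 1 if k % 2 == 1 else -1
            total += sign * p[n - g1]
            if g2 <= n:
                total += sign * p[n - g2]
            k += 1
        p[n] = total
    return p

p = partitions(500)
hr = lambda n: exp(pi * sqrt(2 * n / 3)) / (4 * n * sqrt(3))
print(p[100], hr(100), p[100] / hr(100))
```

The relative error shrinks only roughly like $n^{-1/2}$, which is part of why the full circle-method analysis behind the formula is so delicate.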
But really, there is too much to say about holomorphic functions, so again I suggest that you read a textbook. Besides Needham's book, I also personally enjoyed Stein and Shakarchi, which is very user-friendly and has good applications.
This is a little late, so I'm sure you have come across an answer by now, but I'll write the following nonetheless, as it may help others. First we need to understand what the Cauchy-Riemann equations tell us. If you derive them (the derivation, which you can look up, comes directly from the definition of the complex derivative), you will see that they follow from the assumption that a complex function is differentiable at the point of interest. So if a function is differentiable at a point, it satisfies Cauchy-Riemann there. The contrapositive is then also true: "if we do not satisfy Cauchy-Riemann at a point, then we are not differentiable there." Now on to analyticity: for a function to be analytic at a point $P$ in the complex plane, it must be differentiable in a neighborhood of $P$, so Cauchy-Riemann must be satisfied throughout that neighborhood. This is a stricter condition: a function can be differentiable at a point and still fail to be analytic/holomorphic there, if it is not differentiable throughout a neighborhood.
So now let's analyze your question. You have asked "can a function be analytic if it doesn't satisfy CR?" Let's argue by contradiction: assume a function is analytic at a point $P$ but does not satisfy CR at $P$ (I'm making your question slightly more specific). Analyticity means the function is differentiable in a neighborhood of $P$. But if CR fails at $P$, then the function is not differentiable at $P$ (by the contrapositive above), and since any neighborhood of $P$ contains $P$ itself, the function is not differentiable throughout that neighborhood. This contradicts our assumption of analyticity, so we must abandon it. Thus, if a function does not satisfy CR at a point $P$, it cannot be analytic at $P$.
Now suppose that your question became "can we be analytic at $P$ if we do not satisfy Cauchy-Riemann at some point or set of points in a neighborhood of $P$?" Observe that this is as general as we can make your question! The proof is just the same as above: assume the function is analytic at $P$ but fails CR at some set of points in a neighborhood of $P$. Then it is not differentiable at those points, so it is not differentiable throughout the neighborhood, and we are led to the same contradiction as before: we assumed differentiability throughout the neighborhood, but the failure of CR shows this does not hold, so we must abandon the assumption of analyticity.
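The link between the Cauchy-Riemann equations and differentiability can also be seen numerically: complex differentiability at a point means the difference quotient tends to the same value along every direction of approach. A small Python sketch (the test functions $z^2$ and $\bar{z}$ are my own illustrative choices):

```python
def directional_quotient(f, z, h=1e-6):
    # difference quotient (f(z + d*h) - f(z)) / (d*h) along several
    # directions d; if f is complex-differentiable at z, all values agree
    dirs = [1, 1j, (1 + 1j) / abs(1 + 1j)]
    return [(f(z + d * h) - f(z)) / (d * h) for d in dirs]

holo = lambda z: z * z            # satisfies Cauchy-Riemann
conj = lambda z: z.conjugate()    # violates Cauchy-Riemann everywhere

print(directional_quotient(holo, 1 + 2j))  # all ≈ 2z = 2 + 4j
print(directional_quotient(conj, 1 + 2j))  # direction-dependent: ≈ 1, -1, -1j
```

For $\bar{z}$ the quotient is exactly $\bar{d}/d$, which depends on the direction $d$, so no derivative exists anywhere, exactly as the failure of CR predicts.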
So we can never be analytic at a point $P$ without satisfying the Cauchy-Riemann equations in a neighborhood of $P$. This is the ultimate punch line of the demonstration above. I hope this has made things clearer. It is good practice in mathematics to try to prove the claims you make, or candidate answers to questions you have: it lets you practice what you've learned, and may reinforce your knowledge or teach you something new!
Have a great day.
Consider the function $f(x) = \mathrm{e}^{-1/x^2}$ on the reals. Although the formula is not defined at zero, it clearly has a limit there, and that limit is zero, so extend $f$ by setting $f(0) = 0$. Now look at its derivatives: $$f'(x) = \frac{2}{x^3} f(x)$$ $$f''(x) = \frac{4-6x^2}{x^6} f(x)$$ $$f'''(x) = \frac{8-36x^2+24x^4}{x^9} f(x)$$ and so on, with the exponential overwhelming the rational function as we take limits at zero. We find, in fact, that this limiting process gives us zero for every derivative at zero.
Is this the zero function? No. Therefore its Taylor series at zero, which is identically zero, does not faithfully represent the function.
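Here is a small Python sketch of the limiting process, using the derivative formulas displayed above; it tabulates $f$ and its first three derivatives as $x \to 0$:

```python
from math import exp

def f(x):
    # e^{-1/x^2}, extended by f(0) = 0
    return exp(-1.0 / x**2) if x != 0 else 0.0

# first three derivatives, from the formulas quoted above
def f1(x): return (2 / x**3) * f(x)
def f2(x): return ((4 - 6 * x**2) / x**6) * f(x)
def f3(x): return ((8 - 36 * x**2 + 24 * x**4) / x**9) * f(x)

for x in (0.5, 0.2, 0.1, 0.05):
    print(x, f(x), f1(x), f2(x), f3(x))
# every column tends to 0 as x -> 0, so the Taylor series at 0 is
# identically zero -- yet f(0.5) = exp(-4) ≈ 0.018, so the series
# does not represent f anywhere off the origin.
```

The rational prefactors blow up like $x^{-3n}$, but the exponential factor decays faster than any power, which is exactly the "overwhelming" described above.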
Why? Take a step back, look at the complex plane, and notice the essential singularity of $\mathrm{e}^{-1/z^2}$ at zero: I "cheated" by taking limits at zero. On the real line the limits exist, but in the complex plane they do not (approach zero along the imaginary axis, where $-1/z^2 = +1/y^2$, and the function blows up). In short, the real line can be blind to essential features of functions. (Especially if the person posing functions to investigate is aware of this deficiency.) However, once one has looked at a neighborhood in the complex plane, there is nothing left to know, since there is nowhere for a feature to hide.
(I spent some time trying to clean up what amounts to a random moiré pattern in the center of the argument plot. The best I could manage is still misleading about the violent flailing of the complex argument near the origin: what we should really see is infinitely many thinner and thinner strips as we approach the origin. Of course, the strips are already one pixel wide in the rendered image; if we attempt to thin them further, all we render is a continuous blob of contour lines.)