This is a little late, so I'm sure you have come across an answer by now, but I'll write the following nonetheless, as it may help others. First we need to recognize what the Cauchy-Riemann equations tell us. If you derive them (the derivation, which you can look up, comes straight from the definition of the complex derivative), you will see that they follow from the assumption that the function is differentiable at the point of interest. So if a function is differentiable at a point, it satisfies Cauchy-Riemann there. The contrapositive is therefore also true: if a function does not satisfy Cauchy-Riemann at a point, it is not differentiable at that point. Now on to analyticity: for a function to be analytic at a point P in the complex plane, it must be differentiable in a neighborhood of P, so Cauchy-Riemann must be satisfied throughout that neighborhood. Analyticity is the stricter condition, since a function can be differentiable at a point without being analytic/holomorphic there if it fails to be differentiable in a neighborhood of that point.
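For concreteness, writing $f = u + iv$ as a function of $z = x + iy$ (the standard convention), the Cauchy-Riemann equations are
$$\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}, \qquad \frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x},$$
and analyticity at a point P requires them to hold at every point of some neighborhood of P.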
So now let's analyze your question. You asked, "Can a function be analytic if it doesn't satisfy CR?" Let's prove that it cannot, by contradiction: assume a function is analytic at a point P but does not satisfy CR at P (I'm making your question slightly more specific). Analyticity means the function is differentiable in a neighborhood of P. But if it does not satisfy CR at P, then it is not differentiable at P (by the contrapositive above), and since every neighborhood of P contains P itself, the function fails to be differentiable throughout any neighborhood of P. So we arrive at our contradiction: we assumed differentiability throughout a neighborhood of P but deduced non-differentiability within it, so we must abandon the assumption of analyticity. Thus, if a function does not satisfy CR at a point P, it cannot be analytic at P.
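The whole argument can be compressed into one chain of implications:
$$f \text{ analytic at } P \;\Rightarrow\; f \text{ differentiable on a neighborhood of } P \;\Rightarrow\; f \text{ differentiable at } P \;\Rightarrow\; \text{CR holds at } P,$$
so contrapositively, if CR fails at P, then f is not analytic at P.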
Now suppose your question became, "Can a function be analytic at P if it does not satisfy Cauchy-Riemann at some point, or set of points, of the neighborhood of P on which it is supposed to be differentiable?" Observe that this is about as general as the question can get! The proof is just the same as above: assume the function is analytic at P yet fails CR at some point or set of points in that neighborhood; then it is not differentiable at those points, so it is not differentiable throughout the neighborhood, and we are led to the same contradiction as before. To be specific, it is the contradiction that we started by assuming differentiability throughout the neighborhood but had information (CR not satisfied) showing differentiability fails somewhere within it, so we must abandon the assumption.
So a function can never be analytic at a point P without satisfying the Cauchy-Riemann equations throughout a neighborhood of P. This is the ultimate punch line of the demonstration above. I hope this has made things clearer. It is good practice in mathematics to try to prove the claims you make, or the potential answers to questions you have; it lets you practice what you've learned and perhaps reinforce your knowledge or even learn something new!
Have a great day.
The Extreme Value Theorem sits in the middle of a chain of theorems, where each theorem is proved using the previous one.
LUB Property $\rightarrow$ Monotone bounded convergence $\rightarrow$ Bolzano-Weierstrass $\rightarrow$ EVT $\rightarrow$ IVT $\rightarrow$ Rolle's Theorem $\rightarrow$ MVT, Cauchy's MVT $\rightarrow$ Integral MVT
Each theorem in the chain is useful by itself, but the main use of EVT is to act as a stepping stone in this chain.
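To make "stepping stone" concrete, here is a sketch of how EVT feeds Rolle's Theorem further down the chain. Suppose $f$ is continuous on $[a,b]$, differentiable on $(a,b)$, and $f(a) = f(b)$. By EVT, $f$ attains a maximum and a minimum on $[a,b]$. If both occur at the endpoints, then $f$ is constant and $f'(c) = 0$ for every $c \in (a,b)$; otherwise some extremum occurs at an interior point $c$, and Fermat's theorem on interior extrema gives
$$f'(c) = 0.$$
Without EVT guaranteeing that the extrema exist, the argument never gets off the ground.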
I would contend that what makes complex analytic (holomorphic) functions so special is the structure of the complex numbers themselves. The fact that the complex numbers are essentially $\mathbb{R}^2$ and are also a field is a small miracle. This miracle is at the heart of the special behavior of complex analytic functions.
It is the two-dimensional nature of the plane together with the field multiplication that allows us to see the C-R equations. My favorite way to see the C-R equations is to consider $$f'(z) =\lim_{\Delta z\to 0} \frac{f(z+\Delta z)-f(z)}{\Delta z} $$ Taking in turn $\Delta z =\Delta x$ and $\Delta z =i \Delta y$, the C-R equations just appear. This requires two dimensions for the two approach directions and uses the fact that $\mathbb{C}$ is a field when dividing by $\Delta z$. (Other chain-rule-type proofs of the C-R equations use both of these facts, but are somewhat more subtle about it.)
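Carrying out the two limits explicitly, with $f = u + iv$ and $z = x + iy$: taking $\Delta z = \Delta x$ gives
$$f'(z) = \lim_{\Delta x \to 0} \frac{u(x+\Delta x, y) - u(x,y) + i\left[v(x+\Delta x, y) - v(x,y)\right]}{\Delta x} = \frac{\partial u}{\partial x} + i\,\frac{\partial v}{\partial x},$$
while taking $\Delta z = i\,\Delta y$ gives
$$f'(z) = \lim_{\Delta y \to 0} \frac{u(x, y+\Delta y) - u(x,y) + i\left[v(x, y+\Delta y) - v(x,y)\right]}{i\,\Delta y} = \frac{\partial v}{\partial y} - i\,\frac{\partial u}{\partial y}.$$
Equating real and imaginary parts of the two expressions yields $u_x = v_y$ and $v_x = -u_y$: exactly the C-R equations.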
Surely the Cauchy Integral Formula, which tells us that a holomorphic function is in fact $C^\infty$, deserves mention; it too is a consequence (less directly) of the two-dimensional nature of $\mathbb{C}$ and the field structure. The standard proof uses Cauchy's Integral Theorem, which says that if $f$ is holomorphic on a simply connected domain, then $$\int_\gamma f =0$$ for any sufficiently nice closed curve $\gamma$ in the domain. This itself is seen by applying Green's Theorem (the 2-d real structure again) together with the C-R equations (the field structure).
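To see the Green's Theorem step explicitly (this version of the argument assumes $u, v \in C^1$; Goursat's proof removes that hypothesis), write $f = u + iv$ and $dz = dx + i\,dy$, so that
$$\int_\gamma f\,dz = \int_\gamma (u\,dx - v\,dy) + i\int_\gamma (v\,dx + u\,dy).$$
Applying Green's Theorem to each piece over the region $D$ enclosed by $\gamma$ gives
$$\int_\gamma f\,dz = \iint_D \left(-\frac{\partial v}{\partial x} - \frac{\partial u}{\partial y}\right) dA \;+\; i\iint_D \left(\frac{\partial u}{\partial x} - \frac{\partial v}{\partial y}\right) dA,$$
and both integrands vanish by the C-R equations.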
There are many, many more special and cool behaviors of holomorphic functions, but all of them seem to rest on the miracle of the two structures of the complex numbers. Just to advertise a few: the Identity Principle (being connected in 2-d is easier than being connected in 1-d), zero-counting theorems like Rouché's (line integrals sometimes have cool answers), the Open Mapping Theorem (zero-counting theorems are super cool), Liouville's Theorem (the Cauchy Integral Theorem again), on and on and on.