Mandelbrot set perturbation theory: when do I use it?

complex-dynamics, fractals, perturbation-theory

I have read the post on Perturbation of Mandelbrot set fractal. I will also be referring to the PDF by K.I. Martin on this topic.

My question concerns the precision laid out in the Martin paper. I understand that in the equation $\Delta_n=A_n\delta+B_n\delta^2+C_n\delta^3+o(\delta^4)$, Martin writes $o(\delta^4)$ for the remaining terms, since they are small and contribute little to the result.

So, we can use the iterative approximation given in the paper for all the $Y_n$ of the points surrounding the initial point $X_0$. He states that "the approximation should be good as long as the $\delta^3$ term has a magnitude significantly smaller than the $\delta^2$ term."

First Question:
But how much smaller is "significantly smaller"? If I want to generate an image where $\Delta_x=\Delta_y=0.1$, would it be okay to just start at the center and use this method for the surrounding points? Or does it depend on the center point?

Second Question:
Since I am going to be generating images with these results, is it possible that I will need to generate them with "seeds"? In other words, I could start with some center point $X$, then calculate all of the surrounding points, expanding outward from $X$ until $\delta$ is too large (i.e. the $\delta^3$ term no longer has a magnitude significantly smaller than the $\delta^2$ term). Then I would go to another seed $X$, follow the same process, and so on until the entire image is generated. Will that ever be necessary?

Note: Since I am going to be implementing this, I could just go ahead and try things out, but I like to understand all of the theory before I start.

Best Answer

First answer: "the $\delta^3$ term remains significantly smaller than the $\delta^2$ term" means that $\left|C_n \delta^3\right| \ll \left|B_n \delta^2\right|$; a few orders of magnitude (a factor of $10^3$ or so) is usually a good amount. What the final $n$ will be for this stage depends on both $X_0$ and the largest $\delta$ among the pixels in the image.
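To make that concrete: demanding a factor of $10^3$, say, turns the condition into a bound on the pixel offset, since (for $\delta \ne 0$, $C_n \ne 0$) $\left|C_n \delta^3\right| \le 10^{-3} \left|B_n \delta^2\right| \iff |\delta| \le 10^{-3} \left|B_n\right| / \left|C_n\right|$. So one way to decide when to stop skipping is to compare the largest (corner) pixel offset against $\left|B_n\right|/\left|C_n\right|$ at each iteration.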

Often it will be obvious how many per-pixel iterations it is safe to "skip" with this series approximation technique: $\left|C_n\right|$ will suddenly increase. But sometimes you get more subtle image distortion first, so a common technique is to run regular perturbed iterations for some "probe points" in the image and stop the series approximation iterations once they deviate too much from the series prediction, as in the sketch below.
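A minimal sketch of that probe check in Python, assuming machine-precision complex deltas; the function name and the default tolerance are illustrative choices, not values from the paper:

```python
# Compare the series prediction at iteration n against a fully perturbed
# probe orbit; the skip stage stops once the relative error grows too large.
def series_still_accurate(A, B, C, probe_delta_n, probe_delta_0, tol=1e-3):
    predicted = A * probe_delta_0 + B * probe_delta_0**2 + C * probe_delta_0**3
    return abs(predicted - probe_delta_n) <= tol * abs(probe_delta_n)
```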

The final $n$ in the series approximation step is usually significantly smaller than the minimum iteration count for the first escaping pixel in the view, so you follow it with perturbed iterations for the remaining count.

Second answer: I'm not sure what you mean by "seed". Usually it works like this (a code sketch of the whole pipeline follows the list):

  1. pick a reference point $X_0$ and some probe points $\Delta_0$ in the image
  2. while the series approximation is accurate, determined by the worst relative error among all the probe points satisfying $|e| \ll 1$
    1. step the high precision reference one iteration using $$X_{n+1} = X_n^2 + X_0$$
    2. step the series approximation coefficients one iteration using $$\begin{aligned}A_{n+1} &= 2 X_n A_n + 1 \\ B_{n+1} &= 2 X_n B_n + A_n^2 \\ C_{n+1} &= 2 X_n C_n + 2 A_n B_n \end{aligned}$$
    3. step the probe points one iteration using $$\Delta_{n+1} = 2 X_n \Delta_n + \Delta_n^2 + \Delta_0$$
  3. initialize all the image points from their $\Delta_0$ with the last good series approximation coefficients using $$\Delta_n = A_n \Delta_0 + B_n \Delta_0^2 + C_n \Delta_0^3$$
  4. step the reference $X_n$ until it escapes or maximum iteration count is reached using $$X_{n+1} = X_n^2 + X_0$$
  5. step all image points $\Delta_n$ using the stored reference iterations $X_n$ via $$\Delta_{n+1} = 2 X_n \Delta_n + \Delta_n^2 + \Delta_0$$ If you detect "glitches" in the perturbed iterations by $|X_n + \Delta_n| \ll |X_n|$, set those pixels aside
  6. if there are glitched pixels remaining (or the reference escaped too early), repeat this whole process for the glitched pixels (pick a different $X_0$)
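Here is a minimal, single-reference sketch of steps 1-6 in Python, assuming mpmath for the high-precision reference orbit and machine-precision complex numbers for the deltas. The thresholds `SERIES_TOL` and `GLITCH_TOL`, the helper names, and the bailout radius are all illustrative choices, not values from Martin's paper:

```python
import mpmath

mpmath.mp.prec = 1000      # bits of precision for the reference orbit

MAX_ITER   = 100000
SERIES_TOL = 1e-3          # series-vs-probe relative error tolerance (arbitrary)
GLITCH_TOL = 1e-4          # Pauldelbrot glitch threshold (arbitrary)

def perturb_step(Xn, dn, d0):
    # Delta_{n+1} = 2 X_n Delta_n + Delta_n^2 + Delta_0
    return 2 * Xn * dn + dn * dn + d0

def render(X0, pixel_d0s, probe_d0s):
    # X0: high-precision reference point (mpmath.mpc);
    # pixel_d0s, probe_d0s: machine-precision complex offsets from X0.
    Xhi = X0
    orbit = [complex(X0)]            # rounded X_n, enough for delta arithmetic
    A, B, C = 1 + 0j, 0j, 0j         # Delta_0 = delta, so A_0 = 1, B_0 = C_0 = 0
    probes = list(probe_d0s)
    skip = 0
    # Steps 1-2: advance reference, coefficients and probes together.
    for n in range(MAX_ITER):
        Xn = orbit[n]
        A1 = 2 * Xn * A + 1
        B1 = 2 * Xn * B + A * A
        C1 = 2 * Xn * C + 2 * A * B
        probes1 = [perturb_step(Xn, d, d0) for d, d0 in zip(probes, probe_d0s)]
        # Worst relative error between series prediction and perturbed probes.
        err = max(abs(A1 * d0 + B1 * d0**2 + C1 * d0**3 - d) / max(abs(d), 1e-300)
                  for d, d0 in zip(probes1, probe_d0s))
        if err > SERIES_TOL:
            break                    # keep the last good A, B, C at iteration n
        A, B, C = A1, B1, C1
        probes = probes1
        Xhi = Xhi * Xhi + X0         # high-precision reference step
        orbit.append(complex(Xhi))
        skip = n + 1
    # Step 3: initialize every pixel from the last good coefficients.
    deltas = [A * d0 + B * d0**2 + C * d0**3 for d0 in pixel_d0s]
    # Step 4: extend the reference orbit until escape or MAX_ITER.
    while len(orbit) <= MAX_ITER and abs(orbit[-1]) < 2:
        Xhi = Xhi * Xhi + X0
        orbit.append(complex(Xhi))
    # Step 5: cheap perturbed iterations per pixel, with glitch detection.
    results, glitched = [], []
    for d0, d in zip(pixel_d0s, deltas):
        n, ok = skip, True
        while n < len(orbit) - 1 and abs(orbit[n] + d) < 2:
            if abs(orbit[n] + d) < GLITCH_TOL * abs(orbit[n]):  # Pauldelbrot
                ok = False
                break
            d = perturb_step(orbit[n], d, d0)
            n += 1
        (results if ok else glitched).append((d0, n))
    # Step 6 (not shown): re-run the glitched pixels with a different X0.
    return results, glitched
```

Note that only the reference orbit touches mpmath; everything per-pixel stays in hardware floats, which is where the speedup quantified in the asymptotics section below comes from.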

"Glitches" occur when the dynamics of a pixel are too different from the dynamics of the reference. They can (usually) be detected by Pauldelbrot's criterion1 $\left|z_n+\delta_n\right| << \left|z_n\right|$ and corrected by picking a better reference.

This is all a bit ad hoc. It seems to work well, but rigorous numerical-analysis proofs are still lacking as far as I know. There are two arbitrary thresholds (the series-vs-probe-point relative error tolerance, and the glitch detection cutoff), which is very unsatisfactory.

Asymptotics

Suppose $M$ iterations are skipped by series approximation, and $N$ iterations are needed in total. The image size is $W \times H$. Suppose the cost of a high precision operation is $K$ times the cost of a low precision operation. Then the cost of the traditional method using high precision for all pixels is $O(K \times N \times W \times H)$. With perturbation, the cost is $O(K \times N + N \times W \times H)$. With perturbation and series approximation, the cost is $O(K \times N + (N - M) \times W \times H)$.
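To put illustrative numbers on this (my own, not from any source): with $W = H = 10^3$, $N = 10^4$, $K = 10^3$ and $M = 0.99\,N$, the traditional cost is of order $10^{13}$, plain perturbation of order $10^{10}$, and perturbation with series approximation of order $10^8$. The per-pixel term shrinks from $N \times W \times H = 10^{10}$ to $(N - M) \times W \times H = 10^8$, while the $K \times N = 10^7$ reference cost stays negligible.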
