How is the iterated exponential $\exp^{[\circ x]}(y)$, $y\neq 1$, defined in terms of tetration?

Tags: power-towers, tetration

Background:

The tetration
\begin{equation}
^xe = \exp^{[\circ x]}(1) = \underbrace{e^{e^{\cdot^{\cdot^e}}}}_{x \text{ times}}
\end{equation}

is well defined when $x \in \mathbb{N}$. The extension of tetration to real heights $x \in \mathbb{R}$ can also be made sense of (though it is not unique). For instance $^xe \approx 1+x$ for $-1 < x \leq 0$, and this piece can be propagated by iterating $\exp$ and $\log$ to interpret $^xe$ for all $x \in \mathbb{R}$, as in the sketch below.
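
For concreteness, this recipe can be written as a short Pari/GP sketch (the function name tet is my own choice, not standard):

    \\ linear interpolation of tetration base e: tet(x) = 1+x on -1 < x <= 0,
    \\ extended everywhere else by the rule tet(x+1) = exp(tet(x))
    tet(x) = if(x > 0, exp(tet(x-1)), if(x > -1, 1+x, log(tet(x+1))));
    tet(0.5)  \\ exp(tet(-0.5)) = exp(0.5) = 1.6487...
    \\ (for x <= -2 the recursion hits log(0), so real values stop there)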


Motivation:

I am wondering how $\exp^{[\circ x]}(y)$ is defined for general $y$. Again, if $x \in \mathbb{N}$, this is just
\begin{equation}
\exp^{[\circ x]}(y) = \underbrace{e^{\cdot^{\cdot^{e^y}}}}_{x \text{ times}}.
\end{equation}
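
A one-line Pari/GP sketch of this integer-height case (expn is a name chosen here for illustration):

    \\ integer-height iterated exponential: apply exp() to y exactly n times
    expn(n, y) = if(n == 0, y, exp(expn(n-1, y)));
    expn(2, 0.5)  \\ e^(e^0.5) = 5.2003...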

Also if $y = \exp^{[\circ n]}(1)$ for some $n \in \mathbb{Z}$, then
\begin{equation}
\exp^{[\circ x]}(y) = \exp^{[\circ x+n]}(1),
\end{equation}

which I can interpret for any $x' = x+n \in \mathbb{R}$.


Question:

In contrast, when $y \neq \exp^{[\circ n]}(1)$ for every $n \in \mathbb{Z}$, how is $\exp^{[\circ x]}(y)$ defined for $x \in \mathbb{R}$? I assume some initial conditions must be set, like $\exp^{[\circ 0]}(y) = \operatorname{id}(y) = y$ and $\exp^{[\circ 1]}(y) = e^y$, but I do not know how to interpolate on the interval $0 < x < 1$.


An attempt:

Is it possible to start from an arbitrary interpolation, say

\begin{equation}
\exp^{[\circ x]}(y) = (1-x)y + x e^y, \quad 0 \leq x \leq 1
\end{equation}

and recreate $\exp^{[\circ x]}(y)$ for all $x \in \mathbb{R}$ by iteration, as in the sketch below? Is there a unique interpolant? Or is the interpolant non-unique, depending instead on the regularity we impose?
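
If it helps, here is a minimal Pari/GP sketch of this trial interpolation, extended by the functional equation $\exp^{[\circ (x+1)]}(y) = e^{\exp^{[\circ x]}(y)}$ (the name expit is invented here):

    \\ trial interpolation on 0 <= x <= 1, extended to other real x by the
    \\ functional equation; log() may leave the reals for strongly negative x
    expit(x, y) = if(x > 1, exp(expit(x-1, y)),
                  if(x < 0, log(expit(x+1, y)),
                  (1-x)*y + x*exp(y)));
    expit(1.5, 1)  \\ exp(expit(0.5, 1)) = exp((1+e)/2) = 6.418...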

Best Answer

If you assume $y=\exp^{[\circ x_1]}(1)$, then you can determine $z=\exp^{[\circ x_2]}(y) = \exp^{[\circ x_2]}(\exp^{[\circ x_1]}(1))=\exp^{[\circ x_1+x_2]}(1)$.

To do such "arithmetic" with the iteration numbers, one has to find $x_1$ given $y$. Many call the required function slog() or superlog() (but see a reservation of mine [1]), such that $x_1 = \text{slog}(y) - \text{slog}(1)$, where (by convention, but surely optimally) $\text{slog}(1)=0$ is fixed once and for all.

To find $x_1$ when $y$ is not on the orbit $0,1,e,e^e,\dots$, the slog() function must reproduce your previously chosen interpolation method on $-1 \le x < 0$, as in the sketch below.
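
A minimal sketch of this bookkeeping, assuming the linear interpolation from the question (slog_lin, tet and expheight are names of mine, chosen so as not to clash with the slog() defined in the appendix):

    \\ height function matching the linear interpolation: slog_lin(y) = y - 1
    \\ on 0 < y <= 1, extended by the rule slog_lin(exp(y)) = slog_lin(y) + 1
    slog_lin(y) = if(y > 1, 1 + slog_lin(log(y)), if(y > 0, y - 1, slog_lin(exp(y)) - 1));
    tet(x) = if(x > 0, exp(tet(x-1)), if(x > -1, 1+x, log(tet(x+1))));
    \\ iteration arithmetic: exp^[x2](y) = tet(x2 + slog_lin(y)), with slog_lin(1) = 0
    expheight(x2, y) = tet(x2 + slog_lin(y));
    expheight(1, 0.5)  \\ = exp(0.5) = 1.6487..., one application of exp, as expected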

On interpolation methods: Your method of interpolation ("linear") is a very simple one; it has been discussed casually even here on MSE (I may find some links later), and it is also mentioned in Wikipedia, where it is attributed to an author named Hooshmand.

A somewhat more sophisticated method for the slog() was proposed by P. Walker in the 1990s and was independently rediscovered by A. Robbins, a founding member of the tetration forum. The idea is to construct a power series by extending the linear approximation (as mentioned by you), encoded as a 2x2 Carleman matrix, towards polynomial interpolations of higher order: first increase the matrix to 3x3 to get a quadratic interpolation, then to 4x4 for a cubic one, and so on as far as is possible (and numerically meaningful), assuming that increasing the size of the power series/matrix towards infinity leads to convergence (see this in the tetration forum, with heavy numerical optimizations by Jay D. Fox).
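
In formulas (my paraphrase of the idea, not the authors' notation): one postulates a truncated power series for the slog() and imposes its defining functional equation,

\begin{equation}
\text{slog}(e^y) = \text{slog}(y) + 1, \qquad \text{slog}(y) \approx c_0 + c_1 y + \cdots + c_n y^n ;
\end{equation}

comparing the coefficients of $y^0, \dots, y^{n-1}$ on both sides then gives a finite linear system for the $c_k$ (with $c_0 = \text{slog}(0) = \text{slog}(1) - 1 = -1$), which is exactly the truncated Carleman system inverted in the appendix code.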

Another, even more sophisticated method for constructing a power-series solution is the better-known Schröder mechanism. Unfortunately(?), for the exponential with base $e$ this provides only complex interpolations. An improvement due to H. Kneser, starting from the Schröder mechanism but coming back to a real-to-real solution, exists but is nearly intractable. Despite the degree of abstraction in Kneser's exposition, it has recently been implemented by members of the tetration forum in Pari/GP for public use.
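
For orientation, the standard Schröder setup (a textbook formulation, not specific to this answer): at a fixed point $L$ of $\exp$, i.e. $e^L = L$ (necessarily complex for base $e$), with multiplier $\lambda = \exp'(L) = L$, the Schröder function $\sigma$ linearizes the iteration:

\begin{equation}
\sigma(e^z) = \lambda\,\sigma(z), \qquad \exp^{[\circ t]}(z) = \sigma^{-1}\!\left(\lambda^{t}\,\sigma(z)\right),
\end{equation}

which makes plain why the raw construction yields complex values on the reals; Kneser's construction repairs exactly this.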

Additional thoughts (inserted)
- Concerning your last (added) question on "uniqueness", i.e. whether some interpolation method is preferable over the others: you might like this page, which shows the effect of a "good choice" of an interpolation value vs. a "bad choice". It is an Excel sheet with clickable tabs at the bottom. The first three pictures give an idea of the effect on the curve of $\exp^{[\circ 0.5]}(x)$ when the initially assumed interpolation varies; the next three pages show the effect even more drastically. The last pages are working material containing data and are not meant for the visitor. Another small essay gives images of the different interpolations of various methods, but with exponential base $4$ and complex initial values $z_0$; see here (pdf).
- Concerning your second idea for an interpolation function: while we can assume that the functional extension of your interpolation from $-1 \le x \le 0$ is continuous at the boundaries (meaning the values at $x=-1 \pm \epsilon$ and $x = 0 \pm \epsilon$ match up), the next question is whether it has a kink there. This can be checked by testing whether the derivatives of the first few orders are also continuous at those boundary values. I did not check that property of your second proposal (in "attempt"); a quick numeric probe is sketched below. But note that this question of smoothness at the ends of the unit interval was the guiding idea for A. Robbins when developing his ansatz for a power series which should (ideally) be infinitely differentiable.
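
As an illustration of such a check (a numeric probe, not a proof; expit as in the sketch under "An attempt"), compare the one-sided $x$-derivatives of the second proposal at the boundary $x = 0$:

    \\ one-sided difference quotients in x at the boundary x = 0, fixed y = 1
    expit(x, y) = if(x > 1, exp(expit(x-1, y)),
                  if(x < 0, log(expit(x+1, y)),
                  (1-x)*y + x*exp(y)));
    h = 1e-6;
    print((expit(h, 1) - expit(0, 1)) / h);   \\ ~ e - 1     = 1.7182...
    print((expit(0, 1) - expit(-h, 1)) / h);  \\ ~ (e - 1)/e = 0.6321...

The two one-sided derivatives disagree whenever $y \neq 0$, so this particular interpolation already has a kink at $x = 0$ (and, by the functional equation, at every integer height).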


Technicalities(?): The Walker/Robbins matrix method is relatively simple and gives consistent results to 10 or 20 digits of precision, and even more through the work of J. D. Fox. I don't know whether you want technicalities here, but see a Pari/GP solution at the end [2].
The Schröder mechanism uses conjugacy at fixed points and can, moreover, easily be formulated in matrix notation; but again I don't assume you want such technicalities here (it has been used and basically explained here on MSE, as has the linear interpolation; perhaps I can add links later).


Appendix
[1] Short excursus: I don't like mathematical terms built by combining some mathematical root with "super", because that "super" is usable only once and does not fit into any hierarchy like "tetration", "pentation", "hexation", ...: to which of these is "superlog()" meaningfully the inverse operation?
I'd propose the name "height()" instead, meaning the extraction of the required iteration "height" from any operation that is basically defined by iteration. An advantage is that "height()" is not in use elsewhere and even alludes to the "power tower" picture, which is fairly common for iterated exponentiation.
So instead of "slog()" I myself have gotten used to writing "hgh(y)", more precisely "$x_1 = \text{hgh}(y) - \text{hgh}(1)$", and I propose to encourage that usage too.

[2] On the Walker/Robbins "slog()"
This uses so-called "Carleman matrices" of appropriate size (ideally of infinite size) to get a high-order polynomial for the basic interpolation, mapping $0 \le y \le 1$ to $-1 \le x \le 0$.

    {slog_init(lsize=3) = my(tmp);
       size = 1 + lsize; \\ define size of matrices globally for our functions
          \\ make a finite-size Carleman matrix for exp(x);
          \\ coefficients of the truncated power series sit in the columns!
       CarlM = matrix(size,size,r,c, (c-1)^(r-1)/(r-1)!)*1.0;
       tmp = CarlM - matid(size);     \\ subtract the identity matrix
       tmp = tmp[1..size-1, 2..size]; \\ before inverting, the first column
                                      \\ and the last row must be discarded
       tmp = tmp^-1;
          \\ coefficients of the truncated power series (= polynomial) are now
          \\ in the first column; the default value slog(0) = -1 must be
          \\ prepended
       c_SLOG = concat([-1], tmp[,1]~);
          \\ coefficients now in vector c_SLOG
       return(slog('y) + O('y^size)); \\ display the explicit interpolating polynomial
       }


    \\ define now slog(y) as a function evaluating the polynomial given by the
    \\ coefficients c_SLOG; mytrunc() flushes spurious values like 1e-200 to zero
    mytrunc(x) = if(abs(x) < 1e-100, 0, x);
    {slog(y) = my(w);
       w = sum(k=0, size-1, y^k * c_SLOG[1+k]);
       if(type(w) != "t_POL", w = mytrunc(w)); \\ keep symbolic input unchanged
       return(w); }

Now that we have the matrix initialization procedure and the evaluating function, we can run some examples.

    slog_init(3);
    -1 + 0.923076923077*y + 0.230769230769*y^2 - 0.153846153846*y^3 + O(y^4)
    slog(0.200000000000)=-0.807384615385   with pol.order 3
    slog(0.500000000000)=-0.500000000000   with pol.order 3
    slog(0.800000000000)=-0.192615384615   with pol.order 3

    slog_init(4);
    -1 + 0.923076923077*y + 0.246153846154*y^2 - 0.184615384615*y^3 + 0.0153846153846*y^4 + O(y^5)
    slog(0.200000000000)=-0.806990769231   with pol.order 4
    slog(0.500000000000)=-0.499038461538   with pol.order 4
    slog(0.800000000000)=-0.192221538462   with pol.order 4

    slog_init(5);
    -1 + 0.917535115541*y + 0.244676030811*y^2 - 0.123470774807*y^3 - 0.0747621205256*y^4 + 0.0360217489805*y^5 + O(y^6)
    slog(0.200000000000)=-0.807801794291   with pol.order 5
    slog(0.500000000000)=-0.499044234255   with pol.order 5
    slog(0.800000000000)=-0.191415242411   with pol.order 5

    slog_init(8);
    -1 + 0.916442956621*y + 0.248504958942*y^2 - 0.113958591126*y^3 - 0.0931973763153*y^4 + 0.0201541151938*y^5 + 0.0406141346690*y^6 - 0.0216815606532*y^7 + 0.00312136266791*y^8 + O(y^9)
    slog(0.200000000000)=-0.807823215761   with pol.order 8
    slog(0.500000000000)=-0.498614724280   with pol.order 8
    slog(0.800000000000)=-0.191095327861   with pol.order 8

    slog_init(16);
    -1 + 0.915958619892*y + 0.249218563581*y^2 - 0.110611477420*y^3 - 0.0935821868946*y^4 + 0.0105881557811*y^5 + 0.0356292063641*y^6 + 0.00548414456262*y^7 - 0.0125884075400*y^8 - 0.00517134216135*y^9 + 0.00423745685851*y^10 + 0.00284562060086*y^11 - 0.00170857390955*y^12 - 0.00120243298370*y^13 + 0.00125737586909*y^14 - 0.000400783376306*y^15 + 0.0000460607752082*y^16 + O(y^17)
    slog(0.200000000000)=-0.807868452510   with pol.order 16
    slog(0.500000000000)=-0.498515184258   with pol.order 16
    slog(0.800000000000)=-0.190985606855   with pol.order 16

One can see how the first few coefficients of the polynomials seem to converge to "final" values, which allowed Walker/Robbins to assume that the procedure also yields an accurate power series as the size increases towards infinity. However, it seems the Walker/Robbins, Schröder and Kneser slogs differ numerically (Schröder even gives complex values).
