Background:
The tetration
\begin{equation}
^xe = \exp^{[\circ x]}(1) = \underbrace{e^{e^{\cdot^{\cdot^e}}}}_{x \text{ times}}
\end{equation}
is well defined when $x$ is a non-negative integer. The extension of tetration to real heights $x \in \mathbb{R}$ can also be given a meaning (though not a unique one). For instance, one may set $^xe \approx 1+x$ for $-1 < x \leq 0$ and iterate this to interpret $^xe$ for all real $x$.
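For concreteness, this piecewise definition and its iteration can be sketched in a few lines of Python (the function name `tet_linear` is my own):

```python
import math

def tet_linear(x):
    """Tetration ^x e under the linear interpolation:
    ^x e = 1 + x on the base interval -1 < x <= 0,
    extended upward by ^x e = exp(^{x-1} e)
    and downward by ^x e = log(^{x+1} e)."""
    if x > 0:
        return math.exp(tet_linear(x - 1))
    if x <= -1:
        return math.log(tet_linear(x + 1))
    return 1.0 + x

# integer heights reproduce the power tower:
# tet_linear(0) = 1, tet_linear(1) = e, tet_linear(2) = e^e
```

On $(0,1]$ this evaluates to $e^x$, on $(1,2]$ to $e^{e^{x-1}}$, and so on.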
Motivation:
I am wondering how $\exp^{[\circ x]}(y)$ is defined. Again, if $x \in \mathbb{N}$, this is just
\begin{equation}
\exp^{[\circ x]}(y) = \underbrace{e^{\cdot^{\cdot^{e^y}}}}_{x \text{ times}}.
\end{equation}
Also if $y = \exp^{[\circ n]}(1)$ for some $n \in \mathbb{Z}$, then
\begin{equation}
\exp^{[\circ x]}(y) = \exp^{[\circ x+n]}(1),
\end{equation}
which I can interpret for any $x' = x+n \in \mathbb{R}$.
Question:
In contrast, when $y \neq \exp^{[\circ n]}(1)$ for any $n \in \mathbb{Z}$, how is $\exp^{[\circ x]}(y)$ defined for $x \in \mathbb{R}$? I assume some initial conditions must be set, like $\exp^{[\circ 0]}(y) = \operatorname{id}(y) = y$ and $\exp^{[\circ 1]}(y) = e^y$, but I do not know how to interpolate on the interval $0 < x < 1$.
An attempt:
Is it possible to define any arbitrary interpolation? Like
\begin{equation}
\exp^{[\circ x]}(y) = (1-x)y + x e^y, \quad 0 \leq x \leq 1
\end{equation}
and recreate $\exp^{[\circ x]}(y)$ for all $x \in \mathbb{R}$ by iteration? Is there a unique interpolant? Or are interpolants non-unique, depending on the regularity conditions we impose?
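As a quick numerical experiment (Python; names are mine), the proposed interpolant can be extended by the functional equation $\exp^{[\circ (x+1)]}(y) = e^{\exp^{[\circ x]}(y)}$, and one can then check that an arbitrary choice like this generally fails the composition law $\exp^{[\circ x_1]}(\exp^{[\circ x_2]}(y)) = \exp^{[\circ (x_1+x_2)]}(y)$ at non-integer heights, which is one way regularity considerations enter:

```python
import math

def f(x, y):
    """The proposed interpolant exp^[x](y) = (1-x)*y + x*e^y on 0 <= x <= 1,
    extended by the functional equation exp^[x+1](y) = e^(exp^[x](y)).
    (The downward extension via log assumes its argument is positive.)"""
    if x > 1:
        return math.exp(f(x - 1, y))
    if x < 0:
        return math.log(f(x + 1, y))
    return (1 - x) * y + x * math.exp(y)

# endpoints agree with the defining conditions:
print(f(0, 0.3))            # = 0.3 = id(y)
print(f(1, 0.3))            # = e^0.3

# but the semigroup law fails in between:
print(f(0.5, f(0.5, 0.0)))  # about 1.0744
print(f(1.0, 0.0))          # = 1.0, not equal to the line above
```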
Best Answer
If you assume $y=\exp^{[\circ x_1]}(1)$, then you can determine $z=\exp^{[\circ x_2]}(y) = \exp^{[\circ x_2]}(\exp^{[\circ x_1]}(1))=\exp^{[\circ x_1+x_2]}(1)$.
To do such "arithmetic" with the iteration numbers one has to find $x_1$ given $y$. Many authors (but see a reservation of mine in footnote 1) call such a function slog() or superlog(), so that $x_1 = \text{slog}(y) - \text{slog}(1)$, where (by convention, but surely optimally) $\text{slog}(1)=0$ is fixed once and for all.
To find $x_1$ when $y$ is not on the orbit $0,1,e,e^e,\dots$, the slog() function must reproduce the interpolation method you have chosen beforehand for $-1 \le x \lt 0$.
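With the linear interpolation on the base interval, such an slog() is easy to sketch (Python; the function name is mine): $\text{slog}(y) = y - 1$ for $0 < y \le 1$, extended in both directions by the functional equation $\text{slog}(e^y) = \text{slog}(y) + 1$.

```python
import math

def slog_linear(y):
    """Height function (slog) matching the linear interpolation:
    slog(y) = y - 1 on the base interval 0 < y <= 1,
    extended by the functional equation slog(e^y) = slog(y) + 1."""
    if y > 1:
        return slog_linear(math.log(y)) + 1
    if y <= 0:
        return slog_linear(math.exp(y)) - 1
    return y - 1.0

# slog_linear(1) = 0, slog_linear(e) = 1, slog_linear(0) = -1
```

Given $x_1 = \text{slog}(y)$, one then evaluates $\exp^{[\circ x]}(y)$ as $^{\,x_1+x}e$.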
On interpolation methods: Your ("linear") method of interpolation is a very simple one; it has been discussed casually even here on MSE (I may find some links later) and is also mentioned in Wikipedia, where it is associated with the author Hooshmand.
A somewhat more sophisticated method for the "slog()" was proposed by P. Walker in the 1990s and was independently rediscovered by A. Robbins, a founding member of the tetration forum. It uses the idea of constructing a power series by extending the linear approximation (as mentioned by you), encoded in a 2x2 Carleman matrix, towards polynomial interpolations of higher order: increasing the matrix to 3x3 gives a quadratic polynomial interpolation, 4x4 gives a cubic one, and so on, as far as is possible (and numerically meaningful), under the assumption that letting the size of the power series/matrix grow towards infinity leads to convergence (see the tetration forum, with heavy numerical optimizations by Jay D. Fox).
Another, even more sophisticated, method for constructing a power-series-based solution is the better-known Schröder "mechanism". Unfortunately(?) this provides only complex interpolations for exponentiation to base $e$. A (nearly intractable) improvement, starting from the Schröder mechanism but coming back to a real-to-real solution, was proposed by H. Kneser. In spite of the degree of abstraction of Kneser's explanations, it has recently been implemented by some members of the tetration forum in the software Pari/GP for public use.
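For illustration, the fixed point at which the Schröder mechanism develops its conjugacy can be computed directly; for base $e$ it is complex, which is why the resulting interpolation comes out complex-valued. A minimal sketch (Python; function name and starting value are my choices):

```python
import cmath

def exp_fixed_point(z=0.3 + 1.3j, steps=50):
    """Newton iteration for a complex fixed point L = e^L of exp."""
    for _ in range(steps):
        z = z - (cmath.exp(z) - z) / (cmath.exp(z) - 1)
    return z

L = exp_fixed_point()
# L is approximately 0.31813 + 1.33724j; the multiplier at the
# fixed point is exp(L), which equals L itself
```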
Additional thoughts (inserted)
- Concerning your last (added) question on "uniqueness", i.e. whether some interpolation method is preferable over another, you might like this page, which shows the effect of a "good choice" vs a "bad choice" of interpolation value. It is an Excel sheet with clickable tabs at the bottom. The first three pictures give an idea of how the curve of $\exp^{[\circ 0.5]}(x)$ changes as the initially assumed interpolation varies; the next three pages show the effect even more drastically. The last pages are working material, contain data, and are not meant for a visitor of the pages. Another small essay gives images of the different interpolations of various methods, but with exponential base $4$ and complex initial values $z_0$. See here (pdf)
- Concerning your second idea for an interpolation function: while we can assume that the functional extension of your interpolation from $-1 \le x \le 0$ is continuous at the boundaries (meaning the values of $y$ at $x=-1 \pm \epsilon$ and $x = 0 \pm \epsilon$ exist and match), the next question is whether the curve has a kink there. This can be checked by testing whether the derivatives of the first few orders are also continuous at those boundary values. I did not check that property for your second proposal (in "An attempt"). But note that this question of smoothness at the bounds of the unit interval was the guiding idea for A. Robbins when he developed his ansatz for finding a power series which should ideally be infinitely differentiable.
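Such a check is easy to run numerically. A sketch (Python; helper names are mine), using the linear interpolation as the stand-in method: for base $e$ the one-sided first derivatives at the boundary $x = 1$ both come out close to $e$, while the one-sided second derivatives come out close to $e$ from the left but $2e$ from the right, suggesting this method is $C^1$ but not $C^2$ at integer heights.

```python
import math

def tet(x):
    # linear interpolation: ^x e = 1 + x on -1 < x <= 0, iterated elsewhere
    if x > 0:
        return math.exp(tet(x - 1))
    if x <= -1:
        return math.log(tet(x + 1))
    return 1.0 + x

h = 1e-5
# one-sided first derivatives at x = 1
d_left  = (tet(1) - tet(1 - h)) / h
d_right = (tet(1 + h) - tet(1)) / h
# one-sided second derivatives at x = 1
dd_left  = (tet(1) - 2*tet(1 - h) + tet(1 - 2*h)) / h**2
dd_right = (tet(1 + 2*h) - 2*tet(1 + h) + tet(1)) / h**2
```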
Technicalities(?) The Walker/Robbins matrix method is relatively simple and gives results consistent to $10$ or $20$ digits of precision, and even more through the work of J. D. Fox. I don't know whether you want technicalities here, but see a Pari/GP solution at the end (footnote 2).
The Schröder mechanism uses conjugacy at fixed points and, besides that, can easily be formulated in matrix notation; but again I don't assume you want such technicalities here (it has been used and basically explained here on MSE, as has the linear interpolation; perhaps I can add links later).
Appendix
1 A short excursus: I dislike mathematical terms built from some mathematical root combined with "super", because that "super" is usable only once and does not fit into any hierarchy like "tetration", "pentation", "hexation", ...: to which of these is "superlog()" meaningfully the inverse operation?
I'd propose to use the name "height()", meaning the extraction of the required iteration "height" from any operation which is basically defined by iteration. An advantage is that "height()" is not in use elsewhere, and it even alludes to the "power tower" picture which is common for iterated exponentiation.
So instead of "slog()" I myself have got used to writing "hgh(y)", and more precisely "$x_1 = \text{hgh}(y) - \text{hgh}(1)$", and I propose to encourage that use too.
2 On the Walker/Robbins "slog()"
This uses so-called "Carleman matrices" of appropriate size (ideally of infinite size) to get a high-order polynomial for the basic interpolation from $0 \le y \le 1$ to $-1 \le x \le 0$.
Once we have the matrix-initialization procedure and the function call, we can work through some examples.
One can see how the first few coefficients of the polynomials seem to converge to some "final" values, which allowed Walker/Robbins to assume that this would also give an accurate power series if the size were increased towards infinity. However, it seems the Walker/Robbins, Schröder, and Kneser slogs differ numerically (the Schröder solution even gives complex values).
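The coefficient-matching idea can be sketched in Python (this is my own reconstruction of a Walker/Robbins-type ansatz, not the forum's Pari/GP code; all names are mine). Writing $\text{slog}(x) \approx a_0 + \sum_{k=1}^n a_k x^k$ and imposing $\text{slog}(e^x) = \text{slog}(x) + 1$ coefficient-wise through degree $n-1$, the identity $[x^j]\,(e^x)^k = k^j/j!$ turns the condition into a finite linear system:

```python
import numpy as np
from math import factorial, e

def robbins_slog_coeffs(n=16):
    """Truncated slog ansatz: solve for a_1..a_n in
    s(x) = a0 + sum a_k x^k with s(e^x) = s(x) + 1,
    matching the coefficients of x^0 .. x^{n-1}."""
    M = np.zeros((n, n))
    rhs = np.zeros(n)
    rhs[0] = 1.0                              # x^0 coefficient: sum a_k = 1
    for j in range(n):
        for k in range(1, n + 1):
            M[j, k - 1] = k**j / factorial(j) - (1.0 if j == k else 0.0)
    a = np.linalg.solve(M, rhs)
    return np.concatenate(([-1.0], a))        # a0 = -1 gives slog(0) = -1, slog(1) = 0

def slog_poly(y, coeffs):
    return sum(ck * y**k for k, ck in enumerate(coeffs))

c = robbins_slog_coeffs()
# internal consistency: slog(1) is close to 0, and
# slog(e^x) - slog(x) is close to 1 for small x
```

For $n = 2$ this system reproduces exactly the linear interpolation $s(x) = x - 1$; increasing $n$ gives the quadratic, cubic, ... refinements described above. Since the slog power series is believed to have radius of convergence $|L| \approx 1.374$ (the distance to the complex fixed points of $\exp$), the polynomial should only be used near the base interval, with other heights reached via $\text{slog}(y) = \text{slog}(\ln y) + 1$.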