My personal opinion is that the exponential is not naturally regarded as the next step in the progression from addition to multiplication, so there's no reason to expect it to share properties with the other two.
Notice what happens if you demand that all of your quantities have units. Addition is an operation you do to two quantities with the same units: for example, you can add two distances to get another distance. Multiplication is an operation you do to two quantities with units $a$ and $b$ to get a quantity with units $ab$; for example, you can multiply two distances to get an area.
However, you cannot exponentiate two dimensionful quantities. In computing $a^b$, the exponent $b$ needs to be dimensionless, or else there is no sensible way to assign a value to the result. This tells us that $a$ and $b$ are being treated very differently, so there is no reason to expect that switching them is a physically meaningful operation.
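One way to see this concretely is through the series expansion of the exponential:

$$ a^b = e^{b \ln a} = \sum_{n=0}^{\infty} \frac{(b \ln a)^n}{n!} = 1 + b\ln a + \frac{(b\ln a)^2}{2} + \cdots $$

If $b$ carried units, the summands would all carry different units and could not be added, so only a dimensionless $b$ can sit in an exponent.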
As I've discussed on math.SE before, the exponential has many generalizations in mathematics, and in most of them the base and the exponent are very different objects: in fact, in most of them the base is just $e$.
When I studied the various known matrices of combinatorial numbers, I also looked at the following idea: what if we consider functions whose output is a whole set of values, not just one number? For instance, the concepts of sine and cosine take on a special charm if we look not only at $f(x)=\sin(x)$, $g(x)=\cos(x)$, but at $2\times2$ matrices containing $\cos()$ and $\sin()$, so that the input and the output of a matrix multiplication each carry two parameters. This helps a lot in understanding algebra with complex numbers, for instance.
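The sine/cosine remark can be illustrated with plain $2\times2$ rotation matrices (a Python sketch, not from the original answer): the product $R(\alpha)R(\beta)$ equals $R(\alpha+\beta)$, which encodes exactly the angle-addition formulas and mirrors multiplication of the complex numbers $e^{i\alpha}$.

```python
import math

def R(a):
    """2x2 rotation matrix [[cos a, -sin a], [sin a, cos a]]."""
    return [[math.cos(a), -math.sin(a)],
            [math.sin(a),  math.cos(a)]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

a, b = 0.3, 1.1
P = matmul(R(a), R(b))
Q = R(a + b)
# R(a) * R(b) == R(a+b): the matrix product reproduces the angle-addition formulas
assert all(abs(P[i][j] - Q[i][j]) < 1e-12 for i in range(2) for j in range(2))
```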
After that I thought: what if we look at matrices which take not only a function $f(x)=a_0 + a_1x + a_2x^2 + a_3x^3 + \ldots$ involving all powers of $x$ as input, but also spit out all powers of $f(x)$ in the same manner? Then, for instance,
$$ V(x) \cdot P = V(x+1) $$
where $V(x)$ denotes the vector $V(x)=[1,x,x^2,x^3,x^4,\ldots]$ and $P$ is the upper triangular Pascal matrix. Of course $$(V(x) \cdot P )\cdot P = V(x+2)$$ and it is easy to see that
$$V(x) \cdot P^h = V(x+h)$$ holds, first for integral powers of $P$, which any programmer can implement - and if you download Pari/GP you can simply use its matrix language.
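This integer case can be checked in a few lines (a Python sketch with 0-based indices, so $P[r][c]=\binom{c}{r}$); because $P$ is upper triangular, the identity $V(x)\cdot P = V(x+1)$ is exact even for the $n\times n$ truncation.

```python
import math

n = 8
# upper triangular Pascal matrix, 0-based: P[r][c] = C(c, r)
P = [[math.comb(c, r) for c in range(n)] for r in range(n)]

def V(x, dim=n):
    return [x**k for k in range(dim)]

x = 1.5
VP = [sum(V(x)[r] * P[r][c] for r in range(n)) for c in range(n)]
# binomial theorem: sum_r C(c,r) x^r = (x+1)^c, so V(x) . P = V(x+1) exactly
assert all(abs(VP[c] - (x + 1)**c) < 1e-9 for c in range(n))
```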
It takes a certain ingenuity to find out how to define fractional powers of $P$, which make a fractional version of addition possible...
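One closed form (an observation of mine, not necessarily the route the answer has in mind): since $P$ is the Carleman matrix of $x \mapsto x+1$, its $h$-th power is the Carleman matrix of $x \mapsto x+h$, with 0-based entries $\binom{c}{r}h^{c-r}$, and this makes sense for any real $h$.

```python
import math

n = 8

def Ppow(h):
    """n x n truncation of P^h: the Carleman matrix of x -> x + h."""
    return [[math.comb(c, r) * h**(c - r) if c >= r else 0.0
             for c in range(n)] for r in range(n)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# "half addition": P^(1/2) composed with itself recovers P
H = matmul(Ppow(0.5), Ppow(0.5))
P1 = Ppow(1.0)
assert all(abs(H[i][j] - P1[i][j]) < 1e-9 for i in range(n) for j in range(n))

# and V(x) . P^h = V(x+h) holds for fractional h as well
x, h = 2.0, 0.5
Ph = Ppow(h)
VPh = [sum(x**r * Ph[r][c] for r in range(n)) for c in range(n)]
assert all(abs(VPh[c] - (x + h)**c) < 1e-9 for c in range(n))
```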
After that one finds that a very well-known combinatorial matrix $B$ performs
$$ V(x) \cdot B = V(\exp(x))$$
and logically
$$ V(x) \cdot B^h = V(\exp^{\circ h}(x))$$
for integer $h$ - which amounts to implementing tetration for integer iteration heights $h$.
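A small double-precision check of the basic identity (a Python sketch; the original works in Pari/GP at high precision): the Carleman matrix of $x\mapsto b^x$ has 0-based entries $B[r][c] = (c\ln b)^r/r!$, and the early entries of $V(x)\cdot B$ already approximate $V(b^x)$ well.

```python
import math

n = 24
b = math.sqrt(2)
bl = math.log(b)

# Carleman matrix of x -> b^x in 0-based indexing: B[r][c] = (bl*c)^r / r!
B = [[(bl * c)**r / math.factorial(r) for c in range(n)] for r in range(n)]

x = 1.0
VB = [sum(x**r * B[r][c] for r in range(n)) for c in range(n)]

# column c sums the truncated series for exp(x*c*bl) = (b^x)^c,
# so VB approximates V(b^x); early entries are the most accurate
bx = b**x
assert abs(VB[1] - bx) < 1e-9
assert abs(VB[4] - bx**4) < 1e-9
```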
To find the fractional-$h$ power of $B$ - which is your question - is not so simple, and using matrices in practice means truncating them to finite size.
Still, meaningful approximations can be made: truncated versions of $B$ can be diagonalized, and general powers of $B$ can then be found by simply taking the corresponding powers of each scalar entry on the diagonal. In this way one can introduce fractional powers of $B$ into the formula
$$ V_{32}(x) \cdot B_{32\times32}^h \approx V_{32}(\exp^{\circ h}(x))$$
(here for a vector and matrix of size $32$ and $32\times32$, for instance), and this implements the fractional iteration of the exponential, a.k.a. tetration.
The matrices $P$ and $B$ that I talk about here are known as "Carleman matrices", and this whole concept can be applied to the fractional iteration of many functions for which you have a power series. (Well, there are convergence issues, many further complications, non-uniqueness and what not - but that's not the focus of this answer.)
Using "mateigen" in Pari/GP for the diagonalization allows the following Pari/GP code:
```
n = 32
V(x, dim=n) = vector(dim, r, x^(r-1))  \\ define the function generating V(x)
b  = sqrt(2)
bl = log(b)
B = matrix(n,n, r,c, bl^(r-1)*(c-1)^(r-1)/(r-1)!)  \\ define the Carleman matrix
                                                   \\ for V(x) * B = V(b^x)
Y = V(1) * B   \\ Y now approximates V(b^1); the first two entries
               \\ give the best approximation to the expected values

\\ now diagonalize
M = mateigen(B)   \\ needs high precision (say 800 or 1600 digits) for all operations
W = M^-1
D = W * B * M                           \\ nearly diagonal
D = matdiagonal(vector(n, r, D[r,r]))   \\ keep only the diagonal (the eigenvalues)

\\ after this we can do B^h = M * D^h * W,
\\ where D^h just raises each diagonal entry d_(k,k) to the power h
dpow(A, h=1) = {for(r=1, #A, A[r,r] = A[r,r]^h); return(A)}  \\ powers of D
BPow(h) = M * dpow(D, h) * W   \\ fractional power of B

\\ after this we can do
Y = V(1) * BPow(0.5)
\\ and Y[2] holds a nice approximation to the half-iterate, i.e. b^^0.5
\\ where b = sqrt(2)
```
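For readers without Pari/GP, here is a rough double-precision translation in Python/NumPy with a small matrix ($n=8$); in double precision this is far less accurate than the high-precision GP run (hence the loose tolerances), but the mechanism is the same.

```python
import math
import numpy as np

n = 8
b = math.sqrt(2)
bl = math.log(b)

# truncated Carleman matrix of x -> b^x (0-based indices)
B = np.array([[(bl * c)**r / math.factorial(r) for c in range(n)]
              for r in range(n)])

def V(x):
    return np.array([float(x)**k for k in range(n)])

# diagonalize: B = M D M^{-1}, then B^h = M D^h M^{-1}
evals, M = np.linalg.eig(B)
W = np.linalg.inv(M)

def Bpow(h):
    # eigenvalues may come back as a complex array; the imaginary parts
    # of the reassembled matrix are numerical noise, so keep the real part
    return (M @ np.diag(evals.astype(complex)**h) @ W).real

y = (V(1.0) @ Bpow(0.5))[1]   # approximates the half-iterate, i.e. b^^0.5
z = (V(1.0) @ Bpow(0.5) @ Bpow(0.5))[1]
assert 1.0 < y < b            # half-iterate lies between x=1 and b^1
assert abs(z - b) < 1e-5      # two half-steps recover one full step b^1
```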
If you want to experiment with this, I recommend using dimension $n=16$ first (with a much poorer approximation), but already then you'll need default(realprecision,200) or so. To run mateigen on a matrix $B$ of size $32\times32$, one must compute $b$, $bl$, $B$ and so on with as much as $1600$ or $2000$ internal digits of precision, and the mateigen call can need several minutes to complete. I tried this also with size $n=64$, and that was really a full afternoon's computation... but it is a nice exercise anyway.
Remark: I've discussed this a bit in the tetration forum and have written an article which compares a couple of such naive approaches to fractional exponential towers. There I called the above method the "polynomial approach", because using finite matrices, without taking care of compatibility with the theoretically infinite-sized matrices and their fractional powers, means the approximate solutions only involve polynomials. See the index and the link to the essay.
There is (at least) one option - depending, however, on whether a solution for the problem of "fractional iteration of the logarithm" is available (cf. "tetration"; note that in the tetration forum there is an exchange on various approaches to this problem).
Assume for the moment that a version of that fractional iteration is available, and let us write $ \log_b^{\circ h}(x) $ for the $h$-times iterated logarithm to some given base $b$, allowing $h$ to be fractional as well.
Assume also that this is well defined for the interval $h=-1 \cdots +1$ (for $h=-1$ the "iterated logarithm" actually means exponentiation). Then $$ f_b(x,y,0) = \exp_b^{\circ 0} ( \log_b^{\circ 0} (x)+\log_b^{\circ 0} (y)) = x+y \\ f_b(x,y,1) = \exp_b^{\circ 1} ( \log_b^{\circ 1} (x)+\log_b^{\circ 1} (y)) = x \cdot y $$ and for $0<h<1$ we then have some well-defined operation in between: $$ f_b(x,y,h) = \exp_b^{\circ h} ( \log_b^{\circ h} (x)+\log_b^{\circ h} (y)) = x \{\circ _h \} y $$ where the circle operation symbol with index $h$ denotes the interpolated operation between "+" (for $h=0$) and "*" (for $h=1$).
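For integer $h \ge 0$ this scheme is easy to verify directly (a Python sketch of mine, base $b=\sqrt2$ as used further down; the fractional case is the hard part, since it needs fractional iterates of $\log$ and $\exp$):

```python
import math

def f(x, y, h, b=math.sqrt(2)):
    """Interpolated operation for integer h >= 0:
    h-fold log base b, ordinary addition, h-fold exp base b."""
    for _ in range(h):
        x, y = math.log(x, b), math.log(y, b)
    s = x + y
    for _ in range(h):
        s = b**s
    return s

assert f(3, 5, 0) == 8                       # h=0: plain addition
assert abs(f(3, 5, 1) - 15) < 1e-9           # h=1: multiplication
assert abs(f(3, 5, 2) - f(5, 3, 2)) < 1e-9   # h=2 is still commutative
```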
Now let's see whether our assumption of the existence of such fractional iterations is justified and can be realized:
(Note: this is just one ad-hoc approach; it does not, for instance, provide a solution comparable with the one that might be expected when we approach the problem via interpolation of the Ackermann function, as indicated by Daniel Geisler.)
Addition, multiplication and "half multiplication" tables for $x,y=1 \ldots 10$. I used the base $b=t^{1/t} =\sqrt2 \approx 1.414$ with $t_l=2$ resp. $t_u=4$ for the logarithms and exponentiations. For the half-exponentials/half-logarithms the formulae rely on the lower fixpoint $t_l=2$ for $x \le 3$ and on the upper fixpoint $t_u=4$ for $x>3$, and use the well-known Schröder mechanism for the half-iterate.
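The Schröder mechanism at the lower fixpoint $t_l=2$ can be sketched numerically (a naive, double-precision Python sketch of mine using the limit definition of the Schröder function at a single fixpoint; the tables above use a more careful two-fixpoint scheme):

```python
import math

b = math.sqrt(2)          # base b = 2^(1/2); lower fixpoint t = 2 since b^2 = 2
t = 2.0
lam = math.log(b) * b**t  # multiplier f'(t) = ln(2) ~ 0.693, attracting

def exp_b_iter(x, h, n=30):
    """Fractional iterate f^{o h}(x) of f(z) = b^z via the Schroeder function
    at the fixpoint t: sigma(x) = lim (f^n(x) - t)/lam^n, and
    f^{o h} = sigma^{-1}(lam^h * sigma(.)).  Valid for x between the fixpoints."""
    z = x
    for _ in range(n):            # forward iteration toward t
        z = b**z
    sigma = (z - t) / lam**n      # Schroeder coordinate of x
    w = t + lam**n * (lam**h * sigma)
    for _ in range(n):            # pull back with f^{-1} = log_b
        w = math.log(w, b)
    return w

y = exp_b_iter(1.0, 0.5)          # half-iterate of b^x at x = 1
assert 1.0 < y < b                # lies between x and b^x
assert abs(exp_b_iter(y, 0.5) - b) < 1e-3   # half twice ~ one full step
```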
[update]: In a comment, MphLee provided a link to Wikipedia, where this "hyperoperation" scheme is mentioned as due to A. Bennett (but is not generalized there to fractional orders / fractional iteration heights of $\log()$ and $\exp()$).
addition $h=0$ ($\text{"+"}=\circ _0$)
"half multiplication" $h=0.5$ ($\circ _{0.5}$) with base $b=\sqrt2$ using Schröder-mechanism
multiplication $h=1$ ($\text{"*"}=\circ _{1}$)
halfway between multiplication and commutative(!) exponentiation, $h=1.5$ ($\text{"??"}=\circ _{1.5}$)
commutative(!) exponentiation, $h=2$ ($\text{"??"}=\circ _{2}$)
(Remark: because the double logarithm of the number $1$ would be negative infinity, I replaced $1$ by the approximation $1+10^{-20}$.)
We can even define an operation with negative half index:
and the $h=-1$ operation:
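Unfolding the definition for $h=-1$ (where $\log_b^{\circ -1}=\exp_b$) even gives a closed form:

$$ f_b(x,y,-1) = \log_b\left(b^x + b^y\right) $$

a commutative operation one level below addition.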
And to see that different definitions of the fractional iterate of the exponential lead to different multiplication tables, I show here the "half multiplication" taken with base $b=2$ and implemented via something I called "polynomial tetration" (which is based on the eigendecomposition of the truncated Carleman matrices, and seems to approximate the Kneser solution as the truncation size grows):
[update]: Appendix.
In the article by A. Bennett (1915) cited in Wikipedia on the interpolation to fractional orders (there $n$, here $h$), Bennett misses the possibility of taking another base $b$ for the iterated logarithm and exponentiation, with $b \ne e$ and in fact $1<b<e^{1/e}$, where we can iterate infinitely often and also have a "regular" method for interpolation to fractional orders/iterates using E. Schröder's method, which I have used here. See the screenshot of a paragraph from A. Bennett's short paper: