Let $\beta_n$ denote the flag $h$-vector (as defined in EC1, Section
3.13) of the partition lattice $\Pi_n$ (EC1, Example 3.10.4). Then
$$ \mathrm{mag}_{n,{n-1\choose 2}-j} = \sum_S \beta_n(S), $$
where $S$ ranges over all subsets of $\lbrace 1,2,\dots,n-2\rbrace$ whose
elements sum to $j$. An explicit formula for $\beta_n(S)$ is given by
$$ \beta_n(S) = \sum_{T\subseteq S} (-1)^{|S-T|}
\alpha_n(T), $$
where if the elements of $T$ are $t_1<\cdots < t_k$, then
$$ \alpha_n(T) = S(n,n-t_1)S(n-t_1,n-t_2)
S(n-t_2,n-t_3)\cdots S(n-t_{k-1},n-t_k). $$
Here $S(m,j)$ denotes a Stirling number of the second kind.
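As a sanity check on these formulas, here is a short Python sketch (the names `stirling2`, `alpha`, `beta` are mine) computing $\beta_n(S)$ by the inclusion–exclusion formula above. Summing $\beta_n(S)$ over all $S\subseteq\lbrace 1,\dots,n-2\rbrace$ must give $\alpha_n(\lbrace 1,\dots,n-2\rbrace)$, the number of maximal chains of $\Pi_n$, which is $n!\,(n-1)!/2^{n-1}$.

```python
from itertools import chain, combinations

def stirling2(m, j):
    """Stirling number of the second kind S(m, j), by the usual recurrence."""
    if j == m:
        return 1
    if j == 0 or j > m:
        return 0
    return j * stirling2(m - 1, j) + stirling2(m - 1, j - 1)

def alpha(n, T):
    """alpha_n(T) = S(n, n-t1) S(n-t1, n-t2) ... for T = {t1 < t2 < ...}."""
    prod, prev = 1, 0
    for t in sorted(T):
        prod *= stirling2(n - prev, n - t)
        prev = t
    return prod

def beta(n, S):
    """beta_n(S) by inclusion-exclusion over subsets T of S."""
    S = tuple(sorted(S))
    return sum((-1) ** (len(S) - len(T)) * alpha(n, T)
               for r in range(len(S) + 1) for T in combinations(S, r))

n = 5
subsets = chain.from_iterable(combinations(range(1, n - 1), r)
                              for r in range(n - 1))
print(sum(beta(n, S) for S in subsets))
# equals the number of maximal chains of Pi_5: 5!*4!/2**4 = 180
```

The total collapses by Möbius inversion: every $\alpha_n(T)$ with $T$ a proper subset cancels, leaving only $\alpha_n(\lbrace 1,\dots,n-2\rbrace) = S(n,n-1)S(n-1,n-2)\cdots S(3,2)$.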
Addendum. A combinatorial description of the mag numbers is
somewhat complicated. Consider all ways to start with the $n$ sets
$\lbrace 1 \rbrace,\dots, \lbrace n \rbrace$. At each step we take two
of our sets and replace them by their union. After $n-1$ steps we will
have the single set $\lbrace 1,2,\dots,n \rbrace$. An example for
$n=6$ is (writing a set like $\lbrace 2,3,5\rbrace$ as 235)
1-2-3-4-5-6, 1-2-36-4-5, 14-36-2-5, 14-356-2, 14356-2, 123456. At the
$i$th step suppose we take the union of two sets $S$ and $T$. Let
$a_i$ be the least integer $j$ such that $j$ belongs to one of the
sets $S$ or $T$, and some number smaller than $j$ belongs to the other
set. For the example above we get $(a_1,\dots,a_5)=(6,4,5,3,2)$. If
$\nu$ denotes this merging process, then let $f(\nu) = \sum i$,
summed over all $i$ for which $a_i>a_{i+1}$. For the above example,
$f(\nu) = 1+3+4=8$. (The number $f(\nu)$ is called the major index
of the sequence $(a_1,\dots,a_{n-1})$.) Then
$\mathrm{mag}_{n,{n-1\choose 2}-j}$ is the number
of merging processes $\nu$ for which $f(\nu)=j$. This might look
completely contrived to the uninitiated, but it is very natural within
the theory of flag $h$-vectors.
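The description above can be checked by brute force for small $n$: the sketch below (all function names are mine) enumerates every merging process, computes the major index $f(\nu)$ of $(a_1,\dots,a_{n-1})$, and compares the resulting distribution with the flag $h$-vector formula from the first part of the answer.

```python
from itertools import combinations
from collections import Counter

def stirling2(m, j):
    if j == m:
        return 1
    if j == 0 or j > m:
        return 0
    return j * stirling2(m - 1, j) + stirling2(m - 1, j - 1)

def alpha(n, T):
    prod, prev = 1, 0
    for t in sorted(T):
        prod *= stirling2(n - prev, n - t)
        prev = t
    return prod

def beta(n, S):
    S = tuple(sorted(S))
    return sum((-1) ** (len(S) - len(T)) * alpha(n, T)
               for r in range(len(S) + 1) for T in combinations(S, r))

def merge_stats(n):
    """Distribution of the major index f(nu) over all merging processes."""
    dist = Counter()
    def rec(sets, a):
        if len(sets) == 1:
            dist[sum(i + 1 for i in range(len(a) - 1) if a[i] > a[i + 1])] += 1
            return
        for S, T in combinations(sets, 2):
            # a_i: least j in S or T with a smaller element in the *other* set
            cand = [j for j in S if min(T) < j] + [j for j in T if min(S) < j]
            rec(sets - {S, T} | {S | T}, a + [min(cand)])
    rec(frozenset(frozenset([i]) for i in range(1, n + 1)), [])
    return dist

n = 4
by_major = merge_stats(n)
by_beta = Counter()
for r in range(n - 1):
    for S in combinations(range(1, n - 1), r):
        by_beta[sum(S)] += beta(n, S)
print(by_major == by_beta)  # both count mag_{n, C(n-1,2)-j}, so: True
```

For $n=3$ the three merging processes give sequences $(2,3)$, $(3,2)$, $(3,2)$, hence distribution $\lbrace 0\colon 1,\ 1\colon 2\rbrace$, matching $\beta_3(\emptyset)=1$ and $\beta_3(\lbrace 1\rbrace)=S(3,2)-1=2$.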
This may not be a full answer, but it is a good suggestion of how to show (1) and (2) (and perhaps an actual solution), and a good start on (3). Consider the Newton series formula for tetration. This series converges for all $\Re(z) > -2$, and is very similar to your formula:
$$^za = \sum_{n=0}^\infty \sum_{k=0}^n (-1)^{n-k} \binom{z}{n}\binom{n}{k}(^k a)$$
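As a quick numerical sanity check (function names are mine): at integer $z$ the binomial $\binom{z}{n}$ vanishes for $n > z$, so the Newton series terminates and must reproduce $^z a$ exactly, while at fractional $z$ the truncated double sum gives the interpolated value. The inner alternating sums cancel badly in floating point, so the cutoff is kept small.

```python
from math import comb, sqrt

def tet(a, k):
    """Integer tetration ^k a: a tower of k copies of a (with ^0 a = 1)."""
    x = 1.0
    for _ in range(k):
        x = a ** x
    return x

def binom(z, n):
    """Generalized binomial coefficient (z choose n) for real z."""
    prod = 1.0
    for i in range(n):
        prod *= (z - i) / (i + 1)
    return prod

def newton_tet(a, z, N=15):
    """Partial sum of the Newton series for ^z a.

    The inner sum is the n-th forward difference of k -> ^k a at 0;
    N stays small because the alternating sums lose precision fast.
    """
    total = 0.0
    for n in range(N + 1):
        inner = sum((-1) ** (n - k) * comb(n, k) * tet(a, k)
                    for k in range(n + 1))
        total += binom(z, n) * inner
    return total

a = sqrt(2)
print(newton_tet(a, 3))    # at integer z the series terminates: equals ^3 a
print(tet(a, 3))
print(newton_tet(a, 0.5))  # an interpolated "half-tetration" of sqrt(2)
```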
(The OP claims this expression diverges; an alternative expression is
$$\frac{1}{\Gamma(1-z)}\left(\sum_{n=0}^\infty (^{n+1}a)\frac{(-1)^n}{n!(n+1-z)} + \int_1^\infty \Big(\sum_{n=0}^\infty (^{n+1}a) \frac{(-x)^n}{n!}\Big)x^{-z}\,dx\right),$$
to which the rest of this discussion still applies.)
Now, since this solution is bounded in the half plane $\Re(z) > 0$, it satisfies a uniqueness criterion, and quite a good one at that: it is the only solution of the functional equation that is so bounded. In fact, if a solution $F$ exists with $F(z) = O(e^{\rho|\Re(z)| + \tau|\Im(z)|})$ for some $\tau < \pi/2$, then $F$ is still the above solution.
What you have presented is a q-analog of the Newton series I just put above. Sadly, I cannot find a rigorous paper detailing these types of series; I don't remember where I have seen them, but I have seen them. This suggests to me that you are really just interpolating $(^na)$ in the same manner the Newton series does, except using q-analogs. It is necessary, though, that $q$ be fixed to the value you chose (explanation below).
Consider the rather beautiful fact that your solution is bounded in the right half plane.
Observe that if
$$[z]_q = \frac{1-q^z}{1-q},$$
then $[z]_q$ is periodic in $z$, and
$${z \brack n}_q = \frac{[z]_q!}{[n]_q![z-n]_q!} = \frac{[z]_q[z-1]_q\cdots[z-n+1]_q}{[1]_q[2]_q\cdots[n]_q}.$$
Each of these functions has a period, namely $2\pi i/\log(q)$.
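The periodicity is easy to verify numerically; a small sketch (`qnum` and `qbinom` are my names):

```python
import cmath

q = 0.5  # any fixed 0 < q < 1

def qnum(z):
    """q-analog [z]_q = (1 - q^z) / (1 - q), for complex z."""
    return (1 - cmath.exp(z * cmath.log(q))) / (1 - q)

def qbinom(z, n):
    """Gaussian binomial [z choose n]_q via the product formula."""
    num = 1.0 + 0j
    den = 1.0 + 0j
    for i in range(n):
        num *= qnum(z - i)      # [z]_q [z-1]_q ... [z-n+1]_q
        den *= qnum(i + 1)      # [1]_q [2]_q ... [n]_q
    return num / den

period = 2j * cmath.pi / cmath.log(q)
z = 1.3 + 0.7j
print(abs(qnum(z + period) - qnum(z)))            # ~ 0: [z]_q is periodic
print(abs(qbinom(z + period, 3) - qbinom(z, 3)))  # ~ 0: so is the q-binomial
```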
Therefore your final expansion has an imaginary period, and it tends to $-W(-\log(a))/\log(a)$ (the fixed point of $x \mapsto a^x$) as $\Re(z) \to \infty$; thus, it is bounded in the right half plane. From this it follows that your function is the standard tetration function (if it converges).
So $t$ (your interpolating function) is bounded. The first obvious fact is that $t(n) = (^n a)$. Next, consider the wondrous power of a little-known identity theorem for functions bounded in the right half plane: if $F(z)$ and $G(z)$ are bounded in the right half plane and $F\Big{|}_{\mathbb{N}} = G\Big{|}_{\mathbb{N}}$, then $F = G$. This follows from Ramanujan's master theorem.
I'll explain the identity theorem for a second, and then explain how this applies to your case. If $F$ and $G$ are bounded, then surely
$$f(x) = \sum_{n=0}^\infty F(n+1)\frac{(-x)^n}{n!}$$
$$g(x) = \sum_{n=0}^\infty G(n+1)\frac{(-x)^n}{n!}$$
are both entire functions, and by Ramanujan both satisfy
$$\int_0^\infty f(x)x^{z-1}\,dx = \Gamma(z)F(1-z)$$
$$\int_0^\infty g(x)x^{z-1}\,dx = \Gamma(z)G(1-z)$$
But $F$ and $G$ agree on the naturals, so $f = g$; the two Mellin transforms then coincide, and therefore $F = G$.
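One can see the mechanism in miniature for $F(z)=\lambda^z$ with $0<\lambda<1$: then $f(x)=\sum_{n\ge0} F(n+1)(-x)^n/n! = \lambda e^{-\lambda x}$, and the Mellin transform should come out as $\Gamma(z)F(1-z)$. A crude numerical check (all names mine):

```python
from math import gamma, exp

lam = 0.5          # F(z) = lam**z is bounded for Re(z) > 0

def f(x):
    # sum_{n>=0} F(n+1) (-x)^n / n!  =  lam * exp(-lam * x)
    return lam * exp(-lam * x)

def mellin(z, upper=60.0, steps=200_000):
    """Crude midpoint rule for the Mellin transform of f on (0, upper)."""
    h = upper / steps
    return sum(f((i + 0.5) * h) * ((i + 0.5) * h) ** (z - 1) * h
               for i in range(steps))

z = 1.5
lhs = mellin(z)
rhs = gamma(z) * lam ** (1 - z)   # Gamma(z) * F(1 - z)
print(lhs, rhs)                    # should agree to several digits
```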
Look at $t(z+1)$ and $a^{t(z)}$: both of these functions are bounded and agree on the naturals (since $t(n+1) = a^{t(n)}$), therefore they are equal everywhere.
Now, as to the fact that this function is completely monotone: on the tetration forum, Sheldon made a great point. I'm not sure of this fact, but I'm betting it's true: if $0 < \lambda < 1$, then $f(z) = \lambda^z$ for $\Re(z) > 0$ is the only completely monotone function on $\mathbb{R}^+$ such that $\lambda f(z) = f(z+1)$. Any other solution would have to be $f(z+\theta(z))$ for $\theta$ a 1-periodic function, but this would most assuredly destroy the completely monotone structure.
What's great about the standard solution (the one that is periodic, the one that is the Schröder iteration, the one that is bounded) is that it can be expanded in an exponential series for $\Re(z) > 0$. This is essentially just a Fourier series.
$$^za = F(z) = \sum_{k=0}^\infty a_k \lambda^{kz}$$
$$F'(x) \sim C\lambda^x$$
Since another solution is just $F(z+\theta(z))$ for $\theta$ periodic, we can most likely form a contradiction by using our knowledge of the uniqueness of the exponential function under these conditions.
All in all, I mostly just had to weigh in my two cents, but I think all the conjectures you put forward are true and that your solution is in fact the 'standard' solution, which is: the periodic one, the bounded one, the Schröder one, and, hopefully, the completely monotone one. Hopefully the rough proof I gave of (1) and (2) is good enough for you.
Best Answer
There is an implementation around (in Pari/GP, from the tetration forum) which claims to implement Kneser's method. It is a bit difficult to handle, so I'll show here a simpler (essentially polynomial) version of a tetration function which seems to approximate that Kneser function when the polynomial's order is increased.
With a polynomial of order 32 I got for your problem a value of about $a=1.2329216$
Added: I got the mentioned Kneser implementation to work; it gives a Taylor series in $h$ (where the spurious imaginary parts of the coefficients are deleted) whose coefficient at $h^1$ is the first derivative with respect to $h$, which I've approximated by my method. The method described below gives a comparable result when the same parameters are inserted. [end addition]
This is how I got my approximate solution: take `pc` as the vector of the first 32 coefficients of the power series for $f(x)=2^x$, and build from it the matrix `F` for the function $f(x)$. The vector `pc`, and thus `F`, should have infinite size, but of course we must cut it to something we can handle and which gives not too much crap, at least in a certain range of data. Powers of `F` give iterates of $f(x)$; fractional powers give fractional iterates - of course, practically this is always only approximated. This can be achieved when we diagonalize `F = M * D * W` (where I write `W` for `M^-1` for simplicity) and compute $F^h = M\cdot D^h\cdot W$. The approximation then runs such that $z_h$ is (approximately) the $h$'th iterate from $z_0$.
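The diagonalization recipe can be sketched in Python with numpy (all names here are mine, and I am assuming the matrix `F` is the truncated Carleman-style matrix whose row $j$ holds the power-series coefficients of $f(x)^j = 2^{jx}$). Double precision cannot support anything near the 32x32 / prec=800 setup described below, so the truncation is kept small:

```python
import numpy as np
from math import log, factorial

LN2 = log(2.0)
N = 8  # small truncation; the answer reports needing ~800 digits at 32x32

# Row j holds the coefficients of f(x)^j = 2^(j*x) = e^(j*x*ln 2),
# i.e. F[j, k] = (j*ln 2)^k / k!.
F = np.array([[(j * LN2) ** k / factorial(k) for k in range(N)]
              for j in range(N)])

def iterate(x0, h):
    """Approximate the h-th iterate of f(x)=2^x at x0 via F^h = M * D^h * W."""
    eigvals, M = np.linalg.eig(F)              # F = M D W with W = M^-1
    W = np.linalg.inv(M)
    Fh = M @ np.diag(eigvals.astype(complex) ** h) @ W
    v = np.array([x0 ** k for k in range(N)])  # the vector (1, x0, x0^2, ...)
    return (Fh @ v)[1].real                    # entry 1 carries the iterate

print(iterate(0.1, 1))    # close to 2**0.1
print(iterate(0.1, 0))    # close to 0.1
print(iterate(0.1, 0.5))  # an approximate half-iterate of 2^x at 0.1
```

At $h=1$ and $h=0$ the eigendecomposition reconstructs `F` and the identity exactly (up to conditioning), which is a useful internal check; the fractional $h$ values are where the truncation and precision questions discussed below actually bite.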
Letting $h$ go down to $h=1$, it seems that we approximate the value I've given above. And because I have earlier tested this method against the (claimed) Kneser method, and empirically found improving agreement with its results as the size of the matrix (which means the order of the underlying polynomial) increases, I think that we are near the true Kneser solution here.
Remark: the matrices `M` and `W` look ugly viewed as giving coefficients for polynomials (ideally we should have power series with infinitely many coefficients), and should surely be evaluated only for $z_0$ and $h$ near zero. But tests with $z_0=1$ and various heights $h$ gave meaningful results, such that $z_{h_1+h_2} = (z_{h_1})_{h_2}$.

{update: A much better way to compute the derivative $\frac{d}{dh}F^h$ is to use the derivative of the diagonal matrix $D^h$ with respect to $h$, namely $\frac{d}{dh}D^h = D^h\log D$; then $M\,(D^h\log D)\,W$ represents the derivative of `F^h` with respect to $h$ and immediately gives the first derivative with respect to $h$ at $z_0=1$ and $h=1$.}

This Q&D method can easily be reconstructed using Pari/GP. Note that the diagonalization needs a large value for the internal decimal precision; I used prec=800 (decimal digits) for the matrix size 32x32.
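The update's derivative trick can be sketched in Python at a small truncation size (names mine; double precision, so nowhere near the prec=800 setting, and `F` is my assumed Carleman-style reconstruction): the diagonal of $D^h$ is differentiated entrywise via $\frac{d}{dh}\lambda^h = \lambda^h\log\lambda$ and checked against a central difference.

```python
import numpy as np
from math import log, factorial

LN2 = log(2.0)
N = 8  # small truncation suitable for float64

# Row j holds the power-series coefficients of (2^x)^j = 2^(j*x).
F = np.array([[(j * LN2) ** k / factorial(k) for k in range(N)]
              for j in range(N)])
eigvals, M = np.linalg.eig(F)
W = np.linalg.inv(M)
lam = eigvals.astype(complex)

def iterate(x0, h):
    v = np.array([x0 ** k for k in range(N)])
    return (M @ np.diag(lam ** h) @ W @ v)[1].real

def d_iterate(x0, h):
    """d/dh of the h-th iterate, via d/dh D^h = D^h log D."""
    v = np.array([x0 ** k for k in range(N)])
    return (M @ np.diag(lam ** h * np.log(lam)) @ W @ v)[1].real

x0, h, eps = 1.0, 1.0, 1e-4
print(d_iterate(x0, h))
print((iterate(x0, h + eps) - iterate(x0, h - eps)) / (2 * eps))  # should match
```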
Short protocol ("a" means your sought value $a={z_h-2 \over h-1}$):
I did not find a closed form for this via Google, W|A, or the Inverse Symbolic Calculator, so possibly the same coefficient, just for the base $e$ instead, might be better known. The computation for the base $b=e$ instead of $b=2$ gives an analogous approximation, but no success in finding a closed-form representation so far.
Added: With that idea of a derivative with respect to the height parameter, the idea of constructing a power series from the sequence of higher derivatives comes to mind. By adapting my derivative formula (see the "update" paragraph above) I construct a power series in $h$ which gives the $h$'th iterates starting from $z_0=1$. These are the first 32 coefficients: $$ g(h)= \small \begin{array} {rl} 1 & \\ +0.8893649182531790 & *h \\ +0.008676688896772925 & *h^{2} \\ +0.09523868310780171 & *h^{3} \\ -0.005752348511600163 & *h^{4} \\ +0.01296662374986303 & *h^{5} \\ -0.002196108907293045 & *h^{6} \\ +0.001996803031014021 & *h^{7} \\ -0.0005633994435825211 & *h^{8} \\ +0.0003482750168520301 & *h^{9} \\ -0.0001285546189885988 & *h^{10} \\ +0.00006709638266032520 & *h^{11} \\ -0.00002830779288402342 & *h^{12} \\ +0.00001380563705669089 & *h^{13} \\ -0.000006205175034368587 & *h^{14} \\ +0.000002957461339961192 & *h^{15} \\ -0.000001369764406300663 & *h^{16} \\ +0.0000006496675894767698 & *h^{17} \\ -0.0000003055056195438239 & *h^{18} \\ +0.0000001451337771274319 & *h^{19} \\ -0.00000006884715898369447 & *h^{20} \\ +0.00000003282131706893498 & *h^{21} \\ -0.00000001565967274219413 & *h^{22} \\ +0.000000007493152321891950 & *h^{23} \\ -0.000000003590500482888162 & *h^{24} \\ +0.000000001723905030908289 & *h^{25} \\ -0.0000000008288804738313791 & *h^{26} \\ +0.0000000003991564395933366 & *h^{27} \\ -1.924678487785590E-10 & *h^{28} \\ +9.292387572960278E-11 & *h^{29} \\ -4.491503965074198E-11 & *h^{30} \\ +2.173332915755159E-11 & *h^{31} \end{array} $$
This is then good at least for $h= 0 \ldots 1$ and should reproduce the Kneser solution via $z_h=\exp_2^{\circ h}(0)=g(h-1)$.
(This part is just current experimentation and might contain errors.)
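Taking the printed coefficients at face value, the series can be checked directly: $g(0)=1$, and if $g$ really interpolates the iterates then $g(1)=2^{g(0)}=2$. A small Python check with Horner evaluation (names mine):

```python
coeffs = [
    1, 0.8893649182531790, 0.008676688896772925, 0.09523868310780171,
    -0.005752348511600163, 0.01296662374986303, -0.002196108907293045,
    0.001996803031014021, -0.0005633994435825211, 0.0003482750168520301,
    -0.0001285546189885988, 0.00006709638266032520, -0.00002830779288402342,
    0.00001380563705669089, -0.000006205175034368587, 0.000002957461339961192,
    -0.000001369764406300663, 0.0000006496675894767698,
    -0.0000003055056195438239, 0.0000001451337771274319,
    -0.00000006884715898369447, 0.00000003282131706893498,
    -0.00000001565967274219413, 0.000000007493152321891950,
    -0.000000003590500482888162, 0.000000001723905030908289,
    -0.0000000008288804738313791, 0.0000000003991564395933366,
    -1.924678487785590e-10, 9.292387572960278e-11,
    -4.491503965074198e-11, 2.173332915755159e-11,
]

def g(h):
    """Evaluate the truncated power series by Horner's rule."""
    acc = 0.0
    for c in reversed(coeffs):
        acc = acc * h + c
    return acc

print(g(0))                   # 1
print(g(1))                   # should be 2^g(0) = 2 if g interpolates the iterates
print(2 ** g(-0.5), g(0.5))   # the functional equation at a half step
```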