

Version 2. Some changes in the formulas were made for convenience, which resulted in a different sign convention for the A-matrices and denominators (no overall effect of the changes)
Gottfried Helms, 11.1.2010
first version: 9.8.2008
The reason for computing the eigendecomposition of the U_{t}-matrices is that fractional iterates of U_{t}-tetration (x -> t^{x} - 1) can be computed using fractional powers of the matrix (operator) U_{t}.
This concept is not new; it is known in the context of Bell and Carleman matrices for the iteration of functions. However, diagonalization is only one method of using these matrices to arrive at fractional iteration as a meaningful interpolation of the integer iteration. Another method involves the binomial expansion instead.
The general question of a method-independent, external appropriateness criterion for the selection of an interpolation method for fractional iteration of the exponential function (exp_{b}(x)) or of the "decremented exponential function" (dxp_{t}(x), or U_{t}(x) as I call it here) is not yet answered, and the decision for the "unique" appropriate interpolation method is thus still open.
An "external" argument could be, that the interpolations for fractional iterates should be invariant across different fixpointshifts. I should explain this a bit.
Assume the usual tetrational function

T_{b}(x) = b^{x}
T_{b}°^{1}(x) = T_{b}(x)
T_{b}°^{h}(x) = T_{b}°^{h-1}(T_{b}(x))

then T_{b}(x) has infinitely many fixpoints t_{0}, t_{1}, t_{2}, ... (almost all are complex), such that

T_{b}°^{h}(t_{k}) = t_{k}    or:    b^{t_k} = t_{k}    or:    b = t_{k}^{1/t_k}
Then we may translate the problem:

T_{b}°^{h}(x) = (U_{t_k}°^{h}(x/t_{k} - 1) + 1)*t_{k}
The advantage of this fixpoint-shift is that the power series for U_{t_k}(x) has no constant term (the matrix-operator for U_{t_k}() is then triangular), so that it can, for instance, be inverted. Fractional iterates can then also be computed formally using the matrix logarithm or diagonalization.
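The fixpoint-shift itself can be checked numerically. The following Python sketch (my own, not from the article) uses the base b = sqrt(2) with its real fixpoint t = 2 (since b^2 = 2) and verifies the translation formula for one integer iteration:

```python
from math import sqrt

b = sqrt(2.0)        # base with real fixpoint t = 2, since b**2 == 2
t = 2.0
assert abs(b**t - t) < 1e-12     # t is indeed a fixpoint of T_b

def T(x):            # T_b(x) = b^x
    return b**x

def U(x):            # U_t(x) = t^x - 1  (the fixpoint shifted to 0)
    return t**x - 1.0

# the fixpoint-shift at height h = 1: T_b(x) = (U_t(x/t - 1) + 1)*t
for x in (0.3, 1.0, 1.7):
    assert abs(T(x) - (U(x / t - 1.0) + 1.0) * t) < 1e-12
```

Algebraically this is immediate: (t^{x/t-1} - 1 + 1)*t = t^{x/t} = b^{x}, since b = t^{1/t}.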
The problem with this is that these fixpoint-shifts may give different results for different fixpoints (an argument to this effect was made in a discussion). If this is indeed the case, then a unique definition of tetration must include a convention for which fixpoint is to be taken (besides the question whether this method is appropriate for the complete mathematical context at all).
However, the matrix-based methods, including diagonalization, have been and are in use in research, so I leave this problem open and discuss the method further for the cases where no fixpoint-shift is needed, in order to exhibit some deeper details.
The diagonalization gives consistent results in the sense that, for instance,

U_{t}°^{0.5}(U_{t}°^{0.5}(x)) = U_{t}°^{0.5+0.5}(x) = U_{t}°^{1}(x)

or more generally

U_{t}°^{a}(U_{t}°^{b}(x)) = U_{t}°^{a+b}(x)

as long as always the same method is applied.
Here I give technical details on how to compute the diagonalization matrices (eigenmatrices) for such triangular matrix-operators, and some sample values for comparison which I haven't seen elsewhere.
Further links: a more detailed description can be found at
http://go.helmsnet.de/math/tetdocs/ContinuousfunctionalIteration.pdf
A different approach, based on polynomial interpolation, which however gives the same result, can be found at
http://go.helmsnet.de/math/tetdocs/ExponentialPolynomialInterpolation.pdf
For an intro in tetration see also [1]; a list of valuable references can be found in [2] and [3].
If we denote

V(x)~ * U_{t}^{h} = V(U_{t}°^{h}(x))~
(see all matrixdefinitions in appendix 1) then fractional powers of U_{t} can be computed by fractional powers of the diagonal matrix D of the eigendecomposition of U_{t}:
V(x)~ * U_{t}^{h} = V(x)~ * (W * D^{h} * W^{-1}) = V(U_{t}°^{h}(x))~
where D is the diagonal matrix of eigenvalues and W is the triangular matrix of eigenvectors. (Note that W and W^{-1} are unique only up to column-scaling of W if they are assumed to be triangular.)
In my computations I use a column-normed version of W having unit diagonal; then W appears itself to be a matrix-operator. It then provides the coefficients of the Schröder function σ, which decomposes the iteration of a function into
U_{t}°^{h}(x) = σ^{-1}(u^{h} * σ(x))
(see a short definition of the Schröder function at [1] (superlogarithm); Schröder's article at [4])
In the light of Schröder's result that

lim_{h->inf} f°^{h}(x)/f'(0)^{h} = σ(x)

which is also

lim_{h->inf} f°^{h}(x)/u^{h} = σ(x)

and since we find the relevant second column of the matrix W to be just that limit, the assumed property that W is in fact a matrix-operator is evident.
The matrix D is diagonal, containing the consecutive powers of u = log(t).
Note: the given eigendecomposition can only be computed if t ≠ exp(1), i.e. u ≠ 1; so we only discuss iterations of U_{t}(x) = t^{x} - 1 ≠ exp(x) - 1 here.
Besides numerical matrices, Pari/GP can handle symbolic coefficients in a basic way, so we can also feed the following procedure with the U_{t}-matrix containing u as an unevaluated symbol to get the symbolic solution. However, the symbolic description of the terms becomes much more expensive with increasing index; a matrix size of 16x16 gives the result in a blink, 32x32 needs a bit of time, and for 64x64 matrices computation time and memory requirements are already excessive (see a zoomed image of the coefficients-matrix at a_{16} in appendix 2).
The input for the following procedure is some triangular matrix (here U_{t}, where u = log(t) ≠ 1). The matrices W and W^{-1} are initialized with the unit diagonal, and new entries are then computed from the previously computed ones. In W we can compute the columns and in W^{-1} the rows separately, always beginning at the diagonal element (which is 1) and proceeding to the element at position r,c vertically resp. horizontally right-to-left.
{ TRI_EW(U) = local(u=U[2,2], W, D, Wi, t=exp(u));
    dim = rows(U);
    D = diag(U);      \\ for operator-matrices U_t this is 1, u, u^2, u^3, ...

    W = matid(dim);
    for(c=1, dim-1,
      for(r=c+1, dim,
        W[r,c] = sum(k=c, r-1, U[r,k]*W[k,c]) / (D[c]-D[r])
    ));

    Wi = matid(dim);  \\ Wi means the matrix W^{-1}
    for(r=2, dim,
      forstep(c=r-1, 1, -1,
        Wi[r,c] = sum(k=0, r-1-c, Wi[r,r-k]*U[r-k,c]) / (D[r]-D[c])
    ));

    return([[u,t,exp(u/t)], W, D, Wi]); }
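For readers without Pari/GP, the same recurrences can be sketched in Python (a translation of mine, not the author's code; U_{t} is built from the Stirling numbers of the 2nd kind, anticipating the construction of U and U_{t} given further below). It also checks the consistency claim U_{t}°^{0.5}(U_{t}°^{0.5}(x)) = U_{t}°^{1}(x):

```python
from math import log, factorial

def ut_matrix(u, n):
    # operator matrix of x -> t^x - 1 (t = exp(u)); column c holds the
    # coefficients of (t^x - 1)^c, i.e. U[r][c] = u^r * St2(r,c) * c!/r!
    St2 = [[0] * n for _ in range(n)]
    St2[0][0] = 1
    for r in range(1, n):
        for c in range(1, r + 1):
            St2[r][c] = c * St2[r - 1][c] + St2[r - 1][c - 1]
    return [[u**r * St2[r][c] * factorial(c) / factorial(r)
             for c in range(n)] for r in range(n)]

def tri_eigen(U):
    # TRI_EW: eigendecomposition U = W * diag(D) * Wi of a triangular matrix
    n = len(U)
    D = [U[i][i] for i in range(n)]
    W = [[float(r == c) for c in range(n)] for r in range(n)]
    Wi = [[float(r == c) for c in range(n)] for r in range(n)]
    for c in range(n - 1):                     # columns of W, top to bottom
        for r in range(c + 1, n):
            W[r][c] = sum(U[r][k] * W[k][c] for k in range(c, r)) / (D[c] - D[r])
    for r in range(1, n):                      # rows of Wi, right to left
        for c in range(r - 1, -1, -1):
            Wi[r][c] = sum(Wi[r][k] * U[k][c] for k in range(c + 1, r + 1)) / (D[r] - D[c])
    return W, D, Wi

def frac_iterate(x, h, W, D, Wi):
    # V(x)~ * W * D^h * Wi = V(U_t oh(x))~ ; entry 1 is U_t oh(x)
    n = len(D)
    vx = [x**r for r in range(n)]
    a = [sum(vx[r] * W[r][c] for r in range(n)) * D[c]**h for c in range(n)]
    return sum(a[k] * Wi[k][1] for k in range(n))

t = 2.0
u = log(t)
W, D, Wi = tri_eigen(ut_matrix(u, 16))
x = 0.1
half = frac_iterate(x, 0.5, W, D, Wi)          # half-iterate of t^x - 1
assert abs(frac_iterate(half, 0.5, W, D, Wi) - (t**x - 1.0)) < 1e-8
```

The truncation to a 16x16 matrix is accurate for small x, since the Schröder series for t = 2 converges well inside |x| < 1.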

Then the power series for U_{t}(x) = t^{x}-1 and its h'th iterate U_{t}°^{h}(x), due to the eigendecomposition

U_{t}^{h} = W * ^{d}V(u^{h}) * W^{-1}

has coefficients of the form a_{h;k}/d_{k} (the series display is not reproduced here). The denominators d_{k} are the products

d_{k} = (u-1)(u^{2}-1)*...*(u^{k-1}-1)
The a_{h;k} are polynomials in u and u^{h}; in the following the a_{h;k} are expressed as matrices of coefficients A_{k} of these polynomials:
A_{1} = 

=> a_{h;1} = 1*u^{h}        d_{1} = 1

A_{2} = 

=> a_{h;2} = 1*u^{h} - 1*u^{2h}        d_{2} = (u-1)

A_{3} = 

=> a_{h;3} = (1 + 2u)*u^{h} - (3 + 3u)*u^{2h} + (2 + u)*u^{3h}        d_{3} = (u-1)(u^{2}-1)

A_{4} = 

d_{4} = (u-1)(u^{2}-1)(u^{3}-1)

A_{5} = 



A_{6} = 



A_{7} = 



A_{8} = 



The number of rows of A_{k} is binomial(k-1,2)+1; the number of columns is k+1.
The highest exponent to which a power of u appears is thus binomial(k-1,2) due to the rows, plus k*h due to the columns.
For instance, in a_{h;8} for height h=1 this is binomial(7,2)+8*h = 21+8 = 29, and for h=2 it is 37.
However, for the complete coefficient at x^{k} this is increased by the cofactor u^{k-1} and also decreased by the exponents of u in the denominator d_{8}; here the highest exponent is binomial(k,2) = 28, so the remaining most significant power of u at x^{8} is u^{21+8*h+(8-1)-28}, which is u^{8} for h=1 and u^{16} for h=2. In general, the ratio of the most significant powers of u in numerator and denominator for each k is simply u^{k*h}.
If u is near 1 and h is fractional, additional hyperbolic effects occur, since the denominators approach zero as the integer powers u^{n} approach 1, and these are not evenly cancelled by the numerators if h is non-integer.
For integer heights the coefficients a_{h;k} are evenly divisible by the denominators, and only few different powers of u occur in the final coefficient a_{h;k}/d_{k}; for fractional h this is not the case. We then have a much more complicated structure of powers of u (however, the most significant power is still u^{k*h}).
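The divisibility at integer heights can be made concrete for k = 2. The Python sketch below (my own; since the sign of a_{h;k}/d_{k} depends on the chosen sign convention, see the version note at the top, only absolute values are compared) checks the formula against the x^2-coefficient of U_{t}°^{2}(x) obtained by direct composition:

```python
# x^2-coefficient of U_t composed with itself, where U_t(x) = u*x + u^2*x^2/2 + ...
# U_t(U_t(x)) gives u*(u^2/2) + (u^2/2)*u^2 = (u^3 + u^4)/2
def coeff2_direct(u):
    return (u**3 + u**4) / 2.0

# via a_{h;2} = u^h - u^(2h) and d_2 = (u - 1): the x^2-coefficient is
# u * a_{h;2} / (2! * d_2), up to the overall sign convention
def coeff2_formula(u, h):
    return u * (u**h - u**(2 * h)) / (2.0 * (u - 1.0))

for u in (0.3, 0.7, 1.5):
    assert abs(abs(coeff2_formula(u, 2)) - coeff2_direct(u)) < 1e-12
```

For integer h the numerator u^h - u^{2h} = u^h(1 - u^h) contains the factor (1 - u^h) = (1 - u)(1 + u + ... + u^{h-1}), so the division by d_2 = (u-1) leaves a polynomial; for fractional h it does not.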
The explicit computation of the coefficients a_{h;k} is

a_{h;k} = V(u)~ * A_{k} * V(u^{h})

where the indices of the matrices begin at 1, as used in Pari/GP.
Heights h introduce higher powers of u into the coefficients. For the coefficients-matrices A_{k} this means an increasing row-shift of each column according to the height h. Let now D denote the vector of coefficients at powers of u of the denominator, which are independent of the height parameter. If the A_{k} matrices are rewritten according to integer heights, they turn out to be linear combinations of (shifted) D-vectors; this shifting expresses multiplications by powers of u.
The rows of the following matrices correspond to the respective powers of u, so I added the according vectors of u-powers in the leftmost column to make this explicit. Since the row-shifting "renormalizes" the powers of u depending on the h-parameter, we can simply use the row sums to arrive at the final values of the scalar coefficient a_{h;k}.

For A_{4}, rewritten for the integer heights h = 0, 1, 2 as linear combinations of (shifted) D_{4}-vectors and summed by rows, this gives (the matrix displays themselves are not reproduced here):

a_{0;4} = V(u)~ * D_{4} * 0 = 0
a_{1;4} = u * V(u)~ * D_{4} * [1] = u*d_{4}
a_{2;4} = u^{2} * V(u)~ * D_{4} * [1, 7u, 6u^{2}, 1u^{3}] = u^{2}*(1 + 7u + 6u^{2} + u^{3})*d_{4}
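The h = 2 decomposition can be verified numerically. The Python sketch below (my own; it assumes the x^4-coefficient of the iterate is u^{3}*a_{h;4}/(4!*d_{4}), consistent with the degree bookkeeping above) compares it against direct series composition:

```python
from math import log, factorial

N = 6
u = log(2.0)
# Taylor coefficients of U(x) = t^x - 1 = sum_k u^k x^k / k!, truncated
U = [0.0] + [u**k / factorial(k) for k in range(1, N)]

def compose(f, g, n):
    # coefficients of f(g(x)) up to x^(n-1); g must have no constant term
    out = [0.0] * n
    power = [0.0] * n
    power[0] = 1.0                              # g(x)^0
    for k in range(n):
        for i in range(n):
            out[i] += f[k] * power[i]
        # multiply 'power' by g, giving g(x)^(k+1), truncated at x^(n-1)
        power = [sum(power[i] * g[j - i] for i in range(j + 1)) for j in range(n)]
    return out

c4 = compose(U, U, N)[4]                        # x^4-coefficient of U composed twice
d4 = (u - 1) * (u**2 - 1) * (u**3 - 1)
a24 = u**2 * (1 + 7*u + 6*u**2 + u**3) * d4     # a_{2;4} from the decomposition
assert abs(c4 - u**3 * a24 / (factorial(4) * d4)) < 1e-12
```

Both sides reduce to u^{5}(1 + 7u + 6u^{2} + u^{3})/24, so the denominator d_{4} cancels and the check is independent of the sign convention.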
The coefficients of the power series for the Schröder function for U_{t}(x), σ_{Ut}, occur in the second column of W, so that

V(x)~ * W = V(σ_{Ut}(x))~

and for the inverse

V(x)~ * W^{-1} = V(σ_{Ut}^{-1}(x))~
so that finally

V(x)~ * W = V(σ_{Ut}(x))~
V(σ_{Ut}(x))~ * ^{d}V(u^{h}) = V(u^{h} * σ_{Ut}(x))~
V(u^{h} * σ_{Ut}(x))~ * W^{-1} = V(σ_{Ut}^{-1}(u^{h} * σ_{Ut}(x)))~ = V(U_{t}°^{h}(x))~
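The second column of W can also be computed stand-alone. The following Python sketch (my own translation of the TRI_EW recurrence, restricted to the column c = 1) extracts the Schröder coefficients and checks the functional equation σ_{Ut}(U_{t}(x)) = u*σ_{Ut}(x):

```python
from math import log, factorial

t = 2.0
u = log(t)
n = 24

# U_t[r][c] = u^r * St2(r,c) * c!/r!  (St2: Stirling numbers of the 2nd kind)
St2 = [[0] * n for _ in range(n)]
St2[0][0] = 1
for r in range(1, n):
    for c in range(1, r + 1):
        St2[r][c] = c * St2[r - 1][c] + St2[r - 1][c - 1]
U = [[u**r * St2[r][c] * factorial(c) / factorial(r) for c in range(n)]
     for r in range(n)]

# second column of W: W[r][1] = sum_k U[r][k]*W[k][1] / (u - u^r)
sig = [0.0] * n
sig[1] = 1.0
for r in range(2, n):
    sig[r] = sum(U[r][k] * sig[k] for k in range(1, r)) / (u - u**r)

def sigma(x):
    # truncated Schroeder series sigma(x) = sum_r sig[r] * x^r
    return sum(sig[r] * x**r for r in range(1, n))

x = 0.05
assert abs(sigma(t**x - 1.0) - u * sigma(x)) < 1e-10
```

The functional equation holds because U_{t}W = WD column-wise: applying U_{t} to the second column multiplies it by the eigenvalue u.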
The power series has the same form as before (the display is not reproduced here). The denominators d_{k} are the same as in the previous section, except for an additional division by u^{k-1}:

d_{k} = (u-1)(u^{2}-1)*...*(u^{k-1}-1) / u^{k-1}
The s_{k} are polynomials in u only; in the following they are expressed as row vectors S_{k} of their coefficients, where the leftmost entry is associated with u^{0}:
S_{1} = 

=> s_{1} = 1        d_{1} = 1

S_{2} = 

=> s_{2} = 1        d_{2} = (u-1)/u

S_{3} = 

=> s_{3} = (1 + 2u)        d_{3} = (u-1)(u^{2}-1)/u^{2}

S_{4} = 



S_{5} = 



S_{6} = 



S_{7} = 



The number of columns of S_{k} is binomial(k-1,2)+1, so the highest exponent to which a power of u appears in s_{k} is binomial(k-1,2).
Note that these coefficients also occur in the second column of the corresponding A-matrices, which agrees with the fact that the Schröder function is assumed to be the limit of the case h -> inf: then all other columns of those A-matrices get shifted to infinity and, if u < 1, vanish as u^{h} approaches zero.
We have from this
s_{k} = S_{k} * V(u)

To see the ratio of the highest exponents of u in numerator and denominator, we add binomial(k-1,2) and (k-1) according to the main formula and subtract binomial(k,2) according to the denominator, which comes out to be zero for each k.
The coefficients for the inverse of the Schröder function are taken in the same way from W^{-1} instead.
For U-tetration the matrices U and U_{t} are the triangular Stirling matrix, scaled as follows:

U = ^{d}F^{-1} * St2 * ^{d}F        // performs x -> exp(x) - 1
U_{t} = ^{d}V(u) * U                // where u = log(t); performs x -> t^{x} - 1

where the involved matrices (of assumed infinite size; only the top-left segment would be displayed) are:

^{d}V(u) = diag(V(u)) : the column vector V(u) = [1, u, u^{2}, u^{3}, u^{4}, ...] in diagonal arrangement
^{d}F = diag(0!, 1!, 2!, 3!, ...)
St2 : the Stirling numbers of the 2nd kind
U = ^{d}F^{-1} * St2 * ^{d}F  (display not reproduced)
U_{t} : the numerical value of u = log(t) is actually used here
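This construction is easy to check directly. The following Python sketch (my own, not part of the article) builds St2, applies the two diagonal scalings, and verifies that the second column of U_{t} carries the Taylor coefficients of t^{x} - 1 and that V(x)~ * U_{t} reproduces the powers of t^{x} - 1 up to truncation:

```python
from math import log, factorial

n = 12
t = 2.0
u = log(t)

# St2: Stirling numbers of the 2nd kind (lower triangular)
St2 = [[0.0] * n for _ in range(n)]
St2[0][0] = 1.0
for r in range(1, n):
    for c in range(1, r + 1):
        St2[r][c] = c * St2[r - 1][c] + St2[r - 1][c - 1]

# U = dF^{-1} * St2 * dF  with dF = diag(0!,1!,2!,...): performs x -> exp(x)-1
U = [[St2[r][c] * factorial(c) / factorial(r) for c in range(n)] for r in range(n)]
# U_t = dV(u) * U  with dV(u) = diag(1,u,u^2,...): performs x -> t^x - 1
Ut = [[u**r * U[r][c] for c in range(n)] for r in range(n)]

# column 1 of U_t must hold the Taylor coefficients of t^x - 1 = sum u^r x^r / r!
for r in range(1, n):
    assert abs(Ut[r][1] - u**r / factorial(r)) < 1e-12

# and V(x)~ * U_t approximates V(t^x - 1)~ up to truncation
x = 0.1
y = t**x - 1.0
row = [sum(x**r * Ut[r][c] for r in range(n)) for c in range(n)]
for c in range(4):
    assert abs(row[c] - y**c) < 1e-9
```

Column c of U_{t} thus holds the coefficients of (t^{x} - 1)^{c}, which is exactly what makes V(x)~ * U_{t} = V(U_{t}(x))~ work.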
Note: the related Bell matrix is just another convention for the use of U_{t}: it is a similarity-scaling of U_{t}, in which the factorials are removed:

BELL_{t} = ^{d}V(u) * St2

The Carleman matrix is the transpose of the Bell matrix.
Here is a zoomed image of the coefficients-matrix A_{16} for a_{16} at x^{16}:
Gottfried Helms; 9. Aug. 2008
11.1.2010: added info about the identity of the W-column and the A-matrix (in context of the Schröder function)
Further references:
[1] Wikipedia: "tetration", superlogarithm, iteration of functions, Bell/Carleman matrix
[2] Andrew Robbins, "home of tetration"; references in the article
[3] Ioannis Galidakis, section "LambertW, tetration..."; references
[4] Ernst Schröder, "Über iterirte Funktionen"; in: Mathematische Annalen, Band 3, 1871, p. 296 ff.
[5] Henryk Trappmann et al., at the tetration forum (an extensive discussion is led there)
[6] Gottfried Helms, technical articles on tetration (my own project index)