Seeking coefficients of the superfunction of an analytic function
Preliminaries: Assume an analytic function defined by a power series without constant term:
f(x) = a_1 x + a_2 x^2 + a_3 x^3 + …
For plain-text writing I also use the easier notation f(x) = a x + b x^2 + c x^3 + …
Use of matrices for functional composition: Composition and self-composition of analytic functions can be expressed elegantly using a matrix notation (also known as the Bell or Carleman matrix). Consider the vectorial representation of f(x) as the dot product of a vector V(x) = [1, x, x^2, x^3, …] containing the powers of x and another vector, A_{1}, containing the coefficients, such that
V(x)~ * A_{1} = 1*a_0 + x*a_1 + x^2*a_2 + … = f(x)     (here a_0 = 0)
The idea of composition/iteration of functions can then be expressed if we collect all vectors A_{k} into a matrix, where the A_{k} are defined such that V(x)~ * A_{k} = f(x)^k, or said differently, such that the generating function for A_{k} is f(x)^k. Denote such a matrix A = [A_{0}, A_{1}, A_{2}, A_{3}, …] and we will have
V(x)~ * A = [1, x, x^2, x^3, …] * A = [1, f(x), f(x)^2, f(x)^3, …] = V(f(x))~
Because the "input" and the "output" of that matrix formula have the same type (a vector containing the consecutive powers of its parameter x; I call this a "Vandermonde vector"), we can express iteration simply:
V(x)~ * A = V(f(x))~
V(f(x))~ * A = V(f(f(x)))~
and as long as the matrix products A*A, A*A*A, … do not run into singularities we can even write
V(x)~ * A^h = V(f°h(x))~
where f°h is the h'th iterate of f.
For analytic functions having no constant term, for instance exp(x) - 1, these matrices are lower triangular, the matrix powers do not introduce singularities (each inner product has finitely many terms), and they even allow fractional powers and thus fractional iterates of the associated function.
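As a cross-check, the truncated Bell matrix of exp(x) - 1 and the iteration property V(x)~ * A * A = V(f(f(x)))~ can be sketched in a few lines of Python. This is a minimal illustration with exact rationals; all function names here are mine, not part of the text:

```python
from fractions import Fraction
from math import factorial

N = 8  # truncation size of the matrix / series

# coefficients of f(x) = exp(x) - 1 (no constant term): a_k = 1/k!
f = [Fraction(0)] + [Fraction(1, factorial(k)) for k in range(1, N)]

def series_mul(p, q):
    """Product of two power series, truncated to N coefficients."""
    r = [Fraction(0)] * N
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            if i + j < N:
                r[i + j] += pi * qj
    return r

def bell_matrix(f):
    """A[r][c] = coefficient of x^r in f(x)^c (lower triangular)."""
    cols, p = [], [Fraction(1)] + [Fraction(0)] * (N - 1)  # f^0 = 1
    for _ in range(N):
        cols.append(p)
        p = series_mul(p, f)
    return [[cols[c][r] for c in range(N)] for r in range(N)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

def compose(outer, inner):
    """Coefficients of outer(inner(x)) by Horner's scheme (inner has no constant term)."""
    res = [Fraction(0)] * N
    for coeff in reversed(outer):
        res = series_mul(res, inner)
        res[0] += coeff
    return res

A = bell_matrix(f)
A2 = matmul(A, A)
ff = compose(f, f)  # series of f(f(x)) = exp(exp(x) - 1) - 1

# column 1 of A*A carries the coefficients of the iterate f(f(x))
assert all(A2[r][1] == ff[r] for r in range(N))
```

Because the matrices are lower triangular, the truncated matrix product reproduces the first N coefficients of the iterate exactly, with no truncation error.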
For compactness of notation I introduce a symbolic notation for a power divided by the matching factorial:
a°k = a^k / k!
Then the Bell matrix A for a general analytic function f without constant term has infinite size, is triangular, and its top-left segment begins with

       0!    1!     2!                     3!
  [    1     .      .                      .             ]
  [    0    a_1     .                      .             ]
  [    0    a_2    a_1°2                   .             ]
  [    0    a_3    a_1°1 a_2°1            a_1°3          ]
  [    0    a_4    a_1°1 a_3°1 + a_2°2    a_1°2 a_2°1    ]
  ...
(The top row shows a factorial which must be multiplied into each entry of its column.) Here the sum of the exponents agrees with the column index, and the sum of exponent*index over the factors of each product term agrees with the row index. It is just a complete partitioning scheme.
Example: the entry a_3°1 a_2°2 is in column 3 because the sum of the exponents is 1+2 = 3, and it is in row 7 because exponents and indices give 3*1 + 2*2 = 7. All combinations giving 3 and 7 this way are summands of that matrix entry.
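The partitioning scheme can be seen in action by enumerating ordered index tuples instead of partitions: the multinomial multiplicities then appear automatically, absorbing the column factorial c! and the ° factorials. A small Python sketch (function names are mine), applied to f(x) = exp(x) - 1:

```python
from fractions import Fraction
from math import factorial
from itertools import product

# coefficients of f(x) = exp(x) - 1:  a_k = 1/k!
a = {k: Fraction(1, factorial(k)) for k in range(1, 11)}

def bell_entry(r, c):
    """[x^r] f(x)^c as a sum over ordered tuples (i_1,...,i_c) with i_1+...+i_c = r.

    Each unordered combination (partition) appears c!/(m_1!*m_2!*...) times,
    which reproduces the c! * prod a_i°m_i of the partitioning scheme."""
    total = Fraction(0)
    for idx in product(range(1, r + 1), repeat=c):
        if sum(idx) == r:
            term = Fraction(1)
            for i in idx:
                term *= a[i]
            total += term
    return total
```

For instance bell_entry(7, 3) sums exactly the partitions of row 7, column 3 discussed above (5+1+1, 4+2+1, 3+3+1 and the a_3°1 a_2°2 combination 3+2+2).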
For easier reading, written out in consecutive letters: f(x) = ax + bx^2 + cx^3 + …
Application to the exp-function: Applying this to the exponential function requires
1!*a = 2!*b = 3!*c = 4!*d = … = 1, i.e. a_k = 1/k!, and f(x) = exp(x) - 1.
Then we have

  [  1      .       .      .     .  ]
  [  0      1       .      .     .  ]
  [  0     1/2      1      .     .  ]
  [  0     1/6      1      1     .  ]
  [  0    1/24    7/12    3/2    1  ]
  ...

Note that the resulting entries are just factorially scaled Stirling numbers of the 2nd kind: A_{r,c} = stirling2(r,c)*c!/r!.
We check one element:
A_{4,2} = (1/3! + (1/2!)^2/2!)*2! = 1/3 + (1/2!)^2 = 7/12 = stirling2(4,2)*2!/4!
Further, we want this matrix to be the Bell matrix for f(x) = t^x - 1 = exp(u*x) - 1 for some base t other than e, so we have to premultiply each row r by a power u^r of log(t) (which we call u):
A_t =
  [  1        .           .           .         .   ]
  [  0        u           .           .         .   ]
  [  0      u^2/2        u^2          .         .   ]
  [  0      u^3/6        u^3         u^3        .   ]
  [  0     u^4/24    (7/12)u^4    (3/2)u^4     u^4  ]
  ...
The coefficients of the superfunction
The matrix of eigenvectors: What we are discussing in the MO thread is the explicit description of the coefficients of the power series of the superfunction F of exp(x), such that F(1+h) = exp(F(h)). The expression for the linearizing function in Schröder's foundational article uses the infinite iterate as a limit to compute the coefficients of the power series for the "Schröder function". This is exactly the functionality which is captured by the matrix of eigenvectors of a diagonalizable matrix: the matrix of eigenvectors can be seen as a scaled version of the infinite power of the diagonalizable matrix. Thus the diagonalization provides us with the coefficients for the power series of the Schröder function for regular iteration.
We have to determine the eigenmatrix W such that
W^-1 * D * W = A_t
where D is diagonal and contains the eigenvalues. For triangular matrices the eigenvalues are just the entries of the diagonal, so the process of determining the eigenmatrix W (also normed to have diagonal 1) is very simple and straightforward. See the following Pari/GP code:
{ EigenW(A) =                \\ diagonalization of a lower triangular matrix
  local(W, dim);
  dim = rows(A);
  W = matid(dim);            \\ note that in Pari/GP the row/col indexes begin at 1
  for(r = 2, dim,
    forstep(c = r-1, 1, -1,
      W[r,c] = sum(k = 0, r-1-c,
                   W[r,r-k] * A[r-k,c]) / (A[r,r] - A[c,c])));
  return([W^-1, matdiagonal(vector(dim, i, A[i,i])), W]); }
                             \\ returns all three matrices W^-1, D, W
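The same algorithm can be ported to Python with exact rationals for verification (0-based indices; I use a rational stand-in value u = 1/2 for log(t), since the recursion only needs the diagonal entries u^r to be distinct). This is a sketch for checking, not the original code:

```python
from fractions import Fraction
from math import comb, factorial

N = 6
u = Fraction(1, 2)  # stand-in for log(t); any u with distinct powers u^r works

def stirling2(n, k):
    """Stirling numbers of the 2nd kind via the explicit inclusion-exclusion formula."""
    return sum((-1) ** (k - j) * comb(k, j) * j ** n for j in range(k + 1)) // factorial(k)

# A[r][c] = u^r * stirling2(r,c) * c!/r!   (Bell matrix of exp(u*x) - 1)
A = [[u ** r * stirling2(r, c) * Fraction(factorial(c), factorial(r))
      for c in range(N)] for r in range(N)]

def eigen_w(A):
    """Unit-diagonal eigenmatrix W of a lower triangular A, so that W*A = D*W."""
    n = len(A)
    W = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for r in range(1, n):
        for c in range(r - 1, -1, -1):
            W[r][c] = sum(W[r][r - k] * A[r - k][c] for k in range(r - c)) \
                      / (A[r][r] - A[c][c])
    return W

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

W = eigen_w(A)
D = [[A[r][r] if r == c else Fraction(0) for c in range(N)] for r in range(N)]

# W * A == D * W is equivalent to W^-1 * D * W == A (the eigendecomposition)
assert matmul(W, A) == matmul(D, W)
```

Checking W*A = D*W instead of forming W^-1 avoids a matrix inversion and verifies the same eigenrelation exactly.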
Example: the computation of the top-left segment of W according to EigenW(A), using one level of recursion; each off-diagonal entry shown is still to be divided by (A_{r,r} - A_{c,c}), as in the code:

  [  1                                               .                          .          .  ]
  [  1*A_{2,1}                                       1                          .          .  ]
  [  1*A_{3,1} + W_{3,2}*A_{2,1}                     1*A_{3,2}                  1          .  ]
  [  1*A_{4,1} + W_{4,3}*A_{3,1} + W_{4,2}*A_{2,1}   1*A_{4,2} + W_{4,3}*A_{3,2}   1*A_{4,3}   1  ]
Removing the recursion: This definition contains recursion, but it is not deep and is easily removable.
Example entry W_{4,1}: After expanding the recursive references, W_{4,1} explicitly is:

  W_{4,1} = ( A_{4,1}
              + A_{4,3}*A_{3,1} / (A_{4,4} - A_{3,3})
              + ( A_{4,2} + A_{4,3}*A_{3,2} / (A_{4,4} - A_{3,3}) ) * A_{2,1} / (A_{4,4} - A_{2,2})
            ) / (A_{4,4} - A_{1,1})
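The recursion-free expansion of W_{4,1} can be checked numerically against the recursive definition on any lower triangular matrix with distinct diagonal entries. A small sketch with an arbitrary rational test matrix; index 0 is left unused to mirror the 1-based Pari/GP subscripts of the text:

```python
from fractions import Fraction

# arbitrary lower triangular test matrix with distinct diagonal entries,
# 1-based indexing (row/col 0 unused) as in Pari/GP
n = 4
A = [[Fraction(0)] * (n + 1) for _ in range(n + 1)]
for r in range(1, n + 1):
    for c in range(1, r):
        A[r][c] = Fraction(1, r + c)   # arbitrary off-diagonal test values
    A[r][r] = Fraction(r)              # distinct diagonal: 1, 2, 3, 4

# the recursive definition, as in EigenW
W = [[Fraction(int(i == j)) for j in range(n + 1)] for i in range(n + 1)]
for r in range(2, n + 1):
    for c in range(r - 1, 0, -1):
        W[r][c] = sum(W[r][r - k] * A[r - k][c] for k in range(r - c)) \
                  / (A[r][r] - A[c][c])

# the expanded, recursion-free expression for W_{4,1}
w41 = (A[4][1]
       + A[4][3] * A[3][1] / (A[4][4] - A[3][3])
       + (A[4][2] + A[4][3] * A[3][2] / (A[4][4] - A[3][3]))
         * A[2][1] / (A[4][4] - A[2][2])) / (A[4][4] - A[1][1])

assert w41 == W[4][1]
```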
It may be helpful to put everything over a common denominator and expand the resulting numerator, to possibly see a more closed-form pattern.
We may also introduce the current parameter u (the log of the base t here) and its powers into the formula; the diagonal entries are A_{r,r} = u^r, so the denominators become differences of powers of u:

  W_{4,1} = ( A_{4,1}
              + A_{4,3}*A_{3,1} / (u^4 - u^3)
              + ( A_{4,2} + A_{4,3}*A_{3,2} / (u^4 - u^3) ) * A_{2,1} / (u^4 - u^2)
            ) / (u^4 - u)
Now expand the remaining references A_{r,c} in the numerator. For the t^x - 1 function these are just the factorially scaled Stirling numbers of the 2nd kind, together with the appropriate power of log(t) (= u):

  A_{r,c} = u^r * s_{r,c}    where s_{r,c} = stirling2(r,c)*c!/r!
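The form A_{r,c} = u^r * s_{r,c} can itself be cross-checked by computing the Bell matrix of exp(u*x) - 1 directly from its series powers. A sketch with an arbitrary rational stand-in for u; all names here are mine:

```python
from fractions import Fraction
from math import comb, factorial

N, u = 6, Fraction(1, 3)  # rational stand-in for u = log(t)

def stirling2(n, k):
    """Stirling numbers of the 2nd kind (explicit inclusion-exclusion formula)."""
    return sum((-1) ** (k - j) * comb(k, j) * j ** n for j in range(k + 1)) // factorial(k)

def series_mul(p, q):
    """Product of two power series, truncated to N coefficients."""
    r = [Fraction(0)] * N
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            if i + j < N:
                r[i + j] += pi * qj
    return r

# coefficients of f(x) = exp(u*x) - 1:  u^k / k!
f = [Fraction(0)] + [u ** k / factorial(k) for k in range(1, N)]

# column c of the Bell matrix = coefficients of f(x)^c
cols, p = [], [Fraction(1)] + [Fraction(0)] * (N - 1)
for _ in range(N):
    cols.append(p)
    p = series_mul(p, f)

# compare entrywise with u^r * s_{r,c},  s_{r,c} = stirling2(r,c) * c!/r!
for r in range(N):
    for c in range(N):
        s = Fraction(stirling2(r, c) * factorial(c), factorial(r))
        assert cols[c][r] == u ** r * s
```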
Even more explicitly, we could use the detailed decompositions of the A_{r,c} by factorials:
Everything expanded and like powers of u collected:
Here we see the four coefficients of your list in the fourth row: (6,6,5,1)
Gottfried Helms, 8.3.2011