Here is an explanation, in terms of Pari/GP code, of how the serial alternating iteration sum asum(x) can be expressed and computed with the help of power series.


Preliminaries: global variables for the base, the log of the base, and dim for the dimension. Then the standard routine for the iterated exponential/logarithm with a variable (but integer) height parameter:

Code:

[tb = 1.3, tl = log(tb)]   \\ base and its logarithm
dim = 64   \\ power series expansion to "dim" terms, and matrix size
\ps 64

\\ exph: direct access to the iteration height
exph(x, h=1) = for(k=1, h, x = exp(x*tl)-1); for(k=1, -h, x = log(1+x)/tl); return(x);

glx = 1.0   \\ for sequential access to the iteration, one step at a time
{nextx(x, h=0) =
    if(h == 0, glx = x; return(glx));
    glx = if(h > 0, exph(glx, 1), exph(glx, -1));
    return(glx); }
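As a quick cross-check of exph, the same integer-height iteration can be sketched in a few lines of Python (an illustration only, not the author's code; the names tb, tl, exph simply mirror the GP routine above):

```python
import math

tb = 1.3              # base
tl = math.log(tb)     # log of the base

def exph(x, h=1):
    """Iterate x -> tb^x - 1 (h > 0) or its inverse x -> log(1+x)/log(tb) (h < 0)."""
    for _ in range(h):        # empty loop when h <= 0, like GP's for(k=1,h,...)
        x = math.exp(x * tl) - 1
    for _ in range(-h):       # empty loop when h >= 0
        x = math.log(1 + x) / tl
    return x
```

For tb = 1.3 one has exph(1.0, 1) = tb - 1 = 0.3, and exph(exph(x, 3), -3) recovers x up to rounding.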

First, the formula for the alternating iteration series; this is "asum(x)". It is simply the summation of the iterated exponentials/logarithms.

Because we want to compare this with a true power-series solution (via the Carleman-matrix concept), where we have to separate the sum into two parts, we do the same here: the alternating iteration series towards fp0 is asump(x), and that towards fp1 is asumn(x). After adding the two segments we have to reduce the joint sum by x, because x is accounted for twice:

Code:

asump(x) = sumalt(k=0, (-1)^k * nextx(x,  k))
asumn(x) = sumalt(k=0, (-1)^k * nextx(x, -k))
asum(x)  = asump(x) + asumn(x) - x

\\ -------------------------------------------------------------
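Python has no sumalt, but the same three sums can be cross-checked with plain floating-point loops. The series towards fp0 converges outright; the series towards fp1 has terms tending to fp1 ≠ 0, so below it is summed in the regularized sense sum (-1)^k (term - fp1) + fp1/2 — the value sumalt assigns to it, and the same fp1/2 that shows up in asumn_mat further down. A sketch under these assumptions, not the author's code:

```python
import math

tb, tl = 1.3, math.log(1.3)

def f(x):    return math.exp(x * tl) - 1   # one step towards fp0 = 0
def finv(x): return math.log(1 + x) / tl   # one step towards the upper fixpoint fp1

fp1 = 1.0                                  # upper fixpoint as limit of the inverse iteration
for _ in range(300):
    fp1 = finv(fp1)

def asump(x, n=200):
    """sum_{k>=0} (-1)^k f^k(x); the terms go to 0, so plain summation suffices."""
    s = 0.0
    for k in range(n):
        s += (-1) ** k * x
        x = f(x)
    return s

def asumn(x, n=200):
    """Regularized sum_{k>=0} (-1)^k finv^k(x): subtract the limit fp1 from each
    term (this makes the series convergent) and add fp1/2 for the constant part,
    since sum (-1)^k = 1/2 in the Abel/Euler sense."""
    s = fp1 / 2
    for k in range(n):
        s += (-1) ** k * (x - fp1)
        x = finv(x)
    return s

def asum(x):
    return asump(x) + asumn(x) - x
```

At x0 = 1.0 this reproduces, to double precision, the values quoted in the test section at the end of the post (asump ≈ 0.7646980…, asumn ≈ 0.2452042…, asum ≈ 0.0099022…).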

Now we create Carleman matrices for these sums; that means we shall get power series for them. First we need the Carleman matrix for tb^x-1 developed around fp0 and the one developed around fp1. These are the "Carl0" and "Carl1" matrices.

Then the matrices which provide the coefficients for the power series of the alternating sums are created using the Neumann-series expression for those Carleman matrices (CarlAsp, CarlAsn).

Note the small difference in the formula for CarlAsn compared to that for CarlAsp.
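In formula form (a sketch of the identity the code relies on, using the row-vector Carleman convention V(x)·C = V(f(x)) with V(x) = (1, x, x², …) and writing C0, C1 for Carl0, Carl1):

```latex
\mathrm{CarlAsp} \;=\; (I + C_0)^{-1} \;=\; \sum_{k \ge 0} (-1)^k C_0^{\,k},
\qquad
\mathrm{CarlAsn} \;=\; \bigl(I + C_1^{-1}\bigr)^{-1} \;=\; \sum_{k \ge 0} (-1)^k C_1^{-k}.
```

Since V(x)·C0^k = V(f°k(x)), the second column of CarlAsp collects exactly the coefficients of sum_k (-1)^k f°k(x); the small difference is that CarlAsn is built from the inverse matrix C1^-1, i.e. from negative iteration heights.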

\\ Series and Carleman-matrix related to fixpoint fp0 (= 0)

Code:

fp0 = 0   \\ lower fixpoint of tb^x-1 (not assigned in the original excerpt; added here)
fp1 = solve(t=1, 20, exp(tl*t)-1 - t)   \\ upper fixpoint, about 8.634 for tb=1.3 (likewise added here)

coeffs0 = polcoeffs(exp(tl*(x+fp0))-1 - fp0, dim); coeffs0[1]=0;   \\ polcoeffs: the author's helper (not shown)
print( coeffs0 )   \\ display coeffs0 in the gp dialogue
Carl0 = matfromser(coeffs0);   \\ matfromser (author's helper): Carleman matrix for tb^x-1 by power series around fp0 (=0)
CarlAsp = (matid(dim) + Carl0)^-1   \\ Neumann-series matrix for Carl0, giving the series coefficients for "asump" (= the alternating series towards fp0)
coeffs_asp = CarlAsp[,2]   \\ those coefficients as a vector, for computation of "asp_mat(x-fp0)+fp0" = "asp_mat(x)" because fp0=0

\\ Series and Carleman-matrix related to fixpoint fp1

coeffs1 = polcoeffs(exp(tl*(x+fp1))-1 - fp1, dim); coeffs1[1]=0;
print( coeffs1 )   \\ display coeffs1 in the gp dialogue
Carl1 = matfromser(coeffs1);   \\ Carleman matrix for tb^(x+fp1)-1-fp1 by power series around fp1 (about 8.634 for tb=1.3)
CarlAsn = (matid(dim) + Carl1^-1)^-1   \\ Neumann-series matrix for Carl1^-1, giving the coefficients for "asumn" (= the alternating series towards fp1)
coeffs_asn = CarlAsn[,2]   \\ those coefficients as a vector, for computation of "asn_mat(x-fp1)+fp1"

{asump_mat(x) = local(h, su);
    while( abs(x - fp0) > 0.5,   \\ move the argument x towards fp0 for the power series
        su = su + x; x = exph(x, 1);
        su = su - x; x = exph(x, 1);
    );
    x = x - fp0;
    su = su + sum(k=0, dim-1, x^k * coeffs_asp[1+k]);   \\ evaluate asp_mat at the shifted x
    su = su + fp0/2;
    return(su); }

{asumn_mat(x) = local(h, su);
    while( abs(x - fp1) > 0.5,   \\ move the argument x towards fp1 for the power series
        su = su + x; x = exph(x, -1);
        su = su - x; x = exph(x, -1);
    );
    x = x - fp1;
    su = su + sum(k=0, dim-1, x^k * coeffs_asn[1+k]);   \\ evaluate asn_mat at the shifted x
    su = su + fp1/2;
    return(su); }

asum_mat(x)=asump_mat(x)+asumn_mat(x)-x

\\ ==============================================
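The fp0-branch of this construction can be cross-checked in pure Python (stdlib only). Here the roles of the helpers polcoeffs and matfromser are played by explicit truncated polynomial arithmetic, and the inverse (I + Carl0)^-1 is obtained by forward substitution, since in the convention V(x)·C = V(f(x)) the matrix is triangular (f(x)^k starts at x^k). A sketch with dim = 32, not the author's code:

```python
import math

tb, tl, dim = 1.3, math.log(1.3), 32

def polmul(a, b):
    """Truncated Cauchy product of two coefficient lists (length dim)."""
    c = [0.0] * dim
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j < dim:
                    c[i + j] += ai * bj
    return c

# coefficients of f(x) = tb^x - 1 = exp(tl*x) - 1 around fp0 = 0:  tl^m / m!
fc = [0.0] + [tl ** m / math.factorial(m) for m in range(1, dim)]

# fpow[k] = coefficients of f(x)^k; the Carleman matrix in the convention
# V(x) * C = V(f(x)) (V(x) = (1, x, x^2, ...)) has entries C[i][k] = fpow[k][i]
fpow = [[1.0] + [0.0] * (dim - 1)]
for _ in range(dim - 1):
    fpow.append(polmul(fpow[-1], fc))

# Neumann series: column 1 of (I + C)^-1 holds the series coefficients p of
# asump(x) = sum_k (-1)^k f^k(x).  Since f(x)^k starts at x^k, I + C is lower
# triangular here, so forward substitution solves (I + C) p = e_1 directly.
p = [0.0] * dim
for i in range(dim):
    rhs = (1.0 if i == 1 else 0.0) - sum(fpow[j][i] * p[j] for j in range(i))
    p[i] = rhs / (1.0 + fpow[i][i])        # diagonal entry is 1 + tl^i

def asump_series(x):
    return sum(pk * x ** k for k, pk in enumerate(p))

def asump_direct(x, n=120):                # plain alternating sum for comparison
    s = 0.0
    for k in range(n):
        s += (-1) ** k * x
        x = math.exp(x * tl) - 1
    return s
```

asump_series then agrees with the direct alternating sum wherever the series converges; e.g. at x = 0.3 the two values match to roughly double precision, and the linear coefficient comes out as 1/(1+tl) = sum (-1)^k tl^k, as expected.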

Let us ignore the integer-height iterations which shift an argument x sufficiently close towards the fixpoints; then the formulae for the power-series solutions are

asump(x) = fp0/2 + sum_{k>=0} p_k * (x - fp0)^k

where the coefficients p are taken from the coeffs_asp array, and

asumn(x) = t/2 + sum_{k>=0} n_k * (x - t)^k

where the coefficients n are taken from the coeffs_asn array and t denotes the upper fixpoint fp1.

Now we test these functions:

Code:

\\ test at x0 = 1.0
x0 = 1.0
[aser = asump(x0), amat = asump_mat(x0), err = aser - amat]
[aser = asumn(x0), amat = asumn_mat(x0), err = aser - amat]
[aser = asum(x0),  amat = asum_mat(x0),  err = aser - amat]

\\ columns: serial result, matrix/power-series result, difference

%1269 = [0.764698042872, 0.764698042872, 7.30924528016 E-132]

%1270 = [0.245204215222, 0.245204215222, 1.43545677556 E-118]

%1271 = [0.00990225809450, 0.00990225809450, 1.43545677556 E-118]

The errors with the power series truncated to 64 terms are far smaller than 1e-100. So we should accept this as a very good approximation, and I think it is a reasonable hypothesis to assume it is a true analytical solution.

However, for me it is somehow unusual to have two power series involved in the computation, and also the initial "shift" of the x-argument towards the respective fixpoints by integer iterations, so that the power series converge. I have no idea how, for instance, this process could then be inverted to get a power series (even if only a formal power series) assigned to...

Here are the first few coefficients of the power series for reference, if you want to check this. Columns 1 and 2 are for the b^x-1 resp. log(1+x)/log(b) expressions, then column 3 is for asump_mat and column 4 for asumn_mat.

Gottfried Helms, Kassel