03/24/2010, 01:11 AM

Hi.

Here I'll present some results of my playing around with the Borel summation method to extend tetration to fractional, real, and complex heights.

Borel sums

More information:

http://en.wikipedia.org/wiki/Borel_summation

http://mathworld.wolfram.com/Borel-RegularizedSum.html

http://arxiv.org/pdf/hep-th/9206074

(the third also discusses generalizations of the method)

Take

$$f(x) = \sum_{k=0}^{\infty} a_k x^k,$$

a formal power series. The Borel sum is given by

$$f_B(x) = \int_0^{\infty} e^{-t} \sum_{k=0}^{\infty} \frac{a_k (xt)^k}{k!}\, dt,$$

provided the sum inside the integral can be continued analytically to all $t > 0$ and it decays fast enough there to yield convergence of the integral.

(Caveat here: Wikipedia's convention differs slightly from the one used by MathWorld and the third paper; the latter convention will be used here as well.)

It is possible to set up a higher-order method of order $m$ via repeated application, yielding

$$f_{B_m}(x) = \int_0^{\infty} \cdots \int_0^{\infty} e^{-(t_1 + \cdots + t_m)} \sum_{k=0}^{\infty} \frac{a_k (x\, t_1 \cdots t_m)^k}{(k!)^m}\, dt_1 \cdots dt_m.$$

Application to tetration

We take Faulhaber's formula for the continuum sum:

$$\sum_{n=0}^{x-1} f(n) = \sum_{k=0}^{\infty} \left( \sum_{n=\max(k,1)}^{\infty} \frac{a_{n-1}}{n} \binom{n}{k} B_{n-k} \right) x^k,$$

where

$$f(x) = \sum_{n=0}^{\infty} a_n x^n,$$

the $B_n$ are the Bernoulli numbers (with $B_1 = -\tfrac{1}{2}$), and $f$ is an analytic function whose continuum sum we desire. The problem with this formula is that it does not converge for every analytic function $f$: it only converges if $f$ is entire and of exponential type less than $2\pi$.
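As a sanity check on the convergent case, here's a quick standalone Python sketch (separate from the PARI/GP program below; the helper names are mine) that builds the continuum sum of the polynomial $f(n) = n^2$ from Faulhaber's formula and recovers the classical $\sum_{n=0}^{x-1} n^2 = \frac{x(x-1)(2x-1)}{6}$:

```python
from fractions import Fraction
from math import comb

def bernoulli_numbers(m):
    """First m+1 Bernoulli numbers B_0..B_m, with the B_1 = -1/2 convention."""
    B = [Fraction(0)] * (m + 1)
    B[0] = Fraction(1)
    for n in range(1, m + 1):
        # Recurrence: sum_{j=0}^{n} C(n+1, j) B_j = 0
        B[n] = -Fraction(1, n + 1) * sum(comb(n + 1, j) * B[j] for j in range(n))
    return B

def continuum_sum(a):
    """Coefficients c_k of sum_{n=0}^{x-1} f(n) = sum_k c_k x^k,
    for f(x) = sum_n a[n] x^n, via Faulhaber's formula."""
    N = len(a)  # the continuum sum of a degree-(N-1) polynomial has degree N
    B = bernoulli_numbers(N)
    c = [Fraction(0)] * (N + 1)
    for n in range(1, N + 1):  # the nth term uses the coefficient a_{n-1}
        for k in range(n + 1):
            c[k] += Fraction(a[n - 1], n) * comb(n, k) * B[n - k]
    return c

# f(n) = n^2  ->  continuum sum should be x(x-1)(2x-1)/6
c = continuum_sum([0, 0, 1])
poly = lambda x: sum(ck * x**k for k, ck in enumerate(c))
print(poly(4))  # 0 + 1 + 4 + 9 = 14
```

For a polynomial the inner sum over $n$ is finite, so no divergence issues arise; the interesting (and problematic) case is when $f$ has infinitely many nonzero Taylor coefficients.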

However, in some cases it may still be possible to interpret the above formula as giving a continuum sum, by employing divergent-summation techniques such as Cesàro summation, Euler summation, Abel summation, Borel summation, Ramanujan summation (not sure about that one, since it would seem to imply using fractional derivatives, which are nasty stuff), etc. Here, though, we'll try it with Borel summation, giving

$$\sum_{n=0}^{x-1} f(n) = \sum_{k=0}^{\infty} x^k \int_0^{\infty} e^{-t} \sum_{n=\max(k,1)}^{\infty} \frac{a_{n-1}}{n} \binom{n}{k} B_{n-k}\, \frac{t^{n-1}}{(n-1)!}\, dt.$$

Now we can use Ansus' continuum sum formula:

$$\operatorname{tet}_b(x) = \operatorname{tet}_b'(0) \int_{-1}^{x} \log(b)^{t}\, b^{\sum_{n=0}^{t-1} \operatorname{tet}_b(n)}\, dt.$$
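For completeness, here is a sketch of where Ansus' formula comes from (assuming the usual normalizations $\operatorname{tet}(0) = 1$ and $\operatorname{tet}(-1) = 0$):

```latex
\begin{align*}
% Differentiate the functional equation tet(x+1) = b^{tet(x)}:
\operatorname{tet}'(x+1) &= \log(b)\, b^{\operatorname{tet}(x)}\, \operatorname{tet}'(x) \\
% Telescope from 0 to x, turning the product into a continuum sum in the exponent:
\operatorname{tet}'(x) &= \operatorname{tet}'(0)\, \log(b)^{x}\, b^{\sum_{n=0}^{x-1} \operatorname{tet}(n)} \\
% Integrate from -1, where tet(-1) = 0, to recover tet itself:
\operatorname{tet}(x) &= \operatorname{tet}'(0) \int_{-1}^{x} \log(b)^{t}\, b^{\sum_{n=0}^{t-1} \operatorname{tet}(n)}\, dt
\end{align*}
```

The continuum sum in the exponent is exactly what the Borel-summed Faulhaber formula is needed for.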

Numerical algorithm

We could evaluate the Borel summation numerically via numerical integration; however, this is a very expensive task. Instead, we use the following approach: truncate the integral at a finite upper bound $u$ ("close to real infinity") and integrate term by term, so that each term picks up a lower incomplete gamma function:

$$\int_0^{u} e^{-t} (xt)^k\, dt = x^k\, \gamma(k+1, u).$$

Thus we get the formula

$$f_B(x) \approx \sum_{k=0}^{\infty} \frac{a_k x^k}{k!}\, \gamma(k+1, u),$$

which is much easier to compute with. For higher orders,

$$f_{B_m}(x) \approx \sum_{k=0}^{\infty} \frac{a_k x^k}{(k!)^m}\, \gamma(k+1, u)^m.$$

Thus for Faulhaber's formula, we have

$$\sum_{n=0}^{x-1} f(n) \approx \sum_{k=0}^{\infty} x^k \sum_{n=\max(k,1)}^{N} \frac{a_{n-1}}{n} \binom{n}{k} B_{n-k}\, \frac{\gamma(n, u)^m}{\left((n-1)!\right)^m},$$

where $N$ is the truncation order of the series and $m$ is the order of the Borel summation.

To create a numerical estimate for tetration, we apply the iterated Ansus formula

$$F_{j+1}(x) = C \int_{-1}^{x} \log(b)^{t}\, b^{\sum_{n=0}^{t-1} F_j(n)}\, dt$$

to a Taylor series at 0. The coefficient $C$ is acquired by normalizing after each iteration so that the approximation equals 1 at $x = 0$.

Investigation

To test the method, I implemented the above algorithm in PARI/GP (which has good power series manipulation and arbitrary-precision systems) and ran it. The tests were done with base $b = e$.

Using 28 terms and 38 decimals of precision with first-order Borel summation, I get a residual ($\log_b F(\tfrac{1}{2}) - F(-\tfrac{1}{2})$, where $F$ is the approximation) of modest magnitude. If I bump the terms and decimals up to 48 and $u$ to 24, it seems I need a higher-order Borel summation; using order 2 gives a smaller residual. Taking it to 128 terms and 128 decimals shrinks the residual further. Precision/term requirements seem to get steep fast; I'm not sure whether I can push more digits out of this one, whether I need a higher-order Borel summation, or even whether it "really" does converge (in that case, there are those "generalized" Borel methods mentioned in the third paper, but these are much, much slower numerically).

Experiment suggestion: the best value I got out of this, 1.64635424 for $\operatorname{tet}_e(\tfrac{1}{2})$, looks like the one obtained from Kouznetsov's Cauchy-integral approach, and maybe also from Robbins' tetration (Abel solution). I'd like to have tons of Taylor terms precomputed from that approach (maybe 100 or more) to run through the Borel summation, to see whether the tetration really can be Borel-continuum-summed; however, my current Cauchy-integral code isn't fast enough to compute them all.

With the best approximation, I get the graph shown in the attachment.

Program

The following PARI/GP code implements this numerical approximation technique. Feel free to play around with this.

Code:

base = exp(1);
pdegree = 48;
\ps 49

/* Bernoulli number B_n (B_1 = -1/2 convention) */
bernoulli(n) = if(n==0, 1, (-1)^(n+1) * n * zeta(1 - n));

/* Coefficient of x^k contributed by the nth term of Faulhaber's formula */
faulhterm(n, k, polyn) = polcoeff(polyn, n-1)/n * binomial(n, k) * bernoulli(n-k);

/* Borel summation of the coefficient vector terms. ubound should be "close to real infinity". */
borelsum(terms, ubound, order) = {
  local(maxterms = matsize(terms)[2]);
  return(sum(k=0, maxterms-1, (terms[k+1]*incgamc(k+1, ubound)^order)/((k!)^order)));
}

/* Use Borel summation to build up the continuum sum of a truncated series polyn */
ContSumPoly(polyn, ubound, order) = {
  local(outpolyn = 0);
  local(nterms = poldegree(polyn)+1);
  local(bernterms = vector(nterms));
  for(k=0, poldegree(polyn),
    for(n=1, poldegree(polyn), bernterms[n] = faulhterm(n, k, polyn));
    outpolyn = outpolyn + (borelsum(bernterms, ubound, order)*x^k);
  );
  return(outpolyn - subst(outpolyn, x, 0));
}

/* Compute base^p for a power series p */
exppoly(p) = truncate(base^p);

/* Multiply a power series by the series of log(base)^x */
mullogb(p) = {
  local(multpoly = truncate(log(base)^x)*p, outp = 0);
  for(i=0, pdegree, outp = outp + polcoeff(multpoly, i, x)*x^i);
  return(outp);
}

/* Definite integral from -1 to x of a power series */
defint(p) = { local(integ = intformal(p)); return(integ - subst(integ, x, -1)); }

/* One iteration of Ansus' formula */
iterate(p, ubound, order) = defint(mullogb(exppoly(ContSumPoly(p, ubound, order))));

/* Normalized iteration: rescale so the result equals 1 at x = 0 */
iternorm(p, ubound, order) = { local(pi = iterate(p, ubound, order)); return(pi/polcoeff(pi, 0)); }

/* n iterations on an initial guess */
niters(n, p, ubound, order) = { local(result = p); for(i=1, n, result = iterate(result, ubound, order)); return(result); }

/* n normalized iterations on an initial guess */
nnormiters(n, p, ubound, order) = { local(result = p); for(i=1, n, result = iternorm(result, ubound, order)); return(result); }

/* Evaluate a series approximation at a point */
sumpoly(p, evalpoint) = subst(p, x, evalpoint);

/* Residual of the tetration functional equation */
residual(p) = log(sumpoly(p, 0.5))/log(base) - sumpoly(p, -0.5);

/* Tetration on the critical strip; set tetpoly to the result of nnormiters(...) first */
tetcrit(X) = sumpoly(tetpoly, X);

/* Real tetration, extended from the critical strip by the functional equation */
realtet(X) = {
  if(X < -0.5, return(log(realtet(X+1))/log(base)));
  if(X > 0.5, return(base^realtet(X-1)));
  return(tetcrit(X));
}