# Tetration Forum

Full Version: Road testing Ansus' continuum product formula
You're currently viewing a stripped down version of our content. View the full version with proper formatting.
Pages: 1 2 3 4 5
Hi.

http://math.eretrandre.org/tetrationforu...273&page=3

the following integral+continuum product iterative formula was mentioned for tetration to base e:

So I put together a Pari/GP code to road-test it numerically, to see if maybe it would converge on something. I use the formula

to convert the product to a sum (which also has the nice effect of changing the x+1 to x), and then Faulhaber's formula (expressed using Bernoulli polynomials/coefficients) is used to do the continuous sum on a set of Taylor coefficients. The Bell polynomials can be used to compute the exponential of the series.
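The formula images did not survive this stripped archive; from the description (and the code below), the conversion presumably used is the standard product-to-sum identity for the exponential, together with Faulhaber's formula in Bernoulli-polynomial form:

```latex
\prod_{n=0}^{x} e^{f(n)} \;=\; \exp\!\left(\sum_{n=0}^{x} f(n)\right),
\qquad\qquad
\sum_{n=0}^{x-1} n^{k} \;=\; \frac{B_{k+1}(x) - B_{k+1}}{k+1}.
```

Applied termwise to the Taylor coefficients of f, the second identity turns the continuum sum into another power series in x.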

When I run the code, I notice something strange. For a small number of terms in the test array (I think around 15), it seems to converge and gives around 4-5 digits of accuracy. For a larger number (say 28 or more), however, it starts converging but then blows up... Not sure if this is a problem with the code or what; that's what I want to find out, as ultimately I'd like to be able to explore the general formula for other bases, including the really intriguing case of ...

Code:
/* Continuous sum transformation. */
bernoulli(n) = if(n==0, 1, (-1)^(n+1) * n * zeta(1 - n));

contsumcoef(series, k) = if(k==0, 0, k! * sum(n=1, matsize(series)[2], series[n]/(n!) * binomial(n, k) * bernoulli(n-k)));

sumoperator(series) =
{
    local(v = series);
    for(i=1, matsize(series)[2], v[i] = contsumcoef(series, i-1));
    return(v);
}

tsum(series, x) = sum(k=0, matsize(series)[2]-1, series[k+1]/k! * x^k);

/* Compute exponential of a Taylor series via complete Bell polynomials. */
bellcomplete(coefs) =
{
    local(n = matsize(coefs)[2]);
    local(M = matrix(n, n));
    for(i=1, n,
        for(j=1, n,
            M[i, j] = 0;
            if(j-i+1 > 0, M[i, j] = binomial(n-i, j-i)*coefs[j-i+1]);
            if(j-i+1 == 0, M[i, j] = -1);
        );
    );
    return(matdet(M));
}

expoperator(series) =
{
    local(v = series);
    for(i=1, matsize(series)[2]-1,
        v[i+1] = bellcomplete(vector(i, n, series[n+1])));
    v[1] = 1;
    return(v*exp(series[1]));
}

/* Compute integral of a Taylor series with lower bound a. */
intoperator(series, a) =
{
    local(v = series);
    for(i=1, matsize(series)[2]-1,
        v[i+1] = series[i]);
    v[1] = -sum(i=2, matsize(series)[2], v[i]*(a^(i-1))/((i-1)!));
    return(v);
}

/* Tetration iteration */
tetiter(f0, series) = f0*intoperator(expoperator(sumoperator(series)), -1);

To use the code, run tetiter several times over a vector containing coefficients for an initial guess (all zeroes will work), and then use tsum to evaluate it at a given x-value. f0 should be set equal to a numerical value for the derivative at 0 (see the original thread for the value I used).
(09/15/2009, 06:32 PM)Ansus Wrote: [ -> ]How many iterations did you do?

Also note that the coefficient is not exactly f'(0), because the indefinite product implies an indefinite multiplier, and the indefinite sum implies an indefinite term. So the constant coefficient can be found only after completing all iterations. If you do the iterations and find that the result grows with each iteration, simply normalize the last result according to the requirement f(0)=1.

Hmm. Normalizing it after each iteration seems to stop it from growing, but if I use, say, 28 terms, then after around 6 iterations it still breaks away from converging toward the tetration.

Also, how did you come up with that initial value for f'(0)? Did you take it from someone else's proposed extension of tetration (Kouznetsov? the Cauchy integral?) And does this formula also generalize and work for other bases, incl. ? Can you graph the (probably complex-valued, so 2 graphs) tetration of, say, b = 0.04?
However that doesn't really seem to say how you obtain that numerical value for the derivative at 0, which is what I was asking about. Where does that come from? I.e. how did you come up with that number 1.0917... for base e? What would it be for base 2? For base 0.04 (should be some complex number)?
(09/15/2009, 10:47 PM)Ansus Wrote: [ -> ]This is not the numerical value of the derivative at 0, though, because I used the indefinite formula. If you want the constant to be the derivative at 0 you should use the definite formula,

i.e. slightly modify your sum operator.

Well, I'm trying out this formula (though I'm still not quite sure how the "indefinite" sum yields the derivative constant...). Here's my new code for it (Pari/GP again):

Code:
/* Bernoulli number B_n */
bernoulli(n) = if(n==0, 1, (-1)^(n+1) * n * zeta(1 - n));

/* Bernoulli polynomial B_n(x) */
bernpoly(n) = sum(k=0, n, binomial(n, k)*bernoulli(k)*x^(n-k));

/* Continuum sum sum_{n=1...inf} f^(n-1)(0)/(n!) * (B_n(x) - B_n(0)),
 * where f^(n-1) = (n-1)th derivative of f */
rhssumpoly(p) = sum(n=1, poldegree(p)+1, polcoeff(p, n-1)*(bernpoly(n) - subst(bernpoly(n), x, 0))/n);

/* Compute exp of a power series */
exppoly(p) = truncate(exp(p));

/* Compute definite integral from -1 to x of a truncated power series polynomial */
defint(p) =
{
    local(integ = intformal(p));
    return(integ - subst(integ, x, -1));
}

/* Do one iteration of the formula */
iterate(p) = 1.0917673512583217*defint(exppoly(rhssumpoly(p)));

/* Normalized iteration */
iternorm(p) =
{
    local(pi = iterate(p));
    return(pi/polcoeff(pi, 0));
}

/* Calculation loop. */
numiter = 32;
\\ arithmetic precision (decimals):
\p 128
\\ series term limit:
\ps 32
p = 1;
for(i=1, numiter, p = iternorm(p));  /* change iternorm to iterate for unnormalized iteration */

/* Now you can run subst(p, x, y) to evaluate the series at value y,
 * or polcoeff(p, n) to get the nth coefficient */
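As an independent cross-check, here is a sketch of the same iteration in plain Python (my own reimplementation, not code from the thread): it mirrors the operators above with ordinary floats, using the f'(0) value quoted above. Series are lists of coefficients a_k of x^k, truncated at 16 terms (the size reported to converge).

```python
from fractions import Fraction
from math import comb, exp

N = 16  # number of series terms (the size reported to converge in the thread)

def bernoulli_numbers(n):
    """B_0..B_n (B_1 = -1/2 convention) via the standard recurrence."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        B.append(-Fraction(1, m + 1) * sum(comb(m + 1, k) * B[k] for k in range(m)))
    return B

BERN = [float(b) for b in bernoulli_numbers(N)]

def contsum(a):
    """Continuum sum S(x) = sum_{n=0}^{x-1} f(n) for f with coefficients a,
    via sum_{n>=1} a_{n-1} (B_n(x) - B_n)/n (Faulhaber in Bernoulli form)."""
    s = [0.0] * len(a)
    for k in range(1, len(a)):
        s[k] = sum(a[n - 1] * comb(n, k) * BERN[n - k] / n
                   for n in range(k, len(a) + 1))
    return s

def exp_series(a):
    """exp of a truncated power series: E' = a'E gives a coefficient recurrence."""
    e = [0.0] * len(a)
    e[0] = exp(a[0])
    for k in range(len(a) - 1):
        e[k + 1] = sum((j + 1) * a[j + 1] * e[k - j] for j in range(k + 1)) / (k + 1)
    return e

def defint(a):
    """Definite integral from -1 to x, as a truncated series."""
    v = [0.0] * len(a)
    for k in range(len(a) - 1):
        v[k + 1] = a[k] / (k + 1)
    v[0] = -sum(v[k] * (-1.0) ** k for k in range(1, len(a)))
    return v

def evaluate(a, x):
    return sum(c * x ** k for k, c in enumerate(a))

C = 1.0917673512583217  # the f'(0) value quoted in the thread
p = [1.0] + [0.0] * (N - 1)
for _ in range(32):
    p = [C * c for c in defint(exp_series(contsum(p)))]
    p = [c / p[0] for c in p]  # normalize so that f(0) = 1
print(evaluate(p, 1.0))  # should approach e = 2.71828... if the iteration converges
```

With 16 terms this reproduces the behavior described below: the normalized iteration settles down, and f(1) lands close to e.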

If I use 16 terms of the series it seems to converge to a truncated series giving 5-6 decimals of natural tetration. If I use 32 (as the above is set to), it collapses to a truncated series with -1 in the highest-order coefficient and 1 in the lowest-order, with all the rest zero. Why is that? Why can't I get more accuracy? Is it due to a lack of precision in the derivative constant (the 1.0917... part)? (Yet *dropping* the accuracy of that part to, say, just 1.091 doesn't make it diverge with 16 terms... so I'm mystified by this whole thing.) If so, how can I get that to higher precision? If not, what's going on?

Also, you mentioned the "indefinite" version of the formula, which doesn't require knowing in advance (or lets you obtain) the value of f'(0), but I'm still not sure how to make that work, as it introduces various spurious constants (constant of summation/product, and constant of integration too?). Could you write out the indefinite formula in a form with f(x) on the left and the operations to be performed on the right? I.e., like how the definite formula (the one I quoted out of your post) can be written as (for base e):

?
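The formula image is missing from this archived page. Judging from the Pari/GP code above (iterate multiplies the constant 1.0917... by the definite integral from -1 of the exponential of the continuum sum), the definite formula for base e presumably reads:

```latex
f(x) \;=\; f'(0) \int_{-1}^{x} \exp\!\left(\sum_{n=0}^{t-1} f(n)\right) dt,
\qquad f(0) = 1, \quad f'(0) \approx 1.0917673512583217,
```

where the inner sum is the continuum sum evaluated via Faulhaber's formula.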
(09/16/2009, 09:05 AM)Ansus Wrote: [ -> ]
Quote:If I use 32 (as the above is set to), it collapses to a truncated series with -1 in the highest-order coefficient and 1 in the lowest-order with all the rest zero.

This is surely something strange and does not depend on which formula you use. You definitely should not get "all zeros".

Also note that B_n(0) = B_n, i.e. the Bernoulli number.

So what could be happening here? Have you actually tested the formula yourself before (with your own code), iterating on polynomial approximations/truncated power series?
(09/16/2009, 05:25 PM)Ansus Wrote: [ -> ]Yes, but only three iterations. You've already seen the graphics. The second iteration I simply found symbolically and the third I found numerically.

So you've never tried a numeric procedure up to lots and lots of iterations, then? I think, however, that I may have figured out what the problem is.

First off, we start with the formula

sum_{n=0}^{x-1} f(n) = sum_{n=1}^{inf} (f^(n-1)(0)/n!) (B_n(x) - B_n).

We can write

f^(n-1)(0)/(n-1)! = a_{n-1},

the (n-1)th Taylor coefficient of f at 0, and

B_n(x) = sum_{k=0}^{n} C(n,k) B_{n-k} x^k.

So the rhs of the formula becomes

sum_{n=1}^{inf} (a_{n-1}/n) sum_{k=1}^{n} C(n,k) B_{n-k} x^k

(the k=0 term of B_n(x) cancels against -B_n). We can reverse the summation signs to get this in the form of a power series in x^k:

sum_{k=1}^{inf} ( sum_{n=k}^{inf} (a_{n-1}/n) C(n,k) B_{n-k} ) x^k.
So each coefficient of the power series for the continuum sum is itself a series, which may or may not converge. It turns out that for some functions the coefficients for the continuum sum do not converge even if the original series has a nonzero radius of convergence. I suspect that this will not work for any function that is not entire, i.e. whose power series has finite radius of convergence, and this includes tetration (note the presence of a singularity at z = -2). If you try the above sum formula with the coefficients for log(1 + x), you'll see it won't work, despite the fact that the continuum sum for log(1 + x) exists and has a power series expansion at 0.
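To illustrate the log(1 + x) case (my own quick check, not from the thread): for the x^1 coefficient the inner-sum terms are a_{n-1} C(n,1) B_{n-1}/n = a_{n-1} B_{n-1}, and since the even Bernoulli numbers grow factorially, these terms blow up instead of decaying:

```python
from fractions import Fraction
from math import comb

def bernoulli(N):
    """B_0..B_N (B_1 = -1/2 convention) via the standard recurrence."""
    B = [Fraction(1)]
    for n in range(1, N + 1):
        B.append(-Fraction(1, n + 1) * sum(comb(n + 1, k) * B[k] for k in range(n)))
    return B

B = bernoulli(40)
# Taylor coefficients of log(1+x) at 0: a_0 = 0, a_m = (-1)^(m+1)/m
a = [Fraction(0)] + [Fraction((-1) ** (m + 1), m) for m in range(1, 41)]

# Terms of the inner sum for the x^1 coefficient: a_{n-1} * C(n,1) * B_{n-1} / n
terms = [a[n - 1] * comb(n, 1) * B[n - 1] / n for n in range(1, 41)]

# |B_{2m}| ~ 2 (2m)!/(2 pi)^(2m), so the late terms dwarf the early ones:
print(float(abs(terms[10])), float(abs(terms[30])))
```

Exact rational arithmetic is used so the blow-up cannot be blamed on rounding.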

I'd guess that to make it work one would need to employ some sort of divergent summation method to extend the range of applicability of the formula (i.e. to sum up when the coefficients decay slowly), or else find an alternate method of computing solutions to the continuum-sum equation (any ideas? How could Mma's NDSolve have done it???).
(09/17/2009, 12:34 AM)mike3 Wrote: [ -> ]We can reverse the summation signs to get this in the form of a power series in x^k:

So each coefficient of the power series for the continuum sum is itself a series, which may or may not converge. It turns out that for some functions, the coefficients for the continuum sum do not converge even if there the other series has nonzero radius of convergence. I suspect, that for any function that is not entire, i.e. whose power series has finite radius of convergence, and this includes tetration (note the presence of a singularity at z = -2), this will not work. If you try the above sum formula with the coefficients for log(1 + x), you'll see it won't work, despite the fact that the continuum sum for log(1 + x) exists and has a power series expansion at 0.

Hmm, unless I've misunderstood something completely, the inner sums should be convergent for a constant f°k(0): the factorially scaled Bernoulli numbers sum to something like 1/(e-1) (see the exponential generating function x/(e^x - 1)). And if we have a base 1 < b <= exp(exp(-1)), the progression of consecutive f°k(0) approaches zero because the values approach a limit, so the inner sums should be convergent.
I've computed the inner sums with 64 terms, without needing any divergent summation. Please crosscheck the first 8 inner sums; maybe I have an index error (because I simply used my hard-wired standard matrices).
Code:
[0.615466537512, 0.920328004678, 0.395774629333, 0.110989289616, 0.0236364701557, 0.00409053392671, 0.000598419579850, 0.0000759705275454]
Gottfried
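Gottfried's parenthetical about the factorially scaled Bernoulli numbers is easy to verify (my own sketch): the exponential generating function x/(e^x - 1) evaluated at x = 1 gives sum B_n/n! = 1/(e - 1).

```python
from fractions import Fraction
from math import comb, e, factorial

def bernoulli(N):
    """B_0..B_N (B_1 = -1/2 convention) via the standard recurrence."""
    B = [Fraction(1)]
    for n in range(1, N + 1):
        B.append(-Fraction(1, n + 1) * sum(comb(n + 1, k) * B[k] for k in range(n)))
    return B

# Partial sum of B_n/n!; the tail decays like 2/(2 pi)^n, so 30 terms suffice.
s = sum(float(b) / factorial(n) for n, b in enumerate(bernoulli(30)))
print(s, 1 / (e - 1))  # the two values agree to full float precision
```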
(09/17/2009, 12:34 AM)mike3 Wrote: [ -> ]We can reverse the summation signs to get this in the form of a power series in x^k:

Yes, I did a similar derivation here and Ansus confirmed the formula here. Can you check whether our derivations coincide? (I did not consider the constant term though.)

Quote:So each coefficient of the power series for the continuum sum is itself a series, which may or may not converge. It turns out that for some functions, the coefficients for the continuum sum do not converge even if there the other series has nonzero radius of convergence.

Good that you raise this topic. I already asked Ansus about it, but it seems he is not so interested in the convergence stuff, or simply has no answers.
It also seems that Markus Mueller defines his fractional sum in a different way. Maybe this is due to exactly those convergence issues.

Quote:I suspect, that for any function that is not entire, i.e. whose power series has finite radius of convergence, and this includes tetration (note the presence of a singularity at z = -2), this will not work. If you try the above sum formula with the coefficients for log(1 + x), you'll see it won't work, despite the fact that the continuum sum for log(1 + x) exists and has a power series expansion at 0.

Would be interesting to have some theorem on when the coefficients of the extended/continuum sum converge. Can you prove your claim about the sum of entire functions?
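A rough heuristic in that direction (my own sketch, not from the thread): with the asymptotic |B_{2m}| ~ 2 (2m)!/(2 pi)^{2m}, the terms of the inner sum for the x^k coefficient behave like

```latex
\frac{a_{n-1}}{n}\binom{n}{k} B_{n-k}
\;\sim\; \frac{2}{k!}\,\frac{a_{n-1}\,(n-1)!}{(2\pi)^{\,n-k}},
```

so the inner sums converge absolutely roughly when |a_n| n! grows slower than (2 pi)^n, i.e. when f is entire of exponential type less than 2 pi. For any finite radius of convergence r one has |a_n| >= r^{-n} infinitely often, and the terms blow up, which would support mike3's claim.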
(09/17/2009, 07:43 AM)Gottfried Wrote: [ -> ]If we have a base 1<b<=exp(exp(-1)) the progression of consecutive f°k(0) approaches zero, because the values approach a limit, so the inner sums should be convergent.
Not sure whether I understand you. Where does the iteration of f occur in the inner sum?
At the moment I don't have much time to dedicate to this topic.
But for analyzing the convergence of the infinite sums, the asymptotic formula for the Bernoulli numbers (found on Wikipedia) may be useful:
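The formula image was stripped from the archive; the Wikipedia asymptotic presumably meant is

```latex
|B_{2n}| \;\sim\; 4\sqrt{\pi n}\left(\frac{n}{\pi e}\right)^{2n}
\qquad (n \to \infty),
```

equivalently |B_{2n}| ~ 2 (2n)!/(2 pi)^{2n}.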