Road testing Ansus' continuum product formula

Gottfried
Ultimate Fellow
Posts: 784   Threads: 121   Joined: Aug 2007
09/17/2009, 09:49 AM (This post was last modified: 09/17/2009, 09:50 AM by Gottfried.)

(09/17/2009, 09:08 AM) bo198214 Wrote:
    (09/17/2009, 07:43 AM) Gottfried Wrote: If we have a base 1

The estimate for B_n/n! is about 2/(2*pi)^n, which is approached early with very good accuracy. Here is the list of the ratios B_n/n! * (2*pi)^n/2 (only nonzero entries listed):

Code:
     0.500000000000
     1.64493406685
    -1.08232323371
     1.01734306198
    -1.00407735620
     1.00099457513
    -1.00024608655
     1.00006124814
    -1.00001528226
     1.00000381729
    -1.00000095396
     1.00000023845
    -1.00000005961
     1.00000001490
    -1.00000000373
     1.00000000093
    -1.00000000023
     1.00000000006
    -1.00000000001
     1.00000000000
    -1.00000000000
     1.00000000000

This determines the range of convergence for some function if it is applied to the Bernoulli numbers in the form of the inner sum, depending on its own Taylor/Maclaurin coefficients and its parameter x. If, for instance, the function exp(x) is applied, then the Maclaurin coefficients are all 1. These are cofactored with the Bernoulli numbers divided by factorials: thus the convergence radius for this function in this application is |x| < 2*Pi. If for another function the coefficients grow at some geometric rate, say q, the convergence radius for that function in this application is |x| < 2*Pi/q. If for another function the Maclaurin coefficients grow at some hypergeometric rate, then the convergence radius for that series in this form of application is zero.
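A quick numerical check of that decay rate (a sketch of mine, not part of the original thread): the normalized ratio B_n/n! * (2*pi)^n / 2 equals (-1)^(n/2+1) * zeta(n) for even n >= 2, so its magnitude tends to 1, matching the table of values above and giving the convergence radius 2*pi. The function names here are mine; only the standard library is used.

```python
# Verify |B_n|/n! ~ 2/(2*pi)^n using exact Bernoulli numbers.
from fractions import Fraction
from math import comb, factorial, pi

def bernoulli_numbers(nmax):
    """Exact Bernoulli numbers B_0..B_nmax via the standard recurrence
    sum_{k=0}^{m} C(m+1,k) B_k = 0, so B_1 = -1/2."""
    B = [Fraction(0)] * (nmax + 1)
    B[0] = Fraction(1)
    for m in range(1, nmax + 1):
        B[m] = -Fraction(1, m + 1) * sum(comb(m + 1, k) * B[k] for k in range(m))
    return B

def normalized_ratio(n, B):
    """B_n/n! scaled by (2*pi)^n / 2; equals (-1)^(n/2+1)*zeta(n) for even n,
    so its magnitude tends to 1 as n grows."""
    return float(B[n]) / factorial(n) * (2 * pi) ** n / 2

B = bernoulli_numbers(20)
for n in range(2, 21, 2):
    print(n, normalized_ratio(n, B))  # 1.6449..., -1.0823..., ..., -1.0000009...
```

The printed values reproduce the entries of the table above (1.64493406685, -1.08232323371, ...), confirming that the coefficients of the inner sum decay like 1/(2*pi)^n.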
Additional remark: for the power series of fractional iterates under regular iteration I found in 2007 and 2008 [*1,*2] that the growth rate of the Maclaurin coefficients, beginning at some finite index k, is roughly binomial(n-k,2) (I hope I recall that right), so asymptotically for n -> inf about n(n-1)/2 = n^2/2 - n/2. Their convergence radius with respect to the parameter x is therefore also zero (as originally stated by I. N. Baker), and the series is not even Borel-summable, since this grows faster than the factorials.

[update] Well, on a second read of this I think one more remark is useful. If we see the above double sum as a two-dimensional table (or matrix), where the inner sum runs over a column, then my discussion here concerns only one column (namely the first one, while the original problem concerns the row sums and then the column sum of the row sums). For the convergence of the whole sum (including the outer sum over all columns) we'll have to extend this consideration about the range of convergence. [/update]

Gottfried

[*1] see coefficients fractional height 1
[*2] see also coefficients fractional height 2

Gottfried Helms, Kassel

mike3
Long Time Fellow
Posts: 368   Threads: 44   Joined: Sep 2009
09/18/2009, 12:01 AM

(09/17/2009, 11:05 PM) Ansus Wrote: The fact is that during the calculation I never applied the sum to a function with a limited radius of convergence: at the first iteration, to the function x+1; at the second iteration, to erfi(x). Both have infinite radius of convergence. The vertical asymptote emerges only after an infinite number of iterations; at any finite iteration the approximating function has no asymptote.

So you were iterating it symbolically, in which case power-series considerations would not apply. But I'm sure at some point the symbolic iteration would have no closed form (it would run out of special functions), and so could not continue past that point. So to use the formula for more accuracy and at all sorts of bases (incl.
the fabled b = 0.04 and other bases between 0 and ) would require an extension of continuum sum that could work on something like a power series or similar.

mike3
Long Time Fellow
Posts: 368   Threads: 44   Joined: Sep 2009
09/18/2009, 07:26 AM (This post was last modified: 09/18/2009, 07:26 AM by mike3.)

So then, what sort of method could be used to do high levels of iteration? Obviously it won't work when power series are the thing being iterated upon, at least with conventional summation, because one is trying to go toward a series with a limited convergence radius, and the Bernoulli sum chokes on that due to the growth rate of the coefficients. (I'm not sure about "divergent summation" techniques; things like Euler summation introduce extra parameters whose values I'm not sure how to obtain. A good testbed might be the coefficients given at http://en.citizendium.org/wiki/Tetration...on_at_zero for the Taylor expansion of the tetrational function to base e at z = 0, recovered via the Cauchy integral.)

mike3
Long Time Fellow
Posts: 368   Threads: 44   Joined: Sep 2009
09/19/2009, 12:37 AM

Well, I've put together something that uses the same Bernoulli formula, but with a divergent-summation technique thrown in, to see if it could be made to converge on something. I had a thread on the sci.math newsgroup talking about continuum sums (as it is about more general math questions than just tetration) and whether divergent summation could be used to extend the range of the Bernoulli formula: http://groups.google.com/group/sci.math/...ee12?hl=en

Using the methods I was told about there, I put together the PARI/GP code below. For base e and parameters k = 1.4, h = 0.3, I got approximately 1.646354234 (truncated; series evaluation - residual = 1.6463542340349813076692026966465493046, series evaluation + residual = 1.6463542341294601977391298318188518176). How does that agree with other methods of tetration? Choosing all the parameters (incl.
number of terms in the series) to get good accuracy seems really tricky, especially for bases less than 1, where complex numbers are used (I haven't yet figured out the right settings for the fabled range, namely b = 0.04). Any ideas on that? Can you squeeze more accuracy out (how about trying b = sqrt(2) and seeing whether it can be gotten to agree with the regular iteration to, say, 24 places)?

Code:
base = exp(1);  /* base of tetration */
pdegree = 65;   /* number of terms to use in the series, minus 1 */
/* series precision -- should equal pdegree */
\ps 65
dim = pdegree+2;

DR = matrix(dim, dim, r, c, if(c <= r, 1, 0));

PkPow(k,h) = matrix(dim, dim, r, c,
                 if(r >= c,
                      binomial(r-1,c-1)*h^(r-c) * ((r-1)!/(c-1)!)^(k-1)
                 ));

rownorm(A) = {
       local(R=A);
       for(r=1, matsize(A)[1],
           R[r,] = A[r,]/sum(c=1, matsize(A)[2], A[r,c]));
       return(R);
}

PkPowSum(k, h) = rownorm(PkPow(k, h)) * DR;

/* Bernoulli number B_n */
bernoulli(n) = if(n==0, 1, (-1)^(n+1) * n * zeta(1 - n));

/* Fill array with sum terms for a coefficient b_k. */
MakeBernTermArray(k, p) = {
                 local(nmax = dim);
                 local(v = vector(nmax));
                 for(n=1, nmax-1, v[n+1] = polcoeff(p, n-1)/n*binomial(n,k)*bernoulli(n-k));
                 return(v);
}
MakeBernTermColVector(k, p) = mattranspose(MakeBernTermArray(k, p));

/* Divergent-sum based continuum sum */
ContSumPoly(p, k, h) = {
            local(p2 = 0);
            local(PPS = PkPowSum(k, h));
            for(j=0, dim-1, p2 = p2 + ((PPS*MakeBernTermColVector(j, p))[dim]*x^j));
            return(p2 - subst(p2, x, 0));
}

/* Compute exp (base^.) of a power series */
exppoly(p) = truncate(base^p);

/* Multiply by log(base)^x */
mullogb(p) = { local(multpoly = truncate(log(base)^x)*p, outp = 0);
               for(i=0, pdegree, outp = outp + polcoeff(multpoly,i,x)*x^i);
               return(outp);
}

/* Compute definite integral from -1 to x of a power series */
defint(p) = { local(integ=intformal(p)); return(integ - subst(integ, x, -1)); }

/* Do one iteration of the formula */
iterate(p, k, h) = defint(mullogb(exppoly(ContSumPoly(p, k, h))));

/* Normalized iteration */
iternorm(p, k, h) = { local(pi=iterate(p, k, h)); return(pi/polcoeff(pi, 0)); }

/* N iterations on an initial guess */
niters(n, p, k, h) = { local(result=p); for(i=1, n, result=iterate(result,k,h)); return(result); }

/* N normalized iterations on an initial guess */
nnormiters(n, p, k, h) = { local(result=p); for(i=1, n, result=iternorm(result,k,h)); return(result); }

/* Evaluate polynomial */
sumpoly(p, evalpoint) = subst(p, x, evalpoint);

/* Compute residual of tetration */
residual(p) = log(sumpoly(p, 0.5))/log(base) - sumpoly(p, -0.5);

/* Real tetration */
tetcrit(X) = sumpoly(tetpoly, X);
realtet(X) = {
       if(X < -0.5, return(log(realtet(X+1))/log(base)));
       if(X > 0.5, return(base^realtet(X-1)));
       return(tetcrit(X));
}

/* To use: run tetpoly = nnormiters(n, p, k, h). For the test run I used
 * k = 1.4, h = 0.3, n = 32 iterations, and initial guess p = 1.
 * residual(tetpoly) gives the residual (how well the function solves
 * F(x+1) = exp(F(x))). Then realtet(X) will give the approximation of
 * the tetration ^X b using the computed series. */

bo198214
Administrator
Posts: 1,391   Threads: 90   Joined: Aug 2007
09/21/2009, 02:29 PM (This post was last modified: 09/22/2009, 02:29 AM by bo198214.)

I could never imagine that this problem had not already been considered by mathematicians. Indeed, I found a first reference showing that there is a whole theory about the topic, by Nörlund. Look here into some free pages of the book The Calculus of Finite Differences by Louis Melville Milne-Thomson (2000). EDIT: Actually I see that the book was already published in 1933. You can download a copy here.
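For polynomials, the Bernoulli-number continuum sum discussed throughout this thread reduces to Faulhaber's formula: if p(x) = sum_m a_m x^m, then S(x) = sum_{k=0}^{x-1} p(k) = sum_m a_m (B_{m+1}(x) - B_{m+1})/(m+1). Here is a minimal sketch of that convergent polynomial case (my own Python, exact rationals; it does not include the PkPow/rownorm divergent-summation regularizer from the PARI/GP code):

```python
# Continuum sum of a polynomial via Faulhaber's formula:
# S(x) = sum_{k=0}^{x-1} p(k) = sum_m a_m * (B_{m+1}(x) - B_{m+1}) / (m+1).
from fractions import Fraction
from math import comb

def bernoulli_numbers(nmax):
    """Exact Bernoulli numbers (B_1 = -1/2 convention)."""
    B = [Fraction(0)] * (nmax + 1)
    B[0] = Fraction(1)
    for m in range(1, nmax + 1):
        B[m] = -Fraction(1, m + 1) * sum(comb(m + 1, k) * B[k] for k in range(m))
    return B

def continuum_sum(coeffs):
    """Coefficients of S(x), given coefficients [a_0, a_1, ...] of p(x).
    Uses B_{m+1}(x) = sum_j C(m+1,j) B_j x^(m+1-j); the j = m+1 term is
    the constant B_{m+1}, which cancels, so j only runs to m."""
    d = len(coeffs) - 1
    B = bernoulli_numbers(d + 1)
    out = [Fraction(0)] * (d + 2)
    for m, a in enumerate(coeffs):
        for j in range(m + 1):
            out[m + 1 - j] += Fraction(a) * comb(m + 1, j) * B[j] / (m + 1)
    return out

# p(x) = x: the continuum sum is x(x-1)/2, i.e. coefficients [0, -1/2, 1/2]
print(continuum_sum([0, 1]))
```

Applying this coefficient-by-coefficient to a power series is exactly where the convergence limits from Gottfried's analysis kick in, which is why mike3's code wraps the same Bernoulli terms in a divergent-summation scheme.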

