Road testing Ansus' continuum product formula

mike3
Long Time Fellow
Posts: 368  Threads: 44  Joined: Sep 2009

09/13/2009, 10:55 AM

Hi. On the thread http://math.eretrandre.org/tetrationforu...273&page=3 the following integral + continuum product iterative formula was mentioned for tetration to base e:

$f(x)=f'(0)\int_{-1}^x \prod _x f(x+1)\,dx$

So I put together some Pari/GP code to road-test it numerically, to see if it would converge on something. I use the identity

$\prod_x f(x) = \exp\left(\sum_x \log(f(x))\right)$

to convert the product to a sum (which also has the nice effect of changing the x+1 to x), and then Faulhaber's formula (expressed using Bernoulli polynomials/coefficients) is used to perform the continuum sum on a set of Taylor coefficients. The complete Bell polynomials are used to compute the exponential of the series.

When I run the code, I notice something strange. For a small number of terms in the test array (around 15), it seems to converge and gives 4-5 digits of accuracy. For a larger number (say 28 or more), however, it starts converging but then blows up... I'm not sure if this is a problem with the code, or what... That's what I want to find out, as ultimately I'd like to be able to explore the general formula for other bases, including the really intriguing case of $0 < b < e^{-e}$.

Code:
/* Continuous sum transformation. */
bernoulli(n) = if(n==0, 1, (-1)^(n+1) * n * zeta(1 - n));

contsumcoef(series, k) = if(k==0, 0, k!*sum(n=1, matsize(series)[2], series[n]/(n!)*binomial(n, k)*bernoulli(n-k)));

sumoperator(series) = {
    local(v = series);
    for(i=1, matsize(series)[2], v[i] = contsumcoef(series, i-1));
    return(v);
}

tsum(series, x) = sum(k=0, matsize(series)[2]-1, series[k+1]/k! * x^k);

/* Compute exponential of a Taylor series. */
bellcomplete(coefs) = {
    local(n = matsize(coefs)[2]);
    local(M = matrix(n, n));
    for(i=1, n,
        for(j=1, n,
            M[i, j] = 0;
            if(j-i+1 > 0, M[i, j] = binomial(n-i, j-i)*coefs[j-i+1]);
            if(j-i+1 == 0, M[i, j] = -1);
        );
    );
    return(matdet(M));
}

expoperator(series) = {
    local(v = series);
    for(i=1, matsize(series)[2]-1,
        v[i+1] = bellcomplete(vector(i, n, series[n+1])));
    v[1] = 1;
    return(v*exp(series[1]));
}

/* Compute integral of a Taylor series at lower bound a. */
intoperator(series, a) = {
    local(v = series);
    for(i=1, matsize(series)[2]-1,
        v[i+1] = series[i]);
    v[1] = -sum(i=2, matsize(series)[2], v[i]*(a^(i-1))/((i-1)!));
    return(v);
}

/* Tetration iteration */
tetiter(f0, series) = f0*intoperator(expoperator(sumoperator(series)), -1);

To use the code, run tetiter several times over a vector containing the coefficients of an initial guess (all zeroes will work), then use tsum to evaluate the result at a given x-value. f0 should be set to a numerical value for the derivative at 0 (see the original thread for the value I used).

mike3
09/15/2009, 08:01 PM

(09/15/2009, 06:32 PM)Ansus Wrote: How many iterations did you do? Also note that the coefficient is not exactly f'(0), because the indefinite product implies an indefinite multiplier, and the indefinite sum implies an indefinite term. So the constant coefficient can be found only after completing all the iterations. If you do the iterations and find that the result grows with each iteration, simply normalize the last result according to the requirement f(0)=1.

Hmm.
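As a cross-check of the continuum-sum step used in the code above, here is a small illustration in Python rather than Pari/GP (a sketch of the same Faulhaber/Bernoulli idea, not the forum code itself; the function names are mine). It builds Bernoulli numbers from the standard recurrence and uses Faulhaber's formula, $\sum_{k=0}^{x-1} k^p = (B_{p+1}(x) - B_{p+1}(0))/(p+1)$, whose right-hand side stays meaningful for non-integer x:

```python
from fractions import Fraction
from math import comb

def bernoulli_numbers(n):
    """B_0..B_n (with B_1 = -1/2) via the recurrence
    sum_{j=0}^{m} binomial(m+1, j) * B_j = 0."""
    B = [Fraction(0)] * (n + 1)
    B[0] = Fraction(1)
    for m in range(1, n + 1):
        B[m] = -sum(comb(m + 1, j) * B[j] for j in range(m)) / (m + 1)
    return B

def contsum_power(p, x):
    """Continuum sum of k^p, i.e. sum_{k=0}^{x-1} k^p for integer x,
    via Faulhaber: (B_{p+1}(x) - B_{p+1}(0)) / (p+1)."""
    B = bernoulli_numbers(p + 1)
    # Bernoulli polynomial B_{p+1}(x) = sum_j C(p+1, j) B_j x^{p+1-j}
    bp = sum(comb(p + 1, j) * B[j] * Fraction(x) ** (p + 1 - j)
             for j in range(p + 2))
    return (bp - B[p + 1]) / (p + 1)

# For integer x this agrees with the ordinary finite sum:
print(contsum_power(1, 5))   # 0+1+2+3+4
print(contsum_power(2, 5))   # 0+1+4+9+16
# ...but it is also defined for non-integer x, which is the whole point:
print(contsum_power(1, Fraction(1, 2)))
```

The exact rational arithmetic is just to keep the demonstration free of rounding questions; the forum code works with floating-point Taylor coefficients instead.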
Normalizing it after each iteration seems to stop it from growing, but if I use, say, 28 terms, then after around 6 iterations it still breaks away from converging toward the tetration. Also, how did you come up with that initial value for f'(0)? Did you grab it from someone else's proposed extension of tetration (Kouznetsov? Cauchy integral?) And does this formula also generalize and work for other bases, including $0 < b < e^{-e}$? Can you graph the (probably complex-valued, so 2 graphs) tetration of, say, b = 0.04?

mike3
09/15/2009, 10:34 PM (This post was last modified: 09/15/2009, 10:35 PM by mike3.)

However, that doesn't really say how you obtain that numerical value for the derivative at 0, which is what I was asking about. Where does it come from? I.e. how did you come up with the number 1.0917... for base e? What would it be for base 2? For base 0.04 (it should be some complex number)?

mike3
09/16/2009, 04:14 AM (This post was last modified: 09/16/2009, 04:23 AM by mike3.)

(09/15/2009, 10:47 PM)Ansus Wrote: This is not the numerical value of the derivative at 0, though, because I used the indefinite formula. If you want the constant to be the derivative at 0 you should use the definite formula

$ \log_a \frac{f'_a(x)}{f'_a(0)(\ln a)^{x}} =\sum_{n=1}^{\infty} \frac{f^{(n-1)} (0)}{n!} (B_n(x)-B_n(0))$

i.e. slightly modify your sum operator.

Well, I'm trying out this formula (though I'm still not quite sure how the "indefinite" sum yields the derivative constant...). Here's my new code for it (Pari/GP again):

Code:
/* Bernoulli number B_n */
bernoulli(n) = if(n==0, 1, (-1)^(n+1) * n * zeta(1 - n));

/* Bernoulli polynomial B_n(x) */
bernpoly(n) = sum(k=0, n, binomial(n, k)*bernoulli(k)*x^(n-k));

/* Continuum sum sum_{n=1...inf} f^(n-1)(0)/n! * (B_n(x) - B_n(0)),
   where f^(n-1) = (n-1)th derivative of f */
rhssumpoly(p) = sum(n=1, poldegree(p)+1, polcoeff(p, n-1)*(bernpoly(n) - subst(bernpoly(n), x, 0))/n);

/* Compute exp of a power series */
exppoly(p) = truncate(exp(p));

/* Compute the definite integral from -1 to x of a truncated power series */
defint(p) = {
    local(integ = intformal(p));
    return(integ - subst(integ, x, -1));
}

/* Do one iteration of the formula */
iterate(p) = 1.0917673512583217*defint(exppoly(rhssumpoly(p)));

/* Normalized iteration */
iternorm(p) = {
    local(pi = iterate(p));
    return(pi/polcoeff(pi, 0));
}

/* Calculation loop. */
numiter = 32;
\p 128   \\ arithmetic precision (decimals)
\ps 32   \\ series term limit
p = 1;
for(i=1, numiter, p = iternorm(p));  /* change iternorm to iterate for unnormalized iteration */
/* Now you can run subst(p, x, y) to evaluate the series at a value y,
   or polcoeff(p, n) to get the nth coefficient */

If I use 16 terms of the series, it seems to converge to a truncated series giving 5-6 decimals of natural tetration. If I use 32 (as the above is set to), it collapses to a truncated series with -1 in the highest-order coefficient and 1 in the lowest-order, with all the rest zero. Why is that? Why can't I get more accuracy? Is it due to lack of precision in the derivative constant (the 1.0917... part)? (Yet *dropping* the accuracy of that part to, say, just 1.091 doesn't make it diverge with 16 terms... so I'm mystified by this whole thing.) If so, how can I get it to higher precision? If not, what's going on?

Also, you mentioned the "indefinite" version of the formula, which doesn't require pre-knowing (or lets you obtain) the value of f'(0), but I'm still not sure how to make that work, as it introduces various spurious constants (constant of summation/product, constant of integration too?). Could you write out the indefinite formula in a form with f(x) on the left and the operations to be performed on the right? I.e. like how the definite formula (the one I quoted from your post) can be written as (for base e):

$f(x) = f'(0) \int_{-1}^{x} \exp\left(\sum_{n=1}^{\infty} \frac{f^{(n-1)} (0)}{n!} (B_n(x)-B_n(0)) \right) dx$

?

mike3
09/16/2009, 04:18 PM

(09/16/2009, 09:05 AM)Ansus Wrote:
Quote: If I use 32 (as the above is set to), it collapses to a truncated series with -1 in the highest-order coefficient and 1 in the lowest-order with all the rest zero.
This is surely something strange and does not depend on which formula you use. You definitely should not get "all zeros". Also note that $B_n(0)=B_n$, i.e. the Bernoulli number.

So what could be happening here? Have you actually tested the formula yourself before (with your own code) with iteration on polynomial approximations/truncated power series?

mike3
09/17/2009, 12:34 AM

(09/16/2009, 05:25 PM)Ansus Wrote: Yes, but only three iterations. You've already seen the graphics. The second iteration I simply found symbolically and the third I found numerically.

So you've never tried a numeric procedure up to lots and lots of iterations, then? I think, however, that I may have figured out what the problem is. First off, we start with the formula:

$ \log_a \frac{f'_a(x)}{f'_a(0)(\ln a)^{x}} =\sum_{n=1}^{\infty} \frac{f^{(n-1)} (0)}{n!} (B_n(x)-B_n(0))$

We can write $B_n(x) = \sum_{k=0}^{n} {n \choose k} B_{n - k} x^k$, and so $B_n(x) - B_n(0) = \sum_{k=1}^{n} {n \choose k} B_{n - k} x^k$.
So the RHS of the formula becomes

\begin{align} \sum_{n=1}^{\infty} \frac{f^{(n-1)} (0)}{n!} (B_n(x)-B_n(0)) &= \sum_{n=1}^{\infty} \frac{f^{(n-1)}(0)}{n!} \left(\sum_{k=1}^{n} {n \choose k} B_{n - k} x^k\right)\\ &=\sum_{n=1}^{\infty} \sum_{k=1}^{n} \frac{f^{(n-1)}(0)}{n!} {n \choose k} B_{n - k} x^k \end{align}

We can exchange the order of summation to get this in the form of a power series in x:

$ \sum_{n=1}^{\infty} \sum_{k=1}^{n} \frac{f^{(n-1)}(0)}{n!} {n \choose k} B_{n - k} x^k = \sum_{k=1}^{\infty} \left(\sum_{n=k}^{\infty} \frac{f^{(n-1)}(0)}{n!} {n \choose k} B_{n - k}\right) x^k$

So each coefficient of the power series for the continuum sum is itself an infinite series, which may or may not converge. It turns out that for some functions the coefficient series do not converge, even though the original power series has a nonzero radius of convergence. I suspect that this will not work for any function that is not entire, i.e. whose power series has a finite radius of convergence, and this includes tetration (note the presence of a singularity at z = -2). If you try the above sum formula with the coefficients of log(1 + x), you'll see it fails, despite the fact that the continuum sum of log(1 + x) exists and has a power series expansion at 0. I'd guess that to make it work one would need to employ some sort of divergent summation method to extend the range of applicability of the formula (i.e. to sum up $b_k = \sum_{n=k}^{\infty} \frac{f^{(n-1)}(0)}{n!} {n \choose k} B_{n - k}$ when the coefficients $\frac{f^{(n-1)}(0)}{n!}$ decay slowly), or else find an alternate method of computing solutions to the continuum-sum equation (any ideas? How could Mma's NDSolve have done it???).
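The non-convergence for log(1 + x) can be seen directly by computing a few terms of one of these coefficient series. Here is a sketch in Python (my own code, not from the thread), using exact rational arithmetic so rounding cannot be blamed: for f(x) = log(1 + x) the scaled derivatives are $f^{(n-1)}(0)/n! = (-1)^n/(n(n-1))$ for n >= 2, and since $|B_{2m}|$ grows roughly like $2(2m)!/(2\pi)^{2m}$, the terms of the series for $b_1$ blow up factorially instead of decaying:

```python
from fractions import Fraction
from math import comb

def bernoulli_numbers(n):
    # B_0..B_n (with B_1 = -1/2) via the standard recurrence
    B = [Fraction(0)] * (n + 1)
    B[0] = Fraction(1)
    for m in range(1, n + 1):
        B[m] = -sum(comb(m + 1, j) * B[j] for j in range(m)) / (m + 1)
    return B

NMAX = 60
B = bernoulli_numbers(NMAX)

def coef(n):
    # f^{(n-1)}(0)/n! for f(x) = log(1 + x); the n = 1 term is f(0)/1! = 0
    return Fraction(0) if n == 1 else Fraction((-1) ** n, n * (n - 1))

# terms of the series b_1 = sum_n coef(n) * C(n, 1) * B_{n-1}
k = 1
terms = [abs(coef(n) * comb(n, k) * B[n - k]) for n in range(1, NMAX + 1)]

# the magnitudes explode rather than decay, so b_1 diverges
print(float(terms[20]), float(terms[40]), float(terms[58]))
```

The same behavior shows up for the other b_k; only when the Taylor coefficients of f decay factorially (an entire function of sufficiently slow growth) can the factorial growth of the Bernoulli numbers be beaten.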
Gottfried
Ultimate Fellow
Posts: 767  Threads: 119  Joined: Aug 2007

09/17/2009, 07:43 AM

(09/17/2009, 12:34 AM)mike3 Wrote: So each coefficient of the power series for the continuum sum is itself a series, which may or may not converge. [...] I'd guess that to make it work one would need to employ some sort of divergent summation method [...] or else find an alternate method of computing solutions to the continuum-sum equation (any ideas? How could Mma's NDSolve have done it???).

Hmm, if I didn't understand something completely wrong, the inner sums should be convergent for a constant f(). (The factorially scaled Bernoulli numbers sum up to something like exp(1)/(exp(1)-1); see the exponential generating function.) If we have a base 1
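For what it's worth, Gottfried's parenthetical claim checks out numerically. With the usual convention B_1 = -1/2 the generating function is $t/(e^t-1) = \sum_n B_n t^n/n!$, so at t = 1 the factorially scaled Bernoulli numbers sum to 1/(e-1); with the B_1 = +1/2 convention the sum shifts by exactly 1 to e/(e-1), which matches the quoted value if it is read as exp(1)/(exp(1)-1). A quick Python check (my own sketch, not forum code):

```python
from fractions import Fraction
from math import comb, factorial, exp

def bernoulli_numbers(n):
    # B_0..B_n (with B_1 = -1/2) via the standard recurrence
    B = [Fraction(0)] * (n + 1)
    B[0] = Fraction(1)
    for m in range(1, n + 1):
        B[m] = -sum(comb(m + 1, j) * B[j] for j in range(m)) / (m + 1)
    return B

B = bernoulli_numbers(30)

# generating function t/(e^t - 1) at t = 1: sum_n B_n/n! = 1/(e - 1)
s = float(sum(B[n] / factorial(n) for n in range(31)))
print(s, 1 / (exp(1) - 1))

# switching to the B_1 = +1/2 convention adds exactly 1 to the sum,
# giving e/(e - 1), i.e. exp(1)/(exp(1) - 1)
print(s + 1, exp(1) / (exp(1) - 1))
```

Truncating at n = 30 is more than enough here, since B_n/n! decays like 2/(2*pi)^n.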

