Tetraseries

Quote:Wait... the series above only works for even approximants, I also got: for odd approximants, and they can't both be right... what am I doing wrong?
10/31/2009, 12:58 PM
(10/31/2009, 10:38 AM)andydude Wrote: This is probably an old result. For the function

Hmm, I don't recognize these coefficients. Would you mind showing more of your computations?
Gottfried Helms, Kassel
(10/31/2009, 11:01 AM)andydude Wrote:
Quote: Wait... the series above only works for even approximants, I also got:

Alternating summation of coefficients requires averaging if the sequence to be summed does not converge fast enough. For example, for the coefficients at the linear term, the alternating sum over the iterates gives 1x - 1x + 1x - 1x + ... - ... for the coefficient in AS, and to get 0.5 x here in the limit one must employ Cesàro or Euler summation. Note that the same problem gets worse if the sequence of coefficients at some x also has a growth rate (is divergent): then for each coefficient you need an appropriate order of Cesàro/Euler summation.

If you use powers of a *triangular* Carleman matrix X for the iterates, then in many cases you can try the geometric series for matrices

AS = I - X + X^2 - X^3 + - ... = (I + X)^-1

and use the coefficients of the second row/column. If X is not triangular you still have a weak chance of getting a usable approximation, since the eigenvalues of AS may behave more nicely than those of X: for an eigenvalue x > 1 of X the corresponding eigenvalue of AS is 1/(1+x), which is smaller than 1, and from a set of increasing eigenvalues (all >= 1), as well as from one of decreasing eigenvalues (0 < all <= 1), you get a set of eigenvalues of AS with 0 < all < 1, which makes the associated power series for AS(x) well-behaved.
Gottfried Helms, Kassel
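[Editor's note] The matrix geometric series described in the post above can be checked numerically. This is a Python sketch with a made-up small triangular matrix X (not one of Gottfried's actual Carleman matrices), showing that the alternating series I - X + X^2 - X^3 + ... indeed reproduces (I + X)^-1 when the spectral radius of X is below 1:

```python
# Editor's illustration: alternating matrix geometric series vs. (I + X)^-1.
# X here is an arbitrary small lower-triangular matrix with spectral radius 0.5,
# standing in for a triangular Carleman matrix.
import numpy as np

n = 6
X = np.tril(np.fromfunction(lambda r, c: 0.5 ** (r + 1) / (c + 1), (n, n)))

AS = np.zeros((n, n))
P = np.eye(n)                      # current power X^h
for h in range(200):
    AS += (-1) ** h * P            # partial sum I - X + X^2 - X^3 + ...
    P = P @ X

print(np.allclose(AS, np.linalg.inv(np.eye(n) + X)))  # True
```

With eigenvalues of X in (0, 1), the eigenvalues of AS are 1/(1+x) in (1/2, 1), exactly as the post argues.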
If I Euler-sum the list of coefficients, which I get with the formula in the superroot message, for heights h = 0..63, then the resulting power series seems to begin with
Code: 0 + 1/2 x - 1/2 x^2 + 1/4 x^3 - 1/6 x^4 + 5/12 x^5 + ??? + ??? , ... ]

Because it is tidy to avoid the Euler summation altogether, we can do a trick. Let S(x,h) be the formal power series for the height h,

S(x,h) = (1+x)^(1+x)^...^(1+x) - 1

and S(x) the series for the limit when h -> infinity. Then, by definition,

AS(x) = S(x,0) - S(x,1) + S(x,2) - ... + ...   // Euler sum

Since the coefficients of any height converge to those of the S(x)-series, I compute the difference D(x,h) = S(x,h) - S(x) and rewrite

AS(x) = D(x,0) - D(x,1) + D(x,2) - ... + aeta(0)*S(x)

where aeta(0) is the alternating zeta series at 0, meaning aeta(0) = 1 - 1 + 1 - 1 + ... = 1/2. Because the coefficients with index k < h in D(x,h) vanish, I get exact rational values for the coefficients of the formal power series of AS(x)

Code: 0 * x^0

In float-numerical display this is

Code: 0 * x^0

Check: for base x = 1/2 I get AS(x) = 0.938253002500
Gottfried Helms, Kassel
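[Editor's note] The D(x,h) trick above can be verified numerically without any formal power series. This Python sketch uses an assumed convergent base b = 1.2 (so the tower t -> b^t has a finite limit); b and all tolerances are the editor's choices, not from the thread. The alternating sum of the S(x,h) does not converge classically (its terms tend to S(x) != 0), so the trick's value sum (-1)^h D(x,h) + (1/2) S(x) is cross-checked against Abel summation of the raw series:

```python
# Editor's numeric check of: AS = sum (-1)^h D(h) + aeta(0)*S_inf, aeta(0) = 1/2.
# Base b = 1.2 is an assumption chosen so the power tower converges.
b = 1.2

# iterate the tower t -> b^t, collecting S(h) = t_h - 1
S = []
t = b
for _ in range(60000):
    S.append(t - 1.0)
    t = b ** t
S_inf = S[-1]                       # numeric limit of the tower, minus 1

# the trick: alternating sum of the (fast-decaying) differences, plus S_inf/2
AS_trick = sum((-1) ** h * (S[h] - S_inf) for h in range(200)) + 0.5 * S_inf

# cross-check: Abel summation of the raw alternating series sum (-1)^h S(h)
r = 0.9995
AS_abel = sum((-1) ** h * r ** h * S[h] for h in range(60000))

print(abs(AS_trick - AS_abel) < 1e-3)  # True: both summation methods agree
```

The differences D(h) = S(h) - S_inf decay geometrically (the fixed point is attracting), so the trick's series converges rapidly, while the Abel sum needs r close to 1 and many terms.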
(10/31/2009, 02:40 PM)Gottfried Wrote: Check: for base x = 1/2 I get AS(x) = 0.938253002500

Yes, with the average of the two series, I get AS(0.5) = 0.938253. The way that I got the coefficients is slightly different from your method. I did this: let B be a matrix defined by , and let ; then , so I thought: if we know G = (1, 1, 1, 1, ...), then , and when the matrix size is even I get the first series, and when the matrix size is odd I get the second series.

(10/31/2009, 09:37 PM)andydude Wrote:

Hmm, I didn't catch the actual computation, but that is possibly not yet required. I guess you need the technique of divergent summation; it may be that the implicit series which occur in the matrix multiplication have alternating signs, do not converge well, or are even divergent. So you could include the process of Euler summation.

I found a very nice method to get at least an overview of whether such a matrix product suffers from non-convergence (of the kind which is repairable by Cesàro/Euler summation). I have defined a diagonal matrix dE(o) of coefficients for Euler summation of order o, which can simply be included in the matrix product, where o = 1.0, o = 1.5 or o = 2. With too small an o, the implicit sums in the matrix multiplication begin to oscillate after some terms (the order is "too weak"); with too high an o, the oscillation is suppressed so heavily that with dim terms the series has not yet converged. In a matrix product of dimension dim, dim^2 such sums occur. While likely not all of these sums can be handled correctly by the same Euler vector, for some of them you will see a well-approximated result and a general smoothing, which makes the result matrix-size independent, especially for averages between sizes dim and dim+1.

In my implementation, order o = 1 means no change, simply dE(1) = I; dE(1.7)..dE(2.0) can sum 1 - 1 + 1 - 1 + ... and similar; dE(2.5)..dE(3.0) can sum divergence of the type 1 - 2 + 4 - 8 + - ... and so on (for details of a toy implementation in Pari/GP see below; I can send you some example scripts if this is more convenient).

Gottfried

Code: // Pari/GP
Gottfried Helms, Kassel
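[Editor's note] The exact definition of Gottfried's dE(o) matrices is not shown in the thread, but the classical Euler transform illustrates the same effect on the two example series he names. This is a Python sketch of that standard technique (not his implementation), using the identity sum_{n>=0} (-1)^n a_n = sum_{k>=0} (-1)^k (Delta^k a)(0) / 2^(k+1):

```python
# Editor's sketch: classical Euler summation of alternating series via
# forward differences. Sums 1 - 1 + 1 - ... to 1/2 and 1 - 2 + 4 - 8 + ... to 1/3.
from math import comb

def euler_sum(a, K=40):
    """Euler-transform the alternating series sum (-1)^n a[n]."""
    total = 0.0
    for k in range(K):
        # forward difference (Delta^k a)(0) = sum_j (-1)^(k-j) C(k,j) a[j]
        dk = sum((-1) ** (k - j) * comb(k, j) * a[j] for j in range(k + 1))
        total += (-1) ** k * dk / 2 ** (k + 1)
    return total

print(euler_sum([1] * 50))                      # 0.5
print(euler_sum([2 ** n for n in range(50)]))   # ~1/3, i.e. 1 - 2 + 4 - 8 + ...
```

For a_n = x^n the transformed series has ratio (x-1)/2, so it converges for |x-1| < 2, which is why the geometric-type divergence 1 - 2 + 4 - 8 + ... becomes summable; faster-growing divergence needs a higher order, matching the o-dependence described above.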
(10/31/2009, 09:37 PM)andydude Wrote: The way that I got the coefficients is slightly different than your method. I did this:

Ah, now I understand: B is the matrix which transforms the f- into the g-coefficients; G is given and F is sought... B is not triangular here: how do you get the correct entries for its inverse, btw? But whatever: I use this idea too, frequently. However, in many instances in our context of exponentiation, and especially iterated exponentiation, I have found that the inverse of some matrix X represents highly divergent series, such that results which are systematically correct using the non-inverted matrix are not correct for the inverse problem using the (naive) inverse of X. This is especially the case for a matrix X whose triangular LR-factors have the form of a q-binomial matrix. Such LR-factors occur for a square matrix X = x_{r,c} = base^(r*c), or X = x_{r,c} = base^(r*c)/r!, or the like, if X is to be inverted via inversion of its triangular factors. Such matrices X occur, for example, in the interpolation which I called "exponential polynomial interpolation" for the tetration (or sexp) Bell matrices. I used that matrix X also in the example for the "false interpolation for the logarithm" discussion. (But I could not yet find a workaround for the occurring inconsistencies with the inverse.) Now, I don't yet see the precise characteristics of your B-matrix; I'll just have to actually construct one and look into it before I can say more. Let's see... Gottfried
Gottfried Helms, Kassel
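[Editor's note] One concrete symptom of the inversion problem described above can be shown in a few lines. In this Python sketch (the matrix family base^(r*c)/c! and all sizes are the editor's choices for illustration), the leading block of the inverse of a *triangular* truncation is independent of the truncation size, while for a non-triangular truncation the "naive" inverse shifts with every size, which is what makes the inverse problem unreliable:

```python
# Editor's illustration: truncation-(in)dependence of naive matrix inversion.
import numpy as np
from math import factorial

base = 1.2

def X(n, triangular):
    # a toy matrix of the type x_{r,c} = base^(r*c) / c!
    M = np.array([[base ** (r * c) / factorial(c) for c in range(n)]
                  for r in range(n)], dtype=float)
    return np.tril(M) if triangular else M

def lead(M, k=4):
    # leading k x k block of the inverse of the truncated matrix
    return np.linalg.inv(M)[:k, :k]

# triangular truncation: the leading block of the inverse is size-independent
print(np.allclose(lead(X(6, True)), lead(X(8, True))))               # True

# non-triangular truncation: the naive inverse changes with the size
print(np.abs(lead(X(6, False)) - lead(X(8, False))).max() > 1e-3)    # True
```

For triangular matrices the inverse of a leading block equals the leading block of the inverse, so finite truncations are consistent; for the full matrix, each enlargement rewrites all entries of the inverse, consistent with the even/odd size alternation andydude observed.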
11/03/2009, 03:56 AM
11/03/2009, 04:12 AM
Oh wait... I was wrong
which means sorry.
If F is all zeroes except for a one somewhere, then that represents a function. In general, for integer n, the G's look like this
The first coefficient seems to have a pattern in it, but this is just because . Oh, and another weird thing: when approximated in this way. 
