Tetra-series
#21
Quote:
Wait... the series above only works for even approximants; I also got:



for odd approximants, and they can't both be right... what am I doing wrong?
#22
(10/31/2009, 10:38 AM)andydude Wrote: This is probably an old result. For the function



I found using Carleman matrices that



is this related to the series above?

Hmmm, I don't recognize these coefficients. Would you mind showing more of your computations?
Gottfried Helms, Kassel
#23
(10/31/2009, 11:01 AM)andydude Wrote:
Quote:
Wait... the series above only works for even approximants; I also got:



for odd approximants, and they can't both be right... what am I doing wrong?

Alternating summation of the coefficients requires averaging (regularization) if the sequence to be summed does not converge fast enough. For example, at the linear term the alternating sum over the iterates gives 1x - 1x + 1x - 1x ... for the coefficient in AS, and to get the limit 0.5x here one must employ Cesàro or Euler summation.

Note that the same problem gets worse if the sequence of coefficients at some power of x also has a growth rate (is divergent). Then for each coefficient you need an appropriate order of Cesàro/Euler summation.

If you use powers of a *triangular* Carleman matrix X for the iterates, then in many cases you can try the geometric series for matrices

AS = I - X + X^2 - X^3 + ... - ... = (I + X)^-1

and use the coefficients of its second row/column.
If X is not triangular you still have some chance of getting a usable approximation, since the eigenvalues of AS may behave more nicely than those of X: for an eigenvalue t > 1 of X the corresponding eigenvalue of AS is 1/(1+t), which is smaller than 1. So from a set of increasing eigenvalues (all >= 1), as well as from a set of decreasing ones (0 < all <= 1), you get a set of eigenvalues of AS inside (0,1), which makes the associated power series for AS(x) well-behaved.
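To illustrate the trick in Pari/GP (a minimal toy sketch; the example function f(x) = 2x + x^2 and the size n = 8 are arbitrary choices for illustration, not one of the series discussed above):
Code:
\\ toy illustration: AS = I - X + X^2 - ... = (I+X)^-1 via a triangular Carleman matrix
n = 8;                     \\ matrix size / truncation
f = 2*x + x^2;             \\ arbitrary example series without constant term
\\ X[r,c] = coefficient of x^(c-1) in f(x)^(r-1); row 2 of X^h then holds
\\ the coefficients of the h-fold iterate f(f(...f(x)...))
X = matrix(n, n, r, c, polcoeff(f^(r-1) + O(x^n), c-1));
AS = (matid(n) + X)^-1;    \\ the geometric series I - X + X^2 - X^3 + ...
print(AS[2,]);             \\ coefficients of x - f(x) + f(f(x)) - ...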
Gottfried Helms, Kassel
#24
If I Euler-sum the lists of coefficients which I get with the formula in the superroot message for heights h = 0..63, then the resulting power series seems to begin with
Code:
0 + 1/2 x - 1/2 x^2 + 1/4 x^3 - 1/6 x^4 + 5/12 x^5 + ??? + ??? + ...
where for the question marks I needed higher Euler orders.

Because it would be nice to avoid the Euler summation altogether, we can use a trick:
Let S(x,h) be the formal power series of height h,

S(x,h) = (1+x)^(1+x)^...^(1+x) - 1

and S(x) the series for the limit as h -> infinity. Then by definition

AS(x) = S(x,0) - S(x,1) + S(x,2) - ...    // Euler-summed

Since for each power of x the coefficients of the finite heights converge to those of the S(x)-series, I compute the differences D(x,h) = S(x,h) - S(x) and rewrite

AS(x) = D(x,0) - D(x,1) + D(x,2) - ... + aeta(0)*S(x)

where aeta(0) is the alternating zeta series (the Dirichlet eta function) at 0, meaning

aeta(0) = 1 - 1 + 1 - 1 + 1 - ... = 1/2

Because the coefficients with index k < h in D(x,h) vanish, only finitely many heights contribute to each coefficient, and I get exact rational values for the coefficients of the formal power series of AS(x):
Code:
0 * x^0
                     1/2 * x^1
                    -1/2 * x^2
                     1/4 * x^3
                    -1/6 * x^4
                    5/12 * x^5
                  -23/80 * x^6
                  97/720 * x^7
              -1801/3360 * x^8
                619/5040 * x^9
            -4279/15120 * x^10
          106549/151200 * x^11
        2586973/5702400 * x^12
        2111317/1425600 * x^13
   777782953/1037836800 * x^14
  3321778277/4358914560 * x^15
...

In floating-point display this is
Code:
0 * x^0
    0.500000000000 * x^1
   -0.500000000000 * x^2
    0.250000000000 * x^3
   -0.166666666667 * x^4
    0.416666666667 * x^5
   -0.287500000000 * x^6
    0.134722222222 * x^7
   -0.536011904762 * x^8
    0.122817460317 * x^9
  -0.283002645503 * x^10
   0.704689153439 * x^11
   0.453663895903 * x^12
    1.48100238496 * x^13
   0.749427032266 * x^14
   0.762065470951 * x^15
   -2.02559608779 * x^16
   -4.93868722102 * x^17
   -11.5286692883 * x^18
   -17.6563985780 * x^19
   -24.5338937285 * x^20
   -22.4594923016 * x^21
   -4.19284436502 * x^22
    53.8185412606 * x^23
    176.092085183 * x^24
    405.014519784 * x^25
    772.287054778 * x^26
    1291.34671701 * x^27
    1872.07516409 * x^28
    2213.27210256 * x^29
    1537.71737942 * x^30
   -1795.52581418 * x^31
...
Because the constant term of each S(x,h)+1 equals 1, its alternating sum is 1 - 1 + 1 - 1 ..., and we should set the constant term in AS(x) to 1/2.

Check, for base x=1/2 I get AS(x) = 0.938253002500
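A toy Pari/GP version of this D(x,h)-trick can look as follows (the height-indexing S(x,0) = x with S(x,h+1) = (1+x)^(1+S(x,h)) - 1, the helper name tower1 and the truncation n = 16 are assumptions of this sketch):
Code:
\\ toy version of the D(x,h)-trick; assumed indexing: S(x,0) = x and
\\ S(x,h+1) = (1+x)^(1 + S(x,h)) - 1
n  = 16;                                  \\ truncation order
L  = log(1 + x + O(x^n));
tower1(u) = exp((1 + u)*L) - 1;           \\ one more (1+x)-exponentiation
S  = x + O(x^n);  for(i=1, n, S = tower1(S));   \\ limit series S(x): coefficients stabilize
Sh = x + O(x^n);                          \\ S(x,0)
AS = (1/2)*S;                             \\ the aeta(0)*S(x) term, aeta(0) = 1/2
for(h=0, n, AS += (-1)^h*(Sh - S); Sh = tower1(Sh));   \\ sum of (-1)^h * D(x,h)
print(AS);   \\ compare with the table of rational coefficients above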
Gottfried Helms, Kassel
#25
(10/31/2009, 02:40 PM)Gottfried Wrote: Check, for base x=1/2 I get AS(x) = 0.938253002500

Yes, with the average of the two series, I get AS(0.5) = 0.938253.

The way that I got the coefficients is slightly different from your method. I did this:
Let B be the matrix which transforms the coefficients of f into those of g, and let F and G be the coefficient vectors of f and g,

then G = B*F,

so I thought, if we know G = (1, -1, 1, -1, ...), then F = B^-1*G,

and when the matrix size is even I get the first series, and when the matrix size is odd, I get the second series.
#26
(10/31/2009, 09:37 PM)andydude Wrote:
and when the matrix size is even I get the first series, and when the matrix size is odd, I get the second series.

Hmm, I didn't follow the actual computation, but that is possibly not yet required.
I guess you need the technique of divergent summation; it may be that the implicit series which occur in the matrix multiplication have alternating signs, do not converge well, or are even divergent. So you could include the process of Euler summation.

I found a very nice method to get at least an overview of whether such a matrix product suffers from non-convergence (of the kind which is repairable by Cesàro/Euler summation).
I've defined a diagonal matrix dE(o) of coefficients for Euler summation of order o, which can simply be inserted into the matrix product.
Insert dE(o) into the product with, say, o = 1.0, o = 1.5 or o = 2. With too small an o the implicit sums in the matrix multiplication begin to oscillate after some terms (the order is "too weak"); with too high an o the oscillation is suppressed so heavily that with dim terms the sums have not yet converged. In a matrix product of dimension dim, dim^2 such sums occur. While likely not all of these sums can be handled correctly by the same Euler vector, for some of them you will see a well-approximated result and a general smoothing, which makes the result nearly independent of the matrix size, especially when averaging the results for sizes dim and dim+1.

In my implementation order o = 1 means no change, simply dE(1) = I; dE(1.7)..dE(2.0) can sum 1 - 1 + 1 - 1 + ... and similar series; dE(2.5)..dE(3.0) can sum divergence of the type 1 - 2 + 4 - 8 + - ..., and so on.
(For a toy implementation in Pari/GP see below; I can send you some example scripts if that is more convenient.)

Gottfried


Code:
\\ Pari/GP
\\ a vector of length dim returning the coefficients for Euler-summation of
\\ order "order" (E(1) gives the unit-vector: direct summation);
\\ the default dim=n refers to a globally set dimension n
{E(order, dim=n) = local(Eu);
  Eu = vector(dim); Eu[1] = order^(dim-1);
  for(k=2, dim, Eu[k] = Eu[k-1] - (order-1)^(dim-k+1)*binomial(dim-1, k-2));
  Eu = Eu/order^(dim-1);
  return(Eu);}

\\ returns this as a diagonal matrix
dE(order, dim=n) = return( matdiagonal(E(order, dim)) )
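A quick check of the intended use (purely illustrative; the two test series and the dimension 16 are arbitrary choices), applying the weight vector directly to the terms of two divergent series:
Code:
\\ quick check: apply the weight vector to two standard divergent test series
dim = 16;
a = vector(dim, k, (-1)^(k-1));      \\ 1 - 1 + 1 - 1 + ...
b = vector(dim, k, (-2)^(k-1));      \\ 1 - 2 + 4 - 8 + - ...
print( E(2, dim) * a~ );             \\ order 2: should give 1/2
print( E(3, dim) * b~ );             \\ order 3: should give 1/3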
Gottfried Helms, Kassel
#27
(10/31/2009, 09:37 PM)andydude Wrote: The way that I got the coefficients is slightly different from your method. I did this:
Let B be the matrix which transforms the coefficients of f into those of g, and let F and G be the coefficient vectors of f and g,

then G = B*F,

so I thought, if we know G = (1, -1, 1, -1, ...), then F = B^-1*G,

and when the matrix size is even I get the first series, and when the matrix size is odd, I get the second series.
Ah, now I understand: B is the matrix which transforms the f- into the g-coefficients, G is given and F is sought...
B is not triangular here: how do you get the correct entries for its inverse, btw?

But anyway: I use this idea frequently, too.
However, in many instances in our context of exponentiation, and especially of iterated exponentiation, I found that the inverse of some matrix X represents highly divergent series, so that results which are systematically correct when the non-inverted matrix is used are no longer correct for the inverse problem when the (naive) inverse of X is used.
This is especially the case for a matrix X whose triangular LR-factors have the form of a q-binomial matrix.
Such LR-factors occur for a square matrix X with x_{r,c} = base^(r*c) or x_{r,c} = base^(r*c)/r! or the like, when X is to be inverted by inverting its triangular factors.
Such matrices X occur, for example, in the interpolation which I called "exponential polynomial interpolation" for the T-tetration (or sexp) Bell matrices. I also used that matrix X in the example for the "false interpolation for the logarithm" discussion. (But I could not yet find a workaround for the inconsistencies that occur with the inverse.)

Now I don't yet see the precise characteristics of your B-matrix; I'll just have to construct one and look into it before I can say more. Let's see...

Gottfried
Gottfried Helms, Kassel
#28
(11/01/2009, 07:45 AM)Gottfried Wrote: B is not triangular here: how do you get the correct entries for its inverse, btw?

Right, B is not triangular, but is triangular.
#29
O wait... I was wrong

which means

sorry.
#30
If F is all zeroes except for a one somewhere, then it represents a monomial x^n. In general, for integer n, the G's look like this



The first coefficient seems to have a pattern in it, but this is just because .

Oh, and another weird thing: when approximated in this way.

