Tetration Forum

Full Version: Cheta with base-change: preliminary results
Pages: 1 2 3
(08/12/2009, 01:31 AM)sheldonison Wrote: [ -> ]Oh, I get it: you're using the well-defined base-change function at the real axis only, for sexp_e(z). And then you're using those real values to generate the Taylor series? And graphing that Taylor series in the complex plane?
Yes, this version of sexp is for base e (hence the lack of a subscript), and is found using the change of base formula with the cheta function. I'm working in the double-logarithmic scale, so the cheta function turns out to be the continuous iteration of e^x-1 (which is why I don't have a specific reference to eta).

So I calculate sexp(m*2^p), for m an integer between -64 and 64, for example, and p equal to -108. Because I'm only using reals, convergence is assured. I'm using sexp(0)=0, because Andrew's slog is centered at 0, so I can have slog(sexp(0)) and expect an exact answer.

Anyway, I can then use those 129 points to calculate the various derivatives (actually, I use a matrix inversion that directly gives me the coefficients of the approximated Taylor series). Once I have a Taylor series, I can then move away from the real line.

See the following wikipedia article for the general idea of how the grid of points can be used to get derivatives:
http://en.wikipedia.org/wiki/Five-point_stencil

Instead of a 5-point stencil, I use a 129-point stencil. I don't yet have a formula for the coefficients for the stencil, so I have to use a matrix inversion to derive them "the hard way". For an n-point stencil, the matrix inversion is the primary limiting factor for this method, because it uses O(n^3) memory, whereas a vector method would use O(n^2). (The matrix has n^2 terms, each using a multiple of n bits, so the space is order n^3).
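The "hard way" described above can be sketched in a few lines: build the n-point stencil matrix (a Vandermonde matrix in the integer offsets) and invert it exactly over the rationals. The helper names (stencil_matrix, invert) are illustrative, not from the original code; for n = 5, row 1 of the inverse should reproduce the classical 5-point first-derivative stencil.

```python
from fractions import Fraction

def stencil_matrix(n):
    """Rows are [1, k, k^2, ..., k^(n-1)] for offsets k = -(n-1)/2 .. (n-1)/2."""
    d = (n - 1) // 2
    return [[Fraction(k) ** j for j in range(n)] for k in range(-d, d + 1)]

def invert(m):
    """Exact Gauss-Jordan inversion over the rationals."""
    n = len(m)
    a = [row[:] + [Fraction(i == j) for j in range(n)] for i, row in enumerate(m)]
    for col in range(n):
        piv = next(r for r in range(col, n) if a[r][col] != 0)
        a[col], a[piv] = a[piv], a[col]
        p = a[col][col]
        a[col] = [x / p for x in a[col]]
        for r in range(n):
            if r != col and a[r][col] != 0:
                f = a[r][col]
                a[r] = [x - f * y for x, y in zip(a[r], a[col])]
    return [row[n:] for row in a]

Sc = stencil_matrix(5)
ScI = invert(Sc)
# Row 1 of Sc^-1 applied to [f(x-2h), ..., f(x+2h)] gives h*f'(x):
# the classical stencil (f(x-2h) - 8 f(x-h) + 8 f(x+h) - f(x+2h)) / 12.
assert ScI[1] == [Fraction(1, 12), Fraction(-2, 3), Fraction(0),
                  Fraction(2, 3), Fraction(-1, 12)]
```

The O(n^3) cost mentioned above is the cost of this elimination at full rational precision; the factorization discussed later in the thread avoids it.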
(08/12/2009, 02:15 AM)jaydfox Wrote: [ -> ]So I calculate sexp(m*2^p), for m an integer between -64 and 64, for example, and p equal to -108. Because I'm only using reals, convergence is assured. I'm using sexp(0)=0, because Andrew's slog is centered at 0, so I can have slog(sexp(0)) and expect an exact answer.

Anyway, I can then use those 129 points to calculate the various derivatives (actually, I use a matrix inversion that directly gives me the coefficients of the approximated Taylor series). Once I have a Taylor series, I can then move away from the real line.

See the following wikipedia article for the general idea of how the grid of points can be used to get derivatives:
http://en.wikipedia.org/wiki/Five-point_stencil

But that's what I asked you: you make an interpolating polynomial through the points sexp(m*2^p), and then you get the derivatives as the derivatives of the polynomial.
It's described on that wiki page.

Quote:Instead of a 5-point stencil, I use a 129-point stencil. I don't yet have a formula for the coefficients for the stencil, so I have to use a matrix inversion to derive them "the hard way".

Well, the interpolating polynomial can be computed as the Lagrange or the Newton polynomial (with which I was incidentally concerned recently, considering Ansus' formulas).

Let's consider the equidistant Newton interpolation polynomial through the points (x_k, f(x_k)), x_k = x_0 + k*h, k = 0..n:

p(x) = sum_{j=0}^{n} binomial(s,j) * Delta^j f(x_0), where s = (x - x_0)/h.

To compute the coefficients we need the coefficients of binomial(s,j) = s(s-1)...(s-j+1)/j!, or more specifically the coefficients of the falling factorial s(s-1)...(s-j+1), but these are the well-known Stirling numbers of the first kind s1(j,m):

s(s-1)...(s-j+1) = sum_{m=0}^{j} s1(j,m) * s^m.

Plug this into the Newton formula:

p(x) = sum_{j=0}^{n} (Delta^j f(x_0)/j!) * sum_{m=0}^{j} s1(j,m) * s^m.

Then we want to have the coefficients at the powers s^m; let

c_m = sum_{j=m}^{n} s1(j,m) * Delta^j f(x_0) / j!.

Into the previous formula:

p(x) = sum_{m=0}^{n} c_m * s^m, with s = (x - x_0)/h.

So these are the coefficients of the stencil without matrix inversion. In your described case n = 128, h = 2^(-108), x_k = (k-64)*2^(-108), x_0 = -64*2^(-108), and s = 2^108 * x + 64.
Maybe there is a more symmetric formula, but this should at least work.
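The Stirling-number route above can be sketched as follows (illustrative names, not anyone's actual code): compute the forward differences, expand the falling factorials via signed Stirling numbers of the first kind, and collect the coefficients c_m of the interpolating polynomial in s = (x - x_0)/h, all without a matrix inversion.

```python
from fractions import Fraction
from math import factorial

def stirling1(n):
    """Signed Stirling numbers of the first kind s1[j][m], defined by
    s(s-1)...(s-j+1) = sum_m s1[j][m] * s^m."""
    s1 = [[0] * (n + 1) for _ in range(n + 1)]
    s1[0][0] = 1
    for j in range(n):
        for m in range(j + 2):
            s1[j + 1][m] = (s1[j][m - 1] if m else 0) - j * s1[j][m]
    return s1

def newton_coeffs(ys):
    """Coefficients c_m (in s = (x - x0)/h) of the polynomial through the
    equidistant values ys = [f(x0), f(x0+h), ..., f(x0+n*h)]."""
    n = len(ys) - 1
    diffs, row = [], [Fraction(y) for y in ys]   # forward differences
    while row:
        diffs.append(row[0])
        row = [b - a for a, b in zip(row, row[1:])]
    s1 = stirling1(n)
    return [sum(Fraction(s1[j][m], factorial(j)) * diffs[j]
                for j in range(m, n + 1)) for m in range(n + 1)]

# f(x) = x^3 sampled at x = -2, -1, 0, 1, 2 (x0 = -2, h = 1, so s = x + 2):
c = newton_coeffs([(-2)**3, (-1)**3, 0, 1, 2**3])
x = Fraction(1, 2)                   # a non-grid point
s = x + 2
assert sum(cm * s**m for m, cm in enumerate(c)) == x**3
```

Since the interpolant of x^3 is x^3 itself, the recovered polynomial in s must be (s-2)^3, which the assertion checks at a non-grid point.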

And you still didn't tell me what FMA means!
Hi Jay,

there is a simple recursion formula, with which you can generate the inverse matrix.
I used the description at wikipedia for the stencils,
Code:
f(x-2h)= f(x) -2h f'(x)/1! + 4h^2 f''(x)/2! - 8h^3*f'''(x)/3! + 16h^4*f''''(x)/4!
f(x-h) = f(x) -1h f'(x)/1! + 1h^2 f''(x)/2! - 1h^3*f'''(x)/3! +  1h^4*f''''(x)/4!
f(x)   = f(x)  0             0                0                  0
f(x+h) = f(x) +1h f'(x)/1! + 1h^2 f''(x)/2! + 1h^3*f'''(x)/3! +  1h^4*f''''(x)/4!
f(x+2h)= f(x) +2h f'(x)/1! + 4h^2 f''(x)/2! + 8h^3*f'''(x)/3! + 16h^4*f''''(x)/4!


To prepare this for matrix analysis I rewrite this as a matrix equation:

Code:
|f(x-2h)|= |1 -2  4  -8 16 |  | f(x)/0!    |
|f(x-h) |= |1 -1  1  -1  1 |  | f'(x)/1!   |
|f(x)   |= |1  0  0   0  0 |* | f''(x)/2!  |
|f(x+h) |= |1  1  1   1  1 |  | f'''(x)/3! |
|f(x+2h)|= |1  2  4   8 16 |  | f''''(x)/4!|



and define the following matrices:

Code:
Fh(x) = columnvector of [f(x-2h),f(x-h),f(x),f(x+h),f(x+2h)]
Fd(x) = columnvector of [f(x)/0!,f'(x)/1!,f''(x)/2!,f'''(x)/3!,f''''(x)/4!]

prefixing them with "d", if taken as diagonal matrices

Surely these definitions have to be adapted for higher dimensions with a size parameter, but it's easier here to stay with simple examples
Code:
Fac = diagonalmatrix of factorials diag(0!, 1!,2!,3!,4!)
FacI = Fac^-1 = diag(1/0!,1/1!,1/2!,1/3!,1/4!)

Sc    = matrix of the Stencils coefficients, such that

    Fh(x) = Sc * Fd(x)


What you want is the inverse of Sc, such that Fd(x) = Sc^-1 * Fh(x); evaluating the rhs, you get the values for the derivatives f'(x)/1!, f''(x)/2!, etc.
The computation of the inverse of a square matrix, especially if one is interested in different sizes or even in the limit case of infinite size, is tedious or even impossible. But sometimes we can make life easier: if the square matrix can be factored into triangular and diagonal matrix factors, then the triangular factors are more easily inverted.

Using the lower triangular Pascal matrix P and its inverse PI, we find that we can indeed factor Sc into such factors: Sc = P * Fac * X, where we know the entries of the factors P and Fac, and we can then find X by X = FacI * PI * Sc.

Now an independent description of the entries of X would be nice, but it is not obvious. On the other hand, the inversion of Sc is easier now, because the inversion of X, which is of simple triangular form, needs significantly less computation:
Code:
     Sc^-1 = X^-1 * FacI * PI

So this alone would save computational effort.

But the really good news is that I've found an independent description for the entries of X as well as of X^-1 (call it XI, since we can compute it independently), so that the inversion can be omitted altogether.

Assume an odd dimension for the problem, say dim=9; then use d2 = (dim-1)/2 = 4.
Reference a cell as a(r,c), with indices beginning at zero. If an index r or c falls outside the matrix, let the value a(r,c) := 0. Work only on the upper triangular submatrix.
Code:
Init X and XI as identity-matrix
Further cells are (where we need only c>r)

   in X  :   a(r,c) = a(r-1,c-1) - (d2 -  r  )*a(r,c-1)  
   in XI :   a(r,c) = a(r-1,c-1) + (d2 -(c-1))*a(r,c-1)


Then
Code:
Fd(x) =     Sc^-1      * Fh(x)
          = XI * FacI * PI * Fh(x)

without additional inversions...
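The recursions above are easy to check numerically. Here is a minimal sketch (illustrative names) that builds P, Fac, X and XI directly and verifies both the factorization Sc = P * Fac * X and that XI really is X^-1, with no inversion anywhere:

```python
from fractions import Fraction
from math import comb, factorial

def matmul(a, b):
    return [[sum(Fraction(x) * y for x, y in zip(row, col))
             for col in zip(*b)] for row in a]

dim = 5
d2 = (dim - 1) // 2

# X and XI from the recursions: init as identity, fill only cells with c > r.
X  = [[Fraction(r == c) for c in range(dim)] for r in range(dim)]
XI = [[Fraction(r == c) for c in range(dim)] for r in range(dim)]
for r in range(dim):
    for c in range(r + 1, dim):
        X[r][c]  = (X[r - 1][c - 1] if r else 0) - (d2 - r) * X[r][c - 1]
        XI[r][c] = (XI[r - 1][c - 1] if r else 0) + (d2 - (c - 1)) * XI[r][c - 1]

P   = [[comb(r, c) for c in range(dim)] for r in range(dim)]   # lower Pascal
Fac = [[factorial(r) if r == c else 0 for c in range(dim)] for r in range(dim)]
Sc  = [[Fraction(k - d2) ** j for j in range(dim)] for k in range(dim)]

assert matmul(matmul(P, Fac), X) == Sc       # the factorization Sc = P*Fac*X
I = [[Fraction(r == c) for c in range(dim)] for r in range(dim)]
assert matmul(XI, X) == I                    # XI is indeed X^-1
```

With both checks passing, Sc^-1 = XI * FacI * PI follows immediately from the factorization, exactly as stated.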

Data examples: I'll add only a picture here; I'm too lazy to re-translate the formatting from my WinWord table cells at the moment

[attachment=514]

[attachment=515]

[attachment=516]

I can send you the Pari/Gp-code if this is needed.


Gottfried

(08/12/2009, 12:37 PM)Gottfried Wrote: [ -> ]Hi Jay,

there is a simple recursion formula, with which you can generate the inverse matrix.
I used the description at wikipedia for the stencils,
Code:
f(x-2h)= f(x) -2h f'(x)/1! + 4h^2 f''(x)/2! - 8h^3*f'''(x)/3! + 16h^4*f''''(x)/4!
f(x-h) = f(x) -1h f'(x)/1! + 1h^2 f''(x)/2! - 1h^3*f'''(x)/3! +  1h^4*f''''(x)/4!
f(x)   = f(x)  0             0                0                  0
f(x+h) = f(x) +1h f'(x)/1! + 1h^2 f''(x)/2! + 1h^3*f'''(x)/3! +  1h^4*f''''(x)/4!
f(x+2h)= f(x) +2h f'(x)/1! + 4h^2 f''(x)/2! + 8h^3*f'''(x)/3! + 16h^4*f''''(x)/4!


To prepare this for matrix analysis I rewrite this as a matrix equation:

Code:
|f(x-2h)|= |1 -2  4  -8 16 |  | f(x)/0!    |
|f(x-h) |= |1 -1  1  -1  1 |  | f'(x)/1!   |
|f(x)   |= |1  0  0   0  0 |* | f''(x)/2!  |
|f(x+h) |= |1  1  1   1  1 |  | f'''(x)/3! |
|f(x+2h)|= |1  2  4   8 16 |  | f''''(x)/4!|
Actually, since I'm trying to find the Taylor series, I don't worry about the factorials. So I work directly with the matrix above, where the first row is [1 -2 4 -8 16].

Inverting this and multiplying by 4!, or (n-1)! in general, I get back a matrix of integer coefficients. For this 5x5 matrix, I get:
Code:
[  0   0  24   0   0]
[  2 -16   0  16  -2]
[ -1  16 -30  16  -1]
[ -2   4   0  -4   2]
[  1  -4   6  -4   1]
The bottom row in general is a simple set of binomial coefficients. Though the pattern is not easy to see from here, the next row up is related to the bottom row: the middle term is 0, and as we move j terms right or left of the center, we get j times the entry below. In general, every odd row can be found by applying this trick to the even row below it. But finding the even rows, with the exception of the last row, isn't so simple.
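The "j times the entry below" rule can be checked directly on the 5x5 matrix above (a quick sketch; rows and columns as printed, center column index 2):

```python
# The 24 * Sc^-1 matrix as printed above:
M = [[ 0,   0,  24,   0,   0],
     [ 2, -16,   0,  16,  -2],
     [-1,  16, -30,  16,  -1],
     [-2,   4,   0,  -4,   2],
     [ 1,  -4,   6,  -4,   1]]
center = 2
for odd in (1, 3):                 # each odd row vs. the even row below it
    for j in range(-2, 3):         # offset j from the center column
        assert M[odd][center + j] == j * M[odd + 1][center + j]
```

So rows 1 and 3 are entirely determined by rows 2 and 4; only the even rows carry independent information here.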

But as I said, I haven't found a general formula. I'll take a look at Henryk's suggestions about the Stirling numbers, to see if I get any help there.

It may turn out that working with the factorials helps simplify the math, so I'll take a look at your matrices later today, when I'm more awake and can focus on them.
(08/12/2009, 09:03 AM)bo198214 Wrote: [ -> ]And you still didnt tell me what FMA means!
Sorry, I appear to have been mixing up variable names from some old code of Andrew's. I work with the "FMA" on such a regular basis when working with exp(x)-1 that I'd stopped paying attention to the terminology.

See this post:
http://math.eretrandre.org/tetrationforu...147#pid147

The variable fma in that code is the matrix I'm referring to. Hopefully you can tell from the context what Andrew's talking about. Essentially, it gives us a matrix, which we can multiply on the right by a vector of powers of n, to get a vector representing a power series in x. For fractional n, the terms diverge.

Or, we can multiply on the right by a vector of powers of x (i.e., picking a fixed x), and get a power series in n, which is the iteration function. I use this latter approach.

Coincidentally, for me it took SAGE a full day to get a 200x200 FMA matrix. Andrew said he could calculate 150 terms in two minutes using Mathematica:
http://math.eretrandre.org/tetrationforu...124#pid124

I would love to be using more terms, 300 or so if possible, but I fear it would take a week to calculate in SAGE, compared to mere hours in Mathematica.

Is there a way to generate the terms in Mathematica and export them into a format I could import into SAGE?
(08/12/2009, 02:29 PM)jaydfox Wrote: [ -> ]Actually, since I'm trying to find the Taylor series, I don't worry about the factorials. So I work directly with the matrix above, where the first row is [1 -2 4 -8 16].

Inverting this and multiplying by 4!, or (n-1)! in general, I get back a matrix of integer coefficients. For this 5x5 matrix, I get:
Code:
[  0   0  24   0   0]
[  2 -16   0  16  -2]
[ -1  16 -30  16  -1]
[ -2   4   0  -4   2]
[  1  -4   6  -4   1]

Yes, exactly; that's what XI is for.
Using
Code:
  dim=5
  ScI = XI *FacI*PI *(dim-1)!
I get ScI (= Sc^-1 without general matrix-inversion!)
Code:
   0    0   24   0   0
   2  -16    0  16  -2
  -1   16  -30  16  -1
  -2    4    0  -4   2
   1   -4    6  -4   1

If I use dim=128 I need at most one second to get XI; for dim=256 it's at most two seconds.
PI needs the signed binomials, and FacI the reciprocals of the factorials, which are "wired into" the CAS we use.

Just for fun: the top 12x2 of the matrix ScI for the dim=63 problem
Code:
0                                                                       .
  -2181131468794922353615366650200339706856997013317222400000000000000   139737822767461358788291156722835097219304941986523381760000000000000
     70359079638545882374689246780656119576032161719910400000000000000    -4657927425582045292943038557427836573976831399550779392000000000000
   3516311353922279385648333502150600539038512466240565411840000000000  -225268491925020959287679489529214235962391937424715012898816000000000
   -113429398513621915666075274262922598033500402136792432640000000000     7508949730834031976255982984307141198746397914157167096627200000000
  -1654079054684419920279666586121037397819476185831565793689600000000   105955453869348853785751995418154264616278795206219780639948800000000
     53357388860787739363860212455517335413531489865534380441600000000    -3531848462311628459525066513938475487209293173540659354664960000000
    359946168888362231354382569977986774675428522317679104622592000000   -23053094875016314001471918871866472674486775969756723710577868800000
    -11611166738334265527560728063806024989529952332828358213632000000      768436495833877133382397295728882422482892532325224123685928960000
    -44332175994731622306985432635107929937606776681530699263508480000     2838596603460567513445078978963484415564939643140404602334085120000
      1430070193378439429257594601132513868955057312307441911726080000      -94619886782018917114835965965449480518831321438013486744469504000
      3462849129077008473136231592189678512700299197020396105131622400     -221654683043115153762232400649777616025545242953274974194850856960

...
and for a quick crosscheck also the 5 bottom rows.
Code:
...
  42061513  -2640168816  81493009963  -1648811309600  24593123988525  -288370678815056  2768098801017967  -22366853583637056
    293105    -17699760    525083425    -10199902720    145903801905    -1638716546480    15047425388225    -116142579256320
     -9455       589992    -18106325       364282240     -5403844515       63027559480     -601897015529       4839274135680
       -31         1860       -54839         1058960       -15061815         168246052       -1536862975         11803107648
         1          -62         1891          -37820          557845          -6471002          61474519          -491796152

This costs ~ 2 seconds ...
(08/12/2009, 02:43 PM)jaydfox Wrote: [ -> ]The variable fma in that code is the matrix I'm referring to. Hopefully you can tell from the context what Andrew's talking about. Essentially, it gives us a matrix, which we can multiply on the right by a vector of powers of n, to get a vector representing a power series in x. For fractional n, the terms diverge.

Or, we can multiply on the right by a vector of powers of x (i.e., picking a fixed x), and get a power series in n, which is the iteration function. I use this latter approach.

Oh, you mean this matrix is only for the computation of the iteration of e^x - 1? And this takes this exorbitant amount of time?

You know that Walker describes an algorithm for the iteration of e^x - 1 based on a limit formula, not via power series development. Perhaps this is more appropriate when considering time, particularly because he applied some accelerations.

I initially thought that the inversion of the interpolation matrix took so much time (and not the fma); that's why I provided the direct formula via Stirling numbers.

Well, I would anyway ask you to make a modular solution.
Input: some super-exponential, for a given base, at least defined on the real axis.
Output: the Taylor series of the base change to the target base.

This would give the flexibility also to check whether the change of base leads from one regular super-exponential again to a regular super-exponential (at the lower fixed point).
The regular super-exponentials for bases below eta = e^(1/e) are anyway drastically more gentle numerically than the one for base eta.
(08/12/2009, 03:47 PM)Gottfried Wrote: [ -> ]
(08/12/2009, 02:29 PM)jaydfox Wrote: [ -> ]Actually, since I'm trying to find the Taylor series, I don't worry about the factorials. So I work directly with the matrix above, where the first row is [1 -2 4 -8 16].

Inverting this and multiplying by 4!, or (n-1)! in general, I get back a matrix of integer coefficients. For this 5x5 matrix, I get:
Code:
[  0   0  24   0   0]
[  2 -16   0  16  -2]
[ -1  16 -30  16  -1]
[ -2   4   0  -4   2]
[  1  -4   6  -4   1]

Yes, exactly that's wat XI is for.
Using
Code:
  dim=5
  ScI = XI *FacI*PI *(dim-1)!
I get ScI (= Sc^-1 without general matrix-inversion!)
Code:
   0    0   24   0   0
   2  -16    0  16  -2
  -1   16  -30  16  -1
  -2    4    0  -4   2
   1   -4    6  -4   1
Yes, this is it! This is much faster than what I was using before!
(08/12/2009, 04:18 PM)bo198214 Wrote: [ -> ]
(08/12/2009, 02:43 PM)jaydfox Wrote: [ -> ]The variable fma in that code is the matrix I'm referring to. Hopefully you can tell from the context what Andrew's talking about. Essentially, it gives us a matrix, which we can multiply on the right by a vector of powers of n, to get a vector representing a power series in x. For fractional n, the terms diverge.

Or, we can multiply on the right by a vector of powers of x (i.e., picking a fixed x), and get a power series in n, which is the iteration function. I use this latter approach.

Oh, you mean this matrix is only for the computation of the iteration of e^x - 1? And this takes this exorbitant amount of time?
Well, calculating the matrix once takes a lot of time. Thereafter, using the matrix is quite fast. The problem is, the coefficients can only reasonably be calculated within a small radius about 0. The more terms you want, the smaller the radius. For 199 terms, using the 200x200 matrix, I already need almost 1000 iterations of log(x+1). Otherwise, the last few terms are inaccurate. For 500 iterations, I only use the first 150 terms, as they get wildly inaccurate at about 180 or 190. If I had a 300x300 matrix, I could probably get the first 250 terms with the same 500 iterations. It's hard to be sure without a little more analysis, because the low order terms start to accumulate errors due to the divergence of the series. I suppose that doesn't make much sense without context, so perhaps I'll dedicate a new thread just to that topic.

Quote:I initially thought that the inversion of the interpolation matrix took so much time (and not the fma); that's why I provided the direct formula via Stirling numbers.
Actually, they both take a lot of time. But I only have to calculate the FMA once, and then I just save it to disk and reuse it. If push came to shove, I could spend the week calculating the 300x300 FMA, but it'd be easier if Andrew could do it in Mathematica in a few minutes or hours and convert it to a format I could use in SAGE. Pretty please!

Now with Gottfried's method of deriving the inverse of the stencil matrix, the time spent there can also be saved! I'll experiment with this and see how large of a grid I can manage within my memory budget.

Then it's just a matter of creating the grid points, which itself is a rather time-consuming process... (hundreds of iterations of log(x+1) and exp(x)-1, using tens of thousands of bits of precision...)
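The kind of computation described, many iterations of log(x+1) followed by iterations of exp(x)-1 at very high precision, can be sketched with mpmath standing in for SAGE's arbitrary precision (a minimal round-trip, not the actual grid-point code):

```python
from mpmath import mp

mp.prec = 3000                 # bits of working precision
x = mp.mpf(1)                  # arbitrary starting value
y = x
for _ in range(100):
    y = mp.log(1 + y)          # 100 iterations toward the fixed point 0
for _ in range(100):
    y = mp.exp(y) - 1          # ...and back again
err = abs(y - x)
assert err < mp.mpf(10) ** -500
```

The round trip is exact in principle, so the residual error measures only the precision lost to rounding across the 200 transcendental-function evaluations; this is why tens of thousands of bits are needed when hundreds of iterations feed a high-order Taylor fit.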
I also just saw that there is a function lagrange_polynomial in SAGE,
e.g.
# using the definition of Lagrange interpolation polynomial
sage: R = PolynomialRing(QQ, 'x')
sage: R.lagrange_polynomial([(0,1),(2,2),(3,-2),(-4,9)])

I mean, this should be super easy now. Just plug in your argument-value pairs and you have the interpolating polynomial (no matrix fuss).
Then you can apply this interpolating polynomial to non-real values.
Or extract the coefficients as you like.
However, I didn't check how long it takes :)
For variants see
http://wiki.sagemath.org/sage-4.0.1
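For comparison, the same interpolation can be done in a few lines of plain Python with exact rationals, without SAGE, using the sample points from above (illustrative helper names; the resulting coefficient list can then be evaluated at non-real arguments as well):

```python
from fractions import Fraction

def polymul_linear(p, xj):
    """Multiply polynomial p (constant term first) by (x - xj)."""
    q = [Fraction(0)] * (len(p) + 1)
    for k, a in enumerate(p):
        q[k] -= xj * a
        q[k + 1] += a
    return q

def lagrange_coeffs(points):
    """Coefficients (constant term first) of the Lagrange interpolant."""
    n = len(points)
    coeffs = [Fraction(0)] * n
    for i, (xi, yi) in enumerate(points):
        basis, denom = [Fraction(1)], Fraction(1)
        for j, (xj, _) in enumerate(points):
            if j != i:
                basis = polymul_linear(basis, Fraction(xj))
                denom *= xi - xj
        for k, a in enumerate(basis):
            coeffs[k] += Fraction(yi) * a / denom
    return coeffs

pts = [(0, 1), (2, 2), (3, -2), (-4, 9)]
c = lagrange_coeffs(pts)

def p(x):                       # evaluation, also valid for complex x
    return sum(a * x**k for k, a in enumerate(c))

for x0, y0 in pts:
    assert p(x0) == y0
```

This reproduces what R.lagrange_polynomial does internally; the point of extracting the coefficient list is that p can then be applied off the real line, as discussed above.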