jaydfox Wrote:Gottfried,

Thanks for the information! I had a little trouble following at first, but then I downloaded your paper on the Vandermonde matrix here:

http://go.helms-net.de/math/binomial_new...monde1.pdf

Between that paper (I've only read the first 1/3 of it so far) and your reply at the bottom of the first page, I was able to follow along, at least to get as far as V(x)~ * S2 * P~ = V(e^x-1)~ * P~ = V(e^x)~

Jay -

did you stop there because it was the last step you could decode, or at the first one you could not?

Well, in case of the latter:

I say

V(x)~ * B = V(e^x)~ // tetration to the base e

But if

B = S2 * P~

then

V(x)~ * (S2 * P~) = V(e^x)~ // tetration to the base e

using associativity, we have also

(V(x)~ * S2) * P~ = V(e^x)~

Now, multiplying a powerseries row-vector by the transposed binomial matrix adds 1 to its parameter, so we may write

V(y)~ * P~ = V(y+1)~ // binomial-theorem

and conversely, subtract 1 by postmultiplication by P's inverse:

V(y)~ = V(y+1)~ * (P^-1)~

So the above, postmultiplied by (P^-1)~:

(V(x)~ * S2) * P~ * (P^-1)~ = V(e^x)~ * (P^-1)~

(V(x)~ * S2) = V(e^x-1)~

and this is then the "exp(x)-1" or as I denote it, the "U"-iteration.
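As an aside (a Python sketch, not part of the Pari-tty script below): both factors of the decomposition can be checked numerically with truncated matrices. I use the plain factorial scaling S(m,k)*k!/m! here, ignoring the index-shift by PInv that my script applies - an assumption on my side - and only look at the low columns, where truncation at 32 terms is accurate:

```python
from math import comb, exp, factorial

n = 32        # truncation size
x = 1.0

# Stirling numbers of the second kind via S(m,k) = k*S(m-1,k) + S(m-1,k-1)
S = [[0] * n for _ in range(n)]
S[0][0] = 1
for m in range(1, n):
    for k in range(1, m + 1):
        S[m][k] = k * S[m - 1][k] + S[m - 1][k - 1]

# factorial-scaled version: S2[m][k] = S(m,k) * k! / m!
S2 = [[S[m][k] * factorial(k) / factorial(m) for k in range(n)]
      for m in range(n)]

# V(x)~ * S2 : column k approximates (e^x - 1)^k
V = [x**m for m in range(n)]
u = [sum(V[m] * S2[m][k] for m in range(n)) for k in range(n)]
print(u[1], exp(x) - 1)          # ~ 1.71828...

# ... * P~ : the binomial theorem adds 1, so column c approximates (e^x)^c
Pt = [[comb(c, r) for c in range(n)] for r in range(n)]   # P~[r][c] = C(c,r)
w  = [sum(u[r] * Pt[r][c] for r in range(n)) for c in range(n)]
print(w[1], exp(x))              # ~ 2.71828...
```

The high columns of the truncated product are unreliable (column k needs far more than n terms once (e^x-1)^k is large), which is exactly the approximation issue the script addresses with Euler summation.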

I have prepared a little script which can be used with Pari-tty (just copy & paste it into the syntax notepad once Pari-tty has started, and process it line by line). Using Pari-tty (a GUI encapsulating Pari/GP) makes sense here, because it lets you keep the involved matrices permanently on the screen and watch how the summation processes work.

One also needs the matrix-definition files, which provide functions and matrix constants. I'll attach them - without editing them, however. They can be read into Pari/GP at the beginning - in fact, the following Pari-tty script does this. Remember that for Pari-tty you need Pari/GP 2.2.11, since the low-level communication protocol changed a bit in the newer versions (you may download this version from my Pari-tty webspace as well). Put these files in the standard directory of your Pari-tty syntax folder as created by the Pari-tty installation (see http://go.helms-net.de/sw/paritty )

Here is the paritty-script for the "basics-demo".

Gottfried

---------------------

Code:

\\ Pari-tty (Version Beta 2.03.1 (2007.10.11) (en))

\\ (c) Gottfried Helms mailto:helms@uni-kassel.de

\\ Info: http://go.helms-net.de/sw/paritty

\\ ---------------------------------------------------------

\\ this demo is best viewed if the window is split

\\ horizontally; use the split-window symbol in the top tool-bar.

\\ Don't maximize this window, since we show some child windows

\\ at the left side of the desktop

\\

\\ to follow the demo, it's best to send the following lines separately:

\\ position the cursor in that line and click the CAL-button or press

\\ the F2-key

\\

\\ =============================================================

n=20 \\ set dimension for matrices first

\\ get matrix-definitions, -constants and -functions

\r % _matrixinit.gp

\\ here the %-sign is a Pari-tty meta-tag, which inserts the current path before the file name.

\\ You may omit it if the Pari/GP default "path" is set appropriately.

\\ The path is set (to my needs) in _matrixinit.gp. You must adapt the first line

\\ in this file first!

\\ give a name for the project

%proj Basics

\\ show some basic matrices.

%box >P P \\ P-matrix

%box >P PInv \\ I saved the inverse P^-1 as matrix constant

%box >P VE(P,6) \\ truncation of P-matrix, shall suffice for display

%box >S2 VE(St2,6) \\ Stirling kind 2

%box >S2 VE(PInv*St2,6) \\ I use the "shifted" version

\\ note that in dFac(b) "b" is the exponent on the factorials, so

\\ dFac(1) = diag(0!^1,1!^1,2!^1,...),

\\ dFac(-1) = diag(1/0!, 1/1!, 1/2! , ... )

%box >S2 S2 = dFac(-1)*PInv*St2*dFac(1);VE(S2,6) \\ and then the factorial scaled version

%box >chk V(1)~ * S2 \\ column-summing

%box >chk 1.0*V(1)~ * S2 \\ column-summing, float numeric

\\ gives e^1-1 in the second column, and its powers

\\ in the following columns, so we have a powerseries-vector

\\ as an output, the same form as input.

%box >chk2 V(exp(1)-1)~ \\ compare; here, however, we didn't care about

\\ the quality of the approximation

\\ ---- discussion of partial sums and Euler summation -------------------

\\

\\ to display partial sums (heavily used later) I introduce a triangular matrix

%box >DR VE(DR,6) \\ to compute and display partial sums, show only 6x6-submatrix

\\ applied:

%box >chk1 DR* 1.0* S2 \\ partial sums converge to V(exp(1)-1)~

\\ simple partial summing does not converge well for

\\ geometric series:

%box >chk1 DR*1.0*Mat(V(-2))

\\ for these cases I employ Euler-summation.

%box >chk1 ESum(1.0)*Mat(V(-2)) \\ Euler-sum of order 1 (direct sum) = DR

%box >chk1 ESum(2.0)*Mat(V(-2)) \\ Euler-sum of order 2 (classic Euler sum order 1)

%box >chk1 ESum(3.0)*Mat(V(-2)) \\ Euler-sum of order 3 (classic Euler sum order 2)

%box >chk1 ESum(4.0)*Mat(V(-2)) \\ Euler-sum of order 4 (classic Euler sum order 3)

\\ note that higher orders do not give the fastest approximation. The order must

\\ be appropriate to the growth-rate of the geometric series.

\\ to illustrate this, another example:

%box >chk1 ESum(3.0)*Mat(V(-3)) \\ Euler-sum of order 3 (classic Euler sum order 2)

%box >chk1 ESum(4.0)*Mat(V(-3)) \\ Euler-sum of order 4 (classic Euler sum order 3)

%box >chk1 ESum(5.0)*Mat(V(-3)) \\ Euler-sum of order 5 (classic Euler sum order 4)
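As an aside: a crude stand-in for the ESum() operator (whose exact definition is in the matrix-module) is Knopp's generalized Euler transform (E,q). For a geometric series sum z^k its transformed terms collapse to (q+z)^n/(1+q)^(n+1), convergent exactly when |q+z| < 1+q. This Python sketch shows why the series for -2 is summable already at a low order while -3 needs a higher one; the mapping between q and the script's order parameter is only analogous, not claimed identical:

```python
def euler_sum_geometric(z, q, terms=60):
    """(E,q)-transform partial sum of the geometric series sum_{k>=0} z^k.

    Transformed term n is sum_{j<=n} C(n,j) q^(n-j) z^j / (1+q)^(n+1),
    which for a geometric series collapses to (q+z)^n / (1+q)^(n+1);
    it converges to 1/(1-z) exactly when |q+z| < 1+q.
    """
    return sum((q + z)**k / (1.0 + q)**(k + 1) for k in range(terms))

# sum (-2)^k "=" 1/(1-(-2)) = 1/3 : q=1 suffices, since |1-2| = 1 < 2
print(euler_sum_geometric(-2.0, 1.0))   # ~ 0.3333...

# sum (-3)^k "=" 1/4 : q=1 is only borderline (|1-3| = 2 = 1+1),
# so a higher order is needed, e.g. q=2 (|2-3| = 1 < 3)
print(euler_sum_geometric(-3.0, 2.0))   # ~ 0.25
```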

\\ ==============================================================================

\\ back to S2-matrix and U-tetration.

%box >chk1 1.0*ESum(1.0) * S2 \\ gives V(exp(1)-1)~ as shown above

\\ best approximation (using all terms) in last row

%box >chk1 1.0*ESum(1.0) * S2*S2 \\ gives next iteration

\\ check (output in standard window)

tmp = exp(1)-1

tmp = exp(tmp) - 1 \\ should be the same as in the chk1-box, 2nd column

tmp^2 \\ should be the same as in 3rd column; but Euler-sum had too small order

\\ so approximation above was not optimal. try a better "Euler-order"

%box >chk1 1.0*ESum(0.8) * S2*S2 \\ Euler-sum is even better with lower order! Strange....

\\ at least for 2nd and 3rd column

\\ for completeness let's look at the inverse of S2

\\ it's the shifted factorial scaled St1, Stirling kind 1

%box >S1 St1 \\ Stirling kind 1

%box >S1 St1*P \\ Stirling kind 1 shifted version

%box >S1 S1=dFac(-1)*St1*P*dFac(1);VE(S1,6) \\ Stirling kind 1 shifted version, factorial scaled

\\ check, is it the inverse of S2

%box >chk1 VE(S2^-1,6)

\\ as the inverse of S2, which performs the (iterable) map V(x)->V(exp(x)-1), it

\\ should iterably perform x->log(1+x)

\\ since I use the partial-sum display via Euler-sum, the V(x)-parameter

\\ must be supplied in diagonal-format

\\ using dV(1) should thus give V(log(1+1))=V(log(2))

%box >chk1 ESum(1.45)*dV(1)*S1 \\ this looks good,

\\ the consecutive powers are obvious

\\ now iterated, using S1*S1

%box >chk1 ESum(1.8)*dV(1)*S1*S1 \\ this looks again good,

\\ check

tmp = log(1+1)

tmp = log(1+tmp)

\\ and the consecutive powers

tmp^2

tmp^3 \\ very good.
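As an aside, the same kind of outside check works for S1: with the signed Stirling numbers of the first kind, the factorial-scaled matrix sends V(x)~ to V(log(1+x))~. Again a Python sketch using the plain (unshifted) scaling, my assumption mirroring the script's S1; x must stay inside the radius of convergence |x| < 1 for the truncation to be accurate:

```python
from math import factorial, log

n = 32
x = 0.5      # keep |x| < 1, inside the radius of convergence of log(1+x)

# signed Stirling numbers of the first kind:
# s(m,k) = s(m-1,k-1) - (m-1)*s(m-1,k)
s = [[0] * n for _ in range(n)]
s[0][0] = 1
for m in range(1, n):
    for k in range(1, m + 1):
        s[m][k] = s[m - 1][k - 1] - (m - 1) * s[m - 1][k]

# factorial-scaled analogue of S1: entry = s(m,k) * k! / m!
S1 = [[s[m][k] * factorial(k) / factorial(m) for k in range(n)]
      for m in range(n)]

# V(x)~ * S1 : column k approximates log(1+x)^k
V   = [x**m for m in range(n)]
res = [sum(V[m] * S1[m][k] for m in range(n)) for k in range(n)]

print(res[1], log(1 + x))       # ~ 0.405465...
print(res[2], log(1 + x)**2)    # ~ 0.164402...
```

For x = 1 the series is only conditionally convergent and the partial sums oscillate, which is where the Euler-order tuning of the script (ESum(1.45), ESum(1.8)) comes in.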

\\ ======================================================================

\\ now tetration

\\ we construct the matrix B

B = matrix(n,n,r,c,(c-1)^(r-1)/(r-1)!);

%box >B VE(B,6)

\\ note that this is also B = S2 * P~

%box >chk1 VE(S2*P~,6)

\\ now perform T:= x -> e^x, using x=1 first

%box >chk1 ESum(1.0)*dV(1)*B \\ giving [1,e,e^2,e^3,...]

\\ iterate to get x->e^e^x

%box >chk1 ESum(1.0)*dV(1)*B^2 \\ giving [1,e^e,(e^e)^2,(e^e)^3,...]
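As a quick outside check (a Python sketch, with 0-based indices instead of GP's 1-based ones): the column sums V(1)~ * B and the decomposition B = S2*P~ can both be verified; the latter reduces entrywise to the classical identity sum_k S(r,k)*k!*C(c,k) = c^r:

```python
from math import comb, exp, factorial

n = 32

# B[r][c] = c^r / r!   (0-based form of (c-1)^(r-1)/(r-1)!)
B = [[c**r / factorial(r) for c in range(n)] for r in range(n)]

# V(1)~ * B : column c is the truncated exponential series for e^c
res = [sum(B[r][c] for r in range(n)) for c in range(n)]
print(res[1], exp(1))    # ~ 2.71828...
print(res[2], exp(2))    # ~ 7.38905...

# check B = S2*P~ entrywise via sum_k S(r,k) * k! * C(c,k) = c^r
S = [[0] * n for _ in range(n)]
S[0][0] = 1
for m in range(1, n):
    for k in range(1, m + 1):
        S[m][k] = k * S[m - 1][k] + S[m - 1][k - 1]
for r in range(6):
    for c in range(6):
        lhs = sum(S[r][k] * factorial(k) * comb(c, k) for k in range(r + 1))
        assert lhs == c**r    # exact integer identity
```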

\\ now make it general: assume a base-parameter s1 to perform x->s1^x

\\ still using x = 1 (by the dV(1.0)-parameter)

s1 = 2

Bs = dV(log(s1))*B;

%box >chk1 ESum(1.0)*dV(1)*Bs \\ giving [1,2,(2)^2,(2)^3,...]

\\ iterate to get s^s^x, approx is a bit more difficult

%box >chk1 ESum(0.8)*dV(1)*Bs^2 \\ giving [1,2^2,(2^2)^2,(2^2)^3,...]

\\ but we may either increase the dimension n (which requires rereading the initial matrix-module!)

\\ or use x=1/2 instead: s1^s1^0.5 = 2^sqrt(2)

%box >chk1 ESum(1.0)*dV(1/2)*Bs^2 \\ giving [1,2^2^0.5,(2^2^0.5)^2,(2^2^0.5)^3,...]

2^sqrt(2) \\ check

\\ The base-parameter s1=2 is outside the e^(-e)...e^(1/e) range, so

\\ let's take a better parameter, say s1=sqrt(2)

s1 = sqrt(2)

Bs = dV(log(s1))*B;

%box >chk1 ESum(1.0)*dV(1)*Bs \\ giving [1,s1,(s1)^2,(s1)^3,...]

\\ iterate to get s^s^x, approx is a bit more difficult

%box >chk1 ESum(0.9)*dV(1)*Bs^2 \\ giving [1,s^s,(s^s)^2,(s^s)^3,...]

tmp=sqrt(2)^sqrt(2) \\ check

tmp^2

tmp^3 \\ ok, leave it with this.
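As an aside, for this base the truncated matrix product can be imitated directly in Python; no Euler summation is needed, because for s1 = sqrt(2) all terms are positive and rapidly decaying:

```python
from math import factorial, log, sqrt

n = 32
s1 = sqrt(2)
t = log(s1)

# Bs[r][c] = (t*c)^r / r!   (0-based form of dV(log(s1))*B)
Bs = [[(t * c)**r / factorial(r) for c in range(n)] for r in range(n)]

def times_Bs(v):
    """row vector v -> v * Bs (plain truncated matrix product)"""
    return [sum(v[r] * Bs[r][c] for r in range(n)) for c in range(n)]

v1 = times_Bs([1.0] * n)   # V(1)~ * Bs   ~ V(s1)~    = [1, s1, s1^2, ...]
v2 = times_Bs(v1)          # V(1)~ * Bs^2 ~ V(s1^s1)~

print(v1[1], s1)           # ~ 1.41421...
print(v2[1], s1**s1)       # ~ 1.63252...
```

The high columns of v1 carry noticeable truncation error, but they barely contribute to the low columns of v2, since they are damped by t^r/r!; that is why this base behaves so much better than s1 = 2.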

\\ =======================================================================

\\ now naive (numerical) fractional tetration, say height=1/2

\\ we need either matrix-log or eigensystem.

\\ let's begin with matrix-log

BsLog = MLog(Bs,200,1e-80); \\ use 200 terms for log-series, or stop at error<1e-80

%box >chk1 VE(BsLog,n,6) \\ nice, the terms seem to approach zero along a column

\\ to build powers, we multiply by a constant and exponentiate

BsPow = MExp(1/2*BsLog,200,1e-80) ;

%box >chk1 VE(BsPow,n,6) \\ nice, the terms still approach zero

\\ now compute the half-iterate sqrt(2)^^0.5

\\ store the result in a variable

%box >chk1 res = ESum(1.0)*dV(1)*BsPow

\\ the scalar result of sqrt(2)^^0.5 is in res[n,2]

tmp = res[n,2]

\\ now iterate: insert this value into the dV()-parameter

%box >chk1 res = ESum(1.0)*dV(tmp)*BsPow \\ perfect

\\ so we have the half-iterates 1.0 -> 1.24362158223 -> 1.41421352681
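As an aside, the MLog/MExp mechanics can be seen exactly on a matrix where both series terminate: the transposed binomial matrix P~. Since V(y)~ * P~ = V(y+1)~, the half-power MExp(1/2*MLog(P~)) must perform the half-iterate x -> x+0.5. This Python sketch only illustrates the principle; for Bs itself the log-series does not terminate, which is why the script truncates at 200 terms:

```python
from math import comb

n = 8   # small dimension; everything below terminates exactly

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

I  = [[float(i == j) for j in range(n)] for i in range(n)]
Pt = [[float(comb(c, r)) for c in range(n)] for r in range(n)]  # P~

def mlog(M):
    """Mercator series log(M) = sum (-1)^(k+1) (M-I)^k / k.

    For P~ the matrix M-I is strictly triangular (nilpotent), so the
    series terminates after n-1 terms and the result is exact."""
    X = [[M[i][j] - I[i][j] for j in range(n)] for i in range(n)]
    L = [[0.0] * n for _ in range(n)]
    T = [row[:] for row in X]           # T = X^k
    for k in range(1, n):
        sign = 1.0 if k % 2 == 1 else -1.0
        for i in range(n):
            for j in range(n):
                L[i][j] += sign * T[i][j] / k
        T = matmul(T, X)
    return L

def mexp(M):
    """exp(M) = sum M^k / k!  (terminates for nilpotent M)."""
    E = [row[:] for row in I]
    T = [row[:] for row in I]           # T = M^k / k!
    for k in range(1, n):
        T = [[v / k for v in row] for row in matmul(T, M)]
        for i in range(n):
            for j in range(n):
                E[i][j] += T[i][j]
    return E

# half-power of P~ : performs the half-iterate of x -> x+1
Half = mexp([[0.5 * v for v in row] for row in mlog(Pt)])

y   = 2.0
V   = [y**r for r in range(n)]
res = [sum(V[r] * Half[r][c] for r in range(n)) for c in range(n)]
print(res[1])   # ~ 2.5  = y + 0.5
print(res[2])   # ~ 6.25 = (y + 0.5)^2
```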

\\ ================================================================

\\ The same can be done using eigensystem, I don't show it here.

\\ The problem of convergence and of the appropriate order of

\\ Euler-summation gets ubiquitous once one uses more difficult

\\ parameters. Unfortunately, to acquire better approximations

\\ one needs more terms, i.e. bigger matrices - but with

\\ dim>20 we need more digits (at least 400 or 800), and dim>32

\\ seems to be impossible to compute using Pari/GP's builtin routine.

\\ I implemented my own version of an eigensystem-solver, but with

\\ not much improvement.

\\ Also, the "empirical" approach of finding the eigensystem numerically

\\ has a systematic flaw: it cannot reflect the

\\ finite truncation of the theoretically infinite matrix.

\\ That was the reason why I searched for an analytical description

\\ (which I found, based on a hypothesis about the eigenvalues).

\\ I can show this another time/another script -

\\ Gottfried Helms 2.11.2007

Gottfried Helms, Kassel