Matrix Operator Method
#1
I'm surprised.
The matrix operator method promises, for this version of tetration (x --> exp(x)-1), even better behaviour than for general tetration itself.
Assume the constant matrix operator (for parameter s = exp(1))

Code:
C=
  1.0000000               .             .            .           .           .           .           .
          0       1.0000000             .            .           .           .           .           .
          0      0.50000000     1.0000000            .           .           .           .           .
          0      0.16666667     1.0000000    1.0000000           .           .           .           .
          0     0.041666667    0.58333333    1.5000000   1.0000000           .           .           .
          0    0.0083333333    0.25000000    1.2500000   2.0000000   1.0000000           .           .
          0    0.0013888889   0.086111111   0.75000000   2.1666667   2.5000000   1.0000000           .
          0   0.00019841270   0.025000000   0.35833333   1.6666667   3.3333333   3.0000000   1.0000000
     ..           ...

or, written with exact fractions:

Code:
  1       .       .       .     .     .  .  .
  0       1       .       .     .     .  .  .
  0     1/2       1       .     .     .  .  .
  0     1/6       1       1     .     .  .  .
  0    1/24    7/12     3/2     1     .  .  .
  0   1/120     1/4     5/4     2     1  .  .
  0   1/720  31/360     3/4  13/6   5/2  1  .
  0  1/5040    1/40  43/120   5/3  10/3  3  1

which is just a factorial scaling of the matrix of Stirling numbers of the second kind (a short sketch verifying this scaling follows the table below):

Code:
St2=
  1       .       .       .     .     .   .  .
  0       1       .       .     .     .   .  .
  0       1       1       .     .     .   .  .
  0       1       3       1     .     .   .  .
  0       1       7       6     1     .   .  .
  0       1      15      25    10     1   .  .
  0       1      31      90    65    15   1  .
  0       1      63     301   350   140  21  1
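
To make this scaling explicit, here is a minimal sketch (my own illustration in Python, not code from this thread; the dimension N = 8 simply matches the tables above) that rebuilds C from the Stirling numbers of the second kind via C[r][c] = S2(r,c) * c!/r!:

Code:
# Sketch: reconstruct C from Stirling numbers of the 2nd kind.
# Assumption: truncation to an 8x8 block, as displayed above.
from fractions import Fraction
from math import factorial

N = 8

# Stirling numbers of the second kind via S2(r,c) = S2(r-1,c-1) + c*S2(r-1,c)
S2 = [[0] * N for _ in range(N)]
S2[0][0] = 1
for r in range(1, N):
    for c in range(1, r + 1):
        S2[r][c] = S2[r - 1][c - 1] + c * S2[r - 1][c]

# Factorial scaling: column c of C holds the Taylor coefficients of (exp(x)-1)^c
C = [[Fraction(S2[r][c] * factorial(c), factorial(r)) for c in range(N)]
     for r in range(N)]

for row in C:
    print([str(x) for x in row])   # reproduces 1, 1/2, 1/6, ..., 7/12, 3/2, ...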

Now parametrize C with s by premultiplying it with the diagonal matrix
of powers of log(s):

Code:
Cs = diag(1,log(s),log(s)^2,...) * C

Denote the entries of the second column of Cs as c_r, with the row index r beginning at zero.

Then

sum{r=0..inf} c_r = s - 1

Example:

s = sqrt(2)

Code:
Cs=
  1.0000000                  .                .               .              .              .              .               .
          0         0.34657359                .               .              .              .              .               .
          0        0.060056627       0.12011325               .              .              .              .               .
          0       0.0069380136      0.041628081     0.041628081              .              .              .               .
          0      0.00060113307     0.0084158630     0.021640790    0.014427194              .              .               .
          0     0.000041667369     0.0012500211    0.0062501054    0.010000169   0.0050000843              .               .
          0    0.0000024068016    0.00014922170    0.0012996729   0.0037546105   0.0043322429   0.0017328972               .
          0   0.00000011916198   0.000015014410   0.00021520654   0.0010009607   0.0020019213   0.0018017292   0.00060057639

The partial sums of V(1)~ * Cs converge very well; only the first few partial sums are given:

Code:
1.0000000            .            .             .             .              .
  1.0000000   0.34657359            .             .             .              .
  1.0000000   0.40663022   0.12011325             .             .              .
  1.0000000   0.41356823   0.16174133   0.041628081             .              .
  1.0000000   0.41416936   0.17015720   0.063268872   0.014427194              .
  1.0000000   0.41421103   0.17140722   0.069518977   0.024427362   0.0050000843
  1.0000000   0.41421344   0.17155644   0.070818650   0.028181973   0.0093323272
  1.0000000   0.41421356   0.17157146   0.071033857   0.029182933    0.011334249
  1.0000000   0.41421356   0.17157277   0.071063777   0.029393679    0.011984698
  1.0000000   0.41421356   0.17157287   0.071067386   0.029430750    0.012150514
  1.0000000   0.41421356   0.17157287   0.071067771   0.029436389    0.012185671
     ...         ...            ...         ...           ...             ...
= (s-1)^0      (s-1)^1      (s-1)^2      (s-1)^3       (s-1)^4       (s-1)^5
so
s - 1 ~ 0.41421356
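
As a numeric cross-check, a small sketch (my own Python code, not from this thread; numpy and the truncation size N = 20 are assumptions) builds Cs = diag(1, log(s), log(s)^2, ...) * C for s = sqrt(2) and sums its columns; the sums should reproduce the powers of s - 1 shown above:

Code:
# Sketch: column sums of the truncated Cs approximate (s-1)^c for s = sqrt(2).
# Assumptions: numpy, truncation to N = 20 rows/columns.
import numpy as np
from math import log, sqrt, factorial

N = 20
s = sqrt(2.0)

# Stirling numbers of the second kind, then the factorial-scaled matrix C
S2 = np.zeros((N, N))
S2[0, 0] = 1.0
for r in range(1, N):
    for c in range(1, r + 1):
        S2[r, c] = S2[r - 1, c - 1] + c * S2[r - 1, c]
C = np.array([[S2[r, c] * factorial(c) / factorial(r) for c in range(N)]
              for r in range(N)])

# Cs = diag(1, log s, log(s)^2, ...) * C
Cs = np.diag([log(s) ** r for r in range(N)]) @ C

# V(1)~ * Cs: each column sum should approach (s-1)^c
print(Cs.sum(axis=0)[:4])          # ~ 1.0, 0.41421356, 0.17157288, 0.071067812
print((s - 1.0) ** np.arange(4))   # the same values

The identity behind this is simply that column c of Cs carries the Taylor coefficients of (s^x - 1)^c, so V(1)~ * Cs evaluates these powers at x = 1.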
---------------------------------------------------------------------------

Take Cs to any integral power to perform iteration, say Cs^y, and denote the
entries of its second column as cy_r (note that these are different for
different y!).



y=2
sum{r=0..inf} cy_r = s^(s - 1)-1

Code:
Cs^2=
  1.0000000                 .                .                .                .                .                 .                  .
          0        0.12011325                .                .                .                .                 .                  .
          0       0.028027638      0.014427194                .                .                .                 .                  .
          0      0.0051933906     0.0067329815     0.0017328972                .                .                 .                  .
          0     0.00087258195     0.0020331386     0.0012130805    0.00020814392                .                 .                  .
          0     0.00013909595    0.00050073425    0.00050784250    0.00019427606   0.000025000843                 .                  .
          0    0.000021254738    0.00010929866    0.00016468480    0.00010399801   0.000029168911   0.0000030029326                  .
          0   0.0000031256524   0.000021966331   0.000045603341   0.000041826549   0.000019017611   0.0000042042874   0.00000036069200
The partial sums of V(1)~ * Cs^2 still converge very well; the first few partial sums:
Code:
1.0000000            .             .              .               .                .
  1.0000000   0.12011325             .              .               .                .
  1.0000000   0.14814089   0.014427194              .               .                .
  1.0000000   0.15333428   0.021160175   0.0017328972               .                .
  1.0000000   0.15420686   0.023193314   0.0029459776   0.00020814392                .
  1.0000000   0.15434596   0.023694048   0.0034538201   0.00040241997   0.000025000843
  1.0000000   0.15436721   0.023803347   0.0036185049   0.00050641798   0.000054169754
  1.0000000   0.15437034   0.023825313   0.0036641083   0.00054824453   0.000073187365
  1.0000000   0.15437078   0.023829461   0.0036754279   0.00056227479   0.000082316680
  1.0000000   0.15437085   0.023830207   0.0036780174   0.00056641656   0.000085912777
  1.0000000   0.15437085   0.023830335   0.0036785730   0.00056752723   0.000087142908
  1.0000000   0.15437086   0.023830357   0.0036786861   0.00056780338   0.000087521005
  1.0000000   0.15437086   0.023830360   0.0036787081   0.00056786794   0.000087627785
  1.0000000   0.15437086   0.023830361   0.0036787123   0.00056788228   0.000087655927
     ...         ...            ...         ...           ...             ...
=           s^(s-1)-1     (s^(s-1)-1)^2   ...

and the result is in the second column:

s^(s-1)-1 = 0.15437086
-----------------------------------------------------------------------------------------
y=3

sum{r=0..inf} cy_r = s^(s^(s - 1)-1)-1

and so on.
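
The same kind of check works for the integer iterates. A hedged sketch (my own Python code; numpy and N = 20 are assumptions): the second-column sum of Cs^y matches the y-fold iterate of f(x) = s^x - 1 started at x = 1.

Code:
# Sketch: second-column sums of Cs^y reproduce the y-fold iterates of
# f(x) = s^x - 1 at x = 1.  Assumptions: numpy, truncation to N = 20.
import numpy as np
from math import log, sqrt, factorial

N = 20
s = sqrt(2.0)

S2 = np.zeros((N, N))
S2[0, 0] = 1.0
for r in range(1, N):
    for c in range(1, r + 1):
        S2[r, c] = S2[r - 1, c - 1] + c * S2[r - 1, c]
C = np.array([[S2[r, c] * factorial(c) / factorial(r) for c in range(N)]
              for r in range(N)])
Cs = np.diag([log(s) ** r for r in range(N)]) @ C

x = 1.0
for y in (1, 2, 3):
    x = s ** x - 1.0                                    # direct iteration of f
    via_matrix = np.linalg.matrix_power(Cs, y)[:, 1].sum()
    print(y, x, via_matrix)      # y=1: ~0.41421356, y=2: ~0.15437086, ...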


==========================================================================
Continuous version.

Since Cs has triangular structure, its eigensystem is very simple:
the eigenvalues are just the powers of log(s), and the matrices of
eigenvectors are triangular. It also appears that their entries
do not change with increasing dimension.

Let
Cs = Q * D * Q^-1

then for the example s=sqrt(2)

Code:
D = diag(  1.0000000   0.34657359   0.12011325   0.041628081  ... )

Q =
  1.0000000               .              .             .
          0       1.0000000              .             .
          0      0.26519711      1.0000000             .
          0     0.058953682     0.53039422     1.0000000
          0     0.012370448     0.18823687    0.79559133
          0    0.0025334009    0.056009589    0.38784957
          0   0.00051044437    0.015103553    0.14956860
          0   0.00010163175   0.0038231569   0.050149006
where D and Q are stable for higher dimensions.
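
For completeness, a sketch of the numerical eigendecomposition (my own Python code; numpy, the truncation N = 20 and the unit-diagonal normalization of Q are assumptions, not anything fixed by this thread): the eigenvalues come out as the powers of log(s), and the rescaled eigenvector matrix matches the Q shown above.

Code:
# Sketch: eigendecomposition Cs = Q * D * Q^-1 of the truncated matrix.
# Assumptions: numpy, N = 20; Q is rescaled to a unit diagonal to match
# the normalization used in the table above.
import numpy as np
from math import log, sqrt, factorial

N = 20
s = sqrt(2.0)

S2 = np.zeros((N, N))
S2[0, 0] = 1.0
for r in range(1, N):
    for c in range(1, r + 1):
        S2[r, c] = S2[r - 1, c - 1] + c * S2[r - 1, c]
C = np.array([[S2[r, c] * factorial(c) / factorial(r) for c in range(N)]
              for r in range(N)])
Cs = np.diag([log(s) ** r for r in range(N)]) @ C

evals, evecs = np.linalg.eig(Cs)
order = np.argsort(-evals.real)        # log(s) < 1, so its powers decrease
D = evals[order].real                  # ~ 1, 0.34657359, 0.12011325, ...
Q = evecs[:, order].real
Q = Q / np.diag(Q)                     # rescale columns to a unit diagonal

print(D[:4])                           # powers of log(sqrt(2))
print(Q[2, 1], Q[3, 1])                # ~ 0.26519711, 0.058953682
print(np.allclose(Cs, Q @ np.diag(D) @ np.linalg.inv(Q)))   # expected: True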

Now compute the half-iterate of Cs by simply using the square root
of D:

Cs^0.5 = Q * D^0.5 * Q^-1

Code:
Cs^0.5=
  1.0000000                    0                  0                0              0  
          0           0.58870501                  0                0              0  
          0          0.064212553         0.34657359                0              0  
          0         0.0026279354        0.075604503       0.20402961              0  
          0      -0.000053277975       0.0072174095      0.066763125     0.12011325  
          0     0.00000075647862      0.00027476287      0.010014456    0.052405048

And the half-iterate of the function is
Code:
1.0000000            .            .            .            .             .
  1.0000000   0.58870501            .            .            .             .
  1.0000000   0.65291756   0.34657359            .            .             .
  1.0000000   0.65554550   0.42217809   0.20402961            .             .
  1.0000000   0.65549222   0.42939550   0.27079273   0.12011325             .
  1.0000000   0.65549298   0.42967027   0.28080719   0.17251830   0.070711274
  1.0000000   0.65549420   0.42967122   0.28161261   0.18323707    0.10927517
  1.0000000   0.65549417   0.42967248   0.28164602   0.18451886    0.11926607
  1.0000000   0.65549414   0.42967260   0.28164764   0.18461316    0.12084026
  1.0000000   0.65549414   0.42967258   0.28164786   0.18461814    0.12100355
  1.0000000   0.65549414   0.42967257   0.28164786   0.18461850    0.12101552
  1.0000000   0.65549414   0.42967257   0.28164786   0.18461852    0.12101631
     ...         ...            ...         ...           ...             ...
and the interesting result is in the 2nd column:

half-iterate = 0.65549414

The next half-iterate is then, as expected:
Code:
1.0000000   0.41421356   0.17157288   0.071067812   0.029437252    0.012193309

next(half-iterate) = 0.41421356
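
Finally, a sketch of this half-iterate check (my own Python code; numpy, N = 20 and the helper name apply_col1 are my assumptions): form Cs^0.5 = Q * D^0.5 * Q^-1, read the half-iterate at x = 1 from its second column, and apply it once more; the value should return to s - 1.

Code:
# Sketch: half-iterate of f(x) = s^x - 1 via the square root of the
# diagonalized Cs; applying it twice should give back s - 1.
# Assumptions: numpy, truncation to N = 20.
import numpy as np
from math import log, sqrt, factorial

N = 20
s = sqrt(2.0)

S2 = np.zeros((N, N))
S2[0, 0] = 1.0
for r in range(1, N):
    for c in range(1, r + 1):
        S2[r, c] = S2[r - 1, c - 1] + c * S2[r - 1, c]
C = np.array([[S2[r, c] * factorial(c) / factorial(r) for c in range(N)]
              for r in range(N)])
Cs = np.diag([log(s) ** r for r in range(N)]) @ C

# Cs^0.5 = Q * D^0.5 * Q^-1; all eigenvalues (powers of log s) are positive,
# so the principal square root is the natural choice
evals, Q = np.linalg.eig(Cs)
half = (Q @ np.diag(np.sqrt(evals.astype(complex))) @ np.linalg.inv(Q)).real

def apply_col1(M, x):
    # V(x)~ * M, entry of the second column: sum_r x^r * M[r][1]
    return (x ** np.arange(N)) @ M[:, 1]

h = apply_col1(half, 1.0)
print(h)                    # ~ 0.65549414, the half-iterate at x = 1
print(apply_col1(half, h))  # ~ 0.41421356 = s - 1, as in the table above
print(np.allclose(half @ half, Cs))   # the matrix square root squares back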

---------------------------------------------------------------

I don't know why the continuous version is assumed to be impossible.

Gottfried

(edit of the introductory remark)
(update 2: edit of the introductory tables; 2nd (now third) table corrected (I forgot the column scaling))
Gottfried Helms, Kassel
