Matrix Operator Method
#21
bo198214 Wrote:Hm, then I must have an error somewhere in my computation.
I wanted to compute the matrix logarithm via the formula
log(A) = (A-I) - (A-I)^2/2 + (A-I)^3/3 - ...
for A being the power derivation matrix of the function at 0, i.e. truncated to 6x6.
Hmm, I can crosscheck such a computation in the evening. But for a quick reply: did you look at the partial sums of the entries for increasing n, s_n(i,j) = sum(k=1..n) entry_k(i,j)? If their signs oscillate, we have a candidate for Euler-summation (for each entry separately!). Another option is the alternative series for the logarithm
let f = (a-1)*(a+1)^-1
log(a) = 2*( f/1 + f^3/3 + f^5/5 + ... )
(I don't have it at hand whether this series actually has alternating signs, but I think it's correct.) The same formula can be used for matrices A where (A+I) is invertible and *all* eigenvalues of (A-I)*(A+I)^-1 are in the admissible range (|z| < 1) for this series.
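A quick scalar sanity check of this series (the artanh expansion, with the factor 2; a minimal Python sketch, not from the original post):

```python
import math

def log_via_artanh(a, terms=60):
    """log(a) = 2*(f + f^3/3 + f^5/5 + ...),  f = (a-1)/(a+1);
    converges for every a > 0 since |f| < 1, with same-sign terms."""
    f = (a - 1.0) / (a + 1.0)
    return 2.0 * sum(f ** (2 * k + 1) / (2 * k + 1) for k in range(terms))

print(log_via_artanh(2.0), math.log(2.0))   # the two values agree
```

The matrix analogue substitutes f = (A-I)(A+I)^-1 and sums the same odd powers of f.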

Anyway, it is worth looking at the partial sums; Euler-summation of low order 2 can sum, for instance, 1-2+4-8+16-..., which is even more divergent than many of our sums, requiring only a few terms for a good approximation of the final result.
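As an illustration of that remark, a sketch of the classic Euler transformation (one common variant, not necessarily the exact "order 2" procedure meant here) summing 1-2+4-8+... to its Euler/Abel value 1/3:

```python
def euler_sum(a, n_terms):
    """Euler-transform of the alternating series sum_n (-1)^n a[n]:
    S = sum_k (-1)^k (Delta^k a)(0) / 2^(k+1),  Delta = forward difference."""
    diffs = list(a)                  # row 0 of the difference table
    total = 0.0
    for k in range(n_terms):
        total += (-1) ** k * diffs[0] / 2 ** (k + 1)
        diffs = [diffs[i + 1] - diffs[i] for i in range(len(diffs) - 1)]
    return total

# 1 - 2 + 4 - 8 + 16 - ...  has Euler/Abel sum 1/(1+2) = 1/3
print(euler_sum([2 ** k for k in range(40)], 30))
```

For a[n] = 2^n every forward difference is 1, so the transformed series is geometric with ratio -1/2 and converges rapidly.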

For a more detailed discussion I can do better in the evening.

Gottfried
Gottfried Helms, Kassel
#22
Gottfried Wrote:If their signs oscillate ...
No, their signs don't oscillate.
The reason is simple: some of the eigenvalues are greater than 1, and hence the logarithm series no longer converges.
This seems fixable with the series

which then properly converges.
#23
bo198214 Wrote:
Gottfried Wrote:If their signs oscillate ...
No, their signs don't oscillate.
The reason is simple: some of the eigenvalues are greater than 1, and hence the logarithm series no longer converges.
This seems fixable with the series

which then properly converges.
Well, I have it in my "Hütte Mathematische Tafeln":



To apply one of these series to a matrix, every eigenvalue must individually satisfy the same bound.

I think this settles the question for the most interesting cases.

For matrices with eigenvalues both <1 and >1, which occur for the Bs-matrices when s is outside the range 1/e^e ... e^(1/e), we still need workarounds, like the techniques for divergent summation. But that is then a completely different chapter. I'm happy that we have now arrived at a convergence of understanding of (one of?) the core conceptual points.

Gottfried
Gottfried Helms, Kassel
#24
Gottfried Wrote:With A1 = A-I then log(A) = A1 - A1*A1/2 + A1*A1*A1/3 - ...
which is a nice exercise... since it turns out that A1 is nilpotent, so we can compute an exact result using only as many terms as the dimension of A. For the infinite-dimensional case one can note that the coefficients stay constant as dim is increased step by step; only new coefficients are added below the previously last row.

Are you sure about this? For me it rather looks as if they converge.
The Eigenvalues are quite different depending on the truncation.
Even in the case where you can compute the logarithm via the infinite matrix power series, it should depend on where you truncate the matrix.
#25
bo198214 Wrote:
Gottfried Wrote:With A1 = A-I then log(A) = A1 - A1*A1/2 + A1*A1*A1/3 - ...
which is a nice exercise... since it turns out that A1 is nilpotent, so we can compute an exact result using only as many terms as the dimension of A. For the infinite-dimensional case one can note that the coefficients stay constant as dim is increased step by step; only new coefficients are added below the previously last row.

Are you sure about this? For me it rather looks as if they converge.
The Eigenvalues are quite different depending on the truncation.
Even in the case where you can compute the logarithm via the infinite matrix power series, it should depend on where you truncate the matrix.
Henryk - please excuse the delay. I was a bit exhausted after my search for the eigensystem-solution. Well, I think I didn't choose the best wording. What I wanted to say was: if A1 is nilpotent, then the series is finite. But A1 is only nilpotent if the base is e (the diagonal of A is 1 and the diagonal of A1 = A-I is zero) - I had in mind that we were talking about this base. Then the entries of the partial sums up to the power d do not change for rows < d.
Call B = S2 - I, where S2 is the matrix of Stirling numbers of the 2nd kind. The diagonal of B is zero, so B is strictly lower triangular and nilpotent.
The entries of the partial sums of a series, for instance
I, I+B/1!, I+B/1!+B^2/2!, ...
are constant up to rows
0, 1, 2, ...
respectively, since the additional terms do not contribute to those rows due to the nilpotency. This detail was all I wanted to point out; it's surely nothing of great importance...
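The nilpotency claim is easy to check in a small truncation; a minimal sketch (assuming the usual Stirling-2 convention with rows [1], [1,1], [1,3,1], [1,7,6,1], ...):

```python
def stirling2_matrix(n):
    """n x n lower-triangular Stirling-2 matrix with rows
    [1], [1,1], [1,3,1], [1,7,6,1], ... (zero-padded)."""
    tri = [[1]]
    for m in range(2, n + 1):
        prev = tri[-1] + [0]
        tri.append([k * prev[k - 1] + (prev[k - 2] if k >= 2 else 0)
                    for k in range(1, m + 1)])
    return [[tri[i][j] if j < len(tri[i]) else 0 for j in range(n)]
            for i in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

n = 6
S2 = stirling2_matrix(n)
B = [[S2[i][j] - (i == j) for j in range(n)] for i in range(n)]

# B has zero diagonal and is lower triangular, hence nilpotent: B^n = 0,
# so partial sums of I + B/1! + B^2/2! + ... freeze row by row
P = B
for _ in range(n - 1):
    P = matmul(P, B)            # P ends up as B^n
print(all(v == 0 for row in P for v in row))   # True
```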

Regards-
Gottfried
Gottfried Helms, Kassel
#26
Ah, you meant for the nilpotent case. Surely there the eigenvalues do not change.
#27
Hi friends - I've reached the point of asking "Where does the 'matrix-method' stand?", so I'll take the occasion to provide a wider resume.

I'll use the two definitions for the related iterations, where I like the Big-T-notation, and also use Big-U:
Code:

T(b,x,h) = b^T(b,x,h-1)        T(b,x,0) = x
U(b,x,h) = b^U(b,x,h-1)-1      U(b,x,0) = x

Also I call 1/e^e < b < e^(1/e) the "safe range" for b in the following.
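The two recursions can be sketched directly; for b inside the safe range the T-tower converges to the lower fixed point (e.g. b = sqrt(2), with fixed point 2). A minimal Python sketch:

```python
import math

def T(b, x, h):
    """T(b,x,h) = b^T(b,x,h-1),  T(b,x,0) = x  (integer heights)."""
    for _ in range(h):
        x = b ** x
    return x

def U(b, x, h):
    """U(b,x,h) = b^U(b,x,h-1) - 1,  U(b,x,0) = x."""
    for _ in range(h):
        x = b ** x - 1
    return x

# inside the safe range the T-tower converges; b = sqrt(2) gives
# the fixed point t = 2 (t^(1/t) = b), and U has the fixed point 0
b = math.sqrt(2)
print(T(b, 1.0, 100))   # close to 2
print(U(b, 0.5, 100))   # close to 0
```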

------------- Integer tetration --------------------------------
(I assume my matrix-notation is known here, but I have switched to the more common parameter b for the base, where I earlier used s)

Implementation of T(b,x,1) , value in the second column of the result
Code:
V(x)~ * dV(log(b))* B = V(y)~ = V(b^x)~

The matrix approach is just an elementary reformulation here. The coefficients needed for the transformation x --> b^x, i.e. T(b,x,0) --> T(b,x,1), are those of the appropriate exponential series, just collected in a matrix-scheme.

This is obviously iterable,
Code:
V(x)~   * dV(log(b))* B = V(b^x)~
V(b^x)~ * dV(log(b))* B = V(b^b^x)~
and since the result is finite, the involved coefficients, even if seen as matrix multiplications, should be well defined.
Code:
            Bb = dV(log(b))*B
V(x)~   * Bb^h = V({b,x}^^h)~
But I think it must also be formally proven that this "collection of coefficients" does in fact behave as a matrix, so that the rules for integer powers and other matrix-operations are applicable.
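Numerically, the truncated version behaves as claimed. A sketch, assuming the Carleman-style entries Bb[i][j] = (j*log b)^i / i!, which satisfy V(x)~ * Bb ≈ V(b^x)~ in truncation:

```python
import math

def Bb_truncated(b, n):
    """Truncated operator for x -> b^x: entries (j*log b)^i / i!,
    so that V(x)~ * Bb approximates V(b^x)~."""
    lb = math.log(b)
    return [[(j * lb) ** i / math.factorial(i) for j in range(n)]
            for i in range(n)]

def V(x, n):
    return [x ** i for i in range(n)]

def vec_mat(v, M):
    n = len(v)
    return [sum(v[i] * M[i][j] for i in range(n)) for j in range(n)]

n, b, x = 32, math.sqrt(2), 1.0
Bb = Bb_truncated(b, n)

y1 = vec_mat(V(x, n), Bb)   # ~ V(b^x)~      (one iteration)
y2 = vec_mat(y1, Bb)        # ~ V(b^b^x)~    (two iterations)
print(y1[1], b ** x)        # second entries reproduce the scalar values
print(y2[1], b ** b ** x)
```

So iteration of the scalar map and powers of the truncated matrix coincide numerically, which is observation a) below.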

For the easier version U() one finds a discussion of this for instance in Aldrovandi/Freitas [AF], which relies on the triangular form of the required operator-matrix; the useful property of row-finiteness is also addressed for instance in Berkolaiko [B], who proves the existence of the matrix-operator for any similar transformation.
I'm not sure whether I can use Berkolaiko's arguments for square matrices (and thus for the original tetration-iteration T()) as well, so I must leave open whether the collection of coefficients can indeed be used as a matrix.


But the numerical results, which are always approximations based on finite truncation, suggest that integer iteration and matrix-powers are interchangeable for the T()- and U()-transformations as well:

a) numerical results by iteration and by matrix-powers coincide.

b) linear combinations of different V(x)- vectors result in linear combinations of the according V(y)-vectors as expected

c) infinite sums of various V(x)-vectors give the expected results (tetra-geometric series). For a subset of parameters the result can be crosschecked by conventional scalar evaluation (possibly Euler-summation required) and agrees with those results.

d) even infinite sums of powers of Bb give results compatible with conventional scalar evaluation when the parameter b is in the safe range, and this can even be extended to parameters b outside this range (b > e^(1/e)). (Analytical arguments backing this observation are based on the eigensystem-hypothesis, see below.)


------ continuous tetration ------------------------

The continuous version then depends completely on the possibility of interpreting the collection of coefficients used in the integer version as a genuine matrix, including the option of taking matrix-logarithms and diagonalization (eigensystem-decomposition), since we need the concept of fractional iteration and thus of fractional matrix-powers.

Numerically

Again the numerical results agree with the expected results for the safe range of the base-parameter b and manageable height-parameters h.

Numerical approximations of the matrix-logarithm and of the diagonalization, using finite dimensions up to dim=32 and dim=64, already show the expected behaviour in a certain range of the parameters, although the empirical eigenvalues approximate the "true" (unknown) eigenvalues to an unknown degree.
Anyway, approximative results can be found even for some b outside the "safe range" (if not too far) and h not too high.
Just use your favorite Eigen-solver and apply fractional powers...
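A minimal numerical sketch of that recipe (numpy's eig as the "favorite eigen-solver"; the fractional power is only heuristic, as noted, so the sketch checks it by squaring back):

```python
import math
import numpy as np

def Bb_truncated(b, n):
    # truncated operator for x -> b^x: entries (j*log b)^i / i!
    lb = math.log(b)
    return np.array([[(j * lb) ** i / math.factorial(i)
                      for j in range(n)] for i in range(n)])

n, b = 8, math.sqrt(2)
M = Bb_truncated(b, n)

# eigen-solver, then fractional powers of the eigenvalues (heuristic!)
w, Vec = np.linalg.eig(M)
half = Vec @ np.diag(w.astype(complex) ** 0.5) @ np.linalg.inv(Vec)

# sanity check: the half power squares back to M
print(np.allclose(half @ half, M, atol=1e-6))

# read off the half-iterate of x -> b^x at x = 1 from the 2nd entry;
# it should lie between x = 1 and b^x = sqrt(2)
Vx = np.array([1.0] * n, dtype=complex)   # V(1) = (1,1,1,...)
y = (Vx @ half)[1]
print(y.real)
```

The degree of approximation to the "true" value is unknown, exactly as stated above; the square-back check only confirms internal consistency of the eigen-solver result.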


Analytically

Based on a hypothesis about the structure of the eigenvalues, a formal, analytical solution for the eigenvectors was found for parameter b in the "safe range". Again, the results for these parameters agree with the expected values when approximated by scalar operations, and fractional iterates were computed for some examples.

It was also possible to apply the hypothesis about the eigensystem for values b > e^(1/e) when the required parameters were taken from the set of complex fixpoints for those b-parameters. The matrices for some example parameters could be reproduced perfectly, for instance for b=3 and b=7, h=1, matching the matrices given by the simple integer approach.
Integer powers were also reproduced correctly.

Still one problem: fractional powers for b>eta

A problem occurred with non-integer powers here. The fractional powers of matrices constructed from the eigensystem hypothesis differed from the matrices computed by a numerical eigensystem-solver. Possibly the results are just complex rotations of each other (I haven't confirmed this yet), or the hypothesis must be modified to use complex conjugates or the like.

--------------------------------------------------------

The formula for the computation of the eigensystem uses the following decomposition. Let D be the diagonal matrix containing the eigenvalues, dV(u), and W the matrix of eigenvectors, such that
Code:
Bb = W^-1 * D * W
then a further decomposition, where b = t^(1/t), t possibly complex,
Code:
W = X * P~ * dV(t)
and u = log(t) leads to the full decomposition
Code:
Bb = (dV(t^-1) * P^-1~ * X^-1) * dV(u) * (X * P~ * dV(t))
where arbitrary fractional or complex powers of Bb are expressed by that powers of the scalar elements of the diagonal eigenvalue-matrix
Code:
D = dV(u)

Here X and P are triangular (and thus invertible); X, dV(t) and dV(u) depend on t, but only dV(u) is affected by the tetration-height-parameter h, so we must insert D^h = dV(u^h).

The coefficients of X are finite polynomials in t and u, with denominators that are products of (1-u), (1-u^2), (1-u^3), ..., which indicates singularities inherent to this method (see also Daniel's recent post; I'll add the reference later, as I'm writing this in a notepad without the http-references at hand).

While all coefficients of the individual matrices are then finitely computable, the analytical computation of W^-1 still involves evaluation of infinite series, because P^-1~ is not row-finite (for numerical computation it may simply be approximated by numerical inversion of W).

In practice this means there will be competing methods for the numerical evaluation of the complete matrix-product, in the sense of choosing the best evaluation order by exploiting the associativity of the matrix-products.

-----------------------------------------

So everything is still based on heuristics and hypotheses, which should now be proven to close the case.

For me it is now nearly beyond doubt that this method - for its implicit, underlying definition of tetration - will come out as a formally coherent/consistent method, at least as a formal skeleton/description. But - besides the need for formal proofs - I still have two problems:

1) the described matrix-approach allows dimensions of 24, 32, 64 and thus only that many terms for the final series. This is far too few for a general implementation; even for tests of approximations it is often too few. Jay announced series with up to 700 terms - since this seems possible, the method should be reviewed in this respect.

2) fractional powers of the analytically, eigensystem-constructed Bb-matrices for b > eta.
I could exploratively try to find out what goes wrong with the result and look for a remedy that way. But I would first like a hypothesis for the reason (and the remedy) of this problem. It surely has to do with the different branches of the complex logarithm, but I don't see precisely how this enters in the current case. The integer powers come out fine...


The U()-problem seems simpler than the T()-problem, since we deal only with triangular matrices with a more obvious eigensystem. Possibly the "closing of the case" will be easier if first attempted via an analysis of the U()-transformation.

Anyway, if it cannot be done in the near future, I think I'll take a longer break - I'm already feeling a bit exhausted and tired of the subject.

Gottfried


[AF] R. Aldrovandi and L.P. Freitas, "Continuous iteration of dynamical maps", physics/9712026, 16 Dec 1997.

[B] G. Berkolaiko, "Analysis of Carleman Representation of Analytical Recursions", Journal of Mathematical Analysis and Applications 224, 81-90 (1998), article no. AY985986.
Gottfried Helms, Kassel
#28
One encouraging result for fractional iterates with non-real base

Well, perhaps I was too scared by the difficult approximations of the matrix-operator-method for the cases where b is not in e^-e < b < eta.

With a careful computation of the half-iterate for the complex base I, I got
Code:
y1 = {I,1}^^0.5             ~ 1.16729812784 + 0.735996102206*I

y2 = {I,y1}^^0.5 = {I,1}^^1 ~ 0.000150635188062 + 1.00000615687*I
which is near the expected result, without need to change my hypothesis.

Remember my hypothetical formula for continuous tetration

y = {b,x}^^h

implemented by

V(y)~ = V(x)~ * (dV(log(b))*B)^h = V(x)~ * Bb^h
and
y = V(y)[1]

To arrive at the desired result, Bb^h must be constructed by the analytical description:
Let
W^-1 * D * W = Bb
be the eigendecomposition of Bb and

W^-1 * D^h * W = Bb^h

the h'th power of Bb
This can simply be approximated with any eigensystem-solver fed with the matrix Bb: exponentiate the eigenvalues and recompute...
But the result is then only heuristic, and we don't know the degree of approximation.

-------------

Following my hypothesis about the further structure of the eigen-matrices, we can compute this structure analytically.

I assumed:
W^-1 = dV(1/t)*P^-1 ~ * X^-1
and
W = X * P~ * dV(t)

P is the known lower-triangular Pascal matrix, X is a lower-triangular matrix depending on t and u, and dV(t) is a diagonal matrix containing the consecutive powers of t. The eigenvalues in D are the consecutive powers of u.

Here t and u depend on the base-parameter b, such that
t^(1/t) = b
u = log(t)

With my fixpoint-tracer I can first find a solution for t and b, at least for some values b outside the range e^-e < b < e^(1/e), and seemingly even for complex values of b (I still have to make sure of this for the general case of complex b).

The entries of X are composed of finite polynomials in t and u, and since X is triangular, its inverse as well as X^-1 * D^h * X can be computed exactly/symbolically without approximation.

As I have already indicated in a previous post, the direct computation of W^-1 is *not* possible without approximation, since P^-1~ is not row-finite; but the order of evaluation of the whole matrix expression can be arranged so that only one step needs approximative summation, giving inexact values. This can be the last step.

The whole formula in the analytical decomposition is

V(y)~ = V(x)~ * Bb^h
V(y)~ = V(x)~ * W^-1 * D^h * W
= V(x)~ * (dV(1/t)*P^-1 ~ * X^-1 ) * D^h * (X * P~ * dV(t))

Exploiting associativity we implement this by

= (V(x)~ * dV(1/t)*P^-1 ~ )* X^-1 * D^h * X * (P~ * dV(t))

V(y/t-1)~ = V(x/t-1)~ * X^-1 * D^h * X

Denote M(t,h) = X^-1 * D^h * X
then

V(y/t-1)~ = V(x/t-1)~ * M(t,h)
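The absorption of the left factors into the argument rests on two exact, row-finite identities: V(x)~ * dV(1/t) = V(x/t)~, and multiplying by P^-1~ subtracts 1 from the argument (P the lower-triangular Pascal matrix, with P^-1 entries (-1)^(i-j)*binomial(i,j)). A quick check in truncation (a sketch; helper names are mine):

```python
from math import comb

n, x, t = 12, 0.7, 2.0

def V(z):
    return [z ** i for i in range(n)]

def vec_mat(v, M):
    return [sum(v[i] * M[i][j] for i in range(n)) for j in range(n)]

def transpose(M):
    return [list(r) for r in zip(*M)]

# lower-triangular Pascal matrix and its inverse (signed binomials)
P = [[comb(i, j) for j in range(n)] for i in range(n)]
P_inv = [[(-1) ** (i - j) * comb(i, j) for j in range(n)] for i in range(n)]
dV = lambda s: [[(s ** i) * (i == j) for j in range(n)] for i in range(n)]

# V(x)~ * dV(1/t) * P^-1~  =  V(x/t - 1)~ , exactly in truncation
lhs = vec_mat(vec_mat(V(x), dV(1 / t)), transpose(P_inv))
print(lhs[:4])
print(V(x / t - 1)[:4])   # the two rows agree
```

Because both factors are row-finite, this step introduces no truncation error; only the final summation against M(t,h) does.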

M(t,h) is computed with exact terms, and only its second column is needed for the final computation. Call these terms m_r, using r for the row-index.

Let (x/t-1) = z then

y/t-1 = sum(r=0..inf) m_r * z^r

and finally

y = ((sum(r=0..inf) m_r * z^r ) + 1)*t

The terms of the series m_r * z^r still do not converge over the short run of 32 terms which I have available. But they are far better summable by Euler-summation than the terms which occur if I apply the naive form of summation using the second column of the precomputed Bb^h.

Possibly the problems that remain for the matrix-method can be reduced to mere acceleration of convergence (or regular summation of alternating divergent series), and are then just a matter of technical improvements of the numerical methods. That would be a very nice outcome, because we would then have a firm base from which only optimizations are required...

(This is worth a good glass of wine tonight :-) )

Gottfried
Gottfried Helms, Kassel
#29
Hi all -

just as a note: I'm preparing a compilation of my matrix-method. It is not yet finished, and because classes started recently I'll possibly not have much time to continue with the same pace and/or intensity. So I thought I'd provide the first half, which at least explains the "naive" method (and, for instance, relates it to the Bell-matrices).
If you like, have a look at Matrix-approach and send useful comments

Gottfried
Gottfried Helms, Kassel
#30
Hi -

although I wanted to step back for some weeks and look into the basics of the divergent-summation thing, I've just made a big step towards backing up my eigensystem-decomposition by a completely elementary iterative method, and I want to share this.

The symbolic eigensystem-decomposition is enormously resource-consuming and also not yet well documented and verified in a general manner. Here I propose a completely elementary (though still matrix-based) way to arrive at the same result as by eigensystem-analysis.

----------------------------------------------------

As you might recall from my "continuous iteration" article, I've found a matrix representation for the coefficients of the powerseries for the U-tetration, i.e. the continuous iteration of x -> t^x - 1.

Here I'll extend the notation for the powerseries a bit, compared to the text in the article:

Ut(x,h) = a1(t,h)*b1/d1/1!*x + a2(t,h)*b2/d2/2!*x^2 + ...
Let me, as earlier, use the notation u=log(t) for this text.

Then
* the b_k are powers of u (but of no special interest here),
* d_k are products of (u^j - 1), for instance d3 = (u-1)*(u^2-1),
and
* a_k(t,h) are bivariate polynomials, whose numerical coefficients are the sole mysteries in this powerseries.

Since we have two formal variables in each a_k, the coefficients can be arranged in matrices whose row- and column-multipliers are the consecutive powers of the parameters.

For instance a3(t,h) looks like the following:


The 3! removed from denominator makes this


d3 is the common denominator; it is d3 = (u-1)*(u^2-1).

This removed it is


Also b3 = u^2 removed, this is


Now the numerical coefficients of this polynomial can be arranged in a matrix
Code:
`
  A3 =  2  -3  1
        1  -3  2
where we understand that the columns are to be multiplied by consecutive powers of v = u^h, beginning at v^1, and the rows by consecutive powers of u, beginning at u^0:
Code:
`
  A3 = [ 2   -3   1 ]*u^0  
       [ 1   -3   2 ]*u^1
       ------------------  
       *v   *v^2 *v^3
The value of this term, given u and h, is then simply the sum of the row-sums, each weighted with its power of u.
Now v = u^h, and we have the interesting relation that any integer height h shifts column j down by j*h rows. So if h=0 we have no shift
Code:
`
  A3 = [ 2   -3   1 ]*u^0  
       [ 1   -3   2 ]*u^1
       ------------------  
       *1    *1  *1
and we have simple rowsums of zero (and also all A matrices provide coefficients of zero to the original powerseries the same way).
If h=1 we have
Code:
`
  A3 = [ 2   -3   1 ]*u^0  
       [ 1   -3   2 ]*u^1
       ------------------  
       *u   *u^2 *u^3
and in effect, this provides a column-shift
Code:
`
  A3 = [ 2          ]*u^1  
       [ 1   -3     ]*u^2
       [     -3   1 ]*u^3
       [          2 ]*u^4
       ------------------

= u(2 - 2u - 2u^2 + 2u^3 )
= 2u*(u - 1)*(u^2 - 1)
= 2u*d3


and we have another sum, where also d3 cancels the denominator.

With h=2 we have the most interesting case (concerning a_3)
Code:
`
  A3 = [ 2          ]*u^2  
       [ 1          ]*u^3
       [     -3     ]*u^4
       [     -3     ]*u^5
       [          1 ]*u^6
       [          2 ]*u^7
  = u^2(2 + u - 3u^2 - 3u^3 + u^4 + 2u^5)      
  = u^2*(u - 1)*(u^2-1)* (2*u^2 + 3*u + 2)
  = u^2 * d3 * (2*u^2 + 3*u + 2)


where again d3 cancels the denominator.

From inspection of the first few A-matrices I concluded that for each integer h the numerical coefficients of all A-matrices cancel the denominator. This even allows u=1, t=exp(1), and via fixpoint-shift the tetration-base b=e^(1/e), for integer heights, since the zero-denominators cancel.
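That cancellation can be tested mechanically for the first few integer heights: build the shifted row-sum polynomial of A3 and divide it exactly by d3 = (u-1)*(u^2-1). A minimal sketch with exact integer polynomial arithmetic (helper names are mine, not from the article):

```python
def shifted_poly(A, h):
    """Ascending coefficients (in u) of the row-sum polynomial after the
    column-shift for integer height h: column j (1-based) contributes
    at exponent i + j*h for row i."""
    rows, cols = len(A), len(A[0])
    p = [0] * (rows + cols * h + 1)
    for i in range(rows):
        for j in range(cols):
            p[i + (j + 1) * h] += A[i][j]
    return p

def poly_divmod(p, d):
    """Division of ascending-coefficient integer polynomials (d monic
    in its top coefficient); returns (quotient, remainder)."""
    p = p[:]
    q = [0] * (len(p) - len(d) + 1)
    for k in range(len(q) - 1, -1, -1):
        q[k] = p[k + len(d) - 1] // d[-1]
        for m in range(len(d)):
            p[k + m] -= q[k] * d[m]
    return q, p

A3 = [[2, -3, 1],
      [1, -3, 2]]
d3 = [1, -1, -1, 1]            # (u-1)*(u^2-1), ascending in u

for h in (1, 2, 3, 4):
    q, r = poly_divmod(shifted_poly(A3, h), d3)
    print(h, all(c == 0 for c in r))   # remainder vanishes: True
```

For h=1 the quotient is 2u and for h=2 it is u^2*(2u^2+3u+2), matching the hand computations above.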

Concerning fractional heights this column-shifting has an interesting consequence: for instance, if h is half-integer, the shifted entries interleave. For h=1/2 we have
Code:
`
  A3 = [ 2          ]*u^0.5  
       [     -3     ]*u^1.0
       [ 1        1 ]*u^1.5
       [     -3     ]*u^2.0
       [          2 ]*u^2.5
       ------------------

where the cancelling of the denominators cannot occur. This may give a feeling for the mysteries of fractional iteration.

========================================================================

But this was only to recall some known things. This msg is intended to cover another aspect.



The symbolic eigen-decomposition, which gives us the A-matrices, is extremely costly in time and memory, so I was searching for another method to determine the A-matrices independently.

My starting point was the striking property that the coefficients seem to guarantee that for integer heights the denominator is always a factor of the numerator.

So the A-matrices must be expressible as certain multiples of the denominators d; let's express their polynomial-coefficients as a vector D. If these multiples can be determined a priori, then the costly eigenvalue-decomposition is not needed.

Yesterday I found the solution (with the help of members of the seqfan mailing list), and it is an extremely simple routine.

We use the same example A_3 here, but since we don't want to use the eigendecomposition, all numerical coefficients in A are unknown. So we have
Code:
`
                              RS= // row-sums
  A3 = [ a0  -a1  a2]*u^0    [ ?]*u^0
       [ b0  -b1  b2]*u^1    [ ?]*u^1
       [  0    0   0]*u^2    [ 0]*u^2
       [  0    0   0]*u^3    [ 0]*u^3
                              //I've extended the rows to get
                              // compatibility with the following
                              // matrix-operation

and the row-sums must equal an integer multiple of D3 = column([1,-1,-1,1]). We know from the 3rd and 4th rows that the row-sums RS are zero, so in
Code:
`
      RS = D3 * k0

the unknown k0 must be zero.

The next condition exploits the properties at h=1 since we have
Code:
`
                             RS =
  A3 = [a0          ]*u^1    [ ?]*u^1
       [b0   -a1    ]*u^2    [ ?]*u^2
       [     -b1  a2]*u^3    [ ?]*u^3
       [          b2]*u^4    [ ?]*u^4
       ------------------

and we have that
Code:
`
      RS = D3 * k0

but we have no information about k0 here. From the eigenanalysis we know that k0 = 1.

The next condition, using h=2 is
Code:
`
                             RS=
  A3 = [ a0         ]*u^2    [ a0]*u^2
       [ b0         ]*u^3    [ b0]*u^3
       [     -a1    ]*u^4    [-a1]*u^4
       [     -b1    ]*u^5    [-b1]*u^5
       [          a2]*u^6    [ a2]*u^6
       [          b2]*u^7    [ b2]*u^7

and since this must again be an integer composition of the polynomial d3, we may express this by the matrix-multiplication
Code:
`
   RS = MD3 * K3

where MD is the matrix built from D3 with column-shift
Code:
`
  MD3 = [ 1  .  . ]  K3= [k0]
        [-1  1  . ]      [k1]
        [-1 -1  1 ]      [k2]
        [ 1 -1 -1 ]
        [ .  1 -1 ]
        [ .  .  1 ]

and K3 must now be a vector of integer weights, one for each column of MD3.
If K3 were known, the result MD3 * K3 would equal RS, and from RS we could uniquely determine the unknown coefficients a0 to b2.

So, if the analogous K-vectors for all A were known, all numerical coefficients in the A-matrices would also be immediately known.

This is the concept of my new solution.

I collected some K-vectors into a matrix (from the known eigensystem-based solutions) to find a recursive generation rule for this matrix and posted the first nontrivial K-matrix to the mailing list, and a member (Richard Mathar) indeed found a computation rule for this first K-matrix (there called "H"-matrices), which I could systematize and generalize to a very simple iteration process for all K- and subsequently all A-matrices analogously.

Here is an excerpt of my concluding mail to the seqfan list (I'm a bit tired of typing, but edited it a bit for this msg...)

Code:
`
==========================================================

a) Assume a matrix-function "rowshift(M)" which
computes "M1 = rowshift(M)" in the following way:
M = [a,b,c,...]
    [k,l,m,...]
    [r,s,t,...]
    [...      ]

M1 = [a,b,c, ...   ]
     [0,k,l,m, ... ]
     [0,0,r,s,t,...]
     [ ...         ]

b) Assume the lower-triangular matrix of Stirling-numbers 2'nd kind
S = [1  0  0  0 ...]
    [1  1  0  0 ...]
    [1  3  1  0 ...]
    [1  7  6  1 ...]
    [ ... ]

c)then with
H0 = [1]
     [1]
     [1]
     [1]
    ...
we have the iterative Mathar-products (*1)
d)
H1 = S * rowshift(H0)     \\ which is then = S * I = S
H2 = S * rowshift(H1)  
H3 = S * rowshift(H2)
...
where
H1 =
  1   .   .   .  .
  1   1   .   .  .
  1   3   1   .  .
  1   7   6   1  .
  1  15  25  10  1
H2=
  1   .   .   .   .   .   .   .  .
  1   1   1   .   .   .   .   .  .
  1   3   4   3   1   .   .   .  .
  1   7  13  19  13   6   1   .  .
  1  15  40  85  96  75  35  10  1
H3=
  1   .   .    .    .    .    .    .    .   .   .   .  .
  1   1   1    1    .    .    .    .    .   .   .   .  .
  1   3   4    6    4    3    1    .    .   .   .   .  .
  1   7  13   26   31   31   25   13    6   1   .   .  .
  1  15  40  100  171  220  255  215  156  85  35  10  1
...
and so on
(based on the Maple-implementation of R.Mathar)

----------------------------------

In my basic problem-description I also had the vector D, which I also
should index now. It contains the coefficients of the polynomials in u:

e)   Dk = polcoeffs(prod(j=1,k -1, u^j - 1))

Say
D3 = columnvector([1 -1 -1 1])

f) Then define the matrix MD3 as the concatenation of shifted D3
MD3 =
   1
  -1  1
  -1 -1  1
   1 -1 -1
      1 -1  ...
         1  
      ...
up to the required dimension for the following matrix-multiplication
to obtain the k'th coefficient in the original powerseries

  Ut(x,h) = a1(t,h) b1/d1/1!*x + a2(t,h)*b2/d2/2!*x^2 + ...

Then the coefficients for the bivariate polynomial in u=log(t) and
v=u^h of the k'th coefficient are in the vector

g)   Vk = MDk * transpose(Hj[k,])

where [k,] denotes the k'th row and the index j at Hj indicates the
   j=binomial(k-1,2) 'th Mathar-iterate.

So we have three essential steps to compute one A-matrix:

  1.a) compute the D_k-vector as "polcoeffs(prod(j=1..k-1,u^j - 1))"
  1.b) compute the MD_k-matrix by concatenation/shifting of D_k

  2)   compute the m'th iterate H_m(), where m=binomial(k-1,2), use the
          k'th row of the result as K-vector K_k = H_m[k,]

  3.a) compute RS_k = MD_k * transpose(K_k)
  3.b) reformat the vector RS_k into a matrix A_k of k columns

------------------------------------
This is an eminently simple recursive scheme to obtain the coefficients
for the continuous iteration of x -> t^x-1.

However it is again enormously resource-consuming: the required number
of iterations of the Mathar-products grows quadratically (precisely:
binomial(index-1,2)) with the index. So for index k=20 I already need
171 iterations with ever-growing matrices... surely some shortcuts may
be implemented. The working load is in the number of columns of the
H-matrices: the final number of columns for k=20 is
1/2*k*(k^2-4k+5) = 3250, and grows cubically with the index.

However - the fact that this simple scheme is able to mimic the
symbolic eigendecomposition of a matrix-operator is a very astonishing
aspect.

========================================================================

(end of mail to seqfan)
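The three steps can be replayed for k=3, taking K_3 = [1, 3, 1] (the 3rd row of H_1 = S) as given; a minimal sketch (my own helper names):

```python
def D_vec(k):
    """Step 1a: ascending coefficients of prod_{j=1..k-1} (u^j - 1)."""
    p = [1]
    for j in range(1, k):
        q = [0] * (len(p) + j)
        for i, c in enumerate(p):
            q[i] -= c            # multiply by -1
            q[i + j] += c        # multiply by u^j
        p = q
    return p

def MD(Dk, cols):
    """Step 1b: columns are copies of Dk, column j shifted down j rows."""
    rows = len(Dk) + cols - 1
    return [[Dk[i - j] if 0 <= i - j < len(Dk) else 0
             for j in range(cols)] for i in range(rows)]

# Step 2, taken as given here: K_3 = 3rd row of H_1 = S
K3 = [1, 3, 1]

# Step 3: RS = MD_3 * K_3, then reformat column-wise into a 2x3 matrix
D3 = D_vec(3)
MD3 = MD(D3, 3)
RS = [sum(row[j] * K3[j] for j in range((3))) for row in MD3]
A3 = [[RS[2 * j + i] for j in range(3)] for i in range(2)]
print(D3)   # [1, -1, -1, 1]
print(A3)   # [[1, -3, 2], [2, -3, 1]]
```

The resulting A3 matches the appended A_3 below (the transposed arrangement mentioned at the end of the post).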

So, for those members who did not understand my matrix-method with eigendecomposition (or did not trust it ;-) ), here is a completely elementary approach, simple to implement.
However - now two hypotheses are involved, which need proof:
a) that the property indeed holds that the A-matrices evaluated at integer heights h contain the denominators d_k as a factor,
b) that the Mathar-process indeed provides the correct H-matrices.
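The Mathar rowshift-iteration of b) is easy to implement and can be checked against the H2 printed in the mail above; a minimal sketch:

```python
def stirling2_matrix(n):
    """n x n lower-triangular Stirling-2 matrix with rows
    [1], [1,1], [1,3,1], [1,7,6,1], ... (zero-padded)."""
    tri = [[1]]
    for m in range(2, n + 1):
        prev = tri[-1] + [0]
        tri.append([k * prev[k - 1] + (prev[k - 2] if k >= 2 else 0)
                    for k in range(1, m + 1)])
    return [[tri[i][j] if j < len(tri[i]) else 0 for j in range(n)]
            for i in range(n)]

def rowshift(M):
    """Shift row i of M right by i positions (widening the matrix)."""
    rows, cols = len(M), len(M[0])
    out = [[0] * (cols + rows - 1) for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            out[i][i + j] = M[i][j]
    return out

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

n = 5
S = stirling2_matrix(n)
H1 = matmul(S, rowshift([[1]] * n))  # rowshift of the ones-column is I, so H1 = S
H2 = matmul(S, rowshift(H1))
print(H2[3])   # [1, 7, 13, 19, 13, 6, 1, 0, 0]
```

The printed row reproduces the 4th row of H2 given in the mail (1 7 13 19 13 6 1).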

I checked this up to h=20 and did not find an error.
I also append some A-matrices, computed by the above process. Note that they are transposed compared to my mentioned text about "continuous iteration".

Gottfried
========================================================================
Code:
`
D_2= (vector)
  -1  1

K_2= (vector)
  1

A_2= (matrix)
  -1  1

--------------------------

D_3= (vector)
  1  -1  -1  1

K_3= (vector)
  1  3  1

A_3= (matrix)
  1  -3  2
  2  -3  1

------------------------------

D_4=
  -1  1  1  0  -1  -1  1

K_4=
  1  7  13  26  31  31  25  13  6  1

A_4=
  -1   7  -12  6
  -6  18  -18  6
  -5  18  -18  5
  -6  11   -6  1

------------------------------

D_5=
  1  -1  -1  0  0  2  0  0  -1  -1  1

K_5=
  1  15  40  100  186  310  490  705  921  1140  1315  1435  1481  1420  
1285  1105  886  660  455  285  166  85  35  10  1

A_5=
   1   -15   50   -60  24
  14   -75  145  -120  36
  24  -130  230  -170  46
  45  -180  275  -180  40
  46  -165  215  -120  24
  26  -105  130   -60   9
  24   -50   35   -10   1

------------------------------------------

D_6=
  -1  1  1  0  0  -1  -1  -1  1  1  1  0  0  -1  -1  1

K_6=
  1  31  121  366  861  1642  2982  4932  7727  11497  16628  23127  
31277  40937  52147  64612  78297  92497  107162  121451  135002  146787  
156632  163631  167871  168862  166802  161541  153616  142981  130527  
116621  102186  87531  73486  60166  48101  37376  28236  20635  14656  
10026  6611  4130  2440  1306  636  255  80  15  1

A_6=
    -1    31   -180   390   -360  120
   -30   270   -870  1290   -900  240
   -89   694  -1920  2515  -1590  390
  -214  1364  -3345  3905  -2190  480
  -374  2025  -4440  4825  -2550  514
  -416  2395  -4995  4925  -2325  416
  -511  2430  -4530  4110  -1800  301
  -461  2006  -3480  2885  -1110  160
  -330  1336  -2055  1495   -510   64
  -154   675   -960   575   -150   14
  -120   274   -225    85    -15    1

-----------------------------------------------------------

D_7=
  1  -1  -1  0  0  1  0  2  0  -1  -1  -1  -1  0  2  0  1  0  0  -1  -1  1

K_7=
  1  63  364  1316  3857  8540  17522  32676  56763  92722  145565
219590  321413  457324  635782  865844  1158395  1523599  1973805  
2519790  3172421  3942099  4837273  5864971  7030269  8336258  9781520  
11364262  13075818  14906199  16838921  18855277  20927928  23031974  
25132905  27201587  29200578  31099439  32859715  34454231  35847014  
37015524  37930670  38578071  38937003  39005785  38775716  38259081  
37461558  36406349  35109662  33604207  31912699  30072889  28112666  
26071493  23978150  21871710  19778472  17733072  15757701  13878746  
12110168  10468493  8959111  7590492  6361384  5272659  4318307  3494092  
2790081  2198385  1707203  1306193  983171  727447  527625  374374  
258812  173747  112567  69951  41279  22883  11698  5369  2142  686  161  
21  1


A_7=
     1     -63     602    -2100    3360   -2520   720
    62    -903    4501   -10500   12600   -7560  1800
   300   -3290   13300   -26740   28700  -15750  3480
   889   -8337   29778   -53340   51590  -25830  5250
  2177  -16485   52269   -86415   78050  -36624  7028
  3368  -25977   78218  -121345  103040  -45360  8056
  5188  -36421  102242  -148715  118615  -49161  8252
  6980  -44604  117719  -161840  121800  -47481  7426
  8007  -48538  120967  -156765  110985  -40635  5979
  7867  -46599  110173  -134960   90160  -30849  4208
  7188  -40215   89656  -102620   63525  -20076  2542
  6111  -30723   63357   -67585   38885  -11340  1295
  4270  -20216   39081   -38220   19600   -5019   504
  2528  -11193   19355   -16695    7525   -1659   139
  1044   -4872    7658    -5425    1890    -315    20
   720   -1764    1624     -735     175     -21     1

--------------------------------------------------------------------------

=======================================================================
Gottfried Helms, Kassel

