Matrix Operator Method
#31
I've received some mail in which the author considered
the case from the viewpoint of generating functions,
which may be of interest for a still-needed proof of
the method.

Here is some of the exchange:
Code:
It seems that e.g.f. of your triangle is:

(exp(y*(exp(y*(exp(x)-1))-1))-1)/y^2

which gives

1,
1,1,1,
1,3,4,3,1,
1,7,13,19,13,6,1,
...
Next mail:
Code:
And the e.g.f. connected to your coefficient a_3() is

(exp(y*(exp(y*(exp(y*(exp(x)-1))-1))-1))-1)/y^3

which gives triangle
1,
1,1,1,1,
1,3,4,6,4,3,1,
1,7,13,26,31,31,25,13,6,1,
...


And so on.
Then I did not understand how the generating function was
related to the bivariate array:
Code:
> However, I did not understand how you arrived
> via the generation-function process at the
> actual K-matrices. Would you mind to explain
> this to me?
>
> Gottfried
Final mail:
Code:
An example:

Substituting t = exp(y)  in  your U_t(x,2) we get

exp(y*(exp(y*(exp(x)-1))-1))-1 =

y^2*x+(1/2*y^2+1/2*y^3+1/2*y^4)*x^2+(1/6*y^2+1/2*y^3+2/3*y^4+1/2*y^5+1/6*y^6)*x^3+(1/24*y^2+7/24*y^3+13/24*y^4+19/24*y^5+13/24*y^6+1/4*y^7+1/24*y^8)*x^4+...
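
A quick way to cross-check these expansions (just a sketch, not part of the mail exchange; the truncation depth N=4 is an arbitrary choice): expand the bivariate e.g.f. in Pari/GP and read row n of the first triangle as n! times the coefficient of x^n, collected by powers of y.
Code:
\\ sketch: verify the triangle hidden in  exp(y*(exp(y*(exp(x)-1))-1))-1
N = 4;
F = exp(y*(exp(y*(exp(x + O(x^(N+1))) - 1)) - 1)) - 1;
for(n = 1, N,
   c = n! * polcoeff(F, n, x);                     \\ a polynomial in y
   print(vector(2*n-1, k, polcoeff(c, k+1, y)))    \\ row n: coefficients of y^2 .. y^(2n)
);
\\ prints  [1], [1, 1, 1], [1, 3, 4, 3, 1], [1, 7, 13, 19, 13, 6, 1]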
Gottfried Helms, Kassel
#32
Gottfried Wrote:The symbolic eigensystem-decomposition is enormously resource-consuming and also not yet well documented and verified in a general manner. Here I propose a completely elementary (while still matrix-based) way to arrive at the same result as by eigensystem-analysis.

----------------------------------------------------

As you might recall from my "continuous iteration" article I've found a matrix representation for the coefficients of the powerseries for the U-tetration
\( \hspace{24} U_t^{\circ h}(x) \), where \( U_t(x)= U_t^{\circ 1}(x) = t^x - 1 \).

Gottfried, as long as you deal with functions with a fixed point at 0, for example \( b^x-1 \) or \( xb^x \), there is no need to apply the Diagonalization method in its full generality.
Note that I would like to change the name from matrix operator method to diagonalization method, as this is more specific (for example, Walker uses the name "matrix method" for his natural Abel method).

For the case of a fixed point at 0, we can use the ordinary formulas of regular iteration. You know there are two cases: first \( f'(0)=1 \) (which Daniel Geisler calls parabolic), and \( 0<f'(0)<1 \) or \( f'(0)>1 \) (which Daniel Geisler calls hyperbolic; however, I am not really happy with those names, because where is the elliptic case? Is it \( 0<f'(0)<1 \), as opposed to \( f'(0)>1 \), which we would then call "properly hyperbolic"? Also, the hyperbolic and elliptic cases (in the sense just mentioned) are exchanged by passing to \( f^{-1} \) and hence can be treated quite similarly. This is not the case for hyperbolas and ellipses.)

The formula for the parabolic iteration was already mentioned on this forum under the name double binomial formula.

The formula for hyperbolic iteration (or elliptic iteration) can be derived from the fact that
\( f^{\circ t}\circ f=f\circ f^{\circ t} \)
and by the regularity condition:
\( (f^{\circ t})'(0)=(f'(0))^t \)

For abbreviation let \( g:=f^{\circ t} \); then the first equation can be written as
\( g_n (f^n)_n + \sum_{m=1}^{n-1} g_m (f^m)_n = f_1 g_n + \sum_{m=2}^n f_m (g^m)_n \)
where \( f^n \) means the \( n \)th power, and the subscript \( i \) denotes the \( i \)th coefficient of the indexed powerseries.

We can rearrange the above formula to get a recurrence relation for the coefficients of \( g \):

\( g_n = \frac{1}{(f_1)^n-f_1}\left( f_n (g_1)^n - g_1 f_n +
\sum_{m=2}^{n-1} (f_m (g^m)_n - g_m (f^m)_n) \right) \)

The only undetermined coefficient is now \( g_1 \), but we already know that we set \( g_1=f_1^t \). The coefficient \( (f^m)_n \) is the entry at the \( m \)th row and the \( n \)th column of the Carleman matrix of \( f \). We see that in the above formula also the term \( (g^m)_n \) occurs, though we don't know all the coefficients of \( g \) yet; however we have the power formula:
\( \left(f^p\right)_n=\sum_{\substack{p_1+\dots+p_n=p\\ 1p_1+\dots+np_n=n\\ p_1,\dots,p_n\ge 0}} \frac{p!}{p_1!\dots p_n!} (f_1)^{p_1}\dots (f_{n})^{p_n} \)

and see that in \( (f^p)_n \) the biggest index of \( f \) that can occur is \( n \). However \( p_n \), the exponent of \( f_n \), can be bigger than 0 only in the case \( p_n=1 \) (second condition below the sum) with all other \( p_i=0 \). This, however, can only happen for \( p=1 \) (first condition below the sum). This case is excluded in \( (g^m)_n \) because \( m\ge 2 \), and so we indeed have a recurrence formula which gives \( g_n \) as a polynomial in \( g_1=f_1^t \).
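
To make the recurrence concrete, here is a minimal Pari/GP sketch (only an illustration; the name regiter and the truncation depth are placeholders, and the iteration height t is taken numeric):
Code:
\\ sketch: regular (hyperbolic) iteration at the fixed point 0
\\ f: truncated powerseries with f(0)=0 and f'(0) different from 0 and 1
{ regiter(f, t, n=8) = local(f1, g, d);
   f1 = polcoeff(f, 1);                 \\ f_1 = f'(0)
   g  = f1^t * x + O(x^(n+1));          \\ regularity condition: g_1 = f_1^t
   for(k = 2, n,
      \\ with g_k still 0, the coefficient of x^k in g(f(x)) - f(g(x)) is the
      \\ known part; the missing g_k enters only through g_k*(f_1^k - f_1):
      d = polcoeff(subst(g, x, f) - subst(f, x, g), k);
      g -= d/(f1^k - f1) * x^k
   );
   return(g);
}

\\ example: half-iterate of U_2(x) = 2^x - 1; the linear coefficient is log(2)^(1/2) ~ 0.8326
f = exp(x*log(2)) - 1 + O(x^9);
print(regiter(f, 1/2, 8))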

You see, we do not even have to solve a linear equation system to use the diagonalization method at a fixed point at 0.
#33
Hi Henryk -

bo198214 Wrote:Gottfried, as long as you deal with functions with a fixed point at 0, for example \( b^x-1 \) or \( xb^x \), there is no need to apply the Diagonalization method in its full generality.
Note that I would like to change the name from matrix operator method to diagonalization method, as this is more specific (for example, Walker uses the name "matrix method" for his natural Abel method).

- yes, I've no problem with it. My motive to retain the name was not to present something as new, but to avoid mixing methods which are possibly different. I hate the cases where names are borrowed for something other than their original meaning. You said several times that my approach is just the regular tetration; but since I've dealt with something more (most prominently infinite series of tetration/its associated matrices), for which I've not seen a reference before although such generalizations are somehow obvious, I could not be free of doubt whether what I was introducing departed from it.

If you identify it fully with the name "diagonalization" method, I've no problem adapting to it in the future, because it carries the name of the general idea behind it (which is also widely acknowledged), and that is always a good way of naming things... So I'll try to get used to it, hoping not to provoke protests and demonstrations... ;-)

What I'll keep anyway is the term "operator" to indicate the class of matrices whose second column is defined as the coefficients of a function (represented by a polynomial or powerseries) and whose other columns are configured such that they provide the consecutive powers of that function, so that the matrix-multiplication of a vandermonde-vector (consecutive powers of one parameter) will again give a vandermonde-vector, and the matrix is thus (at least finitely often) iterable. (The Bell- and Carleman-matrices are - mutatis mutandis - instances of such matrix-"operators".)
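
As a small illustration of this definition (only a sketch; the function 2^x - 1, the dimension 8 and the test value x0 = 0.1 are arbitrary choices here):
Code:
\\ sketch: build the matrix-operator M for f(x) = 2^x - 1 from its powerseries;
\\ column c holds the coefficients of f(x)^(c-1), so that V(x)~ * M ~ V(f(x))~
n  = 8;
f  = exp(x*log(2)) - 1 + O(x^n);
M  = matrix(n, n, r, c, polcoeff(f^(c-1) + O(x^n), r-1));
x0 = 0.1;
print(vector(n, k, x0^(k-1)) * M);          \\ approximately V(f(x0))~
print(vector(n, k, (2^x0 - 1)^(k-1)));      \\ exact V(2^x0 - 1)~ for comparison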

-------------------

(As to the following formulae - I'll have to go through them in detail to see whether they are of help to compute the coefficients. I think that with the help of Aldrovandi's Bell-/Carleman examples I'll be able to relate the two points of view soon.)

-------------------

Quote:You see, we do not even have to solve a linear equation system to use the diagonalization method at a fixed point at 0.

Yepp, I already seem to have arrived at such a view. My latest eigensystem-solver for this class of triangular matrices is already very simple - it's just a ~ten-liner in Pari/GP:

Code:
\\ Ut is the lower triangular matrix-operator for the function
\\    Ut :=  x -> (t^x - 1)
\\ for t=exp(1) Ut is the factorially scaled matrix of Stirling-numbers 2'nd kind
\\ Ut may even contain the log(t)-parameter u in *symbolic* form
\\ use dim<=32 in this case to prevent excessive memory and time consumption.
\\ Also use exact arithmetic then; provide numfmt=1 instead of =1.0
{ APT_Init2EW(Ut,dim=9999,numfmt=1.0)=local(tu=Ut[2,2],UEW,UEWi,tt=exp(tu),tuv) ;
   dim=min(rows(Ut),dim);
   tuv=vectorv(dim,r,tu^(r-1)); \\ the eigenvalues
  
   UEW=numfmt*matid(dim); \\ the first eigenmatrix UEW
   for(c=2,dim-1,
      for(r=c+1,dim,
          UEW[r,c] = sum(k=c,r-1,Ut[r,k]*UEW[k,c])/(tuv[c]-tuv[r])
      ));
    
   UEWi=numfmt*matid(dim); \\ the second eigenmatrix UEWi = UEW^-1
   for(r=3,dim,
     forstep(c=r-1,2,-1,
       UEWi[r,c]=sum(k=0,r-1-c,Ut[r-k,c]*UEWi[r,r-k]) /(tuv[r] -tuv[c])
     ));
   return([[tu,tt,exp(tu/tt)],UEW,tuv,UEWi]);
   \\ Ut = UEW * matdiagonal(tuv) * UEWi
}

This is a recursive approach, very simple and I could imagine it implements just the g- and f- recursions in your post.
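
A call might then look like this (only a sketch of a possible usage, with base t=2 and dim=8; here Ut is built from the Stirling numbers 2nd kind, as described in the comments above):
Code:
\\ sketch: build Ut for t=2, decompose it, and form a fractional power
dim = 8; u = log(2);
Ut  = matrix(dim, dim, r, c, u^(r-1) * (c-1)!/(r-1)! * stirling(r-1, c-1, 2));
res = APT_Init2EW(Ut, dim);
UEW = res[2]; tuv = res[3]; UEWi = res[4];
h   = 0.5;
Uth = UEW * matdiagonal(vector(dim, k, tuv[k]^h)) * UEWi;
\\ second column of Uth: coefficients of the half-iterate of 2^x - 1
print(Uth[,2]~)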
Gottfried Helms, Kassel
#34
Gottfried Wrote:You said several times that my approach is just the regular tetration; but since I've dealt with something more (most prominently infinite series of tetration/its associated matrices), for which I've not seen a reference before although such generalizations are somehow obvious, I could not be free of doubt whether what I was introducing departed from it.

Nono, I didn't say that it is just regular tetration, but it is regular tetration if you do it at a fixed point. Strangely, we may have different opinions about how interesting the various applications of this method are. I am fascinated by applying this method at non-fixed points, e.g. for the case \( b>e^{1/e} \), while at fixed points the case is already explored by regular iteration and yields no new results. You, on the other hand (if I understand a discussion right that we had some time ago on this forum), who introduced the method to this forum, are rather fascinated with finding patterns in the matrices, which occur rather at fixed points.

Quote:This is a recursive approach, very simple and I could imagine it implements just the g- and f- recursions in your post.
Wow, yes, this may indeed be the case; however I am currently too lazy ;-) to check this in detail. But at least the comparison here strongly suggests that they are equal.
#35
Just read the book "Advanced Combinatorics" of Louis Comtet (pg 143-148)
about his method of fractional iteration for powerseries. This is just
a binomial expansion using the Bell-matrix

\( \hspace{24}B^t = \sum_{k=0}^{\infty} \binom{t}{k} B1^k \)

where

\( \hspace{24} B1 = B - ID \)

However, in one example, with the matrix for dxp_2(x) (base = 2),
the results of all three methods (binomial expansion, matrix
logarithm, diagonalization) converge to the same result.

For matrix-log and binomial expansion I need infinitely
many terms to arrive at exact results (because in the general
case the diagonal of the matrix is not the unit-diagonal and so
the terms of the expansions are not nilpotent), while the
diagonalization method needs only as many terms as the
truncation-size of the matrix determines, and the result is then
constant for increasing sizes.
(Well, I'm talking about triangular matrices here and without thorough
testing...)

\( \hspace{24} U_t = {}^dV(\log(2))* fS2F \)

So Ut is the Bell-matrix for the function \( \hspace{24} Ut(x) = 2^x - 1 \)

Then I determined the coefficients for half-iteration Ut°0.5(x) by
Ut^0.5 using all three methods.
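
For the binomial-expansion route, a sketch of the computation might look as follows (only an illustration, not the routine actually used; the operator matrix B of 2^x - 1 is built directly from its powerseries, and 200 terms are summed):
Code:
\\ sketch: half-iterate of 2^x - 1 via the truncated binomial expansion
\\ B^t = sum_k binomial(t,k)*(B - I)^k,  here with t = 1/2 and 200 terms
n  = 8;
f  = exp(x*log(2)) - 1 + O(x^n);
B  = matrix(n, n, r, c, polcoeff(f^(c-1) + O(x^n), r-1));
t  = 1/2;  B1 = B - matid(n);
Bt = matid(n); term = matid(n);
for(k = 1, 200,
   term = term * B1 * (t - k + 1)/k;     \\ term = binomial(t,k) * B1^k
   Bt  += term
);
print(Bt[,2]~)    \\ half-iterate coefficients: 0, 0.832554..., 0.157453..., ...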


Result by Diagonalization / Binomial / Matrix-log
(differences vanish when using more terms for the series expansion
of Binomial / Matrix-log; I used 200 terms here)

\( \hspace{24}
\begin{array}{rrrrrrrr}
1.00000000000 & . & . & . & . & . & . & . \\
0 & 0.832554611158 & . & . & . & . & . & . \\
0 & 0.157453119779 & 0.693147180560 & . & . & . & . & . \\
0 & 0.0100902384840 & 0.262176641827 & 0.577082881386 & . & . & . & . \\
0 & -0.000178584914170 & 0.0415928340834 & 0.327414558137 & 0.480453013918 \\
0 & 0.0000878420556305 & 0.00288011566971 & 0.0829028563527 & 0.363454000182 \\
0 & -0.00000218182495620 & 0.000191842025839 & 0.0114684142796 & 0.126396502873 \\
0 & -0.00000702051219082 & 0.0000204251058104 & 0.00104695045599 & 0.0258020272404
\end{array}
\)


Differences:

Diagonalization - binomial (200 terms)
\( \hspace{24}
\begin{array}{rrrrrrrr}
0.E-414 & . & . & . & . & . & . & . \\
0 & -1.06443358765E-107 & . & . & . & . & . & . \\
0 & 1.60236818692E-61 & -1.41872090005E-61 & . & . & . & . & . \\
0 & -1.75734170509E-39 & 2.93458536041E-39 & -1.29912638612E-39 & . & . & . & . \\
0 & 8.68085535070E-27 & -2.05384773365E-26 & 1.74805552817E-26 & -5.15903675680E-27 & . & . \\
0 & -7.73773729412E-19 & 2.30948495672E-18 & -2.83611081486E-18 & 1.62496041665E-18 \\
0 & 1.30102806362E-13 & -4.60061066251E-13 & 7.25093796854E-13 & -6.05100795132E-13 \\
0 & -0.000000000461000932910 & 0.00000000185781313787 & -0.00000000352662825863 & 0.00000000381278773133
\end{array}
\)

Binomial - matrixlog (200 terms)
\( \hspace{24}
\begin{array}{rrrrrrrr}
0.E-414 & . & . & . & . & . & . & . \\
0 & 1.55109064503E-66 & . & . & . & . & . & . \\
0 & -5.98902754444E-56 & 5.30262558750E-56 & . & . & . & . & . \\
0 & -1.75734170498E-39 & 2.93458536023E-39 & -1.29912638604E-39 & . & . & . & . \\
0 & 8.68085535070E-27 & -2.05384773365E-26 & 1.74805552817E-26 & -5.15903675680E-27 & . \\
0 & -7.73773729412E-19 & 2.30948495672E-18 & -2.83611081486E-18 & 1.62496041665E-18 \\
0 & 1.30102806362E-13 & -4.60061066251E-13 & 7.25093796854E-13 & -6.05100795132E-13 \\
0 & -0.000000000461000932910 & 0.00000000185781313787 & -0.00000000352662825863
\end{array}
\)

Diagonalization - matrixlog (200 terms)
\( \hspace{24}
\begin{array}{rrrrrrrr}
0.E-414 & . & . & . & . & . & . & . \\
0 & -1.55109064503E-66 & . & . & . & . & . & . \\
0 & 5.98904356812E-56 & -5.30263977471E-56 & . & . & . & . & . \\
0 & -1.03921639872E-49 & 1.73538878994E-49 & -7.68248541586E-50 & . & . & . & . \\
0 & 3.03477814704E-45 & -7.18038111513E-45 & 6.11155244587E-45 & -1.80377946480E-45 & . \\
0 & -9.50474725632E-42 & 2.83773329816E-41 & -3.48603689927E-41 & 1.99810508742E-41 \\
0 & 7.26187711067E-39 & -2.57084380097E-38 & 4.05741175331E-38 & -3.39109179435E-38 \\
0 & -1.42479127409E-29 & 5.74050200445E-29 & -1.08939157931E-28 & 1.17741370976E-28
\end{array}
\)

Gottfried Helms, Kassel
#36
I've collected basic details of the diagonalization-method as I use it for fractional U-tetration, including the Pari/GP-routine and some more sample coefficients in

[update 9.8.2008]
http://go.helms-net.de/math/tetdocs/APT.htm
[/update]

Comments welcome -

Gottfried
Gottfried Helms, Kassel
#37
Just derived a method to compute exact entries for powers of the (square) matrix-operator for T-tetration.
It is applicable to positive integer powers only, but for any base.
The restriction to positive integer powers makes such solutions look useless, since integer iteration-heights can easily be computed using just the scalar values. But I'll use this for further analysis of powerseries, series of powertowers and hopefully one day for fractional iteration...



Let's use the following notational conventions:
Code:

b^^h - the powertower of height h using base b

V(x) - the vandermonde-column-vector containing consecutive powers of its parameter x:
        V(x) = column(1,x,x^2,x^3,...)
dV(x) - used as diagonal-matrix

T - the matrix which performs T-tetration to base b (in our forum:"exp_b°t(x)" ):
      V(x)~ * T = V(b^x)~

U - the matrix which performs U-tetration to base b (in our forum:"dxp_b°t(x)" ):
      V(x)~ * U = V(b^x - 1) ~
      Note, that U is lower triangular.
      The triangularity allows computing exact entries for the integer matrix-powers.


Then the entries for positive integer powers of T can be finitely computed and are "exact", as far as we regard scalar logarithms and exponentials as exact:
Code:

T^2 = U*dV(b^^0) * T*dV(b^^1)
T^3 = U*dV(b^^0) * U*dV(b^^1) * T*dV(b^^2)
...
T^h = prod_{k=0}^{h-2} (U * dV(b^^k))
           * (T * dV(b^^(h-1)))
This finding is interesting, because in my matrix-method I had to use fixpoint-shift to get exact entries even for integer powers, and since the fixpoints for T-tetration are real only for a small range of bases we had to deal with complex-valued U-matrices when considering the general case. Here we do not need a fixpoint-shift.

I did not check how this computation is related to Ioannis Galidakis' method for exact entries yet, but I think, this is interesting too.
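
A small numeric plausibility check of the formula for T^2 (only a sketch; the truncation 8x8 and the base b=sqrt(2) are arbitrary choices): since U is lower triangular, the truncated product already contains the exact top-left block of T^2, so its second column must agree with the powerseries of b^(b^x).
Code:
\\ sketch: check  T^2 = U*dV(b^^0) * T*dV(b^^1)  numerically (note dV(b^^0) = I)
n  = 8; b = sqrt(2); lb = log(b);
U  = matrix(n, n, r, c, polcoeff((exp(lb*x + O(x^n)) - 1)^(c-1) + O(x^n), r-1));
T  = matrix(n, n, r, c, ((c-1)*lb)^(r-1) / (r-1)!);         \\ V(x)~ * T = V(b^x)~
T2 = U * T * matdiagonal(vector(n, k, b^(k-1)));            \\ U*dV(1) * T*dV(b)
g  = exp(lb * exp(lb*x + O(x^n)));                          \\ powerseries of b^(b^x)
print(vector(n, r, T2[r,2] - polcoeff(g, r-1)))             \\ ~ zero up to roundoff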


Here is the top left of the symbolic T^2, where lambda = log(b). Each row has to be multiplied by the entry in the leftmost column, and each column must also be multiplied by the entry in the first row.

   [attachment: symbolic T^2]

Here is the top left of the symbolic T^3 (b^^2 means b^b). Legend as before.

   [attachment: symbolic T^3]

The difference between the symbolic computation and the simple matrix power is interesting. I used dim=64x64 and base b=sqrt(2), which provides a good approximation when the simple matrix power is computed. Here are two (zoomed) images: very good approximation in the leading 12 columns (absolute differences to the exact values < 1e-20), but in columns 52 to 63 the differences grow to absolute values greater than 1e10. Surely I "knew" that differences should occur, but I hadn't guessed that they are so large - I just didn't investigate this in detail.

The first twelve columns of the matrix of differences:

   [attachment: difference matrix, first twelve columns]

The twelve rightmost columns:

   [attachment: difference matrix, twelve rightmost columns]

The large errors are actually still relatively small for that base b=sqrt(2). A measure for the quality of the approximation is whether the resulting vector of V(x)~ * T^3 = Y~ is actually Vandermonde, and thus Y = V(y).
This means that the ratios of logarithms of its entries, log(Y[k])/log(Y[1]) for k=0..63, should give the exact sequence [0,1,2,3,...], because this means that Y indeed contains the consecutive powers of Y[1].
Here is a table of those ratios. (Remember: we check the column sums of the third power of T, using x=1.)

Code:

column    symbolic              "naive"             difference
------------------------------------------------------------
  0  -1.13831366798E-19        0.E-201       -1.13831366798E-19
  1       1.00000000000  1.00000000000                  0.E-202
  2       2.00000000000  2.00000000000        1.11189479752E-19
  3       3.00000000000  3.00000000000        2.20208219209E-19
  4       4.00000000000  4.00000000000        3.27448139227E-19
....
42      42.0000000000  42.0000000000        1.19448154596E-11
43      43.0000000000  43.0000000000        3.09972987348E-11
44      44.0000000000  43.9999999999        7.77154497620E-11
45      45.0000000000  44.9999999998        1.88541576938E-10
46      46.0000000000  45.9999999996  0.000000000443257385765
47      47.0000000000  46.9999999990   0.00000000101122118773
48      48.0000000000  47.9999999978   0.00000000224146913405
49      49.0000000000  48.9999999952   0.00000000483322217091
50      50.0000000000  49.9999999899    0.0000000101495726916
51      51.0000000000  50.9999999792    0.0000000207790885254
52      52.0000000000  51.9999999585    0.0000000415151935602
53      53.0000000000  52.9999999190    0.0000000810212593226
54      54.0000000000  53.9999998454     0.000000154592714129
55      55.0000000000  54.9999997114     0.000000288630924056
56      56.0000000000  55.9999994723     0.000000527723771045
57      57.0000000000  56.9999990544     0.000000945602091679
58      58.0000000000  57.9999983383      0.00000166172556358
59      59.0000000000  58.9999971341      0.00000286585838319
60      60.0000000000  59.9999951463      0.00000485372880576
61      61.0000000000  60.9999919223      0.00000807772027087
62      62.0000000000  61.9999867825       0.0000132174922318
63      63.0000000000  62.9999787236       0.0000212764324046

For base b=2 this already looks catastrophic for the "naive" computation:

Code:

column    symbolic              "naive"             difference
------------------------------------------------------------
  0  -4.36636233681E-20        0.E-202      -4.36636233681E-20
  1       1.00000000000  1.00000000000                 0.E-202
  2       2.00000000000  2.00000000000       2.46662212171E-14
  3       3.00000000000  2.99999999998       2.31650095614E-11
  4       4.00000000000  3.99999999669  0.00000000331064284896
  5       5.00000000000  4.99999985867    0.000000141325875791
  6       6.00000000000  5.99999733964     0.00000266036059669
....
50      50.0000000000  37.9413247398           12.0586752602
51      51.0000000000  38.3796009369           12.6203990631
52      52.0000000000  38.8098393554           13.1901606446
53      53.0000000000  39.2323121795           13.7676878205
54      54.0000000000  39.6472796473           14.3527203527
55      55.0000000000  40.0549905348           14.9450094652
56      56.0000000000  40.4556826502           15.5443173498
57      57.0000000000  40.8495833279           16.1504166721
58      58.0000000000  41.2369099170           16.7630900830
59      59.0000000000  41.6178702587           17.3821297413
60      60.0000000000  41.9926631496           18.0073368504
61      61.0000000000  42.3614787893           18.6385212107
62      62.0000000000  42.7244992089           19.2755007911
63      63.0000000000  43.0818986820           19.9181013180

It is obvious that we should use the "exact" (symbolic) description if we ever explicitly consider powers of the tetration-matrix T.
Gottfried Helms, Kassel
#38
Gottfried Wrote:Just derived a method to compute exact entries for powers of the (square) matrix-operator for T-tetration.
It is applicable to positive integer powers only, but for any base.
...
Code:
T^2 = U*dV(b^^0) * T*dV(b^^1)
T^3 = U*dV(b^^0) * U*dV(b^^1) * T*dV(b^^2)
...
T^h = prod_{k=0}^{h-2} (U * dV(b^^k))
           * (T * dV(b^^(h-1)))
So the finding is that though the matrix multiplication of the T's is infinite, the result is expressible in finite terms (polynomials) of \( \text{log}(b) \)?

Quote:This finding is interesting, because in my matrix-method I had to use fixpoint-shift to get exact entries even for integer powers, and since the fixpoints for T-tetration are real only for a small range of bases we had to deal with complex-valued U-matrices when considering the general case. Here we do not need a fixpoint-shift.

I did not check how this computation is related to Ioannis Galidakis' method for exact entries yet, but I think, this is interesting too.

That would be useful to compare. Can you derive a recurrence from your matrix formula?
#39
bo198214 Wrote:So the finding is that though the matrix multiplication of the T's is infinite, the result is expressible in finite terms (polynomials) of \( \text{log}(b) \)?

Exactly. The occurring infinite sums are decomposable into sums of simple exponential series, for which exact values are then defined (and computable to arbitrary precision). The compositions of the evaluated exponentials then contain only finitely many terms. See the display of the matrix T^2. I found these by studying the dot-products of the rows with the second column of T*T, which was then easily generalizable to the other columns of the matrix product.

Quote:That would be useful to compare. Can you derive a recurrence from your matrix formula?

Yes. For this I first see another streamlining of the formula:
Note that T = U * P~ , which can also be seen by the operations
Code:
  V(x)~ * T = V(b^x)~
  V(x)~ * U = V(b^x -1)~
  also: V(b^x -1)~ * P~ = V(b^x)~
  thus: V(x)~ * (U * P~ ) = V(b^x)~
  ==> U * P~  = T
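
This identity can also be checked numerically with truncated matrices (only a sketch; dim=8 and b=sqrt(2) are arbitrary): the triangularity of U again makes the truncated product exact in the top-left block.
Code:
\\ sketch: check  T = U * P~  with truncated matrices
n  = 8; b = sqrt(2); lb = log(b);
U  = matrix(n, n, r, c, polcoeff((exp(lb*x + O(x^n)) - 1)^(c-1) + O(x^n), r-1));
T  = matrix(n, n, r, c, ((c-1)*lb)^(r-1) / (r-1)!);
Pt = matrix(n, n, r, c, binomial(c-1, r-1));    \\ P~ :  V(x)~ * P~ = V(x+1)~
print(norml2(U*Pt - T))                         \\ ~ zero up to roundoff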

A further adaptation can be made. We have that the final factor in T^2 = U*dV(b^^0) * T*dV(b^^1) is
Code:
  T*dV(b^^1) = U *P~ * dV(b^^1)

We can rewrite this in terms of a power of P (using the notation "PPow()") by expanding with the trivial product dV(b^^1)*dV(1/b^^1) = I
Code:
  T*dV(b^^1) = U *    dV(b^^1)*dV(1/b^^1)   * P~ * dV(b^^1)
             = U * dV(b^^1)*  (dV(1/b^^1)   * P~ * dV(b^^1))
             = U * dV(b^^1)*            PPow(b^^1) ~
The general product-formula changes then to

Code:
  T^h =   prod_{k=0}^{h-1} (U * dV(b^^k))
       * PPow(b^^(h-1)) ~

and a recursion is then
Code:
  T^(h+1) = T^h    * PPow(-b^^(h-1))~ * U * dV(b^^h) * PPow(b^^h)~

or, even simpler

  T^(h+1) = T^h    * PPow(-b^^(h-1))~ * T * dV(b^^h)

where the first part of the product, T^h * PPow(-b^^(h-1))~ , gives a triangular matrix.

The recursion may also be seen "in action" when evaluated for a parameter x. We need an ASCII notation for iterated exponentiation; I use x.b^^h for exp_b°h(x) here, or, if x=1, simply b^^h.

We have by definition, that
Code:
  V(x)~ * T^(h+1) = V(x.b^^(h+1)) ~

Using the recursion-formula we get
Code:
  V(x)~ * T^(h+1) = V(x) ~*T^h  * PPow(-b^^(h-1))~ * T     * dV(b^^h)
                  = V(x.b^^h) ~ * PPow(-b^^(h-1))~ * T     * dV(b^^h)
                  =        V(x.b^^h - b^^(h-1))~   * T     * dV(b^^h)
                  =          V(b^(x.b^^h - b^^(h-1)))~     * dV(b^^h)
                  =          V(b^(x.b^^h)/b^(b^^(h-1)))~   * dV(b^^h)
                  =          V(x.b^^(h+1)/b^^h)~           * dV(b^^h)
                  =          V(x.b^^(h+1))~ * dV(1/b^^h)   * dV(b^^h)
                  =          V(x.b^^(h+1))~ *              I
                  =          V(x.b^^(h+1))~
which is the same result.
Gottfried Helms, Kassel

