Tetration Forum
A nice series for b^^h , base sqrt(2), by diagonalization - Printable Version

+- Tetration Forum (https://math.eretrandre.org/tetrationforum)
+-- Forum: Tetration and Related Topics (https://math.eretrandre.org/tetrationforum/forumdisplay.php?fid=1)
+--- Forum: Computation (https://math.eretrandre.org/tetrationforum/forumdisplay.php?fid=8)
+--- Thread: A nice series for b^^h , base sqrt(2), by diagonalization (/showthread.php?tid=249)

Pages: 1 2


RE: Logarithmic behaviour of the super exponential at -2 - bo198214 - 03/11/2009

Gottfried, we discussed that already in a much earlier post.
Your whole infinite matrix computation is just r e g u l a r i t e r a t i o n !!!

Let me make it clear again:

Gottfried Wrote:If we have the Bell matrix B for b^x, then we cannot invert B for the infinite case. But

B = fS2F * P~ // both triangular, invertible

First, Bell matrices behave anti-isomorphically: let B[f] be the Bell matrix of f at 0; then
B[f o g] = B[g]*B[f]

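For illustration, a minimal Pari/GP check of this anti-isomorphism on truncated Bell matrices (the names bell, f, g are ad hoc; the check is exact here because f(0)=g(0)=0, so the matrices are triangular and truncation commutes with the product):
Code:
n = 8;
bell(f) = matrix(n, n, i, j, polcoeff(f^(j-1), i-1, x));  \\ column j = coefficients of f^(j-1)
f = 2*x + x^2 + O(x^n);
g = x/(1 - x) + O(x^n);
print(bell(subst(f, x, g)) == bell(g) * bell(f));  \\ prints 1: B[f o g] = B[g]*B[f]
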
Now let us assign the variables:

B is the Bell matrix of exp(x) = b^x at 0.
P~ is the Bell matrix of x+1.
fS2F is the Bell matrix of dexp(x) = b^x - 1.
(for my convenience I omit the base subscript at exp and dexp)

As dexp has a fixed point at 0, fS2F is upper triangular. Your equation just corresponds to

B[exp] = B[(x+1) o dexp] = B[dexp] * B[x+1] = fS2F * P~

since exp(x) = dexp(x) + 1.


Quote:But P (as well as then PInv) are the binomial-matrices and they perform addition when operating on the powerseries:

V(x)~ *PInv~ = V(x-1)~

exactly: P~ is the Bell matrix of x+1, and its inverse PInv~ is the Bell matrix of x-1 (the Bell matrix of the inverse function).

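A short Pari/GP sketch of this addition property (names chosen to match the notation; the check is exact because column j of P~ only involves the first j rows):
Code:
n = 6;
P = matrix(n, n, i, j, binomial(j-1, i-1));  \\ P~: column j = coefficients of (x+1)^(j-1)
V(x) = vector(n, i, x^(i-1));                \\ V(x)~ as a row vector
print(V('x) * P);       \\ [1, x + 1, x^2 + 2*x + 1, ...]: P~ performs x -> x+1
print(V('x) * P^(-1));  \\ [1, x - 1, x^2 - 2*x + 1, ...]: PInv performs x -> x-1
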

Quote:Then rearranging the invertible P as PInv to the left

V(e^x)*PInv~ = V(x)~ * fS2F

then the invertible fS2F into fS1F to the left

V(e^x)*PInv~ * fS1F = V(x)~

where the product to construct B^-1 is forbidden *in the infinite case* but, applying the binomial-theorem by PInv

V(e^x - 1)~ * fS1F = V(x) ~

which is perfectly ok, and this leads then, since fS1F performs log(1+x)

That's clear without touching any matrix: V(e^x - 1)~ is just V(dexp(x))~, and fS1F is the inverse of the Bell matrix of dexp, i.e. the Bell matrix of log(1+x):

V(dexp(x))~ * fS1F = V(log(1 + dexp(x)))~ = V(x)~

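This inversion can also be checked directly in Pari/GP with truncated matrices (bell as in the sketch above; the product of the two triangular factors is exactly the identity):
Code:
n = 8;
bell(f) = matrix(n, n, i, j, polcoeff(f^(j-1), i-1, x));
fS2F = bell(exp(x + O(x^n)) - 1);  \\ Bell matrix of dexp
fS1F = bell(log(1 + x + O(x^n)));  \\ Bell matrix of log(1+x)
print(fS2F * fS1F == matid(n));    \\ prints 1: fS1F = fS2F^(-1), exactly
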
Quote:I do the same with the eigensystem-decomposition / Schröder-function. I found the fixpoint-shift with my matrix-notation by simply proceeding from the initial equation

V(x)~ * B = V(y)~

decomposing B into matrix-factors P, P^-1 and a triangular C

V(x)~ * P^-t ~ * C * P^t ~ = V(y)~

applying the binomial-theorem with the -t'th power of P~ on rhs and lhs

V(x-t)~ * C = V(y - t)~ // implements shift by t = fixpoint

where then C is triangular and allows eigendecomposition providing exact values



C is the Bell matrix of texp(x) := b^(x+t) - t, the conjugate of exp at the fixed point t.

We see that texp has 0 as a fixed point, and hence its Bell matrix is triangular.
And the power can be taken exactly.

Quote:All in all - I use the "regular iteration with fixpoint-shift", but only as far as I can represent it coherently in terms of infinite matrices/known closed-form expressions for sums of infinite powerseries, which result from the implicit dot-products.

But I still don't see what it brings to rewrite an established method with matrices. Whether you compute it with matrices, with direct powerseries formulas, or with limit formulas (without powerseries), whether with the Abel or the super function, the result is always the same: it's always regular iteration.

Quote:Thus I have the difficulties with b>eta, since the occurring complex-valued matrices C give unsatisfying powerseries, and I have not yet the remedy to deal with that series appropriately.

I gave the answer already in the previous post. The resulting power series (whose coefficients are obtained from the regular iteration at the fixed point t) has convergence radius |t| in your notation. It converges at most for |x| < |t| on the real axis.


Quote:Yes, thanks! It's progressing diabetes, and sometimes I'm fitter, sometimes struck down, and in general just less powerful and regenerable than up to recent years. Just life..

It seems to me you should do something healthy and not always brood about the same things. Spring is coming: go out, have a look at the blue sky, or at the young girls passing Wink


RE: Logarithmic behaviour of the super exponential at -2 - Gottfried - 03/11/2009

Hi Henryk -

just a short reply here.
bo198214 Wrote:Gottfried, we discussed that already in a much earlier post.
Your whole infinite matrix computation is just r e g u l a r i t e r a t i o n !!!


Well, I felt it was needed to make explicit that it is special, because of the (even bluish-highlighted) hyperlink:

Quote:The matrix power approach makes use of the established method to obtain non-integer powers (and other analytic functions) of finite matrices via diagonalization.
This is applied to the truncations of the Carleman/Bell matrix.

where the keyword "finite matrices" occur. We had that several times and since I had assumed, that I had made it clear that I'm always working on infinite matrices, that remark is surprising.




Well, next question: "Why work with matrices if things are otherwise well known...?" - I still don't claim something special. It's just my path into the matter: I came from a project in which I compiled relations between Pascal-, Stirling-, Euler- and other matrices operating on formal powerseries, and stumbled on the possibility of iterating functions (other than that of addition, which I had already studied, with some nice encounters with the zeta/eta-function).

Here I had the iteration of the exponential-function and, as you might (frustrated? Wink ) remember, a hard and long time to even explain what I was doing, although Aldrovandi, Woon, Comtet and others had long been around in the scene. So it's for personal reasons (experience with my matrix "toolbox"), possibly for "historical reasons" (to keep a track consistent), and recently for the connection of iteration-series with series of matrix-powers, which I've not seen yet except possibly in the form of the umbral calculus, where it might be hidden behind the scene.
Just recently someone in sci.math asked for a set of symbolic matrix operations for mathematica or maple. Could be a nice start...

I've just reread Andrew's "exact entries for slog-operator" today and found a similar matrix-discussion there: it helps my understanding, since I've no training in functional analysis; the bit I had was in 1972 to 77, and only in relation to the computer courses, which were my main subject.

------

Well, but let's not lose track here of the question for which I opened the thread. I think I'll have a much closer look at that series for tetration next time, to see whether, and if so how, it's a special one.

----------

And, yes:

Quote:It seems to me you should do something healthy and not always brood about the same things. Spring is coming: go out, have a look at the blue sky, or at the young girls passing Wink

that's certainly what I should do. Hope I'll get things working/walking. And why the heck should the girls *pass*?
Ok Wink - wish you all a good night

Gottfried


RE: Logarithmic behaviour of the super exponential at -2 - bo198214 - 03/11/2009

Gottfried Wrote:Well, I felt it was needed to make explicit that it is special, because of the (even bluish-highlighted) hyperlink:

Quote:The matrix power approach makes use of the established method to obtain non-integer powers (and other analytic functions) of finite matrices via diagonalization.
This is applied to the truncations of the Carleman/Bell matrix.

where the keyword "finite matrices" occur. We had that several times and since I had assumed, that I had made it clear that I'm always working on infinite matrices, that remark is surprising.

For me matrix power method means exactly what I wrote. Take the truncations A|_(n x n) of an infinite Carleman matrix A, apply the matrix power and take the limit

A^h := lim_(n->oo) ( A|_(n x n) )^h .

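A minimal Pari/GP sketch of this definition for f(x) = b^x (the names A, Ah, L, H are ad hoc; diagonalizing the truncation is numerically delicate, so take it only as an illustration of the recipe, not as a reference implementation):
Code:
default(realprecision, 60);
n = 12; b = sqrt(2);
A = matrix(n, n, i, j, ((j-1)*log(b))^(i-1) / (i-1)!);  \\ truncated Carleman matrix of b^x
[L, H] = mateigen(A, 1);                                \\ eigenvalues L, eigenvectors H of the truncation
Ah(h) = H * matdiagonal(vector(n, k, L[k]^h)) * H^(-1); \\ A^h via diagonalization
print((vector(n, i, 1) * Ah(1/2))[2]);                  \\ V(1)~ * A^(1/2): approximates exp_b°0.5(1)
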
Now one can apply this method at different development points, i.e. the original function f is conjugated to the development point p:

f_p(x) = f(x + p) - p

or written as composition:

f_p = tau_(-p) o f o tau_p , with tau_p(x) = x + p ,

then

f = tau_p o f_p o tau_(-p)

and we define the application of the matrix power method at point p by

f°h := tau_p o (f_p)°h o tau_(-p) .

This is the general method.
If I now apply the matrix power method at a fixed point p, then the Bell/Carleman matrix of f_p is lower/upper triangular. For those matrices the power of the truncation is the same as the truncation of the power.

That means the limit to infinity is just an expansion of the matrix, once computed values do not change in that process.

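One can verify this commutation exactly in Pari/GP, e.g. with the triangular Bell matrix of dxp_t kept symbolic in u (the Stirling-number formula for the entries is my reconstruction of the standard one):
Code:
n = 8;
Ut(m) = matrix(m, m, i, j, (j-1)!/(i-1)! * stirling(i-1, j-1, 2) * 'u^(i-1));
B3 = Ut(n + 4)^3;                                \\ power of a larger truncation
print(Ut(n)^3 == matrix(n, n, i, j, B3[i, j]));  \\ prints 1: the n x n block is unchanged
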
So we see that regular iteration is the particular case of applying the matrix power method to a fixed point.


And I really honored this method (applied to a *non-fixed point* like 0) because it can do something where regular iteration fails: it is able to compute real iterates for b > eta = e^(1/e). It also puzzles me that despite this you insist on regular iteration for b > eta.

Quote:Well, next question: "Why work with matrices if things are otherwise well known...?" - I still don't claim something special. It's just my path into the matter: I came from a project in which I compiled relations between Pascal-, Stirling-, Euler- and other matrices operating on formal powerseries, and stumbled on the possibility of iterating functions (other than that of addition, which I had already studied, with some nice encounters with the zeta/eta-function).

But if you focus too much on matrices and nothing else, such interesting relations as the convergence radius of the iterates are just out of scope for you. For it is derived from the interrelation of the limit formulas for regular iteration (which are power-series free) with the power series formulas for regular iteration.

Quote:I've just reread Andrew's "exact entries for slog-operator" today and found a similar matrix-discussion there: it helps my understanding, since I've no training in functional analysis; the bit I had was in 1972 to 77, and only in relation to the computer courses, which were my main subject.

Wah I was born in that time Wink

Quote:that's certainly what I should do. Hope I'll get things working/walking. And why the heck should the girls *pass*?

Well you are right, they build up a big cluster around your place.


RE: Logarithmic behaviour of the super exponential at -2 - Gottfried - 03/14/2009

bo198214 Wrote:For me matrix power method means exactly what I wrote. Take the truncations A|_(n x n) of an infinite Carleman matrix A, apply the matrix power and take the limit
A^h := lim_(n->oo) ( A|_(n x n) )^h .

The difference for the matrix power may be "small"; but it may be significant when applied to the matrix inverse: "take-the-truncation, invert, use-as-approximation" may then give different results from "conclude-the-exact-entries-for-the-infinite-case, truncate, use-as-approximation". It is even more significant if we discuss more complex entities like the set of eigenvalues.
So these different views of things should still be made explicit, and it would be good to keep an identifying nomenclature. I tended to give the finite-matrix-based approach the attribute "polynomial", but this might not be the best choice...

Quote:Now one can apply this method at different development points, i.e. the original function f is conjugated to the development point p:
Surely. No dissent here.


Quote:And I really honored this method (applied to a *non-fixed point* like 0) because it can do something where regular iteration fails: it is able to compute real iterates for b > eta = e^(1/e). It also puzzles me that despite this you insist on regular iteration for b > eta.
:-) As it comes to honor... Well, that's not my problem. I surely should get on with my full description-text about my way of thinking in a new pdf-file - I have some first chapters, but it's very complex and I got stuck several times early on. I'll be "honoring" that method too, so we have no dissent here either.
I've just left this field and am digging at the other one for the gold. I think if a definitive description for the matrices in the infinite case can be given (based on the hypothesis about eigenvalues), this would be very good, and if then also a method for the actual computation were found - this would please me much more than approximating 7^^Pi using ad-hoc eigensystems of finite-size matrices. Maybe the latter will even be the only way to get to practical values; but then: well, there'll be many people and programs which could do that, very fine - why should I bother, it's not my job/profession/money to calculate values?
Wink

Quote:But if you focus too much on matrices and nothing else, such interesting relations as the convergence radius of the iterates are just out of scope for you. For it is derived from the interrelation of the limit formulas for regular iteration (which are power-series free) with the power series formulas for regular iteration.
Here you made a point. However, not in the sense of missing the aspect of the convergence radius; on the contrary: I think I need the matrix layout for the infinite case to have even better conditions for convergence considerations. And since in important cases we'll miss convergence anyway, we can check for summability methods to overstep the range of convergence, but in a well-founded manner.

But as I learned in some discussions in sci.math in the last months, it is fruitful to discuss iteration also in terms of the functions themselves - even some very nice and surprising closed forms were discussed which I never could have found with the formal powerseries/matrix approach. This opened a most interesting field, and I'm actually fiddling with it in a casual manner (you've noticed my casual "iteration exercises" also here in the forum). I think I'll go into this much more once I have the feeling that my questions/ideas with the infinite matrices are solved (or shown to be unsolvable) and I can close that case. Just currently I've applied the (infinite) matrix concept to Andrew's slog with a nice gain of insight... :-) So there's still something in it. (I'll need it also for the discussion of iteration series, I think)


RE: Logarithmic behaviour of the super exponential at -2 - andydude - 03/31/2009

bo198214 Wrote:then the limit can be given as

You know, at first I was confused, because this formula implies that

but the Cz page gives

but then I realized the point was different Smile

Andrew Robbins


RE: A nice series for b^^h , base sqrt(2), by diagonalization - andydude - 04/06/2009

Gottfried Wrote:Here for base b = sqrt(2) :

(I rewrote this in TeX)

I used regular iteration, and I did not get these expansions at all. I got

so what this series represents would be regular iteration, evaluated at , and expanded about . Is that right? How did you get this?

Andrew Robbins


RE: A nice series for b^^h , base sqrt(2), by diagonalization - Gottfried - 06/10/2009

(04/06/2009, 10:23 PM)andydude Wrote:
Gottfried Wrote:Here for base b = sqrt(2) :

(I rewrote this in TeX)

I used regular iteration, and I did not get these expansions at all. I got

so what this series represents would be regular iteration, evaluated at , and expanded about . Is that right? How did you get this?

Andrew Robbins

Hi Andrew -

sorry, took a long time to answer.
The coefficients occur as sums; your formula contains the x-parameter - maybe if you insert x=-1 we get identity.

[update] I have the same coefficients as yours, just evaluated at x=-1; see next post [/update]

Let me explain in my matrix/Pari-GP-notation how I got the coefficients:

Assume the notation of variables as usual: the variables b,t,u encoding the base-parameters, where for our example t=2, u=log(t), b=t^(1/t) = sqrt(2).

Also let exp_b°h(x) be the h'th iterate of the exp-function to base b, and dxp_t°h(x) the h'th iterate of the decremented exp-function to base t, dxp_t(x) = t^x - 1, where we use the identity (if we have a real fixpoint)

exp_b°h(x) = t * ( dxp_t°h(x/t - 1) + 1 )

which becomes numerically, with x=1, t=2:

exp_b°h(1) = 2 * ( dxp_2°h(-1/2) + 1 )

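(A two-line Pari/GP sanity check of this identity for the integer height h=2, with dxp_t(x) = t^x - 1:)
Code:
b = sqrt(2); t = 2;
dxp(x) = t^x - 1;
print(b^(b^1), "   ", t*(dxp(dxp(1/t - 1)) + 1));  \\ both ~ 1.6325269...
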
The function dxp to base t (=2) gets its coefficients from the triangular Bell matrix Ut, where we implement the h'th fractional power using diagonalization, denoting the eigenmatrices as W and WInv (=W^-1):

Code:
´  Ut^h = W * dV(u^h) * WInv

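In actual Pari/GP this might look as follows (a sketch: the Stirling-number formula for the entries of Ut is my reconstruction, and mateigen is used for W only because the eigenvalues u^0, u^1, ... are distinct; mateigen returns L and the columns of W in matching order):
Code:
default(realprecision, 60);
n = 16; t = 2; u = log(t);
Ut = matrix(n, n, i, j, (j-1)!/(i-1)! * stirling(i-1, j-1, 2) * u^(i-1));
[L, W] = mateigen(Ut, 1);   \\ L ~ (1, u, u^2, ...) up to ordering
WInv = W^(-1);
Uth(h) = W * matdiagonal(vector(n, k, L[k]^h)) * WInv;  \\ Ut^h = W * dV(u^h) * WInv
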

and the function is in general finally computed by
Code:
´  V(y)~ = V(x/2-1)~ * Ut^h

Here   y = V(y)[1]  , meaning y is the second element of V(y)
and it is also y = dxp_t°h(-1/2) by the construction of the formula.
Then

   exp_b°h (1) = 2 + 2*y    = 2+ 2*dxp_t°h(-1/2)


Keeping the iteration-parameter variable we have, using the eigenmatrices
Code:
´   V(y)~ = V(x/2-1)~ * W * dV(u^h) * WInv


Since we assume x to be the constant x=1, we can precompute the row vector S~
Code:
´     S~ = V(-1/2)~ *W


which also implements a Schröder-function (it need not be the principal one) for dxp_t°h(x) at the constant x=-1/2. This is also the Schröder-function for exp_b°h(x) at x=1, as far as we look only at the coefficients of the 2nd column of W.

So we can write
Code:
´     V(y)~ = S~ * dV(u^h) * WInv


WInv provides the inverse of the Schröder-function, and if we extract only the scalar result we can write (with the notation W[,1] meaning the second column of a matrix W)

Code:
´    y  = S~ * dV(u^h) * WInv [,1]


Here we can interchange the order of multiplication of the last two factors and precompute the constant coefficients of a powerseries in the result vector M of
Code:
´   M~ = S~ * diag(WInv[,1])


and get

y = M~ * V(u^h)

and according to the above
Code:
´  exp_b°h (1) = 2 + 2*y


we have the source of my coefficients in the vector M:
Code:
´   exp_b°h (1) = 2 + 2*y = 2 + 2* M~ * V(u^h)

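Continuing the Pari/GP sketch from above, the whole chain S~, M~ and the final evaluation might look like this (1-based indices, so WInv[,2] is the "second" column; since mateigen's ordering of the eigenvalues is not guaranteed, L[k]^h is used in place of (u^h)^(k-1); the value for h=1 must come out as sqrt(2)):
Code:
S = vector(n, i, (-1/2)^(i-1)) * W;                 \\ S~ = V(-1/2)~ * W
M = vector(n, k, S[k] * WInv[k, 2]);                \\ M~ = S~ * diag(WInv[,1])
expbh(h) = 2 + 2*sum(k = 1, n, M[k] * L[k]^h);      \\ 2 + 2*M~*V(u^h)
print(expbh(1));                                    \\ ~ 1.41421356... = sqrt(2)
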

So the coefficients which I provided are just the precomputed coefficients in M; they represent the evaluation of the coefficients of the Schröder-function for dxp, including a fixpoint-shift of the x-parameter and of the function-value.

--------------------------------

In short, omitting matrices:

Let C(x) denote a Schröder-function for dxp_t(x), c_k the k'th coefficient of its powerseries,
D(x) the inverse and d_k the k'th coefficient of its powerseries,
for brevity C the value C(-1/2),
and v = u^h, the h'th power of u.

Then we can write

exp_b°h(1) = 2 + 2*D(v*C) = 2 + 2 * sum_(k>=1) d_k * C^k * v^k

and the k'th coefficient in my first mail is just 2* C^k * d_k in the formula above.

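The same computation, matrix-free, as a Pari/GP power-series sketch (the term-by-term solver for the Schröder equation C(dxp(x)) = u*C(x) and the names sig, D, C are mine; n=64 is a modest truncation):
Code:
default(realprecision, 60);
n = 64; t = 2; u = log(t); b = t^(1/t);
f = exp(u*x + O(x^n)) - 1;             \\ dxp_t(x) = t^x - 1 as a power series
sig = x + O(x^n);                      \\ Schroeder series C(x), normalized C'(0)=1
for(k = 2, n-1,
  r = subst(sig, x, f) - u*sig;
  sig -= polcoeff(r, k, x)/(u^k - u) * x^k);
D = serreverse(sig);                   \\ inverse Schroeder series D(x)
C = subst(truncate(sig), x, -1/2);     \\ the constant C = C(-1/2)
expbh(h) = 2 + 2*subst(truncate(D), x, C*u^h);  \\ 2 + 2*D(C*v), v = u^h
print(expbh(1), "   ", b);             \\ both ~ 1.41421356...
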

Gottfried


RE: A nice series for b^^h , base sqrt(2), by diagonalization - Gottfried - 06/10/2009

(04/06/2009, 10:23 PM)andydude Wrote:
Gottfried Wrote:Here for base b = sqrt(2) :

(I rewrote this in TeX)

I used regular iteration, and I did not get these expansions at all. I got

so what this series represents would be regular iteration, evaluated at , and expanded about . Is that right? How did you get this?

Andrew Robbins

Hi Andrew,
2nd note. I just looked at my coefficients. If I do not apply the summation with constant x, I get the same numbers as you gave (I just looked at 4 decimals and a handful of coefficients). I think we do the same computation, except that I changed the order of summation.

Gottfried


RE: A nice series for b^^h , base sqrt(2), by diagonalization - Gottfried - 06/11/2009

Fun...

I could now make use of the upper (repelling) fixpoint with this type of series.

Here we have the upper fixpoint t=4, u=log(4) ~ 1.3862943611... for the same base b = sqrt(2), and we can compute the fractional heights for some appropriate initial value x, say x=5:

to get context I quote the previous:
Gottfried Wrote:Here for base b = sqrt(2) and for brevity v = u^h:

exp_b°h(1) = 2 + 2*( M~ * V(v) )

and again with v for u^h and f(h) for the longish exp-expression f(h) = exp_b°h(5):

f(h) = 4 + 4*( M~ * V(v) )

with different coefficients M. (We have to compute the appropriate Ut-matrix and also the W-matrix now.)


For x=5 we have by fixpoint-shift x1 = x/t - 1 = 5/4 - 1 = 0.25 and the evaluation of the schröder-function
Code:
´   S~ = V(5/4-1)~ * W = V(0.25)~ * W

gives, by the summation in each column, a set of series which converge well with 64 terms (n=64 is my selected vector/matrix-dimension) and give the vector S containing all results. Then the coefficients from WI (= WInv, representing the inverse of the Schröder-function) are multiplied in to get the constant vector M, which now has the coefficients that are independent of u^h:
Code:
´   M~ = S ~ * diag(WI[,1])


Then the intermediate value y as in the previous msg is again:

y = M~ * V(u^h)

Thus, having the fixpoint t=4 here, we get the new coefficients for the series from the vector M:
Code:
´  f(h) = 4 + 4* y
         = 4 + 4* (M~ * V(u^h))


which can be evaluated for some height h, as long as the occurring series (from M~ * V(u^h)) converges or can be Euler-summed.
This gives the powerseries:




For the heights h=0..-2 in 1/32-steps I get the following values:
Code:
´   h    |  f(h) = exp_sqrt(2)°h(5)=
-----------------------------    
       0  |  5.00000000000
   -1/32  |  4.98555564349
   -1/16  |  4.97137811129
   -3/32  |  4.95746089099
    -1/8  |  4.94379767428
   -5/32  |  4.93038234919
   -3/16  |  4.91720899253
   -7/32  |  4.90427186283

    -1/4  |  4.89156539347
   -9/32  |  4.87908418618
   -5/16  |  4.86682300480
  -11/32  |  4.85477676933
    -3/8  |  4.84294055021
  -13/32  |  4.83130956288
   -7/16  |  4.81987916254
  -15/32  |  4.80864483915

    -1/2  |  4.79760221266
  -17/32  |  4.78674702837
   -9/16  |  4.77607515259
  -19/32  |  4.76558256839
    -5/8  |  4.75526537155
  -21/32  |  4.74511976670
  -11/16  |  4.73514206358
  -23/32  |  4.72532867348

    -3/4  |  4.71567610582
  -25/32  |  4.70618096483
  -13/16  |  4.69683994642
  -27/32  |  4.68764983512
    -7/8  |  4.67860750119
  -29/32  |  4.66970989776
  -15/16  |  4.66095405821
  -31/32  |  4.65233709352

      -1  |  4.64385618977
  -33/32  |  4.63550860581
  -17/16  |  4.62729167087
  -35/32  |  4.61920278239
    -9/8  |  4.61123940385
  -37/32  |  4.60339906273
  -19/16  |  4.59567934854
  -39/32  |  4.58807791086

    -5/4  |  4.58059245755
  -41/32  |  4.57322075293
  -21/16  |  4.56596061612
  -43/32  |  4.55880991933
   -11/8  |  4.55176658633
  -45/32  |  4.54482859088
  -23/16  |  4.53799395524
  -47/32  |  4.53126074878

    -3/2  |  4.52462708658
  -49/32  |  4.51809112807
  -25/16  |  4.51165107581
  -51/32  |  4.50530517419
   -13/8  |  4.49905170827
  -53/32  |  4.49288900258
  -27/16  |  4.48681542007
  -55/32  |  4.48082936094

    -7/4  |  4.47492926166
  -57/32  |  4.46911359397
  -29/16  |  4.46338086382
  -59/32  |  4.45772961051
   -15/8  |  4.45215840574
  -61/32  |  4.44666585274
  -31/16  |  4.44125058537
  -63/32  |  4.43591126737

      -2  |  4.43064659147

For instance, f(-1) should be log(5)/log(b), and we have from the table at h=-1 the value 4.64385618977, which agrees with the direct computation log(5)/log(sqrt(2)) = 4.64385618977; also it should hold that b^f(-1.5) = f(-0.5), which can be checked easily using values from the table.

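For what it's worth, the whole computation at the upper fixpoint can be redone as a small matrix-free Pari/GP sketch (same term-by-term Schröder solver as in the sketch assumed above, n=64 as in my computation; it reproduces e.g. the tabulated value at h=-1):
Code:
default(realprecision, 60);
n = 64; t = 4; u = log(t); b = t^(1/t);      \\ b = sqrt(2) again, upper fixpoint t = 4
f = exp(u*x + O(x^n)) - 1;
sig = x + O(x^n);
for(k = 2, n-1,
  r = subst(sig, x, f) - u*sig;
  sig -= polcoeff(r, k, x)/(u^k - u) * x^k);
D = serreverse(sig);
C = subst(truncate(sig), x, 5/4 - 1);        \\ fixpoint-shift of x = 5
fh(h) = 4 + 4*subst(truncate(D), x, C*u^h);  \\ f(h) = exp_b°h(5)
print(fh(-1), "   ", log(5)/log(b));         \\ both ~ 4.64385618977
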
Don't know yet whether this has some benefit so far.


RE: A nice series for b^^h , base sqrt(2), by diagonalization - Gottfried - 06/11/2009

(06/11/2009, 06:26 PM)Gottfried Wrote: Don't know yet whether this has some benefit so far.

It looks as if we had a discussion of that recently in Upper superexponential

I'm excerpting a bit of Henryk's post:
(03/29/2009, 11:23 AM)bo198214 Wrote: As it is well-known, we have sexp(0) = 1 for the regular superexponential at the lower fixed point 2.

This can be obtained by computing the Schroeder function at the fixed point 2 of b^x.

(...)
Now the upper regular superexponential is the one obtained at the upper fixed point 4 of b^x.
For this function we have however always sexp(x) > 4, so the condition sexp(0) = 1 can not be met.
Instead we normalize it by sexp(0) = 5, which gives the formula:
(*1)
(...)

My construction in the previous post was obviously the same as the above construction (*1) ... Gottfried (I added the comments // ... )

Gottfried Wrote:Then we can write

exp_b°h(1) = 2 + 2*D(v*C) = 2 + 2 * sum_(k>=1) d_k * C^k * v^k

and the k'th coefficient in my first mail is just 2* C^k * d_k in the formula above.

where the fixpoint "a" is simply given as constant 2 and could be generalized to the symbol. The sum-expression describes the inverse of the schröder-function chi^-1 in Henryk's post. The formula for the repelling fixpoint replaces simply 2 by 4 and (1/2-1) by (5/4-1) and uses the adapted schröder-function. So I think it's useful to redirect replies to the other thread...