Andrew Robbins' Tetration Extension
#11
Base 1 and base infinity? I'm not quite sure I follow...
~ Jay Daniel Fox
#12
The best way to illustrate it is through this graph:
[Image: tetrate_1_00.png]
where the thin dotted line is tetration base-1, the solid line is tetration base-e, and the thick dotted line is tetration base-infinity. Neither base-1 nor base-infinity is very interesting on its own (each is a straight line), and base-infinity isn't really solvable, but that doesn't matter because you can take limits. In the limit as the base goes to 1, the curve gets closer and closer to the thin dotted line, whereas in the limit as the base goes to infinity, the curve gets closer and closer to the thick dotted line, so these aren't really functions but asymptotes of tetration! Some of these trends can be seen in the graphs on my website. However, I didn't come to this conclusion by looking at graphs; I have proof. The super-logarithms of the previously mentioned bases would then be the reflections of those asymptotes across the line y=x.
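For a quick numeric feel for these two asymptotes, here is a toy sketch in Python. It uses only the naive linear critical piece tet_b(x) = 1 + x on [-1, 0], extended upward by the recursion, so it is a crude stand-in and not the full extension discussed in this thread:
Code:
import math

def tet(b, x):
    """Crude first-order tetration approximation for x >= -1."""
    if x <= 0:
        return 1.0 + x                 # linear critical piece on [-1, 0]
    return b ** tet(b, x - 1)          # tet_b(x) = b^tet_b(x - 1)

for b in (1.001, math.e, 1000.0):
    print(b, [round(tet(b, x), 4) for x in (-0.5, 0.5, 1.0, 1.5)])
# As b -> 1 the values at x > 0 flatten toward 1 (the thin dotted line);
# as b -> infinity they explode immediately past x = 0 (the thick dotted line).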

Andrew Robbins
#13
Quote:I have proof.

So what do you actually prove?
#14
bo198214 Wrote:So what do you actually prove?

That the super-logarithm, expanded about z=0, has a definite limiting series as the base goes to 1, and likewise (expanded about z=0) as the base goes to infinity; in the limit these give exactly the asymptotes described above.

One of the nice things about having an exact form for all approximations is that the approximations are invertible (so we can find tetration as well), but the limit to infinity makes the function discontinuous, which is not invertible.

Andrew Robbins
#15
andydude Wrote:That the super-logarithm, expanded about z=0, has a definite limiting series as the base goes to 1, and likewise as the base goes to infinity.

Ah, ok. But you still assume that the coefficients of the super-logarithm converge. I asked because in the beginning it sounded as if you had a proof of the convergence of the coefficients for base 1 and base infinity, which would in itself be a somewhat strange statement.
#16
Indeed, that would be strange. I'm not assuming convergence, but I am assuming that:
  • convergence of the finite series (the approximations, with approximate coefficients) to the infinite series, and
  • convergence of the infinite series (whose coefficients must also converge)
are equivalent, but this may be an error as well.

What I can prove is that, for the nth approximation of the super-log (the finite series):
  • as the base goes to 1, the n+1 coefficients approach definite limits, and
  • as the base goes to infinity, the coefficients approach definite limits for k = 0 .. n.

From this the limiting series above follow easily.

Andrew Robbins
#17
Hi -
I reread this thread yesterday.

Now I have tried a matrix version of the slog - I'd like to see whether the two methods agree.

In short: I assume most of the matrices are known; as a reminder I only recall that the matrix Ut performs the decremented iterated exponentiation Ut(x) to base t here:
Code:
V(x)~ * Ut   = V(y)~  where y = t^x - 1 = Ut(x)
V(x)~ * Ut^h = V(y)~  where y = Ut°h(x)
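To make the matrix convention concrete, here is a minimal numeric check in Python. The layout is my reading of the notation above (an assumption, not spelled out there): V(x)~ = (1, x, x^2, ...) and Ut[r,c] = coefficient of x^r in (t^x - 1)^c.
Code:
import math
import numpy as np

def carleman_U(t, n):
    """n x n truncation of Ut for f(x) = t^x - 1 (lower triangular)."""
    f = np.array([0.0] + [math.log(t)**k / math.factorial(k) for k in range(1, n)])
    U = np.zeros((n, n))
    col = np.zeros(n); col[0] = 1.0          # (t^x - 1)^0 = 1
    for c in range(n):
        U[:, c] = col
        col = np.convolve(col, f)[:n]        # next power, truncated
    return U

n, t, x = 24, 2.0, 0.3
V = lambda s: s ** np.arange(n)              # row vector V(s)~
y = t**x - 1
print(np.allclose(V(x) @ carleman_U(t, n), V(y)))   # True, up to truncation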

-----------

Now to ask for the superlog is to ask for the height h, given y and x=1.

This can be solved surprisingly easily using the known eigenmatrices Z of Ut.
Let u = log(t); then
Code:
Ut = Z * dV(u) * Z^-1

then also
Code:
Ut^h = Z * dV(u^h) * Z^-1

Then the equation
Code:
V(x)~ * Ut^h = V(y)~

can be rewritten as
Code:
  V(x)~ * (Z * dV(u^h) * Z^-1) = V(y)~
 (V(x)~ * Z) * dV(u^h) * Z^-1  = V(y)~
and it follows that
Code:
(V(x)~ * Z) * dV(u^h) = (V(y)~ * Z)

--------------------------------------------

Since we need only the second column of the evaluated parentheses, let's denote the entry in the r-th row of the second column of Z as z_r.

Then define a function
Code:
g(x) = z0 + z1*x + z2*x^2 + z3*x^3 + ...
and we have
Code:
g(x) * u^h = g(y)
and
Code:
u^h = g(y)/g(x)
h   = log(g(y)/g(x)) / log(u)
follows.
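Putting the pieces together, here is a runnable Python sketch of the whole pipeline (same assumed conventions as the check above). Since the truncated Ut is triangular with diagonal (1, u, u^2, ...), the eigenvector for the eigenvalue u can be found by plain forward substitution:
Code:
import math
import numpy as np

def carleman_U(t, n):                        # as in the earlier sketch
    f = np.array([0.0] + [math.log(t)**k / math.factorial(k) for k in range(1, n)])
    U = np.zeros((n, n))
    col = np.zeros(n); col[0] = 1.0
    for c in range(n):
        U[:, c] = col
        col = np.convolve(col, f)[:n]
    return U

n, t = 32, 2.0
u = math.log(t)
U = carleman_U(t, n)

z = np.zeros(n); z[1] = 1.0                  # eigenvector for eigenvalue u
for r in range(2, n):
    z[r] = np.dot(U[r, 1:r], z[1:r]) / (u - u**r)
print(z[:8])                                 # matches the z_r listed below

g = lambda s: np.polyval(z[::-1], s)         # g(s) = z0 + z1*s + z2*s^2 + ...

x = 0.1
y = t**(t**x - 1) - 1                        # y = Ut°2(x), i.e. height h = 2
print(math.log(g(y) / g(x)) / math.log(u))   # ~ 2.0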

If g(x) diverges conditionally, it may still be Euler-summable (see **)

For base t=2 the sequence of z_k diverges at a small rate, and the terms all seem to be positive. If x is negative, the series is surely Euler-summable; if |x| < 1/2 it is even convergent.

Example terms z_r for t=2
Code:
z_r = [0, 1.0000000, 1.1294457, 1.1985847, 1.2474591, 1.2856301, 1.3170719, 1.3439053, ... ]
-----------------------------------------

By the definition b = t^(1/t) we may use this also for the tetration function (or better, "iterated exponential" in Andrew's wording) of base b, since
Code:
Tb(x) = (Ut(x'))"   where x' = x/t - 1 and x" = (x+1)*t
and the eigenvalues of Tb^h are the same as those of Ut^h (namely dV(u^h)).
(But remember that this use of the fixpoint-shift gives varying results depending on the choice of fixpoint - however small the differences may be, according to our current state of discussion.)

So for
Code:
Ut°h(x) = y

the height-function hghU() gives
Code:
h = hghU_t(y) = log( g(y)/g(1) ) / log(log(t))

and for
Code:
Tb°h(x) = y

the height-function hghT() gives
Code:
h = hghT_b(y) = slog_b(y)
  = log( g(y')/g(1') ) / log(u)
  = log( g(y/t-1)/g(1/t-1) ) / log(log(t))
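A quick numeric check of this conjugation between Tb and Ut, with t = 2 and hence b = 2^(1/2):
Code:
t = 2.0
b = t ** (1 / t)                      # b = sqrt(2)
Ut   = lambda x: t**x - 1
up   = lambda x: x / t - 1            # x -> x'
down = lambda x: (x + 1) * t          # x -> x"
for x in (0.5, 1.0, 2.0):
    print(b**x, down(Ut(up(x))))      # the two values agree: both equal b^x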
The other good news is, 1) that I also have an extremely simple recursive eigensystem-solver for Ut, which needs only about 3-7 seconds for 96x96 matrices (if the Stirling numbers are precomputed), depending on float precision, and 2) that we need only the second column for this computation, so the algorithm can be much reduced.

Gottfried

-------------------------------------------------------------
(**) The Euler summation attaches a weighting coefficient e_k to each term in g(x), so the Euler-summed variant eg(x) of the dim-truncated power series is then
eg(x) = e0*z0 + e1*z1*x + e2*z2*x^2 + e3*z3*x^3 + ... + e_dim*z_dim*x^dim

where the e_k have to be determined by the given size of the matrix truncation and an appropriately chosen Euler order.
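As an illustration of the mechanism (not necessarily the exact weights e_k meant above), here is the plain first-order Euler transform, sum_k (-1)^k a_k = sum_m (D^m a_0)/2^(m+1) with D the difference operator, applied to g(x) at x = -0.9 using the sample z_r listed above:
Code:
def euler_sum(a):
    """First-order Euler transform of sum_k (-1)^k a[k], truncated."""
    total, diff = 0.0, list(a)
    for m in range(len(a)):
        total += diff[0] / 2 ** (m + 1)
        diff = [diff[k] - diff[k + 1] for k in range(len(diff) - 1)]
    return total

# at x = -0.9 the terms z_k * x^k = (-1)^k * (z_k * 0.9^k) alternate in sign
z = [0.0, 1.0000000, 1.1294457, 1.1985847, 1.2474591, 1.2856301, 1.3170719, 1.3439053]
x = -0.9
a = [z[k] * abs(x) ** k for k in range(len(z))]
print(sum(z[k] * x ** k for k in range(len(z))))   # raw partial sum (oscillates)
print(euler_sum(a))                                 # Euler-weighted estimate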
Gottfried Helms, Kassel
#18
Gottfried Wrote:and for
Code:
Tb°h(x) = y

the height-function hghT() gives
Code:
h = hghT_b(y) = slog_b(y)
  = log( g(y')/g(1') ) / log(u)
  = log( g(y/t-1)/g(1/t-1) ) / log(log(t))
Actually this means the hgh-functions compute height-differences;
call lgg(x) = log(g(x))/log(u);
then more generally
hghT_b(x1,x0) = lgg(x1') - lgg(x0')
hghU_t(x1,x0) = lgg(x1) - lgg(x0)
and
hghT_b(x) = lgg(x') - lgg(1')
and
hghU_t(x) = lgg(x) - lgg(1)
may be taken as short notations for the default case. (Remember that the functions g and lgg depend on the base parameters b and/or t.)
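Spelling out the one step behind these difference forms: since u^h = g(x1)/g(x0), taking logarithms gives
Code:
h = ( log(g(x1)) - log(g(x0)) ) / log(u) = lgg(x1) - lgg(x0)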

Gottfried
Gottfried Helms, Kassel
#19
(08/19/2007, 09:50 AM)bo198214 Wrote: I just verified numerically that the superlog critical function (originally defined on the critical interval) satisfies slog(b^z) = slog(z) + 1 for all z in the overlap.

So it is quite sure that the piecewise defined slog is also analytic.
Congratulations Andrew!

(However, someone still has to prove this rigorously and also compute the radius of convergence.)

As for the radius of convergence:

let A be the smallest fixpoint => b^A = A

then (Andrew's!) slog(z) with base b should satisfy:

slog(z) = slog(b^z) - 1

=> slog(A) = slog(b^A) - 1

=> slog(A) = slog(A) - 1

=> abs( slog(A) ) = oo

so the radius should be smaller than or equal to abs(A).

Maybe I missed it, but I didn't see that mentioned.
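For a concrete number in the case b = e (a sketch using scipy): the fixed point solves e^A = A, i.e. A = -W(-1) with the Lambert W function, and its modulus then bounds the radius of the slog series about 0.
Code:
import numpy as np
from scipy.special import lambertw

A = -lambertw(-1)        # principal branch: A ~ 0.3181 - 1.3372j
print(np.exp(A) - A)     # ~ 0, confirming e^A = A
print(abs(A))            # ~ 1.3746, an upper bound for the radius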

Also this makes me doubt - especially considering that for every base b the slog should 'also' have a period (together with the oo value at the fixed point A mentioned above), thus abs( slog(A + period) ) = oo too! However, that's just a feeling and not math, of course ...


(Btw, the video link mentioned in this thread doesn't work for me, bo - maybe it isn't online anymore?)
#20
(06/26/2009, 10:51 PM)tommy1729 Wrote: As for the radius of convergence:

let A be the smallest fixpoint => b^A = A

then (Andrew's!) slog(z) with base b should satisfy:

slog(z) = slog(b^z) - 1

=> slog(A) = slog(b^A) - 1

=> slog(A) = slog(A) - 1

=> abs( slog(A) ) = oo

so the radius should be smaller than or equal to abs(A).
It's not only valid for Andrew's slog but for every slog, and not only for the smallest but for every fixed point.
However, not completely:
one cannot expect the slog to satisfy slog(e^z) = slog(z) + 1 *everywhere*.
It's a bit like with the logarithm, which does not satisfy log(ab) = log(a) + log(b) *everywhere*.
What we can say, however, is that log(ab) = log(a) + log(b) holds *up to branches*, i.e. for every occurring log in the equation there is a suitable branch such that the equation holds.
The same can be said about the slog equation.
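The branch caveat in a one-liner, with numpy's principal-branch logarithm:
Code:
import numpy as np
a = b = -1 + 0j
print(np.log(a * b))              # 0
print(np.log(a) + np.log(b))      # ~ 6.283j: off by 2*pi*i, i.e. a branch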
So if we can show that Andrew's slog satisfies slog(e^z) = slog(z) + 1, e.g. for z approaching the fixed point A, then it must have a singularity at A.

Quote:Also this makes me doubt - especially considering that for every base b the slog should 'also' have a period (together with the oo value at the fixed point A mentioned above), thus abs( slog(A + period) ) = oo too! However, that's just a feeling and not math, of course ...
I showed this in some post before, but don't remember which one: the slog is indeed periodic in this sense, again up to branches.

Quote:(Btw, the video link mentioned in this thread doesn't work for me, bo - maybe it isn't online anymore?)

Well, it seems not to exist anymore. It didn't give concrete solutions to our problems; it was just interesting that others also deal with solving infinite equation systems and approximating them via finite equation systems.

