Tetration Forum

Full Version: Andrew Robbins' Tetration Extension
(06/27/2009, 09:39 AM)bo198214 Wrote: [ -> ]tommy1729 Wrote:
as for the radius of convergence:

let A be the smallest fixpoint, i.e. b^A = A

then (Andrew's!) slog(z) with base b should satisfy:

slog(z) = slog(b^z) - 1

=> slog(A) = slog(b^A) - 1

=> slog(A) = slog(A) - 1

=> abs(slog(A)) = oo

so the radius should be smaller than or equal to abs(A)
It's not only valid for Andrew's slog but for every slog, and not only for the smallest but for every fixed point.
However, not completely:
One cannot expect the slog to satisfy slog(e^z) = slog(z) + 1 *everywhere*.
It's a bit like the logarithm, which does not satisfy log(ab) = log(a) + log(b) *everywhere*.
What we can say, however, is that log(ab) = log(a) + log(b) *up to branches*. I.e., for every log occurring in the equation there is a suitable branch such that the equation holds.
The same can be said about the slog equation.
So if we can show that Andrew's slog satisfies slog(e^z) = slog(z) + 1, e.g. for $z,e^z\in \{\zeta: |\zeta| <|A|\}$, then it must have a singularity at A.
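As a side note, the fixed point of exp nearest the origin (for base e) is easy to compute directly; a quick numerical sketch of my own (not from Andrew's paper), using Newton's method on f(z) = exp(z) - z:

```python
import cmath

# Newton iteration for f(z) = exp(z) - z, started near the upper primary
# fixed point of exp (the starting value 0.3 + 1.3j is an assumption)
z = 0.3 + 1.3j
for _ in range(50):
    z -= (cmath.exp(z) - z) / (cmath.exp(z) - 1)

# now exp(z) = z to machine precision; by the argument above, the radius of
# convergence of slog (base e) at the origin is at most abs(z)
print(z, abs(z))
```

This gives A ≈ 0.3181 + 1.3372i, so the bound on the radius is |A| ≈ 1.3746.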

---

of course, for 'every' fixed point!

I know that, silly :p

but the smallest is of course closest to the origin, so that is the one I considered, since I wanted the radius (which is the distance to the origin).

I completely agree with you, Sir Bo (or whatever you like to be called :p).

but now seriously.

Andrew nowhere mentions "branches" or even the complex plane in his paper.

well, at least not in the PDFs on his website.

I personally feel those branches are one of the most important topics in the tetration debate.

*** warning: highly speculative below ***

as for your *everywhere*: in general I think you are correct, but maybe some bases do satisfy it 'almost everywhere'?

I think the only exceptions for some bases are the numbers a_i with

sexp(slog(a_i) + x) = a_i for positive real x.

however, that's quite 'a lot' (uncountable and dense).

***

I often like to consider invariants and branches as 'inverses',

like

exp(x + 2 pi i) <-> log(x) + 2 pi i

following that 'philosophy',

the branch(es) of slog(z) we are looking for are the invariants of sexp(z).
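that exp/log side of the analogy can be checked directly; a small sketch of my own illustration:

```python
import cmath

x = 0.7 + 0.2j

# 2*pi*i is an *invariant* (period) of exp: shifting the argument changes nothing
assert abs(cmath.exp(x + 2j * cmath.pi) - cmath.exp(x)) < 1e-12

# ...while adding 2*pi*i to log(x) gives another *branch* of the logarithm:
# both values are valid preimages of x under exp
other_branch = cmath.log(x) + 2j * cmath.pi
assert abs(cmath.exp(other_branch) - x) < 1e-12
```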

just some quick musings; please forgive any blunders, I'm an impulsive poster with little time.

also forgive me if this has been discussed before, e.g. many years ago; I've only been here a few months.

regards

tommy1729
so, if slog is defined at the fixed point A (= exp(A)) and the "main identity" still holds:

slog(z) = slog(exp(z)) - 1

then abs(slog(A)) needs to be oo.

but there is another logical value for slog(A).

of course, only when slog(A) exists and the main identity does not hold:

A = exp(A) = sexp(slog(A) + 1)

we know that a half-iterate should have the same fixpoints, thus

A = hexp(A) = sexp(slog(A) + 1/2)

repeating that, we get in the limit n -> oo

A = sexp(slog(A) + 1/(2^n)) = sexp(slog(A))

in fact

A = sexp(slog(A) + 1/(2^n)) = sexp(slog(A) + 1/(2^m))

holds for all positive integers n and m.

in fact, after some work, for all positive real n and m.

(assuming $C^\infty$)

this suggests that sexp is not an entire function either.

(hint: sexp(slog(A) + x) = A for any real x => a straight line in the complex plane on which sexp takes a constant value!!)

thus sexp(z) is not entire UNLESS abs(slog(A)) = oo!

(UNLESS = necessary condition, not necessarily sufficient)
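the hint can be made precise via the identity theorem; a sketch of the missing step (my own reasoning, assuming sexp is analytic on a connected open set containing the segment):

if $\text{sexp}(\text{slog}(A)+x)=A$ for all real $x\ge 0$, then $\text{sexp}(\zeta)-A$ vanishes on a set with an accumulation point, so by the identity theorem $\text{sexp}\equiv A$ throughout its connected domain of analyticity. Since sexp is not constant, it cannot be entire as long as $\text{slog}(A)$ is a finite point of the plane.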

...

the subject gets more complicated; I won't continue to elaborate, but I have reasons to assume slog(A) = sexp(A) = A = exp(A).

I know slog has a fixed point that is not a fixed point of exp.

to save time and errors,
I want Andrew Robbins to compute slog(A) (where exp(A) = A).

anybody else who has an slog function is also welcome to give his result for slog(A).

how many slogs are there anyway??

regards

tommy1729
(06/29/2009, 08:20 PM)tommy1729 Wrote: [ -> ]i want andrew robbins to compute slog(A) ( exp(A) = A )

That is indeterminate.

(06/29/2009, 08:20 PM)tommy1729 Wrote: [ -> ]how many slog's are there anyway ??

I only know of 3 slogs that are defined from scratch:
1. The integer-valued superlog, which is better known as log star.
2. The intuitive superlog, which is based on the research of Peter Walker and a few people on this forum.
3. A continued fraction superlog (Nelson's superlog), based on this conversation, which I wrote about in section 4.4.6 of the Tetration Reference.
Of these definitions, the only continuous one so far is the intuitive/natural superlog; all the others are defined as the inverse function of tetration, where tetration itself is defined directly.

Andrew Robbins
On a slightly related note, you might as well try to calculate simpler cases (all of which are indeterminate, if my mental calculations are correct):

1^^-1
1^^-2
0^^-2

I'm sure you can find other places where the value of tetration is indeterminate.
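a quick check of the first case, using the recurrence $b\uparrow\uparrow (n-1)=\log_b(b\uparrow\uparrow n)$ with $b\uparrow\uparrow 0=1$ (a sketch of my own, not Andrew's working):

$1\uparrow\uparrow -1 = \log_1(1\uparrow\uparrow 0) = \log_1(1) = \frac{\ln 1}{\ln 1} = \frac{0}{0}$, which is indeterminate.

similarly $0\uparrow\uparrow -1 = \log_0(1) = \frac{\ln 1}{\ln 0} = 0$, so $0\uparrow\uparrow -2 = \log_0(0) = \frac{\ln 0}{\ln 0}$, again indeterminate.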
(07/27/2009, 08:10 AM)andydude Wrote: [ -> ]
(06/29/2009, 08:20 PM)tommy1729 Wrote: [ -> ]i want andrew robbins to compute slog(A) ( exp(A) = A )

That is indeterminate.

How do you mean "indeterminate"?

as in oo? as in undefined?

isn't Andrew's slog supposed to be defined for all complex z?

didn't he claim that?

Regards

tommy1729
(08/11/2009, 12:18 PM)tommy1729 Wrote: [ -> ]How do you mean "indeterminate"? as in oo? as in undefined? isn't Andrew's slog supposed to be defined for all complex z?
I suppose a direct analogy is best. The logarithm is the Abel function of multiplication, and multiplication has 0 as a fixed point. So what, pray tell, is log(0)?

In the same way that log(0) is indeterminate, so too is slog(A), where A is a fixed point of exponentiation.

Of course, the real part of log(0) is negative infinity, but the imaginary part is undefined (any value would work). For slog(A), the imaginary part is infinity (or negative infinity), but the real part is undefined.
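the log(0) half of the analogy is easy to see numerically; a small sketch of my own:

```python
import cmath

# approach 0 along different rays r*e^{i*theta}: the real part of log diverges
# to -infinity on every ray, while the imaginary part merely records the
# angle of approach -- so any imaginary value "would work"
for theta in (0.0, 1.0, 2.0, -2.5):
    for r in (1e-3, 1e-6, 1e-9):
        w = cmath.log(r * cmath.exp(1j * theta))
        assert abs(w.imag - theta) < 1e-12  # imaginary part = approach angle

assert cmath.log(1e-9).real < -20  # real part heads to -infinity
```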
(08/11/2009, 07:06 PM)jaydfox Wrote: [ -> ]In the same way that log(0) is indeterminate, so too is slog(A), where A is a fixed point of exponentiation.
I should preface this by noting that it also depends on which branch we're in.

For example, consider the number 0.787605370443680 - 0.755132477752332*I. Note that this number is well within the radius of convergence of the power series of Andrew's slog developed at 0.

Exponentiate this three times, and you'll arrive at 0.318131505204764 + 1.33723570143069*I, which is A for base e.

So slog(A) is equal to 3+slog(0.787605370443680 - 0.755132477752332*I), in some branch of the slog. But in the principal branch, it's at the singularity, and hence the value is indeterminate.
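this is easy to verify with a few lines (a sketch of my own, using the digits quoted above):

```python
import cmath

# jaydfox's example point, well inside the radius |A| ~ 1.3746 around 0
z = 0.787605370443680 - 0.755132477752332j
for _ in range(3):
    z = cmath.exp(z)  # exponentiate three times...

A = 0.318131505204764 + 1.33723570143069j  # the (upper) fixed point of exp
print(abs(z - A))  # ...and we land on A, up to the precision of the digits
```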
(08/07/2007, 04:38 PM)bo198214 Wrote: [ -> ]I just read Andrew Robbins' solution to the tetration problem, which I find very convincing, and want to use the opportunity to present and discuss it here.

The solution ${}^x b$ satisfies the two natural conditions
1. ${}^1b=b$ and ${}^{x+1}b=b^{{}^xb}$
2. $x\mapsto {}^xb$ is infinitely differentiable.

For $k\ge 1$ the constant -1 vanishes and we make the following calculations:
$s^{(k)}(x)=(t(b^x))^{(k)}=\left(\sum_{i=0}^\infty \nu_i \frac{b^{xi}}{i!}\right)^{(k)}=\sum_{i=0}^\infty\frac{\nu_i}{i!}(b^{xi})^{(k)}$
The derivative of $b^{xi}$ is easily determined to be
$(b^{xi})'=b^{xi}\text{ln}(b) i$ and so the k-th derivative is $(b^{xi})^{(k)} = b^{xi}(\text{ln}(b)i)^k$, which gives us in turn
$\nu_k=s(x)^{(k)}= \text{ln}(b)^k\sum_{i=0}^\infty\nu_i\frac{i^k}{i!}$ for $k\ge 1$.

notice: in the last line, Bo wrote s(x) instead of s(0).

it is an expansion at x = 0.

now if we consider expansions at both x = 0 and x = 1, and get the same coefficients for x = 0 by computing them from

1) the coefficients expanded at x = 1
2) solving the modified equation (see below),

then that probably means we have radius 1 or larger (radius from the origin at x = 0).

since b^(1*i) = b^i, we get an extra b^i factor on the right side.

and v_k is replaced by sum v_k / k!

that equation should be solvable and have the same solutions v_k IF Andrew's slog has radius 1 (or larger) from the origin.

Bo mentioned the potential non-uniqueness of the v_k when expanded at x = 0.

maybe this could be the extra condition we(?) are looking for.

Regards

tommy1729
(08/23/2009, 02:45 PM)tommy1729 Wrote: [ -> ]notice in the last line bo wrote s(x) instead of s(0).

which is obviously a misprint. The corresponding equation system for arbitrary $x_0$ is:

$\nu_k(x_0)=s^{(k)}(x_0)= \text{ln}(b)^k\sum_{i=0}^\infty\nu_i \cdot \frac{ b^{x_0 i}\cdot i^k}{i!}$ for $k\ge 1$.

Quote:it is an expansion at x = 0.
now if we consider expansions at both x = 0 and x = 1, and get the same coefficients for x = 0 by computing them from
1) the coefficients expanded at x = 1
2) solving the modified equation (see below)

I doubt this. I guess we get different superlogarithms for different development points $x_0$. One should perhaps check this with a complex plot.
I further guess that for $x_0$ converging to the lower fixed point, the solution converges to the regular tetration.
And third, I guess that Andrew's slog corresponds to the inverse of the matrix-power sexp (which also depends on a development point).

Quote:since b^1 i = b^i , we get an extra b^i factor on the right side.
*nods*

Quote:and v_k is replaced by sum v_k / k!

If you have a function f(x) with power series coefficients $v_k$ at 0, then f(x+d) has power series coefficients $v_k(d)$ (provided that f has convergence radius > d at 0):
$v_k(d) = \sum_{n=k}^\infty \binom{n}{k} d^{n-k} v_n$ and vice versa $v_k = \sum_{n=k}^\infty \binom{n}{k} (-d)^{n-k} v_n(d)$.
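this recentering can be sanity-checked on a function whose coefficients are known at both points; a sketch (my own example, not Bo's) with f(x) = 1/(1-x), where $v_n = 1$ at 0 and $v_k(d) = 2^{k+1}$ at d = 1/2:

```python
from math import comb

# f(x) = 1/(1-x): power series coefficients v_n = 1 at 0;
# recentered at d = 1/2, f(1/2 + u) = 2/(1 - 2u), so v_k(d) = 2^(k+1)
d = 0.5
N = 200  # truncation of the infinite sum (the tail is negligible here)

for k in range(6):
    vk_d = sum(comb(n, k) * d**(n - k) for n in range(k, N))
    assert abs(vk_d - 2**(k + 1)) < 1e-9
```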

Quote:bo mentioned the potential non-uniqueness for v_k when expanded at x = 0.
maybe this could be the extra condition we(?) are looking for.

The demand that the solutions at different development points give the same function does not, by itself, tell us how to solve the equation system differently, if that is what you mean.
(08/23/2009, 03:23 PM)bo198214 Wrote: [ -> ]which is obviously a misprint. The corresponding equation system for arbitrary $x_0$ is:

$\nu_k(x_0)=s^{(k)}(x_0)= \text{ln}(b)^k\sum_{i=0}^\infty\nu_i \cdot \frac{ b^{x_0 i}\cdot i^k}{i!}$ for $k\ge 1$.

right, that's what I meant.

bo198214 Wrote:I doubt this. I guess we get different superlogarithms for different development points $x_0$. One should perhaps check this with a complex plot.

The idea is to arrive at the same superlogarithms by choosing an appropriate (of possibly many) solution at the development points $x_0$ and $x_1$.

if that is possible for all points in the (real) interval [$x_0$, $x_1$],
then I assume THAT PARTICULAR SLOG has a radius of at least $x_1 - x_0$ when developed at $x_0$.

bo198214 Wrote:I further guess that for $x_0$ converging to the lower fixed point, the solution converges to the regular tetration.
And 3. I guess that Andrew's slog corresponds to the inverse of the matrix power sexp (which also depends on a development point).

I don't know why you believe that.

anyway, I'm only thinking about real bases > eta.

a proof of those statements would be very interesting though!

bo198214 Wrote:If you have a function f(x) with power series coefficients $v_k$ at 0, then f(x+d) has power series coefficients $v_k(d)$ (provided that f has convergence radius > d at 0):
$v_k(d) = \sum_{n=k}^\infty \binom{n}{k} d^{n-k} v_n$ and vice versa $v_k = \sum_{n=k}^\infty \binom{n}{k} (-d)^{n-k} v_n(d)$.

yes, I wanted to add that comment yesterday, but you were first.
that is a very vital part of my idea.

bo198214 Wrote:
tommy1729 Wrote:bo mentioned the potential non-uniqueness for v_k when expanded at x = 0.
maybe this could be the extra condition we(?) are looking for.

The demand that the solutions at different development points give the same function does not, by itself, tell us how to solve the equation system differently, if that is what you mean.

I think it does... (if such a solution exists and we have the same function, analytic at both development points, no matter where we expand), although I gave neither a method nor a proof.
I think of it as a solvable system of coupled infinite simultaneous equations?

sorry, gotta run.

regards

tommy1729
(08/26/2009, 04:01 PM)tommy1729 Wrote: [ -> ]The idea is to arrive at the same superlogarithms by choosing an appropriate (of possibly many) solution at the development points $x_0$ and $x_1$.

if that is possible for all points in the (real) interval [$x_0$, $x_1$],
then I assume THAT PARTICULAR SLOG has a radius of at least $x_1 - x_0$ when developed at $x_0$.

I have thought about this, and I have a discussion of it as well.

I'd like to write up the discussion here, but I'd rather just upload my notes. Mind you, these notes are not final at all, but rather the unsolicited ramblings of a curious mind. The section I'm talking about is draft/Robbins_paper3.pdf (pages 10-16), which is also what I was referring to here when I said a "detailed investigation of the coefficients".

I noticed that if you tabulate the coefficients of z (the Taylor coefficients of the super-logarithm), with the coefficients of ln(a) in the numerator and denominator of each coefficient, then the diagonals of this tabulation make interesting patterns. So I started finding expressions for the diagonals of this table, and got stuck on the 5th diagonal. That is why I haven't developed this idea much further.

If any of the expressions can be proven by induction or the like, they would form the basis of a proof for the base-1 slog and the base-$\infty$ slog (which only require the first row and the last row). I thought that if we had expressions for all coefficients as functions of n, then perhaps we could find a closed form for $\text{slog}_e'(0)$ or something.

Andrew Robbins