some comments 2021
#1
As said before, I intend to use the incomplete Gamma function. 

Let



We can compute this by infinite composition, like so:



This is pretty easy and standard.
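The specific maps aside, evaluating such an infinite composition numerically is just nested function application; a minimal sketch, with placeholder contracting maps phi_k(z) = z + 2^(-k) sin z standing in for the actual ones (the summable 2^(-k) perturbations are what make the truncations converge on compact sets):

```python
import math

def compose(maps, z):
    """Evaluate the finite composition phi_1(phi_2(...phi_n(z)...))."""
    for phi in reversed(maps):
        z = phi(z)
    return z

# Placeholder maps phi_k(z) = z + 2^{-k} * sin(z); since sum_k 2^{-k} < oo,
# deeper truncations change the value less and less.
def phi(k):
    return lambda z: z + 2.0 ** (-k) * math.sin(z)

z0 = 0.5
deep = compose([phi(k) for k in range(1, 41)], z0)
deeper = compose([phi(k) for k in range(1, 51)], z0)
```

Truncating at depth 40 versus 50 changes the value by roughly 2^(-40), which is the usual sign of a convergent infinite composition.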

The idea to achieve approximate tetration would then be 

, find .

then  approximately.

However, this is just an approximation, and it depends on the real part of , which should preferably be a large positive real.


This leads to many ideas that may or may not work...

For instance, assume the function is analytic in  and let this real part go to positive infinity.

Or find a larger solution  (larger positive real part, preferably close to the real line) and start from there.
Or a sequence of .

Notice that tetration paths can self-intersect, so these might just be other directions!

Another idea is the simple  for integer m, starting from there.
Of course we choose m such that y_m has a large positive real part.
Notice that m can also be negative and still be good!

The problem here is that when we pick different m for different x, we might end up with discontinuous solutions.
A possible fix might be similar to the sequence above: an infinite sequence of optimal m's.
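A toy version of the shift-and-pull-back mechanic, using the standard crude seed that is linear on (-1, 0] (a stand-in here, not the Gamma-based approximation): push x right by an integer m, evaluate there where everything is large, then recover the value with m logarithms. For negative m the logs would become exps.

```python
import math

def s(x):
    """Crude tetration-like seed (base e): linear on (-1, 0], extended by exp."""
    if x <= 0.0:
        return x + 1.0
    return math.exp(s(x - 1.0))

def pulled_back(x, m):
    """Evaluate s far to the right at x + m, then pull back with m logarithms."""
    y = s(x + m)
    for _ in range(m):
        y = math.log(y)
    return y

# Because s satisfies s(x+1) = exp(s(x)) exactly, the pull-back is exact here;
# m is limited only by floating-point range in this toy.
val = pulled_back(0.5, 3)
```

With a genuine approximation instead of this exact seed, the same pull-back is what turns "approximately right far to the right" into a candidate value near the origin.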

The problem is mainly with the nonreals.

A few more comments

We could try the real line and then "simply" take the analytic continuation.

This brings me to base-change ideas, or James Nixon's approach:
Let t go to +oo;
to help find tetration.


I'm not completely certain this is analytic, but some comments might be useful.

For starters, this is much more likely to be analytic than the base change or f(z) = ln ... ln ln ln 4^4^4^...^4^z type methods.

1) How do we take a single logarithm and get instant analytic continuation?

PROPOSED SOLUTION 

log(U(z)) = integral from a to c of U'(w) dw / U(w) + integral from c to b of U'(w) dw / U(w) + "constant".

Where a, b, c and the "constant" are chosen wisely. That is to say: b = z, a is an appropriate basepoint, and the intermediate point c is chosen to avoid division by zero.
The path a -> c -> b stays where U is analytic and nonzero.

( assuming we already have analytic continuation of U(z) )

example 

log(exp(z)) = integral from 1 to z/2 of exp'/exp + integral from z/2 to z of exp'/exp + constant = (z/2 - 1) + (z - z/2) + 1 = z.
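This is easy to sanity-check numerically; the quadrature below is a plain trapezoid rule along straight segments, with U = exp as in the example (the endpoint names a, c follow the text; the segment count is an arbitrary choice):

```python
import cmath

def segment_integral(f, a, b, n=2000):
    """Composite trapezoid rule for the line integral of f along the segment a -> b."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, n))
    return total * h

def log_via_path(U, dU, z, a, c):
    """log(U(z)) as int_a^c U'/U + int_c^z U'/U + log(U(a)), path avoiding zeros of U."""
    g = lambda w: dU(w) / U(w)
    return segment_integral(g, a, c) + segment_integral(g, c, z) + cmath.log(U(a))

# The post's example: U = exp, a = 1, c = z/2.  The result must be exactly z.
z = 0.7 + 0.3j
val = log_via_path(cmath.exp, cmath.exp, z, 1.0, z / 2)
```

The same routine continues log(U) for any U that is analytic and nonzero along the chosen path, which is the point of the construction.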


**

2) For real s > 1, how do we bound ln ln ln ... exp(a1 * exp(a2 * exp(...)))?

If you want to bound such compositions from above in the positive real direction, you simply take the weakest ones first.

For instance, for a, x > 1: a * exp(x) <= exp(a x) = exp(x)^a.

so :

ln ln ln ... exp(a1* exp( a2 * exp ( ... < s^(a1*a2*a3*...)

This automatically proves that it converges for real s > 1 and is bounded by s^(a1*a2*a3*...).
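The comparison underlying the bound can be spot-checked numerically (the sample grid is arbitrary):

```python
import math

def bound_holds(a, x):
    """Check a * e^x <= e^(a x); note e^(a x) equals (e^x)^a identically."""
    return a * math.exp(x) <= math.exp(a * x)

samples = [(a, x) for a in (1.1, 2.0, 3.5) for x in (1.0, 1.5, 4.0)]
all_ok = all(bound_holds(a, x) for a, x in samples)
```

The inequality a * e^x <= e^(a x) for a, x >= 1 is equivalent to ln a <= (a - 1) x, which holds since ln a <= a - 1; taking the log of the upper form e^(a x) = (e^x)^a is what peels one exponential per step.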


***

Combining 1) and 2) might help in proving that the proposed solution is analytic?

***

There is more to say but James has already done so.

***

Many more ideas are in my head but they are complicated and doubtful.
I just wanted to share some easy ideas here.
I might echo some ideas of James.

***

The idea of chaos leads to the fear that bases slightly different from e lead to chaos, and hence to log(0) for the Jn(z) at nonreal z.
The idea occurs that when the bases are close enough to e, then all is fine.
Which leads me to ideas like: is this Gamma fast enough?
Or should we have tetrational growth?

Another idea is this: can the speed be too fast?

I mean, if it is too fast, our function might be too close to a finite power tower, because the tail goes to zero too fast.
This might lose analyticity, or some undefined "smoothness or uniqueness criterion".

***

Also, I'm not aware of an efficient way to avoid overflow when computing things like ln ... ln ln ln 4^4^4^...^4^z.

Precomputing Taylor series or Carleman matrices seems the only way, but that is not so efficient.
Funny, because it converges fast!
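One way to sidestep the overflow, sketched under the assumption that a level-index representation (storing a huge X as exp^d(x)) is acceptable: once the tower leaves floating-point range, applying w -> 4^w only bumps the level (using 4^(e^x) = exp(exp(x + ln ln 4)), exact for d = 1, and a vanishing correction for d >= 2), and each outer ln just peels one level. A hypothetical sketch:

```python
import math

LN4, LNLN4 = math.log(4.0), math.log(math.log(4.0))
CAP = 1e100  # switch to the level-index representation above this

def e4(d, x):
    """Apply w -> 4**w to the number exp^d(x), kept in level-index form (d, x)."""
    if d == 0:
        y = x * LN4                      # 4**x = exp(x * ln 4)
        return (0, math.exp(y)) if y < math.log(CAP) else (1, y)
    if d == 1:
        # 4**exp(x) = exp(exp(x) * ln 4) = exp(exp(x + ln ln 4)), exactly
        return (2, x + LNLN4)
    # d >= 2: exp^(d-1)(x) is astronomically large, so the ln ln 4
    # correction vanishes to machine precision.
    return (d + 1, x)

def ln_tower(n, z):
    """ln applied n times to the tower 4^4^...^4^z with n fours, without overflow."""
    d, x = 0, z
    for _ in range(n):
        d, x = e4(d, x)
    d -= n                    # each ln peels one exp level off exp^d(x)
    while d < 0:
        x, d = math.log(x), d + 1
    while d > 0:
        x, d = math.exp(x), d - 1
    return x
```

For small towers this agrees with the direct computation, and for n = 4 and beyond it keeps working where the direct tower has long since overflowed.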

***

What do you think ?

regards

tommy1729
Tom Marcel Raes
#2
Consider y = ln ln ln ... a1^a2^a3^...^x.

Then when we put that into an equation we get

exp(exp(exp(... y))) = a1^a2^a3^...^x.

We know both sides are nonzero, so we can take ln on both sides:

exp(exp(... y)) = ln(a1) * a2^a3^...^x.

So what if ln(a1)*a2^a3^...^x = 1 ??

Let's assume y exists again.
Then the LHS again has a log. AND THAT LOG IS NOT 0.
So we get a different branch.

So solving this equation is just a matter of taking the correct branches all the time ???

THAT seems too optimistic !?

Keep in mind that choosing the branches must be consistent with analyticity as well!
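Concretely, "taking a branch" at each step means choosing an integer k in log(c) + 2*pi*i*k; every choice is a genuine logarithm, and consistency means choosing the k's so the result varies analytically with the input. A small illustration:

```python
import cmath

def log_branch(c, k):
    """The k-th branch of the logarithm: principal value plus 2*pi*i*k."""
    return cmath.log(c) + 2j * cmath.pi * k

# Three different branches of the log of the same nonzero number:
c = -1 + 0.5j
branches = [log_branch(c, k) for k in (-1, 0, 1)]
```

Each branch exponentiates back to c, so "solving by taking logs" really is a choice of one integer per step; the danger is that a fixed choice of k can jump discontinuously as c moves across the branch cut.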

Taking ln one more time :

exp(...y) = ln(ln(a1)) + ln(a2) * a3^...^x.

How do we know y exists again ??

Here is a toy idea :

ln(ln(a1)) + ln(a2)*a3^..^x = a3^..^x'

This switch from x to x' makes us able to repeat the above.

Of course, this assumes such an x' exists!

I'm aware I mentioned two things in the previous post about simplifying a log (analytic continuation, lower multiplicity) and bounds, but I'm still confused by this.

For real numbers this all works nicely. 
But for complex numbers I'm confused.

So I started thinking.

We could go in the other direction; THE INVERSE WAY:

Let f(x) be a given function with a Taylor series.

Now for a chosen set a_n we can inductively define :

f(x) = a_0 + exp(a_1 + exp(a_2 + exp(a_3 + ...)))

As long as it converges, and as long as f(x) - a_0, ln(f(x) - a_0) - a_1, ... are nonzero.

This is, in a sense, the analogue of Taylor series, infinite compositions, and power towers.

So I think there must be a way to make this all formal.
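A quick numerical sketch of the forward direction, with a hypothetical choice a_n = -n (not a choice from the post): the inner exponentials shrink rapidly, so the truncated towers converge, illustrating the "as long as it converges" clause:

```python
import math

def tower(a):
    """Evaluate a[0] + exp(a[1] + exp(a[2] + ... + exp(a[-1]) ...)) from the inside out."""
    v = 0.0
    for ak in reversed(a[1:]):
        v = math.exp(ak + v)
    return a[0] + v

# Hypothetical coefficients a_n = -n for n >= 1: each inner exp is about e^{-n},
# so adding more levels barely changes the value.
a_vals = [0.0] + [-float(n) for n in range(1, 15)]
shallow = tower(a_vals[:9])     # depth 8
deep = tower(a_vals)            # depth 14
```

The difference between depth 8 and depth 14 is far below machine precision here, which is the tower analogue of a rapidly convergent series.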

regards

tommy1729

Tom Marcel Raes
#3
NOTICE the a_n can also follow from the Taylor expansion (rather than being picked),

because subtracting a constant from a Taylor series and taking the log is again a Taylor series (provided the constant term of the result is nonzero).

AND every Taylor series starts with a constant, hence a new a_(n+1).
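The claim that the log of a Taylor series with nonzero constant term is again a Taylor series can be made computational via the standard recurrence obtained from h * g' = h'; a sketch, with the coefficient arrays as hypothetical inputs:

```python
import math

def series_log(h, n_terms):
    """Taylor coefficients of log(h(x)) from those of h, assuming h[0] != 0.

    Comparing coefficients in h * g' = h' gives
    g_{n+1} = (h_{n+1} - (1/(n+1)) * sum_{j<n} (j+1) g_{j+1} h_{n-j}) / h_0.
    """
    g = [math.log(h[0])]
    for n in range(n_terms - 1):
        s = sum((j + 1) * g[j + 1] * h[n - j] for j in range(n))
        g.append((h[n + 1] - s / (n + 1)) / h[0])
    return g

# Example: h = exp, whose log must come out as the series x.
h_exp = [1 / math.factorial(k) for k in range(6)]
g_exp = series_log(h_exp, 6)
```

Each pass "constant term, subtract, log" in the post corresponds to one application of this recurrence, which is why the nonvanishing condition above matters: log(h) is only a Taylor series when h does not vanish at the expansion point.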

regards

tommy1729

Tom Marcel Raes
#4
Hey, Tommy!


So I've worked through this a lot. I love what you're asking! I'd like to clarify a couple of things. The first being,



(Which is a non-trivial result about in disguise.)

And as such the sequence of convergents, starting with ,



Converge to zero in the neighborhood of . Everything equals zero at positive infinity here. When we make the change of variables , this means certain pathways towards zero converge towards tetration (specifically these pathways ). Pulling back with logs is perfectly possible here. Everything gets very large--superexponentially--so logarithms reign supreme. And we've added a geometric sequence ; which converges as in Banach's fixed point theorem.

What this does is make a tetration , somewhere way off in the right half plane.

Now, when we are taking our logs (what you call optimistic), it is mostly just a proof of *existence* of such logarithms. I have no f******n clue how to compute this. This is an existence statement. That is all. Some sequence of logs works. And I'll fight to the death to prove it. There are going to be branch cuts/singularities dependent on .

Skipping a bunch of steps. When we get the tetration we want, the principle used is: if is some curve in and is the real line--then must be the real line; if has a branch cut along the negative real axis. This is not an optimistic result; it's necessary. There are some things I'm not 100% on in my paper--this is not one of them.

We then argue: if , then . Since there is no real-valued curve , the result must follow. This requires nothing new. I can write it all out if you want, but it's just a statement about logarithms.

I'm also 99% sure my preliminary solution is analytic. It was built solely from these identities. The real trouble--and what I'm worried might fail--is when I start pasting solutions together using .

This means if I write,



Where,



Then,



Is analytic--and does not equal . Where,



And,




It's its own unique solution to the Tetration equation. This I'm pretty much certain of.


The idea, and the parts I'm less sure of, but still pretty sure, are pasting these solutions together to get the actual tetration we want.

The reason this matters is simple.



So we want to take a varying  while we iterate the logarithm. We have to avoid a whole swath of singularities somehow. I get what you mean by optimistic, but it's optimistic for a very different reason than you expressed. Once you have it way off in the right half plane--the pull-back is perfectly manageable. I need to double-check different things. Once my mapping works at , it works everywhere. The trouble is getting it to work at .


I'm pretty sure I have it though. I can't produce numbers which disagree. Everything just overflows though when I try to get the tetration. But, again, that's just how bad a coder I am. I'm still trying to figure out how to get some workable code. I'll get it.



Honestly though, as to what you're writing: after constructing this tetration I'm convinced there are multiple paths of construction. And your incomplete Gamma function method may be possible. As an identity I'd keep in mind: make sure your function satisfies,



Or even better,



I've been fiddling with these a lot--and I think this is the way to do it.


Regards, James

PS: I could go on for hours about this solution, Tommy.
#5
To explain this further.

Assume you have a function holomorphic way off in the right half plane (with singularities, or branch cuts, or whatever)--and non-zero. And you have the identity,



Then,



For a function as . Now, I'm going to cite Milnor, but it's actually proven by two other authors--their names escape me at the moment. But the accumulation points of , for almost all , is the sequence (the orbit of the exponential at zero). And therefore the sequence diverges to infinity (eventually). Think: if , then --and this divergence is guaranteed.
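The orbit of the exponential at zero, 0, 1, e, e^e, ..., indeed blows up superexponentially; a two-function sketch (the overflow cap is an arbitrary safeguard):

```python
import math

def exp_orbit(z0, steps, cap=1e300):
    """The orbit z, exp(z), exp(exp(z)), ..., truncated before overflow."""
    orbit = [z0]
    while len(orbit) <= steps and orbit[-1] < math.log(cap):
        orbit.append(math.exp(orbit[-1]))
    return orbit

orbit = exp_orbit(0.0, 10)   # 0, 1, e, e^e, e^(e^e), ...
```

Five terms in, the orbit already exceeds a million, and one more exponentiation leaves double-precision range entirely, which is the divergence being cited.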

Returning to our situation, this means that as --a.e. To prove this is a tad subtle, but I'll lay it out,



By our definition,



And further, . But, for a small enough then,



And we can find a decreasing sequence of in which,



Therefore, --but this diverges almost everywhere. Therefore (where we mean this almost everywhere, but this is still an effective statement).





And now, we describe the sequence of convergents, for ,







For a compact set such that and ; we can put a bound on (which is where we need to be non-zero). This is because for large enough --this eventually goes to zero (almost everywhere). We'd have to exclude potential anomalous points; but almost everywhere this works.

Then since ; by induction, as . But even better,



This is a summable sequence, so that,



And this can get as small as you want, and the compact set argument works fine. Recall that (the series converges when ).


This guarantees a tetration WAY WAY off in the right half plane. And the pullback is the easy part. The hard part is getting a function that works relatively well. The I chose is very convoluted--but it works. It has to work, for the same argument as above.



I chose this function, SPECIFICALLY because the pull back will be easy. There will be no singularities. There will be no zeroes. There will be nothing but complex behaviour, controlled.




Please note, I simplified the arguments from my paper. It's a bit more difficult in reality. But pretty much exactly the same. I have not used the change of variables ; which I used tremendously in the paper. That's because this is a simplistic viewpoint. You need to make the change of variables somewhere down the line.

I'd like to say: your search for the coefficients is exactly what I was trying to do. But I focused on solving the equation at . Which, through a certain variable change, could be mapped to a fixed point at zero, rather than infinity. And then we're just talking about geometrically converging functions about a fixed point. Yada yada...
#6
I should add some examples of other functions that will work. Or rather, work in the same framework.

If as . And for some . Let's let where is some domain in . Let's assume further that,



For all compact subsets . Then the function,



Satisfies,



And,



But! We can modify this idea further: instead of taking , take . We can only speak of these things asymptotically, with no convenient functional equation, but it is still valid.

So assume that . Let's assume that as . Let's assume for all compact sets that,



Then, the function



Satisfies,





So again, the choice I made of,



Is a little arbitrary; but it was chosen because the pull back will work very well.

To walk you through this. The function is defined as when as . This means that,



Now, what I spent a lot of time proving is that,



Converges uniformly on some sector for . And from here we want to take 's to correct everything--make it holomorphic on for some .


I'm going to summarize my proof in a couple of statements.


If is holomorphic at , and ; and the branch-cut of this log is ; then necessarily for .

Now assume that is a holomorphic function, such that and . If we can define a logarithm which is holomorphic with a branch-cut . Now assume further, the branch cut of can be done for . Then necessarily . Then necessarily for .

Now take our tetration function made from . Since , for large and --which I mean it isn't strictly real valued unless we're on the real line. This means, since , our tetration function is only real-valued on the real line.

This means, since for ; as I mean it isn't strictly real valued over any interval, unless . We can say that . Because if it did, it would mean that is real-valued for varying. BECAUSE, we can choose our branch-cuts solely dependent on the variable in . As in, if , we can construct a holomorphic function which is holomorphic in and .

This means we have a function which satisfies and ; and the branch cut of is along using the principal branch of the logarithm function, with a branch cut along . So the function for . And therefore, . But we know this can't happen because our tetration is only real valued on the real line.



This is essentially a mapping argument about the logarithm. Not sure how else to explain it. It's a thing, don't worry. I just may be struggling to get the point across. But the pullback isn't an issue. Again, making it work at is the real problem.



