Hey, Everyone; been a long time...
#1
It's been a very long time since I've posted on this forum. I hope everyone is doing well, and I hope the old guard amongst us have each found cool things of their own. I've drifted away from this forum, mostly because my mathematical motivations have moved away from studies of tetration (or similar subjects). But recently, I've written a self-contained solution to tetration. It's been maybe three years in the making. I keep shrinking the requirements and arguments and fiddling with it; but at this point, three years in, I guess it's done. I think the best place to announce this theorem is on this apocryphal tetration site.

For those who'd appreciate a holomorphic tetration, taking \( (-2,\infty) \to \mathbb{R} \) bijectively and strictly monotonically, holomorphic on \( \mathbb{C} \) up to a nowhere dense set: I have one.

The paper below is fairly involved, despite being only fifteen pages. It is self-contained, though some elements draw on more general ideas developed in other work. Nonetheless, it produces a novel tetration, and everything you need to prove that is in this paper. I hope you all enjoy it if you read it.


.pdf   TETRATION COMPLETE ARXIV FINAL.pdf (Size: 313.56 KB / Downloads: 409)
#2
(01/07/2021, 09:53 AM)JmsNxn Wrote: The paper below is fairly involved, despite being only fifteen pages. It is self-contained, though some elements draw on more general ideas developed in other work. Nonetheless, it produces a novel tetration, and everything you need to prove that is in this paper. I hope you all enjoy it if you read it.
James,
I've been surprisingly busy this past year, but look forward to reading your paper.  As always, thanks for posting.  I have made zero progress on writing a paper proving convergence ....

I was quickly browsing your paper, and it sounds a little like Peter Walker's slog, which is constructed from the Abel function; this is from memory of Walker's approach ...
\( f(x)=\exp(x)-1;\;\;\;\alpha(f(x))=\alpha(x)+1 \)
Then, to generate a base-\( e \) slog,
\( \text{slog}_e(x)=\lim_{n \to \infty} \alpha(\exp^{[\circ n]}(x))-n \)
This is then the \( \text{slog}_e(x) \), renormalized by a constant so that \( \text{slog}_e(1)=0 \).
Walker proved his slog was infinitely differentiable; Henryk and I conjectured it was nowhere analytic.  I believe there was a recent paper by Paulson claiming to rigorously prove Walker's slog was also nowhere analytic, but I am not convinced the proof was complete ....

If we used the superfunction instead \( \phi(x)= \alpha^{-1}(x) \) then is this the same as your approach?
- Sheldon
#3
(01/07/2021, 05:44 PM)sheldonison Wrote:
(01/07/2021, 09:53 AM)JmsNxn Wrote: The paper below is fairly involved, despite being only fifteen pages. It is self-contained, though some elements draw on more general ideas developed in other work. Nonetheless, it produces a novel tetration, and everything you need to prove that is in this paper. I hope you all enjoy it if you read it.
James,
I've been surprisingly busy this past year, but look forward to reading your paper.  As always, thanks for posting.  I have made zero progress on writing a paper proving convergence ....

I was quickly browsing your paper, and it sounds a little like Peter Walker's slog, which is constructed from the Abel function; this is from memory of Walker's approach ...
\( f(x)=\exp(x)-1;\;\;\;\alpha(f(x))=\alpha(x)+1 \)
Then, to generate a base-\( e \) slog,
\( \text{slog}_e(x)=\lim_{n \to \infty} \alpha(\exp^{[\circ n]}(x))-n \)
This is then the \( \text{slog}_e(x) \), renormalized by a constant so that \( \text{slog}_e(1)=0 \).
Walker proved his slog was infinitely differentiable; Henryk and I conjectured it was nowhere analytic.  I believe there was a recent paper by Paulson claiming to rigorously prove Walker's slog was also nowhere analytic, but I am not convinced the proof was complete ....

If we used the superfunction instead \( \phi(x)= \alpha^{-1}(x) \) then is this the same as your approach?

HEY SHELDON! Long time no talk! I'm excited to discuss. Just because I'm being a little prideful: \( \phi(x) \) is NOT a superfunction. Not in any way, shape, or form. It's more like a superfunction, but with an exponential corrective term.

This is to mean that,

\(
\phi(s) = e^{\displaystyle s-1 + e^{\displaystyle s-2+e^{\displaystyle s-3+\cdots}}}
\)

So that,

\( \phi(s+1) = e^{s+\phi(s)} \)

(Believe it or not, this function is ENTIRE!)
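(A quick numerical sanity check, my own sketch rather than anything from the paper: truncate the tower at a finite depth and test the functional equation.)

```python
import math

def phi(s, depth=60):
    # Truncated tower phi(s) = e^{s-1+e^{s-2+e^{s-3+...}}}.
    # The innermost term e^{s-depth} is negligible for moderate s,
    # so the truncation converges very fast.
    acc = 0.0
    for k in range(depth, 0, -1):
        acc = math.exp(s - k + acc)
    return acc

# Functional equation: phi(s+1) = e^{s + phi(s)}
s = 0.3
print(phi(s + 1), math.exp(s + phi(s)))  # the two agree to machine precision
```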

Which I'm sure you can astutely note is NOT an inverted Abel function. The entire paper begins by constructing this function (which is the real novelty of the technique); and then that old, oft-tried method, which failed every time I read about it, finally works. Which is to say,

\(
\lim_{n\to\infty} \log \log \cdots (n\,\text{times})\cdots \log \phi(s+n) = e \uparrow\uparrow (s+\omega)
\)

(Although it looks like Tommy's technique, and has wafts of Kouznetsov, it's absolutely more convenient to use \( \phi \).)

For some \( \omega \in \mathbb{R} \). The trick lies in using that,

\(
\log \log \cdots (n\,\text{times})\cdots \log \phi(s+n) = \phi(s) + \tau_n(s)
\)

For a corrective term \( \tau_n \), which kind of looks like \( s \). The sequence of \( \tau_n \) converges due to a clever use of Banach's Fixed Point Theorem. In summary: \( \tau_1(s) = s \) and \( \tau_0(s) = 0 \), and they are generated through the recursion:

\(
\tau_{n+1}(s) = s + \log(1+\frac{\tau_{n}(s+1)}{\phi(s+1)})\\
|\tau_{n+1}(s) - \tau_n(s)| \le |\log(1+\frac{\tau_{n}(s+1)}{\phi(s+1)}) - \log(1+\frac{\tau_{n-1}(s+1)}{\phi(s+1)})|\\
\le \frac{1}{|\phi(s+1)|} |\tau_{n}(s+1) - \tau_{n-1}(s+1)|\\
\vdots\\
\le \Big(\prod_{k=1}^n \frac{1}{|\phi(s+k)|} \Big)|s+n|
\)

Where \( |s+n| = |\tau_1(s+n) - \tau_0(s+n)| \), by the initial conditions of the sequence. This product goes to zero geometrically, and thus the \( \tau_n \) converge. This, of course, is much harder to do in reality (which is why I wrote the paper). We have to take supremum norms of these things in the complex plane, which requires understanding \( \phi \) very well in the complex plane.
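Here's a small numerical sketch of this contraction on the real line (my own illustration, not code from the paper); it needs a crude overflow guard, since \( \phi \) grows tetrationally:

```python
import math

def phi(s, depth=60):
    # Truncated tower phi(s) = e^{s-1+e^{s-2+...}}; cap the exponent so the
    # tetrational growth saturates at infinity instead of raising OverflowError.
    acc = 0.0
    for k in range(depth, 0, -1):
        e = s - k + acc
        acc = math.inf if e > 700.0 else math.exp(e)
    return acc

def F(s, n):
    # log applied n times to phi(s+n); peel off the first log exactly via the
    # functional equation: log(phi(s+n)) = s+n-1 + phi(s+n-1).
    v = s + n - 1 + phi(s + n - 1)
    for _ in range(n - 1):
        v = math.log(v)
    return v

def tau(n, s):
    # tau_0 = 0, tau_{n+1}(s) = s + log(1 + tau_n(s+1)/phi(s+1))
    if n == 0:
        return 0.0
    return s + math.log(1 + tau(n - 1, s + 1) / phi(s + 1))

s = 0.3
print([F(s, n) for n in (2, 3, 4)])                         # stabilizes fast
print([abs(tau(n + 1, s) - tau(n, s)) for n in (1, 2, 3)])  # geometric decay
```

The successive differences of the \( \tau_n \) collapse almost immediately, because the product \( \prod 1/|\phi(s+k)| \) picks up a tetrationally small factor at each step.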

So, to answer your question. As much as it looks like the Walker solution; as much as it borrows from the technique--I spent 3 years finding a function where it didn't suffer the same pitfalls. This is most definitely not a \( C^{\infty}(\mathbb{R}^+) \) solution. And I can prove that down to the \( \epsilon \) and \( \delta \). It is analytic.

Needless to say, I'm really excited for you to read it. If you have any questions I'm absolutely happy to answer. It is a little rough around the edges as a paper; but plug it into your calculator, and pay attention to my proofs of analyticity. I think you'll be pleasantly surprised. It's a very different approach. But hard as rock when it comes to the rigor involved.

Again, if you have ANY questions I'm happy to answer them.


Best regards, James

PS: I only came back to this forum because, 3 years ago, I said I'd only come back if I could construct a holomorphic tetration. So this paper has been 3 years in the making. Thank you for giving it a chance.

PPS: Also when I uploaded it here it didn't render properly. I suggest using this link:

https://arxiv.org/pdf/2101.03021.pdf
#4
I'm glad to see you are coming full circle. I followed you during this time on arXiv, reading your stuff. The turn to iterated compositions was a pleasure, and I'd expected it in some way.
I'm doing well; I finally got to start university, but I was not very lucky, covid came... so I'm sure that on the math side you did far better than me.

I did indeed find cool things. My attack on the subject went more algebraic-abstract than ever, and farther than ever, I hope (spoiler alert: yep, recursions are functors, hyperoperations are functors, and flows are probably Kan extensions). I like to excuse myself by saying that I'm more of a theory-builder than a problem-cracker mathematician. In the end I'm not even a mathematician... so idc...


That's also an excuse for not being that good at analysis (or is it the cause?)... but that said, I find what I can understand of your paper very interesting... If I ever come up with some doubts I might write here!

Good luck with life and wish you the best.

MSE MphLee
Mother Law \((\sigma+1)0=\sigma (\sigma+1)\)
S Law \(\bigcirc_f^{\lambda}\square_f^{\lambda^+}(g)=\square_g^{\lambda}\bigcirc_g^{\lambda^+}(f)\)
#5
(01/12/2021, 05:39 AM)JmsNxn Wrote: HEY SHELDON! Long time no talk! I'm excited to discuss. Just because I'm being a little prideful: \( \phi(x) \) is NOT a superfunction. Not in any way, shape, or form. It's more like a superfunction, but with an exponential corrective term.

This is to mean that,

\(
\phi(s) = e^{\displaystyle s-1 + e^{\displaystyle s-2+e^{\displaystyle s-3+\cdots}}}
\)

So that,

\( \phi(s+1) = e^{s+\phi(s)} \)

(Believe it or not, this function is ENTIRE!)

Which I'm sure you can astutely note is NOT an inverted Abel function. The entire paper begins by constructing this function (which is the real novelty of the technique); and then that old, oft-tried method, which failed every time I read about it, finally works. Which is to say,

\(
\lim_{n\to\infty} \log \log \cdots (n\,\text{times})\cdots \log \phi(s+n) = e \uparrow\uparrow (s+\omega)
\)

(Although it looks like Tommy's technique, and has wafts of Kouznetsov, it's absolutely more convenient to use \( \phi \).)
...

James,
Looks like I have some catching up to do.  I haven't gone back to your paper yet, because I wanted to think about \( \phi(s) \) on my own more first.  I wanted to generate a formal power series for \( f \) which leads to \( \phi \).  I started to generate the formal series for \( f \), and I am convinced enough that there is a unique definition for \( \phi \) that is indeed well defined and entire and is \( 2\pi i \) periodic.
 Back to real work, and then back to your paper when I am able.  Again, thanks for posting James, it is a delight to see some well thought out results from your work.
\( f=x+\sum_{n=2}^{\infty}a_n\cdot x^n\;\;\;\phi(s)=f(e^s)\;\;\;\phi(s+1) = e^{s+\phi(s)} \)

So for \( \Re(z) \) negative enough, \( \phi(z) \) looks more or less like \( \exp(z) \), and as \( z \) increases along the real axis, \( \phi(z) \) looks like a super-exponential.  \( \phi \) is \( 2\pi i \) periodic.

So how does \( \phi(z) \) behave at \( \Im(z)=\pi \) as \( \Re(z) \) increases?  When \( \Im(z)=\pi \), I think it is always \( 0>\phi(z)>-|\exp(z)| \), but \( \phi \) will eventually oscillate between negative values approaching the magnitude of \( \exp(z) \) and nearly-zero values ...

Also, I'm curious to see if \( \phi \) might lead to an analytic solution for \( \text{tet}_e(x) \), instead of one only defined at the real axis, which is typically what we see from Tommy's iterated-logarithm approach.
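A quick numeric check of the \( 2\pi i \) periodicity claim (my own sketch, using a truncated tower; shifting \( s \) by \( 2\pi i \) shifts every exponent in the tower by \( 2\pi i \), which exp cannot see):

```python
import cmath

def phi(s, depth=60):
    # Truncated tower phi(s) = e^{s-1+e^{s-2+...}}, over the complex plane.
    acc = 0j
    for k in range(depth, 0, -1):
        acc = cmath.exp(s - k + acc)
    return acc

# phi(s + 2*pi*i) = phi(s): the period is invisible to every exp in the tower.
s = 0.3 + 0.4j
print(abs(phi(s + 2j * cmath.pi) - phi(s)))  # ~0
```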
- Sheldon
#6
(01/13/2021, 04:19 PM)sheldonison Wrote: James,
Looks like I have some catching up to do.  I haven't gone back to your paper yet, because I wanted to think about \( \phi(s) \) on my own more first.  I wanted to generate a formal power series for \( f \) which leads to \( \phi \).  I started to generate the formal series for \( f \), and I am convinced enough that there is a unique definition for \( \phi \) that is indeed well defined and entire and is \( 2\pi i \) periodic.
 Back to real work, and then back to your paper when I am able.  Again, thanks for posting James, it is a delight to see some well thought out results from your work.
\( f=x+\sum_{n=2}^{\infty}a_n\cdot x^n\;\;\;\phi(s)=f(e^s)\;\;\;\phi(s+1) = e^{s+\phi(s)} \)

So for \( \Re(z) \) negative enough, \( \phi(z) \) looks more or less like \( \exp(z) \), and as \( z \) increases along the real axis, \( \phi(z) \) looks like a super-exponential.  \( \phi \) is \( 2\pi i \) periodic.

So how does \( \phi(z) \) behave at \( \Im(z)=\pi \) as \( \Re(z) \) increases?  When \( \Im(z)=\pi \), I think it is always \( 0>\phi(z)>-|\exp(z)| \), but \( \phi \) will eventually oscillate between negative values approaching the magnitude of \( \exp(z) \) and nearly-zero values ...

Also, I'm curious to see if \( \phi \) might lead to an analytic solution for \( \text{tet}_e(x) \), instead of one only defined at the real axis, which is typically what we see from Tommy's iterated-logarithm approach.
Hey Sheldon! You seem to be piecing together the whole argument!

Yes, the worst behaviour of \( \phi(s) \) is when \( \Im(s) = (2k+1)\pi \) for integer \( k \). It starts to get all loopy here. The hardest part of this paper is arguing that, eventually, \( |\phi(t+\pi i)| \ge 1+\epsilon \) for \( t > T \) very large, and some \( \epsilon > 0 \). So we say that "eventually" it grows past this point, but it bounces around and, as you've guessed, shrinks and grows. But luckily, this is sufficient for us to employ the Banach Fixed Point Theorem.

As a little demonstration for you, call \( \psi(t) = - \phi(t+\pi i) \); then \( \psi(t+1) = e^{t-\psi(t)} \)

If \( \psi(t) > t \) for \( t>T \) then \( \psi(t+1) <1 \)--contradiction. So if \( \psi \) grows, expect it to grow like \( o(t) \): if it grows, it grows slower than \( t \). And if it grows too fast, it gets arbitrarily close to zero, which would kaput the entire construction. The argument proving that it "grows" and eventually stays past \( 1+\epsilon \) is indeed the most complicated part of the paper. Mostly because it uses infinite compositions; it's something I'm very familiar with, but there's probably only a handful of people who know what an "infinite composition" is. I've been spending the past 2 years creating criteria for things like,

\( \lim_{n\to\infty} h_0(s,h_1(s,h_2(s,...h_n(s,z)))) \)

To converge to holomorphic functions (and I've developed a good feel for handling asymptotics of these things as well). In particular, if we look at,

\(
\psi_m(t,z) = e^{\displaystyle t-1-e^{\displaystyle t-2-e^{\displaystyle \cdots - e^{t-2m-1-z}}}}
\)

Where there are an odd number of exponentials; you can note that each of these grows exponentially. Now \( \psi_m \to \psi \) as \( m\to\infty \); so we can show, for very large \( m \) and very large \( T \), that this is greater than 1; and then we can show that as we increase \( m \) it can't shrink below 1. It's a little tricky, but doable. Essentially we group the functions in pairs. Because \( a(t) = e^{t-k-e^{t-k-1-z}} \) decays to zero as \( k\to\infty \) and \( t\to\infty \), we know that \( e^{t-a(t)} \) still has exponential growth; as does \( e^{t-e^{\displaystyle t-1-e^{t-2-a(t)}}} \), and so on and so forth. An odd number of exponentials composed with \( a(t) \) essentially teeters out more manageably--and exhibits growth... But the whole thing still approaches \( \psi \).

Now this doesn't let us prove \( \psi \) has exponential growth (and it can't by nature), but it can show us that it stays above \( 1+\epsilon \).... eventually.
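If it helps, here's a numeric sketch of the above (my own illustration, not from the paper): the truncated tower gives \( \psi(t) = -\phi(t+\pi i) \), which is real on the real line and satisfies \( \psi(t+1) = e^{t-\psi(t)} \); and the odd truncations \( \psi_m \) stabilize onto it.

```python
import cmath, math

def phi(s, depth=60):
    # Truncated tower phi(s) = e^{s-1+e^{s-2+...}}, over the complex plane.
    acc = 0j
    for k in range(depth, 0, -1):
        acc = cmath.exp(s - k + acc)
    return acc

def psi(t):
    # psi(t) = -phi(t + pi*i); real for real t, since every exp in the
    # tower then takes a real value.
    return -phi(t + cmath.pi * 1j)

def psi_m(t, m, z=0.0):
    # Odd truncation with 2m+1 exponentials:
    # psi_m(t,z) = e^{t-1-e^{t-2-...-e^{t-(2m+1)-z}}}
    depth = 2 * m + 1
    acc = math.exp(t - depth - z)
    for j in range(depth - 1, 0, -1):
        acc = math.exp(t - j - acc)
    return acc

t = 0.5
print(abs(psi(t + 1) - cmath.exp(t - psi(t))))  # functional equation, ~0
print(psi_m(t, 4), psi_m(t, 5), psi(t).real)    # truncations stabilize onto psi
```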

As to your last point; this paper actually focuses on proving that \( F \) (our tetration function) is holomorphic on \( \mathbb{C} \) (not just the real line), excluding a nowhere dense set in \( \mathbb{C} \) (i.e., wherever the branch cuts pop up). Which is to say there are most definitely branch cuts (at least the one at \( (-\infty,-2] \), and perhaps elsewhere in the complex plane), but I couldn't derive where, just that they only account for a nowhere dense portion of \( \mathbb{C} \). Proving it's analytic on \( (-2,\infty) \) is actually fairly elementary, thanks to the super-exponential nature of \( \phi \) on a neighborhood of \( \mathbb{R} \). Getting it holomorphic on \( \mathbb{C} \) is the real hard part. Mostly because of \( \phi \)'s pesky behaviour when \( \Im(s) = \pi \).

No rush on reading it; I might do a bit more editing (to clarify some of the arguments, clean it up a bit). When you get to it, you get to it.

Regards, James
#7
(01/13/2021, 12:22 AM)MphLee Wrote: I'm glad to see you are coming full circle. I followed you during this time on arXiv, reading your stuff. The turn to iterated compositions was a pleasure, and I'd expected it in some way.
I'm doing well; I finally got to start university, but I was not very lucky, covid came... so I'm sure that on the math side you did far better than me.

I did indeed find cool things. My attack on the subject went more algebraic-abstract than ever, and farther than ever, I hope (spoiler alert: yep, recursions are functors, hyperoperations are functors, and flows are probably Kan extensions). I like to excuse myself by saying that I'm more of a theory-builder than a problem-cracker mathematician. In the end I'm not even a mathematician... so idc...


That's also an excuse for not being that good at analysis (or is it the cause?)... but that said, I find what I can understand of your paper very interesting... If I ever come up with some doubts I might write here!

Good luck with life and wish you the best.

Oh I totally missed your reply MphLee!

Thanks for following me on arxiv! I've been doing tons of stuff there (occasionally uploading incorrect drafts...)! I'm glad you liked the infinite compositions; full circle back to tetration, lol.

Honestly, your discussions of hyper operators were always category theory (iterations within iterations within iterations), rather than \( \epsilon,\delta \). I'm happy to hear that's where you've gravitated; kinda always felt you were writing Gödel code, lol. Happy to hear you're still working on tetration. Honestly, I'd like to read some of your cool ideas. As far as Kan extensions go, I'm clueless. Frankly, all I know of category theory is from group theory; I don't know how much help I'd be. But I'd be happy to talk.

Regards, James
#8
Good news everyone. 

I just logged in and haven't read the paper yet, but from the comments, I can confirm this tetration is (or can be made) analytic and in fact has more or less a closed form!!!

I'm sleepy now but will explain more tomorrow.


I immediately realized this function phi(s) was one satisfying properties I had been looking for for years, hence my quick analysis.

Knowledge of perturbation theory and complex analysis helped too.

This paper by James Nixon also convinced me my sinh method can probably also be made analytic.
The "probably" comes from the fact that I'm not sure whether fixed point theorems like Banach's can be used, and from the higher complexity of the superfunction of 2sinh.
Apart from those 2 obstacles I see no reason why analogous arguments would fail.
#9
(01/15/2021, 02:25 AM)tommy1729 Wrote: Good news everyone.

I just logged in and haven't read the paper yet, but from the comments, I can confirm this tetration is (or can be made) analytic and in fact has more or less a closed form!!!

I'm sleepy now but will explain more tomorrow.


I immediately realized this function phi(s) was one satisfying properties I had been looking for for years, hence my quick analysis.

Knowledge of perturbation theory and complex analysis helped too.

This paper by James Nixon also convinced me my sinh method can probably also be made analytic.
The "probably" comes from the fact that I'm not sure whether fixed point theorems like Banach's can be used, and from the higher complexity of the superfunction of 2sinh.
Apart from those 2 obstacles I see no reason why analogous arguments would fail.

Hey, thanks Tom. I know the sentiment, of just how precisely \( \phi \) allows the limiting method you used to work. It started a lot with your Tommy Sexp (at least the inception of the idea). I was always frustrated that \( 2\sinh \) didn't just work--or at least, as far as proving it goes. Needless to say it took a very long time to find \( \phi \), and a very long time to condense its construction into a bite-sized paper. I've been working a lot on difference equations in the complex plane, and this sort of just popped out.

I'm excited to see if you can make some of the techniques from this paper work for \( 2\sinh(s) = h(s) \).

And if you wanna add a Banach Theorem; I think it prolly looks like this,

\(
\log \log \cdots(n\,\text{times})\cdots \log H(s+n)
\)

Where \( H \) looks like this,

\(
H(s) = \lim_{n\to\infty}e^{s-1}h(e^{s-2}h(e^{s-3}h(\cdots e^{s-n})))
\)

This, I imagine, is a whole 'nother uniqueness problem.
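To pin down the bracketing, here's a tiny numerical sketch (my own, and only of the truncations; convergence of the \( n\to\infty \) limit is the open part): the depth-\( d \) truncation of \( H \) satisfies the shifted recursion \( H_d(s+1) = e^s h(H_{d-1}(s)) \) on the nose.

```python
import math

def h(x):
    # the transfer function in question
    return 2 * math.sinh(x)

def H(s, depth):
    # Depth-d truncation of H(s) = e^{s-1} h(e^{s-2} h(e^{s-3} h(... e^{s-depth})))
    acc = math.exp(s - depth)
    for k in range(depth - 1, 0, -1):
        acc = math.exp(s - k) * h(acc)
    return acc

# Truncations satisfy the shifted recursion exactly: H_d(s+1) = e^s * h(H_{d-1}(s))
s, d = 0.3, 6
print(H(s + 1, d), math.exp(s) * h(H(s, d - 1)))
```

The recursion check is exact by construction; whether the limit of these truncations is non-degenerate, and how to normalize it, would be part of that uniqueness problem.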
#10
JmsNxn Wrote:Honestly, your discussions of hyper operators were always category theory (iterations within iterations within iterations)
Yep, even without realizing it I was always trying to do category theory. I didn't have, and still don't have, the right level of education to realize it, and now to use this fact fully. If you vaguely remember me talking about \( \Sigma \) multivalued maps and stuff... I was really obsessed with it back in the day because I was convinced I was onto something really big, really monumental, that nobody was noticing. I was, but it was not really my discovery; it was just me not knowing enough math: I rediscovered Hom-sets, so basically I was rediscovering category theory from another angle. The funny thing is that what pushed me there were, not counting my ignorance of analytical methods, three posts on this forum: one you made (JmsNxn, dec 2010), another from (bo198214, aug 2007), and (Base-Acid Tetration, feb 2009).

JmsNxn Wrote:Happy to hear you're still working on tetration. Honestly, I'd like to read some of your cool ideas.
Not exactly tetration, but I'm looking for "new" ways to look at the concept of iteration in general, ways that can simplify things in order to work under conditions as weak as possible. All of this may sound weird to people like you, Sheldonison, and Tommy. You are monsters at analysis and will see this as "running away from serious stuff and turning instead to trivial matters", and you'd be right. But for me, trying to understand delicate complex-dynamics structures and arguments, when the basic algebraic/geometric backbone on which the topological/analytic machinery rests is not clear to me, is like learning the alphabet starting from Z.


My primary effort was on rephrasing the language of this forum and the concept of iteration in categorical terms, and in this language trying to find formal definitions of gadgets like hyperoperations and ranks. This effort, as you can imagine, is headed towards the point where my naive first-year-undergrad expertise will render me superfluous. But it's kind of exciting when you discover that tons of concepts and definitions were already known for decades under other names and are just easy exercises for competent mathematicians: it means you were not that crazy.
JmsNxn Wrote:As far as Kan extensions go, I'm clueless. Frankly, all I know from category theory is from group theory; don't know how much help I'd be. But I'd be happy to talk.

Group theory and a bit of linear algebra are enough to give you a taste of it using some slogans. Long story short, the endgame of my crackpot-ish travels in the lands of math is the following points:

  1. Solution sets of Superfunction/Abel/Schroeder/Böttcher equations, and general recursions, like sums and iterated compositions, are Hom-sets, aka functors (in the appropriate categories).
    You already know the Hom functors well: every category \( C \) has its own Hom functor \( {\rm Hom}_C:C^{op}\times C\to {\bf Set} \). Take two groups: \( {\rm Hom}_{\rm Grp}(G,H) \) is the set of homomorphisms; take two \( k \)-vector spaces: \( {\rm Hom}_{{\rm Vec}_k}(V,W) \) is all the linear maps; take two sets and you get the set of functions \( Y^X \); take two topological spaces and you get the continuous maps. The key point is that a morphism between two structures is a function that solves some conditions, e.g. respecting the group operations. Now, an "iteration" \( (X,s)\to (Y,f) \) is a function that respects the successor and the dynamics of an endomap (using Kouznetsov's terms: it respects the transfer function): \( \phi(s(x))=f(\phi(x)) \). So, in short, the solution sets of the Superfunction/Abel/Schroeder/Böttcher equations are spaces of morphisms in the right category, e.g. the category of endomaps. To get a glimpse of why this is interesting, look at the particular case of the Hom-sets \( C^0(X,{\mathbb R}) \) and how they're stratified into smoothness classes; in some sense there is a context where this phenomenon happens but where we get the rank-classes instead.
  2. Don't focus on the "concrete" functions, but abstract away the algebraic structure, i.e. study monoids and categories.
    Tl;dr: don't study directly iterations and continuous iterations but rather groups and monoid actions and their categories.
  3. Extension of the iteration "is like" extension of scalars for vector spaces.
  4. Goodstein Hyperoperations are star-shaped diagrams in categories of monoid actions.
If needed I can add some context for my slogans. I had prepared a few paragraphs of elementary explanations for some of them, but I cut them because the post was getting lengthy.
I won't say too much in this post because I'm preparing a trivial question on the limiting trick you use.

MSE MphLee
Mother Law \((\sigma+1)0=\sigma (\sigma+1)\)
S Law \(\bigcirc_f^{\lambda}\square_f^{\lambda^+}(g)=\square_g^{\lambda}\bigcirc_g^{\lambda^+}(f)\)

