Nixon-Banach-Lambert-Raes tetration is analytic, simple and "closed form"!

tommy1729 Ultimate Fellow Posts: 1,742 Threads: 382 Joined: Feb 2009 01/15/2021, 02:57 AM

Recently James Nixon posted a paper claiming analytic tetration by using a simple phi function and the Banach fixed point theorem! As I commented there, I confirm this is indeed analytic! So it is simple (relatively easy to compute or define) and analytic.

But I discovered it also has a closed form, more or less. That form uses mainly the Lambert W and phi functions. Hence I name this type of tetration: Nixon-Banach-Lambert-Raes tetration, or in short NBLR tetration. More about that soon! (I need to sleep now.)

Not trying to steal too much credit by adding my name, I hope. This is mainly James Nixon's result of course. Although I must add it might influence questions about my sinh method. See also the comment I made in his thread. I started a new title because I wanted to highlight this wonderful result and my little contribution of noting the Lambert W connection. The naming (NBLR) is in order of importance.

Regards

Tom Marcel Raes

JmsNxn Ultimate Fellow Posts: 1,064 Threads: 121 Joined: Dec 2010 01/15/2021, 08:49 AM (This post was last modified: 01/15/2021, 08:50 AM by JmsNxn.)

Hey, Tom; I'm curious to see your representation of this function using the Lambert function. I have a couple of representations of this function in the complex plane, but they're a little... esoteric, let's say. And they don't look much better than the initial representation. I've got some half-results on an integral version; that's about it.

tommy1729 Ultimate Fellow Posts: 1,742 Threads: 382 Joined: Feb 2009 01/17/2021, 10:19 PM (This post was last modified: 01/17/2021, 10:49 PM by tommy1729. Edit Reason: improved tex i hope)

I recommend reading James's paper first, in particular to see that everything is well defined and the crucial things are proven.
Ok, so let's quickly evaluate what James Nixon has achieved and considered. The main (auxiliary) function is

$\phi(s) = e^{\displaystyle s-1 + e^{s-2+e^{s-3 \cdots}}}$

or

$\phi(s) = \lim_{n \to \infty} e^{s-1 + e^{s-2 + \cdots + e^{s-n}}}$

so that

$\phi(s+1) = e^{s+\phi(s)}$

He then uses it to define tetration like:

$\lim_{n\to\infty} \underbrace{\log \log \cdots \log}_{n\ \text{times}} \phi(s+n) = e \uparrow \uparrow s + \omega$

First I would like to remark two things.

There is a simple second way to define or view his auxiliary function: let $a_n = e^{-n}$ and notice they are all positive! Also note that $e^x$ takes any real $x$ to a positive real.

$\phi(s) = a_1 e^{\displaystyle s + a_2 e^{s+ a_3 e^{s \cdots}}}$

For some this might be easier to comprehend or use in proofs, or not. But the main thing is that this shows:

1) every truncation can be written as a Maclaurin series (Taylor series expanded at 0) with nonnegative coefficients;

2) since this function has been proven to be entire (in the paper), this implies not only that every truncation can be written with nonnegative coefficients, but also the limit.

Thus

$\phi(s) = p_0 + p_1 s + p_2 s^2 + p_3 s^3 + \cdots$

where the $p_n$ are all nonnegative reals. All the derivatives of phi are thus also nonnegative and increasing for positive real $s$, and have the same Taylor property.

***
For those interested, this implies James has also solved at least one problem from fake function theory, and fake function theory is somewhat involved. We could say the same about Laplace transformations, Bernstein theorems, etc. In other words: this is good. We have some understanding of these types of functions, both from classical math and from ideas on this forum. For numerical algorithms and convergence this is also nice.
***

Secondly, this function phi is periodic with the same period as $e^x$. This was already noted in the paper of course, but together with the above and the property that phi is an entire function, this is quite powerful.

Ok.
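A quick numerical sanity check (my own sketch, not from the paper): the truncated tower can be evaluated from the inside out, and the result should satisfy the functional equation $\phi(s+1) = e^{s+\phi(s)}$ up to truncation error.

```python
import math

def phi(s, depth=60):
    """Approximate phi(s) = e^{s-1 + e^{s-2 + e^{s-3 + ...}}} by truncating
    the tower at `depth` levels and evaluating from the innermost term out."""
    x = 0.0
    for n in range(depth, 0, -1):
        x = math.exp(s - n + x)
    return x

# The functional equation phi(s+1) = exp(s + phi(s)) should hold up to
# truncation error for moderate real s.
for s in [0.0, 0.5, 1.0]:
    assert abs(phi(s + 1) - math.exp(s + phi(s))) < 1e-9
```

For reference, this gives $\phi(0) \approx 0.424$, $\phi(1) \approx 1.528$; the values blow up tetrationally as $s$ grows, which is exactly why only a few tower levels matter numerically.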
When trying to "visualize" or "understand" an analytic function or analytic continuation, I always (also) think in terms of copies or multiplicities. How many solutions does $f(z) = y$ (for fixed $y$) have in a certain "modest" area? What are the local invariants? Symmetry?

The equation $f(z) = y$ (for fixed $y$) always has a finite number of solutions within a nonzero radius if $f$ is analytic within the radius and on its boundary. (If it does not, it cannot be analytic everywhere within the radius and on the boundary!) When we consider Riemann surfaces this matters. We also know that the inverse of a locally analytic function is also locally analytic.

Now let's consider what I call "relativity" in math. Take for example the square root: $a^2 = b$. Now $a$ is an analytic function of $b$ and vice versa. We can define or represent $a$ (or $b$) as

1) a series expansion
2) an integral
3) an iterative algorithm
4) an equation

etc. They might not converge the same. But when they do converge, they should always give one of the correct solutions. And they do. And the point is that the analytic continuation, second solution (minus square root) or other branch are equal and equivalent in meaning and value for each representation, AND the same for ALL representations of them.

This is very important, because "objections" often take the form: this iteration is chaotic or divergent or not well defined, this sum diverges, this is only defined for reals, etc. Consequently this relativity is an important concept and a powerful one. In case you wonder about divergent sums: there are summability methods that rely on algebraic equations, analytic continuations, etc. If we have convergence in a nonzero radius we can apply "relativity" and other representations.

***
If we have no convergence we might add a parameter and then later try to apply relativity/other representations/continuations. For instance, for a sum that diverges everywhere.
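The square-root example of "relativity" can be made concrete (my own sketch): an iterative algorithm (Newton's method), a series expansion (the binomial series around 4), and the closed form all converge to the same branch of the same analytic function.

```python
import math

def sqrt_newton(b, iters=50):
    # iterative algorithm: a_{k+1} = (a_k + b/a_k)/2 solves a^2 = b
    a = b if b > 1 else 1.0
    for _ in range(iters):
        a = 0.5 * (a + b / a)
    return a

def sqrt_series(b, center=4.0, terms=40):
    # series expansion: sqrt(center*(1+h)) = sqrt(center) * sum C(1/2,k) h^k
    h = (b - center) / center
    total, coeff = 0.0, 1.0
    for k in range(terms):
        total += coeff * h**k
        coeff *= (0.5 - k) / (k + 1)   # binomial coefficient recurrence
    return math.sqrt(center) * total

# Three representations, one value: the principal square root.
b = 5.0
assert abs(sqrt_newton(b) - math.sqrt(b)) < 1e-12
assert abs(sqrt_series(b) - math.sqrt(b)) < 1e-12
```

The series only converges for $b$ near the expansion center, while Newton converges for any positive $b$; where both converge, they agree, which is the point being made.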
That parameter trick is related to perturbation theory. HOWEVER, in this case the equivalence is no longer guaranteed. (THIS IS A REMARK THOUGH, AND IRRELEVANT FOR THIS POST.)
***

Another way to put it and define "relativity" informally is this: "analytic continuation = algebraic continuation". I often say that. I feel this connects all branches of math, but I guess that is perhaps philosophy.
***

Anyway, it thus makes sense to consider this:

$\underbrace{\log \log \cdots \log}_{n\ \text{times}} \phi(s+n) = \phi(s) + r_n(s)$

And then by recursion and induction we get

$r_0(s) = 0, \quad r_1(s) = s$

$r_{n+1}(s) = \ln(\phi(s+1) + r_n(s+1)) - \phi(s)$

By using $\log(a + b) = \log(a) + \log(1 + b/a)$ we get

$r_{n+1}(s) = \ln(\phi(s+1)) + \ln(1 + r_n(s+1)/\phi(s+1)) - \phi(s)$

(Notice up to here we could have done the analogue with the sinh method, and in fact we could try the things below too. But for phi things work out nicer.)

By using the fundamental equation of phi, $\phi(s+1) = e^{s+\phi(s)}$, we can simplify without any problems:

$r_{n+1}(s) = s + \phi(s) + \ln(1 + r_n(s+1)/\phi(s+1)) - \phi(s)$

and finally

$r_{n+1}(s) = s + \ln(1 + r_n(s+1)/\phi(s+1))$

NOW apply the Banach fixed point theorem and relativity to notice that the recursion of these functions $r_1(s), r_2(s), \ldots$ actually converges to a constant rather than a non-constant function. The conditions for the Banach fixed point theorem are clearly met for positive real $s$, since we have an analytic contraction. NOW we can therefore consistently define

$\lim_n r_n(s) = v(s) = V$

We arrive at

$V = s + \ln(1 + V/\phi(s+1))$

Notice this equation contains functions that are analytic almost everywhere. [1] And remember the inverse of a locally analytic function is also locally analytic. [2] Now from the concept of relativity we know we only need to solve this equation for $V$, and that is defined and analytic for most complex numbers too, by [1] and [2]. We continue: we split the problem into 2 cases.
1) $V = 0$: since $\phi(s+1)$ is never 0, the equation becomes $0 = s + \ln(1 + 0) = s$, so $V = 0$ forces $s = 0$.

2) If $V$ and $s$ are nonzero we get:

$V = s + \ln(1 + V/\phi(s+1)) = s + \ln(1 + V/(ts))$

where $t = t(s) = \phi(s+1)/s$ (notice $s$ is nonzero!). $t(s)$ is at worst a meromorphic function, since it can have only a pole at $s = 0$, because $\phi(s+1)$ is entire and never zero. SO we are left to solve this:

$V = s + \ln(1 + V/(ts))$

Using the Lambert W function we get

$V = -W(-e^{-ts - s}\, ts) - ts$

But $ts$ is just $\phi(s+1)$, so

$V = -W(-e^{-\phi(s+1) - s}\, \phi(s+1)) - \phi(s+1)$

So we have a closed form using phi and Lambert W, as claimed:

$\lim_{n\to\infty} \underbrace{\log \cdots \log}_{n\ \text{times}} \phi(s+n) = \phi(s) - W(-e^{-\phi(s+1) - s}\, \phi(s+1)) - \phi(s+1)$

Notice this also gives us a way to compute the difference operator over phi, although I am uncertain how practical it is. But let us continue. $\phi(s+1) = e^{s+\phi(s)}$, therefore

$V = -W(-e^{-\phi(s+1) + \phi(s)}) - \phi(s+1)$

and thus

$\lim_{n\to\infty} \underbrace{\log \cdots \log}_{n\ \text{times}} \phi(s+n) = \phi(s) - \phi(s+1) - W(-e^{-\phi(s+1) + \phi(s)})$

So if we take $M(s) = -\phi(s+1) + \phi(s)$, then we get

$\lim_{n\to\infty} \underbrace{\log \cdots \log}_{n\ \text{times}} \phi(s+n) = M(s) - W(-e^{M(s)})$

Now let $L = x - W(-e^x)$; then $x = L - e^L$. And $M(s)$ is also an entire function, so we also have a more or less practical way to compute a slog (the functional inverse of this kind of tetration) too! Notice the integer solution $L = 0$, $x = -1$ of $x = L - e^L$.

***
Also, let $Q = \phi(s)$. Then

$M(s) = H = -e^{s + Q} + Q$

$Q = H - W(-e^{H + s})$

and $e^s = \phi(s+1) e^{-\phi(s)}$, so

$\phi(s) = M(s) - W(-e^{M(s) - \phi(s)}\, \phi(s+1)) = M(s) - W(-e^{-\phi(s+1)}\, \phi(s+1))$

hence

$\phi(s+1) = -W(-e^{-\phi(s+1)}\, \phi(s+1))$.
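The inversion identity $L = x - W(-e^x) \Rightarrow x = L - e^L$ can be sanity-checked numerically. A sketch of my own, using a small Newton solver for the principal branch $W_0$ so the snippet stays dependency-free (for real $x < -1$ the argument $-e^x$ lies in $(-1/e, 0)$, where $W_0$ is real):

```python
import math

def lambert_w0(z, iters=80):
    """Principal branch of Lambert W (solves w*e^w = z) via Newton's
    method; adequate for real z in (-1/e, 0) as used below."""
    w = 0.0
    for _ in range(iters):
        ew = math.exp(w)
        w -= (w * ew - z) / (ew * (1.0 + w))
    return w

# With L = x - W(-e^x), we recover x = L - exp(L).
for x in [-1.5, -2.0, -3.0]:
    L = x - lambert_w0(-math.exp(x))
    assert abs((L - math.exp(L)) - x) < 1e-10
```

The algebra behind the check: if $w = W(-e^x)$ then $w e^w = -e^x$, so $e^L = e^{x-w} = -w$ and $L - e^L = x - w + w = x$.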
***
Of course, branches of logarithms, inverse functions of phi or M, and of course Lambert W are not considered in this sketchy overview here. So be careful with those issues. But I think if your branches are correct for the real case they will not pose a problem for the complex numbers close to the real line, and by continuation a solid formal solution on the complex plane will be achieved.

None of these computations are super hard compared to many complicated solutions or Riemann mappings (like Kneser's tetration solution). A deeper study of $\phi(s)$ would be desired, in particular the Riemann surface of its functional inverse. Pictures would be nice too. Some day students might have a phi button on their calculators. I would like that.

tommy1729

truth is what does not go away when you stop believing in it

Tom Marcel Raes

Sorry for the delay in posting this. Feel free to ask or comment.

tommy1729 Ultimate Fellow Posts: 1,742 Threads: 382 Joined: Feb 2009 01/17/2021, 10:26 PM

In case the tex formatting does not work, I added a copy here: Tom Marcel Raes Attached Files tetration relativity.txt (Size: 9.44 KB / Downloads: 260)

tommy1729 Ultimate Fellow Posts: 1,742 Threads: 382 Joined: Feb 2009 01/18/2021, 10:24 PM

Consider the equation

$f(x+1) = \exp(x + f(x))$

such that $f$ maps the reals to a subset of the reals. The solution $\phi(s+c)$ for a real constant $c$ seems to be the unique entire solution that also satisfies

$f(x) = f(x+2\pi i)$

To see why, notice that for a real 1-periodic function $g(s)$, all the solutions are probably $\phi(s + g(s))$. But if our solution is $2\pi i$ periodic, then our $g(s)$ should be as well. But if $g(s)$ is doubly periodic, then by the theory of (analytic) doubly periodic functions, $g(s)$ cannot be a non-constant entire function.

regards

tommy1729

JmsNxn Ultimate Fellow Posts: 1,064 Threads: 121 Joined: Dec 2010 01/18/2021, 11:05 PM (This post was last modified: 01/18/2021, 11:09 PM by JmsNxn.)

Hey, Tommy! Wow!
Never would've guessed the correction term for $\phi$ to make it into tetration (the function $V$) would be expressible using Lambert! That definitely makes talking about branch cuts much easier. Gives us something concrete to fiddle with!

My uniqueness condition (at least the one I like) is the exponential decay uniqueness. If $f$ is continuous on $\mathbb{R}$ and

$f(x+1) = e^{x+f(x)}, \quad \lim_{x\to-\infty} f(x) = 0$

then $f(x) = \phi(x+q)$. Also, we can note instantly that if it's asymptotic to $e^{x-1}$, it means that $q = 0$. This really isn't too hard to prove.

I haven't tried doing anything with $\phi^{-1}$, but I have been fiddling with the slogarithm. The conjecture I was thinking of is that $\text{slog}$ is holomorphic on $\mathbb{C}$ minus a nowhere dense set (i.e., a whole bunch of branch cuts), and these branch cuts are located about all the fixed points $L$ of $e^L = L$. This is reasoned because I have a rough argument that $e \uparrow \uparrow s \neq L$ for all these fixed points, but $\lim_{|s|\to\infty} e \uparrow \uparrow s = L$ so long as $\pi \ge \arg(s) \ge \pi/2$. I.e., if we limit to infinity in the left half plane we approach a fixed point of exp.

I haven't gone back to trying to figure this out lately, as I inevitably bump into the question: where the hell are the branch cuts of $e\uparrow\uparrow s$? Which amounts to: where the hell are the zeroes of $e\uparrow\uparrow s$? In my wildest dreams there is only one zero, at $-1$, but I can't prove that.

You seem to also have stumbled upon the thesis of my work lately. If

$\sum_{j=0}^\infty \|h_j(s,z) - z\| < \infty$

then

$\lim_{n\to\infty} h_0(h_1(\cdots h_n(s,z))) = H$

is holomorphic in both variables. For the function $\phi$ it's a special case, but the hearty theorem is: if

$\sum_{j=0}^\infty \|h_j(s,z) - A\| < \infty$

for a constant $A$, then

$\lim_{n\to\infty} h_0(h_1(\cdots h_n(s,z))) = H$

is holomorphic in $s$ but constant in $z$.
These $\|\cdot\|$ are all supremum norms over compact subsets of wherever the hell these things are holomorphic. I've been detailing a lot of what I like to call compositional analysis. No more sums and products; everything is composition. I did a whole bunch on the integral too, and switching that up. Again, I only really came to this tetration in passing; it kind of just popped out after doing all this stuff. You might find my first paper on the subject interesting: I solve the equation $y(s+1) - y(s) = e^{sy(s)}$ in the complex plane. It's a lot harder to construct than $\phi$ though...

Thanks for contributing, Tommy! I'm excited to see what you uncover!

Regards, James

tommy1729 Ultimate Fellow Posts: 1,742 Threads: 382 Joined: Feb 2009 01/19/2021, 12:00 AM

(01/18/2021, 11:05 PM)JmsNxn Wrote: [the post above, quoted in full]

Thank you for your kind and interesting reply James. I hope that means you agree with the name Nixon-Banach-Lambert-Raes tetration (NBLR) then?

I considered the questions you posted too, but did not have time to solve them yet. They are interesting though. I have not read any of your papers completely yet, not even the one I discussed here. I knew it worked from the comments given in the other thread. I read about 2 pages to get roughly the same notation, for clarity. Of course, reading your papers is on my to-do list!
I could be wrong, but does your uniqueness condition not fail? I mean, when I plug in the 1-periodic function, the limit is still zero I think... Maybe I missed a detail. I also wonder if any other solution exists with all derivatives positive (for $0 < s$)... periodic functions tend to plug in some negatives.

I have not stumbled on your thesis. Nor have I read the paper about $y(s+1) - y(s) = e^{sy(s)}$. I do not even know where to find them? Tell me. It seems your skills have improved dramatically.

You seem to also have stumbled upon the thesis of my work lately. If $\sum_{j=0}^\infty \|h_j(s,z) - z\| < \infty$, then $\lim_{n\to\infty} h_0(h_1(\cdots h_n(s,z))) = H$ is holomorphic in both variables. For the function $\phi$ it's a special case, but the hearty theorem is: if $\sum_{j=0}^\infty \|h_j(s,z) - A\| < \infty$ for a constant $A$, then $\lim_{n\to\infty} h_0(h_1(\cdots h_n(s,z))) = H$ is holomorphic in $s$ but constant in $z$.

This feels very intuitive to me! Are you the first to discover this and prove it formally? Is that your thesis? It looks familiar, especially in 1 variable (instead of the 2 you use). If it is completely new, my congratulations.

I conjecture that the plots Sheldon made comparing the fake semi-exponential with the same function via the Kneser method will look the same as plots comparing the fake semi-exponential with the same function based on NBLR tetration. Although I must add that the fake is also computed based on Kneser; but I think a fake based on Kneser will be similar to one based on NBLR.

Regards

tommy1729
Tom Marcel Raes

JmsNxn Ultimate Fellow Posts: 1,064 Threads: 121 Joined: Dec 2010 01/19/2021, 01:27 AM (This post was last modified: 01/19/2021, 02:42 AM by JmsNxn.)

I assure you it's not a mistake. But I did misspeak a bit.
Observe,

$f(x) = e^{\displaystyle x-1+e^{\displaystyle x-2 +e^{\cdots x-n+f(x-n)}}}$

Now $f(x-n) \to 0$ as $n\to\infty$, and therefore,

$f(x) = e^{\displaystyle x-1+e^{\displaystyle x-2 +e^{\cdots}}} = \phi(x)$

We can actually weaken this condition to $f(x-n) \to C$; it doesn't matter, because the limit always converges to $\phi$. I like to then categorize the uniqueness as being asymptotic to $e^{x-1}$, which is equivalent to $f(x) \to 0$ but sounds a bit nicer. This is the beauty of always using Banach in everything I do, lol.

As to the originality of this thesis, I assure you it's 100% mine. It falls under the category of infinite compositions, and not to speak too harshly, the field is dead. No one does anything. I keep a correspondence with the only other person who seems to be actively working on it, Dr. John Gill. And he derived similar results to mine, but nothing on holomorphy and nothing on differentiability. Actually, he did very similar things to what I'm doing--but with continuous functions mostly. He also says nothing about the functional equation aspect. And the summability criteria are all my own; though John uses similar constructs (I tried to boil it down into a single condition: the convergence of a sum), he sort of talks about different types of conditions. My condition is definitely weaker though, and encompasses most of John's. Plus holomorphy is a cheap write-off, lol.

To find my papers just google James David Nixon arXiv; I just publish them on arXiv as I'm too lazy to be bothered with the usual journals. Always such a headache, and I doubt I'd gain much prestige from the journals I could get into anyway, lol. Plus they always ask for a publishing fee of like $200 and I ain't got spare cash like that, lol.
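The seed-independence claim above ($f(x-n) \to C$ for any constant $C$ still gives $\phi$) is easy to watch numerically. A sketch of my own: evaluate the backward tower with different innermost seeds and observe that the seed is forgotten.

```python
import math

def tower(x, seed, depth=60):
    """Evaluate e^{x-1 + e^{x-2 + ... + e^{x-depth + seed}}} from the
    inside out, starting the innermost level at `seed`."""
    v = seed
    for n in range(depth, 0, -1):
        v = math.exp(x - n + v)
    return v

# The innermost seed is forgotten in the limit: any bounded C gives phi(x),
# because exp(x - depth + C) is already astronomically small.
x = 0.5
vals = [tower(x, c) for c in (-3.0, 0.0, 5.0)]
assert max(vals) - min(vals) < 1e-12
```

Intuitively, the innermost term $e^{x - n + C}$ is crushed toward 0 as $n$ grows no matter what bounded $C$ is, which is exactly the contraction Banach exploits.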
This is the one that started it all though: $\Delta y = e^{sy}$, Or How I Learned To Stop Worrying and Love The Gamma Function (https://arxiv.org/abs/1910.05111). It's written loosely; I like to play with language <_< when I probably shouldn't. But I believe expository-style math papers are easier to digest, especially when introducing notation and the sorts.

And on what this tetration's called: I hardly care about what it's called; call it what you want. I just call it a tetration function. I need a good uniqueness condition first. I'm suspicious that this solution is actually Kneser's solution; hard to explain why, but I'll have to do more manipulations to argue that.

Thanks, Tommy. I do believe I have come a long way, and come into my own; a lot of my old posts make me cringe a fair amount. Luckily my internet footprint was fairly small and I didn't say something too stupid to too large an audience. I eventually learned to just shut up and pick up a book, lol.

Regards, James

tommy1729 Ultimate Fellow Posts: 1,742 Threads: 382 Joined: Feb 2009 02/01/2021, 11:06 PM

I found a mistake, and therefore this edit:

... consider tetration (base e) for $\Re(s) > 2$. We start with real $s$ and then extend by analytic continuation. Anyway, it thus makes sense to consider this. For $s > 2$:

$\underbrace{\log \log \cdots \log}_{n\ \text{times}} \phi(s+n) = \phi(s) + r_n(s)$

And then by recursion and induction we get

$r_0(s) = 0, \quad r_1(s) = s$

$r_{n+1}(s) = \ln(\phi(s+1) + r_n(s+1)) - \phi(s)$

By using $\log(a + b) = \log(a) + \log(1 + b/a)$ we get

$r_{n+1}(s) = \ln(\phi(s+1)) + \ln(1 + r_n(s+1)/\phi(s+1)) - \phi(s)$

(Notice up to here we could have done the analogue with the sinh method, and in fact we could try the things below too. But for phi things work out nicer.)

By using the fundamental equation of phi, $\phi(s+1) = e^{s+\phi(s)}$, we can simplify without any problems:

$r_{n+1}(s) = s + \phi(s) + \ln(1 + r_n(s+1)/\phi(s+1)) - \phi(s)$

and finally

$r_{n+1}(s) = s + \ln(1 + r_n(s+1)/\phi(s+1))$

NOW we can therefore consistently define

$\lim_n r_n(s) = v(s) = V(s)$

We arrive at

$V(s) = s + \ln(1 + V(s+1)/\phi(s+1))$

Define

$V(s+1) = V(s) + R(s)$

Notice this equation contains functions that are analytic almost everywhere. [1] And remember the inverse of a locally analytic function is also locally analytic. [2] Now from the concept of relativity we know we only need to solve this equation for $V$, and that is defined and analytic for most complex numbers too, by [1] and [2].

...

SO we are left to solve this (writing $V = V(s)$ and $t = t(s) = \phi(s+1)/s$ as before):

$V = s + \ln(1 + (V+R(s))/(ts))$

Using the Lambert W function we get

$V = -W(-e^{-ts - s - R(s)}\, ts) - ts - R(s)$

But $ts$ is just $\phi(s+1)$, so

$V = -W(-e^{-\phi(s+1) - s - R(s)}\, \phi(s+1)) - \phi(s+1) - R(s)$

So we have a closed form using phi, $R(s)$ and Lambert W:

$\lim_{n\to\infty} \underbrace{\log \cdots \log}_{n\ \text{times}} \phi(s+n) = \phi(s) - W(-e^{-\phi(s+1) - s - R(s)}\, \phi(s+1)) - \phi(s+1) - R(s)$

Notice this also gives us a way to compute the difference operator over phi, although I am uncertain how practical it is. But let us continue. $\phi(s+1) = e^{s+\phi(s)}$, therefore

$V = -W(-e^{-\phi(s+1) + \phi(s) - R(s)}) - \phi(s+1) - R(s)$

and thus

$\lim_{n\to\infty} \underbrace{\log \cdots \log}_{n\ \text{times}} \phi(s+n) = \phi(s) - \phi(s+1) - R(s) - W(-e^{-\phi(s+1) + \phi(s) - R(s)})$

So if we take $M(s) = -\phi(s+1) + \phi(s)$, then we get

$\lim_{n\to\infty} \underbrace{\log \cdots \log}_{n\ \text{times}} \phi(s+n) = M(s) - R(s) - W(-e^{M(s) - R(s)})$

Of course, branches of logarithms, inverse functions of phi or $M - R$, and of course Lambert W are not considered in this sketchy overview here. So be careful with those issues.
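The corrected fixed-point equation can be checked numerically. A sketch of my own (for small real $s$ rather than $s > 2$; `phi` is a truncated tower capped at `inf` to dodge overflow, since $\phi(s+n)$ grows tetrationally): iterating $r_n$ converges in a handful of steps, the limit satisfies $V(s) = s + \ln(1 + V(s+1)/\phi(s+1))$, and $R(s) = V(s+1) - V(s)$ is visibly nonzero.

```python
import math

OVERFLOW = 700.0  # exponents beyond this would overflow a float

def phi(s, depth=60):
    # truncated tower for phi, capped at inf once exp would overflow
    x = 0.0
    for n in range(depth, 0, -1):
        e = s - n + x
        x = math.inf if e > OVERFLOW else math.exp(e)
    return x

def V(s, n=10):
    # V(s) = lim r_n(s), with r_0 = 0 and
    # r_{k+1}(s) = s + log(1 + r_k(s+1)/phi(s+1)); a finite/inf ratio
    # cleanly gives 0.0, so the cap above is harmless here.
    if n == 0:
        return 0.0
    return s + math.log1p(V(s + 1, n - 1) / phi(s + 1))

s = 0.0
# The corrected equation: V(s) = s + ln(1 + V(s+1)/phi(s+1)).
assert abs(V(s) - (s + math.log1p(V(s + 1) / phi(s + 1)))) < 1e-9
# So V is NOT constant in s: R(s) = V(s+1) - V(s) is a genuine function.
R = V(s + 1) - V(s)
assert abs(R) > 0.1  # R(0) is about 0.59 numerically, far from zero
```

The convergence is brutal in the good sense: because $\phi(s+k)$ blows up tetrationally, the correction term $r_k(s+1)/\phi(s+1)$ dies after roughly five levels, so `n=10` is already converged to machine precision.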
***
This places a lot of attention and mystery on $M(s) - R(s)$ and the correct branch of Lambert W. I think $R(s)$ is close to 1 for all $\Re(s) > 2$ with small imaginary parts, but I need to investigate.

Another thing is that at first sight there are way too many "negative terms" in the solution. WE CANNOT ACCEPT "POSITIVE = NEGATIVE". Not even for a $C^\infty$ solution. (Reminds me of the famous $1+2+3+4+\cdots = -1/12$.) I hope this is resolved by the understanding of $M$, $R$ and the branches of Lambert W. Maybe we can rewrite the expression without negative terms?? I mean, $M(s)$ is negative for $s > 2$, right?? Did I make a sign mistake? Or another mistake? Or does the equation give other solutions than the one given here, ones that cannot even be reached by the branches of the Lambert W? How about Banach then?? So many questions. What do you think??

regards

tommy1729

JmsNxn Ultimate Fellow Posts: 1,064 Threads: 121 Joined: Dec 2010 02/02/2021, 04:40 AM (This post was last modified: 02/02/2021, 06:34 AM by JmsNxn.)

Hey, Tommy! So I had noticed that you implicitly assumed $V(s+1) = V(s)$ in your original analysis. I didn't notice it right away, but I noticed it afterwards when I saw it would imply $2\pi i$ periodicity. I looked at it some more, and was pretty sure you were on to something--but I've never been too good with the Lambert function. This makes much, much more sense, quite frankly. Especially if we think of the branch cuts appearing at $\Im(s) = 2\pi k$ for $k \in \mathbb{Z}$. This is where our $\phi$ function will recycle, and a cluster of singularities will force non-analyticity of $\tau$. I'm not sure how helpful I'd be at proving this using the Lambert function, but it's always helpful to have alternative representations.

EDIT: Oh, and definitely along the lines $\Im(s) = 2\pi k$ there is a neighborhood in which $|\phi(s)| = 1$. And these neighborhoods definitely appear closer and closer to $\mathbb{R} + 2\pi i k$. Just think Picard's theorem for this one.
You might appreciate this: $e \uparrow \uparrow s$ must get arbitrarily large in the right half plane, and must get arbitrarily close to every large enough complex value (Picard). Because of this, we get arbitrarily close to $\phi(s_0) = -s_0 + i\ell$, which makes $\phi(s_0+1) = e^{i\ell}$. The trouble is making sure they cluster and don't just appear all willy-nilly out of the blue. I implicitly assumed this before without even realizing it. So I was at best half-right. I think I'm closer now.
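On the branch question raised above: a numerical experiment (entirely my own sketch, done for real $s$ near 0 rather than $s > 2$) suggests that the $k = -1$ branch of Lambert W is the one reproducing the limit; the "extra" negative terms are then absorbed because $W_{-1}$ of a small negative argument is itself large and negative.

```python
import math

OVERFLOW = 700.0

def phi(s, depth=60):
    # truncated tower for phi, capped at inf once exp would overflow
    x = 0.0
    for n in range(depth, 0, -1):
        e = s - n + x
        x = math.inf if e > OVERFLOW else math.exp(e)
    return x

def V(s, n=10):
    # V(s) = lim r_n(s); see the correction post for the recursion
    if n == 0:
        return 0.0
    return s + math.log1p(V(s + 1, n - 1) / phi(s + 1))

def lambert_w_m1(z, iters=60):
    """W_{-1} branch (w <= -1) for real z in (-1/e, 0): Newton's method
    from the standard asymptotic guess; adequate for the argument used here."""
    w = math.log(-z) - math.log(-math.log(-z))
    for _ in range(iters):
        ew = math.exp(w)
        w -= (w * ew - z) / (ew * (1.0 + w))
    return w

s = 0.0
R = V(s + 1) - V(s)
M = phi(s) - phi(s + 1)
lhs = phi(s) + V(s)                            # the n-fold-log limit at s
rhs = M - R - lambert_w_m1(-math.exp(M - R))   # closed form, k = -1 branch
assert abs(lhs - rhs) < 1e-8

# Cross-check against a literal triple log of phi(s+3):
direct = phi(s + 3.0)
for _ in range(3):
    direct = math.log(direct)
assert abs(direct - lhs) < 1e-2  # r_3 is not fully converged, hence the slack
```

This is only evidence, not proof: which branch is correct may well change with $s$, exactly as cautioned above.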

