Bounded Analytic Hyper-operators

JmsNxn (Long Time Fellow; Posts: 291; Threads: 67; Joined: Dec 2010), 03/23/2015, 02:12 AM:

Hey everyone, check out my paper! It is written formally and rigorously, in a purist complex-analysis style. It is about the bounded analytic hyper-operators, those with base between 1 and eta, and fractional iteration through the lens of fractional calculus. I've centered it around Ramanujan's master theorem; using that theorem as the foundation gives some very advantageous results. I give an expression for $\alpha [n] z$ when $\alpha \in [1,e^{1/e}]$, $z \in \mathbb{C}$, using Ramanujan's master theorem. This is not as easy as it sounds offhand, but I do believe it has all been proved.

I've uploaded it to arXiv; I've just been waiting out the hold period. Thank you if you do read it. I really appreciate any support, comments, or suggestions. I am trying to get this published, and I would never have been able to do it without this community. It has knocked me on the head so many times that I finally got the rigor and formality my thoughts have now. I really hope this paper makes up for all my stupidities when I have posted here. This is really a big step, lol.

Attached Files: Complex iterations and bounded analytic hyper-operators.pdf (Size: 396.86 KB / Downloads: 488)

MphLee (Fellow; Posts: 95; Threads: 7; Joined: May 2013), 03/24/2015, 08:34 PM:

OMG... I'm reading page 4... and it looks amazing. Finally I understand why the auxiliary functions are there... this really is "as beautiful as the Gamma function interpolating the factorial" (as you told me)! I guess the time has come for me to study complex analysis!

MathStackExchange account: MphLee

fivexthethird (Junior Fellow; Posts: 9; Threads: 3; Joined: Nov 2013), 03/25/2015, 07:29 AM:

That is a fantastic paper!
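Since the paper announced above is built on Ramanujan's master theorem, here is a quick numeric illustration of the theorem itself. This is a sketch of my own (not code from the paper): taking $\phi(k)=2^{-k}$ gives $f(x)=e^{-x/2}$, and the theorem predicts $\int_0^\infty x^{s-1}f(x)\,dx=\Gamma(s)\,\phi(-s)=\Gamma(s)\,2^{s}$.

```python
import math

# Toy check of Ramanujan's master theorem (my own sketch, not the paper's code).
# RMT: if f(x) = sum_{k>=0} phi(k) (-x)^k / k!, then
#      int_0^inf x^(s-1) f(x) dx = Gamma(s) * phi(-s).
# Here phi(k) = 2**(-k), so f(x) = exp(-x/2) and phi(-s) = 2**s.

def mellin(f, s, upper=80.0, steps=200_000):
    """Trapezoid-style approximation of int_0^upper x^(s-1) f(x) dx."""
    h = upper / steps
    total = 0.0
    for i in range(1, steps):  # integrand vanishes at both endpoints here
        x = i * h
        total += x ** (s - 1) * f(x)
    return total * h

s = 2.5
lhs = mellin(lambda x: math.exp(-x / 2), s)
rhs = math.gamma(s) * 2 ** s  # Gamma(s) * phi(-s)
print(lhs, rhs)  # the two values agree to several decimal places
```

The integral truncation at 80 is harmless because the integrand decays like $e^{-x/2}$.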
I have an idea on how to extend this to real bases greater than eta. Define, for integer $n$, the super root $\text{srt}_n(z)$ as the inverse of tetration in the base, so that $^n\text{srt}_n(z) = z$. Then we simply apply your method to interpolate this in $n$:

$\vartheta(z,w) = \sum_{n=0}^{\infty} \text{srt}_{n+1}(z)\frac{w^n}{n!}$

$\text{srt}_{n+1}(z) = \left.\frac{\mathrm{d}^n }{\mathrm{d} w^n}\right|_{w=0} \vartheta(z,w)$

Then we can simply invert in $z$. ...Unfortunately, there does not seem to be a nice recursion relation between super roots that would force the result to be a tetration. But it does seem to converge numerically to the super root for your tetration.

marraco (Fellow; Posts: 93; Threads: 11; Joined: Apr 2011), 03/25/2015, 01:48 PM:

Is there any way to deduce from it what $^{\frac{1}{n}}x$ is, or to solve $^{n}(^{m}x)=x$? I mean, given $n$ and $x$, what is $m$ as a function of $n$ (and $x$)?

MphLee (Fellow; Posts: 95; Threads: 7; Joined: May 2013), 03/25/2015, 07:43 PM:

@marraco: read JmsNxn's paper, page 15, Theorem 4.1. It gives a closed form for complex tetration, but only for a small set of bases.

MphLee (Fellow; Posts: 95; Threads: 7; Joined: May 2013), 03/26/2015, 11:08 PM:

@JmsNxn: I don't yet understand all the analysis behind it, but I'm trying to follow the logical implications of the lemmas. If I get it right, it is Ramanujan's master theorem that does all the work (but we have the exponential-bound requirement). The other interesting point is the differential equation that holds for the auxiliary function (it seems crazy), and this is the really interesting trick, imho. But please tell me if I've got it (I started studying calculus just a month ago).

We have a power series $\vartheta$:

$\vartheta(w)=a_0w^0+a_1w^1+a_2w^2+...+a_{n}w^n+a_{n+1}w^{n+1}+...$

and its derivative becomes the following (differentiation distributes over addition, commutes with scalar multiples, and by the power rule shifts the indices of the series):

$\vartheta'(w)=a_1w^0+(2a_2)w^1+(3a_3)w^2+...+(na_{n})w^{n-1}+((n+1)a_{n+1})w^n+...$

Now comes the trick...
(Am I right?) Define the coefficients in the form $a_n=\phi_{n+1}/n!$, which gives $na_n=n\cdot\phi_{n+1}/n!=\phi_{n+1}/(n-1)!$. Replace this in the previous series:

$\displaystyle\vartheta(w)= \phi_1{w^0\over0!}+\phi_2{w^1\over1!}+\phi_3{w^2\over 2!} + ... +\phi_{n+1}{w^n\over n!}+\phi_{n+2}{w^{n+1}\over(n+1)!}+ ...=\sum_{n=0}^{\infty}\phi_{n+1}{w^n\over n!}$

$\displaystyle\vartheta'(w)= \phi_2{w^0\over0!}+\phi_3{w^1\over1!}+\phi_4{w^2\over 2!}+...+\phi_{n+1}{w^{n-1}\over (n-1)!}+\phi_{n+2}{w^n\over n!}+...=\sum_{n=0}^{\infty}\phi_{n+2}{w^n\over n!}$

So if we define

$\displaystyle\Theta_\phi(k,w):=\sum_{n=0}^{\infty}\phi_{k+n+1}{w^n\over n!}$

we should have

$\displaystyle{d\over dw}\Theta_\phi(k,w)=\Theta_\phi(k+1,w)$

and we hope that

$\displaystyle{d^z\over dw^z}\Theta_\phi(k,w)=\Theta_\phi(k+z,w)$

At this point you set $\phi_n=\phi^{\circ n}(\xi)$ (in your other notation, the $\xi$-based superfunction, $\phi_n=\Sigma[\phi]_\xi(n)$), so differentiation in $w$ is traded for iteration of the transfer function (application of the right-composition operator).

Seeing this, I now understand why you told me, long ago, that this can be used for fractional ranks... but here you talk only about linear operators, while the recursion (I mean the $b$-based superfunction operator) and the antirecursion/subfunction operator are not linear. Also, just before Lemma 3.1, you say that this is not yet proved for operators other than left-composition. Why? Why can't we just repeat the trick using other sequences $\tau_n$, such as a sequence of values of your bounded hyper-operators indexed by the rank? We would just need $|T(n)|=\tau_n$ to be exponentially bounded, or have I missed something important somewhere in your paper?
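If I've followed the trick correctly, the shift identity $\frac{d}{dw}\Theta_\phi(k,w)=\Theta_\phi(k+1,w)$ is easy to check numerically. Here is a minimal sketch of my own (all function names are mine, nothing from the paper), taking $\phi_n$ to be the iterates of $x\mapsto\sqrt{2}^{\,x}$ from $\xi=0$, which stay bounded below 2 so the series converges everywhere:

```python
import math

# phi_n = n-th iterate of f(x) = sqrt(2)**x starting from xi = 0
# (these converge to 2, so they are certainly exponentially bounded).
# Theta(k, w) = sum_{n>=0} phi_{k+n+1} * w^n / n!  as in the post above.

def iterates(f, xi, count):
    """Return [xi, f(xi), f(f(xi)), ...] with `count` entries."""
    out = [xi]
    for _ in range(count - 1):
        out.append(f(out[-1]))
    return out

def theta(phi, k, w, terms=60):
    return sum(phi[k + n + 1] * w ** n / math.factorial(n) for n in range(terms))

f = lambda x: math.sqrt(2) ** x
phi = iterates(f, 0.0, 200)

# central-difference approximation of d/dw Theta(0, w) at w = 0.5
h = 1e-6
w = 0.5
numeric = (theta(phi, 0, w + h) - theta(phi, 0, w - h)) / (2 * h)
exact = theta(phi, 1, w)
print(abs(numeric - exact))  # very small: the two sides agree
```

The numeric derivative matches $\Theta_\phi(1,w)$ to high precision, which is consistent with the bounded iterates making the series entire in $w$.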
For example, we could try one of these two sequences. Given an invertible function $f$ and a $t$ in its domain, define the direct antirecursion sequence $f_{\sigma}$ of $f$ and its inverse antirecursion sequence $g_{\sigma}$ by

$f_0:=f$ and $f_{\sigma+1}:=f_{\sigma}\circ S\circ f_{\sigma}^{\circ-1}$

$g_0:=f^{\circ-1}$ and $g_{\sigma+1}:=f_{\sigma}\circ S^{\circ-1}\circ f_{\sigma}^{\circ-1}=f_{\sigma+1}^{\circ-1}$

(here $S$ is the successor $x\mapsto x+1$). These two sequences of functions give us two sequences of real numbers, $\tau_\sigma:=f_{\sigma}(t)$ and $\theta_\sigma:=g_{\sigma}(t)$, for a fixed $t$. By the way, those sequences satisfy the recurrences

$\tau_{\sigma+1}=f_\sigma(\theta_{\sigma}+1)$

$\theta_{\sigma+1}=f_\sigma(\theta_{\sigma}-1)$

tommy1729 (Ultimate Fellow; Posts: 1,358; Threads: 330; Joined: Feb 2009), 03/27/2015, 09:02 AM:

I haven't read the whole paper, but it seems good. Is it really new? Why didn't Ramanujan see this?

1. I really like the idea for the super root. I wonder if it agrees with other methods.

2. For a sequence of iterates going to a hyperbolic fixed point, does this agree with Koenigs? I think so. I think proving that in the paper would impress.

3. I think this can be applied to non-bounded operators with some tricks.

Regards, tommy1729

MphLee (Fellow; Posts: 95; Threads: 7; Joined: May 2013), 03/27/2015, 07:09 PM:

Off topic: @Tommy, do you mean non-linear operators? From the wiki I see that boundedness comes with the definition of a norm... what is the relationship between bounded operators and linear ones? The wiki page about operators is a bit chaotic.

tommy1729 (Ultimate Fellow; Posts: 1,358; Threads: 330; Joined: Feb 2009), 03/27/2015, 11:20 PM:

What I meant is that you have tetration! Not just for bases between 1 and eta, but for all real bases larger than 1. Not sure if you realize it yet. Here is a sketchy way to show it: in short, since you can analytically interpolate $x^{x^{\cdots}} = m$, where the dots are integer iterations and $m$ is a real > e ...
You can thus solve for the $x$ in $\text{tet}_x(t) = m$ for a given $m > e$ and $t > 0$ ($x$ is the base). But this also means you can solve for $t$, since you can set up the equation $\text{RAM}(m,t) = x$ for any desired $x$. When you have this $t$, you have found $\text{tet}_x(t) = m$ for the given $m$.

In other words, from the relation $\text{tet}_x(t) = m$ you can solve for either $x$ or $t$. Therefore you can solve $\text{sexp}_x(t) = m$, which is $\text{slog}_x(m)$. Then invert this function and you have sexp for any base > 1. Since all of this is done analytically, you have found tetration. And it seems simpler than some other methods, like Kneser's or the Cauchy integral.

Hope this is clear enough; I can explain more if required. So JmsNxn finally has his own method, with credit to the brilliant comment of fivexthethird. (I'm thinking of a variant of this method too.) I just wonder what this will be called... the JMS method? The JN method? The Jms5x3 method? The Jms5x31729 method? I've already started calling it, in my head, the "Ramanujan-Lagrange method". The reason seems clear: Ramanujan's master theorem and Lagrange's inversion theorem. For those unfamiliar: http://en.wikipedia.org/wiki/Lagrange_inversion_theorem

Regards, tommy1729

fivexthethird (Junior Fellow; Posts: 9; Threads: 3; Joined: Nov 2013), 03/28/2015, 06:46 AM (last modified 03/28/2015, 06:47 AM):

It seems that this method is equivalent to the method of Newton series. To see this, note that

$e^x \sum_{k=0}^{\infty}f(k) \frac{(-1)^k x^k}{k!} = \left(\sum_{k=0}^{\infty}\frac{x^k}{k!}\right)\left(\sum_{k=0}^{\infty}f(k) \frac{(-1)^k x^k}{k!}\right) = \sum_{k=0}^{\infty}x^k \sum_{j=0}^k\frac{(-1)^{j}f(j)}{(k-j)!\,j!} = \sum_{k=0}^{\infty}\frac{x^k}{k!} \sum_{j=0}^k(-1)^{j}f(j){k \choose j} = \sum_{k=0}^{\infty}{\Delta}^kf(0) \frac{(-1)^{k}x^k}{k!}$

where $\Delta f(x) = f(x+1)-f(x)$. So we can rewrite the integral as

$\frac{1}{\Gamma(-z)}\int_0^{\infty} \sum_{k=0}^{\infty}{\Delta}^kf(0) \frac{(-1)^{k}x^{k-z-1}e^{-x}}{k!}\, dx$

Since the power series defines an entire function, we can exchange the integral and the sum, so that we have

$\sum_{k=0}^{\infty}{\Delta}^kf(0)\frac{(-1)^{k}}{k!\,\Gamma(-z)} \int_0^{\infty} x^{k-z-1}e^{-x}\, dx = \sum_{k=0}^{\infty}{\Delta}^kf(0)\frac{(-1)^{k}\Gamma(k-z)}{k!\,\Gamma(-z)} = \sum_{k=0}^{\infty}{\Delta}^kf(0)\frac{(z)_{k}}{k!}$

where $(z)_{k} = z(z-1)(z-2)\cdots(z-(k-1))$ is the falling factorial. This is just the Newton series of $f$ around 0. TPID 13 is thus solved, since $x^{\frac{1}{x}}$ satisfies the bounds required for Ramanujan's master theorem to apply.

@tommy: I don't see how inverting around $t$ helps us recover tetration from the interpolated super root... it gets us the slog, yes, but in either case we just need to invert around $m$ to get tetration. Or am I misinterpreting your post?
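As a concrete sanity check of the Newton-series form derived above (a sketch of my own, not from the thread): for $f(k)=2^k$ every forward difference at 0 equals 1, so the series collapses to $\sum_k \binom{z}{k}=2^z$, and the partial sums should recover $2^z$ at fractional $z$.

```python
# Numeric illustration of the Newton series sum_k Delta^k f(0) * C(z, k)
# for f(k) = 2**k, evaluated at the non-integer point z = 0.5.

def forward_differences(values):
    """Delta^k f(0) for k = 0..len(values)-1, by repeated differencing."""
    diffs, row = [values[0]], list(values)
    for _ in range(len(values) - 1):
        row = [b - a for a, b in zip(row, row[1:])]
        diffs.append(row[0])
    return diffs

def binom(z, k):
    """Generalized binomial coefficient C(z, k) for real z."""
    out = 1.0
    for j in range(k):
        out *= (z - j) / (j + 1)
    return out

def newton_series(f_values, z):
    d = forward_differences(f_values)
    return sum(d[k] * binom(z, k) for k in range(len(d)))

samples = [2.0 ** k for k in range(30)]
approx = newton_series(samples, 0.5)
print(approx, 2 ** 0.5)  # the partial sum is close to sqrt(2)
```

Convergence here is the binomial series $(1+1)^z$, which converges for $z > -1$; for functions like $x^{1/x}$ the exponential bound in Ramanujan's master theorem is what guarantees the series behaves.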
