Arguments for the beta method not being Kneser's method

JmsNxn (Long Time Fellow, Posts: 568, Threads: 95, Joined: Dec 2010)
09/23/2021, 08:42 AM

Additionally, here are some graphs of beta(z,1). These are called with the code,

Code:
func(z) = beta(z,1)
%33 = (z)->beta(z,1)
MakeGraph(800,800,-3,3,3,-3,BETA_2PI_I_TEST)

Which gives the picture:

[attachment] This is $\beta_1(z)$ over $|\Re(z)|,|\Im(z)| \le 3$.

And if you run:

Code:
MakeGraph(500,500,-1,3,5,-3, BETA_2PI_I_FURTHEROUT)

you get this picture:

[attachment] This is $\beta_1(z)$ over $-1 \le \Re(z) \le 5$ and $|\Im(z)| \le 3$.

We can also sum Taylor series. If you run

Code:
Y = Abel(s,1)
%24 = 1.000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
 + 1.091746621076809013884974261863482866515963611560536324138994111751239098282697066968792471034239886*s
 + 0.2714060762070193171437424955943827931638503717136848165830532882692408669826080900482469467615488910*s^2
 + 0.2125525861313469888021478425802562090387068291121784641878304794743661021504702004547199878572094071*s^3
 + 0.06977521648514998263674987921378838226309211345190041781551961351764919973313522659458688303371573846*s^4
 + 0.04404163417774007948974320281475423239471012247835111677418601284865041866096275647089240474660889988*s^5
 + 0.01480629466783694024680606804770874466095025847943622314359234215211918566929634587982547822697524125*s^6
 + 0.008561268049536550787110621350376164411249768075478143351453259439601273283709683201952517632614868019*s^7
 + 0.002814905681249618769344334929755303131301111679210593232530174245343075739294184023761363767325649788*s^8
 + 0.001600984980242967980058067512988028998626599587465129469228613980725879543699254151351823397240636449*s^9
 + 0.0005240150126752845344816864863220032432383929844600032134916998405913413929359719890727966534676820259*s^10
 + 0.0003017151070992455198885590074112687198266940835895138851311257194250982324280807827139264047511680096*s^11
 + 9.024605643550261986785556669340450988716179032759129805040792164935223712423512110510982647458269449 E-5*s^12
 + 4.859419988807502511049138781325890892877814919176570632479411035916500915789129780855922960206661452 E-5*s^13
 + 8.084794728794402624489481277553499323658745901241299037539670393969783591119684627779672866000859297 E-6*s^14
 + 2.704836036866106315173902706242762078649072551241974645493836652948459913867590456573805827341354948 E-6*s^15
 - 2.450720613547027454196739715567577929122842285508726144924924557910470116337318420497467650134284425 E-6*s^16
 - 6.892515381942296218498173521547907785286934797376940481495402983799448402762944660107195839612212864 E-7*s[+++]
\ps 35
   seriesprecision = 35 significant terms
func(z) = sum(j=0,34, polcoef(Y,j,s)*z^j)
%25 = (z)->sum(j=0,34,polcoef(Y,j,s)*z^j)
func(0)
%26 = 1.000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
MakeGraph(300,300,-1,1,1,-1, TAYLOR_WITH_2PI_I)

You get the sum of the Taylor series of Abel(z,1) about zero, which converges as it should:

[attachment]

Note, these graphs take about 6 hours to compile on average...

Regards, James

Ember Edison (Fellow, Posts: 66, Threads: 7, Joined: May 2019)
09/24/2021, 04:42 PM

(07/08/2021, 05:11 AM)JmsNxn Wrote: I believe William Paulsen and Samuel Cogwill's uniqueness condition is quite beautiful (which is largely just based around Henryk and Dmitrii's work). Personally though, I'm very opposed to using fixed points; it just feels unnatural to me--sort of arbitrary, like "why that fixed point and not this one?". "A kind of" fixed point at infinity seems a bit more natural to me. Also, tetration diverging to infinity as we increase the imaginary argument also seems more anomalous--I think it represents well just how whacky tetration is. If anything, I've just thrown a wrench in the gears; but I think it's a good thing. Have you seen what proposed pentations/hexations/septations look like with Kneser?--they look less than desirable.
I think at this point, in the quest for "the right tetration", whichever one runs faster and simpler and solves the storage of large numbers in a better way will probably win out. It's definitely Kneser's at the moment. I still feel Kneser is the superior tetration, simply because it's much better behaved, and Taylor series are much easier to grab. I'm still having trouble making a non-glitching program. God damn overflow errors. Need a perfect Turing machine with geometric convergence speeds. Regards, James

Oh, I think a "goodness" tetration must uniquely determine tetration, and also pent/hex/sept/..., all of the Ackermann-like functions (up to the $f_{\omega^\omega}(n)$ growth rate). It's hard to argue that a solution other than Kneser could have done better. However, in cases where it is difficult to calculate L/L*, like base 0, base 1, or the Shell-Thron region, it does make sense to use a solution other than Kneser. (But I don't see you starting any work on it.)

Ember Edison (Fellow, Posts: 66, Threads: 7, Joined: May 2019)
09/24/2021, 04:52 PM

(09/21/2021, 07:22 PM)sheldonison Wrote: (07/23/2021, 04:05 PM)JmsNxn Wrote: What I was pointing out is that Samuel Cogwill and William Paulsen proved a uniqueness condition.... This implies that ... is Kneser's solution; unless it has singularities in the upper half plane. http://myweb.astate.edu/wpaulsen/tetration2.pdf

James, thanks for pointing out that Cowgill/Paulsen have proven this uniqueness criteria! Very nice. I talked with James on a zoom call, and I hope to understand James's beta method well enough to generate a Taylor series for the $\lambda=1$ case, first for James's $2\pi i$ periodic $\beta$ function, and then for his tetration solution generated from $\beta(\lambda=1)$. This function is very interesting to me all by itself!
Also, if I understand Cowgill's proof, then if James's solution can be extended to imaginary periods arbitrarily larger than $2\pi i$, with other values of $0<\lambda<1$, then in the limit it must be Kneser's solution; otherwise James's solution must have singularities in the upper half of the complex plane.

Can this method solve the bases that the theta-mapping cannot solve?

JmsNxn (Long Time Fellow, Posts: 568, Threads: 95, Joined: Dec 2010)
09/25/2021, 03:00 AM (This post was last modified: 09/25/2021, 03:01 AM by JmsNxn.)

(09/24/2021, 04:52 PM)Ember Edison Wrote: Can this method solve the bases that the theta-mapping cannot solve?

See my thread here; I run a quick toy model for a $2 \pi i$ periodic tetration base $b = 1/2$. As far as I can tell this should work on the real positive line; and should work in the complex plane, but I'm not sure. I think the real trouble would be $b < 0$.
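A quick way to experiment along these lines: assuming the base-$b$ analogue of the $\beta$ recursion simply replaces $e^z$ with $b^z$ (that substitution is my assumption here, not necessarily the exact construction in James's thread), a throwaway double-precision sketch looks like:

```python
import cmath

def beta_b(s, b, depth=100):
    # Toy base-b asymptotic function (assumed form):
    #   beta(s+1) = b^beta(s) / (1 + exp(-s)),  with beta -> 0 as Re(s) -> -infinity.
    # Seed 0 far to the left and iterate the recursion forward up to s.
    z = 0.0
    for k in range(depth, 0, -1):
        z = cmath.exp(z * cmath.log(b)) / (1 + cmath.exp(-(s - k)))
    return z
```

For b = 1/2 the orbit stays bounded on the real line and the recursion is satisfied to machine precision; bases like $b = e^{-e}$ or b near 0 and 1, the ones Ember lists below, are exactly where floats would have to give way to high-precision arithmetic.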
Ember Edison (Fellow, Posts: 66, Threads: 7, Joined: May 2019)
09/28/2021, 04:41 PM (This post was last modified: 09/28/2021, 04:45 PM by Ember Edison.)

(09/25/2021, 03:00 AM)JmsNxn Wrote: See my thread here; I run a quick toy model for a $2 \pi i$ periodic tetration base $b = 1/2$. As far as I can tell this should work on the real positive line; and should work in the complex plane, but I'm not sure. I think the real trouble would be $b < 0$.

I will not be as optimistic as you are. The real big trouble should be b = $e^{-e}$ ≈ 0.065988035845312537076790187596846424938577048252796.

If you really want to give yourself some meaningful trials, try deriving a numerical approximation of the tetration function for the following bases: b=e^-e, b=10^-10 (and b=-10^-10), b=1+10^-10, b=1-10^-10 (and b=1+10^-10 * I). Oh, maybe the numerical approximation accuracy you already have is not up to 10^-10, so you can start from 10^-5. Maybe a simpler sequence would be b=0.1, 0.07, 0.066, 0.06599.

JmsNxn (Long Time Fellow, Posts: 568, Threads: 95, Joined: Dec 2010)
09/29/2021, 12:33 AM (This post was last modified: 09/29/2021, 03:27 AM by JmsNxn.)

(09/28/2021, 04:41 PM)Ember Edison Wrote: I will not be as optimistic as you are. The real big trouble should be b = $e^{-e}$ ≈ 0.065988035845312537076790187596846424938577048252796. If you really want to give yourself some meaningful trials, try deriving a numerical approximation of the tetration function for the following bases: b=e^-e, b=10^-10 (and b=-10^-10), b=1+10^-10, b=1-10^-10 (and b=1+10^-10 * I). Oh, maybe the numerical approximation accuracy you already have is not up to 10^-10, so you can start from 10^-5.
Maybe a simpler sequence would be b=0.1, 0.07, 0.066, 0.06599.

I'm not super optimistic yet; but I see no reason for it to fail at the moment. But you are absolutely right, I won't get ahead of myself. I still want to make sure everything is kosher with $b=e$. I'll give $b = e^{-e}$ a shot though. This should be easy to patch into the code. I'll post something later tonight on what it's shaping up to be.

I do think the code that I have is too patchwork at the moment to work for $b = 0.001$ or something like that. But mathematically, I can't see a difference between this b and b=1/2. My code will surely crap out for this base, but that's more a problem with my code than the math. I'm not the greatest programmer.

EDIT: I updated the other thread and handled the case where $b = e^{-e}$--no obvious errors, as expected. It runs slower than b = 1/2, but seems fine so far. I'm making a complex plane graph at the moment, and I'll see how it looks. I'm still working with the toy model case, which is $2\pi i$ periodic solutions. I graphed some Taylor series for $b= e^{-e}$ and there are no errors. The infinite composition method works fine here.

Again, I'll say that for $b > 0$ we don't fall into the same traps we fall into when talking about Schroder functions. We're solving a Schroder equation in the neighborhood of infinity, not a fixed point. So the neutral/attracting/repelling paradigm doesn't matter for us. We don't care about fixed points. All we care about is that $b^z$ and $\log_b(z)$ are well enough behaved. We can always find an asymptotic solution, and we're just trying to solve for an error between the asymptotic and the actual tetration.

Again, Ember, I don't see anything glaringly wrong. This avoids all the problems that the theta mapping method has. Anyway, that's enough for tonight. I have a zoom with Sheldon tomorrow, to talk about the $b = e$, $2 \pi i$-period case.
I'm focusing on $b = e$ for now; if I can get this to work perfectly, I'll move on to $b > 0$; then, if I dare, $b \in \mathbb{C}$. Keep posting challenges though; let's try to break this method together. Find everything that could break it.

Ember Edison (Fellow, Posts: 66, Threads: 7, Joined: May 2019)
09/29/2021, 11:55 AM

(09/29/2021, 12:33 AM)JmsNxn Wrote: EDIT: I updated the other thread and handled the case where $b = e^{-e}$--no obvious errors, as expected. It runs slower than b = 1/2, but seems fine so far. [...] Keep posting challenges though; let's try to break this method together. Find everything that could break it.
10^-3 is not enough to prove the reliability of your method at the singularities 0/1; I continue to request <10^-5. But passing the base=0.066 test is indeed enough to suggest that your method may have avoided the problem for the Shell-Thron region. If things go in the best direction, this could indeed be the strongest contender against Kneser's method, because you may incidentally solve the numerical approximation problem of the super-root function, which is very difficult to solve by Kneser's method for the same reason.

sheldonison (Long Time Fellow, Posts: 683, Threads: 24, Joined: Oct 2008)
10/01/2021, 01:23 AM (This post was last modified: 10/01/2021, 03:32 AM by sheldonison.)

For James: I started meeting with James on a zoom call, more or less weekly, trying to understand his latest tetration solution. Here is a challenge problem for James: calculate the value of Abel_N(1+I,1). I figure a single point as a challenge problem should help clarify the problems in general, and a single point allows one to focus.

After yesterday's zoom call, I started working with James's Abel_T.gp program, and I wanted to share my observations.

The beta(z,1) function appears very well behaved in the complex plane; it matches to precision a Taylor series generated by sampling a unit circle, and also matches the function's iterative definition. The results are accurate to 112 decimal digits with 240 sample points.

The Abel_N(z,1) function does not match its Taylor series generated by sampling around a unit circle with 32 sample points, except at those 32 sample points. Here, I sampled 32 points around a unit circle for Abel_N(z,1) centered at z=1. Granted, 32 sample points isn't particularly large, but it's quicker that way, and easier to see the problems. This Taylor series will match Abel_N(z,1) at the 32 sample points around the unit circle exactly, but the difference is ... clearly not an analytic function, and adding more sample points is not going to help.
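The sampling procedure Sheldon describes, recovering Taylor coefficients from equally spaced samples on a circle, is the discretized Cauchy integral formula. A generic sketch (this is not the Abel_T.gp code, just the textbook recipe):

```python
import cmath, math

def taylor_from_circle(f, center=0.0, radius=1.0, nterms=16, nsamples=64):
    # a_k ~ (1 / (N r^k)) * sum_j f(center + r w^j) * w^(-j k),  w = exp(2 pi i / N).
    # If f is analytic on the closed disk this converges to the Taylor
    # coefficients; if it is not, the resulting polynomial still interpolates
    # f exactly at the N sample points -- which is Sheldon's observation
    # about Abel_N(z,1).
    samples = [f(center + radius * cmath.exp(2j * math.pi * j / nsamples))
               for j in range(nsamples)]
    coeffs = []
    for k in range(nterms):
        acc = sum(samples[j] * cmath.exp(-2j * math.pi * j * k / nsamples)
                  for j in range(nsamples))
        coeffs.append(acc / (nsamples * radius ** k))
    return coeffs
```

Sampling cmath.exp on the unit circle recovers 1/k! to near machine precision; a function that is not analytic on the disk only agrees with its "series" at the sample points, exactly the failure mode described above.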
I also posted the 32-term Taylor series, which matches the Abel_N(z,1) function at the 32 equally spaced sample points. The graph is from 1+exp(0*I) ... 1+exp(Pi*I), or half of a unit circle, graphing Abel_N(z,1)-TaylorSeries.

So, perhaps James can make some headway in figuring out the Abel_N(1+I,1) function, which is approximated by 0.334 + 0.832i; the Taylor series gives 0.344+0.811i.

update: There is a discontinuity between Abel_N(1+exp(2.3826*I)) and Abel_N(1+exp(2.3827*I)). Those two points could also be studied to see what causes the discontinuity, and to study the iterated function's convergence.

This post should have been added to this thread but I don't know how to move this post: https://math.eretrandre.org/tetrationfor...23#pid9723

Code:
{Abel_N=        0.26962349367025
+x^ 1*  0.98289395290899
+x^ 2* -0.12839966506604
+x^ 3*  0.21830648113188
+x^ 4* -0.089448736367909
+x^ 5*  0.074647906187944
+x^ 6* -0.036017423946514
+x^ 7*  0.018234953617508
+x^ 8* -0.0058405324292364
+x^ 9*  0.0048183497091155
+x^10* -0.010002486474801
+x^11*  0.016389477074026
+x^12* -0.019181228205727
+x^13*  0.016131660699365
+x^14* -0.0075339894316277
+x^15* -0.0030394488907054
+x^16*  0.010082804929620
+x^17* -0.011107672494816
+x^18*  0.0074702878952322
+x^19*  0.00050519581149936
+x^20* -0.0091471972435540
+x^21*  0.011952484076219
+x^22* -0.0096472370827766
+x^23*  0.0055591676270766
+x^24*  0.0016849792938102
+x^25* -0.0099505377559867
+x^26*  0.011385235421743
+x^27* -0.0070217176746849
+x^28*  0.0027394592867243
+x^29*  0.0030327763362493
+x^30* -0.010678678609549
+x^31*  0.012604188471810
}

- Sheldon

JmsNxn (Long Time Fellow, Posts: 568, Threads: 95, Joined: Dec 2010)
10/01/2021, 03:50 AM (This post was last modified: 10/01/2021, 05:04 AM by JmsNxn.)

Hey, Sheldon.

This is very much the problem I've been facing when trying to get Taylor series. And for that, I'd like to say my Taylor series, as I'm currently grabbing them, are rather defunct.
And I can explain why pretty plainly. You're pointing out a very good problem, which I don't know how to avoid perfectly just yet. But ultimately, I feel it's a problem with how I've coded this, as opposed to the math. For that, I'm going to run through how this works.

Each point $s\in \mathbb{C}$ is matched to a different level of iteration $1 \le n \le 10$. If I write,

$ \tau^0 = 0\\ \tau^{n+1}(s) = -\log(1+e^{-s}) + \log\left(1+\frac{\tau^{n}(s+1)}{\beta_1(s+1)}\right)\\$

then each point $s \in \mathbb{C}$, when I run Abel_N(s,1), does $n \le 10$ iterations for some $n = n(s)$. This was done using a limiter in my program which quits as soon as $\Re(\beta(s,1)) > 3$ or $n > 10$. So what you are seeing here is a discrepancy, where,

$ \beta(s,1) + \tau^9(s) \neq \beta(s,1)+ \tau^{10}(s)\\$

but they agree fairly well pointwise. The trouble is: their Taylor series are vastly different, especially as you go further out in the terms. After our talk, and further testing today, I realize my protocol for grabbing Taylor series is defunct, and not done perfectly. Each point $s$ has its own depth of iteration--and if (on the real line, for instance) n = 4, but at s = 1+i we get n = 10, the two functions, even though they may somewhat agree pointwise, have vastly different Taylor series.

So I've been thinking about:

$ \beta(s,1) + \tau^n(s)\\$

For the Taylor series to work naturally, every s must have the same n. Now, I avoided this mostly because it produces many errors on the real line. If I set n=10, then in no way will the Abel function work on the real line, because we need to sample values of about $\beta(10,1)$, which are astronomical.

I've been thinking about a workaround for how to code this. But, as your post shows, at least the Taylor series code is entirely wrong. We require another analytic expression for this.
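To see the depth bookkeeping concretely, here is a toy double-precision Python model of the recursion above (the names and the crude backward-seeded beta are my stand-ins, not the Abel_T.gp functions). From the two functional equations one gets the exact relation $\exp(\beta(s)+\tau^{n+1}(s)) = \beta(s+1)+\tau^{n}(s+1)$: exponentiating shifts the depth down by one, which is exactly why mixing depths across points pastes together different analytic functions.

```python
import cmath

def beta(s, depth=100):
    # lambda = 1 toy: beta(s+1) = exp(beta(s)) / (1 + exp(-s)),
    # with beta -> 0 as Re(s) -> -infinity; seed 0 far left, iterate forward
    z = 0.0
    for k in range(depth, 0, -1):
        z = cmath.exp(z) / (1 + cmath.exp(-(s - k)))
    return z

def tau(s, n):
    # tau^0 = 0;  tau^{n+1}(s) = log(1 + tau^n(s+1)/beta(s+1)) - log(1 + exp(-s))
    if n == 0:
        return 0j
    return cmath.log(1 + tau(s + 1, n - 1) / beta(s + 1)) - cmath.log(1 + cmath.exp(-s))

def abel_fixed(s, n):
    # fixed-depth approximation: every point s uses the SAME depth n
    return beta(s) + tau(s, n)
```

Exponentiating a depth-(n+1) value reproduces the depth-n value one step to the right, so a Taylor series is only meaningful when one depth is used everywhere.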
We need a way such that each s has the same depth of iteration, rather than a patchwork of doing different amounts of iterations at different points. Albeit this may work pointwise, it's a disaster analytically. I think the recursive method is ineffective for Taylor series.

From this enters the fixed iteration method. This was the initial way I had coded this problem, but it proved to create inaccuracies. However, it's analytic in nature. So, taking two steps back, we instead write,

$ \text{Abel}_N(s,1,n) = \beta(s) + \tau^n(s)\\$

where n is fixed for all s, and we completely remove the limiter $\Re \beta(s) \le 3$. This will not work as well pointwise, but it will work analytically, because this is an analytic expression. I abandoned this method because it caps at about 15 digits precision before overflowing, and can decrease precision in the complex plane. But it is analytic. Again, the main evil is the overflow errors we get from nesting recursion.

To recode this, we start with changing our rho function into,

Code:
rho(z,y,count)={
    if(count>0,
        count--;
        log(1 + (rho(z+1,y,count)-log(1+exp(-y*(z+1))))/beta(z+1,y)),
        0
    );
}

And our Abel_N function into:

Code:
Abel_N(z,y,count) = {
    beta(z,y) + rho(z,y,count) - log(1+exp(-y*z));
}

And now, every time you call Abel_N you have to specify the amount of recursion. count=4 gets good accuracy on the real line, but less accuracy in the complex plane. However, the benefit is that this is an analytic function. Remember, the answer is count = \infty. It's not a bunch of analytic functions pasted together which agree fairly well pointwise.

So for example, if you work with Abel_N(s,1,5) you get about 15 digits on the real line (avoiding overflows, so sticking with z < 1). And now we move on to your question.
Which I write using this table: Code:Abel_N(1+I,1,3) %72 = 0.3326962392308007020014865854237297888996872301263950452212819011740822116949238956645687894808235713 + 0.8253196416421547770143811245258918770537667057235651771844372452625085974948591308978646820358718994*I exp(Abel_N(I,1,3)) %73 = 0.3331324362934382533107762285571036421697911477662160925662937067298050267131043999222875942632743548 + 0.8265817438212454693846804181500372134007618310660305633420340147921518796924964209508964583105448308*I Abel_N(1+I,1,4) %74 = 0.3338405164485978560617226914964707072664591940724549428755239283104216900912384022667310746444439571 + 0.8297707536819189534491342070803442963391124193412046139999079825530844387021891777815842418547851239*I exp(Abel_N(I,1,4)) %75 = 0.3326962392308007020014865854237297888996872301263950452212819011740822116949238956645687894808235713 + 0.8253196416421547770143811245258918770537667057235651771844372452625085974948591308978646820358718994*I Abel_N(1+I,1,5) %76 = 0.3342824352516798494105860729409608796878908824771145078019334622592328575006054064884925397153343555 + 0.8315813619339502416453819519085905691938993051369868315671650853929466887077756760576359583380320225*I exp(Abel_N(I,1,5)) %77 = 0.3338405164485978560617226914964707072664591940724549428755239283104216900912384022667310746444439571 + 0.8297707536819189534491342070803442963391124193412046139999079825530844387021891777815842418547851239*I Abel_N(1+I,1,6) %78 = 0.3343503812545163782483121839224093960869801264764702427619396667729679185297203272127713675669724870 + 0.8318522150534258467642344294075900135334283086787844285845867243763170498636223660748960810403490459*I exp(Abel_N(I,1,6)) %79 = 0.3342824352516798494105860729409608796878908824771145078019334622592328575006054064884925397153343555 + 0.8315813619339502416453819519085905691938993051369868315671650853929466887077756760576359583380320225*I Abel_N(1+I,1,7) %80 = 
0.3343528247466435854370710702152854431493564677151529776954052706900708135747687578304817050057444114 + 0.8318607651729233849245226812501299571093924010418649837027066585722523959004595892396238594261250086*I
exp(Abel_N(I,1,7))
%81 = 0.3343503812545163782483121839224093960869801264764702427619396667729679185297203272127713675669724870 + 0.8318522150534258467642344294075900135334283086787844285845867243763170498636223660748960810403490459*I
Abel_N(1+I,1,8)
%82 = 0.3343528247661539766247053296097915790613022346645454910710939522968467496625386880329966406027443183 + 0.8318607651979606419012858881111704414647307119906690903661656988862193774443656979962248951639318901*I
exp(Abel_N(I,1,8))
%83 = 0.3343528247466435854370710702152854431493564677151529776954052706900708135747687578304817050057444114 + 0.8318607651729233849245226812501299571093924010418649837027066585722523959004595892396238594261250086*I
Abel_N(1+I,1,9)
%84 = 0.3343528247661539766247053296097915790613022346645454910710939522968467496625386880329966406027443183 + 0.8318607651979606419012858881111704414647307119906690903661656988862193774443656979962248951639318901*I
exp(Abel_N(I,1,9))
%85 = 0.3343528247661539766247053296097915790613022346645454910710939522968467496625386880329966406027443183 + 0.8318607651979606419012858881111704414647307119906690903661656988862193774443656979962248951639318901*I

And after this there's an overflow in the process. But the TRUE value of Abel_N(1+I,1) is when we let count go to infinity. This obviously can't be done without overflows.

Now this is important because Abel_N(z,1,9) is NOT THE SAME ANALYTIC FUNCTION as Abel_N(z,1,3); but on the real line, we kind of cap at 3. So of course the Taylor series won't agree. I can't believe I missed this.

So, in essence, my code is pointwise only. And grabbing Taylor series won't work perfectly, because sometimes we call Abel_N(z,1,3) and sometimes we call Abel_N(z,1,9). Think of my code as a piecewise approximation to 100 digits...
I hadn't realized this, but your challenges make this clear.

My solution: pull back in the left half plane more. I can't believe I hadn't thought of this sooner. But let's restrict ourselves to nonzero imaginary values. So screw the real line for the moment, and let's work just with the values $0 < \Im(s) < \pi$. Then I'll add the code:

Code:
Abel_N(z,y,{count=1000}) = {
    if(real(Const(z)) <= -1000,
        beta(z,y) + rho(z,y,count) - log(1+exp(-y*z)),
        exp(Abel_N(z-1,y,count))
    );
}

Then you get 100 digit accuracy, displayed by this:

Code:
Abel_N(-1000+I,1)
%218 = 0.3181315052047641353126542515876645172035176138713998669223786062294138715576269792324863848986361638 + 1.337235701430689408901162143193710612539502138460512418876312781914350531361204988418881323438794016*I
exp(Abel_N(-1001+I,1))
%219 = 0.3181315052047641353126542515876645172035176138713998669223786062294138715576269792324863848986361638 + 1.337235701430689408901162143193710612539502138460512418876312781914350531361204988418881323438794016*I

This allows us to run 1000 iterations, but only way off in the left half plane. You can see it aggregating towards the fixed point L; which is to say, $\tau(-\infty) = L$ for $0 < \Im(s) < \pi$. And now we push forward. In fact, I think you'll be able to make a change of variables $w = e^s$, in which at $w = 0$ we have $\text{Abel}_N = L$; and now we're back to just pushing forward with exponentials, but we might be able to grab Taylor series better.

Now, SIGNIFICANTLY, we alter the normalization constant A LOT. So that now, Abel_N(1+I,1) is way off in the right half plane, compared to before. We get 1000 iterations, but we now have to find the normalization constant again. To my best guess this normalization constant should be about $\approx -180$. I'm not really sure.

Either way, this gives a tetration for $0 < \Im(s) < \pi$ and $\Re(s) < -1000$. And absolutely, we get a fixed amount of iterations, rather than pasting different iterations together.
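A toy double-precision rendition of this pull-back scheme (with a hypothetical cutoff of $-40$ and depth $40$ instead of $-1000$ and $1000$, since floats can't resolve further anyway; beta and tau are crude stand-ins for the Abel_T.gp functions, not the originals):

```python
import cmath

# the primary fixed point L of exp, satisfying exp(L) = L
L = 0.3181315052047641 + 1.3372357014306894j

def beta(s, depth=200):
    # lambda = 1 toy: beta(s+1) = exp(beta(s)) / (1 + exp(-s)); seed 0 far left
    z = 0.0
    for k in range(depth, 0, -1):
        z = cmath.exp(z) / (1 + cmath.exp(-(s - k)))
    return z

def tau(s, n):
    # tau^{n+1}(s) = log(1 + tau^n(s+1)/beta(s+1)) - log(1 + exp(-s))
    if n == 0:
        return 0j
    return cmath.log(1 + tau(s + 1, n - 1) / beta(s + 1)) - cmath.log(1 + cmath.exp(-s))

def abel(s, cutoff=-40, n=40):
    # pull back: evaluate beta + tau far off in the left half plane, where a
    # fixed large depth is stable, then push forward with plain exponentials
    if s.real <= cutoff:
        return beta(s) + tau(s, n)
    return cmath.exp(abel(s - 1, cutoff, n))
```

In the strip $0 < \Im(s) < \pi$ the values hug the fixed point L far to the left, which is the behaviour the %218/%219 outputs above display to 100 digits (here only to a handful).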
This should be much better for Taylor series.

All in all, I feel your challenge is perfectly correct, and you are absolutely right to doubt these Taylor series. But in my defense, it's how I coded it; the math is solid, even if this is the hill I die on. I do not know how to code it efficiently yet without fouling everything up, so I need your help coding this better; thank you for any challenge you make. As I said to Ember, let's break this method together.

Here are some values at $\Re(s) \approx -180$:

Code:
Abel_N(-180+0.5*I,1)
%23 = -4.594979390707927534666609124398354584088205114874356021315346273924960915837111059622974914088977405 - 1.245536222570619397930679863903537441983317514571627551506395779887607679115072488921172120017242391*I
Abel_N(-181+I,1)
%24 = -4.646093398162349222749913062587164021884607537129389945895507538405018235559517002184091847195395429 - 2.111079009223824705007214393403535783852031910962549740360763109382552393207990708425053211387566108*I
Abel_N(-182+0.25*I,1)
%25 = 1.319781624143128386109148423892091221888459095835226587197839366868690149315196477658600798681783902 + 1.104119945601300209454094790959724015807443342953939457605317862482700111558216705808742393140752610*I

And BEWARE: THIS IS INSANELY SLOWER THAN IT WAS BEFORE! It takes a minute to calculate a single value. I've avoided all optimization in favour of absolute calculations.

I hope this helps you assess the situation. I'm so happy to have you back, Sheldon. This is the best way I can answer your question. I hope it makes sense.

Regards, James

Actually, Sheldon, I believe this is the proof that the beta method (including Tommy's version) IS Kneser. We need to use the normality at $\Re(s) \to - \infty$ in our coding; and consequently the behaviour as $\Im(s) \to \infty$ must also be normal, just by inspection. And if it's normal there, Paulsen & Cowgill take care of the rest.

Additionally, Sheldon:
We pass your previous Taylor series test. The Taylor series converges ridiculously fast. As evidenced by this code: Code:Y = Abel_N(X+I-1000,1) %23 = (0.3181315052047641353126542515876645172035176138713998669223786062294138715576269792324863848986361638 + 1.337235701430689408901162143193710612539502138460512418876312781914350531361204988418881323438794016*I) + (-5.075883674631298447 E-116 - 2.537941837315649223 E-116*I)*X + (0.E-115 + 0.E-115*I)*X^2 + (8.459806124385497412 E-117 + 0.E-116*I)*X^3 + (3.172427296644561529 E-116 + 0.E-116*I)*X^4 + (2.5379418373156492235 E-117 - 6.027611863624666906 E-117*I)*X^5 + (4.229903062192748706 E-117 - 4.175640896903583953 E-117*I)*X^6 + (2.9458253468842357056 E-117 - 2.3563062119608880554 E-117*I)*X^7 + (1.3879369422819956690 E-117 - 9.125598249612160771 E-118*I)*X^8 + (5.067071376585063553 E-118 + 1.0158238880640300643 E-117*I)*X^9 + (3.965534120805701911 E-119 + 1.0439800113328959935 E-117*I)*X^10 + (-1.2392294127517818472 E-119 + 4.510898478223302923 E-118*I)*X^11 + (5.163455886465757697 E-120 + 9.517826428875257808 E-119*I)*X^12 + (7.238767963910648772 E-120 + 3.0485315845312846290 E-120*I)*X^13 + (4.259851106334250100 E-120 - 4.956852446317326376 E-120*I)*X^14 + (1.5119244267557546757 E-120 - 2.0008533378606380267 E-120*I)*X^15 + (4.021974122502197249 E-121 - 4.710772690704263553 E-121*I)*X^16 + (8.495487390177039476 E-122 - 7.894916805677055555 E-122*I)*X^17 + (1.4686098895831561809 E-122 - 9.776168703672051202 E-123*I)*X^18 + (2.0967608494748364605 E-123 - 8.979059969411839749 E-124*I)*X^19 + (2.430858610490628516 E-124 - 7.397819886053798921 E-125*I)*X^20 + (2.0121573193076205434 E-125 - 1.2662079281119933470 E-125*I)*X^21 + (2.945634939708346955 E-127 - 3.762574621168038482 E-126*I)*X^22 + (-3.417597698460109009 E-127 - 9.698025968913432879 E-127*I)*X^23 + (-9.634592148971158615 E-128 - 1.991065259289842006 E-127*I)*X^24 + (-1.8499050904353787682 E-128 - 3.444000794290251030 E-128*I)*X^25 + (-2.981946873618937088 E-129 
- 5.195743215424565520 E-129*I)*X^26 + (-4.238317566690980895 E-130 - 7.055907248459257794 E-130*I)*X^27 + (-5.460417455040163927 E-131 - 8[+++]
func(z) = sum(j=0,99, polcoef(Y,j,X)*z^j)
%24 = (z)->sum(j=0,99,polcoef(Y,j,X)*z^j)
func(0)
%25 = 0.3181315052047641353126542515876645172035176138713998669223786062294138715576269792324863848986361638 + 1.337235701430689408901162143193710612539502138460512418876312781914350531361204988418881323438794016*I
exp(func(-0.5)) - func(0.5)
%26 = -2.4046998908565776391 E-113 + 4.974366001138672478 E-114*I

JmsNxn (Long Time Fellow, Posts: 568, Threads: 95, Joined: Dec 2010)
10/01/2021, 11:38 PM (This post was last modified: 10/02/2021, 09:02 AM by JmsNxn.)

So, I'm still fiddling around with this, but I have some good news! Let's start by adding the following code:

Code:
Abel_N(z,y,{count=210}) = {
    if(real(Const(z)) <= -200,
        beta(z,y) + tau(z,y,count),
        exp(Abel_N(z-1,y,count))
    );
}

And throw away the rho function, as it isn't helping too much and tau is a bit simpler:

Code:
tau(z,y,{count=50}) ={
    if(count>0,
        count--;
        log(1+tau(z+1,y,count)/beta(z+1,y)) - log(1+exp(-z*y)),
        0
    );
}

This allows us to cap at 210 iterations, which is giving pretty good accuracy.
And now, the moment of truth: Code:Y = Abel_N(-200+I+z,1) %207 = (0.3181315052047641353126542516868002490601022446936799586401033709777181182840717528927210774601097169 + 1.337235701430689408901162143170855957855323110313139359786647162450775646763995803052118147567939595*I) + (5.54876673259671867090099471080669318106785322420394358343834194635267845596250827285933089578346 E-29 + 1.254276694929928406925790477592362989605855648696982726300442861383344851757185951068450692905412 E-28*I)*z + (-8.77610451137567890702516702107760213598080342427416618704660290578196459015232036838913486840284 E-29 + 2.65558294516824465594518189586093084041160292357429621843094754106936193317165658238042140999508 E-29*I)*z^2 + (6.70010662252486126722077043078401776251685624470141332630367309667524637043178003445335908533503 E-29 - 1.356518896593052873127419788268571846157069979539793387506208085249648334361660345141854465986784 E-28*I)*z^3 + (5.07187405891633435609385513945516330046880742308149045141520996847322745740004204937389514442906 E-28 + 1.601770513355758776698919019246208480708963464262029371220476223493976751742109641796248229042656 E-28*I)*z^4 + (5.02924589814571227481028487349480143201114449094763895454594929074623772587805969610280555501615 E-29 + 1.750662368216475754101406646648667325538950871544438396575589350687680745148590989900079179187547 E-27*I)*z^5 + (-5.21776781038142151238401826948176347767693901807438076479622075678235355108946788166412237108280 E-27 + 1.678116605195591639054171699539912173875772490077822266848014384423492171271409904601623538024171 E-27*I)*z^6 + (-8.26434343840890350722346904148768684811179570548850275022784345864054233872204418001701646759989 E-27 - 1.476874451000434105441105228997148244210708680526571004185624457222454938185134929871430315674361 E-26*I)*z^7 + (4.34754737664907610583429268212122186262617158825367860044457533991408458195502101243594732885252 E-26 - 
3.19623139425425634643404792819225382327104820166019680421930982531248760617941078262654785955599 E-26*I)*z^8 + (1.211093080645117834552[+++] func(x) = sum(j=0,34, polcoef(Y,j,z)*x^j) %208 = (x)->sum(j=0,34,polcoef(Y,j,z)*x^j) exp(func(-0.5)) - func(0.5) %209 = 5.81195007554839531308913981535518319235468370597307177012228385948292519940500147967864485328014 E-23 + 3.358625715751082476775152384313478520427155759259773009359566195839925352925697325662647409142233 E-21*I The Taylor series converges perfectly well in the left half plane. I'm having trouble at the moment translating this into a good push forward, so that we can get Abel_N(1+I,1) accurately. But if we restrict ourselves to points far out in the left half plane, we can show the Taylor series converging--rather well, actually. I'm going to keep working on this, but I think this is an "all is not lost" moment... I think the key to accurate Taylor series is a large number of iterations; as I see it, this is inherent to Pari. We can get accurate Taylor series, but to allow many iterations we need to evaluate far out in the left half plane. Quite honestly, this results in a standoff: if we can't do many iterations on the real line, we can't get accurate Taylor series there. I think the code we need calculates $w = e^s$; for $\Im(w) > 0$ we have $\lim_{w\to 0} \text{Abel}_N(w,1) = L$--and it should focus on the iteration: $ \text{Abel}_N(\exp(1)*w,1) = \exp(\text{Abel}_N(w,1))\\$ And here is where I think the theta-mapping arguments will come in... This means we have some function: $ \theta(z) = \sum_{k=-\infty}^\infty a_k e^{2 \pi i k z}\\$ in which $ \lim_{\Im(z) \to \pi} \theta(z) = \infty\\$ In coding this, I think Sheldon's perspective is the correct one. And since this decays to $L$ at $\Re(s) = -\infty$--we can take advantage of Kneser.
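For what it's worth, the identity behind the theta mapping is not specific to the beta method; on any domain where $\text{tet}_K$ is invertible, two solutions of the tetration equation differ by a 1-periodic reparametrization:

```latex
% Suppose F(z+1) = e^{F(z)} and \mathrm{tet}_K(z+1) = e^{\mathrm{tet}_K(z)}.
% Wherever \mathrm{tet}_K^{-1} is defined, set
\theta(z) = \mathrm{tet}_K^{-1}\big(F(z)\big) - z.
% Then, using the functional equation of \mathrm{tet}_K once,
\theta(z+1) = \mathrm{tet}_K^{-1}\big(e^{F(z)}\big) - (z+1)
            = \big(\mathrm{tet}_K^{-1}(F(z)) + 1\big) - (z+1) = \theta(z),
% so \theta is 1-periodic, F(z) = \mathrm{tet}_K(z + \theta(z)),
% and \theta admits exactly the Fourier expansion above.
```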
So we want to find a $\theta$ such that: $ F_1(z) =\beta(z+x_0,1) + \tau(z+x_0,1)\\ = \text{tet}_K(z+\theta(z))\\$ I've also realized, if Sheldon hasn't yet, that these Taylor series converge so horribly because there are singularities (not just singularities: essential singularities) at $\Im(s) = \pi$... At this point I have to admit that the Taylor series I am calculating are the correct Taylor series. Nowhere in the book of Taylor series does it say that exp(func(-0.5)) - func(0.5) has to converge better than it already does; it's a good heuristic. But again, I'm of the opinion these Taylor series converge very slowly--we are just a hair's width away from an essential singularity. As you said yourself, for certain values of exp the Taylor series isn't very accurate; this is a series of such points. But I've managed to make my code a good amount better with 64-bit; let's leave it at that. As far as I can tell, these are wild Taylor series with no nice decay conditions. But pointwise I'm getting perfect values, and additionally, locally convergent Taylor series--which tells us local holomorphy. I'm preparing an update to my program. It is necessary that the user run 64-bit Pari. I've coded this much more elegantly. We get 100 digits pointwise--and if you want series precision you have to move into the left half plane. That is to say, the Taylor series only really converge in the left half plane. As we push forward with exponentials, all bets are off on the Taylor series; but they are correctly drawn from the iteration. They don't converge well because there are essential singularities nearby: within the radius of convergence lie volatile values. I'm just finalizing the code at the moment, but I feel this is the best I can do.
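On the location of those singularities: each step of the $\beta$ iteration divides by the factor $1+e^{-ys}$, which vanishes at $s=(2k+1)\pi i/y$, and the iteration consumes this factor at every integer shift, so (for $y=1$) the points $j+(2k+1)\pi i$, $j=1,2,\dots$, all become singular for $\beta$. The small check below is my own illustration of the mechanism, not the author's code, but it at least makes the wall along $\Im(s)=\pi$ plausible:

```python
import cmath

def divisor(s, y=1.0):
    # the factor 1 + exp(-y*s) that each step of the beta iteration divides by
    return 1 + cmath.exp(-y * s)

# the divisor vanishes exactly at s = (2k+1)*pi*i (for y = 1), and the
# backward orbit hits shifted copies s = j + (2k+1)*pi*i for j = 1, 2, ...
at_zero  = abs(divisor(cmath.pi * 1j))      # ~ 0: a zero of the divisor
off_zero = abs(divisor(1 + cmath.pi * 1j))  # 1 - 1/e: safely away from it
```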

