Handling large iterated exponentials and their pullbacks
#1
UPDATE:

Please see the last post for the most up to date code on the Abel function associated with the multiplier \( \lambda \). It works much better than the code in this post.




Hey, everyone!

I've hit a crossroads in my proposed method of constructing tetration, which can be found in this thread. As far as I can see, there are no errors in the construction, though some elements of the paper may leave things to be desired. There may be some opaque arguments which could use better phrasing, but I can't see what the rephrasing would be at the moment.

So, instead, I've decided to try to create more experimental evidence. But as to my capacity with a compiler, I'm nothing like I used to be. I haven't coded in a very long time, and even then I only coded a very small niche of problems. I've been trying to think of ways to compute this tetration function more effectively, but I can't quite get a grasp of MatLab's built-in functions, and I'm having trouble even thinking of manners of approach. The ideal is to construct a Taylor series, but I'm not sure how one could recover the Taylor coefficients. My approach so far has been entirely analytical: fiddling with transforms, trying to see if I can pull out a convenient manner of determining Taylor coefficients... but every method I've thought of encounters the same problem as the code I have now.

To begin, I point the reader to my Github repository for this tetration.

I'll include the README file I added there here,
Code:
This is the code I used in the paper "The Limits of a Family of Asymptotic Solutions to The Tetration Equation". The paper is available on arXiv at the link https://arxiv.org/abs/2104.01990 All code is written for MatLab, but works fine in other languages once translated. Nothing MatLab-specific is really used.

This repository consists of 5 functions:

beta_function(z,l,n)

Which is the asymptotic solution to tetration, written in a simple recursive manner. This is not the most efficient code, as it will overflow for large numbers. But, that's precisely its purpose. The variable z is the main argument. The value l is the multiplier. The value n is the depth of the iteration.

tau_K(z,l,n,k)

This function is the error term between beta_function and the actual tetration function associated to the value l (the multiplier). The value n is the inherited depth of iteration from beta_function; the value k is the new depth of iteration we use to construct tau_K. As an important disclaimer, keeping n and k closer together keeps the iteration stable. If n is 100 and k is 5, we'll see a lot more anomalies and overflows; whereas n = 10 and k = 6 will produce the same results (where it converges), but will produce more correct results where there were anomalies and overflows.

beta2(z,n)

This function is the pasted together version of beta_function. This is when we combine all our multipliers into an implicit function. The variable z is the main argument, and n is the depth of iteration.

tau2(z,n,k)

This function is the error term between beta2 and the final tetration function. The value n is the inherited depth of iteration from beta2; and k is the new depth of iteration.

TET(z,n,k)

This function is an optimized version of beta2(z,n) + tau2(z,n,k), so that it works more exactly like iterating the exponential.
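For anyone who'd rather poke at this outside MatLab, here's the backward recursion behind beta_function sketched in Python (just an illustration of the same loop; the repo code is the MatLab original):

```python
import cmath
import math

def beta_function(z, l, n):
    """Asymptotic solution to tetration, by backward recursion of depth n.
    In the limit it satisfies beta(z+1) = exp(beta(z)) / (1 + exp(-l*z))."""
    out = 0
    for i in range(n):
        out = cmath.exp(out) / (cmath.exp(l * (n - i - z)) + 1)
    return out
```

A handy sanity check: by construction, beta_function(z+1,l,n) equals exp(beta_function(z,l,n-1))/(1+exp(-l*z)) exactly, since the depth-n run at z+1 is just the depth-(n-1) run at z followed by one more step.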


Now, I can produce a Taylor series for the function \( \beta_\lambda(s) \), but it's really not needed. The function beta_function works very well and creates very accurate results. And as it's a backward iteration, we never really have any overflow errors unless we increase the argument. We can do 100 iterations and everything still works; it converges rapidly, so there's no problem there. The trouble is, when we increase the real part of the argument \( s \) in \( \beta_\lambda(s) \), this starts to look like the orbits \( \exp^{\lfloor \Re(s)\rfloor}(z) \), which grow very, very fast, especially near the real line \( z \approx \mathbb{R}^+ \).

So if you want to calculate, for instance, the value \( \beta_\lambda(10) \): you're, how do you say... shit out of luck. It works fine up to about \( \Re(s) = 5 \), but after that we'll just short out and everything falls apart. This wouldn't be much of a problem if we only needed \( \beta_\lambda \)'s behaviour for small values; but the trouble is, we need \( \beta_\lambda \)'s behaviour for very large values. Using a Taylor series won't help us here either. Because, well, \( \beta_\lambda(10) \) is ASTRONOMICALLY large; probably about \( \exp^{\circ 8}(1) \), give or take, depending on \( \lambda \). So, in this world of finite computing resources, there's really nothing we can do that's better.
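To put a number on "astronomically large": double precision dies after only four iterated exponentials. A quick Python check (nothing about \( \beta_\lambda \) specifically, just exp iterated on 1):

```python
import math

x = 1.0
tower = [x]            # tower[n] = exp^n(1)
for n in range(3):
    x = math.exp(x)
    tower.append(x)

# exp^3(1) = e^(e^e) is already about 3.8 million,
# and exp^4(1) no longer fits in a double at all:
try:
    math.exp(x)
    overflowed = False
except OverflowError:
    overflowed = True
```

So anything on the order of \( \exp^{\circ 8}(1) \) is hopeless to hold in a floating point number directly; you have to work with logarithms of the values instead.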

Why this matters is pretty simple. If you've been paying attention to how I've introduced this function, or if you read the paper, you'll know that the magic happens for large values. The function satisfies,

\(
\log(\beta_\lambda(s+1)) = \beta_\lambda(s) - \log(1+e^{-\lambda s})\\
\)

which, asymptotically, can be written,

\(
\log(\beta_\lambda(s+1)) = \beta_\lambda(s) - \mathcal{O}(e^{-\lambda s})\\
\)

And the idea is to add a sequence of convergents \( \tau_\lambda^n(s) \) such that,

\(
\log(\beta_\lambda(s+1) + \tau_\lambda^n(s+1)) = \beta_\lambda(s) + \tau_\lambda^{n+1}(s)\\
\)

Much of the theory, mathematically, proving that \( \tau_\lambda^n \) converges follows similarly to Écalle's construction of an Abel function about a neutral fixed point, as in Milnor's book Dynamics in One Complex Variable. In that construction, there's an intermediary step where we solve,

\(
F(f(s)) = F(s) + 1 + \mathcal{O}(1/s)\\
\)

There's something similar here, where Écalle then makes a specific limiting process to construct the actual Abel function. I'm doing something similar, but discovering the inverse Abel function; and there's a bit more massaging that goes into it. I make a similar mapping argument to his; however, it reduces to solving a Schröder equation (at least the way I do it).

So the function I call is tau_K(z,l,n,k), which follows the rule,


Code:
beta_function(z,l,n) + tau_K(z,l,n,k)  = log(beta_function(z+1,l,n) + tau_K(z+1,l,n,k-1))


The trouble is, when we set \( k \) to large values, somewhere in the process we eventually produce values like \( \beta_\lambda(10) \), which are astronomical in size. You can see it clearly in the recursion, where we have to shift the variable forward in each iteration. Again, the facile way of viewing this is that,

\(
\tau_\lambda^n(s) = \log^{\circ n} \beta_\lambda(s+n) - \beta_\lambda(s)\\
\)

And obviously, we'll begin to overflow by the time we hit \( n=10 \). The absolute cosmic irony of all this though, is that,

\(
\tau_\lambda^n(s) = -\log(1+e^{-\lambda s}) + o(e^{-\lambda s})\\
\)

We can make a pretty good guess for large \( s \) by just using \( -\log(1+e^{-\lambda s}) \). But we need something very large to produce something very small. So for that, we can rewrite our iteration (which I haven't uploaded onto GitHub yet, because I'm trying to work out the kinks). Instead, we'll be focusing on very small values.

So, in this we define a function,

Code:
function f = tau_K(z,l,n,k)
    % base case: use the asymptotic guess tau ~ -log(1+exp(-l*z))
    if k == 1
        f = -log(1+exp(-l*z));
        return
    end

    % recursive step: dividing by beta_function underflows,
    % instead of exponentiating it and overflowing
    f = log(1 + tau_K(z+1,l,n,k-1)./beta_function(z+1,l,n)) - log(1+exp(-l*z));
end
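If it helps to sanity-check this outside MatLab, here's a direct Python translation of beta_function and this rewritten tau_K (self-contained; one caveat of mine: Python's exp raises an overflow error rather than returning Inf, so keep z + k modest on the real line):

```python
import cmath
import math

def beta_function(z, l, n):
    # the asymptotic solution to tetration, by backward recursion of depth n
    out = 0
    for i in range(n):
        out = cmath.exp(out) / (cmath.exp(l * (n - i - z)) + 1)
    return out

def tau_K(z, l, n, k):
    # base case: the asymptotic guess tau ~ -log(1 + exp(-l*z))
    if k == 1:
        return -cmath.log(1 + cmath.exp(-l * z))
    # recursive step: dividing by beta underflows instead of overflowing
    return (cmath.log(1 + tau_K(z + 1, l, n, k - 1) / beta_function(z + 1, l, n))
            - cmath.log(1 + cmath.exp(-l * z)))
```

Then F(z) = beta_function(z,l,n) + tau_K(z,l,n,k) satisfies exp(F(z)) = beta_function(z+1,l,n) + tau_K(z+1,l,n,k-1) to essentially machine precision, which is the functional equation above.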



And this will work for large iteration depths \( k \); it's mathematically equivalent to our previous code, but at least we're underflowing rather than overflowing. This gives us the graphs for the function

Code:
beta_function(z,log(2),100) + tau_K(z,log(2),100,100)

where \( -1 \le \Re(z) \le 2 \) and \( |\Im(z)| \le 3 \). Which mathematically looks like,

\(
\log^{\circ 100} \beta_{\log(2)}(z+100)\\
\)


   

We can see that this algorithm works horribly near the real line. This is because our multiplier is \( \log(2) \), and so the beta function grows fastest on the real line. Away from the real line it looks good in this graph. This strip is about the size of the period of this tetration, which is \( 2\pi i / \log(2) \), so just expect this to repeat off to infinity in either direction. You can clearly see (we're 100 iterations deep) that this converges for imaginary arguments. The trouble is when we start to overflow and short-circuit.

   

You can see perfectly well what this tetration looks like on the real line from the contour MatLab draws. But as you move away, it all short-circuits and just plain overflows.

   

And this is how it overflows, in a nice Leau petal looking thing.



So, my idea, and what I especially need help with, is understanding how to reduce this to computing Taylor coefficients. Now, I'm perfectly capable of providing the Taylor coefficients \( a_k = \beta_\lambda^{(k)}(s_0) \) such that,

\(
\beta_\lambda(s) = \sum_{k=0}^\infty a_k\frac{(s-s_0)^k}{k!}\\
\)

If I could have an effective way of computing the Taylor coefficients of,

\(
1/\beta_\lambda(s) = \sum_{k=0}^\infty b_k\frac{(s-s_0)^k}{k!}\\
\)

Since \( \beta_\lambda(s) \) is non-zero, these coefficients are always discoverable. Again, though, I'm effectively naive as to how the hell you program that. After this, we're talking about creating the Taylor series,

\(
\sum_{k=0}^\infty c_k^n \frac{(s-s_0)^k}{k!} = -\log(1+e^{-\lambda s}) + \log(1+\sum_{k=0}^\infty \sum_{j=0}^k \binom{k}{j}c_{j}^{n-1}b_{k-j} \frac{(s+1-s_0)^k}{k!})\\
\)


The trouble is, I have not a clue how to program this. Not necessarily because it's difficult to program, but because I'm just that bad a programmer. How would one even begin to do this? Any clues, or pointers to how-to's or literature, would be appreciated; or maybe it's just obvious. I prefer using MatLab, largely because it's that much easier a GUI. And raw coding in pari-gp would be too much of a leap for me. I need baby steps.
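For what it's worth, the inversion step at least has a standard recursion: since \( \beta_\lambda \cdot (1/\beta_\lambda) = 1 \), the \( k! \)-normalized coefficients satisfy \( \sum_{j=0}^{k} \binom{k}{j} a_j b_{k-j} = 0 \) for \( k \ge 1 \), which can be solved for \( b_k \) one at a time. A Python sketch (the function name is mine, purely for illustration):

```python
from math import comb

def invert_series(a):
    """Given Taylor coefficients a[k] = f^{(k)}(s0) with a[0] != 0,
    return b[k] = g^{(k)}(s0) for g = 1/f, via the Leibniz rule:
    sum_{j=0}^{k} C(k,j)*a[j]*b[k-j] = 0 for k >= 1, and a[0]*b[0] = 1."""
    b = [1 / a[0]]
    for k in range(1, len(a)):
        acc = sum(comb(k, j) * a[j] * b[k - j] for j in range(1, k + 1))
        b.append(-acc / a[0])
    return b
```

Feeding in the coefficients of \( e^s \) at 0 (all \( a_k = 1 \)) returns \( b_k = (-1)^k \), the coefficients of \( e^{-s} \); feeding in those of \( 1/(1-s) \) (namely \( a_k = k! \)) returns \( 1, -1, 0, 0, \dots \), i.e. \( 1-s \).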

So I'm asking, mostly, if there are any resources you as a programmer can suggest for handling Taylor series; in a speak-to-me-like-I'm-in-kindergarten kind of manner. Also, if anyone has any suggestions, or different methods of approach, they are greatly appreciated. I am at a loss on how to program this. But the more I fiddle, the more it's patently obvious this function is holomorphic. And that this solves tetration. And furthermore, it's not Kouznetsov's or Kneser's; on the basic principle that the math is telling me the final tetration diverges as \( |\Im(z)| \to \infty \), whereas both Kouznetsov's and Kneser's tend to a fixed point as \( \Im(z) \to \infty \).



Now, it's important to remember that this isn't the tetration we want. The tetration we want is,

\(
\lim_{k\to\infty} \log^{\circ k} \beta_{1/\sqrt{1+s+k}}(s+k)\\
\)

Solving this coding problem doesn't solve the problem for the actual tetration we want. I need a much better grasp on how the hell you write efficient, let's say, Taylor-series-grabbing code before I approach coding the actual tetration.

Again, this is to code the intermediary tetrations which we effectively paste together to get the right tetration.

Regards, James.


It's also important to remember that these graphs solve the equation \( e^{F_\lambda(s)} = F_\lambda(s+1) \). If we make a similar graph using the code:


Code:
function f = TET_L(z,l,n,k)
    % base strip: -1 < Re(z) <= 0
    if (-1 < real(z)) && (real(z) <= 0)
        f = beta_function(z,l,n) + tau_K(z,l,n,k);
        return
    end

    % push the argument down into the base strip (assumes Re(z) > -1)
    f = exp(TET_L(z-1,l,n,k));
end



Which calculates the value for, say,

\(
\log^{\circ 100} \beta_\lambda(s+100)\,\,\text{for}\,\,-1< \Re(s) \le 0\\
\)

And simply applies \( \exp \) to increase the real argument; we still get pretty much the same graph.


   

I don't know how the hell Sheldon does it!

Lmao!

Regards, James
#2
So I've decided to recode my functions into Pari-GP. Sadly, this is still very primitive code, but it nonetheless seems to get the job done in restricted cases.

I've attached two files here, which are the Schroder function and the Abel function respectively. To explain the math, follow the link in the previous post to the arxiv page.

The first function is the Schroder function, which I write as,

\(
\varphi_\lambda(w)\\
\)

Which satisfies the equation,

\(
\varphi_\lambda(e^{-\lambda}w) = \exp(\varphi_\lambda(w))\\
\)

This function tends to infinity at \( w=0 \) and has singularities at the points \( w = -e^{-\lambda j} \) for \( j\ge 1 \). The variable \( \lambda \) is restricted so that \( \Re(\lambda) > 0 \). I've coded this function, rudimentarily, as \( \text{Sch_L(w,l,n,k)} \), where w is the same and \( \text{l} = \lambda \). The variable n is the first depth of iteration, and the variable k is the second depth of iteration. There's no real problem setting n very large, but about n = 15 or n = 20 is sufficient for accuracy. The variable k is more finicky. If you set it to values larger than 10, we're almost guaranteed to overflow. But setting it to about 10 produces about 12 digits of accuracy.

For example, here are some code snippets verifying the functional equation for about 10-12 digits.



Code:
exp(Sch_L(1+1*I,log(2),100,10))

%40 = 0.05510674309654904481102386681 - 1.050130227798316703527845033*I

Sch_L(0.5+0.5*I,log(2),100,10)

%41 = 0.05510674309655249618896068904 - 1.050130227798277994007491593*I

exp(Sch_L(1+I,log(2)+I,100,10))

%42 = -0.4244299033948726076538482291 + 0.1965660278912558591088438168*I

Sch_L(exp(-log(2)-I)*(1+I),log(2)+I,100,10)

%44 = -0.4244299033948266048629686497 + 0.1965660278914230800708608920*I

exp(Sch_L(2.1, 1+0.5*I, 100, 10))

%48 = -0.1983543337931354790254094135 + 0.5439856344355677623947164582*I

Sch_L(exp(-1-0.5*I)*2.1,1+0.5*I,100,10)

%49 = -0.1983543337931700823145109510 + 0.5439856344354902802955187393*I

The overflow errors, again, are inherent to the code and not the function itself. At some point Pari-GP doesn't like calculating 10 iterated logarithms, and there's nothing I can do :/. Trying to use the built-in Taylor series seems to malfunction for me; its estimates are wildly off.

The second file attached deals with the Abel function, which is closer to the function we actually want. A primitive way to code this is to just make the substitution \( w = e^{-\lambda s} \); however, I've attached a different code here, which does the recursive process using only the Abel equation. This produces the function,

\(
F_\lambda(s)
\)

which satisfies the functional equation,

\(
F_\lambda(s+1) = \exp(F_\lambda(s))\\
\)

Which is holomorphic almost everywhere on \( (s,\lambda) \in \mathbb{L} = \{(s,\lambda) \in \mathbb{C}^2\,|\, \Re \lambda > 0,\,\lambda(j-s) \neq (2k+1)\pi i,\,j,k \in \mathbb{Z},\,j\ge1\} \). There are branch cuts which arise, but they are isolated, as are the singularities. This function is called as \( \text{Abl_L(s,l,n,k)} \), where s and l are the same, and n and k denote the same depths of iteration. The variable n can be increased without trouble. The variable k is more finicky here: sometimes you can set it large, and sometimes setting it to 6 or 7 produces overflows. A good heuristic is that for complex values we can set k large, and we need to set k large to gain better accuracy; for purely real values, setting k small will produce good accuracy, and setting it larger will just overflow. Here are some code snippets confirming the functional equation to about 10-12 digits.


Code:
Abl_L(1,log(2),100,5)

%52 = 0.1520155156321416705967746811

exp(Abl_L(0,log(2),100,5))

%53 = 0.1520155156321485241351294757

Abl_L(1+I,0.3 + 0.3*I,100,14)

%59 = 0.3353395055605129001249035662 + 1.113155080425616717814647305*I

exp(Abl_L(0+I,0.3 + 0.3*I,100,14))

%61 = 0.3353395055605136611147422467 + 1.113155080425614418399986325*I

Abl_L(0.5+5*I, 0.2+3*I,100,60)

%68 = -0.2622549204469267170737985296 + 1.453935357725113433325798650*I

exp(Abl_L(-0.5+5*I, 0.2+3*I,100,60))

%69 = -0.2622549205108654273925182635 + 1.453935357685525635276573253*I


A lot of this code is very particular about the variable \( k \). The trouble is that, as we approach closer accuracy, the logarithm seems to destabilize and I get an overflow error. I am not sure at all why this is happening, other than that for larger values of k we are getting larger and larger values being put into a logarithm, and Pari just can't take it. :/  The math says, though, that larger values produce greater accuracy (which is why it works on the real number line for small values of k: the function grows so fast there that we get better accuracy faster); whereas for imaginary values it takes longer for large values to appear, so setting k = 60 is necessary. But once you start getting large values, the accuracy goes up until we hit an overflow, because the logs can't take it.
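As an aside, the "primitive way" via the substitution \( w = e^{-\lambda s} \) mentioned above is easy to sketch: with \( s = -\log(w)/\lambda \), the Abel-side recursion gives the Schröder function directly. In Python here, reusing the beta_function/tau_K recursion from the first post (this is an illustration, not the attached Pari-GP file):

```python
import cmath
import math

def beta_function(z, l, n):
    # asymptotic solution to tetration, backward recursion of depth n
    out = 0
    for i in range(n):
        out = cmath.exp(out) / (cmath.exp(l * (n - i - z)) + 1)
    return out

def tau_K(z, l, n, k):
    # error term between beta and the tetration F_lambda
    if k == 1:
        return -cmath.log(1 + cmath.exp(-l * z))
    return (cmath.log(1 + tau_K(z + 1, l, n, k - 1) / beta_function(z + 1, l, n))
            - cmath.log(1 + cmath.exp(-l * z)))

def Sch(w, l, n, k):
    # Schroder function via the substitution w = exp(-l*s)
    s = -cmath.log(w) / l
    return beta_function(s, l, n) + tau_K(s, l, n, k)
```

The Schröder equation \( \varphi_\lambda(e^{-\lambda}w) = \exp(\varphi_\lambda(w)) \) then comes out automatically, with the inner depth dropping to k-1 on the advanced side, exactly as in the tau_K recursion.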

Here are the attached sources. Again, it's very rudimentary code.


.gp   Schroder_L.gp (Size: 1.61 KB / Downloads: 350)
.gp   Abel_L.gp (Size: 1.84 KB / Downloads: 361)


Lastly, this doesn't quite get us to the correct tetration function. There's still another step I haven't programmed in, which is to paste these solutions together to get the Tetration we actually want \( \text{tet}_\beta \). I'm getting there though. I'm going to ask for more help from people to see if there's anything obvious I can do to improve this code.

Thanks again.

Regards, James



I've been fiddling with the numbers more, and I thought I'd produce 200-digit accuracy for certain values.

I've begun to realize, in the code for \( \text{Abl_L(z,l,n,k)} \), that if you keep n = k and real(z) = -n = -k, then you can get arbitrary precision for large enough \( n \).

By which I mean: here are some numerical examples of the family of tetrations \( F_\lambda \) converging with arbitrary precision in the left half plane. I've done it here to about 200 digits of precision; about 200 decimals are displayed.


Code:
Abl_L(-1000,1+I,1000,1000)

%16 = -0.29532276871494189936534470547577975723321944770194434340228137221059739121428422475938130544369331383702421911689967920679087535009910425871326862226131457477211238400580694414163545689138863426335946 + 1.5986481048938885384507658431034702033660039263036525275298731995537068062017849201570422126715147679264813047746465919488794895784667843154275008585688490133825421586142532469402244721785671947462053*I

exp(Abl_L(-1001,1+I,1000,1000))

%17 = -0.29532276871494189936534470547577975723321944770194434340228137221059739121428422475938130544369331383702421911689967920679087535009910425871326862226131457477211238400580694414163545689138863426335945 + 1.5986481048938885384507658431034702033660039263036525275298731995537068062017849201570422126715147679264813047746465919488794895784667843154275008585688490133825421586142532469402244721785671947462053*I

Abl_L(-900 + 2*I, log(2) + 3*I,900,900)

%18 = 0.20353875452777667678084511743583613390002687634123569448354843781494362200997943624836883436552749978073278597542986537166527005507457802227019178454911106220050245899257485038491446550396897420145640 - 5.0331931122239257925629364016676903584393129868620886431850253696250415005420068629776255235599535892051199267683839967636562292529054669236477082528566454129529102224074017515566663538666679347982267*I

exp(Abl_L(-901+2*I,log(2) + 3*I,900,900))

%19 = 0.20353875452777667678084511743583613390002687634123569448354843781494362200997943624836883436552749978073278597542986537166527005507457802227019178454911106220050245980468697844651953381258310669530583 - 5.0331931122239257925629364016676903584393129868620886431850253696250415005420068629776255235599535892051199267683839967636562292529054669236477082528566454129529102221938340371793896394856865112060084*I

Abl_L(-967 -200*I,12 + 5*I,600,600)

%20 = -0.27654907399026253909314469851908124578844308887705076177457491260312326399816915518145788812138543930757803667195961206089367474489771076618495231437711085298551748942104123736438439579713006923910623 - 1.6112686617153127854042520499848670075221756090591592745779176831161238110695974282839335636124974589920150876805977093815716044137123254329208112200116893459086654166069454464903158662028146092983832*I

exp(Abl_L(-968 -200*I,12 + 5*I,600,600))

%21 = -0.27654907399026253909314469851908124578844308887705076177457491260312326399816915518145788812138543930757803667195961206089367474489771076618495231437711085298551748942104123731995533634133194224880928 - 1.6112686617153127854042520499848670075221756090591592745779176831161238110695974282839335636124974589920150876805977093815716044137123254329208112200116893459086654166069454464833417170799085356582884*I

This leads me to believe there must be some kind of manageable way of coding this to arbitrary precision for arbitrary values \( z \). I'm just not seeing it. I'll check back later.

I've added a question on stackoverflow regarding this, the link can be found here:

https://stackoverflow.com/questions/6741...3_67410814


I've updated the code for Abel_L.gp. I've made it work essentially flawlessly, except where there is a branch cut/singularity.

This means, for instance, that \( \text{Abl_L(z,log(2),n,k)} \) gets as accurate as you want for large n and k (capping out at about 1000; before that, reaching 200-digit accuracy), if and only if z does not have imaginary part \( j \pi/\log(2) \) for \( j \in \mathbb{Z} \). This, I believe, is what I meant to post here; it just occurred to me that if I add in one if statement, everything gets better.


Here's some further code output with the new call of Abl_L (to get the old call, just use beta_function + tau_K).


Code:
Abl_L(3+5*I,2+I,100,100)

%8 = 15.1200663196084113726996578822621549291860306417726714010004342239253716532446190432940984607410001294433327830030769099833245647475387094919266911414294698020718973159623129155457864959247621572944494483657677359549605952220846281575084335046637768186335711376812853043209266498604206900517283615396 + 0.0154413032498042698439235137161907111994796403594950922888823644100610130842280732295925518332997633477295347631386476329400518730163561535044058808120135478276109805841582780898290159468773997459393954980347524620352354131865014838896294726312673191492212508552912478865091902424025717717143862084667*I

exp(Abl_L(2+5*I,2+I,100,100))

%9 = 15.1200663196084113726996578822621549291860306417726714010004342239253716532446190432940984607410001294433327830030769099833245647475387094919266911414294698020718973159623129155457864959247621572944494483657677359549605952220846281575084335046637768186335711376812853043209266498604206900517283615396 + 0.0154413032498042698439235137161907111994796403594950922888823644100610130842280732295925518332997633477295347631386476329400518730163561535044058808120135478276109805841582780898290159468773997459393954980347524620352354131865014838896294726312673191492212508552912478865091902424025717717143862084667*I

Abl_L(1.6 + 20*I, 0.3+5*I,100,100)

%10 = 2.71828182845904523536028747135266249775724712959871813321225102256571271901105581071936156664227780696595294492147108528943130700452739942797365538386174463004139713303743908232450551513833163730223951634952001001076979141010136383799044963603292514328479006198340919951738281224303430017077703855085 - 5.59090038841324852979214929915966720518674801775146129527343623926677361988265020937291650548792559291633999177861356729132079355498093880790116738389193439080209356297799632268051106346363676783243618498939659342080854442167848907452919312698823648586018500683273648314404547825798290928760017911638 E-44*I

exp(Abl_L(0.6 + 20*I, 0.3+5*I,100,100))

%11 = 2.71828182845904523536028747135266249775724712959871813321225102256571271901105581071936156664227780696595294492147108528943130700452739942797365538386174463004139713303743908232450551513833163730223951634952001001076979141010136383799044963603292514328479006198340919951738281224303430017077703855085 - 5.59090038841324852979214929915966720518674801775146129527343623926677361988265020937291650548792559291633999177861356729132079355498093880790116738389193439080209356297799632268051106346363676783243618498939659342080854442167848907452919312698823648586018500683273648314404547825798290928760017911638 E-44*I

Abl_L(4,0.3333 + 0.2333*I,100,100)

%12 = -1.15187640641189978210990929581051101163754283146294112775741172809721964962888386716287501600958778358406258145319636636158556628221772009810354250600664642314184936162757838045954616077117519726659918865038158430763042857082965089589391244260713331334319644470891691577898010494085000475872497770607 E-48 + 1.95465377642385673889692837993873578915732653532733754796391388291232066671017212613888186441334422302514625626620589547231525535813945333191238790294780773808204574519304241723789506922860807547687791607750976972833274201200602575449150873900563882476920717978414329146914922590992184541446162429337 E-47*I

exp(Abl_L(3,0.3333 + 0.2333*I,100,100))

%13 = -1.15187640641189978210990929581051101163754283146294112775741172809721964962888386716287501600958778358406258145319636636158556628221772009810354250600664642314184936162757838045954616077117519726659918865038158430763042857082965089589391244260713331334319644470891691577898010494085000475872497770607 E-48 + 1.95465377642385673889692837993873578915732653532733754796391388291232066671017212613888186441334422302514625626620589547231525535813945333191238790294780773808204574519304241723789506922860807547687791607750976972833274201200602575449150873900563882476920717978414329146914922590992184541446162429337 E-47*I

A quick note: moving \( n,k \) to large numbers shifts the constant \( z_0 \) where \( F_\lambda(z_0) = 0 \), so it can move the z variable around. But it keeps the functional equation.
#3
I'll be updating this as frequently as I can; but for the moment I thought I'd put down some graphs using Mike's graphing tool that Gottfried linked me to.

Here is, beta_function(z,log(2),100) + tau_K(z,log(2),100,7) graphed over the region \( -1 \le \Re(z) \le 3.5 \) and \( 1\le\Im(z)\le 4 \)

   

And Here is, exp(beta_function(z-1,log(2),100) + tau_K(z-1,log(2),100,7)) graphed over the same region.

   


You'll notice they are virtually identical. Ergo, the functional equation is being satisfied. I've tried to edit some of my buggy code a bit. I'll update when I can make slightly better code. I'm trying to make more graphs at the moment, in the same flavour as this.


Wow, Mike's program is quite beautiful,

   


Here's my solution to the Abel equation with \( \log(2) \) multiplier over \( -1 \le \Re(z) \le 4 \) and \( 0.5 \le \Im(z) \le 2.5 \):

Here's the same solution, graphed over \( -1 \le \Re(z) \le 3.5 \) and \( -2.5 \le \Im(z) \le 2.5 \); which is more symmetrical:


   


And attached here is the solution with multiplier \( 1 +0.1i \) graphed over the region \( -1 \le \Re(z) \le 6 \) and \( 3.2 \le \Im(z) \le 9 \). You can see the singularities forming on the boundary, which the Riemann mapping will effectively remove by pasting all the solutions together. This function has a period of \( \frac{2\pi i}{1+0.1i} \), so this graph repeats in the up/down direction at a slight angle.

   
#4
To complement the last post, I've created a graph with the multiplier \( 1-0.1i \) over the region \( -1 \le \Re(z) \le 6 \) and \( -9 \le \Im(z) \le -3.2 \). This graph was done with further iterations, but it's slightly less accurate; we can see the Devaney hairs more exactly. It is very similar to the case when the multiplier is \( 1 + 0.1i \), but with the \( z \) value reflected. I graphed this a tad more accurately, but it produced more errors elsewhere. Nonetheless, the hairs look nice, even if they're slightly exaggerated.

   

All we've really done, in a mapping sense, is rotate the angle slightly downward. And we've received a slight error in the code, where the Devaney hairs are exaggerated.
#5
I've attached in this update nearly flawless code. I figured out how to write a kind of exception statement into my code. This required me, sort of, guessing a cut-off point for at least 200-digit accuracy; guessing the parameter k; and making a different recursion. I've reduced all of my exception statements into one clean piece of code.

Code:
\\This is the asymptotic solution to tetration. z is the variable, l is the multiplier, and n is the depth of recursion
\\Warning: z with large real part looks like tetration; and therefore overflows very fast. Additionally there are singularities which occur where l*(z-j) = (2k+1)*Pi*I.
\\j,k are integers

beta_function(z,l,n) =
{
    my(out = 0);
    for(i=0,n-1,
        out = exp(out)/(exp(l*(n-i-z)) +1));
    out;
}

\\This is the error between the asymptotic tetration and the tetration. This is pretty much good for 200 digit accuracy if you need.
\\modify the 0.000000001 to a bigger number to make this go faster and receive less precision. When graphing 0.0001 is enough
\\Warning: This will blow up at some points. This is part of the math; these functions have singularities/branch cuts.

tau(z,l,n)={
    if(1/real(beta_function(z,l,n)) <= 0.000000001,
        -log(1+exp(-l*z)),
        log(1 + tau(z+1,l,n)/beta_function(z+1,l,n)) - log(1+exp(-l*z))
    )
}

\\This is the sum function. I occasionally modify it; to make better graphs, but the basis is this.

Abl(z,l,n) = {
    beta_function(z,l,n) + tau(z,l,n)
}
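For cross-checking the Pari-GP above, here's the same trio of functions in Python (a translation of mine, for illustration; one wrinkle is that Python's exp raises an overflow error rather than returning oo, so I wrap it, and then the 1/real(...) test plays the same role as above):

```python
import cmath
import math

def safe_exp(w):
    # exp that returns oo instead of raising, mimicking Pari's behaviour
    try:
        return cmath.exp(w)
    except OverflowError:
        return complex(float('inf'), 0.0)

def beta_function(z, l, n):
    # asymptotic solution to tetration; blows up fast for large Re(z)
    out = 0
    for i in range(n):
        out = safe_exp(out) / (safe_exp(l * (n - i - z)) + 1)
    return out

def tau(z, l, n, cutoff=1e-9):
    # error term; fall back to the asymptotic guess once beta is huge
    if 1 / beta_function(z, l, n).real <= cutoff:
        return -cmath.log(1 + cmath.exp(-l * z))
    return (cmath.log(1 + tau(z + 1, l, n, cutoff) / beta_function(z + 1, l, n))
            - cmath.log(1 + cmath.exp(-l * z)))

def Abl(z, l, n):
    return beta_function(z, l, n) + tau(z, l, n)
```

With \( \text{l} = \log(2) \) the cutoff lands the base case at the same depth as the earlier \( \text{Abl_L(1,log(2),100,5)} \) call, so Abl(1, log(2), 100) should reproduce that post's value to double precision.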


This is the final result of my code for the Abel function, for varying period \( 2\pi i / \lambda \). I'm still trying to find a way to effectively code \( \text{tet}_\beta \). But I'm getting there.


Disclaimer: these graphs are very slow to produce using Mike's program. As I was inputting my x,y values in the box, I may have put the y values in backwards on some of these graphs; if so, for said erroneous graphs the imaginary axis should be flipped, and the pictures flipped. I apologize; I wasn't being careful. I'll get around to flipping them and changing the domains, but that just means recompiling the graphs, which takes a very long time. Sorry, that was stupid of me. Nonetheless, it still shows their holomorphy, which is more to the point.




Here's a graph of \( F_{\log(2)-i}(z) \) over the region \( 0 \le \Re(z) \le 2 \) and \( -1 \le \Im(z) \le 1 \).

   

This was done with \( F_{\log(2)-i}(z) = \text{Abl(z,log(2)-I,100)} \).

And here's a graph of \( F_{1+i}(z) \) over the region \( 0 \le \Re(z) \le 3 \) and \( -1.5 \le \Im(z) \le 1.5 \)

   

This was done with \( F_{1+i}(z) = \text{Abl(z,1+I,100)} \).

And here is a very crazy graph of \( F_{1+5i}(z) \) over the region \( 1 \le \Re(z) \le 4 \) and \( 0 \le \Im(z) \le 3 \). The reason this graph looks so crazy is that it has a period of \( \pi(5+i)/13 \), which means it repeats on a pretty small strip. You can also see how bananas these tetration functions start to look when we vary the multiplier around, and the many branch cuts which start to form. This graph should look a bit more level in the lower half plane, but it blows up pretty fast, so my code tends to short-circuit before we get there.

   



I've attached here my temporary code for \( \text{tet}_\beta \). It is not normalized yet; I'm having trouble normalizing it efficiently. Therefore, there's a real number \( x_0 \in \mathbb{R} \) which shifts us to the normalized tetration. As to that, this is code for \( \text{tet}_\beta(s-x_0) \).


Code:
Tet(z,n) ={
    if(1/real(beta_function(z,1/sqrt(1+z),n)) <= 0.00000001,
        beta_function(z,1/sqrt(1+z),n),
        log(Tet(z+1,n))
    )
}


This needs to be attached to the code above. It's pretty shoddy on the real line, but it seems to work well in the lower and upper half planes, where it doesn't overflow.

At this point I am confident this tetration function IS NOT KNESER's. As to that, expect \( \text{Tet(z,n)} \) to overflow as we increase the imaginary argument. It is not normal: it does not tend to a fixed point as \( \Im(z) \to \infty \). At least, as far as I can tell.


From this point on, I shouldn't have made any more flipping errors when using Mike's graphing program. The following graph is how it should be.




This is a graph of \( \text{Tet(z,100)} \) over the domain \( 0\le \Re(z)\le 4 \) and \( 1 \le \Im(z) \le 5 \).


I'm still having trouble with the real line, and with mapping large arguments. My code doesn't like this function; I definitely need to fiddle with it a bit more. I'm trying alternative expressions at the moment, while keeping things as optimized as possible. Nonetheless, it should look something like this,

   

I need to work on accuracy with this though; it's failing on the real line for a reason. My guess is that the code is choosing the principal branch, and that's causing the errors. I need to think of a way to express this as \( \log(1+w) \) for \( w \) small, rather than \( \log(X) \) for \( X \) large; but every way I've tried hasn't worked.
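One possible workaround for the principal-branch problem (just a guess at a fix, not something from the paper): if the previous iterate gives an approximation \( Y \approx \log X \), one can snap the principal logarithm onto the branch nearest \( Y \). A hypothetical helper, with `log_near` and `approx` my own names:

```python
import cmath, math

def log_near(X, approx):
    """Logarithm of X on the branch whose imaginary part is
    closest to `approx`, instead of the principal branch."""
    w = cmath.log(X)                                   # principal branch
    k = round((approx.imag - w.imag) / (2 * math.pi))  # branch offset
    return w + 2j * math.pi * k
```

The idea is that in the pullback `log(Tet(z+1,n))`, the previous value of the orbit supplies the `approx` argument, so the logarithm can't jump branches between iterates.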


Even if this is Kneser's tetration, it's pretty crazy that Kneser can be constructed with one for-loop and two if statements--and a whole swath of recursion...




UPDATE
I fiddled with the tetration code a bit more and updated it; this is closer to the standard form. I realized I had a dangling negative sign, which forced a bunch of errors. The correct graph and code are posted now. It was a long night last night, and I got ahead of myself a bit. Nonetheless, without the negative sign everything was correct.
#6
Hey, everyone!

So I successfully implemented a Taylor series method with this tetration. This code is definitely suboptimal, but it'll get the job done on the real line. The previous protocol for tetration is preferred for \( |\Im(z)| > 1 \); this method is intended for a domain near the real line. I'm still working on making everything perfect, but it seems to be working: I'm able to get about 20 digits of precision with the Taylor series method, and it works perfectly up to that. I had to modify the code significantly, so I'll attach here everything you need for the Taylor series approach to calculating \( \text{tet}_\beta \).

But first, here's a graph of \( \text{tet}_\beta(s-x_0) \) for \( |\Re(z)| \le 0.5 \) and \( |\Im(z)| \le 0.5 \). (The value \( x_0 \approx 1 \).) There is absolutely no short circuiting, as was happening before, and it agrees perfectly with the old protocol on the real line (up to 20 digits, I mean).

   

I'm going to work on compiling all my code into a nice bundle. But for the moment, I'll post how I'm grabbing the Taylor series.


Code:
\\ Set series precision to 50 terms and working precision to 200 digits;
\\ the high precision makes the Taylor series work better.
\ps 50
\p 200

\\ Here beta(z,n) denotes the beta function with the multiplier 1/sqrt(1+z)
\\ baked in, as used in Tet above.

\\ This estimates the depth of recursion needed for the iteration to work about A.
\\ The value 0.0001 can be made smaller, but it risks an overflow for other A's.
\\ Ideally A is a real number, and we expand the Taylor series about that real number.
\\ If you plug in complex values, it suffers the same fate as Tet(z,n) for z near the real line.

Tet_GRAB_k(A,n) = {
    my(k=0);
    while( 1/real(beta(A+k,n)) >= 0.0001, k++);
    return(k);
}

\\ This runs the code of Tet, but the value k fixes the depth of recursion.
\\ The goal is to guess the value of k, which the previous function does.

Tet_taylor(z,n,k) = {
    my(val = beta(z+k,n));
    for(i=1,k, val = log(val));
    return(val);
}

\\ This creates an array of the first 50 Taylor coefficients.
\\ 50 terms seems good enough for 20-digit precision.

TAYLOR_SERIES(A,n) = {
    my(f = Tet_taylor(A+z, n, Tet_GRAB_k(A,n)));  \\ build the series once, not per coefficient
    my(ser = vector(50, i, polcoeff(f, i-1, z)));
    return(ser);
}
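PARI/GP's power-series arithmetic (via \ps) does the coefficient extraction for free. In a language without power-series types, one way to approximate Taylor coefficients is to sample the function on a small circle around the expansion point and apply the discrete Cauchy integral. This is a generic sketch, not tied to the tetration code; `taylor_coeffs` is my own name, and I test it on \( e^z \), whose coefficients are \( 1/j! \).

```python
import cmath, math

def taylor_coeffs(f, center, radius, m):
    """Approximate the first m Taylor coefficients of f about `center`
    via the discrete Cauchy integral:
        a_j ~ (1 / (m r^j)) * sum_k f(c + r w^k) w^(-j k),  w = e^(2 pi i / m)."""
    w = cmath.exp(2j * math.pi / m)
    samples = [f(center + radius * w**k) for k in range(m)]
    return [
        sum(s * w**(-j * k) for k, s in enumerate(samples)) / (m * radius**j)
        for j in range(m)
    ]
```

In principle one would pass the pulled-back iterate (the role played by Tet_taylor above) as `f`; the radius should stay inside the region where the function is holomorphic, and double precision caps the achievable accuracy well below the 20 digits the GP code reaches.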

Here's a printout of the first 50 Taylor coefficients, computed with 200 digits of working precision and accurate to about 20 digits, for \( A=1 \) and \( |z| \le 1/2 \). And we're summing,

\(
\text{tet}_\beta(z-x_0) = \sum_{j=0}^\infty a_j (z-1)^j
\)

Code:
0.03259650901600602497933792676384984393747893444466645403696826747470297040854769936473446193681691248

1.064566018894822439459121680191873880002289031679825196946198206070023707777321010554487553032188483

-0.2873378195642740566591744183636999465745954491815838645952044835638702375101315742562840132950780683

0.3152722331191694593039884823535663354341852791199594632219815957400875277477832477436751140233192657

-0.1912034198920797389025720655858521446366122592140768736192754196270843593541158129665657022944663944

0.1619918403700412498509111387209839079941649912583560681431800605720864827433793280246263923436743791

-0.1262163455189004868459401209337496191299366830614718566765284130020367126614265873127329696536157169

0.1044653488807110876475492150686866411102413623395495490272876641738572317913172227633315131401064049

-0.08733328434526905389340303692959998276888166793580935879374633926022136575415214261209435922826275733

0.07460983885218181536888705301714058156066365377105739122286955734344649436580505565342731178070780132

-0.06385214613393493096309699077056600326042728006359540631260197727085308946293722549497997570104686583

0.05584292203656403606738361461169393618013925564282390784859052371546558131893225346778837473196118510

-0.04880796343274970338929850813288239733060398934463190211262688968317823742267508116412402751932379650

0.04309932338600195524049288050944188016833858457979135846544038360040926931155629955997071292428022235

-0.03836288250206062445743409316837435779512360706231417782251640670997374821226368193903394779896190165

0.03414127036657301402422303831687666740221027559909571372373149058905979224249550074347273615496214416

-0.03069522622171308269630829912891224715429752245194023229286808911937110258445926524550448977794744532

0.02759375745266422726118225992120618363220714252770049081615845122719145775754339880175834020956738241

-0.02492617683524951521384931704396322919512477426297414985555943492446140532956626890241342331965554889

0.02260944798137610203011561258334045637792244076638107195283804928857101930667143540416620563344565252

-0.02051443069879937879358919403991008741427450376929415937239993164187695461889097139222808157603848946

0.01871122311131724455122719115829998942301702290119051139161984198969836748857220416433024168321356494

-0.01706959667516856134124994903315279611782904024218956441922221953292585605952962724651491389042347434

0.01561994985039635186551165730379966039719974867622587242645674972512851648211230613394203641113469957

-0.01432130821329467892537670772690581758787995707953189298649421682632774172536242451570451830060698315

0.01314305305378573482201731126080476037691584362534816014478356824281129180389484034717370060585726668

-0.01209419674262023220620604277621194028505496041471285114801258074184151950668828161221211019673908803

0.01113446063464791017496717262273878524224087807001607561484469692046388808602937292590708489153519

-0.01027177825698029634945241545416194028926566253220631587711693285129602573457352908514000813325236

0.00948554026951784021932271629606265315583202623162078666770541477188584788948472277477937939261755

-0.00876924305244973819428878673743763136633512039059965031310993411723357173965037597340389886699910

0.00811836994934045377382219801027782412783351395669177993408403933627506723654952031327315848090202

-0.00752097480042581180458331761661491607110136420100651539654375972538389108283416093167217

0.00697653013196725069606403082910994014202208454398167966042508502307861922184575867005168

-0.00647608174330626609178073722833717332946722585389221061966196514967615733226758876659904

0.00601730806494098532697053082930376298911109911831601294817232594413120893191339059905892

-0.00559561035789612544223858523526849098745113423254497751918500879709532326991322382953728

0.00520715774192393629034481322963471856892946870441754073184973868152261126269263188964829583040308

-0.00484963807894188028406096728853717513711689584750842349309013691627059863006789646318091836105510

0.00451941514123402440729759380767835181551519496077503656778038395338897904527310819739578881138017

-0.00421472016398154557571846072651557894283659267333744536428062158973293025758837360235950204463475

0.00393287207039965802503591588793998487242520417728294106069505912293856803564083499663414276993643

-0.00367211384407260308328762267960709649479709692174821929443893886185591220648922535783095

0.00343059491434320974783908324753863858631797220134856101166159261928687594307698346141573

-0.00320665678967362029287788735422501873023228546726837654473350815807603600822657166236526

0.00299893436559208312179496900769575872183387995037574408487383855378855588570984246495324

-0.00280600822357148559421654579155275805475430922305551027052238345832515078755002310908973

0.0026267702248372845574683329373504113189254022449433106454567513375222751843952

-0.0024600709693595691499146078894532492077947610127580314247455176664484908226700

0.0023049646597304620702001627477045893100607516041809170910352492486428007634527
After defining a recursive function to extend the series to larger arguments, here's the graph of \( \text{tet}_\beta(z-x_0) \) for \( 1 \le \Re(z) \le 6 \) and \( |\Im(z)| \le 0.7 \). Again \( x_0 \approx 1 \).
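To evaluate from the printed coefficients, one sums the truncated series near \( z=1 \) with Horner's scheme and extends to larger real parts through the tetration equation \( \text{tet}(z+1) = e^{\text{tet}(z)} \). A minimal sketch: I've copied only the first five coefficients from the list above, truncated to double precision, so this is illustrative rather than 20-digit accurate, and the names are my own.

```python
import cmath

# First five Taylor coefficients about A = 1, truncated from the printout above.
A_COEFFS = [
    0.03259650901600602,
    1.0645660188948224,
    -0.28733781956427406,
    0.31527223311916946,
    -0.19120341989207974,
]

def tet_poly(z):
    """Truncated Taylor polynomial of tet_beta(z - x0) about z = 1 (Horner's scheme)."""
    w = z - 1
    val = 0
    for a in reversed(A_COEFFS):
        val = val * w + a
    return val

def tet_ext(z, cutoff=1.5):
    """Extend to larger real parts via tet(z+1) = exp(tet(z))."""
    if z.real <= cutoff:
        return tet_poly(z)
    return cmath.exp(tet_ext(z - 1, cutoff))
```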

   

I think we're well on the way to showing that this tetration is holomorphic. At this point: is it Kneser's tetration?



