Arguments for the beta method not being Kneser's method
#11
I've made a post on MathOverflow asking for clarification on the monodromy theorem I applied in showing Kneser is not the beta method. I'll link it here and update this post if anyone gives me an answer.

https://mathoverflow.net/questions/40188...n-about-te
#12
James,
What is the range of analyticity of the tet_beta method? I assume tet_beta has some singularities in the upper half of the complex plane. Here's my heuristic "conjecture", which applies to any alternative tetration. I don't know if Samuel Cowgill and William Paulsen have proven this rigorously or not.


Theta is defined on the real axis for z>-2, since both tetrations are increasing there, and theta can be extended to a 1-cyclic function defined on the real axis. In the complex plane, theta might be defined only on the real axis; but then tet_beta would be defined only on the real axis. Assuming theta is defined in the complex plane, it is either entire or has singularities in the upper half of the complex plane.

If theta is entire, then z+theta(z) will take on every integer's value an infinite number of times, and this introduces an infinite number of new singularities: when z+theta(z)=-2, z-1+theta(z-1)=-3, z-2+theta(z-2)=-4, and so on; these are all new singularities in the upper half of the complex plane. The argument is more complex if theta has singularities, but it would seem that theta's singularities also probably lead to singularities for tet_beta(z). Moreover, theta must also avoid taking on integer values, by the first argument.
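To sketch the entire case in symbols (writing the alternative tetration through a 1-cyclic theta mapping; this is just my heuristic, not a proof):

```latex
% Heuristic sketch of the entire-theta case.
% Write the alternative tetration via a 1-cyclic theta mapping:
\mathrm{tet}_\beta(z) = \mathrm{tet}_K\big(z + \theta(z)\big),
  \qquad \theta(z+1) = \theta(z).
% If theta is entire and nonconstant, then g(z) = z + \theta(z) is entire
% and not a polynomial, so it has an essential singularity at infinity.
% By the great Picard theorem, g attains every value -- with at most one
% exception -- infinitely often.  In particular g(z_k) = -2 for infinitely
% many z_k, and at each such point
\mathrm{tet}_\beta(z_k) = \mathrm{tet}_K(-2)
% inherits the singularity that tet_K has at -2.
```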

Iterated exponentiation isn't well-defined or increasing in the complex plane unless we are only iterating real values. If Tet(z) gets arbitrarily large positive, then somewhere nearby Tet(z) gets arbitrarily large negative, and exp(Tet(z)) gets arbitrarily close to zero. So the behavior as imaginary z increases would depend on the real value as well, and there will be nearly-zero values interspersed with very large values...
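To make this concrete, here's a small Python sketch (purely illustrative; not from anyone's Pari/GP code) iterating exp from a generic complex seed. Huge values and near-zero values show up interleaved within a handful of steps:

```python
import cmath

def exp_orbit(z, max_steps=12, escape=1.0e5):
    """Iterate z -> exp(z), stopping once the orbit escapes."""
    orbit = [z]
    for _ in range(max_steps):
        z = cmath.exp(z)
        orbit.append(z)
        if abs(z) > escape:  # |exp(z)| = e^Re(z) would overflow soon after
            break
    return orbit

orbit = exp_orbit(1 + 1j)
moduli = [abs(z) for z in orbit]
# Near-zero and very large values are interleaved along the same orbit.
print(min(moduli), max(moduli))
```

From the seed 1+1i the orbit dips below 0.06 in modulus by the third step and exceeds 10^5 by the seventh: exactly the mix of near-zero and enormous values described above.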

Perhaps James can comment which case he thinks applies to the Beta method.
- Sheldon
#13
So, to begin, we have the asymptotic solutions,



Where, . In which,



Now, we begin by creating a sequence,



In which, uniformly for when . In which, this function will satisfy the equation,




These tetrations will have a plethora of singularities, forming a grid of essential singularities, branch cuts, and zeroes. What we want to do from here is map away from these singularities. The bad points are,



So if we "move" lambda while we move s, we can effectively dodge the singularities. This I did with . So we take the function,



Which is holomorphic for . This function still satisfies the asymptotic relation,



So we can expect it to look closer and closer to tetration. We insert an error term similarly (this time it's more difficult to do), but we get a,



And now we pull back with logarithms, which can be done without causing singularities.



Now, as to your statements with theta mappings, I'm going to point out William Paulsen and Samuel Cowgill's theorem:



I am simply removing the condition that,



It is absolutely necessary that you have this condition in Kneser's uniqueness--as per following William Paulsen and Samuel Cowgill. I think it would be quite remarkable if we could remove this condition and still have Kneser--I am not convinced though. Especially considering that their proof of this uses a similar theta mapping argument.


If we look at



I am not very sure what this will look like. It looks like a good approach, though, to suss out whether there are singularities. I'm personally of the feeling that this will produce branch cuts, or would at least have singularities. I would expect this to happen because,



Where this infinity is interpreted as "non-normality", or divergence. And because of this, we will get arbitrarily close to every fixed point of exp, which will cause slog_K to hit infinity infinitely often. So I imagine there are a plethora of singularities/branch cuts--and then we cannot apply your argument of getting arbitrarily close to singularities at the naturals (at least as far as I can see--we won't have entire function theory at our disposal).

This is to say that, in the words of Samuel Cowgill and William Paulsen:



If it did, we'd be back in their case--and their uniqueness condition would say it must be Kneser. Instead, we have that,




Now, the simple answer to when we have a singularity: we're limiting towards a fixed point/cyclic point of exp where . This returns to your discussion of being the ONLY real super logarithm up to a theta mapping. Here is where I'm saying that may not be true, specifically because of how William Paulsen and Samuel Cowgill made their paper.



The other way I'd approach describing this, is more topological in nature. First of all, going back to our periodic tetrations, we have the following functions. I'll refer to these as for and,




There will be essential singularities/branch cuts/zeroes all along the lines . Each of these functions looks something like this, but stretched wider and wider with smaller and smaller :


   


Then the goal of the beta method is just to stretch this cylinder to infinity. I cannot see any reason this would be impossible.



As to your comments on iterated exponentials not having a well-placed structure, I point towards the papers:

M. Lyubich (1987). The measurable dynamics of the exponential map. J. Math. 28, 111-127.

and

M. Rees (1986). The exponential map is not recurrent. Math. Zeit. 191, 593-598.

These papers show, first of all, that for almost all (up to a set of measure zero in the Lebesgue measure), the orbits get arbitrarily close to the orbit . The missing points are the periodic points and cycles. And also that the Julia set of the exponential is the entire complex plane.

Now, the importance of this with beta is that we can use this not to show that tetration grows towards infinity, but that does. However slowly, and with however many dips to (almost) zero; this is because there are no cycles in the beta function. This allows us to say (through quite a lot of work) that,



is a contraction mapping on the beta function. This will ensure convergence on a sector including the real line.
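For intuition, here is a rough Python sketch of the pull-back (this is NOT my Pari/GP code; I'm writing the functional equation in the simplified form beta(s+1) = exp(beta(s))/(1 + exp(-lambda*s)) with lambda = 1, which paraphrases the actual normalization, and I'm ignoring the error term):

```python
import math

LAM = 1.0  # lambda; assumed normalization, for this illustration only

def beta(s, depth=40):
    """Approximate beta(s) on the real line by iterating the assumed
    recursion beta(t+1) = exp(beta(t)) / (1 + exp(-LAM*t)) upward from
    far on the left, where beta decays to zero."""
    v = 0.0
    for k in range(depth):
        t = s - depth + k
        v = math.exp(v) / (1.0 + math.exp(-LAM * t))
    return v

def tet_approx(s, n):
    """Naive pull-back: apply log n times to beta(s + n)."""
    v = beta(s + n)
    for _ in range(n):
        v = math.log(v)
    return v

# Successive pull-backs already agree to several digits at small n,
# which is the convergence on the real line described above:
print(tet_approx(0.5, 3), tet_approx(0.5, 4))
```

Each extra pull-back step only changes the value by roughly log(1 + e^{-s}) pushed through the chain of logarithms, which is why convergence is so fast on the real line; the hard part, as discussed, is doing this off the real line without hitting singularities.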

From here, we enter the problem you are talking about: that we are not guaranteed that pulling back with logarithms won't produce a singularity. To which I had a sketch of a theorem that I posted here https://math.eretrandre.org/tetrationfor...p?tid=1348 --and also in my paper, where I explained it a tad quicker.



Very happy to have you back, Sheldon! I apologize if the post is pretty long--I wanted to cover all the bases, and the things you have missed. There's still a large number of things to explain and figure out about this function. Your criticisms and commentaries always help, though! Thanks again for showing an interest.

Regards, James
#14
Your construction sounds very interesting, but I don't understand it. Actually, any construction that yields an analytic tetration for bases greater than exp(1/e) sounds interesting! Can varying lambda to avoid singularities also yield different analytic solutions?

Also, your earlier iterated function method led to a conjectured nowhere-analytic function, and this seems somewhat similar, but I don't remember the details. Perhaps the two methods are related?
- Sheldon
#15
(09/16/2021, 01:41 AM)JmsNxn Wrote: ..., the orbits get arbitrarily close to the orbit . The missing points are the periodic points and cycles. And also that the Julia set of the exponential is the entire complex plane.
Just commenting on this small thing.
You bring it up often.
And I have mixed feelings about it :^)
The thing is, it is correct and relevant to many tetration related ideas. It has its upsides and downsides.
However, notice for MOST , the orbits USUALLY DO NOT get arbitrarily close to the orbit where is a positive real that is not an integer.
---
IMO the "beta method", or whatever we wanna call it, is by design made to have no singularities in certain locations.
Those locations depend on the (analytic) "helping functions" we picked (the functions that go fast to 1 so our base gets close to e).
This implies that different "helping functions" give different (connected) locations.
Those different locations give different (connected) boundaries; and those boundaries can be functionally inverted, which implies different slog's.
So I conclude, for instance, that my gaussian method is distinct from earlier type solutions.
(I planned future similar solutions which might further strengthen or clarify that idea ... more later)
---
I have to think about the remaining things everyone said. I just wanted to comment this.
regards
tommy1729
#16
(09/16/2021, 10:54 PM)tommy1729 Wrote: Just commenting on this small thing.
You bring it up often.
And I have mixed feelings about it :^)
The thing is, it is correct and relevant to many tetration related ideas. It has its upsides and downsides.
However, notice for MOST , the orbits USUALLY DO NOT get arbitrarily close to the orbit where is a positive real that is not an integer.

tommy1729

I know it looks that way, Tommy, but that's not the truth. Which is what I'm saying.

The points where do not aggregate towards are of zero measure in the Lebesgue measure. This means they are isolated points, and are precisely the cyclic points of exp, which are not dense but sparse. Though they look numerous, that is nothing to us seasoned mathematicians--they are sparse. So if you pick a point at random in , then almost surely its orbit aggregates towards . This means that pretty much all the points do aggregate.

Now I only use this to say that,



This allows us, to reiterate again, that the beta function ; because there are no cyclic points of beta, due to its functional equation. This is a central fact. This does not mean that tetration tends to infinity necessarily; but almost everywhere it will.

The trouble is there are A LOT of cyclic points (they're more like neutral points, as produces a different orbit than , but they still bounce around and do not hit infinity). Like a lot a lot of them. But if you deleted them, you'd still have the complex plane almost everywhere.

It's important to remember that if this weren't the case, then there would be no holomorphic slog, as slog has a singularity at each cyclic point. If what you were saying were true--that these points are dense in , or are "most of the points"--then slog would be a holomorphic function with a dense set of singularities. No such holomorphic function can exist.
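For concreteness, the principal fixed point of exp--one of the cyclic points where slog has its singularities--is easy to compute. A quick Python sketch (illustrative only) using Newton's method:

```python
import cmath

def exp_fixed_point(z=0.3 + 1.3j, tol=1e-12, max_iter=50):
    """Newton's method on f(z) = exp(z) - z; converges to the principal
    fixed point L ~ 0.318 + 1.337i, a singularity of slog."""
    for _ in range(max_iter):
        f = cmath.exp(z) - z
        if abs(f) < tol:
            break
        z = z - f / (cmath.exp(z) - 1)  # f'(z) = exp(z) - 1
    return z

L = exp_fixed_point()
print(L)                       # approximately 0.3181315 + 1.3372357j
print(abs(cmath.exp(L) - L))   # essentially zero
```

Every such fixed/periodic point is a singularity of slog; the point of the measure argument above is that, although there are infinitely many of them, deleting them still leaves almost all of the complex plane.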
#17
(09/16/2021, 07:23 PM)sheldonison Wrote: Your construction sounds very interesting, but I don't understand it. Actually, any construction that yields an analytic tetration for bases greater than exp(1/e) sounds interesting! Can varying lambda to avoid singularities also yield different analytic solutions?

Also, your earlier iterated function method led to a conjectured nowhere-analytic function, and this seems somewhat similar, but I don't remember the details. Perhaps the two methods are related?

Hey, Sheldon. Not a problem if you're confused. It's fairly difficult. I'll answer your questions as I best can.

1.) Can varying lambda to avoid singularities also yield different analytic solutions?

The way I defined my solution was--which will all be equivalent (up to a normalization constant),



I just use for simplicity; but any such limit will result in the same function, due to my use of Banach's fixed point theorem. This is a very subtle argument, though. But no matter how I code these solutions, they all look the same. So varying epsilon graphically has no effect (sometimes barely even changing the normalization constant).

2. Perhaps the two methods are related?

They are absolutely related. Let me refresh your memory. Originally I had considered,



which satisfies the equation:



I constructed this using "infinite compositions", but the manner you did it is equally valid. In fact, the manner you did it is a tad more natural, and definitely helped me in constructing the beta method.

As per your derivation--we have a function,



and we can iteratively define the taylor series at zero; then the phi function is just,



Then we took for a normalization constant . And I had erroneously thought this would be holomorphic; it is not, it's only on --(at least as conjectured by you, which is 1000% confirmed by numerical calculations).

This sort of put a wrench in the gears, and had me a little downtrodden. Then Tommy kept on experimenting with more functions and more infinite compositions, and a lightbulb went off. The phi method fails because,



We have a dangling infinity which overshoots past tetration. So what I did to remedy this was use a function such that,



This produces an analytic function for , and when extended to the complex plane it has singularities at for . Now, this function is better because it approximates tetration much better for large arguments. Then, at infinity,



so it acts as a "quasi fixed point" at infinity. Then our beta function is just,




Which satisfies,



And note that this gets closer and closer to tetration's functional equation as we increase the real argument! These are what I call "asymptotic solutions to the tetration equation"; phi is not an asymptotic solution--Tommy's function and beta are both asymptotic solutions. And from this we can think of as a (kind of) fixed point on the Riemann sphere. And this allows us to make Schröder functions about infinity, in which we can discover



Where this limit must be taken in a specific manner, avoiding the poles and such; but it is otherwise correct to say .
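Here's a quick numerical illustration of the "asymptotic solution" property in Python (again illustrative, not my Pari/GP code, with the functional equation written in the simplified form beta(s+1) = exp(beta(s))/(1 + exp(-lambda*s)), lambda = 1):

```python
import math

def beta(s, lam=1.0, depth=40):
    """Run the assumed recursion up from the far left, where beta ~ 0."""
    v = 0.0
    for k in range(depth):
        t = s - depth + k
        v = math.exp(v) / (1.0 + math.exp(-lam * t))
    return v

# The defect from tetration's equation tet(s+1) = exp(tet(s)) is exactly
# log(1 + exp(-lam*s)), which decays to zero as the real argument grows:
for s in [0.0, 1.0, 2.0, 3.0]:
    defect = math.log(beta(s + 1)) - beta(s)
    print(s, defect)  # -0.693..., -0.313..., -0.126..., -0.048...
```

This is the sense in which beta asymptotically solves the tetration equation: the defect decays like e^{-lambda s}, whereas the phi function's overshoot never settles down.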

This gives us a family of Schröder functions about infinity. The trouble is, these solutions are dependent on and the singularities that vary for each . So we want to "dodge the singularities." In my (remember, it's a preprint) paper I used Banach's fixed point theorem; in which any function and is holomorphic on a sector including ; and satisfies for (this is the most crucial fact for my proof); any lambda like this will produce the same function.

I just chose, again, for simplicity.

So to answer your question, this is very much the phi method--which you dubbed --but a much more refined version. And remember that actually looks like tetration for large arguments. The phi function DOES NOT: it overshot tetration. This neatly approaches tetration, and all we're doing is calculating a small error between it and tetration. We're actually trying to find something small.

In which it's easy to find a function,



in which and,





If you have any more questions, please feel free to ask! I have a lot of notes and literature on how this works, and the best way to suss it out is for people to ask me questions. I can forget what is obvious and what is not. Glad you're back, Sheldon.

Sincere regards, James.
#18
I know three posts in a row is poor etiquette, but each post is on a vastly different problem. I'd like to look at Sheldon's idea of a theta mapping much more closely. Using the analytic implicit function theorem and the monodromy theorem,



We have a locally holomorphic function everywhere,



Each tetration sends surjectively. They both have a nonzero derivative; so let's try to paste these local solutions using the monodromy theorem.

Now, what Sheldon is thinking (as far as I can tell) is that this should be able to be made into a single function (with singularities or without--without, this program would be dead in the water, as he correctly noted). The way I'd look at this is that we have a Riemann surface defined by , and there is no projection to that fits perfectly without branching out in some manner.

This does not mean that the beta method diverges or has singularities. It means the theta mapping does. This is to say, there is no entire/meromorphic function,



This is because as the left-hand side limits to ; but on the right-hand side, this doesn't force ; it forces to a fixed point.




Now, very importantly, Sheldon is calling on:



And, if you're wondering what happens at singularities; it's rather plain.



Where is a point in which is recurrent. This means it's either part of a cycle or in the preimage of a cycle. To think about this, the moment that and we're going to run into enormous trouble by trying to just compare them functionally. It doesn't really resonate that,



BUT!!!! Locally this makes perfect sense. So without assuming this is one function, locally this expression will work, so long as we're away from cyclic points and such. Sheldon is very much correct in his observation; but he's assumed that when we have a singularity in , we aren't just limiting towards infinity in some manner, where we approach (completely valid) recurrent values.


I just thought I'd post this because I agree with Sheldon's points; I think he's just assumed this must create an EVERYWHERE perfect version of . The point of the beta method is to be an exception to these theta mappings.
#19
(07/23/2021, 04:05 PM)JmsNxn Wrote: What I was pointing out is that Samuel Cowgill and William Paulsen proved a uniqueness condition....
This implies that ... is Kneser's solution; unless it has singularities in the upper half plane.

http://myweb.astate.edu/wpaulsen/tetration2.pdf

James, thanks for pointing out that Cowgill/Paulsen have proven this uniqueness criterion! Very nice.

I talked with James on a zoom call, and I hope to understand James's beta method well enough to generate a Taylor series for the case, first for James's periodic function, and then for his tetration solution generated from . This function is very interesting to me all by itself! Also, if I understand Cowgill's proof, then if James's solution can be extended to imaginary periods arbitrarily larger than 2 pi i, with other different values of , then in the limit it must be Kneser's solution; otherwise James's solution must have singularities in the upper half of the complex plane.
- Sheldon
#20
I'm very excited to see your take on the matter, Sheldon.

If this turns out to be Kneser, I think that would be really cool. It would imply there are two completely different construction methods for Kneser: the theta mapping way, and the infinite composition way. I'm very excited.

I'm also excited to see how you would code this--I'm stuck in a loop with how I've coded it so far. I can't seem to make my code any better, only worse. I'm sure you'll have a much better approach.

Regards, James.


Also in the code I sent you, if you run,

Code:
Abel(0,1)  /*my code is a little iffy, you have to compute a pointwise value before finding Taylor series*/
%22 = 1.000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
A = Abel(1+z,1)
%23 = 2.718281828459045235360287471352662497757247093699959574966967627724076630353547594571382178525166427 + 2.967675001354652819296127594565742072515371100795384098700715100444872879616355748110954897095595657*z + 2.357732782678439856207373109446921705172559642201103246412154136509264671254302636421825731027053196*z^2 + 1.972756783820104355828140430188695319933557754505278322338940812272450781077344349039966600447249928*z^3 + 1.521148090181033468925522415936749561332256099614758939957677219761076742954975893121293477491051523*z^4 + 1.132367955367112637558087096821006031223027452473402246898760925916582808906408997101656980327848171*z^5 + 0.8121563103806010230222350911479254900003271460190619053912747122944366657987535989209973326663293676*z^6 + 0.5668056036040309827190929604067804496981862292586028412031220460363419392521542100001466557118432624*z^7 + 0.3861511679501009246106947222279153380577461794798721487665136643264317909051790675972715195270677629*z^8 + 0.2578543148265740343595500532568696208110639088041807087643069410037215153740124745789417971886590220*z^9 + 0.1692037214481119873018313319326908299532593192138533430241298678694714253686213481327033825033120650*z^10 + 0.1093357623807869255131700383434340592586729366813005280713744356159964346365114562733492505352421528*z^11 + 0.06967215724586660529832773486190949838180034194657284765751618014624119610304266010332651433679916640*z^12 + 0.04383424929004270639859632439331405447985109394155763330954570580677886752537726245690955919412589625*z^13 + 0.02725230945725728928653041646527817396175457376567607162605711289589544420029923862597058334312778471*z^14 + 0.01675456714810252605320540728988177648406970282527886136238692483366793046381635208878625743590500442*z^15 + 0.01019336089994092319953180954011473686533749121550453627689868645567636329984469654296962486610763327*z^16 + 0.006142643202590189051063058559475426272016144588458199122627496275555023356411510907892559475747711113*z^17 + 0.0036709175040100352088[+++]
This will give the taylor series about 1, similarly with,


Code:
Abel(z,1)
%24 = 1.000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 + 1.091746621076809013884974261863482866515963611560536324138994111751239098282697066968792471034239886*z + 0.2714060762070193171437424955943827931638503717136848165830532882692408669826080900482469467615488910*z^2 + 0.2125525861313469888021478425802562090387068291121784641878304794743661021504702004547199878572094071*z^3 + 0.06977521648514998263674987921378838226309211345190041781551961351764919973313522659458688303371573846*z^4 + 0.04404163417774007948974320281475423239471012247835111677418601284865041866096275647089240474660889988*z^5 + 0.01480629466783694024680606804770874466095025847943622314359234215211918566929634587982547822697524125*z^6 + 0.008561268049536550787110621350376164411249768075478143351453259439601273283709683201952517632614868019*z^7 + 0.002814905681249618769344334929755303131301111679210593232530174245343075739294184023761363767325649788*z^8 + 0.001600984980242967980058067512988028998626599587465129469228613980725879543699254151351823397240636449*z^9 + 0.0005240150126752845344816864863220032432383929844600032134916998405913413929359719890727966534676820259*z^10 + 0.0003017151070992455198885590074112687198266940835895138851311257194250982324280807827139264047511680096*z^11 + 9.024605643550261986785556669340450988716179032759129805040792164935223712423512110510982647458269450 E-5*z^12 + 4.859419988807502511049138781325890892877814919176570632479411035916500915789129780855922960206661452 E-5*z^13 + 8.084794728794402624489481277553499323658745901241299037539670393969783591119684627779672866000859303 E-6*z^14 + 2.704836036866106315173902706242762078649072551241974645493836652948459913867590456573805827341354940 E-6*z^15 - 2.450720613547027454196739715567577929122842285508726144924924557910470116337318420497467650134284414 E-6*z^16 - 6.892515381942296218498173521547907785286934797376940481495402983799448402762944660107195839612213041 E-7*z[+++]



this will give the taylor series about 0. **Note that series precision caps at about 35 for the abel function**

At least, these are the current write-ups. They are still a little glitchy, but they're a good base to work from.

You can also run the beta taylor series by writing,


Code:
beta(z,1)
%25 = 0.3047812723715631626923015792598253346867960359900467240879774471150853396719969197860001160926829229 + 0.2581832782858992635060868047102413763939613483259195472236688617584221959696516525587310611841169650*z + 0.09463888612428387194740730932988071510659020889882707705631953397406128888864303628546778237151637377*z^2 + 0.01753438256356050427729870239873798484572480762843257387281852305017368763353319548602777334650882279*z^3 + 0.001759337414570557303275880706685272797080553330810727735490510525753989483713645400676209108170740531*z^4 + 0.0006863979038845097698444711663854233512563664146092614073874910940146297168329600528402038121379649848*z^5 + 0.0004405208937310109260538168072078485540501946830998265386870605812414649715990773648692052914360015455*z^6 + 0.0001283781560448517843991019441521080275738359019385970868544988158381758328687415885543735580411983813*z^7 + 1.127655436026842630431776398940366363564725327178739718557710956416489691941780787467224357901104767 E-5*z^8 - 1.680604395022457192516126485203231558086281982229394781595439105089015023134334729499726466033969271 E-6*z^9 + 9.650906392345079107385085456472781342983407911841549695223564000172509762318796556544110640700563151 E-7*z^10 + 9.390318846181570376336441805537276985437465935052966380388337419322862330907488299598441553527099334 E-7*z^11 + 2.260026187217772870389604410623321795747373698093413868436196203110557225843048456519771194581000847 E-7*z^12 - 1.395526583269469714590317681698606511218377633332715636128915926089879887286949315239326751537315812 E-8*z^13 - 1.638439359347133272252748735159358815820601685906309637165751377872048994785992909128642872108212928 E-8*z^14 + 1.078238513266699145859423752733334882012425749056914213162490031836708930273501784634604646858594594 E-10*z^15 + 2.067291475308126749957840316954606375230489485841092217044166407481420466493201124477973575581880773 E-9*z^16 + 5.23322442162135048345507171457926250010740682929602430387049975932742122583728359988991580006[+++]


which gives the taylor series at 0, again. You can work similarly with beta(A+z) for any point A; also with Abel.  **beta will allow 100 series precision, not sure why this discrepancy is happening**



Also, I know that you were asking me to show the singularities at for , and for some reason it wasn't looking like it was diverging. Here's a graph of,


Code:
ploth(X=0,3.14, P = beta(I*X+2,1); [real(P),imag(P)])
%28 = [0.E-307, 3.140000000000000124, -1.3915858263219127444, 1.4403735137825326440]

   

We can clearly see the essential singularity. And if you run the command:


Code:
ploth(X=0,3.14, P = beta(I*X+1.5,1); [real(P),imag(P)])
%29 = [0.E-307, 3.140000000000000124, 0.E-307, 1.0709593063171574112]



you see that there's no singularity here at beta(Pi*I+1.5,1):

   


And now if you run


Code:
ploth(X=0,1.5, P = beta(X+3.14*I,1); [real(P),imag(P)])
%31 = [0.E-307, 1.5000000000000000000, -189.91240846940786468, 380.2303726792904968]

you can see the pole at (the other singularities are essential; the only poles occur at for ).

   

This is a better look at how the singularities appear at beta(j+Pi*I,1) for ; and up to the period, these are the only singularities.

