Tetration Forum

Full Version: Searching for an asymptotic to exp[0.5]
Back to basics

In addition to posts 17 and 18, notice that

D exp^[1/2](x) = D exp^[1/2](exp(x)) * exp(x) / exp^[1/2](exp(x)).

This follows from ln( exp^[a](exp(x)) ) = exp^[a](x) and the chain rule for derivatives.

By induction / recursion this gives a nice way (a product) to compute the derivative.
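A quick sanity check of the identity (my own Python illustration, using a toy base function whose half-iterate is known exactly, since exp^[1/2] itself is not elementary):

Code:
import math

F  = lambda x: x**2                          # base function F
dF = lambda x: 2*x                           # F'
f  = lambda x: x**math.sqrt(2)               # exact half-iterate: f(f(x)) = x**2
df = lambda x: math.sqrt(2) * x**(math.sqrt(2) - 1)   # f'

x = 1.7
lhs = df(x)
rhs = df(F(x)) * dF(x) / dF(f(x))            # f'(x) = f'(F(x)) * F'(x) / F'(f(x))
print(lhs, rhs)                              # both ~ 1.76

For F = exp we have F'(f(x)) = exp(f(x)) = exp^[1/2](exp(x)), which gives exactly the form written above.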

This strengthens the conclusions from posts 17 and 18 and shows that

1 + o(1) <<_n 2.

(smaller after only a few iterations n)

We conclude by noting that the Taylor series T,

T = Sum_{k=4}^{oo} d_k x^k

with d_k = exp(-k^2),

grows slower than exp^[1/2](x), yet faster than any polynomial.

T has growth order 0, like exp(ln^2(x)) and similar functions.

Have we met T yet?? I believe T is a fake of some elementary function like exp(ln^2(x)) or such...
This again leads to the desire for an inverse fake or its related integral transforms...
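For a quick numeric feel for T's order-0 growth (my own sketch, not part of the argument above):

Code:
import math

def T(x, kmax=200):
    lx = math.log(x)
    # sum exp(-k^2) * x^k, written as exp(k*ln(x) - k^2) to avoid overflow
    return sum(math.exp(k*lx - k*k) for k in range(4, kmax))

for x in [1e4, 1e8, 1e16]:
    # the dominant term sits near k ~ ln(x)/2, so log T(x) ~ ln(x)^2 / 4 : order-0 growth
    print(x, math.log(T(x)), math.log(x)**2 / 4)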

---

Regards

Tommy1729


I request complex plots of f(x) = fake exp^[1/3](x) , f(f(x)) and f(f(f(x))).

Like Sheldon did for fake exp^[1/2](x) in one of the early posts in this thread.

It is very important !!

( potentially new results/conjectures based on those plots ! )

Regards

Tommy1729

PS: make sure to make backups of this website/content, Bo?
I'm currently into this:

f(x) = integral from 1 to +oo [ t^x g(t) dt ]

and the related

f(x) = a_0 + a_1 * 2^x + a_2 * 3^x + a_3 * 4^x + ...

( for suitable f(x) )

This is of course similar to finding Taylor series and to fake function theory (so far).

The idea floating around of finding approximate entire Dirichlet series with a_n >= 0 is of course tempting.

However maybe the inequality

a_n < min_x ( f(x)/n^x )

might be less efficient here ??

What do you guys think ?
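To make the inequality concrete, here is a small sketch on a toy example of my own choosing, f(x) = sum_{n>=1} n^x / n!, so the true coefficient of n^x is 1/n!; the test function and the use of scipy's minimize_scalar are my assumptions, not anything from the thread:

Code:
import math
from scipy.optimize import minimize_scalar

def f(x, terms=400):
    # f(x) = sum_{n>=1} n^x / n!  (all coefficients nonnegative), summed via logs
    return sum(math.exp(x * math.log(n) - math.lgamma(n + 1)) for n in range(1, terms))

for n in [2, 3, 5, 8]:
    bound = minimize_scalar(lambda x: f(x) / n**x, bounds=(-10, 30), method="bounded").fun
    true  = 1.0 / math.factorial(n)
    print(n, bound, true, bound / true)   # bound/true measures how lossy the min bound is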

NOTICE: the integral transform is NOT the Mellin transform.

Does anyone know an inverse integral transform for this?

I think f(x) = gamma(x,1) + (constant) might make an interesting case ...

regards

tommy1729
(07/18/2021, 11:47 PM)tommy1729 Wrote: [quoted in full above]

As a small example :

 integral from 1 to +oo [ t^x g(t) dt ]

with g(t) = exp(- ln(t)^2 ) 

equals :

(1/2) * ( erf((x+1)/2) +1) *  sqrt(pi) * exp( (1/4)* (x+1)^2 ).

I find this fascinating.
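For what it's worth, a quick numerical check of that closed form with mpmath (my own verification sketch, with an arbitrary test point):

Code:
from mpmath import mp, mpf, quad, erf, sqrt, exp, log, pi, inf

mp.dps = 30
x = 2.5   # arbitrary test point

lhs = quad(lambda t: t**x * exp(-log(t)**2), [1, inf])
rhs = mpf(1)/2 * (erf((x + 1)/2) + 1) * sqrt(pi) * exp((x + 1)**2 / 4)
print(lhs, rhs)   # the two values should agree to high precision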

btw g(t) reminds me again of the binary partition function.

Differentiation under the integral sign (with respect to x) will probably be a useful trick too.
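For example, a minimal check (again my own sketch) that differentiating under the integral sign just inserts a factor ln(t) into the transform:

Code:
from mpmath import mp, mpf, quad, exp, log, inf

mp.dps = 30
g = lambda t: exp(-log(t)**2)
F = lambda x: quad(lambda t: t**x * g(t), [1, inf])

x, h = mpf('1.5'), mpf('1e-10')
print((F(x + h) - F(x - h)) / (2*h))                     # central-difference d/dx of the transform
print(quad(lambda t: t**x * log(t) * g(t), [1, inf]))    # transform with an extra log(t) factor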

More math must exist.

regards

tommy1729
(07/21/2021, 05:11 PM)tommy1729 Wrote: [quoted in full above]

Of course I'm not trying to solve the integral transform for exp^[0.5] directly.
What I am trying here is to solve it for standard functions ... and then use them.

As in TT(exp^[0.5](s)) = TT(sum over standard functions) = sum over TT(standard functions),

where TT stands for the "tommy-transform", the integral transform mentioned above.

Whether this is the best strategy, I do not know. It just felt natural.

regards

tommy1729
Hi tommy
I read many of your posts, you're inspiring, respect!
I'm not here trying to disappoint you, please don't take my later words as offensive or anything, no offense at all.

I came up with 2 ideas

First: exp^0.5 has no asymptotic among any combination of elementary functions.
I think it may be true because exp(z) and log(z) are elementary too, and they're only asymptotic to themselves, or to themselves with descending terms added, like exp(z) ~ exp(z)+1, exp(z) ~ 2sinh(z), etc.
I mean, you can never write an asymptotic of exp(exp(z)) that is not of the form exp(exp(z)+h(z))+g(z), where h(z)~0 and g(z)=o(1) in small-o notation, or alternatively exp(exp(z)+h(z))*k(z)+g(z) or something, with h, k, g all elementary.
So this pattern may also apply to exp^0.5, just a guess lol
Summary: exp^0.5 ~ exp^0.5
And I read your post about using 2sinh(z) to approximate tetration, which is fantastic.
So you may tell that if g(g(z)) = 2sinh(z), then g(z) ~ exp^0.5(z), right?
Again, if g(g(z)) = exp(z)-1, will g(z) ~ exp^0.5(z)?
But since g(z) is not elementary... this is a loop though.
So a question: if g(z) ~ exp^0.5(z), will g(g(z)) ~ exp(z)?

Second, I wonder if you have attempted finding the asymptotic of the Maclaurin series of exp^0.5(z); it may give us some hints. Assume exp^0.5(z) = sum_{n>=0} a_n z^n; asking for an asymptotic of a_n may help.
Indeed, asking for the coefficient of the n-th term is called the Z-transform in analytic theories, so I guess the Z-transform may help; its inverse transformation is an integral, due to Cauchy's integral formula.
Here's what I've found:
First generate the series. We can use the Bell matrix to generate about 130 correct terms in 2 minutes, giving a full list of length 150:
Code:
A={0.49856, 0.87634, 0.24755, 0.024572, -0.00095215, 0.00025335, \
0.000070930, -0.000048184, 2.6322*10^-6,
5.9669*10^-6, -1.3088*10^-6, -7.4742*10^-7, 2.6850*10^-7,
1.1251*10^-7, -4.8065*10^-8, -2.2028*10^-8, 8.1704*10^-9,
5.3099*10^-9, -1.2339*10^-9, -1.4183*10^-9, 1.0362*10^-10,
3.8903*10^-10, 3.5690*10^-11, -1.0434*10^-10, -2.9030*10^-11,
2.6037*10^-11, 1.3863*10^-11, -5.4973*10^-12, -5.5413*10^-12,
6.6662*10^-13, 1.9785*10^-12,
1.9479*10^-13, -6.3466*10^-13, -2.0676*10^-13, 1.7667*10^-13,
1.1330*10^-13, -3.7497*10^-14, -4.9753*10^-14, 2.0904*10^-15,
1.8867*10^-14, 3.7167*10^-15, -6.1889*10^-15, -2.9031*10^-15,
1.6492*10^-15,
1.5168*10^-15, -2.6212*10^-16, -6.5323*10^-16, -6.0239*10^-17,
2.4101*10^-16, 8.4076*10^-17, -7.4249*10^-17, -5.2086*10^-17,
1.6496*10^-17,
2.5018*10^-17, -3.1515*10^-19, -1.0167*10^-17, -2.4372*10^-18,
3.5095*10^-18, 1.8744*10^-18, -9.6023*10^-19, -1.0067*10^-18,
1.4189*10^-19, 4.4848*10^-19,
5.2808*10^-20, -1.7171*10^-19, -6.3925*10^-20, 5.5352*10^-20,
3.9864*10^-20, -1.3334*10^-20, -1.9715*10^-20, 8.6590*10^-22,
8.3933*10^-21, 1.5899*10^-21, -3.1262*10^-21, -1.3720*10^-21,
9.9393*10^-22, 7.9156*10^-22, -2.4138*10^-22, -3.8173*10^-22,
1.9922*10^-23, 1.6322*10^-22,
2.4944*10^-23, -6.3026*10^-23, -2.2692*10^-23, 2.1918*10^-23,
1.3517*10^-23, -6.6752*10^-24, -6.7915*10^-24, 1.6199*10^-24,
3.0858*10^-24, -1.8466*10^-25, -1.3073*10^-24, -1.1289*10^-25,
5.2539*10^-25, 1.1521*10^-25, -2.0279*10^-25, -7.0774*10^-26,
7.6010*10^-26, 3.6514*10^-26, -2.8008*10^-26, -1.7171*10^-26,
1.0301*10^-26, 7.6128*10^-27, -3.8536*10^-27, -3.2347*10^-27,
1.4963*10^-27, 1.3266*10^-27, -6.1243*10^-28, -5.2460*10^-28,
2.6499*10^-28, 1.9782*10^-28, -1.1963*10^-28, -6.9077*10^-29,
5.4911*10^-29, 2.0741*10^-29, -2.4784*10^-29, -4.0879*10^-30,
1.0546*10^-29, -6.5566*10^-31, -3.9676*10^-30, 1.3218*10^-30,
1.1431*10^-30, -8.8712*10^-31, -1.1506*10^-31,
3.8371*10^-31, -1.2479*10^-31, -8.3825*10^-32,
9.0056*10^-32, -2.0598*10^-32, -1.9684*10^-32,
2.0423*10^-32, -7.9655*10^-33, -5.8940*10^-34,
2.9461*10^-33, -2.2985*10^-33, 1.1897*10^-33, -4.7822*10^-34,
1.5768*10^-34, -4.3727*10^-35, 1.0324*10^-35, -2.0862*10^-36,
3.6089*10^-37, -5.3258*10^-38, 6.6558*10^-39, -6.9619*10^-40,
5.9902*10^-41, -4.1332*10^-42, 2.2005*10^-43, -8.4909*10^-45,
2.1140*10^-46, -2.5506*10^-48};
I took the list and made a list plot, concentrating only on the 20th to 130th terms; we can see the plot is PRETTY asymptotic to a linear function. By calculation, mma deduced a candidate asymptotic for a_n for large n, from which we MAY tell that a boundary exists: summing a_n*z^n beyond the first 20 terms gives an asymptotic upper bound around z~0 on top of g(z), where g(z) denotes the first 20 terms' summation.
This expression is not very precise, though.
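For reference, a minimal Python sketch of the matrix idea described above (my own small-scale version, not Leo's Mathematica code; the truncation size, the Carleman-matrix convention and the use of scipy's sqrtm are my assumptions):

Code:
import numpy as np
from math import factorial
from scipy.linalg import sqrtm

N = 32   # truncation order (a much larger matrix was used for the list above)

# Carleman matrix of exp at 0: row i holds the Taylor coefficients of exp(i*x),
# i.e. M[i, j] = i**j / j!, so row 1 is the series of exp itself.
M = np.array([[float(i)**j / factorial(j) for j in range(N)] for i in range(N)])

H = sqrtm(M)              # approximate Carleman matrix of exp^[1/2]
a = np.real(H[1, :])      # row 1 ~ Maclaurin coefficients of the half-iterate

print(a[:6])              # compare with the first entries of the list A above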

Regards
Leo
(08/05/2021, 05:50 PM)Leo.W Wrote: [quoted in full above]

I do not take your compliments as an offense lol.
Thanks.

You have some nice posts too.
Welcome here.

As for your idea 1:

First I want to point out that there is the idea of dimension.

Are two functions asymptotic on the real line (1 dimension)?
Or on the positive reals (0 dim?)

OR on a half-plane?

This matters a lot because, just as with limits (0-dimensional asymptotics, in a (philosophical?) way) and convergence, there are many kinds of asymptotics.

Is 2sinh(z) asymptotic to exp(z)?

Well, for the positive reals: yes.
For the real line: already no!
For the complex plane? NO!
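A tiny numeric illustration of this (my own, with arbitrary sample points):

Code:
import cmath

for z in [5, 15, -15, 15j]:
    z = complex(z)
    print(z, cmath.exp(z) / (2 * cmath.sinh(z)))
# close to 1 at z = 5 and 15; near 0 at z = -15; not close to 1 (and oscillating) on the imaginary axis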

In that respect, entire functions are NEVER asymptotic to one another, at best close or locally so.
A similar thing happens with Riemann surfaces.
And thus of course certainly for standard functions.

Let A(z) and B(z) be entire.

Then A(z) - B(z) is also entire.

But nonconstant entire functions are never bounded, so if A - B stays bounded on the whole plane it must be constant (Liouville).

QED for entire functions.

Analogous proofs exist for generalizations.

In fact, if exp(z) and 2sinh(z) were asymptotic over the whole complex plane, the solutions would be trivially analytic.
SO in a way, not being asymptotic everywhere is the weak spot of the 2sinh method.

Notice that a fake of an entire function is also entire.

SO there are serious limits.

---

Consider a set of elementary functions.
Consider their sums, products, compositions and integrals, APPLIED A FINITE NUMBER OF TIMES.

This can never give a growth rate of exp^[0.5].
Indeed, as you say (locally, because of what I WROTE above): functions are only asymptotic to themselves.
(A fixpoint at oo can be a misleading idea!!)

Your intuition is very correct!!

Another way to look at it is to say that you need infinitely many correcting terms.
But since they do not cancel, you get infinite series that are not elementary functions.
Or likewise with products or integrals etc.

Although integrals often have no closed form, they are asymptotic to their integrands and to a finite set of functions combined by finitely many sums, products and compositions.

This also makes sense because the Taylor coefficients can be given by an integral approximation, so when the coefficients have closed forms we never get exp^[0.5].
Fake function theory confirms this.
And other series expansions must have a similar property.

IN a way the derivatives are also asymptotics of the function combined (by sums, products, compositions) with standard functions, which makes sense considering fake function theory, what I wrote about integrals, etc.
And therefore that is reflected in the Taylor series.

Your use of big and small o works just as well.

The answers to your questions are all yes, LOCALLY.
And keep in mind there are many half-iterates; they need to be matched to be asymptotics, of course.

---

Second, the tail of your Taylor series is related to the singularity and the radius bound it approaches.

So if you have a log at Z: YOUR SERIES WILL START TO APPROXIMATE (in the higher coefficients) THE TAYLOR SERIES OF A LOG AT Z, ALSO GIVING THE RADIUS, AT MOST THE DISTANCE TO THE POINT Z.
UNLESS ANOTHER ISSUE OCCURS CLOSER (another singularity or pole).
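A small illustration of that coefficient behaviour (my own toy example, a log singularity at Z = 2 plus an entire part):

Code:
import math

Z = 2.0
N = 40

# f(z) = log(1 - z/Z) + exp(z): the entire part contributes 1/n!, which quickly
# becomes negligible; the log part contributes a_n = -1/(n * Z^n) for n >= 1.
a = [(-1.0 / (n * Z**n)) + 1.0 / math.factorial(n) for n in range(1, N + 1)]

for n in [10, 20, 35]:
    # a[n-1] is a_n; the ratio a_{n-1}/a_n tends to Z (here 2), times n/(n-1)
    print(n, a[n - 1], -1.0 / (n * Z**n), a[n - 2] / a[n - 1])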


IN that sense the Taylor series loses information and beauty, imo.
That was also an aesthetic reason why I liked to introduce fake function theory.

I HOPE this was a somewhat satisfying answer.

Sorry if I was not complete or formal.
But that would be too long and complicated (differential Galois theory type ideas are related).
You get the idea now, I think.

regards

tommy1729

Tom Marcel Raes
ps: Liouvillian functions are something you might want to look at.

regards

tommy1729
While thinking about fake exp(x^a) for various a > 1, I had many ideas.

Of course, one way is to do this:

1) fake(exp(x^a)) = exp( fake(x^a) )
2) fake(x^a) = [fake(exp(x) x^a)] exp(-x)   (this has already been discussed here for a = 1/2!! with remarkable results!)
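As a very rough numeric sketch of step 2), here is the crude coefficient recipe a_n = min_{x>0} f(x)/x^n applied to f(x) = exp(x) * x^(1/4); this is my own simplified stand-in, not the refined fake constructions used in this thread:

Code:
import numpy as np
from scipy.optimize import minimize_scalar

def f(x):
    return np.exp(x) * x**0.25

def a(n):
    # the minimizer of f(x)/x^n sits near x = n - 1/4 for n >= 1
    res = minimize_scalar(lambda x: f(x) / x**n, bounds=(1e-9, 10.0*n + 10.0), method="bounded")
    return res.fun

N = 60
coeffs = [a(n) for n in range(N)]

for x in [1.0, 4.0, 16.0]:
    fake_quarter_root = sum(c * x**n for n, c in enumerate(coeffs)) * np.exp(-x)
    # overshoots x**0.25 by a slowly varying factor (roughly sqrt(2*pi*x) for this crude bound)
    print(x, fake_quarter_root, x**0.25)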

But I felt I was missing something.

So I considered lim_{x -> +oo} exp(x^a)/f(x) = *constant*.

I had some ideas, but I still felt I was missing some, so I went online and found this imo interesting post:

https://math.stackexchange.com/questions...infty?rq=1

---

While trying to improve the gaussian method (not the gaussian method of fake function theory! but tetration by f(s+1) = exp(t(s) f(s-1)) as explained in a recent thread),
I felt the need for a kind of sqrt but with "quadratic symmetry" ... hence a fake( (x^2)^(1/4) ).

This relates to - similar to the case fake( sqrt(x) ) discussed before here - :

[sum_n   x^n / gamma(n-1/4)] exp(- x)

( as approximation to [ fake (exp(x) x^(1/4)) ] exp(- x) ) 

and then simply replace x by x^2 :

fake ( (x^2)^(1/4) ) = [sum_n   x^(2n) / gamma(n-1/4)] exp(- x^2)

or something like that.

Notice that, assuming all zeros of fake( x^(1/4) ) are negative reals, then by the construction above (replacing x by x^2) the zeros are strictly imaginary!

Under the assumption that this fake is very close, not only for real x but also for arg(z) close to the imaginary line, this might improve the erf used in the gaussian method ... as follows:

Let V(s) be a function that goes to 0 much, much faster than exp(-s) for re(s) > 1.
Let F(s) be fake( (s^2)^(1/4) ).

Then P(s) = integral of V(t^2) dt from 0 to s is erf-like.

Now (1 + P(F(s)))/2 is an improvement to t(s) and hence to the gaussian method.

see also :

 https://math.stackexchange.com/questions...-i-exp-a-4


Notice that we have V(s) = gamma(-s - 1, 1) as a slight improvement to the gaussian method
(thereby combining my gamma idea with the gaussian erf idea).
But that is why I said "much, much faster to 0"; more like exp(-r^4) speed... because we want to use a fake sqrt ( F(s) )!!
If the function is not fast enough, the sqrt will make it slower than the gaussian after all!

Hence the question at MSE.

I'm aware that maybe such a fast function does not even exist.

Special thanks to my friend mick for talking about it and posting the question.

regards

tommy1729
...

see here for the gaussian method with f(s+1) = exp( t(s) f(s) ) :

https://math.eretrandre.org/tetrationfor...p?tid=1339
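For completeness, a minimal sketch of that recursion (my own Python, assuming the standard normalization t(s) = (1 + erf(s))/2, and assuming f ≈ 1 far to the left where t(s) is essentially 0):

Code:
import math

def t(s):
    return (1 + math.erf(s)) / 2

f = 1.0      # assumed asymptotic value far to the left
s = -30
while s < 4:
    f = math.exp(t(s) * f)   # f(s+1) = exp( t(s) f(s) )
    s += 1
    print(s, f)              # grows tetration-like once t(s) gets close to 1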

regards

tommy1729