Tetration Forum

Full Version: Searching for an asymptotic to exp[0.5]
Pages: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21
(06/30/2014, 11:56 PM)tommy1729 Wrote:
(06/30/2014, 03:21 PM)JmsNxn Wrote:
(06/30/2014, 12:56 AM)tommy1729 Wrote: Well, not from Hadamard alone, but from the asymptotics of the zeros, yes.

Maybe I'm misinterpreting your response, but a huge lemma that Hadamard uses in his proof is that if

|f(z)| <= exp( A |z|^p ),

then NECESSARILY (by doing some magic with Jensen's formula and some other neat complex analysis), if a_n are the zeros of f, then sum 1/|a_n|^(p+e) converges for every e > 0.

Very interesting.

Well, I was simply saying that it did not follow from what you wrote alone.
Hadamard's proof is very interesting.

By the way the constant C is simply f(0).

Let t_n be the absolute values of the zeros.
So if we show that prod (1 + z/t_n) grows slower than exp(|z|^a) for every real a > 0, then we have

fake exp^[0.5] = f(0) prod (1 + z/t_n).

And I believe that to be the case.

The logic behind that is simply this :

f(0) exp(Az) prod [ (1 + z/t_n) exp(-z/t_n) ]

Now the exp terms in the prod are just there to make the prod convergent, and are only needed when that is not yet the case.
But here the prod already converges, so we rewrite :

f(0) exp(Az) prod [ (1 + z/t_n) exp(-z/t_n) ] =
f(0) exp((A-B)z) prod [ (1 + z/t_n) ]

where B = 1/t_1 + 1/t_2 + ...

Since prod (1 + z/t_n) grows slower than exp(|z|^a) for every real a > 0, we must have

A - B = 0

=> f(z) = f(0) prod [ (1 + z/t_n) ]

And the sky is blue :)
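tommy's order-zero claim, that prod (1 + z/t_n) grows far slower than exp(|z|^a) when the t_n grow fast, can be sanity-checked numerically. A minimal Python sketch, assuming hypothetical zero moduli t_n = e^n (my arbitrary stand-in, not the actual moduli of the fake half iterate):

```python
import math

def log_product(x, n_terms=2000):
    # log of prod_{n=1..N} (1 + x / t_n) with hypothetical moduli t_n = e^n
    return sum(math.log1p(x * math.exp(-n)) for n in range(1, n_terms + 1))

x = math.exp(200.0)   # a large positive argument
lp = log_product(x)   # grows roughly like (log x)^2 / 2
# lp is around 2e4, while x^0.1 = exp(20) is around 5e8:
# the product grows much slower than exp(x^a), i.e. order zero
```

The faster the t_n grow, the slower the product grows, which is the mechanism behind the order-zero conclusion above.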



That's exactly what I was hoping for! I wasn't sure if you could say it was order zero though; that makes a lot of sense.
Let's assume t_n - the absolute value of the zero a_n - eventually(!) grows faster than n^A for every positive real A.

then if exp(B z) ~ (1+z/t_1)(1+z/t_2)...

We have either B positive, negative or 0.
Now positive and negative give the same order, so let's assume B is positive.

Then we consider the order by taking Re(z) >> 1.

So we are allowed to consider exp(B z) ~ (1+z/t_1)(1+z/t_2)...
and then just take the log of it
( since B > 0 and Re(z) >> 1, every factor (1 + z/t_n) is far from 0 )

B z + C ~ ln(1 + z/t_1) + ln(1 + z/t_2) + ...

Now replace t_n with n^A to get an upper bound

B z + C ~ ln(1 + z/1^A) + ln(1 + z/2^A) + ...

Now take the derivative with respect to z :

B + o(D) ~ 1 / (1^A + z) + 1 / (2^A + z) + ...

However now take the limit Re(z) -> oo on the right side and replace A with 2 + A^2.

Then we get B + o(D) = 0.

So B goes to 0.
And thus, if t_n > n^A for every A, then we have order 0.
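The limiting step above, where the right-hand side sum 1/(n^A + z) tends to 0 as Re(z) -> oo once A > 1, is easy to check numerically. A rough sketch in Python, with an arbitrary choice A = 3 and a finite truncation of the sum:

```python
def rhs_sum(z, A=3, n_terms=100000):
    # truncated sum_{n>=1} 1 / (n^A + z), the right-hand side of the
    # derivative estimate above, with hypothetical moduli t_n = n^A
    return sum(1.0 / (n ** A + z) for n in range(1, n_terms + 1))

values = [rhs_sum(10.0 ** k) for k in (1, 3, 6)]
# the sum shrinks toward 0 as z grows, which forces B = 0
```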

That settles the case, if the assumption t_n > n^A is true.

However, I'm not sure if sheldon or anyone else has a proof of that assumption WITHOUT assumptions about the Hadamard product for the fake exp^[1/2] ... to avoid circular reasoning.

It seems the attention, as far as product expansions are concerned, is now 100% on the zeros.



(07/01/2014, 10:10 PM)tommy1729 Wrote: ...
That settles the case, if the assumption t_n > n^A is true.

However, I'm not sure if sheldon or anyone else has a proof of that assumption WITHOUT assumptions about the Hadamard product for the fake exp^[1/2] ... to avoid circular reasoning.

It seems the attention, as far as product expansions are concerned, is now 100% on the zeros.
edit: fixed lots of typos, getting back into "half iterate" mode; sorry. The zeros of the entire asymptotic of the half iterate of exp lie on the negative real axis. They sit approximately where the Kneser half iterate, continued for negative values of z by following the branch counterclockwise around and above the fixed point towards the negative real axis (where it is complex valued instead of real valued), takes a value whose exponential has real part zero.
approximate zeros of the entire half iterate, for negative values of z
log(-z)=log(|z|)+pi i
if Re(exp(z)) = 0, then Im(z) = (n + 0.5) pi, for n >= 0
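The second condition is just the statement that exp(z) is purely imaginary on the horizontal lines Im(z) = (n + 0.5) pi, since Re(exp(z)) = exp(Re z) * cos(Im z). A quick check with Python's cmath (the real part 3.776 is an arbitrary sample value):

```python
import cmath, math

# Re(exp(z)) = exp(Re z) * cos(Im z) vanishes exactly when
# Im(z) = (n + 0.5) * pi; verify at a few arbitrary sample points
for n in range(4):
    z = complex(3.776, (n + 0.5) * math.pi)
    # real part is zero up to floating-point roundoff
    assert abs(cmath.exp(z).real) < 1e-9 * abs(cmath.exp(z))
```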

So understanding how the zeros of the entire half iterate behave on the negative axis turns out to be equivalent to understanding how the half iterate of (log(|z|) + pi i) behaves. This approximation gets arbitrarily accurate as z gets larger, based on empirical results. Also, using this branch of the Kneser half iterate for the negative real axis, we can compare the entire half iterate with the Kneser half iterate, for negative values of z only.

This ratio also gets arbitrarily accurate, approaching arbitrarily close to 2. This might be justifiable/provable based on the algorithm used for version V, my best entire half iterate approximation.

Here is a sample calculation of the approximate fourth zero of the entire half iterate, compared with the actual zero of the entire asymptotic. The approximation says we want imag(halfk(log(z))) = 3.5 pi, where halfk is the Kneser half iterate. This holds for z ~= -43.661481, where halfk(log(-43.66148105371)) = halfk(3.776466272986 + pi i) =
3.390550517856 + 10.99557428756i, whose imaginary part is 3.5 pi. The actual fourth "zero" of the entire half iterate is at z = -43.661461, just a tiny bit away from this approximation.

For the 10th zero, the approximation gives z=-2623.212352003, which is accurate to 1.5E-14. I will repost an updated list of these zeros; the old list in post#28 uses a less accurate asymptotic half iterate.
I have to come back to sheldon's ideas when I find the time.

But I wanted to comment quickly that I suspect the truth of a generalized Carlson's theorem.

Carlson's theorem talks about f(integer) = 0, but what if we relax this and say the number of zeros on the real line grows like O(a |z| + b) ?
In other words, linearly ?
Let's call that " strong Carlson ".

Then I assume :

( for lack of knowledge of it already being named )

Tommy-Carlson conjecture :

Let |f(z)| < exp( A |z|^B )
for real A > 0 and some B = 1/m for a positive integer m.

If the number of zeros of f(z) grows like O( p(|z|) ), where p is a real polynomial of degree m, then f(z) is constant.

Now, since we know that exp^[0.5](z) < exp( A |z|^B ) for sufficiently large A, by combining the previous posts we can conclude that the product expansion for the fake exp^[0.5] is indeed

f(0) ( 1 - z/a_1 ) ( 1 - z/a_2 ) ...

Notice that [1] := exp( A |z^B| ) resembles [2] := exp( A |z|^B ).
Because [1] seems to follow from the " strong Carlson " and the resemblance with [2], I suspect we can say :

" Tommy-Carlson theorem "

In other words the conjecture is probably true !

Even stronger : it is true for EVERY fake exp^[0.5], not just the one we used here ... but for instance also the one with alternating signs in the derivatives ( see posts 35, 38 ) !
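For context, classical Carlson is sharp at its boundary: sin(pi z) vanishes at every integer (linearly many zeros in |z| <= R) yet is not identically zero, because its exponential type is exactly pi. Any strengthening along the lines above has to stay clear of this case. A small numeric illustration of the bound |sin(pi z)| <= exp(pi |z|):

```python
import cmath, math

def bound_holds(R, samples=720):
    # check |sin(pi z)| <= exp(pi |z|) on the circle |z| = R;
    # |sin(pi z)| <= exp(pi |Im z|) <= exp(pi |z|), so this always passes
    for k in range(samples):
        z = R * cmath.exp(2j * math.pi * k / samples)
        if abs(cmath.sin(math.pi * z)) > math.exp(math.pi * abs(z)):
            return False
    return True

# sin(pi z) has 2*floor(R) + 1 integer zeros in |z| <= R (R not an integer):
# a linearly growing zero count, right at the edge of Carlson's hypothesis
```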


The final chapter on fake exp^[0.5] as a Hadamard product is the consideration of the zeros off the real line.

That's the tricky part, and it's not immediately clear how to take care of it.
Perhaps a further generalization of Carlson ?

Note that I do NOT intend to say we already understand the zeros on the real line completely.

But I wonder what would happen if we " forget " these zeros in our product ? What kind of function would we get ?
I have some ideas, but it's mainly just handwaving at the moment.


Visually satisfying would be a video showing fake exp^[1/k](z)^[k] for k slowly growing from 0 to 99, the analogue of sheldon's pictures.

Talking about that picture of fake exp^[0.5](z)^[2] - exp(z) : it seems that for z near the negative line the function grows faster than polynomially.

I can't help comparing it to 2sinh^[0.5](z), where we got that 2sinh^[0.5](z)^[2] - exp(z) grows like exp(|z|) near the negative real line.

There seem to be many ideas coming from these types of comparisons.
For instance some type of conservation law saying that if an approximation is better somewhere, it must be worse somewhere else, and you cannot have one that is worse everywhere (or better everywhere).

Such ideas seem complex right now.


I think I'm very close to the ideal definition for the entire half iterate, using Kneser's sexp(z). This version should also be robust enough to pin down all the zeros as well, for the Weierstrass infinite product form.
(05/29/2014, 11:09 PM)sheldonison Wrote: Well, version V works a lot better, but I was hoping for something with more theoretical power. So there's this discontinuity in Kneser's half iterate at the negative real axis. Why not get rid of it, with some sort of mapping that still converges to the half iterate as real(z) increases? Well, I got rid of most of it; not all. Here halfk refers to Kneser's half iterate...

Here is the algorithm for the version V half iterate approximation.

The new version, call it version VI, says that as |z| gets arbitrarily large, there is some optimal number of terms, k, for the following series, where k gets arbitrarily large as |z| gets arbitrarily large, and where the error term gets arbitrarily small.

We choose k to give the smallest error term. For various values of k, the error term will decrease for a while, before hitting a minimum, and then start increasing. The previous algorithm, half(V), arbitrarily used k=1, which is pretty darn accurate all by itself.

Now, take the Cauchy circle integral at radius r=|z|, to define all of the Taylor series coefficients for the entire half iterate. The conjecture is that as z gets arbitrarily large, all of the Taylor series coefficients will converge, and in fact, the Taylor series I posted for version V, is pretty much the same as you would get for version VI.

For example, if we were interested in calculating the 100th Taylor series coefficient for the previous version V, we would have used a radius ~= exp(24.72). Version V has an imaginary offset of -6iE143 at the negative number -exp(24.72). At the positive number exp(24.72), the value is ~= 2E334, so the 100th derivative can already be calculated accurate to 190 decimal digits using version V, in spite of the large imaginary error term. But version VI allows calculating all of the first 100 derivatives to a theoretical precision of 1000 decimal digits at this radius. For version VI, at a radius of exp(24.72), the ideal value of k to use is k=7, and the imaginary offset at -exp(24.72) shrinks to -3iE-1039, which is a truly tiny number. This extra precision using k=7 allows calculating all of the first 100 Taylor series coefficients accurate to nearly 1000 digits. Not that version V was all that inaccurate to begin with; the lowest precision Taylor series coefficient for version V was a0, which was accurate to around 14 or 15 decimal digits.
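The Cauchy circle integral sheldon describes can be sketched in a few lines: sample f on the circle |z| = r and take a discrete Fourier sum. A toy version in Python, using exp as a stand-in for the half iterate (its coefficients 1/n! are easy to verify; this is not the forum's actual version V/VI code):

```python
import cmath, math

def taylor_coeffs(f, r, n_coeffs, samples=256):
    # a_n = (1 / 2 pi i) * integral of f(z) / z^(n+1) dz over |z| = r,
    # approximated by the trapezoid rule (exact up to tiny aliasing error)
    out = []
    for n in range(n_coeffs):
        s = sum(f(r * cmath.exp(2j * math.pi * k / samples))
                * cmath.exp(-2j * math.pi * k * n / samples)
                for k in range(samples))
        out.append(s / (samples * r ** n))
    return out

a = taylor_coeffs(cmath.exp, 2.0, 8)
# a[n] is close to 1/n!, e.g. a[3] ~ 1/6
```

The precision issue sheldon mentions shows up here too: recovering low-order coefficients at a large radius r divides a huge sampled value by r^n, which is exactly where extended precision earns its keep.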

Actually, the function has a Laurent series, but we throw away the Laurent coefficients as unhelpful for the entire half iterate approximation. The Laurent series needs to be taken into account when comparing the half(VI) approximation with the entire function generated using the Taylor series generated from that approximation.

Also, as a practical matter, it is difficult to calculate the lower Taylor series coefficients at the larger radii, due to precision errors. But if infinite precision were used, they would converge (This is a conjecture), so that VI defines all of the Taylor series coefficients for the entire half iterate asymptotic exactly.
Hi, if you need some code that uses "infinite" precision, then C/C++ with MPFR can be used here - you can easily have 1M digits of precision; only RAM is your limit.

I'm very interested in tetration, but not good enough at maths to understand it all.
You can send me pseudocode - just the formulas that need to be computed, in which order - and I can write C/C++ code that will do it. I'm a software developer - this is no problem for me.

Just write a mail to lukaszgryglicki@o2.pl if you want.
(05/16/2014, 07:27 AM)sheldonison Wrote: The next graph is f(z), the asymptotic half iterate itself, using the same grid coordinates. You can see the zeros of f on the real axis, as black dots, at -0.71, -4.26, and -15.21. The pattern goes on forever, as f grows along the negative real axis, oscillating between positive and negative.

Seeing the plot reminded me of 3 ideas I had.

Basically I find the interpretation in terms of polar coordinates often more interesting.

This leads to 3 ideas and a remark.

Remark : you say " oscillating between positive and negative ".

Now I know that if, along a closed Jordan curve, the argument of f winds through all angles n times, then f has n zeros (counted with multiplicity) inside that closed path.

So that suggests that all real roots have multiplicity 1 - and that you are in possession of a proof ?

Also, you have claimed that there are no zeros off the real line ?

Does that have a proof ?

I think it should be provable for Re(z) > 0.
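tommy's winding-number remark is the argument principle, and it can be turned into a tiny numeric zero counter: track the phase of f around a circle and divide the total winding by 2 pi. A sketch in Python, tested on a polynomial with known zeros (not on the fake half iterate itself):

```python
import cmath

def count_zeros(f, center, radius, steps=20000):
    # argument principle: the total winding of arg f(z) along the circle
    # equals 2 pi times the number of zeros inside (with multiplicity);
    # steps must be fine enough that each phase increment stays below pi
    total = 0.0
    prev = f(center + radius)
    for k in range(1, steps + 1):
        z = center + radius * cmath.exp(2j * cmath.pi * k / steps)
        cur = f(z)
        total += cmath.phase(cur / prev)   # phase increment in (-pi, pi]
        prev = cur
    return round(total / (2 * cmath.pi))

p = lambda z: (z - 1) * (z + 2) * (z - 3j)   # zeros at 1, -2, 3i
```

The same routine, pointed at an approximation of the fake half iterate on circles in the right half-plane, would give numeric evidence for the "no zeros for Re(z) >> 0" claim.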

As for these 3 ideas, I consider f(z) somewhat as between z and exp(z), but also as in between a line and a circle.

Let me explain : if we consider id(z) = z, then |id(z)| has contour circles of absolute value |z|.

exp(z) has vertical contour lines of absolute value.

The idea is that for Re(z) >> 0, f(z) slightly bends those circles towards the lines, and never crosses the lines by doing so.

Also the contours never intersect, just as they don't for id(z) and exp(z).

So we have, for a >> 0, |f(a)| <= |f(a+bi)|.
That also implies there are no zeros of f when a >> 0 !!

So far the absolute value and the zeros of f.

What else seems logical ?

Also for a >> 0 :
idea 2 : while b increases, arg(f(a+bi)) takes on all values infinitely often.
The number of times arg(f(a+bi)) takes on a particular angle (for fixed a) is of the form O( x0 + x1 b + x2 f(x3 b) + x4 exp(x5 b) ), where the x_i are constants.

This idea also comes from considering the bending of the circles towards the lines, and from the periodicity of exp. And also from the winding, because f has an infinite number of zeros in the left half-plane.

In particular, the integral of f'(z)/f(z) over the path from -oo i to +oo i is interesting.
Unfortunately this is not a zeta function, nor a general Dirichlet series ?

Speaking of Dirichlet series :

idea 3 : from the previous ideas it seems f(z) cannot be of the form

ln(a0 + a1 (c1)^z + a2 (c2)^z + ...)

for Re(z) >> 0 and all a_i , ln(c_i) > 0.

If however the ideas about the absolute value are wrong, it is possible !

Those are the ideas I had.
It seems some theorems from complex analysis and root-finding algorithms might be needed.
Although, as always, the functions considered are nontrivial, nonelementary etc., general theorems might still work.

Now for the truncated Taylor series ( polynomials ) we get that the absolute value has a simple structure for large z ; we approach a circle.
Same for the truncated Hadamard product, since that is also a polynomial.
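The "approach a circle" claim follows from p(z) ~ a_n z^n for large |z|: the leading term dominates, so |p| is nearly constant on big circles. A quick numeric illustration with an arbitrary cubic (my example, not one from the thread):

```python
import cmath, math

def relative_modulus_variation(coeffs, R, samples=720):
    # spread of |p(z)| over the circle |z| = R, relative to its maximum;
    # tends to 0 as R grows because the leading term dominates
    vals = []
    for k in range(samples):
        z = R * cmath.exp(2j * math.pi * k / samples)
        vals.append(abs(sum(c * z ** i for i, c in enumerate(coeffs))))
    return (max(vals) - min(vals)) / max(vals)

p = [1, 1, 1, 1]   # 1 + z + z^2 + z^3, an arbitrary example
small_R = relative_modulus_variation(p, 10)
big_R = relative_modulus_variation(p, 1000)
# big_R is roughly 0.002: the contour at R = 1000 is already nearly a circle
```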

So, should we study polynomials and/or small z together with the absolute value ?

Or is that the wrong way ?

Another issue is that polynomials have zeros, while we might want to consider f where it is not 0. That might affect the quality of the approximation of the absolute value.

Final remark :

Isn't there some theorem that says all zeros have negative real part, for polynomials with positive coefficients or such ?

I seem to recall something like that.


A competitor for the 2sinh method is of course


If we take more ln and exp iterations we also end up with a C^oo function.
But it seems to converge faster.


Let f be the fake half iterate, i.e. the fake exp^[0.5].

About post nr 48 , I was thinking about stable polynomials and stable Taylor series.

Most theorems about roots however are about simple polynomials or relate a function with its derivative.

Notice that a function like 1 + x + x^2 + x^3 does have roots with a nonnegative real part ( the pair +- i ).
Also, 2 x^3 + 5 x^2 + 7 x + 31 = 0 has roots with strictly positive real parts. Hence the situation is not trivial.
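Both counterexamples can be verified numerically. A self-contained root finder (a Durand-Kerner iteration, my choice here, not something from the thread) confirms that 2x^3 + 5x^2 + 7x + 31 has a complex-conjugate pair with positive real part despite all coefficients being positive:

```python
import cmath

def durand_kerner(coeffs, iters=200):
    # simultaneous root iteration for coeffs[0]*x^n + ... + coeffs[-1]
    n = len(coeffs) - 1
    def p(x):
        return sum(c * x ** (n - i) for i, c in enumerate(coeffs))
    roots = [(0.4 + 0.9j) ** k for k in range(n)]   # standard starting points
    for _ in range(iters):
        for i in range(n):
            num = p(roots[i]) / coeffs[0]
            den = 1.0
            for j in range(n):
                if j != i:
                    den *= roots[i] - roots[j]
            roots[i] -= num / den
    return roots

rts = durand_kerner([2, 5, 7, 31])
# one real root near -3.03, plus a conjugate pair with real part near +0.27
```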

Certainly info about a function's derivative affects the position of its zeros, but that info is often about the zeros of the derivative, and therefore almost circular reasoning.

Now if one could say there is always a positive integer M such that :

| a_M x^M | > |f(x)| / 2,

where a_M is the M-th Taylor coefficient,

that would be helpful. But probably that is not the case here ??

At least not when x has the absolute value of one of the zeros on the negative line !!

Another way might be to show | f(x + yi) | > |f(x)| for x > 0.
But that is similar to a previous post.

I even considered the " pseudoinvariants " f(ln(f)+1) and f(ln(f)+i), but with no success so far.

I'm running out of ideas.
These tetration-type functions seem immune to many polynomial ideas, despite the Hadamard product form being one.

It's not immediately obvious whether theorems about polynomials can be extended to entire functions Q_n(z) that satisfy :
|Q_n(z)| < | C exp(z) |.

I always liked Jensen's theorem about the roots of a derivative.

