Migration of inflection points in y = b # x, for e^(1/e) < b < +oo
#1
Dearest Administrator and Dear Friends!

Concerning the behaviour of the tetrational function y = b # x in the domain of bases b >= e^(1/e), i.e. b >= Eta (;->), as described by the various models and approximations discussed in this Forum, I think it would be interesting to start putting together an overall general picture. In fact, in this domain, y = b # x is (or seems to be) a single-valued, always increasing, smooth "function". Its general behaviour seems to show a first derivative y' with one minimum and, accordingly, a second derivative y" with one zero, corresponding to one inflection point of y.

Moreover, it can be shown that, for bases b "near to" (and greater than) e^(1/e), the "critical path" can be chosen to be approximately linear (see the attached notes). In particular, for exactly b = Eta = e^(1/e), this approximation can be made with any required precision.

In this respect, I should like to draw your attention to the fact that, in all the most effective implementations, approximations or simulations of y = b # x (b-tetra-x) presented in the Forum, the second derivative y" has one zero (y" = 0) at a value of x that depends strongly on the base b.

In particular, I qualitatively detected that we have y" = 0:
- for b = e^(1/e) = 1.44466.., when x -> +oo
- for b = 1.47 when x -> 31 (about)
- for b = 2 at x = 0
- for b = e at x = - 0.5 (about?)
- for b > 10 when x -> - 1

The attached notes are only meant as a provocation; they were prepared to invite the participants to a deeper analysis of these interesting aspects of the problem (if this has not already been done).

To be more precise (and ... serious), I should like to see, with your collaboration, whether we can obtain the "exact" coordinates of the following points concerning y = e # x (e-tetra-x). In fact, looking at the y = e # x function as described by Andrew, we can see that:
- y" = 0 for x = - 0.5 (perhaps ..., to be verified); or:
- y" = 0 for x = - 0.4446678.. (...who knows!)
- y'(-1) = y'(0) (this can be demonstrated; see the short derivation after this list);
- y'(-1) = y'(0) > 0 (this needs a demonstration);
- y'(-0.5) < 0 (idem, as before).
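
For the record, the third item in the list above follows from nothing more than the functional equation and the normalizations e # 0 = 1 and e # (-1) = 0:

\( y(x+1) = e^{\,y(x)} \;\Longrightarrow\; y'(x+1) = e^{\,y(x)}\, y'(x) \)

\( \text{at } x = -1:\quad y'(0) = e^{\,y(-1)}\, y'(-1) = e^{0}\, y'(-1) = y'(-1). \)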

The same observations on Andydude's plots suggest that, concerning y = 2 # x (2-tetra-x), we might have:
- y" = 0 for x = 0 (perhaps again ..., to be verified).

Can somebody confirm or infirm (!) these observations/conjectures of mine? They may look like superficial observations, but they are not: they may suggest some very interesting new research strategies.

Thank you for your interest.

GFR


Attached Files
.pdf   Migration of flexion points.pdf (Size: 154.68 KB / Downloads: 773)
#2
Some additional ideas:

We can observe, in the various tetrational plots obtained so far showing y = b # x together with y", a kind of strong correlation between y(x) and its second derivative. Would it be possible to try to identify a "... simple" relationship between the two? Something like:
F[y(x), y"(x)] = 0.
In this case, we might end up with the identification of a "mother" differential equation, the solutions of which, for the different bases, would give the various appropriate tetrational expressions.
A very ambitious programme! Perhaps the differential equation does not exist, perhaps it exists but is difficult to integrate, perhaps the problem is very simple. Who knows!?! I don't know how to proceed.
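
For reference, one exact relation in this direction does follow immediately from the functional equation y(x+1) = b^y(x); it links y, y' and y" at arguments one unit apart rather than at a single point, so it is not yet the sought-after F[y, y"] = 0:

\( y'(x+1) = (\ln b)\, y(x+1)\, y'(x) \)

\( y''(x+1) = (\ln b)\, y(x+1)\left[\, y''(x) + (\ln b)\, y'(x)^2 \,\right] \)

In particular, if y"(x) >= 0 at some point x (with y'(x) > 0 and b > 1), then the second formula gives y"(x+1) > 0 as well.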

GFR
#3
Well, I have constructed a function to find the values that GFR is looking for. The function is defined such that \( y''(f(b)) = 0 \) where \( y(x) = {}^{x}b \). I have attached a plot of this function for the first few approximations (n=5..9) of tetration using the natural/inverse-slog tetration method. It looks as though this function is very well-behaved for \( b > e \), but is quite slow to converge for lower bases. So my guess is that GFR's conjecture that f(2)=0 is probably not true.

Two bases for which this function seems to converge quickly are \( (e, \pi) \), so I have included numerical data for these. I have been meaning to implement my own version of Jay's accelerated natural slog, but until I do that I can only use very small approximations.

\(
\begin{tabular}{c|ccc}
n & f_n(2) & f_n(e) & f_n(\pi) \\
\hline
5 & -0.2351 & -0.5129 & -0.6231 \\
6 & -0.1167 & -0.5037 & -0.6276 \\
7 & -0.0387 & -0.5012 & -0.6282 \\
8 & -0.0056 & -0.5108 & -0.6337 \\
9 & +0.0154 & -0.5170 & -0.6357
\end{tabular}
\)
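
Just to make the construction concrete, here is a rough Python sketch of how I read the "natural / inverse-slog" method used above (my own reconstruction, not Andrew's actual code): solve the truncated linear system coming from slog_b(b^z) = slog_b(z) + 1 for the Taylor coefficients of slog at 0, invert the truncated series numerically on [-1, 0], extend with the functional equation, and locate the zero of a central second difference. My truncation order n need not line up with Andrew's n, so the numbers should only qualitatively resemble the table.

# Rough sketch of the natural / inverse-slog construction and of the inflection search
# (a reconstruction for illustration only; truncation order, bracket and step are arbitrary).
import numpy as np
from math import log, factorial
from scipy.optimize import brentq

def slog_coeffs(b, n):
    """Truncated Taylor coefficients c_1..c_n of slog_b at 0, from slog(b^z) = slog(z) + 1."""
    L = log(b)
    A = np.zeros((n, n))
    rhs = np.zeros(n)
    rhs[0] = 1.0                          # constant term of the Abel equation
    for j in range(n):                    # match the coefficient of z^j, j = 0..n-1
        for k in range(1, n + 1):         # unknown c_k
            A[j, k - 1] = (k * L) ** j / factorial(j) - (1.0 if j == k else 0.0)
    return np.linalg.solve(A, rhs)

def make_tet(b, n):
    c = slog_coeffs(b, n)
    def slog(z):                          # slog_b(z) ~ -1 + c_1 z + ... + c_n z^n
        return -1.0 + sum(ck * z ** (k + 1) for k, ck in enumerate(c))
    def tet(x):                           # b # x, reduced to [-1, 0] by the functional equation
        if x > 0:
            return b ** tet(x - 1)
        if x < -1:
            return log(tet(x + 1)) / log(b)
        return brentq(lambda z: slog(z) - x, -1e-9, 1.0 + 1e-9)
    return tet

def inflection(b, n, lo=-0.95, hi=0.95, h=1e-3):
    """Approximate the x with y'' = 0, via a central second difference and bisection."""
    tet = make_tet(b, n)
    d2 = lambda x: tet(x + h) - 2.0 * tet(x) + tet(x - h)
    return brentq(d2, lo, hi)

for n in range(5, 10):
    print(n, inflection(np.e, n))         # drifts around -0.5, roughly like the table above

For bases where the zero sits close to the seam at x = 0 (such as b = 2), this crude approach is much less reliable, in line with the slow convergence reported above.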

@GFR
Finding such a differential equation would be amazing! But maybe we should model it after the differential equation of exponentiation:
\( \frac{d}{dt}(x^y) = x^{y-1}\left(y \frac{dx}{dt} + x \ln(x) \frac{dy}{dt}\right) \)
although I think your idea has a better chance of working.

Andrew Robbins


Attached Files
.pdf   inflection_zeros.pdf (Size: 20.66 KB / Downloads: 705)
#4
andydude Wrote: [...]
(About the inflection zeros, ... or ... flexion!? I am perturbed by the French language. GFR comment.)
It looks as though this function is very well-behaved for \( b > e \), but is quite slow to converge for lower bases. So my guess is that GFR's conjecture that f(2)=0 is probably not true.
Two bases for which this function seems to converge quickly are \( (e, \pi) \), so I have included numerical data for these. I have been meaning to implement my own version of Jay's accelerated natural slog, but until I do that I can only use very small approximations.
[...]

(About finding a possible "mother" differential equation. GFR comment.)
@GFR
Finding such a differential equation would be amazing! But maybe we should model it after the differential equation of exponentiation:
\( \frac{d}{dt}(x^y) = x^{y-1}\left(y \frac{dx}{dt} + x \ln(x) \frac{dy}{dt}\right) \)
although I think your idea has a better chance of working.

Andrew Robbins

Sorry for the flexion/inflection business. What a pity, but, unfortunately, I think you are right!

Thank you for your second observation. Perhaps somebody will have some good new ideas about that. My ... animal instinct suggests that the solution of the problem of extending tetration to the reals includes this phantom equation, together with the implementation of a continuous iteration of the exponential function. Maybe they are two aspects of the same problem.

GFR
#5
andydude Wrote:Finding such a differential equation would be amazing! But maybe we should model it after the differential equation of exponentiation:
\( \frac{d}{dt}(x^y) = x^{y-1}\left(y \frac{dx}{dt} + x \ln(x) \frac{dy}{dt}\right) \)
although I think your idea has a better chance of working.

Andrew Robbins

If we look at this equation and make the substitution x = x, y = 1/x, we get:

\( \frac{d}{dt}\left(x^{1/x}\right) = x^{(1-x)/x}\left(\frac{1}{x}\,\frac{dx}{dt} + x \ln x \,\frac{d(1/x)}{dt}\right) \)

\( \frac{d}{dt}\left(x^{1/x}\right) = x^{(1-x)/x}\left(\frac{1}{x}\,\frac{dx}{dt} - \frac{\ln x}{x}\,\frac{dx}{dt}\right) \)

\( \frac{d}{dt}\left(x^{1/x}\right) = x^{(1-x)/x}\,\frac{1}{x}\,(1 - \ln x)\,\frac{dx}{dt} \)

Let us apply the infinite tetration operation h to both sides, and use the conjecture that, if t is real time, the infinite tetration does not depend on t even if the variable x does:

\( \frac{d}{dt}\,h\!\left(x^{1/x}\right) = h\!\left(x^{(1-x)/x}\,\frac{1}{x}\,(1 - \ln x)\right)\frac{dx}{dt} \)

Then we can get rid of the parameter t:

\( \frac{d\,h\!\left(x^{1/x}\right)}{dx} = h\!\left(x^{(1-x)/x}\,\frac{1}{x}\,(1 - \ln x)\right) \)

But h(x^(1/x)) = x, at least for real x in the convergence region, so

\( h\!\left(x^{(1-x)/x}\,\frac{1}{x}\,(1 - \ln x)\right) = 1 \)

\( h\!\left(x^{(1-2x)/x}\,(1 - \ln x)\right) = 1 \)

\( h\!\left(x^{(1/x)-2}\,(1 - \ln x)\right) = 1 \)

I guess I should have taken an imaginary argument on the left side, such that x = I/p, y = p/I, so that the left side looks like (I/p)^(p/I); but I will do that later, once I understand whether it makes any sense. And it must make sense, since with the equation above (if it is correct) we only cover a small region of interest.
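
As a small sanity check of the identity h(x^(1/x)) = x used a few lines above, here is a quick numerical sketch (my own, nothing from the thread), using the closed form h(z) = -W(-ln z)/ln z with the principal branch of the Lambert W function; the identity holds for real x roughly between 1/e and e, where the infinite tower converges back to x itself.

# Check h(x^(1/x)) = x on the principal branch of Lambert W (illustration only).
import numpy as np
from scipy.special import lambertw

def h(z):
    """Infinite power tower z^z^z^..., via h(z) = -W(-ln z)/ln z (principal branch)."""
    return np.real(-lambertw(-np.log(z)) / np.log(z))

for x in (0.5, 1.5, 2.0, np.e - 0.01):
    print(x, h(x ** (1.0 / x)))           # each line should reproduce x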
#6
andydude Wrote:
Finding such a differential equation would be amazing! But maybe we should model it after the differential equation of exponentiation:
\( \frac{d}{dt}(x^y) = x^{y-1}\left(y \frac{dx}{dt} + x \ln(x) \frac{dy}{dt}\right) \)
although I think your idea has a better chance of working.

Andrew Robbins

If we look at this equation and make the substitution x = I/p, y = p/I, we get:

\( \frac{d}{dt}\left((I/p)^{p/I}\right) = (I/p)^{(p-I)/I}\left(\frac{p}{I}\,\frac{d(I/p)}{dt} + \frac{I}{p}\,\ln\frac{I}{p}\,\frac{d(p/I)}{dt}\right) \)

\( \frac{d}{dt}\left((I/p)^{p/I}\right) = (I/p)^{(p-I)/I}\left(\frac{p}{I}\left(-\frac{I}{p^2}\right)\frac{dp}{dt} + \frac{I}{p}\,\ln\frac{I}{p}\,\frac{1}{I}\,\frac{dp}{dt}\right) \)

\( \frac{d}{dt}\left((I/p)^{p/I}\right) = (I/p)^{(p-I)/I}\left(-\frac{1}{p} + \frac{1}{p}\,\ln\frac{I}{p}\right)\frac{dp}{dt} \)

Again, apply h to both sides over time:

\( \frac{d}{dt}\,h\!\left((I/p)^{p/I}\right) = h\!\left((I/p)^{(p-I)/I}\left(-\frac{1}{p} + \frac{1}{p}\,\ln\frac{I}{p}\right)\right)\frac{dp}{dt} \)

and since h((I/p)^(p/I)) = I/p if p > 1, removing dt:

\( d\!\left(\frac{I}{p}\right) = h\!\left((I/p)^{(p-2I)/I}\,\ln\frac{I}{e\,p}\right) dp \)

Now, since time is not present any more, and I is not dependent on time, we can calculate:

\( d\!\left(\frac{I}{p}\right) = \frac{dI\cdot p + I\,dp}{p^2} \)

so

\( \left(\frac{dI}{p} + \frac{I\,dp}{p^2}\right)\Big/ dp = h\!\left((I/p)^{(p-2I)/I}\,\ln\frac{I}{e\,p}\right) \)

or

\( \frac{dI}{dp} = p\left(h\!\left((I/p)^{(p-2I)/I}\,\ln\frac{I}{e\,p}\right) - \frac{I}{p^2}\right) \)

There are so many brackets that I lost count, but the I/p^2 term stays outside the tetration.

This is probably a differential equation linking x = p and I in the complex plane, to be used.

If p = const, then dI = 0.

We could also have taken p/I and I/q, but then things get more complicated, since I do not know the infinite tetration of ((I/p)^(q/I)); these would have to be rational numbers if p, q were integers, but they can be any values > 1.

That was so long that there have to be some mistakes :)
#7
Oooops, sorry ... ! I changed my mind.

After carefully re-thinking it, allow me to come back briefly to my conjecture, which I propose to formulate as follows:
"In the domain b >= e^(1/e) [b >= Eta], the plot of the real branch of y = b # x [b-tetra-x] shows a smooth, single-valued, always increasing real function. Its first derivative is always >= 0 and its second derivative is a real function with a zero at x = xf. The value of xf against b is a smooth, always decreasing function, with a horizontal asymptote at x = -1 for b -> +oo and a vertical asymptote for b -> Eta, where xf -> +oo." But, Andydude said:

andydude Wrote:Well, I have constructed a function to find the values that GFR is looking for. The function is defined such that \( y''(f(b)) = 0 \) where \( y(x) = {}^{x}b \). I have attached a plot of this function for the first few approximations (n=5..9) of tetration using the natural/inverse-slog tetration method. It looks as though this function is very well-behaved for \( b > e \), but is quite slow to converge for lower bases. So my guess is that GFR's conjecture that f(2)=0 is probably not true.

After examining Andrew's attached plot (Andydude first test), I think that the approximations shown do not seem appropriate enough. Concerning my own erratic behaviour, Father Euler would say: "Errare humanum est, perseverare diabolicum". Henryk would probably say: "It is not necessary to be crazy to be here, but it helps!!". Andydude will be patient, I hope!

The problem is that the perturbations in the upper-left corner of Andydude's figure (lower bases) are strange and probably due to computing overflow. In fact, the reptilian part of my brain and my ... religious beliefs, together with the simulation of y = b # x for b = Eta = e^(1/e) = 1.44466..., assure me that y -> e for x -> +oo, with an always increasing behaviour. Both the first and second derivatives of y seem to go to zero as x -> +oo, where we shall have y = e. This should mean that, for b = Eta, we must have y = e and y" = y' = 0 at x = +oo. Either Andrew's derivation procedures were not accurate enough, or they overflowed, or the function itself to which they were applied is not sufficiently appropriate for low bases.
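
As a tiny numerical illustration of this claim (my own sketch, nothing more): iterating the integer tower Eta # n shows the slow, monotone climb towards e.

# For b = Eta = e^(1/e), the integer tower Eta # n creeps up towards e (illustration only).
import math

eta = math.exp(1.0 / math.e)
y, n = 1.0, 0                             # y = Eta # 0
for target in (10, 100, 1000, 10000):
    while n < target:
        y = eta ** y                      # Eta # (n+1) = Eta ** (Eta # n)
        n += 1
    print(n, y, math.e - y)               # the gap to e shrinks, but only slowly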

Please see the "GFR qualitative draft", in annex. For b < Eta we are in trouble, for several reasons (multiple and/or complex values).
Maybe, next time!

@IVARS. I haven't yet studied your developments concerning the hunt for a possible "mother" differential equation. For the moment I leave the opportunity to interact to the other participants. I'll be back on that soon. I think it is a very important issue.

GFR


Attached Files
.pdf   Andydude first test.pdf (Size: 20.66 KB / Downloads: 722)
.pdf   GFR qualitative draft.pdf (Size: 18.34 KB / Downloads: 747)
#8
I'd like to finish it even if it is wrong:

So we take:

\( \frac{dI}{dp} = p\left(h\!\left((I/p)^{(p-2I)/I}\,\ln\frac{I}{e\,p}\right) - \frac{I}{p^2}\right) \)

and set it equal to 0 to find the minimum.

We have 2 solutions: p = 0, and

\( h\!\left((I/p)^{(p-2I)/I}\,\ln\frac{I}{e\,p}\right) - \frac{I}{p^2} = 0 \)

\( h\!\left((I/p)^{(p-2I)/I}\,\ln\frac{I}{e\,p}\right) = \frac{I}{p^2} \)

and we can make the substitution p^2 = q.

Then, from earlier: if h(w) = I/q, then, from another thread (http://math.eretrandre.org/tetrationforu...hp?tid=110),

\( w = (I/q)^{q/I} \)

so

\( \left(I/q^{1/2}\right)^{(q^{1/2}-2I)/I}\,\ln\frac{I}{e\,q^{1/2}} = (I/q)^{q/I} \)

So we can find q and p, probably both complex numbers.

The idea was that this should give the minimum of Gottfried's curve in the upper-left corner, where Re(imaginary zeros of real x^(1/x) when x > e^(1/e)) < 0, and, since a square root is involved, both conjugate minima along the imaginary axis.

Maybe not yet.
#9
GFR Wrote:"In the domain b >= e ^ (1/e) [b >= Eta], the plot of the real branch of y = b # x [b-tetra-x] show a smooth one-valued always increasing real function. Its first derivative is always >= 0 and its second derivative is a real function with a zero at x = xf. The value of xf against b is a smooth always decreasing function, with a horizontal asymptote for b -> + oo, at x = -1 and a vertical asymptote for b -> Eta, where x = + oo".
Agreed! You really don't need any proof of the asymptotes (at least for me), since I've worked with tetration enough to visualize it. But the two values I would like to see are \( f(2) \approx 0 \) and \( f(e) \approx -0.5 \); it would be interesting if there were a proof of either of these.

GFR Wrote:The problem is that the perturbations in the upper-left corner of Andydude's figure (lower bases) are strange and probably due to computing overflow. ... Either Andrew's derivation procedures were not accurate enough, or they overflowed, or the function itself to which they were applied is not sufficiently appropriate for low bases.

No, it is not due to overflow. As Jay D. Fox noted (I don't remember where), my approximations make the assumption that \( \left[\frac{d^n}{dx^n} \text{slog}_b(x)\right]_{x=0} \) is zero for all n greater than the approximation order (which, now that I think about it, is actually true for n = infinity, although I don't know how to prove this). So it is not so much that my computing is approximate (my computations are exact for the graph attached above); the problem is that the coefficients are inexact, and this inaccuracy in the coefficients causes inaccuracies in the whole computation. Since the coefficients of my approximations are rational for every base b such that \( b = e^q \) with q rational, they can be represented exactly (technically as pairs of big-ints), but this rational coefficient is only an approximation to the actual real-number coefficient (I'm guessing they are real), so the error propagates through the exact computations.
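
To illustrate the point about exact rational coefficients (a hypothetical reconstruction of the idea, not Andrew's actual code): when b = e^q with q rational, every entry of the truncated Abel/slog system is rational, so the truncated coefficients can be computed with exact big-integer arithmetic, e.g. via sympy; the inexactness is then entirely in the truncation, not in the arithmetic.

# Exact rational solution of the truncated slog system for b = exp(q), q rational (sketch).
from sympy import Rational, Matrix, factorial

def exact_slog_coeffs(q, n):
    """Exact truncated coefficients c_1..c_n of slog_b at 0, for base b = exp(q)."""
    A = Matrix(n, n, lambda j, k: ((k + 1) * q) ** j / factorial(j)
                                   - (1 if j == k + 1 else 0))
    rhs = Matrix([1] + [0] * (n - 1))
    return A.LUsolve(rhs)                 # stays in exact rational arithmetic throughout

print(exact_slog_coeffs(Rational(1), 4))  # base e, low order: exact fractions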

Does this make sense to you?

If it does, then I could confuse you even more: there is also the possibility that, since I am using so many series, one of them was inverted at a point outside its radius of convergence. But I never did that analysis, so I should probably do it before coming to any conclusions. :)

Andrew Robbins
#10
andydude Wrote:... the problem is that the coefficients are inexact, and this inaccuracy in the coefficients causes inaccuracies in the whole computation. Since the coefficients of my approximations are rational for every base b such that \( b = e^q \) with q rational, they can be represented exactly (technically as pairs of big-ints), but this rational coefficient is only an approximation to the actual real-number coefficient (I'm guessing they are real), so the error propagates through the exact computations.

Does this make sense to you?

If it does, then I could confuse you even more: there is also the possibility that, since I am using so many series, one of them was inverted at a point outside its radius of convergence. But I never did that analysis, so I should probably do it before coming to any conclusions. :)

Yes, unfortunately ... it does! :)

GFR

