tetration base conversion, and sexp/slog limit equations
#21
I was thinking over the topic and trying to put it on its feet.
The paradigm is that \( \lim_{x\to\infty}\text{slog}_a(\text{sexp}_b(x)) - x \) shall exist for each \( a,b>\eta \).

So if this limit exists, we substitute \( x+n \) for \( x \) (with, e.g., \( a=e \)):
\( c_{a,b}=\lim_{n\to\infty}\text{slog}_a(\text{sexp}_b(x+n)) - (x+n) \)

\( \lim_{n\to\infty}\log_b^{\circ n}(\text{sexp}_a(x+n+c_{a,b}))=\text{sexp}_b(x) \) (Jay's change of base formula)

\( \lim_{n\to\infty}\log_b^{\circ n}(\text{sexp}_a(x+n))=\text{sexp}_b(x-c_{a,b}) \)

\( \lim_{n\to\infty}\log_b^{\circ n}({\exp_a}^{\circ n}(y)) =\text{sexp}_b(\text{slog}_a(y)-c_{a,b}) \)

\( \kappa_{b,a}(y) = \lim_{n\to\infty} {\log_b}^{\circ n}({\exp_a}^{\circ n}(y)) \), \( \kappa_{a,b}={\kappa_{b,a}}^{-1} \)
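As a quick sanity check one can evaluate the first few terms of this limit numerically. This is only a rough sketch of my own; the bases 2 and 3, the starting value \( y=1 \), and the helper name kappa_term are just illustrative choices, and mpmath is used so that the tower \( {\exp_a}^{\circ n}(y) \) can exceed double-precision range.

```python
# Rough numerical sketch of kappa_{b,a}(y) = lim_n log_b^{o n}(exp_a^{o n}(y)).
from mpmath import mp, mpf, power, log, nstr

mp.prec = 200  # generous working precision

def kappa_term(a, b, y, n):
    """log_b applied n times to exp_a applied n times to y."""
    t = mpf(y)
    for _ in range(n):      # build the exponential tower in base a
        t = power(a, t)
    for _ in range(n):      # peel it back down with base-b logarithms
        t = log(t, b)
    return t

# If the limit exists, the terms should stabilize very quickly:
for n in range(1, 7):
    print(n, nstr(kappa_term(2, 3, 1, n), 12))
```

With these particular values the terms settle down after only a handful of steps, which is what the paradigm predicts.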


Change of base is just the application of a function (I think one could show that \( \kappa \) is analytic):

\( \text{slog}_b(\kappa_{b,a}(x)) = \text{slog}_a(x)-c_{a,b} \)
\( \kappa_{b,a}(\text{sexp}_a(x))=\text{sexp}_b(x-c_{a,b}) \)

While \( \kappa_{a,b} \) does not depend on any \( \text{sexp} \) or \( \text{slog} \), \( c_{a,b} \) does. We can define it by setting \( x=c_{a,b} \) in the second equation:

\( \kappa_{b,a}(\text{sexp}_a(c_{a,b}))=1 \)
\( c_{a,b}=\text{slog}_a(\kappa_{a,b}(1)) \).

So we need to show in your case that the limit:
\( \lim_{b\to\eta^+}\text{slog}_b(\kappa_{b,a}(x)) \) exists for \( x \) in some initial range, where \( \text{slog}_b \) is your linear approximation.

To show this we could use the Cauchy criterion. It should work in the form: \( \lim_{b\to\eta} f_b \) exists if for each \( \epsilon>0 \) there exist a \( \delta>0 \) and a \( b_0>\eta \) such that for all \( \eta < b,b' <b_0 \) with \( |b-b'|<\delta \): \( |f_b\circ f_{b'}^{-1}-\text{id}|<\epsilon \).

Here we put \( f_b=\text{slog}_b\circ \kappa_{b,a} \).
Surprisingly but happily the compositional difference is independent of \( a \):
\( f_b\circ f_{b'}^{-1} = \text{slog}_b\circ \kappa_{b,a}\circ \kappa_{a,b'}\circ \text{sexp}_{b'} = \text{slog}_b\circ \kappa_{b,b'}\circ \text{sexp}_{b'} \).
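For the record, the middle equality uses the expansion of the inverse,

\( f_{b'}^{-1} = (\text{slog}_{b'}\circ \kappa_{b',a})^{-1} = {\kappa_{b',a}}^{-1}\circ {\text{slog}_{b'}}^{-1} = \kappa_{a,b'}\circ \text{sexp}_{b'}, \)

together with the composition property \( \kappa_{b,a}\circ\kappa_{a,b'}=\kappa_{b,b'} \) listed further below.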

Well, I don't pretend that it helps you, but it helped me at least somewhat in understanding ;)
#22
bo198214 Wrote:I was thinking over the topic and trying to put it on its feet.
The paradigm is that \( \lim_{x\to\infty}\text{slog}_a(\text{sexp}_b(x)) - x \) shall exist for each \( a,b>\eta \).
....
\( \lim_{n\to\infty}\log_b^{\circ n}({\exp_a}^{\circ n}(y)) =\text{sexp}_b(\text{slog}_a(y)-c_{a,b}) \)
I was able to follow up to this point, but shouldn't it be this, where slog(y) substitutes for x?
\( {\exp_a}^{\circ n}(\text{slog}_a(y)) = \cdots \)
Quote:\( \kappa_{b,a}(y) = \lim_{n\to\infty} {\log_b}^{\circ n}({\exp_a}^{\circ n}(y)) \), \( \kappa_{a,b}={\kappa_{b,a}}^{-1} \)
sadly, I didn't understand this step.
Quote:.....
So we need to show in your case that the limit:
\( \lim_{b\to\eta^+}\text{slog}_b(\kappa_{b,a}(x)) \) exists for \( x \) in some initial range, where \( \text{slog}_b \) is your linear approximation.

To show this we could use the Cauchy criterion....
How is this different from:
\( \lim_{b\to\eta^+}\text{slog}_b(\text{sexp}_a(x)) \) exists ...
Is it because we don't know if sexp is an analytic function, but there are better reasons to believe that \( \kappa_{b,a} \) is an analytic function?

I have a plot of the wobble, sexp_version_a\( _e \)(x) - sexp_version_b\( _e \)(x), over the critical section (-1 to 0), where sexp_version_a is derived from \( \text{sexp}_{1.45} \).
The difference is in the "odd" portion of the function; it is larger in the sexp derived from a base conversion from base 1.45.
[Image: wobble_e.gif]
#23
sheldonison Wrote:
bo198214 Wrote:\( \lim_{n\to\infty}\log_b^{\circ n}({\exp_a}^{\circ n}(y)) =\text{sexp}_b(\text{slog}_a(y)-c_{a,b}) \)
I was able to follow up to this point, but shouldn't it be this, where slog(y) substitutes for x?
\( {\exp_a}^{\circ n}(\text{slog}_a(y)) = \cdots \)
In more detail: we have
\( \lim_{n\to\infty}\log_b^{\circ n}(\text{sexp}_a(x+n))=\text{sexp}_b(x-c_{a,b}) \)
this translates into
\( \lim_{n\to\infty}\log_b^{\circ n}(\exp_a^{\circ n}(\text{sexp}_a(x)))=\text{sexp}_b(x-c_{a,b}) \)
then I substitute \( y=\text{sexp}_a(x) \), \( x=\text{slog}_a(y) \).
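Writing out the result of that substitution explicitly:

\( \lim_{n\to\infty}\log_b^{\circ n}({\exp_a}^{\circ n}(y))=\text{sexp}_b(\text{slog}_a(y)-c_{a,b}), \)

i.e. the \( \text{slog}_a(y) \) ends up in the argument of \( \text{sexp}_b \), not inside \( {\exp_a}^{\circ n} \).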
Quote:
Quote:\( \kappa_{b,a}(y) = \lim_{n\to\infty} {\log_b}^{\circ n}({\exp_a}^{\circ n}(y)) \), \( \kappa_{a,b}={\kappa_{b,a}}^{-1} \)
sadly, I didn't understand this step.
That's just the definition of an ancillary function. You can verify the property \( \kappa_{a,b}^{-1} = \kappa_{b,a} \). The limit has to exist if our paradigm works.

I introduced this function because now we can separate the limits (which went into \( \kappa \)) from the sexps/slogs.

Quote:
Quote:.....
So we need to show in your case that the limit:
\( \lim_{b\to\eta^+}\text{slog}_b(\kappa_{b,a}(x)) \) exists for \( x \) in some initial range, where \( \text{slog}_b \) is your linear approximation.

To show this we could use the Cauchy criterion....
How is this different from:
\( \lim_{b\to\eta^+}\text{slog}_b(\text{sexp}_a(x)) \) exists ...
Is it because we don't know if sexp is an analytic function, but there are better reasons to believe that \( \kappa_{b,a} \) is an analytic function?

Apart from the fact that you omitted the \( -x \), the difference is that we have no \( \text{sexp}_a \) at that moment. All we have are linear (or higher order) approximations of an \( \text{slog}_b \). But what we do have is \( \kappa \).

\( \kappa_{a,b} \) is a function which is bigger than \( x\mapsto x \) exactly if \( a<b \). In this case it decreases extremely slowly towards \( -\infty \) (so slowly that one could think it is converging to a constant) and increases extremely fast towards \( +\infty \). The closer \( a \) is to \( \eta \), the bigger \( \kappa_{a,b} \) is (and it is infinite for \( a=\eta \)).
Though I am not sure how to prove analyticity, \( \kappa \) has straightforward algebraic properties, such as

\( {\kappa_{a,b}}^{-1} = \kappa_{b,a} \)
\( \kappa_{a,b}\circ \kappa_{b,c} = \kappa_{a,c} \)
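A purely formal way to see the composition rule (a heuristic sketch, assuming the inner limit may be replaced by its term at the same index \( n \), where \( {\exp_b}^{\circ n}\circ{\log_b}^{\circ n}=\text{id} \) on the relevant domain):

\( \kappa_{a,b}(\kappa_{b,c}(y)) = \lim_{n\to\infty}{\log_a}^{\circ n}\Big({\exp_b}^{\circ n}\big({\log_b}^{\circ n}({\exp_c}^{\circ n}(y))\big)\Big) = \lim_{n\to\infty}{\log_a}^{\circ n}({\exp_c}^{\circ n}(y)) = \kappa_{a,c}(y). \)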
#24
bo198214 Wrote:Apart from the fact that you omitted the \( -x \), the difference is that we have no \( \text{sexp}_a \) at that moment. All we have are linear (or higher order) approximations of an \( \text{slog}_b \). But what we do have is \( \kappa \).

\( \kappa_{a,b} \) is a function which is bigger than \( x\mapsto x \) exactly if \( a<b \). In this case it decreases extremely slowly towards \( -\infty \) (so slowly that one could think it is converging to a constant) and increases extremely fast towards \( +\infty \). The closer \( a \) is to \( \eta \), the bigger \( \kappa_{a,b} \) is (and it is infinite for \( a=\eta \)).
Though I am not sure how to prove analyticity, \( \kappa \) has straightforward algebraic properties, such as

\( {\kappa_{a,b}}^{-1} = \kappa_{b,a} \)
\( \kappa_{a,b}\circ \kappa_{b,c} = \kappa_{a,c} \)
What does \( \kappa_{a,b} \) tell us about the limit expression for sexp,
\( \text{sexp}_e(x) = \lim_{b \to \eta^+}\ \lim_{n \to \infty} \ln^{\circ n}\big(\text{sexp}_b (x + \text{slog}_b(\text{sexp}_e(n)))\big) \)

I've been an engineer for the last 25 years, but I'd like to take some classes in higher mathematics; maybe when I retire. I have a lot of catching up to do, to follow the posts on these forums. I understand Andy's matrix equations, and Jay's suggestions about the odd derivatives of the sexp extension, but I don't understand Dmitrii Kouznetsov's equations. And I don't understand much of the details of the complex plane graphs, aside from the radius of convergence.
#25
I think there's a reasonable chance the higher derivatives will misbehave, and a reasonable chance that the sexp limit may not even converge. The graph below shows a conversion to base 1.6 from base 1.4453.

The anomaly in the third derivative of the graph seems larger than the error term. I need to do some more error term estimates, and also see if converting from other bases (1.45 and 1.485, the other two spreadsheets I have generated) gives similar results. Smaller bases are much more sensitive to the problems, which aren't seen in the third derivative for base e. The anomaly in the third derivative is caused by the fact that the inflection point for this sexp estimate is on the "wrong" side of the critical section -- on the right of the critical section instead of the left.
[Image: base_16.gif]

I suspect that with a base close enough to \( \eta \) converted from a base even closer to \( \eta \), even the first derivative will eventually show an anomaly, which might invalidate the error term equation estimates I derived. I saw a similar result converting from base 1.45 to base 1.485, where the inflection point wound up on the "wrong" side of the critical section, also causing the anomaly. But in the base 1.6 case, the results seem to be outside the range of the error term.

All this is hypothetical. I probably need to download a version of Mathematica to generate the results in higher precision arithmetic to be certain, but maybe Jay is correct. Maybe the sexp base conversions are always going to have anomalies; the question is how large they are.
#26
sheldonison Wrote:What does \( \kappa_{a,b} \) tell us about the limit expression for sexp,
\( \text{sexp}_e(x) = \lim_{b \to \eta^+}\ \lim_{n \to \infty} \ln^{\circ n}\big(\text{sexp}_b (x + \text{slog}_b(\text{sexp}_e(n)))\big) \)

This expression has mixed limits, one inside and one outside. The limits can be separated by the following manipulation:
\( \lim_{n \to \infty}\ln^{\circ n}(\text{sexp}_b (x + \text{slog}_b(\exp^{\circ n}(0)))) \)
\( \lim_{n\to\infty}\ln^{\circ n}(\text{sexp}_b(x+n+\text{slog}_b(\exp^{\circ n}(0))-n)) \)
\( \lim_{n\to\infty}\ln^{\circ n}(\exp_b^{\circ n}(\text{sexp}_b(x+c_{b,e})))=\kappa_{e,b}(\text{sexp}_b(x+c_{b,e})) \)

Now we can separately investigate the behaviour of \( \kappa \) and of the linear approximation \( \text{sexp}_b \), and perhaps come to the conclusion that \( \kappa_{e,b}\circ \text{sexp}_b \) converges for \( b\to\eta \).
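Here is a rough sketch of such a piecewise-linear \( \text{sexp}_b \)/\( \text{slog}_b \) pair (my own minimal version, assuming the usual convention \( \text{sexp}_b(x)=1+x \) on \( (-1,0] \); the spreadsheet approximation may differ in detail). The small loop at the end illustrates the paradigm that \( \text{slog}_a(\text{sexp}_b(x+n))-(x+n) \) should approach a constant \( c_{a,b} \).

```python
# Minimal piecewise-linear approximations of sexp_b and slog_b,
# linear on the critical section (-1, 0] and exact exp/log elsewhere.
from mpmath import mp, mpf, power, log, ceil, nstr

mp.prec = 200

def sexp_lin(b, x):
    x = mpf(x)
    k = int(ceil(x))
    y = x - k + 1                  # critical-section value in (0, 1]
    for _ in range(max(k, 0)):     # move up by exponentiations ...
        y = power(b, y)
    for _ in range(max(-k, 0)):    # ... or down by logarithms
        y = log(y, b)
    return y

def slog_lin(b, x):
    """Inverse: apply log_b / exp_b until x lands in (0, 1], then linearize."""
    x = mpf(x)
    k = 0
    while x > 1:
        x = log(x, b)
        k += 1
    while x <= 0:
        x = power(b, x)
        k -= 1
    return k + x - 1

# slog_a(sexp_b(x+n)) - (x+n) should approach a constant c_{a,b}:
for n in range(1, 5):
    print(n, nstr(slog_lin(2, sexp_lin(3, mpf('0.5') + n)) - (mpf('0.5') + n), 10))
```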

Though I doubt that it is analytic; each \( \kappa_{e,b}\circ \text{sexp}_b \) is a piecewise analytic function (provided that \( \kappa \) is analytic). Somehow it would be strange (or at least difficult to show) if a sequence of only piecewise analytic functions converged to an analytic function. Though it may well be infinitely differentiable, if the jumps reduce to zero for \( b\to\eta \). From your description I would expect that \( b\to\eta \) makes \( \kappa_{e,b}\circ \text{sexp}_b \) smooth.

Quote:I've been an engineer for the last 25 years, but I'd like to take some classes in higher mathematics; maybe when I retire. I have a lot of catching up to do, to follow the posts on these forums. I understand Andy's matrix equations, and Jay's suggestions about the odd derivatives of the sexp extension, but I don't understand Dmitrii Kouznetsov's equations. And I don't understand much of the details of the complex plane graphs, aside from the radius of convergence.

Hey Sheldon, you are welcome! I really appreciate your contributions; your approach of \( b\to\eta^+ \) is completely new and enriches the forum. I hope it will work out!
#27
sheldonison Wrote:I think there's a reasonable chance the higher derivatives will misbehave, and a reasonable chance that the sexp limit may not even converge. The graph below shows a conversion to base 1.6 from base 1.4453.

The wobble in the third derivative of the graph seems larger than the error term. I need to do some more error term estimates, and also see if converting from other bases (1.45 and 1.485, the other two spreadsheets I have generated) gives similar results. ....
The results from base 1.45 to base 1.6 are very close to the results from base 1.44533, with the same 3rd derivative anomaly and the same sexp function, consistent to about 3.0*10^-6. What this means is that even though the 3rd derivative misbehaves, there is still a chance that the sexp results may converge as b approaches \( \eta \).

Even if the sexp limit definition converges, the higher derivatives for all bases will not have the property that Dmitrii's solution has, where the odd derivatives are positive for all values of x>-2. Also, an interesting set of graphs would be the wobble for the Dmitrii solution for base n, compared to the sexp generated for base n from a base approaching \( \eta \). Each base will have a wobble function. For base e, the wobble has a peak-to-peak value of 0.0008; for smaller bases, the wobble gets smaller. For base 1.6, it's about 0.00025, and for base 1.45, the relative wobble is about 0.000007. My theory is that the wobble for base conversions using Dmitrii's solution becomes the anomaly in the third derivative of the sexp defined to have constant base conversions!
#28
More empirical data on the 3rd order derivatives shows that they're all sinusoidal, and the sinusoidal relative contribution to the third order derivative seems more or less constant across a large range of sexp base values. This means the 3rd order derivative sinusoid may not change the convergence equations, and may give more direction on the value of the error terms in proving the convergence of:

\( \text{sexp}_e(x) = \lim_{b \to \eta^+}\ \lim_{n \to \infty} \ln^{\circ n}\big(\text{sexp}_b (x + \text{slog}_b(\text{sexp}_e(n)))\big) \)

So, here are the graphs, centered on the critical section, for several different base conversions. The 3rd derivative is the lowest derivative to show the sinusoid, which seems to be about 1/6th of the value of the 3rd derivative, and the phase varies. Beginning with the fifth derivative, the sexp extension via base conversion will not have positive values for the odd derivatives for all x values >-2; so this is clearly a different extension of sexp to the reals than Dmitrii's extension of sexp to real numbers. Also, in the graphs for base 1.485 and base 1.6, the results are consistent whether converting from base 1.45 or from base 1.44533. For converting to base e, the same 3rd derivative sinusoid is probably there, but it is difficult to see because the 3rd derivative climbs so quickly to infinity on either side of the critical section.

[Image: base_1447_3rd.gif]
[Image: base_145_3rd.gif]
[Image: base_1485_3rd.gif]
[Image: base_16_3rd.gif]
#29
So, why the wobble??? And how do you convert between an extension of sexp to real numbers that has a constant base conversion and an extension of sexp to real numbers that has all positive odd derivatives?

The limit equation for the conversion I have defined for sexp to real numbers with exact base changes comes with its own wobble, which becomes noticeable in the third derivative. This violates the requirement that the odd derivatives are positive for all values. In particular, the 5th derivative of this sexp extension will have negative values. The wobble gets smaller as the base approaches \( \eta=e^{1/e} \), and gets smaller faster than the linear approximation does, so the wobble does not seem to affect my earlier convergence error term estimates.

Even though I do not yet understand the wobble, I can characterize it. Let's take Dmitrii's Taylor series expansion for sexp_e, and use it to convert to base 1.45, a little bigger than \( \eta \). Compare the converted critical section of this base 1.45 with a wobble-free estimate of the critical section of base 1.45. The comparison is a simple subtraction. When converting to bases approaching \( \eta \), the conversion wobble appears to be a perfect sine wave! The relative amplitude of this sine wave stays constant as the base converted to approaches \( \eta \). The phase changes in a predictable way, and the relative height also changes in a predictable way. Also, I know how to lock these down (amplitude, phase, relative height), by requiring that the limit use only values for b that are integer multiples, instead of fractional multiples, in the conversion process. One possible source of confusion is that the wobble from converting down from base e to base 1.45 is about 100 times bigger than the wobble inherent to base 1.45, and about 1000 times bigger than the wobble inherent to base 1.44533.
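For what it's worth, here is how one could pin down the amplitude and phase of that sinusoid numerically. This is only a sketch; the wobble array below is synthetic placeholder data standing in for the measured difference of the two critical sections.

```python
# Fit A*sin(2*pi*x) + B*cos(2*pi*x) + C to the measured wobble on (-1, 0)
# by linear least squares; amplitude and phase follow from A and B.
import numpy as np

x = np.linspace(-1, 0, 201)
wobble = 4e-4 * np.sin(2 * np.pi * x + 0.7)   # synthetic placeholder data

M = np.column_stack([np.sin(2 * np.pi * x), np.cos(2 * np.pi * x), np.ones_like(x)])
(A, B, C), *_ = np.linalg.lstsq(M, wobble, rcond=None)

print("amplitude:", np.hypot(A, B))
print("phase    :", np.arctan2(B, A))
print("offset   :", C)
```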

I haven't done it yet, but the upshot is that once I know the conversion sinusoid, I can convert from a base approaching \( \eta \) to any other base and get either an sexp with a constant base conversion factor, or Dmitrii's sexp function, where the odd derivatives are positive for all values of x>-2!

[Image: delta_taylor_to_145.gif]
#30
Are you sure that it is 1-periodic?

I mean it is well known that
\( f^{-1}(g(x))-x=\theta(x) \) must be 1-periodic
for two superexponentials f and g.
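For completeness, the standard argument, with both superexponentials satisfying \( F(x+1)=\exp(F(x)) \) and hence \( f^{-1}(\exp(t))=f^{-1}(t)+1 \):

\( \theta(x+1)=f^{-1}(g(x+1))-(x+1)=f^{-1}(\exp(g(x)))-x-1=f^{-1}(g(x))+1-x-1=\theta(x). \)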

This implies that
\( g(x)=f(\theta(x)+x) \)
But
\( f(\theta(x)+x)-f(x) \) does not look 1-periodic?
\( f(\theta(x+1)+x+1)-f(x+1)=\exp(f(\theta(x)+x))-\exp(f(x))\neq f(\theta(x)+x)-f(x) \) mostly

