Posts: 94
Threads: 15
Joined: Apr 2009
04/30/2009, 04:15 AM
(This post was last modified: 04/30/2009, 04:34 AM by Base-Acid Tetration.)
I wanted to know whether pentation has asymptotes like tetration does, and along the way I discovered this interesting property.
For base b we have tet(1) = b, tet(0) = 1, and tet(-1) = 0, so the tetra-logarithm (superlogarithm) satisfies tetlog(b) = 1, tetlog(1) = 0, and tetlog(0) = -1.
pent(1) = b for any b, pent(0) = tetlog(pent(1)) = tetlog(b) = 1,
pent(-1) = tetlog(1) = 0,
pent(-2) = -1.
pentlog(b) = 1
pentlog(1) = 0
pentlog(0) = -1
pentlog(-1) = -2
Continuing this for higher operations (verification is left for the reader), for bases greater than 1:
for hexation, hex(1) = b, hex(0)=1, hex(-1)=0, hex(-2)=-1, hex(-3)=-2;
for heptation, hept(-1) = 0, hept(-2)=-1, hept(-3)=-2, hept(-4)=-3;
oct(-5)=-4, non(-6)=-5, dec(-7)=-6, ...
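These integer values can be checked with a short script. This is only a sketch under my own conventions (the name hyper and the preimage search are mine, not standard forum code): it uses the recursion b[k](n+1) = b[k-1](b[k]n) with b[k]0 = 1 for k >= 4, and extends to negative integers by searching for an integer preimage, returning None where no integer value exists.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def hyper(b, k, n):
    """b[k]n at integer n (k >= 1); None where no integer value exists."""
    if k == 1:
        return b + n           # [1] is addition
    if k == 2:
        return b * n           # [2] is multiplication
    if k == 3:
        return b ** n          # [3] is exponentiation (float for n < 0, fine here)
    if n == 0:
        return 1               # b[k]0 = 1 for every k >= 4
    if n > 0:
        inner = hyper(b, k, n - 1)
        return None if inner is None else hyper(b, k - 1, inner)
    # n < 0: invert one step of the recursion, i.e. find an
    # integer x with b[k-1]x == b[k](n+1).
    target = hyper(b, k, n + 1)
    if target is None:
        return None
    for x in range(-k, 2):     # all expected values lie in this small range
        if hyper(b, k - 1, x) == target:
            return x
    return None

# pentation (k = 5): b, 1, 0, -1, then no integer value
print([hyper(2, 5, n) for n in (1, 0, -1, -2, -3)])   # [2, 1, 0, -1, None]
# hexation (k = 6) reaches exactly one step further down
print([hyper(2, 6, n) for n in (-1, -2, -3, -4)])     # [0, -1, -2, None]
```

The None entries appear exactly where the next-lower operation has no integer preimage, which is where the asymptote question begins.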
So here is my conjecture (theorem?)... for n >= 3, b[n](-n+3) = -n+4; for n >= 4, b[n](-n+2) = -n+3, etc.
If you graph these hyperoperation functions at the integers, you will notice a quasi-linear part over a domain that grows with increasing n (specifically, around the interval [-n+3, 0]), so my conjecture can be stated as:
for k >= 3, we have b[k]n = n+1 for any integer n in the interval [-k+3, 0].
What is the implication of the growing quasi-linear part for the real or complex analytic extensions of those higher hyper-operations pentation, hexation, etc? Is it a good thing or a bad thing?
Also would pentation have any asymptotes?
Posts: 510
Threads: 44
Joined: Aug 2007
Tetratophile Wrote:Also would pentation have any asymptotes?
Yes, see this post, this post and this recent post about pentation.
Posts: 510
Threads: 44
Joined: Aug 2007
04/30/2009, 05:53 AM
(This post was last modified: 04/30/2009, 05:54 AM by andydude.)
Tetratophile Wrote:What is the implication of the growing quasi-linear part for the real or complex analytic extensions of those higher hyper-operations pentation, hexation, etc? Is it a good thing or a bad thing?
I haven't really looked at pentation much, except for the last link I gave above. But for tetration at least, the quasi-linear interval between (-1) and 0 is not linear in the more analytic/differentiable/holomorphic methods. Using natural iteration or Kouznetsov's method, the derivative \( \frac{d}{dx}({}^{x}e) \) is approximately 1.09176735, at both (-1) and 0, and since the "average slope" between those points is 1, the intermediate value theorem requires that the derivative is exactly 1 at least twice, and is less than 1 at some point in the interval (-1, 0). This numerical evidence is a strong indication that the quasi-linear interval you are talking about will not be linear for "holomorphic pentation" if such an extension exists. However, it would be interesting if this interval does actually get flatter with hexation, heptation, etc... but it would also be just as interesting if it gets less flat and starts oscillating wildly...
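To spell out that argument (my own rewriting of it, with \( f(x) = {}^{x}e \)): since \( f(-1) = 0 \) and \( f(0) = 1 \),

\[ \int_{-1}^{0} f'(x)\,dx \;=\; f(0) - f(-1) \;=\; 1 - 0 \;=\; 1, \]

so the average of \( f' \) over \( [-1, 0] \) is exactly 1. But \( f'(-1) \approx f'(0) \approx 1.0918 > 1 \), so \( f' \) must dip below 1 somewhere in the interior to keep the average at 1, and the intermediate value theorem then forces \( f'(x) = 1 \) at two or more points.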
Andrew Robbins
Posts: 27
Threads: 3
Joined: Apr 2009
andydude Wrote:Tetratophile Wrote:What is the implication of the growing quasi-linear part for the real or complex analytic extensions of those higher hyper-operations pentation, hexation, etc? Is it a good thing or a bad thing?
I haven't really looked at pentation much, except for the last link I gave above. But for tetration at least, the quasi-linear interval between (-1) and 0 is not linear in the more analytic/differentiable/holomorphic methods. Using natural iteration or Kouznetsov's method, the derivative \( \frac{d}{dx}({}^{x}e) \) is approximately 1.09176735, at both (-1) and 0, and since the "average slope" between those points is 1, the intermediate value theorem requires that the derivative is exactly 1 at least twice, and is less than 1 at some point in the interval (-1, 0). This numerical evidence is a strong indication that the quasi-linear interval you are talking about will not be linear for "holomorphic pentation" if such an extension exists.
It should be possible to choose the "natural" versions of the higher operations so that they are approximately linear on the intervals in question, since (e ^k^ x) + 1 ≈ e ^k-1^ (e ^k^ x) = e ^k^ (x+1) would hold approximately by induction (the second equality being the exact recursion), and we always have e ^k^ 0 = 1 and e ^^ x ≈ x+1 on [-1, 0]. The degree of approximation might decay a bit at each level, I suppose.
None of the functions could be exactly linear, of course; only a linear function can have a linear segment and still be holomorphic.
Finally, this behaviour is both a good thing and a bad thing, I think. It is good in that the higher functions will have larger radii of convergence, but bad in that they must have increasingly fiddly power series coefficients, if they cancel so closely on one interval but grow so rapidly outside it.
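For the record, the reason a holomorphic function cannot contain a straight piece without being linear everywhere is the identity theorem; in the notation above,

\[ g(x) := f(x) - (x+1) = 0 \ \text{on a segment} \;\Longrightarrow\; g \equiv 0 \ \text{on the whole domain}, \]

because the zero set of a non-constant holomorphic function cannot have an accumulation point inside its (connected) domain.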
Posts: 94
Threads: 15
Joined: Apr 2009
05/01/2009, 10:22 AM
(This post was last modified: 05/01/2009, 10:55 AM by Base-Acid Tetration.)
BenStandeven Wrote:It should be possible to choose the "natural" versions of the higher operations so that they are approximately linear on the intervals in question, since (e ^k^ x) + 1 ≈ e ^k-1^ (e ^k^ x) = e ^k^ (x+1) would hold approximately by induction (the second equality being the exact recursion), and we always have e ^k^ 0 = 1 and e ^^ x ≈ x+1 on [-1, 0]. The degree of approximation might decay a bit at each level, I suppose.
None of the functions could be exactly linear, of course; only a linear function can have a linear segment and still be holomorphic.
Finally, this behaviour is both a good thing and a bad thing, I think. It is good in that the higher functions will have larger radii of convergence, but bad in that they must have increasingly fiddly power series coefficients, if they cancel so closely on one interval but grow so rapidly outside it.
Kinda what I was thinking too. Yeah, you are right that no non-linear holomorphic function can be linear on any segment. That's why I think the apparently growing "linear part" may be a bad thing for holomorphic extensions. I have a hunch that a holomorphic solution might oscillate, or even take nonreal values at non-integer arguments... I don't know. Anyway, the requirement should be that, at least in the most intuitive definition, we have b[k]n = n+1 exactly for every integer n in [-k+3, 0], to keep b[k](x-1) = [k-1]log_b b[k]x true.
Posts: 1,625
Threads: 104
Joined: Aug 2007
Tetratophile Wrote:So here is my conjecture (theorem?)... for n>=3, b[n]-n+3 = -n+4; for n>=4, b[n]-n+2=-n+3, etc.
I proved your conjecture in the thread that Andrew already referred to. It's the post:
http://math.eretrandre.org/tetrationforu...94#pid1494
BenStandeven Wrote:It should be possible to choose the "natural" versions of the higher operations so that they are approximately linear on the intervals in question, since (e ^k^ x) + 1 ≈ e ^k-1^ (e ^k^ x) = e ^k^ (x+1)
We use notation e [k] x here on the forum for the k-th hyperoperation.
Quote:would be approximately true by induction, and we always have e ^k^ 0 = 1, and e ^^ x = x+1 approximately on [-1, 0]. The degree of approximation might decay a bit at each level, I suppose.
Hm, you mean there is a function that is closest (say by maximum difference or by area between graphs) to the linear function on the interval [-1,0] among the solutions of f(x+1)=e[k]f(x)?
Posts: 27
Threads: 3
Joined: Apr 2009
(05/01/2009, 01:34 PM)bo198214 Wrote: BenStandeven Wrote:would be approximately true by induction, and we always have e ^k^ 0 = 1, and e ^^ x = x+1 approximately on [-1, 0]. The degree of approximation might decay a bit at each level, I suppose.
Hm, you mean there is a function that is closest (say by maximum difference or by area between graphs) to the linear function on the interval [-1,0] among the solutions of f(x+1)=e[k]f(x)?
Yeah; I'm thinking that it will probably be e[4] or e[5], since for the higher functions I would expect the deviation from linearity to be greatest on the outer intervals like [-1, 0], and least near the middle.
Posts: 1,625
Threads: 104
Joined: Aug 2007
(05/02/2009, 08:11 PM)BenStandeven Wrote: (05/01/2009, 01:34 PM)bo198214 Wrote: BenStandeven Wrote:would be approximately true by induction, and we always have e ^k^ 0 = 1, and e ^^ x = x+1 approximately on [-1, 0]. The degree of approximation might decay a bit at each level, I suppose.
Hm, you mean there is a function that is closest (say by maximum difference or by area between graphs) to the linear function on the interval [-1,0] among the solutions of f(x+1)=e[k]f(x)?
Yeah; I'm thinking that it will probably be e[4] or e[5], since for the higher functions I would expect the deviation from linearity to be greatest on the outer intervals like [-1, 0], and least near the middle.
What I rather meant is whether this would be a uniqueness criterion, i.e. whether there is only one function with minimal distance to the linear function on [-1,0]. What do you think?
Posts: 27
Threads: 3
Joined: Apr 2009
(05/03/2009, 08:20 PM)bo198214 Wrote: (05/02/2009, 08:11 PM)BenStandeven Wrote: (05/01/2009, 01:34 PM)bo198214 Wrote: BenStandeven Wrote:would be approximately true by induction, and we always have e ^k^ 0 = 1, and e ^^ x = x+1 approximately on [-1, 0]. The degree of approximation might decay a bit at each level, I suppose.
Hm, you mean there is a function that is closest (say by maximum difference or by area between graphs) to the linear function on the interval [-1,0] among the solutions of f(x+1)=e[k]f(x)?
Yeah; I'm thinking that it will probably be e[4] or e[5], since for the higher functions I would expect the deviation from linearity to be greatest on the outer intervals like [-1, 0], and least near the middle.
What I rather meant is whether this would be a uniqueness criterion, i.e. whether there is only one function with minimal distance to the linear function on [-1,0]. What do you think?
I doubt it; I'd think by composing with the right 1-periodic function, you can always move closer to linearity on that interval.
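That mechanism can be made explicit (my notation): if \( f \) solves \( f(x+1) = e[k]f(x) \) and \( \theta \) is any 1-periodic function, then \( g(x) = f(x + \theta(x)) \) solves the same equation, since

\[ g(x+1) = f\big(x+1+\theta(x+1)\big) = f\big((x+\theta(x))+1\big) = e[k]\,f(x+\theta(x)) = e[k]\,g(x), \]

so one can perturb toward linearity on [-1,0] without leaving the solution family (at the cost of regularity elsewhere).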