"Natural boundary", regular tetration, and Abel matrix
#1
Hi.

One point that was brought up here was the idea that regular tetration \( \mathrm{reg}_{^\infty b}[\exp_b^z](1) \), developed at the attracting fixed point, has a natural boundary of analyticity that blocks all continuation outside the Shell-Thron region of bases in the complex plane.

Yet this seems at odds with the observation that the Abel-matrix tetration works for all \( b \in (1, \infty) \), and also for some region of the complex plane around that interval (which I suspect, based on some crude numerical experiments, is shaped roughly like a "trumpet horn", constricting to a point at \( b = 1 \), if you get my drift). But this would mean that it converges to an analytic function in that region (note that each approximant is analytic, and I take it that a pointwise limit of complex-analytic functions is again complex-analytic), which would mean that at least part of the Shell-Thron boundary is not a natural boundary for this tetration. Yet it seems to agree with regular iteration for \( b \in (1, e^{1/e}] \), and so by the uniqueness of analytic continuation it could be interpreted as an analytic continuation of regular iteration outside the STR. This makes me think that either:

1. the STR boundary is not a natural boundary, i.e. it has at least one gap, whose endpoints (on said boundary) are possibly delimited by the two points at which the aforementioned "trumpet" curve crosses it, or

2. Abel-matrix tetration does not actually agree with regular iteration but merely approximates it very well, so that a great deal of numerical precision is needed to tell them apart (have the Abel-matrix and regular methods been proven equivalent, or inequivalent, for the appropriate range of \( b \)?).

What do you think? Certainly if (1) is true it would be very exciting, since it would mean the regular formulas already contain, in a sense, the definition of tetration for the entire complex plane -- and it would provide a good target for further research, namely extending them out there. (2) would also be interesting, since it would raise some curious questions, like why the Abel-matrix method comes so close to regular iteration yet still "misses the mark". It would also make regular iteration something of a red herring, at least insofar as building a whole-plane tetrational binary operation that "fits together" nicely and "naturally" is concerned. (A minimal numerical sketch of the Abel-matrix construction I have in mind follows.)
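For concreteness, here is a minimal Python sketch (float64) of the square-truncation scheme I mean: substitute \( S(z) = c_0 + \sum_{k\ge1} c_k z^k \) into \( S(b^z) = S(z) + 1 \), match coefficients of \( z^m \), and solve the resulting \( n \times n \) truncation. The function name is mine, and serious experiments would need high-precision arithmetic since these matrices are badly conditioned; treat this as an illustration of the scheme, not a reference implementation.

Code:
import numpy as np
from math import factorial, log

def intuitive_slog_coeffs(b, n):
    """Solve the n x n square truncation of the linear system obtained by
    substituting S(z) = c0 + sum_{k>=1} c_k z^k into S(b^z) = S(z) + 1
    and matching coefficients of z^m for m = 0 .. n-1."""
    lb = log(b)
    A = np.zeros((n, n))
    rhs = np.zeros(n)
    rhs[0] = 1.0                 # m = 0 row: sum_k c_k = 1, from S(1) - S(0) = 1
    for m in range(n):
        for k in range(1, n + 1):
            A[m, k - 1] = (k * lb) ** m / factorial(m)  # z^m-coefficient of (b^z)^k
        if m >= 1:
            A[m, m - 1] -= 1.0   # subtract the c_m contributed by S(z) on the right
    return np.linalg.solve(A, rhs)   # approximations to c_1 .. c_n

With the usual normalization \( \mathrm{slog}_b(0) = -1 \), one then takes \( c_0 = -1 \).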
#2
Hm, I think that the fact that the intuitive method converges for bases \( b > 1 \) does not necessarily mean that it is holomorphic in \( b \) (in the trumpet area). Why can't there be an invisible boundary at which it is merely continuous and not holomorphic? I really don't know, but do you have some more evidence that the intuitive iteration is holomorphic in \( b \)?
#3
(06/16/2010, 11:48 AM)bo198214 Wrote: Hm, I think that the fact that the intuitive method converges for bases \( b > 1 \) does not necessarily mean that it is holomorphic in \( b \) (in the trumpet area). Why can't there be an invisible boundary at which it is merely continuous and not holomorphic? I really don't know, but do you have some more evidence that the intuitive iteration is holomorphic in \( b \)?

Hmm. I dug up this:

http://www.math.wustl.edu/~sk/limits.pdf

It says that if a sequence of holomorphic functions converges pointwise in a region, the limit need only be holomorphic on a dense, open subset. The region without its boundary and with a slit cut into it that divides it into two pieces would still be a dense, open subset: dense, since even the points on the slit have points of the set in every neighborhood, however small; and open, because it is the union of two open sets (the pieces on each side of the slit). So the limit could conceivably be holomorphic on all but a slit. But there's no proof of this, or of the opposite...
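For reference, the statement as I understand it (I'm paraphrasing from memory, so treat the exact hypotheses as my reading of the PDF rather than a quotation): if \( f_n \colon \Omega \to \mathbb{C} \) are holomorphic on an open set \( \Omega \) and \( f_n \to f \) pointwise on \( \Omega \), then there is a dense open subset \( U \subseteq \Omega \) on which the convergence is locally uniform, and hence \( f \) is holomorphic on \( U \). (This is usually attributed to Osgood.)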

Another theorem says that if the sequence converges uniformly on compact subsets of the region (e.g. on closed disks \( D \subset \Omega \), where \( \Omega \) is the region), the limit is holomorphic on the whole region; this is called "compact convergence". It is the manner in which a Taylor series converges. Compact convergence is stronger than mere pointwise convergence, and more difficult to prove. (Has the Abel-matrix method even been proven convergent?) I suppose one would need a closed-form formula for the terms of the Abel-matrix convergents (i.e. for the inverses of each partial matrix) to analyze this problem and determine the character of the convergence (and also, perhaps, the precise region).

Failing that, could examining the derivatives on the part of the STB where it converges be useful? If the limit is not holomorphic on the STB, then wouldn't these derivatives explode, go nuts, etc.?
#4
(06/16/2010, 07:39 PM)mike3 Wrote: Hmm. I dug up this:

http://www.math.wustl.edu/~sk/limits.pdf

It says that if a sequence of holomorphic functions converges pointwise in a region, the limit need only be holomorphic on a dense, open subset.

Oh, that's an interesting one, I hadn't heard of it yet (I only knew the compact-convergence holomorphy theorem). Really interesting.

Note that "dense" can *not* be imagined here as something like \( \mathbb{Q} \) being dense in \( \mathbb{R} \), because a function holomorphic at a point is holomorphic in some neighborhood of that point! Rather, it should be read as saying that the set of points where the limit is not holomorphic has no interior points (i.e. there is no small disk on which the limit fails to be holomorphic).
This is really a strong result!

However, as you wrote, in our case there may well be a boundary where the limit is not holomorphic (and it may even differ from the STB, if the intuitive iteration differs from regular iteration).

Quote:Failing that, could examining the derivatives on the part of the STB where it converges be useful? If the limit is not holomorphic on the STB, then wouldn't these derivatives explode, go nuts, etc.?

One would have to show that \( \limsup_{n\to\infty}\sqrt[n]{\left|a_n\right|}=\infty \), i.e. that there is some subsequence of coefficients along which \( \sqrt[n_k]{\left|a_{n_k}\right|} \to \infty \).
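A crude numerical probe along these lines, assuming some coefficient sequence \( a_1, \dots, a_N \) has been computed (the helper root_test is mine; the commented example reuses the hypothetical intuitive_slog_coeffs sketch from post #1, which gives the z-expansion of the truncated slog):

Code:
import numpy as np

def root_test(a):
    """Return |a_n|^(1/n) for n = 1..N; a subsequence drifting off to
    infinity would be the signature of a vanishing radius of convergence."""
    a = np.asarray(a, dtype=float)
    n = np.arange(1, len(a) + 1)
    return np.abs(a) ** (1.0 / n)

# e.g., on the truncated intuitive slog coefficients from the earlier sketch:
# print(root_test(intuitive_slog_coeffs(1.7, 48))[-8:])

Of course the truncation itself contaminates the tail coefficients, so only the early, stabilized part of the sequence says anything.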

But actually the intuitive iteration is really difficult to handle; that's why there are nearly no results about it. No convergence proof, not even a proof that \( \log_b \) is the intuitive Abel function of \( bx \) (which is a really simple function compared to \( \exp_b \), the one of actual interest for tetration).
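(For completeness, the easy direction is a one-liner: \( \log_b(bx) = \log_b(b) + \log_b(x) = \log_b(x) + 1 \), so \( \log_b \) certainly is an Abel function of \( x \mapsto bx \); the open question is whether the intuitive construction actually converges to it.)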
#5
(06/17/2010, 04:14 AM)bo198214 Wrote: But actually the intuitive iteration is really difficult to handle; that's why there are nearly no results about it. No convergence proof, not even a proof that \( \log_b \) is the intuitive Abel function of \( bx \) (which is a really simple function compared to \( \exp_b \), the one of actual interest for tetration).

I suppose the big trouble is due to the matrix inversion, right? How come all these tetration methods seem so difficult to analyze? :)
#6
(06/16/2010, 11:09 AM)mike3 Wrote: Yet it seems to agree with regular iteration for \( b \in (1, e^{1/e}] \),

According to my newest computations, intuitive iteration (I would like this to be the official name, deprecating "Abel-matrix iteration") does not coincide with regular iteration. What does that mean for your two hypotheses?
#7
(06/21/2010, 01:45 PM)bo198214 Wrote:
(06/16/2010, 11:09 AM)mike3 Wrote: Yet it seems to agree with regular iteration for \( b \in (1, e^{1/e}] \),

According to my newest computations, intuitive iteration (I would like this to be the official name, deprecating "Abel-matrix iteration") does not coincide with regular iteration. What does that mean for your two hypotheses?

It'd mean that regular iteration could still have a natural boundary, and so still not be the "right" tetrational (which I'd expect to work for all \( b \in [1, \infty) \)).

It would be interesting to see if there were some way to plot the intuitive iteration on the complex \( z \)-plane for a base in \( [1, e^{1/e}) \). The behavior there may (should?) differ dramatically from the regular iteration. You might not even need to plot the tet function, just the slog function, whose formulas extend to the whole plane.
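A rough way to sample such a plot, reusing the hypothetical intuitive_slog_coeffs sketch from post #1 (with the normalization \( \mathrm{slog}_b(0) = -1 \); this only says anything well inside the unknown radius of convergence of the truncated series):

Code:
import numpy as np

# sample the truncated intuitive slog on a patch of the complex z-plane
c = intuitive_slog_coeffs(np.sqrt(2.0), 32)       # base sqrt(2), 32 coefficients
x = np.linspace(-0.8, 0.8, 161)
z = x[None, :] + 1j * x[:, None]                  # grid of complex points
S = -1 + sum(ck * z ** (k + 1) for k, ck in enumerate(c))
# S can now be fed to a phase/contour plot and compared with the regular slog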

Another question: is this intuitive iteration the same as what you get by substituting a Taylor series with unknown coefficients for slog into \( \mathrm{slog}_b(b^z) = \mathrm{slog}_b(z) + 1 \) and solving the resulting system of linear equations? If so, then maybe one could also come at this from that direction: plug the coefficients of the regular slog into those equations and check that they do not satisfy them (being linear, the system should have only one solution; if that solution is not \( \mathrm{rslog} \), then \( \mathrm{rslog} \) should fail to satisfy it). That is, construct the Taylor series of \( \mathrm{rslog}_b(b^z) \) and see whether it equals that of \( \mathrm{rslog}_b(z) + 1 \) for a base \( b \in [1, e^{1/e}) \). (A sketch of such a check follows.)
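Here is a sketch of that check as I would code it (Python again; the residual function is mine). One caveat: since \( b^z \) has constant term 1 rather than 0, \( S(b^z) \) is not a formal power-series composition; the outer sum over \( k \) is simply truncated at the available coefficients, so the residual is only meaningful where it is much larger than the truncation noise.

Code:
import numpy as np
from math import factorial, log

def abel_residual(c, b, N):
    """Taylor coefficients (orders 0..N-1) of S(b^z) - (S(z) + 1), where
    S(z) = sum_k c[k] z^k is a candidate slog; all (near) zero iff the
    Abel equation holds through the available orders."""
    lb = log(b)
    g = np.array([lb ** m / factorial(m) for m in range(N)])  # series of b^z
    comp = np.zeros(N)
    gk = np.zeros(N); gk[0] = 1.0            # g(z)^0
    for k in range(len(c)):
        comp += c[k] * gk                    # accumulate c_k * g(z)^k
        gk = np.convolve(gk, g)[:N]          # g(z)^(k+1), truncated to order N-1
    comp[:len(c)] -= np.asarray(c)           # subtract S(z)
    comp[0] -= 1.0                           # ... and the +1
    return comp

Feeding it the regular slog's coefficients for a base \( b \in (1, e^{1/e}) \), and watching whether the residual exceeds truncation noise, would be exactly the test proposed above.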

And what method did you use to do that computation?
#8
(06/22/2010, 09:15 PM)mike3 Wrote: Another question: is this intuitive iteration the same as what you get by substituting a Taylor series with unknown coefficients for slog into \( \mathrm{slog}_b(b^z) = \mathrm{slog}_b(z) + 1 \) and solving the resulting system of linear equations? If so, then maybe one could also come at this from that direction: plug the coefficients of the regular slog into those equations and check that they do not satisfy them (being linear, the system should have only one solution; if that solution is not \( \mathrm{rslog} \), then \( \mathrm{rslog} \) should fail to satisfy it). That is, construct the Taylor series of \( \mathrm{rslog}_b(b^z) \) and see whether it equals that of \( \mathrm{rslog}_b(z) + 1 \) for a base \( b \in [1, e^{1/e}) \).

Indeed, that needs to be done.
#9
(06/22/2010, 09:15 PM)mike3 Wrote: Another question: is this intuitive iteration the same as what you get by substituting a Taylor series with unknown coefficients for slog into \( \mathrm{slog}_b(b^z) = \mathrm{slog}_b(z) + 1 \) and solving the resulting system of linear equations?
Yes.
Quote: If so, then maybe one could also come at this from that direction: plug the coefficients of the regular slog into those equations and check that they do not satisfy them (being linear, the system should have only one solution.
It's an infinite linear equation system.
In general it can have no solution, exactly one, or infinitely many.
In our case, however, we know it has infinitely many solutions, since \( \theta(\text{slog}(z))+\text{slog}(z) \) is another solution for every 1-periodic \( \theta \).
All these solutions satisfy the infinite linear equation system, in particular the regular one.

What is specific about the intuitive Abel function is that it is obtained by solving the infinite linear equation system as the limit of square truncations of that system.
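In code, using the hypothetical intuitive_slog_coeffs sketch from post #1, that limit process is just:

Code:
import numpy as np

# watch the square truncations stabilize (or fail to) as n grows
for n in (8, 16, 32, 64):
    c = intuitive_slog_coeffs(np.sqrt(2.0), n)
    print(n, c[0], c[1])     # the truncated approximations to c_1 and c_2

(No convergence claim intended; as noted above, there is no proof.)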
#10
Adding to Henryk's post -

the slog-matrix SLOG is the (supposed) inverse of the "Bell matrix minus I" (B - I), constructed by discarding a column that would otherwise be empty.
For each size n×n, the truncated solution thus builds its inverse from the information of only (n-1)×(n-1) parameters, so we have a loss of information. After inversion, the matrix is shifted to fit the n×n size and the parameter -1 is filled into the head of the empty column.

If that loss of information decreases as the size increases, then I think we have good reason to assume convergence of the method. Andrew showed in his base article the "stabilizing" of the coefficients for some truncations and bases.
However, if we multiply (B-I) * SLOG (or was it SLOG*(B-I)?), we don't get exactly the identity matrix, but (systematically) a non-negligible value in the last row. This value is far bigger than the last entry in the relevant column of SLOG and, in my opinion, disturbs the convergence characteristic of the SLOG function: we get diminishing coefficients at higher index (and an apparent convergence, but to a false value!), while the correction needed for the final value sits in the last term of the truncated power series, and it cannot be neglected.
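In my own formulation of the system (which may differ in detail from Gottfried's Bell-matrix setup), a residual of this kind can be probed as the first equation row the n×n truncation throws away, evaluated on the truncated solution; intuitive_slog_coeffs is again the hypothetical sketch from post #1:

Code:
import numpy as np
from math import factorial, log

b, n = np.sqrt(2.0), 32
lb = log(b)
c = intuitive_slog_coeffs(b, n)
# the dropped row m = n reads: sum_k c_k (k*lb)^n / n!  -  c_n  =  0
row = np.array([(k * lb) ** n / factorial(n) for k in range(1, n + 1)])
print(row @ c - c[n - 1])   # residual of the first equation beyond the truncation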

Now, how does this matter with respect to the asymptotics of the infinite size? There, a "last" term is not available, and the "correction" can only be neglected if its value vanishes. Well, as long as the parameter x of the SLOG function lies between 0 and 1, its high powers suppress any influence of that correction term; but if, for instance, x = 1, I don't know whether we can dismiss that effect.

I cannot really estimate the weight of that problem, and maybe it vanishes over the relevant range of the parameters. But I remember the problems Jay Fox reported with some difficult oscillating error terms when the SLOG computation was accelerated and each bit of the result was examined...

Gottfried