Status of proofs - Printable Version

+- Tetration Forum (https://math.eretrandre.org/tetrationforum)
+-- Forum: Tetration and Related Topics (https://math.eretrandre.org/tetrationforum/forumdisplay.php?fid=1)
+--- Forum: Mathematical and General Discussion (https://math.eretrandre.org/tetrationforum/forumdisplay.php?fid=3)
+--- Thread: Status of proofs (/showthread.php?tid=39)

Status of proofs - Daniel - 08/24/2007

Henryk brings up the issue that proofs are needed when the definition of tetration is extended to the real numbers. I am interested to hear what people consider needs to be proven to establish tetration for real and complex numbers. A number of people have asserted that they have extended tetration; I worry that this is a race without a commonly accepted finish line.

RE: Status of proofs - bo198214 - 08/24/2007

Daniel Wrote: I am interested to hear what people consider needs to be proven to establish tetration for real and complex numbers.

Well, for me the following assertions need to be proved:

- Do the coefficients in Andrew's solution converge?
- Does the series made up of these coefficients converge? What is the radius of convergence?
- Does the natural development at every point define the same analytic function? Is the piecewise-defined function analytic?
- Is Andrew's solution, for bases 1 < b < e^(1/e), identical with the solution obtained by regular development at the lower fixed point?
- Is the solution analytic with respect to the base as argument (especially at the base b = e^(1/e))?

I think each question can be answered with a yes (and there are strong numerical hints for this); however, none of the proofs is obvious. I call it Andrew's solution merely to give it a common name, though the idea was published before him by Peter Walker.

RE: Status of proofs - bo198214 - 08/24/2007

Apart from that, I would also be interested in how Jay's solution behaves for other bases, whether it is analytic, and generally how it compares with Andrew's solution.
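The questions about Andrew's solution can at least be probed numerically. Below is a minimal sketch of my own (not Andrew's actual code, and the helper names `slog_coeffs`/`slog` are hypothetical): truncate the superlogarithm to a power series slog(x) = c_1 x + ... + c_N x^N (with c_0 set to 0), impose the Abel equation slog(b^x) = slog(x) + 1 coefficient-wise up to order N-1, and solve the resulting linear system. One can then inspect whether the coefficients settle down and how small the residual of the Abel equation is.

```python
from math import log, factorial

def slog_coeffs(b, N=30):
    """Solve the truncated linear system for the coefficients c_1..c_N of
    slog(x) ~ c_1*x + ... + c_N*x**N (setting c_0 = 0), imposing the Abel
    equation slog(b**x) = slog(x) + 1 coefficient-wise up to order N-1.
    Uses (b**x)**k = exp(k*x*log b) = sum_j (k*log b)**j * x**j / j!."""
    lb = log(b)
    # row j = coefficient of x**j; column k-1 = unknown c_k
    A = [[(k * lb) ** j / factorial(j) - (1.0 if j == k else 0.0)
          for k in range(1, N + 1)] for j in range(N)]
    rhs = [1.0] + [0.0] * (N - 1)   # the x**0 row reads: sum_k c_k = 1
    # plain Gaussian elimination with partial pivoting
    for col in range(N):
        piv = max(range(col, N), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, N):
            f = A[r][col] / A[col][col]
            for cc in range(col, N):
                A[r][cc] -= f * A[col][cc]
            rhs[r] -= f * rhs[col]
    c = [0.0] * N
    for r in range(N - 1, -1, -1):
        c[r] = (rhs[r] - sum(A[r][k] * c[k] for k in range(r + 1, N))) / A[r][r]
    return c   # c[k-1] approximates the coefficient of x**k

def slog(x, c):
    return sum(ck * x ** (k + 1) for k, ck in enumerate(c))

c = slog_coeffs(2.0)
# numerical residual of the Abel equation at a point inside the presumed disk
res = slog(2.0 ** 0.2, c) - slog(0.2, c) - 1.0
```

Numerically the residual comes out tiny, which is exactly the kind of "strong numerical hint" mentioned above; none of this, of course, proves convergence as N grows.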
Jay's solution seems never to have been considered before.

RE: Status of proofs - Gottfried - 08/24/2007

[edit 25.8: minor textual corrections]

Hmm, my thoughts may be amateurish, but I currently have time for a bit of discussion of this. What I called "my matrix method" is nothing else than a concise notation for the manipulations of exponential series which constitute the iterates s^x, s^s^x, ... of x - the integer iterates. It turns out that the coefficients of the series which has to be summed to get the value of the m-th iterate have a complicated structure, which is nevertheless iterative with a simple scheme. It is a multisum, difficult to write down, so I indicate it for the first three iterates m=1, m=2, m=3:

m=1:  s^x = sum_{n>=0} log(s)^n * x^n / n!

m=2:  s^(s^x) = sum_{n>=0} (log(s)^n / n!) * sum_{k>=0} (n*log(s))^k * x^k / k!

m=3:  s^(s^(s^x)) = sum_{n>=0} (log(s)^n / n!) * sum_{k>=0} ((n*log(s))^k / k!) * sum_{j>=0} (k*log(s))^j * x^j / j!

For m>2 the binomial term expands into a multinomial term. When denoting log(s) by L, the notation for the general case can be shortened:

m arbitrary, integer > 0 (with the convention 0^0 = 1):

s^s^...^x (m exponentiations) = sum_{n_1,...,n_m >= 0} (L^(n_1+...+n_m) / (n_1! * ... * n_m!)) * n_1^(n_2) * n_2^(n_3) * ... * n_(m-1)^(n_m) * x^(n_m)

(I hope I didn't make an index error.) This formula can also be found in the literature (update: a wrong reference originally given here has been removed).

Convergence: Because we know that power towers of finite height are computable by exponential series, we also know that for the finite case this sum expression converges. This is due to the weight that the factorials in the denominators accumulate.

Bound for the base parameter s: However, in the case of infinite height we already know that log(s) is limited to -e < log(s) < 1/e.

Differentiability: In the general formula above the parameter s occurs only as the n-th power of its logarithm (divided by the n-th factorial).
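For concreteness, here is a small numerical check, assuming the multisum for m=2 takes the nested exponential-series form s^(s^x) = sum_{n,k>=0} log(s)^(n+k) * n^k * x^k / (n! * k!) with 0^0 = 1 (my reconstruction of the stripped formulas, since (s^x)^n = exp(n*x*log s)), together with a quick illustration of the bound on the base for infinite height. The function names are mine.

```python
from math import log

def tower2_series(s, x, N=40):
    """Evaluate s^(s^x) via the double sum
         sum_{n,k>=0} log(s)**(n+k) * n**k * x**k / (n! * k!)
    (convention 0**0 = 1), truncating both indices at N."""
    lam = log(s)
    total, nfact = 0.0, 1.0
    for n in range(N + 1):
        if n > 0:
            nfact *= n
        inner, term = 0.0, 1.0       # term = (n*lam*x)**k / k!, starting at k=0
        for k in range(N + 1):
            inner += term
            term *= n * lam * x / (k + 1)
        total += lam ** n / nfact * inner
    return total

def tower_limit(s, iters=500, cap=1e6):
    """Iterate x -> s**x starting from x = 1; for e**(-e) <= s <= e**(1/e)
    this converges to the value of the infinite power tower."""
    x = 1.0
    for _ in range(iters):
        x = s ** x
        if x > cap:
            return None              # diverged
    return x

approx, exact = tower2_series(1.3, 0.5), 1.3 ** (1.3 ** 0.5)
t = tower_limit(1.3)   # 1.3 < e**(1/e) ~ 1.4447: converges, so s**t = t
d = tower_limit(1.5)   # 1.5 > e**(1/e): the iteration runs away
```

The factorials in both denominators do indeed dominate: with N=40 the double sum already agrees with the directly computed s^(s^x) to machine precision, while stepping just outside log(s) < 1/e makes the tower iteration diverge.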
This simply provides infinite differentiability with respect to s. Differentiation with respect to the height parameter m could be described when the eigensystem decomposition is used (see below). (But maybe I have misunderstood the ongoing discussion about this topic completely - I kept out of that discussion due to my lack of experience with matrix differentiation :-( )

All in all, I think that this description is undoubtedly the unique definition of tetration of integer heights in terms of power series in s and x.

------------------------------------------------

For the continuous case my idea was that it may be determined either by the eigenvalue decomposition of the involved matrices or, where this is impossible due to inadmissible parameters s, by the matrix logarithm. The eigenvalue decomposition has the disadvantage that its properties are not yet analytically known, while the matrix logarithm is sometimes an option if the eigenvalue decomposition fails, since it seems that the resulting coefficients can be determined more easily.

An example is the U-iteration U_s^(m)(x) = s^(U_s^(m-1)(x)) - 1 with s = e, which we discuss here as the "exp(x)-1" version. For this case Daniel already provided an array of coefficients for a series which allows one to determine the U-tetration U_s^(m)(x) for fractional, real and even complex parameters m. These coefficients seem to be analytically derived/derivable; only the generating scheme was not documented. However, I provided a method to produce these coefficients for finitely, but arbitrarily, many terms of the series for U_s^(m)(x) by symbolic exponentiation of the parametrized matrix logarithm, so at least that is a *basis* for the analysis of convergence and summability of the occurring series.

Except for such borderline cases I would prefer to define continuous tetration based on the eigenanalysis of the involved matrix Bs. From the convergent cases -e
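A sketch of the matrix-logarithm route for the exp(x)-1 case, under my own conventions (not Gottfried's actual code): build a truncated Carleman matrix of f(x) = exp(x) - 1 whose row n holds the coefficients of f(x)^n, so that matrix products correspond to composition and integer matrix powers to integer iterates. Because f(0) = 0 and f'(0) = 1, this matrix is unipotent triangular, its logarithm is a terminating series in the nilpotent part, and exp(t*log M) yields the coefficients of the t-th iterate for arbitrary (e.g. fractional) t.

```python
from math import factorial

N = 8  # truncate all power series at x**N

def series_mul(a, b):
    c = [0.0] * (N + 1)
    for i in range(N + 1):
        for j in range(N + 1 - i):
            c[i + j] += a[i] * b[j]
    return c

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_add(A, B, fb=1.0):
    return [[x + fb * y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def carleman(a):
    """Row n holds the coefficients of a(x)**n (row 0 is the constant 1)."""
    M, p = [], [1.0] + [0.0] * N
    for _ in range(N + 1):
        M.append(p[:])
        p = series_mul(p, a)
    return M

f = [0.0] + [1.0 / factorial(k) for k in range(1, N + 1)]  # exp(x) - 1
M = carleman(f)
I = [[float(i == j) for j in range(N + 1)] for i in range(N + 1)]
Nil = mat_add(M, I, -1.0)     # M - I is nilpotent: M is unipotent triangular

# L = log M = Nil - Nil**2/2 + Nil**3/3 - ...  (terminates by nilpotency)
L = [[0.0] * (N + 1) for _ in range(N + 1)]
P = I
for k in range(1, N + 2):
    P = mat_mul(P, Nil)
    L = mat_add(L, P, (-1.0) ** (k + 1) / k)

def iterate(t):
    """Carleman matrix of the t-th iterate: exp(t*L), a terminating series."""
    tL = [[t * x for x in row] for row in L]
    H, P = I, I
    for k in range(1, N + 2):
        P = [[x / k for x in row] for row in mat_mul(P, tL)]
        H = mat_add(H, P)
    return H

half = iterate(0.5)
h = half[1]                        # series coefficients of the half-iterate
check = mat_mul(half, half)[1]     # composing it with itself should give f back
```

Solving h(h(x)) = exp(x) - 1 order by order gives h(x) = x + x^2/4 + x^3/48 + ..., and the matrix route reproduces exactly these coefficients, which is the "symbolic exponentiation of the parametrized matrix logarithm" idea in miniature; whether the full (untruncated) series converges is precisely the open summability question raised above.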