Status of proofs - Printable Version

+- Tetration Forum (https://math.eretrandre.org/tetrationforum)
+-- Forum: Tetration and Related Topics (https://math.eretrandre.org/tetrationforum/forumdisplay.php?fid=1)
+--- Forum: Mathematical and General Discussion (https://math.eretrandre.org/tetrationforum/forumdisplay.php?fid=3)
+--- Thread: Status of proofs (/showthread.php?tid=39)

Pages: 1 2

Status of proofs - Daniel - 08/24/2007

Henryk brings up the issue of proofs being needed for extending the definition of tetration to the real numbers. I am interested to hear what people consider needs to be proven to establish tetration for real and complex numbers. A number of people have asserted that they have extended tetration. I worry that this is a race without a commonly accepted finish line.

RE: Status of proofs - bo198214 - 08/24/2007

Daniel Wrote: I am interested to hear what people consider needs to be proven to establish tetration for real and complex numbers.

Well, for me the following assertions need to be proved:

- Do the coefficients in Andrew's solution converge?
- Does the series made up of these coefficients converge? What is the convergence radius?
- Does the natural development at every point define the same analytic function? Is the piecewise defined function analytic?
- Is Andrew's solution for bases $b\le e^{1/e}$ identical with the solution obtained by regular development at the lower fixed point?
- Is the solution analytic with respect to the base as argument (especially at the base $b=e^{1/e}$)?

I think each question can be answered with a yes (and there are strong numerical hints for it); however, none of the proofs is obvious. I call it Andrew's solution merely to give it a common name, though the idea was published before him by Peter Walker.

RE: Status of proofs - bo198214 - 08/24/2007

Apart from that, I would also be interested in how Jay's solution behaves for bases $1<b<e^{1/e}$, whether it is analytic, and generally how it compares with Andrew's solution.
Jay's solution seems never to have been considered before.

RE: Status of proofs - Gottfried - 08/24/2007

[edit 25.8, minor textual corrections]

Hmm, my thoughts may be amateurish, but I currently have time for a bit of discussion of it. What I called "my matrix method" is nothing else than a concise notation for the manipulations of exponential series which constitute the iterates x, s^x, s^s^x, ...: the integer iterates.

It turns out that the coefficients of the series which has to be summed to get the value of

$T^{(m)}_s(x) = s^{s^{\dots^{s^x}}} \quad \text{with } m\text{-fold occurrence of } s$

have a complicated but nevertheless iterative structure with a simple scheme. It is a multisum, difficult to write down, so I indicate it for the first three iterates m=1, m=2, m=3.

m=1: $T^{(1)}_s(x)= \sum_{k=0}^{\infty} x^k \cdot \frac{\log(s)^k}{k!}$

m=2: $T^{(2)}_s(x)= \sum_{k_2=0}^{\infty} \sum_{k_1=0}^{\infty} x^{k_2} \cdot k_1^{k_2} \cdot \binom{k_1+k_2}{k_2} \cdot \frac{\log(s)^{k_1+k_2}}{(k_1+k_2)!}$

m=3: $T^{(3)}_s(x)= \sum_{k_3=0}^{\infty} \sum_{k_2=0}^{\infty} \sum_{k_1=0}^{\infty} x^{k_3} \cdot k_2^{k_3} k_1^{k_2} \cdot \binom{k_1+k_2+k_3}{k_2,k_3} \cdot \frac{\log(s)^{k_1+k_2+k_3}}{(k_1+k_2+k_3)!}$

For m>2 the binomial term expands to a multinomial term. Denoting

$n = k_1 + k_2 + \dots + k_m,$

the notation for the general case m>1 can be shortened.

m arbitrary, integer > 0: $T^{(m)}_s(x)= \sum_{k_1,\dots,k_m=0}^{\infty} x^{k_m} \cdot \left( k_{m-1}^{k_m} \cdots k_2^{k_3} k_1^{k_2} \right) \cdot \binom{n}{k_2,k_3,\dots,k_m} \cdot \frac{\log(s)^n}{n!}$

(I hope I didn't make an index error.) This formula can also be found in [wrong reference removed (update)].

Convergence: Because we know that power towers of finite height are computable by exponential series, we also know that this sum expression converges in the finite case.
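The m=2 formula above can be checked numerically against direct evaluation of $s^{s^x}$. A minimal sketch (the truncation bound N and the sample values of s and x are my own choices, not from the thread):

```python
from math import comb, factorial, log

def T2(s, x, N=30):
    """Truncated version of the m=2 multisum for s^(s^x)."""
    ls = log(s)
    total = 0.0
    for k1 in range(N):
        for k2 in range(N):
            n = k1 + k2
            # note: Python's 0**0 == 1, which is the convention the sum needs
            total += x**k2 * k1**k2 * comb(n, k2) * ls**n / factorial(n)
    return total

s, x = 1.3, 0.5
print(T2(s, x))       # truncated multisum
print(s ** (s ** x))  # direct evaluation of s^(s^x)
```

With N=30 the two values agree to well below double precision for moderate s and x; the factorials in the denominators damp the terms quickly.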
This is due to the weight that the factorials in the denominators accumulate.

Bound for the base parameter s: In the case of infinite height, however, we already know that log(s) is limited to -e < log(s) < 1/e.

Differentiability: In the general formula above, the parameter s occurs as the n-th power of its logarithm (divided by the n-th factorial). This immediately gives infinite differentiability with respect to s. Differentiation with respect to the height parameter m could be described using the eigensystem decomposition (see below). (But maybe I misunderstood the ongoing discussion about this topic completely - I kept out of that discussion due to my lack of experience with matrix differentiation :-( )

All in all, I think that this description is undoubtedly the unique definition of tetration for integer heights in terms of power series in s and x.

------------------------------------------------

For the continuous case my idea was that it may be determined either by the eigenvalue decomposition of the involved matrices or, where this is impossible due to inadmissible parameters s, by the matrix logarithm. The eigenvalue decomposition has the disadvantage that its properties are not yet analytically known; the matrix logarithm is sometimes an option when the eigenvalue decomposition fails, since it seems that the resulting coefficients can be determined more easily. An example is the U-iteration U_s^(m)(x) = s^(U_s^(m-1)(x)) - 1 with s=e, which we discuss here as the "exp(x)-1" version. For this case Daniel already provided an array of coefficients for a series which allows one to determine the U-tetration U_s^(m)(x) for fractional, real and even complex parameters m. These coefficients seem to be analytically derived/derivable; only the generating scheme was not documented.
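To illustrate the matrix-logarithm route for the "exp(x)-1" case (s = e, so f(x) = e^x - 1), here is a small sketch; it is not the thread's actual code, and the truncation size N is my own choice. It builds a truncated Carleman matrix B of f (row j holds the coefficients of f^j), takes log B via the finite series log(I + N') with N' nilpotent (B is unitriangular because f(0)=0 and f'(0)=1), and reads the coefficients of a fractional iterate f^t from row 1 of exp(t log B):

```python
from fractions import Fraction
from math import exp, factorial

N = 9  # number of series coefficients kept (powers x^0 .. x^8); my choice

# f(x) = e^x - 1 as exact rational coefficients: 0, 1/1!, 1/2!, 1/3!, ...
f = [Fraction(0)] + [Fraction(1, factorial(k)) for k in range(1, N)]

def series_mul(a, b):
    """Cauchy product of two coefficient lists, truncated to N terms."""
    c = [Fraction(0)] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if ai and bj and i + j < N:
                c[i + j] += ai * bj
    return c

def mat_mul(A, C):
    return [[sum(A[i][m] * C[m][k] for m in range(N)) for k in range(N)]
            for i in range(N)]

# Carleman matrix: row j holds the coefficients of f(x)**j, so B is
# unitriangular and the matrix product corresponds to composition of series.
B, row = [], [Fraction(1)] + [Fraction(0)] * (N - 1)
for j in range(N):
    B.append(row)
    row = series_mul(row, f)

I = [[Fraction(i == k) for k in range(N)] for i in range(N)]
Nil = [[B[i][k] - I[i][k] for k in range(N)] for i in range(N)]

# log B = Nil - Nil^2/2 + Nil^3/3 - ...  -- a *finite* sum (Nil is nilpotent)
L, P = [[Fraction(0)] * N for _ in range(N)], I
for i in range(1, N):
    P = mat_mul(P, Nil)
    L = [[L[r][k] + Fraction((-1) ** (i + 1), i) * P[r][k]
          for k in range(N)] for r in range(N)]

def iterate_coeffs(t):
    """Series coefficients of the t-th iterate of f: row 1 of exp(t*log B)."""
    M, P = [r[:] for r in I], I
    for i in range(1, N):
        P = mat_mul(P, L)
        M = [[M[r][k] + t ** i * P[r][k] / factorial(i)
              for k in range(N)] for r in range(N)]
    return M[1]

half = iterate_coeffs(Fraction(1, 2))  # exact rational half-iterate coefficients
g = lambda x: sum(float(c) * x ** k for k, c in enumerate(half))
print(half[:3])                    # series starts x + x^2/4 + ...
print(g(g(0.05)), exp(0.05) - 1)   # composing g with itself recovers e^x - 1
```

Because all matrices here are unitriangular, both the logarithm and the exponential are finite, exact sums in rational arithmetic; only the final evaluation uses floats. This is one concrete instance of the matrix-logarithm idea, not a reconstruction of the coefficients Daniel or Gottfried actually published.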
However, I provided a method to produce these coefficients for finitely, but arbitrarily, many terms of the series for U_s^(m)(x) by symbolic exponentiation of the parametrized matrix logarithm, so at least that is a *basis* for the analysis of convergence and summability of the occurring series.

Except for such borderline cases I would prefer to define continuous tetration via the eigenanalysis of the involved matrix Bs. From the convergent cases $-e < \log(s) < 1/e$ [...] (for bases $b > 1$ and each mxm truncation of $E_b$) and that the first row gives the coefficients of a power series $e_b^t$ such that $\text{slog}_b(e_b^t(1))=t$ for Andrew's slog. An easier step in that direction could be to show that $e_b^t(x)$ is equal to the regular iteration at the lower fixed point of $b^x$ for $1 < b < e^{1/e}$.

RE: Status of proofs - andydude - 08/29/2007

Well, I have given this a lot of thought, and I believe that it is easier to prove real-analytic tetration than complex-analytic tetration. The reason is that real-valued tetration has a smaller domain than complex-valued tetration. The first realization is that tetration over the real numbers can produce complex numbers. After this realization we can eliminate a great deal of the domain over which real-analyticity must fail (because it is not continuous, and if it's not continuous it can't be analytic).

To show what I mean, I have included a color-coded plot of log(abs(b^^x)), where gray is a finite real number, blue is a complex-number output, and red is indeterminate. I have also included a prettier version of the domain, where the circles indicate indeterminate outputs. The dark-gray quarter-plane is the largest domain over which real-analytic tetration can be defined. The medium-gray areas are regions which have real outputs, but the dotted line indicates a discontinuity, so this would make real-analyticity fail if they were in the domain of real tetration.
The light-gray region is probably not real-valued, but it is real-valued under the first-approximation tetration (linear critical). A mathematical definition of the dark-gray domain is:

$D = \left\{ (b, x) : b > 0 \text{ and } \begin{cases} x > -1 & \text{if } b = 1, \\ x > -2 & \text{otherwise} \end{cases} \right\}$

So to summarize, I believe that what needs to be proven is that tetration with the dark-gray region above as its domain is real-analytic in both b and x. Once it is proven that a real-analytic tetration exists over this domain, then we can worry about its uniqueness.

Andrew Robbins

RE: Status of proofs - Gottfried - 08/29/2007

Daniel Wrote: Great feedback, but I'm asking about proofs for an arbitrary approach. For example, I have my own approach and don't want to have to prove that my approach is equivalent to someone else's. An exception is something like

- R. Aldrovandi and L. P. Freitas, Continuous iteration of dynamical maps, J. Math. Phys. 39, 5324 (1998),

where I have experimental results indicating that my work is consistent with Aldrovandi and Freitas's work.

Daniel - that's exactly what I felt for my case, too. Luckily Henryk established some connections for my approach, so I feel more at home now. I just read through your pages again, but unfortunately I still could not follow your derivations in detail. If there were some more examples (actual examples of how you computed the terms of your series, etc.), it would be easier for me to see whether, at a certain stage of reading, I am still on the right track with my understanding.
It's a sort of redundancy that I need (and that I miss in many articles: often they go in medias res very quickly and use a very concise notation with a lot of notational implications, where I cannot judge whether or not I've already lost the thread; so after such a point I decide to switch to "skimming" mode of reading, at best). So a complete example of how you compute your results would also help to understand the rationale behind them - at least for me (but well, this doesn't say much; I'm surely not the most relevant case for this, and my difficulties are then not the most representative...).

Just a short feedback - Gottfried