imho a core issue
#1
One of the most important things, imho, is the sequence of derivatives of the half-iterates of exp(x).

I have been thinking about it for a long time, and it's about time I asked about this.

There are some partial results, but in general the question is quite open, and tetration seems to be almost "immune" to standard calculus "tricks".

In my imagination I always conjecture

tommysexp(tommyslog(x)+1/2) (around x=0) = a0^2 + a1^2 x + a2^2 x^2 + ...

where the squares indicate that the coefficients are POSITIVE, and the radius of convergence is assumed to be at least 1/3.

Although the POSITIVE part seems unlikely, I am considering it.

Now keep in mind, tetration is tricky, and so is calculus.

For instance, if someone writes a program, it might make roundoff errors.

If someone uses a slightly different method to compute the derivatives or to define the half-iterate, that might give different results.

One of the simple reasons is that asymptotics might have very different derivatives,

or might not be differentiable,

or might have a different radius of convergence,

or, even weirder, might have fractal-like nth derivatives, so that the iterations converge to the same function, which is differentiable everywhere, BUT the derivative of the limit function is not equal to the limit of the derivatives.

For instance, lim n->oo sin(n^2 x)/n^2 gives f(x) = 0, but the derivatives of the approximants do not converge to 0.
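That example can be checked numerically with a short sketch (plain Python, nothing tetration-specific; the sample points are just chosen where the sine factor equals 1):

```python
import math

def f(n, x):
    # n-th approximant: sin(n^2 x) / n^2, which tends to 0 uniformly in x
    return math.sin(n * n * x) / (n * n)

def d2f(n, x):
    # second derivative of the approximant: -n^2 sin(n^2 x)
    return -n * n * math.sin(n * n * x)

for n in (10, 100, 1000):
    x = math.pi / (2 * n * n)      # a point where sin(n^2 x) = 1
    print(n, f(n, x), d2f(n, x))   # f(n, x) = 1/n^2 -> 0 while d2f(n, x) = -n^2 blows up
```

So the functions converge to 0, yet their second derivatives are unbounded: differentiation and the limit cannot be interchanged here.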

I believe such things might occur in the research of tetration as well, though I cannot prove it.

I remember getting puzzling results, looking like paradoxes and illusory fractals, when doing research.

For instance, ln^[2](2sinh^[a](exp^[2](x))) seems very similar to my 2sinh method and is numerically close, but seems to give very different results in many ways...

I'm very skeptical of computer computations and computer graphics, for the above-mentioned reasons and others.

However, I don't find much theory, references, or discussion about this.

Is it obviously wrong to assume positivity?

Is it numerically easier to compute than slog or sexp?

Can we conclude something about the signs of the half-iterate's coefficients from the signs of the super or inverse super, without simply composing those two?

I tried many calculus tricks, such as the Cauchy intermediate value theorem and also matrix methods, and I don't know what else I can do...

If someone would be willing to give the derivatives of a method mentioned here on the forum, in particular my own, that would be appreciated.

But as mentioned above, please use the exact same method, because it might deviate otherwise.

For instance, Sheldon has many algorithms, including one to compute "a tommysexp", but it is slightly different from the way I compute it...

I don't trust my own results at the moment...

I wonder when it is allowed to use differences instead of derivatives for the series expansion, since the half-iterate of exp(x) grows slower than exp(x)...

In particular, I experimented with Newton's forward difference formula, q-differences, q-derivatives, and other "q-stuff",

but although it seemed promising at first, it ultimately gave no results.
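The kind of experiment meant here can be sketched in plain Python (my own toy, not any of the forum methods): for functions of sufficiently small growth, Newton's forward-difference series recovers the function from its integer samples alone, no derivatives needed. 2^x is the classical borderline case (exponential type ln 2), and its Newton series still converges for x > -1:

```python
from math import comb, sqrt

def fwd_diff(f, k):
    # k-th forward difference of f at 0: sum_j (-1)^(k-j) * C(k,j) * f(j)
    return sum((-1) ** (k - j) * comb(k, j) * f(j) for j in range(k + 1))

def gen_binom(x, k):
    # generalized binomial coefficient C(x, k) for real x
    out = 1.0
    for i in range(k):
        out *= (x - i) / (i + 1)
    return out

# Newton's forward difference series: f(x) ~ sum_k Delta^k f(0) * C(x, k)
f = lambda j: 2 ** j   # toy stand-in; kept as exact integers to avoid cancellation
x = 0.5
newton = sum(fwd_diff(f, k) * gen_binom(x, k) for k in range(40))
print(newton, sqrt(2))  # the partial sums approach 2^0.5
```

For 2^x every forward difference Delta^k f(0) equals 1, so the series reduces to the binomial series for (1+1)^x. Whether a half-iterate of exp grows slowly enough for such a series to converge is exactly the open question raised above.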

One of the main issues - or so it appeared - is well demonstrated by the following:

ln( a^2 + b^2 * exp(x) )

For various a and b we can get quite different Taylor series, with negative terms appearing at unexpected nth derivatives.
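To make that example concrete, here is a sketch (ordinary Python, no computer algebra system; the recursion for the series of ln(u(x)) is textbook, only its application here is mine) that computes the Taylor coefficients of ln(a^2 + b^2 e^x):

```python
import math

def log_series(u, n):
    """Taylor coefficients l[0..n] of ln(u(x)), given coefficients u[0..n], u[0] > 0.

    Uses u(x) * L'(x) = u'(x), i.e. (m+1) u_{m+1} = sum_{k=1}^{m+1} k l_k u_{m+1-k}.
    """
    l = [math.log(u[0])]
    for m in range(n):
        s = sum(k * l[k] * u[m + 1 - k] for k in range(1, m + 1))
        l.append(((m + 1) * u[m + 1] - s) / ((m + 1) * u[0]))
    return l

def coeffs(a, b, n):
    # u(x) = a^2 + b^2 e^x  =>  u_0 = a^2 + b^2,  u_k = b^2 / k!
    u = [a * a + b * b] + [b * b / math.factorial(k) for k in range(1, n + 1)]
    return log_series(u, n)

for a, b in [(1.0, 1.0), (2.0, 1.0)]:
    print(a, b, [round(c, 6) for c in coeffs(a, b, 8)])
```

For a = b = 1 this is ln(1 + e^x) = ln 2 + x/2 + x^2/8 + 0*x^3 - x^4/192 + ..., so a negative coefficient already appears at the fourth derivative even though all terms inside the logarithm are positive, which illustrates the sign surprises described above.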

I am aware of formulas for the nth derivative of exp(f(x)), ln(f(x)) and similar, but against all odds they didn't seem to help me much.

It seems this example is similar to the problems I came across.

I wonder if I'm alone in that, and what others did.

It would be nice to see the coefficients of my own method.

I know the coefficients of the (non-linear) 2*sinh^[sin^2(z)](x) change sign infinitely often, and so do the signs of any slog.

Thanks in advance.

tommy1729
#2
(12/16/2011, 08:51 PM)tommy1729 Wrote: One of the most important things, imho, is the sequence of derivatives of the half-iterates of exp(x).

I have been thinking about it for a long time, and it's about time I asked about this.

There are some partial results, but in general the question is quite open, and tetration seems to be almost "immune" to standard calculus "tricks".

In my imagination I always conjecture

tommysexp(tommyslog(x)+1/2) (around x=0) = a0^2 + a1^2 x + a2^2 x^2 + ...

where the squares indicate that the coefficients are POSITIVE, and the radius of convergence is assumed to be at least 1/3.

Although the POSITIVE part seems unlikely, I am considering it.
....
tommy1729
The coefficients should ultimately settle into a pattern, determined by the nearest singularity. In the case of tommysexp(tommyslog(z)+0.5), the nearest singularity is only approximate, as increasing the number of iterated logarithms will bring the nearest singularity arbitrarily closer; but I would expect the approximate pattern could hold for dozens, or thousands, or even millions of derivatives, depending on where the half iterate is generated. For example, based on previous calculations I've done, the half iterate of 0.5 would be approximately one, where the nearest singularity would have a radius of approximately 0.46, and the pattern would hold for millions of derivatives.
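The "pattern determined by the nearest singularity" can be illustrated with a toy model (my own sketch, not Sheldon's actual computation; the value r = 0.46 is only borrowed from the text as an illustration): a function with a single logarithmic singularity at distance r has Taylor coefficients whose successive ratios approach 1/r, so the singularity distance can be read off the tail of the series.

```python
# toy model: f(x) = -ln(1 - x/r) has Taylor coefficients c_k = 1/(k * r^k), k >= 1
r = 0.46            # illustrative singularity distance, borrowed from the text
N = 200
c = [1.0 / (k * r ** k) for k in range(1, N + 1)]

# the ratio of successive coefficients estimates 1/(radius of convergence)
est_radius = c[-2] / c[-1]
print(est_radius)   # close to r = 0.46, from the high-order tail of the series
```

The estimate is r * k/(k-1), so it converges to r from above as the order k grows; this is the sense in which the coefficient pattern "settles" once the nearest singularity dominates.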

I did some work iterating logarithms of superexponentials of bases other than base "e", and approximating how the Taylor series coefficients change as the sequence of functions converges with increasing n: iterated logarithms of cheta in the basechange case, or, in the case of tommysexp, iterated logarithms of the superfunction of 2sinh. I focused on cheta, the upper superexponential for base eta=exp(1/e), which is used for the base change sexp function, but it turns out that iterated logarithms of cheta with n+2 logarithms behave similarly to iterated logarithms for tommysexp with n logarithms. I have some vacation time, so I will try and post the results later.
- Sheldon
#3
(12/20/2011, 09:23 AM)sheldonison Wrote: ... or, in the case of tommysexp, iterated logarithms of the superfunction of 2sinh. I focused on cheta, the upper superexponential for base eta=exp(1/e), which is used for the base change sexp function, but it turns out that iterated logarithms of cheta with n+2 logarithms behave similarly to iterated logarithms for tommysexp with n logarithms. I have some vacation time, so I will try and post the results later.
- Sheldon

So Sheldon, did you find time to compute the results?
I guess I'm a bit impatient, but I expected them sooner since you mentioned vacation time.

And I guess I'm not alone, considering this has about 100 views for only 2 posts.

I understand that the computations might not be as simple as they appear at first sight, as you or someone else might have assumed... (I mean the actual numbers, up to some roundoff, not a guess.)

I also want to point out again that my method of computation is somewhat different, and might even be distinct in some places.

tommy1729
#4
(01/04/2012, 07:06 PM)tommy1729 Wrote: So Sheldon, did you find time to compute the results?
I guess I'm a bit impatient, but I expected them sooner since you mentioned vacation time.
....
I've had some writer's block trying to decide how to proceed. Also, I have cleaned up some of the equations that I will eventually post, which will also include ways to calculate the Taylor series coefficients, and ways to calculate accurate approximations as n grows arbitrarily large (Taylor series coefficients of order larger than 20 million). I have the numerical work done (for quite some time actually), but most of the numerical work I've done so far is on the basechange. I have the mathematical equations for tommysexp as well. The mathematical equations for the Taylor series coefficients of TommySexp track the Taylor series of the basechange function at twice the frequency, so that a_{2n} (for tommysexp) corresponds to b_n (for basechange), for any value of n, both scaled by the appropriate singularity distance.

Also, there are two different ways to present the results, both of which turn out to be equivalently difficult and equivalently manageable. One is to generate the basechange function or the tommysexp function by iterating logarithms, and look at how the Taylor series change as you increase the number of iterated logarithms. But what is more interesting to me is to calculate the slog of the superfunction of exp(z-1), which is equivalent to cheta(z)/e-1, or the slog of the superfunction of 2sinh(z). The equation which quickly becomes 1-cyclic as z increases is theta(z) = slog(g(z)) - z, where g(z) is the superfunction in question and slog(z) is the inverse of Kneser's sexp(z) function, which is known to be analytic. As z increases, theta(z) quickly converges, with theta(z+1)=~theta(z). theta(z)'s high frequency Taylor series coefficients change in the same way as those of tommysexp(z) or basechange(z), and it is also nowhere analytic but infinitely differentiable.

So I had a bit of writer's block in trying to decide which way to post the results, but I think I'll probably do both, which will make the results that much more verbose. I need to decide what graphs to present, and then post all of it together, with the pari-gp program, and some graphs, and the equations, and methods to do the approximations. So that's the reason for the delay. Thanks for the interest.

I'm in the process of adding numerical results for tommysexp for large values of n, though I have previously posted the Taylor series coefficients for tommysexp(z) at z=0, for values of n up to several dozen. I will extend that to several hundred, showing the patterns, and then to the crossover where the next singularity takes over, which I've predicted to be around z^13million or z^14million.

Update: I got the program working for tommysexp, and found the frequency where the approximate radius of convergence switches from 0.45729 to 0.034659. The log of the Taylor series coefficient there is 1058342.36381, and the coefficient is negative. I was probably remembering wrong, off by a factor of 10, when I thought it was 13-14 million. So now all I need to do is post the equations...
- Sheldon
#5
That sounds good.
Thank you for your interest too.

But I am somewhat confused by the combination of your posts on this forum, in particular:

1: you often claim a radius of convergence for tommysexp, tommyslog and similar expressions,

yet

2: you compare my functions to the basechange...

3: you say the basechange is not analytic...

4: it (the basechange) is not proven to not be analytic,

5: even if the basechange is not analytic, what about (1), the radius of convergence?

6: if two functions are similar in properties or close in value, that does not imply they must both be analytic or both non-analytic.

7: as I often say, I compute it slightly differently, which might affect the millionth digit or the analytic properties; you need to understand that if you use a variant, your properties are for that variant also.

8: there are not enough singularities close enough to the real line to make my function exp^[1/2](x) NOT analytic in x near the real line.
(I use analytic continuation to go to the complex plane.)

9: when you talk about non-analytic, do you perhaps mean the other parameter, z instead of x, in exp^[z](x)?


The combination of those nine points has kept me confused for quite a while.
I brought up these questions before (and Bo also replied that the basechange has not been proven to be non-analytic near the real line).

10: it seems you avoid these remarks a bit, or maybe it was not clear to you what I meant.

Otherwise, great posts though.

regards

tommy1729
#6
(01/07/2012, 06:28 PM)tommy1729 Wrote: ....
4: it (the basechange) is not proven to not be analytic,

5: even if the basechange is not analytic, what about (1), the radius of convergence?
Agreed, it's not proven. It is proven to be infinitely differentiable, and I want to show that the Taylor series coefficients grow fast enough that the function has a zero radius of convergence.
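What "zero radius of convergence" amounts to can be seen in a toy model (my own illustration, not the actual basechange coefficients): if |c_n| grows like n!, then |c_n|^(1/n) is unbounded, and by the root test the radius 1/limsup |c_n|^(1/n) is 0.

```python
import math

# root test on factorially growing coefficients: ln(|c_n|^(1/n)) = ln(n!)/n
# math.lgamma(n + 1) computes ln(n!) without overflow
for n in (10, 100, 1000, 10000):
    print(n, math.lgamma(n + 1) / n)   # grows roughly like ln(n) - 1, without bound
```

Any coefficient sequence whose nth roots are unbounded behaves the same way, so the whole question reduces to pinning down the growth rate of the coefficients.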
Quote:6: if two functions are similar in properties or close in value, that does not imply they must both be analytic or both non-analytic.
I agree, and intend to treat sexp2sinh(z), which is what I had been calling tommysexp(z), completely independently, but using the same methods.
Quote:7: as I often say, I compute it slightly differently, which might affect the millionth digit or the analytic properties; you need to understand that if you use a variant, your properties are for that variant also.

8: there are not enough singularities close enough to the real line to make my function exp^[1/2](x) NOT analytic in x near the real line.
(I use analytic continuation to go to the complex plane.)

9: when you talk about non-analytic, do you perhaps mean the other parameter, z instead of x, in exp^[z](x)?
....
I'm working on a draft post with the equations. As to remark (9), my focus would be on iterating logarithms of the superfunction of 2sinh(z):
define g(z) as the superfunction of 2sinh(z);

then,

sexp2sinh(z) = lim n->oo ln^[n]( g(z+n) )
My understanding is that sexp2sinh(z)=tommysexp(z), but maybe not, so sexp2sinh(z) is a better name to use for this alternative sexp(z) function. The sexp2sinh(z) function, as defined above, was inspired by our exchanges on the tetration forum concerning tommysexp(z).
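The iterated-logarithm limit above can be sketched numerically (my own toy version: the seed g(0)=2 and the log-space trick are illustrative, not Sheldon's pari-gp code). Because 2sinh(x) is close to e^x for large x, ln(2sinh(x)) can be computed stably as x + log1p(-exp(-2x)) even when 2sinh(x) itself would overflow, and the approximants ln^[n](g(n)) settle down after only a couple of steps:

```python
import math

def two_sinh(x):
    return 2.0 * math.sinh(x)

def log_two_sinh(x):
    # ln(2 sinh x) = x + ln(1 - e^(-2x)); stable even when 2 sinh x overflows
    return x + math.log1p(-math.exp(-2.0 * x))

# integer samples of a superfunction g of 2sinh, seeded (arbitrarily) at g(0) = 2
g0 = 2.0
g1 = two_sinh(g0)         # roughly 7.25
g2 = two_sinh(g1)         # roughly 1.4e3

# approximants s_n = ln^[n](g(n)); work with ln g to avoid overflow past g(2)
s1 = math.log(g1)
s2 = math.log(math.log(g2))
lg3 = log_two_sinh(g2)    # = ln g(3), computed without ever forming g(3)
s3 = math.log(math.log(lg3))
print(s1, s2, s3)         # successive approximants agree more and more closely
```

The extremely fast agreement of successive approximants is exactly why these iterated-log constructions are so hard to distinguish numerically, even when their analytic properties may differ.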

My interest is in exploring the derivatives of sexp2sinh(z) and the basechange(z) function, which are both well defined at the real axis, and both look like tetration. Both functions have very interesting Taylor series representations, whose coefficients appear to eventually grow faster than any geometric sequence (so that the radius of convergence would be zero), but do so in a most unusual way. The draft I am working on will show how I approximate the Taylor series coefficients for these functions, in the hope that this might be a step towards a rigorous proof that these functions are nowhere analytic.
- Sheldon

