One of the most important things, imho, is the sequence of derivatives of the half-iterate of exp(x).

I have been thinking about this for a long time, and it's about time I asked about it.

There are some partial results, but in general the question is quite open, and tetration seems to be almost "immune" to standard calculus "tricks".

In my imagination I always conjecture:

tommysexp(tommyslog(x) + 1/2) (around x = 0) = a0^2 + a1^2 x + a2^2 x^2 + ...

where the squares indicate that every coefficient is POSITIVE, and the radius of convergence is assumed to be at least 1/3.

Although the POSITIVE part seems unlikely, I am still considering it.
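Whatever method produces the coefficients, the conjecture itself can at least be sanity-checked mechanically: test positivity term by term, and estimate the radius of convergence with the root test. A minimal sketch; the demo coefficients below are a placeholder geometric series with known radius 1/3, NOT the actual coefficients of my half-iterate:

```python
def check_positivity_and_radius(coeffs):
    """Check all coefficients are positive, and estimate the radius of
    convergence via the root test: R ~ 1 / limsup |a_k|^(1/k)."""
    all_positive = all(c > 0 for c in coeffs)
    # crude limsup estimate using only the tail of the available coefficients
    tail = [abs(c) ** (1.0 / k) for k, c in enumerate(coeffs)
            if k >= len(coeffs) // 2 and c != 0]
    radius_estimate = 1.0 / max(tail) if tail else float("inf")
    return all_positive, radius_estimate

# placeholder: coefficients of 1/(1 - 3x) = sum (3x)^k, radius exactly 1/3
demo = [3.0 ** k for k in range(60)]
pos, r = check_positivity_and_radius(demo)
print(pos, r)  # True, with r close to 1/3
```

Of course the root test only gives evidence from finitely many coefficients, never a proof of the radius.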

Now keep in mind: tetration is tricky, and so is calculus.

For instance, if someone writes a program, it might make roundoff errors.

If someone uses a slightly different method to compute the derivatives, or to define the half-iterate, that might give different results.

One simple reason is that asymptotically equal functions might have very different derivatives.

Or might not be differentiable.

Or might have a different radius of convergence.

Or, even weirder, the approximants might have fractal-like nth derivatives: the iterations converge to the same function, which is differentiable everywhere, BUT the derivative of that function is not equal to the limit of the derivatives.

For instance, f_n(x) = sin(n^2 x)/n^2 converges to f(x) = 0 as n -> oo, yet f_n'(x) = cos(n^2 x) gives f_n'(0) = 1 for every n, so a lot of nonzero derivatives survive that the limit function does not have.
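This failure is easy to see numerically too, which is exactly why I distrust naive numeric differentiation here: a finite difference with step h much smaller than 1/n^2 "sees" derivative 1 at 0, even though the limit function is identically 0. A minimal sketch:

```python
import math

def f(n, x):
    """f_n(x) = sin(n^2 x) / n^2, which converges uniformly to 0."""
    return math.sin(n**2 * x) / n**2

n = 1000
# the functions are uniformly tiny ...
assert abs(f(n, 0.5)) <= 1.0 / n**2
# ... but the derivative at 0 is cos(0) = 1 for EVERY n,
# as a fine finite difference (h << 1/n^2) confirms:
h = 1e-10
diff_quotient = (f(n, h) - f(n, 0)) / h
print(diff_quotient)  # close to 1, not 0
```

With a coarser step (h comparable to 1/n^2 or larger) the same quotient looks close to 0 instead, so two programs with different step sizes would "measure" different derivatives of the very same sequence.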

I believe such things might occur in tetration research as well, though I cannot prove it.

I remember getting puzzling results that looked like paradoxes and illusory fractals while doing research.

For instance, ln^[2](2sinh^[a](exp^[2](x))) seems very similar to my 2sinh method and is numerically close, yet it seems to give very different results in many ways ...
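For concreteness, here is one standard way to compute the fractional iterates 2sinh^[a] appearing in that expression: regular (Koenigs) iteration at the repelling fixed point 0, where the multiplier is 2sinh'(0) = 2cosh(0) = 2. This is only a sketch of that one ingredient, not of the full 2sinh method, and other implementations may differ in exactly the ways discussed above:

```python
import math

def koenigs(x, n=40):
    """Koenigs coordinate of 2sinh at 0: phi(x) = lim 2^n * g^n(x),
    where g is the inverse of 2sinh, i.e. g(x) = arcsinh(x/2)."""
    for _ in range(n):
        x = math.asinh(x / 2)
    return 2.0**n * x

def koenigs_inverse(y, n=40):
    """phi^{-1}(y) = lim (2sinh)^n ( y / 2^n )."""
    y = y / 2.0**n
    for _ in range(n):
        y = 2 * math.sinh(y)
    return y

def iterate_2sinh(x, t):
    """Regular iterate: 2sinh^[t](x) = phi^{-1}( 2^t * phi(x) )."""
    return koenigs_inverse(2.0**t * koenigs(x))

# the half-iterate composed with itself should reproduce 2sinh
x = 1.0
hh = iterate_2sinh(iterate_2sinh(x, 0.5), 0.5)
print(hh, 2 * math.sinh(x))  # the two should agree closely
```

Since the convergence here is geometric, the derivatives of the truncated approximants are at least under control for 2sinh itself; the delicate step is what happens after wrapping with exp^[2] and ln^[2].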

I'm very skeptical of computer computations and computer graphics, for the reasons mentioned above and others.

However, I don't find much theory, references, or discussion about this.

Is it obviously wrong to assume positivity?

Is the half-iterate numerically easier to compute than slog or sexp?

Can we conclude something about the signs of the half-iterate's coefficients from the signs of the super or inverse super, without simply composing those two?

I tried many calculus tricks, such as the Cauchy intermediate value theorem and also matrix methods, and I don't know what else I can do ...
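On the matrix methods: for a toy function fixing 0 with multiplier greater than 1, the truncated Carleman matrix approach does yield a half-iterate, and it can be tested against a case with a known closed form. A sketch using f(x) = 2x + x^2 = (x+1)^2 - 1, whose exact half-iterate is (x+1)^sqrt(2) - 1 (a stand-in, NOT exp, which has no real fixed point and is precisely where these methods get delicate):

```python
import numpy as np
from scipy.linalg import sqrtm

N = 16  # truncation order

def carleman(poly_coeffs, N):
    """Carleman matrix M[i, j] = coefficient of x^j in f(x)^i, for f(0)=0.
    With this convention M[f o g] = M[f] @ M[g]."""
    M = np.zeros((N, N))
    M[0, 0] = 1.0
    row = np.zeros(N); row[0] = 1.0     # f^0 = 1
    for i in range(1, N):
        new = np.zeros(N)               # multiply previous row by f
        for j, c in enumerate(poly_coeffs):
            if c:
                new[j:] += c * row[: N - j]
        row = new
        M[i] = row
    return M

f = [0.0, 2.0, 1.0]        # f(x) = 2x + x^2
M = carleman(f, N)
H = sqrtm(M).real          # matrix square root = Carleman matrix of h
h_coeffs = H[1]            # row 1 holds the power series of h(x)

print(h_coeffs[1])         # h'(0) should be sqrt(2) = 1.41421...

# composition check: h(h(x)) ~ f(x) for small x
x = 0.1
hx = sum(c * x**k for k, c in enumerate(h_coeffs))
hhx = sum(c * hx**k for k, c in enumerate(h_coeffs))
print(hhx, 2*x + x*x)
```

For this f the truncated matrices are triangular, so the truncation is exact and the square root is essentially unambiguous; for exp that triangularity is lost, which is one concrete way the matrix method becomes method-dependent.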

If someone would be willing to give the derivatives of a method mentioned here on the forum, in particular my own, that would be appreciated.

But as mentioned above, please use the exact same method, because the results might deviate.

For instance, Sheldon has many algorithms, including one to compute "a tommysexp", but it is slightly different from the way I compute it ...

I don't trust my own results at the moment ...

I wonder when it is allowed to use finite differences instead of derivatives for the series expansion, since the half-iterate of exp(x) grows slower than exp(x) ...

In particular, I experimented with Newton's forward difference formula, q-differences, q-derivatives and other "q-stuff",

but although seemingly promising at first, ultimately without results.
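For the forward-difference route, the classical condition (stated loosely, Carlson-type) is that the Newton series sum_k Delta^k f(0) * binomial(x, k) recovers f when f grows slower than 2^x in a suitable sense, which is why the growth rate of the half-iterate matters at all. A sketch at the boundary case f(x) = 2^x, where every forward difference Delta^k f(0) equals exactly 1:

```python
def forward_differences(values):
    """Iterated forward differences Delta^k f(0) from samples f(0), f(1), ..."""
    diffs, row = [], list(values)
    while row:
        diffs.append(row[0])
        row = [row[j + 1] - row[j] for j in range(len(row) - 1)]
    return diffs

def newton_series(diffs, x):
    """sum_k Delta^k f(0) * binomial(x, k): interpolation at non-integer x."""
    total, binom = 0.0, 1.0
    for k, d in enumerate(diffs):
        total += d * binom
        binom *= (x - k) / (k + 1)   # binomial(x, k+1) from binomial(x, k)
    return total

samples = [2.0 ** k for k in range(30)]
d = forward_differences(samples)     # all exactly 1 for f(x) = 2^x
approx = newton_series(d, 0.5)
print(approx)                        # converges (slowly) to 2^0.5 = sqrt(2)
```

Note the convergence at x = 1/2 is only like an alternating binomial tail here; for functions growing faster than 2^x the same series can diverge outright, which may be part of why my experiments looked promising and then fell apart.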

One of the main issues - or so it appeared - is well demonstrated by the following:

ln( a^2 + b^2 * exp(x) )

For various a and b we get quite different Taylor series, with negative terms (unexpected?) appearing at different (unexpected?) nth derivatives.
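This can be made precise: writing c = a^2/b^2, the first derivative of ln(a^2 + b^2 exp(x)) is the logistic function 1/(1 + c e^{-x}), and the third derivative at 0 works out to -c(1-c)/(1+c)^3, which is negative exactly when a^2 < b^2. So the sign pattern really does flip as a and b vary. A quick symbolic check with sympy:

```python
from sympy import symbols, ln, exp, series

x = symbols('x')

def x3_coeff(a, b):
    """Coefficient of x^3 (i.e. f'''(0)/6) in the Taylor series of
    ln(a^2 + b^2 * exp(x)) at x = 0."""
    f = ln(a**2 + b**2 * exp(x))
    return series(f, x, 0, 4).removeO().coeff(x, 3)

print(x3_coeff(2, 1))   # 2/125  (a^2 > b^2: positive)
print(x3_coeff(1, 2))   # -2/125 (a^2 < b^2: negative)
```

So even for this simple model of "log of something exp-like", no uniform sign statement holds without conditions on a and b, and the same caveat presumably applies to any positivity conjecture for the half-iterate.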

I am aware of formulas for the nth derivative of exp(f(x)), ln(f(x)) and similar, but against all odds they didn't seem to help me much ...

This example seems similar to the problems I came across.

I wonder if I'm alone in this, and what others did.

It would be nice to see the coefficients of my own method.

I know the coefficients of the (non-linear) 2sinh^[sin^2(z)](x) change sign infinitely often, and so do the signs of any slog.

Thanks in advance,

tommy1729
