OK, I do it slightly differently.

I call the conjecture : the tommy-sheldon conjecture.

I feel the need to mention sheldon and I assume it will become a theorem very soon.

Let n be an integer > 3.

Let x be a real number > e^e.

A few words about this: first, n > 3 is justified because only " the tail " matters, as explained before.

Also x > 1 is important to me because the behaviour of x^n is very much influenced by whether x > 1 or not.

Also, when we substitute x with ln(x), we need x > e for similar reasons.

Since the logic used involves approximations and inequalities, we take exp once more and get x > e^e, just to be " safe ".

Also, x and n should not be too small, to avoid fixed-point issues.

Let f(x) be the function we look for: the "fake" (entire) exp^[1/2](x) with all Taylor coefficients a_n > 0.

Now clearly for all such x and n we have :

f(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + ... + a_n x^n + ...

and it follows that

a_n x^n < f(x)

Both sides are > 0, hence we can take a logarithm on both sides without any issues.

ln(a_n) + n ln(x) < ln(f(x))
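The fake exp^[1/2] has no closed form, but the single-term bound and its logarithmic form can be sanity-checked numerically with f(x) = e^x as a stand-in, since e^x is also entire with all Taylor coefficients a_n = 1/n! > 0. This is only an illustration under that assumption, not the fake half-iterate itself:

```python
import math

# Stand-in example: f(x) = e^x, whose Taylor coefficients a_n = 1/n!
# are all positive, like the "fake" exp^[1/2] discussed above.
x = math.e ** math.e                  # x > e^e, as required
fx = math.exp(x)
for n in range(4, 10):                # n > 3
    a_n = 1.0 / math.factorial(n)
    # one positive term of the series is less than the whole sum:
    assert a_n * x ** n < fx
    # logarithmic form: ln(a_n) + n ln(x) < ln(f(x))
    assert math.log(a_n) + n * math.log(x) < math.log(fx)
print("single-term bound holds for n = 4..9")
```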

Now, since x is sufficiently large, we can approximate ln(f(x)) with f(ln(x)).

( You can compare this to how 2 sinh(e^e) is close to e^(e^e), but sinh(0) is not close to exp(0). )
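The sinh remark is easy to verify numerically: 2 sinh(x) = e^x - e^(-x), so the relative error against e^x is e^(-2x), negligible for large x but huge at 0.

```python
import math

# 2*sinh(x) approximates e^x well for large x (relative error e^(-2x)),
# but badly near 0.
x_big = math.e ** math.e                              # ~15.15
ratio = 2 * math.sinh(x_big) / math.exp(x_big)        # = 1 - e^(-2x)
print(ratio)                                          # extremely close to 1

print(2 * math.sinh(0.0), math.exp(0.0))              # 0.0 vs 1.0: far apart
```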

Thus we rewrite :

ln(a_n) + n ln(x) < f(ln(x))

Now we can substitute x with ln(x).

This is valid because x is sufficiently large: ln(x) > e.

(And e > 1 thus still justifying both f(ln(x)) = ln(f(x)) and the power remark about x^n )
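For what it is worth, the identity f(ln(x)) = ln(f(x)) holds exactly for the genuine half-iterate, because fractional iterates of exp commute under composition. Spelled out (a supporting step added here, not in the original argument):

```latex
\ln\bigl(\exp^{[1/2]}(x)\bigr)
  = \exp^{[-1]}\!\bigl(\exp^{[1/2]}(x)\bigr)
  = \exp^{[-1/2]}(x)
  = \exp^{[1/2]}\!\bigl(\exp^{[-1]}(x)\bigr)
  = \exp^{[1/2]}\bigl(\ln(x)\bigr)
```

The fake f only inherits this identity approximately, and only where f is close to exp^[1/2], which is one more reason x must be large.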

( Note: I am not sure about sheldon's derivative. After all, substituting x with ln(x) and then taking the derivative without taking into account the substitution you just made... In other words, D f(ln(x)) = f'(ln(x)) D ln(x) = f'(ln(x)) / -x. However, if you substitute ln(x) = y then you get f'(y) dy, where dy = dx/-x, BUT (!) if you substitute ln(x) = x you get f'(y) dx AND NOT f'(y) / -x = f'(ln(x)) / -x.

Although the division by x might not make a big difference in the conclusion, it is fundamentally wrong logic even if it leads to a correct result. And even if the result is correct, I think it created more confusion than clarity; sheldon even added a conjecture for it, whereas with my method that is not necessary. So I believe this step is dubious and confused sheldon himself. Hence, even if it is perhaps not a big issue, I felt the need to communicate this, in particular because it might work now but not in a generalized case; the mistake must be noted because of the danger of it being used wrongly by " inexperienced mathematicians ".

No offense to sheldon, for all clarity. By the way, writing 1/-x is also not very good, the irony... but it is valid for nonzero real x. It is better to write -1/x than 1/-x. )

We continue:

ln(a_n) + n ln(x) < f(ln(x))

ln(a_n) < f(ln(x)) - n ln(x)

As said, we can substitute ln(x) = x.

( Or ln(x) = y if you want, but it does not matter here because I do not take a derivative. )

ln(a_n) < f(x) - n x

---

Now you see the issue with small n.

Say n = 1 or 2.

The LHS is negative... but the RHS could be either negative or positive for small x, and so the inequality both puts a question mark on the " tail " argument and places strong bounds on the a_n.
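The sign issue on the RHS can be illustrated concretely. Again using f(x) = e^x as a stand-in (the fake exp^[1/2] has no closed form, and for a slower-growing f the sign change happens at even smaller n), the quantity f(x) - n x already changes sign for moderate x when n = 3. The helper `rhs` below is hypothetical, for illustration only:

```python
import math

# Stand-in: f(x) = e^x, so rhs(x, n) = e^x - n*x plays the role of
# f(x) - n*x from the inequality ln(a_n) < f(x) - n*x above.
def rhs(x, n):
    return math.exp(x) - n * x

# e^x - 3x is negative near its minimum at x = ln(3)...
print(rhs(math.log(3), 3))   # 3 - 3*ln(3), about -0.296
# ...but positive once x is large enough:
print(rhs(3.0, 3))           # e^3 - 9, about 11.09
```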

---

Since exp is a strictly increasing function on the reals:

a_n < exp(f(x) - n x)

Now the trick is to write x as function of n.

AND NOT VICE VERSA !

ALSO NO DERIVATIVE !

Lagrange multipliers are also dubious here, for similar reasons as mentioned above.

Let x = g(n).

Although the cardinality of the reals =/= the cardinality of the positive integers, this is the way to go.

Differentiating this is (thus) also silly.

a_n < exp(f(x) - n x)

a_n < exp( f(g(n)) - n g(n) )

1/a_n > exp( n g(n) - f(g(n)) )

Now let g(n) = ln^[a](n).

Then clearly

n g(n) - f(g(n)) = n ln^[a](n) - f(ln^[a](n))

Remember f is close to exp^[1/2] since x is large enough.

=> n ln^[a](n) - ln^[a-(1/2)](n)

Now this function is clearly maximized when a = 1/2.

=> n ln^[1/2](n) - n = n ( ln^[1/2](n) - 1 ).
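Spelled out, the a = 1/2 evaluation only uses the composition rule for iterates, f(ln^[a](n)) = exp^[1/2](ln^[a](n)) = ln^[a-1/2](n), together with ln^[0](n) = n:

```latex
n\,\ln^{[a]}(n) - \ln^{[a-1/2]}(n)
  \;\Big|_{a=1/2}
  \;=\; n\,\ln^{[1/2]}(n) - \ln^{[0]}(n)
  \;=\; n\,\ln^{[1/2]}(n) - n
  \;=\; n\left(\ln^{[1/2]}(n) - 1\right)
```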

So our estimate is:

1/a_n > exp(n ( ln^[1/2](n) - 1 ) )

A better estimate is very very likely :

1/a_n > exp(n * ln^[1/2](n) )

The fact that a = 1/2 is optimal is a mini theorem.

( Note: the sequence is related to a discrete set, and Carlson's theorem might apply here to improve this. )

The conjecture (the tommy-sheldon conjecture) is:

1/a_n = O( exp(n * (ln^[1/2](n))^B ) )

For some real B > 0.

The self-reference here is quite big.

This might also be improved by recursion, if we replace ln^[1/2](x) in the estimate or conjecture with f(x) itself.

Notice it is easy to prove that 1/a_n = O( exp(n * n^C) ) is false for any real C > 0.

Thereby destroying an earlier guess.

It is very nice to know that this can be generalized to finding " fake " exp^[1/3] or other fractional iterates of other functions, etc.

I considered Collatz again... which gave me a headache.

regards

tommy1729
