08/01/2014, 11:20 PM

Let f(x) = a_0 + a_1 x + a_2 x^2 + ...

Let a_0 , a_1 , a_2 > 0

Assume a_2 > a_3 > a_4 > ... > 0

Then the goal is to find a_n for n > 2.

Assume a_n > (n-1) a_(n+1)

Some motivation.

First we try to solve for a single variable.

Tetration is a difficult subject , so multivariable ideas/equations get complicated , in particular

when we are not working with matrices.

Another thing : we prefer not to solve an equation that contains both a_(n-1) and a_n.

The reason is that we get disagreement.

example : F(a_10,a_11) = 0 and F(a_11,a_12) = 0 now give 2 conflicting values for a_11.

Besides , solving for a_(n-1) when we already have its value is illogical.

Solving for a_n and a_(n+1) makes a bit more sense since we did not yet have the value of a_(n+1).

example : given a_5 it makes sense to solve for a_6,a_7.

Keeping that in the back of our mind , what type of equation should we solve ?

A logical choice would be a truncated taylor series.

Sheldon approximately solved a_n x^n = exp^[0.5](x).

However the truncation is extreme here.

It is true that for every x , there must be an n such that a_n x^n is the most dominant term.

But a_n is independent of the value of x. Hence the equation a_n x^n = exp^[0.5](x) makes some sense.

However a more dominant approximation of the taylor series is a_q x^q ,

where q is between n and n+1 and a_q = a_n.

This shows that Sheldon's equation is probably valid within a ratio of x^(q-n).

On average that is x^(1/2).

So Sheldon's solution S(x) probably satisfies S(x)/x^(3/4) - C < f(x) < S(x) x^(3/4) + C for some constant C.

In fact Sheldon mentioned the correcting factor x^(1/2) himself.

A less extreme truncation would lead to better results.

How to get more dominant terms ?

We need to consider the contributions of the terms a_m x^m from a generic taylor series with positive a_i.

As a function of m , these contributions look like a gaussian curve g : g(m - t) is smaller than g(q)

and g(m + t') is smaller than g(q) for sufficiently large t , t' , where the top sits at m = q + o(2).

The top of this curve moves to larger m as x grows.

But a gauss curve is symmetric around its top.

So a_n x^n + a_(n+1) x^(n+1) are the most dominant terms , and both terms can have a similar contribution !
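This bell shape is easy to see numerically. As a stand-in (the half-iterate is not needed for this point) take the series of exp(x) , whose coefficients are positive :

```python
import math

# Term sizes a_m x^m of a taylor series with positive coefficients ,
# here exp(x) with a_m = 1/m! as a stand-in example.
x = 30.0
terms = [x**m / math.factorial(m) for m in range(120)]

# The profile over m is bell shaped , peaking near m = x ,
# and the two terms around the top are of comparable size.
peak = max(range(120), key=lambda m: terms[m])
print(peak)                            # close to 30
print(terms[peak + 1] / terms[peak])   # close to 1
```

So at a given x two neighbouring terms near the top really do carry comparable weight , which is the point of keeping both a_n x^n and a_(n+1) x^(n+1).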

Now perhaps you are thinking : why not solve a_n x^n + a_(n+1) x^(n+1) + a_(n+2) x^(n+2) + a_(n+3) x^(n+3) ?

It's a reasonable idea ... but it violates all the logic above.

First there are too many variables.

Second we get a lot of disagreement.

But most importantly it is no longer likely that a_n x^n is the dominant term !!

It is more likely to be a_(n+1) x^(n+1) + a_(n+2) x^(n+2) , which is then basically the same as what is proposed now.

Notice that if a_n x^n is not one of the dominant terms , it behaves more like the term a_(n-1) x^(n-1).

But then we violate another principle from above ; we would be solving for values we already have.

Notice the assumption "Assume a_n > (n-1) a_(n+1)" only makes this argument stronger.

These considerations lead me to

a_n x^n + a_(n+1) x^(n+1) = exp^[0.5](x).

2 unknowns are a bit problematic , and I do not want to get disagreement.

So we use the assumption "Assume a_n > (n-1) a_(n+1)" , taking the boundary case a_(n+1) = a_n/(n-1).

Notice this assumption makes sense since f grows more like a polynomial than an exponential.

The assumption is also consistent with Sheldon's equation , plots and all results so far.

a_n x^n + a_n/(n-1) x^(n+1) = exp^[0.5](x)

Now we need approximations for exp^[0.5](x) without having our f(x).

I use my 2sinh method for that when x is large.
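For readers who want to experiment , here is a minimal numeric sketch of one way to realize a 2sinh-based half-iterate (my own simplified reconstruction , not necessarily the exact construction of the 2sinh method). It uses the fixpoint 0 of 2sinh(x) , which has multiplier 2 , and the fact that 2sinh(x) ≈ exp(x) for large x :

```python
import math

def twosinh_half(x, k=40):
    """Half-iterate of 2sinh(x) via its repelling fixpoint at 0
    (multiplier 2) : pull x toward 0 with the inverse map ,
    scale by 2^(1/2) , then push back out."""
    y = x
    for _ in range(k):
        y = math.asinh(y / 2.0)   # inverse of y -> 2*sinh(y)
    y *= math.sqrt(2.0)           # multiplier^(1/2)
    for _ in range(k):
        y = 2.0 * math.sinh(y)
    return y

# half-iterate property : applying it twice gives back 2sinh
x = 1.3
print(twosinh_half(twosinh_half(x)))
print(2 * math.sinh(x))           # the two values agree closely
```

Since 2sinh(x) and exp(x) agree closely for large x , this gives a workable large-x proxy for exp^[0.5](x) under the sketch's assumptions.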

We continue :

a_n x^n (1 + x/(n-1)) = exp^[0.5](x)

Take ln on both sides :

ln(a_n) + n ln(x) + ln(1 + x/(n-1)) = ln(exp^[0.5](x))

replace x with exp(X)

Now the strength of my 2sinh method shows : ln(exp^[0.5](exp(X))) = exp^[0.5](X)

we get :

ln(a_n) + n X + ln(1 + exp(X)/(n-1)) = exp^[0.5](X)

We know that for large L , ln(1 + L) is about ln(L) + 1/L.
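A quick numerical check of this expansion ( ln(1 + L) = ln(L) + ln(1 + 1/L) ≈ ln(L) + 1/L ) :

```python
import math

L = 1e6  # any large value
exact = math.log(1 + L)
approx = math.log(L) + 1 / L
print(exact - approx)   # error is of order 1/(2*L^2) , i.e. tiny
```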

So we can further reduce :

ln(a_n) + n X + ln(exp(X)/(n-1)) + (n-1)/exp(X) = exp^[0.5](X)

Slightly less formal , but the term (n-1)/exp(X) is so small that we can neglect it and remove it.

we then get :

ln(a_n) + n X + ln(exp(X)/(n-1)) = exp^[0.5](X)

Simplify

ln(a_n) + n X - ln(n-1) + X = exp^[0.5](X)

Simplify even further :

ln(a_n) + (n+1) X - ln(n-1) = exp^[0.5](X)

Since all terms of f(x) are positive , a_n x^n + a_n/(n-1) x^(n+1) ≤ exp^[0.5](x) for every x , so ln(a_n) is bounded by the right hand side for every X and the best estimate is the minimum :

ln(a_n) = Min [ exp^[0.5](X) - (n+1) X + ln(n-1) ]

= Min [exp^[0.5](X) - (n+1) X] + ln(n-1)
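As a sanity check of this Min idea , apply it to a series whose coefficients we actually know. For f(x) = exp(x) ( a_n = 1/n! ) the single-term version reads ln(a_n) ≈ Min over X of [ exp(X) - n X ] , and the gap to the true value is exactly the kind of correcting factor discussed above :

```python
import math

n = 20
# Min over X of exp(X) - n*X on a fine grid ( exact minimum at X = ln n )
Xs = [i / 1000.0 for i in range(6000)]
minval = min(math.exp(X) - n * X for X in Xs)

true_val = -math.lgamma(n + 1)        # ln(1/n!) , the true ln(a_n)
gap = minval - true_val
print(gap)                            # matches the Stirling correction
print(0.5 * math.log(2 * math.pi * n))
```

The estimate overshoots ln(1/n!) by about 0.5 ln(2 pi n) , which is the Stirling correction and the analogue of the x^(1/2)-type correcting factor mentioned earlier.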

exp^[0.5](X) - (n+1) X has one minimum , where the derivative is zero.

At the minimum the derivative equals 0 , so define X_n as the solution of

d/dX exp^[0.5](X) = n+1 at X = X_n.

And then finally the improved solution is :

ln(a_n) = exp^[0.5](X_n) - (n+1) X_n + ln(n-1)

I considered further improvements but things got dubious and I stumbled upon problems with the principles used here.

In other words be careful.

regards

tommy1729

" truth is that what does not go away when you stop believing in it "

tommy1729
