10/10/2015, 08:26 AM
10/11/2015, 07:17 PM
Let f(x) satisfy the conditions.
---
Let g(x) satisfy the conditions.
Let m be a positive integer.
Let s be a nonnegative real.
Let r be a positive real.
Notice min [ (f(x) + s x^m) / x^m ] = min [ f(x) / x^m ] + s.
Therefore, using S9,
fake [ f(x) + s x^m ] = fake [ f(x) ] + s x^m.
Let ++ denote an operation that is close to addition (approximate addition).
Then, by the above, together with the facts that
fake( f(x) ) having coefficients a_n
implies
fake( x f(x) ) has coefficients b_n = a_(n-1),
and also
fake( r f(x) ) = r fake( f(x) ),
we get
fake [ f(x) + g(x) ] = fake [ f(x) ] ++ fake [ g(x) ].
This is a simple but important fact.
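A minimal numerical sketch of these properties (my own illustration; the grid, the x-range, and the choice f(x) = exp(x) are assumptions, not anything from the thread):

```python
import numpy as np

# Crude grid minimization standing in for "min over x" in the S9-style coefficients.
x = np.linspace(0.01, 50, 200_000)

def s9_coeff(f, n):
    """Basic S9-style coefficient: minimum of f(x) / x^n over the grid."""
    return np.min(f(x) / x**n)

f = np.exp
n, m, s, r = 5, 3, 2.0, 7.0

# min[(f(x) + s x^m) / x^m] = min[f(x)/x^m] + s
print(np.min((f(x) + s * x**m) / x**m), s9_coeff(f, m) + s)

# fake(r f) = r fake(f): the coefficients simply scale by r
print(s9_coeff(lambda t: r * f(t), n), r * s9_coeff(f, n))

# fake(x f(x)): coefficient b_n equals a_(n-1)
print(s9_coeff(lambda t: t * f(t), n), s9_coeff(f, n - 1))
```

Each printed pair agrees up to grid precision, which is all these exact identities require.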
Notice that, with respect, this is not immediately clear from the S9 formulation with all its ln's, exp's, and the g and h.
It might be good homework to try to derive these results from the g, h formulation of S9.
Of course the g, h formulation has its own benefits.
But imho this simple min( f(x) / x^n ) notation that I still use has been put back on the map.
Guess this should be called
Fake addition theorem
Another avenue of investigation is how other methods, like the gaussian or the Tommy-Sheldon iterations, deal with this.
By now the idea of distributivity must have come to your mind.
A wild conjecture would be
TFDL =
Tommy's fake distributive law.
Law, conjecture or theorem ...
This refers to Tommy's generalized distributive law.
See elsewhere on the forum.
And https://sites.google.com/site/tommy1729/...e-property
For S9 :
Notice fake [ fake( f(x) ) + fake( g(x) ) ] = fake^[2]( f(x) ) + fake^[2]( g(x) ).
Clearly related. fake^[2] means fake applied twice (fake of fake).
I call that Tommy's fake fake formula.
I think these results will be useful.
For non-S9 methods we can probably replace + on the RHS with ++.
This needs more investigation.
The squeeze theorem and the comparison theorem (from "limit calculus") will be useful here.
---
Coffee break :p
---
For the second half of this post I will present an idea that has been in my head since post 9.
In post 16 Sheldon improved the S9 overestimate with a division.
It also seems natural to try a subtraction.
For methods that are general and do not use zeros, it seems unnatural not to use the S9 part
( e.g. Gaussian = S9 / sqrt .. ),
in particular after the properties given in the first half of this post, and in general after rereading this thread.
So this idea occurred:
Tommy's tommynaci sequence
Start with 2, 5, .. and use f(n) = f(n-1) + f(n-2).
Already known as the Evangelist series.
F(n-1) = ( (3*sqrt(5)+1) * ((1+sqrt(5))/2)^n + (3*sqrt(5)-1) * ((1-sqrt(5))/2)^n ) / (2*sqrt(5)).
For those interested.
Anyway
a_n = min [ ( f(x) - ... - a_(n-19) x^(n-19) - a_(n-12) x^(n-12) - a_(n-7) x^(n-7) - a_(n-5) x^(n-5) - a_(n-2) x^(n-2) ) / x^n ]
Tommy's Evangelist recursion.
Or the tommynaci method.
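A minimal sketch of how I read this recursion (an assumption on my part: the "..." means all tommynaci lags 2, 5, 7, 12, 19, 31, ... that keep the index nonnegative; the grid and x-range are also mine):

```python
import numpy as np

def tommynaci(limit):
    """Tommynaci sequence 2, 5, 7, 12, 19, 31, ... with t(k) = t(k-1) + t(k-2), truncated at limit."""
    seq = [2, 5]
    while seq[-1] + seq[-2] <= limit:
        seq.append(seq[-1] + seq[-2])
    return [t for t in seq if t <= limit]

def evangelist_coeffs(f, N, x=np.linspace(0.01, 60, 200_000)):
    """Fake coefficients a_0 .. a_N via the tommynaci / Evangelist recursion."""
    a = []
    for n in range(N + 1):
        rest = f(x).astype(float)
        for lag in tommynaci(n):                 # subtract a_(n-2) x^(n-2), a_(n-5) x^(n-5), ...
            rest -= a[n - lag] * x**(n - lag)
        a.append(np.min(rest / x**n))
    return a

print(evangelist_coeffs(np.exp, 12))             # exp(x) is only a test input here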
---
Notice that for exp(x) we get the exact solution
by using a_n = min [ ( f(x) - a_0 - a_1 x - a_2 x^2 - ... - a_(n-1) x^(n-1) ) / x^n ].
But that does not work for nonentire f, or for f with some negative derivatives.
Hence Tommy's Evangelist recursion is used instead for such f.
This implies the method does not perform well for exp.
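To make the exp(x) claim concrete, here is a short check (my own verification, reading the min as an infimum over x > 0):

```latex
\[
\text{If } a_k = \tfrac{1}{k!} \text{ for all } k < n, \text{ then }
\frac{e^{x} - \sum_{k<n} a_k x^{k}}{x^{n}}
= \sum_{j \ge n} \frac{x^{\,j-n}}{j!}
= \frac{1}{n!} + \frac{x}{(n+1)!} + \cdots,
\]
```

whose infimum over x > 0 is 1/n!, attained as x -> 0+. So the full recursion reproduces the true Taylor coefficients of exp(x) step by step. If f has negative Taylor coefficients, or is not entire, the numerator can go negative near 0 and the infimum is no longer meaningful, which is presumably why the Evangelist lags are used instead.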
---
I had some gaussian variant ideas about the above,
but way too many questions ...
---
How well does the tommynaci method work?
Time for medicine.
But as you can probably tell, I'm feeling better and no longer confused.
Unfortunately not at full potential, so I probably missed something trivial to add.
Take care
Regards
Tommy1729
10/17/2015, 11:59 PM
About the Tommy-Sheldon iterations ..
I designed them for nonnegative g''.
I suggest, with some doubt, that in case of a negative g'' we remove the minus sign;
in other words, take the absolute value.
Although counterintuitive, notice that the second derivative already influences the first derivative.
Regards
Tommy1729
10/18/2015, 11:07 PM
I'm considering replacing g'' in the gaussian method or the Tommy-Sheldon iterations by ln( e + exp(g'') ).
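For what it's worth, my own observation on how that replacement behaves:

```latex
\[
\ln\!\bigl(e + e^{\,t}\bigr) \ge 1 \;\text{ for all real } t,
\qquad
\ln\!\bigl(e + e^{\,t}\bigr) \to
\begin{cases}
t, & t \to +\infty,\\
1, & t \to -\infty,
\end{cases}
\]
```

so g'' is left essentially unchanged where it is large and positive, while any region with g'' <= 0 gets a value near 1 instead. That keeps the quantity strictly positive, in the same spirit as the absolute-value suggestion in the previous post.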
Notice the condition f'' > 0 implies,
for x > 1:
g'' + g'(g'-1) > 0.
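One way to see this (my reading, assuming the usual substitution g(u) = ln f(e^u), i.e. f(x) = exp(g(ln x)), which seems consistent with "Gaussian = S9 / sqrt" earlier in the thread):

```latex
\[
f(x) = e^{g(\ln x)}
\;\Longrightarrow\;
f''(x) = \frac{e^{g(\ln x)}}{x^{2}}
\Bigl( g''(\ln x) + g'(\ln x)\bigl(g'(\ln x) - 1\bigr) \Bigr),
\]
```

so for x > 0 the sign of f'' is exactly the sign of g'' + g'(g'-1).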
---
How well the Tommy-Sheldon iterations work depends a lot on how good the gaussian is.
Let's say the gaussian is off by a factor 5.
Then the next iteration gives
new g = ln( 5 f (*) ) = ln(5) + old g.
This change seems too weak.
Funny, because if we consider gaussian( 5 f ) then this weak change is exactly what we need (to get S9 close to the gaussian, and both close to correct),
since min( 5 f / x^m ) = 5 min( f / x^m ).
Reconsidering things seems necessary.
Fake function theory is tricky.
However, I assumed here that the gaussian was off by a factor 5.
It is not certain this is possible,
which then gives back some confidence in the Tommy-Sheldon iterations.
Or, with the replacement suggested above,
"the exponential Tommy-Sheldon iterations",
shorthand ETS.
Regards
Tommy1729
10/18/2015, 11:22 PM
In the context of the previous post, I assume the gaussian can be off by about
( inspired by the ETS and the binary partition )
ln(e + exp(0)) / ln(e) = ln(e+1) ~ 1.313.
Regards
Tommy1729
10/19/2015, 12:20 AM
Solving g'' + g'(g'-1) = C is very insightful!
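A quick sketch of why (my own reduction, not from the thread): write p = g', so the equation becomes first order and separable.

```latex
\[
p' = C - p(p-1) = -\bigl(p - p_{+}\bigr)\bigl(p - p_{-}\bigr),
\qquad
p_{\pm} = \frac{1 \pm \sqrt{1 + 4C}}{2},
\qquad
\int \frac{dp}{C + p - p^{2}} = x + \text{const}.
\]
```

For C > -1/4 and initial slope above p_-, the slope g' flows logistically toward the constant p_+, so g is asymptotically linear with slope (1 + sqrt(1+4C))/2; C = 0 recovers the familiar slopes 0 and 1.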
Regards
Tommy1729
10/27/2015, 01:27 PM
Not sure if it's a good idea;
I have not considered it much.
I call it the Egyptian method:
min(a) + min(b) <= min(a+b)
and
1/x = 1/(x+1) + 1/(x^2 + x).
So instead of min( f(x) / x^n ),
try min( f(x) / (x^n + 1) ) + min( f(x) / (x^(2n) + x^n) ).
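A minimal numerical sketch of this split (my own illustration; f = exp, the grid, and the x-range are assumptions):

```python
import numpy as np

x = np.linspace(0.01, 80, 400_000)   # crude grid standing in for the min over x
f = np.exp

def s9_coeff(n):
    return np.min(f(x) / x**n)

def egyptian_coeff(n):
    # 1/x^n = 1/(x^n + 1) + 1/(x^(2n) + x^n), so split the min term by term
    return np.min(f(x) / (x**n + 1)) + np.min(f(x) / (x**(2 * n) + x**n))

for n in range(1, 8):
    # egyptian <= S9, by min(a) + min(b) <= min(a+b)
    print(n, egyptian_coeff(n), s9_coeff(n))
```

So the Egyptian split always comes out at or below the usual S9 value; whether that makes the estimate better or worse is exactly the open question.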
Regards
Tommy1729
02/16/2016, 03:17 AM
I need the ratios of the real a_n vs the fake a_n for the function exp(x), where the fake is the limit of the Tommy-Sheldon iterations.
Also related and useful:
1 / ( 1 - a_n / fake a_n ).
[ I assumed the limit of that ratio indeed goes to 1. ]
---
Identical question for exp^[2].
I'm close to insights.
Regards
Tommy1729
02/17/2016, 01:25 PM
Interesting is the fake of exp( (2 pi)^(-1) * 0.5 * ln(x)^2 ).
This makes the gaussian agree with the usual S9.
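A short check of that agreement (my computation; I read the gaussian as the saddle-point form S9 / sqrt(2 pi g''), matching "Gaussian = S9 / sqrt" earlier, with g(u) = ln f(e^u) = u^2/(4 pi)):

```latex
\[
a_n^{\mathrm{S9}}
= \min_{x>0} \frac{e^{\ln(x)^2/(4\pi)}}{x^{n}}
= \min_{u}\, e^{\,u^{2}/(4\pi) - n u}
= e^{-\pi n^{2}} \quad (\text{at } u = 2\pi n),
\qquad
a_n^{\mathrm{gauss}}
= \frac{e^{\,g(u_n) - n u_n}}{\sqrt{2\pi\, g''(u_n)}}
= \frac{e^{-\pi n^{2}}}{\sqrt{2\pi \cdot \tfrac{1}{2\pi}}}
= e^{-\pi n^{2}},
\]
```

so the two coincide exactly because g'' is the constant 1/(2 pi), which makes the sqrt(2 pi g'') correction equal to 1.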
Methods agreeing could indicate good estimates.
So how good is it? And how true is the indication?
Will we get the sought O(1/n^2)?
Need to study this.
Regards
Tommy1729
02/18/2016, 12:53 PM
Notice how the second iteration of the Tommy-Sheldon iterations gives a nonnegative coefficient for g'', therefore being better than the gaussian in "most" cases.
Naturally in all cases where g'' < 0, but more investigation is needed for g'' > 0.
Regards
Tommy1729