Slog(Exponential Factorial(x))
#1
Question 
EF(x) = exponential factorial(x) = x^(x-1)^(x-2)^...^3^2^1.
What happens if you take the tetration logarithm of the exponential factorial function? (I am thinking of the tetration logarithm base the Tetra-Euler Number.) How can the tetration logarithm of the exponential factorial function be approximated?
Slog(e4,EF(1)) = 0.
Slog(e4,EF(2)) ~ 0.636.
Slog(e4,EF(3)) ~ 1.612.
Slog(e4,EF(4)) ~ 2.693.
Slog(e4,EF(5)) ~ 3.703.
Slog(e4,EF(6)) ~ 4.703.
Numbers worked out with the Kneser method.
I conjecture that slog(k,EF(x)) - x approaches a number as x grows larger and larger, for all real k greater than eta.
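The values above can be sanity-checked with a crude piecewise-linear slog (an assumption on my part: base e rather than the Tetra-Euler number e4, and only a rough stand-in for the Kneser slog, so the digits will not match the table above):

```python
import math

def EF(n):
    """Exponential factorial: EF(n) = n^(n-1)^...^3^2^1 (right-associated)."""
    result = 1
    for k in range(2, n + 1):
        result = k ** result
    return result

def crude_slog(x):
    """Piecewise-linear super-logarithm, base e: slog(x) = x - 1 on (0, 1],
    and slog(x) = slog(ln x) + 1 above.  A rough stand-in only."""
    count = 0
    while x > 1:
        x = math.log(x)   # math.log accepts arbitrarily large ints
        count += 1
    return count + x - 1

for n in range(1, 6):     # EF(6) is far too large to construct directly
    print(n, round(crude_slog(EF(n)), 3))
```

This reproduces slog(EF(1)) = 0 exactly and gives values roughly one unit apart for consecutive n, consistent with the table, even though the normalization differs.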
ฅ(ミ⚈ ﻌ ⚈ミ)ฅ
Please remember to stay hydrated.
Sincerely: Catullus
#2
(06/15/2022, 01:08 AM)Catullus Wrote: Numbers worked out with the Kneser method.

Just a general remark: since we don't have an "accepted" Kneser method, it would be good to always mention *which* implementation one uses. For instance, Sheldonison's method is not yet *proven* to implement Kneser (which is what Sheldonison always stated, although he is confident that it does). My q&d polynomial method (based on Carleman matrices, but truncated ones) *seems to give* an approximation to Sheldonison's values, but it is likewise not proven to have such an asymptotic at all. For instance, in my messages on MSE and MO I always state explicitly that neither Sheldon's method nor mine is proven to approximate the true Kneser solution.

I think this is not "nitpicking" but helpful for the readers, so that they do not falsely assume they "stand on the shoulders of giants" when they derive conclusions, lemmas, and theorems based on the found values...

Gottfried
Gottfried Helms, Kassel
#3
(06/15/2022, 01:08 AM)Catullus Wrote: EF(x) = exponential factorial(x) = x^(x-1)^(x-2)^...^3^2^1. [...] I conjecture that slog(k,EF(x))-x approaches a number, as x grows larger and larger. For any k greater than eta.

I think your conjecture is wrong.

For a fixed x :

slog(x^x^... n times ) is about n + constant , for large n.

You might want to look at base change formula and base change constant.

slog( 3^3^... ( n times ) ) - n is a converging sequence,

because ln ln ... ( 3^3^... ) converges.

But the base change constant increases as the value x increases.

The base of the slog is not so relevant if it is above e, and likewise for x.

So for sufficiently large x :

 x < slog_b(EF(x)) < x + basechange(b,x)

Now, since x^(x-1)^(x-2)^... is much closer to x^x^x... than to b^b^b..., I think slog_b(EF(x)) is closer to x + basechange(b,x) than to x.

Therefore slog_b(EF(x)) - x is probably a strictly increasing function of x.

Maybe slog_e4 ( EF(x) ) - x - 3/4 * basechange(e4,x) converges ...

This is not a formal proof, of course.

If we find that something like slog_e4 ( EF(x) ) - x - 3/4 * basechange(e4,x) converges, or similar, we could use that to construct a C^oo solution to EF(x).
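The claim that slog of a height-n tower is about n + constant can be probed numerically with a crude piecewise-linear base-e slog (an assumption: it only roughly tracks the Kneser slog). For height 4 the tower overflows a float, so one logarithm is taken symbolically, using ln(x^^4) = (x^^3) * ln(x):

```python
import math

def crude_slog(x):
    """Piecewise-linear base-e super-logarithm; a rough stand-in for Kneser."""
    count = 0
    while x > 1:
        x = math.log(x)
        count += 1
    return count + x - 1

def tower_gap(x, n):
    """crude_slog(x^^n) - n for the height-n power tower x^^n.
    Heights up to 3 fit in machine range; for n = 4 the first
    logarithm is counted by hand via ln(x^^4) = (x^^3) * ln(x)."""
    if n <= 3:
        t = x
        for _ in range(n - 1):
            t = x ** t
        return crude_slog(t) - n
    if n == 4:
        return 1 + crude_slog(x ** (x ** x) * math.log(x)) - n
    raise ValueError("height > 4 not supported in this sketch")

print([round(tower_gap(3, n), 4) for n in (2, 3, 4)])
```

The gap stabilizes quickly as n grows, consistent with slog(x^x^... n times) ~ n + constant.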




regards

tommy1729
#4
(06/15/2022, 09:32 AM)Gottfried Wrote: Just a general remark: since we don't have an "accepted" Kneser-Method it would be good to always mention *which* implementation one uses. [...]
A little bit of a sidebar: it would appear the only computation method that is solidly proven would be Paulsen and Cowgill's algorithm, designed largely off the Kouznetsov method. It uses Kouznetsov's race track method, but includes a more thorough construction of Kneser using similar ideas as Kneser and Kouznetsov. Gem of a paper. The algorithm isn't very good though, I'd say. It's slow for something like 100 digits, and slow at discovering Taylor series. And in many ways it is just a contour integral, which is never efficient in programming without large amounts of speed-ups.

I would argue though that Sheldon's method is proven. He and I were working on a paper, and I devoured a lot of the insight he had on his matrix contour integration method. It's absolutely provable, it's just unwritten. Unfortunately Sheldon felt he wasn't in the mood to start writing the paper. I wouldn't write my findings without him as a co-author, but I'm very confident it wouldn't take much to turn fatou.gp into a working proof. I see how it would be proven, is what I'm trying to say. And it really only relies on a good amount of Fourier analysis. It's not my cup of tea, because I hate matrices, so I'd stumble a bit with some of the proofs using matrices--but much of it is translatable to transformations of Fourier series, and that's my cup of tea, lol.

The Carleman matrix approach always seemed doomed from my perspective. It's a great approximation tool (use a 1000x1000 truncation rather than the infinite matrix). It reminds me too much of Heisenberg mechanics vs. Schrodinger mechanics. And I'm a Schrodinger guy, lol. I just don't find it very elegant, so I don't like it. And my brain doesn't like it, because it doesn't feel intuitive. I can't even imagine how a construction through Carleman matrices would ever be provable. I imagine it would certainly work, though.

The only other method which I wish were easier to show is the beta method of reconstructing Kneser, which appeared to create analytic series decaying to \(L,L^*\) in the upper and lower half planes--real valued as well. This would be enough to confirm Kneser, per Paulsen and Cowgill. The trouble is the beta method starts getting very slow about here, and no longer has the speed it has for the Shell-Thron region, so as an algorithm it's useless really, lol.

Are there any other Kneser algorithms in existence that are proven to work besides Cowgill and Paulsen's? (Besides Kneser himself, that is.)
#5
(06/16/2022, 06:16 AM)JmsNxn Wrote: [...] Are there any other Kneser algorithms in existence that are proven to work besides Cowgill and Paulsen's? (Besides Kneser himself, that is.)

Do you have a paper for Paulsen and Cowgill's algorithm?
In particular, one that is free and does not use computer code, since I'm not a hero at computer code ...

Always willing to try to give my own proofs and be inspired.

I admit I am still confused, or unconvinced, about all those methods.
The basic ideas seem OK, but the devil is in the details, I guess.

I am not sure how many methods we have for tetration. I estimate about 15, but as you know, there are a lot of issues and questions.
And proving they are different without direct numerical methods (thus theoretically) is a difficult task.

Imo we also need to understand the base change constants better.
They seem key to many things.

I understand how you feel about matrices.
I find it weird and ironic that "linear math" seems harder than most other branches when all is said and done.

You would expect "linear" to imply simplest, or so.

Matrix splitting is cool but has its conditions.
Everything in linear algebra has annoying conditions, lol.

I will however maybe post a method based on matrices soon, if things work out; I'm working on a new idea.

Also, the beta method and the gaussian method might be equal.

And I feel the matrix method from the andydude/Walker slog deserves to be mentioned.

Also, I considered proving the beta method and the gaussian method with their Carleman matrix analogues.

I'm only talking about methods that might work and at the same time might be analytic.


regards

tommy1729
#6
Btw, if anyone knows a faster way than matrix-splitting-type methods for the Andrew/Walker slog, let me know.

regards

tommy1729
#7
(06/15/2022, 01:08 AM)Catullus Wrote: EF(x) = exponential factorial(x) = x^(x-1)^(x-2)^...^3^2^1. [...] I conjecture that slog(k,EF(x))-x approaches a number, as x grows larger and larger. For any real k greater than eta.

EF(x) = exponential factorial(x) = x^(x-1)^(x-2)^...^3^2^1.
Let slogef(x) be its inverse.

Let HF(x) = hyperfactorial(x) = 2^3^...^x.

The argument I posted earlier is pretty close to a proof that slog(HF(x)) - x does not converge to a constant as x grows.

I suspect (conjecture) that slogef(HF(x)) - x also does not converge.
I need to think more about it.

Another related conjecture:

does slog(EF( x - ln(x) )) - x converge as x grows?

I'm getting ideas ...


regards

tommy1729
#8
(06/15/2022, 01:08 AM)Catullus Wrote: EF(x) = exponential factorial(x) = x^(x-1)^(x-2)^...^3^2^1. [...] I conjecture that slog(k,EF(x))-x approaches a number, as x grows larger and larger. For any real k greater than eta.

Let FE(b,x) = b^3^4^...^x.
Write FE(x) for FE(e,x).

Let's get some rough bounds clear.

 x^e^e^... < x^(x-1)^(x-2)^... < e^e^...^x < e^3^4^...^x <  x^x^...^x

Taking slog throughout:

1 + slog( e^e^... * ln(x) ) < slog (x^(x-1)^(x-2)^...) < x + slog(x) < slog(FE(x)) <  x + basechange(e,x)

2 + slog( e^e^... * ln(ln(x)) ) < slog (x^(x-1)^(x-2)^...) < x + slog(x) < slog(FE(x)) < x + basechange(e,x)

...

x + slog(ln ln ln ... ln x  ) < ...

An upper bound for x + slog( ln ln ln ... ln x ) is x + 0.99.

So we end up with the estimate

x + 0.99 < slog( EF(x) ) < x + slog(x).

It is trivial to say, but slog( EF(x) ) / x thus clearly converges (to 1).

It is not immediately clear how to continue.

How does one show that slog( EF(x) ) < x + slog(slog(slog(x)))?

Or that ln^[1/2]( slog( EF(x) ) ) - ln^[1/2](x) converges?

Maybe the answer lies in one of the thousands of posts here.

Or someone else might find it.
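The upper half of the estimate, slog(EF(x)) < x + slog(x), can be probed numerically with a crude piecewise-linear base-e slog (an assumption: its normalization differs from the Kneser slog base e4 used earlier in the thread, so the lower bound x + 0.99 is not reproduced by this crude version, only the upper bound is checked):

```python
import math

def EF(n):
    """EF(n) = n^(n-1)^...^3^2^1, right-associated."""
    result = 1
    for k in range(2, n + 1):
        result = k ** result
    return result

def crude_slog(x):
    """Piecewise-linear base-e slog; a rough stand-in only."""
    count = 0
    while x > 1:
        x = math.log(x)
        count += 1
    return count + x - 1

# Check the upper bound slog(EF(x)) < x + slog(x) for small x.
for x in range(3, 6):
    print(x, crude_slog(EF(x)) < x + crude_slog(x))
```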


regards

tommy1729
#9
OK, I have a proof.

It does indeed converge, and really fast!

slog( EF(x) ) = sum ( slog( EF(x) ) - slog( EF(x-1) ) ) + Constant. 

The difference operator and the sum operator cancel
( telescoping, if you want ).

Write slog( EF(x) ) = x + y.

Now consider how things change going from slog( EF(x) ) to slog( EF(x+1) ).


EF(x+1) = (x+1)^EF(x)

so 

slog( EF(x+1) ) = slog( (x+1)^x^(x-1)^... ) = slog( (x+1)^EF(x) )

slog( (x+1)^x^(x-1)^... ) = slog( (x+1)^EF(x) ) = slog( exp( ln(x+1) * EF(x) ) )

= 1 + slog( ln(x+1) * EF(x) )

= 2 + slog( ln(ln(x+1)) + ln(EF(x) ) )

= 3 + slog ( ln ( ln(ln(x+1)) + ln(EF(x) ) ) )

We know that for large L and small S ( small compared to L ):

log ( L + S ) < log ( L ) + 2 S/L.

Now clearly slog ( a + b ) < slog(a) + slog(b),

so

3 + slog( ln( ln(ln(x+1)) + ln(EF(x)) ) ) < 3 + slog( ln^[2](EF(x)) + o( 1/sqrt(EF(x)) ) )

applying slog ( a + b ) < slog(a) + slog(b)

< 3 + slog(ln^[2](EF(x))) + slog( o(1/sqrt EF(x) ) )

< 1 + slog(EF(x)) + slog( o(1/sqrt EF(x) ) )

< 1 + x + y + slog( o(1/sqrt EF(x) ) )

So

1 < slog( EF(x+1) ) - slog( EF(x) ) < 1 + slog( o(1/sqrt EF(x) ) )

Therefore

slog( EF(x) ) = sum ( slog( EF(x) ) - slog( EF(x-1) ) ) + Constant < constant + sum ( 1 + slog( o(1/sqrt EF(x) ) ) )

< constant + x + constant2.

So slog( EF(x) ) - x converges to some constant. And fast.

A similar conclusion can be drawn for another base, since slog(b,x) - slog(e,x) also converges to a constant.
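The fast convergence can be illustrated numerically with a crude piecewise-linear base-e slog (an assumption: only a rough stand-in for Kneser, so the limiting constant differs from a Kneser computation). EF(6) is far too large to construct directly, so logarithms are peeled off by hand, using ln(EF(5)) = EF(4) * ln(5) and ln(ln(EF(6))) = ln(EF(5)) + ln(ln(6)):

```python
import math

def crude_slog(x):
    """Piecewise-linear base-e slog; a rough stand-in for Kneser."""
    count = 0
    while x > 1:
        x = math.log(x)
        count += 1
    return count + x - 1

ef3 = 3 ** 2                               # EF(3) = 9
ef4 = 4 ** ef3                             # EF(4) = 262144
ln_ef5 = ef4 * math.log(5)                 # ln EF(5) = EF(4) * ln 5
ln2_ef6 = ln_ef5 + math.log(math.log(6))   # ln ln EF(6) = ln EF(5) + ln ln 6

gaps = [
    crude_slog(ef3) - 3,
    crude_slog(ef4) - 4,
    1 + crude_slog(ln_ef5) - 5,    # slog EF(5) = 1 + slog( ln EF(5) )
    2 + crude_slog(ln2_ef6) - 6,   # slog EF(6) = 2 + slog( ln ln EF(6) )
]
print([round(g, 6) for g in gaps])
```

The successive differences between the gaps shrink very quickly, consistent with the telescoping argument above.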


regards

tommy1729
#10
Paulsen and Cowgill's paper is a mathematical paper, which constructs Kneser, and gives a uniqueness condition for Kneser. They don't really make an algorithm, per se, just explain how to grab Taylor series and calculate them. Which, I guess, is an algorithm in and of itself.

http://myweb.astate.edu/wpaulsen/tetration2.pdf

It's a fantastic paper. Paulsen has another paper exploring complex bases, but I think it lacks a good amount of the oomph of this paper, lol. It's not too hard to read and, in my mind, is a very cogent explanation of Kneser.


Also, I just have an irrational hatred of matrices. It's not even that they're particularly hard--I can still use matrices if pressed--I just hate them. Lol, I'm matrix-phobic, lol.

