Slog(Exponential Factorial(x))
#11
(06/17/2022, 12:06 PM)tommy1729 Wrote:
$$
\text{slog}(EF(x)) = \sum \Big( \text{slog}(EF(x)) - \text{slog}(EF(x-1)) \Big) + C_1 < C_1 + \sum \Big( 1 + \text{slog}\big(o(1/\sqrt{EF(x)})\big) \Big) < C_1 + x + C_2.
$$

So \(\text{slog}(EF(x)) - x\) converges to some constant. And fast.
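
One way to see where the unit terms come from (my reading, assuming a base-\(e\) slog and the recursion \(EF(x) = x^{EF(x-1)}\)):

$$
\text{slog}(EF(x)) = \text{slog}\big(e^{EF(x-1)\ln x}\big) = 1 + \text{slog}\big(EF(x-1)\ln x\big) = 1 + \text{slog}(EF(x-1)) + \epsilon_x,
$$

where \(\epsilon_x = \text{slog}(EF(x-1)\ln x) - \text{slog}(EF(x-1))\) shrinks very fast, since slog grows slower than any iterated logarithm. If \(\sum_x \epsilon_x\) converges, telescoping gives \(\text{slog}(EF(x)) - x \to C\).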
Does anyone know of any closed forms for any of those constants?
Please remember to stay hydrated.
ฅ(ミ⚈ ﻌ ⚈ミ)ฅ Sincerely: Catullus /ᐠ_ ꞈ _ᐟ\
#12
(06/17/2022, 10:21 PM)JmsNxn Wrote: Paulsen and Cowgill's paper is a mathematical paper, which constructs Kneser, and gives a uniqueness condition for Kneser. They don't really make an algorithm, per se, just explain how to grab Taylor series and calculate them. Which, I guess, is an algorithm in and of itself.

http://myweb.astate.edu/wpaulsen/tetration2.pdf

It's a fantastic paper. Paulsen has another paper exploring complex bases, but I think it lacks a good amount of the umph! of this paper, lol. It's not too hard to read, and in my mind, is a very cogent explanation of Kneser.


Also, I just have an irrational hatred of matrices. Not even that it's particularly hard, I can still use matrices if pressed--I just hate them. Lol I'm matrix phobic, lol.

I'm still left with many, many questions.

Anyways, I had the vague idea of mapping a boundary \(R\) to the real line iteratively.

That is to say, by some kind of averaging.

By analogy: when you want to map \(f(x)\) to the zero function, you do something like

\(f(x)/2\)

\(f(x)/2/2\)

\(f(x)/2/2/2\)

and

\(g_1(f(x)) = f(x)/2\)

\(g_2(g_1(f(x))) = f(x)/4\)

etc.

The sequence \(g_n\) consists of bounded functions that get weaker and weaker (closer to \(\mathrm{id}(z)\)), so the sequence \(g_n(g_{n-1}(\dots))\) is analytic.

Maybe this is a garbage idea, but it seems intuitive to me.
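
Here is a toy numerical sketch of that intuition, in Python. The choice \(g_n(w) = w - f(x)/2^n\) is my own guess at what the corrections could look like (the post leaves them unspecified); it reproduces the \(f/2, f/4, \dots\) pattern while each \(g_n\) stays within \(2^{-n}\sup|f|\) of the identity:

import numpy as np

# flatten f toward the zero function by composing corrections g_n
# that tend to the identity:  g_n(w) = w - f(x)/2**n, so that
#   g_1(f(x)) = f(x)/2,  g_2(g_1(f(x))) = f(x)/4,  etc.
x = np.linspace(0.0, 2 * np.pi, 200)
f = np.sin(x)                        # the function to be flattened

w = f.copy()
for n in range(1, 11):
    correction = f / 2**n            # g_n deviates from id by only 2^-n * f
    w = w - correction               # apply g_n to the running composition
    print(n, np.max(np.abs(w)))      # sup-norm shrinks like 2^-n

# the deviations from the identity are summable, which is the usual
# condition for an infinite composition like g_n(g_{n-1}(...)) to converge.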

In fact, what do we know about fractional iterations of Riemann mappings?

In essence they are just fractional iterations of analytic functions, but still... I feel we might be missing something here.


regards

tommy1729
#13
(06/17/2022, 11:49 PM)tommy1729 Wrote: In fact, what do we know about fractional iterations of Riemann mappings?

In essence they are just fractional iterations of analytic functions, but still... I feel we might be missing something here.

Fractional iterations of Riemann mappings are actually pretty simple to understand.

Let's say \(f : S \to \mathbb{D}\). The only way to iterate this is to restrict \(S = \mathbb{D}\) (the unit disk), so it reverts to iterating automorphisms of \(\mathbb{D}\). Every automorphism of \(\mathbb{D}\) is given by a Blaschke product, and therefore just looks like iterating a linear fractional transformation. By which, iterating any Riemann mapping looks like iterating linear fractional transformations up to conjugation.
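
A minimal numerical sketch of that reduction, assuming the standard \(2 \times 2\) matrix representation of linear fractional maps (the parameter a below is an arbitrary example; scipy's fractional_matrix_power does the eigenvalue work):

import numpy as np
from scipy.linalg import fractional_matrix_power

# disk automorphism phi(z) = (z - a)/(1 - conj(a) z), a Blaschke factor
a = 0.4 + 0.1j                         # hypothetical parameter, |a| < 1
M = np.array([[1.0, -a],
              [-np.conj(a), 1.0]])     # 2x2 matrix encoding phi

def mobius(mat, z):
    # apply the linear fractional transformation encoded by mat
    (p, q), (r, s) = mat
    return (p * z + q) / (r * z + s)

z0 = 0.3 - 0.2j
M_half = fractional_matrix_power(M, 0.5)   # matrix of the half iterate

# sanity check: the half iterate applied twice recovers phi(z0)
print(mobius(M_half, mobius(M_half, z0)))  # ~ phi(z0)
print(mobius(M, z0))                       # phi(z0) directly

(The eigenvalues of M here are \(1 \pm |a| > 0\), so the principal matrix square root really does square back to M.)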

What I think you're getting at is trying to iterate a solution to Kneser's Riemann mapping, which would equate to \(f^{\circ n}(y) \to K(y)\), where \(K\) is Kneser's Riemann mapping. That wouldn't be possible. But it would be possible to write \(f_n(y) \to K(y)\); this is discoverable with Taylor series, so it is perfectly possible there exists an iteration formula for this, which could possibly look like \(g_n(g_{n-1}(\dots)) \to K(y)\). No idea how to do that, though.

Fractional iterations of Riemann mappings are just iterations of automorphisms, though. It's the only way the statement makes sense... unless you meant something else?

Regards
#14
What about slog(HF(x))? How fast does that grow?
#15
(06/22/2022, 03:20 AM)Catullus Wrote: What about slog(HF(x))? How fast does that grow?

Didn't I rename HF to EF? And didn't I give the bounds in my previous posts? Between \(x + \text{slog}(x)\) and \(x + \text{basechange}(e,x)\)?

regards

tommy1729
#16
What I wonder is how basechange(e,x) behaves as a function of slog(x):

$$
\text{basechange}(e,x) = a_0 + a_1 \, \text{slog}(x) + a_2 \, \text{slog}(x)^2 + \dots
$$

regards

tommy1729
#17
(06/22/2022, 11:38 PM)tommy1729 Wrote: What I wonder is how basechange(e,x) behaves as a function of slog(x):

$$
\text{basechange}(e,x) = a_0 + a_1 \, \text{slog}(x) + a_2 \, \text{slog}(x)^2 + \dots
$$

regards

tommy1729

I was always intrigued by the basechange function. But I'm confused by how it's presented in most situations on this forum.

Let \(b > \eta\). If we write:

$$
\text{tet}_b(z) = \text{tet}_e(B(z))\\
$$

Isn't it clear that locally a solution always exists to this equation? Wherever both sides are defined and invertible, \(B(z) = \text{slog}_e(\text{tet}_b(z))\), and the monodromy theorem guarantees this function always exists.

So that:

$$
\text{tet}_b(z) : \mathbb{C} \setminus (-\infty,-2] \to \mathbb{C}\\
$$

Similarly with \(b=e\). Therefore \(B(z)\) always exists locally. In fact, we only need to worry about \(-\delta < \Re(z) < 1+\delta\) and \(|\Im(z)| < \delta\), because the orbits of the exponential will cover everywhere else. This solution always exists; it is holomorphic and \(1-1\) for small enough \(\delta\). Then you have a function:

$$
\text{slog}_b(b^z) = \text{slog}_b(z) + 1 = B^{-1}(\text{slog}(e^z)) = B^{-1}(\text{slog}(z)) + 1\\
$$

Once you have a super logarithm on a nontrivial neighborhood of \([0,1]\)... I mean, just iterate the orbits and you have \(\text{slog}\) in the complex plane, with its poles. Now you have the tetration.
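
On the real line, at least, that orbit extension is only a few lines. A toy sketch in Python, where the local piece is a hypothetical linear stand-in (normalized so slog(0) = -1 and slog(1) = 0), not an actual Kneser solution:

import math

def slog_local(x):
    # crude placeholder for a genuine local slog on [0, 1]
    return x - 1.0

def slog(x):
    # extend to x > 0 using only the orbit relation slog(e^x) = slog(x) + 1
    n = 0
    while x > 1.0:          # pull x back into [0, 1] with logarithms
        x = math.log(x)
        n += 1
    while x < 0.0:          # push x up into [0, 1] with exponentials
        x = math.exp(x)
        n -= 1
    return slog_local(x) + n

print(slog(math.e))            # 1.0 with this normalization
print(slog(math.e ** math.e))  # 2.0

Any genuine local solution on a neighborhood of \([0,1]\) slots into slog_local the same way; the extension step is identical.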

I'm confused why it's such a difficult problem on this forum. I'll admit, I have no fucking clue how to evaluate or compute this function \(B\). But it's pretty simple complex-analytic theory to prove it exists: you just need the analytic implicit function theorem and a modest use of the monodromy theorem about the line \([0,1]\).

Maybe it's because I am assuming we are using Kneser in both situations, so it's not much more than an implicit solution to the equation \(b^y = e^u\). Is there something deep I am missing with the basechange theory?

I'm not talking about "the base change formula", either. That formula is non-analytic, and that should be settled.

Is there some reason this problem is still important that I'm not privy to, or too dumb to notice?

Regards, James
#18
(06/22/2022, 11:38 PM)tommy1729 Wrote: What I wonder is how basechange(e,x) behaves as a function of slog(x):

$$
\text{basechange}(e,x) = a_0 + a_1 \, \text{slog}(x) + a_2 \, \text{slog}(x)^2 + \dots
$$

regards

tommy1729

Well, we know that

\(\text{slog}(x^{x^{\cdots^{x}}}) - x\)

and

\(\text{slog}(2^{3^{\cdots^{x}}}) - x\)

are approximated by

\(\text{slog}\big( \ln(x)\,x + \ln(\ln(x)) \big)\)

or

\(\text{slog}\big( \ln(x-1)\,x + \ln(\ln(x-2)) \big)\),

which explains implicitly how the base change formula should work, and why. At least for \(C^\infty\) tetration and real bases \(x > \eta\).

This suggests that a Taylor series of slog might not be the best idea. Maybe \(\text{slog}(f(x)) + \text{slog}(f(x))/(x - \eta)\) for some Taylor series \(f\), or so.

Hmmm.

Regards

tommy1729
#19
Please let \(x\Lambda^\uparrow\) represent the Hyper Bouncing Factorial of \(x\), defined like the Bouncing Factorial of \(x\), except that you start at 2 instead of 1 and replace the multiplication in the definition with exponentiation.
What would happen if you took \(\text{slog}(x\Lambda^\uparrow)\)?
#20
What base of slog would make \(\text{slog}(EF(x)) - x\) converge to zero as \(x\) grows larger and larger?

