Posts: 205
Threads: 46
Joined: Jun 2022
06/17/2022, 11:25 PM
(06/17/2022, 12:06 PM)tommy1729 Wrote:
$$
\text{slog}(\text{EF}(x)) = \sum_{k \le x} \Big( \text{slog}(\text{EF}(k)) - \text{slog}(\text{EF}(k-1)) \Big) + C_1 < C_1 + \sum_{k \le x} \Big( 1 + \text{slog}\big(o(1/\sqrt{\text{EF}(k)})\big) \Big) < C_1 + x + C_2,
$$
so \(\text{slog}(\text{EF}(x)) - x\) converges to some constant. And fast.
Does anyone know of any closed forms for any of those constants?
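A quick numerical check of the quoted convergence, as a sketch under loud assumptions: EF is taken to be the exponential factorial \(\text{EF}(1)=1\), \(\text{EF}(n)=n^{\text{EF}(n-1)}\), and the slog is a crude base-e one with a linear seed on \([0,1)\) rather than Kneser's, so the constant it settles to is an artifact of that seed.

```python
import math

def ef(n):
    """Exponential factorial (assumed): EF(1) = 1, EF(n) = n ** EF(n-1).
    Only valid as a float for n <= 4; EF(5) ~ 5**262144 overflows."""
    v = 1.0
    for k in range(2, n + 1):
        v = k ** v
    return v

def crude_slog(x):
    """Crude base-e slog: peel logs until x < 1, linear seed slog(x) ~ x - 1."""
    h = 0
    while x >= 1.0:
        x = math.log(x)
        h += 1
    return h + (x - 1.0)

def slog_ef(n):
    """slog(EF(n)) without ever forming EF(n): peel exp's symbolically.

    EF(n) = exp(EF(n-1) * ln n), and slog(exp(u)) = slog(u) + 1.  Each peel
    multiplies the level below by ln k; once EF(k-1) is astronomically large
    that multiplier only contributes ln(ln k) one level further down, and
    deeper corrections underflow double precision entirely."""
    if n <= 4:
        return crude_slog(ef(n))
    height, a = 0, 1.0
    for k in range(n, 5, -1):   # peel levels n, n-1, ..., 6
        height += 1
        a = math.log(k)         # only the innermost multiplier survives
    # remaining bottom level: slog(EF(5) * a) = 1 + slog(EF(4)*ln 5 + ln a)
    return height + 1 + crude_slog(ef(4) * math.log(5) + math.log(a))

for n in range(2, 15):
    print(n, round(slog_ef(n) - n, 6))   # settles to a constant fast
```

The differences stabilize after only a few terms, which matches the "And fast" above; a better slog seed would change the constant, not the speed.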
ฅ(ミ⚈ ﻌ ⚈ミ)ฅ
Please remember to stay hydrated.
Sincerely: Catullus
Posts: 1,676
Threads: 368
Joined: Feb 2009
(06/17/2022, 10:21 PM)JmsNxn Wrote: Paulsen and Cowgill's paper is a mathematical paper, which constructs Kneser, and gives a uniqueness condition for Kneser. They don't really make an algorithm, per se, just explain how to grab Taylor series and calculate them. Which, I guess, is an algorithm in and of itself.
http://myweb.astate.edu/wpaulsen/tetration2.pdf
It's a fantastic paper. Paulsen has another paper exploring complex bases, but I think it lacks a good amount of the umph! of this paper, lol. It's not too hard to read, and in my mind, is a very cogent explanation of Kneser.
Also, I just have an irrational hatred of matrices. Not even that it's particularly hard--I can still use matrices if pressed--I just hate them. I'm matrix-phobic, lol.
I'm still left with many, many questions.
Anyway,
I had the vague idea of mapping a boundary R to the real line iteratively,
that is to say, by some kind of averaging.
By analogy, when you want to map f(x) to the zero function, you do something like
f(x)/2
f(x)/2/2
f(x)/2/2/2
and
\(g_1(f(x)) = f(x)/2\)
\(g_2(g_1(f(x))) = f(x)/4\)
etc.
The \(g_n\) are bounded functions that get weaker and weaker (closer to \(\mathrm{id}(z)\)), so the sequence \(g_n(g_{n-1}(\cdots))\) is analytic.
Maybe this is a garbage idea, but it seems intuitive to me.
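For what it's worth, the standard sufficient condition for such an infinite composition to have a holomorphic limit (stated as a sketch; \(\Omega\) is any domain each \(g_n\) maps into itself) is that the \(g_n\) be summably close to the identity:
$$
\sum_{n=1}^{\infty} \sup_{z \in \Omega} |g_n(z) - z| < \infty \quad\Longrightarrow\quad G_n = g_n \circ g_{n-1} \circ \cdots \circ g_1 \ \text{converges uniformly on } \Omega,
$$
since \(|G_n(z) - G_{n-1}(z)| \le \sup_{\Omega} |g_n - \mathrm{id}|\) makes the \(G_n\) uniformly Cauchy. So "weaker and weaker" needs to be summably weaker, not merely \(g_n \to \mathrm{id}\).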
In fact, what do we know about fractional (iterations of) Riemann mappings?
In essence they are just fractional iterations of analytic functions, but still... I feel we might be missing something here.
regards
tommy1729
Posts: 935
Threads: 111
Joined: Dec 2010
(06/17/2022, 11:49 PM)tommy1729 Wrote: In fact, what do we know about fractional (iterations of) Riemann mappings? In essence they are just fractional iterations of analytic functions, but still... I feel we might be missing something here.
Fractional iterations of Riemann mappings are actually pretty simple to understand.
Let's say \(f : S \to \mathbb{D}\). The only way to iterate this is to restrict \(S = \mathbb{D}\) (the unit disk), so it reduces to iterating automorphisms of \(\mathbb{D}\). Every automorphism of \(\mathbb{D}\) is given by a Blaschke product, and therefore just looks like iterating a linear fractional transformation. So iterating any Riemann mapping looks like iterating linear fractional transformations, up to conjugation.
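To spell that out (nothing beyond textbook Möbius theory, sketched from memory): every automorphism of \(\mathbb{D}\) has the form
$$
\phi(z) = e^{i\theta}\,\frac{z - a}{1 - \bar{a} z}, \qquad a \in \mathbb{D},\ \theta \in \mathbb{R},
$$
which is the Möbius action of the matrix \(M = \begin{pmatrix} e^{i\theta} & -e^{i\theta} a \\ -\bar{a} & 1 \end{pmatrix}\). When \(M\) diagonalizes as \(P D P^{-1}\), a fractional iterate \(\phi^{\circ t}\) is just the Möbius action of \(M^t = P D^t P^{-1}\) (with a branch choice in \(D^t\)); the parabolic case, where the eigenvalues coincide, needs a Jordan block or a limit instead.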
What I think you're getting at is trying to iterate toward Kneser's Riemann mapping, which would equate to \(f^{\circ n}(y) \to K(y)\), where \(K\) is Kneser's Riemann mapping. That wouldn't be possible. But it would be possible to write \(f_n(y) \to K(y)\); this is discoverable with Taylor series, so it is perfectly possible there exists an iteration formula for this, which could look like \(g_n(g_{n-1}(\cdots)) \to K(y)\). No idea how to do that, though.
Fractional iterations of Riemann mappings are just iterations of automorphisms, though. It's the only way the statement makes sense... Unless you meant something else?
Regards
Posts: 205
Threads: 46
Joined: Jun 2022
What about slog(HF(x))? How fast does that grow?
ฅ(ミ⚈ ﻌ ⚈ミ)ฅ
Please remember to stay hydrated.
Sincerely: Catullus
Posts: 1,676
Threads: 368
Joined: Feb 2009
(06/22/2022, 03:20 AM)Catullus Wrote: What about slog(HF(x))? How fast does that grow?
Didn't I rename HF to EF? And give the bounds in my previous posts? Between \(x + \text{slog}(x)\) and \(x + \text{basechange}(e,x)\)?
regards
tommy1729
Posts: 1,676
Threads: 368
Joined: Feb 2009
What I wonder is how basechange(e,x) behaves as a function of slog(x):
$$
\text{basechange}(e,x) = a_0 + a_1\,\text{slog}(x) + a_2\,\text{slog}(x)^2 + \cdots
$$
regards
tommy1729
Posts: 935
Threads: 111
Joined: Dec 2010
(06/22/2022, 11:38 PM)tommy1729 Wrote: What I wonder is how basechange(e,x) behaves as a function of slog(x): \(\text{basechange}(e,x) = a_0 + a_1\,\text{slog}(x) + a_2\,\text{slog}(x)^2 + \cdots\)
I was always intrigued by the basechange function. But I'm confused by how it's presented in most situations on this forum.
Let \(b > \eta\). If we write:
$$
\text{tet}_b(z) = \text{tet}_e(B(z))
$$
isn't it clear that a solution to this equation always exists locally? The monodromy theorem guarantees this function always exists.
So that:
$$
\text{tet}_b : \mathbb{C}\setminus(-\infty,-2] \to \mathbb{C}
$$
Similarly with \(b=e\). Therefore \(B(z)\) always exists locally. In fact, we only need to worry about \(-\delta < \Re(z) < 1+\delta\) and \(|\Im(z)| < \delta\), because the orbits of the exponential will cover everywhere else. This solution always exists; it is holomorphic and \(1\)-\(1\) for small enough \(\delta\). Then you have a function:
$$
\text{slog}_b(b^z) = \text{slog}_b(z) + 1 = B^{-1}(\text{slog}(e^z)) = B^{-1}(\text{slog}(z)) + 1
$$
Once you have a super-logarithm on a nontrivial neighborhood of \([0,1]\)... I mean, just iterate the orbits and you have \(\text{slog}\) in the complex plane, with its poles (that orbit extension is sketched just below). Now you have the tetration.
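A minimal sketch of that extension step on the real line, assuming nothing but a linear seed on \([0,1)\) (so this is a toy stand-in, not Kneser, and \(B\) plays no role here):

```python
import math

def slog_b(x, b=math.e):
    """Crude real super-logarithm for base b > eta.

    Seed: slog_b(x) ~ x - 1 on [0, 1).
    Extension: the Abel equation slog_b(b**x) = slog_b(x) + 1,
    so walk the orbit of x under log_b / b**(.) into [0, 1)."""
    n = 0
    while x >= 1.0:              # push large x down the orbit
        x = math.log(x) / math.log(b)
        n += 1
    while x < 0.0:               # push negative x up the orbit
        x = b ** x
        n -= 1
    return n + (x - 1.0)

# sanity check of the Abel equation: should print ~1.0
print(slog_b(math.e ** 0.5) - slog_b(0.5))
```

Swap the linear seed for a Kneser-quality seed on \([0,1)\) and the same orbit walk gives the real thing; the extension is the trivial part, exactly as said above.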
I'm confused why it's such a difficult problem on this forum. I'll admit, I have no fucking clue how to evaluate or compute this function \(B\). But it's pretty simple complex-analytic theory to prove it exists. You just need the analytic implicit function theorem and a modest use of the monodromy theorem about the line \([0,1]\).
Maybe it's because I am assuming we are using Kneser in both situations. So it's not much more than an implicit solution to the equation \(b^y = e^u\). Is there something deep I am missing with the basechange theory?
I'm also not talking about "the base change formula". That formula is non-analytic, and that should be settled.
Is there some reason this problem is still important that I'm not privy to, or too dumb to notice?
Regards, James
Posts: 1,676
Threads: 368
Joined: Feb 2009
06/28/2022, 02:03 PM
(This post was last modified: 06/28/2022, 02:06 PM by tommy1729.)
(06/22/2022, 11:38 PM)tommy1729 Wrote: What I wonder is how basechange(e,x) behaves as a function of slog(x): \(\text{basechange}(e,x) = a_0 + a_1\,\text{slog}(x) + a_2\,\text{slog}(x)^2 + \cdots\)
Well, we know that
slog(x^x^x^...) - x
and
slog(2^3^...^x) - x
are approximated by
slog( ln(x)·x + ln(ln(x)) )
or
slog( ln(x-1)·x + ln(ln(x-2)) )
which explains implicitly how the base change formula should work, and why, at least for \(C^\infty\) tetration and real bases x > eta.
This suggests that a Taylor series in slog might not be the best idea.
Maybe slog(f(x)) + slog(f(x))/(x - eta), for a Taylor f, or so.
Hmmm
regards
tommy1729
Posts: 205
Threads: 46
Joined: Jun 2022
07/11/2022, 09:56 AM
(This post was last modified: 08/14/2022, 10:18 PM by Catullus.)
Please let \(x\Lambda^+\) represent the Hyper Bouncing Factorial of x, defined as the Bouncing Factorial of x, but instead of starting at one you start at two, and you replace the multiplication in the definition with exponentiation.
What would happen if you did \(\text{slog}(x\Lambda^+)\)?
ฅ(ミ⚈ ﻌ ⚈ミ)ฅ
Please remember to stay hydrated.
Sincerely: Catullus
Posts: 205
Threads: 46
Joined: Jun 2022
07/13/2022, 02:38 AM
(This post was last modified: 08/14/2022, 10:19 PM by Catullus.)
For what base of slog would \(\text{slog}(\text{EF}(x\Lambda^+)) - x\) converge to zero as x grows larger and larger?
ฅ(ミ⚈ ﻌ ⚈ミ)ฅ
Please remember to stay hydrated.
Sincerely: Catullus