regular slog
#1
Let us determine the regular super logarithm of b^x at the lower fixed point a (with multiplier c = a·ln(b), 0 < c < 1). Regular super logarithm shall mean that it satisfies
(1) rslog(b^x) = rslog(x) + 1
(2) rslog(1) = 0
and that
(3) rslog^{-1}(t + rslog(x)) = exp_b^{∘t}(x), where the right side is the regular iteration of exp_b at the fixed point a.

Then the formula for the principal Abel function is:

α(x) = log_c( lim_{n→∞} (exp_b^{∘n}(x) − a) / c^n )

and that for the regular super logarithm:

rslog(x) = α(x) − α(1).
Graph of rslog: [image not recovered]

Proof:

For this we first compute the regular Schroeder function (note that the Schroeder function is determined up to a multiplicative constant and the Abel function is determined up to an additive constant). A Schroeder function of a function f with multiplier c at its fixed point is a function χ that satisfies the Schroeder equation

χ(f(x)) = c·χ(x).

We see that we can derive a solution of the Abel equation

α(f(x)) = α(x) + 1

by setting α = log_c ∘ χ, since log_c(χ(f(x))) = log_c(c·χ(x)) = log_c(χ(x)) + 1.

Now there is the so-called principal Schroeder function of a function g with fixed point 0 and slope c = g'(0), 0 < |c| < 1, given by:

χ(x) = lim_{n→∞} g^{∘n}(x) / c^n.

This function particularly yields the regular iteration at 0, via g^{∘t}(x) = χ^{-1}(c^t·χ(x)).
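As a numeric sketch of this limit (my own illustration, not from the original post): take g(x) = sqrt(2)^(x+2) − 2, which has an attracting fixed point 0 with slope c = ln 2; this is the conjugated function used later in the thread. All names are mine.

```python
import math

def principal_schroeder(x, g, c, n=40):
    """Principal Schroeder function at an attracting fixed point 0:
    chi(x) = lim_{n->oo} g^n(x) / c^n,  where c = g'(0), 0 < |c| < 1."""
    t = x
    for _ in range(n):
        t = g(t)          # iterate g toward the fixed point
    return t / c ** n

# example: g(x) = sqrt(2)^(x+2) - 2 has fixed point 0 with slope ln 2
g = lambda x: 2.0 ** ((x + 2.0) / 2.0) - 2.0
c = math.log(2.0)

chi_x = principal_schroeder(0.5, g, c)
chi_gx = principal_schroeder(g(0.5), g, c)   # should equal c * chi_x
```

The last two lines check the Schroeder equation χ(g(x)) = c·χ(x) numerically; the truncation at n = 40 iterations leaves an error far below the tolerance used.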

To determine the Schroeder function at the lower fixed point a of exp_b we consider g(x) = b^{x+a} − a, with fixed point 0 and the same slope c = a·ln(b). Let χ be the principal Schroeder function of g; then

χ(g(x)) = c·χ(x),
χ(b^{x+a} − a) = c·χ(x), i.e. χ(b^y − a) = c·χ(y − a).

Hence x ↦ χ(x − a) is the principal Schroeder function of exp_b at a.

To get the principal Abel function we take the logarithm to base c:

α(x) = log_c(χ(x − a)).
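The limit formula can be sketched in a few lines, assuming the concrete base b = sqrt(2) with lower fixed point a = 2 and multiplier c = ln 2 (the example treated later in the thread); the function names are my own.

```python
import math

def regular_abel(x, b=2 ** 0.5, a=2.0, n=40):
    """Principal Abel function of exp_b at the attracting fixed point a,
    via  alpha(x) = lim_{n->oo} ( log_c|exp_b^n(x) - a| - n ),
    where c = a*ln(b) is the multiplier (ln 2 here, so 0 < c < 1)."""
    c = a * math.log(b)
    t = x
    for _ in range(n):
        t = b ** t                       # iterate exp_b toward a
    return math.log(abs(t - a)) / math.log(c) - n

def regular_slog(x):
    """Regular super logarithm, normalized so that rslog(1) = 0."""
    return regular_abel(x) - regular_abel(1.0)
```

Then rslog(b^x) = rslog(x) + 1 holds up to the truncation error of the limit, e.g. rslog(sqrt(2)) ≈ 1.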
#2
For repelling fixed points, we can compute the Abel function of the inverse function, which has attracting fixed points:

β(log_b(x)) = β(x) + 1,

and then it is clear that α := −β is an Abel function of the original function exp_b, because
β(log_b(x)) = β(x) + 1 iff β(x) = β(b^x) + 1 iff −β(b^x) = −β(x) + 1.

So for repelling fixed points we get the formula:

α(x) = log_c( lim_{n→∞} c^n·(log_b^{∘n}(x) − a) ),

which works for arbitrary repelling complex fixed points a and arbitrary x as long as we choose a branch of the involved logarithm such that lim_{n→∞} log_b^{∘n}(x) = a.

For computing the regular super logarithm however we face a major problem with repelling fixed points: we cannot compute α(1) directly, as log_b^{∘n}(1) does not converge to a. This presents a problem because for the rslog we have to compute α(1) to normalize the values.
The good news however is that lim_{k→∞} (α(exp_b^{∘k}(1)) − k) seems to always exist. So the regular super logarithm is then:

rslog(x) = α(x) − lim_{k→∞} (α(exp_b^{∘k}(1)) − k) for x in the domain of α, and
rslog(x) = rslog(b^x) − 1 otherwise.
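For the real case this can be sketched numerically (my own sketch, not from the post): base b = sqrt(2) has the repelling fixed point 4, and the inverse log_b has 4 as an attracting fixed point, so the Abel function there can be evaluated through iterated logarithms.

```python
import math

def abel_at_repelling(x, b=2 ** 0.5, a=4.0, n=40):
    """Abel function of exp_b at the repelling fixed point a, computed
    through the attracting fixed point of the inverse log_b; with
    c = a*ln(b) > 1 the multiplier of the inverse is 1/c < 1, and the
    Abel function of exp_b is minus that of log_b."""
    c = a * math.log(b)                   # multiplier of exp_b at a (2*ln 2 > 1)
    t = x
    for _ in range(n):
        t = math.log(t) / math.log(b)     # iterate the inverse, log_b
    # log_b^n(x) - a ~ chi(x) / c^n, so the shifted log recovers log_c(chi)
    return -(math.log(abs(t - a)) / math.log(1.0 / c) - n)
```

For x in the basin (2, ∞) this satisfies the Abel equation α(b^x) = α(x) + 1 up to the truncation error.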

Following the idea of Jay to add the regular Abel functions at conjugate fixed points (and my idea to divide by 2 to get an Abel function again) let us consider

α(x) = ( α_a(x) + α_{conj(a)}(x) ) / 2,

where a is a fixed point in the upper halfplane.

Proposition: for real x, α_{conj(a)}(x) = conj(α_a(x)). Particularly this implies that α(x) = Re(α_a(x)) is real.

Note that we define α merely on the real axis, because this is the intersection of the domains of definition of α_a (upper halfplane) and α_{conj(a)} (lower halfplane).

Proof:

The first question that appears is: which branch of the logarithm converges to which fixed point? While the usual logarithm is defined to yield imaginary values in (−π, π], for the lower primary fixed point we use the logarithm that yields imaginary values in (−2π, 0]; denote the branch with imaginary values in (2π(k−1), 2πk] by log_{[k]} (so the lower primary fixed point uses log_{[0]}). For the non-primary fixed points, log_{[k]} is appropriate for the kth upper fixed point and log_{[−k+1]} is appropriate for the conjugated fixed point (though a neighbouring branch is also ok for some x).


We first verify that conj(log_{[−k+1]}(z)) = log_{[k]}(conj(z)), and hence conj(log_b^{[−k+1]}(z)) = log_b^{[k]}(conj(z)):
conjugation flips the sign of the imaginary part, so it exchanges the branch with imaginary values in (2π(k−1), 2πk] and the branch with imaginary values in (−2πk, −2π(k−1)], and for such mirrored branches conj(log(z)) = log(conj(z)).

A further consequence is that the fixed points come in conjugate pairs: the mirrored branch converges to the conjugated fixed point.

The rest is then easily established; let a_k be the kth fixed point in the upper half plane:
α_{conj(a_k)}(x) = conj(α_{a_k}(x)) for real x, since each term of the limit formula for α_{conj(a_k)}(x) is the conjugate of the corresponding term for α_{a_k}(x).
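The fixed point pair itself is easy to obtain numerically; as a sketch (my own simplification, using base e, where iterating the principal logarithm already converges to the upper primary fixed point):

```python
import cmath

# The primary fixed point of exp in the upper half plane satisfies
# a = log(a); the principal log has it as an *attracting* fixed point
# (|1/a| < 1 there), so plain iteration converges.
a = complex(1, 1)
for _ in range(200):
    a = cmath.log(a)

# a is approximately 0.318 + 1.337i; its conjugate is the fixed point
# in the lower half plane, illustrating the conjugation symmetry
# conj(log(z)) = log(conj(z)) for mirrored branches.
```

The same conjugation symmetry is what makes the averaged Abel function above real on the real axis.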
#3
Forgive me for being slow, but have you shown that this satisfies Szekeres' definition of regularity? and if so, where have you shown this?

Andrew Robbins
#4
andydude Wrote:Forgive me for being slow, but have you shown that this satisfies Szekeres' definition of regularity? and if so, where have you shown this?

In [1] Szekeres defines an iteration f^{∘t} to be regular (in the case 0 < f'(0) = c < 1) if it is a family of Schroeder iterates f^{∘t}(x) = χ^{-1}(c^t·χ(x)) (where χ is a Schroeder function) such that

f^{∘t}(x) ~ c^t·x for x → 0.

Such a family of Schroeder iterates is then unique (and we usually call it the regular iterates at fixed point 0).
Szekeres shows in [1] that f^{∘t} is regular if the principal Schroeder function

χ(x) = lim_{n→∞} f^{∘n}(x) / c^n

is used for the Schroeder iterates. (Given that it exists and satisfies some further conditions such as strict monotonicity, differentiability, etc.)

In the case of analytic functions with an asymptotic development at 0, the formal iterates are the regular iterates.

[1] G. Szekeres: Regular iteration of real and complex functions, Acta Mathematica 100, 1958.
#5
andydude Wrote:Forgive me for being slow, but have you shown that this satisfies Szekeres' definition of regularity? and if so, where have you shown this?

What function/iterates do you mean?
#6
Ok, I understand now. I just did the same thing with Aldrovandi's diagonalization method. Aldrovandi and others have shown that when you diagonalize the Koch/Bell/Carleman matrix of a function f, then the diagonal matrix D contains the eigenvalues, i.e. the powers of f'(0), and the diagonalizing matrix is the inverse of the Koch/Bell/Carleman matrix of the Schroeder function. So this got me thinking whether the eigensystem decomposition (or matrix diagonalization) produces a regular Schroeder function.

It does, but of course eigenvectors are only unique up to scaling, so I suppose you could think of it as a question of convention rather than uniqueness. You could, of course, find an eigensystem decomposition such that the diagonalizing matrix was the Koch/Bell/Carleman matrix of the principal Schroeder function, but I was wondering how to do so. Turning to Mathematica, I found the nice function Eigensystem[] for breaking down a matrix into its eigenvalues and eigenvectors, so I thought I'd give it a try.

The eigenvectors returned by Eigensystem[] form the columns of the diagonalizing matrix P in B = P·D·P^{-1}, but the Schroeder function matrix S satisfies S·B·S^{-1} = D, so even though Eigensystem[] returns the matrix of eigenvectors, this is actually P, which means S is actually P^{-1}, using a degree 3 approximation to the infinite matrices. So, using the property that the series coefficients are just the 1st row (as opposed to the 0th row), we can read off the corresponding Schroeder function as a power series, where f is of the form f(x) = h^x − 1. Now what I find interesting is that the definition of the principal Schroeder function states that

χ(x) = lim_{n→∞} f^{∘n}(x) / c^n with c = f'(0),

which implies the finite-n approximations f^{∘n}(x)/c^n, and in the limit (assuming 0 < c < 1) this is the same Schroeder function the Eigensystem[] function returned. From this, I can clearly see that the limit definition of the Schroeder function actually makes sense, because it didn't make sense to me before now. Fortunately, my understanding of Koch/Bell/Carleman matrices allowed me to find another way to get the same function.


While I was doing this I noticed something very interesting. We know the relationship between the Abel and Schroeder function is α = log_c ∘ χ, which means the inverse relationship is:

α^{-1}(y) = χ^{-1}(c^y),

and replacing f with DE, we get

α^{-1}(y) = Σ_n p_n·c^{n·y} (with c = DE'(0) = ln(h)),

because χ^{-1}(x) = Σ_n p_n·x^n, and because the matrix P represents the inverse Schroeder function, so its rows already hold the coefficients p_n. What I find interesting about this is that it is almost a Fourier expansion of the exponential of iteration of DE, and that it is almost easier to compute than the Schroeder function, since you don't even need to invert P!
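A small sketch of the diagonalization claim (my own pure-Python construction, not Andrew's Mathematica session): for a series f with f(0) = 0 the truncated Bell/Carleman matrix is triangular, so its eigenvalues sit on the diagonal and are exactly the powers of f'(0). Here f is the decremented exponential DE(x) = h^x − 1 with h = sqrt(2); all names are mine.

```python
import math

def bell_matrix(coeffs, n):
    """Truncated Bell matrix of a power series f with f(0) = 0:
    row m holds the coefficients of f(x)^m up to degree n."""
    rows = [[1.0] + [0.0] * n]                 # row 0: f^0 = 1
    for m in range(1, n + 1):
        prev, row = rows[-1], [0.0] * (n + 1)
        for i in range(n + 1):
            for j in range(min(len(coeffs), n + 1 - i)):
                row[i + j] += prev[i] * coeffs[j]
        rows.append(row)
    return rows

# decremented exponential DE(x) = h**x - 1: coefficients (ln h)^k / k!,
# with constant term 0
h = 2 ** 0.5
n = 5
de = [0.0] + [math.log(h) ** k / math.factorial(k) for k in range(1, n + 1)]
B = bell_matrix(de, n)

# B is triangular, so its eigenvalues are the diagonal entries
# B[m][m] = (ln h)^m, the powers of DE'(0)
diag = [B[m][m] for m in range(n + 1)]
```

The diagonal entries match (ln h)^m to machine precision, which is the eigenvalue claim in matrix form.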

Andrew Robbins
#7
andydude Wrote:Ok, I understand now. I just did the same thing with Aldrovandi's diagonalization method. Aldrovandi and others have shown that when you diagonalize the Koch/Bell/Carleman matrix of a function f, then the diagonal matrix D contains the eigenvalues, i.e. the powers of f'(0), and the diagonalizing matrix is the inverse of the Koch/Bell/Carleman matrix of the Schroeder function. So this got me thinking whether the eigensystem decomposition (or matrix diagonalization) produces a regular Schroeder function.

Yes, this method is Gottfried's method. I already posted somewhere that in the case of hyperbolic iteration (power series developed at a fixed point), Gottfried's method gives the formal power series iteration, which is the regular iteration.

However this method is also applicable to developments at non-fixed points (in this case D no longer consists of the powers of f'(0), but is still a diagonal matrix). I now realize that this method usually would depend on the development point. For example consider sqrt(2)^x with the fixed points 2 and 4. If we start with the diagonalization at development point 2 we get the regular iteration at 2. If we move the development point continuously to 4 (whose regular iteration is different from the one at 2) the iterates must have changed ...

Quote:
It does, but of course eigenvectors are only unique up to scaling, so I suppose you could think of it as a question of convention rather than uniqueness.
Oh, see: the regular Abel function is only determined up to an additive constant and the regular Schroeder function is only determined up to a multiplicative constant. In our case of the slog we simply fix one Abel function by the condition rslog(1) = 0.

Quote:While I was doing this I noticed something very interesting. We know the relationship between the Abel and Schroeder function is α = log_c ∘ χ, which means the inverse relationship is:
α^{-1}(y) = χ^{-1}(c^y),
and replacing f with DE, we get
α^{-1}(y) = Σ_n p_n·c^{n·y},
because χ^{-1}(x) = Σ_n p_n·x^n, and because the matrix P represents the inverse Schroeder function. What I find interesting about this is that it is almost a Fourier expansion of the exponential of iteration of DE, and that it is almost easier to compute than the Schroeder function, since you don't even need to invert P!

Uff, can you just say what h and DE are?
#8
Ok. h is the base of the decremented exponential, and DE is the decremented exponential function DE(x) = h^x − 1. The reason why I chose the symbol h is because when you use iterated decremented exponentials to find iterated exponentials, the bases obey the h-root-h relationship b = h^{1/h}, as I mentioned here and you mentioned here, where h = b^h, so h is the value of the infinitely iterated exponential of b, which is also the symbol Galidakis uses for the infinitely iterated exponential.

Andrew Robbins
#9
Thanks for refreshing my memory :)
Yes, for regular iteration of power series it is indeed easier to compute the inverse of the Abel function, which is however nearly the same as computing the iterates f^{∘t}. The Abel function also has a singularity at 0.
#10
As we now have the limit formula for the regular slog, it is time to publish also the corresponding power series for the regular slog. Interestingly the computation is in a certain way similar to the computation of the natural Abel function and Andrew's slog.
This is the good thing about regular tetration: there is a well-developed theory about direct computation (limit formula) and about power series computation.

So let us start with the regular Schroeder function σ of a given power series f at 0 with fixed point at 0.

A Schroeder function satisfies
σ(f(x)) = c·σ(x) (where c = f'(0)).

Let us write this with the Bell matrices (the mth row contains the coefficients of the mth power of the function), B for f and S for σ. As f and σ don't have a constant/0th coefficient, the matrices are correspondingly stripped. The equation becomes

S·B = D·S, with D = diag(c, c², c³, ...);

we only need to consider the first row of this equation, which is, with s the coefficient row vector of σ:

s·B = c·s.


E.g., truncated to size 4:

B = [[f₁, f₂, f₃, f₄], [0, f₁², 2f₁f₂, f₂² + 2f₁f₃], [0, 0, f₁³, 3f₁²f₂], [0, 0, 0, f₁⁴]].

We see that the first column of B − c·I is 0 (as f₁ = c) and needs to be chopped; this gives freedom up to a multiplicative constant for σ (which is anyway known for Schroeder functions), and we decide to choose σ₁ = 1 or σ₁ = −1 depending on from which side we approach the fixed point. This then leads to the equation system with the matrix B with removed first row and column:

Σ_{m=1}^{k} σ_m·B[m, k] = c·σ_k for k = 2, 3, ...

However we don't need an equation solver to solve this system, because we chopped off the first row and column, and not the last row and the first column as in Andrew's slog; we can solve it by hand:

σ_k = ( Σ_{m=1}^{k−1} σ_m·B[m, k] ) / (c − c^k).

Also the solution of this equation system does not depend on the truncation size, unlike with Andrew's slog. But of course this advantage is put into perspective by needing a fixed point.

So we have a formula for the power series of the regular Schroeder function σ. Then the regular Abel function is just α = log_c ∘ σ.
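The term-by-term solution can be sketched directly (my own code; `poly_mul`, the normalization σ₁ = 1, and the test point are assumptions, matching the hyperbolic case c ≠ 0, 1):

```python
import math

def poly_mul(p, q, n):
    """Product of two polynomials, truncated at degree n."""
    r = [0.0] * (n + 1)
    for i, a in enumerate(p[: n + 1]):
        for j, b in enumerate(q[: n + 1 - i]):
            r[i + j] += a * b
    return r

def schroeder_series(f, n):
    """Coefficients sigma_1..sigma_n of the regular Schroeder function of
    f(x) = c*x + f2*x^2 + ... (given as [0, c, f2, ...], c != 0, 1),
    normalized by sigma_1 = 1, solved row by row from sigma(f(x)) = c*sigma(x):
        sigma_k = ( sum_{m<k} sigma_m * B[m][k] ) / (c - c^k),
    where B[m][k] is the k-th coefficient of f(x)^m (the Bell matrix)."""
    c = f[1]
    B = {1: (f + [0.0] * (n + 1))[: n + 1]}
    for m in range(2, n + 1):
        B[m] = poly_mul(B[m - 1], B[1], n)
    sigma = [0.0, 1.0]
    for k in range(2, n + 1):
        sigma.append(sum(sigma[m] * B[m][k] for m in range(1, k)) / (c - c ** k))
    return sigma

def peval(p, x):
    return sum(a * x ** i for i, a in enumerate(p))

# conjugated sqrt(2)-exponential g(x) = sqrt(2)^(x+2) - 2,
# coefficients g_k = 2*(ln2/2)^k / k!, so c = g_1 = ln 2
n = 15
g = [0.0] + [2 * (math.log(2) / 2) ** k / math.factorial(k) for k in range(1, n + 1)]
sigma = schroeder_series(g, n)

# check the Schroeder equation sigma(g(x)) = c * sigma(x) near 0
x = -0.05
gx = 2 ** ((x + 2) / 2) - 2
```

Since the recursion for σ_k only involves coefficients of f up to degree k, the truncated answer agrees with the true series coefficients, and the Schroeder equation holds at the test point to near machine precision.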

Let us apply this to sqrt(2)^x. First we have to move the fixed point 2 to 0 by conjugation:
g(x) = sqrt(2)^{x+2} − 2 = 2·(e^{x·ln(2)/2} − 1) has the coefficients:
g₀ = 0, g_k = 2·(ln(2)/2)^k / k!, in particular g₁ = ln(2) = c.

The regular Schroeder function σ of g has σ₁ = 1 and the remaining coefficients from the recursion

σ_k = ( Σ_{m=1}^{k−1} σ_m·B[m, k] ) / (c − c^k),

where B is the Bell matrix of g.

So the Abel function of g is log_c(σ(x)), and so the
Abel function of sqrt(2)^x is α(x) = log_c(σ(x − 2)).

However it seems as if the convergence radius of σ is just 2. So you cannot use this formula exclusively to plot, for example, the regular Abel function of sqrt(2)^x in the range -1 to 1.9; it does not converge at 0. A numeric comparison with the natural slog will hopefully follow later.

