Jabotinsky's iterative logarithm
#11
Ivars Wrote:Would that be true also for imaginary t? t=I, like:

\( \text{ilog}(f^{\circ I}) = I \text{ilog}(f) \)

yes. As one can see from the derivation it is true for any complex iteration counts.
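This can be sketched numerically for the simple case \( f(x)=2x \), where \( f^{\circ t}(x)=2^t x \) makes sense for any complex \( t \) and \( \text{ilog}(f)(x)=x\ln 2 \). The helper names below are mine, and the derivative is taken by a crude finite difference, so this is only an illustrative check:

```python
import cmath

# Sketch for f(x) = 2x: f^t(x) = 2^t x for any complex t,
# and ilog(f)(x) = [d/dt f^t(x)]_{t=0} = x ln 2.
def f_iter(t, x):
    return cmath.exp(t * cmath.log(2)) * x

def ilog_of_iterate(t, x, h=1e-6):
    # numerical d/ds f^(s*t)(x) at s = 0, i.e. ilog(f^t)(x)
    return (f_iter(h * t, x) - f_iter(0, x)) / h

x, t = 1.5, 1j
lhs = ilog_of_iterate(t, x)      # ilog(f^I)(x)
rhs = t * x * cmath.log(2)       # I * ilog(f)(x)
print(abs(lhs - rhs) < 1e-5)     # True
```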

Quote:The functions having ilog property ilog(fg)=ilog (f) + ilog(g) seem to be rather wide class, am I right?

It's not one class; there are equivalence classes, where \( f \) is considered equivalent to \( g \) if there is some \( t\neq 0 \) such that \( f=g^{\circ t} \). For each such equivalent pair \( f \) and \( g \) this property holds.
#12
@Henryk
Good point (about \( f^{\circ t} \) being a function parameter, not a value parameter). I didn't realize that because of the vague notation we use. I'm going to use a very verbose notation, like the kind used for other transforms (like the Fourier \( \mathcal{F}[f](x) \) and Laplace \( \mathcal{L}[f](x) \) transforms), so that we can get this right.

Starting with the Abel functional equation:
\( \mathcal{A}[f](f^{\circ t}(x)) = \mathcal{A}[f](x) + t \)
\( \frac{\partial}{\partial x} \mathcal{A}[f](f^{\circ t}(x)) (f^{\circ t})'(x) = \frac{\partial}{\partial x} \mathcal{A}[f](x) \)
\( \mathcal{J}[f](f^{\circ t}(x)) = (f^{\circ t})'(x) \mathcal{J}[f](x) \)
George Szekeres defines the Julia function as a function that satisfies the above functional equation and is also the reciprocal of the derivative of the Abel function. This is exactly what has just been said about "ilog", so these should be the same function.

The discussion above defines the same function as:
\( \mathcal{J}[f](x) = \left[\frac{\partial}{\partial t} f^{\circ t}(x)\right]_{t=0} \)
such that:
\( \mathcal{J}[f^{\circ t}](x) = t \mathcal{J}[f](x) \)
which according to you and Ecalle, is also related as:
\( \mathcal{J}[f](x) = \frac{1}{\frac{\partial}{\partial x} \mathcal{A}[f](x)} \)
which is exactly the definition of a Julia function.

I think what we should take away from this is that the two equations:
\( \mathcal{J}[f](f^{\circ t}(x)) = (f'(x))^t \mathcal{J}[f](x) \)
\( \mathcal{J}[f^{\circ t}](x) = t \mathcal{J}[f](x) \)
do not contradict each other in any way (although they would if we continued using the vague notation). And now that we see that these are talking about the same function, we know 2 things about 1 function, as opposed to 1 thing about 2 functions...
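A hedged check of both equations with \( f(x)=2x \), where \( f^{\circ t}(x)=2^t x \) and the Julia function is \( \mathcal{J}[f](x)=x\ln 2 \). For this linear \( f \) we have \( (f^{\circ t})'(x)=2^t \), and the second Julia value is taken by a crude finite difference; the names are illustrative:

```python
import math

LN2 = math.log(2)

def f_t(t, x):     # f^t(x) = 2^t x
    return 2**t * x

def J(x):          # Julia function J[f](x) = x ln 2
    return x * LN2

t, x = 0.7, 1.3
# first equation: J[f](f^t(x)) = (f^t)'(x) * J[f](x); here (f^t)'(x) = 2^t
print(math.isclose(J(f_t(t, x)), 2**t * J(x)))        # True
# second equation: J[f^t](x) = t * J[f](x); J[f^t](x) = d/ds (f^t)^s(x) at s=0
J_ft = (f_t(t * 1e-7, x) - x) / 1e-7
print(math.isclose(J_ft, t * J(x), rel_tol=1e-4))     # True
```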

[update]I fixed the t=1 problem.[/update]

Andrew Robbins
#13
andydude Wrote:Starting with the Abel functional equation:
\( \mathcal{A}[f](f^{\circ t}(x)) = \mathcal{A}[f](x) + t \)
\( \frac{\partial}{\partial x} \mathcal{A}[f](f^{\circ t}(x)) f'(x) = \frac{\partial}{\partial x} \mathcal{A}[f](x) \)
\( \mathcal{J}[f](f^{\circ t}(x)) = f'(x) \mathcal{J}[f](x) \)
...
\( \mathcal{J}[f](f^{\circ t}(x)) = f'(x) \mathcal{J}[f](x) \)
For me it looks as if \( t=1 \) must always be set ...

Quote:now we know 2 things about 1 function, as opposed to 1 thing about 2 functions...

Smile
#14
bo198214 Wrote:Gosh, I should learn proper reading first. Jabotinsky states:
If \( F(G(z))=G(F(z)) \) it can be shown [5] that \( G(z) \) is some iterate of \( F(z) \) and hence that [translated into our style]:
\( \text{ilog}(F\circ G)=\text{ilog}(F)+\text{ilog}(G) \)

[5] J. Hadamard, Two works on iteration, Bull. Amer. Math. Soc. 50 (1944), 67-75
Reading the above through the "matrix filter", with F and G thought of as their respective matrix operators FM and GM, the above equality (F°G = G°F) holds only if the matrices commute - and this is also required if the matrix logarithms are to be added: log(FM) + log(GM) = log(FM*GM) = log(GM*FM) <=> FM and GM commute.
For finite matrices it is also known that matrices commute if their eigenvectors are identical. But if the eigenvectors are identical, then FM and GM differ only in their sets of eigenvalues FV and GV.

Hmm. What is surprising now is that if FM and GM provide different iterates, then, for tetration, the eigenvalues in FV and GV must be appropriate powers of the same base - this seems much more restrictive than the statement of Jabotinsky: there I can't find such a restriction (but maybe I'm just missing it at the moment)

Gottfried
Gottfried Helms, Kassel
#15
Gottfried Wrote:For finite matrices it is also known that matrices commute if their eigenvectors are identical. But if the eigenvectors are identical, then FM and GM differ only in their sets of eigenvalues FV and GV.

That's good additional information for us non-matrixers Wink

Quote:Hmm. What is surprising now is that if FM and GM provide different iterates, then, for tetration, the eigenvalues in FV and GV must be appropriate powers of the same base - this seems much more restrictive than the statement of Jabotinsky: there I can't find such a restriction (but maybe I'm just missing it at the moment)

But take into account that Jabotinsky is only considering functions with f(0)=0, i.e. which have a fixed point at 0. Generally, all formal power series computations are restricted to this case, because otherwise the coefficients of the composition are no longer finite expressions (in terms of the power series coefficients).
#16
bo198214 Wrote:For me it looks as if there always \( t=1 \) must be set ...
oops
\( \mathcal{J}[f](f^{\circ t}(x)) = (f^{\circ t})'(x) \mathcal{J}[f](x) \)
#17
bo198214 Wrote:But take into account that Jabotinsky is only considering functions with f(0)=0, i.e. which have a fixed point at 0. Generally, all formal power series computations are restricted to this case, because otherwise the coefficients of the composition are no longer finite expressions (in terms of the power series coefficients).

Ah - yes. If the matrix operator is triangular, then the diagonal is the sequence of consecutive powers of f'(0)/1!. So, in matrix lingo the statement is:[updated]
  1. assume functions f and g have their associated matrix operators FM and GM triangular (f(0)=0, g(0)=0)
  2. then both operators have sets of eigenvalues which consist of the consecutive powers of a base parameter, say u for FM and v for GM
  3. f and g may be seen as iterates f0°a resp. g0°b; the second eigenvalue of FM is then u0^a and that of GM is v0^b
  4. if f°g = g°f, then the operators FM and GM commute
  5. if FM and GM commute, their eigenvectors are the same (a statement extrapolated from finite matrices)
  6. the eigenvectors consist of polynomials in u0 and v0.
    Proposal: from the polynomial composition of the coefficients in the eigenvectors it follows that u0=v0
  7. from u0=v0 it follows that v = v0^b = u0^b = u^(b/a), or u = u0^a = v0^a = v^(a/b)
  8. from this it follows that also f0=g0, and
  9. f = f0°a = g0°a = g°(a-b),
    or g = g0°b = f0°b = f°(b-a),
    so f and g are iterates of each other
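Steps 1, 4 and 5 can be sketched with truncated Carleman matrices, using the convention that row i of M(f) holds the coefficients of f(x)^i, so that M(f)·M(g) represents f°g. Here g = f°f, so f and g commute by construction; all helper names are illustrative, and the eigenvector claim itself is not tested, only the commuting and the eigenvalue pattern:

```python
from fractions import Fraction

N = 6  # truncation order

def poly_mul(a, b):
    # multiply two truncated power series (lists of coefficients)
    c = [Fraction(0)] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < N:
                c[i + j] += ai * bj
    return c

def poly_compose(a, b):
    # a(b(x)) for series with b[0] == 0, truncated to N terms
    result = [Fraction(0)] * N
    power = [Fraction(0)] * N
    power[0] = Fraction(1)          # b(x)^0 = 1
    for k in range(N):
        for j in range(N):
            result[j] += a[k] * power[j]
        power = poly_mul(power, b)
    return result

def carleman(f):
    # row i holds the coefficients of f(x)^i
    M, row = [], [Fraction(1)] + [Fraction(0)] * (N - 1)
    for _ in range(N):
        M.append(row)
        row = poly_mul(row, f)
    return M

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

# f(x) = 2x + x^2  (fixed point at 0, f'(0) = 2)
f = [Fraction(0), Fraction(2), Fraction(1)] + [Fraction(0)] * (N - 3)
g = poly_compose(f, f)              # g = f°f, so f and g commute

FM, GM = carleman(f), carleman(g)
print(mat_mul(FM, GM) == mat_mul(GM, FM))   # True: the operators commute
print([FM[i][i] for i in range(N)])         # diagonal: consecutive powers of f'(0) = 2
```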

Hmm, this statement is worth putting into the matrix-facts library (to be created)... Smile

Gottfried
Gottfried Helms, Kassel
#18
Gottfried Wrote:Ah - yes. If the matrix operator is triangular, then the diagonal is the sequence of consecutive powers of f'(0)/1!. So, in matrix lingo the statement is:

  1. assume functions f and g have their associated matrix operators FM and GM triangular (f(0)=0, g(0)=0)
  2. then both operators have sets of eigenvalues which consist of the consecutive powers of a base parameter, say u for FM and v for GM
  3. if f°g = g°f, then the operators FM and GM commute
  4. if FM and GM commute, their eigenvectors are the same (a statement extrapolated from finite matrices)
  5. so the only differing characteristic of FM and GM is the value of their second eigenvalue, u^1 and v^1
  6. but v can be expressed as the x'th power of u: v=u^x (or, if v is negative and u is positive, u=v^y)
  7. this exponent of the eigenvalue u (resp. v) is just the height parameter, so g=f°x or f=g°y

Yes! (However "height" sounds strange when used for iteration of functions which are not exponentials)

Quote:[*] remark: this argumentation seems to imply that f and g have the same base parameter

There is no base. \( f \) and \( g \) are arbitrary analytic functions with a development at 0 and a fixed point at 0.
#19
bo198214 Wrote:
Quote:[*] remark: this argumentation seems to imply that f and g have the same base parameter

There is no base. \( f \) and \( g \) are arbitrary analytic functions with a development at 0 and a fixed point at 0.

Hmm - first: I didn't expect that someone would reply so fast, so I added my corrections to the statement list just by updating. Sorry.

second: yes, true; maybe the terms "base" and "height" focus the reader on some speciality. What I mean is the following.

I express the operator iteration as
addition: x {+,b} h which means ((...((x+b)+b)...)+b), h times
multiplication: x {*,b} h which means ((...((x*b)*b)...)*b), h times
tetration: x {^,b} h which means b^(...^(b^(b^x))), h times

and generally for a function F, expressed as operator, this may perhaps be extended to

function F: x {F,b} h which then means F(F(...F(x))), h times

Then the formal base parameter b may be included in the definition of F itself and omitted/defaulted. For instance, we are not used to considering the sin() function in explicit terms of a "base" - but in fact the power series may be parametrized with log(b), giving a sin_b(x), and the conventional sin() function is then sin_b(x) with b=exp(1).
So, for the iterated sin-function sin_b we may write

function sin_b: x {SIN,b} h which means sin_b(sin_b(...sin_b(x))), h times

and then

function sin : x {SIN,exp(1)} h
with the default
function sin : x {sin} h = sin°h(x)
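For whole-number heights h, the bracket notation above is just repeated composition. A minimal sketch (the helper name iterate is mine):

```python
import math

def iterate(F, h, x):
    # x {F} h  =  F(F(...F(x)))  applied h times (whole-number h only)
    for _ in range(h):
        x = F(x)
    return x

print(iterate(math.sin, 3, 1.0))         # sin(sin(sin(1.0))) = sin°3(1.0)
print(iterate(lambda v: v + 2, 5, 0))    # x {+,2} 5 with x=0  ->  10
```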

Regarding the "height" - well, in fact it would be better to say "iterate", but the letter "i" as its abbreviation would be *much* too confusing. Our usual "t" is somehow anonymous, and I have used "t" from the beginning of my tetration discussion as a very basic notation (also in my program code), so what to do? "Height" for "iteration height" is at least not completely misleading...
Gottfried Helms, Kassel
#20
Gottfried Wrote:Regarding the "height" - well, in fact it would be better to say "iterate", but the letter "i" as its abbreviation would be *much* too confusing. Our usual "t" is somehow anonymous, and I have used "t" from the beginning of my tetration discussion as a very basic notation (also in my program code), so what to do?

Perhaps we can agree on an external third person. Ecalle uses \( w \) for what I would call "iteration exponent" or "iteration count". So \( w \) would be mnemonic for "iteration width" haha.