Ivars Wrote:So iteration of a function is function repeatedly applied. I-times applied function is as strange as I times 2*2*2*...., but different.

Exactly. The thing is of course that we have an immediate definition of applying something n times, n a natural number; everything else, however, needs some rules to extrapolate the meaning of fractional iteration, real iteration or complex iteration.

For example

\( 2 \cdot i \) is 2 repeated \( i \) times in addition, whatever that should mean; but we have the commutativity of multiplication, which we surely want to keep for complex numbers too, and hence \( 2 \cdot i = i \cdot 2 = i + i \). That's an easy way to derive it.

\( 2^i \), what should that mean? For that we have the function \( e^x \), which can be given as a power series and can therefore also be applied to complex arguments: \( e^z = \sum_{n=0}^{\infty} \frac{z^n}{n!} \). And there it turns out that \( e^{iy} = \cos(y) + i\sin(y) \), and so we can derive \( 2^i = e^{i\ln(2)} = \cos(\ln(2)) + i\sin(\ln(2)) \).
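As a quick numeric check (my own Python sketch, not part of the original derivation), Euler's formula indeed gives the same value as Python's built-in complex power:

```python
import math

# 2^i evaluated directly (Python computes this as exp(i*ln 2))
lhs = 2 ** 1j
# 2^i via Euler's formula: cos(ln 2) + i*sin(ln 2)
rhs = complex(math.cos(math.log(2)), math.sin(math.log(2)))
assert abs(lhs - rhs) < 1e-12  # both are approximately 0.769 + 0.639i
```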

Finally, what about \( a[4]i \)? Perhaps we first ask for the already notationally occupied \( f^{\circ i} \). And perhaps before that we start with the basics:

On functions one can define the composition operation \( \circ \). The function \( f \circ g \) is the function gained by first applying the function \( g \) and then applying the function \( f \): \( (f \circ g)(x) = f(g(x)) \).

Correspondingly one defines \( f^{\circ n} \) to be \( f \) composed \( n \) times, e.g. \( f^{\circ 2} = f \circ f \), \( f^{\circ 3} = f \circ f \circ f \).
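For natural \( n \) this is directly programmable; a small Python sketch (the names are my own):

```python
def iterate(f, n):
    """Return f composed with itself n times; iterate(f, 0) is the identity."""
    def f_n(x):
        for _ in range(n):
            x = f(x)
        return x
    return f_n

double = lambda x: 2 * x
assert iterate(double, 3)(5) == 40   # 2*2*2*5
assert iterate(double, 0)(5) == 5    # f^{°0} = id
```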

Now we have what \( n \) times iteration means, \( n \) being a natural number. The next question would be how to extend this to the integers, i.e. including negative numbers. To see this we first notice that \( f^{\circ m} \circ f^{\circ n} = f^{\circ (m+n)} \), and surely we want to keep this law also for other number domains, hence \( f^{\circ -1} \circ f^{\circ 1} = f^{\circ 0} \), or \( f^{\circ -1} \circ f = \operatorname{id} \), where \( \operatorname{id}(x) = x \) and \( f^{\circ 0} = \operatorname{id} \).

And one immediately sees that \( f^{\circ -1} \) is the inverse function of \( f \) (only if \( f \) is bijective, of course).
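A minimal Python illustration of this (example of my own): for the bijective \( f(x) = 2x \) the \( (-1) \)-st iterate is just \( x/2 \):

```python
times2 = lambda x: 2 * x
times2_inv = lambda x: x / 2   # f^{°-1}, the inverse function

# f^{°-1} ∘ f = id and f ∘ f^{°-1} = id
assert times2_inv(times2(7)) == 7
assert times2(times2_inv(7)) == 7
```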

And this is already deeply ingrained in mathematics, as one writes \( f^{-1} \) for the inverse (however this is mistakable for \( \frac{1}{f} \), so we use the more unambiguous notation \( f^{\circ -1} \)).

Now the next question is: what is \( f^{\circ 1/n} \)? And there we can see, by keeping our previous law \( f^{\circ m} \circ f^{\circ n} = f^{\circ (m+n)} \), that the \( n \) times iteration of \( f^{\circ 1/n} \) must again be the function \( f \).

For example, \( f^{\circ 1/2} \) is a function such that \( f^{\circ 1/2} \circ f^{\circ 1/2} = f \). Though it turns out that \( f^{\circ 1/2} \) is generally not uniquely determined by this demand.
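The non-uniqueness is easy to see already for \( f(x) = 2x \): both \( \sqrt{2}\,x \) and \( -\sqrt{2}\,x \) satisfy the demand. A Python sketch (my own example):

```python
import math

g = lambda x: 2 * x               # the function to be half-iterated
h1 = lambda x: math.sqrt(2) * x   # one half-iterate of g
h2 = lambda x: -math.sqrt(2) * x  # a second, different half-iterate

for x in (1.0, 3.5, -2.0):
    assert abs(h1(h1(x)) - g(x)) < 1e-12   # h1 ∘ h1 = g
    assert abs(h2(h2(x)) - g(x)) < 1e-12   # h2 ∘ h2 = g as well
```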

But under certain conditions at a fixed point \( a \) of \( f \) there is a unique solution, called the regular (fractional) iteration. Such a condition is for example that \( \left(f^{\circ 1/2}\right)'(a) = f'(a)^{1/2} \), or more generally for real iterations \( \left(f^{\circ t}\right)'(a) = f'(a)^{t} \). Or in words: the derivative of \( f^{\circ t} \) at the fixed point \( a \) is the \( t \)-th power of the derivative of \( f \) at \( a \).

This surely makes sense: if one looks at the iteration of \( f \) at a fixed point \( a \) (where \( f(a) = a \)), one gets by the chain rule \( \left(f^{\circ n}\right)'(a) = f'(a)^{n} \), where \( f'(a) \) is the derivative at the fixed point \( a \). And one wants to keep this law for non-natural \( n \).
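The chain-rule law can be verified numerically; here a Python sketch with the (arbitrarily chosen) example \( f(x) = x/2 + x^2 \), fixed point \( a = 0 \), \( f'(0) = 1/2 \):

```python
q = lambda x: x / 2 + x ** 2      # example function with fixed point 0

def compose_n(f, n):
    # f composed with itself n times
    def f_n(x):
        for _ in range(n):
            x = f(x)
        return x
    return f_n

def num_deriv(f, a, h=1e-6):
    # central-difference approximation of f'(a)
    return (f(a + h) - f(a - h)) / (2 * h)

# (q^{°n})'(0) should equal q'(0)^n = (1/2)^n
for n in (1, 2, 3):
    assert abs(num_deriv(compose_n(q, n), 0.0) - 0.5 ** n) < 1e-4
```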

Via this method you can determine what is meant by a fractional iteration of \( \exp_b \) (where \( \exp_b(x) = b^x \)), i.e. \( \exp_b^{\circ p/q} \), where \( \exp_b^{\circ p/q} \) is determined by the previous explanation and the fixed point condition. And by continuity we can extend this also to real iterations.

So what is now meant by complex iterations? For this one uses another method, the so-called Abel function. An Abel function \( \alpha \) for the function \( f \) is defined by \( \alpha(f(x)) = \alpha(x) + 1 \). The Abel function counts the iterations of \( f \): \( \alpha(f^{\circ n}(x)) = \alpha(x) + n \).

So if \( \alpha \) is bijective we have \( f^{\circ n}(x) = \alpha^{\circ -1}(\alpha(x) + n) \).
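With this formula, iteration becomes a one-liner once an Abel function is known. A Python sketch for \( f(x) = 2x \), whose Abel function on \( x > 0 \) is \( \log_2 \) (example of my own):

```python
import math

alpha = lambda x: math.log2(x)  # Abel function of f(x) = 2x: alpha(2x) = alpha(x) + 1
alpha_inv = lambda y: 2.0 ** y  # its inverse

def it(t, x):
    # f^{°t}(x) = alpha^{°-1}(alpha(x) + t); here this equals 2^t * x
    return alpha_inv(alpha(x) + t)

assert abs(it(3, 5.0) - 40.0) < 1e-9            # f^{°3}(5) = 8*5
assert abs(it(0.5, 1.0) - math.sqrt(2)) < 1e-9  # half-iterate at 1
assert abs(it(0.5, it(0.5, 3.0)) - 6.0) < 1e-9  # half ∘ half = f
```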

This Abel function is closely related to the logarithm or hyperlogarithms.

If we take as function \( f(x) = bx \), then the logarithm is an Abel function for \( f \): \( \log_b(bx) = \log_b(x) + 1 \).

Or if \( f(x) = b^x \), then a superlogarithm (as used by Andrew) is defined by \( \operatorname{slog}_b(b^x) = \operatorname{slog}_b(x) + 1 \), \( \operatorname{slog}_b(1) = 0 \).

The equation \( f^{\circ n}(x) = \alpha^{\circ -1}(\alpha(x) + n) \) is of course easily applicable to complex \( n \); nothing needs to be changed, just keep the law for complex \( n \) too. E.g. \( f^{\circ i}(x) = \alpha^{\circ -1}(\alpha(x) + i) \).
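The same toy example \( f(x) = 2x \), \( \alpha = \log_2 \), works unchanged with a complex iteration count (a Python sketch of my own; cmath handles the complex logarithm):

```python
import cmath

alpha_c = lambda x: cmath.log(x) / cmath.log(2)  # complex log2, an Abel function of 2x
alpha_c_inv = lambda y: 2 ** y

def it_c(t, x):
    # f^{°t}(x) = alpha^{°-1}(alpha(x) + t), now for complex t
    return alpha_c_inv(alpha_c(x) + t)

z = it_c(1j, 3.0)                       # f^{°i}(3) = 2^i * 3, a complex number
assert abs(z - 3 * 2 ** 1j) < 1e-12
assert abs(it_c(-1j, z) - 3.0) < 1e-12  # iterating i then -i times gives the identity
```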

And in our case \( f(x) = a^x \), where an Abel function is \( \operatorname{slog}_a \), we have \( a[4]z = \operatorname{slog}_a^{\circ -1}(\operatorname{slog}_a(1) + z) \). The occurring inverse of the Abel function is of course our later \( \operatorname{sexp}_a \). Like the inverse of \( \log_b \) is \( \exp_b \).

In the case of regular iteration, i.e. if \( f^{\circ t} \) is the regular iteration of the function \( f \) at some fixed point, the Abel function would be called a regular Abel function. Until now it is not clear whether the slog defined by Andrew is a regular Abel function at the lower fixed point of \( b^x \) (if it exists).

Quote:a[4]i = ?

\( a[4]i = \operatorname{slog}_a^{\circ -1}(\operatorname{slog}_a(1) + i) \), where slog and its inverse can be expanded into power series.