Iteration basics
#1
What is iteration of a function, in plain simple words? This word gives me a headache. Or perhaps give me a link, or a place in this thread. This name has been a barrier for me even if I have read it 1000 times, also in articles; I do not get it, something basic is missing... What is iteration of a function, please.

Thank you in advance :)

Ivars
#2
A common difficulty. Sometimes the brain blocks...

Assume
Code:
f(x) = x^2+2
Code:
a) f(x)^2 = (x^2+2)^2           // taking f to the 2nd power
          =  x^4 + 4x^2 +4

b) f°2(x) = f(f(x))             // iteration of f
          = (f(x))^2  + 2
          =  (x^4 + 4x^2 +4) + 2
          = x^4+4x^2+6

c)  e^f(x) = ...                // exponentiation of f
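The distinction between a) and b) can be checked numerically; a minimal Python sketch (my own check, not part of the original post):

```python
# Distinguish f(x)^2 (taking f to the 2nd power) from f°2(x) (iterating f),
# for the post's example f(x) = x^2 + 2.
def f(x):
    return x**2 + 2

for x in [0.0, 1.0, 2.5, -3.0]:
    assert abs(f(x)**2 - (x**4 + 4*x**2 + 4)) < 1e-9   # a) power
    assert abs(f(f(x)) - (x**4 + 4*x**2 + 6)) < 1e-9   # b) iteration
```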

Assume
Code:
       f(x) = e^x
            = 1 + x/1! + x^2/2! + ....
Code:
a)       f(x)^2 = (1 + x/1! + x^2/2! +...)^2   // taking f to the second power
                =  1 + 2*x/1! + 4*x^2/2! + 8*x^3/3! + 16*x^4/4! +...
                = f(2x)

b)       f°2(x) = f(f(x))                      // iteration of f
                = f(e^x)
                = 1 + e^x/1! + e^2x/2! + ...

c)       e^f(x) = ...                          // exponentiation of f
                     // is in this case equal to second iteration
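The same check for f(x) = e^x, a sketch using Python's math module (again my own verification, not from the post):

```python
import math

def f(x):
    return math.exp(x)

for x in [0.0, 0.5, 1.3]:
    assert math.isclose(f(x)**2, f(2*x))           # a) (e^x)^2 = e^(2x) = f(2x)
    assert math.isclose(math.exp(f(x)), f(f(x)))   # b) = c): e^f(x) = f°2(x) here
```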

(You had this right previous times :) )

Gottfried
Gottfried Helms, Kassel
#3
So easy!

So iteration of a function is the function repeatedly applied. An i-times applied function is as strange as i times 2*2*2*..., but different. An i-times applied multiplication is the multiplication operation applied i times = 2^i. 2+2+2+... i times is either 2 times i or i times 2. Hmm.

There must be a wealth of possible outcomes.

i-times applied tetration gives what? An imaginary height of a power tower is also strange.

a[4]i = ?

(i^(1/i))[4]infinity = i so

a[4]i= a[4]((i^(1/i))[4] infinity)

I have no idea what the rules are in those brackets. We can nest i's like this forever.

Thanks, Gottfried.

Ivars
#4
Ivars Wrote:So iteration of a function is function repeatedly applied. I-times applied function is as strange as I times 2*2*2*...., but different.

Exactly. The thing is of course that we have an immediate definition of applying something n times, n a natural number; however, everything else needs some rules to extrapolate the meaning of fractional iteration, real iteration or complex iteration.

For example,
2*I is 2 I-times repeated in addition, whatever that should mean, but we have the commutativity of multiplication, which we surely want to keep also for complex numbers, and hence 2*I = I*2 = I+I. That's an easy way to derive it.

2^I, what should that mean? For that we have the exponential function e^x, which can be given as a power series e^x = 1 + x/1! + x^2/2! + ... and that is why it can also be applied to complex arguments. And there it turns out that e^(I*y) = cos(y) + I*sin(y), and so we can derive
2^I = e^(I*ln(2)) = cos(ln(2)) + I*sin(ln(2)).
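This derivation can be verified directly; a sketch using Python's complex arithmetic (my own check):

```python
import math

# 2^I = e^(I*ln 2) = cos(ln 2) + I*sin(ln 2)
z = 2**1j
w = complex(math.cos(math.log(2)), math.sin(math.log(2)))
assert abs(z - w) < 1e-12
```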

Finally, what about 2[4]I? Perhaps we first ask for the already notationally occupied f°n. And perhaps before that we start with the basics:

On functions one can define the composition operation °. The function f°g is the function gained by first applying the function g and then applying the function f, i.e. (f°g)(x) = f(g(x)).
Correspondingly one defines f°n to be f composed n times, e.g. f°2 = f°f, f°3 = f°f°f.

Now we have what n-times iteration means, n being a natural number. The next question would be how to extend this to integer numbers, i.e. including negative numbers. For seeing this we first notice that
f°(m+n) = f°m ° f°n
and surely we want to keep this law also for other number domains, hence:

f°n ° f°0 = f°(n+0) = f°n, so f°0 = id
or
f°(-n) ° f°n = f°(-n+n) = f°0 = id, where id(x) = x.
And one immediately sees that f°(-1) is the inverse function of f (only if f was bijective, of course).
And this is already deeply ingrained in mathematics if one writes f^(-1) (however this is mistakable for 1/f, so we use the more unambiguous notation f°(-1)).
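For a concrete case, take the bijective f(x) = 2x + 1; its inverse plays the role of f°(-1) (a minimal sketch, function names are mine):

```python
# f°(-1) composed with f gives f°0 = id.
def f(x):
    return 2*x + 1

def f_inv(x):        # f°(-1)
    return (x - 1) / 2

for x in [0.0, 3.5, -7.0]:
    assert f_inv(f(x)) == x   # f°(-1) ° f = f°0 = id
    assert f(f_inv(x)) == x   # f ° f°(-1) = f°0 = id
```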

Now the next question is what is f°(1/n), and there we can see by keeping our previous law that the n-times iteration of f°(1/n) must be again the function f: (f°(1/n))°n = f°1 = f.
f°(1/2) for example is a function such that f°(1/2)(f°(1/2)(x)) = f(x). Though it turns out that f°(1/2) is generally not uniquely determined by this demand.
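A concrete half-iterate (my own example, not from the post): for f(x) = 2x, g(x) = sqrt(2)*x satisfies g(g(x)) = f(x), and g2(x) = -sqrt(2)*x does too, illustrating the non-uniqueness:

```python
import math

def f(x):
    return 2*x

def g(x):            # one half-iterate f°(1/2)
    return math.sqrt(2) * x

def g2(x):           # another solution of g(g(x)) = f(x)
    return -math.sqrt(2) * x

for x in [1.0, 4.0, -2.5]:
    assert math.isclose(g(g(x)), f(x))
    assert math.isclose(g2(g2(x)), f(x))
```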

But under certain conditions at a fixed point a of f there is a unique solution, called regular (fractional) iteration. Such a condition is for example that (f°(1/2))'(a) = f'(a)^(1/2), or more generally for real iterations t: (f°t)'(a) = f'(a)^t. Or in words: that the derivative of f°t at the fixed point a is the t-th power of the derivative of f at a.

This surely makes sense, as if one looks at the iteration of the function f°n, one gets by the chain rule (f°n)'(a) = f'(a)^n, where f'(a) is the derivative at the fixed point a (using f(a) = a). And one wants to keep this law for non-natural n.

Via this method you can determine what is meant to be a fractional iteration of f, i.e. f°(p/q), where f°(p/q) is determined by the previous explanation and the fixed point condition. And by continuity we can extend this also to real iterations.

So what is now meant by complex iterations? For this one uses another method, the so-called Abel function. An Abel function A for the function f is defined by A(f(x)) = A(x) + 1. The Abel function counts the iterations of f:

A(f°2(x)) = A(f(f(x))) = A(f(x)) + 1 = A(x) + 2
A(f°n(x)) = A(x) + n.

So if A is bijective we have
f°n(x) = A°(-1)(A(x) + n).
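For f(x) = b*x this machinery can be tried out concretely: with A = log_2 as Abel function, f°t(x) = 2^(log_2(x) + t) = x*2^t for any t. A sketch (my own example):

```python
import math

# Abel-function iteration for f(x) = 2*x, with A(x) = log2(x):
# A(f(x)) = log2(2*x) = A(x) + 1, and f°t(x) = A^(-1)(A(x) + t) = x * 2**t.
def A(x):
    return math.log2(x)

def A_inv(y):
    return 2.0**y

def iterate(t, x):
    return A_inv(A(x) + t)

assert math.isclose(iterate(1, 3.0), 6.0)        # f°1 = f
assert math.isclose(iterate(2, 3.0), 12.0)       # f°2
half = iterate(0.5, 3.0)                         # f°(1/2)(3)
assert math.isclose(iterate(0.5, half), 6.0)     # applying it twice gives f(3)
```

Replacing math by cmath makes the same formula work for complex t as well.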

This Abel function is closely related to the logarithm or hyperlogarithms.
If we take as function f(x) = b*x, then the logarithm log_b is an Abel function for f: log_b(b*x) = log_b(x) + 1.

Or if f(x) = b^x then a superlogarithm slog_b (as used by Andrew) is defined by slog_b(b^x) = slog_b(x) + 1, slog_b(1) = 0.

The equation f°n(x) = A°(-1)(A(x) + n) is of course easily applicable to complex n; nothing needs to be changed, just keep the law for complex n too. E.g.
f°I(x) = A°(-1)(A(x) + I).

And in our case f = exp_b (i.e. f(x) = b^x), where an Abel function is slog_b, we have

exp_b°t(x) = slog_b°(-1)(slog_b(x) + t). The occurring inverse of the Abel function, slog_b°(-1), is of course our sought superexponential sexp_b. Like the inverse of log_b is exp_b.

In the case of regular iteration, i.e. if f°t is the regular iteration of the function f at some fixed point, the Abel function would be called a regular Abel function. Until now it is not clear whether the slog defined by Andrew is a regular Abel function at the lower fixed point of exp_b (if existing).

Quote:a[4]i = ?

a[4]I = exp_a°I(1) = slog_a°(-1)(slog_a(1) + I), where slog and its inverse can be expanded into power series.
#5
Thanks. This is great. Will return to this page a few times while rereading many posts. And maybe I'll have a question, even :)

Actually, very interesting thing, Abel function.

Ivars
#6
Ivars Wrote:Thanks. This is great. Will return to this page a few times while rereading many posts. And maybe I'll have a question, even :)

Ivars

In this thread it may be appropriate to recall the introduction into iteration in continuous iteration and possibly also in operators. I've made some updates in the first one; but it surely needs some extensions, especially with respect to the problems of different fixpoints et al. and not-(easily)-invertible functions.

Gottfried
Gottfried Helms, Kassel
#7
So I have a question now:

Is it possible to integrate/differentiate iterated functions over the iteration parameter (e.g. t in f°t(x)) if that is a continuous real or complex number in some interval?

Ivars
#8
Ivars Wrote:So I have a question now:

Is it possible to integrate/differentiate iterated functions over the iteration parameter (e.g. t in f°t(x)) if that is a continuous real or complex number in some interval?

Ivars
According to my derivations in Continuous Iteration, we have for base b = e^(1/e) a power series in h (if h is the height parameter), which may then be differentiated/integrated by the usual termwise diff/int. If base b differs from this, we have, due to the eigensystem analysis, a series where h is in the exponent, thus a modified Dirichlet series (I'd say). This can also be diff'ed/int'ed termwise, but then has terms like a*u^h, and the derivative of such a term is a*log(u)*u^h.
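The termwise derivative Gottfried mentions can be sanity-checked against a finite difference; a sketch with arbitrary sample values (my own, not from the post):

```python
import math

# d/dh (a * u^h) = a * log(u) * u^h, checked against a central difference.
a, u, h = 1.7, 0.6, 2.3
exact = a * math.log(u) * u**h
eps = 1e-6
numeric = (a * u**(h + eps) - a * u**(h - eps)) / (2 * eps)
assert abs(exact - numeric) < 1e-6
```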
Gottfried
Gottfried Helms, Kassel
#9
So first we have to prove that the function f°t(x) can be expanded in a power series of h (or t) in the vicinity of the point x (e.g. x = e^(1/e)), and that would define it as differentiable per t, but only for that base (x) or some small region around it?

Ivars
#10
Well, these are really good questions. If you are familiar with Taylor series, then you will know that you can represent any analytic function as a power series:

f(x) = f(x0) + f'(x0)*(x - x0) + f''(x0)/2!*(x - x0)^2 + ...

This is the first iterate of the function. When a function is iterated, its output becomes its next input, so f°2(x) = f(f(x)), and in general we write f°n(x) when n is an integer, or f°t(x) when t is non-integer. Finding the derivative of f°t(x) with respect to t is one of the goals of natural iteration.
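For integer n this definition is just repeated application; a minimal sketch (the helper name is mine):

```python
# f°n(x) for natural n: feed the output back in n times.
def iterate_n(f, n, x):
    for _ in range(n):
        x = f(x)
    return x

f = lambda x: x**2 + 2
assert iterate_n(f, 0, 5.0) == 5.0    # f°0 = id
assert iterate_n(f, 1, 1.0) == 3.0    # f°1 = f
assert iterate_n(f, 2, 1.0) == 11.0   # f°2(1) = f(3) = 11
```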

Continuous iteration can be classified into two major methods, because there are two primary ways that you can turn f°t(x) into a 1-variable power series, because there are two variables: x and t. One could also construct a 2-variable power series, but that's complicated, so I won't do that now.

Here is the power series that corresponds to regular iteration (a series in x about a fixed point x0, with coefficients depending on t):

f°t(x) = sum over n of a_n(t)*(x - x0)^n

And here is the power series that corresponds to natural iteration (a series in t, with coefficients depending on x):

f°t(x) = sum over n of b_n(x)*t^n
What's weird about these two methods is that what they are doing is not really finding derivatives but finding the coefficients in the power series; but because the coefficients in a power series are related to the derivatives, you can find the derivatives with these methods. For example, if you wanted to find the second derivative of f°t(x) with respect to t, then you could apply natural iteration to find the coefficient b_2(x), and then solve for the derivative to obtain (d^2/dt^2) f°t(x) = 2*b_2(x) at t = 0.

So if you are interested in derivatives with respect to x, then you should search this forum for "regular", and if you are interested in derivatives with respect to t, then you should search this forum for "natural", and see what you find in your search. :)

Andrew Robbins

