Iteration basics
#11
andydude Wrote:Here is the power series that corresponds to regular iteration:
\(
f^t(x)
\ =\ \sum_{k=0}^{\infty} x^k G_k(t)
\ =\ f^t(0)
\ +\ x \left[D_x f^t (x)\right]_{x=0}
\ +\ \frac{x^2}{2} \left[D_x^2 f^t (x)\right]_{x=0}
\ +\ \cdots
\)

And here is the power series that corresponds to natural iteration:
\(
f^t(x)
\ =\ \sum_{k=0}^{\infty} t^k H_k(x)
\ =\ f^0(x)
\ +\ t \left[D_t f^t (x)\right]_{t=0}
\ +\ \frac{t^2}{2} \left[D_t^2 f^t (x)\right]_{t=0}
\ +\ \cdots
\)
Hmm, now I see the difference for the first time... (why not start one "Basics" thread and add (or reference) posts on these basic questions - it may be easier to add information there than to a mega-document TeX-FAQ)

Concerning the second: the coefficients from my eigensystem-based analysis show that the series with respect to the height t (or h) have t in the exponent; they are *not* power series (except for one set of bases), so I wonder whether the above formal derivative is correct.

[update] Hmm, on a second read I may answer this myself: the difference is encoded only in the different type of derivative [D...] - the Taylor formula holds for any type of series.
But there is still one aspect I will think about. Converting a zeta-series (where the parameter is in the exponent, similar to the expansion of It. dec. exp.) into a representation as a power series involves the mysterious Stieltjes constants, which are related to the Euler-Mascheroni constant gamma. [/update]
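For a toy case the natural-iteration expansion can be checked numerically. Take f(x) = 2x (my choice here, not from the thread), where \( f^t(x) = 2^t x = x e^{t \ln 2} \); the coefficients are then \( H_k(x) = x (\ln 2)^k / k! \), and rewriting \( 2^t \) as \( e^{t \ln 2} \) is exactly the step that turns the t-in-the-exponent form into a genuine power series in t:

```python
import math

t, x = 0.5, 3.0
exact = (2 ** t) * x  # f^t(x) for f(x) = 2x

# natural iteration: f^t(x) = sum_k t^k H_k(x) with H_k(x) = x (ln 2)^k / k!
approx = sum((t ** k) * x * math.log(2) ** k / math.factorial(k)
             for k in range(20))

diff = abs(approx - exact)  # should be tiny: the series converges to 2^t x
```

This is only a sanity check of the formal Taylor formula for the simplest possible f, not a statement about the general (non-power-series) case discussed above.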

Gottfried
Gottfried Helms, Kassel
#12
andydude Wrote:Well, these are really good questions.
Andrew Robbins

Andrew,

These are really GOOD answers - they make a lot of difference for me, helping me stop beating around the bush (and find a new one).

And easy to visit every time.

Thanks.

By the way, have you calculated - and would you be willing to share - the asymptotic values at infinity for odd operations? I am still eager to check the fine-structure constant approximation which I made from the pentation value and from your graph - just to close another wrong direction, if nothing more. It bothers me that graphical values improved the convergence; they should not have - the odds cannot be that big.

Ivars
#13
Ivars Wrote:By the way, have you calculated - and would you be willing to share - the asymptotic values at infinity for odd operations? I am still eager to check the fine-structure constant approximation which I made from the pentation value and from your graph - just to close another wrong direction, if nothing more. It bothers me that graphical values improved the convergence; they should not have - the odds cannot be that big.

I am still building an accelerated super-log library and a general hyper-op library so that I can do this reliably. You and GFR have both expressed an interest in this, so I will do my best to get it done soon. I will publish pseudo-code for the library in the computation sub-forum, so that the methods are available for scrutiny and suggestions. I feel this approach is better than trying to port the code to all these different platforms (Maple, Matlab, Pari, Sage, etc.), as the essential algorithms are more important than anything else. Once we have these algorithms sorted out, we can apply them (hopefully with precision tracking along the way), so that we know, for example, that the values we get are only accurate to 6 digits. I think Jay Fox's error bound will help with this.
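A minimal sketch of what such precision tracking might look like, assuming crude first-order error propagation through iterated exponentiation (this is my illustration, not the planned library):

```python
import math

# If x is known to within eps, then exp(x) is known to within roughly
# |exp'(x)| * eps = exp(x) * eps (first-order error propagation).
x, eps = 0.5, 1e-16   # start value, with ~16 trustworthy decimal digits
for _ in range(3):
    x = math.exp(x)
    eps *= x          # exp(x_old) equals the new x

digits = -math.log10(eps)  # decimal digits still trustworthy after 3 steps
```

A rigorous bound (such as the error bound mentioned above) would replace the crude first-order factor used here, but the bookkeeping pattern is the same: propagate an uncertainty alongside every value.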

Andrew Robbins
#14
Hi Andrew

Of course, if there is no accuracy, all those conjectures are misguided. I just discovered in another thread, when iterating f(x)=ln(abs(x)), that you get interesting probability density distributions - but whether they are produced by the iteration process or by limited-accuracy effects remains an open question until the computations are done with appropriate accuracy, which is however difficult on my computer.
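The iteration described here can be reproduced in a few lines; a sketch in ordinary double precision (so any structure seen may still be an accuracy artifact, which is exactly the open question):

```python
import math

x = 0.5
orbit = []
for _ in range(1000):
    if x == 0.0:          # ln|0| is undefined; guard this (measure-zero) case
        break
    x = math.log(abs(x))
    orbit.append(x)

# crude density information: fraction of iterates landing in [-1, 1]
frac_in_unit = sum(1 for v in orbit if -1.0 <= v <= 1.0) / len(orbit)
```

Binning `orbit` into a histogram gives the empirical distribution; distinguishing it from round-off effects would require repeating the run at higher working precision.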

I am sorry that I cannot help with these improvements, but I am eagerly looking forward to seeing the results.

Ivars
#15
I wanted to ask another question. I got curious beyond the basics, perhaps.

What if we construct a two-variable function F(t,x), where t is the iteration count of f(x) and x is the argument of f(x), and then parametrize, for example:

t=g(y); x= h(y) , so

F(g(y), h(y) ) ;

Now we can assume various functions g, h with different growth speeds. For example, we may consider:

g(y) = y!, h(y) = e^y or,

g(y) =e^y, h(y) = y! or any other.

It could also be possible to make t the imaginary part of a complex number, while x would be the real part, so that:

\( z = x + it = h(y) + i\,g(y) \)

So then F(t,x) becomes F(z), provided the functions h(y) and g(y) satisfy the Cauchy-Riemann conditions for derivatives of complex functions.
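For integer iteration counts the two-variable construction can at least be evaluated directly; a sketch with f(x) = sin(x) as a bounded toy function and the parametrization g(y) = y!, h(y) = e^y (all concrete choices here are mine, for illustration only):

```python
import math

def f(x):
    return math.sin(x)          # bounded toy inner function

def F(t, x):
    """f iterated t times; integer t only in this sketch."""
    for _ in range(t):
        x = f(x)
    return x

y = 3
val = F(math.factorial(y), math.exp(y))   # t = y! = 6, x = e^y
```

Extending F to real or complex t is exactly the continuous-iteration problem discussed elsewhere in this thread, so this sketch only covers the easy integer case.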

What I wanted to ask is: has this been considered, where could I read about it, and does it make any sense at all to look at such functions?

Ivars
#16
Ivars Wrote:What if we construct a two-variable function F(t,x), where t is the iteration count of f(x) and x is the argument of f(x), and then parametrize, for example:

It could also be possible to make t the imaginary part of a complex number, while x would be the real part, so that:

\( z = x + it = h(y) + i\,g(y) \)

So then F(t,x) becomes F(z), provided the functions h(y) and g(y) satisfy the Cauchy-Riemann conditions for derivatives of complex functions.

What I wanted to ask is: has this been considered, where could I read about it, and does it make any sense at all to look at such functions?

I have never heard of it, and have no idea whether it leads to interesting considerations.
#17
Ivars Wrote:What I wanted to ask, is , has this been considered, where could I read about it and does it make any sense at all to look at such functions?

That is a good question. There is plenty of talk about "complex iteration", but not as much about the function you speak of; I think this function has not been considered as you put it. I seem to remember \( f^x(x) \) being mentioned somewhere, but never \( f^{Im(x)}(Re(x)) \); maybe you might find something with this function...

Andrew Robbins
#18
andydude Wrote:That is a good question. There is plenty of talk about "complex iteration", but not as much about the function you speak of; I think this function has not been considered as you put it. I seem to remember \( f^x(x) \) being mentioned somewhere, but never \( f^{Im(x)}(Re(x)) \); maybe you might find something with this function...

Andrew Robbins

And of course, the opposite is possible to look at as well (though initially I felt that the proper place for the imaginary part would be in the iteration argument, not the function argument):

\( f^{Re(x)}(Im(x)) \),

Yes, I can ask such a question, but to actually study this... - perhaps only if there were some simple, obvious functions, or even values, to start with.
I cannot point to any such right now. The idea itself just appeared when I was trying to see whether we can iterate a function faster or slower than the function itself acts on its argument, whether that means anything, and whether there is a relationship between these various speeds which somehow must be registered outside the iterated function and its arguments.

Then I thought that if we somewhat limit the choices of relationships between the function argument and the iteration argument - which the complex plane obviously does (and the function values that can serve as Re(x)/Im(x) are likewise limited by the Cauchy-Riemann equations) - it might be easier to look for some relationships. But I do not have even a single example.

That would then require another thread.

Ivars
#19
Another question:

While looking for possible applications of time in mathematics, I stumbled upon the mathematics of time scales. See:

Time Scale calculus publications

Time scales and Lyapunov stability

The notion of the complex unit circle and the half-plane being subcases of the Hilger circle, with different time-scale graininess, sounds very much related to a (possibly) deeper structure of the real numbers.

I would like to ask: is this approach somehow related to iterations of functions and higher operations, or can it be related? As I understand it (and I understand very little so far), it gives an additional degree of freedom, allowing discrete and continuous functions to be viewed together. If so, it might also be used to add an extra degree of freedom to the number axis itself.

Thank You in advance,

Ivars

P.S. A very interesting PPT review:

Time scales PPT review
#20
Hm, time scales seem a very interesting topic.
They are about the unification of difference and differential equations.
This is (imo) a very good introduction/tutorial.

Ivars Wrote:I would like to ask a question is this approach somehow related to iterations of functions, higher operations, or can be related?

I am not sure about this. I mean, we have a kind of difference equation for the super-exponentiation:
\( \text{sexp}(x+1)=e^{\text{sexp}(x)} \) or
\( \text{sexp}(x+1)-\text{sexp}(x)=e^{\text{sexp}(x)}-\text{sexp}(x) \).
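On the integer time scale this recurrence is easy to check directly; with the common normalization sexp(0) = 1:

```python
import math

vals = [1.0]                          # sexp(0) = 1
for _ in range(3):
    vals.append(math.exp(vals[-1]))   # sexp(n+1) = e^{sexp(n)}
# vals[1] = e, vals[2] = e^e, vals[3] = e^{e^e}
```

The open question is of course the continuum in between the integers, which is where a time-scale formulation (interpolating between the difference and differential forms) might in principle say something.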

However, perhaps nobody on this forum is familiar enough with time scales to say whether one could directly convert this into a differential equation (or something similar) and get (unique) solutions from there.

