11/13/2007, 02:41 PM
jaydfox Wrote: Ah, you'll miss the pictures for then.
Don't worry, I will have a look at it in a few days (when I manage to get access to the internet) and will honor it correspondingly.

Bummer!
11/13/2007, 02:41 PM
jaydfox Wrote: Ah, you'll miss the pictures for then.

Don't worry, I will have a look at it in a few days (when I manage to get access to the internet) and will honor it correspondingly.

bo198214 Wrote: Can you make a comparison of

I've come back to this. I'm starting with an unaccelerated solution to Andrew's slog, to ensure that my accelerated version doesn't skew the results. Assuming initial results with the unaccelerated version are promising, I'll then work on an accelerated solution.

I did a preliminary test with the rslog calculated with n=80 (i.e., 80 exponentiations), using the first 150 terms of the power series, and a solution to the 500x500 system, and the results were very close, accurate to half a dozen decimal places or so (I didn't save the results).

I've now calculated an rslog solution with n=100, using the first 200 terms of the power series. I've also calculated the solution to a 1000x1000 matrix for Andrew's slog. Comparing the first 100 terms of each, the differences were less than about 10^-11 in absolute terms. In relative terms (since the terms decrease in magnitude exponentially), by about the 25th term the difference is about 10^-5.

So from this initial testing, it appears quite likely that Andrew's solution and the rslog will converge on the same solution. But as I had previously mentioned, very high precision is necessary to make a strong conclusion, and at any rate this doesn't constitute a proof.
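For anyone who wants to repeat this comparison, here is a minimal sketch of the coefficient check described above. It is not the code actually used for these numbers, and the input lists `rslog_coeffs` and `islog_coeffs` are hypothetical placeholders for the series coefficients obtained from the regular-iteration slog and from the solution of Andrew's linear system.

```python
# Minimal sketch: compare the first n_terms power-series coefficients of two
# slog approximations, reporting absolute and relative differences per term.
from mpmath import mp, mpf, fabs

mp.dps = 50  # work at high precision, since the coefficients shrink quickly


def compare_coefficients(rslog_coeffs, islog_coeffs, n_terms=100):
    """Print the absolute and relative difference of corresponding coefficients."""
    n = min(n_terms, len(rslog_coeffs), len(islog_coeffs))
    for k in range(n):
        a = mpf(rslog_coeffs[k])   # coefficient from the regular-iteration slog
        b = mpf(islog_coeffs[k])   # coefficient from the matrix (Andrew's) slog
        abs_diff = fabs(a - b)
        rel_diff = abs_diff / fabs(a) if a != 0 else mpf('nan')
        print(f"term {k:3d}: abs diff = {abs_diff}  rel diff = {rel_diff}")


# Example usage with dummy data (replace with real coefficient lists):
# compare_coefficients(['0.91594', '0.24936'], ['0.91594', '0.24937'])
```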
~ Jay Daniel Fox
03/12/2008, 09:20 PM
Just a short note on why this thread is called "Bummer":

From a "proper" analytic iteration

But we know that the regular iteration at a fixed point is the only analytic iteration that does not introduce a singularity at that fixed point (no oscillating first or higher order derivatives when approaching the fixed point).

Conclusion: there is no analytic iteration

As every tetration
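For readers who have not seen the term before, here is a brief sketch of what "regular iteration at a fixed point" means; this is the standard Schröder-function construction, not a reconstruction of the formulas missing from this post. If \(a\) is a fixed point of \(f\) with multiplier \(\lambda = f'(a)\), \(0 < |\lambda| \neq 1\), there is a Schröder function \(\psi\), holomorphic near \(a\) with \(\psi(a) = 0\) and \(\psi'(a) = 1\), satisfying
\[ \psi(f(x)) = \lambda\,\psi(x), \]
and the regular iterates are defined by
\[ f^{\circ t}(x) = \psi^{-1}\bigl(\lambda^{t}\,\psi(x)\bigr), \]
which stay holomorphic in a neighbourhood of \(a\) for every real \(t\). This is the iteration the post refers to as the only one that remains well-behaved at the fixed point.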
04/18/2009, 11:24 AM
(This post was last modified: 04/18/2009, 12:24 PM by Kouznetsov.)
bo198214 Wrote: ...

Shame! 18 years without advances. There should be a paper about it. Let us submit one right now!

bo198214 Wrote: ...

It is because you stay at the real axis. Get out from the real axis, and you have no need to deal with numbers of the order of

bo198214 Wrote: ...What about the range of holomorphy of each of the 3 functions you mention? How about their periodicity? Do they have periods?

Below, for base

[attachment=480]

In the first plot, the lines are shown. Thick curves correspond to integer values of p and q.

In the second plot, the lines are shown. Thick curves correspond to integer values of p and q. The dashed lines show the cuts.

On the third plot, the difference
Dashed:
Thin:
Thick:

My approximation for

I suspect each of the functions

P.S. Henryk, could you please help me to handle the sizes of the figures? I think the same size would be better.
04/18/2009, 12:46 PM
(This post was last modified: 04/18/2009, 01:28 PM by Kouznetsov.)
bo198214 Wrote: ...

Below, there are two plots for the

The left one is made of the entire super-function of the exponential, which is periodic with period

The right-hand one is made of the tetration, which is periodic with period

I suspect each of these generalized exponentials is unique, as long as we do not move the cut lines. I try to upload the plot of the difference between these two functions:
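As a hedged aside on why a super-function built at a fixed point is periodic (the specific base, fixed point and period values in the missing formulas above are not recoverable from this transcript): if \(L\) is a fixed point of the exponential with multiplier \(\lambda\), the regular super-function can be written as \(F(z) = \psi^{-1}(\lambda^{z})\), with \(\psi\) the Schröder function at \(L\) as in the earlier sketch, and then
\[ F\!\left(z + \frac{2\pi i}{\ln \lambda}\right) = \psi^{-1}\bigl(\lambda^{z} e^{2\pi i}\bigr) = \psi^{-1}(\lambda^{z}) = F(z), \]
so it repeats with period \(2\pi i / \ln \lambda\).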
@Kouznetsov
Just as an aside, we have used different terms for what you call superfunction. The standard term for this is orbit (see here), used in practically every textbook on dynamics. However, this doesn't really capture exactly the same idea, although it is very similar. I like the term iterational function (which we talked about here), because it sounds like "exponential". But I also understand "superfunction", so I suppose it is a matter of taste.

Andrew Robbins

andydude Wrote: I like the term iterational function (which we talked about here), because it sounds like "exponential".

But I think the terminology we agree upon is, e.g., superexponential. So superfunction is just a generalization of this terminology for when we don't apply it to the exponential but to something else that perhaps does not have a name. One could also say inverse Abel function, but this is somewhat lengthy. By the way, "orbit" is wrong anyway, because an orbit is a set, not a function.
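To pin down how these terms relate (a brief recap of the standard definitions, not a quotation from either poster): a superfunction \(F\) of \(f\) satisfies
\[ F(z+1) = f(F(z)), \]
its inverse \(A = F^{-1}\) is the Abel function, with \(A(f(x)) = A(x) + 1\), and fractional iterates are obtained as
\[ f^{\circ t}(x) = F\bigl(t + A(x)\bigr). \]
In this sense "superfunction" and "inverse Abel function" name the same object, while the orbit of a point \(x_0\) is only the set \(\{f^{\circ n}(x_0)\}\), i.e. the values \(F\) takes at the integers when \(F(0) = x_0\).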
04/22/2009, 11:33 PM
So let's see if I can use this terminology.

According to Markus Müller, the superfunction of

Is that right? Can I say "from"?
04/23/2009, 08:39 AM
andydude Wrote: According to Markus Müller, the superfunction of

If you mean the fixed point, then I would say "at". However, -1 is not a fixed point of

Yes.

Is it regular? The regular iteration is characterized by

The iteration is given in terms of the superfunction

So the t-th iterate of
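A hedged note on the characterization being appealed to here (the specific function in the missing formulas is not recoverable from this transcript): at a fixed point \(a\) with multiplier \(\lambda = f'(a)\), the regular iteration is the one whose \(t\)-th iterate still has multiplier \(\lambda^{t}\) at \(a\), i.e.
\[ f^{\circ t}(x) = a + \lambda^{t}(x - a) + O\bigl((x-a)^{2}\bigr), \]
and, given a regular superfunction \(F\), the \(t\)-th iterate is recovered as \(f^{\circ t}(x) = F\bigl(t + F^{-1}(x)\bigr)\), as in the terminology recap above.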
Ah, now I see what you meant by "from -1", namely that 0 is mapped to -1 by the superfunction. As indicated above, we just write
Note that regular superfunctions are determined up to translations along the x-axis (a short check of this is given after this post); in this case

Instead of

For example

Regarding the iteration of quadratic polynomials, there is also a very interesting article about the impossibility of doing so in the whole complex plane:

[1] R. E. Rice, B. Schweizer, and A. Sklar. When is f(f(z)) = az^2 + bz + c? Am. Math. Mon., 87:252–263, 1980.

Not only is it impossible to have analytic or continuous half-iterates; no, there are no half-iterates (functions defined on the whole complex plane) at all.
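A one-line check of the translation freedom mentioned at the start of this post: if \(F\) is a superfunction of \(f\), so that \(F(z+1) = f(F(z))\), then for any constant \(c\) the shifted function \(G(z) = F(z+c)\) satisfies \(G(z+1) = F(z+c+1) = f(F(z+c)) = f(G(z))\), and is therefore a superfunction of \(f\) as well.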