Bummer!
#31
jaydfox Wrote:Ah, you'll miss the pictures then.

Don't worry, I will have a look at it in a few days (when I manage to get access to the internet) and will honor it accordingly. ;-)
#32
bo198214 Wrote:Can you make a comparison of your rslog (computed by your super-sophisticated algorithm) with Andrew's slog, and post a graph of the difference in the range, say, [-1, 1.9] somewhere?
If the difference settles into a smooth curve from some precision on, then we know they are different; if at every precision the result is rather a random curve, this would favour the equality of both solutions.

I've come back to this. I'm starting with an unaccelerated solution to Andrew's slog, to ensure that my accelerated version doesn't skew the results. Assuming initial results with the unaccelerated version are promising, I'll then work on an accelerated solution.

I did a preliminary test with the rslog calculated with n=80 (i.e., 80 exponentiations), using the first 150 terms of the power series, and a solution to the 500x500 system, and the results were very close, accurate to half a dozen decimal places or so (I didn't save the results).

I've now calculated an rslog solution with n=100, using the first 200 terms of the power series. I've also calculated the solution to a 1000x1000 matrix for Andrew's slog.

Comparing the first 100 terms of each, the differences were less than about 10^-11 in absolute terms. In relative terms (since the terms decrease in magnitude exponentially), by about the 25th term the difference is about 10^-5. So from initial testing, it appears quite likely that Andrew's solution and the rslog will converge on the same solution. But as I had previously mentioned, very high precision is necessary to draw a strong conclusion, and at any rate this doesn't constitute a proof.
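A minimal sketch of such a term-by-term comparison (the function name and the coefficient lists below are illustrative stand-ins, not the actual rslog or matrix-slog data):

```python
def compare_series(a, b):
    """Term-by-term absolute and relative differences of two truncated power series."""
    diffs = []
    for k in range(min(len(a), len(b))):
        abs_diff = abs(a[k] - b[k])
        rel_diff = abs_diff / abs(b[k]) if b[k] != 0 else float("inf")
        diffs.append((k, abs_diff, rel_diff))
    return diffs

# toy coefficients standing in for the rslog and matrix-slog expansions
rslog_coeffs = [0.0, 0.91, -0.23, 0.08, -0.03]
matrix_coeffs = [0.0, 0.91 + 2e-12, -0.23, 0.08 - 5e-13, -0.03]

worst_abs = max(d for _, d, _ in compare_series(rslog_coeffs, matrix_coeffs))
print(worst_abs)    # largest absolute term-by-term difference
```

The relative difference matters here because the coefficients themselves decay exponentially, so a small absolute difference can still be large relative to the term.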
~ Jay Daniel Fox
#33
Just a short note on why this thread is called Bummer:

From a "proper" analytic iteration one would expect that it is analytic everywhere in its domain of definition.
But we know that the regular iteration at a fixed point is the only analytic iteration that does not introduce a singularity at that fixed point (no oscillating first or higher order derivatives when approaching the fixed point).

Conclusion:
There is no analytic iteration f^t of b^x that is analytic at both fixed points for most t, though for integer t it is.

As every tetration (i.e. every T satisfying T(x+1) = b^T(x), T(0) = 1, and analyticity) can be written as T(x) = f^x(1) for some analytic iteration f^t of b^x, this statement can be called a bummer, as there is no "proper" analytic iteration.
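The statement can be probed numerically. Assuming, for illustration, the base b = √2 (so the real fixed points are 2 and 4; the specific base is an assumption here), a sketch that builds the regular half-iterate at each fixed point via the Koenigs linearization and compares the two; at double precision they agree to roughly the working accuracy of the construction, which illustrates how plain numerics can veil the difference:

```python
import math

b = math.sqrt(2)                       # assumed base, real fixed points 2 and 4
f = lambda x: b ** x
f_inv = lambda x: math.log(x, b)

def half_iterate_at_2(x, n=50):
    # Koenigs-style construction at the attracting fixed point 2:
    # push x toward 2, scale the deviation by sqrt(f'(2)), pull back out.
    lam = 2 * math.log(b)              # multiplier f'(2) = ln 2 < 1
    for _ in range(n):
        x = f(x)
    x = 2 + math.sqrt(lam) * (x - 2)
    for _ in range(n):
        x = f_inv(x)
    return x

def half_iterate_at_4(x, n=50):
    # same construction at the repelling fixed point 4, carried out
    # with f_inv, which is attracting there
    lam = 4 * math.log(b)              # multiplier f'(4) = 2 ln 2 > 1
    for _ in range(n):
        x = f_inv(x)
    x = 4 + math.sqrt(lam) * (x - 4)
    for _ in range(n):
        x = f(x)
    return x

h2, h4 = half_iterate_at_2(3.0), half_iterate_at_4(3.0)
print(h2, h4, abs(h2 - h4))
```

Each function really is a half-iterate in the sense that composing it with itself reproduces f to within the numerical error of the scheme.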
#34
bo198214 Wrote:...
the conjecture about the equality of the 3 methods of tetration is shattered.
I received an e-mail from Dan Asimov in which he mentions that the continuous iterations of √2^x at the lower and upper real fixed points, 2 and 4, differ! He, Dean Hickerson and Richard Schroeppel found this around 1991; however, there is no paper about it.
...
Shame! 18 years without advances. There should be a paper about it. Let us submit one right now!

bo198214 Wrote:...
The numerical computations veiled this fact because the differences are so extremely small.
...
It is because you stay on the real axis. Move off the real axis, and you no longer need to deal with numbers of such a small order.

bo198214 Wrote:...
So the first lesson is: don't trust naive numerical verifications. We have to reconsider the equality of our 3 methods, and I guess differences will show up there too.
What about the domain of holomorphy of each of the 3 functions you mention?
How about their periodicity? Do they have periods?

Below, for base b = √2, I upload the plots of two functions:

F, which is a superfunction of exp_b constructed at one of the two real fixed points, and

F₁, which is a superfunction of exp_b constructed at the other.


   
   
[attachments: the three plots described below]
   
In the first plot, the lines
p = Re F = const
q = Im F = const
are shown. Thick curves correspond to integer values of p and q.

In the second plot, the lines
p = Re F₁ = const
q = Im F₁ = const
are shown. Thick curves correspond to integer values of p and q.
The dashed lines show the cuts.

On the third plot, the difference F − F₁ is shown in the same notation. The plot of this difference along the real axis is below:
   
[Plot legend: dashed and thin reference curves, and a thick curve showing my approximation for the difference.]

I suspect that each of the functions F and F₁ is unique.

P.S. Henryk, could you please help me adjust the sizes of the figures?
I think the same size for all of them would be better.
#35
bo198214 Wrote:..
There is no analytic iteration f^t of b^x that is analytic at both fixed points for most t, though for integer t it is.
..
Below are two plots for the base b = √2.
The left one is made with the entire superfunction of the exponential, which is periodic, and its inverse.
The right one is made with the tetration, which is also periodic, and its inverse.
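The periods mentioned here can be recovered from the multipliers, assuming again the base b = √2 with fixed points 2 and 4: near its fixed point p, a regular superfunction behaves like p + C·λ^z with λ the multiplier, so it has the purely imaginary period 2πi/ln λ. A sketch:

```python
import cmath, math

b = math.sqrt(2)
lam2 = 2 * math.log(b)                 # multiplier at the fixed point 2: ln 2
lam4 = 4 * math.log(b)                 # multiplier at the fixed point 4: 2 ln 2

# period of a regular superfunction with multiplier lam: 2*pi*i / ln(lam)
period2 = 2j * cmath.pi / cmath.log(lam2)
period4 = 2j * cmath.pi / cmath.log(lam4)
print(period2, period4)                # both purely imaginary

# sanity check: lam ** period = exp(period * ln lam) = exp(2*pi*i) = 1
assert abs(cmath.exp(period4 * cmath.log(lam4)) - 1) < 1e-9
```

The two periods come out with magnitudes of roughly 17.14 and 19.24 on the imaginary axis, one for each fixed point.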
   
I suspect that each of these generalized exponentials is unique, as long as we do not move the cut lines.
I will try to upload the plot of the difference between these two functions:
   
#36
@Kouznetsov
Just as an aside: we have used different terms for what you call a superfunction.
The standard term for this is orbit (see here), used in practically every textbook on dynamics.
However, that doesn't capture exactly the same idea, although it is very similar.
I like the term iterational function (which we talked about here), because it sounds like "exponential".
But I also understand "superfunction", so I suppose it is a matter of taste.

Andrew Robbins
#37
andydude Wrote:I like the term iterational function (which we talked about here), because it sounds like "exponential".
But I also understand "superfunction", so I suppose it is a matter of taste.

But I think the terminology we agreed upon is, e.g., superexponential.
So superfunction is just a generalization of this terminology for when we apply it not to the exponential but to something else that perhaps has no name of its own.
One could also say inverse Abel function, but this is somewhat lengthy.

Btw. "orbit" is wrong anyway, because an orbit is a set, not a function.
#38
So let's see if I can use this terminology.

According to Markus Müller, the superfunction of 2x² − 1 from (−1) is cos(π·2^x).

Is that right? Can I say "from"?
#39
andydude Wrote:According to Markus Müller, the superfunction of 2x² − 1 from (−1) is cos(π·2^x).

Is that right? Can I say "from"?

If you mean the fixed point, then I would say "at". However, −1 is not a fixed point of 2x² − 1, but 1 is, so I don't know exactly what you mean.

Yes, cos(π·2^x) is a superfunction of 2x² − 1, since cos(π·2^(x+1)) = cos(2·π·2^x) = 2·cos(π·2^x)² − 1.




Is it regular?
The regular iteration is characterized by the iterates f^t being differentiable (or at least asymptotically differentiable) at the fixed point x0. (This also implies that (f^t)'(x0) = f'(x0)^t.)
The iteration is given in terms of the superfunction F by:

f^t(x) = F(t + F^(-1)(x))

f(x) = 2x² − 1 has two fixed points, 1 and −1/2:
F is not invertible at 1, but it is invertible at −1/2.
So the t-th iterate of f is differentiable at −1/2, and so F = cos(π·2^x) is the regular superfunction at −1/2.
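Taking f(x) = 2x² − 1 with superfunction F(x) = cos(π·2^x) (an assumption consistent with F(0) = −1 and the fixed points 1 and −1/2), both the superfunction equation and the iterate formula f^t(x) = F(t + F^(-1)(x)) can be checked numerically; a sketch, restricted to a range of x where the principal branch of the inverse is safe:

```python
import math

f = lambda x: 2 * x * x - 1                 # uses cos(2t) = 2*cos(t)**2 - 1
F = lambda x: math.cos(math.pi * 2 ** x)    # F(0) = cos(pi) = -1
F_inv = lambda y: math.log2(math.acos(y) / math.pi)   # principal branch

# superfunction equation: F(x + 1) = f(F(x))
for x in (-1.0, 0.0, 0.5, 1.7):
    assert abs(F(x + 1) - f(F(x))) < 1e-12

# half-iterate via the superfunction; composed with itself it reproduces f
h = lambda x: F(0.5 + F_inv(x))
for x in (0.1, 0.3, 0.7):                   # branch-safe sample points
    assert abs(h(h(x)) - f(x)) < 1e-10
```

Away from that branch-safe range the principal `acos` no longer inverts F, which is a numerical echo of the invertibility issue discussed above.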
#40
Ah, now I see what you meant by "from -1", namely that 0 is mapped to -1 by the superfunction. As indicated above, we just write 0 → −1, where unfortunately the → should be a \mapsto, having a vertical bar at the left, which is however not shown in this TeX derivative.

Note that regular superfunctions are determined only up to translations along the x-axis, in this case F(x + s) = cos(π·2^s·2^x).
Instead of π we can put an arbitrary other constant, via which we can choose initial conditions different from F(0) = −1.
For example, F(0) = 1 would be reached by the constant 2π, and F(0) = 0 by π/2, etc.
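Continuing with the assumed example f(x) = 2x² − 1, the whole family cos(c·2^x) consists of superfunctions of f, and shifting the argument by s is the same as replacing the constant c by c·2^s; a sketch:

```python
import math

f = lambda x: 2 * x * x - 1

def F(x, c=math.pi):
    # one-parameter family of superfunctions of f; c = pi gives F(0) = -1
    return math.cos(c * 2 ** x)

# every member satisfies the superfunction equation F(x+1, c) = f(F(x, c))
for c in (math.pi, 1.0, 2.5):
    for x in (-0.5, 0.0, 1.3):
        assert abs(F(x + 1, c) - f(F(x, c))) < 1e-12

# a shift of the argument by s equals a change of the constant to c * 2**s
s = 0.7
assert abs(F(0.3 + s) - F(0.3, math.pi * 2 ** s)) < 1e-12
print(F(0.0), F(0.0, 2 * math.pi))    # initial values -1 and 1, respectively
```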



Regarding the iteration of quadratic polynomials, there is also a very interesting article about the impossibility of doing so in the whole complex plane:
[1] R. E. Rice, B. Schweizer, and A. Sklar. When is f(f(z)) = az² + bz + c? Amer. Math. Monthly, 87:252–263, 1980.

Not only is it impossible to have analytic or continuous half-iterates; no, there are no half-iterates (functions on ℂ) at all!



