# Tetration Forum

Hi -

I'm considering the properties of the regular iteration after fixpoint-shift.
To have easier conditions I look at the function f(x) = x^2 - 0.5 and its iterates.

It has the fixpoints $x_a = \frac{1+\sqrt{3}}{2}$ and $x_b = \frac{1-\sqrt{3}}{2}$.
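As a quick sanity check (a small Python sketch, added for illustration; the thread itself works in Pari/GP), both values satisfy f(x) = x, and since f'(x) = 2x only $x_b$ is attracting:

```python
from math import sqrt

def f(x):
    return x * x - 0.5

x_a = (1 + sqrt(3)) / 2   # ~  1.366, repelling:  |f'(x_a)| = |2*x_a| > 1
x_b = (1 - sqrt(3)) / 2   # ~ -0.366, attracting: |f'(x_b)| = |2*x_b| < 1

print(abs(f(x_a) - x_a))  # ~ 0 up to rounding
print(abs(f(x_b) - x_b))  # ~ 0 up to rounding
```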

I compute the half-iterate f°0.5(x) by the translation f(x) = g(x - x_a) + x_a (which shifts the fixpoint to zero) and determine the power series for g°0.5(x).
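A minimal sketch of this step (in Python for illustration; this is not Gottfried's actual matrix-based code): under the shift, g(x) = f(x + x_a) - x_a = λx + x² with λ = 2·x_a, and the regular half-iterate h = g°0.5 with h'(0) = √λ can be solved coefficient by coefficient from h(h(x)) = g(x):

```python
from math import sqrt

x_a = (1 + sqrt(3)) / 2
lam = 2 * x_a          # g'(0) = f'(x_a) = 1 + sqrt(3)
N = 12                 # truncation order of the power series

def compose(a, b, n):
    """Coefficients of a(b(x)) modulo x^(n+1); assumes a[0] = b[0] = 0."""
    res = [0.0] * (n + 1)
    bpow = [0.0] * (n + 1)
    bpow[0] = 1.0                      # b^0 = 1
    for k in range(1, n + 1):
        nxt = [0.0] * (n + 1)          # nxt = bpow * b, truncated
        for i in range(n + 1):
            if bpow[i]:
                for j in range(1, n + 1 - i):
                    nxt[i + j] += bpow[i] * b[j]
        bpow = nxt                     # now bpow = b^k
        for m in range(n + 1):
            res[m] += a[k] * bpow[m]
    return res

# shifted function g(x) = lam*x + x^2
g = [0.0] * (N + 1)
g[1], g[2] = lam, 1.0

# solve h(h(x)) = g(x) term by term; the coefficient of x^n in h(h(x))
# contains h[n] only linearly, via (h[1]^n + h[1]) * h[n]
h = [0.0] * (N + 1)
h[1] = sqrt(lam)                       # the regular multiplier
for n in range(2, N + 1):
    lower = compose(h, h, n)[n]        # contribution with h[n] = 0
    h[n] = (g[n] - lower) / (h[1] ** n + h[1])
```

Shifting back, F(x) = h(x - x_a) + x_a gives the half-iterate of f near x_a; the truncated series is only reliable near that fixpoint, which is why a separate continuation is needed elsewhere.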

The result is interesting; the curve for the half-iterate meets the first fixpoint, but misses the second. Instead, the series diverges in that region.
However, using the more reliable results of f°0.5(x) at other values of x, I can construct a bit of continuation, which gives one winding around (x_b, x_b).

Naively, I'd expected the function to cross both fixpoints, but it seems this would need a completely different power series, and thus a modified procedure.

I currently have no idea how to proceed here. Does anyone have a comment?

Here are two plots:

a) an overview: integer iterates f°-1(x), f°0, f°1, f°2, f°3, f°4 and the regular f°0.5 iterate based on the power series representation

[attachment=456]

b) a detail: f°0.5 and f°1 in the vicinity of (x_b, x_b), and a continuation based on more reliable results of f°0.5 at other values of x

[attachment=457]

Gottfried
A bit more continuation:

[attachment=458]

Gottfried
Interesting phenomenon; I guess it has to do with the fact that the function is not strictly increasing in the vicinity of the second fixed point. If you develop the regular half-iterate at the left fixed point, it gives a non-real function.

If I remember correctly, the matrix power approach even yields a non-real solution if applied at 0 between both fixed points *when the function is strictly increasing*.

For complex fixed points, it's anyway (mostly) not the case that the regular iteration at one fixed point has the other fixed point as a fixed point.
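This can be made concrete for Gottfried's f(x) = x² - 0.5 (a small Python sketch for illustration): the multiplier at the left fixed point is negative, so the regular half-iterate's multiplier √(f'(x_b)) is purely imaginary, and the series cannot be real there:

```python
from math import sqrt
import cmath

x_a = (1 + sqrt(3)) / 2   # right fixpoint
x_b = (1 - sqrt(3)) / 2   # left fixpoint

fprime = lambda x: 2 * x  # derivative of f(x) = x^2 - 0.5

lam_a = fprime(x_a)       # ~ +2.732: real positive multiplier, sqrt is real
lam_b = fprime(x_b)       # ~ -0.732: negative!
mu_b = cmath.sqrt(lam_b)  # multiplier of the regular half-iterate at x_b
print(mu_b)               # purely imaginary
```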
bo198214 Wrote:Interesting phenomenon; I guess it has to do with the fact that the function is not strictly increasing in the vicinity of the second fixed point. If you develop the regular half-iterate at the left fixed point, it gives a non-real function.

right. but is that the only necessary and sufficient reason ?

bo198214 Wrote:If I remember correctly, the matrix power approach even yields a non-real solution if applied at 0 between both fixed points *when the function is strictly increasing*.

in general yes.

bo198214 Wrote:For complex fixed points, it's anyway (mostly) not the case that the regular iteration at one fixed point has the other fixed point as a fixed point.

i think you can even say that for any 3 real fixed points that are not attractive, it is not the case that regular iteration at one of them has the other 2 real fixed points as fixed points.

anyways , i think we would all benefit from being careful with fixpoint ideas.

the following looks interesting to me :

if g(g(x)) = f(x)

f(x) has 2 fixpoints, of which 1 is in common with g(x).

how many other fixpoints can g(x) have , and what determines their number and position ?

i think the radius of convergence of g(x) and f(x) is important in all this.

also if the fixpoints are attractive or not seems important.

i think in case f(x) and g(x) are both entire , the number of their fixpoints is - in general - as good as unbounded.

it might be interesting to consider :

fixpoint of f( inverse f(x) + 1 ) = fixpoint of g(x) ?

( i think its clear why ? )

furthermore , an analytic fixpoint shift might be another " tetration uniqueness condition " ?!?

just some ideas ...

regards

tommy1729
bo198214 Wrote:Interesting phenomenon; I guess it has to do with the fact that the function is not strictly increasing in the vicinity of the second fixed point. If you develop the regular half-iterate at the left fixed point, it gives a non-real function.

If I remember correctly, the matrix power approach even yields a non-real solution if applied at 0 between both fixed points *when the function is strictly increasing*.

For complex fixed points, it's anyway (mostly) not the case that the regular iteration at one fixed point has the other fixed point as a fixed point.

Hmm, what I understand now is the following. While the black line for the f(x)-curve shows the locus for continuously decreasing x, we don't notice that around the second fixpoint the *iterates* of f(x) oscillate around, and converge to, the fixpoint. The curve for f°0.5(x) "has to reflect this": two iterates of f°0.5 give one iterate of f(x). There is a (necessary) first crossing of f(x) and f°0.5(x) near the fixpoint (x ~ -0.195), and if one looks at the construction scheme for the continuation of f°0.5(x), it is obvious that this trajectory must wind around that of f(x).

Well: the *trajectory*. If the curve of f(x) is seen as a *trajectory* (for instance via the cobweb-curve), then it lies on a line (the curve of the graph) but is oscillating around, and converging to, the fixpoint.
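The oscillation is easy to see numerically (a small Python sketch for illustration): from some height on, the sign of f°h(x) - x_b alternates strictly, while the distance to the fixpoint shrinks by roughly |f'(x_b)| ≈ 0.73 per step:

```python
from math import sqrt

def f(x):
    return x * x - 0.5

x_b = (1 - sqrt(3)) / 2   # the second fixpoint, ~ -0.366

x = 1.0
above = []
for _ in range(12):
    x = f(x)
    above.append(x > x_b)  # which side of the fixpoint are we on?
print(above)               # from the second entry on: strictly alternating
print(abs(x - x_b))        # the iterates converge to x_b
```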

So I think now this is a general problem for fractional iterates.
Curious now: the f°0.5-curve winds around and converges to the fixpoint. But where does the curve reappear for x < -0.5? In the complex plane?

Hmmm.

Gottfried
Hmm, this seems to make things better understandable. Here is the curve of f(x), but with focus on the aspect that it also represents a trajectory of iterates. I made segments of iteration (each segment has range h modulo 1) visible by coloring: segments of equal color map consecutively to the next under iteration.
The "critical" area near the second fixpoint (red) is an area of oscillation; part of the green segment is overlaid by the red segment.
Interestingly the curve f(x) does not spiral like f°0.5(x), or let's say: it's a degenerate spiral, or it may be expressed as a limit case of a spiral without y-radius (or so...)

The light-blue segment of the f(x)-curve cannot be reached from any segment on the right; the iteration-"source" is imaginary. This also answers the question of the continuation of the f°0.5(x)-curve: it must (re-)appear in the complex plane.
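That the "source" is imaginary can be checked directly (a small Python sketch for illustration; the sample value x = -0.7 is just an arbitrary point with x < -0.5): the preimages of x under f are ±√(x + 0.5), which leave the real line as soon as x < -0.5:

```python
import cmath

# preimages of x under f(y) = y^2 - 0.5 are y = +/- sqrt(x + 0.5)
x = -0.7                   # any x < -0.5, i.e. below the minimum of f
pre = cmath.sqrt(x + 0.5)  # purely imaginary preimage
print(pre)
```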

Hmm...

[attachment=461]

I'm tending to the following consideration:
* assume a function f(x)
* define one initial point (x0,f(x0))
* define the set of iterates -> discrete set of points, possibly accumulating at some fixpoints
* use the binomial theorem (our binomial formula for iterates) to compose points to get a continuous set of points.

Then we have one segment of a curve within a certain interval (here the range between the two fixpoints, -0.5<=x<=(1+sqrt(3))/2), so to say: "one segment came into existence"

* then - how to express the generation of the rest of the curve for f(x) (the lightblue part in the plot) in these terms ?

Concerning the half-iterate:
remember - the current computation is based only on one special method to define a half-iterate.
* Can there be another approach to such an interpolation, providing a non-spiraling curve for f°0.5(x) ?
* Or, let it be spiralling, but crossing the y-axis exactly at (x,y) = (0, fp_y), where fp_y is the value of the fixpoint - this would be only a very small correction, but would look much more elegant...?

Gottfried
Looking at the winding of f°0.5(x) at the second fixpoint, and in contrast at the straight line at the first fixpoint, suggests an explanation for another problem. I was always wondering why the alternating series of consecutive iterates gives matching results for the serial summation and the matrix-computed summation in one direction but not in the other.
It seems this agrees insofar as: if the trajectory converges straight to the fixpoint, the results match; if the trajectory spirals, the results don't match.

This seems to be more generalizable: it occurs not only with spiralling but also with oscillating trajectories, as we observe with the f(x)-function itself when iterated towards the second fixpoint.

I get, for the alternating sums beginning at x=1, with iterates towards the 2'nd versus the 1'st fixpoint:

serial summation
0.709801988103 towards 2'nd fixpoint: $\sum_{h=0}^{\infty} (-1)^h \cdot f^{\circ h}(1.0)$
0.419756033790 towards 1'st fixpoint: $\sum_{h=0}^{\infty} (-1)^h \cdot f^{\circ -h}(1.0)$

Matrix-method:
0.580243966210 towards 2'nd fixpoint // incorrect, doesn't match serial summation
0.419756033790 towards 1'st fixpoint // matches serial summation

where systematically, with the matrix-method, the two results sum up to the parameter x in f°h(x). If we could find the functional equation relating the series of negative heights to that of positive heights, we could give reliable values for the "sums" of very badly diverging series by that...
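The two serial values can be reproduced without sumalt (a Python sketch for illustration, assuming Abel-type summation semantics): since the iterates approach their limit L geometrically, one can subtract L termwise (the residual alternating series then converges absolutely) and add back L·(1 - 1 + 1 - ...) = L/2:

```python
from math import sqrt

def f(x): return x * x - 0.5    # forward iteration
def g(x): return sqrt(0.5 + x)  # backward iteration (negative height)

x_a = (1 + sqrt(3)) / 2         # limit of the backward iterates (1st fixpoint)
x_b = (1 - sqrt(3)) / 2         # limit of the forward iterates (2nd fixpoint)

def alt_sum(step, x0, L, n=300):
    """Regularized sum of (-1)^h * step^h(x0), with step^0(x0) = x0."""
    s, x, sign = 0.0, x0, 1.0
    for _ in range(n):
        s += sign * (x - L)     # convergent residual series
        x = step(x)
        sign = -sign
    return s + L / 2            # add back L * (1 - 1 + 1 - ...)

sp = alt_sum(f, 1.0, x_b)       # towards 2'nd fixpoint
sn = alt_sum(g, 1.0, x_a)       # towards 1'st fixpoint
print(sp, sn)                   # ~ 0.709802 and ~ 0.419756
print(sp + sn - 1.0)            # the "serial" y is clearly nonzero
```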

Gottfried

dear gottfried , im confused about your last post ...

plz clarify.

tommy1729
(06/02/2011, 08:36 PM)tommy1729 Wrote: dear gottfried , im confused about your last post ...

plz clarify.

there is no secret about the alternating sums of iterates of consecutive (integer) heights: just approximate them using your favorite software capable of Cesaro/Abel/Euler summation. In Pari/GP you use "sumalt":
Code:
\\ define function f(x) for forward iteration and g(x) for backward
\\ iteration (= negative height); the additional parameter h allows
\\ positive integer heights
f(x,h=1) = for(k=1,h, x = x^2 - 0.5); return(x);
g(x,h=1) = for(k=1,h, x = sqrt(0.5 + x)); return(x);

\\ do analysis at central value for alternating sums x0=1
x0 = 1.0
sp = sumalt(h=0, (-1)^h * f(x0, h))
sn = sumalt(h=0, (-1)^h * g(x0, h))
y  = sp + sn - x0
to reproduce the "serial" sums. (Clearly you could optimize this by exploiting the fact that the function calls in sumalt use strictly consecutive integer heights.)

Then y is in general (for real x0 in a unit-interval of iteration) not zero.

Using the matrix-method to compute the alternating sums, I get systematically y=0 due to the rules of matrix-algebra. The interesting point is that always one of the alternating sums is correct, either sp or sn - and I have not yet seen which one, when, and why.

I've just done some more discussion of the iteration of this function, due to a review of this question these days on math SE. (But note that that discussion has nothing to do with the problem of the relation to the matrix-based method mentioned in my earlier post.)
Sometimes we find easter-eggs even after easter...

For the alternating iteration-series
$\hspace{48} sn(x,p)=\sum_{h=0}^{\infty} (-1)^h \, g(x,h)^p \\
\hspace{48} \text{where } g(x)=\sqrt{0.5+x}, \quad g(x,h)=g(g(x,h-1)), \quad g(x,1)=g(x), \quad g(x,0)=x$

(definitions as copied and extended from previous post, see below)

we find a rational polynomial for p=4. That means

$\hspace{48} x^4 - g(x)^4 + g(x,2)^4 - g(x,3)^4 + \ldots - \ldots \\
\hspace{48} = sn(x,4) = 1/8 - x^2 + x^4$

(maybe this is trivial and only a telescoping sum; I didn't check this thoroughly)
----
Another one:
$\hspace{48} sn(x,1)+sn(x,2) = -0.25 + x^2
$
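Both closed forms can be checked numerically (a Python sketch for illustration, using a limit-subtraction regularization: the terms g(x,h)^p tend to x_a^p rather than to 0, so subtract that limit termwise and add back x_a^p/2):

```python
from math import sqrt

def g(x): return sqrt(0.5 + x)
x_a = (1 + sqrt(3)) / 2        # attracting fixpoint of g

def sn(x, p, n=200):
    """Regularized alternating sum  x^p - g(x)^p + g(g(x))^p - ..."""
    L = x_a ** p
    s, sign = 0.0, 1.0
    for _ in range(n):
        s += sign * (x ** p - L)   # convergent residual series
        x = g(x)
        sign = -sign
    return s + L / 2               # add back L * (1 - 1 + 1 - ...)

for x in (0.3, 1.0, 1.2):
    print(sn(x, 4) - (0.125 - x**2 + x**4))     # ~ 0
    print(sn(x, 1) + sn(x, 2) - (x**2 - 0.25))  # ~ 0
```

Under this regularization, both identities hold to machine precision on the basin of attraction of x_a (any real x > -0.5).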

----

Code:
\\ define function f(x) for forward iteration and g(x) for backward
\\ iteration (= negative height); the additional parameter h allows
\\ positive integer heights
f(x,h=1) = for(k=1,h, x = x^2 - 0.5); return(x);
g(x,h=1) = for(k=1,h, x = sqrt(0.5 + x)); return(x);

\\ do analysis at central value for alternating sums x0=1
x = 1.0
sp(x) = sumalt(h=0, (-1)^h * f(x, h))
sn(x) = sumalt(h=0, (-1)^h * g(x, h))
y(x)  = sp(x) + sn(x) - x
