Ok, I've fiddled some days with this case of iteration. No new information, but lots of paper with interesting graphs...

bo198214 Wrote: but at least we have traced back the regular iteration at different fixed points to the regular iteration of different scalings. Does this also work for higher order polynomials?

Hmm, I haven't done this, because even if we get the coefficients linearly scaled, this does not give any obviously helpful information when iterated. I also doubt that it can be done with higher polynomials: in the two-fixpoint case it was just a matter of computing a +/- deviance d of the same absolute value around a sort of mean value of the two fixpoints. If we have three or more fixpoints, like f(x) = x + (x-t0)(x-t1)(x-t2), then in general there is no common d with the same absolute value, so I think this will be impossible.

However, this led me to rethink the formula derived from the symbolic eigensystem decomposition of the b^x - 1 iteration. Written as a function of the height parameter h it is something like

(where u = log(b) and the a_k are functions of x and u, i.e. constant for given x and u), which made me fiddle with the derivatives, sums of f(h) with increasing h (again), and so on. If I have news, I'll post it in the tetra-series thread.
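The formula itself appears to have been an image that did not survive extraction. As a hedged reconstruction (my guess, not necessarily the exact form posted): for regular iteration of t(x) = b^x - 1 at the fixpoint 0 with multiplier u = log(b), the Schröder/eigensystem decomposition typically expresses the height-h iterate as a power series in u^h,

```latex
f(h) \;=\; t^{\circ h}(x) \;=\; \sum_{k\ge 1} a_k \,\left(u^{h}\right)^{k},
\qquad a_k = a_k(x,u),
```

where the a_k depend on x and u but not on h, which at least matches the description "u = log(b) and the a_k are functions of x and u, constant for given x and u".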
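The two-fixpoint remark above can be checked numerically. The following is a minimal sketch (all names are mine, for illustration only): for a quadratic map f(x) = x + (x - t0)(x - t1), shifting coordinates to the midpoint of the two fixpoints places them symmetrically at +/- d, which is exactly the common "deviance d of the same absolute value"; with three fixpoints no such single midpoint exists in general.

```python
# Hypothetical illustration of the +/- deviance around the mean of two fixpoints.
t0, t1 = 1.0, 3.0
m = (t0 + t1) / 2          # mean value of the two fixpoints
d = (t1 - t0) / 2          # common +/- deviance

def f(x):
    # quadratic map with fixpoints t0 and t1
    return x + (x - t0) * (x - t1)

def g(y):
    # f conjugated by the shift x = y + m; its fixpoints sit at +d and -d
    return f(y + m) - m

print(g(d), g(-d))         # → 1.0 -1.0  (both +/- d are fixpoints of g)

# With three fixpoints t0, t1, t2 the condition |t0-m| = |t1-m| already
# forces m = (t0+t1)/2, and then |t2-m| differs in general -- so no single
# shift produces deviances of one common absolute value.
t2 = 4.5
print(abs(t2 - m) == d)    # → False
```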
Gottfried
Gottfried Helms, Kassel