By "development point" I don't mean the base. Here I only consider tetration with real base b > 1, where the tetrational should be real on the real axis.

With "development point" I mean the following: our methods (intuitive and matrix power) take a power series development as input and return a power series development. The input is usually the power series of exp_b developed at 0, but the methods work for lots of other functions too.

So if we instead feed in the power series (at 0) of the shift-conjugated function z -> exp_b(z+z0) - z0, and the method outputs the Abel function A and the superfunction F of this conjugated function, then A(z-z0) and F(z)+z0 are an Abel function and a superfunction of the original exp_b, respectively. I call z0 the development point of the method, or say for example that islog_b is developed at z0.
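The conjugation identity can be checked numerically. Below is a minimal sketch with base b = sqrt(2) and the (arbitrarily chosen) development point z0 = 0.5; the names exp_b, f, A, B are illustrative. The Abel function of the conjugated function is built here by regular iteration (via the Koenigs function) at its attracting fixed point 2 - z0 = 1.5 — this is not the intuitive or matrix power method itself, only a way to get some Abel function of the conjugate so the shift-back identity B(z) = A(z - z0) can be verified:

```python
import cmath
from math import log

b = 2 ** 0.5          # base sqrt(2); exp_b has the real fixed points 2 and 4
z0 = 0.5              # development point, chosen arbitrarily for the demo

def exp_b(z):
    return b ** z

def f(z):             # shift-conjugated function: z -> exp_b(z + z0) - z0
    return exp_b(z + z0) - z0

q = 2.0 - z0          # attracting fixed point of f (since exp_b fixes 2)
lam = log(b) * 2.0    # multiplier f'(q) = ln(b) * b**(q + z0) = ln 2 < 1

def koenigs(z, n=60):
    """Koenigs function psi of f at q: psi(f(z)) = lam * psi(z)."""
    for _ in range(n):
        z = f(z)
    return (z - q) / lam ** n

def A(z):             # Abel function of the conjugated f (regular iteration at q)
    return cmath.log(koenigs(z)) / log(lam)

def B(z):             # claimed Abel function of the original exp_b
    return A(z - z0)

z = 1.0
print(abs(A(f(z)) - (A(z) + 1.0)))      # ~0: A solves A(f(z)) = A(z) + 1
print(abs(B(exp_b(z)) - (B(z) + 1.0)))  # ~0: B solves B(exp_b(z)) = B(z) + 1
```

The same shift works for the superfunction: if F(z+1) = f(F(z)), then G = F + z0 satisfies G(z+1) = f(F(z)) + z0 = exp_b(F(z) + z0) = exp_b(G(z)).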

For the particular case of z0 being a fixed point we already know that the matrix power iteration equals regular iteration at that fixed point, and we also know that in this case the intuitive slog/iteration cannot be applied.
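As a small sketch of the fixed-point case (not a proof, and with all names chosen just for the demo): develop exp_b with b = sqrt(2) at its fixed point 2, i.e. g(w) = b^(2+w) - 2, build the truncated Carleman matrix of g, and take its principal matrix square root. Because g(0) = 0 the matrix is triangular, the square root is unique with positive diagonal, and row 1 of the root gives the coefficients of a half-iterate h with h(h(w)) = g(w) — which is exactly the regular half-iterate at that fixed point (its multiplier is sqrt(ln 2)):

```python
import numpy as np
from math import factorial, log

b = 2 ** 0.5                  # base sqrt(2); exp_b(z) = b**z has fixed point p = 2
a = log(b)                    # ln b
N = 14                        # truncation order of all power series

def g(w):                     # exp_b developed at the fixed point: b**(2+w) - 2
    return 2.0 * (np.exp(a * w) - 1.0)

# Taylor coefficients of g at 0: g(w) = sum_{k>=1} 2*a^k/k! * w^k
c = np.array([0.0] + [2.0 * a**k / factorial(k) for k in range(1, N)])

# Carleman matrix: row i holds the first N coefficients of g(w)**i.
# Since g(0) = 0, M is upper triangular with diagonal lam**i, lam = g'(0) = ln 2.
M = np.zeros((N, N))
M[0, 0] = 1.0
M[1] = c
for i in range(2, N):
    M[i] = np.convolve(M[i - 1], c)[:N]

# Principal square root of the triangular M (Bjorck-Hammarling recursion).
X = np.zeros((N, N))
for i in range(N):
    X[i, i] = np.sqrt(M[i, i])
for d in range(1, N):
    for i in range(N - d):
        j = i + d
        s = M[i, j] - X[i, i + 1:j] @ X[i + 1:j, j]
        X[i, j] = s / (X[i, i] + X[j, j])

half = X[1]                   # row 1 of M**(1/2): coefficients of the half-iterate h

def ev(coeffs, w):            # evaluate a truncated power series
    return sum(ck * w**k for k, ck in enumerate(coeffs))

w = 0.1
print(ev(half, ev(half, w)) - g(w))   # ~0: h(h(w)) = g(w)
```

Here the agreement with regular iteration is forced by the triangular structure: the formal half-iterate with multiplier sqrt(lam) is unique, and so is the principal triangular square root.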

So my original objection was that the matrix power method would have to be continuous in z0. If I move z0 from one fixed point to another, the iteration must change along the way (as the iterations at the two fixed points are different). Whereas if I instead assume that this change is not continuous, the iteration stays the same for a while and then jumps when z0 comes near another fixed point.

These regions of non-changing iteration could be the attracting basins of the fixed points. Perhaps there are also "repelling" regions (where the inverse function is attracting) that dictate the iteration outcome in the same way.
