08/12/2009, 04:18 PM

(08/12/2009, 02:43 PM)jaydfox Wrote: The variable fma in that code is the matrix I'm referring to. Hopefully you can tell from the context what Andrew's talking about. Essentially, it gives us a matrix, which we can multiply on the right by a vector of powers of n, to get a vector representing a power series in x. For fractional n, the terms diverge.

Or, we can multiply on the right by a vector of powers of x (i.e., picking a fixed x), and get a power series in n, which is the iteration function. I use this latter approach.
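The two readings of that matrix product can be sketched as follows. Note that `M` here is a made-up 3x3 stand-in for Andrew's `fma` matrix (its entries are illustrative, not the real coefficients):

```python
# Sketch: a coefficient matrix M where M[i][j] is the coefficient of
# x^i * n^j. M is a hypothetical 3x3 example, NOT the actual fma matrix.
M = [[0.0, 1.0, 0.5],
     [1.0, 0.3, 0.1],
     [0.0, 0.2, 0.05]]

def powers(t, k):
    """Return the vector [1, t, t^2, ..., t^(k-1)]."""
    return [t**j for j in range(k)]

def times_vector(M, v):
    """Multiply matrix M on the right by the column vector v."""
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

# Reading 1: fix the iteration count n -> coefficients of a series in x.
series_in_x = times_vector(M, powers(0.5, 3))    # pick n = 0.5

# Reading 2: fix x -> coefficients of a series in n (the iteration
# function).  The rows of M index powers of x, so use the transpose.
MT = [list(col) for col in zip(*M)]
series_in_n = times_vector(MT, powers(0.25, 3))  # pick x = 0.25
```

The second reading is the one jaydfox describes: for a fixed x you obtain a power series in the iteration count n.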

Oh, you mean this matrix is only for computing the iteration of ? And that computation takes this exorbitant amount of time?

You know that Walker describes an algorithm for the iteration of based on a limit formula, rather than via power series development. Perhaps that is more appropriate where computation time is the concern, particularly because he also applied some acceleration techniques.

I initially thought that it was the inversion of the interpolation matrix that took so much time (and not the fma); that is why I provided the direct formula via Stirling numbers.
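The Stirling-number route avoids inverting the interpolation matrix. As a minimal sketch (this is just the standard recurrence for Stirling numbers of the second kind, not the specific closed formula from my earlier post):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    """Stirling number of the second kind S(n, k), computed via the
    standard recurrence S(n, k) = k*S(n-1, k) + S(n-1, k-1)."""
    if n == k:
        return 1          # includes S(0, 0) = 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)
```

Memoization via `lru_cache` keeps the recurrence from re-deriving the same entries, so building a whole triangle of coefficients stays cheap.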

In any case, I would ask you to make the solution modular.

Input: some super-exponential to base , defined at least on the real axis.

Output: the Taylor series of the base change to base .

This would also give us the flexibility to check whether the change of base takes one regular super-exponential to another regular super-exponential (at the lower fixed point).

The regular super-exponentials for base are in any case numerically far more gentle than those for base .