(08/23/2009, 02:45 PM)tommy1729 Wrote:

Quote:bo198214 Wrote: For the constant -1 vanishes and we make the following calculations:

notice in the last line bo wrote s(x) instead of s(0).

The derivative of \( e^{nx} \) is easily determined to be \( n\,e^{nx} \),

and so the \( k \)-th derivative is \( n^k e^{nx} \), which gives us in turn

\[ \sum_{n=1}^{\infty} n^k\, v_n = k!\, v_k \]

for \( k \ge 1 \).

which is obviously a misprint. The corresponding equation system for arbitrary base \( b \) is:

\[ \sum_{n=1}^{\infty} (n \ln b)^k\, v_n = k!\, v_k \]

for \( k \ge 1 \).
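For what it is worth, a truncated version of this system can be solved numerically. The following is only a sketch under my own assumptions (base e by default, truncation at order N, and a normalization row from the constant term of the Abel equation); the name `slog_coeffs` is invented here, not from the thread:

```python
import numpy as np

# Truncated coefficient system for the slog at development point 0
# (my own illustration):
#   k = 0:   sum_{n=1}^N v_n = 1                   (constant term)
#   k >= 1:  sum_{n=1}^N (n ln b)^k v_n = k! v_k   (divided by k! below
#                                                   for better conditioning)
def slog_coeffs(N, b=np.e):
    L = np.log(b)
    n = np.arange(1, N + 1)
    A = np.zeros((N, N))
    rhs = np.zeros(N)
    A[0, :] = 1.0                      # k = 0 normalization row
    rhs[0] = 1.0
    fact = 1.0
    for k in range(1, N):
        fact *= k                      # fact == k!
        A[k, :] = (n * L) ** k / fact
        A[k, k - 1] -= 1.0             # move v_k to the left-hand side
    return np.linalg.solve(A, rhs)

v = slog_coeffs(12)
# the truncated solution should nearly satisfy the Abel equation
# s(b^x) = s(x) + 1 near the development point 0
s = lambda x: sum(c * x ** (i + 1) for i, c in enumerate(v))
print(abs(s(np.exp(0.1)) - s(0.1) - 1.0))
```

The residual of the Abel equation shrinks as N grows, since the first N Taylor coefficients of \( s(b^x) - s(x) - 1 \) are forced to vanish.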

Quote:it is an expansion at x = 0.

now if we consider expansions at both x = 0 and x = 1, do we get the same coefficients for x = 0 by computing them from

1) the coefficients expanded at x = 1

2) solving the modified equation (see below)?

I doubt this. I guess we get different superlogarithms for different development points. One should perhaps check this with a complex plot.

I further guess that for development points converging to the lower fixed point, the solution converges to the regular tetration.

And 3. I guess that Andrew's slog corresponds to the inverse of the matrix power sexp (which also depends on a development point).

Quote:since \( b^{1\cdot i} = b^i \), we get an extra \( b^i \) factor on the right side.

*nods*
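Spelling out where that factor comes from when the expansion point is moved from 0 to 1 (my own reading of the remark, with \( s(y) = \sum_i v_i y^i \)):

\[ s\!\left(b^{x+1}\right) \;=\; \sum_{i} v_i\, b^{i(x+1)} \;=\; \sum_{i} v_i\, b^{i}\, b^{i x}, \]

so each coefficient \( v_i \) picks up the extra factor \( b^i \).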

Quote:and v_k is replaced by sum v_k / k!

If you have a function f(x) with powerseries coefficients \( v_k \) at 0, then f(x+d) has the powerseries coefficients \( v_k(d) \) (provided that f has convergence radius > |d| at 0):

\[ v_k(d) = \sum_{n=k}^{\infty} \binom{n}{k}\, v_n\, d^{\,n-k} \]

and vice versa

\[ v_k = \sum_{n=k}^{\infty} \binom{n}{k}\, v_n(d)\, (-d)^{\,n-k}. \]
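A quick sketch of these two shift formulas in code — the function name `shift_coeffs` and the geometric-series example are choices of this illustration, not from the thread:

```python
from math import comb

def shift_coeffs(v, d):
    """From truncated coefficients v of f at 0, compute the coefficients
    v_k(d) of f(x + d):  v_k(d) = sum_{n>=k} C(n, k) * v[n] * d**(n-k)."""
    N = len(v)
    return [sum(comb(n, k) * v[n] * d ** (n - k) for n in range(k, N))
            for k in range(N)]

# example: f(x) = 1/(1 - x) has coefficients 1, 1, 1, ... at 0, and
# f(x + d) = 1/((1 - d) - x) has coefficients 1/(1 - d)^(k+1)
v = [1.0] * 40
w = shift_coeffs(v, 0.25)        # w[k] approaches (4/3)**(k+1)
back = shift_coeffs(w, -0.25)    # "vice versa": shifting back recovers v
```

Because everything is truncated, only the low-order entries of `back` are accurate; the higher ones are polluted by the missing tail of the series, which is why convergence radius > |d| matters.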

Quote:bo mentioned the potential non-uniqueness for v_k when expanded at x = 0.

maybe this could be the extra condition we(?) are looking for.

The demand that the solutions at different development points give the same function does not tell us how to solve the equation system in a different way, if that is what you mean.