andydude Wrote:bo198214 Wrote:I don't know whether they should, but actually they are different. I would say that this has to do with the fact that you need an infinite matrix to do a fixed point shift, while you are actually dealing with a finite matrix.

Yes, I agree with you on this. I think that the problems we have had in the past with shifting (and things like this) are due to the fact that we are using finite matrices.

Andrew Robbins

Hmm, I'll add a new idea at the end of this message, but let me first recall some formulae, which I have already stated at times, to shed another light on the reason for the different values we get when using truncated matrices.

---------------------------------------------

In matrix notation it turns out that the fixpoint shift can be expressed by factoring the Carleman matrix into diagonal and triangular matrices (in the infinite-dimension case). If Bb is the (transposed) Carleman matrix for the function f(b,x) = b^x, and V(x)~ * Bb = V(b^x)~ provides the value of the function f(b,x) in the second entry of the result vector, like
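A small numerical sketch of this, assuming the truncated (transposed) Carleman matrix has entries Bb[j,k] = (k·log b)^j / j! (which follows from expanding (b^x)^k = e^(k·x·log b) as a power series in x); the function names here are illustrative only:

```python
import math

def carleman_Bb(b, n):
    """n-by-n truncation of the (transposed) Carleman matrix of f(x) = b^x:
    entry [j][k] = (k*ln b)^j / j!, so that sum_j x^j * Bb[j][k] = b^(k*x)."""
    lb = math.log(b)
    return [[(k*lb)**j / math.factorial(j) for k in range(n)] for j in range(n)]

def V(x, n):
    # truncated Vandermonde-type vector V(x) = (1, x, x^2, ...)
    return [x**j for j in range(n)]

n, b, x = 32, math.sqrt(2), 0.5
Bb = carleman_Bb(b, n)
row = [sum(V(x, n)[j] * Bb[j][k] for j in range(n)) for k in range(n)]
# the result approximates V(b^x)~; its second entry (index 1) approximates b^x
print(row[1], b**x)
```

For these small arguments the truncation at n = 32 already reproduces b^x to machine precision; the k-th entry approximates (b^x)^k.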

(1)

then we can express the fixpoint shift, using that factoring, by introducing t, where t^(1/t) = b and u = log(t), and, for shortness, Q = P^-1

(2)

where again

(3)

and U is the factorially scaled matrix of Stirling numbers of the second kind

(4)

such that U_t is the Bell-matrix for t^x - 1.
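To illustrate, here is a sketch of a truncated U_t built from factorially scaled Stirling numbers of the second kind, assuming the entry formula U_t[j,k] = u^j · S2(j,k) · k!/j! (which follows from expanding (t^x - 1)^k = (e^(ux) - 1)^k with the exponential generating function of S2); the names are illustrative:

```python
import math

def stirling2(n):
    """Table of Stirling numbers of the second kind, S[j][k] for 0 <= k <= j < n."""
    S = [[0]*n for _ in range(n)]
    S[0][0] = 1
    for j in range(1, n):
        for k in range(1, j+1):
            S[j][k] = k*S[j-1][k] + S[j-1][k-1]
    return S

def bell_Ut(t, n):
    """n-by-n truncation of the Bell matrix of t^x - 1:
    entry [j][k] = u^j * S2(j,k) * k!/j!, with u = ln t."""
    u = math.log(t)
    S = stirling2(n)
    return [[u**j * S[j][k] * math.factorial(k)/math.factorial(j)
             for k in range(n)] for j in range(n)]

n, t, x = 40, 2.0, 0.3
M = bell_Ut(t, n)
col1 = sum(x**j * M[j][1] for j in range(n))
# column k of V(x)~ * U_t approximates (t^x - 1)^k; column 1 gives t^x - 1
print(col1, t**x - 1)
```

The factorials in the denominators are what make these column series converge quickly for any fixed x and u.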

In (2), for each entry of the matrix product Q~ * U_t we have to compute an infinite series; we can insert analytical results for these only if the implicit series evaluations are based on convergent series.

Different results may occur depending on the order of evaluation of the whole expression (combining (1) and (2)), because of the truncation to finite size

(5)

By the binomial theorem we can evaluate the first three factors first, as indicated by the parentheses

(6)

giving the matrix-term

(7)

and for the result, introducing

and analogously, using the binomial theorem,

(8)

(9)

(10)

which is the matrix expression for the fixpoint shift x' = x/t - 1.
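As a quick numeric check of this shift (a sketch; the conjugacy phi∘f = g∘phi, with phi(x) = x/t - 1, f(x) = b^x and g(y) = t^y - 1, is reconstructed from the definitions t^(1/t) = b above):

```python
import math

t = 2.0
b = t**(1.0/t)           # b = t^(1/t), so b^t = t and t is a fixpoint of f
f = lambda x: b**x
g = lambda y: t**y - 1   # the function whose Bell matrix is U_t
phi = lambda x: x/t - 1  # the fixpoint shift x' = x/t - 1
x = 0.7
print(phi(f(x)), g(phi(x)))  # the two values agree
```

Note that phi maps the fixpoint t of f to the fixpoint 0 of g, which is why the triangular matrix U_t can take over after the shift.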

Note that, using powers of P, precisely the t'th power, we may write

(11)

(12)

(13)

thus using the fixpoint shift x'' = x - t.
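Analogously for the shift x'' = x - t (a sketch; the conjugate map g2(y) = t·(b^y - 1) is reconstructed from b^(y+t) - t = t·b^y - t, using b^t = t, and is not stated explicitly in the original text):

```python
import math

t = 2.0
b = t**(1.0/t)               # b^t = t, so t is a fixpoint of f
f = lambda x: b**x
psi = lambda x: x - t        # the fixpoint shift x'' = x - t
g2 = lambda y: t*(b**y - 1)  # conjugate of f under psi (reconstructed, see lead-in)
x = 0.7
print(psi(f(x)), g2(psi(x)))  # the two values agree
```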

In the matrix notation one can see that the use of associativity changes the *position* of the (unavoidable) error due to truncation, and that the final magnitude of the error depends on this position.
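A small numerical sketch of this effect, again assuming the entry formula Bb[j,k] = (k·log b)^j / j! consistent with V(x)~ * Bb = V(b^x)~. It iterates f(x) = b^x twice, once with truncation at every step, and once with the intermediate vector V(b^x)~ inserted analytically, so the truncation error lands in a different position:

```python
import math

def carleman_Bb(b, n):
    # truncated (transposed) Carleman matrix of f(x) = b^x: entry [j][k] = (k*ln b)^j / j!
    lb = math.log(b)
    return [[(k*lb)**j / math.factorial(j) for k in range(n)] for j in range(n)]

n, b, x = 8, math.sqrt(2), 2.0
Bb = carleman_Bb(b, n)
target = b**(b**x)  # exact f(f(x)); here sqrt(2)^(sqrt(2)^2) = 2

# order A: truncate everywhere -- the intermediate vector only approximates V(b^x)~
yA = [sum(x**j * Bb[j][k] for j in range(n)) for k in range(n)]
resA = sum(yA[j] * Bb[j][1] for j in range(n))

# order B: insert the analytical intermediate result V(b^x)~, truncate only the last product
yB = [(b**x)**k for k in range(n)]
resB = sum(yB[j] * Bb[j][1] for j in range(n))

print(abs(resA - target), abs(resB - target))  # different error magnitudes
```

At this small truncation size n = 8, evaluating the intermediate series analytically (order B) leaves a noticeably smaller error than truncating everywhere (order A), although both orders would agree in the infinite-dimensional case.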

-----------------------------------------------------------

But this is all just a reminder, since I've discussed it already elsewhere.

Meditating on Henryk's and Andrew's posts, I possibly found a key (or a first step) for approaching the problem of differing values for fractional iterates depending on the selection of fixpoints.

First note that for b > e^(1/e) the value of u satisfies u > 1.

Second, since in

where

we have factorials in the denominators of the second column of U, and dV(u) provides growth at only a geometric rate, column 2 contains entries which constitute a convergent series, for all u and any finite integer height (expressed by the corresponding power of U_t), when matrix-multiplied with Q~.

But this is not so with fractional heights: then the entries in the second column diverge strongly, and by inspection of the representation based on the symbolic eigensystem decomposition, we seem to have a growth rate of order exp(k^2)/k! for the k'th entry.

The sequence of entries seems to follow a pattern of initial decrease, arriving at a minimum at some index k, and then a tail of infinite increase, something like

"d d d d m i i i i ..." where "d" indicates decrease, "m" the minimum and "i" increase.

(See for instance http://go.helms-net.de/math/tetdocs/html...ration.htm)
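The "d d ... d m i i ..." shape can be mimicked by a toy sequence with the claimed growth rate exp(k^2)/k!, damped by a geometric cofactor; the constants a and c below are arbitrary illustration values, not taken from the actual matrix entries:

```python
import math

a, c = 0.2, 0.1  # arbitrary toy parameters (not from the actual decomposition)
terms = [c**k * math.exp(a*k*k) / math.factorial(k) for k in range(25)]
m = min(range(25), key=lambda k: terms[k])
print(m)  # position of the minimum "m": terms decrease before it, increase after it
# -> 12
```

The geometric factor c^k and the factorial dominate at first (the "d" part), until the exp(a*k^2) term takes over and the tail grows without bound (the "i" part); shrinking a pushes the minimum further into the tail, mirroring the behaviour described for heights approaching integers.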

For half-integer heights h the "m" position occurs very early; as the fractional height h approaches an integer value, the position moves into the tail, and for integer heights we may now say: it disappears "into infinity", such that we have a convergent series for integer heights.

But the matrix multiplication Q~ * U_t^h, which expresses the summation of the "d d ... d m i i i ..." terms cofactored by the row entries in Q~, is derived under the assumption that we have a valid similarity transform

(14)

which is, using

,

(15)

But if X^-1 * U_t involves the evaluation of divergent series, then we do not really know whether the row-by-column multiplications constitute appropriate values to ensure that X^-1 * U_t * X really is a matrix similarity transform; for instance, whether the results of the row-by-column multiplications in X^-1 * U_t follow the same pattern as occurs when the involved series are convergent.

And since the formal matrix multiplications are one-to-one with the functional representations of the fixpoint shift, we may derive a caveat from here concerning the fixpoint shift when applied to fractional "heights", as far as divergent summation of power series is implicated.

Just a note; I'll play with this a bit more to see whether it is relevant/correct so far.

Gottfried