One encouraging result for fractional iterates with non-real base

Well, perhaps I was too scared by the difficult approximations of the matrix-operator method for the cases where b lies outside the range e^-e < b < eta = e^(1/e).

With a careful computation of the half-iterate using the complex base I, I got

Code:

`y1 = {I,1}^^0.5 ~ 1.16729812784 + 0.735996102206*I`
`y2 = {I,y1}^^0.5 = {I,1}^^1 ~ 0.000150635188062 + 1.00000615687*I`

which is near the expected result {I,1}^^1 = I, without need to change my hypothesis.

Remember my hypothetical formula for continuous tetration

y = {b,x}^^h

implemented by

V(y)~ = V(x)~ * (dV(log(b))*B)^h = V(x)~ * Bb^h

and

y = V(y)[1]

To arrive at the desired result, Bb^h must be constructed via the analytical description:

Let

W^-1 * D * W = Bb

be the eigendecomposition of Bb and

W^-1 * D^h * W = Bb^h

the h'th power of Bb

This can simply be approximated with any eigensystem solver fed with the matrix Bb: exponentiate the eigenvalues and recompose...

But the result is then only heuristic, and we don't know the degree of approximation.
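To make this heuristic route concrete, here is a minimal Python/numpy sketch (the function names, the truncation size N and the principal branch of the logarithm are my assumptions, not fixed by the description above):

```python
import numpy as np

def carleman(b, N=32):
    """Truncated Carleman matrix Bb for x -> b^x.
    Column c holds the power-series coefficients of (b^x)^c = exp(c*log(b)*x),
    so that V(x)~ * Bb ~ V(b^x)~ with V(x) = (1, x, x^2, ...)."""
    lb = np.log(complex(b))
    fact = np.ones(N)
    fact[1:] = np.cumprod(np.arange(1, N))        # fact[r] = r!
    r = np.arange(N).reshape(-1, 1)
    c = np.arange(N).reshape(1, -1)
    return (c * lb) ** r / fact.reshape(-1, 1)

def iterate_h(b, x, h, N=32):
    """Heuristic h'th iterate of x -> b^x via a numerical
    eigendecomposition of the truncated Bb."""
    Bb = carleman(b, N)
    d, V = np.linalg.eig(Bb)                       # Bb = V * diag(d) * V^-1
    Bb_h = V @ np.diag(d ** h) @ np.linalg.inv(V)  # Bb^h = V * diag(d^h) * V^-1
    Vx = complex(x) ** np.arange(N)                # row vector V(x)~
    return (Vx @ Bb_h)[1]                          # y = V(y)[1]
```

In the convergent range (e.g. b = sqrt(2)) this reproduces b^x at h = 1 and composes two half-iterates into approximately one full iterate; outside that range the quality of the approximation is, as said above, unknown.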

-------------

Following my hypothesis about the further structure of the eigen-matrices, we can compute this structure analytically.

I assumed:

W^-1 = dV(1/t)*P^-1 ~ * X^-1

and

W = X * P~ * dV(t)

P is the known lower triangular Pascal matrix, X is a lower triangular matrix depending on t and u, and dV(t) is a diagonal matrix containing the consecutive powers of t. The eigenvalues in D are the consecutive powers of u.
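For concreteness, the two "known" factors can be sketched in a few lines of Python/numpy (function names mine):

```python
import numpy as np

def pascal_lower(N):
    """Lower triangular Pascal matrix: P[r, c] = binomial(r, c)."""
    P = np.zeros((N, N))
    P[:, 0] = 1.0
    for r in range(1, N):
        for c in range(1, r + 1):
            P[r, c] = P[r - 1, c - 1] + P[r - 1, c]   # Pascal recurrence
    return P

def dV(t, N):
    """Diagonal matrix of consecutive powers: diag(t^0, t^1, ..., t^(N-1))."""
    return np.diag(t ** np.arange(N))
```

P implements the binomial shift V(x)~ * P~ = V(x+1)~, and dV(t) rescales V(x) into V(t*x); both facts are what makes the rearrangement of the matrix product work.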

Here t and u depend on the base parameter b, such that

t^(1/t) = b

u = log(t)

With my fixpoint-tracer I can find a solution for t given b, at least for some values of b outside the range e^-e < b < e^(1/e), and it seems to work even for complex values of b (I still have to verify this for the general case of complex b).
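Aside from a numerical fixpoint-tracer, t can also be written in closed form via the Lambert W function: t^(1/t) = b means t = b^t, so t*exp(-t*log(b)) = 1 and hence -t*log(b) = W(-log(b)). A sketch using scipy (function name mine; the branch index selects which fixpoint is returned):

```python
import numpy as np
from scipy.special import lambertw

def base_to_t(b, branch=0):
    """Solve t^(1/t) = b, i.e. t = b^t (t is a fixpoint of x -> b^x).
    From -t*log(b) = W(-log(b)) we get u = log(t) = t*log(b) = -W(-log(b))
    and t = exp(u)."""
    lb = np.log(complex(b))
    u = -lambertw(-lb, branch)
    return np.exp(u), u   # returns (t, u)
```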

The entries of X are finite polynomials in t and u, and since X is triangular, its inverse as well as X^-1 * D^h * X can be computed exactly/symbolically, without need of approximation.

As I have already indicated in a previous post, the direct computation of W^-1 is *not* possible without approximation, since P^-1 ~ is not row-finite. But the order of evaluation of the whole matrix expression can be arranged so that only one step requires approximate summation, giving inexact values, and this can be made the last step.

The whole formula in the analytical decomposition is

V(y)~ = V(x)~ * Bb^h

V(y)~ = V(x)~ * W^-1 * D^h * W

= V(x)~ * (dV(1/t) * P^-1 ~ * X^-1) * D^h * (X * P~ * dV(t))

Exploiting associativity we implement this by

= (V(x)~ * dV(1/t) * P^-1 ~) * X^-1 * D^h * X * (P~ * dV(t))

Since V(x)~ * dV(1/t) = V(x/t)~ and V(x/t)~ * P^-1 ~ = V(x/t - 1)~, and the trailing factor P~ * dV(t) correspondingly turns V(y/t - 1)~ back into V(y)~, this reduces to

V(y/t-1)~ = V(x/t-1)~ * X^-1 * D^h * X

Denote M(t,h) = X^-1 * D^h * X

then

V(y/t-1)~ = V(x/t-1)~ * M(t,h)

M(t,h) is computed with exact terms, and only its second column is needed for the final computation. Call these terms m_r, using r for the row index.

Let (x/t-1) = z then

y/t-1 = sum(r=0..inf) m_r * z^r

and finally

y = ((sum(r=0..inf) m_r * z^r ) + 1)*t

The terms of the series m_r * z^r still do not converge over the short run of 32 terms which I have available. But they are far better summable by Euler summation than the terms which occur if I apply the naive form of summation using the second column of the precomputed Bb^h.
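For illustration, here is the plain first-order Euler transformation in Python (function name mine; my actual computations use higher-order variants of Euler summation, so this is only the simplest instance of the technique):

```python
import numpy as np

def euler_sum(terms):
    """First-order Euler transformation of sum_r terms[r].
    Writing terms[r] = (-1)^r * a_r, the transformed sum is
    sum_k (-1)^k * (Delta^k a)(0) / 2^(k+1), which often converges
    even when the original alternating series diverges."""
    a = np.array([(-1) ** r * t for r, t in enumerate(terms)], dtype=complex)
    total = 0j
    for k in range(len(a)):
        total += (-1) ** k * a[0] / 2 ** (k + 1)
        a = a[1:] - a[:-1]   # forward difference Delta a
    return total
```

For example, the divergent geometric series sum_r (-2)^r is assigned its analytic value 1/(1+2) = 1/3 already from 32 terms.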

Possibly the problems which remained for the matrix method can be reduced to mere acceleration of convergence (or regular summation of alternating divergent series), and are then just a matter of technical improvements of the numerical methods. That would be a very nice outcome, because we would then have a firm basis from which only optimizations are required...

(This is worth a good glass of wine tonight :-) )

Gottfried

Gottfried Helms, Kassel