open problems / Discussion
#1
Here I want to start a thread where we may discuss the "open problems".

I think it is best to have such a separate thread, because then the open-problems survey list is not littered with scratchpad-like discussions.
Once such a discussion has been settled, the result can be carried back into the "open problems" thread.

Hope you agree with this.
Gottfried Helms, Kassel
#2
Gottfried Wrote:Here I want to start a thread where we may discuss the "open problems".

I think it is best to have such a separate thread, because then the open-problems survey list is not littered with scratchpad-like discussions.

Absolutely right.
#3
Gottfried Wrote:b) for different fixpoints/different branches of W

Exactly; the question is why this works only for the lower real fixed point.

What also interests me burningly is how this looks for other functions with an attractive fixed point.
I think that for our example \( f(x)=x^2+x-1/16 \), with fixed points \( \pm 1/4 \), it no longer works that the eigenvalues of the truncated Carleman matrices converge to the powers of the derivative at one fixed point.
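For a quick check of this claim, here is a minimal numerical sketch (code and conventions are illustrative only, not anyone's original computation): build the size-N truncation of the Carleman matrix of \( f(x)=x^2+x-1/16 \) and watch whether its eigenvalues settle as N grows.
Code:
import numpy as np

def carleman(coeffs, size):
    """Truncated Carleman matrix M with M[n, k] = coefficient of x^k
    in f(x)^n, so that M * V(x) = V(f(x)) with V(x) = (1, x, x^2, ...)^T.
    (Transposing gives the row-vector convention used later in this
    thread; the eigenvalues are the same either way.)"""
    M = np.zeros((size, size))
    p = np.array([1.0])                                  # f(x)^0 = 1
    for n in range(size):
        m = min(len(p), size)
        M[n, :m] = p[:m]
        p = np.polynomial.polynomial.polymul(p, coeffs)  # next power of f
    return M

f = np.array([-1/16, 1.0, 1.0])          # ascending: -1/16 + x + x^2
for size in (8, 16, 32):
    ev = np.sort(np.abs(np.linalg.eigvals(carleman(f, size))))
    print(size, ev[:6])    # do the smallest eigenvalues stabilize with size?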
#4
bo198214 Wrote:
Gottfried Wrote:b) for different fixpoints/different branches of W

Exactly; the question is why this works only for the lower real fixed point.

What also interests me burningly is how this looks for other functions with an attractive fixed point.
I think that for our example \( f(x)=x^2+x-1/16 \), with fixed points \( \pm 1/4 \), it no longer works that the eigenvalues of the truncated Carleman matrices converge to the powers of the derivative at one fixed point.

What about the branches of Lambert W?
Gottfried Helms, Kassel
#5
Gottfried Wrote:What about the branches of Lambert W?

Don't know. But the interesting thing is that the lower real fixed point is preferred. To achieve a different branch of the Lambert W function you have to choose a different matrix/function.
#6
bo198214 Wrote:
Gottfried Wrote:What about the branches of Lambert W?

Don't know. But the interesting thing is that the lower real fixed point is preferred. To achieve a different branch of the Lambert W function you have to choose a different matrix/function.

No, that's not what I meant, though it may well be that it's not useful.

The same Carleman matrix C can be decomposed using the different fixpoints. So we have the impossibility: the same trace of C would have to equal two different geometric sums (over the diagonals built from the different u; in the case b=sqrt(2) these are 1/(1-log(2)) and 1/(1-2 log(2))).
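For reference, the two values arise as follows (a standard fixpoint computation, spelled out here for convenience): the real fixpoints of \( f(x)=\sqrt{2}^x \) are \( t_0=2 \) and \( t_1=4 \), and at a fixpoint \( t \) the derivative is \( f'(t)=t\ln\sqrt{2} \), so
\( u_0 = 2\ln\sqrt{2} = \ln 2 \approx 0.693, \qquad u_1 = 4\ln\sqrt{2} = 2\ln 2 \approx 1.386. \)
The diagonal of the triangularized matrix consists of the powers \( u^k \), so the trace is formally the geometric sum \( \sum_{k\ge 0} u^k = \frac{1}{1-u} \), giving \( \frac{1}{1-\ln 2} \) for the lower fixpoint and \( \frac{1}{1-2\ln 2} \) for the upper one.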

What I had in mind was whether at least the derivation of the W'-formula allows the same solution using different u, by considering different branches.
This would not resolve the impossibility, but it would be an interesting detail.
Gottfried Helms, Kassel
#7
I think it oscillates. The solution oscillates between these 2 values, and both branches are present in it. GFR has many times tried to suggest this approach: use of both Lambert branches at the same time (a 2-valued function), or alternatively a function implying a hidden time variable.

Can the summation be looked upon as the behaviour of partial sums?

Like when you have the series 1-1+1-1+1-1..., an oscillating series: it has partial sums 1,0,1,0,..., also oscillating between 2 values, but with value 1/2.

So the question then is: which series give the values 1/(1-log(2)) and 1/(1-2log(2))?

Obviously, the first one is 1 + log(2) + log(2)^2 + log(2)^3 + ...

The second is 1 + 2log(2) + (2log(2))^2 + (2log(2))^3 + ... = 1 + log(4) + log(4)^2 + log(4)^3 + ...

If we used the binary log instead of log base e, the first series would become:

1+1+1+1+1+... with partial sums 1,2,3,4,5,6,... and (regularized) value -1/2.

The second:

1+2+4+8+16+32+64+... with partial sums 1,3,7,15,31,63,127,... and value 1/(1-2) = -1.

So the values of the 2 series together oscillate between -1/2 and -1:

-1/2, -1, -1/2, -1, ...

The difference between the terms of the oscillation is 1/2 or -1/2; the second difference is +1 or -1; the third difference is +2 or -2, etc.

The question is: shall we use log base e in the case of sqrt(2), when it is natural to use base 2?
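A small numerical sketch of this comparison (illustrative only; note that the q=1 case 1+1+1+... is excluded, since its value -1/2 is the zeta-regularized value \( \zeta(0) \), which a plain geometric continuation cannot reproduce):
Code:
import numpy as np

# Each candidate series is geometric: sum_{k>=0} q^k.
# For |q| < 1 the partial sums converge to 1/(1-q); for q > 1 they
# diverge, and 1/(1-q) is the analytically continued value.
for q, label in [(np.log(2),     "q = log 2     "),
                 (2 * np.log(2), "q = 2 log 2   "),
                 (2.0,           "q = 2 (binary)")]:
    partial = np.cumsum(q ** np.arange(16.0))
    print(f"{label}: partial sums -> {partial[-3:]}, "
          f"continued value 1/(1-q) = {1 / (1 - q):+.6f}")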

Admittedly, this post of mine is somewhat confusing.

Ivars
#8
bo198214 Wrote:What also interests me burningly is how this looks for other functions with an attractive fixed point.
I think that for our example \( f(x)=x^2+x-1/16 \), with fixed points \( \pm 1/4 \), it no longer works that the eigenvalues of the truncated Carleman matrices converge to the powers of the derivative at one fixed point.

Well, I see that the sequence of sets of eigenvalues of truncations of increasing size of the Carleman matrix C does not stabilize. But we have this with the Bb-matrices as well, if we use b outside 1..e^(1/e), so I don't think we really have a problem here. The canned eigensystem solvers do not work analytically but with characteristic polynomials determined by the truncation size, giving best-fitting results for those sizes.

The trace of C is a divergent sum; however, with Cesàro summation of increasing order I get stabilization at ~ 2.0 from order 4.5 on, where order 1.0 means direct summation. (Surprisingly, Euler summation seems not to be applicable here.) This result would back the assumption that the eigenvalues consist of the infinite set (1, 1/2, 1/4, 1/8, ...), whose sum is exactly 2.
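A sketch of this kind of experiment (my own reconstruction, not Gottfried's actual code): build the diagonal of C, then apply Cesàro (C, alpha) means via the standard binomial weights, which admit fractional order. Note that "order 1.0 = direct summation" in the text corresponds to alpha = 0 below.
Code:
import numpy as np
from scipy.special import binom

def trace_terms(nmax):
    """d_n = coefficient of x^n in (x^2 + x - 1/16)^n: the diagonal of C,
    whose (divergent) sum is the trace."""
    f = np.array([-1/16, 1.0, 1.0])          # ascending coefficients of f
    p, terms = np.array([1.0]), []
    for n in range(nmax):
        terms.append(p[n])
        p = np.polynomial.polynomial.polymul(p, f)
    return np.array(terms)

def cesaro_means(a, alpha):
    """(C, alpha) means of sum a_k:  A_m^alpha / binom(m+alpha, m)  with
    A_m^alpha = sum_k binom(m-k+alpha, m-k) a_k; alpha may be fractional."""
    out = []
    for m in range(len(a)):
        j = np.arange(m, -1, -1)             # j = m - k for k = 0..m
        out.append(np.dot(binom(j + alpha, j), a[:m + 1]) / binom(m + alpha, m))
    return np.array(out)

a = trace_terms(48)
for alpha in (0.0, 1.0, 3.5, 4.5):           # alpha = 0: plain partial sums
    print(alpha, cesaro_means(a, alpha)[-3:])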

-----------------------

I've done some eigenanalysis using the fixpoint-shifts, where fixpoint 1 is t0 = -1/4 and fixpoint 2 is t1=+1/4.

Call C the transposed Carleman-matrix for your function f(x), such that

\( V(x)\sim * C = V(f(x))\sim \)

as usual.

The triangular matrices U0 and U1 are generated by the functions g0(x) and g1(x), which represent the fixpoint-shifted versions, such that
\( \hspace{24}
f(x) = g_0(x')'' \) or \( V(x')\sim * U_0 = V(f(x)')\sim \) where \( x'=x-t_0 \) and \( x''=x+t_0 \)
or
\( \hspace{24}
f(x) = g_1(x')'' \) or \( V(x')\sim * U_1 = V(f(x)')\sim \) where \( x'=x-t_1 \) and \( x''=x+t_1 \)
(' and '' indicate the appropriate shifting). Explicitly, \( g_0(x) = f(x+t_0)-t_0 = x^2+\frac12 x \) and \( g_1(x) = f(x+t_1)-t_1 = x^2+\frac32 x \).

Then:
U0 has the eigenvalues \( D_0=diag([1,1/2,1/4, 1/8,...]) \)
U1 has the eigenvalues \( D_1=diag([1,3/2,9/4,27/8,...]) \)

With the assumption that these eigenvalues also hold as possible solutions for C, we need to show that we can find infinitely many invariant vectors X_k such that \( X_k\sim * C = d_k* X_k\sim \).

If I generate such eigenvectors via U0 resp. U1, I get the following possible results (consistent with the assumptions of eigensystem decomposition):

a1) U0 = W0^-1 * D0 * W0
or
a2) W0 * U0 = D0 * W0

b1) U1 = W1^-1 * D1 * W1
or
b2) W1 * U1 = D1 * W1

and rearranging the fixpoint-shift

c1) C = X0^-1 * D0 * X0 = X1^-1 * D1 * X1

or in vector-invariance-notation:

c2a) X0*C = D0 * X0
c2b) X1*C = D1 * X1
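To make the rearrangement from a2)/b2) to c1)/c2) explicit (the shift-matrix notation \( P_a \) is mine, not from the posts above): let \( P_a \) be the binomial (Pascal-type) matrix with \( V(x)\sim * P_a = V(x+a)\sim \), so that \( P_{-t_0}^{-1}=P_{t_0} \). The defining relation \( V(x-t_0)\sim * U_0 = V(f(x)-t_0)\sim \) then reads \( V(x)\sim * P_{-t_0}\, U_0 = V(x)\sim * C\, P_{-t_0} \) for all x, hence \( C = P_{-t_0}\, U_0\, P_{t_0} \). Combining with a2):
\( (W_0 P_{t_0})\, C = W_0\, U_0\, P_{t_0} = D_0\, (W_0 P_{t_0}) \),
so \( X_0 = W_0 P_{t_0} \), and analogously \( X_1 = W_1 P_{t_1} \).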


With fixpoint t0(=-1/4) I show the first 3 invariant vectors X0[0..2,0..inf]
Code:
  1  -1/4  1/16  -1/64  1/256  -1/1024   ...
  0     1  -1/2   3/16  -1/16    5/256   ...
  0    -4     3   -3/2    5/8   -15/64   ...
such that

X0[0,] * C = 1 *X0[0,]
X0[1,] * C = 1/2*X0[1,]
X0[2,] * C = 1/4*X0[2,]

and with fixpoint t1(=1/4) I show the first 3 invariant vectors X1[0..2,0..inf]
Code:
  1   1/4  1/16  1/64  1/256  1/1024  ...
  0     1   1/2  3/16   1/16   5/256  ...
  0   4/3   5/3     1  11/24  35/192  ...
such that

X1[0,] * C = 1 *X1[0,]
X1[1,] * C = 3/2*X1[1,]
X1[2,] * C = 9/4*X1[2,]

which, when continued, shows that both assumptions for the eigenvalues of C lead to possible solutions for the eigenvectors.
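These tables can be reproduced in exact rational arithmetic along exactly these lines; here is a minimal sketch (variable names and conventions are mine, not Gottfried's actual code): build the triangular matrix of coefficients of the powers of \( g_0 \), get its eigenvectors by back-substitution, and undo the fixpoint shift with binomial coefficients.
Code:
from fractions import Fraction as Fr
from math import comb

N = 6
t0 = Fr(-1, 4)
g0 = [Fr(0), Fr(1, 2), Fr(1)]      # g0(x) = f(x + t0) - t0 = x/2 + x^2

def polymul(a, b):
    """Multiply two coefficient lists (ascending powers)."""
    out = [Fr(0)] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

# U[n][k] = coeff of x^k in g0(x)^n; triangular since g0 has no constant term
U, p = [], [Fr(1)]
for _ in range(N):
    U.append([p[k] if k < len(p) else Fr(0) for k in range(N)])
    p = polymul(p, g0)
d = [U[n][n] for n in range(N)]    # diagonal = eigenvalues (1/2)^n

for j in range(3):
    # eigenvector of the triangular U for eigenvalue d[j], by back-substitution
    w = [Fr(0)] * N
    w[j] = Fr(1)
    for n in range(j - 1, -1, -1):
        w[n] = -sum(U[n][k] * w[k] for k in range(n + 1, j + 1)) / (d[n] - d[j])
    # undo the fixpoint shift: X0[j][m] = sum_k binom(m, k) * t0^(m-k) * w[k]
    row = [sum(comb(m, k) * t0 ** (m - k) * w[k] for k in range(min(m, j) + 1))
           for m in range(N)]
    print(j, row)
This prints the rows (1, -1/4, 1/16, ...), (0, 1, -1/2, 3/16, ...), (0, -4, 3, -3/2, ...); changing t0 to Fr(1, 4) and g0 to [Fr(0), Fr(3, 2), Fr(1)] gives the diagonal (3/2)^n and reproduces the second table the same way.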

The Schröder functions \( \sigma_0(x) \), \( \sigma_1(x) \) and their inverses are nasty; to determine their power-series coefficients we need, for all except \( \sigma_1^{-1} \), divergent summation of high order as long as we don't have analytical expressions. So not only do the results of the Schröder functions depend on the truncation, but even their coefficients themselves are only more or less good approximations. Only for \( \sigma_1^{-1} \) do we have coefficients based on evaluation of convergent series...
Gottfried Helms, Kassel
#9
Gottfried Wrote:
bo198214 Wrote:What also interests me burningly is how this looks for other functions with an attractive fixed point.
I think that for our example \( f(x)=x^2+x-1/16 \), with fixed points \( \pm 1/4 \), it no longer works that the eigenvalues of the truncated Carleman matrices converge to the powers of the derivative at one fixed point.

Well, I see that the sequence of sets of eigenvalues of truncations of increasing size of the Carleman matrix C does not stabilize. But we have this with the Bb-matrices as well, if we use b outside 1..e^(1/e), so I don't think we really have a problem here.

No problem? You mean that the coefficients of the functional exponential or of the \( t \)-th iterate converge nevertheless (with increasing matrix size)?

