Gottfried Wrote: Hmm, this point disappeared for me when I read your post the first time.
So could I say that what I always called "my hypothesis about the set of eigenvalues" is already settled by this?
For the infinite case it is clear (by the existence of Schroeder functions at the fixed points) that \(\{u^n : n\in\mathbb{N}\}\) is a set of eigenvalues of Bb for each fixed point t of b^x, where u = log(t) is the derivative of b^x at t.
Quote: Further it may then be interesting to develop arguments for the degenerate case... In my "increasing size" analyses (of the truncated Bell matrix) the curious aspect appeared for base b = eta = e^(1/e): the eigenvalues show decreasing distances between them, but new eigenvalues also pop up, and each new eigenvalue was smaller than the previous one. See the page "Graph". So there are competing tendencies, and it may be fruitful to analyse why/how this could be compatible with the/a limit case, where they are assumed to approach 1 asymptotically.
I really have no idea about the structure of Eigenvalues of truncations in this case. But somehow the non-integer iterations seem to converge in this case too (?)
bo198214 Wrote:I really have no idea about the structure of Eigenvalues of truncations in this case. But somehow the non-integer iterations seem to converge in this case too (?)
Hmm, two untested ideas, just from thinking about this:
1) The infinite alternating series 1 - b^1 + b^b^1 - b^b^b^1 + - ... corresponds to the matrix M = (I + Bb)^-1, and M's eigenvalues, if b = eta, should then all be 1/2 (since the eigenvalues of Bb itself are assumed to all be 1 there) and converge to this value with increasing size of the truncation. If the observed eigenvalues of the truncated matrices actually converge to this, may we then conclude something for the matrix Bb itself? (See the first sketch below.)
2) the "b^x - 1" - iteration is closely related to the original tetration and expressible by a triangular matrix-operator with an obvious set of eigenvalues. Perhaps from here we can deduce an answer?
I'll look at this in the next days (maybe Thursday).
Gottfried
bo198214 Wrote: You probably mean
B * W(B)[col] = W(B)[col] * d[col]
but that's just the matrix version of the Schroeder equation \(\sigma(b^x) = u\,\sigma(x)\).
If you find a solution \(\sigma\) to the Schroeder equation, you make the Carleman/column matrix out of it and you have a solution of this infinite matrix equation; and vice versa, if you have a solution W(B) to the above matrix equation that is the Carleman/column matrix of a power series \(\sigma\), then this power series is a solution to the Schroeder equation. Nothing new is gained by this consideration.
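Just to have that correspondence concrete in numbers, a small sketch (the Carleman convention V(x) * Bs = V(s^x) and the helper carleman_sx are my assumptions for illustration; base s = sqrt(2), fixed point t = 2, u = log(2)): the eigenvalues of the truncated Bs should then come out near the powers u^k that the Schroeder equation predicts.

```python
import numpy as np
from math import factorial, log, sqrt

def carleman_sx(s, n):
    """n x n truncation of the Carleman matrix of f(x) = s^x
    (assumed convention V(x) * Bs = V(s^x))."""
    ls = log(s)
    return np.array([[(k * ls) ** j / factorial(j) for k in range(n)]
                     for j in range(n)])

s, t = sqrt(2.0), 2.0        # base s with fixed point t (s^t = t)
u = log(t)                   # derivative of s^x at the fixed point
n = 24
Bs = carleman_sx(s, n)
ev = np.sort_complex(np.linalg.eigvals(Bs))[::-1]    # largest real part first
print(np.round(ev[:6], 4))
print(np.round([u ** k for k in range(6)], 4))       # 1, u, u^2, ... from Schroeder
```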
I'm trying to understand the problem of uniqueness now. By the eigensystem I do not only have the one equation for the first row (reflecting the meaning of the fixpoints),

W^-1 [,0] * Bs = d[0,0] * W^-1[,0] = 1*W^-1[,0]

whose series expression may not determine f(x) = s^x uniquely (or is it the other way round?)
(Henryk suggested some examples, for instance a mixture using coefficients of an overlaid sine function, if I understood this correctly);
I also have infinitely many equations, one for each row of W^-1. Say u = log(t), where t^(1/t) = s = base; then
W^-1 [,1] * Bs = u^1*W^-1[,1]
W^-1 [,2] * Bs = u^2*W^-1[,2]
...
(where the second row, if I see it right, happens to express the series for the first derivative with respect to the top exponent x)
and so on.
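A small numerical check of these row equations (same assumed convention and the same example base s = sqrt(2), t = 2, u = log(2) as in the sketch above): take W from the eigendecomposition of the truncated Bs, so each row of W^-1 is a left eigenvector, and compare the factors with the powers u^k.

```python
import numpy as np
from math import factorial, log, sqrt

def carleman_sx(s, n):
    """Assumed convention V(x) * Bs = V(s^x)."""
    ls = log(s)
    return np.array([[(k * ls) ** j / factorial(j) for k in range(n)]
                     for j in range(n)])

s, t, n = sqrt(2.0), 2.0, 24
u = log(t)
Bs = carleman_sx(s, n)
d, W = np.linalg.eig(Bs)                  # Bs * W = W * diag(d)
order = np.argsort(-d.real)               # sort eigenpairs, largest first
d, W = d[order], W[:, order]
Winv = np.linalg.inv(W)
for k in range(4):
    row = Winv[k, :]                      # row k of W^-1: a left eigenvector
    resid = np.linalg.norm(row @ Bs - d[k] * row)
    print(k, round(d[k].real, 4), round(u ** k, 4), f"residual={resid:.1e}")
```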
The same sine-overlay should not be sufficient to satisfy all these further equations.
But this is mere speculation here, since I have not yet understood in what way the uniqueness problem comes into play.
Gottfried