Matrix-method: compare use of different fixpoints

bo198214 (Administrator), 11/12/2007, 11:52 AM

Gottfried Wrote:
> Hmm, this point disappeared for me when I read your post the first time. So could I say that what I have always called "my hypothesis about the set of eigenvalues" is already settled?

For the infinite case it is clear (by the existence of Schröder functions at the fixed points) that $\{\ln(a)^n: n\in\mathbb{N}\}$ is a set of eigenvalues of $B_b$ for each fixed point $a$.

Gottfried Wrote:
> Further, it may then be interesting to develop arguments for the degenerate case... In my "increasing size" analyses (of the truncated Bell matrix), a curious aspect appeared for base $b=\eta$: the distances between the eigenvalues decrease, but new eigenvalues also pop up, each new one smaller than the previous (see the page "Graph"). So there are competing tendencies, and it may be fruitful to analyse why and how this could be compatible with the (or a) limit case, where the eigenvalues are assumed to approach 1 asymptotically.

I really have no idea about the structure of the eigenvalues of the truncations in this case. But somehow the non-integer iterations seem to converge in this case too (?)

Gottfried (Ultimate Fellow), 11/12/2007, 03:13 PM (last modified: 11/12/2007, 03:17 PM)

bo198214 Wrote:
> I really have no idea about the structure of the eigenvalues of the truncations in this case. But somehow the non-integer iterations seem to converge in this case too (?)

Hmm, two untested ideas, just from thinking about this:

1) The infinite alternating series $1 - b + b^b - b^{b^b} + \cdots$ corresponds to the matrix $M = (I + B_b)^{-1}$, and $M$'s eigenvalues, if $b=\eta$, should then all be $1/2$ and converge to this value with increasing truncation size.
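[Editorial note: the spectral relation behind this idea can be checked on small truncations. The sketch below is not from the thread; it assumes the row convention $B[i,j]=(i\ln b)^j/j!$ for the truncated Carleman matrix of $b^x$, and verifies only the exact mapping $\lambda \mapsto 1/(1+\lambda)$ between the spectra of $B$ and $M=(I+B)^{-1}$, which holds at every truncation size.]

```python
import math
import numpy as np

n = 8                      # truncation size, kept small for good conditioning
b = math.exp(1 / math.e)   # b = eta = e^(1/e)
lnb = math.log(b)          # = 1/e

# Truncated Carleman matrix of f(x) = b^x:
# row i holds the Taylor coefficients of f(x)^i = b^(i*x) = sum_j (i*ln b)^j x^j / j!
B = np.array([[(i * lnb) ** j / math.factorial(j) for j in range(n)]
              for i in range(n)])

M = np.linalg.inv(np.eye(n) + B)   # the matrix of the alternating-series idea

# If B v = lam*v, then (I+B) v = (1+lam) v, hence M v = v/(1+lam):
# same eigenvectors, eigenvalues mapped through lam -> 1/(1+lam).
lam, V = np.linalg.eig(B)
assert np.allclose(M @ V, V * (1 / (1 + lam)), atol=1e-7)

print("eigenvalues of M:", np.sort(1 / (1 + lam)))
```

Gottfried's conjecture would then say that the printed values drift toward $1/2$ as $n$ grows; the assertion above only confirms the spectral mapping itself, not the limit behaviour.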
If the observed eigenvalues of the truncated matrices actually converge to this, what may we conclude for the matrix $B_b$ itself?

2) The "$b^x - 1$" iteration is closely related to the original tetration and is expressible by a triangular matrix operator with an obvious set of eigenvalues. Perhaps from there we can deduce an answer?

I'll look at this in the coming days (maybe Thursday).

Gottfried

Gottfried Helms, Kassel

Gottfried (Ultimate Fellow), 11/13/2007, 05:48 PM (last modified: 11/13/2007, 05:54 PM)

bo198214 Wrote:
> You probably mean B * W(B)[col] = W(B)[col] * d[col], but that is just the matrix version of the Schröder equation $\sigma(\exp_b(x))=c\sigma(x)$. If you find a solution $\sigma$ to the Schröder equation, you make the Carleman/column matrix $W(B)$ out of it and you have a solution of this infinite matrix equation; and vice versa, if you have a solution $W(B)$ of the above matrix equation that is the Carleman/column matrix of a power series $\sigma$, then this power series is a solution of the Schröder equation. Nothing new is gained by this consideration.

I'm trying to understand the problem of uniqueness now. Through the eigensystem I do not have only the one equation for the first row of W^-1 (imagine the meaning of the fixed points),

W^-1[,0] * Bs = d[0,0] * W^-1[,0] = 1 * W^-1[,0],

whose series expression may fail to determine f(x) = s^x uniquely (or the other way round?). (Henryk suggested some examples, for instance a mixture using coefficients of an overlaid sine function, if I understood this correctly.) Rather, I have infinitely many equations, one for each row of W^-1. Say u = log(t), where t^(1/t) = b is the base; then I also have

W^-1[,1] * Bs = u^1 * W^-1[,1]
W^-1[,2] * Bs = u^2 * W^-1[,2]
...

(where the second of these rows happens to express the series for the first derivative with respect to the top exponent x, if I see it right), and so on. The same sine overlay should not be sufficient to satisfy all these further equations.
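[Editorial note: the Schröder equation quoted above can be solved concretely on a truncated power series. The sketch below is not from the thread; it assumes the example base $b=\sqrt2$ with lower fixed point $t=2$, shifts the fixed point to the origin via $g(y)=b^{t+y}-t$ (so $g'(0)=u=\ln 2$), and determines the coefficients of $\sigma$ with $\sigma(g(y))=u\,\sigma(y)$ order by order, which is possible because $u\ne 0,1$.]

```python
import math
import numpy as np

N = 10
b = math.sqrt(2.0)
t = 2.0                      # fixed point: b**t == t
u = t * math.log(b)          # g'(0) = ln(b)*b**t = t*ln(b) = ln(2)

# g(y) = b**(t+y) - t = t*exp(y*ln b) - t, so g_k = t*(ln b)**k / k!, g_0 = 0.
g = np.array([t * math.log(b) ** k / math.factorial(k) for k in range(N)])
g[0] = 0.0

def mul(p, q):
    """Product of two power series, truncated to N coefficients."""
    return np.convolve(p, q)[:N]

# Powers g^1 .. g^(N-1), needed to compose sigma with g.
gpow = [None, g.copy()]
for m in range(2, N):
    gpow.append(mul(gpow[-1], g))

# Solve sigma(g(y)) = u*sigma(y) order by order, with sigma = y + s_2 y^2 + ...
# Coefficient of y^k on the left is s_k*u^k + (terms in s_1..s_{k-1}).
s = np.zeros(N)
s[1] = 1.0
for k in range(2, N):
    lower = sum(s[m] * gpow[m][k] for m in range(1, k))
    s[k] = lower / (u - u ** k)

# Verify the Schroeder equation coefficient-wise up to the truncation order.
lhs = sum((s[m] * gpow[m] for m in range(1, N)), np.zeros(N))
assert np.allclose(lhs, u * s, atol=1e-10)
```

Arranged as a column/Carleman matrix, these coefficients give a finite analogue of the W(B) in the quote; Gottfried's row equations above are then the left-eigenvector counterparts, with eigenvalues u^k.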
But this is mere speculation at this point, since I have not yet understood where the uniqueness problem comes into play.

Gottfried

Gottfried Helms, Kassel

andydude (Long Time Fellow), 11/30/2007, 05:24 PM

bo198214 Wrote:
> I simply don't see how you get fixed points involved. You have the Carleman matrix $B_b$ of $b^x$, you uniquely decompose the finite truncations ${B_b}_{|n}$ into ${B_b}_{|n}=W_{|n} D_{|n} {W_{|n}}^{-1}$, and then you define ${B_b}^t = \lim_{n\to\infty} W_{|n} {D_{|n}}^t {W_{|n}}^{-1}$. Where are the fixed points used?

In the usual eigen-decomposition, the diagonal consists of powers of $f'(x_0)$, where $x_0$ is the fixed point. Maybe this was already answered.

Andrew Robbins
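[Editorial note: Andrew's remark can be made concrete on a truncation. The sketch below is not from the thread; it assumes the example base $b=\sqrt2$ with fixed point $x_0=2$. Conjugating $f(x)=b^x$ to $g(y)=f(x_0+y)-x_0$ moves the fixed point to the origin, and the Carleman matrix of $g$ is then triangular with diagonal entries $f'(x_0)^i$, exactly the powers Andrew names.]

```python
import math
import numpy as np

N = 8
b = math.sqrt(2.0)
x0 = 2.0                      # fixed point of f(x) = b**x
fp = math.log(b) * b ** x0    # f'(x0) = ln(b)*b**x0 = ln(2)

# g(y) = f(x0 + y) - x0 = x0*exp(y*ln b) - x0: no constant term, g'(0) = f'(x0).
g = np.array([x0 * math.log(b) ** k / math.factorial(k) for k in range(N)])
g[0] = 0.0

# Carleman matrix of g: row i holds the Taylor coefficients of g(y)**i.
C = np.zeros((N, N))
C[0, 0] = 1.0
p = np.zeros(N)
p[0] = 1.0
for i in range(1, N):
    p = np.convolve(p, g)[:N]
    C[i] = p

# Because g(0) = 0, g**i starts at y**i: C is upper triangular,
# and its diagonal holds g'(0)**i = f'(x0)**i.
assert np.allclose(np.tril(C, -1), 0.0)
assert np.allclose(np.diag(C), fp ** np.arange(N))
```

So the powers of f'(x_0) appear explicitly once the fixed point is shifted to the origin; in the raw truncated decomposition bo198214 describes, no shift is performed, which is presumably why the fixed point enters only implicitly through the limit.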
