Matrix Operator Method

Gottfried
Ultimate Fellow
Posts: 758  Threads: 117  Joined: Aug 2007
08/27/2007, 03:09 PM (This post was last modified: 08/27/2007, 03:12 PM by Gottfried.)

bo198214 Wrote:
> Hm, then I must have an error somewhere in my computation. I wanted to compute the matrix logarithm via the formula $\log(A)=\sum_{n=1}^\infty \frac{(-1)^{n+1}}{n} (A-I)^n$, for A being the power derivation matrix of $e^x$ at 0, i.e. truncated to 6x6.

Hmm, I can crosscheck such a computation in the evening. But for a quick reply: did you observe the partial sums of the entries for increasing n, i.e. s_n(i,j) = sum_{k=1..n} entry_k(i,j)? If their signs oscillate, we have a candidate for Euler summation (for each entry separately!).

Another attempt may be the alternate series for the logarithm: with f = (a-1)*(a+1)^-1,

log(a) = 2*(f + f^3/3 + f^5/5 + ...)

(I don't have it at hand and cannot check whether this series actually has alternating signs, but I think it is correct.) The same formula can be used for a matrix A, where (A+I) is invertible and *all* eigenvalues are in the admissible range for this series.

Anyway, it is worth looking at the partial sums; Euler summation of the low order 2 can sum, for instance, 1-2+4-8+16-+..., which is even more divergent than many of our sums, requiring only a few terms for a good approximation of the final result. For a more detailed discussion I can do better in the evening.

Gottfried

Gottfried Helms, Kassel

bo198214
Administrator
Posts: 1,389  Threads: 90  Joined: Aug 2007
08/27/2007, 07:15 PM (This post was last modified: 08/27/2007, 07:17 PM by bo198214.)

Gottfried Wrote:
> If their signs oscillate ...

No, their signs don't oscillate. The reason is simple: some of the eigenvalues are greater than 1, and hence the logarithm series no longer converges. This can apparently be fixed with the series $\log(A)=\sum_{n=0}^\infty \frac{2}{2n+1} \left((A-I)(A+I)^{-1}\right)^{2n+1}$, which then properly converges.
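The corrected series is a Gregory-type series for the matrix logarithm. A minimal numerical sketch (my own illustration, not from the thread; the 2x2 test matrix, the 60-term truncation, and the exponential-series crosscheck are arbitrary choices):

```python
import numpy as np

def gregory_logm(A, terms=60):
    # log(A) = sum_{n>=0} 2/(2n+1) * F^(2n+1),  with F = (A-I)(A+I)^-1
    I = np.eye(A.shape[0])
    F = (A - I) @ np.linalg.inv(A + I)
    F2 = F @ F
    term, total = F.copy(), np.zeros_like(A)
    for n in range(terms):
        total += 2.0 / (2 * n + 1) * term
        term = term @ F2
    return total

# eigenvalues 2 and 3: the plain series sum (-1)^(n+1)/n (A-I)^n diverges,
# but the Gregory series converges, since the eigenvalues of F are 1/3 and 1/2
A = np.array([[2.0, 1.0], [0.0, 3.0]])
L = gregory_logm(A)
# crosscheck: the exponential series (always convergent) should reproduce A
E, term = np.eye(2), np.eye(2)
for k in range(1, 30):
    term = term @ L / k
    E += term
assert np.allclose(E, A)
```

Since A is upper triangular, the diagonal of L comes out as log 2 and log 3, as expected.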
Gottfried
Ultimate Fellow
Posts: 758  Threads: 117  Joined: Aug 2007
08/27/2007, 08:15 PM (This post was last modified: 08/27/2007, 08:18 PM by Gottfried.)

bo198214 Wrote:
> Gottfried Wrote:
> > If their signs oscillate ...
>
> No, their signs don't oscillate. The reason is simple: some of the eigenvalues are greater than 1, and hence the logarithm series no longer converges. This can apparently be fixed with the series $\log(A)=\sum_{n=0}^\infty \frac{2}{2n+1} \left((A-I)(A+I)^{-1}\right)^{2n+1}$, which then properly converges.

Well, I have it in my "Hütte Mathematische Tafeln":

$\log\left(\frac{1+x}{1-x}\right) = 2\left(x + \frac{x^3}{3} + \frac{x^5}{5} + \dots\right) \qquad \text{for } |x|<1$

$\log\left(\frac{x+1}{x-1}\right) = 2\left(\frac{1}{x} + \frac{1}{3x^3} + \frac{1}{5x^5} + \dots\right) \qquad \text{for } |x|>1$

To apply one of these series to a matrix, all eigenvalues must satisfy the same bound. I think this settles the question for the most interesting cases. For matrices with eigenvalues both <1 and >1, which occur with the Bs-matrices for s outside the range e^(-e) ... e^(1/e), we still need workarounds, like the techniques for divergent summation. But that is then a completely different chapter.

I'm happy that we have now arrived at a convergence of understanding of (one of?) the core points of the concepts.

Gottfried

Gottfried Helms, Kassel

bo198214
Administrator
Posts: 1,389  Threads: 90  Joined: Aug 2007
08/29/2007, 05:28 PM

Gottfried Wrote:
> With A1 = A-I then log(A) = A1 - A1*A1/2 + A1*A1*A1/3 - ..., which is a nice exercise, since it turns out that A1 is nilpotent and we can compute an exact result using only as many terms as the dimension of A. For the infinite-dimensional case one can note that the coefficients are constant when dim is increased step by step; only new coefficients are added below the previously last row.

Are you sure about this? To me it rather looks as if they converge. The eigenvalues are quite different depending on the truncation.
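The nilpotency remark can be illustrated with a unitriangular stand-in (my own sketch, not the matrix from the thread): for the lower triangular Pascal matrix P, A1 = P - I is strictly triangular, hence nilpotent, and the log series terminates exactly after n-1 terms.

```python
import numpy as np
from math import comb

n = 6
P = np.array([[comb(i, j) for j in range(n)] for i in range(n)], dtype=float)
N = P - np.eye(n)                 # strictly lower triangular, so N^n = 0
L, term = np.zeros((n, n)), np.eye(n)
for k in range(1, n):             # the log series is exact after n-1 terms
    term = term @ N
    L += (-1) ** (k + 1) / k * term
# known closed form: log(P) has the subdiagonal 1, 2, ..., n-1, zeros elsewhere
assert np.allclose(L, np.diag(np.arange(1.0, n), k=-1))
```

Increasing n here only appends new rows of coefficients, matching the quoted observation for the triangular case.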
Even in the case $b<\eta$, where you can compute the logarithm via the infinite matrix power series, it should depend on where you truncate the matrix.

Gottfried
Ultimate Fellow
Posts: 758  Threads: 117  Joined: Aug 2007
09/03/2007, 12:46 PM

bo198214 Wrote:
> Gottfried Wrote:
> > With A1 = A-I then log(A) = A1 - A1*A1/2 + A1*A1*A1/3 - ..., which is a nice exercise, since it turns out that A1 is nilpotent and we can compute an exact result using only as many terms as the dimension of A. For the infinite-dimensional case one can note that the coefficients are constant when dim is increased step by step; only new coefficients are added below the previously last row.
>
> Are you sure about this? To me it rather looks as if they converge. The eigenvalues are quite different depending on the truncation. Even in the case $b<\eta$, where you can compute the logarithm via the infinite matrix power series, it should depend on where you truncate the matrix.

Henryk - please excuse the delay. I was a bit exhausted after my search for the eigensystem solution.

Well, I did not choose the best wording. What I wanted to say was: if A1 is nilpotent, then the series is finite. But A1 is only nilpotent if the base is e (then the diagonal of A is 1 and the diagonal of A-I is zero) - I had in mind that we were talking about this base. Then the entries of the partial sums up to the power d do not change for the rows up to index d. The coefficients of b^x, or of T(b,x,0)-->T(b,x,1), are those of the appropriate exponential series and are just collected in a matrix scheme. This is obviously iterable,

Code:
V(x)~   * dV(log(b)) * B = V(b^x)~
V(b^x)~ * dV(log(b)) * B = V(b^b^x)~

and since the result is finite, the involved coefficients, even if seen as matrix multiplications, should be well defined.
Code:
Bb = dV(log(b)) * B
V(x)~ * Bb^h = V({b,x}^^h)~

But I think it must also be formally proven that this "collection of coefficients" does in fact behave as a matrix, so that the rules for integer powers and other matrix operations are applicable. For the easier version U() one finds a discussion of this for instance in Aldrovandi/Freitas [AF], which refers to the triangular form of the required operator matrix; the useful property of row-finiteness is also addressed for instance in Berkolaiko [B], who proves the existence of the matrix operator for any similar transformation. I'm not sure whether I can use Berkolaiko's arguments for square matrices (and thus for the original tetration iteration T()) as well, so I must leave open "whether the collection of coefficients can indeed be used as a matrix".

But the numerical results, which are always approximations based on finite truncation, suggest that integer iteration and matrix powers are interchangeable for the T()- and U()-transformations as well:

a) numerical results by iteration and by matrix powers coincide.

b) linear combinations of different V(x)-vectors result in linear combinations of the corresponding V(y)-vectors, as expected.

c) infinite sums of various V(x)-vectors give the expected results (tetra-geometric series). For a subset of parameters the result can be crosschecked by conventional scalar evaluation (possibly Euler summation required) and agrees with these results.
d) even infinite sums of powers of Bb give results which are compatible with conventional scalar evaluation when the parameter b is in the safe range; this can also be extended to parameters b outside this range (b > e^(1/e)). (Analytical arguments which back that observation are based on the eigensystem hypothesis, see below.)

------ continuous tetration ------------------------

The continuous version then depends completely on the possibility of interpreting the collection of coefficients used in the integer version as a matrix, including the option to take matrix logarithms and to diagonalize (eigensystem decomposition), since we need the concept of fractional iteration and thus of fractional matrix powers.

Numerically

Again the numerical results agree with the expected results for the safe range of the base parameter b and manageable height parameters h. Numerical approximations of the matrix logarithm and of the diagonalization, using finite dimensions up to dim=32 and dim=64, already show the expected behaviour in a certain range of the parameters, although the empirical eigenvalues have an unknown degree of approximation to the "true" eigenvalues, which are unknown. Anyway, approximate results can be found even for some b outside the "safe range" (if not too far) and h not too high. Just use your favorite eigen-solver and apply fractional powers...

Analytically

Based on a hypothesis about the structure of the eigenvalues, a formal and analytical solution for the eigenvectors was found for parameters b in the "safe range". Again, the results for these parameters agree with the expected values when approximated by scalar operations, and fractional iterates were computed for some examples. It was also possible to apply the eigensystem hypothesis for values b > e^(1/e) when the required parameters were taken from the set of complex fixpoints for those b-parameters.
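Point a) - that integer iteration and matrix powers coincide - is easy to reproduce numerically for the T()-case. A sketch under my own assumptions (dim=32 and b=sqrt(2), a safe-range base); from V(x)~ * Bb = V(b^x)~, column j of the truncated Bb holds the Taylor coefficients of b^(j*x), i.e. Bb[i,j] = (j*log(b))^i / i!:

```python
import numpy as np
from math import factorial, log

dim, b, x = 32, 2 ** 0.5, 1.0
lnb = log(b)
# truncated matrix operator: Bb[i, j] = (j*log(b))^i / i!
Bb = np.array([[(j * lnb) ** i / factorial(i) for j in range(dim)]
               for i in range(dim)])
V = lambda x: np.array([x ** i for i in range(dim)])   # V(x)~ = (1, x, x^2, ...)
y1 = (V(x) @ Bb)[1]          # one application: b^x
y2 = (V(y1) @ Bb)[1]         # iterate: b^(b^x)
assert abs(y1 - b ** x) < 1e-9
assert abs(y2 - b ** (b ** x)) < 1e-9
```

A matrix power gives the same thing in one step: `(V(x) @ np.linalg.matrix_power(Bb, 2))[1]` also approximates b^(b^x), up to truncation error.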
The matrices for some example parameters could be reproduced perfectly, for instance for b=3 and b=7, h=1, matching the matrices given by the simple integer approach. Integer powers were also reproduced correctly.

Still one problem: fractional powers for b > eta

A problem occurred with non-integer powers here. The fractional powers of matrices, when constructed based on the eigensystem hypothesis, were different from the matrices computed by a numerical eigensystem solver. Possibly the results are just complex rotations of each other (but I didn't confirm this yet), or the hypothesis must be modified to use complex conjugates or the like.

--------------------------------------------------------

The formula for the computation of the eigensystem makes use of the following decomposition. Let D be the diagonal matrix containing the eigenvalues, dV(u), and W the matrix of eigenvectors, such that

Code:
Bb = W^-1 * D * W

Then a further decomposition, where b = t^(1/t), t possibly complex,

Code:
W = X * P~ * dV(t)

and u = log(t), leads to the full decomposition

Code:
Bb = (dV(t^-1) * P^-1~ * X^-1) * dV(u) * (X * P~ * dV(t))

where arbitrary fractional or complex powers of Bb are expressed by the same powers of the scalar elements of the diagonal eigenvalue matrix

Code:
D = dV(u)

Here X and P are triangular (and thus invertible); X, dV(t) and dV(u) depend on t, but only dV(u) is modified by the tetration height parameter h, so that we must insert D^h = dV(u^h). The coefficients of X are finite polynomials in t and u, having denominators which are products of (1-u), (1-u^2), (1-u^3), ..., which indicates singularities inherent to this method (see also Daniel's recent post; I'll add the reference later, I'm writing this in a notepad without the http-references at hand).
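The "favorite eigen-solver" route for fractional powers can be sketched in a few lines (my own illustration, with assumed dim=16 and b=sqrt(2); note numpy factors Bb = W D W^-1, the transpose of the convention Bb = W^-1 D W used above):

```python
import numpy as np
from math import factorial, log

dim, b = 16, 2 ** 0.5
lnb = log(b)
Bb = np.array([[(j * lnb) ** i / factorial(i) for j in range(dim)]
               for i in range(dim)])
# numerical eigendecomposition; numpy convention: Bb @ W = W @ diag(d)
d, W = np.linalg.eig(Bb)
Bh = (W * d.astype(complex) ** 0.5) @ np.linalg.inv(W)   # Bb^(1/2) = W D^(1/2) W^-1
assert np.allclose(Bh @ Bh, Bb, atol=1e-6)               # a genuine matrix square root
# read off the half-iterate of x -> b^x from column 1
half = lambda x: (np.array([x ** i for i in range(dim)]) @ Bh)[1].real
y = half(half(1.0))
assert abs(y - b) < 1e-2    # {b,1}^^1 = b, approximately (truncation error remains)
```

For b > eta the eigenvalues become complex and the principal branch of `d ** 0.5` need not yield the wanted half-iterate, which is exactly the branch problem described in this post.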
While all coefficients of the individual matrices are then finitely computable, the analytical computation of W^-1 still involves the evaluation of infinite series, because P^-1~ is not row-finite (for numerical computation it may simply be approximated by numerical inversion of W). In practice this means there will be competing methods for the numerical evaluation of the complete matrix product, in the sense of choosing the best order of evaluation by exploiting the associativity of the matrix products.

-----------------------------------------

So everything is still based on heuristics and hypotheses, which should now be proven in order to close the case. For me it is now nearly beyond doubt that this method - for its implicit and underlying definition of tetration - will come out as a formally coherent/consistent method, at least as a formal skeleton/description.

But - besides the need for formal proofs - I still have two problems:

1) the described matrix approach allows dimensions of 24, 32, 64 and thus only that many terms for the final series. This is far too few for a general implementation; even for tests of approximations it is often too few. Jay announced series with up to 700 terms - since this seems to be possible, the method should be reviewed in this respect.

2) fractional powers of the analytically (eigensystem-)constructed Bb-matrices for b > eta. I could exploratively try to find out what goes wrong with the result and look for a remedy this way. But I would first like to have a hypothesis for the reason (and the remedy) of this problem. It surely has to do with the different branches of the complex logarithm, but I don't see how this is precisely involved in the current case. The integer powers come out fine...

The U()-problem seems to be simpler than the T()-problem, since we deal only with triangular matrices with a more obvious eigensystem.
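For the U()-case the triangular structure indeed makes the eigensystem visible on the diagonal: the truncated operator of f(x) = b^x - 1 is lower triangular with eigenvalues (log b)^k. A sketch under my own assumptions (b=2, dim=12, regular half-iterate at the fixpoint 0):

```python
import numpy as np
from math import factorial, log

dim, b = 12, 2.0
lnb = log(b)
# Taylor coefficients of f(x) = b^x - 1 (no constant term: 0 is a fixpoint)
f = np.array([0.0] + [lnb ** i / factorial(i) for i in range(1, dim)])
# triangular operator U: column j = coefficients of f(x)^j, so V(x)~ * U = V(f(x))~
U = np.zeros((dim, dim))
col = np.zeros(dim)
col[0] = 1.0
for j in range(dim):
    U[:, j] = col
    col = np.convolve(col, f)[:dim]      # next power of f, truncated
# diagonal = eigenvalues (log b)^j; half power via eigendecomposition
d, W = np.linalg.eig(U)
Uh = ((W * d.astype(complex) ** 0.5) @ np.linalg.inv(W)).real
g = lambda x: (np.array([x ** i for i in range(dim)]) @ Uh)[1]
x = 0.1
assert abs(g(g(x)) - (b ** x - 1)) < 1e-6   # g is a half-iterate of b^x - 1
```

Because U is triangular with distinct diagonal entries, the truncation does not contaminate the first dim coefficients of the half-iterate, which is why this case is so much more tractable than T().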
Possibly the "closing of the case" will be easier if first attempted via the analysis of the U()-transformation. Anyway, if it cannot be done in the near future I think I'll take a longer break - I'm already feeling a bit exhausted and tired of the subject.

Gottfried

[AF] R. Aldrovandi and L.P. Freitas, "Continuous iteration of dynamical maps", physics/9712026, 16 Dec 1997.
[B] G. Berkolaiko, "Analysis of Carleman Representation of Analytical Recursions", Journal of Mathematical Analysis and Applications 224, 81-90 (1998), Article No. AY985986.

Gottfried Helms, Kassel

Gottfried
Ultimate Fellow
Posts: 758  Threads: 117  Joined: Aug 2007
10/14/2007, 09:32 PM (This post was last modified: 10/15/2007, 03:52 AM by Gottfried.)

One encouraging result for fractional iterates with non-real base

Well, perhaps I was too scared by the difficult approximations of the matrix operator method for the cases where not e^-e < b < eta. With a careful computation of the half-iterate for the complex base I, I got

Code:
y1 = {I,1}^^0.5             ~ 1.16729812784 + 0.735996102206*I
y2 = {I,y1}^^0.5 = {I,1}^^1 ~ 0.000150635188062 + 1.00000615687*I

which is near the expected result, without any need to change my hypothesis.

Remember my hypothetical formula for continuous tetration y = {b,x}^^h, implemented by

Code:
V(y)~ = V(x)~ * (dV(log(b))*B)^h = V(x)~ * Bb^h
y = V(y)[1]

To arrive at the desired result, Bb^h must be constructed by the analytical description:

Let W^-1 * D * W = Bb be the eigendecomposition of Bb, and W^-1 * D^h * W = Bb^h the h'th power of Bb.

This can simply be approximated with any eigensystem solver, when fed with the matrix Bb: exponentiate the eigenvalues and recompute... But the result will then be only heuristic, and we don't know the degree of approximation.

-------------

Following my hypothesis about the further structure of the eigen-matrices, we can compute this structure analytically.
I assumed:

Code:
W^-1 = dV(1/t) * P^-1~ * X^-1
W    = X * P~ * dV(t)

P is the known lower triangular Pascal matrix, X is a lower triangular matrix depending on t and u, and dV(t) is a diagonal matrix containing the consecutive powers of t. The eigenvalues in D are the consecutive powers of u. Here t and u depend on the base parameter b, such that

Code:
t^(1/t) = b
u = log(t)

With my fixpoint-tracer I can first find a solution for t and b (at least) for some values b outside the range e^-e < b [...] t^x-1. However, it is again enormously expensive: the required number of iterations of the Mathar-products is quadratic (precisely: binomial(index-1,2)) in the index. So for index k=20 I already need 171 iterations with ever-growing matrices... surely some shortcuts may be implemented. The working load is in the number of columns of the H-matrices: the final number of columns for k=20 is 1/2*k*(k^2-4k+5) = 3250, and grows cubically with the index.

However - the fact that this simple scheme is able to mimic the symbolic eigendecomposition of a matrix operator is a very astonishing aspect.

========================================================================
(end of mail to seqfan)

So, for our members who did not understand my matrix method with eigendecomposition (or did not trust it ;-) ), here is a completely elementary approach, simple to implement. However, now two hypotheses are involved, which need proof:

a) that the property indeed holds that A-matrices using integer values of the height h contain the denominators d_k as factor

b) that the Mathar-process indeed provides the correct H-matrices. I checked this up to h=20 and did not find an error.

I append also some A-matrices, computed by the above process.
Note: they are transposes compared to my mentioned text about "continuous iteration".

Gottfried

========================================================================
Code:
D_2= (vector)
  -1  1
K_2= (vector)
  1
A_2= (matrix)
  -1  1
--------------------------
D_3= (vector)
  1  -1  -1  1
K_3= (vector)
  1  3  1
A_3= (matrix)
  1  -3  2
  2  -3  1
------------------------------
D_4=
  -1  1  1  0  -1  -1  1
K_4=
  1  7  13  26  31  31  25  13  6  1
A_4=
  -1   7  -12   6
  -6  18  -18   6
  -5  18  -18   5
  -6  11   -6   1
------------------------------
D_5=
  1  -1  -1  0  0  2  0  0  -1  -1  1
K_5=
  1  15  40  100  186  310  490  705  921  1140  1315  1435  1481  1420
  1285  1105  886  660  455  285  166  85  35  10  1
A_5=
   1   -15   50   -60  24
  14   -75  145  -120  36
  24  -130  230  -170  46
  45  -180  275  -180  40
  46  -165  215  -120  24
  26  -105  130   -60   9
  24   -50   35   -10   1
------------------------------------------
D_6=
  -1  1  1  0  0  -1  -1  -1  1  1  1  0  0  -1  -1  1
K_6=
  1  31  121  366  861  1642  2982  4932  7727  11497  16628  23127
  31277  40937  52147  64612  78297  92497  107162  121451  135002  146787
  156632  163631  167871  168862  166802  161541  153616  142981  130527
  116621  102186  87531  73486  60166  48101  37376  28236  20635  14656
  10026  6611  4130  2440  1306  636  255  80  15  1
A_6=
    -1    31   -180   390   -360  120
   -30   270   -870  1290   -900  240
   -89   694  -1920  2515  -1590  390
  -214  1364  -3345  3905  -2190  480
  -374  2025  -4440  4825  -2550  514
  -416  2395  -4995  4925  -2325  416
  -511  2430  -4530  4110  -1800  301
  -461  2006  -3480  2885  -1110  160
  -330  1336  -2055  1495   -510   64
  -154   675   -960   575   -150   14
  -120   274   -225    85    -15    1
-----------------------------------------------------------
D_7=
  1  -1  -1  0  0  1  0  2  0  -1  -1  -1  -1  0  2  0  1  0  0  -1  -1  1
K_7=
  1  63  364  1316  3857  8540  17522  32676  56763  92722  145565  219590
  321413  457324  635782  865844  1158395  1523599  1973805  2519790
  3172421  3942099  4837273  5864971  7030269  8336258  9781520  11364262
  13075818  14906199  16838921  18855277  20927928  23031974  25132905
  27201587  29200578  31099439  32859715  34454231  35847014  37015524
  37930670  38578071  38937003  39005785  38775716  38259081  37461558
  36406349  35109662  33604207  31912699  30072889  28112666  26071493
  23978150  21871710  19778472  17733072  15757701  13878746  12110168
  10468493  8959111  7590492  6361384  5272659  4318307  3494092  2790081
  2198385  1707203  1306193  983171  727447  527625  374374  258812
  173747  112567  69951  41279  22883  11698  5369  2142  686  161  21  1
A_7=
     1     -63     602    -2100    3360   -2520   720
    62    -903    4501   -10500   12600   -7560  1800
   300   -3290   13300   -26740   28700  -15750  3480
   889   -8337   29778   -53340   51590  -25830  5250
  2177  -16485   52269   -86415   78050  -36624  7028
  3368  -25977   78218  -121345  103040  -45360  8056
  5188  -36421  102242  -148715  118615  -49161  8252
  6980  -44604  117719  -161840  121800  -47481  7426
  8007  -48538  120967  -156765  110985  -40635  5979
  7867  -46599  110173  -134960   90160  -30849  4208
  7188  -40215   89656  -102620   63525  -20076  2542
  6111  -30723   63357   -67585   38885  -11340  1295
  4270  -20216   39081   -38220   19600   -5019   504
  2528  -11193   19355   -16695    7525   -1659   139
  1044   -4872    7658    -5425    1890    -315    20
   720   -1764    1624     -735     175     -21     1
--------------------------------------------------------------------------
=======================================================================

Gottfried Helms, Kassel

