Matrix Operator Method
#11
bo198214 Wrote:As Andrew mentioned, the Carleman matrix is the transpose of the Bell matrix (which is basically the power derivation matrix, but without the 0th power and without the 0th coefficient); that's why I asked about the Carleman matrix.

Indeed for a parabolic fixed point at 0 your formula (in PDM form)

gives the regular iteration. So this is a wonderful idea: to simply extend this also to series with no fixed point.

Hendryk -

first I want to say: I feel very comfortable seeing this connection now. I couldn't have expressed this relation myself (although Andrew had already pointed it out to me).


bo198214 Wrote:But what is still not clear to me (I certainly am not that acquainted with matrices) is how the eigenanalysis fits in here. Perhaps you can also answer a question that recently occurred to me:

In Maple there is a function "MatrixFunction" where you can apply an arbitrary real function to a matrix, by somehow manipulating the Eigenvalues.

Maybe the eigenanalysis relates to the fixed-point approach, while the matrix logarithm is an alternative for cases where the eigenvalues are degenerate and/or not accessible (as, for instance, with the C-matrix for the "exp(x)-1" iteration; see below).

--------------

Something about eigensystems - I hope this addresses your remark.

There are two ways to introduce the eigensystem-idea.

(a)
One is the question: is there any vector which, when multiplied by a matrix A, gives a result equal to itself?
In matrix-notation, with a matrix A and a column-vector X:
A * X = X
This may be extended to allow an arbitrary scaling v (the associated eigenvalue) of X, so for instance

A * X = v * X

If X occurs in a context where its entries are to be understood as Euclidean coordinates, this means that X is a vector whose direction is not changed by this specific matrix multiplication, and it thus expresses an interesting characteristic of the matrix A as well.
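(Just to illustrate this invariance numerically: a minimal sketch in Python/numpy; the small 2x2 example matrix is my own assumption, not one of the matrices discussed here.)

[code]
# minimal sketch: an eigenvector's direction is unchanged by A (example matrix assumed)
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigvals, eigvecs = np.linalg.eig(A)    # columns of eigvecs are candidate vectors X

for v, X in zip(eigvals, eigvecs.T):
    # A @ X points in the same direction as X, only rescaled by the eigenvalue v
    print(np.allclose(A @ X, v * X))   # -> True for every eigenpair
[/code]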

(b)
Another way is the question: if I want to analyze functions of A, like for instance powers, then all entries change. But within this complex mathematical entity of a matrix, can we separate constant parts from variable parts and find a description of A and its powers in which the variable part is optimally reduced, for instance factored out? For instance, in the above example
A * X = 1 * X
it means that also
A*A*X = 1 * X or A^2 * X = 1 * X
and A and its powers have, in this sense, a common characteristic.

For a wide class of (square) matrices there is the amazing property that A and its powers can be expressed as a product of two constant parts (specific to A, however), while the variation introduced by functions applied to A occurs only in a diagonal matrix D, so that

A = Q * D * Q^-1

Here Q is the square matrix of eigenvectors, and D is a diagonal matrix containing the eigenvalues.
(Note that not all matrices can be "diagonalized" this way, but Jordan showed - briefly sketched - that every square matrix can at least be reduced to a nearly diagonal canonical form.)

The advantage of this matrix diagonalization is then that sums, products, powers and power series (like the exponential function, sine, etc.) can be expressed in terms of the function applied to the diagonal matrix only. A simple example is:

A^2 = A * A = (Q * D * Q^-1)*(Q * D * Q^-1) = Q * D*D * Q^-1 = Q*D^2*Q^-1

and everything except the diagonal matrix remains constant. Sums or series can be evaluated in this nice way as long as all matrices in the formula share the same eigenvectors Q - which is the case if the series consists, for example, only of powers of A or of the identity matrix I.
(Regarding your Maple question: I don't know Maple, but the function call may simply request such a diagonalization, perhaps caching the eigen-matrices, and then apply the function to the diagonal.)
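To make this concrete, here is a minimal numerical sketch of the decomposition A = Q * D * Q^-1 and of the fact that powers of A only change the diagonal part; the small example matrix and the use of Python/numpy are my own assumptions, not part of the matrices discussed in this thread.

[code]
# sketch: diagonalization A = Q*D*Q^-1, and powers of A touch only D (example matrix assumed)
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

eigvals, Q = np.linalg.eig(A)     # Q: eigenvectors as columns
D    = np.diag(eigvals)           # D: diagonal matrix of the eigenvalues
Qinv = np.linalg.inv(Q)

print(np.allclose(A,     Q @ D     @ Qinv))           # A   = Q * D   * Q^-1
print(np.allclose(A @ A, Q @ D @ D @ Qinv))           # A^2 = Q * D^2 * Q^-1
print(np.allclose(np.linalg.matrix_power(A, 5),
                  Q @ np.diag(eigvals**5) @ Qinv))    # A^5 = Q * D^5 * Q^-1
[/code]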
----------------------------

The eminent advantage of this (regardless of whether one follows the logic of approach (a) or (b)) is that we can apply any such function to a diagonal matrix simply by applying it to the entries of the diagonal, and can thus reduce the very complex task of applying a function to a matrix to the simple calculation of the same function on the scalar entries of the diagonal matrix D.
This then includes fractional, real or complex powers, exponentials and
logarithms of matrices (which would be intractable otherwise), but also geometric series and the like.

The explicit application of a function, like multiplying a matrix A by itself (integral powers, or even inversion as a negative power), or computing the exponential or the matrix logarithm by the common series expansion like

A^2 = A*A or exp(A) = I + A + A*A/2! + A*A*A/3! + ... and

A^2 = Q*D*D*Q^-1 or exp(A) = Q*( I + D + D^2/2! + D^3/3! + ...)*Q^-1

stands then in a one-to-one relation: doing it this way or that way is completely equivalent and leads to the same solution.
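As a small plausibility check of this equivalence, here is a sketch that computes exp(A) once by the naive truncated series and once via the eigensystem; the example matrix, the truncation length and the use of numpy are assumptions of mine.

[code]
# sketch: exp(A) by the truncated series vs. by Q * exp(D) * Q^-1 (example matrix assumed)
import numpy as np

A = np.array([[0.5, 0.2],
              [0.1, 0.3]])

# (1) the exponential series applied to the matrix directly: I + A + A^2/2! + ...
expA_series = np.zeros_like(A)
term = np.eye(2)
for k in range(30):                 # 30 terms are plenty for this small matrix
    expA_series += term
    term = term @ A / (k + 1)       # next term A^(k+1)/(k+1)!

# (2) the same function applied only to the eigenvalues on the diagonal
eigvals, Q = np.linalg.eig(A)
expA_eigen = Q @ np.diag(np.exp(eigvals)) @ np.linalg.inv(Q)

print(np.allclose(expA_series, expA_eigen))    # -> True: both routes agree
[/code]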

Only the numerical aspect differs, in terms of numerical accuracy, and may play a role in that way. Also, the two motivations (finding vectors that are invariant up to scaling vs. finding the eigensystem decomposition) lead in principle to the same solutions, except for numerical advantages or disadvantages due to the order of computation.
So whether we find a tetration solution via the explicit matrix logarithm or via the eigensystem decomposition is only a matter of convenience and of numerical stability.

However, this is not exactly true. There are "borderline" cases. If eigenvalues are of similar or even equal value, or if some are extremely large and others near zero, or even worse, exactly zero, then the eigensystem analysis is numerically unstable, impossible, or at least indeterminate. If an eigenvalue is zero, then the inversion is impossible, because inverting zero would result in infinite values. Or if some eigenvalues are equal, then their associated eigenvectors (the columns in Q) are indeterminate, at least in a rotational sense. General eigen-decomposition routines then mostly give up and don't provide a usable, uniquely defined result.

This is for instance the case for the C-matrix for base e, which implements the (exp(x)-1) iteration: all eigenvalues are equal, and actually equal to 1. An eigensystem solver refuses to do its job, and we are lost. But in this case we may apply the matrix-logarithm procedure implemented by the naive series using A1 = (A-I) and then log(A) = A1 - A1*A1/2 + A1*A1*A1/3 - ..., or some more refined versions, and find the matrix logarithm this way.
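Here is a small sketch of that log-series route for a matrix whose eigenvalues are all 1. I build a small truncated Carleman-type matrix for f(x) = exp(x)-1 in the convention that row j holds the Taylor coefficients of f(x)^j; this convention, the truncation size and the use of numpy/scipy are my own assumptions and need not match the C-matrix setup used elsewhere in this thread.

[code]
# sketch: matrix log via  log(A) = A1 - A1^2/2 + A1^3/3 - ...  for a unipotent matrix
import numpy as np
from math import factorial
from scipy.linalg import expm

n = 8                                            # truncation size (assumed)

# Taylor coefficients of f(x) = exp(x) - 1 up to order n-1
f = np.array([0.0] + [1.0 / factorial(k) for k in range(1, n)])

# Carleman-type matrix: A[j, k] = coefficient of x^k in f(x)^j
A = np.zeros((n, n))
row = np.zeros(n); row[0] = 1.0                  # f(x)^0 = 1
for j in range(n):
    A[j] = row
    row = np.convolve(row, f)[:n]                # coefficients of the next power of f

# A is triangular with unit diagonal, so all eigenvalues are 1 and diagonalization fails;
# but A1 = A - I is nilpotent in this truncation, so the log series terminates.
A1   = A - np.eye(n)
logA = np.zeros((n, n))
P    = A1.copy()
for k in range(1, n + 1):
    logA += ((-1) ** (k + 1) / k) * P
    P = P @ A1

print(np.allclose(expm(logA), A))                # exp(log(A)) reproduces A
[/code]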

For infinite matrices we need approximations by finite dimensions, and it is not always obvious which procedure gives the better approximation. Will the fractional power of matrix A by

A^0.5 = Q * D^0.5 * Q^-1

or by

A^0.5= exp(0.5 * log(A))

give the better or more convenient approximation if we have only dimension dim=32 accessible? Theoretically the results must be equal, but numerically one may be preferable. And if we have equal eigenvalues (which may happen for base e in the "exp(x)-1" tetration or base e^(1/e) in the standard tetration iteration), eigensystem solvers should be helpless in principle because of the internal indeterminacy of the associated eigenvectors in our matrices.
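For a well-behaved (small, assumed) example matrix the two routes can be compared directly; this is only a sketch of the equivalence, not of the tetration matrices themselves, and it uses scipy's expm/logm for the second route.

[code]
# sketch: A^0.5 via the eigensystem vs. via exp(0.5*log(A)) (example matrix assumed)
import numpy as np
from scipy.linalg import expm, logm

A = np.array([[4.0, 1.0],
              [0.0, 2.0]])

# (1) via the eigensystem: A^0.5 = Q * D^0.5 * Q^-1
eigvals, Q = np.linalg.eig(A)
half_eigen = Q @ np.diag(np.sqrt(eigvals)) @ np.linalg.inv(Q)

# (2) via the matrix logarithm: A^0.5 = exp(0.5 * log(A))
half_log = expm(0.5 * logm(A))

print(np.allclose(half_eigen, half_log))         # both routes give the same matrix ...
print(np.allclose(half_eigen @ half_eigen, A))   # ... and its square is A again
[/code]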

-----------------------------------

Well, I have made a long comment up to here. This is also due to Daniel's request in the other thread for hints on how his method is related to others, for which I may give a guess:
Since the idea of a "fixpoint" is the engine of his derivations, I assume that his way is (at least implicitly) equivalent to the eigensystem analysis, one of whose questions was: is there one vector X which remains unchanged by matrix multiplication,

A * X = X * 1

which would then simply reflect the computation of, and the dependency on, the eigenvector associated with the eigenvalue 1. But this is only a guess, since I haven't understood Daniel's method yet.
But if this is the case, then the results should be identical to those of the matrix-exponential method (apart from optimality of approximation and "borderline" cases), since the two are mutually exchangeable one-to-one by matrix theory.

-----------------------------------------------

One more point, to make this post yet another paragraph longer and to give an example of how these eigenvalue properties matter in our context.


In my first contact with tetration, where I derived a method for computing numerical approximations for infinite series of powertowers of increasing height (as(s) denotes the alternating sum of powertowers of the same base s),

as(s) = 1 - s + s^s - s^s^s + s^s^s^s - ... + ... + s^^k - s^^(k+1) +...-

I came across this eigensystem problem for the first time.

1) I described this as() as the result of

V(1)~ * (Bs^0 - Bs^1 + Bs^2 - Bs^3 + ... - ...) = Y~

and y = Y[1] should be the result, if this can be summed at all.

2) The expression in parentheses represents the alternating geometric series in Bs, and according to the remarks about the eigensystem above, this can be rewritten using the formula for geometric series:

V(1)~ * (I + Bs)^-1 = V(y)~

This involves a matrix inversion; and judging whether this is possible in principle (for the ideal infinite case, not only in finite approximations) involves (at least) consideration of the eigenvalues.
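A minimal numerical sketch of 1) and 2), for s = sqrt(2): I build a finite truncation of Bs in the convention V(x)~ * Bs = V(s^x)~ (so Bs[j,k] = (k*ln s)^j / j!); the truncation size, this explicit construction and the rough cross-check at the end are my own assumptions, not the original computation.

[code]
# sketch: as(s) = 1 - s + s^s - s^s^s + - ...  via  V(1)~ * (I + Bs)^-1, for s = sqrt(2)
import numpy as np
from scipy.special import factorial

s   = np.sqrt(2.0)
n   = 32                                         # truncation size (assumed)
lns = np.log(s)

# Bs[j,k] = (k*ln s)^j / j!   =>   (V(x)~ * Bs)[k] = exp(x*k*ln s) = (s^x)^k
j  = np.arange(n).reshape(-1, 1)
k  = np.arange(n).reshape(1, -1)
Bs = (k * lns) ** j / factorial(j)

V1 = np.ones(n)                                  # V(1)~ = (1, 1, 1, ...)
Y  = V1 @ np.linalg.inv(np.eye(n) + Bs)          # the geometric-series formula
print(Y[0])                                      # ~ 0.5, the summed value of 1 - 1 + 1 - ...
y_matrix = Y[1]                                  # the entry carrying as(s)

# rough cross-check: for s = sqrt(2) the towers T_k converge to 2, so formally
# as(s) = 1 + sum (-1)^k (T_k - 2), where this last sum converges in the ordinary sense
T, acc = 1.0, 0.0
for i in range(200):
    acc += (-1) ** i * (T - 2.0)
    T = s ** T
print(y_matrix, 1.0 + acc)                       # the two estimates should lie close together
[/code]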

3) From numerical approximations with increasing dimensions I could observe:

for s in the admissible range for infinite powertowers

1/e^e+eps < s < e^(1/e)-eps

the eigenvalues of the matrix Bs approximate the consecutive powers of log(h(s)), where h(s) is Ioannis Galidakis' h()-function, defined implicitly by

t^(1/t) = s ====> h(s)= t

Then let v = log(t).

The set of eigenvalues converges (heuristically) to the set

(1,v,v^2,v^3,...)

So I took (and take) this as a hypothesis for the Bs-matrices of those parameters s with which I worked.
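A small sketch of this observation for s = sqrt(2) (so t = 2, v = log 2), again with an explicitly constructed truncated Bs as above; the truncation size is an assumption, and at such small dimensions only the leading eigenvalues can be expected to approximate the powers of v.

[code]
# sketch: leading eigenvalues of a truncated Bs vs. the powers 1, v, v^2, ... with v = log 2
import numpy as np
from scipy.special import factorial

s, t = np.sqrt(2.0), 2.0
v    = np.log(t)
n    = 24                                        # truncation size (assumed)

j  = np.arange(n).reshape(-1, 1)
k  = np.arange(n).reshape(1, -1)
Bs = (k * np.log(s)) ** j / factorial(j)         # V(x)~ * Bs = V(s^x)~

eig = np.linalg.eigvals(Bs)
eig = eig[np.argsort(-np.abs(eig))]              # largest in modulus first
print(np.round(eig[:6].real, 4))                 # compare these with ...
print(np.round(v ** np.arange(6), 4))            # ... 1, v, v^2, v^3, v^4, v^5
[/code]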

4.a) For abs(v)<1 this means that in the limit of infinite dimension we have eigenvalues lim{k->inf} v^k = 0, so the matrix Bs used here is in principle not invertible, due to the occurrence of 1/0 in the infinite case.
But what has to be inverted when discussing this series is not Bs but (I + Bs), so the limit of the smallest eigenvalue of this sum is 1, and (I + Bs) is invertible. In the inverse M = (I + Bs)^-1 we then have eigenvalues between 1/2 and 1, and the matrix should behave very well (which was well backed by the numerical computations).

4.b) Even if v is negative but abs(v)<1, so 1/e < t < 1 and also 1/e^e < s < 1, this should thus work. (In fact the terms were more difficult to manage; they seemed to diverge, or at least to increase and to oscillate in sign.)

4.c) For s > e^(1/e) we have, for finite dimensions, only a subset of the eigenvalues stabilizing to fixed values > 0, and another subset of empirical eigenvalues whose characteristic I was not able to derive; they seemed, however, to lie in the range 0 < v_k without upper bound. But again: the addition of the identity matrix in the geometric-sum formula shifts the smallest eigenvalue up to 1 as the new minimum, and the inversion then provides a comfortably behaving matrix M, which allows summation of the described form for s beyond the (in-)famous limit e^(1/e).
This then allowed (in my opinion) conjectures concerning the approximation of series like

1 - 10 + 10^^2 - 10^^3 + 10^^4 - ... + ... = as(10)

to values about

as(10) ~ 0.164280...

as a not-completely-unfounded estimate ;-)

--------------

I leave the further details out here, because they are not the focus of this post. The last paragraphs should serve as a more detailed introduction to one specific consideration in which the eigenvalue concept seems unavoidable for the argumentation, besides its usefulness for the approximation and analytical description of continuous tetration. (If you don't know this article already but want to read into it, it is here:
http://go.helms-net.de/math/binomial/10_...rtower.pdf
It's not very sophisticated, to say the least; in fact some mathematicians found it horrible, but it was my first exploration, so...)


To add one concluding statement here:
I expect that all methods which analyze tetration *in terms of power series*, or more precisely of exponential series, will arrive at the same values; they seem to be only different views of the same matter, and may have relative advantages under certain circumstances.
For me the first choice is the eigenvalue decomposition; where it is impossible in principle or has indeterminacies, I use the matrix-logarithm method.
The "fixed-point" discussion seems to involve/imply the eigensystem approach, and the first row vector of Q^-1, associated with the eigenvalue 1, seems to reflect just this fixpoint.
For s = sqrt(2), t = 2, it is just the vector [1,2,4,8,16,...], in arbitrary scaling, which makes

[1,2,4,8,16,...] * Bs = [1,2,4,8,16,... ] ( * 1 )

and "is an invariant vector under transformation by Bs" .

So much for my current opinion -

Gottfried
Gottfried Helms, Kassel