(10/14/2013, 08:22 PM)MikeSmith Wrote: http://en.wikipedia.org/wiki/List_of_matrices to get more familiar with ideas about matrices

Hi Mike -

wow, the Wikipedia list is long...

Well, for me / for the Carleman-matrix approach, the following are relevant.

Shape ("which parts of the matrix are not 'systematically' zero?"):

* diagonal,

* subdiagonal,

* triangular and

* square.

Matrices with constant entries:

* triangular with ones filled in,

* the Pascal matrix (lower or upper triangular, with binomial entries),

* the Stirling matrices S1, S2 of the first and second kind,

* the standard Vandermonde matrix VZ.

Matrices/vectors with variable entries:

* Vandermonde vector of argument x: V(x) = [1, x, x^2, x^3, ...], in row, column or diagonal ("dV(x)") form

* Vandermonde matrix - a collection of Vandermonde vectors, for instance VZ = [V(0), V(1), V(2), V(3), ...]

* Z-vector of exponent w: Z(w) = [1, 1/2^w, 1/3^w, 1/4^w, ...] for work with Dirichlet series and derivatives

* Factorial vector of exponent w: F(w) = [0!^w, 1!^w, 2!^w, ...] for work with exponential series
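To make the notation concrete, here is a small Python sketch of truncated versions of these vectors. The truncation size n and the helper names are my own choices, not from the post:

```python
from math import factorial

def V(x, n):
    """Truncated Vandermonde vector [1, x, x^2, ..., x^(n-1)]."""
    return [x**k for k in range(n)]

def dV(x, n):
    """V(x) arranged as an n x n diagonal matrix."""
    return [[x**k if i == k else 0 for k in range(n)] for i in range(n)]

def Z(w, n):
    """Z-vector [1/1^w, 1/2^w, 1/3^w, ...] for Dirichlet series."""
    return [1 / (k + 1)**w for k in range(n)]

def F(w, n):
    """Factorial vector [0!^w, 1!^w, 2!^w, ...] for exponential series."""
    return [factorial(k)**w for k in range(n)]

print(V(2, 5))   # [1, 2, 4, 8, 16]
print(Z(1, 4))   # 1, 1/2, 1/3, 1/4
print(F(1, 4))   # [1, 1, 2, 6]
```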

In principle, this is it.

<hr>

From this, the relevant Carleman matrices can easily be constructed:

A Carleman-type matrix C for some function f(x) has entries such that V(x) * C = V(f(x)).

This means that, unlike with a function defined by a power series - where a set of coefficients and the consecutive powers of an argument x yield one single value, the function value - we here get all consecutive powers of the function value, so that "input" and "output" of the Carleman operation have the same shape: the Vandermonde vector V(.).

The operation / the defined function can then be iterated:

* V(x)*C = V(f(x)),

* V(f(x))*C=V(f(f(x)))

and so on - just by powers of the Carleman matrix.
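As a small Python sketch of this iteration (truncation size n and the helper names are mine): the upper triangular Pascal matrix is the Carleman matrix of f(x) = x+1, so multiplying V(x) by it once, twice, ... walks through V(x+1), V(x+2), ...:

```python
from math import comb

n = 6
# Upper triangular Pascal matrix: P[i][j] = binomial(j, i)
P = [[comb(j, i) for j in range(n)] for i in range(n)]

def vec_mat(v, M):
    """Row vector times matrix."""
    return [sum(v[i] * M[i][j] for i in range(len(v))) for j in range(len(M[0]))]

x = 3
Vx = [x**k for k in range(n)]     # V(3)

once  = vec_mat(Vx, P)            # V(x) * P   = V(x+1)
twice = vec_mat(once, P)          # V(x) * P^2 = V(x+2)

print(once)    # [1, 4, 16, 64, 256, 1024]  = V(4)
print(twice)   # [1, 5, 25, 125, 625, 3125] = V(5)
```

Because P is upper triangular, the truncated product is exact here; for non-triangular Carleman matrices one only gets approximations (see the truncation remark further down).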

Simple Carleman matrices are:

* The identity matrix, for f(x) = x: V(x) * I = V(x) = V(f(x))

* The (upper triangular) Pascal matrix, for f(x) = x+1: V(x) * P = V(x+1)

(this is the operation "inc"; its inverse gives V(x) * P^-1 = V(x-1))

* Its powers give the iteration, i.e. the operation of addition: V(x) * P^h = V(x+h)

(this is the operation "add")

* Any diagonal Vandermonde vector dV(a), for f(x) = a*x: V(x) * dV(a) = V(a*x)

(this is the operation "mul")

* The S2-matrix, "similarity-scaled" by the factorials, fS2F = dF(-1) * S2 * dF(1); then for f(x) = exp(x)-1: V(x) * fS2F = V(exp(x)-1)

(this is the operation "exp-1")

* The S1-matrix, similarity-scaled like S2; then for f(x) = log(1+x): V(x) * fS1F = V(log(1+x))

(this is the operation "log+1")
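The fS2F construction can be checked numerically with a short Python sketch (truncation size n, the variable names, and the test point x = 0.5 are my own choices). Since S2 is lower triangular, a row vector times the truncated matrix gives only partial sums of the underlying series, so the comparison is approximate:

```python
from math import factorial, exp

n = 24

# Stirling numbers of the second kind: S2[i][j], lower triangular,
# via the recurrence S2(i,j) = j*S2(i-1,j) + S2(i-1,j-1)
S2 = [[0] * n for _ in range(n)]
S2[0][0] = 1
for i in range(1, n):
    for j in range(1, i + 1):
        S2[i][j] = j * S2[i - 1][j] + S2[i - 1][j - 1]

# Similarity scaling by factorials: fS2F[i][j] = S2[i][j] * j! / i!
fS2F = [[S2[i][j] * factorial(j) / factorial(i) for j in range(n)]
        for i in range(n)]

x = 0.5
Vx = [x**k for k in range(n)]
out = [sum(Vx[i] * fS2F[i][j] for i in range(n)) for j in range(n)]

y = exp(x) - 1
print(out[1], y)       # both ~ 0.64872...
print(out[2], y**2)    # both ~ 0.42084...
```

The underlying identity is the exponential generating function of the Stirling numbers of the second kind: (exp(x)-1)^j = j! * sum_i S2(i,j) x^i / i!.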

This is the basic material.

Then matrix multiplication, inversion and diagonalization come into play. Here we must be aware that in practice we work with truncated matrices: their dot products are only ideally series, but practically polynomials with finitely many coefficients. We must take care that our results and generalizations are justified, in the sense that they are arbitrarily accurate approximations of the ideal result that the infinite matrix would give.
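The truncation caveat can be illustrated with a small, self-contained Python sketch (the function name, the test point x = 1, and the sizes are my own choices, not from the post): the entry (V(x) * fS2F)[1] of the exp(x)-1 Carleman product is just the partial sum of exp(x)-1, because S2(i,1) = 1 and the factorial scaling contributes 1!/i!. Its error shrinks as the truncation size n grows:

```python
from math import factorial, exp

def approx_exp_minus_1(x, n):
    """Entry (V(x) * fS2F)[1] for an n x n truncation:
    sum_{i=1}^{n-1} x^i / i!, a partial sum of exp(x) - 1."""
    return sum(x**i / factorial(i) for i in range(1, n))

x = 1.0
target = exp(x) - 1
for n in (4, 8, 16):
    print(n, abs(approx_exp_minus_1(x, n) - target))  # error shrinks with n
```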

And so on... For a longer list of matrices and more of the relevant properties, you could look at my older sub-pages with the compilation of matrices related to binomial and Bernoulli coefficients on my homepage http://go.helms-net.de/math/binomial_new/index.htm

One more remark: with my newer experience, many formulations there sound awfully naive and imprecise today, but it was pure explorative curiosity when I collected that information. Simply skimming over it might still give a sense of what I'm doing with these matrices for tetration, even if I am more experienced and sophisticated today.

One of my real achievements then was rediscovering the Bernoulli polynomials by the matrix approach to the problem of sums of like powers: I assumed that sums of like powers can be seen as sums of iterates from x^m to (x+1)^m, and the Carleman matrix for this is simply the Pascal matrix P. So simply replacing P by fS2F means replacing the methods for sums of like powers and Bernoulli polynomials by the methods for tetration, using the same scheme... The article about "sums of like powers" is thus also a good source to see what I'm doing here, and it has the advantage that the infinite iteration, series of iterations and even fractional iteration are, or can be, finally solved...
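The sums-of-like-powers idea can be sketched in a few lines of Python (truncation size n, the number of iterates N, and the variable names are mine): iterating V(0) by the Pascal matrix produces V(0), V(1), V(2), ..., and summing these iterates entrywise reproduces the sums sum_{k=0}^{N-1} k^m:

```python
from math import comb

n = 5    # powers 0..4
N = 10   # number of iterates: V(0), V(1), ..., V(9)

# Upper triangular Pascal matrix, the Carleman matrix of f(x) = x + 1
P = [[comb(j, i) for j in range(n)] for i in range(n)]

v = [1] + [0] * (n - 1)          # V(0)
totals = [0] * n
for _ in range(N):
    totals = [t + vi for t, vi in zip(totals, v)]
    # next iterate: v * P = V(current + 1)
    v = [sum(v[i] * P[i][j] for i in range(n)) for j in range(n)]

# entry m is sum_{k=0}^{9} k^m (with 0^0 counted as 1)
print(totals)    # [10, 45, 285, 2025, 15333]
```

Replacing P by fS2F in the same scheme would sum iterates of exp(x)-1 instead, which is the bridge to tetration described above.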

Gottfried

Gottfried Helms, Kassel