Posts: 763
Threads: 118
Joined: Aug 2007
jaydfox Wrote:I really need to thank you for opening my eyes to a different way of viewing this problem! I was on a flight across the U.S. (from California to Pennsylvania), followed by a 2-hour drive to my final destination, and I was thinking over these matrices the whole time. And then it finally made sense!
Hi Jay 
Very good, indeed! So these lines of script helped much, and possibly I should add such lines more often. In turn, your comments help me to understand the implementation of the slog. Similar to you, I was not able to really open my mind and get insight into the other models, although I tried many times and read our postings and the articles I could access; I was simply absolutely focused on, and absorbed by, the track I was/am on.
Again: good! Meanwhile the discussion has moved on; I'll see what else I can contribute to it.
Gottfried
Gottfried Helms, Kassel
Posts: 763
Threads: 118
Joined: Aug 2007
11/06/2007, 11:51 AM
(This post was last modified: 11/06/2007, 11:57 AM by Gottfried.)
jaydfox Wrote:Now comes the interesting part. Since F(e^x) = F(x) + 1, we can solve F(e^x) - F(x) = 1.
Given P_F as the power series for F (a column vector of coefficients), we can find F(e^x) by multiplying E*P_F, where E is the matrix whose k-th column holds the coefficients of (e^x)^k, and then subtract I*P_F, where I is the identity matrix. Set (E - I)*P_F equal to a column vector of [1, 0, 0, ...]~, and then solve for P_F:
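The quoted procedure can be sketched in a few lines of pure Python. This is only a minimal illustration, not Jay's actual code: the truncation size N and the normalization slog(0) = -1 are my own choices.

```python
from math import factorial, exp

N = 16  # truncation size; larger N gives better coefficients

# Column k of E holds the Taylor coefficients of (e^x)^k = e^(kx),
# i.e. E[n][k] = k^n / n!.  Dropping column 0 of (E - I) leaves a
# square system for the unknown coefficients f_1 .. f_N of F.
A = [[k ** n / factorial(n) - (1.0 if n == k else 0.0)
      for k in range(1, N + 1)] for n in range(N)]
b = [1.0] + [0.0] * (N - 1)  # the column vector [1, 0, 0, ...]~

# Solve A f = b by Gaussian elimination with partial pivoting.
for i in range(N):
    p = max(range(i, N), key=lambda r: abs(A[r][i]))
    A[i], A[p], b[i], b[p] = A[p], A[i], b[p], b[i]
    for r in range(i + 1, N):
        m = A[r][i] / A[i][i]
        for c in range(i, N):
            A[r][c] -= m * A[i][c]
        b[r] -= m * b[i]
f = [0.0] * N
for i in reversed(range(N)):
    f[i] = (b[i] - sum(A[i][c] * f[c] for c in range(i + 1, N))) / A[i][i]

# F(x) = -1 + f_1 x + f_2 x^2 + ..., normalized so that F(0) = -1.
def F(x):
    return -1.0 + sum(fk * x ** (k + 1) for k, fk in enumerate(f))

print(f[0])                  # leading coefficient, roughly 0.916
print(F(exp(0.1)) - F(0.1))  # should be close to 1
```

Note that column 0 of (E - I) is identically zero (the constant term of F never enters the equations), which is why the constant can be normalized freely and the system is solved for f_1 onward.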
Jay, an additional remark.
On the rhs of the equation, as well as in the right term on the lhs, you have only vectors, and it seems even truncated vectors (row 0 missing). When I approach such questions I usually try to generalize the rhs and the right term to square matrices as well; in this case, to have a full identity matrix on the result side of the equation. The advantage of this is that all relations are then also expressible in their inverted view (or it is shown that this is impossible, or only conditionally possible, if such an inversion is not meaningful in a context).
I would like to know what a matrix [P_F0, P_F1, P_F2, ...] looks like, if the [0, 1, 0, 0, ...] vector is also expanded (into an identity matrix), and P_F1 is your P_F with an additional leading row.
Do you know something about this (perhaps it is the eigensystem)? (Or did I simply overlook something obvious?)
Gottfried
Gottfried Helms, Kassel
Posts: 1,389
Threads: 90
Joined: Aug 2007
11/06/2007, 12:46 PM
(This post was last modified: 11/07/2007, 11:05 AM by bo198214.)
Though the question was not addressed to me: the matrix form of F(e^x) = F(x) + 1, or by composition F o exp = tau o F with tau(x) = x + 1, is
E*F = F*P,
where E is the Carleman matrix of exp, F the Carleman matrix of F, and so on; P is the upper triangular Pascal matrix, i.e. the Carleman matrix of x + 1. But this does not help here / gives no way to solve the equation. If we however reduce F to the single column f that holds the coefficients of F itself, the left side reduces to E*f and the right side to f + [1, 0, 0, ...]~ (the matching column of F*P). Then E*f = f + [1, 0, 0, ...]~ and the fate is on its way: (E - I)*f = [1, 0, 0, ...]~.
For the truncated matrices I think E*F = F*P holds for no invertible F, because the Jordan normal form of the truncated E is a diagonal matrix with pairwise different eigenvalues, while the Jordan normal form of the truncated P has only the eigenvalue 1.
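The obstruction for the truncations can be seen even without computing Jordan forms: conjugate matrices must share their trace, and the traces of the truncated E and P already disagree. A small check (the truncation size is my own arbitrary choice):

```python
from math import comb, factorial

N = 8  # truncation size (arbitrary)

# Column k of E holds the Taylor coefficients of (e^x)^k, so E[n][k] = k^n/n!;
# column k of P holds those of (x+1)^k, so P[n][k] = binomial(k, n).
E = [[k ** n / factorial(n) for k in range(N)] for n in range(N)]
P = [[comb(k, n) for k in range(N)] for n in range(N)]

# P is upper triangular with 1s on the diagonal, so its only eigenvalue is 1
# and its trace is exactly N.  The trace of E is much larger; since conjugate
# matrices have equal traces, no invertible F can satisfy E F = F P exactly.
trace = lambda M: sum(M[i][i] for i in range(N))
print(trace(P))  # exactly N
print(trace(E))  # noticeably larger than N
```

This is only a necessary-condition check, of course; the Jordan-form argument above is the stronger statement.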
Posts: 509
Threads: 44
Joined: Aug 2007
Actually, not quite. For that relationship to be true, technically, E, F, T, etc. would have to be Bell matrices of those functions. The PDM/Carleman matrix preserves the order of arguments and the Bell matrix reverses it. One way to think about this is with two categories. Let (Set) be the category of sets with holomorphic functions as morphisms between them, and let (Vec) be the category of vector spaces with matrices as morphisms between them. Then the Carleman matrix would be a covariant functor and the Bell matrix would be a contravariant functor.
Andrew Robbins
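Whichever names one settles on, the two conventions are transposes of each other, and exactly one of them preserves the order of composition. A small pure-Python check, using toy polynomials of my own choosing (zero constant terms, so the truncated products are exact):

```python
N = 6  # truncation order

def pmul(a, b):
    """Multiply two polynomials given as coefficient lists, truncated to N terms."""
    out = [0] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < N:
                out[i + j] += ai * bj
    return out

def rows_of_powers(p):
    """Matrix whose n-th row holds the coefficients of p(x)**n (one convention)."""
    rows, pw = [], [1] + [0] * (N - 1)
    for _ in range(N):
        rows.append(pw)
        pw = pmul(pw, p)
    return rows

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

def transpose(M):
    return [[M[j][i] for j in range(N)] for i in range(N)]

f = [0, 2, 1]    # f(x) = 2x + x^2
g = [0, 3]       # g(x) = 3x
fog = [0, 6, 9]  # (f o g)(x) = f(3x) = 6x + 9x^2

# Row convention: composition maps to a product in the same order ...
print(rows_of_powers(fog) == matmul(rows_of_powers(f), rows_of_powers(g)))
# ... while the transposed (column) convention reverses the order.
print(transpose(rows_of_powers(fog)) ==
      matmul(transpose(rows_of_powers(g)), transpose(rows_of_powers(f))))
```

Both checks print True, which is exactly why the covariant/contravariant distinction matters only for bookkeeping, not for conjugacy questions.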
Posts: 1,389
Threads: 90
Joined: Aug 2007
andydude Wrote:Actually, not quite. What exactly is not quite right?
Quote:The PDM/Carleman matrix preserves the order of arguments and the Bell matrix reverses it.
Hm, perhaps we then used the terms Carleman and Bell matrix with opposite meanings. If I say Bell matrix of a function f, I mean the matrix that has as its n-th row the coefficients of the n-th power of the power series of f. Applying this functor keeps the order of the composition operands. I am not sure which reference uses the term Bell matrix or Carleman matrix, so that we could verify the usual usage.
However, for questions of conjugacy this difference is not really important: a matrix A is conjugate to B if and only if the transpose A~ is conjugate to B~, since C*A*C^(-1) = B implies (C~)^(-1)*A~*C~ = B~.
Posts: 440
Threads: 31
Joined: Aug 2007
Is there a good place to see how the Carleman matrix is defined? I googled it and found lots of references to it in journals, but nothing as accessible and definitional for the layman as a Wikipedia or MathWorld entry.
I've been using a matrix with the columns as the powers of the power series (each power calculated with simple polynomial multiplication), and Henryk's E matrix is just this for the power series of e^x. So at least he and I mean the same thing, even if we're calling it by the wrong name. And for the factorial scaling, I've used dF, since that was Gottfried's notation (I assume the d is for diagonal).
~ Jay Daniel Fox
Posts: 440
Threads: 31
Joined: Aug 2007
11/11/2007, 01:01 AM
(This post was last modified: 11/11/2007, 01:04 AM by jaydfox.)
By the way, I had another random observation about the solutions of the finite truncations of the Abel equation, (E - I)*f = [1, 0, 0, ...]~. Although we're solving for the power series developed at 0, we are comparing the power series at exp(0) = 1 as well.
Well, I remember that when I was calculating various values of the partial solutions, I found that the values were well-behaved to about -1.4, but also to about +2.4. I loosely hypothesized that a partial truncation of, say, a 400-term solution to 200 terms would still appear to have a radius of convergence of about 1.4 centered at 0. But somehow, once all the terms of the full 400-term solution were used, there was also convergence around z = 1 for a rather large radius.
Well, I think I kind of understand why, maybe. You see, if you take a truncation of a power series with a finite radius of convergence and recenter the series at a point fairly distant, say more than half as far out as the radius of convergence, then a good majority of the terms of the recentered series become garbage; and not just any garbage, but garbage with a very large root test, indicating that the region of convergence is still limited to the original circle.
Well, what I've found is that for, say, a 400-term solution to the slog base e, everything after the 150th term or so is garbage anyway, but garbage with a lower root test, I might add. So it was only ever accurate to 150 terms in the first place.
After shifting the center to z = 1, it's still pretty much accurate to about 150 terms, maybe just a little fewer, and the root test is still pretty low after the 150th term. Sure, shifting the center causes us to lose more than half our terms, but it's the half that was already garbage. And this could also be why it appears to have a decent radius of convergence at both z = 0 and z = 1: because it can be shifted and still have a large radius of convergence.
~ Jay Daniel Fox
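The recentering effect described above can be illustrated on a series whose behavior is known in closed form. For 1/(1-x) = sum of x^k, the series recentered at a = 0.5 has exact coefficients 2^(j+1). Recentering a 100-term truncation reproduces the low-order coefficients almost perfectly, while the high-order ones degrade badly, in line with Jay's description (the example series, the center, and the cutoffs here are my own toy choices, not the slog data):

```python
from math import comb

M = 100   # number of terms kept from the series 1/(1-x) = sum x^k
a = 0.5   # new expansion center

# Coefficient j of the recentered (shifted) polynomial:
#   c_j = sum_{k >= j} a_k * binomial(k, j) * a^(k-j),  with a_k = 1 here.
def shifted_coeff(j):
    return sum(comb(k, j) * a ** (k - j) for k in range(j, M))

def rel_error(j):
    exact = 2.0 ** (j + 1)  # true coefficient of 1/(1-x) developed at 0.5
    return abs(shifted_coeff(j) - exact) / exact

print(rel_error(5))   # tiny: the low-order recentered terms are trustworthy
print(rel_error(60))  # large: the high-order terms have become garbage
```

So, as in the slog case, shifting the center mostly destroys terms that a truncation could never have pinned down anyway.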
Posts: 509
Threads: 44
Joined: Aug 2007
The only paper I know of that references both Bell and Carleman matrices (and gives a very nice introduction as well) is "Continuous time evolution from iterated maps and Carleman linearization" by P. Gralewicz and K. Kowalski. That, together with Aldrovandi's work on Bell matrices, has effectively defined the convention of which one is which.
Andrew Robbins
Posts: 509
Threads: 44
Joined: Aug 2007
Also, for Aldrovandi's work, you can see the following online:
R. Aldrovandi, Special Matrices of Mathematical Physics: Stochastic, Circulant, and Bell Matrices (preview)
R. Aldrovandi, L. P. Freitas, Continuous Iteration of Dynamical Maps (fulltext)
I hope this helps.
Posts: 1,389
Threads: 90
Joined: Aug 2007
Ok, then I will swap my usage of Carleman and Bell matrices.
