# Tetration Forum

This post isn't an attempt to share some awesome new insight into how power series work. What I'm about to explain is probably well known and has been for a century or more.

But it interests me. Before I started studying tetration a few months ago, I didn't know that a finite radius of convergence indicates a singularity on the boundary of the disk of convergence. I mean, I was familiar with the idea that some power series have a radius of convergence, and obviously I was familiar with the idea of singularities. But at some point during my minimal formal calculus education (two years of high school calculus, a semester of vector calculus, and a semester of differential equations), my teachers and professors failed to make the connection explicit. And in my personal studies, including a foray into tensor math, I never came across it, probably because by that point, power series were more of an afterthought.

I also had very little experience with complex analysis until I started looking at the tetration problem. Funny too, because I figured that complex math wouldn't factor in: I assumed that tetration would at best be for real numbers.

So now I'm filling in holes in my knowledge, and I'm finding some fascinating things. The first was my discovery that, in the vicinity of the origin, and especially near the primary fixed points, Andrew's slog for base e approximates the sum of conjugate logarithms.

Then just today, I was thinking about the sexp function, having finally decided to turn my attention back to it a couple days ago. Anyway, I was thinking about the singularity at sexp(-2). It occurred to me that in the immediate vicinity of -2, it would look pretty much like $\ln(a_1 z)$, where $a_1$ is the coefficient for the first degree term in the power series for sexp. In fact, this would just reduce to $\ln(z)+\ln(a_1)$.

As such, near the singularity, and assuming no other singularities in the immediate vicinity, the power series for sexp should approximately equal the power series for the natural logarithm.

Sure enough, I took my terms for the power series of the slog at 0, and calculated the reversion of the series to get the power series for the sexp (at -1). When I calculated the power series for the first derivative (a trivial calculation), I found that after the first half dozen terms, the terms of the first derivative of the sexp are alternating plus or minus 1 to within 1% or less, and by the 18th term, they're equal to +/- 1 to within 1 part in a million.

In other words, the terms of the power series of sexp(z-1) converge on the terms for the power series of ln(z+1).
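To make the "reversion of the series" step concrete, here is a minimal order-by-order reversion sketch in Python. It uses exact rational arithmetic and, as a stand-in (since the slog coefficients themselves aren't reproduced here), reverts the series of ln(1+z), whose reversion is e^z - 1:

```python
from fractions import Fraction
from math import factorial

def compose(f, g, order):
    """Coefficients of f(g(z)) mod z^(order+1); g[0] must be 0."""
    out = [Fraction(0)] * (order + 1)
    out[0] = f[0]
    gp = [Fraction(0)] * (order + 1)
    gp[0] = Fraction(1)                      # gp holds g^k, starting at g^0 = 1
    for k in range(1, order + 1):
        new = [Fraction(0)] * (order + 1)    # gp <- gp * g, truncated
        for i, a in enumerate(gp):
            if a:
                for j, b in enumerate(g):
                    if b and i + j <= order:
                        new[i + j] += a * b
        gp = new
        for n in range(order + 1):
            out[n] += f[k] * gp[n]
    return out

def revert(f, order):
    """Series reversion: find g with f(g(z)) = z mod z^(order+1); f[0]=0, f[1]!=0."""
    g = [Fraction(0)] * (order + 1)
    g[1] = 1 / f[1]
    for n in range(2, order + 1):
        c = compose(f, g, n)[n]              # residual at order n while g[n] = 0
        g[n] = -c / f[1]                     # order-n coeff of f(g) is c + f[1]*g[n]
    return g

N = 7
# ln(1+z) = z - z^2/2 + z^3/3 - ...
ln1p = [Fraction(0)] + [Fraction((-1) ** (n + 1), n) for n in range(1, N + 1)]
g = revert(ln1p, N)
print(g[1:])  # the coefficients of exp(z) - 1: 1, 1/2, 1/6, 1/24, ...
```

The order-by-order approach works because, at order n, the unknown coefficient g[n] enters f(g(z)) only linearly through the f[1] term.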

For review, the terms for the power series of slog(z) converge on the terms of the power series of $\log_{c_0}\left(z-c_0\right) + \log_{\overline{c_0}}\left(z-\overline{c_0}\right)$, where $c_0$ is the primary fixed point for exponentiation (base e), and $\overline{c_0}$ is its conjugate. This allowed me to develop an algorithm to speed convergence of Andrew's matrix solution for finding the power series of the slog.
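Since ln(c_0) = c_0 at a fixed point of exponentiation, the z^n coefficient of that sum of conjugate logarithms reduces to -(2/n)·Re(c_0^{-(n+1)}); a short sketch, with the fixed point obtained by iterating the principal log:

```python
import cmath

# Find the primary fixed point C = e^C by iterating the principal log
C = 1 + 1j
for _ in range(200):
    C = cmath.log(C)

# Since ln(C) = C, the z^n coefficient (n >= 1) of
#   log_C(z - C) + log_conj(C)(z - conj(C))
# expanded at 0 is a_n = -(2/n) * Re(C^-(n+1)).
def a(n):
    return -(2.0 / n) * (C ** -(n + 1)).real

print(C)                        # ~ 0.3181315052 + 1.3372357014i
print([a(n) for n in range(1, 4)])
```

The first few values match the log_C(z-C)+log_c(z-c) column of the slog comparison table later in this post.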

What would be really nice is if we could do the same for the matrix solution for the sexp. Unfortunately, it's a non-linear equation, so I'm not entirely sure that this new information helps. However, if one had an iterative solver that could handle the non-linearity and needed only good approximations to start with, then this new information could certainly provide those good approximations.

At this point, I'm wondering if there is anything else that this new information can help me with. For example, I'm still analyzing the graph of the slog, and finding new and interesting facets to its fractal beauty. But I'm not really any closer to being able to use this information to derive a better way to solve for the slog or sexp. In other words, my accelerated solver is still the best tool I have for calculating as much accuracy as possible. More accuracy may be possible, but I don't know yet how to get it.
To illustrate my point a little better, I put together the following comparison of the power series of sexp(z-1) and the power series of ln(z+1):
Code:
|  n |       sexp(z-1)      |       ln(z+1)        |       difference
|----+----------------------+----------------------+----------------------
|  0 |   0.000000000000000  |   0.000000000000000  |   0.000000000000000
|  1 |   1.091767351258320  |   1.000000000000000  |   0.091767351258321
|  2 |  -0.324494761735111  |  -0.500000000000000  |   0.175505238264889
|  3 |   0.349836269767157  |   0.333333333333333  |   0.016502936433824
|  4 |  -0.230854426837443  |  -0.250000000000000  |   0.019145573162557
|  5 |   0.201330212284523  |   0.200000000000000  |   0.001330212284523
|  6 |  -0.164352165253219  |  -0.166666666666667  |   0.002314501413448
|  7 |   0.142836335724572  |   0.142857142857143  |  -0.000020807132570
|  8 |  -0.124694993215245  |  -0.125000000000000  |   0.000305006784755
|  9 |   0.111073542269792  |   0.111111111111111  |  -0.000037568841319
| 10 |  -0.099954567162944  |  -0.100000000000000  |   0.000045432837056
| 11 |   0.090897908329423  |   0.090909090909091  |  -0.000011182579668
| 12 |  -0.083325611455161  |  -0.083333333333333  |   0.000007721878172
| 13 |   0.076920407600429  |   0.076923076923077  |  -0.000002669322648
| 14 |  -0.071427110437354  |  -0.071428571428571  |   0.000001460991218
| 15 |   0.066666069650214  |   0.066666666666667  |  -0.000000597016452
| 16 |  -0.062499703108237  |  -0.062500000000000  |   0.000000296891763
| 17 |   0.058823398084957  |   0.058823529411765  |  -0.000000131326808
| 18 |  -0.055555492550941  |  -0.055555555555556  |   0.000000063004615
| 19 |   0.052631550022682  |   0.052631578947368  |  -0.000000028924687
| 20 |  -0.049999986272057  |  -0.050000000000000  |   0.000000013727943
| 21 |   0.047619041200907  |   0.047619047619048  |  -0.000000006418141
| 22 |  -0.045454542411656  |  -0.045454545454546  |   0.000000003042890
| 23 |   0.043478259432923  |   0.043478260869565  |  -0.000000001436642
| 24 |  -0.041666665984126  |  -0.041666666666667  |   0.000000000682540
| 25 |   0.039999999675934  |   0.040000000000000  |  -0.000000000324066

Edit: I always get confused about how to write a shifted series. If it's the sexp centered at -1, it's not sexp(z-(-1)) or sexp(z+1), it's just sexp(z-1). Arrgh! Likewise, ln centered at 1 is ln(z+1).
Although I've covered it elsewhere, I'll show a comparison for the slog as well, since we're on the subject:

Code:
C = 0.3181315052... + 1.337235701...i
c = 0.3181315052... - 1.337235701...i

|  n |       slog(z)        | log_C(z-C)+log_c(z-c) |       difference
|----+----------------------+-----------------------+----------------------
|  1 |   0.915946056499533  |   0.945130773415607   |  -0.029184716916074
|  2 |   0.249354598672173  |   0.248253690528730   |   0.001100908143443
|  3 |  -0.110464759796431  |  -0.111008639309894   |   0.000543879513463
|  4 |  -0.093936255099859  |  -0.093733042063317   |  -0.000203213036542
|  5 |   0.010003233293232  |   0.010000010486703   |   0.000003222806528
|  6 |   0.035897921594543  |   0.035879454713238   |   0.000018466881305
|  7 |   0.006573401099605  |   0.006575953489817   |  -0.000002552390211
|  8 |  -0.012306859518184  |  -0.012304686001806   |  -0.000002173516378
|  9 |  -0.006389802569157  |  -0.006390235918384   |   0.000000433349227
| 10 |   0.003273589822817  |   0.003273230813856   |   0.000000359008961
| 11 |   0.003769202952828  |   0.003769267345563   |  -0.000000064392735
| 12 |  -0.000280217019537  |  -0.000280141200757   |  -0.000000075818780
| 13 |  -0.001775106557196  |  -0.001775113859078   |   0.000000007301881
| 14 |  -0.000427969957525  |  -0.000427988270446   |   0.000000018312921
| 15 |   0.000679723261244  |   0.000679722859771   |   0.000000000401473
| 16 |   0.000412792618166  |   0.000412797297022   |  -0.000000004678857
| 17 |  -0.000186597783775  |  -0.000186597001042   |  -0.000000000782734
| 18 |  -0.000253549198417  |  -0.000253550392217   |   0.000000001193801
| 19 |   0.000007474329223  |   0.000007473906558   |   0.000000000422666
| 20 |   0.000123166907930  |   0.000123167193596   |  -0.000000000285666
| 21 |   0.000035922663688  |   0.000035922845263   |  -0.000000000181575
| 22 |  -0.000047714769107  |  -0.000047714825731   |   0.000000000056624
| 23 |  -0.000032728894880  |  -0.000032728964565   |   0.000000000069685
| 24 |   0.000012587032851  |   0.000012587037767   |  -0.000000000004916
| 25 |   0.000020005706280  |   0.000020005730774   |  -0.000000000024494
jaydfox Wrote:Then just today, I was thinking about the sexp function, having finally decided to turn my attention back to it a couple days ago. Anyway, I was thinking about the singularity at sexp(-2). It occurred to me that in the immediate vicinity of -2, it would look pretty much like $\ln(a_1 z)$, where $a_1$ is the coefficient for the first degree term in the power series for sexp. In fact, this would just reduce to $\ln(z)+\ln(a_1)$.

As such, near the singularity, and assuming no other singularities in the immediate vicinity, the power series for sexp should approximately equal the power series for the natural logarithm.

Sure enough, I took my terms for the power series of the slog at 0, and calculated the reversion of the series to get the power series for the sexp (at -1). When I calculated the power series for the first derivative (a trivial calculation), I found that after the first half dozen terms, the terms of the first derivative of the sexp are alternating plus or minus 1 to within 1% or less, and by the 18th term, they're equal to +/- 1 to within 1 part in a million.

In other words, the terms of the power series of sexp(z-1) converge on the terms for the power series of ln(z+1).
I suppose this isn't so interesting if you take the power series for sexp(z-1). After all, if you assumed a generic linear critical interval (-1, 0), or even a simple third order approximation, then the interval (-2, -1) would pretty much look like a logarithm, and as such, the power series at -1 (from the left) would pretty much be that of a logarithm.

But where it would get interesting is when you take power series further and further to the right. The power series at z=0, z=1, z=2, etc., would start out looking more and more like iterated exponentials, yet they would still converge on the power series of a logarithm with its singularity at -2.
Jay,

an approximate answer... When I read your two-liner about how to compute the slog, I didn't follow the meaning and the subsequent computation. But anyway, I recognized that you are using the matrix that I call B, modified by subtracting something (like the identity matrix) from the subdiagonal, and that you use this in a matrix-root-solving formula.

Leaving the subdiagonal detail aside, an approximate answer may be that B can be decomposed into the product of a factorial-scaled Stirling matrix of the 2nd kind and a binomial matrix:

B = S2 * P~

and your matrix-root-solving formula was something like

(B - D) *X = Y

where D means the subtraction in the subdiagonal, Y seems to be the second column of the identity matrix, and X holds the terms that you use.
Setting D aside, this is about

S2 * P~ * X = Y

and the root solving uses the inversion of B (well, actually B-D), so

X = P~^-1 * S2^-1 * Y

S2 contains the coefficients of the exponential series in its second column, and S2^-1 = S1 contains the factorial-scaled Stirling numbers of the 1st kind in the same column - which are just 1, -1/2, 1/3, ..., the coefficients of log(1+x).
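A quick check that the second column of S1 = dF^-1 * St1 * dF is exactly the log(1+x) series (a sketch):

```python
from fractions import Fraction
from math import factorial

# Signed Stirling numbers of the first kind: s(n,k) = s(n-1,k-1) - (n-1)*s(n-1,k)
N = 8
s = [[0] * N for _ in range(N)]
s[0][0] = 1
for n in range(1, N):
    for k in range(1, n + 1):
        s[n][k] = s[n - 1][k - 1] - (n - 1) * s[n - 1][k]

# Column k=1 of S1 = dF^-1 * St1 * dF, i.e. 1! * s(n,1) / n!
col = [Fraction(s[n][1], factorial(n)) for n in range(1, N)]
print(col)  # 1, -1/2, 1/3, -1/4, ... -- the coefficients of log(1+x)
```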

So it is no surprise for me to read that for the slog and sexp you get terms which approximate those values. Maybe you can improve your observation by a binomial transform of your terms, so computing
Z = P~ * X
may show an even clearer pattern.

Hope I'm not completely off the path...

Gottfried
Gottfried, I must admit that I haven't put enough study into your matrix methods, having focused all my attention on Andrew's solution.

Can you point me to a discussion that mentions how to compute the coefficients of S2, P, etc., so that I can be sure I'm looking at the right matrices? If there is a relationship between your matrices and the ones I'm using, I'd like to understand it.
Well, I think I've heard of this connection before, but I'm not sure if there are any counter-examples... I think this may only be true of complex-analytic/holomorphic functions (in the domain minus the singularities) or meromorphic functions (in the domain including the singularities), but not true of real-analytic functions, if I recall correctly. But I don't know for sure.

Anyways, I did a plot to educate myself as to what it was you were talking about, and I found some interesting things: the Lambert W-function describes an exponential-like curve through the fixed points, and the curve goes through a lattice-like structure of fixed points. The plot is shown below:

http://tetration.itgo.com/pdf/SuperLogPoles2.pdf

The circle would be the radius of convergence of the slog centered at z=0, given this connection. I was also thinking: since there is a countably infinite number of singularities, does the natural super-logarithm constitute a meromorphic function? Or does the number of singularities have to be finite?

I'm sure there's a better explanation of the grid than "they're close" to $\pi/2\ (\text{mod}\ 2\pi i)$, but right now this is all I can tell. Now I wonder: are all the fixed points close to $\pi/2\ (\text{mod}\ 2\pi i)$, or am I seeing a pattern that does not actually exist?

This is definitely interesting.

PS. I have also noticed that there is an obvious pattern in the number of fixed points between two fixed points. This can be shown by (with $a_k = -W_k(-1)$ a fixed point of exp(x)):
$\begin{array}{rl}
\pi i + a_{0} - a_{-1} & = 0.467121 i \approx 0 \\
5\pi i + a_{1} - a_{-2} & = 0.530701 i \approx 0 \\
9\pi i + a_{2} - a_{-3} & = 0.375917 i \approx 0 \\
13\pi i + a_{3} - a_{-4} & = 0.295789 i \approx 0 \\
17\pi i + a_{4} - a_{-5} & = 0.246132 i \approx 0
\end{array}$
where the n in $n \pi i$ is approximately how many $\pi$ intervals there are between a Lambert W-function fixed point and its conjugate. Notice that all the fixed points along the two exponential curves are obtained from the Lambert W-function, whereas the other fixed points are obtained by adding or subtracting $2\pi i$. It took me a while to realize it was A016813, or (4n + 1), but then it was obvious, since you can see that two fixed points are added on each side every time you go to the right.
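A numerical check of this pattern (a sketch; to stay self-contained it finds the fixed points by iterating log(z) + 2πik directly rather than through the Lambert W-function, so the W-branch labels are not reproduced):

```python
import cmath, math

def fixed_point(k, iters=300):
    # Iterate z -> Log(z) + 2*pi*i*k (principal Log); for each k this settles
    # on the fixed point of exp lying in the k-th strip of the upper half-plane.
    z = 1 + 1j
    for _ in range(iters):
        z = cmath.log(z) + 2j * math.pi * k
    return z

# The identity above says (4k+1)*pi*i + conj(p_k) - p_k is nearly 0,
# i.e. Im(p_k) is close to (4k+1)*pi/2.
for k in range(5):
    p = fixed_point(k)
    print(k, p, (4 * k + 1) * math.pi - 2 * p.imag)
```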

Andrew Robbins
andydude Wrote:I was also thinking that since there is a countably infinite number of singularities, does the natural super-logarithm constitute a meromorphic function? Or does the number of singularities have to be finite?

I'm pretty sure I remember reading that a meromorphic function can have a countably infinite number of singularities, but not an uncountably infinite number.

However, I don't think a meromorphic function can have branches (not least because a branch cut represents an uncountable set), so even the basic logarithm is out.

For the slog, the singularities at the primary fixed points look like ordinary logarithms. I'm still trying to figure out the best way to state this. My latest addition is to make explicit that this holds in a disk around the fixed point with a radius greater than zero but tending towards zero.

Anyway, if we "remove" these singularities, the root test seems to be tending to 0.72, indicating that the "residue", as I have called it, still has a singularity of some sort, most likely at the fixed points. I say most likely, but I know that's where the singularities are, just by casual analysis of the slog. You see, the branches aren't like the branches of the ordinary logarithm. In the ordinary logarithm, corresponding points in any two branches differ by the same constant. Stated more simply, the first and all subsequent derivatives are equal at corresponding points in different branches.

However, with the slog, this is not true. The first few derivatives are different, and I'm willing to wager that all derivatives are different. Hence, even after subtracting out the ordinary logarithm at the fixed point, we still have branches and the equivalent of branch cuts as seen from any point of view. However, rather than differing by constant amounts, the points on neighboring branches differ by an amount that is 0 at the fixed point and increasing as we move away from the fixed point. There is precedent for this: consider $g(z)=z^{\frac{3}{2}}$, which has two branches (it's cyclic as we go around the origin). At 0.01, the values are +0.001 and -0.001, and at 1, the values are +1 and -1.
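A quick numerical illustration of the $z^{3/2}$ example, taking branch $k$ via the $k$-th branch of the logarithm (a sketch, not tied to the slog itself):

```python
import cmath, math

def branch_pow(z, p, k):
    # z^p evaluated on the k-th branch of the logarithm
    return cmath.exp(p * (cmath.log(z) + 2j * math.pi * k))

for z in (0.01, 1.0):
    print(z, branch_pow(z, 1.5, 0), branch_pow(z, 1.5, 1))
# at 0.01 the two branches give ~ +0.001 and -0.001; at 1 they give +1 and -1
```

The gap between the branches grows with |z| instead of staying constant, which is the behavior described above.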

The point is, while the slog looks like an ordinary logarithm at either fixed point, we can't eliminate the branch cuts by subtracting the logarithm, and hence, we can't reduce the root test (thereby increasing the radius of convergence).

However, I think we can subtract the logarithm at sexp(-2)! If we consider a loop around z=-2, with a small radius, then we can look at the graph of the slog to see what will happen. (By a "small radius", I mean less than 1 for sure, though something between 0.5 and 0.75 makes it easy to see on the slog graph.)

The slog of this loop with fixed radius will look like a slightly wavy vertical line, and it allows us to move from branch to branch of the slog as we go up or down multiples of 2*pi*i.

Each major branch of the slog (those right off the "backbone") looks exactly like any other major branch (due to cyclic symmetry), and hence, the branches of the sexp will look the "same" as we loop around the singularity at z=-2. Therefore, subtracting the natural logarithm at z=-2 should eliminate the branch cuts due to this singularity. The radius of convergence of the sexp with the natural logarithm removed should then be limited by the singularity at z=-3. Accordingly, the root test of the power series for sexp(z-1) should be 1, but the root test for sexp(z-1)-ln(z+1) should be 0.5, indicating a radius of convergence of 2.

Furthermore, with a bit of effort, we should be able to determine the coefficients for an ideal version of the singularity at z=-3, allowing us to subtract them out, and get an even smaller "residue" of the sexp. I've already confirmed that the singularity at z=-3 (effectively, something along the lines of ln(ln(z-3)) or so) has non-matching branches, so even after removing this singularity, we'll most likely still have a branch cut and hence a limited radius of convergence. But we should be able to get fairly accurate coefficients, at least beyond the first dozen or so, and perhaps these could be fed into an iterative solver for the non-linear system solution of the sexp. Or perhaps, after going that far, further insights will be within reach...
[updated]

jaydfox Wrote:Gottfried, I must admit that I haven't put enough study into your matrix methods, having focused all my attention on Andrew's solution.

Can you point me to a discussion that mentions how to compute the coefficients of S2, P, etc., so that I can be sure I'm looking at the right matrices? If there is a relationship between your matrices and the ones I'm using, I'd like to understand it.

Jay,

I'll give a short list of the matrices used. I'm preparing a code-pad for Pari/GP with which you can then experiment with these matrices. Perhaps tomorrow or Friday.

In general:
V(x)~ = [1, x, x^2, x^3, ...]
A prefix d declares the diagonal-matrix form.
So, for instance,
V(2)~ = [1, 2, 4, 8, 16, ...]

F = [0!, 1!, 2!, ...]; dF arranges this as a diagonal matrix, and dF^-1 contains the reciprocals (to construct the exponential series, for instance).

P (binomial matrix)
$\hspace{24}
\begin{array}{rrrrrr}
1 & . & . & . & . & . \\
1 & 1 & . & . & . & . \\
1 & 2 & 1 & . & . & . \\
1 & 3 & 3 & 1 & . & . \\
1 & 4 & 6 & 4 & 1 & . \\
1 & 5 & 10 & 10 & 5 & 1
\end{array}$

Two properties of this matrix are important here.

1) Application of the binomial rules, when postmultiplied by a formal powerseries
$\hspace{24}
P * V(x) = V(x+1) \\
P^m * V(x) = V(x + m)$
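A small numerical sketch of property 1, checking that row $n$ of P dotted with V(x) gives $(x+1)^n$:

```python
from math import comb

x = 2.0
V = [x ** n for n in range(6)]                       # V(x)~ = [1, x, x^2, ...]
PV = [sum(comb(n, k) * V[k] for k in range(n + 1))   # row n of P times V(x)
      for n in range(6)]
print(PV)  # [1.0, 3.0, 9.0, 27.0, 81.0, 243.0] = V(x+1)
```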

2) Derivatives
$\hspace{24}
V(x)\sim * P = Y\sim$

Y then contains scaled derivatives of $f(x)=\sum_{k=0}^{\infty} x^k$.

St2 : Stirling numbers of the 2nd kind, version 1 (Abramowitz & Stegun)
$\hspace{24}
\begin{array}{rrrrrr}
1 & . & . & . & . & . \\
1 & 1 & . & . & . & . \\
1 & 3 & 1 & . & . & . \\
1 & 7 & 6 & 1 & . & . \\
1 & 15 & 25 & 10 & 1 & . \\
1 & 31 & 90 & 65 & 15 & 1
\end{array}$

St2 : Stirling numbers of the 2nd kind, version 2 (Wikipedia)
$\hspace{24}
\begin{array}{rrrrrr}
1 & . & . & . & . & . \\
0 & 1 & . & . & . & . \\
0 & 1 & 1 & . & . & . \\
0 & 1 & 3 & 1 & . & . \\
0 & 1 & 7 & 6 & 1 & . \\
0 & 1 & 15 & 25 & 10 & 1
\end{array}$

I use this version here

S2 : factorial scaled version of St2: dF^-1 * St2 * dF
$\hspace{24}
\begin{array}{rrrrrr}
1 & . & . & . & . & . \\
0 & 1 & . & . & . & . \\
0 & 1/2 & 1 & . & . & . \\
0 & 1/6 & 1 & 1 & . & . \\
0 & 1/24 & 7/12 & 3/2 & 1 & . \\
0 & 1/120 & 1/4 & 5/4 & 2 & 1
\end{array}$

This version performs U-exponentiation for a powerseries:
V(x)~ * S2 = V(exp(x)-1)~
(see Abramowitz & Stegun)

Since input and output are of the form of a powerseries, one can iterate this to implement the x -> exp(x)-1 iteration.
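A sketch checking this identity numerically under truncation (both sides agree up to the truncation error of the 25-term sum; x = 0.1 here):

```python
import math

# Stirling numbers of the second kind: S(n,k) = k*S(n-1,k) + S(n-1,k-1)
N = 24
S = [[0] * (N + 1) for _ in range(N + 1)]
S[0][0] = 1
for n in range(1, N + 1):
    for k in range(1, n + 1):
        S[n][k] = k * S[n - 1][k] + S[n - 1][k - 1]

x = 0.1
# entry k of V(x)~ * S2, truncated at N terms; S2[n][k] = k! * S(n,k) / n!
lhs = [sum(x ** n * math.factorial(k) * S[n][k] / math.factorial(n)
           for n in range(N + 1))
       for k in range(6)]
rhs = [(math.exp(x) - 1) ** k for k in range(6)]
print(max(abs(u - v) for u, v in zip(lhs, rhs)))  # tiny: truncation error only
```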

St1 : Stirling numbers of the 1st kind = inverse of St2
$\hspace{24}
\begin{array}{rrrrrr}
1 & . & . & . & . & . \\
0 & 1 & . & . & . & . \\
0 & -1 & 1 & . & . & . \\
0 & 2 & -3 & 1 & . & . \\
0 & -6 & 11 & -6 & 1 & . \\
0 & 24 & -50 & 35 & -10 & 1
\end{array}$

S1 : factorial scaled St1, inverse of S2
$\hspace{24}
\begin{array}{rrrrrr}
1 & . & . & . & . & . \\
0 & 1 & . & . & . & . \\
0 & -1/2 & 1 & . & . & . \\
0 & 1/3 & -1 & 1 & . & . \\
0 & -1/4 & 11/12 & -3/2 & 1 & . \\
0 & 1/5 & -5/6 & 7/4 & -2 & 1
\end{array}$

V(x)~ * S1 = V(log(1+x))~

Since this is the inverse of S2, it performs x -> log(1+x), and since input and output are of the form of a powerseries, this can be iterated.

B : my base-matrix for T-iteration
$\hspace{24}
\begin{array}{rrrrrr}
1 & 1 & 1 & 1 & 1 & 1 \\
0 & 1 & 2 & 3 & 4 & 5 \\
0 & 1/2 & 2 & 9/2 & 8 & 25/2 \\
0 & 1/6 & 4/3 & 9/2 & 32/3 & 125/6 \\
0 & 1/24 & 2/3 & 27/8 & 32/3 & 625/24 \\
0 & 1/120 & 4/15 & 81/40 & 128/15 & 625/24
\end{array}$

This matrix can be understood in two ways:

As
$\hspace{24}
B= matrix(r,c,c^r/r!) = dF^{-1} * matrix(c^r) = dF^{-1} * VZ$

just the same way as in your code-snippet

Or
$\hspace{24}
B = S2 * P\sim
$

Application is
$\hspace{24}
V(x)\sim * B = V(e^x)\sim
$

and since input and output are of the form of powerseries, this can be iterated.

Note that since B = S2 * P~ we have, that
V(x)~ * S2 = V(e^x-1)~
and (see the binomial rules, using P-transposed as above)
V(e^x-1)~ * P~ = V((e^x-1)+1)~ = V(e^x)~
we have
V(x)~ * S2 * P~ = V(e^x-1)~ *P~ = V(e^x)~
which is the same as
V(x)~ * B = V(e^x)~
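The whole chain B = S2 * P~ can be confirmed exactly in rational arithmetic; a minimal sketch:

```python
from fractions import Fraction
from math import comb, factorial

N = 6
# Stirling numbers of the second kind
S = [[0] * N for _ in range(N)]
S[0][0] = 1
for n in range(1, N):
    for k in range(1, n + 1):
        S[n][k] = k * S[n - 1][k] + S[n - 1][k - 1]

# S2 = dF^-1 * St2 * dF
S2 = [[Fraction(factorial(k) * S[n][k], factorial(n)) for k in range(N)]
      for n in range(N)]
# B[r][c] = c^r / r!  (with 0^0 = 1)
B = [[Fraction(c ** r, factorial(r)) for c in range(N)] for r in range(N)]
# P~ (transposed binomial matrix): P~[k][c] = C(c,k)
Pt = [[Fraction(comb(c, k)) for c in range(N)] for k in range(N)]

prod = [[sum(S2[r][k] * Pt[k][c] for k in range(N)) for c in range(N)]
        for r in range(N)]
print(prod == B)  # True
```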

To apply another base s different from e, this matrix must be premultiplied by powers of the logarithm of s. I write λ = log(s) for brevity. The decomposed description:

Bs
$\hspace{24}
\begin{array}{rrrrrr}
0^0/0!*\lambda^0 & 1^0/0!*\lambda^0 & 2^0/0!*\lambda^0 & 3^0/0!*\lambda^0 & 4^0/0!*\lambda^0 & 5^0/0!*\lambda^0 \\
0^1/1!*\lambda^1 & 1^1/1!*\lambda^1 & 2^1/1!*\lambda^1 & 3^1/1!*\lambda^1 & 4^1/1!*\lambda^1 & 5^1/1!*\lambda^1 \\
0^2/2!*\lambda^2 & 1^2/2!*\lambda^2 & 2^2/2!*\lambda^2 & 3^2/2!*\lambda^2 & 4^2/2!*\lambda^2 & 5^2/2!*\lambda^2 \\
0^3/3!*\lambda^3 & 1^3/3!*\lambda^3 & 2^3/3!*\lambda^3 & 3^3/3!*\lambda^3 & 4^3/3!*\lambda^3 & 5^3/3!*\lambda^3 \\
0^4/4!*\lambda^4 & 1^4/4!*\lambda^4 & 2^4/4!*\lambda^4 & 3^4/4!*\lambda^4 & 4^4/4!*\lambda^4 & 5^4/4!*\lambda^4 \\
0^5/5!*\lambda^5 & 1^5/5!*\lambda^5 & 2^5/5!*\lambda^5 & 3^5/5!*\lambda^5 & 4^5/5!*\lambda^5 & 5^5/5!*\lambda^5
\end{array}$

Application is
$\hspace{24}
V(x)\sim * dV(\log(s))*B = V(s^x)\sim
$

and since input and output are of the form of powerseries, this can be iterated.
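A numerical sketch of the base-s application (entry c of V(x)~ * dV(log(s)) * B is the truncated sum of (x·λ·c)^r/r!, which should match s^(x·c)):

```python
import math

s, x, N = 2.0, 0.5, 60
lam = math.log(s)
# column c of dV(log s) * B has entries (lam*c)^r / r!, so V(x)~ times that
# column sums to exp(x*lam*c) = s^(x*c)
lhs = [sum((x * lam * c) ** r / math.factorial(r) for r in range(N))
       for c in range(6)]
rhs = [s ** (x * c) for c in range(6)]
print(max(abs(u - v) for u, v in zip(lhs, rhs)))
```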

Bs numerically
$\hspace{24}
\begin{array}{rrrrrr}
1 & 1 & 1 & 1 & 1 & 1 \\
0 & \lambda & 2*\lambda & 3*\lambda & 4*\lambda & 5*\lambda \\
0 & 1/2*\lambda^2 & 2*\lambda^2 & 9/2*\lambda^2 & 8*\lambda^2 & 25/2*\lambda^2 \\
0 & 1/6*\lambda^3 & 4/3*\lambda^3 & 9/2*\lambda^3 & 32/3*\lambda^3 & 125/6*\lambda^3 \\
0 & 1/24*\lambda^4 & 2/3*\lambda^4 & 27/8*\lambda^4 & 32/3*\lambda^4 & 625/24*\lambda^4 \\
0 & 1/120*\lambda^5 & 4/15*\lambda^5 & 81/40*\lambda^5 & 128/15*\lambda^5 & 625/24*\lambda^5
\end{array}$

The terms that you use can - if at all - mostly be found in the second column of the matrices or the result, since you need only the final scalar result and not the additional powers which occur in the subsequent columns.

So much for the short version.

Gottfried