Observations on power series involving logarithmic singularities
#11
I finally bothered to put in the effort to learn how to use power series in SAGE. The native reversion function says it's not implemented, so I ended up learning how to use PARI's power series implementation. There was a small learning curve, but I think I'm getting the hang of it.
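As a minimal illustration (just a sketch, not the actual session), this is the sort of thing PARI/GP makes easy, including its built-in series reversion:

Code:
\\ truncated power series and series reversion in Pari/GP
s = exp(x + O(x^10)) - 1;     \\ series of e^x - 1, which has valuation 1
print(serreverse(s))          \\ x - 1/2*x^2 + 1/3*x^3 - ..., i.e. the series of log(1+x)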

jaydfox Wrote:Each major branch of the slog (those right off the "backbone") looks exactly like any other major branch (due to cyclic symmetry), and hence, the branches of the sexp will look the "same" as we loop the singularity at z=-2. Therefore, subtracting the natural logarithm at z=-2 should eliminate the branch cuts due to this singularity. Therefore, the radius of convergence of the slog with the removed natural logarithm should be limited by the singularity at z=-3. Accordingly, the root test of the power series for sexp(z-1) should be 1, but the root test for sexp(z-1)-ln(z+1) should be 0.5, indicating a radius of convergence of 2.

Furthermore, with a bit of effort, we should be able to determine the coefficients for an ideal version of the singularity at z=-3, allowing us to subtract them out, and get an even smaller "residue" of the sexp. I've already confirmed that the singularity at z=-3 (effectively, something along the lines of ln(ln(z+3)) or so) has non-matching branches, so even after removing this singularity, we'll most likely still have a branch cut and hence a limited radius of convergence. But we should be able to get fairly accurate coefficients, at least beyond the first dozen or so, and perhaps these could be fed into an iterative solver for the non-linear system solution of the sexp. Or perhaps, after going that far, further insights will be within reach...

As it turns out, for f(x) = ln(ln(x)), the singularity at x=1 is just a plain natural logarithm, and it can be removed. To see this, construct the power series of ln(ln(x+2))-ln(x+1) about x=0 (i.e., expand ln(ln(x)) about x=2, where the troublesome singularity now sits at x=-1, and subtract the corresponding logarithm ln(x+1)). If the singularity at x=1 were not fully removed (if branch cuts remained), then the root test would converge on 1, even if it started out a bit smaller.

However, the root test is 0.5, indicating a radius of convergence of 2. The singularity at x=1 is completely removed, even if technically the function is not analytic at that point.
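Here is a small Pari/GP sketch of that check (the truncation order 60 and the sampled indices are arbitrary illustrative choices):

Code:
\\ root test of the coefficients of ln(ln(x+2)) - ln(x+1) about x = 0
N = 60;
f = log(log(x + 2 + O(x^N))) - log(x + 1 + O(x^N));
for(k = 50, 58, print(k, "  ", abs(polcoeff(f, k))^(1.0/k)))
\\ the printed values drift toward 0.5 (radius of convergence 2) rather than toward 1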

Therefore, we should be able to subtract ln(ln(z+2)) from sexp(z-1), and get a "residue" (I know I need a better name for it). This residue will have a radius of convergence of 2, just like sexp(z-1)-ln(z+1), but nonetheless the terms are a couple orders of magnitude more accurate.

In fact, I've taken the liberty of making a chart:

[Chart: base-2 logarithm of the magnitude of each power series coefficient, plotted against the term index, for the three series defined below.]

\(
\begin{eqnarray}
F_{\tiny \text{red}}\left(z\right) & = & \text{sexp}\left(z-1\right) \\
F_{\tiny \text{blue}}\left(z\right) & = & \text{sexp}\left(z-1\right)-\ln\left(z+1\right)\\
F_{\tiny \text{green}}\left(z\right) & = & \text{sexp}\left(z-1\right)-\ln\left(\ln\left(z+2\right)\right)
\end{eqnarray}
\)

On this chart, I've plotted the logarithm, base 2, of the magnitude of each coefficient in the power series. I chose base 2 because a slope of -1 indicates a root test of 1/2, while a slope of 0 indicates a root test of 1.

As you can see, the red graph is essentially flat, giving a root test of 1, due to the singularity at z=-2. The blue and green graphs both converge on a slope of -1, so they both have a root test of 0.5, and a radius of convergence of 2. So the second singularity wasn't completely removed. But the green graph is noticeably lower than the blue. In fact, by the 400th term in the series, the green chart is about 6.21 units below the blue chart, indicating that the green coefficients are almost 75 times smaller (2^6.21). Therefore, the green coefficients represent a better "residue".

The difference is small enough, however, that if calculation speed is an issue, the blue coefficients can be used.

For example, given these three sets of coefficients, there are three ways to calculate the sexp (though z must lie within a radius of 2 of z=-1 for the last two equations, and within a radius of 1 for the first):

\(
\begin{eqnarray}
\text{sexp}\left(z\right) & = & F_{\tiny \text{red}}\left(z+1\right) \\
\text{sexp}\left(z\right) & = & F_{\tiny \text{blue}}\left(z+1\right) + \ln\left(z+2\right)\\
\text{sexp}\left(z\right) & = & F_{\tiny \text{green}}\left(z+1\right) + \ln\left(\ln\left(z+3\right)\right)
\end{eqnarray}
\)

For the greatest accuracy, use the third equation. For greater speed, with little loss of precision, use the second equation. The first equation is mainly just for reference, and should be avoided, unless you're calculating values very close to z=-1.
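To make the third equation concrete, here is a tiny Pari/GP sketch; cgreen stands for a hypothetical vector holding the coefficients of F_green about 0 (obtained elsewhere, e.g. from an slog solution together with the subtraction above):

Code:
\\ evaluate sexp(z) from the "green" residue coefficients (cgreen is hypothetical)
sexp_green(z) = sum(k = 0, #cgreen - 1, cgreen[k+1]*(z + 1)^k) + log(log(z + 3))
\\ valid for z within a radius of 2 of z = -1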

And if you want to attempt analytic continuation, then definitely use the second or third equation. The larger radius of convergence means you can move twice as far for the same loss of series precision.
~ Jay Daniel Fox
#12
Gottfried,

Thanks for the information! I had a little trouble following at first, but then I downloaded your paper on the Vandermonde matrix here:
http://go.helms-net.de/math/binomial_new...monde1.pdf

Between that paper (I've only read the first 1/3 of it so far) and your reply at the bottom of the first page, I was able to follow along, at least to get as far as V(x)~ * S2 * P~ = V(e^x-1)~ *P~ = V(e^x)~
~ Jay Daniel Fox
#13
jaydfox Wrote:Gottfried,

Thanks for the information! I had a little trouble following at first, but then I downloaded your paper on the Vandermonde matrix here:
http://go.helms-net.de/math/binomial_new...monde1.pdf

Between that paper (I've only read the first 1/3 of it so far) and your reply at the bottom of the first page, I was able to follow along, at least to get as far as V(x)~ * S2 * P~ = V(e^x-1)~ *P~ = V(e^x)~

Jay -

did you stop at it because it was the last one you could decode, or the first one you could not?
Well, in case of the latter:
I say
V(x)~ * B = V(e^x) // tetration to the base e

But if
B = S2 * P~
then

V(x)~ * (S2 * P~) = V(e^x) // tetration to the base e

using associativity, we have also

(V(x)~ * S2) * P~ = V(e^x)~

Now, the multiplication of a powerseries-row-vector with the transposed binomial-matrix adds 1 to its parameter, so we may write

V(y)~ * P~ = V(y+1)~ // binomial-theorem

and conversely, subtract 1 by postmultiplication by P's inverse:

V(y)~ = V(y+1)~ * (P^-1)~

So the above, postmultiplied by (P^-1)~:

(V(x)~ * S2) * P~ * (P^-1)~ = V(e^x)~ * (P^-1)~
(V(x)~ * S2) = V(e^x-1)~

and this is then the "exp(x)-1"-iteration, or as I denote it, the "U"-iteration.
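The binomial-shift identity can be checked directly with a few lines of Pari/GP (a standalone sketch with 6x6 truncations, not using the matrix-definition files mentioned below):

Code:
m = 6;
P = matrix(m, m, r, c, binomial(r-1, c-1));  \\ lower-triangular binomial matrix
Vr(y) = vector(m, k, y^(k-1));               \\ V(y)~ as a truncated row vector
print(Vr(3) * P~)   \\ [1, 4, 16, 64, 256, 1024] = V(4)~, i.e. the parameter shifted by 1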

I have prepared a little script which can be used with Pari-tty (just copy&paste it into the syntax-notepad once paritty has been started, and process it line by line). Using paritty (as a GUI encapsulating Pari/GP) makes sense, because it allows you to see the involved matrices permanently on the screen and to watch how the summation processes work.
One also needs the matrix-definition files, which provide functions and matrix constants. I'll attach them - without editing them, however. They can be read into Pari/GP at the beginning - in fact, the following Pari-tty script does this. Remember that for Pari-tty you need Pari/GP 2.2.11, since the low-level communication protocol has changed a bit in the newer versions (you may download this version from my paritty webspace as well). Put these files in the standard directory of your paritty syntax folder as created by the paritty installation (see http://go.helms-net.de/sw/paritty ).

Here is the paritty-script for the "basics-demo".
Gottfried
---------------------
Code:
\\  Pari-tty  (Version Beta 2.03.1 (2007.10.11) (en))
\\  (c) Gottfried Helms  mailto:helms@uni-kassel.de
\\  Info:  http://go.helms-net.de/sw/paritty
\\  ---------------------------------------------------------

\\ this demo can best be viewed if the window is split
\\ horizontally; use the split-window symbol in the top toolbar.
\\ Don't maximize this window, since we show some child windows
\\ at the left side of the desktop
\\
\\ to follow the demo, it's best to send the following lines separately:
\\ position the cursor in that line and click the CAL-button or press
\\ the F2-key
\\
\\ =============================================================
  
n=20  \\ set dimension for matrices first

\\ get matrix-definitions, -constants and -functions
\r % _matrixinit.gp
\\ here the %-sign is a paritty meta-tag, which inserts the current path before the file name.
\\ You may omit it, if the pari/GP-default "path" is set appropriately
\\ The path is set (to my needs) in _matrixinit.gp. You must adapt the first line
\\ in this file first!

\\ give a name for the project
%proj Basics


\\ show some basic matrices.
%box >P P  \\ P-matrix
%box >P PInv \\ I saved the inverse P^-1 as matrix constant
%box >P VE(P,6)  \\ truncation of P-matrix, shall suffice for display
%box >S2 VE(St2,6) \\ Stirling kind 2
%box >S2 VE(PInv*St2,6) \\ I use the "shifted" version

\\ note that in dFac(b) "b" is the exponent on the factorials, so
\\ dFac(1) = diag(0!^1,1!^1,2!^1,...),
\\ dFac(-1) = diag(1/0!, 1/1!, 1/2! , ... )
%box >S2 S2 = dFac(-1)*PInv*St2*dFac(1);VE(S2,6 ) \\ and then the factorial scaled version




%box >chk V(1)~ * S2 \\ column-summing
%box >chk 1.0*V(1)~ * S2 \\ column-summing, float numeric
                \\  gives e^1-1 in the second column, and its powers
                \\ in the following columns, so we have a powerseries-vector
                \\ as an output, the same form as input.
%box >chk2 V(exp(1)-1)~  \\ compare; however, here we didn't care about the
                         \\ quality of the approximation



\\ ---- discussion of partial sums and Euler summation -------------------
\\
\\ to display partial sums (heavily used later) I introduce a triangular matrix
%box >DR VE(DR,6) \\ to compute and display partial sums, show only 6x6-submatrix

\\ applied:
%box >chk1 DR* 1.0* S2 \\ partial sums converge to V(exp(1)-1)~


\\ the simple partial summing does not converge well with
\\ geometric series:
%box >chk1 DR*1.0*Mat(V(-2))


\\ for these cases I employ Euler-summation.
%box >chk1 ESum(1.0)*Mat(V(-2))  \\ Euler-sum of order 1 (direct sum) = DR
%box >chk1 ESum(2.0)*Mat(V(-2))  \\ Euler-sum of order 2 (classic Euler sum order 1)
%box >chk1 ESum(3.0)*Mat(V(-2))  \\ Euler-sum of order 3 (classic Euler sum order 2)
%box >chk1 ESum(4.0)*Mat(V(-2))  \\ Euler-sum of order 4 (classic Euler sum order 3)
\\ note that higher orders do not give the fastest approximation. The order must
\\ be appropriate to the growth rate of the geometric series

\\ to illustrate this, another example
%box >chk1 ESum(3.0)*Mat(V(-3))  \\ Euler-sum of order 3 (classic Euler sum order 2)
%box >chk1 ESum(4.0)*Mat(V(-3))  \\ Euler-sum of order 4 (classic Euler sum order 3)
%box >chk1 ESum(5.0)*Mat(V(-3))  \\ Euler-sum of order 5 (classic Euler sum order 4)


\\ ==============================================================================
\\ back to S2-matrix and U-tetration.


%box >chk1 1.0*ESum(1.0) * S2      \\ gives V(exp(1)-1)~ as shown above
             \\ best approximation (using all terms) in last row

%box >chk1 1.0*ESum(1.0) * S2*S2   \\ gives next iteration

\\ check (output in standard window)
tmp = exp(1)-1
tmp = exp(tmp) - 1   \\ should be the same as in the chk1-box, 2nd column
tmp^2  \\ should be the same as in 3rd column; but Euler-sum had too small order
       \\ so approximation above was not optimal. try a better "Euler-order"


%box >chk1 1.0*ESum(0.8) * S2*S2   \\ Euler-sum is even better with lower order! Strange....
                      \\ at least for 2nd and 3rd column



\\ for completeness let's look at the inverse of S2
\\ it's the shifted factorial scaled St1, Stirling kind 1
%box >S1 St1 \\ Stirling kind 1
%box >S1 St1*P \\ Stirling kind 1 shifted version
%box >S1 S1=dFac(-1)*St1*P*dFac(1);VE(S1,6) \\ Stirling kind 1 shifted version, factorial scaled
\\ check, is it the inverse of S2
%box >chk1 VE(S2^-1,6)


\\ as the inverse of S2, which (iterably) performs V(x)->V(exp(x)-1), it
\\ should iterably perform x->log(1+x)

\\ since I use the partial-sum display via Euler-sum, the V(x)-parameter
\\ must be supplied in diagonal-format
\\ using dV(1) should thus give V(log(1+1))=V(log(2))
%box >chk1 ESum(1.45)*dV(1)*S1 \\ this looks good,
                 \\  the consecutive powers are obvious

\\ now iterated, using S1*S1
%box >chk1 ESum(1.8)*dV(1)*S1*S1 \\ this looks again good,
\\ check
tmp = log(1+1)
tmp = log(1+tmp)
\\ and the consecutive powers
tmp^2
tmp^3  \\ very good.



\\ ======================================================================
\\ now tetration

\\ we construct the matrix B
B = matrix(n,n,r,c,(c-1)^(r-1)/(r-1)!);
%box >B VE(B,6)

\\ note that this is also B = S2 * P~
%box >chk1 VE(S2*P~,6)

\\ now perform T:= x -> e^x, using x=1 first
%box >chk1 ESum(1.0)*dV(1)*B \\ giving [1,e,e^2,e^3,...]

\\ iterate to get x->e^e^x
%box >chk1 ESum(1.0)*dV(1)*B^2 \\ giving [1,e^e,(e^e)^2,(e^e)^3,...]


\\ now make it general. assume a base-parameter s1 to perform x->s^x
\\ still using x = 1 (by the dV(1.0)-parameter)
s1 = 2
Bs = dV(log(s1))*B;
%box >chk1 ESum(1.0)*dV(1)*Bs \\ giving [1,2,(2)^2,(2)^3,...]

\\ iterate to get s^s^x, approx is a bit more difficult
%box >chk1 ESum(0.8)*dV(1)*Bs^2 \\ giving [1,2^2,(2^2)^2,(2^2)^3,...]

\\ but we may either increase the dimension n (requires rereading the initial matrix-module!)  
\\ or using x=1/2 instead, s^s^0.5 = 2^sqrt(2)
%box >chk1 ESum(1.0)*dV(1/2)*Bs^2 \\ giving [1,2^2^0.5,(2^2^0.5)^2,(2^2^0.5)^3,...]

2^sqrt(2) \\ check




\\ The base-parameter s1=2 is outside the e^(-e)...e^(1/e) range, so
\\ let's take a better parameter, say s1=sqrt(2)
s1 = sqrt(2)
Bs = dV(log(s1))*B;
%box >chk1 ESum(1.0)*dV(1)*Bs \\ giving [1,s1,(s1)^2,(s1)^3,...]


\\ iterate to get s^s^x, approx is a bit more difficult
%box >chk1 ESum(0.9)*dV(1)*Bs^2 \\ giving [1,s^s,(s^s)^2,(s^s)^3,...]

tmp=sqrt(2)^sqrt(2) \\ check
tmp^2
tmp^3 \\ ok, leave it with this.




\\ =======================================================================
\\ now naive (numerical) fractional tetration, say height=1/2
\\ we need either matrix-log or eigensystem.

\\ let's begin with matrix-log
BsLog = MLog(Bs,200,1e-80); \\ use 200 terms for log-series, or stop at error<1e-80
%box >chk1 VE(BsLog,n,6) \\ nice, the terms seem to approach zero along a column


\\ to build powers, we multiply by a constant and exponentiate
BsPow = MExp(1/2*BsLog,200,1e-80) ;
%box >chk1 VE(BsPow,n,6)  \\ nice, the terms still approach zero


\\ now compute the halfiterate sqrt(2)^^0.5
\\ store the result in a variable
%box >chk1 res = ESum(1.0)*dV(1)*BsPow

\\ the scalar result of sqrt(2)^^0.5 is in result[n,2]
tmp = res[n,2]

\\ now iterate: insert this value into the dV()-parameter
%box >chk1 res = ESum(1.0)*dV(tmp)*BsPow \\ perfect

\\ so we have the half-iterates 1.0 -> 1.24362158223 -> 1.41421352681    

\\ ================================================================

\\ The same can be done using the eigensystem; I don't show it here.
\\ The problem of convergence and of choosing an appropriate order of
\\ Euler-summation becomes ubiquitous once one uses more difficult
\\ parameters. Unfortunately, to acquire better approximations
\\ one needs more terms, i.e. bigger matrices - but with
\\ dim>20 we need more digits (at least 400 or 800), and dim>32
\\ seems to be impossible to compute using the pari/gp builtin routine.
\\ I implemented my own version of an eigensystem solver, but with
\\ not much improvement.
\\ Also, the "empirical" approach of finding the eigensystem numerically
\\ has a systematic flaw, because it cannot reflect the
\\ finite truncation of the theoretically infinite matrix.
\\ That was the reason why I searched for an analytical description
\\ (which I found, based on a hypothesis about the eigenvalues).
\\ I can show this another time / in another script -

\\ Gottfried Helms 2.11.2007


Attached Files: matdef.zip (9.4 KB)
Gottfried Helms, Kassel
#14
Gottfried,

I really need to thank you for opening my eyes up to a different way of viewing this problem! I was on a flight across the U.S. (from California to Pennsylvania), and then a 2-hour drive to my final destination, and I was thinking over these matrices the whole time. And then it finally made sense!

For me, it has been like magic that Andrew's solution works. I mean, I understood the derivation of the matrix from complicated derivatives and equations, etc., but it just seemed too much like a coincidence. I couldn't understand why it worked, only that it did seem to work.

But now I can "see" it!

It's simply a generic matrix solver for the Abel function. Assume we have a function \( F(z) \), with an associated power series \( P_{\small F}(z) \). Write the coefficients of the power series as a column vector, \( P_{\small F}\~{} \).

Now, to compose two power series, we can use basic matrix math. Start with the "inside" function, and write all the integer powers of the series as columns. By powers, I'm talking about good old fashioned polynomial multiplication.

Now, each column represents the function to a given power, and we can use each column in place of a variable. Multiply the matrix by the column vector for the "outside" series, and you have the composition.
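Here is a small Pari/GP sketch of that composition-by-matrix idea, using a toy pair where the answer is known in advance (inner series e^x - 1, outer series log(1+x), so the composition should collapse back to x):

Code:
\\ composing power series via a matrix whose columns are powers of the inner series
N = 8;
f = exp(x + O(x^N)) - 1;        \\ inner series, e^x - 1
g = log(1 + x + O(x^N));        \\ outer series, log(1+x)
M = matrix(N, N);
fp = 1 + O(x^N);                \\ running power f^(c-1), starting with f^0 = 1
for(c = 1, N, for(r = 1, N, M[r, c] = polcoeff(fp, r-1)); fp *= f);
gv = vectorv(N, k, polcoeff(g, k-1));   \\ outer coefficients as a column vector
print(M * gv)   \\ coefficients of g(f(x)) = x, i.e. [0, 1, 0, 0, ...]~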

For e^x, let's call the power series E(x), with column vector form E~.

Then, the matrix for the powers of E(x) is:

[E~^0, E~^1, E~^2, ...]

For example, truncating the power series for e^x at the 6th term, we have E=[1, 1, 1/2, 1/6, 1/24, 1/120]~, and the matrix is then given by:

Code:
1   1       1      1       1        1
0   1       2      3       4        5
0   1/2     2      9/2     8        25/2
0   1/6     4/3    9/2     32/3     125/6
0   1/24    2/3    27/8    32/3     625/24
0   1/120   4/15   81/40   128/15   625/24

This matches the matrix B you have. The easy way to see why this works is to consider that (e^x)^2 is e^(2x), or in general, (e^x)^k is e^(kx). Since the terms of the power series for exp are (x^n)/n!, the terms of the powers of the power series are (kx)^n/n!, so the coefficients in the matrix are simply (k^n)/n!. Here, k is the column and n is the row (both zero-based).
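That claim is easy to spot-check in Pari/GP:

Code:
\\ the coefficient of x^n in (e^x)^k is k^n/n!; check n=4, k=3 against the table above
E = exp(x + O(x^8));
print(polcoeff(E^3, 4), "   ", 3^4/4!)   \\ both print 27/8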

Now comes the interesting part. Since F(e^x) = F(x)+1, we can solve for F(e^x)-F(x) = 1

Given P_F as the power series for F, we can find F(e^x) by multiplying \( B*P_{\small F}\~{} \), and then subtracting \( I*P_{\small F}\~{} \), where I is the identity matrix. Set this equal to a column vector [1, 0, 0, ...]~, and then solve for \( P_{\small F}\~{} \):

\( (B-I)*P_{\small F}\~{}=[1, 0, 0, ...]\~{} \)

This is exactly what Andrew's matrix does (with the exception of explicitly removing the first column; see below). And now I see exactly why it should work, assuming A) that the infinite system has a unique solution, and B) that the partial solutions converge on this unique solution as we increase the matrix size.

It took me a while to see why it works. I was in the car at this point, so I was having to do this in my head.

Essentially, we subtract 1 from the diagonal of B. This simulates finding F(e^x)-F(x). We set the matrix times an unknown column vector equal to "1", which is a column vector with a 1 in the first row.

Now, the top-left entry of the matrix will become 0, since we subtracted 1 from 1. This makes sense, because if F(e^x)-F(x)=1, then we don't know what the constant term is. So we just chop off that first column (associated with the constant), and in our P_F column vector, we make note that the first entry will now correspond to the first power of x.
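Here is a rough Pari/GP sketch of the truncated system, following that recipe (the dimension 24 and the test point 0.3 are arbitrary choices, and this only produces the truncated "natural" approximation, not an exact solution):

Code:
\\ truncated system (B - I)*PF = [1, 0, 0, ...]~ for the slog coefficients, base e
n = 24;
B = matrix(n+1, n+1, r, c, (c-1)^(r-1)/(r-1)!);  \\ columns = powers of the exp series
A = B - matid(n+1);
Asys = matrix(n, n, r, c, A[r, c+1]);            \\ chop the first column and bottom row
PF = matsolve(1.0*Asys, vectorv(n, r, r==1));    \\ PF[k] = coefficient of x^k of the slog
\\ numerical check of slog(e^z) - slog(z) = 1 at a point inside the radius of convergence
slogapprox(z) = sum(k = 1, n, PF[k]*z^k);
print(slogapprox(exp(0.3)) - slogapprox(0.3))    \\ close to 1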

Then, to see that this equals Andrew's solution, we pre-multiply by the diagonal factorial matrix. This gives us the Vandermonde matrix, ZV as Gottfried writes it, with factorials subtracted from the subdiagonal. It's the subdiagonal, because it was originally the diagonal, but we chopped off the first column.

We even get the powers of the log of the base. Going back to the power series for exp: if we used base 2 instead, for example, then each row would carry an extra factor of a higher power of ln(2). We subtract 1's from the diagonal, then multiply by the diagonal factorial matrix and by a diagonal matrix of powers of 1/ln(2), to get back to the ZV matrix, now with factorials divided by powers of ln(2) subtracted from the subdiagonal.

I'm quite excited, because assuming this is correct, it gives us a template for solving the continuous iteration of tetration as well. Instead of column vectors as powers of the exponential, we could use column vectors as the powers of sexp, and by doing so, find a "pentalog" such that S(sexp(x))-S(x)=1. I haven't tried, so I'm just making a conjecture here.

In fact, I'm wondering if this is a generic method for solving Abel functions, given a well-defined power series of the iterating function.

Although, now that I think about it, I wonder if someone has covered this before in this forum, and I didn't understand it at the time, so I had to derive it on my own to understand it...
~ Jay Daniel Fox
#15
By the way, looking at it from this point of view helps make it clear why solving directly for the sexp was doomed to failure. You see, solving for the slog means composing F(z) with exp(z), to get F(exp(z)).

On the other hand, for a sexp function T, solving it would require the composition exp(T(z)). Exponentiating T necessarily means creating column vectors of powers of T, which means we are no longer dealing with a linear system.
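For instance, writing \( T(z)=t_0+t_1 z+t_2 z^2+\cdots \) and expanding, already the \( z^2 \) coefficient mixes the unknowns:

\(
\exp\left(T(z)\right) \;=\; e^{t_0}\left(1 + t_1 z + \left(t_2 + \tfrac{1}{2}t_1^2\right)z^2 + \cdots\right),
\)

so the equations for the \( t_k \) are quadratic (and worse) in the unknowns, rather than linear.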

And this is probably a general situation: finding G(F(x)), given the known power series for G and an unknown power series F, is going to lead to a non-linear system. However, F(G(x)) is still a linear system, because G is known.
~ Jay Daniel Fox
#16
I think you got it. The chopping-off-the-first-column and the diagonal-factorial-matrix parts sound right. But I think we would need to be able to differentiate tetration reliably (not just the first derivative at zero, for example) in order to use it to find pentation or the pentalog/hyper5log via the Abel functional equation. I'm not saying we can't, but I can't think of any method available that gives exact derivatives of tetration yet; granted, we do have approximations. :)

Andrew Robbins
#17
jaydfox Wrote:\( (B-I)*P_{\small F}\~{}=[1, 0, 0, ...]\~{} \)

This is exactly what Andrew's matrix does (with the exception of explicitly removing the first column; see below).
Yup. (And removing the lowest row, so that it remains a square matrix.)

Quote:And now I see exactly why it should work, assuming A) that the infinite system has a unique solution, and B) that the partial solutions converge on this unique solution as we increase the matrix size.
Unfortunately A) is wrong. We know already that there are infinitely many solutions of the infinite equation system. I even gave a particular different (non-sine-based) solution here.
I call Andrew's way of solving this equation system by truncated approximation the natural solution.
Which of course works for any Abel equation (if it converges), as he also stated somewhere.

Though I haven't thoroughly verified it, it looks indeed as if the solution is independent of the development point of the power series (which is currently at 0). We should check this.

However, this is not the matrix operator method of Gottfried, which gives real iterates of the original function and can be considered a generalization of the solution of hyperbolic iteration at a fixed point via the Schroeder equation. There we have
\( f^{\circ t}(x) = \sigma^{-1}(c^t\sigma(x)) \), or directly \( f^{\circ t}=\sigma^{-1}\circ {\mu_c}^{\circ t} \circ \sigma \).

Translated into matrix form
\( F^t = \Sigma^{-1} {dV_c}^t \Sigma \).

And the good news is that nearly every matrix has such a decomposition with a diagonal matrix in the middle; however, the diagonal is not necessarily of the form of powers of one number, but one can nonetheless take real powers of it.
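As a quick toy illustration of the first formula (a sketch with a simple hyperbolic example, not the tetration case): for f(x) = 2x + x^2 the fixed point is 0 with multiplier c = 2, and sigma(x) = log(1+x) satisfies the Schroeder equation, so fractional iterates come out directly:

Code:
\\ toy example of f^t = sigma^-1( c^t * sigma ), with f(x) = 2x + x^2 and c = 2
f(x)   = 2*x + x^2;
sg(x)  = log(1 + x);              \\ Schroeder function: sg(f(x)) = 2*sg(x)
sgi(y) = exp(y) - 1;              \\ its inverse
fiter(t, x) = sgi(2^t * sg(x));   \\ the fractional iterate f^t
h = fiter(0.5, 0.3);              \\ half-iterate applied to 0.3
print(fiter(0.5, h), "   ", f(0.3))   \\ both print 0.69: two half-steps make one full step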
#18
Quote:I'm not saying we can't, but I can't think of any method available that gives exact derivatives of tetration yet, but granted, we do have approximations.

Easy. We solve the slog with a very large matrix, the larger the better. Then we shift it to be centered at z=1. Then we take a reversion of the series (the math took me a while to figure out, and only after I figured it out did I discover that PARI has a pretty fast series reversion solver).

This gives us a power series for sexp at z=0, which effectively allows us to find derivatives by basic manipulation of the series itself. We now have the very power series that we must take powers of, as I described, in order to compose, in order to solve the Abel equation.
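Schematically (with purely illustrative names, and with slogpoly standing for whatever truncated slog polynomial in z comes out of the matrix solve), the re-centering and reversion look like this:

Code:
\\ sketch of the re-center-and-revert step; slogpoly is a hypothetical placeholder
d   = poldegree(slogpoly);
sh  = subst(slogpoly, z, 1 + w);                           \\ the slog re-centered at z = 1
shs = sum(k = 1, d, polcoeff(sh, k, w)*w^k) + O(w^(d+1));  \\ drop the constant term
rev = serreverse(shs);                                     \\ PARI's series reversion
\\ with c1 = polcoeff(sh, 0, w), the slog value at z = 1, this gives
\\ sexp(c1 + y) = 1 + (rev evaluated at y), approximately, inside the radius of convergence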

Of course, loss of precision is the killer here. I'm not talking about the precision of the individual terms, but of the series itself. For my accelerated 900x900 solution, shifting the center to z=1 already knocks about half the terms off (in other words, the root test looks relatively flat, then spikes about halfway through). I try to work with the "residues" (after subtracting the basic logarithms), since this slightly reduces the effect, but there's not really much you can do except use more terms. That is, this is the standard loss of precision when re-centering a power series with a finite radius of convergence. The best you can do is make small shifts and truncate the series a little after each step, to reduce the effect, but even this only buys a small increase in precision.

Then, the reversion of the series loses some precision as well. When all is said and done, probably only the first 300 to 400 terms are even accurate enough to bother using. So to solve the Abel equation for the pentalog with 1000 terms, you'd probably need to solve an accelerated 2000x2000 or possibly up to 3000x3000 slog system.
~ Jay Daniel Fox
#19
bo198214 Wrote:However this is not the matrix operator method of Gottfried...

I was pretty sure it wasn't, though the use of the same matrices implies a fundamental connection. But reading his posts and seeing the similarities opened my eyes to the fact that the Vandermonde matrix is effectively a factorial-scaled set of coefficients for the powers of the power series for exp. I had already figured out, a couple of weeks ago, how to compose series by using powers of power series, so a lightbulb went off. So even though the methods may be different, I still have Gottfried to thank for explaining things in a way that helped me to see this.
~ Jay Daniel Fox
#20
Quote:I try to work with the "residues"
By the way, I need a better word than "residue", since residue has an established meaning in relation to singularities, and I'm calculating my "residue" by subtracting functions at singularities... Any ideas?
~ Jay Daniel Fox

