matrix function like iteration without power series expansion - Printable Version

+- Tetration Forum (https://math.eretrandre.org/tetrationforum)
+-- Forum: Tetration and Related Topics (https://math.eretrandre.org/tetrationforum/forumdisplay.php?fid=1)
+--- Forum: Mathematical and General Discussion (https://math.eretrandre.org/tetrationforum/forumdisplay.php?fid=3)
+--- Thread: matrix function like iteration without power series expansion (/showthread.php?tid=186)

Pages: 1 2

matrix function like iteration without power series expansion - bo198214 - 06/30/2008

Guys! That I didn't see that before! We have a very simple formula for computing the $t$-th iterate of an arbitrary function:

$f^{\circ t} = \sum_{n=0}^\infty \binom{t}{n} \sum_{k=0}^n \binom{n}{k} (-1)^{n-k} f^{\circ k}$

This series does not always converge, but it does at least if $f$ has an attracting fixed point reachable from $x$: then $f^{\circ t}(x)$ converges, because the orbit $f^{\circ k}(x)$ is bounded, say $|f^{\circ k}(x)| \le M$, so the series for $f^{\circ t}(x)$ is dominated term by term; i.e. the sum is absolutely convergent.

I bet my pants that this $f^{\circ t}$ is the regular iteration at the lower (attracting) fixed point in the case $f(x)=b^x$, $1 < b < e^{1/e}$.

Why is this matrix-function-like? Well, if all eigenvalues of $A$ are smaller than 1, then we don't need the Jordan decomposition of $A$; the matrix power is also given by the convergent series

$(A-I+I)^t = \sum_{n=0}^\infty \binom{t}{n} (A-I)^n = \sum_{n=0}^\infty \binom{t}{n} \sum_{k=0}^n \binom{n}{k} (-1)^{n-k} A^k.$

If $A$ is the Carleman matrix of $f$, then we just take the first row of each of those matrix powers and add them up.

Don't make the error, however, of assuming that

$(f-\operatorname{id})^{\circ n} = \sum_{k=0}^n \binom{n}{k} (-1)^{n-k} f^{\circ k}.$

This is not true. You can only do this with matrices, since $A(B+C)=AB+AC$, while mostly $f\circ (g+h) \neq f\circ g + f\circ h$.
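The double binomial formula translates directly into a short numerical sketch. This is my own illustration (the names `gbinom` and `iterate_frac` are invented, not from the thread): the inner sums are just forward differences of the orbit $x, f(x), f(f(x)), \dots$, weighted by generalized binomial coefficients. For integer $t$ the series terminates and recovers the exact iterate, which makes a convenient sanity check.

```python
def gbinom(t, n):
    """Generalized binomial coefficient binom(t, n) for real t."""
    r = 1.0
    for i in range(n):
        r *= (t - i) / (i + 1)
    return r

def iterate_frac(f, t, x, terms=30):
    """t-th iterate of f at x via the double binomial (Newton) series.

    The inner sum sum_k binom(n,k) (-1)^(n-k) f^k(x) equals the n-th
    forward difference of the orbit x, f(x), f(f(x)), ..."""
    orbit = [x]
    for _ in range(terms):
        orbit.append(f(orbit[-1]))
    total = 0.0
    d = orbit[:]                 # d[0] is the n-th forward difference
    for n in range(terms):
        total += gbinom(t, n) * d[0]
        d = [d[i + 1] - d[i] for i in range(len(d) - 1)]
    return total

# Example with an attracting fixed point: b = sqrt(2) < e^(1/e),
# so the orbit of x = 1 under f(x) = b**x is bounded (tends to 2).
b = 2 ** 0.5
f = lambda x: b ** x
half = iterate_frac(f, 0.5, 1.0)
```

For integer $t$ the generalized binomial vanishes beyond $n = t$, so `iterate_frac(f, 1, x)` reproduces `f(x)` exactly up to rounding; for fractional $t$ the truncation at `terms` leaves a small remainder.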
Notes: Yes, this looks like the double binomial formula. Note however that the formula should not be rearranged into the form $\sum_{n=0}^\infty a_n f^{\circ n}$, because this can lead to non-convergence. This is best seen with the formula $((x-1)+1)^t$: it must not be rearranged into the form $\sum_{n=0}^\infty a_n x^n$, because 0 is a singularity of this function (for $t\not\in \mathbb{N}$) and hence there is no power series development at 0. Perhaps Gottfried can jump in to provide summability in the divergent case $b>e^{1/e}$.

RE: matrix function like iteration without power series expansion - Gottfried - 06/30/2008

Using a matrix expression this would be

t°h(x) = W^-1 sum k=0..inf sum j=0..k (-1)^j * binomial(k,j) * diag(1,u^j,u^2j,...) * W sum j=0..k (-1)^j * binomial(k,j) * dV(u^j) = diag(u^j-1) * PPow( sum k=0..inf

bo198214 Wrote:
$f^{\circ t} = \sum_{n=0}^\infty \binom{t}{n} \sum_{k=0}^n \binom{n}{k} (-1)^{n-k} f^{\circ k}$

Notes: Perhaps Gottfried can jump in to provide summability in the divergent case $b>e^{1/e}$.

Hmm, let me try (maybe I didn't get this right yet).

$f^{\circ t} = \sum_{n=0}^\infty \binom{t}{n} \sum_{k=0}^n \binom{n}{k} (-1)^{n-k} f^{\circ k}$

is

$f^{\circ t} = \sum_{n=0}^\infty \binom{t}{n} c_n,$

which is just a binomial weighting of the coefficients $c_n$. In my analyses I got the coefficients

$f^{\circ t}(x) = \sum_{n=0}^\infty a_n x^n,$

so, for instance, the $c_n$ for the half-iteration are

$c_n = a_n / \binom{t}{n}.$

The rate of growth of the $a_n$ coefficients for t=0.5 was asymptotically

$a_n \sim u^{0.5} \cdot \frac{u^{(n^2-n)/2}}{n!} \cdot m_n,$

where the $m_n$ are also growing coefficients, if only the leading coefficient of the polynomials at $x^n$ is taken into account.
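The danger of rearranging can be seen numerically with the note's own example $((x-1)+1)^t$. The sketch below (my own illustration) evaluates the binomial series in $(x-1)$: inside the disk $|x-1|<1$ it converges to $x^t$, while outside the individual terms grow without bound.

```python
def gbinom(t, n):
    """Generalized binomial coefficient binom(t, n) for real t."""
    r = 1.0
    for i in range(n):
        r *= (t - i) / (i + 1)
    return r

def binom_series(t, x, terms):
    """Partial sum of ((x-1)+1)^t = sum_n binom(t,n) (x-1)^n."""
    return sum(gbinom(t, n) * (x - 1) ** n for n in range(terms))

# Inside the disk of convergence |x-1| < 1: converges to sqrt(0.5).
inside = binom_series(0.5, 0.5, 200)

# Outside (|x-1| = 1.5 > 1): the terms themselves blow up.
t100 = gbinom(0.5, 100) * 1.5 ** 100
t200 = gbinom(0.5, 200) * 1.5 ** 200
```

This matches the remark about the singularity at 0: the binomial series is anchored at $x=1$, and no rearranged power series at 0 exists for fractional $t$.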
Now the quotient of two consecutive binomials, $\binom{0.5}{n} / \binom{0.5}{n+1}$, seems to approach -1, so the strong growth of about $u^{n^2}/n!$, or the quotient of two consecutive coefficients of about $u^{2n}/n$, seems to dominate the character of the $c_n$ coefficients. A series whose quotient of consecutive terms has increasing absolute value $u^{2n}/n$, $u>1$, cannot regularly be Euler-summed; maybe it can be summed with Borel summation of higher orders.

Not being "regularly" Euler-summable does not mean we cannot have an approximation of a certain degree; the problem with this, however, is that the partial sums may converge up to a certain index n, from where the series "begins to diverge" - and it is not yet known to me to what extent we can use the intermediate approximated value. I'm investigating this for verification of some experimental summation methods of the required power.

Hmm - I hope this is not more confusing than clarifying - my head is not really free today (I have to prepare the final lesson tomorrow).

Gottfried

RE: matrix function like iteration without power series expansion - bo198214 - 07/01/2008

Gottfried Wrote:
Using a matrix-expression this would be t°h(x) = W^-1 sum k=0..inf sum j=0..k (-1)^j * binomial(k,j) *diag(1,u^j,u^2j,...) * W sum j=0..k (-1)^j * binomial(k,j) *dV(u^j) = diag(u^j-1) *PPow( sum k=0..inf

what?

Quote:
"not regularly" Euler-summable

Yes, I see. It's somehow similar to your tetra series $x-f(x)+f(f(x))-f(f(f(x)))+\dots$: it is only summable via the matrix method for $f$ not having an attracting (finite) fixed point, i.e. in our case $b>e^{1/e}$.

However the aim was to not use the matrix method for computation. This is faster and does not require the function to be developable into a power series. For regular iteration we have both: limit formulas and power series coefficient formulas. And now we also have both for the matrix function method (if it can still be called that way for the limit formula).
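For readers unfamiliar with "regular" Euler summation: the classic Euler transform rewrites an alternating series through forward differences, and it handles geometric divergence (fixed term ratio) but not term ratios that grow like $u^{2n}/n$. A minimal sketch (my own illustration, not code from the thread) that sums the geometrically divergent $1-2+4-8+\dots$ to its Euler value $1/3$:

```python
def euler_sum(b, terms):
    """Euler-transform summation of the alternating series
    sum_{n>=0} (-1)^n b[n], via forward differences of b:
    sum = sum_k (-1)^k (Delta^k b)(0) / 2^(k+1)."""
    d = list(b)
    total = 0.0
    for k in range(terms):
        total += (-1) ** k * d[0] / 2 ** (k + 1)
        d = [d[i + 1] - d[i] for i in range(len(d) - 1)]
    return total

# 1 - 2 + 4 - 8 + ... : b[n] = 2^n, Euler value 1/(1-(-2)) = 1/3.
s_geo = euler_sum([2.0 ** n for n in range(45)], 40)

# Grandi's series 1 - 1 + 1 - ... : Euler value 1/2.
s_grandi = euler_sum([1.0] * 45, 40)
```

For $b_n = 2^n$ all forward differences at 0 equal 1, so the transformed series is a convergent geometric series; coefficients growing like $u^{(n^2-n)/2}/n!$ have no such fixed ratio, which is the obstruction described above.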
The limit formula yields exactly the same iterates as the matrix function method whenever it converges. With this formula it is perhaps easier to verify that the matrix function method just gives the regular iteration at the attracting fixed point, though I didn't try to prove this yet.

RE: matrix function like iteration without power series expansion - Gottfried - 07/01/2008

bo198214 Wrote:
Gottfried Wrote:
Using a matrix-expression this would be t°h(x) = W^-1 sum k=0..inf sum j=0..k (-1)^j * binomial(k,j) *diag(1,u^j,u^2j,...) * W sum j=0..k (-1)^j * binomial(k,j) *dV(u^j) = diag(u^j-1) *PPow( sum k=0..inf

what?

...editing-crap.

Quote:
Quote:
"not regularly" Euler-summable

However the aim was to not use the matrix method for computation. This is faster and does not require the function to be developable into a power series.

Yes, sure - I only introduced the matrix method to have an idea about the estimated coefficients. I'm testing a summation method which allows one to sum series with such a rate of divergence. There are the Hausdorff means, tightly related to the Riesz method. Unfortunately it is not obvious how, for a certain selection of parameters, the range of applicability can be determined. I'm comparing the results of my parameter settings with known results for strongly diverging series, and I'm also looking for articles on this...

Gottfried

E.g.: Über Klassen von Limitierungsverfahren, die die Klasse der Hausdorffschen Verfahren als Spezialfall enthalten. Endl, Kurt. In: Mathematische Zeitschrift, vol. 65, pp. 113-132. Online at digicenter Göttingen.

RE: matrix function like iteration without power series expansion - Gottfried - 07/02/2008

bo198214 Wrote:
Guys! That I didnt see that before! We have a very simple formula for computing the $t$-th iterate of an arbitrary function:
$f^{\circ t} = \sum_{n=0}^\infty \binom{t}{n} \sum_{k=0}^n \binom{n}{k} (-1)^{n-k} f^{\circ k}$

I want to add that this expression excites me very much.
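The conjectured agreement with regular iteration at the attracting fixed point can at least be probed numerically. The sketch below (my own construction; `binom_iter`, `regular_iter` and the parameter choices are invented for illustration) computes the half-iterate of $f(x)=\sqrt{2}^x$ at $x=1$ once with the double binomial series and once via a Schröder-type limit formula for regular iteration at the fixed point 2 with multiplier $u=\ln 2$; the two agree to within the truncation error.

```python
from math import log, sqrt

B = sqrt(2.0)
A = 2.0               # attracting fixed point of f(x) = B**x
U = log(2.0)          # multiplier f'(A) = ln(B) * B**A = ln(2)

def f(x):
    return B ** x

def f_inv(y):
    return log(y) / log(B)

def gbinom(t, n):
    r = 1.0
    for i in range(n):
        r *= (t - i) / (i + 1)
    return r

def binom_iter(t, x, terms=30):
    """f^t(x) via the double binomial (Newton) series."""
    orbit = [x]
    for _ in range(terms):
        orbit.append(f(orbit[-1]))
    total, d = 0.0, orbit[:]
    for n in range(terms):
        total += gbinom(t, n) * d[0]
        d = [d[i + 1] - d[i] for i in range(len(d) - 1)]
    return total

def regular_iter(t, x, n=40):
    """Regular iteration at the fixed point A with multiplier U:
    psi(x) = lim U^-n (f^n(x) - A), and f^t(x) = psi^-1(U^t psi(x)),
    with psi^-1 realized by pulling back through f_inv."""
    z = x
    for _ in range(n):
        z = f(z)
    psi = (z - A) / U ** n            # approximate Schroeder coordinate
    w = A + U ** n * (U ** t * psi)   # linearized psi^-1 near A
    for _ in range(n):
        w = f_inv(w)
    return w
```

Both routines approximate the same half-iterate near 1.24; the residual difference here is dominated by truncating the binomial series at 30 terms.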
Just these days I was looking for alternatives to (hopefully improvements of) our fractional iteration approach. I was looking for two aspects:

- expression of the tetration by its own power towers instead of power series
- interpolation done not via polynomial interpolation of the coefficients of the power series, but via an interpolation involving gamma

The above formula provides both searched-for aspects in one shot. The first is obvious; the idea is that only values are involved which are provided by the function itself (so, for instance, possibly only a relevant subset of the complex numbers, namely those accessible by the function and its iterates themselves - hmm, amateurishly expressed; I hope the idea behind this can be understood anyway). The second holds because the fractional binomials use gamma at fractional parameters.

So - in my view - this is a big shot...

Gottfried

RE: matrix function like iteration without power series expansion - Gottfried - 07/08/2008

Hmm, rereading this all, it appears to me that I might not have understood Henryk's query correctly. [update: removed silly question]

Gottfried

RE: matrix function like iteration without power series expansion - andydude - 07/08/2008

This seems to be S. C. Woon's method. If you take Woon's double-binomial expansion and let w=1, then you should get the formula you gave. I wonder if it is equivalent, or slightly different?

Andrew Robbins

RE: matrix function like iteration without power series expansion - Gottfried - 07/08/2008

Henryk - I cross-checked some results of your method against the diagonalization method. I got some differences which look somehow suspicious to me. Using, for instance,

u = 0.9 (to be near a critical value)
t = exp(u) = 2.45960311116
b = t^(1/t) = 1.44182935647

I got

y = b°0.5(1) = 1.25590667...

where in these digits no difference appears.
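The second aspect, that the fractional binomials bring in the gamma function, can be made concrete: for real $t$ one has $\binom{t}{n} = \frac{\Gamma(t+1)}{\Gamma(n+1)\,\Gamma(t-n+1)}$, which matches the falling-factorial product definition. A quick check (my own illustration):

```python
from math import gamma

def binom_product(t, n):
    """binom(t, n) as a falling-factorial product, any real t."""
    r = 1.0
    for i in range(n):
        r *= (t - i) / (i + 1)
    return r

def binom_gamma(t, n):
    """binom(t, n) via the gamma function; t - n + 1 must avoid
    non-positive integers, which holds for fractional t like 0.5."""
    return gamma(t + 1) / (gamma(n + 1) * gamma(t - n + 1))

pairs = [(binom_product(0.5, n), binom_gamma(0.5, n)) for n in range(12)]
```

So interpolating the iteration parameter $t$ through these binomials is, implicitly, an interpolation through gamma at fractional arguments.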
But I got differences in less significant digits which - as I said - look suspicious: either my summation methods are not perfect (in fact they give different approximations - I'm still searching for better ones), or I simply need more terms (contrary to my impression). They stabilize (using 96 terms) with differences in the last partial sums of order 1e-18 to 1e-22, so they seem to give usable results. However, the difference between the two values of y, about

0.00000000259698761292,

is so large compared to the summation-internal inaccuracy that it seems to be systematic. Different summation methods gave differing results, even worse ones - so in any case I have to improve my summation for this type of divergent series. For t=2, b=sqrt(2) it was much better, but possibly things were simply not visible... I hope I can locate (and possibly remove) the source of the difference...

Gottfried

RE: matrix function like iteration without power series expansion - andydude - 07/08/2008

andydude Wrote:
This seems to be S.C.Woon's method.

Just for reference, here is the article. The general formula is given on page 32 (formula 71), and I believe it is the same as the one given above.

Andrew Robbins

RE: matrix function like iteration without power series expansion - Gottfried - 07/09/2008

While I played around with some transformations to find a meaningful transformation for a summation in the divergent case, I found a transformation of the problem which at least provides an improvement of the approximation in the convergent case. Henryk's formula is (I inserted (x) at f°t):

$\hspace{24} f^{\circ t}(x) = \sum_{n=0}^{\infty} \binom{t}{n} \sum_{j=0}^n (-1)^{n+j} \binom{n}{j} f^{\circ j}(x)$

Here I rewrite this in vectorial notation, for simplicity.
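The parameter setup of the cross-check is easy to reproduce. The sketch below (my own illustration; `half_iterate` is an invented name) rebuilds the constants, confirms that $t$ is a fixed point of $x \mapsto b^x$ with multiplier $u = 0.9 < 1$ (attracting, since $b$ stays below $e^{1/e} \approx 1.4447$), and evaluates the binomial series for $b^{\circ 0.5}(1)$, which lands near the quoted $1.2559\ldots$

```python
from math import exp, log, e

u = 0.9
t = exp(u)            # fixed point of x -> b**x, ~2.45960311116
b = t ** (1.0 / t)    # base, ~1.44182935647, still below e**(1/e)

def f(x):
    return b ** x

def gbinom(s, n):
    r = 1.0
    for i in range(n):
        r *= (s - i) / (i + 1)
    return r

def half_iterate(x, terms=30):
    """b^(o 0.5)(x) via the double binomial series; the inner sums
    are forward differences of the orbit x, f(x), f(f(x)), ..."""
    orbit = [x]
    for _ in range(terms):
        orbit.append(f(orbit[-1]))
    total, d = 0.0, orbit[:]
    for n in range(terms):
        total += gbinom(0.5, n) * d[0]
        d = [d[i + 1] - d[i] for i in range(len(d) - 1)]
    return total

y = half_iterate(1.0)
```

The derivative of $b^x$ at the fixed point is $\ln(b)\,t = u$, so $u = 0.9$ really is "near a critical value": at $b = e^{1/e}$ the multiplier reaches 1 and the fixed point stops being attracting.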
With this, Henryk's formula may be rewritten as a product of two vectors V1 and V2,

$\hspace{24} f^{\circ t}(x) = V_1^{\sim} * V_2$

where

$\hspace{24} V_1 = \text{col}_{\tiny r=0}^{\tiny \infty} \left[ \binom{t}{r} \right]$

$\hspace{24} V_2 = \text{col}_{\tiny r=0}^{\tiny \infty} \left[ \sum_{c=0}^r (-1)^{r+c} \binom{r}{c} f^{\circ c}(x) \right]$

("col" means here: a column vector of index/length as subscripted; "~" denotes transposition), into

$\hspace{24} f^{\circ t}(x) = V_1^{\sim} * V_2$

where now

$\hspace{24} V_1 = \text{col}_{\tiny r=0}^{\tiny \infty} \left[ \sum_{c=0}^r (-1)^{r+c} \binom{r}{c} c^t \right]$

$\hspace{24} V_2 = \text{col}_{\tiny r=0}^{\tiny \infty} \left[ \sum_{c=0}^r \frac{S1(r,c)}{r!} f^{\circ c}(x) \right]$

Here S1(r,c) means the Stirling number of the first kind at row r, column c.

Using f(x) = b^x, f°t(x) = f°(t-1)(b^x), with b=sqrt(2) and t=0.5, I get different series with the two methods. Here is the comparison (the numbers are the individual terms of the vector product V1~ * V2):

$\hspace{24} \begin{array}{ll}
\text{binomial} & \text{stirling} \\
1 & . \\
0.207106781187 & 1.41421356237 \\
0.0244875256635 & -0.0639425018608 \\
0.00661871779280 & -0.0251486716496 \\
0.00250547215205 & -0.0138115634657 \\
0.00115807690080 & -0.00888364117548 \\
0.000610766543495 & -0.00626721909452 \\
0.000353479520962 & -0.00469791870292 \\
0.000219183130875 & -0.00367580993247 \\
0.000143364426110 & -0.00296926498063 \\
0.0000978609982638 & -0.00245835601888 \\
0.0000691742158295 & -0.00207564611019 \\
0.0000503406857704 & -0.00178071233011 \\
0.0000375480921629 & -0.00154805648671 \\
0.0000286035266271 & -0.00136090991516 \\
0.0000221916461008 & -0.00120785502708 \\
0.0000174945666035 & -0.00108088575828 \\
0.0000139875666870 & -0.000974244878765 \\
\ldots & \ldots \\
\ldots \text{ continued} & \text{at row 90} \ldots \\
\ldots & \ldots \\
0.0000000285776133075 & -0.0000638676539217 \\
0.0000000274286147892 & -0.0000627660387118 \\
0.0000000263376884568 & -0.0000616955986773 \\
0.0000000253013130350 & -0.0000606551185701 \\
0.0000000243162155134 & -0.0000596434431489 \\
0.0000000233793512466 & -0.0000586594736066 \\
\ldots & \ldots \\
\end{array}$

The problem in getting a useful approximation from these methods is the (slow) monotone decrease (or increase) of these terms and hence of the partial sums. That may also be the reason for the observed difference between the diagonalization method and this type of computation: even using 200 and more terms (as I did) leaves a remainder of the order of 1e-10 or so. The nice thing is that we now have an interval for the result, from which we may interpolate a better estimate. However, I don't see how yet (using the mean seems to be too simple...)
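The binomial column of the comparison table can be regenerated directly, which also makes a good check on the whole setup: its $n$-th entry is $\binom{0.5}{n}$ times the $n$-th forward difference of the orbit of $x=1$ under $f(x)=\sqrt{2}^{\,x}$. A sketch (my own illustration, not code from the thread):

```python
from math import sqrt

b = sqrt(2.0)

def f(x):
    return b ** x

def gbinom(t, n):
    r = 1.0
    for i in range(n):
        r *= (t - i) / (i + 1)
    return r

def binomial_terms(x, count):
    """Individual terms binom(0.5, n) * Delta^n of the orbit of x,
    matching the 'binomial' column of the table above."""
    orbit = [x]
    for _ in range(count):
        orbit.append(f(orbit[-1]))
    terms, d = [], orbit[:]
    for n in range(count):
        terms.append(gbinom(0.5, n) * d[0])
        d = [d[i + 1] - d[i] for i in range(len(d) - 1)]
    return terms

ts = binomial_terms(1.0, 6)
```

The reproduced terms agree with the tabulated ones to all printed digits, e.g. `ts[1]` is $(\sqrt{2}-1)/2 \approx 0.207106781187$; the slow, roughly $n^{-5/2}$ decay of the later entries is exactly the slow monotone behavior lamented above.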