Transseries, nest-series, and other exotic series representations for tetration
#1
Hi.

I once heard, in a discussion about tetration (or a related sequence), of something called "transseries": a formal algebra that, among other things, subsumes formal nested sums, sums of exponentials, and other interesting types of series, far more general than simple power series/Taylor series. It encompasses even the Mittag-Leffler expansion, the Newton series, and much more. See the paper here:

http://arxiv.org/abs/0801.4877

It seems, as I'll explore a bit below, that this might be very useful in resolving the problems with continuous summation and the Faulhaber formula. Of course the goal is obvious: to sum up Ansus' formula for tetration. (Yes, I'm a big fan of Ansus' continuum sum formula, if you haven't noticed by now; that is because it seems to hold the promise of tetrating arbitrary bases instead of just some restricted subset of them.)

Consider, for example, the problem of summing the double-exponential function \( f(z) = e^{e^z} \) with a fractional/real/complex number of terms. The first approach one might think of is to apply Faulhaber's formula to the Taylor series (see http://math.eretrandre.org/tetrationforu...353&page=2). However, this formula seems to have very restrictive convergence conditions. For instance, it does not work on functions with singularities... and even here, if one applies Faulhaber's formula directly to the Taylor series of this function at \( z = 0 \), the formula diverges, even though the function is entire! Aiee... Apparently, Faulhaber's formula fails not only for functions with singularities, but also for entire functions beyond exponential type: even \( f(z) = e^{z^2} \) does not work, as it is not of exponential type. We can't sum the Gaussian either; and even if we only want sums over real numbers instead of complex ones, no joy, because it is still not of exponential type on the complex plane. This was a rather saddening discovery, as it means the method won't work for tetration directly: tetration has both counts against it, being neither entire nor of exponential type, since it grows far more strongly than any exponential. This suggests that we may need a type of representation that is more powerful, more exotic, and more conducive to continuum summation of such functions...

The double-exponential function: an entire function not of exponential type

Instead of using an ordinary power series to tackle this problem, we use a simple case of transseries. For our double-exponential example, this leads to a series in terms of \( e^z \):

\( f(z) = \sum_{n=0}^{\infty} \frac{1}{n!} e^{nz} \)

obtained by formally composing exp with its Taylor series representation at 0. Since by Faulhaber's formula we have \( \sum_{k=0}^{z-1} e^{nk} = \frac{e^{nz} - 1}{e^{n} - 1} \) for \( n \ge 1 \) (while the \( n = 0 \) term is simply \( z \)), we get

\( \sum_{k=0}^{z-1} e^{e^k} = \sum_{k=0}^{z-1} f(k) = \sum_{n=0}^{\infty} \frac{1}{n!} \sum_{k=0}^{z-1} e^{nk} = z + \sum_{n=1}^{\infty} \frac{e^{nz} - 1}{n! \left(e^{n} - 1\right)} \)

and the last series does converge. Graphs are given below, showing \( e^{e^x} \) and \( \sum_{n=0}^{x-1} e^{e^n} \) for \( -2 \le x \le 2 \).
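For the curious, here is a minimal numerical sketch (Python; the helper name is mine, just for illustration) of the series above, checking it against the ordinary sum at an integer upper limit:

```python
import math

def continuum_sum_double_exp(z, terms=60):
    """Continuum sum of f(x) = e^(e^x) from 0 to z-1 via its exp-series;
    the n = 0 term of the series is taken as its limiting value, z."""
    total = z  # n = 0: sum_{k=0}^{z-1} e^(0*k) = z
    for n in range(1, terms):
        total += (math.exp(n * z) - 1) / (math.factorial(n) * (math.exp(n) - 1))
    return total

# Check against the ordinary sum at an integer upper limit:
# z = 3 should give e^(e^0) + e^(e^1) + e^(e^2).
direct = sum(math.exp(math.exp(k)) for k in range(3))
print(continuum_sum_double_exp(3.0), direct)
```

At non-integer \( z \) the same routine evaluates the interpolated sum plotted below.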


The reciprocal function: a function with a singularity

Another example: take \( f(z) = \frac{1}{1 + z} \). It has a singularity at \( z = -1 \), so Faulhaber's formula applied to a Taylor series developed at \( z = 0 \) fails. However, once again, transseries can get us out of the bind. One type that works is the Newton series of \( f(z) \), a type of difference series (see http://en.wikipedia.org/wiki/Difference_...ton_series). For this function, we take \( a = 0 \):

\( f(z) = \sum_{k=0}^{\infty} \frac{\Delta^k[f](a)}{k!} (z)_k = \sum_{k=0}^{\infty} \frac{(-1)^k}{(k+1)!} (z)_k \).

As the falling factorials are polynomials in \( z \), we can see this is a type of transseries. The continuum sum of a falling factorial, from the umbral calculus (it can also be obtained from Faulhaber's formula, so it is consistent with the technique used for the double-exponential function), is \( \sum_{n=0}^{z-1} (n)_k = \frac{(z)_{k+1}}{k+1} \), analogous to the integral of a power, and we derive

\( \sum_{n=0}^{z-1} f(n) = \sum_{k=0}^{\infty} \frac{(-1)^k}{(k+1) (k+1)!} (z)_{k+1} \).

A numerical test (2800 terms) at \( z = 1.5 \) yields approximately 1.28037230; subtracting \( \gamma \approx 0.57721566 \) gives 0.70315664, and indeed \( \Psi(1.5 + 1) \approx 0.70315664 \), so, as expected, we recover the digamma function:

\( \sum_{n=0}^{z-1} \frac{1}{n+1} = \gamma + \Psi(z+1) = H_z \).

I choose not to reproduce the graph here, as the graph of the digamma function is already well known, and this gives what looks to be the same graph. I'm not sure how to go about a formal proof of the above hypothesis; I suppose some expansion of the digamma function may be useful here, but I'm not sure which, especially considering the displacement by the Euler constant.
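Here is a sketch of that numerical test (the term-ratio recurrence is my own rearrangement, used to avoid huge intermediate factorials):

```python
import math

def newton_continuum_sum(z, terms=4000):
    """Continuum sum of f(x) = 1/(1+x) from 0 to z-1 via its Newton series,
    sum_{k>=0} (-1)^k (z)_{k+1} / ((k+1) (k+1)!),
    computed with a term-ratio recurrence to avoid huge intermediates."""
    term = float(z)  # k = 0 term: (z)_1 / (1 * 1!) = z
    total = 0.0
    for k in range(terms):
        total += term
        # t_{k+1} / t_k = -(z - k - 1) (k + 1) / (k + 2)^2
        term *= -(z - k - 1) * (k + 1) / (k + 2) ** 2
    return total

# At integer z the falling factorials eventually vanish, so the sum is finite:
print(newton_continuum_sum(3))        # H_3 = 1 + 1/2 + 1/3 ≈ 1.8333
# At z = 1.5, compare with H_{3/2} = 8/3 - 2 ln 2 = gamma + Psi(5/2):
print(newton_continuum_sum(1.5), 8 / 3 - 2 * math.log(2))
```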

Tetration

Thus, I wonder about the possibility of representing tetration, not directly as a Taylor series but instead as some type of transseries, such as a series of exp, or as, perhaps more simply and generally, just a nested power sum like

\( \mathrm{tet}(z) = \sum_{n=0}^{\infty} a_n \left(\sum_{m=0}^{\infty} b_m z^m\right)^n \)

or with even more nestings, etc. Note that the double exponential is also a series of the above type, with \( a_n = b_n = \frac{1}{n!} \). Ideally we would want representations in which the operations of integration, continuum sum, multiplication by \( \log(b)^z \) (where \( b \) is the base of the tetration; this is not needed for base \( e \)), and the exponential function are easy to perform.

Since tetration grows toward an infinite number of nestings of exp by the relation \( \mathrm{tet}_b(n) = \exp^n_b(1) \) (note also the Cantor-bouquet fractal structure in the graph of, say, Kouznetsov's tet construction on the z-plane, approximating the Julia set of exp in structure, and that a continuum sum of a triple exponential requires nesting three sums), I suspect one would need a transseries with an infinite number of nested sums to pull it off, essentially summing over a 2-dimensional matrix of coefficients. (Hmm... yet another matrix method...)

However, I couldn't test this numerically, as I'm not sure how to implement the necessary operations (continuum sum, integral, multiplication by \( \log(b)^z \), and exp) on a matrix of coefficients representing truncations of infinitely nested sums, so as to run Ansus' formula on it and see if we can get some kind of convergence. (Note that the series above can already be represented by a \( 2 \times \infty \) matrix, with the first row being the sequence \( b \) and the second the sequence \( a \).) The exponential is especially troublesome: it's bad enough for a single sum (Bell polynomials and all), never mind arbitrarily many nestings. So is the continuum sum, what with that power of a sum in there. Moreover, the same function can have multiple transseries representations: just for the simple double exponential we had both its Taylor series and the exp-of-exp series, both of which can be represented by a \( 2 \times \infty \) matrix of coefficients for a double sum.

Thus we might also need to consider exotic representations of the involved operators (especially \( \exp \)) to get convergence. More reason to find that Mittag-Leffler stuff? Yet I'm really not sure how to go about taking its exp, or how we'd apply Faulhaber's formula to that puppy.

Finally, note that the Mittag-Leffler expansion is a double sum without powering of the inner sum, so perhaps its explicit generation from the formula is not necessary (since the function is initially unknown, so too are all the coefficients); only operations on double sums of similar form would be needed.
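As an aside, merely evaluating a truncated nested series from its \( 2 \times \infty \) coefficient matrix is easy; here is a small sketch (my own illustration) recovering the double exponential from \( a_n = b_n = 1/n! \):

```python
import math

def eval_nested(a, b, z):
    """Evaluate the nested series sum_n a[n] * (sum_m b[m] z^m)^n
    from two truncated coefficient rows (a 2 x N coefficient matrix)."""
    inner = sum(bm * z ** m for m, bm in enumerate(b))
    return sum(an * inner ** n for n, an in enumerate(a))

N = 30
row = [1.0 / math.factorial(n) for n in range(N)]
# a_n = b_n = 1/n! reproduces the double exponential e^(e^z):
print(eval_nested(row, row, 0.5), math.exp(math.exp(0.5)))
```

The hard part, of course, is not evaluation but implementing exp, integration, and the continuum sum as operators on such matrices.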
#2
Howdy,
Thanks for the information on transseries, I hadn't heard of them before. The most general method I've developed for extending tetration is based on using a system of nested summations like you are talking about. See
http://tetration.org/Combinatorics/index.html
and http://tetration.org/Combinatorics/Schro...index.html .
Daniel
#3
(11/26/2009, 03:57 PM)Daniel Wrote: The most general method I've developed for extending tetration is based on using a system of nested summations like you are talking about.

So you can compute a convergent nested series for a fractional iterate of \( e^x-1 \)? This would be very interesting, as Baker and Écalle showed that the (ordinary) power series of the regular iterates of \( e^x-1 \) do not converge (except for integer iterates, of course). I think there is a paper of Écalle where he shows that they are nevertheless Borel-summable. Unfortunately it's difficult (by a lack of theorems/propositions) to see on your site what you are actually able to do with your sums.

@Mike3: Yes, those transseries are really interesting stuff, but at the moment it is beyond my scope to dive deeper into the topic and apply it to continuum sums.
#4
(11/26/2009, 03:57 PM)Daniel Wrote: Howdy,
Thanks for the information on transseries, I hadn't heard of them before. The most general method I've developed for extending tetration is based on using a system of nested summations like you are talking about. See
http://tetration.org/Combinatorics/index.html
and http://tetration.org/Combinatorics/Schro...index.html .
Daniel

Yes. The problem with that method is that it appears to be the so-called "regular iteration" of a function near a fixed point. This works well for tetration to bases \( e^{-e} \le b \le e^{1/e} \), but not outside that range, where it generates an entire tetrational. That is no good, because such a function is not real on the real axis for, e.g., \( b > e^{1/e} \), and is inconsistent with the behavior of the tetrational in that range, where it has singularities. This seems to be a pitfall of every fixed-point iteration method I've seen so far, unless I've missed something...

The method I mentioned, however, does not approach the problem from the direction of dynamical systems, but rather from that of superfunctions, functional equations, algebra and analysis, though this may overlap, and does not involve fixed points.

It uses what I call "Ansus' formula" for the tetration (after the poster who first posted it here):

\(
\log_b\left(\frac{\mathrm{tet}'_b(z)}{\mathrm{tet}'_b(0) \log(b)^z}\right) = \sum_{n=0}^{z-1} \mathrm{tet}_b(n)
\)

This is derived from the recurrence of the iteration

\( \mathrm{tet}_b(z + 1) = b^{\mathrm{tet}_b(z)} \).

with

\( \mathrm{tet}_b(0) = 1 \).

Differentiating the recurrence gives a recurrence for the derivative, which can be turned into an "indefinite product", and then, via an exp/log identity (see here: http://math.eretrandre.org/tetrationforu...hp?tid=273), into a sum; the result is the formula above, with a sum from 0 to \( z-1 \). The idea is then to generalize this to fractional, real, or complex values of \( z \), obtaining what I call a "continuum sum" (it could also be called a "fractional sum", a "sum with non-integer bounds", or a "continuous antidifference").
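In more detail, the chain of steps runs as follows (for integer upper limits; the continuum sum then generalizes the right-hand side):

```latex
% Differentiate the recurrence tet_b(z+1) = b^{tet_b(z)}:
\mathrm{tet}'_b(z+1) = \log(b)\, b^{\mathrm{tet}_b(z)}\, \mathrm{tet}'_b(z)
                     = \log(b)\, \mathrm{tet}_b(z+1)\, \mathrm{tet}'_b(z).
% Iterating this from 0 to n gives an (indefinite) product:
\mathrm{tet}'_b(n) = \mathrm{tet}'_b(0)\, \log(b)^n \prod_{k=1}^{n} \mathrm{tet}_b(k).
% Taking log_b and using log_b tet_b(k) = tet_b(k-1) turns the product into a sum:
\log_b\!\left(\frac{\mathrm{tet}'_b(n)}{\mathrm{tet}'_b(0)\, \log(b)^n}\right)
  = \sum_{k=1}^{n} \log_b \mathrm{tet}_b(k)
  = \sum_{k=0}^{n-1} \mathrm{tet}_b(k).
```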

Now, obviously there are many ways to interpolate a sum, just as there are many ways to interpolate tetration directly. However, Ansus' formula transfers the problem of tetration into the domain of sums, which are an easier operation to study. So if we have an interpolation for sums, we should be able to use the formula to get one for tetration.

The trick is, what is the most "natural" interpolation of a sum? For power functions, we have a formula called "Faulhaber's Formula". It turns out one can actually get to this formula after starting with just basic identities for sums (just ask if you want to see the method), plus some analytic continuation. Thus the possibility of applying that to an analytic function given by a Taylor series presents itself, as it's a sum of powers.
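To make the "sum of powers" step concrete, here is a minimal sketch (exact rational arithmetic; the helper names are mine) of Faulhaber's formula for \( \sum_{k=0}^{z-1} k^p \), using the \( B_1 = -1/2 \) convention for the Bernoulli numbers:

```python
from fractions import Fraction
from math import comb

def bernoulli(N):
    """Bernoulli numbers B_0 .. B_N (B_1 = -1/2 convention), from the
    recurrence sum_{j=0}^{m} C(m+1, j) B_j = 0 for m >= 1."""
    B = [Fraction(1)]
    for m in range(1, N + 1):
        B.append(-sum(comb(m + 1, j) * B[j] for j in range(m)) / (m + 1))
    return B

def faulhaber(p, z):
    """Continuum sum of k^p from k = 0 to z-1:
    (1/(p+1)) sum_{j=0}^{p} C(p+1, j) B_j z^(p+1-j)."""
    B = bernoulli(p)
    return sum(comb(p + 1, j) * B[j] * Fraction(z) ** (p + 1 - j)
               for j in range(p + 1)) / (p + 1)

print(faulhaber(3, 5))   # 0^3 + 1^3 + 2^3 + 3^3 + 4^3 = 100
```

Applied term by term to a Taylor series, this is exactly the operation whose convergence is at issue below.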

The problem, however, is that, as mentioned, it only appears to work for a restricted subset of analytic functions: though I do not yet have a rigorous proof, it seems the coefficients must decay more rapidly than the Bernoulli numbers grow, i.e. the function must be entire and of at most exponential type. Tetration violates both requirements.

However, perhaps Faulhaber's formula is not as useless as it may seem here. There are much more general series representations than Taylor series, including the transseries I mentioned, which are quite general in themselves: they include Taylor series, and also other types of expansions such as nested power series, "exp-series" (like the one I gave for the double-exponential function), Newton series, the Mittag-Leffler star expansion, and more. They can have larger regions of convergence than Taylor series, such as a whole half-plane or even an entire star like the Mittag-Leffler star. (For the Mittag-Leffler star expansion, see http://eom.springer.de/s/s087230.htm; unfortunately it does not provide the necessary magic coefficients, and I've had an awful time trying to find them.) As could be seen from the first post, a properly chosen transseries can provide the continuum sum of more functions than the direct application of Faulhaber to Taylor series: two examples were given in my opening post, the double-exponential function and the reciprocal function, neither of which Faulhaber-on-Taylor could sum, the first being not of exponential type and the second having a singularity (so not entire).

For extending tetration, the problem is then to find a good transseries representation and to define the continuum sum, exp, integral, and multiplication-by-\( \log(b)^z \) operators so that they converge for the tetration series, and so that the iteration of Ansus' formula converges. So far, I don't have much on this.
#5
What are "magic" coefficients?
#6
(11/28/2009, 04:56 AM)andydude Wrote: What are "magic" coefficients?

On this page:

http://eom.springer.de/s/s087230.htm

there's a formula for the "Mittag-Leffler expansion in a star", which is not a Taylor series but a different type of series: a sum of polynomials that converges over a whole star (this is explained on the page; contrast it with a Taylor series, which converges only in a disk when the function is not entire). It looks like two nested sums:

\( f(z) = \sum_{n=0}^{\infty} \sum_{\nu=0}^{k_n} c_{\nu}^{(n)} \frac{f^{(\nu)}(a)}{\nu!} (z - a)^{\nu} \)

(and is a special case of the "nested series" and "transseries" I mention in the thread title)

The "magic" numbers are the polynomial degrees \( k_n \) and the coefficients \( c_{\nu}^{(n)} \) of the terms. According to the site these are "independent of the form of \( f(z) \) and can be evaluated once and for all", yet how to do so is not explained.
#7
Hmm. My hypothesis that the double sum, or at least the exp-series, can only provide continuum sums of functions of at most "double-exponential type" seems to be wrong. Indeed, it seems that if an exp-series exists and converges for a function, then its continuum sum does too.

Consider the triple-exponential function \( f(x) = \exp^3(x) = e^{e^{e^x}} \). This can be expressed as an exp-series

\( e^{e^{e^x}} = \sum_{n=0}^{\infty} a_n e^{nx} \)

where \( a_n \) are the coefficients of the Taylor series of \( e^{e^x} \) at \( x = 0 \) (MacLaurin series). Taking the continuum sum gives,

\( \begin{align}\sum_{n=0}^{x-1} e^{e^{e^n}} &= a_0 x + \sum_{n=1}^{\infty} \frac{a_n}{e^{n} - 1} \left(e^{nx} - 1\right) \\ &= \left(a_0 - \sum_{n=1}^{\infty} \frac{a_n}{e^{n} - 1}\right) x + \sum_{n=1}^{\infty} \frac{a_n}{e^{n} - 1} e^{nx}\end{align} \)

As the coefficients of both sums are smaller in magnitude than those of the original series (because \( e^{n} - 1 > 1 \) for all \( n \ge 1 \)), if the original series converges at 0 and at the point \( x \), so does this one (by the comparison test). Since we have an expression for the continuum sum of \( e^{e^{e^x}} \), the proof is complete. Indeed it disproves my original hypothesis that such a series could not yield a continuum sum of anything growing faster than a double exponential, and it proves something stronger: the continuum sum of any convergent exp-series also converges, unlike the case of Taylor series summed via direct application of Faulhaber's formula.
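Here is a numerical sketch of this continuum sum (my own check, using the fact that the Taylor coefficients of \( e^{e^x} \) are \( a_n = e B_n / n! \), with \( B_n \) the Bell numbers computed via the Bell triangle):

```python
import math

def bell_numbers(N):
    """First N Bell numbers B_0 .. B_{N-1} via the Bell triangle."""
    bells, row = [1], [1]
    for _ in range(N - 1):
        new = [row[-1]]          # each row starts with the previous row's last entry
        for x in row:
            new.append(new[-1] + x)
        row = new
        bells.append(row[0])
    return bells

N = 160
# Taylor coefficients of e^(e^x) at 0 are a_n = e * B_n / n!
a = [math.e * b / math.factorial(n) for n, b in enumerate(bell_numbers(N))]

def continuum_sum_triple_exp(x):
    """Continuum sum of exp^3 from 0 to x-1 via the exp-series above."""
    return a[0] * x + sum(a[n] * (math.exp(n * x) - 1) / (math.exp(n) - 1)
                          for n in range(1, N))

# Integer check: x = 2 should give exp^3(0) + exp^3(1) = e^e + e^(e^e).
direct = math.exp(math.exp(1.0)) + math.exp(math.exp(math.e))
print(continuum_sum_triple_exp(2.0), direct)
```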

As exp-series look to be a special case of nested series, this suggests that even 2 layers of nesting may be able to represent tetration and continuum-sum it, though there's still no proof of that. Exp-series themselves, however, do not appear useful for doing tetration with Ansus' formula, for two reasons. First, any function constructed with them is \( 2\pi i \)-periodic, yet tetration appears not to be, in pretty much every "good" extension there is. (They may be able to express tetration for the base \( b = e^{e^{1-e}} \), whose regular tetration is periodic with the required period, so that the exp-series can be recovered via the Fourier series, but a single base isn't very useful.) Second, we can't even represent \( f(x) = x \) as an exp-series, so we can't continuum-sum one exp-series into another exp-series; this makes them useless for iteratively applying Ansus' formula to generate tetrationals!
#8
(11/26/2009, 04:42 PM)bo198214 Wrote:
(11/26/2009, 03:57 PM)Daniel Wrote: The most general method I've developed for extending tetration is based on using a system of nested summations like you are talking about.

So you can compute a converging nested series for a fractional iterate of \( e^x-1 \)? This would be very interesting as it was shown by Baker and Écalle that the (ordinary) power series of the regular iterates of \( e^x-1 \) do not converge (except for integer iterates of course). I think though there is a paper of Écalle where he shows that they are Borel-summable despite.
I can compute a nested series for the fractional iterates of \( e^x-1 \), but I don't claim the series converges. I think the series is a formal power series. It is interesting to know that the series is Borel-summable.

(11/26/2009, 04:42 PM)bo198214 Wrote: Unfortunately its difficult (by a lack of theorems/propositions) to see on your site what you are actually able to do with your sums.
I developed Schroeder summations while looking for a tool that illuminates the combinatorial structure underlying iterated functions \( f^n(z) \) and their derivatives \( D^m f^n(z) \). Schroeder summations are general: they are relevant to \( e^x-1 \), tetration, pentation and so on, as long as the function is differentiable in the complex plane and has a fixed point.

Schroeder summations are consistent with what is known from complex dynamics, particularly the classification of fixed points. In fact, Schroeder summations can be used to derive and explain in detail the classification of fixed points, including Schroeder's equation, Abel's equation, and the conditions under which each can be used.
Daniel
#9
(11/29/2009, 09:09 AM)Daniel Wrote: I can compute a nested series for the fractional iterates of \( e^x-1 \), but I don't claim the series converges. I think the series is a formal power series. It is interesting to know that the series is Borel-summable.

So your Schröder sums compute the regular iteration, is that true? I think it is very important to know those equalities. For example it took a while until I realized that the matrix approach introduced by Gottfried is actually equal to the regular iteration.
As a test, e.g. the regular half-iterate of \( e^x-1 \) has as the first 10 coefficients:
\( 0 \), \( 1 \), \( \frac{1}{4} \), \( \frac{1}{48} \), \( 0 \), \( \frac{1}{3840} \), \( -\frac{7}{92160} \), \( \frac{1}{645120} \), \( \frac{53}{3440640} \), \( -\frac{281}{30965760} \)

Or generally the \( t \)-th iterate has as the first 10 coefficients

\( 0 \),
\( 1 \),
\( \frac{1}{2} t \),
\( \frac{1}{4} t^{2} - \frac{1}{12} t \),
\( \frac{1}{8} t^{3} - \frac{5}{48} t^{2} + \frac{1}{48} t \),
\( \frac{1}{16} t^{4} - \frac{13}{144} t^{3} + \frac{1}{24} t^{2} - \frac{1}{180} t \),
\( \frac{1}{32} t^{5} - \frac{77}{1152} t^{4} + \frac{89}{1728} t^{3} - \frac{91}{5760} t^{2} + \frac{11}{8640} t \),
\( \frac{1}{64} t^{6} - \frac{29}{640} t^{5} + \frac{175}{3456} t^{4} - \frac{149}{5760} t^{3} + \frac{91}{17280} t^{2} - \frac{1}{6720} t \),
\( \frac{1}{128} t^{7} - \frac{223}{7680} t^{6} + \frac{1501}{34560} t^{5} - \frac{37}{1152} t^{4} + \frac{391}{34560} t^{3} - \frac{43}{32256} t^{2} - \frac{11}{241920} t \),
\( \frac{1}{256} t^{8} - \frac{481}{26880} t^{7} + \frac{2821}{82944} t^{6} - \frac{13943}{414720} t^{5} + \frac{725}{41472} t^{4} - \frac{2357}{580608} t^{3} + \frac{17}{107520} t^{2} + \frac{29}{1451520} t \),

Does that match your findings?
I think these formulas are completely derivable from integer iteration: if one knows that each coefficient is a polynomial in \( t \), then a polynomial of degree \( d \) is determined by \( d+1 \) values, and these can be obtained from just that many consecutive integer iterates. So this sounds really like your Schröder summation.

However an alternative approach is just to solve the equation \( f^{\circ t}\circ f=f\circ f^{\circ t} \) for \( f^{\circ t} \), where \( f \) and \( f^{\circ t} \) are treated as formal powerseries.
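A sketch of that alternative approach for the half-iterate \( t = 1/2 \): solve \( g \circ g = e^x - 1 \) for the formal power series \( g \) order by order, exactly in rationals (this is my own minimal implementation, not Daniel's Schroeder summation; the key point is that the degree-\( n \) coefficient of \( g(g(x)) \) contains the unknown \( c_n \) exactly twice):

```python
from fractions import Fraction
from math import factorial

N = 9  # truncate formal power series at degree N

def compose(g, h):
    """Coefficients of g(h(x)) for formal series (lists of Fractions,
    index = degree) with zero constant term, truncated at degree N."""
    out = [Fraction(0)] * (N + 1)
    hpow = [Fraction(1)] + [Fraction(0)] * N   # h^0 = 1
    for n in range(1, N + 1):
        # hpow <- hpow * h (truncated Cauchy product), so hpow = h^n
        hpow = [sum(hpow[i] * h[k - i] for i in range(k + 1))
                for k in range(N + 1)]
        for k in range(N + 1):
            out[k] += g[n] * hpow[k]
    return out

# Target: f(x) = e^x - 1, whose coefficients are 1/n!.
f = [Fraction(0)] + [Fraction(1, factorial(n)) for n in range(1, N + 1)]

# Solve g(g(x)) = f(x) order by order: the x^n coefficient of g(g(x))
# is 2*c_n plus terms in lower-order coefficients only.
c = [Fraction(0)] * (N + 1)
c[1] = Fraction(1)
for n in range(2, N + 1):
    comp = compose(c, c)          # computed with c[n] still 0
    c[n] = (f[n] - comp[n]) / 2
print(c)
```

The computed coefficients can then be compared directly against the list above.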
#10
(11/29/2009, 09:38 AM)bo198214 Wrote:
(11/29/2009, 09:09 AM)Daniel Wrote: I can compute a nested series for the fractional iterates of \( e^x-1 \), but I don't claim the series converges. I think the series is a formal power series. It is interesting to know that the series is Borel-summable.

So your Schröder sums compute the regular iteration, is that true? I think it is very important to know those equalities. For example it took a while until I realized that the matrix approach introduced by Gottfried is actually equal to the regular iteration.
[...]
Does that match your findings?
I think these formulas are completely derivable from integer-iteration. If one knows that each coefficient is just a polynomial then this polynomial is determined by the number of degree plus 1 values for \( t \) and these can be gained by just so many consecutive integer values. So this sounds really like your Schröder summation.

However an alternative approach is just to solve the equation \( f^{\circ t}\circ f=f\circ f^{\circ t} \) for \( f^{\circ t} \), where \( f \) and \( f^{\circ t} \) are treated as formal powerseries.

Yes, this does match my findings. See "Hierarchies of Height n" at http://tetration.org/Combinatorics/Schro...index.html for the results of my derivation. Note: multiply my terms by \( 1/n! \) to get your terms.

I agree there are alternate ways to iterate \( e^x-1 \); there are at least three I know of from my own research. Schroeder summations are not an efficient way to iterate \( e^x-1 \): it requires 2312 summations to evaluate the tenth term. What they do show is that there is a combinatorial structure underlying all iterated functions, Schroeder's Fourth Problem (http://www.research.att.com/~njas/sequences/A000311). Also, Schroeder summations are produced using Faà di Bruno's formula, which is an example of a Hopf algebra, important in several areas of quantum field theory including renormalization. It is my hope that this might shine some light on how to show that our formulations of iterated functions and tetration are actually convergent.
Daniel

