Half-iterate of exp(z)-1: hypothesis on the growth of coefficients
#21
(08/11/2022, 08:52 PM)Gottfried Wrote:
(08/11/2022, 07:03 PM)Leo.W Wrote: And then \(O(a^{2^n})\) is not Borel summable even after \(k\) times, but still summable by contour integrals with the residue theorem. And hence we can sum \(O(^n10)\) like \(1-10+10^{10}-10^{10^{10}}+\cdots\)
Ok, this goes now a bit off topic here, but have you seen my evaluations of that alternating series? That has been ...

Sorry, my focus has wandered; I had a quick idea: maybe we can prove the asymptotics of the terms in the sum simply by the residue theorem:
\[f(z)=\sum_{n\ge1}{a_nz^n},\qquad f(f(z))=e^z-1\]
\[a_n=\frac{1}{2\pi i}\int_{\gamma}{\frac{f(\tau)\mathrm{d}\tau}{\tau^{n+1}}}\]
where \(\gamma\) denotes a simple closed contour winding once around the point \(0\).
then, by the substitution \(\tau=f(t)\) and the assumption that \(f(z)\) behaves asymptotically like \(z\) near \(0\), we have
\[a_n=\frac{1}{2\pi i}\int_{\gamma}{\frac{(e^t-1)f'(t)\,\mathrm{d}t}{f(t)^{n+1}}}\]
\[a_n=[t^{-1}]\bigg(\frac{(e^t-1)f'(t)}{f(t)^{n+1}}\bigg)\]
Expanding \(e^t-1\), \(f'(t)\), and \(f(t)^{n+1}\) and collecting the coefficient of \(t^{-1}\), we can arrange this into a recurrence.
The recurrence should then prove the asymptotics of \(a_n\).
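The order-by-order determination of the \(a_n\) is easy to sanity-check. A minimal sketch (my own code, not Leo's exact recurrence) that solves the formal equation \(f(f(z))=e^z-1\) degree by degree with exact rational arithmetic; the helper names are made up:

```python
from fractions import Fraction
from math import factorial

def compose(f, g, N):
    """Coefficients [b_1..b_N] of f(g(z)), where f and g are lists
    [a_1, a_2, ...] of coefficients of series with no constant term."""
    result = [Fraction(0)] * (N + 1)           # index n -> coeff of z^n
    power = [Fraction(1)] + [Fraction(0)] * N  # g^0 = 1
    for k in range(1, N + 1):
        new = [Fraction(0)] * (N + 1)          # power <- power * g
        for i, ci in enumerate(power):
            if ci == 0:
                continue
            for j, gj in enumerate(g, start=1):
                if i + j <= N:
                    new[i + j] += ci * gj
        power = new
        if k - 1 < len(f):
            for n in range(N + 1):
                result[n] += f[k - 1] * power[n]
    return result[1:]                          # drop the zero constant term

N = 5
target = [Fraction(1, factorial(n)) for n in range(1, N + 1)]  # e^z - 1

# Solve f(f(z)) = e^z - 1: a_1 = 1, then each new a_n is fixed
# linearly by matching the z^n coefficient (a_n enters with factor 2).
a = [Fraction(1)]
for n in range(2, N + 1):
    trial = a + [Fraction(0)]
    err = target[n - 1] - compose(trial, trial, n)[n - 1]
    a.append(err / 2)

print(a)  # coefficients 1, 1/4, 1/48, 0, 1/3840
```

This reproduces the leading coefficients quoted later in the thread, including the vanishing \(a_4\).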
Regards, Leo Smile
#22
(08/12/2022, 05:41 AM)Leo.W Wrote: Expanding \(e^t-1\), \(f'(t)\), and \(f(t)^{n+1}\) and collecting the coefficient of \(t^{-1}\), we can arrange this into a recurrence.
The recurrence should then prove the asymptotics of \(a_n\).

What is \([t^{-1}]\)?
#23
(08/12/2022, 05:18 PM)bo198214 Wrote:
(08/12/2022, 05:41 AM)Leo.W Wrote: Expanding \(e^t-1\), \(f'(t)\), and \(f(t)^{n+1}\) and collecting the coefficient of \(t^{-1}\), we can arrange this into a recurrence.
The recurrence should then prove the asymptotics of \(a_n\).

What is \([t^{-1}]\)?

It's bracket notation, just a convenient way to denote the coefficient of a specific term. For example, let \(f(z)=z^3+2z^2-z+5-\frac{\pi}{z^2}\);
then \([z^3]f(z)=1\)
\([z^2]f(z)=2\)
\([z^1]f(z)=-1\)
\([1]f(z)=5\)
\([z^{-2}]f(z)=-\pi\)
Regards, Leo Smile
#24
(08/12/2022, 05:25 PM)Leo.W Wrote: It's bracket notation, just a convenient way to denote the coefficient of a specific term. For example, let \(f(z)=z^3+2z^2-z+5-\frac{\pi}{z^2}\);
then \([z^3]f(z)=1\)
\([z^2]f(z)=2\)
\([z^1]f(z)=-1\)
\([1]f(z)=5\)
\([z^{-2}]f(z)=-\pi\)

Never heard about that, but easy to understand Smile
#25
(08/12/2022, 05:41 AM)Leo.W Wrote:
(08/11/2022, 08:52 PM)Gottfried Wrote:
(08/11/2022, 07:03 PM)Leo.W Wrote: And then \(O(a^{2^n})\) is not Borel summable even after \(k\) times, but still summable by contour integrals with the residue theorem. And hence we can sum \(O(^n10)\) like \(1-10+10^{10}-10^{10^{10}}+\cdots\)
Ok, this goes now a bit off topic here, but have you seen my evaluations of that alternating series? That has been ...

Sorry, my focus has wandered; I had a quick idea: maybe we can prove the asymptotics of the terms in the sum simply by the residue theorem:
\[f(z)=\sum_{n\ge1}{a_nz^n},\qquad f(f(z))=e^z-1\]
\[a_n=\frac{1}{2\pi i}\int_{\gamma}{\frac{f(\tau)\mathrm{d}\tau}{\tau^{n+1}}}\]
where \(\gamma\) denotes a simple closed contour winding once around the point \(0\).
then, by the substitution \(\tau=f(t)\) and the assumption that \(f(z)\) behaves asymptotically like \(z\) near \(0\), we have
\[a_n=\frac{1}{2\pi i}\int_{\gamma}{\frac{(e^t-1)f'(t)\,\mathrm{d}t}{f(t)^{n+1}}}\]
\[a_n=[t^{-1}]\bigg(\frac{(e^t-1)f'(t)}{f(t)^{n+1}}\bigg)\]
Expanding \(e^t-1\), \(f'(t)\), and \(f(t)^{n+1}\) and collecting the coefficient of \(t^{-1}\), we can arrange this into a recurrence.
The recurrence should then prove the asymptotics of \(a_n\).

ok ... but that still has a lot of self-reference, it seems to me.

there might be a way around it though ... but I don't see it ...

Also - as I kinda pointed out - assuming it is linear and analytic while it is neither might be an issue when using theorems that normally require exactly that !!
How do I even know it will not give a close-to-linear function that is analytic in some radius ?

Do not get me wrong, I love the idea.


regards

tommy1729
#26
(08/12/2022, 05:41 AM)Leo.W Wrote:
(08/11/2022, 08:52 PM)Gottfried Wrote:
(08/11/2022, 07:03 PM)Leo.W Wrote: And then \(O(a^{2^n})\) is not Borel summable even after \(k\) times, but still summable by contour integrals with the residue theorem. And hence we can sum \(O(^n10)\) like \(1-10+10^{10}-10^{10^{10}}+\cdots\)
Ok, this goes now a bit off topic here, but have you seen my evaluations of that alternating series? That has been ...


\[a_n=\frac{1}{2\pi i}\int_{\gamma}{\frac{f(\tau)\mathrm{d}\tau}{\tau^{n+1}}}\]

This doesn't converge. \(f\) isn't holomorphic in a neighborhood of zero--that's the whole discussion here: how close it is to being analytic. \(f\) is analytic in the left half-plane, and is only continuous as \(z \to 0\), where it tends to \(0\). You can't take a Jordan curve about \(0\).

This Cauchy integral stuff doesn't really help.

Also, as to the note: we are trying to prove it is \(1\)-Borel summable, as you are describing, so the coefficients grow something like \(c^nn!\). That's the goal of the theorem, and as we see through numerical tests, the numbers do seem to be pointing towards this. I'm well aware we can Borel sum much larger coefficients, but that's not the goal of this theorem; it is specific to the growth of these coefficients--and passing through Borel summation seems like a clean way to approach it.



Quote:if I recall correctly, convergence on the boundary (in general) can be an undecidable problem even if we know bounds on the Taylor coefficients.

That would be related to number theory or analytic number theory usually ... or always ?

forgot the details ...

just a thought.


regards

tommy1729

Yeah, this is very related to number theory, which is why I was so confident at first Tongue  But then I realized this specific problem proves to be just a bit too hard for the tools I'm used to. I think I see a proof, though I'm not 100% sure; I might do a more detailed write-up in a bit, because this has proven to be very interesting. The key appears to be something I never noticed before: that

\[
\sum_{n=0}^N d_n z^n + \vartheta(z) z^{N+1} = g(z)\\
\]

And we can make this approximation as accurate as we want. That is: for all \(|z| < \delta\) with \(\Re(z) < 0\) and for all \(\epsilon>0\), there exist \(N\) and \(\vartheta\) such that:

\[
\Big|g(z) - \sum_{n=0}^N d_n z^n - \vartheta z^{N+1}\Big| < \epsilon\\
\]

(Note, these choices of \(N\) and \(\vartheta\) depend on \(z\) and \(\epsilon\)--otherwise we'd have holomorphy).

This is detailed in Knopp's book, and I had never seen it before, so I'm trying to make sense of how we can use the fact that \(g(z)\) is continuous as \(z \to 0\) to pull out an \(n!\) through the binomial coefficients. That should say \(d_n = O(c^nn!)\). I see the general shape of the argument but I can't nail it yet.
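For reference (my gloss, not from the thread): the growth bound \(d_n = O(c^n n!)\) is exactly what makes the Borel transform converge on a disk, which is the standard route to \(1\)-Borel summability:

```latex
% Borel transform of g(z) = \sum_n d_n z^n :
(\mathcal{B}g)(w) \;=\; \sum_{n\ge 0} \frac{d_n}{n!}\, w^n
  \qquad\text{(radius of convergence} \ge 1/c \text{ when } |d_n| \le C\,c^n n!\,\text{)}
% and, when the Laplace integral converges, the 1-Borel sum recovers g:
g(z) \;=\; \int_0^\infty e^{-t}\,(\mathcal{B}g)(tz)\,\mathrm{d}t .
```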
#27
Okay, I'd like to write a proposal for the solution, which depends on a single condition. I will detail the condition as we progress, but before that I'd like to keep it vague. As to this, I'd like to argue a tad loosely; so forgive me if I haven't made absolute samurai work of the \(\epsilon\)-\(\delta\) arguments. But I believe this question is answered absolutely in the affirmative, if Gottfried's construction of the sequence obeys the condition I am to release.


Note: this will be a fairly detailed post, and I want to get everything correct. As to that, it will be very god damned lengthy. I am posting this here first, and then I hope to answer the question on MO. But I don't want to answer on MO yet, as I'm still only 90% sure. This result is very deep. Expect me to describe some complex results offhand; if you have any questions, please ask.

---------------------

We start by describing the set of functions \(C^1(H,H)\), where \(H = \{\Re(z) < 0\}\). This is the set of functions \(f\) such that \(f\) is holomorphic on \(H\) and maps \(H\) into itself (\(C^1\) is meant to be interpreted as once complex differentiable). These functions correspond to functions \(F : \mathbb{D} \to \mathbb{D}\), where \(\mathbb{D}\subset\mathbb{C}\) is the unit disk; the set \(C^1(H,H)\) is carried to \(C^1(\mathbb{D},\mathbb{D})\) by conjugation with a linear fractional transformation, \(F(z) = \mu(f(\mu^{-1}(z)))\). This is important to remember as a general construction.

The function \(\mu\) isn't too hard to find, but I'm too lazy to do it at the moment. And we've more important things to discuss.

An additional requirement we are going to ask of our functions is that \(f(0) = 0\), which in the conjugated case is \(F(1) = 1\). That is, these functions are continuable to the edge of their domain, and the edge of that domain is a fixed point. This allows us to restrict \(\mu\): we have \(\mu : H \to \mathbb{D}\) with \(\mu(0) = 1\). Since we are mapping the boundary of a simply connected domain using an LFT, we are allowed to choose a single point, and then this mapping is unique.

The last real requirement we will make is that \(f\)'s maximal domain is \(H\), and that \(F\)'s maximal domain is \(\mathbb{D}\). From here, we enter into our problem. More traditionally this would be done using \(F\) and expanding divergent series (à la the Hardy-Ramanujan circle method, as Tommy pointed out). We don't need to go the full analytic number theory route though. We get something stranger, but not as crazy as the infinite Pochhammer, as Leo also pointed out.

-------------------------

We begin by making a description that Knopp gives, but we do so based on an additional criterion on our function \(F\). This is the part I'm having trouble with, which is why this is not quite a theorem but rather a conditional theorem. But, assuming we can expand:

\[
\Big|F(z+1) - \sum_{k=0}^N D_k z^k\Big| \le \vartheta |z|^{N+1}\\
\]

So long as \(z+1 \in \mathbb{D}\)--where \(0 \le \vartheta < 1\) is not analytic, and not a proper derivative--just a constant satisfying this asymptotic as \(z \to 0\). This DOES NOT MEAN that we can take \(\vartheta\) to be the \((N+1)\)'th derivative of \(F\) at \(1\). It means something much deeper, which I believe Gottfried's series is following--but I cannot be sure, as I haven't run the numbers myself. Assuming that Gottfried's numerical construction obeys the existence conditions to follow, we can agree.

--------------------------

The second disclaimer is made here. This will serve as the complex iteration/complex dynamics/tetration stuff so familiar in this forum. It is that (Recalling \(H = \{\Re(z) < 0\}\)):

\[
f(z) = e^z - 1\\
\]

Satisfies:

\[
f^{\circ n}(H) \to 0\,\,\text{as}\,\,n\to\infty\\
\]

This is pretty standard. Additionally we have that:

\[
f^{\circ n}(-H) \to \infty\\
\]

And for the line \(z \in i\mathbb{R}\), we are given absolute chaos. There exists a fixed point at \(0\), which we write as:

\[
f(z) = z + O(z^2)\\
\]

From this, we have an Abel function \(\alpha\) about the attracting petal. We are, again, following Milnor, who follows Ecalle. By which \(\alpha\) is holomorphic for \(\Re(z) < 0\). We can hammer this home by recalling that \(f'(z) \neq 0\), and we can always write:

\[
\alpha(f^{\circ n}(z)) - n = \alpha(z)\\
\]

By which an inverse function:

\[
f^{\circ -n} \left(\alpha^{-1}(z)\right) = \alpha^{-1}(z-n)\\
\]

And since \(f\) is non-singular \(f'(z) \neq 0\). We have a surjective map that is locally injective:

\[
\begin{align}
\alpha(z)&: H \to \mathbb{C}_{\Re(z) > 0}\\
\alpha^{-1}(z)&:\mathbb{C}_{\Re(z) > 0,|\Im(z)| < \pi} \to H\\
\end{align}
\]

Again, we need only follow Milnor for this result, and it is fairly standard--happy to describe it further if asked. This is what's known as the Fatou coordinate. This is the regular iteration of \(f(z)\) in the attracting petal; whereas if we did this in the repelling petal, we would be doing the "cheta" iteration, as we recall.
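The attracting-petal dynamics are easy to see numerically. Since \(f(z)=z+\tfrac{z^2}{2}+\cdots\), orbits in the petal obey the standard parabolic asymptotic \(f^{\circ n}(z)\sim -2/n\) (a known fact about parabolic fixed points, not something proved in this thread). A quick sketch:

```python
from math import expm1

z = -1.0          # a starting point in the left half plane H
n = 10**6
for _ in range(n):
    z = expm1(z)  # f(z) = e^z - 1, computed accurately near 0

# Orbits in the attracting petal decay like -2/n toward the
# parabolic fixed point at 0, so n*z should approach -2.
print(n * z)
```

The correction term is only \(O(\log n / n)\), so at \(n=10^6\) the product sits extremely close to \(-2\).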


But regardless, we can now get to the function we actually care about:

---------------------------


\[
g(z) = \alpha^{-1}(1/2 + \alpha(z))
\]

And this function satisfies all of the descriptions we made above: \(g \in C^1(H,H)\), \(g(0) = 0\), and the maximal domain of \(g\) is \(H\). This means there is a correspondent \(G : \mathbb{D} \to \mathbb{D}\) such that \(G(1) = 1\), satisfying all the nice things \(g\) satisfies.

Now we know that the maximal domain of holomorphy is \(H\), by which the maximal domain of holomorphy of \(G\) is \(\mathbb{D}\). This means many tools from analytic number theory come into play. When we take a series about zero with maximal domain \(\mathbb{D}\), we are saying that the boundary of \(\mathbb{D}\) is full of singularities. So when we find a point on the boundary that is singularity-free (\(g(0) = 0\), \(G(1) = 1\)), we have a lot of tools to handle these exceptions.

BUT ONLY IF IT'S THE MAXIMAL DOMAIN. So what I will add again, is that \(g : H \to H\); and trying to analytically continue it to a larger domain is impossible. So, just as similarly, \(G\) is on its maximal domain at \(\mathbb{D}\). This is extraordinarily important to us, because we are summing a divergent series. And specifically for this reason. For this we can go down the rabbit hole of Ramanujan-Hardy little circle method. But we don't need to--we just need a couple of realizations from this.

Which most predominantly being, IF WE CAN FIND A SERIES:

\[
\begin{align}
g(z) &= \sum_{k=0}^\infty d_kz^k\\
G(1+z) &= \sum_{k=0}^\infty D_k z^k\\
\end{align}
\]

they must follow similar rules. So this isn't analytic number theory exactly; we just have to steal a couple of ideas--literally like \(1/100\)th of the real analytic-number-theory work. Ramanujan and Hardy went really hard describing exact asymptotics of the coefficients \(D_k\); we're just trying to pull out a simple bound. So thank god we don't actually have to use the little circle method.

---------------------------

And here is the assumption in its complete and final form. Which, I think is perfectly reasonable; and which follows from Gottfried's coefficients. And even after reading Gottfried's construction of the coefficients--I'm sure this is valid. The trouble is I'm not 100% on constructing these coefficients on my own; so if they can be constructed in this manner--then Gottfried's solution is Borel summable. And we've constructed a new expression of \(\eta\) iteration.

But before we get there. Let's steal from analytic number theory. Let's take:

\[
G(z) = \sum_{n=0}^\infty G_n z^n\\
\]

We get the perfect estimate of:

\[
\limsup_{n\to\infty} \sqrt[n]{|G_n|} = 1\\
\]

Due to the Cauchy-Hadamard formula. Additionally, we know that:

\[
\lim_{z \to 1^-} G(z) =1\\
\]

And so, we can Abel sum \(G(z)\) at \(z=1\), which means that:

\[
\sum_{n=0}^\infty G_n = 1\\
\]

when we Abel sum it. This is extraordinarily important because it means that \(\sum G_n\) is Borel summable as well.
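As a toy illustration of Abel summation (mine, not specific to this \(G\)): the divergent series \(\sum (-1)^n\) has Abel sum \(1/2\), obtained as the radial limit \(x \to 1^-\) of its power series:

```python
def abel_partial(coeff, x, terms=10**5):
    """Partial evaluation of sum_n a_n x^n for 0 < x < 1."""
    return sum(coeff(n) * x**n for n in range(terms))

# G_n = (-1)^n: the series 1 - 1 + 1 - ... diverges, but
# lim_{x -> 1^-} sum_n (-1)^n x^n = lim_{x -> 1^-} 1/(1+x) = 1/2.
for x in (0.9, 0.99, 0.999):
    print(x, abel_partial(lambda n: (-1) ** n, x))
```

The printed values approach \(1/(1+x)\), hence \(1/2\) in the radial limit.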

--------------------------------------------------------

Now we do the real magic, even though Abel has essentially solved our problem. We want to prove that:

\[
\sum_{n=0}^N G_n(1+z)^n = \sum_{k=0}^N D_k z^k\\
\]

And that \(D_k\) is Borel summable. The thing we can do now, because we have Abel summation, sort of has its home in "umbral calculus". Despite the shady history of "umbral calculus", it has a strong footing in modern mathematics--but only if you can make a strong guess of the error terms in your operations. Again, this brings us to some of the ideas of analytic number theory. And here is where we invoke our "conditional theorem"--that Gottfried's series expansion satisfies the above \(\vartheta\) existence. By which, I can write:

\[
G(1+z) = \sum_{n=0}^\infty G_n(1+z)^n = \sum_{k=0}^N D_k z^k + \vartheta z^{N+1}\\
\]

For \(z\) and \(1+z\) in \(\mathbb{D}\); that is, \(z \in \mathbb{D} \cap (\mathbb{D}-1)\). If this approximation is valid, then we have our result. (I'm getting PTSD flashbacks from struggling to work through the little-circle method Sad )

--------------------------------------------

So let's hit it home. The values \(D_k\) can be found by Abel summing \(G(1+z)\) and its derivatives. The values must satisfy this \(\vartheta\) condition. And then:

\[
\sum_{k=0}^N D_k z^k = \sum_{n=0}^N \sum_{j=0}^n \binom{n}{j} G_n z^{j} + \vartheta z^{N+1}\\
\]

By which we can pull out a bound of \(D_k = O(k!)\) through the binomial coefficients. This bound then transfers as we conjugate back using the LFT from the beginning of this post, and we get \(d_k = O(c^kk!)\).


I think that's as close as I'm going to get to a proof. I'd need to find something more detailed to find an exact proof. This is exactly the answer though. It'd probably take about 10 pages of strong rigour and references I was too lazy to chase. But that's the answer.
#28
(08/12/2022, 10:53 PM)JmsNxn Wrote:
(08/12/2022, 05:41 AM)Leo.W Wrote:
(08/11/2022, 08:52 PM)Gottfried Wrote:
(08/11/2022, 07:03 PM)Leo.W Wrote: And then \(O(a^{2^n})\) is not Borel summable even after \(k\) times, but still summable by contour integrals with the residue theorem. And hence we can sum \(O(^n10)\) like \(1-10+10^{10}-10^{10^{10}}+\cdots\)
Ok, this goes now a bit off topic here, but have you seen my evaluations of that alternating series? That has been ...


\[a_n=\frac{1}{2\pi i}\int_{\gamma}{\frac{f(\tau)\mathrm{d}\tau}{\tau^{n+1}}}\]

This doesn't converge. \(f\) isn't holomorphic in a neighborhood of zero, that's the whole discussion here. How close it is to being analytic. \(f\) is analytic in the left half plane, and is only continuous as \(z \to 0\), where it tends to \(0\). You can't take a jordan curve about \(0\).

This Cauchy integral stuff doesn't really help.

Nope, James, the Cauchy integral works. \(f(z)\) here refers not to the divergent sum but to a truly holomorphic function around \(z=0\): \(f(z)\) is analytic around \(0\) and has its nearest branch cut at \(z=-1\). Btw, the integral isn't the point; the last formula is, and it holds true for arbitrary \(n\) even with \(f(z)\) taken as a divergent series.
\[a_n=[z^{-1}]\big(\frac{(e^z-1)f'(z)}{f(z)^{n+1}}\big)\]
For example, we set initially by our knowledge \(f(z)=z+\frac{z^2}{4}+\frac{z^3}{48}+O(z^4)\), and we compute
\(\frac{(e^z-1)f'(z)}{f(z)^{1+1}}=\frac{1}{z}+\frac{1}{2}+\frac{z}{8}+O(z^2)\), agreeing with \(a_1=1\)
\(\frac{(e^z-1)f'(z)}{f(z)^{2+1}}=\frac{1}{z^2}+\frac{1}{4z}+\frac{1}{24}+O(z^1)\), agreeing with \(a_2=\frac{1}{4}\)
There surely are self-references; however, it allows one to write a recurrence formula for \(a_n\).
First we assume \(f(z)=\sum_{n\ge1}{a_nz^n}\) with \(a_1=1\); then we have \(f'(z)=\sum_{n\ge0}{(n+1)a_{n+1}z^n}\), and with more computation \((e^z-1)f'(z)=\sum_{n\ge1}{c_nz^n}\),
where \( c_n=\sum_{k=1}^n{\frac{ka_k}{(n+1-k)!}} \); after more calculation we'll get some \(a_n=T(a_n,a_{n-1},\cdots,a_1)\).
Due to Faà di Bruno's formula, this formula for \(a_n\) should be hard to expand.
We can develop many other formulae by different integral transformations; I didn't find an appropriate one.
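The intermediate formula \(c_n=\sum_{k=1}^n \frac{k\,a_k}{(n+1-k)!}\) is easy to verify with exact arithmetic. A sketch (mine) using the known leading coefficients \(a_1=1,\,a_2=\frac14,\,a_3=\frac1{48},\,a_4=0\), comparing the direct Cauchy product of \(e^z-1\) and \(f'(z)\) against the closed form:

```python
from fractions import Fraction
from math import factorial

a = {1: Fraction(1), 2: Fraction(1, 4), 3: Fraction(1, 48), 4: Fraction(0)}
N = 4

# Direct Cauchy product (e^z - 1) * f'(z), truncated at z^N:
# e^z - 1 = sum_{m>=1} z^m/m!,  f'(z) = sum_{j>=0} (j+1) a_{j+1} z^j
c_direct = {}
for m in range(1, N + 1):
    for j in range(0, N - m + 1):
        c_direct[m + j] = c_direct.get(m + j, Fraction(0)) + \
            Fraction(1, factorial(m)) * (j + 1) * a[j + 1]

# Leo's closed form: c_n = sum_{k=1}^n k a_k / (n+1-k)!
c_formula = {n: sum(k * a[k] / factorial(n + 1 - k) for k in range(1, n + 1))
             for n in range(1, N + 1)}

print(c_direct == c_formula)  # True
```

Both are the same convolution with the index shifted by \(k = j+1\), so they agree term by term.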
Here's a complex plot of \(f(z)\), showing its holomorphy. I defined an artificial branch cut at \(\Re(z)<0 \wedge \Im(z)=\pm1\), and a 2D real-to-real plot.


Regards, Leo Smile
#29
Hey, Leo.

I'm disagreeing with you--and until I see concrete evidence I will continue to do so. This is not the function in question at all. You are talking about a different function entirely.

You have developed a Taylor series about \(0\). This is a question about the divergent series that Gottfried has developed. You have made a function \(g\) that is holomorphic near zero (which I'm not sure about, as I'm pretty sure this contradicts Baker--which Gottfried mentions). There exists no half-iterate that is holomorphic near zero. So either you've made a mistake, or Baker has. We cannot expand an iteration in a neighborhood of a parabolic fixed point unless the function is an LFT. Bo and I even saw this recently in the Karlin-McGregor paper, where it is written in the beginning.

By your logic, we've now solved \(f(z) = e^{z}-1\) and found a function \(f^{\circ t}(z)\) which is holomorphic about \(z = 0\). That just doesn't happen, because you are brushing the paths of the Julia set and the attracting/repelling petal. The Abel function cannot be expanded in a neighborhood of zero. And through common transformations, we can turn your iteration into an Abel function; then we'd have an Abel function holomorphic for \(z \neq 0\) and \(|z| < \delta\). That doesn't happen. Abel functions are not holomorphic like this at parabolic points.

I can't speak to the accuracy of this graph, because I do not understand how you are constructing it. Additionally, a graph doesn't mean holomorphy. Just because it looks holomorphic, doesn't make it holomorphic. What is far more likely, is that you have expanded an asymptotic series, that converges fairly well, and iterated functional relationships to express how close of an asymptotic it is.

This is equivalent to saying that you've found a holomorphic iteration \(\exp^{\circ t}_{\eta}(z)\) such that this expression is holomorphic in \(z\) about \(e\). It's well known this isn't possible.

I apologize, but until I see some form of hard evidence I'll continue to disagree with you. Baker himself has a paper showing there is no Taylor expansion for a half-iterate of \(e^z -1\) about \(z=0\). Straight from the horse's mouth. See the top answer here: https://mathoverflow.net/questions/4347/...ar-and-exp

So unless you want to contradict Baker, I'm sorry.
#30
Just want to throw in - because Leo.W seems to make quite some stretches to get a formula for the coefficients -
that there is a well-known formula by Jabotinsky for the coefficients of the t-iteration of a function/formal power series with fixed-point multiplier 1:
\[ {f^{\circ t}}_1=1, \quad {f^{\circ t}}_n=\sum_{m=0}^{n-1} \binom{t}{m} \sum_{k=0}^m \binom{m}{k} (-1)^{m-k} {f^{\circ k}}_{n} \]
Just plug in \(t=1/2\) and \(f(x) = e^x - 1\) (as a power series, of course).
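Jabotinsky's formula is straightforward to check with exact rationals: compute \({f^{\circ k}}_n\) for integer \(k\) by repeated series composition, then apply the formula with \(t=1/2\). A sketch (my own code, names illustrative):

```python
from fractions import Fraction
from math import factorial

N = 5  # truncation order

def compose(f, g):
    """Coefficients [b_1..b_N] of f(g(x)) for series with no constant term."""
    n = len(f)
    res = [Fraction(0)] * (n + 1)
    power = [Fraction(1)] + [Fraction(0)] * n   # g^0 = 1
    for k in range(1, n + 1):
        new = [Fraction(0)] * (n + 1)           # power <- power * g
        for i, ci in enumerate(power):
            for j, gj in enumerate(g, start=1):
                if ci and i + j <= n:
                    new[i + j] += ci * gj
        power = new
        for m in range(n + 1):
            res[m] += f[k - 1] * power[m]
    return res[1:]

def binom(t, m):
    """Generalized binomial coefficient binom(t, m)."""
    r = Fraction(1)
    for i in range(m):
        r = r * (t - i) / (i + 1)
    return r

f = [Fraction(1, factorial(m)) for m in range(1, N + 1)]  # e^x - 1

# Integer iterates: f^0 = id, f^{k+1} = f o f^k
iterates = [[Fraction(1)] + [Fraction(0)] * (N - 1)]
for _ in range(N):
    iterates.append(compose(f, iterates[-1]))

# Jabotinsky: {f^t}_n = sum_{m<n} binom(t,m) sum_{k<=m} binom(m,k)(-1)^{m-k} {f^k}_n
t = Fraction(1, 2)
half = [sum(binom(t, m)
            * sum(binom(Fraction(m), k) * (-1) ** (m - k) * iterates[k][n - 1]
                  for k in range(m + 1))
            for m in range(n))
        for n in range(1, N + 1)]

print(half)  # [1, 1/4, 1/48, 0, 1/3840]
```

This reproduces the half-iterate coefficients \(1, \tfrac14, \tfrac1{48}, 0, \tfrac1{3840}\) quoted earlier in the thread; the formula works because \({f^{\circ k}}_n\) is a polynomial in \(k\) of degree at most \(n-1\) when the multiplier is 1.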

There is also a recurrence that you can derive directly from comparing the formal power series \(f\circ f^{\circ t} = f^{\circ t}\circ f\), which I don't have handy though and which typically does not help much either Wink (but is faster in terms of computation).

