logit coefficients growth pattern
#1
This is so fascinating ... inspired by Gottfried's investigations I was playing around with the logit of other functions, namely \(x\mapsto xe^x\), \(x\mapsto x+\frac{1}{2}x^2\), \(x\mapsto x+\frac{1}{3}x^2\) and \(x\mapsto x+3x^2+7x^3\). It looks like this sinusoidal pattern depends only on the coefficient \(c\) of \(x^2\)!
It seems that the growth has the form
$$ a_k = \frac{(k-3)!}{\left(\frac{2\pi}{c}\right)^k} $$
i.e. \(j_k/a_k\) shows this sinusoidal pattern. I have not yet investigated the case \(c=0\) ...
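For reference, a minimal sketch (plain Python, my own illustration, not part of the original post) of the conjectured normalizer \(a_k\), with c the coefficient of \(x^2\):

```python
from math import factorial, pi

def a(k, c):
    """Conjectured growth term a_k = (k-3)! / (2*pi/c)**k, for k > 3."""
    return factorial(k - 3) / (2 * pi / c) ** k
```

Dividing computed logit coefficients \(j_k\) by `a(k, c)` should then expose the sinusoidal pattern.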

Here is the numerical evidence:
#2
Super!

I see that the local maxima/minima still follow soft/smooth curves. I guess this might be linearized better by changing the additive constant in the factorial expression (I had found \( (k-3)! \) by playing with this term). Perhaps even \( \Gamma(1+k - d) \) with a better-adapted value of \( d \), maybe fractional...
Gottfried Helms, Kassel
#3
(08/20/2022, 12:32 PM)Gottfried Wrote: Super!

I see that the local maxima/minima still follow soft/smooth curves. I guess this might be linearized better by changing the additive constant in the factorial expression (I had found \( (k-3)! \) by playing with this term). Perhaps even \( \Gamma(1+k - d) \) with a better-adapted value of \( d \), maybe fractional...

You mean the last picture, where the minima/maxima slightly increase. Yes, but actually the interval is just too short to really see how it continues for bigger indexes, and the computation time is more than exponential. But yes, when I plugged in 2.9 instead of 3 it looked more equal-sized (I don't provide a picture).

But this really looks like there is a continuous periodic formula for the coefficients - I mean in the limit of high indexes.
#4
(08/20/2022, 01:56 PM)bo198214 Wrote: You mean the last picture, where the minima/maxima slightly increase. Yes, but actually the interval is just too short to really see how it continues for bigger indexes, and the computation time is more than exponential. But yes, when I plugged in 2.9 instead of 3 it looked more equal-sized (I don't provide a picture).

Hmm, I mean that I see a slowly decreasing hull curve for the maxima in the leading plots; but of course 512 coefficients seem to be too few. How do you compute the coefficients? Using Pari/GP and self-tailored routines for triangular matrices, I get CPU times for n x n matrices with n = 16*[12,14,15,16,32] of [6,12,15,20,314] seconds in integer arithmetic. A trend calculation in Excel gives a power-law estimate with exponent 3.8 or even 4.0, telling me that calculating n=1024 would need 68 minutes. For n=2048, which was my next goal, my routines would likely need 18 hours. My computations work with (optimized) procedures for the Mercator series of the Carleman matrices, and I don't think I can tweak the time consumption further down. With floating-point numbers (but at the risk of providing too few decimals) I get better timings: n=1024 should need 23 minutes, and n=2048 about 4.25 hours...
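For what it's worth, the power-law estimate can be reproduced with a quick log-log least-squares fit of the timings quoted above (my own sketch, not the actual Excel trend calculation):

```python
from math import log

# reported Pari/GP timings: matrix sizes n and CPU seconds (integer arithmetic)
ns   = [16 * m for m in (12, 14, 15, 16, 32)]
secs = [6, 12, 15, 20, 314]

# least-squares slope in log-log coordinates = power-law exponent
lx = [log(n) for n in ns]
ly = [log(s) for s in secs]
mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
slope = sum((a - mx) * (b - my) for a, b in zip(lx, ly)) / sum((a - mx) ** 2 for a in lx)
# slope comes out close to 4, matching the "3.8 or even 4.0" estimate
```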

So a far better calculation method would be a good thing ...
Anyway, I'll try to reproduce and extend some of your curves today and/or tomorrow. Curious!
(08/20/2022, 01:56 PM)bo198214 Wrote: But this really looks like there is continuous periodic formula for the coefficients - I mean in the limit to high indexes.
Yes, that surely seems true.
I'm moreover curious whether a stretch of the x-axis would be an option to capture the periodicity better - say log base 2.1 instead, or so - and to see whether there would be a meaningful value there; yet I have not collected example data so far...
Gottfried Helms, Kassel
#5
(08/20/2022, 03:31 PM)Gottfried Wrote: How do you compute the coefficients? Using Pari/GP and self-tailored routines for triangular matrices, I get CPU times for n x n matrices with n = 16*[12,14,15,16,32] of [6,12,15,20,314] seconds in integer arithmetic. A trend calculation in Excel gives a power-law estimate with exponent 3.8 or even 4.0, telling me that calculating n=1024 would need 68 minutes. For n=2048, which was my next goal, my routines would likely need 18 hours. My computations work with (optimized) procedures for the Mercator series of the Carleman matrices, and I don't think I can tweak the time consumption further down. With floating-point numbers (but at the risk of providing too few decimals) I get better timings: n=1024 should need 23 minutes, and n=2048 about 4.25 hours...

In Sage I work with rational numbers (QQ), using the ordinary recursion formula that comes from the defining equation.
I can easily compute more values if I feel like it, because I save all the previous values and the recursion can just compute the next ones.
There is no optimization though; I think it takes a little longer, perhaps 1h 30m for 1024 values.
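Such a recursion can be reconstructed in plain Python with exact fractions (my own sketch, not the actual Sage code): writing the defining functional equation of the logit/Julia function as \(j(f(x)) = f'(x)\,j(x)\) with normalization \(j_2 = f_2\), comparing coefficients order by order determines each next \(j_m\).

```python
from fractions import Fraction

def mul(a, b, n):
    """Product of two power series (coefficient lists), truncated mod x**n."""
    c = [Fraction(0)] * n
    for i, ai in enumerate(a):
        if i >= n or ai == 0:
            continue
        for k in range(min(len(b), n - i)):
            c[i + k] += ai * b[k]
    return c

def compose(j, f, n):
    """j(f(x)) mod x**n, where f has zero constant term."""
    out = [Fraction(0)] * n
    p = [Fraction(1)] + [Fraction(0)] * (n - 1)   # current power f**k
    for k in range(n):
        if k < len(j) and j[k] != 0:
            for i in range(n):
                out[i] += j[k] * p[i]
        p = mul(p, f, n)
    return out

def logit_coeffs(f, n):
    """Coefficients j_0..j_{n-1} of the logit/Julia function of a parabolic
    f(x) = x + f_2 x**2 + ...  (f_2 != 0), from j(f(x)) = f'(x)*j(x), j_2 = f_2.
    At order x**(m+1) the unknown j_m enters with factor (m-2)*f_2.
    f should be supplied at least up to index n."""
    f = [Fraction(c) for c in f] + [Fraction(0)] * n
    fp = [(k + 1) * f[k + 1] for k in range(n)]   # f'
    f2 = f[2]
    j = [Fraction(0)] * n
    j[2] = f2
    for m in range(3, n):
        lhs = compose(j, f, m + 2)
        rhs = mul(fp, j, m + 2)
        resid = lhs[m + 1] - rhs[m + 1]
        j[m] = -resid / ((m - 2) * f2)
    return j
```

For example, for \(f(x)=x/(1-x)\), where \(f^{\circ t}(x)=x/(1-tx)\), this returns exactly \(j(x)=x^2\).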

But Gottfried, with doubling to 2048 you get only a tenth more in x-values: you go from 10 = log2(1024) to 11 = log2(2048), so you will not see much more - that's what I mean - it's exponential.
(08/20/2022, 03:31 PM)Gottfried Wrote: Anyway, I'll try to reproduce and extend some of your curves today and/or tomorrow. Curious!
Haha, now we are in the field of experimental mathematics: as in physics, results have to be reproduced by different teams, lol.
(08/20/2022, 03:31 PM)Gottfried Wrote: I'm moreover curious whether a stretch of the x-axis would be an option to capture the periodicity better - say log base 2.1 instead, or so - and to see whether there would be a meaningful value there; yet I have not collected example data so far...
You mean that the zeros go to integers?
#6
(08/20/2022, 04:09 PM)bo198214 Wrote: But Gottfried, with doubling to 2048 you get only a tenth more in x-values: you go from 10 = log2(1024) to 11 = log2(2048), so you will not see much more - that's what I mean - it's exponential.

Yesss, I'm aware of that - it will give only very small progress. But it reduces the "space of possible patterns". Let's see...

(08/20/2022, 04:09 PM)bo198214 Wrote:
(08/20/2022, 03:31 PM)Gottfried Wrote: I'm moreover curious whether a stretch of the x-axis would be an option to capture the periodicity better - say log base 2.1 instead, or so - and to see whether there would be a meaningful value there; yet I have not collected example data so far...
You mean that the zeros go to integers?

Yes, I had tried several times to find an idea for this; it seems that the periodicities/wavelengths on the log of the index k are not really constant, but might at least approach a limit. I've overlaid the sinusoidal curves, after rescaling the amplitude to maximum height, with real sine curves, and saw this mismatch...
But all this needs so many coefficients that one needs a supercomputer, or some subscription to R. P. Brent(?) for better matrix modules... unless you hit a nugget by chance, as seems to be the case with the exp(x)-1 curve...



Update: attached are three articles by R. P. Brent on the (computational) efficiency of composition of power series.
Brent: Complexity of Composition of Power Series, 1980 (rpb050i.pdf)
Brent/Kung: Fast Algorithms for Manipulating Formal Power Series, 1978 (rpb045.pdf)
Brent/Traub: Complexity of Composition ..., 1991 (abstract) (rpb050a)
(I didn't save the links from where I downloaded them, sorry; he likely has/had a personal or university homepage.)
I also don't know at the moment whether he is the Brent known for the superior fast matrix operation modules...


Gottfried Helms, Kassel
#7
(08/20/2022, 04:19 PM)Gottfried Wrote: Yes, I had tried several times to find an idea for this; it seems that the periodicities/wavelengths on the log of the index k are not really constant, but might at least approach a limit. I've overlaid the sinusoidal curves, after rescaling the amplitude to maximum height, with real sine curves, and saw this mismatch...

I totally assumed that you had tried that but did not find a proper match - otherwise you would have shown it already - the similarity is just too intriguing, lol.
#8
And now it gets really cool!
We can connect back to the question of which functions have all iterates analytic at the fixed point and which do not.
The previous functions in this thread are all polynomial or entire, hence the logit does not converge (nor do most of the corresponding parabolic iterates).
But in my post here, I constructed a parabolic function where all iterates are analytic at the fixed point.
This function was \(\arctan(1+\tan(x)) + \pi\left\lfloor \frac{x+\frac{\pi}{2}}{\pi}\right\rfloor\)
with the fixed point \(\frac{\pi}{2}\). When we conjugate the fixed point to 0, the function can be written as
$$f(x)=-{\rm arccot}(1-\cot(x))$$
As we are only interested in a small vicinity of 0, I omit the branch compensation. So we are looking for a power series expansion of this function. Actually Sage has difficulties calculating the power series because of a division by 0, so I took the detour of differentiating the function, \(f'(x) = \frac{\cot(x)^2 + 1}{(1-\cot(x))^2 + 1}\), then calculating its formal power series and integrating it. These are the coefficients of f:
0, 1, 1, 1, 2/3, 0, -43/45, -29/15, -778/315, -374/189, 122/14175, ...
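The same detour can be sketched in sympy (my own reconstruction, not the original Sage code; shown only up to \(x^4\), which I have checked by hand):

```python
from sympy import symbols, cot, integrate, Rational

x = symbols('x')

# f'(x) = (cot(x)^2 + 1) / ((1 - cot(x))^2 + 1) is analytic at 0,
# even though expanding f directly runs into the division-by-0 issue
fprime = (cot(x)**2 + 1) / ((1 - cot(x))**2 + 1)

# expand f' as a power series and integrate termwise (f(0) = 0)
s = fprime.series(x, 0, 6).removeO()
f = integrate(s, x)

coeffs = [f.coeff(x, k) for k in range(5)]
# coeffs: [0, 1, 1, 1, 2/3], matching the start of the list above
```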
From the example before, one could conclude that the coefficient growth would be \(\frac{(k-3)!}{(2\pi)^k}\). But this is not at all the case - the logit converges! The logit j has the coefficients:
0, 0, 1, 0, -1/3, 0, 2/45, 0, -1/315, 0, 2/14175, 0, -2/467775, 0, ...
The repeating 0s are interesting, because you cannot produce these with, e.g., polynomials. So here we have completely different behaviour.
And now that I am experimenting with it, I can even give an explicit formula for the coefficients of the logit of \(-{\rm arccot}(1-\cot(x))\)!
$$j_k = \frac{-\cos(\frac{\pi}{2}k)}{2}\frac{2^k}{k!}, \quad k\ge 1$$
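This formula can be checked directly against the coefficient list above (plain Python with exact fractions; the factor \(-\cos(\frac{\pi}{2}k)/2\) just cycles through \(-1/2, 0, 1/2, 0\)):

```python
from fractions import Fraction
from math import factorial

def j_coeff(k):
    """j_k = -cos(pi*k/2)/2 * 2**k / k!  for k >= 1 (cos(pi*k/2) is 1, 0, -1, 0)."""
    cos_k = {0: 1, 1: 0, 2: -1, 3: 0}[k % 4]
    return Fraction(-cos_k, 2) * Fraction(2**k, factorial(k))

coeffs = [Fraction(0)] + [j_coeff(k) for k in range(1, 14)]
# 0, 0, 1, 0, -1/3, 0, 2/45, 0, -1/315, 0, 2/14175, 0, -2/467775, 0
```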

So all the global behaviour of a holomorphic function is concentrated in the power series development at one point (the whole function can be reconstructed by analytic continuation). And whether the logit converges (equivalently, whether all regular iterates are analytic at the fixed point) seems to depend quite a bit on the global behaviour of the function: a meromorphic function with countably many isolated singularities cannot have a convergent logit (talking only about parabolic fixed points here), while some multivalued functions (i.e. ones with branch points) can have a convergent logit, as I just showed. But we cannot read all these properties off the coefficients, so the convergence of the logit will remain a mystery!
#9
Oh, and I forgot to mention some other connections:
First, the logit is also approachable as $${\rm logit}[f]=\frac{\partial f^{\circ t}(x)}{\partial t}\big|_{t=0}$$
In our case we know that the regular iterations are \(f^{\circ t}(x)=-{\rm arccot}(t-\cot(x))\), hence
$$ j = {\rm logit}[f] = \frac{\partial f^{\circ t}(x)}{\partial t}\big|_{t=0}= \frac{1}{(t - \cot(x))^2 + 1}\big|_{t=0} = \frac{1}{\cot(x)^2+1} $$

Also, the Julia function/logit can be used to reconstruct the Abel function, which is \(-\cot(x)\) in our case:
$$ \alpha(x) = \int \frac{dx}{j(x)} = \int (\cot(x)^2+1) dx = -\cot(x)$$
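Both computations are easy to sanity-check in sympy (a sketch; sympy's `acot` is the principal branch, which suffices near 0):

```python
from sympy import symbols, acot, cot, diff, integrate, simplify

x, t = symbols('x t')

# the logit as t-derivative of the regular iteration at t = 0
j = diff(-acot(t - cot(x)), t).subs(t, 0)   # 1/(cot(x)**2 + 1)

# the Abel function as the integral of 1/j = cot(x)**2 + 1
alpha = integrate(cot(x)**2 + 1, x)          # -cot(x), up to a constant
```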

From the standpoint of formal powerseries this is even more interesting:

$$ \alpha(x) = \int \frac{dx}{j(x)} = \int \left( x^{-2} + \frac{1}{3} + \frac{1}{15}x^2 + \frac{2}{189}x^4 + \frac{1}{675}x^6 + \frac{2}{10395}x^8 + \dots \right) dx $$

What is interesting here is that it does not contain an \(x^{-1}\) term, so taking the integral does not produce a \(\log\) term; \(-\cot\) has only a pole:
$$ \alpha(x) =  - x^{-1}  + \frac{1}{3}x  + \frac{1}{45}x^3 + \frac{2}{945}x^5  +\frac{1}{4725}x^7 + \frac{2}{93555}x^9 + ...$$
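The missing \(x^{-1}\) term can be seen directly in the Laurent expansion of \(1/j=\cot(x)^2+1\) (sympy sketch):

```python
from sympy import symbols, cot, Rational

x = symbols('x')

# Laurent expansion of 1/j around 0: x**-2 + 1/3 + x**2/15 + ...
s = (cot(x)**2 + 1).series(x, 0, 4).removeO()

no_log_term = s.coeff(x, -1) == 0   # no 1/x term => no log in the integral
```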

And this pole (instead of a singularity involving a log) was the guarantee for the smoothness of \(-{\rm arccot}(t-\cot(x))\).
#10
Quick question, I'm a little confused here.

Is this still guessing the asymptotics of a "half root" at a parabolic fixed point?

Or is it something different? (Sorry, just a tad confused.)

If this is happening elsewhere though, maybe Borel summation would be a valuable method of approaching fractional iteration?

With that we could get similar Euler expressions (like how Euler analytically defines \(\sum_k (-1)^k k!\, z^k\)) for half iterates (and arbitrary iterates) using some kind of modified Laplace transform. All we would need is a bound like \(j_k = O(c^k k!)\).
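For context, Euler's prototype \(\sum_k (-1)^k k!\, z^k\) is Borel-summed by the Laplace integral \(\int_0^\infty \frac{e^{-t}}{1+zt}\,dt\); a crude numerical sketch (my own illustration, not tied to the thread's coefficients):

```python
from math import exp

def borel_euler(z, n=200_000, T=50.0):
    """Trapezoid approximation of the Borel sum  int_0^T e**(-t)/(1+z*t) dt
    of Euler's divergent series sum_k (-1)**k k! z**k  (z > 0; the tail
    beyond T is of order e**(-T), i.e. negligible here)."""
    h = T / n
    total = 0.5 * (1.0 + exp(-T) / (1.0 + z * T))
    for i in range(1, n):
        t = i * h
        total += exp(-t) / (1.0 + z * t)
    return h * total

# for z = 1 this is e*E_1(1), the Euler-Gompertz constant ~ 0.5963
```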