My favorite integer sequence
#41
(08/17/2014, 08:40 PM)tommy1729 Wrote:
(08/17/2014, 05:54 PM)sheldonison Wrote: Then [formula] has zeros where f(n) crisscrosses the A(n) partition function. Here is a possible estimate of the zero count of rho:



Uh? Can't we use d/dx ln(g(x)) = g'(x) / g(x)?

And further, d/dx ln(g(exp(x))) = g'(exp(x)) exp(x) / g(exp(x)).

Then, since your LHS includes a ln, I think we can substitute ln(x) = y or y = exp(x) and get

zerocount(rho(x)) ~~ 2 f'(x) / f(x)

and zerocount(rho(-x)) ~~ f'(x) / f(x)

and similar for the half-iterate of exp.
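Spelling out the chain-rule step (nothing beyond standard calculus):

$$\frac{d}{dx}\,\ln g(e^{x}) \;=\; \frac{g'(e^{x})\,e^{x}}{g(e^{x})},
\qquad\text{and with } y = e^{x}:\qquad
\frac{d}{d\,\ln y}\,\ln g(y) \;=\; \frac{y\,g'(y)}{g(y)}.$$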

I'm a bit tired, so forgive me if my calculus is bad today.

regards

tommy1729

Hey Tommy, I think your calculus is correct! Very nice. That simplifies the heck out of the equations.... By the way, that should be h(n) from the other post. Kinda busy today and tomorrow morning, though.
- Sheldon
#42
(08/11/2014, 05:19 PM)jaydfox Wrote:
(08/11/2014, 04:51 PM)sheldonison Wrote: Nice post. What is the most accurate version of the scaling constant that you know of? Earlier, you posted, "1.083063".

I'm currently working on a version that would allow me to calculate all the terms of the form [(2^(n-1)+1)...(2^n)]*(2^m), for a given small n and arbitrary m, which should run in about O(2^n m^5) time.

For example, for n=3, I could calculate 5..8 * 2^m, so let's say:
5, 6, 7, 8,
10, 12, 14, 16,
20, 24, 28, 32,
40, 48, 56, 64,
...
5*2^m, 6*2^m, 7*2^m, 8*2^m


This would allow me to more accurately estimate a_1, because I would be able to monitor the peaks and troughs of the oscillations in the relative error.

So hopefully, within a week or two, I should have an estimate of a_1 to at least 1-2 more digits than the current 1.083063. Who knows, I might soon have the most accurate estimate. Short of a closed form to calculate it, the only way I know to determine its value accurately is to calculate very large terms in the sequence, isolate a local minimum and local maximum, and estimate the midpoint.

I'm getting closer to finding a better approximation of a_1. So far, I think it's slightly less than 1.083063, perhaps in the neighborhood of 1.0830628-1.0830629.

Time for more pictures. I finished the code I mentioned above. I calculated A(k) for k = m*2^n, where 513 <= m <= 1024, and n up to 150 (so the largest value was 1024 * 2^150, or 2^160). I have a calculation running in the background to get me up to 2^260, so I'll have new pictures tomorrow.

Note that earlier, I defined a_1 as the limit of f(k)/A(k). However, to me it is more useful to graph A(k)/f(k), because this helps me remember that A(k) overestimates for even k, and underestimates for odd k. So the graph will be a graph of 1/a_1 in the limit as k goes to infinity. I also made a secondary axis on the right side of the graphs, to show the value of a_1(k).
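In symbols, using the definition from the earlier post:

$$a_1 \;=\; \lim_{k\to\infty}\frac{f(k)}{A(k)},
\qquad
\frac{A(k)}{f(k)} \;\longrightarrow\; \frac{1}{a_1} \approx 0.9233\ldots \text{ as } k\to\infty.$$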

I'll start by graphing A(k)/f(k) versus k:

[Image: A(k)/f(k) vs. k]

Then I have a few graphs versus log_2(k).

[Images: A(k)/f(k) vs. log_2(k)]

Finally, I have a graph versus log_2(log_2(k)), which I think helps show how quickly (or not) the series converges on f(x).

[Image: A(k)/f(k) vs. log_2(log_2(k))]

Stay tuned for more exciting pictures tomorrow (or sometime this week). Also, I'll try to post an updated version of the code later this week, which has a minor bug fix, as well as some shortcuts which cut the running time by about 40% for calculating the polynomials.
~ Jay Daniel Fox
#43
(08/19/2014, 01:36 AM)jaydfox Wrote: Stay tuned for more exciting pictures tomorrow (or sometime this week). Also, I'll try to post an updated version of the code later this week, which has a minor bug fix, as well as some shortcuts which cut the running time by about 40% for calculating the polynomials.

The background job crashed, due to exceeding the default 4M stack in PARI/gp. So here's the output up to the point where it crashed (it had finished up to 1024*2^241).
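For anyone else who hits this, the stack can be raised from within gp; a quick sketch (exact syntax may depend on your gp version, and the sizes are in bytes):

Code:
\\ raise the PARI stack above the default 4M
allocatemem(256 * 10^6);        \\ reallocate the stack right away
default(parisize, 256 * 10^6);  \\ or set the default for the session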

[Image: A(k)/f(k), computed up to 1024*2^241]

And here's an extended version graphed versus log log k:

[Image: A(k)/f(k) vs. log_2(log_2(k)), extended]
~ Jay Daniel Fox
#44
(08/11/2014, 05:19 PM)jaydfox Wrote:
(08/11/2014, 04:51 PM)sheldonison Wrote: Nice post. What is the most accurate version of the scaling constant that you know of? Earlier, you posted, "1.083063".

I'm not having much luck finding references to the constant online. The closest two I've seen are the 1.083063 that I found earlier through a Google search (post #3 in the current discussion), and a reference to 1/a_1 of 0.9233, on this page:
https://oeis.org/A002577



The latter has 4 sig-figs, though it's accurate to almost 5 (1/1.083063 ~= 0.923307). I just found another estimate for a_1 (well, 1/a_1). Check this out:
http://www.sciencedirect.com/science/art...6503001237

Quote: Results on the number of binary partitions, B_n: In [8], Fröberg proves the following: Define

Then B_n = C_n·F(n), where (C_n) is a sequence bounded between 0.63722 < C_n < 1.920114 for all n ≥ 0. It is estimated (but unproven) that C_n tends to a limit C ≈ 0.923307 ± 0.000001 as n → ∞.



[8] C.-E. Fröberg, Accurate estimation of the number of binary partitions, BIT 17 (1977), pp. 386–391.

Abstract
Many authors have worked with the problem of binary partitions, but all estimates for the total number obtained so far are restricted to the exponential part only and hence very crude. The present paper is intended to give a final solution of the whole problem. © 1977 BIT Foundations.

And there it is: C ≈ 0.923307 ± 0.000001, where C is 1/a_1 as I defined it.
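As a quick sanity check (just inverting the quoted interval; this is my arithmetic, not a figure from the paper):

$$a_1 \;=\; \frac{1}{C} \;\approx\; \frac{1}{0.923307 \pm 0.000001} \;\approx\; 1.083063 \pm 0.0000012,$$

which is consistent with the 1.0830628-1.0830629 neighborhood I estimated earlier.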

That abstract looks familiar. I'm pretty sure it's the same paywalled paper I linked to before. It was written the year I was born, so it just seems a shame that there's still such a hefty fee to access it (over $40).
~ Jay Daniel Fox
#45
(08/19/2014, 05:31 PM)jaydfox Wrote: ....
And there it is: C ≈ 0.923307 ± 0.000001, where C is 1/a_1 as I defined it.

That abstract looks familiar. I'm pretty sure it's the same paywalled paper I linked to before. It was written the year I was born, so it just seems a shame that there's still such a hefty fee to access it (over $40).
Too bad you can't find the original paper; $40 is pretty steep. I'm not in a math program, so I don't think I have access to the local university library without a student ID; never tried.

How large a value of "k" have you calculated A(k) for, and how long does it take? I know you were working on a logarithmic algorithm in terms of memory requirements... but I thought the time was still proportional to "k". I didn't follow where you got to after that. Your chart shows 2^240 which is pretty big ... impressive!

Also, I think [formula omitted], but I'm only using k=800,000, along with some questionable accelerator techniques to estimate the function, also assuming that it converges fairly quickly to a particular periodic function which I posted earlier.



Then I took the inverse of the zerocount function at 16, 15.5, 15, 14.5, 14, 13.5, 13, 12.5 and averaged all of the alpha_1 values with a binomial weighting. I estimated the error term by varying the starting point from 16 to 15.5. Of course, it may be an illusion that this sum converges to alpha_1... I could try more terms, though, since I'm only using k=800,000.
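In gp terms, the weighted average is something like this (the normalization by 2^7 is implied by the binomial weights; alpha is just a stand-in name for the vector of the 8 alpha_1 estimates):

Code:
\\ binomially weighted average of 8 estimates taken at 16, 15.5, ..., 12.5;
\\ alpha is a placeholder name for the vector of alpha_1 values
binavg(alpha) = sum(i = 0, 7, binomial(7, i) * alpha[i+1]) / 2^7;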
- Sheldon
#46
(08/19/2014, 07:56 PM)sheldonison Wrote: How large a value of "k" have you calculated A(k) for, and how long does it take? I know you were working on a logarithmic algorithm in terms of memory requirements... but I thought the time was still proportional to "k". I didn't follow where you got to after that. Your chart shows 2^240 which is pretty big ... impressive!

Also, I think [formula omitted], but I'm only using k=800,000, along with some questionable accelerator techniques to estimate the function, also assuming that it converges fairly quickly to a particular periodic function which I posted earlier.



Then I took the inverse of the zerocount function at 16, 15.5, 15, 14.5, 14, 13.5, 13, 12.5 and averaged all of the alpha_1 values with a binomial weighting. I estimated the error term by varying the starting point from 16 to 15.5. Of course, it may be an illusion that this sum converges to alpha_1... I could try more terms, though, since I'm only using k=800,000.

I need to take a look at your acceleration trick, because it's remarkably accurate, for only going up to about 2^20 terms. I've taken my calculations out to 2^432 terms so far, and I'm only getting a couple more digits than you are, using simple quadratic interpolation.

Code:
log_2(k)          C_k = A(k)/f(k)       a_1(k) = f(k)/A(k)   C (weighted avg.)     a_1 (weighted avg.)
309.113942364489  0.923305180556966373  1.08306551404462924
309.616329012193  0.923309674118625365  1.08306024298357084  0.923307427587674182  1.08306287821457317
310.118712300961  0.923305181556479624  1.08306551287216931  0.923307427588546353  1.08306287821355009
310.621091079723  0.923309673122600798  1.08306024415192708  0.923307427587672253  1.08306287821457543
311.123466524098  0.923305182549007791  1.08306551170790309  0.923307427588535283  1.08306287821356308
311.625837484441  0.923309672133524754  1.08306024531213255
...
430.098420027089  0.923309589752691560  1.08306034194646453
430.600126880573  0.923305265669410344  1.08306541420511103  0.923307427587743738  1.08306287821449158
431.101831191477  0.923309589259462704  1.08306034252503169  0.923307427587438262  1.08306287821484991
431.603534017818  0.923305266161417297  1.08306541362797178

The values of log_2(k) and C_k were found by linear regression. From there, in order to determine the approximate value of C, take 3 consecutive extrema (min/max/min or max/min/max) and take a weighted average, giving double weight to the middle value. Doing so for the values above gives the values in the last two columns.
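For example, plugging the first three extrema from the table above (a min/max/min triple) into gp reproduces the weighted-average column:

Code:
\\ three consecutive extrema of C_k = A(k)/f(k), copied from the table above
cmin1 = 0.923305180556966373;  \\ local minimum near log_2(k) = 309.11
cmax  = 0.923309674118625365;  \\ local maximum near log_2(k) = 309.62
cmin2 = 0.923305181556479624;  \\ local minimum near log_2(k) = 310.12
C  = (cmin1 + 2*cmax + cmin2) / 4;  \\ double weight on the middle extremum
a1 = 1/C;                           \\ about 1.0830628782...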

Here's a graphical extrapolation, based on a grid of m*2^n, 2049 <= m <= 4096, 298 <= n <= 300, giving a range of about 2^309..2^312:

[Image: extrapolation from the grid spanning ~2^309..2^312]

Here's a graph illustrating how I derived a quadratic fitting curve using linear least squares regression on a sample of 6 points near the apex (or 5 points, if the central point is very close to the apex), in order to get a reasonable estimate of the location of the local minimum/maximum:

[Image: quadratic least-squares fit near a local extremum]
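The fitting step itself is just a least-squares quadratic through the sample points; a bare-bones gp sketch (quadapex is a hypothetical helper, not part of jdf_seq) would look something like:

Code:
\\ hypothetical helper, not part of jdf_seq: fit c0 + c1*x + c2*x^2 to the
\\ sample points by least squares, and return the apex location -c1/(2*c2)
quadapex(xs, ys) = {
  my(n = #xs, V, c);
  V = matrix(n, 3, i, j, xs[i]^(j-1));  \\ design matrix: columns 1, x, x^2
  c = matsolve(V~ * V, V~ * Col(ys));   \\ solve the normal equations
  return([-c[2]/(2*c[3]), c]);          \\ [apex, coefficients c0, c1, c2]
}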

Finally, after a few more hours of number crunching, I came up with a grid of points of the form m*2^n, 2049 <= m <= 4096, 419 <= n <= 420, giving a range of about 2^430..2^432:

[Image: A(k)/f(k) over the grid up to ~2^432]
~ Jay Daniel Fox
#47
(08/20/2014, 07:42 AM)jaydfox Wrote:
(08/19/2014, 07:56 PM)sheldonison Wrote: How large a value of "k" have you calculated A(k) for, and how long does it take? ....
Then I took the inverse of the zerocount function at 16, 15.5, 15, 14.5, 14, 13.5, 13, 12.5 and averaged all of the alpha_1 values with a binomial weighting.... could try more terms though since I'm only using k=800000

I need to take a look at your acceleration trick, because it's remarkably accurate, for only going up to about 2^20 terms. I've taken my calculations out to 2^432 terms so far, and I'm only getting a couple more digits than you are, using simple quadratic interpolation.
I downloaded your memory-efficient code, jdf_seq_v2.gp, and it works great. I figured out how to initialize the SeqParts array for a power of 2, and then it is fast for numbers not too much bigger than that exact power of 2. What is the algorithm for initialization halfway to the next power of 2? For example (2^24+2^23)? I don't understand the polynomials well enough to write my own SeqPowers2 initialization starting with 3,6,12,24....
- Sheldon
#48
(08/20/2014, 02:11 PM)sheldonison Wrote: I downloaded your memory-efficient code, jdf_seq_v2.gp, and it works great. I figured out how to initialize the SeqParts array for a power of 2, and then it is fast for numbers not too much bigger than that exact power of 2. What is the algorithm for initialization halfway to the next power of 2? For example (2^24+2^23)? I don't understand the polynomials well enough to write my own SeqPowers2 initialization starting with 3,6,12,24....

Okay, here's the latest version:

Attachment: jdf_seq_v3.gp (8.45 KB)

Note that for SeqPowers2 and SeqPowers2m, it converts large numbers to floating point for display purposes. To turn this off, set jPrec=0.

You may have noticed that the graphs I've been posting in this discussion use matplotlib. I prefer graphing in SAGE/python with matplotlib. But my sequence code is in PARI/gp. I would actually prefer to write all my code in SAGE/python, but I'm trying to keep my code accessible to the group here, which means PARI/gp.

My workaround is to export the data from gp to a text file. I do this in my code by setting jWrite=1 when I'm ready to export data. I then import the data into SAGE, then save it to a SAGE native format for reuse later. From there, I can make all my graphs. (Sheldon, if you know of a better way to write stuff to disk from gp, please let me know.)
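(For anyone following along: gp's write() appends its arguments to the named file as one line; a minimal example, with a placeholder file name and example values:)

Code:
\\ append one row of (log_2(k), A(k)/f(k)) to a plain text file
dumprow(fname, logk, ck) = write(fname, logk, "   ", ck);
dumprow("seq_data.txt", 309.113942364489, 0.923305180556966373);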

Anyway, updates in this code:
1. I created a slightly modified version of the NextSeq() function, aptly named NextSeq2(). It accomplishes the same effect as the previous version, but the bit checking is faster.

2. InitPolys will reuse previously calculated polynomials. It will only add what it needs to. If you run InitPolys(20), then InitPolys(25), it will only calculate 21..25 in the second call. Because of this, you can safely call InitPolys from other functions, to ensure that all required polynomials are available. See SeqPowers2, for example.

3. The new function SeqPowers2m, which takes two main arguments: n and m. The n parameter is the max exponent; the m parameter is also an exponent, but controls a sequential counter. The function calculates k*2^j, where [2^(m-1)+1 <= k <= 2^m], and [1 <= j <= n]. There is an optional parameter start, which limits j to the range [start <= j <= n].
Effectively, this calculates a grid of values, from about 2^(start+m-1) to 2^(n+m); see the worked example after this list.

4. I made a bug fix and a couple performance enhancements to InitPolys:
4.1: The previous version scaled the polynomials incorrectly (they were reduced by one too many powers of 2). For example, the following code should always return 1, for any value of k:

Code:
subst(SeqPol[k+1], x, 2^(k-1))

The defect wasn't noticeable, because the code had a compensating "bug". In this case, two wrongs made a right. Anyway, both bugs are fixed now.

4.2: I realized that when calculating the points for polynomial interpolation, I had a running sum of terms, only to take the differences as the first step of interpolation. This seemed inefficient. Therefore, instead of calculating the values directly, I now calculate the first differences, and the interpolation code skips the first differences and starts on the second differences.

4.3: This is the big performance boost. I realized that I could reuse almost half the points from the previous polynomial grid. This saves some of the very expensive computations (high-degree polynomials with large rational coefficients). Speedup is roughly 40%. (Begin Edit) I.e., it takes 40% less time. Depending on your point of view, that means the speedup is roughly 70%, since 1/(1-0.4) ~= 1.67. (End Edit)

(Begin Edit)
5. I forgot to mention another big change in SeqNum: while I was writing SeqPowers2m, I found a pattern for calculating A(2^n) for integer n, which requires only one polynomial evaluation (as opposed to n evaluations). I haven't done a comparison between the old and new versions, but I assume the speedup is significant, especially when n is very large.
(End Edit)
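As a worked example of the grid sizes: the 2^309..2^312 grid from my earlier post corresponds to m = 12, start = 298, n = 300, so k runs over 2049..4096 and j over 298..300, and the products k*2^j span

$$2049\cdot 2^{298} \approx 2^{309.0}
\qquad\text{up to}\qquad
4096\cdot 2^{300} = 2^{12}\cdot 2^{300} = 2^{312}.$$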

There are probably still a few bugs in the code, so if you run into any, please let me know.

I'm still getting around to writing the code that calculates arbitrary terms of the sequence. It's actually a bit complicated, which is why I've been putting it off.
~ Jay Daniel Fox
#49
(08/20/2014, 07:57 PM)jaydfox Wrote: Okay, here's the latest version:

(...)

There are probably still a few bugs in the code, so if you run into any, please let me know.

New version:

Attachment: jdf_seq_v4.gp (8.87 KB)

Sorry to post two versions in one day, but the performance boost was too huge to ignore. In the table below, the top row shows the number of polynomials I calculated; below it are the times in seconds. As you can see, version 3 cuts the time by 30%-50%, compared to version 2. But version 4 blows them both out of the water, cutting the time by more than an order of magnitude.

Code:
Vers.   60      100     150     200
v2      0.7     9.1     91      --
v3      0.5     5.6     49
v4      0.15    0.5     3.2     15.3

How is this possible? Simple: I got rid of the rational coefficients. Essentially, I multiplied the nth polynomial by n!*2^(n-1), then divide through by that same constant when I use the polynomial. This saves at least n/2-1 divisions by a large number, plus all the underlying rational math that is working so hard to keep fractions in lowest terms and find least common multiples, etc.
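The general idea, in a toy example (this is not one of the actual SeqPol polynomials, just an illustration of the trick): clear the denominators once, evaluate with integer arithmetic, and divide by the constant only at the end.

Code:
\\ toy illustration: clear the rational coefficients once, evaluate with
\\ integer arithmetic, and divide by the constant only at the end
P    = x^3/6 + x^2/2 + x;       \\ a polynomial with rational coefficients
d    = denominator(P);          \\ lcm of the coefficient denominators (6 here)
Pint = d * P;                   \\ now all coefficients are integers
v1   = subst(P, x, 10);         \\ direct evaluation, rational arithmetic
v2   = subst(Pint, x, 10) / d;  \\ same value (680/3), mostly integer arithmetic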

No other significant changes in this version. A few minor updates (e.g., the show parameter in SeqPowers2m now cascades to the inner call to Seq()).
~ Jay Daniel Fox
#50
(08/21/2014, 01:15 AM)jaydfox Wrote: I got rid of the rational coefficients. Essentially, I multiplied the nth polynomial by n!*2^(n-1), then divide through by that same constant when I use the polynomial.

I didn't explain that clearly. (I was in a hurry to post.)

I multiplied the nth polynomial by n!, and skipped the substitution of x with x/2^(n-1). So in the later subst statements, I don't have to put in the factor of 2^(n-1), and I divide the result by n!. If you compare versions 3 and 4, you should be able to get the general idea of the change. I had to change the functions InitPolys, NewtonInterp, SeqPowers2, and SeqPowers2m. The bulk of the changes were in NewtonInterp. Most of the changes in the other functions were just changes to the subst() calls. (Note: I'm quoting the function names from memory, as I'm on a different computer, so forgive me if I didn't spell them exactly right.)

Also, I think I left a function in the code, FactorPoly(n) or something like that, which I was using for debugging. It's probably out of date (i.e., broken), so don't waste time trying to understand it.
~ Jay Daniel Fox

