I'm not sure if anyone else is still playing around with the binary partition function. I'm still learning lots of new things (about the function, or about complex analysis), so I keep poking at it.

Here are a few of my latest observations. Sorry for the long post; I probably should have split this up into multiple posts.

Locally Polynomial

While the relationship isn't exact, we can approximately say that as we double the scale on the x-axis, the function resembles a polynomial one degree higher. For example, if we say that the function resembles a 3rd order polynomial between about 4 and 8, then it will resemble a 4th order polynomial between about 8 and 16. (Those numbers are made-up examples, but better ones are demonstrated below.)

This relationship is easy to see when you build the discrete binary partition function, because of the polynomials that I have previously discussed. The polynomials are found by interpolating points at intervals of 2^(n-1) for the nth-degree polynomial. In other words, we have to double the scale to interpolate one degree higher.
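Assuming the discrete sequence here is the usual binary partition count (partitions of n into powers of 2) — which is my reading of the thread, not something restated in this post — it can be generated with the classical recurrence b(2n) = b(2n-1) + b(n), b(2n+1) = b(2n):

```python
# Number of partitions of n into powers of 2 (binary partitions),
# via the standard recurrence: b(2n) = b(2n-1) + b(n), b(2n+1) = b(2n).
def binary_partitions(n_max):
    b = [0] * (n_max + 1)
    b[0] = 1
    if n_max >= 1:
        b[1] = 1
    for n in range(2, n_max + 1):
        if n % 2 == 1:
            b[n] = b[n - 1]              # odd n: same count as n - 1
        else:
            b[n] = b[n - 1] + b[n // 2]  # even n: add the partitions of n/2
    return b

b = binary_partitions(64)
print(b[4], b[8], b[16], b[64])  # → 4 10 36 1828
```

The values at powers of 2 (4, 10, 36, 202, 1828, ...) also show up in the table of polynomial evaluations later in this post.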

I was curious to measure just how well each region could be approximated by a polynomial. My idea was simple: for any given x, determine the contribution of a_n x^n to f(x), for each n in the Taylor series. What I found was pretty interesting. Based on the definition of the power series, we can find the exact value of x for which a_k x^k = a_{k+1} x^(k+1). Below this point, the function gets more of its "character" from the kth order term, while above that point, it gets more of its character from the (k+1)th order term.

For example, at x=12, the 2nd and 3rd order terms are equal: 1/4 12^2 = 1/48 12^3. At x=32, the 3rd and 4th order terms are equal: 1/48 32^3 = 1/1536 32^4. Given these two observations, we can say that f(x) behaves approximately like a 3rd order polynomial on the interval (12, 32).

Relatively Precise Approximation of f(x)

In general, at x = k 2^(k-1), the (k-1)th and kth order terms are equal.
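This generalization can be checked numerically. The general coefficient pattern a_k = 1/(k! 2^(k(k-1)/2)) is my extrapolation from the three coefficients quoted above (1/4, 1/48, 1/1536), not something stated in this post; under that assumption, the crossing point a_{k-1} x^(k-1) = a_k x^k falls exactly at x = a_{k-1}/a_k = k 2^(k-1):

```python
from fractions import Fraction
from math import factorial

# Hypothetical general term extrapolated from 1/4, 1/48, 1/1536:
# a_k = 1 / (k! * 2^(k(k-1)/2)).  Exact rationals avoid float noise.
def a(k):
    return Fraction(1, factorial(k) * 2 ** (k * (k - 1) // 2))

assert a(2) == Fraction(1, 4)
assert a(3) == Fraction(1, 48)
assert a(4) == Fraction(1, 1536)

# Setting a_{k-1} x^(k-1) = a_k x^k gives x = a_{k-1} / a_k = k * 2^(k-1).
for k in range(3, 8):
    assert a(k - 1) / a(k) == k * 2 ** (k - 1)
    print(k, k * 2 ** (k - 1))  # crossing points: 12, 32, 80, 192, 448
```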

The really nice thing about this observation is that it leads to a relatively precise approximation of f(x). We have already seen an approximation of ln(f(x)) earlier:

(08/04/2014, 11:49 PM)sheldonison Wrote: I made some numerical approximations, and for big enough values of x, using Jay's Taylor series, I get that ln(f(x)) ~= ln(x)^2/(2 ln(2)), which matches the equations in Gottfried's link. If ln(x) = 1E100, then ln(f(x)) ~= 7.213E199, and ln(f(f(x))) ~= 3.753E399.

Note that ln(f(x)) = ln(x)^2/(2 ln(2)) is equivalent to log_2(f(x)) = log_2(x)^2/2. It takes (in my opinion) a very large value of log_2(x), which in turn implies an extremely large value of x, before this approximation gets close (i.e., before the ratio of the LHS and RHS approaches 1). And this is just an approximation of the logarithm of f(x). Exponentiate, and it's way off.

My approximation gets closer to 1, faster, and it approximates f(x) directly.

To use it, however, we have to be able to invert the relatively simple equation q(x) = (x-1) log_2(x) (Update) q(x) = x 2^(x-1) (End Update). This yields h(q(x)) = x. For example, h(12) = 3, h(32) = 4, h(448) = 7, etc.

h(x) can be solved iteratively (taking the limit, but truncating when the desired accuracy has been reached):
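The iteration formula itself didn't survive the forum formatting here, so the following is only one possible sketch, not necessarily the original: taking log_2 of both sides of q = x 2^(x-1) gives log_2(q) = log_2(x) + x - 1, which rearranges into the fixed-point form x = 1 + log_2(q) - log_2(x):

```python
from math import log2

# One possible fixed-point iteration for h = q^{-1}, where q(x) = x * 2^(x-1).
# From q = x * 2^(x-1):  log2(q) = log2(x) + x - 1,  so  x = 1 + log2(q) - log2(x).
# (A sketch only; the original post's iteration formula was lost.)
def h(q, tol=1e-12, max_iter=200):
    x = 1.0 + log2(q)  # initial guess (overshoots the root)
    for _ in range(max_iter):
        x_new = 1 + log2(q) - log2(x)
        if abs(x_new - x) < tol:  # truncate once the desired accuracy is reached
            return x_new
        x = x_new
    return x

print(h(12), h(32), h(448))  # approximately 3, 4, 7
```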

I also need to define a new constant, g_2, which I found experimentally, but which I believe I will be able to define rigorously in a later post:

With these definitions in place, the approximation of f(x) is:

Let's start with an easy example. For x = 448, we find that h(448) = 7. That is, 7*2^(7-1) = 448.

Therefore, we have (updated, see note at end):

This evaluates to f(448) ~= 1145355510.094

Compare this to the actual value f(448) = 1042092969.261. The approximation is off by about 9.9%. For larger values of x, the approximation only gets better. Indeed, convergence in h(x) is linear. With just a small adjustment to this approximation, convergence in h(x) is quadratic:

I found this last adjustment experimentally as well, but I'm hoping there's a rigorous method to derive it. I suspect it's a tweak to the sum which yielded the g_2 "constant". g_2 isn't actually a constant, but a function of h(x), which goes to the "constant" value only in the limit as h(x) goes to infinity.

With this better approximation, f(448) ~= 1038353545.081, which is accurate to within 0.36%.

Coefficients for the Discrete Sequence

I've previously described how to derive polynomials which can be used to calculate arbitrary terms in the discrete sequence. I've tried analyzing the coefficients of these polynomials, but I'm just not finding any useful patterns. The coefficient of the highest order term matches the corresponding coefficient of f(x), and the coefficient of the next term (two degrees lower) is generally negative, but beyond that, nothing useful yet. It's a shame, because a closed form solution for these coefficients might help determine the a_0 (or C) constants previously discussed: a_0 ~= 1.0830629, C = 1/a_0 ~= 0.923307

However, what does seem to bear fruit is to look at the polynomials evaluated at powers of 2. The first few are:

Code:

1
1    1
1    2    1
1    4    4    1
1    8   16   10    1
1   16   64   84   36    1
1   32  256  680  656  202    1
1   64 1024 5456 10816 8148 1828    1

From left to right, these are the evaluations of the 0th, 1st, 2nd, 3rd, etc. order polynomials. From top to bottom, these are the polynomials evaluated at 2^k, where k is the row number, with the first row being k=-1 (to make the math simpler, so that the second row is evaluated at 2^0 = 1).

The diagonal elements are 1, with 0's above, and the following elements below:

The first column is 1 for all rows

The second column is 1 * 2^k

The third column is 2^{2(k-1)} = 1/4 2^{2k}

The fourth column is 2^{3(k-2)} + 2*a_{k-1,3}, which is floor(1/48 2^{2k+2}) * 2^{k-2}

The fifth column is 2^{4(k-3)+1} + 4*a_{k-1,4}, which is (4*floor(1/1536 2^{2k+4})+1) * 2^{2(k-3)}

Note the use of the floor function as an attempt to make these terms approximate the continuous function: the Taylor coefficients of the continuous function (1/4, 1/48, 1/1536) appear inside the expressions above.

Here, a_{k-1,3} refers to the previous row, column 3 (the fourth column from the left, since I consider the leftmost column the 0th column). In other words, for column 3:

k=2, 1 = 2^0 + 0 = floor(2^6/48)

k=3, 10 = 2^3 + 2*1 = floor(2^8/48)*2

k=4, 84 = 2^6 + 2*10 = floor(2^10/48)*2^2

k=5, 680 = 2^9 + 2*84 = floor(2^12/48)*2^3

k=6, 5456 = 2^12 + 2*680 = floor(2^14/48)*2^4

Similarly for column 4 (the fifth column):

k=3, 1 = 2^1 + 4*(-1/4) = (floor(2^10/1536)*4+1) * 2^0

k=4, 36 = 2^5 + 4*1 = (floor(2^12/1536)*4+1) * 2^2

k=5, 656 = 2^9 + 4*36 = (floor(2^14/1536)*4+1) * 2^4

k=6, 10816 = 2^13 + 4*656 = (floor(2^16/1536)*4+1) * 2^6
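As a quick sanity check (my own sketch, not from the original post), the recurrences and closed forms above do reproduce the table entries, with the starting values a_{2,3} = 1 and a_{3,4} = 1 supplied by the diagonal of 1's:

```python
# Verify the fourth- and fifth-column formulas against the table rows.
# Column 3:  a_{k,3} = 2^(3(k-2)) + 2*a_{k-1,3}  =  floor(2^(2k+2)/48) * 2^(k-2)
# Column 4:  a_{k,4} = 2^(4(k-3)+1) + 4*a_{k-1,4}
#            = (4*floor(2^(2k+4)/1536) + 1) * 2^(2(k-3))

col3 = {2: 1}  # column 3 (the fourth column), keyed by row index k
for k in range(3, 7):
    col3[k] = 2 ** (3 * (k - 2)) + 2 * col3[k - 1]
    assert col3[k] == (2 ** (2 * k + 2) // 48) * 2 ** (k - 2)

col4 = {3: 1}  # column 4 (the fifth column)
for k in range(4, 7):
    col4[k] = 2 ** (4 * (k - 3) + 1) + 4 * col4[k - 1]
    assert col4[k] == (4 * (2 ** (2 * k + 4) // 1536) + 1) * 2 ** (2 * (k - 3))

print([col3[k] for k in range(3, 7)])  # → [10, 84, 680, 5456]
print([col4[k] for k in range(4, 7)])  # → [36, 656, 10816]
```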

I feel like I'm close to a pattern here that will really speed up calculations. I might be able to ditch the polynomials entirely. I've found some other strange recurrences, one of which uses the Thue-Morse sequence, which Gottfried previously described in post #10 on the first page of this thread. I'll try to detail those later this week.

PS: The smiley is driving me nuts: f(8) gets rendered as f( followed by a smiley unless I insert a space, as in f(8 ).

PPS: I made an edit above, and tried to make it obvious what the edit was. And just had to make a second edit to clean up some TeX that concatenated two numbers, and changed the formatting of the scientific notation to remove ambiguity about the "e".

~ Jay Daniel Fox