My favorite integer sequence
#31
(08/09/2014, 07:02 PM)jaydfox Wrote: I'm also working on a second-order approximation. This would consist of two parts. The first is to fix oscillations in the error term (1/1.083...)*f(k)-A(k). These oscillations follow approximately a scaled version of f(-x). Remember the zeroes on the negative real line?

First of all, in thinking about it, the oscillations actually seem to follow something along the lines of G(f(-c x^2)), where G(x) is some scaling function (similar to x^p for some 0 < p < 1, or perhaps similar to asinh?), and c is a constant. It's not exact of course, but I think it gets me in the right ballpark. I'll show pictures later to demonstrate the similarities that I'm working with.

The -x^2 indicates a couple of things. The negative sign means I want to look at the oscillations that are clearly visible in f(x) on the negative real axis. However, between each pair of consecutive zeroes is an alternating local minimum or local maximum. The oscillations I'm looking at show both a local minimum and a local maximum at the same scale.

I.e., if x_k is a zero (e.g., -1054.13), then approximately 2*x_k is another zero (e.g., -2359.66). In between, we have either a local minimum or a local maximum. However, with the oscillations of A(k)-f(k), we would have a local minimum and a local maximum over the same doubling of scale, and thus two "zeroes". (We're working with a discrete sequence, so there won't actually be any zeroes.) Anyway, the x^2 term takes care of this. It means that in going from x to 2x (technically, (2+h)x for a small h), we're looking at zeroes between -x^2 and -4x^2 (technically, -(4+4h)x^2 for the same h, ignoring the O(x) term), and thus we've considered two zeroes. (I'm having trouble putting it into words at the moment, so maybe some pictures later on will help.)

Okay, here are some pictures to illustrate the oscillations I was talking about. There are a couple of things to note. First of all, the self-similarity repeats at slightly larger than powers of 2. For example, the local maxima for the even terms (top "curve") are at 4, 10, 28, 68, 156, 348, 780, etc., each more than twice the previous index, but less than 3 times. The ratio gets closer and closer to 2, though it takes quite a while to get down as low as 2.1, let alone 2.01.

Second, while there is definitely some self-similarity, the vertical scale is not linear. While I am slightly more than doubling the scale in the x direction with each picture below, the y scale increases by much larger factors, up to 1000x per image. That's where the unknown G(x) above comes into play.
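As a quick check of those numbers, the successive ratios of the quoted maxima indices can be computed directly (a Python sketch using only the indices listed above):

```python
# Indices of the local maxima for the even terms, as quoted above.
maxima = [4, 10, 28, 68, 156, 348, 780]

# Successive ratios; each should lie strictly between 2 and 3.
ratios = [b / a for a, b in zip(maxima, maxima[1:])]
print(ratios)
```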
   [attachments: oscillation plots at successively doubled scales]
~ Jay Daniel Fox
#32
I think Jay defined the constant 1.08... as the ratio between the solution f of my functional equation f(x+1) - f(x) = f(x/2) and the solution J of his J'(x) = J(x/2).

I'm not sure if he intended that.

It is consistent if my f(x) is very close to the original sequence a(x), in the sense that lim x-> +oo f(x)/a(x) = 1.

(Then it follows that lim x-> +oo J(x)/a(x) = lim x-> +oo J(x)/f(x) = 1.08...)

I have the feeling Jay's post/idea is not complete yet, or still under development; probably there will be an edit or a followup.

The notation is a little strange, I think, but that might be due to the unfinished brainstorming too.

Anyway, thanks for the posts, Jay!

I have to think about Jay's last post now.

My first critical remark is that the analogue for exp does not seem convincing right now; I do not see (1+1/n)^n.
But I might need to read again, and Jay will probably do an edit or followup. It's just a first impression anyway.

regards

tommy1729

EDIT: I was referring to Jay's post 30 mainly. He posted nr. 31 while I made this reply.
#33
Considering your latest posts, Jay, are you still convinced that J(x)/1.08... is a very good approximation to A(x)?

I mean, considering your plots, it does not seem like

lim n-> + oo J(n)/A(n) = 1.08... holds true.

When I talked about "proving", I meant "deciding if true", actually.

The negative axis seems to be popular lately.

regards

tommy1729
#34
(08/09/2014, 10:52 PM)tommy1729 Wrote: I have the feeling Jay's post/idea is not complete yet, or still under development; probably there will be an edit or a followup.

The notation is a little strange, I think, but that might be due to the unfinished brainstorming too.
Yes, still brainstorming, and I'm not attached to any particular notation yet. But I needed something so I could start to formalize...

Quote:My first critical remark is that the analogue for exp does not seem convincing right now; I do not see (1+1/n)^n.
But I might need to read again, and Jay will probably do an edit or followup. It's just a first impression anyway.

(...)

I was referring to Jay's post 30 mainly. He posted nr 31 while I made this reply.

Take exp_1:
Code:
k    E_1(k)    E_{1/2}(k)   E_{1/3}(k)
0    1         1            1
1    2         3/2          4/3
2    4         9/4          16/9
3    8         27/8         64/27

exp_1(1) = 2 = (1+1)^1
exp_{1/2}(1) = 2.25 = (1+1/2)^2
exp_{1/3}(1) = 2.370370... = (1+1/3)^3
exp_{1/n}(1) = (1+1/n)^n
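A small Python sketch of the table above, building E_h by the Euler-style recurrence E_h(k) = E_h(k-1) + h*E_h(k-1) (so E_h(k) = (1+h)^k, and exp_{1/n}(1) = E_{1/n}(n) = (1+1/n)^n), using exact fractions to match the table:

```python
from fractions import Fraction

def E(h, k):
    """E_h(k) via the recurrence E_h(k) = E_h(k-1) + h * E_h(k-1), E_h(0) = 1."""
    v = Fraction(1)
    for _ in range(k):
        v = v + h * v
    return v

# Reproduce the table: columns E_1, E_{1/2}, E_{1/3} for k = 0..3.
for k in range(4):
    print(k, E(Fraction(1), k), E(Fraction(1, 2), k), E(Fraction(1, 3), k))

# exp_{1/n}(1) corresponds to n steps of size 1/n: (1 + 1/n)^n.
print(E(Fraction(1, 2), 2))  # 9/4 = 2.25
```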

I could have defined exp_h that way to begin with, but I defined it as a sequence with a recurrence relation similar to A(k), to help draw the analogy between a_1 = 1.08306 and e_1 = e/2 = 1.35914.

(08/09/2014, 11:10 PM)tommy1729 Wrote: Considering your latest posts, Jay, are you still convinced that J(x)/1.08... is a very good approximation to A(x)?

The reason I wanted to set 1.08306 as a_1 was to make clear what it was. By itself, J(x) is not a perfect approximation of A(k). To Tommy's question, do I consider J(x) a "very good approximation" to A(x)? A "good approximation", yes, but not a "very good approximation".

But by re-imagining A(k) as A_1(k), it becomes clear why it's not such a good approximation. In reality, it's not so much an issue of J(x) being a poor approximation of A(k). It's that A_1(k) uses such a large step size that it's a poor approximation of J(x). In this sense, J(x) is the right function to approximate A_1(k), but not the right function to approximate A(k). The distinction is not mathematical, but metaphysical.
~ Jay Daniel Fox
#35
(08/09/2014, 11:10 PM)tommy1729 Wrote: Considering your latest posts, Jay, are you still convinced that J(x)/1.08... is a very good approximation to A(x)?

I mean, considering your plots, it does not seem like

lim n-> + oo J(n)/A(n) = 1.08... holds true.

The absolute error grows without bound, so in that sense, no, it's not a good approximation. However, the relative error decreases to 0 in the limit, so yes, I still consider it a good first-order approximation. If A(k) is approximated by J(k)/a_1, then the absolute error is (A(k) - J(k)/a_1), which I graphed previously. The relative error is the absolute error divided by A(k), or (A(k) - J(k)/a_1) / A(k), which is 1-J(k)/(a_1 A(k)).

The graphs below use an alternate relative error of (a_1 A(k))/J(k) - 1. For small errors, this alternate definition is essentially equivalent (e.g., 1-999/1000 ~= 1000/999 - 1), and I didn't want to redo the graphs, so they'll have to do. You can't actually see the difference in the graphs (i.e., comparing side by side), but since the equation is written at the top of the graphs, I figured I should disclose the discrepancy.
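To see how close the two error definitions are, here is a generic numerical check (not tied to the actual A(k) data): for a ratio r near 1, the errors 1 - r and 1/r - 1 differ only at second order.

```python
# r stands in for J(k) / (a_1 * A(k)), which approaches 1.
for eps in (1e-2, 1e-3, 1e-4):
    r = 1 - eps
    err_text = 1 - r         # relative error as defined in the text
    err_graph = 1 / r - 1    # alternate definition used in the graphs
    print(eps, err_text, err_graph, abs(err_graph - err_text))
```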

   [attachments: relative-error plots]

Skipping a doubling of the scale on the k-axis:

   [attachment: relative-error plot]

And skipping several more:

   [attachment: relative-error plot]

As you can see, the relative error continues to shrink. The sequence A(n) will cross the function J(x) an infinite number of times, and the absolute error will grow without bound, but the relative error will shrink to 0. I'm still working on proving it formally, but it seems intuitively clear, based on analysis up to 2^21 terms in the sequence, plus crude analysis of the 2^m terms up to m=150.

There's also the semi-relative
~ Jay Daniel Fox
#36
Some variants:

In general f(n) = f(n-1) + f(n/a1) + f(n/a2) + ...
or f(n) = f(n-1) + f(n/a1) - f(n/a2) + f(n/a3) - ...

where 2 < a1 < a2 < a3 < ... and all divisions are rounded down to the nearest integer (floor). Also f(0) = 0 and f(1) = 1.

In particular

f(n) = f(n-1) + f(n/3) + f(n/5) + f(n/7) + f(n/9) + ...

and

g(n) = g(n-1) + g(n/3) - g(n/5) + g(n/7) - g(n/9) + ...

The growth rates of these two are fascinating.
Clearly this has number-theoretic value.
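A minimal Python sketch of these two particular variants (following the stated conventions: floored divisions, f(0) = 0, f(1) = 1; divisors a > n contribute f(0) = 0, so the sums below stop at a = n):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def f(n):
    """f(n) = f(n-1) + f(n//3) + f(n//5) + f(n//7) + ..."""
    if n <= 1:
        return n
    return f(n - 1) + sum(f(n // a) for a in range(3, n + 1, 2))

@lru_cache(maxsize=None)
def g(n):
    """g(n) = g(n-1) + g(n//3) - g(n//5) + g(n//7) - g(n//9) + ..."""
    if n <= 1:
        return n
    total = g(n - 1)
    sign = 1
    for a in range(3, n + 1, 2):
        total += sign * g(n // a)
        sign = -sign
    return total

print([f(n) for n in range(8)])  # the all-plus variant
print([g(n) for n in range(8)])  # the alternating variant
```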


Also, the sum 1/a1 + 1/a2 + ... is fascinating when it converges.

For instance

A(n) = A(n-1) + A(n/3)

B(n) = B(n-1) + B(n/4) + B(n/12)

Notice 1/3 = 1/4 + 1/12.

Does this imply that A(n) is close to B(n)?
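One way to probe that question numerically (a sketch, using the same conventions as above: floored divisions, X(0) = 0, X(1) = 1):

```python
def seq(divs, N):
    """X(n) = X(n-1) + sum of X(n // d) over the given divisors; X(0)=0, X(1)=1."""
    x = [0, 1]
    for n in range(2, N + 1):
        x.append(x[n - 1] + sum(x[n // d] for d in divs))
    return x

N = 2000
A = seq([3], N)        # A(n) = A(n-1) + A(n/3)
B = seq([4, 12], N)    # B(n) = B(n-1) + B(n/4) + B(n/12)

# If 1/3 = 1/4 + 1/12 matters, the ratio A(n)/B(n) might stay tame.
for n in (10, 100, 1000, 2000):
    print(n, A[n], B[n], A[n] / B[n])
```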

Many ideas and conjectures pop up.

regards

tommy1729
#37
(08/09/2014, 10:51 PM)jaydfox Wrote:
(08/09/2014, 07:02 PM)jaydfox Wrote: I'm also working on a second-order approximation. This would consist of two parts. The first is to fix oscillations in the error term (1/1.083...)*f(k)-A(k). These oscillations follow approximately a scaled version of f(-x). Remember the zeroes on the negative real line?

First of all, in thinking about it, the oscillations actually seem to follow something along the lines of G(f(-c x^2)), where G(x) is some scaling function (similar to x^p for some 0 < p < 1, or perhaps similar to asinh?), and c is a constant. It's not exact of course, but I think it gets me in the right ballpark. I'll show pictures later to demonstrate the similarities that I'm working with.

Nice post. What is the most accurate version of the scaling constant that you know of? Earlier, you posted, "1.083063".
- Sheldon
#38
(08/11/2014, 04:51 PM)sheldonison Wrote: Nice post. What is the most accurate version of the scaling constant that you know of? Earlier, you posted, "1.083063".

I'm not having much luck finding references to the constant online. The closest two I've seen are the 1.083063 that I found earlier through a google search (post #3 in the current discussion), and a reference to 1/a_1 of 0.9233, on this page:
https://oeis.org/A002577

   [attachment: screenshot of the OEIS A002577 page]

The latter has 4 sig-figs, though it's accurate to almost 5 (1/1.083063 ~= 0.923307).

I found that 0.9233 by accident, searching the OEIS to see if they had an integer sequence for the 2^m terms of A(n). Interestingly, I see some of my ideas listed on the page for that integer sequence, so I suppose that means I'm on the right track? I'm just a few decades late to the party.

The code I posted earlier will generate arbitrary terms in the sequence with indexes of 2^m, in approximately O(m^5) time.

I.e., I can generate terms A(1), A(2), A(4), A(8), ..., A(2^m).

The code can be modified to calculate an arbitrary term, e.g., A(12345678987665321385) or whatever I'd like, with only a modest increase in running time. However, it becomes far less flexible (it's a one-shot deal, all that effort to calculate a single term), so I haven't implemented it yet in code. (I've used it in Excel for modestly-sized terms, so I know it works.)
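For comparison, here is the naive O(n) table-based version (this is not Jay's O(m^5) polynomial method, and it assumes the doubling recurrence A(n) = A(n-1) + A(floor(n/2)) with A(0) = 1 — adjust the base case if the thread's indexing differs):

```python
def terms(N):
    """Build A(0..N) by the recurrence A(n) = A(n-1) + A(n // 2), A(0) = 1."""
    a = [1]
    for n in range(1, N + 1):
        a.append(a[n - 1] + a[n // 2])
    return a

a = terms(16)
print(a)
print([a[2 ** m] for m in range(5)])  # the 2^m subsequence A(1), A(2), A(4), A(8), A(16)
```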

I'm currently working on a version that would allow me to calculate all the terms of the form [(2^(n-1)+1)...(2^n)]*(2^m), for a given small n and arbitrary m, which should run in about O(2^n m^5) time.

For example, for n=3, I could calculate 5..8 * 2^m, so let's say:
5, 6, 7, 8,
10, 12, 14, 16,
20, 24, 28, 32,
40, 48, 56, 64,
...
5*2^m, 6*2^m, 7*2^m, 8*2^m


This would allow me to more accurately estimate a_1, because I would be able to monitor the peaks and troughs of the oscillations in the relative error.

So hopefully, within a week or two, I should have an estimate of a_1 to at least 1-2 more digits than the current 1.083063. Who knows, I might soon have the most accurate estimate? Short of a closed form to calculate it, the only way I know to determine its value accurately is to calculate very large terms in the sequence, isolate a local minimum and local maximum, and estimate the midpoint.

BTW, the locations of the local maxima and minima follow a recurrence similar to the locations of the zeroes. I don't have the numbers in front of me to post, so I will try to find which spreadsheet I buried them in and post the details later. I'll be working at a factory location today, so it'll probably be at least a day or two before I can update the group.
~ Jay Daniel Fox
#39
(08/09/2014, 10:51 PM)jaydfox Wrote:
(08/09/2014, 07:02 PM)jaydfox Wrote: I'm also working on a second-order approximation. This would consist of two parts. The first is to fix oscillations in the error term (1/1.083...)*f(k)-A(k). ....



Then rho has zeros where f(n) crisscrosses the A(n) partition function. Here is a possible estimate of the zero count of rho:

Calculus simplifications, Tommy's next post

My conjecture is that an ideal entire asymptotic of a function will have a zero count for ln(x), as x goes to infinity, of twice this derivative. The derivative function is also a guide to where each Taylor series coefficient of f(x) is best approximated by the asymptotic; see http://math.eretrandre.org/tetrationforu...863&page=8, post #76. The zeros of (exp^{0.5}/asymptotic - 1) also appear to have that same pseudo period, which I will post later.

For the partition function A(k), the zero-count approximation gives zerocount(ln(100000)) - zerocount(ln(70)) = 8.95, which is a reasonable approximation for the number of pseudo-periodic cycles between 70 and 100000.

I'm doing some numerical calculations, and I have one more conjecture. As x gets arbitrarily large, the number and location of the zeros of f(x) on the negative real axis can also be approximated by the same function, with half the period! Actually, it may be that the approximation converges quicker for the zeros on the negative real axis. The zeros occur where the derivative is ~= n + 0.50146.

Calculus simplifications, Tommy's next post

Anyway, this is all very fascinating to me, and interesting enough that I need to formalize these ideas more, and do some similar numerical calculations for the exp^{0.5} asymptotic and the 2sinh^{0.5} asymptotic. But there is no more time today....
- Sheldon
#40
(08/17/2014, 05:54 PM)sheldonison Wrote: Then rho has zeros where f(n) crisscrosses the A(n) partition function. Here is a possible estimate of the zero count of rho:



Uh? Can't we use d/dx ln(g(x)) = g'(x) / g(x)?

And further, d/dx ln(g(exp(x))) = g'(exp(x)) exp(x) / g(exp(x)).

Then, since your LHS includes a ln, I think we can substitute y = ln(x), i.e., x = exp(y), and get

zerocount(rho(x)) ~~ 2 f'(x) / f(x)

and zerocount(rho(-x)) ~~ f'(x) / f(x)

and similar for the half-iterate of exp.
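Tommy's chain-rule step can be sanity-checked numerically with any sample g (here g(x) = x^3 + 1, a hypothetical stand-in, compared against a centered finite difference):

```python
import math

def g(x):
    return x ** 3 + 1

def g_prime(x):
    return 3 * x ** 2  # derivative of g

def lhs(x, h=1e-6):
    # centered finite difference of d/dx ln(g(exp(x)))
    return (math.log(g(math.exp(x + h))) - math.log(g(math.exp(x - h)))) / (2 * h)

def rhs(x):
    # the claimed identity: g'(exp(x)) * exp(x) / g(exp(x))
    return g_prime(math.exp(x)) * math.exp(x) / g(math.exp(x))

x = 1.3
print(lhs(x), rhs(x))  # the two should agree to several digits
```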

I'm a bit tired, so forgive me if my calculus is bad today.

regards

tommy1729

