My favorite integer sequence

jaydfox, Long Time Fellow, Posts: 423, Threads: 30, Joined: Aug 2007
08/09/2014, 10:51 PM (This post was last modified: 08/09/2014, 10:58 PM by jaydfox.)

(08/09/2014, 07:02 PM) jaydfox Wrote: I'm also working on a second-order approximation. This would consist of two parts. The first is to fix oscillations in the error term (1/1.083...)*f(k) - A(k). These oscillations approximately follow a scaled version of f(-x). Remember the zeroes on the negative real line?

First of all, in thinking about it, the oscillations actually seem to follow something along the lines of G(f(-c x^2)), where G(x) is some scaling function (similar to x^p for some 0 < p < 1, or perhaps similar to asinh?), and c is a constant. It's not exact, of course, but I think it gets me in the right ballpark. I'll show pictures later to demonstrate the similarities I'm working with.

The -x^2 indicates a couple of things. The negative sign means I want to look at the oscillations that are clearly visible in f(x) on the negative real axis. However, between each pair of zeroes is an alternating local minimum or local maximum. The oscillations I'm looking at show both a local minimum and a local maximum at the same scale. I.e., if x_k is a zero (e.g., -1054.13), then approximately 2*x_k is another zero (e.g., -2359.66), and in between we have either a local minimum or a local maximum. With the oscillations of A(k) - f(k), however, we would have both a local minimum and a local maximum over the same doubling of scale, and thus two "zeroes". (We're working with a discrete sequence, so there won't actually be any zeroes.)

Anyway, the x^2 term takes care of this. It means that in going from x to 2x (technically, (2+h)x for a small h), we're looking at zeroes between -x^2 and -4x^2 (technically, -(4+4h)x^2 for the same h, ignoring the O(x) term), and thus we've considered two zeroes. (I'm having trouble putting it into words at the moment, so maybe some pictures later on will help.)
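For readers who want to experiment along with the thread, the sequence A(k) can be generated numerically. This is a minimal sketch, assuming (per Tommy's description later in the thread, f(x+1) - f(x) = f(x/2)) that the sequence satisfies A(k) = A(k-1) + A(floor(k/2)) with A(0) = 1; the floor and the initial value are my assumptions:

```python
def binary_partition_sequence(n_terms):
    """Generate A(0), ..., A(n_terms - 1) from the assumed recurrence
    A(k) = A(k-1) + A(k // 2), A(0) = 1, i.e. the discrete form of
    f(x+1) - f(x) = f(x/2) discussed in the thread."""
    a = [1]
    for k in range(1, n_terms):
        a.append(a[k - 1] + a[k // 2])
    return a

print(binary_partition_sequence(16))
# [1, 2, 4, 6, 10, 14, 20, 26, 36, 46, 60, 74, 94, 114, 140, 166]
```

With this reading, the first terms match the binary-partition-style growth the thread analyzes, and the list can be extended to the 2^21 terms Jay mentions with no change to the code.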
Okay, here are some pictures to illustrate the oscillations I was talking about. There are a couple of things to note. First of all, the self-similarity repeats at slightly larger than powers of 2. For example, the local maxima for the even terms (top "curve") are at 4, 10, 28, 68, 156, 348, 780, etc., each more than twice the previous index, but less than 3 times. The ratio gets closer and closer to 2, though it takes quite a while to get down as low as 2.1, let alone 2.01. Second, while there is definitely some self-similarity, the vertical scale is not linear. While I am slightly more than doubling the scale in the x direction with each picture below, the y scale increases by much larger factors, up to 1000x per image. That's where the unknown G(x) above comes into play.

[attached images: oscillation plots at successively doubled scales]

~ Jay Daniel Fox

tommy1729, Ultimate Fellow, Posts: 1,352, Threads: 327, Joined: Feb 2009
08/09/2014, 10:52 PM (This post was last modified: 08/09/2014, 10:55 PM by tommy1729.)

I think Jay defined the constant 1.08... as the ratio of my functional equation f(x+1) - f(x) = f(x/2) with his J'(x) = J(x/2). I'm not sure if he intended that. It is consistent if my f(x) is very close to the original sequence a(x), in the sense lim x-> +oo f(x)/a(x) = 1. (Then it follows that lim x-> +oo J(x)/a(x) = lim x-> +oo J(x)/f(x) = 1.08...)

I have the feeling Jay's post/idea is not complete yet or is still under development; probably there will be an edit or a follow-up. A little bit strange notation, I think, but that might be due to the unfinished brainstorming too. Anyway, thanks for the posts, Jay!

I have to think about Jay's last post now. My first critical remark is that the analogue for exp does not seem convincing right now; I do not see (1+1/n)^n. But I might need to read again, and Jay will probably do an edit or follow-up. It's just a first impression anyway.

regards, tommy1729

EDIT: I was referring to Jay's post 30 mainly. He posted nr 31 while I made this reply.
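As a quick numeric check of the self-similarity Jay describes in post 31, the local-maxima indices quoted there (4, 10, 28, 68, 156, 348, 780) can be compared pairwise. A minimal sketch, using only the indices quoted in the post:

```python
# Local-maxima indices for the even terms, as quoted in the post above.
maxima = [4, 10, 28, 68, 156, 348, 780]

# Ratio of each index to the previous one.
ratios = [b / a for a, b in zip(maxima, maxima[1:])]
for a, b, r in zip(maxima, maxima[1:], ratios):
    print(f"{b:4d} / {a:4d} = {r:.3f}")
# Each ratio is more than 2 but less than 3, consistent with
# "slightly larger than powers of 2", and only slowly drifts toward 2.
```

This only verifies the "more than twice, but less than 3 times" claim for the listed indices; how slowly the ratio approaches 2 at larger scales is exactly what the pictures were meant to show.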
tommy1729, Ultimate Fellow, Posts: 1,352, Threads: 327, Joined: Feb 2009
08/09/2014, 11:10 PM

Considering your latest posts, Jay, are you still convinced that J(x)/1.08... is a very good approximation to A(x)? I mean, considering your plots, it does not seem like lim n-> +oo J(n)/A(n) = 1.08... holds true. When I talked about "proving" I meant "deciding if true", actually. The negative axis seems to be popular lately.

regards, tommy1729

jaydfox, Long Time Fellow, Posts: 423, Threads: 30, Joined: Aug 2007
08/09/2014, 11:46 PM (This post was last modified: 08/09/2014, 11:49 PM by jaydfox.)

(08/09/2014, 10:52 PM) tommy1729 Wrote: I have the feeling Jay's post/idea is not complete yet or is still under development; probably there will be an edit or a follow-up. A little bit strange notation, I think, but that might be due to the unfinished brainstorming too.

Yes, still brainstorming, and I'm not attached to any particular notation yet. But I needed something so I could start to formalize...

Quote: My first critical remark is that the analogue for exp does not seem convincing right now; I do not see (1+1/n)^n. But I might need to read again, and Jay will probably do an edit or follow-up. It's just a first impression anyway. (...) I was referring to Jay's post 30 mainly. He posted nr 31 while I made this reply.

Take exp_1:

Code:
k    E_1(k)    E_{1/2}(k)   E_{1/3}(k)
0    1         1            1
1    2         3/2          4/3
2    4         9/4          16/9
3    8         27/8         64/27

exp_1(1) = 2 = (1+1)^1
exp_{1/2}(1) = 2.25 = (1+1/2)^2
exp_{1/3}(1) = 2.370370... = (1+1/3)^3
exp_{1/n}(1) = (1+1/n)^n

I could have defined exp_h that way to begin with, but I defined it as a sequence with a recurrence relation similar to A(k), to help draw the analogy between a_1 = 1.08306 and e_1 = e/2 = 1.35914.

(08/09/2014, 11:10 PM) tommy1729 Wrote: Considering your latest posts, Jay, are you still convinced that J(x)/1.08... is a very good approximation to A(x)?
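The exp_h table in Jay's post can be reproduced numerically. This is a sketch assuming the recurrence form of his definition, E_h(k+1) = E_h(k) + h*E_h(k) with E_h(0) = 1 (so E_h(k) = (1+h)^k, and exp_{1/n}(1) = E_{1/n}(n) = (1+1/n)^n); the function name is mine:

```python
from fractions import Fraction

def exp_h(h, n_terms):
    """E_h(0), ..., E_h(n_terms - 1) from the assumed recurrence
    E_h(k+1) = E_h(k) + h * E_h(k), analogous to the A(k) recurrence."""
    e = [Fraction(1)]
    for _ in range(n_terms - 1):
        e.append(e[-1] + h * e[-1])
    return e

print([str(x) for x in exp_h(Fraction(1), 4)])     # ['1', '2', '4', '8']
print([str(x) for x in exp_h(Fraction(1, 2), 4)])  # ['1', '3/2', '9/4', '27/8']
print([str(x) for x in exp_h(Fraction(1, 3), 4)])  # ['1', '4/3', '16/9', '64/27']

# exp_{1/n}(1) = E_{1/n}(n) = (1 + 1/n)^n approaches e as n grows:
n = 1000
print(float(exp_h(Fraction(1, n), n + 1)[-1]))
```

Using exact rationals makes the table columns match (3/2)^k and (4/3)^k term by term, which is the (1+1/n)^n structure Tommy was asking about.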
The reason I wanted to set 1.08306 as a_1 was to make clear what it was. By itself, J(x) is not a perfect approximation of A(k). To Tommy's question, do I consider J(x) a "very good approximation" to A(x)? A "good approximation", yes, but not a "very good approximation". But by re-imagining A(k) as A_1(k), it becomes clear why it's not such a good approximation.

In reality, it's not so much an issue of J(x) being a poor approximation of A(k). It's that A_1(k) uses such a large step size that it's a poor approximation of J(x). In this sense, J(x) is the right function to approximate A_1(k), but not the right function to approximate A(k). The distinction is not mathematical, but metaphysical.

~ Jay Daniel Fox

jaydfox, Long Time Fellow, Posts: 423, Threads: 30, Joined: Aug 2007
08/10/2014, 12:30 AM

(08/09/2014, 11:10 PM) tommy1729 Wrote: Considering your latest posts, Jay, are you still convinced that J(x)/1.08... is a very good approximation to A(x)? I mean, considering your plots, it does not seem like lim n-> +oo J(n)/A(n) = 1.08... holds true.

The absolute error grows without bound, so in that sense, no, it's not a good approximation. However, the relative error decreases to 0 in the limit, so yes, I still consider it a good first-order approximation.

If A(k) is approximated by J(k)/a_1, then the absolute error is A(k) - J(k)/a_1, which I graphed previously. The relative error is the absolute error divided by A(k), or (A(k) - J(k)/a_1)/A(k), which is 1 - J(k)/(a_1 A(k)). The graphs below use an alternate relative error of (a_1 A(k))/J(k) - 1. For small errors, this alternate definition is essentially equivalent (e.g., 1 - 999/1000 ~= 1000/999 - 1), and I didn't want to redo the graphs, so they'll have to do. You can't actually see the difference in the graphs (i.e., comparing side by side), but since the equation is written at the top of the graphs, I figured I should disclose the discrepancy.

[attached images: relative-error plots]
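The two relative-error conventions mentioned above agree to first order when the error is small. A quick numeric check, writing r = J(k)/(a_1 A(k)) and using the 999/1000 example from the post:

```python
# For r close to 1, the two conventions 1 - r and 1/r - 1
# differ only at second order in (1 - r).
r = 999 / 1000
err_standard = 1 - r        # = 0.001
err_alternate = 1 / r - 1   # = 0.001001...
print(err_standard, err_alternate, abs(err_standard - err_alternate))
# The gap is roughly (1 - r)^2, far below the resolution of the plots.
```

This is why swapping one definition for the other is invisible in the graphs: with relative errors at the 10^-3 scale, the discrepancy between the conventions is at the 10^-6 scale.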
Skipping a doubling of the scale on the k-axis:

[attached image: relative error at the next doubling of k]

And skipping several more:

[attached image: relative error at much larger k]

As you can see, the relative error continues to shrink. The sequence A(n) will cross the function J(x) an infinite number of times, and the absolute error will grow without bound, but the relative error will shrink to 0. I'm still working on proving it formally, but it seems intuitively clear, based on analysis of up to 2^21 terms in the sequence, plus crude analysis of the 2^m terms up to m = 150.

There's also the semi-relative

~ Jay Daniel Fox

tommy1729, Ultimate Fellow, Posts: 1,352, Threads: 327, Joined: Feb 2009
08/11/2014, 12:17 PM

Some variants: in general,

f(n) = f(n-1) + f(n/a1) + f(n/a2) + ...

or

f(n) = f(n-1) + f(n/a1) - f(n/a2) + f(n/a3) - ...

where 2
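Tommy's variant recurrences can be prototyped the same way as the original sequence. A sketch assuming floored integer arguments and A(0) = 1; the function name, the sign parameter, and the example denominators are mine, chosen for illustration (the post's condition on a1, a2, ... is cut off above):

```python
def variant_sequence(n_terms, denominators, signs=None):
    """f(n) = f(n-1) + s1*f(n // a1) + s2*f(n // a2) + ...
    with f(0) = 1; floored arguments assumed, signs default to all +1."""
    if signs is None:
        signs = [1] * len(denominators)
    f = [1]
    for n in range(1, n_terms):
        term = f[n - 1]
        for s, a in zip(signs, denominators):
            term += s * f[n // a]
        f.append(term)
    return f

# f(n) = f(n-1) + f(n/2): the recurrence discussed in the thread.
print(variant_sequence(8, [2]))     # [1, 2, 4, 6, 10, 14, 20, 26]
# f(n) = f(n-1) + f(n/2) + f(n/3): a two-denominator variant.
print(variant_sequence(6, [2, 3]))  # [1, 3, 7, 13, 23, 33]
# Alternating-sign variants: e.g. variant_sequence(8, [2, 4], [1, -1]).
```

Whether each variant has its own analogue of the 1.08... constant, i.e. a limit f(n)/F(n) for a smooth solution F, is the open question Tommy's post raises.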

