My favorite integer sequence
#11
One more contribution.

Let f(x), as in the previous post, be the generating function for the original sequence A.

Now we look at the sequences whose generating functions are the powers f(x)^k of that generating function. In the previous posts I had already pointed to the case k = -1, where the sequence is the ubiquitous Thue-Morse sequence.

First a list of the sequences. The left column is the index n; then follow the columns for k = -2, -1, 0, 1 (the k = 1 column is our sequence under discussion), then for k = 2, 3, 4:

[table of sequences not preserved]
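As a check on the k = -1 column: assuming the generating function in question is f(x) = prod_{j>=0} 1/(1 - x^(2^j)), the g.f. of A018819 (the original formulas here did not survive transcription), a short sketch confirms that the coefficients of 1/f(x) are the ±1 Thue-Morse sequence:

```python
def binary_partitions(n_terms):
    # b(n): partitions of n into powers of 2 (OEIS A018819)
    # b(2k) = b(2k-1) + b(k),  b(2k+1) = b(2k)
    b = [1, 1]
    for n in range(2, n_terms):
        b.append(b[n - 1] + (b[n // 2] if n % 2 == 0 else 0))
    return b

def series_inverse(f, n_terms):
    # coefficients of 1/f(x) as a formal power series (requires f[0] == 1)
    g = [1]
    for n in range(1, n_terms):
        g.append(-sum(f[k] * g[n - k] for k in range(1, n + 1)))
    return g

N = 64
g = series_inverse(binary_partitions(N), N)
# +/-1 Thue-Morse sequence (A106400): +1 iff n has an even number of 1-bits
tm = [1 if bin(n).count("1") % 2 == 0 else -1 for n in range(N)]
assert g == tm
print(g[:8])   # [1, -1, -1, 1, -1, 1, 1, -1]
```

The identity 1/f(x) = prod (1 - x^(2^j)) makes this sign pattern the classical Prouhet-Thue-Morse sequence.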

Now the matrices for which these sequences are the "eigen-sequences".
Their generation follows a very simple pattern.
I always show the matrix and the associated sequence together:
(for sequence and more information see http://oeis.org/A106407 )
[matrix and sequence image not preserved]

(for sequence and more information see http://oeis.org/A106400 )
[matrix and sequence image not preserved]

[matrix and sequence image not preserved]

(for sequence and more information see http://oeis.org/A018819 )
[matrix and sequence image not preserved]
Gottfried Helms, Kassel
#12
(for sequence and more information see http://oeis.org/A171238 )
[matrix and sequence image not preserved]

(this sequence is not in the OEIS database)
[matrix and sequence image not preserved]

(this sequence is not in the OEIS database)
[matrix and sequence image not preserved]

Gottfried
Gottfried Helms, Kassel
#13
(08/01/2014, 01:53 AM)jaydfox Wrote: I did some googling, and I found this reference:
http://link.springer.com/article/10.1007/BF01933448

I can't read the whole article, but when I was googling 1.083063 (going to 2^21 terms showed this as a slightly better approximation), I found this (the link is the article above):
[quoted excerpt not preserved]
So I'm hoping that if someone could get access to that article, they might be able to shed a little light on this constant factor.

Hi Jay, I couldn't find any freely available version either; but perhaps this is interesting for that question:
http://www-rp.lip6.fr/~latapy/Publis/dmccg01.pdf

Gottfried
Gottfried Helms, Kassel
Reply
#14
(08/01/2014, 11:36 PM)sheldonison Wrote:
(08/01/2014, 01:53 AM)jaydfox Wrote: If we take k very high, the continuous function seems to converge on a constant multiple of the discrete sequence... Based on analysis of the first 2^18 terms in the sequence, the constant is somewhere between 1.083035 and 1.083089, and probably pretty close to the center of that interval, about 1.083062.
....
I did some googling, and I found this reference:
http://link.springer.com/article/10.1007/BF01933448
...
So I'm hoping that if someone could get access to that article, they might be able to shed a little light on this constant factor.
Jay,
I can't read the article either; but I am very intrigued by this slowly converging ratio, and by your Taylor series approximation! Very impressive. I'm still working on understanding your "log_2(n)" term count memory model for the function too, but the Taylor series approximation is easy to calculate for reasonably large inputs.

At some point, I'd like to see if I can figure out how fast f(x) grows, as compared to tetration. Of course, a graph of the Taylor series in the complex plane would be fun too, as well as the location of the zeros.

f(x) grows roughly like exp(ln(x)^2 / (2 ln 2)).

Much, much slower than exp, or even than its half-iterate.
And that for a function that is similarly defined (second derivative = original, versus second difference = original).
That's what I meant by counterintuitive.

I noticed the resemblance of the plots to the fake half-iterate of exp too.

As for the zeros, I'm convinced they are all real.

Math can be deceptively complicated or deceptively easy, and it's not even clear which one it is, or which occurs more often.

It reminds me of the student who said at the beginning of calculus:

"I know what a function is!!"

and ended with: "I don't know what a function is!!"

---

One of these generalizations is the following :

f(n) = f(n/2) + f(n/3) + ... + f(sqrt(n))

where all the divisions and the square root are rounded.
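Here is one possible reading of that recurrence as (memoized) code; the floor rounding and the base case f(1) = 1 are my assumptions, since the rounding convention is left unspecified:

```python
from functools import lru_cache
from math import isqrt

# One reading of f(n) = f(n/2) + f(n/3) + ... + f(sqrt(n)):
# sum f(floor(n/k)) for k = 2, 3, ... while floor(n/k) >= floor(sqrt(n)).
# Both the floor rounding and the base case f(1) = 1 are assumptions.
@lru_cache(maxsize=None)
def f(n):
    if n <= 1:
        return 1
    total, k = 0, 2
    while n // k >= isqrt(n):
        total += f(n // k)
        k += 1
    return total

print([f(n) for n in range(1, 11)])
```

Other roundings (nearest integer, ceiling) give different sequences, which is exactly why an exact, rounding-free functional equation would be valuable.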

Mick posted an ("the" ??) analytic analogue here :

http://math.stackexchange.com/questions/...econd-type

The Leibniz rule seems to lead to circular expressions at first sight.

Thue-Morse ideas are another thing I discussed with him, resulting in this:

http://math.stackexchange.com/questions/...nd-new-grh

Those darn zero positions :)

regards

tommy1729
#15
(08/01/2014, 01:53 AM)jaydfox Wrote:
(07/31/2014, 01:51 AM)jaydfox Wrote: If we take k very high, the continuous function seems to converge on a constant multiple of the discrete sequence. I'm not sure what to make of this constant yet, but I assume it has an interesting interpretation. Based on analysis of the first 2^18 terms in the sequence, the constant is somewhere between 1.083035 and 1.083089, and probably pretty close to the center of that interval, about 1.083062.

I did some googling, and I found this reference:
http://link.springer.com/article/10.1007/BF01933448

I can't read the whole article, but when I was googling 1.083063 (going to 2^21 terms showed this as a slightly better approximation), I found this (the link is the article above):
[quoted excerpt not preserved]
So I'm hoping that if someone could get access to that article, they might be able to shed a little light on this constant factor.

Money is a virus.

regards

tommy1729
#16
(08/03/2014, 11:38 PM)tommy1729 Wrote: f(x) grows roughly like exp(ln(x)^2 / (2 ln 2)).

Much, much slower than exp, or even than its half-iterate.
And that for a function that is similarly defined (second derivative = original, versus second difference = original).
That's what I meant by counterintuitive.

I noticed the resemblance of the plots to the fake half-iterate of exp too.

As for the zeros, I'm convinced they are all real.

I made some numerical approximations, and for big enough values of x, using Jay's Taylor series, I get that ln(f(x)) ≈ ln(x)^2 / (2 ln 2), which matches the equations in Gottfried's link. If ln(x) = 1E100, then ln(f(x)) ≈ 7.213E199 and ln(f(f(x))) ≈ 3.753E399.

So then ln(f(f(x))) ≈ ln(x)^4 / (8 ln(2)^3). And yeah, that's growing much slower than exp(x), if x is big enough.
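A quick sanity check of these magnitudes (Decimal arithmetic, since numbers near 1E399 overflow ordinary floats):

```python
from decimal import Decimal, getcontext

getcontext().prec = 30
LN2 = Decimal(2).ln()

def ln_f(ln_x):
    # ln f(x) ~ ln(x)^2 / (2 ln 2); Decimal, because e399 overflows a float
    return ln_x ** 2 / (2 * LN2)

ln_x = Decimal("1e100")
ln_fx = ln_f(ln_x)      # ~7.2135e199
ln_ffx = ln_f(ln_fx)    # ~3.7535e399
print(ln_fx)
print(ln_ffx)
```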

This is despite many similarities between the complex plane graphs of f(x) and f(f(x)) and those of the entire function asymptotic to the half-iterate of exp.
- Sheldon
#17
(08/01/2014, 11:57 PM)jaydfox Wrote: I'm working on a set of generator polynomials that would allow me to compute an arbitrary term in the sequence in log-polynomial time. For instance, the naive approach requires O(n) calculations to calculate the nth term.

The generator polynomials are a set of log_2(n) polynomials of degree 1 through log_2(n). Calculating the kth polynomial requires calculating k+1 points from the (k-1)th polynomial. I haven't fully worked out the math, but I think it runs in something like O(log_2(n) ^ m) time, where m is a small constant, maybe 3 or 4. So, for example, calculating the 2^100th term in the sequence took about 10 minutes (as opposed to O(2^100) calculations, which would take forever on pretty much any processor).

I'll try to post gp code next week (it's in SAGE at the moment, and I'm not sure who else is using SAGE besides me... Andy and Henryk were using it a few years ago, not sure if they still do...)

I'm not sure that "generator polynomial" is the right word for what I'm developing, because I've seen the term used in other contexts, and I'm not sure if it has a very specific meaning. In my context, I'm using it to generate coefficients for recurrence relationships.

To start with, I need to explain where and why I came up with these. It helps to look at the information flow diagram from an earlier post (post #5 in the current thread).

I was trying to figure out if I could define a given term in the sequence as a combination of earlier elements. To do this formally, I suppose we should number the elements based on the row and column in my diagram. The left-most column is column 0, and its entries are indexed by the row number. So, starting a couple of rows in, we have A_0,0 = 1, A_1,0 = 2, A_5,0 = 14, A_6,1 = 6, etc.

So the simplest recurrence is:

A_{j,0} = A_{j-1,0} + A_{j,1}

We can apply it recursively to get:

A_{j,0} = A_{k,0} + sum_{i=k+1..j} A_{i,1},  for any k < j

Now for the more interesting part. Column 1 only adds the element to its right on the even rows:

A_{j,1} = A_{j-1,1} + A_{j,2} (j even),   A_{j,1} = A_{j-1,1} (j odd)

Let's look at the recursive version:

A_{k,1} = A_{j,1} + sum_{i=[j/2]+1..[k/2]} A_{2i,2},  for j < k

Notice that clever use of the floor function has eliminated the need to make explicit cases for either j or k being odd. If [j/2]+1 > [k/2] (e.g., if k is 5 and j is 4, then [j/2]+1 = 3, and [k/2] = 2), then no summation occurs.
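These column recurrences can be checked numerically. Reading A_{j,k} as the sequence value a(floor(j/2^k)) reproduces the values quoted above (A_5,0 = 14, A_6,1 = 6), so that reading is assumed in this sketch:

```python
def a_list(n_terms):
    # the thread's sequence: a(0) = 1, a(n) = a(n-1) + a(floor(n/2))
    a = [1]
    for n in range(1, n_terms):
        a.append(a[n - 1] + a[n // 2])
    return a

a = a_list(64)
A = lambda j, k: a[j >> k]   # assumption: A_{j,k} = a(floor(j / 2^k))

assert A(5, 0) == 14 and A(6, 1) == 6   # values quoted in the post
# column 0: A_{j,0} = A_{j-1,0} + A_{j,1}
assert all(A(j, 0) == A(j - 1, 0) + A(j, 1) for j in range(1, 32))
# column 1 adds the element to its right only on even rows
assert all(A(j, 1) == A(j - 1, 1) + (A(j, 2) if j % 2 == 0 else 0)
           for j in range(2, 32))
print("column recurrences verified")
```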

What gets more interesting still is when we go back to the first recursive formula and try to recursively expand it in terms of the second recursive formula:

A_{k,0} = A_{j,0} + sum_{i=j+1..k} ( A_{j,1} + sum_{m=[j/2]+1..[i/2]} A_{2m,2} )

This representation is a bit unwieldy, especially because we need to continue expanding to the right with further summations. It helps to view it as a matrix, with the nested sums already calculated:

[attachment: coefficient matrix; row 5 reads 1, 5, 6, 2, 0, 0, ...]

Now a bit of explanation here. Let's pick row 5. In row 5, you will see 1, 5, 6, 2, 0, 0

What this means is that A_5,0 = 1*A_0,0 + 5*A_0,1 + 6*A_0,2 + 2*A_0,3 + 0*A_0,4 + 0*A_0,5 + ..., which yields A_5,0 = 14, which is what we were looking for.

Because the coefficients are non-zero up to the fourth column, I can only apply this recurrence with j = 0 mod 8. (It should become clear with more examples).

So, for A_13,0, I get 1*A_8,0 + 5*A_8,1 + 6*A_8,2 + 2*A_8,3 + 0*A_8,4 + 0*A_8,5 +..., which is 1*36 + 5*10 + 6*4 + 2*2 + 0*1... which yields A_13,0 = 114, which is the value of A_13.

I could also have solved A_13 by using the appropriate coefficients from my table: A_13,0 = 1*A_0,0 + 13*A_0,1 + 42*A_0,2 + 44*A_0,3 + 14*A_0,4 + 0*A_0,5 + ..., which works out to 114, just as before.

If you want to check the next one, A_21, you will find that 1*202 + 5*36 + 6*10 + 2*4 yields 450, which is correct.

What about using the row 13 coefficients? As it turns out, the coefficients in rows 8 through 15 are only valid if j is a multiple of 16. For example, for A_21, you would get 1*A_8,0 + 13*A_8,1 + ... = 1*36 + 13*10 + 42*4 + 44*2 + 14*1 = 436, which is incorrect (it should have been 450).

However, all is right again when we go to A_29, which is equivalent to 5 (mod 8) and 13 (mod 16). In both cases (you can check), you get 1294. You can also use the coefficients for row 29 directly, which are 1, 29, 210, 504, 436, and 114. Summing them directly (i.e., using row 0), you get 1294.

Curiously, you'll notice that the second to last term was 436, which just happens to match the incorrect result from applying the row 13 coefficients to row 8. Little patterns like this present themselves all over the place, and I will explore some of those patterns in my next post.
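The coefficient arithmetic above can be machine-checked as well; the helper names below are mine, with A_{j,k} read as a(floor(j/2^k)), an assumption that reproduces the quoted values:

```python
def a_list(n_terms):
    # the thread's sequence: a(0) = 1, a(n) = a(n-1) + a(floor(n/2))
    a = [1]
    for n in range(1, n_terms):
        a.append(a[n - 1] + a[n // 2])
    return a

a = a_list(64)
A = lambda j, k: a[j >> k]             # A_{j,k} read as a(floor(j / 2^k))

row5 = [1, 5, 6, 2]                    # coefficients for rows = 5 (mod 8)
row13 = [1, 13, 42, 44, 14]            # coefficients for rows = 13 (mod 16)

def apply_row(coeffs, base):
    # sum_k coeffs[k] * A_{base,k}
    return sum(c * A(base, k) for k, c in enumerate(coeffs))

assert apply_row(row5, 0) == a[5] == 14
assert apply_row(row5, 8) == a[13] == 114
assert apply_row(row5, 16) == a[21] == 450
assert apply_row(row13, 8) == 436       # the "incorrect" 436 from the post
assert apply_row(row13, 16) == a[29] == 1294
assert apply_row(row5, 24) == a[29] == 1294
print("all quoted values reproduced")
```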
~ Jay Daniel Fox
#18
(08/05/2014, 04:49 PM)jaydfox Wrote:
(08/01/2014, 11:57 PM)jaydfox Wrote: I'm working on a set of generator polynomials that would allow me to compute an arbitrary term in the sequence in log-polynomial time. For instance, the naive approach requires O(n) calculations to calculate the nth term.

The generator polynomials are a set of log_2(n) polynomials of degree 1 through log_2(n). Calculating the kth polynomial requires calculating k+1 points from the (k-1)th polynomial.

I'm not sure that "generator polynomial" is the right word for what I'm developing, because I've seen the term used in other contexts, and I'm not sure if it has a very specific meaning. In my context, I'm using it to generate coefficients for recurrence relationships.

To start with, I need to explain where and why I came up with these. It helps to look at the information flow diagram from an earlier post (post #5 in the current thread).

I was trying to figure out if I could define a given term in the sequence as combination of earlier elements.

...I will explore some of those patterns in my next post.

Okay, time to look at the patterns. The first picture shows how the coefficients can be calculated by a recurrence relation that's actually quite similar to the basic recurrence relation for the sequence itself:

[image: recurrence relation for the coefficients]

Now, let's take a look at a semi-random sampling of such recurrences, all plotted together:

[image: a sampling of such recurrences, plotted together]

This recurrence relation is not particularly useful, however. To calculate the coefficients for the kth row, you have to calculate all k-1 rows before it. It calculates in O(n*log(n)) time (the log(n) factor is due to requiring log(n) columns of data for each row).

A more useful pattern is the following:

[image: the more useful pattern]

Notice that I can calculate entries spaced at multiples of 2^m for the mth column. If you look closely, you should also notice that those entries follow polynomial growth.

In column 0, the entries are 1, which is a constant:
F_k,0 = 1
In column 1, the entries are 0, 1, 2, 3, 4, etc., which fit a linear polynomial:
F_k,1 = k

In column 2, the even entries are 0, 1, 4, 9, 16, etc., which fit a quadratic polynomial:
F_k,2 = [k/2]^2

In column 3, every 4th entry gives 0, 1, 10, 35, 84, etc., which fit a cubic polynomial:
F_k,3 = 4/3*[k/4]^3 - 1/3*[k/4]

In column 4, every 8th entry gives 0, 1, 36, 201, 656, 1625, etc., which fit a degree 4 polynomial:
F_k,4 = 8/3*[k/8]^4 - 5/3*[k/8]^2

In column 5, every 16th entry gives 0, 1, 202, 1827, 8148, 25509, etc., which fit a degree 5 polynomial:
F_k,5 = 128/15*[k/16]^5 - 28/3*[k/16]^3 + 9/5*[k/16]

The polynomials can be calculated using your preferred method of polynomial interpolation (Lagrange, Newton, divided differences, etc.). To get the points, use the pattern from the image above.

For example, here is SAGE (python) code which calculates the polynomials (which still need to be rescaled by appropriate powers of 2, but this is just to demonstrate a proof of concept).

Code:
max_degree = 7
fs = []
f0(x) = 1          # column-0 polynomial: the constant 1
fs.append(f0)
for n in xrange(max_degree):
    # points for the next column: cumulative sums of the previous
    # polynomial evaluated at the odd arguments 1, 3, 5, ...
    pts = [0]
    for k in xrange(n+1):
        pts.append(pts[-1]+fs[n](x=k*2+1))
    fn(x) = 0
    print pts
    # interpolate, peeling off one degree at a time via forward
    # differences of the residual pts[i] - fn(i)
    for deg in xrange(n+1, -1, -1):
        diffs = [pts[i]-fn(x=i) for i in xrange(deg+1)]
        for itr in xrange(deg):
            for k in xrange(deg-itr):
                diffs[k] = diffs[k+1] - diffs[k]
        fn += diffs[0]/factorial(deg) * x^deg
    print fn
    fs.append(fn)

And here is the output:
Code:
[0, 1]
x |--> x
[0, 1, 4]
x |--> x^2
[0, 1, 10, 35]
x |--> 4/3*x^3 - 1/3*x
[0, 1, 36, 201, 656]
x |--> 8/3*x^4 - 5/3*x^2
[0, 1, 202, 1827, 8148, 25509]
x |--> 128/15*x^5 - 28/3*x^3 + 9/5*x
[0, 1, 1828, 27337, 167568, 664665, 2026564]
x |--> 2048/45*x^6 - 680/9*x^4 + 1397/45*x^2
[0, 1, 27338, 692003, 5866452, 29559717, 109082974, 326603719]
x |--> 131072/315*x^7 - 43648/45*x^5 + 30044/45*x^3 - 11843/105*x

I'll work on converting this to PARI/gp code, since I know that several of the active members of this forum seem to use that.
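In the meantime, a plain Python 3 rendering of the same construction may be useful (exact rationals via fractions.Fraction and explicit Lagrange interpolation in place of SAGE's symbolic arithmetic; the helper names are mine):

```python
from fractions import Fraction

def peval(p, x):
    # evaluate the coefficient list p = [c0, c1, ...] at x
    return sum(c * x ** i for i, c in enumerate(p))

def mul_linear(p, r):
    # multiply the polynomial p by (x - r)
    out = [Fraction(0)] * (len(p) + 1)
    for i, c in enumerate(p):
        out[i + 1] += c
        out[i] -= r * c
    return out

def interpolate(ys):
    # exact Lagrange interpolation through the points (i, ys[i])
    n = len(ys)
    coeffs = [Fraction(0)] * n
    for i in range(n):
        basis, denom = [Fraction(1)], Fraction(1)
        for j in range(n):
            if j != i:
                basis = mul_linear(basis, j)
                denom *= i - j
        for k in range(n):
            coeffs[k] += Fraction(ys[i]) * basis[k] / denom
    return coeffs

fs = [[Fraction(1)]]          # column 0: the constant polynomial 1
for n in range(6):
    # points for the next column: cumulative sums of the previous
    # polynomial at the odd arguments 1, 3, 5, ...
    pts = [Fraction(0)]
    for k in range(n + 1):
        pts.append(pts[-1] + peval(fs[n], 2 * k + 1))
    fs.append(interpolate(pts))

assert fs[2] == [0, 0, 1]                                # x^2
assert fs[3] == [0, Fraction(-1, 3), 0, Fraction(4, 3)]  # 4/3*x^3 - 1/3*x
assert fs[5][5] == Fraction(128, 15)                     # leading term, column 5
print(fs[5])
```

This reproduces the SAGE output above (e.g. column 3 comes out as 4/3*x^3 - 1/3*x), with the polynomials stored as coefficient lists instead of symbolic expressions.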
~ Jay Daniel Fox
#19
Some top posters of the tetration forum:

Bo " administrator " 198214
Tommy " sinh " 1729
Gottfried " matrix " Helms
Jay "excel" fox
sheldon " theta wave "
mike "continuum sum " 3
Jms " transform " Nxn

I've been wanting to post that silly thing for a long time :)

Anyway, the posts made by Gottfried and Jay look familiar to me.
I also used Excel to accelerate the computation of the sequence and to investigate its number-theoretic properties.
I always factor number sequences.


Anyway, a few remarks/ideas:

1) Many years ago I mentioned the equations

A) f ' (2x) = 3 f(x) + C
B) f ' (x) = f(x/2) + f(x/3) + C

at, if I'm not mistaken, sci.math.

Seeing Jay's f'(2x) = f(x) makes me wonder about those again.
Unfortunately I have holes in my brain.

2) Our sequence satisfies f(n) - f(n-1) - (f(n-1) - f(n-2)) = f(n-2), or something similar, if we IGNORE ZERO. I haven't quite made that clear before.

3) What I was really looking for is a closed-form expression for the sequence.
Some transformation or the like, as exists for the gamma function.

4) I would like a single functional equation for the series that is exact and does NOT use rounding, if possible.

5) I would like to comment that f(x) - f(x-1) can always be approximated by
f(x) - ( f(x) - f'(x) + f''(x)/2 - f'''(x)/6 + f''''(x)/24 )

Hence differences / difference equations can be approximated by differential / differential equations.

This might be useful, as it has been in the past.
Maybe it can connect Jay's f(x) with the original sequence, perhaps expressing one in terms of the other, or solving 3) or 4).
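Remark 5) in a small sketch, with f = exp standing in for any smooth function whose derivatives are computable:

```python
from math import exp, factorial

def taylor_backward_difference(derivs, x, order=4):
    # f(x) - f(x-1) ~ f'(x) - f''(x)/2 + f'''(x)/6 - f''''(x)/24 + ...
    # derivs[m] evaluates the m-th derivative of f at a point
    return sum((-1) ** (m + 1) * derivs[m](x) / factorial(m)
               for m in range(1, order + 1))

# demo with f = exp, all of whose derivatives are exp itself
x = 1.0
exact = exp(x) - exp(x - 1)
approx = taylor_backward_difference({m: exp for m in range(1, 5)}, x)
print(exact, approx)   # ~1.7183 vs ~1.6989: about 1% off with 4 terms
```

Truncating the Taylor expansion of f(x-1) at 4 terms already gives roughly 1% accuracy here; more terms sharpen the approximation of a difference equation by a differential equation.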

Those are the most important things for me.

regards

tommy1729
#20
(08/05/2014, 05:57 PM)jaydfox Wrote: I'm working on a set of generator polynomials that would allow me to compute an arbitrary term in the sequence in log-polynomial time. For instance, the naive approach requires O(n) calculations to calculate the nth term.
(...)
I'll work on converting this to PARI/gp code, since I know that several of the active members of this forum seem to use that.

Wow, that's an ingenious approach. I was trying to arrive at a similar short-cut computation method using the idea of squaring the matrix M until sufficiently few columns are relevant and computing only the required intermediate members of the sequence, but it seemed to become too complex and I have left it for now. I'll be happy when I can try the Pari/GP code....

I'd also be much interested to learn how one determines the function f(x) that predicts/approximates the elements of the sequence. I could implement the function f(x) in Pari/GP, but I just did not get the clue as to how one would arrive at it.
Gottfried
Gottfried Helms, Kassel

