Constructing the "analytical" formula for tetration.

mike3 (Long Time Fellow, Posts: 368, Threads: 44, Joined: Sep 2009)
01/17/2011, 01:05 PM (This post was last modified: 01/19/2011, 03:15 AM by mike3.)

Hi.

I was wondering about the problem of finding "analytical" formulas for tetration, that is, formulas that are more like "formulas" than "procedures" for computing tetration. This includes, e.g., infinite sums and other things like that. A "closed form" formula, that is, one in terms of a finite number of "conventional" functions, is most likely not possible.

Now, the Riemann mappings have so far been hugely resistant to attempts to describe them, so I was curious about concentrating on the simpler problem of describing the regular iteration, which is what the Riemann mappings would deform. The regular tetrational, or "unipolar superfunction" of $\exp$, can be given by

$UT(z) = L + \sum_{n=1}^{\infty} a_n L^{nz}$

with recursively generated coefficients

$a_1 = 1$

$a_n = \frac{B_n(1, 2! a_2, 3! a_3, \cdots, (n-1)! a_{n-1}, 0)}{n! (L^{n-1} - 1)}$

with $B_n$ the "complete" Bell polynomials.

The above series is a Fourier series, but it can be rearranged into a Taylor series if we so please:

\begin{align} UT(z) &= L + \sum_{n=1}^{\infty} \left(a_n \sum_{k=0}^{\infty} \frac{(Ln)^k}{k!} z^k\right)\\ &= L + \sum_{n=1}^{\infty} \sum_{k=0}^{\infty} \frac{a_n (Ln)^k}{k!} z^k\\ &= L + \sum_{k=0}^{\infty} \left(\sum_{n=1}^{\infty} a_n (Ln)^k\right) \frac{z^k}{k!} \end{align}

The problem, then, is the description of the coefficients $a_n$. The first few are:

$a_1 = 1$

$a_2 = \frac{1}{2L - 2}$

$a_3 = \frac{L + 2}{6L^3 - 6L^2 - 6L + 6}$

$a_4 = \frac{L^3 + 5L^2 + 6L + 6}{24L^6 - 24L^5 - 24L^4 + 24L^2 + 24L - 24}$

$a_5 = \frac{L^6 + 9L^5 + 24L^4 + 40L^3 + 46L^2 + 36L + 24}{120L^{10} - 120L^9 - 120L^8 + 240L^5 - 120L^2 - 120L + 120}$

...

By pattern recognition, it appears that

$a_n = \frac{\sum_{j=0}^{\frac{(n-1)(n-2)}{2}} \mathrm{mag}_{n,j} L^j}{\prod_{j=2}^{n} j(L^{j-1} - 1)}$.
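As a quick check on this expansion, the $a_n$ can be generated with a computer algebra system. The sketch below (the name `regular_coeffs` is mine, not from the thread) sidesteps the Bell-polynomial recurrence and instead matches power-series coefficients in the equivalent functional equation $f(Lx) = L\,e^{f(x)-L}$, where $f(x) = L + \sum_n a_n x^n$, $x = L^z$, and $e^L = L$ at the fixed point $L$ of $\exp$:

```python
import sympy as sp

def regular_coeffs(N):
    """Coefficients a_n of the regular superfunction UT(z) = L + sum_n a_n L^(n z).

    Derived by matching coefficients in f(L*x) = L*exp(f(x) - L), with
    f(x) = L + sum_n a_n x^n, which encodes UT(z+1) = exp(UT(z)) via x = L^z
    and exp(L) = L (an equivalent route to the Bell-polynomial recurrence).
    """
    L, x = sp.symbols('L x')
    a = [sp.Integer(0), sp.Integer(1)]  # a[0] unused, a_1 = 1 (normalization)
    for n in range(2, N + 1):
        # [x^n] exp(sum_{k<n} a_k x^k): contributions from lower-order terms only
        body = sum(a[k] * x**k for k in range(1, n))
        P = sp.series(sp.exp(body), x, 0, n + 1).removeO().coeff(x, n)
        # matching x^n gives a_n L^n = L (a_n + P), i.e. a_n = L P / (L^n - L)
        a.append(sp.cancel(L * P / (L**n - L)))
    return a
```

Its output agrees with the $a_2, \ldots, a_5$ listed above.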
The denominator can be given in true closed form via the "q-Pochhammer symbol" as

$a_n = \frac{\sum_{j=0}^{\frac{(n-1)(n-2)}{2}} \mathrm{mag}_{n,j} L^j}{n! (-1)^{n-1} (L; L)_{n-1}}$.

(The appearance of q-analogs is curious. I'm wondering about the connections between q-analogs, tetration, iteration, fractals, chaos, and continuum sums...)

The problem, however, is the sequence of "magic" numbers $\mathrm{mag}_{n, j}$ for the numerators. With j starting at 0, we get

n = 1: 1
n = 2: 1
n = 3: 2, 1
n = 4: 6, 6, 5, 1
n = 5: 24, 36, 46, 40, 24, 9, 1
n = 6: 120, 240, 390, 480, 514, 416, 301, 160, 64, 14, 1
n = 7: 720, 1800, 3480, 5250, 7028, 8056, 8252, 7426, 5979, 4208, 2542, 1295, 504, 139, 20, 1
n = 8: 5040, 15120, 33600, 58800, 91014, 124250, 155994, 177220, 186810, 181076, 163149, 134665, 102745, 71070, 44605, 24550, 11712, 4543, 1344, 265, 27, 1
n = 9: 40320, 141120, 352800, 695520, 1204056, 1855728, 2640832, 3473156, 4277156, 4942428, 5395818, 5561296, 5433412, 5021790, 4391304, 3625896, 2820686, 2056845, 1398299, 879339, 504762, 260613, 117748, 45178, 13845, 3156, 461, 35, 1
n = 10: 362880, 1451520, 4021920, 8769600, 16664760, 28264320, 44216040, 64324680, 88189476, 114342744, 141184014, 166279080, 187614312, 202901634, 210825718, 210403826, 201934358, 186191430, 164980407, 140216446, 114231817, 88934355, 66047166, 46576620, 31071602, 19460271, 11365652, 6112650, 2987358, 1298181, 488878, 153094, 37692, 6705, 749, 44, 1
...

The first column appears to be factorials, i.e. $\mathrm{mag}_{n, 0} = (n-1)!$; the last is of course $\mathrm{mag}_{n, \frac{(n-1)(n-2)}{2}} = 1$; and the second-to-last looks to be $\mathrm{mag}_{n, \frac{(n-1)(n-2)}{2} - 1} = \frac{(n-2)(n+1)}{2}$; but beyond that I'm not sure. Is there some way to actually prove or disprove these results from the recurrence formula, and even better, to get a "real" (perhaps a sum/product) formula for the coefficients $\mathrm{mag}_{n,j}$?
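The conjectured numerator/denominator split can be probed mechanically: compute $a_n$, multiply by the conjectured denominator $\prod_{j=2}^{n} j(L^{j-1}-1)$, and read off the polynomial coefficients. A sketch (names mine; the $a_n$ are regenerated here from the functional equation $f(Lx) = L\,e^{f(x)-L}$ rather than from the Bell polynomials):

```python
import sympy as sp
from math import factorial

def mag_row(n):
    """Numerator coefficients mag_{n,0}, mag_{n,1}, ... of a_n over the
    conjectured denominator prod_{j=2}^{n} j*(L^(j-1) - 1)."""
    L, x = sp.symbols('L x')
    a = [sp.Integer(0), sp.Integer(1)]
    for m in range(2, n + 1):
        body = sum(a[k] * x**k for k in range(1, m))
        P = sp.series(sp.exp(body), x, 0, m + 1).removeO().coeff(x, m)
        a.append(sp.cancel(L * P / (L**m - L)))  # from f(Lx) = L e^(f(x)-L)
    den = sp.Integer(1)
    for j in range(2, n + 1):
        den *= j * (L**(j - 1) - 1)
    num = sp.expand(sp.cancel(a[n] * den))
    # all_coeffs() lists highest degree first; reverse to get mag_{n,0} first
    return [int(c) for c in reversed(sp.Poly(num, L).all_coeffs())]
```

For example, `mag_row(5)` should reproduce 24, 36, 46, 40, 24, 9, 1, whose sum is 180, matching $\mathrm{Polygorial}(5, 3) = \frac{5!\,4!}{2^4}$.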
What I'm really after here is some way to analyze that recurrence formula. How would one go about approaching such a problem?

Also, $\sum_{j=0}^{\frac{(n-1)(n-2)}{2}} \mathrm{mag}_{n, j} = \frac{n!(n-1)!}{2^{n-1}} = \mathrm{Polygorial}(n, 3)$, for what it's worth (I just plugged 1, 3, 18, 180, 2700 into the oeis.org dictionary). I don't know how you could decompose that in a "useful" manner. But again, no proofs, just guesses.

Gottfried (Ultimate Fellow, Posts: 763, Threads: 118, Joined: Aug 2007)
01/17/2011, 10:10 PM (This post was last modified: 01/17/2011, 10:34 PM by Gottfried.)

Hi Mike - welcome to the club of pattern-matchers... :-)

I've seen these coefficients in the inverse Schröder function for the dxp(x)-iterates. I don't have it at hand and hope I recall it correctly, but we had a thread in which, I think, Sheldon discussed somehow complementary coefficients (which also occur in the non-inverse Schröder function), which were

$\begin{matrix} . & . & . & . & . & . & . & . & . & . & . & . & . & . & . & . \\ 1 & . & . & . & . & . & . & . & . & . & . & . & . & . & . & . \\ -1 & . & . & . & . & . & . & . & . & . & . & . & . & . & . & . \\ 1 & 2 & . & . & . & . & . & . & . & . & . & . & . & . & . & . \\ -1 & -6 & -5 & -6 & . & . & . & . & . & . & . & . & . & . & . & . \\ 1 & 14 & 24 & 45 & 46 & 26 & 24 & . & . & . & . & . & . & . & . & . \\ -1 & -30 & -89 & -214 & -374 & -416 & -511 & -461 & -330 & -154 & -120 & . & . & . & . & . \\ 1 & 62 & 300 & 889 & 2177 & 3368 & 5188 & 6980 & 8007 & 7867 & 7188 & 6111 & 4270 & 2528 & 1044 & 720 \end{matrix}$

to compare with yours again:

$\begin{matrix} . & . & . & . & . & . & . & . & . & . & . & . & . & . & . & . \\ 1 & . & . & . & . & . & . & . & . & . & . & . & . & . & . & . \\ 1 & . & . & . & . & . & . & . & . & . & . & . & . & . & . & . \\ 2 & 1 & . & . & . & . & . & . & . & . & . & . & . & . & . & . \\ 6 & 6 & 5 & 1 & . & . & . & . & . & . & . & . & . & . & . & . \\ 24 & 36 & 46 & 40 & 24 & 9 & 1 & . & . & . & . & . & . & . & . & . \\ 120 & 240 & 390 & 480 & 514 & 416 & 301 & 160 & 64 & 14 & 1 & . & . & . & . & . \\ 720 & 1800 & 3480 & 5250 & 7028 & 8056 & 8252 & 7426 & 5979 & 4208 & 2542 & 1295 & 504 & 139 & 20 & 1 \end{matrix}$

I've had the same coefficients in my discussion of the Bell matrix for dxp(x) (in my then notation Ut(x)) and its symbolic powers. There we find those coefficients in the second column (the coefficients of Sheldon) and in the last column (your coefficients) of the A-coefficients matrix, as shown in http://go.helms-net.de/math/tetdocs/APT.htm . That they come from those columns corresponds to the fact that the Schröder function is a scaling of the iterate of (positive) infinite height, so that only the entries of the second column remain significant, and that the coefficients of the inverse Schröder function are those of negative infinite height (this is just a guess, but it accidentally seems to fit), which keeps only the entries of the last column significant, because heights shift the columns down/upwards by construction.

For a sketch, see the following. I use the coefficients matrix A_4 at x^4 according to my mentioned paper. A_4 is

$\begin{matrix} . & -1 & 7 & -12 & 6 \\ . & -6 & 18 & -18 & 6 \\ . & -5 & 18 & -18 & 5 \\ . & -6 & 11 & -6 & 1 \end{matrix}$

and contains your coefficients of n=4 in the last column and the "complementary" ones in the second column. This is for height h=1. If we have height h=2, we get

$\begin{matrix} . & . & . & . & . \\ . & -1 & . & . & . \\ . & -6 & 7 & . & . \\ . & -5 & 18 & -12 & . \\ . & -6 & 18 & -18 & 6 \\ . & . & 11 & -18 & 6 \\ . & . & . & -6 & 5 \\ . & . & . & . & 1 \end{matrix}$

With increasing height the columns get shifted accordingly. From height h=5 on, the columns are strictly "separated": each rowsum consists of the entries of one column only.

$\begin{matrix} . & . & . & . & . \\ . & . & . & . & . \\ . & . & . & . & . \\ . & . & . & . & . \\ . & -1 & . & . & . \\ . & -6 & . & . & . \\ . & -5 & . & . & . \\ . & -6 & . & . & . \\ . & . & 7 & . & . \\ . & . & 18 & . & . \\ . & . & 18 & . & . \\ . & . & 11 & . & . \\ . & . & . & -12 & . \\ . & . & . & -18 & . \\ . & . & . & -18 & . \\ . & . & . & -6 & . \\ . & . & . & . & 6 \\ . & . & . & . & 6 \\ . & . & . & . & 5 \\ . & . & . & . & 1 \end{matrix}$

In the limit of infinite height only the last column is significant (if the log of the base is u>1), or only the second (if the log of the base is u<1), so in the Schröder function we rediscover only the coefficients of that column.

Sorry, I've no time to go into more detail at the moment, but I hope it is understandable so far. I didn't find any simpler construction for these coefficients than just this - a construction of R. Mathar, which I described elsewhere in the forum, used only the Stirling numbers but was also iterative and thus had the same computational complexity. The coefficients can also be found by solving linear equation systems, using that column-shifting depending on the height parameter and the hypothesis of even divisibility by the denominator at integer heights. (I did some sketches for that solution but got tired, so I didn't make it a fully explicit procedure.)

The relation to the q-analogues is very striking; however, precisely this relation makes me skeptical whether we are on the right track with the power series so far, since exactly these q-analogues occurred in an exercise of L. Euler, where he discussed another series for the log function and used those q-analogues in the denominator. However: that "log" was correct only at integer exponents, but was a false log at fractional exponents... a striking analogy to our tetrational problems with fractional heights, while integer heights are easy to handle. I discussed this briefly with Henryk here in the forum (but without conclusive results); search for the term "false logarithm", and I think also in sci.math or sci.research.
(I'm currently much occupied with my two statistics courses, so please excuse the short style of this msg and the missing references to the msgs in our forum here. Perhaps I can come back to this later in the week. Please ask any specific question anyway.)

Gottfried

Gottfried Helms, Kassel

mike3 (Long Time Fellow, Posts: 368, Threads: 44, Joined: Sep 2009)
01/22/2011, 04:00 AM (This post was last modified: 01/22/2011, 10:11 AM by mike3.)

@Gottfried: That's interesting; your matrix seems to be related to the iteration of the decremented exponential. I'm thinking that perhaps that would be a better line of attack here.

The regular Schröder function at the fixed point 0 of a function with such a fixed point, i.e.

$f(z) = \sum_{n=1}^{\infty} f_n z^n$,

is given by

$\chi(z) = \sum_{n=1}^{\infty} \chi_n z^n$

where the coefficients $\chi_n$ satisfy the recurrence

$\chi_1 = 1$

$\chi_n = \frac{1}{f_1 - f_1^n} \sum_{m=1}^{n-1} \chi_m {f^{\cdot m}}_n$

where ${f^{\cdot m}}_n$ is the nth coefficient of the mth exponentiation (NOT iteration!) of $f$ (for iteration, it would be ${f^{\circ m}}_n$, and ${f^m}_n$ is too ambiguous here to be used), i.e.

${f^{\cdot m}}_n = \frac{1}{n!} \left[\frac{d^n}{dx^n} f(x)^m\right]_{x = 0}$.

For $f(z) = e^{uz} - 1$, i.e. the decremented exponential, we get, using the binomial theorem (here, $\left\{{n \atop m}\right\}$ is a Stirling number of the 2nd kind),

${f^{\cdot m}}_n = u^n \frac{m!}{n!} \left\{{n \atop m}\right\}$.

So, the recurrence is

$\chi_n = \frac{u^{n-1}}{1 - u^{n-1}} \sum_{m=1}^{n-1} \frac{m!}{n!} \left\{{n \atop m}\right\} \chi_m$,

which looks to be much more "linear" than the weird Bell-polynomial recurrence. Could it be possible to come up with an explicit formula for the $\chi_n$?
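For experimentation, this recurrence is easy to run symbolically; a minimal sketch (the name `schroder_coeffs` is mine), using SymPy's Stirling numbers of the second kind:

```python
import sympy as sp
from math import factorial
from sympy.functions.combinatorial.numbers import stirling

def schroder_coeffs(N):
    """chi_n for f(z) = exp(u*z) - 1 (decremented exponential), via
    chi_n = u^(n-1)/(1 - u^(n-1)) * sum_{m<n} m!/n! * S2(n, m) * chi_m."""
    u = sp.Symbol('u')
    chi = [sp.Integer(0), sp.Integer(1)]  # chi[0] unused, chi_1 = 1
    for n in range(2, N + 1):
        s = sum(sp.Rational(factorial(m), factorial(n))
                * stirling(n, m, kind=2) * chi[m]
                for m in range(1, n))
        chi.append(sp.cancel(u**(n - 1) / (1 - u**(n - 1)) * s))
    return chi
```

A sanity check is that the resulting series satisfies the Schröder equation $\chi(f(z)) = u\,\chi(z)$ through the computed order.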
Then the Lagrange inversion formula can be applied to obtain the inverse Schröder function $\chi^{-1}$, which then yields the coefficients of a Fourier series for the superfunction of the decremented exponential, which can be topologically conjugated to the desired $UT$ function (it may be offset from the original, but at least we would have a formula with explicit, non-recursive coefficients).

ADD: I tried some expansions of those $\chi_n$ coefficients. I noticed that the numerators seemed to contain coefficients that look like the second column of your matrices (the -1, -5, -6, -5 thing). Could there be some simpler way to transform that second column into the last column? If so, then we might be able to extract a simpler formula from the explicit formula for $\chi_n$ once it is developed.

tommy1729 (Ultimate Fellow, Posts: 1,370, Threads: 335, Joined: Feb 2009)
01/23/2011, 10:59 PM

well i was about to tell you your problem had a combinatorial interpretation similar to stirling numbers of the 2nd kind. seems you found that yourself.

how about the Lyapunov exponent and schroeder summations on the classic page: http://tetration.org/Tetration/index.html

maybe they help, or reduce... on the other hand, perhaps you know that page by heart and considered this before.

mike3 (Long Time Fellow, Posts: 368, Threads: 44, Joined: Sep 2009)
01/24/2011, 04:34 AM (This post was last modified: 01/24/2011, 04:35 AM by mike3.)

(01/23/2011, 10:59 PM) tommy1729 Wrote: well i was about to tell you your problem had a combinatorial interpretation similar to stirling numbers of the 2nd kind.

You mean the original problem? I'd be curious to hear more. Do you have a way to get a non-recursive formula? And I've seen the site you linked to. It does not give non-recursive coefficient formulas.
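The Lagrange-inversion step mentioned at the start of mike3's post can be sketched generically: the $n$th coefficient of the compositional inverse of a series $\chi(w) = w + \cdots$ is $\frac{1}{n}[w^{n-1}]\left(\frac{w}{\chi(w)}\right)^n$. A small self-contained illustration (the name `inverse_series` is mine):

```python
import sympy as sp

def inverse_series(chi, w, N):
    """Coefficients of the compositional inverse of chi(w) = w + ..., via
    Lagrange inversion: [z^n] chi^{-1}(z) = (1/n) [w^(n-1)] (w / chi(w))^n."""
    inv = [sp.Integer(0)]  # no constant term
    for n in range(1, N + 1):
        c = sp.series((w / chi)**n, w, 0, n).removeO().coeff(w, n - 1)
        inv.append(sp.cancel(c / n))
    return inv
```

For instance, for $\chi(w) = w + w^2$ this yields $z - z^2 + 2z^3 - 5z^4 + \cdots$ (signed Catalan numbers), and composing the two series gives back $z$.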
Gottfried (Ultimate Fellow, Posts: 763, Threads: 118, Joined: Aug 2007)
01/24/2011, 08:56 AM

Mike - without having it definite, I think that your recursive description should be the same as that in the eigenvalue/eigensystem computation in the APT article: it is just a simple procedure and works fast. I was likewise looking for a simpler, non-recursive description, but I didn't find anything convincing.

The only thing I found was that method which solves linear equation systems using the (u-1)(u^2-1)...() denominators and assumes that at integer heights the numerator must be evenly divisible by that denominator, plus the assumption that the first and last rows of the coefficient matrices are the Stirling numbers of the first and second kind, respectively (or the opposite). With this, each coefficient matrix can be determined separately, non-recursively at least in the sense that the coefficient matrices (and thus the computation of their second and last columns) are independent of each other. (I don't know whether the description of this in the mentioned APT article was too short/imprecise?)

I didn't proceed there - it seemed too complicated and does not give an optimization over the efficient recursive procedure; maybe you could find an improvement once the system of conditions is formally stated in terms of linear equations/matrix formulas, and thus possibly get a more explicit expression in terms of the "curly braces" ;-) only. Well, I'll give it another try myself today, but my emphasis on this has cooled down a bit, to be frank...

Gottfried

Gottfried Helms, Kassel

mike3 (Long Time Fellow, Posts: 368, Threads: 44, Joined: Sep 2009)
01/28/2011, 03:12 AM (This post was last modified: 01/28/2011, 03:13 AM by mike3.)
News: I may have just found a really complicated but "explicit" or "non-recursive" formula for the solutions of a general recurrence sequence of the form

$a_1 = r_{1, 1}$

$a_n = \sum_{m=1}^{n-1} r_{n, m} a_m$

to which these Schröder function coefficient equations belong. It's not "nice", though, but it looks to work (no proof yet, as it was found by pattern examination). If you want details, just ask. But at least it seems to show that an explicit formula exists, so there may perhaps be a simpler, more "elegant" one.

Gottfried (Ultimate Fellow, Posts: 763, Threads: 118, Joined: Aug 2007)
01/28/2011, 02:49 PM

(01/28/2011, 03:12 AM) mike3 Wrote: News: I may have just found a really complicated but "explicit" or "non-recursive" formula for the solutions of a general recurrence sequence of the form $a_1 = r_{1, 1}$, $a_n = \sum_{m=1}^{n-1} r_{n, m} a_m$, to which these Schröder function coefficient equations belong. It's not "nice", though, but it looks to work (no proof yet, as it was found by pattern examination). If you want details, just ask. But at least it seems to show that an explicit formula exists, so there may perhaps be a simpler, more "elegant" one.

Wow. If this is not uncomfortably much work, then please show it. Perhaps it gives a bit more insight into the whole problem, as well as into that of my special "hobby", the iteration series... or at least some fresh idea.

Gottfried

Gottfried Helms, Kassel

mike3 (Long Time Fellow, Posts: 368, Threads: 44, Joined: Sep 2009)
01/28/2011, 09:12 PM (This post was last modified: 01/29/2011, 12:25 AM by mike3.)

Well, here goes... this is LONG! I'm going to also give an explanation of the derivation, since you were asking for more "insight" into the problem.

OK, so you have the recurrence I gave:

$a_1 = r_{1, 1}$

$a_n = \sum_{m=1}^{n-1} r_{n, m} a_m$.
If we expand a few terms, we get

$a_2 = r_{2, 1} r_{1, 1}$

$a_3 = r_{3, 1} r_{1, 1} + r_{3, 2} r_{2, 1} r_{1, 1}$

$a_4 = r_{4, 1} r_{1, 1} + r_{4, 2} r_{2, 1} r_{1, 1} + r_{4, 3} r_{3, 1} r_{1, 1} + r_{4, 3} r_{3, 2} r_{2, 1} r_{1, 1}$

$a_5 = r_{5, 1} r_{1, 1} + r_{5, 2} r_{2, 1} r_{1, 1} + r_{5, 3} r_{3, 1} r_{1, 1} + r_{5, 3} r_{3, 2} r_{2, 1} r_{1, 1} + r_{5, 4} r_{4, 1} r_{1, 1} + r_{5, 4} r_{4, 2} r_{2, 1} r_{1, 1} + r_{5, 4} r_{4, 3} r_{3, 1} r_{1, 1} + r_{5, 4} r_{4, 3} r_{3, 2} r_{2, 1} r_{1, 1}$

...

Looking at this, we see that the number of products in each $a_n$ is 1, 1, 2, 4, 8, ..., or $2^{n-2}$ for $n > 1$. It is possible to prove this by considering how the above process works: each sum adds together the previous terms, which are themselves sums, so the number of products in that sum should equal the sum of the numbers of products in the previous terms. That is,

$PN_1 = 1$

$PN_n = \sum_{m=1}^{n-1} PN_m$

Taking the forward difference $\Delta$ of that last equation gives

$\Delta PN_n = \sum_{m=1}^{n} PN_m - \sum_{m=1}^{n-1} PN_m = PN_n$.

This difference equation, i.e. $PN_{n+1} = 2 PN_n$, has solutions of the form $PN_n = r\,2^n$. We see that for the given initial conditions, i.e. $PN_2 = 1$ and $PN_3 = 2$, $PN_n = 2^{n-2}$ for $n > 1$.

Now we move on to consider the number of factors in each product. Counting them, we see that

n = 1: 1
n = 2: 2
n = 3: 2, 3
n = 4: 2, 3, 3, 4
n = 5: 2, 3, 3, 4, 3, 4, 4, 5
...

Looking at the formula, we see the nth term should be the concatenation (due to the sum) of the previous terms with their elements each incremented by 1 (due to the multiplication by a new $r_{n, m}$ factor). That is,

$PFN_n = \mathrm{incr}(PFN_1) + \mathrm{incr}(PFN_2) + \cdots + \mathrm{incr}(PFN_{n-1}) = \sum_{m=1}^{n-1} \mathrm{incr}(PFN_m)$

(where the "+" are actually string concatenations; formally, $PFN_n \in \mathbb{N}^*$).
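The product count $PN_n = 2^{n-2}$ can also be confirmed by brute force, expanding the recurrence with symbolic $r_{n,m}$ and counting the monomials (a small sketch; the name `product_count` is mine):

```python
import sympy as sp

def product_count(N):
    """Expand a_n = sum_{m<n} r_{n,m} a_m with symbolic r's and count
    the number of products (monomials) in each a_n."""
    r = sp.IndexedBase('r')
    a = [sp.Integer(0), r[1, 1]]  # a[0] unused, a_1 = r_{1,1}
    for n in range(2, N + 1):
        a.append(sp.expand(sum(r[n, m] * a[m] for m in range(1, n))))
    return [len(sp.Add.make_args(a[n])) for n in range(1, N + 1)]
```

`product_count(6)` gives 1, 1, 2, 4, 8, 16, matching $PN_n = 2^{n-2}$ for $n > 1$.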
We can "refactor" this process by noting that the "increment" "distributes" over the concatenation, so

$PFN_n = \sum_{m=1}^{n-1} \mathrm{incr}(PFN_m) = \mathrm{incr}\left(\sum_{m=1}^{n-1} PFN_m\right)$.

Now, we know that the previous term is also an "incremented concatenation" of the terms before it; thus, if we decremented that term and then concatenated the term itself to that, we would have the concatenation of all terms before $PFN_n$, and thus

$PFN_n = \mathrm{incr}(\mathrm{decr}(PFN_{n-1}) + PFN_{n-1}) = PFN_{n-1} + \mathrm{incr}(PFN_{n-1})$

i.e. the sequence can be formed by taking the previous term, incrementing all its elements, and then concatenating that to the unincremented version. This is also easily observable from the pattern, but the above gives a kind of "proof".

Anyway, by looking at the dictionary of integer sequences at oeis.org, I found that these strings appear within the sequence of "binary weights of n", the number of 1 bits in an integer. It appears element j (the first being j = 1) of $PFN_n$ is given by the number of 1 bits in $2^{n-1} + 2j - 1$.

This can be written explicitly. Note that $\lfloor\frac{N}{2^k}\rfloor$ is a "lossy right shift" of $N$ (i.e. shift right k bits and knock off everything past the binary point, like the ">>" operation in the C programming language). Then we can get the kth bit (the 0th being the LSB) of $N$ as $\lfloor\frac{N}{2^k}\rfloor - 2\lfloor\frac{N}{2^{k+1}}\rfloor$: the second term is a lossy right shift by k+1 bits followed by a left shift by 1 bit, which is like the right shift by k bits but with its LSB cleared. So we can add up all the bits in $N$ (thus giving the number of 1 bits) by

$\sum_{i=0}^{\lfloor\log_2(N)\rfloor} \left(\lfloor\tfrac{N}{2^i}\rfloor - 2\lfloor\tfrac{N}{2^{i+1}}\rfloor\right)$.
Taking $N = 2^{n-1} + 2j - 1$ and noting that the first and last bits are always 1 (the first by virtue of $2^{n-1}$; the last because $N$ is always odd, as $2^{n-1}$ is even and $2j - 1$ is odd, and even + odd = odd), the remaining middle bits are those of $\frac{(2j - 1) - 1}{2} = j - 1$, so we get

${(PFN_n)}_j = 2 + \sum_{i=0}^{\lfloor\log_2(j-1)\rfloor} \left(\lfloor\tfrac{j-1}{2^i}\rfloor - 2\lfloor\tfrac{j-1}{2^{i+1}}\rfloor\right)$ for $j > 1$, with ${(PFN_n)}_1 = 2$.

These connections to binary made me wonder whether this whole thing could be interpreted in terms of binary. I noticed that each $r_{n,m}$ is present at most once in a product, i.e. there are no higher-degree $r_{n,m}$ factors. And the indices, for the nth term of our original sequence, range from 1 to n. So I thought that each product could be represented as a binary nxn matrix with 1-bit elements, and we could look for patterns there. This led me to see part of, but not all of, the pattern. However, the most fruitful way of viewing it, I found, was to arrange the terms and factors like this:

Code:
r11

r21
r11

r31 r32
    r21
r11 r11

r41 r42 r43 r43
        r31 r32
    r21     r21
r11 r11 r11 r11

r51 r52 r53 r53 r54 r54 r54 r54
                r41 r42 r43 r43
        r31 r32         r31 r32
    r21     r21     r21     r21
r11 r11 r11 r11 r11 r11 r11 r11

where the value of each term can be found by multiplying each column's elements together and then adding up the results.

We can now see an interesting alternating pattern... which looks like binary counting! We see that the factors appear and disappear like the bits of $2^{n-1} + 2j - 1$, which also explains the "number of 1 bits of $2^{n-1} + 2j - 1$" observation. The left indices of the $r_{n, m}$ in a product, then, are generated by taking the position of each 1 bit in $2^{n-1} + 2j - 1$, with the LSB counted as position 1, and putting them in numerical order for convenience.
That is,

$\mathrm{Left\ index\ of\ }k\mathrm{th\ factor\ of\ }j\mathrm{th\ product\ in\ }n\mathrm{th\ term} = \mathrm{position\ of\ }k\mathrm{th\ 1\ bit\ in\ }2^{n-1} + 2j - 1\mathrm{,\ with\ the\ LSB\ counted\ as\ position\ 1}$.

Now for the right indices. If we look at them, they look like (here, for the 5th term)

Code:
11
211
311
3211
411
4211
4311
43211

The same "bit index" interpretation works here as well. Ignoring the final "1", these strings give the positions of the 1 bits in $2j - 1$. So we have

$\mathrm{Right\ index\ of\ }k\mathrm{th\ factor\ of\ }j\mathrm{th\ product\ in\ }n\mathrm{th\ term} = \mathrm{if\ }k-1 > 0\mathrm{,\ position\ of\ }(k-1)\mathrm{th\ 1\ bit\ in\ }2j - 1\mathrm{,\ with\ the\ LSB\ counted\ as\ position\ 1;\ otherwise\ 1}$.

Then, the not-quite-symbolic but still "explicit" and "non-recursive" formula is

$a_n = \sum_{j=1}^{2^{n-2}} \prod_{k=1}^{(\mathrm{number\ of\ 1\ bits\ in\ }2^{n-1} + 2j - 1)} r_{(\mathrm{position\ of\ }k\mathrm{th\ 1\ bit\ in\ }2^{n-1} + 2j - 1\mathrm{,\ with\ the\ LSB\ counted\ as\ position\ 1}),(\mathrm{if\ }k-1 > 0\mathrm{,\ position\ of\ }(k-1)\mathrm{th\ 1\ bit\ in\ }2j - 1\mathrm{,\ with\ the\ LSB\ counted\ as\ position\ 1;\ otherwise\ 1})}$.

Gottfried (Ultimate Fellow, Posts: 763, Threads: 118, Joined: Aug 2007)
01/28/2011, 10:42 PM

(01/28/2011, 09:12 PM) mike3 Wrote: Well, here goes... this is LONG! I'm going to also give an explanation of the derivation, since you were asking for more "insight" into the problem.

Very interesting! And not so long at all. I cannot chew it instantly, but I'll take my time tomorrow and on Sunday to interpret it and relate it to the matrix notation. Now since I see that 2,3,3,4,3,4,4,5,... pattern and the string concatenation, I get the impression that this might be one of the cases where the recursive description is replaced by an expansion back to the initial parameter(s), but where no obvious simplification is found - which would reflect my experience that I couldn't find a shorter description for the Schröder coefficients than by the explicit matrix eigendecomposition (for the triangular case). But well - this is just an impression from a first reading; perhaps it is completely false. Anyway - thanks so far. I'll try to understand everything over the next two days.

Gottfried

Upps, one question: the r's in this post are the chi's of the earlier one?

Gottfried Helms, Kassel
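For completeness, the explicit bit-position formula from mike3's long post can be checked mechanically against the recursive expansion. The sketch below (function names are mine) builds each product from the positions of the 1 bits of $2^{n-1}+2j-1$ and $2j-1$ exactly as described:

```python
import sympy as sp

def bit_positions(N):
    """Positions of the 1 bits of N, with the LSB counted as position 1."""
    pos, p = [], 1
    while N:
        if N & 1:
            pos.append(p)
        N >>= 1
        p += 1
    return pos

def a_explicit(n, r):
    """mike3's explicit formula: one product per j = 1 .. 2^(n-2)."""
    if n == 1:
        return r[1, 1]
    total = 0
    for j in range(1, 2**(n - 2) + 1):
        left = bit_positions(2**(n - 1) + 2*j - 1)   # left indices
        right = bit_positions(2*j - 1)               # right indices (shifted)
        term = 1
        for k, ln in enumerate(left):                # k = 0 is the first factor
            rn = 1 if k == 0 else right[k - 1]
            term *= r[ln, rn]
        total += term
    return total

def a_recursive(n, r):
    """Reference expansion of a_n = sum_{m<n} r_{n,m} a_m."""
    a = [sp.Integer(0), r[1, 1]]
    for m in range(2, n + 1):
        a.append(sp.expand(sum(r[m, k] * a[k] for k in range(1, m))))
    return a[n]
```

With `r = sp.IndexedBase('r')`, the two agree symbolically for the first several $n$.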

