A support for Andy's (P.Walker's) slog-matrix-method - Printable Version

+- Tetration Forum (https://math.eretrandre.org/tetrationforum)
+-- Forum: Tetration and Related Topics (https://math.eretrandre.org/tetrationforum/forumdisplay.php?fid=1)
+--- Forum: Mathematical and General Discussion (https://math.eretrandre.org/tetrationforum/forumdisplay.php?fid=3)
+--- Thread: A support for Andy's (P.Walker's) slog-matrix-method (/showthread.php?tid=709)

A support for Andy's (P.Walker's) slog-matrix-method - Gottfried - 11/14/2011

Hi - in a self-study of the possibility of defining a "Bernoulli-polynomial"-like solution for the problem of summing like powers of logarithms

$s_p(a,b)=\log(1+a)^p + \log(2+a)^p + \ldots + \log(b)^p$

and its generalizations to arbitrary lower and upper summation bounds a and b, I tried the method of indefinite summation. There I had to find an infinite-size matrix reciprocal (inverse) in the same spirit as the matrix reciprocal that occurs in the slog ansatz of Andy Robbins (also used earlier by P. Walker).

Interestingly, the matrix reciprocal, which can be defined in the same way, gives more than just meaningful approximate values. That alone would be nice, but then we might not be able to check whether the computed values are always true approximations of the expected values. Actually we get even more: we seem to get exactly the coefficients of the most meaningful closed-form function for this sums-of-like-powers problem, namely one involving the lngamma function.

This occurrence of the lngamma here is interesting in two ways:

a) it supports Andy's/P. Walker's matrix ansatz for the solution of tetration/slog;

b) it supports the meaningfulness of L. Euler's choice of the Gamma definition for the interpolation of the factorial, besides the criterion of log convexity (maybe this then has a similar effect for the solution of tetration).
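As a small numerical illustration (my own sketch, not from the essay): for $p=1$ the sum telescopes into the log-Gamma function, which is the closed form alluded to above.

```python
import math

# Assumed illustration (not from Gottfried's essay): for p = 1 the sum
#   s_1(a, b) = log(1+a) + log(2+a) + ... + log(b)
# telescopes into log-Gamma:  s_1(a, b) = lgamma(b+1) - lgamma(a+1),
# since the terms are log(k) for k = a+1, ..., b and log(b!/a!) = lgamma(b+1) - lgamma(a+1).
a, b = 3, 12
direct = sum(math.log(k) for k in range(a + 1, b + 1))
closed = math.lgamma(b + 1) - math.lgamma(a + 1)
print(direct, closed)  # both equal log(b!/a!)
```

For $p>1$ no such elementary telescoping exists, which is where the indefinite-summation machinery of the essay comes in.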
I began to write a small article about this for my "mathematical miniatures" website, but I am currently somewhat distracted by my teaching duties and my weak health, and do not know when I'll have time to polish it up fully for presentation. However, I thought it might already be useful/interesting to make it accessible here in its current state; I think it should be readable and self-contained enough to be understandable so far. If not, I'd be happy to answer/elaborate on specific questions. I uploaded the *.pdf to this forum, see attachment.

Gottfried

P.s.: this is very near to the first-time observation in the thread http://math.eretrandre.org/tetrationforum/showthread.php?tid=632 where I used that slog-matrix computation rather as a curiosity, while here I am focusing on it specifically.

[attachment=915]

RE: A support for Andy's (P.Walker's) slog-matrix-method - JmsNxn - 03/07/2021

Hey, Gottfried, very interesting. I too have worked with indefinite summation extensively; I wrote a paper on it in my second year of undergrad. To summarize, I'll give the formula for indefinite summation as I wrote it. If

$|f(s)| \le C e^{\tau|\Im(s)| + \rho |\Re(s)|}\,\,\text{for}\,\,0\le \tau < \pi/2,\,\,\rho > 0,\,\,C>0\\ f\,\,\text{is holomorphic for}\,\,\Re(s) > 0$

then the indefinite sum

$F(s) = \sum_{j=1}^s f(j)\\ F(s) + f(s+1) = F(s+1)\\ F(1) = f(1)\\ F\,\,\text{is holomorphic for}\,\,\Re(s) > 0$

can be given by the following formula, for Euler's Gamma function $\Gamma(s)$:

$\vartheta(x) = \sum_{n=0}^\infty \Big(\sum_{j=1}^{n+1} f(j)\Big)\frac{(-x)^n}{n!}\\ \Gamma(1-s)F(s) = \sum_{n=0}^\infty \Big(\sum_{j=1}^{n+1} f(j)\Big)\frac{(-1)^n}{n!(n+1-s)} + \int_1^\infty \vartheta(x)x^{-s}\,dx\\ F(s) = \frac{d^{s-1}}{dx^{s-1}}\Big|_{x=0} \vartheta(-x)$

If you're curious I can write a quick write-up. It's largely a simple consequence of Ramanujan's Master Theorem. I would link the original paper, but it leaves much to be desired. It was one of the first papers I ever wrote, so it's a tad hand-wavy.
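A quick numeric sanity check of this representation (my sketch, with an assumed test function, not from the post): take $f(j) = 2^{-j}$, whose indefinite sum is known in closed form, $F(s) = 1 - 2^{-s}$, and which satisfies the growth bound with $\tau = 0$, $\rho = \log 2$. Here $\vartheta(x) = e^{-x} - \tfrac12 e^{-x/2}$, and both sides of the displayed identity can be compared at a non-integer point:

```python
import math

# Assumed example (not from the thread): f(j) = 2^{-j} has the closed-form
# indefinite sum F(s) = 1 - 2^{-s}, and the partial sums give
#   theta(x) = sum_n (1 - 2^{-(n+1)}) (-x)^n / n!  =  e^{-x} - (1/2) e^{-x/2}.
# We compare both sides of  Gamma(1-s) F(s) = series + integral  at s = 1.5.
s = 1.5

# series part: sum_n (sum_{j<=n+1} f(j)) (-1)^n / (n! (n+1-s))
series = sum((1 - 2.0 ** (-(n + 1))) * (-1) ** n
             / (math.factorial(n) * (n + 1 - s)) for n in range(60))

def theta(x):
    return math.exp(-x) - 0.5 * math.exp(-x / 2)

# integral part over [1, 60] via composite trapezoid rule; theta decays like e^{-x/2},
# so the tail beyond 60 is negligible
N, a, b = 200_000, 1.0, 60.0
h = (b - a) / N
integral = h * (0.5 * (theta(a) * a ** -s + theta(b) * b ** -s)
                + sum(theta(a + k * h) * (a + k * h) ** -s for k in range(1, N)))

lhs = math.gamma(1 - s) * (1 - 2.0 ** -s)  # Gamma(1-s) F(s)
rhs = series + integral
print(lhs, rhs)  # the two sides agree to several digits
```

The series converges factorially, so a few dozen terms suffice; only the quadrature limits the accuracy here.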
This function will be unique, so if your Bernoulli sum $H$ satisfies

$|H(s)| \le C e^{\tau|\Im(s)| + \rho |\Re(s)|}\,\,\text{for}\,\,0\le \tau < \pi/2,\,\,\rho > 0,\,\,C>0\\ H\,\,\text{is holomorphic for}\,\,\Re(s) > 0$

then it is equivalent to $F$ when taking $f(s) = \log^a(s)$. Not too sure if this helps at all; but exponentially bounded indefinite sums are very simple to construct (largely due to Ramanujan, I just made a few shortcuts in his construction).

RE: A support for Andy's (P.Walker's) slog-matrix-method - Gottfried - 03/07/2021

Hi James - nice to see some consideration of this. As time goes by I'm a bit exhausted on this topic, but I'd like to see your "tad" paper. Have you seen the discussion on MSE where I had some questions that were answered nicely? See https://math.stackexchange.com/questions/39378/series-of-logarithms-sum-limits-k-1-infty-lnk-ramanujan-summation Perhaps it would be a nice thing to polish the essay a bit with the help of your expertise?

Gottfried

RE: A support for Andy's (P.Walker's) slog-matrix-method - tommy1729 - 03/07/2021

A small remark: sums and integrals are related. Much has already been said about the "continuous sum". But one idea not mentioned often is if and how csum A(i) is related to csum B(i), where A and B are functional inverses of each other. We know how the analogue works for integrals.

regards, tommy1729

RE: A support for Andy's (P.Walker's) slog-matrix-method - JmsNxn - 03/08/2021

Actually the paper isn't as bad as I remember, lol. Here's a link: https://arxiv.org/pdf/1503.06211.pdf I do take for granted that the reader knows what Ramanujan's Master Theorem is.
The version I use is: if

$|f(z)| \le C e^{\alpha |\Im(z)| + \rho|\Re(z)|},\,\,0 \le \alpha < \pi/2,\,\,C,\rho > 0\\ f\,\,\text{is holomorphic for}\,\,\Re(z) > 0$

then $f$ can be represented as

$\Gamma(1-z)f(z) = \sum_{n=0}^\infty f(n+1)\frac{(-1)^n}{n!(n+1-z)} + \int_1^\infty \Big(\sum_{n=0}^\infty f(n+1)\frac{(-x)^n}{n!}\Big)x^{-z}\,dx$

which is nothing more than a slightly tweaked version of Ramanujan's Master Theorem. I choose to write this using fractional calculus, where if

$\vartheta(x) = \sum_{n=0}^\infty f(n+1) \frac{x^n}{n!}$

then

$f(z) = \frac{d^{z-1}}{dx^{z-1}}\Big|_{x=0} \vartheta(x)$
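This representation can also be checked directly with an assumed test function (my sketch, not from the post): $f(z)=e^{-z}$ satisfies the bound with $\alpha=0$, and its alternating generating function sums to $\sum_n f(n+1)\frac{(-x)^n}{n!} = e^{-1}e^{-x/e}$, so both sides are elementary.

```python
import math

# Assumed example (not from the post): f(z) = e^{-z} meets the growth bound
# with alpha = 0, and  sum_n f(n+1) (-x)^n / n!  =  e^{-1} * e^{-x/e}.
# We verify  Gamma(1-z) f(z) = series + integral  at the non-integer z = 0.3.
z = 0.3

series = sum(math.exp(-(n + 1)) * (-1) ** n
             / (math.factorial(n) * (n + 1 - z)) for n in range(50))

def kernel(x):
    return math.exp(-1) * math.exp(-x / math.e)  # sum_n f(n+1)(-x)^n/n!

# trapezoid rule on [1, 200]; the kernel decays like e^{-x/e}, so the tail is negligible
N, a, b = 200_000, 1.0, 200.0
h = (b - a) / N
integral = h * (0.5 * (kernel(a) * a ** -z + kernel(b) * b ** -z)
                + sum(kernel(a + k * h) * (a + k * h) ** -z for k in range(1, N)))

lhs = math.gamma(1 - z) * math.exp(-z)  # Gamma(1-z) f(z)
rhs = series + integral
print(lhs, rhs)
```

The split at $x=1$ is exactly the one in the displayed formula: the series is the termwise Mellin transform of the kernel over $[0,1]$, the integral covers $[1,\infty)$.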