[Exercise] An idea of a uniqueness criterion: the Gamma function as iteration
#1
I considered the iteration of multiplication in the sense of interpolating the factorial. I adapted the idea of truncating a matrix operator in the same way Andy described for his slog solution. The occurring power series are "automatically" those of the gamma/lngamma function - it seems the method implicitly provides a proper "uniqueness criterion" (which would then also back up the possible justifications of Andy's method).

When I got stuck at a certain problem I discussed it in the math.SE forum; see the link here: math.stackexchange.com

I'll also put my results in my text on Uncompleting the Gamma (PDF), which I'll upload here once I'm finished, as an update of the provisional chapter 4.

Gottfried
Gottfried Helms, Kassel
#2
(05/19/2011, 05:49 AM)Gottfried Wrote: [...]

The incomplete gamma function is one of the most interesting ones!

I had the idea of removing the singularities of the sexp in a similar way to how you removed the poles and zeros of the gamma.
Not sure if that will work. What do you think?

If G(z) is the uncompleted (or incomplete) gamma function, then G(z) ≠ 0 everywhere implies that there is an entire log G(z), analogous to loggamma!

Also logG(exp(z)) must be interesting.
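G is not pinned down precisely above; purely as an illustration (an assumption of this sketch, not tommy's definition), take G(z) to be the lower incomplete gamma function at cutoff 1, G(z) = integral_0^1 e^{-t} t^{z-1} dt, which has the rapidly converging series sum_n (-1)^n / (n! (n+z)). A minimal numeric check that this G is positive (hence nonvanishing, so log G exists) on a stretch of the real line:

```python
import math

def G(z, terms=60):
    """Lower incomplete gamma at cutoff 1: integral_0^1 e^-t t^(z-1) dt,
    computed via the series sum_n (-1)^n / (n! * (n + z))."""
    return sum((-1)**n / (math.factorial(n) * (n + z)) for n in range(terms))

# known closed form at z = 1: integral_0^1 e^-t dt = 1 - 1/e
assert abs(G(1.0) - (1 - math.exp(-1))) < 1e-12

# G(z) > 0 on a sample of the positive real axis, so log G(z) is defined there
assert all(G(z) > 0 for z in [0.1, 0.5, 1.0, 2.5, 7.0])
```

The series converges factorially fast, so sixty terms are far more than enough for double precision.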

Notice that all derivatives of G(z) are positive.
I'm referring to fake function theory again, yes.

I wonder how close G(z), fakegamma(z) and fakeG(z) are.
And how many zeros the fake will have. Only finitely many zeros?

Forgive me, maybe I said all that before.

The pxp analogue of G(z) is also interesting.

Continuum sums and products relate too.

A kind of superfactorial is also worth considering.

For the superfunction of G(z) we could perhaps try using the recent auxiliary function strategy:

F(s) = G( F(s-1) - exp(-s) )

So clearly it connects to many things here at the tetration forum.

regards

tommy1729
#3
(03/07/2021, 11:26 PM)tommy1729 Wrote: [...]

Another question : 

Is G(z) a superfunction ? of what ?

Now we can define F(z) := G( invG(z) + 1 ) and then plug in approximations of G and invG.
But I guess more can be done. 
Is F(z) entire and similar questions would be imho interesting.
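The definition F(z) := G(invG(z) + 1) can be prototyped numerically. A sketch under assumptions: take G(x) to be the lower incomplete gamma at cutoff 1 (one possible reading, not tommy's fixed choice); it is strictly decreasing for real x > 0, so invG can be found by bisection. By construction F(G(x)) = G(x+1), an Abel/superfunction-style conjugation:

```python
import math

def G(x, terms=60):
    # lower incomplete gamma at cutoff 1, via sum (-1)^n / (n! (n+x));
    # strictly decreasing for real x > 0
    return sum((-1)**n / (math.factorial(n) * (n + x)) for n in range(terms))

def invG(y, lo=1e-6, hi=50.0):
    # bisection: G is decreasing, so G(mid) > y means mid is left of the root
    for _ in range(200):
        mid = (lo + hi) / 2
        if G(mid) > y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def F(z):
    return G(invG(z) + 1)

# conjugation check: F(G(x)) should equal G(x + 1)
assert abs(F(G(2.0)) - G(3.0)) < 1e-6
```

Whether F extends to an entire function is exactly the open question above; this only shows the pointwise construction behaves as intended on the real line.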

Also, finding tetration by infinite compositions such as f(z+1) = exp(f(z) - G(-z)) has crossed my mind.

Not only because G(-z) goes to 0 faster than exp(-z), but also because of its behavior on the complex plane.
After all, G has no period and no habit of making copies of itself.

A Riemann surface for invG and similar would be nice to see.

Btw, the (ordinary) inverse gamma function is often approximated by using the Lambert W function.

I think I posted a formula somewhere here ...

regards

tommy1729
#4
Heh, this is fairly interesting. I'd like to give my two cents on what you are doing, briefly, by taking a detour through fractional calculus. This relates very largely to the indefinite sum, fractional iteration, differsums, and recurrence relations in general. It's largely why I got so into fractional calculus in the first place; it seems to me you're stumbling upon one of the fundamental analytic continuations I use.

To begin, observe what I consider the simplest way to analytically continue the Gamma function,

$$\Gamma(z) = \int_0^\infty e^{-t}\, t^{z-1}\,dt$$

Break the integral into $\int_0^1 + \int_1^\infty$. When we look at,

$$\int_0^1 e^{-t}\, t^{z-1}\,dt = \sum_{n=0}^\infty \frac{(-1)^n}{n!\,(n+z)}$$

and this converges everywhere in $\mathbb{C}\setminus\{0,-1,-2,\dots\}$. The second part of the integral,

$$\int_1^\infty e^{-t}\, t^{z-1}\,dt$$

converges everywhere in the complex plane, giving us the analytic continuation of $\Gamma$ into $\mathbb{C}\setminus\{0,-1,-2,\dots\}$,

$$\Gamma(z) = \sum_{n=0}^\infty \frac{(-1)^n}{n!\,(n+z)} + \int_1^\infty e^{-t}\, t^{z-1}\,dt$$

It's difficult to know whom to attribute this analytic continuation to; I have heard it attributed to Euler, so it goes back a fair way.
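This decomposition is easy to sanity-check numerically. A sketch (the tail integral is truncated at t = 40 and done with a plain Simpson rule; series length and step counts are arbitrary choices):

```python
import math

def gamma_split(z, terms=80, t_max=40.0, steps=4000):
    """Gamma(z) = sum_{n>=0} (-1)^n/(n!(n+z)) + integral_1^inf e^-t t^(z-1) dt."""
    series = sum((-1)**n / (math.factorial(n) * (n + z)) for n in range(terms))
    # Simpson's rule for the tail integral on [1, t_max]; steps must be even
    h = (t_max - 1.0) / steps
    f = lambda t: math.exp(-t) * t**(z - 1)
    tail = f(1.0) + f(t_max)
    for k in range(1, steps):
        tail += (4 if k % 2 else 2) * f(1.0 + k * h)
    tail *= h / 3
    return series + tail

# works to the left of 0 as well, away from the poles
for z in (0.5, 2.5, -0.5):
    assert abs(gamma_split(z) - math.gamma(z)) < 1e-6
```

Note the check at z = -0.5: the series really is carrying the analytic continuation past the strip where the original integral converges.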

The second player in the game is Ramanujan who, although he didn't provide an absolutely rigorous description of the following phenomenon, still displayed the correct answer in a lot of cases (we can now rigorously say where and when it works). This is called Ramanujan's master theorem. The manner in which Ramanujan would write it is very indicative of the general theory.

Let us consider a type of operator (really a function, but this was the original language),

$$F(x) = \sum_{n=0}^\infty \varphi(n)\,\frac{(-x)^n}{n!} = e^{-\varphi x}, \quad\text{formally treating } \varphi^n \text{ as } \varphi(n)$$

And then note that,

$$\int_0^\infty x^{s-1}\, e^{-\varphi x}\,dx = \Gamma(s)\,\varphi^{-s}$$

Which of course is nothing but a formalism, though it does display some truth. The correct way, as we would write it now, is to write,

$$F(x) = \sum_{n=0}^\infty f(n)\,\frac{(-x)^n}{n!}$$

And Ramanujan wrote, sort of unjustified, that,

$$\int_0^\infty x^{s-1}\, F(x)\,dx = \Gamma(s)\, f(-s)$$

Now, this clearly won't work for arbitrary functions $f$. It will only work for certain functions, and determining which functions is surprisingly not that hard. Let us take a function $f(z)$ holomorphic for $\Re(z) > 0$, and assume that $|f(z)| \le C e^{\tau|\Im(z)|}$ for some $\tau < \pi/2$ as $|\Im(z)| \to \infty$. And to keep things simple, let's assume that $f(\sigma + it) \to 0$ as $\sigma \to \infty$ for arbitrary $t$.

Now write the inverse Mellin transform as,

$$F(x) = \frac{1}{2\pi i}\int_{c - i\infty}^{c + i\infty}\Gamma(s)\,f(-s)\,x^{-s}\,ds, \qquad c > 0$$

Now, the poles of this integrand in the left half plane are at the non-positive integers $s = -n$, and their residues are,

$$\operatorname*{Res}_{s = -n}\;\Gamma(s)\,f(-s)\,x^{-s} = \frac{(-1)^n}{n!}\,f(n)\,x^n$$

Doing a standard exercise in contour integration, and thanks to the nice asymptotics of the Gamma function (think Stirling) and the exponential bounds on $f$, we get,

$$F(x) = \sum_{n=0}^\infty f(n)\,\frac{(-x)^n}{n!}$$

Now, by way of the Mellin transform (thanks to the Fourier transform), we know instantly that this representation is invertible, and therefore,

$$f(-s) = \frac{1}{\Gamma(s)}\int_0^\infty x^{s-1}\,F(x)\,dx$$

And so we can take the Mellin transform, and by the invertible nature,

$$\int_0^\infty x^{s-1}\,F(x)\,dx = \Gamma(s)\,f(-s)$$

Now, we enter the above analytic continuation we did for the Gamma function (which works in the exact same manner as above), to arrive at,

$$\int_0^\infty x^{s-1}\,F(x)\,dx = \sum_{n=0}^\infty \frac{(-1)^n\,f(n)}{n!\,(n+s)} + \int_1^\infty x^{s-1}\,F(x)\,dx$$

Which is valid for $s \in \mathbb{C}\setminus\{0,-1,-2,\dots\}$. And upon here, we enter the third player: the Riemann-Liouville differintegral. Now there are a bunch of differintegrals, but the one we want is the Riemann-Liouville, because it is really the Mellin transform in disguise. We write it in a slightly nonstandard form, but it's really nothing but a change of variables. Here, we change our function of interest to remove the pesky alternation. Let us write,

$$\vartheta(w) = \sum_{n=0}^\infty f(n)\,\frac{w^n}{n!}$$

$$\frac{d^{z}}{dw^{z}}\Big|_{w=0}\vartheta(w) = \frac{1}{\Gamma(-z)}\int_0^\infty \vartheta(-x)\,x^{-z-1}\,dx = f(z)$$

Now, we have an equivalence here. I'm going to skip a bit, but bear with me. Assume that $f(z)$ has decay of the form $O(e^{\tau|\Im(z)|})$ for some $\tau < \pi/2$ as $|\Im(z)| \to \infty$ in this sector. Then there exists a function $\vartheta$ which is holomorphic for all $w$, satisfies $\frac{d^z}{dw^z}\big|_{w=0}\vartheta = f(z)$, and $\vartheta(-x) \to 0$ as $x \to \infty$ in a fast enough manner to be integrated, and for the sum to be entire.

With this we have an isomorphism between differintegration and very well behaved functions in the right half plane $\Re(z) > 0$. Now, it's important to remember the rules of differintegration. This is that,

$$\frac{d^{-1}}{dw^{-1}}\Big|_{w=x}\vartheta(w) = \int_{-\infty}^{x}\vartheta(t)\,dt$$
So that integration is inherently integration with the lower limit at negative infinity (this is important to ensure consistency). And it is hereupon where we can return to Ramanujan's notation a bit more rigorously. I'll take the example of fractional iteration, as it's a nice one to start with (though the same heuristic works for the indefinite sum, the indefinite product, and the sort). Let us write,

$$\frac{d^{z}}{dw^{z}}\Big|_{w=0} : \vartheta(w) \mapsto f(z)$$

Which is a linear operator. Now, we write,

$$\vartheta(w) = \sum_{n=0}^\infty f^{\circ n}(x_0)\,\frac{w^n}{n!}$$

Let's assume that this is differintegrable (which is the real challenge -- showing that this happens). For the case that $f$ has a fixed point with multiplier $0 < \lambda < 1$ this is doable, thanks to Schröder's equation (and it is doable in other scenarios as well). And using Ramanujan's abuse of notation, we write,

$$f^{\circ z}(x_0) = \frac{d^{z}}{dw^{z}}\Big|_{w=0}\sum_{n=0}^\infty f^{\circ n}(x_0)\,\frac{w^n}{n!}$$

And, let us combine this with the semi-group property of the differintegral,

$$f^{\circ z}\!\left(f^{\circ z'}(x_0)\right) = \frac{d^{z}}{dw^{z}}\frac{d^{z'}}{dw^{z'}}\Big|_{w=0}\vartheta(w) = \frac{d^{z+z'}}{dw^{z+z'}}\Big|_{w=0}\vartheta(w) = f^{\circ\, z+z'}(x_0)$$
Which tells us the semi-group property of the differintegral is preserved when iterating operators! Thus, all we really need is to prove the auxiliary function is differintegrable, to express the iteration in a manner that depends only on the natural iterates. We use $f^{\circ n}$ to create $f^{\circ z}$; so long as we have a differintegrable condition (which proves very hard to guarantee from first principles, unfortunately). Also, next to this, we are guaranteed a uniqueness condition. I refer to this as Ramanujan's identity theorem; a much stronger version was proven by Carlson (Carlson's theorem) -- but we don't need the strength of Carlson.
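The semi-group property can be checked numerically in a toy setting. A sketch using the classical Riemann-Liouville half-integral with base point 0 (not the base at infinity used above, but the semi-group mechanics are the same): applying the half-integral twice to f = 1 should reproduce the ordinary integral, x.

```python
import math

def half_integral(g, x, steps=600):
    """Riemann-Liouville half-integral with base point 0:
       (I^{1/2} g)(x) = 1/Gamma(1/2) * integral_0^x (x - t)^(-1/2) g(t) dt.
       The substitution t = x - u^2 removes the endpoint singularity:
       the integral becomes 2 * integral_0^{sqrt(x)} g(x - u^2) du."""
    m = math.sqrt(x)
    h = m / steps
    f = lambda u: 2.0 * g(x - u * u)
    total = f(0.0) + f(m)
    for k in range(1, steps):
        total += (4 if k % 2 else 2) * f(k * h)
    return (h / 3) * total / math.gamma(0.5)

one = lambda t: 1.0

# closed form: (I^{1/2} 1)(x) = 2*sqrt(x/pi); check at x = 1
assert abs(half_integral(one, 1.0) - 2 / math.sqrt(math.pi)) < 1e-9

# semi-group: I^{1/2} applied twice to 1 equals the plain integral, x
twice = half_integral(lambda t: half_integral(one, t) if t > 0 else 0.0, 1.0)
assert abs(twice - 1.0) < 1e-3
```

The nested quadrature is crude, but the semi-group identity survives it comfortably.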

Assume that $f$ is one of these well bounded functions, and let's assume that $f(n) = 0$ for all $n \in \mathbb{N}$. Well, by the isomorphism and the interchange, our function,

$$\vartheta(w) = \sum_{n=0}^\infty f(n)\,\frac{w^n}{n!} = 0$$

And therefore, when taking the differintegral, we prove that $f(z) = 0$ everywhere. Which implies that if $f$ is bounded in this nice manner and vanishes on the naturals, it must be zero everywhere. From this we can derive: if we have two functions $f$ and $g$, bounded in the appropriate manner, which satisfy $f(n) = g(n)$, then $f - g$ is bounded the same and is zero on the naturals, therefore zero everywhere, and $f = g$. Which lets us conclude we have a uniqueness condition on this space.

This is infinitely helpful when talking about iterations. For the indefinite sum, for instance, all we need is that it's bounded in this manner--it's unique. For the fractional iteration, for instance, all we need is that it's bounded in this manner--it's unique. Etc etc...
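As a concrete sanity check on the master-theorem machinery that drives all of this: take f(n) = 2^n, so F(x) = sum 2^n (-x)^n / n! = e^{-2x}, and the Mellin transform should come out to Gamma(s) f(-s) = Gamma(s) 2^{-s}. A sketch (the substitution x = u^2 smooths the x^{s-1} endpoint; truncation and step count are arbitrary choices):

```python
import math

def mellin(F, s, u_max=6.0, steps=6000):
    """integral_0^inf x^(s-1) F(x) dx via x = u^2 (dx = 2u du),
       truncated at x = u_max^2; Simpson's rule, steps must be even."""
    g = lambda u: 2 * u**(2*s - 1) * F(u*u)
    h = u_max / steps
    total = g(1e-12) + g(u_max)   # tiny offset avoids 0**negative at the origin
    for k in range(1, steps):
        total += (4 if k % 2 else 2) * g(k * h)
    return total * h / 3

f = lambda n: 2.0**n              # f(n) = 2^n  ->  F(x) = e^{-2x}
F = lambda x: math.exp(-2*x)

s = 1.5
lhs = mellin(F, s)
rhs = math.gamma(s) * f(-s)       # Gamma(s) * 2^{-s}
assert abs(lhs - rhs) < 1e-6
```

Here f(n) = 2^n is exactly the kind of well bounded interpolant the uniqueness argument singles out.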

And here is where we come to what I think you are beginning to uncover. Though, you are looking at the Gamma function itself, rather than the Gamma function as multiplier. Let us take a function $f$ which exists in this bounded space, and let us assume that $f(n) = \frac{1}{n!}$ for the natural numbers. Then we are guaranteed that $f(z) = \frac{1}{\Gamma(z+1)}$ everywhere, because it is in this space. Now, this is actually a uniqueness condition for the Gamma function in disguise. Where we are instead declaring the uniqueness of the equation,

$$f(z+1) = \frac{f(z)}{z+1}, \qquad f(0) = 1$$

Which appears trivial, but it really isn't. Because there can be no function $\theta$ and $g(z) = f(z)\,\theta(z)$ such that,

$$g(n) = \frac{1}{n!} \text{ on } \mathbb{N}, \qquad g \neq f$$

where $\theta(z+1) = \theta(z)$ and $g$ belongs to this bounded space.


Now, how this all ties to Andy's slog seems to be that we are going to work in the same manner. Now, I can't speak for Andy's slog (I do not know enough), but suppose it's holomorphic for $\Re(z) > 0$ and it satisfies these bounds (which makes perfect sense because of how slowly the slog function grows). We would declare a uniqueness condition as,

$$f(n) = \mathrm{slog}(n)\ \text{for}\ n \in \mathbb{N},\ f \text{ in this space} \implies f = \mathrm{slog}$$

Which means there is only one function in this space which interpolates the values of Andy's slog at the naturals -- and this solution happens to solve the slog equation. Now, I do not know if this works, because I don't know where Andy's slog is holomorphic (or if it even is). I do not know if any slog will work, tbh, because I've never studied their domains of holomorphy. But the bounds shouldn't be a problem; the question is just whether they're holomorphic in a right half plane. This doesn't help much constructively, because finding the values is rather impossible without solving for the function first.

This means the manner you are using -- truncating matrices and approximating solutions -- may be equivalent to the Ramanujan method using the differintegral. It certainly is using the gamma function. Which may be helpful for me to re-explain. Assume that,

$$f(n) = n! \quad \text{for } n \in \mathbb{N}$$

i.e. $f$ interpolates the factorial and satisfies $f(z+1) = (z+1)\,f(z)$. Then, additionally assume that $1/f$ is in this nice space; then $f(z) = \Gamma(z+1)$. Now, this may seem trivial, again. But the manner in which you are breaking up the Gamma function in your paper is actually pretty much all of the above in disguise.

I hope this might be helpful. Anyway, I had fun writing it. Not least, it should explain a lot of the work I've been showing you lately.

Regards, James.


EDIT:

I also thought it may be helpful to describe how the functional relationship of indefinite summation can be interpreted in the fractional-calculus method. Let us assume that $f(n)$ is in this bounded space; and define,

$$\vartheta_f(w) = \sum_{n=0}^\infty f(n)\,\frac{w^n}{n!}, \qquad \vartheta_F(w) = \sum_{n=0}^\infty F(n)\,\frac{w^n}{n!}, \qquad F(n) = \sum_{k=0}^{n-1} f(k)$$

Now, these functions satisfy the relationship,

$$\vartheta_F'(w) - \vartheta_F(w) = \vartheta_f(w)$$

Now, we know that,

$$\vartheta_F(w) = e^{w}\int_0^w e^{-t}\,\vartheta_f(t)\,dt$$

And, as I showed in my paper that I posted in the other thread, the function $\vartheta_F$ is also differintegrable. Therefore define,

$$F(z) = \frac{d^{z}}{dw^{z}}\Big|_{w=0}\vartheta_F(w)$$

Then,

$$F(z+1) - F(z) = f(z)$$

Which is the functional equation of indefinite summation. We can do this with many functional equations; and it seems pretty universal in its effectiveness. The same procedure works with indefinite products (but it's a tad less effective). It works with matrix operators as well: if we consider a diagonalizable square nxn matrix $M$, and assuming we have a certain kind of distribution of the eigenvalues (I don't know the word for it, I can write it out if you want), then the iteration,

$$M^{z} = \frac{d^{z}}{dw^{z}}\Big|_{w=0}\, e^{wM} = \frac{d^{z}}{dw^{z}}\Big|_{w=0}\sum_{n=0}^\infty M^{n}\,\frac{w^n}{n!}$$

Is just as convergent in the matrix space, giving a rather quick way of iterating matrices... This may help in your truncated-matrix method and in iterating them. But don't ask me how -- it just occurred to me it might help.
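The matrix remark can be illustrated directly: for a diagonalizable M with positive eigenvalues, M^t = P diag(lambda_i^t) P^{-1} interpolates the natural iterates and keeps the semi-group property. A small pure-Python sketch with a symmetric 2x2 matrix (an illustrative stand-in, not the truncated operators from the thread):

```python
import math

def mat_mul(A, B):
    # plain 2x2 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# M = [[2,1],[1,2]] has eigenvalues 3 and 1 with orthonormal eigenvectors
# (1,1)/sqrt(2) and (1,-1)/sqrt(2), so M^t = P diag(3^t, 1^t) P^T.
def M_power(t):
    r = 1 / math.sqrt(2)
    P = [[r, r], [r, -r]]          # symmetric orthogonal: P^T = P, P*P = I
    D = [[3.0**t, 0.0], [0.0, 1.0**t]]
    return mat_mul(mat_mul(P, D), P)

# semi-group check: M^(1/2) * M^(1/2) == M^1
half = M_power(0.5)
whole = mat_mul(half, half)
M = M_power(1.0)
assert all(abs(whole[i][j] - M[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```

The eigenvalue condition in the paragraph above is what guarantees the differintegral representation converges to exactly this eigendecomposition formula.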
#5
Hi James -

this seems really great, what and how you're explaining/discussing these things. It even takes me into derivations which I'd always avoided due to lack of original/basic insight (my formal math education was limited to linear algebra, while I completely missed any progress in the analysis course for students of "general informatics" in the early 1970s). But how you lay out consideration and argument is really a delight, and perhaps I'll find the energy to step into it even deeper. Unfortunately, since my retirement two years ago my mental energy for these things has been degrading, and I think I'll have to accept that not many new things will find entry into my mind... But really, your style of discourse is attractive.
I had tried to step into this fractional calculus some years ago and did some exercises, but when it came to the "half-derivative of zeta at zero" I gave up (I had even tried to get help on this on MSE; that was 7 years ago, when I was much fitter...). I'll surely read your answer again in the hope of catching some more insight; let's see. And it's nice to see that one can get into this explorative adventure of "uncompleting the gamma" with qualified access at all. <heart-warming> ... :-)

At the moment I'm absorbed with the other problem (discussing x --> x - 1/x) and the properties of its periodic points in comparison with what I've learned about periodic points of the exp(z) function, and with writing up an essay on my explorations and discoveries. The problem of forming a vision of a continuous iteration, for instance through a 3-periodic set in exp(z) compared with a 3-periodic set in the x --> x - 1/x problem, absorbs me most massively at the moment.

I just wanted to respond to your engagement -at least- timely enough to show that I value it very much!

Gottfried
Gottfried Helms, Kassel
#6
Hey, so I read your post trying to find the half-derivative of zeta. It's very important to remember in fractional calculus that the differintegral always depends on a base point.

The standard form of any differintegral is written as,

$$\frac{d^{-z}}{dx^{-z}}\Big|_{x_0} f(x) = \frac{1}{\Gamma(z)}\int_{x_0}^{x} f(t)\,(x-t)^{z-1}\,dt$$

Now each choice of $x_0$ provides different solutions. Particularly, with base point $x_0 = 0$, the Gamma-function derivative of monomials works as,

$$\frac{d^{z}}{dx^{z}}\Big|_{x_0 = 0}\, x^{p} = \frac{\Gamma(p+1)}{\Gamma(p+1-z)}\,x^{p-z}$$

But, for $x_0 \neq 0$, no such clean power rule survives. Additionally, we get a lot of problems when considering transcendental functions for $x_0$ finite. Which is epitomized by,

$$\frac{d^{-z}}{dx^{-z}}\Big|_{x_0 = 0}\, e^{x} = \frac{\gamma(z, x)}{\Gamma(z)}\,e^{x} \;\neq\; e^{x}$$
To get the above identity, we use Riemann-Liouville's differintegral, which is when we set $x_0 = -\infty$ (or, more accurately, we set the base point at the point at infinity on the Riemann sphere). This differintegral will not converge on monomials without A LOT of massaging and discussion of distribution theory (something I think I wasted way too much time on). But, if we write,

$$\frac{d^{z}}{dx^{z}}\, f(x) = \frac{1}{\Gamma(-z)}\left(\sum_{n=0}^{\infty}\frac{(-1)^n\, f^{(n)}(x)}{n!\,(n-z)} + \int_1^\infty f(x-u)\,u^{-z-1}\,du\right)$$

This is probably the best analytic continuation of the Riemann-Liouville I came up with. And if you note,

$$\frac{d^{z}}{dx^{z}}\, e^{x} = e^{x}$$

Which means it's the differintegral which preserves the exponential. Even if $z$ is complex, we can choose different branches of the power function, which correspond to different branches of the Riemann-Liouville integral (which is sort of like integrating in different directions and rotating the direction of collecting powers).
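The "preserves the exponential" claim is easy to check at z = -1/2, where the integral with base at negative infinity converges without any regularization: (d^{-1/2}/dx^{-1/2}) e^x = (1/Gamma(1/2)) integral_0^inf e^{x-u} u^{-1/2} du = e^x. A numeric sketch (substituting u = v^2 removes the endpoint singularity):

```python
import math

def weyl_half_integral_exp(x, v_max=7.0, steps=4000):
    """(1/Gamma(1/2)) * integral_0^inf e^(x-u) u^(-1/2) du, with u = v^2:
       the integral becomes 2 * integral_0^inf e^(x - v^2) dv (a Gaussian)."""
    g = lambda v: 2 * math.exp(x - v * v)
    h = v_max / steps
    total = g(0.0) + g(v_max)
    for k in range(1, steps):
        total += (4 if k % 2 else 2) * g(k * h)
    return (h / 3) * total / math.gamma(0.5)

# the half-integral with base at -infinity reproduces e^x exactly
for x in (-1.0, 0.0, 1.5):
    assert abs(weyl_half_integral_exp(x) - math.exp(x)) < 1e-8
```

The fractional derivatives (z > 0) need the regularized series-plus-tail form above, but the eigenfunction property is the same.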

Now, as to the zeta function, you have two choices. Expand the zeta function into a Taylor series, and differintegrate term wise. Or consider the zeta function as a transcendental function (which is a much better choice in my opinion).

The trouble is, the zeta function decays to $1$ as $\Re(s) \to \infty$, rather than to $0$. This isn't much of a problem really, as we just rotate the integration and add an exponential factor,

$$\frac{d^{z}}{ds^{z}}\,\zeta(s) = \frac{1}{\Gamma(-z)}\int_0^{e^{i\theta}\infty} \zeta(s-u)\,u^{-z-1}\,du$$

For some suitable angle $\theta$. This will almost equal the expected result if it weren't for the pesky first term of the zeta function, the constant $1$. Let's ignore it briefly and write,

$$\zeta(s) - 1 = \sum_{n=2}^\infty n^{-s}$$

To uncover the value at $z$ we have to use the above analytic continuation; this is,

$$\frac{d^{z}}{ds^{z}}\left(\zeta(s) - 1\right) = \sum_{n=2}^\infty (-\ln n)^{z}\, n^{-s}$$

since each $n^{-s} = e^{-s\ln n}$ is an eigenfunction of the exponential-preserving differintegral. Which is where we have to use pesky distribution theory. If $\frac{d^{z}}{ds^{z}}\,1 = 0$ for $z \neq 0$ and $= 1$ for $z = 0$, then we can conclude,

$$\frac{d^{z}}{ds^{z}}\,\zeta(s) = \sum_{n=2}^\infty (-\ln n)^{z}\, n^{-s}, \qquad z \neq 0$$

Which tells us that $\frac{d^{z}}{ds^{z}}\,\zeta(s)$ is holomorphic for $z \neq 0$ and $\Re(s) > 1$ (which avoids all that pesky nonsense).

Now, your safest bet to uncover the fractional derivative at zero using the Riemann-Liouville differintegral is to analytically continue this Dirichlet series (which isn't hard at all, considering the complexity of the Dirichlet coefficients). This will give you a function,

$$Z(z, s) = \sum_{n=2}^\infty (-\ln n)^{z}\, n^{-s}$$

Remembering there is a choice of which branch of $(-\ln n)^{z}$ you are using, and there is a different differintegral for each branch.
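Whatever the preferred formalization, the termwise picture can be sanity-checked: each n^{-s} picks up the eigenvalue (-log n)^z, and by the semi-group property applying z = 1/2 twice multiplies eigenvalues, (i*sqrt(log n))^2 = -log n, i.e. z = 1, the ordinary derivative. A sketch on zeta(s) - 1 for Re(s) > 1, principal branch (truncation level is an arbitrary choice):

```python
import math

N = 100000

def zm1(s):
    """Partial sum of zeta(s) - 1 = sum_{n>=2} n^{-s}."""
    return sum(n**(-s) for n in range(2, N))

def termwise_differintegral(s, z):
    # termwise: n^{-s} = e^{-s log n} picks up the eigenvalue (-log n)^z
    # (principal branch: (-log n)^(1/2) = i*sqrt(log n))
    return sum((-math.log(n) + 0j)**z * n**(-s) for n in range(2, N))

s = 3.0
# z = 1 (i.e. z = 1/2 applied twice) should match the ordinary derivative,
# checked here against a central difference of the partial sum
d1 = termwise_differintegral(s, 1.0)
fd = (zm1(s + 1e-4) - zm1(s - 1e-4)) / 2e-4
assert abs(d1.real - fd) < 1e-6
assert abs(d1.imag) < 1e-10
```

Both sides are truncated at the same N, so the comparison tests the eigenvalue rule rather than the tail.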

Now the other way is fairly simpler, but I absolutely hate it for power series, largely because it will not behave well on prototypical power series. This is much plainer: it is just differintegrating termwise (which is boring).

Anywho, that's my take on the fractional derivative of .

Regards, James

EDIT: It's important to remember that the Riemann-Liouville integral is largely meant for when our function has decay to zero in some direction as it goes to infinity. If the function decays to a constant, like how $\zeta(s) \to 1$, then the differintegration can't be holomorphic at $z = 0$; we will have a discontinuity there. This also happens for monomials, which would mean the differintegration is holomorphic only for $z \neq 0$. I thought it may be helpful to write out the formula for the differintegrations of monomials using Riemann-Liouville.

$$\frac{d^{z}}{dx^{z}}\, x^{p} = \begin{cases}\dfrac{p!}{(p-z)!}\,x^{p-z}, & z \in \{0, 1, \dots, p\} \\[4pt] 0, & \text{otherwise}\end{cases}$$

And again we have to use a type of distribution which equals the classical derivative for integer $0 \le z \le p$ and equals zero everywhere else. Which is a perfectly natural interpolation of $\frac{d^{n}}{dx^{n}}\,x^{p}$ but looks a lot uglier than the typical formula for a monomial. But, when you enter the distribution field of fractional calculus this formula can be very helpful, especially when correcting errors, and when interpreting the non-integer cases -- it essentially gives us justification for throwing away the polynomial part, especially when we let $z$ be non-integer. It preserves the semi-group property and the relationship $\frac{d^{1}}{dx^{1}} = \frac{d}{dx}$; however it really only does this trivially. This is rather unconventional -- and it is because we are analytically continuing the operator as a semi-group and sort of ignoring the differential relationship -- I haven't investigated it much, but it's nice. So it disagrees, somewhat, with what we expect a differintegral to do -- especially for negative real values (when we fractionally iterate the integral); because, well, it doesn't integrate at all -- it just nulls it.

The alternative way to do this is to no longer insist that $\frac{d^{z}}{dx^{z}}$ necessarily interpolates differentiation, but is instead just a semigroup. And part of the kernel of this semi-group is the space of all polynomials,

$$\frac{d^{z}}{dx^{z}}\, p(x) = 0 \quad \text{for every polynomial } p \text{ and } z \notin \mathbb{N}$$

I like this interpretation because it essentially restricts the Riemann-Liouville differintegral to non-polynomials; it only works with functions expressible by power series with an infinite number of terms. Both interpretations are equally valid though, largely because in the distribution sense we are only worried about a measure-zero set of values (so we can throw it away almost everywhere); they're equivalent almost everywhere.

Non-integer monomials behave nicely too; thought I'd add this as well.

$$\frac{d^{z}}{dx^{z}}\, x^{\alpha} = 0 \quad \text{for } \alpha \notin \mathbb{N}$$

Which means even more functions are in the kernel. This produces some trouble when talking about the logarithm, but only if we think about it dramatically. Remembering these are semigroups, we still have,

$$\frac{d^{z}}{dx^{z}}\,\log(x) = \frac{d^{z-1}}{dx^{z-1}}\,\frac{1}{x}$$

Which is holomorphic in $z$ and is a semi-group. However, we lose right-side associativity: $\frac{d^{z}}{dx^{z}}\frac{d^{z'}}{dx^{z'}} f$ need not equal $\frac{d^{z+z'}}{dx^{z+z'}} f$. This isn't a problem; we just have to remember to be very careful once we analytically continue the Riemann-Liouville integral. The correct manner of interpreting this is that MOST OF THE TIME $\frac{d^{z}}{dx^{z}}\frac{d^{z'}}{dx^{z'}} = \frac{d^{z+z'}}{dx^{z+z'}}$, but NOT ALWAYS -- especially once the differintegral is extended to a larger space of functions. However, the semi-group property is still conserved. And the reason it works for the logarithm is because we are NOT performing/allowing the differintegral identity (recall that $\frac{d^{-1}}{dx^{-1}}$ is integration with initial point at infinity, which is infinite for the logarithm; this is meaningless no matter how we slice it).

So try to remember that $\frac{d^{z}}{dx^{z}}$ is a semi-group acting on a space of functions integrable at infinity (with certain decay conditions) -- and on this space it interpolates the derivative. When we extend the operator (analytically continue the operator), it is still a semi-group but it no longer necessarily interpolates the derivative. For all of our purposes this is great news: we don't really care that it interpolates the derivative as much as we care that the semi-group property holds.

This whole edit (which I must've done ten or twelve times so far, lol) is an argument that the Riemann-Liouville integral can be analytically continued and has a very robust kernel. Also, it's meant as an argument that this operator is infinitely more important/effective for transcendental functions and infinite power series. As a nice note to end on, not all algebraic functions suffer this problem.

$$\frac{d^{z}}{dx^{z}}\,\frac{1}{1+x}$$

Equals a transcendental function and is not zero everywhere. It equals something involving a special function whose name escapes me now, but,

$$\frac{1}{1+x} = \int_0^\infty e^{-(1+x)t}\,dt$$

And we can use this representation to derive the general formula.

$$\frac{d^{z}}{dx^{z}}\,\frac{1}{1+x} = \int_0^\infty (-t)^{z}\, e^{-(1+x)t}\,dt$$

This has a closed form but I'm too lazy to go through it right now, lol. But it does equal the classical $\frac{d^{n}}{dx^{n}}\,\frac{1}{1+x} = \frac{(-1)^n\, n!}{(1+x)^{n+1}}$ for $z = n \in \mathbb{N}$.

PS:

I'm sorry to hear you're having trouble doing math like you used to. That must be very hard, and I can't even imagine. I appreciate you reading what I've written. I'd say just keep at the math, because even if it's hard, it's the type of thing that can only help your brain. :)

Ironically enough, I'm rather horrible with matrices and linear algebra (at best, I have a rudimentary working knowledge); analysis, integrals, and the sort were always my strong suit. It's always lovely to see matrix people coming to similar results as analysis people. I like to think of the history of Schrödinger's mathematical formalism and Heisenberg's mathematical formalism -- one with complex analysis, the other with matrices -- and the eventual equivalence proven by von Neumann. When I read what you wrote in the initial post I found it strikingly similar to the uniqueness condition imposed by Ramanujan's master theorem, especially when you pulled out the incomplete Gamma function. It took me a very long time to make sense of the equations and the exact language needed. But if,

$$F(x) = \sum_{n=0}^\infty f(n)\,\frac{(-x)^n}{n!}$$

for a holomorphic function $f$, there is only ONE function $f$ which makes $F$ belong to the space we want. If we change $f$ to another holomorphic function $g$ with $g(n) = f(n)$ on the naturals, we still get the same $F$, but $g \neq f$. This implies that $g$ is not in our nice space, and that there is only one function which gives us our uniqueness condition. And this function happens to be produced by the incomplete Mellin transform / (incomplete Gamma function). And then, the added benefit is that in this space it must preserve the semi-group property! And the semi-group property of the differintegral is easily translatable into functional equations!
#7
see also : 

https://math.eretrandre.org/tetrationfor...p?tid=1309

where I SERIOUSLY consider this uncompleted gamma for tetration.

