Holomorphic semi operators, using the beta method - Printable Version

+- Tetration Forum (https://math.eretrandre.org/tetrationforum)
+-- Forum: Tetration and Related Topics (https://math.eretrandre.org/tetrationforum/forumdisplay.php?fid=1)
+--- Forum: Hyperoperations and Related Studies (https://math.eretrandre.org/tetrationforum/forumdisplay.php?fid=11)
+--- Thread: Holomorphic semi operators, using the beta method (/showthread.php?tid=1386)

RE: Holomorphic semi operators, using the beta method - tommy1729 - 05/26/2022
(05/10/2022, 08:29 PM)JmsNxn Wrote:
(05/10/2022, 11:38 AM)tommy1729 Wrote: Ok I want to talk about the connection between the superfunction operator, left-distributivity and analytic continuation.

We talked about those fractional derivatives in the past, some 10 years ago. The thing is, they are just some gamma-type interpolations. But they usually do not satisfy the functional equations you want, or even when they do, it is hard to prove. Also, if there is no closed form using infinite sums, then this cannot be the solution.

Now you could try to alter the formula by replacing your list of functions f_n(x) by G(f_n(x)), assuming that still makes the sum converge. And then with this functionally invertible G you can retrieve your f_n, and somehow that satisfies the functional equation. But there is no easy way to find this G; in fact, this G is not easier than any other method, as far as we know.

I hesitated to post this because I know you like these gamma and fractional derivative ideas and invested a lot of time and effort in them. But as of now I see little progress or hope... I'm sorry.

regards

tommy1729

RE: Holomorphic semi operators, using the beta method - JmsNxn - 05/26/2022
(05/26/2022, 10:00 PM)tommy1729 Wrote:
(05/10/2022, 08:29 PM)JmsNxn Wrote:
(05/10/2022, 11:38 AM)tommy1729 Wrote: Ok I want to talk about the connection between the superfunction operator, left-distributivity and analytic continuation.

...? Well, first of all, you can solve many difference equations using them. And I can't make sense of anything else you said.

$$ \alpha \uparrow^n z = \frac{d^{z-1}}{dw^{z-1}}\Big{|}_{w=0} \sum_{k=0}^\infty \alpha \uparrow^{n} (k+1) \frac{w^k}{k!}\\ $$

I mean, that's absolute. I don't think you know what you're talking about, tommy. I think you're misinterpreting a lot of things... Who cares if there's no closed form for the sum? What does that have to do with anything? This is precisely the action of iterating linear operators. So if \(E\) is a linear operator with suitable conditions, then \(E^z e^{Ew} = \frac{d^z}{dw^z} e^{Ew}\). This converges fairly often, and the criterion for its convergence is pretty loose.

I honestly have no clue what you're talking about. Maybe your equations aren't working when you try to solve whatever difference equations you are trying to solve--but it works fine for the purposes I've used it for. The equations I've used it for do in fact work. And the above (about the semi operators \(\uparrow^s\)) was a conjecture, with a good amount of evidence to back it.

EDIT: To clarify. The correct way to say it is: if \(F\) is holomorphic in the right half plane and

$$ |F(z)| \le e^{\rho|\Re(z)| + \tau|\Im(z)|}\,\,\text{for}\,\,\rho,\tau>0,\,\,\tau < \pi/2\\ $$

then what you call the gamma interpolation is equivalent to \(F\), so that \(F(z)\) is fully determined by its behaviour on the naturals.
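For concreteness, that determination by the naturals can be checked numerically in the simplest case. This is a minimal sketch (my own illustration, not from the thread): with \(\phi(k)=a^k\), the alternating series sums to \(e^{-aw}\), and Ramanujan's master theorem recovers the interpolated value \(\phi(-s) = a^{-s}\) from the Mellin transform.

```python
from math import gamma, exp

# Ramanujan's master theorem: if f(w) = sum_k phi(k) (-w)^k / k!, then
#   integral_0^inf w^(s-1) f(w) dw = Gamma(s) * phi(-s).
# Illustration (my own example): phi(k) = a^k, so f(w) = exp(-a*w) and the
# interpolation of phi to non-integer arguments is phi(-s) = a^(-s).

def mellin(f, s, upper=60.0, n=400000):
    """Crude midpoint-rule approximation of integral_0^upper w^(s-1) f(w) dw."""
    h = upper / n
    return sum(((i + 0.5) * h) ** (s - 1) * f((i + 0.5) * h) * h
               for i in range(n))

a, s = 2.0, 0.5
lhs = mellin(lambda w: exp(-a * w), s)   # numerical Mellin transform
rhs = gamma(s) * a ** (-s)               # Gamma(s) * phi(-s)
print(lhs, rhs)                          # agree to a couple of decimals
```

The crude quadrature is only accurate to a few decimals near the \(w^{s-1}\) singularity, but it shows the interpolation is pinned down by the integer data \(\phi(k)\).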
Suppose that:

$$ EF(n) = F(n+1)\\ $$

And let's also suppose that:

$$ EF(z)\,\,\text{has the same bounds as above}\\ $$

Then:

$$ EF(z) = F(z+1)\\ $$

This is because:

$$ EF(z) - F(z+1) = \frac{d^{z-1}}{dw^{z-1}}\Big{|}_{w=0} \sum_{k=0}^\infty \left(EF(k+1) - F(k+2)\right)\frac{w^k}{k!} = \frac{d^{z-1}}{dw^{z-1}} 0 = 0\\ $$

This works for composition, because composition is a linear operator. Write \(Ez = g(z)\) locally about fixed points with real positive multipliers (it works with complex multipliers, but it's very tricky). Then the above "gamma interpolation" of \(g^{\circ n}\) just produces the standard Schroder iteration about that fixed point. This is apparent without doing any work at all. The function \(g^{\circ z}\), using the Schroder iteration and keeping the multiplier real positive, satisfies the above bounds I wrote, and is holomorphic in a right half plane. So we just use Ramanujan's master theorem. Absolutely it satisfies the equation.

I really don't understand what you're talking about at all. And I would appreciate clarity before you call this work useless.

RE: Holomorphic semi operators, using the beta method - tommy1729 - 05/26/2022
(05/26/2022, 10:18 PM)JmsNxn Wrote:
(05/26/2022, 10:00 PM)tommy1729 Wrote:
(05/10/2022, 08:29 PM)JmsNxn Wrote:
(05/10/2022, 11:38 AM)tommy1729 Wrote: Ok I want to talk about the connection between the superfunction operator, left-distributivity and analytic continuation.

Ah yes, but now you used a totally different equation that only looks similar. The issue in the equation you provided now (but not the equation I quoted) is that n (from the up arrow) needs to be an integer; then it works of course, by Taylor's theorem. But if n is NOT an integer, and the whole function is supposed to satisfy the functional equation for both integer and noninteger n, now that is a whole different story.

** In the example I actually quoted, your s from the arrow and the related w might be problematic when s is noninteger. Sure, you get values, but there is no reason they satisfy the desired equations. When s is a half-integer it relates to gamma(half-integer) and thus sqrt(pi). That pi might not belong to the ideas of tetration and the like, so the functional equations might not hold.

I never talked about linear operators. Most ideas are nonlinear.

regards

tommy1729

RE: Holomorphic semi operators, using the beta method - JmsNxn - 05/26/2022
Oh okay, yes, I understand your concern now. I actually had a rough outline of how it could work. This is a little tricky, but I'll try my best.

Consider:

$$ \alpha \uparrow^s (z+1)\\ $$

Assume it satisfies Ramanujan's bounds, so the "gamma interpolation" works. Now, here is the real kicker. We also need:

$$ \alpha \uparrow^{s-1} \left( \alpha \uparrow^s z\right)\\ $$

to be a holomorphic function for \(\Re(s) > A\) for some \(A\), and to satisfy Ramanujan's bounds, where they are defined similarly. IF (and that's a big IF) you can show this, then we're done. This is because:

$$ \left(\alpha \uparrow^s (z+1)\right) - \left(\alpha \uparrow^{s-1} \left( \alpha \uparrow^s z\right)\right)\Big{|}_{s \in \mathbb{N}} = 0\\ $$

So Ramanujan pretty much takes care of everything, if you can show it's appropriately bounded. It's still a big if, though, but I made a good amount of headway.

RE: Holomorphic semi operators, using the beta method - tommy1729 - 05/26/2022
(05/26/2022, 10:49 PM)JmsNxn Wrote: ...

I snipped the part I completely understood. This statement seems intuitively correct to me. But I'm still a bit troubled why positive integer s is sufficient and implies it for positive real s. Maybe I'm tired.

I can see both are in the same vector space, so that is good. And we have countably many parameters, which seems good too (implying countably many s will be sufficient).

Maybe I'm lazy now. But this matters a lot.

regards

tommy1729

RE: Holomorphic semi operators, using the beta method - MphLee - 05/26/2022
(05/26/2022, 09:04 PM)tommy1729 Wrote: Forgive me the off-topic in the off-topic.

But I want to highlight how much we would profit conceptually from having a unifying framework/language to treat, express and compare all the questions of this kind. Notice how asking whether a solution of the Bennett equation can be modified into something that satisfies Goodstein would need and use the same framework. The same goes for the question of how all the various hyperoperations (lower, offsets, Ackermann...) compare with each other.

RE: Holomorphic semi operators, using the beta method - JmsNxn - 05/26/2022
Hey, Tommy!

Not a problem. I'll explain again. I made an edit to the post above after you had already posted. I'll put it here.

To clarify. The correct way to say it is: if \(F\) is holomorphic in the right half plane and

$$ |F(z)| \le e^{\rho|\Re(z)| + \tau|\Im(z)|}\,\,\text{for}\,\,\rho,\tau>0,\,\,\tau < \pi/2\\ $$

then what you call the gamma interpolation is equivalent to \(F\), so that \(F(z)\) is fully determined by its behaviour on the naturals. Suppose that:

$$ EF(n) = F(n+1)\\ $$

And let's also suppose that:

$$ EF(z)\,\,\text{has the same bounds as above}\\ $$

Then:

$$ EF(z) = F(z+1)\\ $$

This is because:

$$ EF(z) - F(z+1) = \frac{d^{z-1}}{dw^{z-1}}\Big{|}_{w=0} \sum_{k=0}^\infty \left(EF(k+1) - F(k+2)\right)\frac{w^k}{k!} = \frac{d^{z-1}}{dw^{z-1}} 0 = 0\\ $$

This works for composition, because composition is a linear operator. Write \(Ez = g(z)\) locally about fixed points with real positive multipliers (it works with complex multipliers, but it's very tricky). Then the above "gamma interpolation" of \(g^{\circ n}\) just produces the standard Schroder iteration about that fixed point. This is apparent without doing any work at all. The function \(g^{\circ z}\), using the Schroder iteration and keeping the multiplier real positive, satisfies the above bounds I wrote, and is holomorphic in a right half plane. So we just use Ramanujan's master theorem. Absolutely it satisfies the equation.

Now, applying this logic to the above: if \(\alpha \uparrow^s z\) is in this Ramanujan space (across \(s\)), as well as \(\alpha \uparrow^{s-1}\left(\alpha \uparrow^s z\right)\), then their difference is certainly in this Ramanujan space.
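The integer data that difference vanishes on is just the hyperoperation recursion itself. A quick sketch (the base-case conventions here are the usual Knuth ones, \(\uparrow^0 = {}\) multiplication and \(\alpha \uparrow^n 0 = 1\); the thread doesn't spell them out, so treat them as my assumption):

```python
# Goodstein's recursion  a ↑^s (z+1) = a ↑^(s-1) (a ↑^s z)  on the naturals:
# the quantity interpolated by the Ramanujan-space argument vanishes here
# by definition. Base cases are Knuth's convention (an assumption of this
# sketch): ↑^0 is multiplication, and a ↑^n 0 = 1 for n >= 1.

def up(a, n, z):
    """a ↑^n z for integers n >= 0, z >= 0."""
    if n == 0:
        return a * z
    if z == 0:
        return 1
    return up(a, n - 1, up(a, n, z - 1))

assert up(2, 1, 10) == 2 ** 10          # ↑^1 is exponentiation
assert up(2, 2, 3) == 16                # 2 ↑↑ 3 = 2^(2^2)

# the recursion itself, i.e. the difference that vanishes for s in N
for n in range(1, 3):
    for z in range(4):
        assert up(2, n, z + 1) == up(2, n - 1, up(2, n, z))
print("Goodstein recursion verified on small naturals")
```

The loop stays on small inputs deliberately; the values explode tetrationally for larger arguments.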
Then:

$$ \alpha \uparrow^s (z+1) - \alpha \uparrow^{s-1}\left(\alpha \uparrow^s z\right) = \frac{d^{s-2}}{dw^{s-2}}\Big{|}_{w=0}\sum_{k=0}^\infty \left(\alpha \uparrow^{k+2} (z+1) - \alpha \uparrow^{k+1}\left(\alpha \uparrow^{k+2} z\right)\right)\frac{w^k}{k!} = \frac{d^{s-2}}{dw^{s-2}} 0 = 0\\ $$

So since they agree on the naturals, they agree continuously. The downside is that we need to ensure that:

$$ \alpha \uparrow^{s-1}\left(\alpha \uparrow^s z\right)\\ $$

is in this Ramanujan space. The closest I ever got to showing this was showing that:

$$ \alpha \uparrow^{s-1} \alpha = \alpha \uparrow^s 2\\ $$

This is actually manageable; showing this isn't too out there. But even then, it requires showing that the "gamma interpolation" works for hyper-operators, where, last I looked at it, there was a lemma that escaped being proven which held it together... I'd have to go over my old notes, but I believe I reduced the problem to showing that the following converges:

$$ F(s) = \frac{d^{s-2}}{dw^{s-2}}\Big{|}_{w=0}\sum_{n=0}^\infty \alpha \uparrow^{n+2} \infty \frac{w^n}{n!}\\ $$

So if you can interpolate the fixed points of the bounded hyperoperators, you can interpolate the hyperoperators themselves. This is very doable, because as you increase \(s\) you get closer and closer to \(\alpha\).

RE: Holomorphic semi operators, using the beta method - JmsNxn - 05/26/2022
(05/26/2022, 11:33 PM)MphLee Wrote:

I agree entirely. That's why I use \(\langle s \rangle\) for the Bennet procedure. It would produce something very different from the \(\uparrow\) procedure, despite both satisfying Goodstein's equation.

Welcome to the wonderful world of advanced mathematics, MphLee; where everyone calls everything something different, lol.

RE: Holomorphic semi operators, using the beta method - MphLee - 05/27/2022
(05/26/2022, 11:46 PM)JmsNxn Wrote: Welcome to the wonderful world of advanced mathematics, Mphlee; where everyone calls everything something different, lol. Hahah, I know, I'm perfectly used to it... that's why I need definitions. RE: Holomorphic semi operators, using the beta method - JmsNxn - 05/29/2022
Vittorio's limit formula:

Hey, everyone!

So MphLee shared with me some of his work, and he explained a fixed point formula for Goodstein's equation. I'd like to present it here as an algorithm which can be used to solve our problem.

Let's write:

$$ f(y,s) = x [s] y\\ $$

where this \([s]\) is interpreted as the modified Bennet operations. Now, we are going to look at the inverse in \(y\), and call it \(f^{-1}(y,s)\). So for \(s=0\) this becomes \(f^{-1}(y,0) = y-x\), for \(s=1\) this becomes \(f^{-1}(y,1) = y/x\), and for \(s=2\) this becomes \(f^{-1}(y,2) = \log_x(y)\). Here we have an analytic function \(f^{-1}(y,s)\) for \(s \in [0,2]\) and \(y>e\). (Remember \(x>e\).)

What Vittorio's limit formula describes is a very, very weird way of identifying Goodstein's equation. In many ways, we are just searching for \(g\) such that:

$$ g(y,s) = g(g^{-1}(y,s+1) + 1,s+1)\\ $$

Now, this may seem obvious. But attach it to the fact that we can use Bennet to approximate these solutions--and that these solutions exist. Well then, why not just use Vittorio's limit formula (which, although MphLee didn't write it out perfectly, is exactly what he meant):

$$ g_n(y,s) = g_{n-1}(g^{-1}_{n-1}(y,s+1) +1,s+1)\\ $$

We are just solving this limit. And in solving this limit we have perfection...

Now, MphLee mostly described this as an algebraic/categorical property that any Goodstein-looking sequence satisfies. I want to say that this is a polynomial operation which converges. And it converges fast. And I'm naming it after Vittorio.

The main objects we have to consider here are binary operations, and we have to create an iterative procedure out of binary operations. This is something I never would've thought of. Although at face value this appears to be something we've all seen, I believe this deserves to be called Vittorio's limit formula, which I'll describe precisely.
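Before the precise description, here is a minimal sanity check (my own sketch) that the identity \(g(y,s) = g(g^{-1}(y,s+1)+1,\,s+1)\) already holds exactly on the classical rungs, using only the inverses \(y-x\), \(y/x\), \(\log_x(y)\) named above:

```python
from math import log, isclose

# Goodstein descent identity g(y, s) = g(g^{-1}(y, s+1) + 1, s+1), checked
# where everything is classical: g(y,0) = x + y, g(y,1) = x * y,
# g(y,2) = x ** y, with inverses in y given by y - x, y / x, log_x(y).

x, y = 3.0, 7.0

# s = 0 recovered from s = 1:  x * (y/x + 1) == x + y
assert isclose(x * (y / x + 1), x + y)

# s = 1 recovered from s = 2:  x ** (log_x(y) + 1) == x * y
assert isclose(x ** (log(y, x) + 1), x * y)

print("descent identity holds on the integer rungs")
```

So the iteration \(g_n\) is anchored by identities that are exact at integer \(s\); the content of the limit formula is what happens in between.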
Start with the modified Bennet operators:

$$ x\,[s]\,y = \exp^{\circ s}_{y^{1/y}}\left(\log^{\circ s}_{y^{1/y}}(x) + y\right)\\ $$

Call:

$$ x\,[s]^{-1}\, y\,\,\text{the inverse of}\,\,x\,[s]\,y\,\,\text{in}\,\, y\\ $$

Now, we are looking to solve on \([1,2]\) while we solve on \([0,1]\). This is done to solve the pasting issue. Now let's call:

$$ x\,[s]_1\,y = x\,[s+1]\,\left( (x\,[s+1]^{-1}\,y) +1\right)\\ $$

Now, we can continue this operation:

$$ x\,[s]_n\,y = x\,[s+1]_{n-1}\,\left( (x\,[s+1]_{n-1}^{-1}\,y) +1\right)\\ $$

The really weird part now is that this solution doesn't work on its own. You have to solve for \([s+1]_n\) while you solve for \([s]_n\). The thing is... we can solve for:

$$ \begin{align} x\,[s]_{n}\,y &= f(y)\\ x\,[s+1]_n\,y &= f^{\circ y}(q)\\ \end{align} $$

You can actually do this pretty fucking fast... It just looks like iterating a linear function.

EDIT: Ack! I made a small mistake here; this is an idempotent iteration as written. The actual iteration is a little more difficult; I'll write it up when I can make sense of controlling the convergence of this...

Ladies and gents, I present Vittorio's Limit Formula, and how you turn Bennet into Goodstein.

PS: To MphLee, the object \(x\,[s]\,y\) is in your attractive basin... You wrote it, but you didn't see it as a polynomial. \(x\,[s]\,y\) is so close to \(x\,\langle s\rangle\, y\) that iterations of your identity converge to it... God, I hope you get this...
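As a closing footnote, the modified Bennet operation above is directly computable for integer \(s\) (a minimal sketch; the fractional-\(s\) case needs fractional iterates of \(\exp_b\), which is the open part). Since the base is \(b = y^{1/y}\), we have \(b^y = y\), which is what collapses \(s = 0, 1, 2\) to addition, multiplication and exponentiation:

```python
from math import log, isclose

# x [s] y = exp_b^(∘s)( log_b^(∘s)(x) + y ) with base b = y**(1/y),
# evaluated for integer s only (the fractional-s case is the hard part).
# Because b**y == y, the rungs s = 0, 1, 2 reduce to +, *, ** exactly.

def bennet(x, s, y):
    """Modified Bennet operation x [s] y for integer s >= 0, x > 1, y > 1."""
    b = y ** (1.0 / y)
    t = x
    for _ in range(s):          # apply log base b, s times
        t = log(t, b)
    t += y
    for _ in range(s):          # apply exp base b, s times
        t = b ** t
    return t

x, y = 3.0, 4.0
assert isclose(bennet(x, 0, y), x + y)    # addition
assert isclose(bennet(x, 1, y), x * y)    # x * b**y = x * y
assert isclose(bennet(x, 2, y), x ** y)   # exponentiation
```

For \(s = 2\) the collapse works because \(b^{\log_b\log_b x + y} = \log_b(x)\,b^y = \log_b(x)\,y\), and one more \(\exp_b\) gives \(x^y\).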