Holomorphic semi operators, using the beta method
#51
(05/10/2022, 08:29 PM)JmsNxn Wrote:
(05/10/2022, 11:38 AM)tommy1729 Wrote: Ok, I want to talk about the connection between the superfunction operator, left-distributivity, and analytic continuation.

First, the superfunction is not unique, but computing what a function is the superfunction of is almost unique; usually there is just a single free parameter.

If we have a function f(x,y) that is analytic in x and y, and we take the superfunction F(z,x,y) (with the same method) with respect to x for every fixed y, where z is the number of iterations of f(x,y) (with respect to x), then F(z,x,y) is usually analytic in both x and y!

Therefore the superfunction operator is an analytic operator.

This makes going from x <s> y to x <s+1> y preserve analyticity, for sufficiently large s.

Secondly, we want

x <s> 1 = x for all s.

By requiring this, we fix going from x <s> y to x <s+1> y as a superfunction operator.

This gives us an opportunity to get analytic hyperoperators.

Combining x <s> 1 = x, the superfunction method going from x <s> y to x <s+1> y, and the left-distributive property to go from x <s> y to x <s-1> y, we get a nice structure for hyperoperators that connects to the ideas of iteration and superfunctions.

You see, we then get that x <s> y is EXACTLY the y-th iterate of x <s-1> y with respect to x, with starting value y. If we set y = 1, then x <s> 1 = x, thereby proving that it is indeed taking superfunctions *starting with x* (for all s).

This implies that

x <0> y = x + y is WRONG.

We get, by the above:

x <0> y = x + y - 1

( x <0> 1 = x !! )

x <1> y = x y

( the super of +x + 1 - 1, aka +x, y times )

x <2> y = x^y

( the super of x y; taking x * ... y times )

x <3> y = x^^y

( starting at x and repeating x^... )

This also allows us to compute x <n> y for any n, even negative.

That is a sketch of my idea.


Not sure how this relates to 2 < s > 2 = 4 ...

Now we only need to understand x <s> y for s between 0 and 1, but analytic at 0 and 1.

Gotta run.



Regards

tommy1729

Tom Marcel Raes

Hey, Tommy

I'm well aware there are many different ways of approaching this problem. The way I'm doing it is just one way. Quite frankly, this looks a lot like my original approach from a long time ago. There, I started with \(x \uparrow^0 y = x \cdot y\), and then did everything you just described to try to define \(x \uparrow^s y\), where notably I set \(x \uparrow^s 1  = x\). I'm trying something very, very different here. It does not equate to the old studies on this; that's why I shied away from the up-arrow notation, because I wanted to insinuate an entirely different object.

That's why, again, I'm only interested in \( 0\le s \le 2\) at the moment, because Bennett comes exceedingly close to satisfying the Goodstein recursion in this interval.

I'm well aware of the method you are describing, though; I'm still confident a solution for that is given by:

$$
\alpha \uparrow^s (z+1) = \frac{d^{s-1}}{dw^{s-1}}\frac{d^{z-1}}{du^{z-1}} \Big{|}_{w=0}\Big{|}_{u=0} \sum_{k=0}^\infty \sum_{n=0}^\infty \alpha \uparrow^{n+1} (k+2) \frac{w^nu^k}{n!k!}\,\,\Re(s) >0,\,\Re(z) > 0,\,\alpha \in(1,\eta)\\
$$

...

We talked about those fractional derivatives in the past, some 10 years ago.

The thing is, they are just gamma-type interpolations.

But they usually do not satisfy the functional equations you want, and even when they do, it is hard to prove.

Also, if there is no closed form using infinite sums, then this cannot be the solution.

Now, you could try to alter the formula by replacing your list of functions f_n(x) by G(f_n(x)), assuming that still makes the sum converge.

And then, with this functionally invertible G, you can retrieve your f_n, and somehow that satisfies the functional equation.

But there is no easy way to find this G; in fact, this G is not easier than any other method as far as we know.



I hesitated to post this because I know you like these gamma and fractional-derivative ideas and invested a lot of time and effort in them.
But as for now I see little progress or hope...

I'm sorry.

regards

tommy1729
#52
(05/26/2022, 10:00 PM)tommy1729 Wrote: ...

...?


Well first of all, you can solve many difference equations using them. And I can't make sense of anything else you said.

$$
\alpha \uparrow^n z = \frac{d^{z-1}}{dw^{z-1}}\Big{|}_{w=0} \sum_{k=0}^\infty \alpha \uparrow^{n} (k+1) \frac{w^k}{k!}\\
$$

I mean, that's absolute. I don't think you know what you're talking about, Tommy.

I think you're misinterpreting a lot of things...

Who cares if there's no closed form for the sum? What does that have to do with anything...

This is precisely the action of iterating linear operators. So if \(E\) is a linear operator with specified conditions, then \(E^z e^{Ew} = \frac{d^z}{dw^z} e^{Ew}\). This converges fairly often, and the criterion for its convergence is pretty loose. I honestly have no clue what you're talking about. Maybe your equations aren't working when you try to solve whatever difference equations you are trying to solve, but it works fine for the purposes I've used it for. The equations I've used it for do in fact work. And the above (about semi-operators \(\uparrow^s\)) was a conjecture, with a good amount of evidence to back it.
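For integer \(z\) the displayed identity is just coefficient extraction by Taylor's theorem. Here is a small numeric sketch (my own toy instance, not from the thread): take \(n = 1\), so \(\alpha \uparrow^1 (k+1) = \alpha^{k+1}\) and the sum is \(\alpha e^{\alpha w}\), and approximate the derivative at 0 by forward differences.

```python
import math

alpha = 1.5

def series(w, terms=60):
    # partial sum of  sum_k alpha^(k+1) * w^k / k!  (equals alpha * e^(alpha*w))
    return sum(alpha ** (k + 1) * w ** k / math.factorial(k) for k in range(terms))

def forward_diff_derivative(f, m, h=1e-3):
    # m-th derivative of f at 0 via the forward-difference formula
    return sum((-1) ** (m - j) * math.comb(m, j) * f(j * h) for j in range(m + 1)) / h ** m

for z in (2, 3, 4):
    lhs = forward_diff_derivative(series, z - 1)   # d^(z-1)/dw^(z-1) at w = 0
    rhs = alpha ** z                               # alpha ↑^1 z
    assert abs(lhs - rhs) < 0.05, (z, lhs, rhs)
```

The fractional-order case of the same identity is what the rest of the thread debates; this check covers only integer orders.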

EDIT: To clarify.


The correct way to say it is this: if \(F\) is holomorphic in the right half plane and satisfies

$$
|F(z)| \le e^{\rho|\Re(z)| + \tau|\Im(z)|}\,\,\text{for}\,\,\tau,\rho>0,\,\tau < \pi/2\\
$$

Then what you call the gamma interpolation recovers \(F\); that is, \(F(z)\) is fully determined by its behaviour on the naturals.

Suppose that:

$$
EF(n) = F(n+1)\\
$$

And let's also suppose that:

$$
EF(z)\,\,\text{has the same bounds as above}\\
$$

Then:

$$
EF(z) = F(z+1)\\
$$


This is because:

$$
EF(z) - F(z+1) = \frac{d^{z-1}}{dw^{z-1}}\Big{|}_{w=0} \sum_{k=0}^\infty \left(EF(k+1) - F(k+2)\right)\frac{w^k}{k!} = \frac{d^{z-1}}{dw^{z-1}} 0 = 0\\
$$

This works for composition, because composition is a linear operator. Write \(Ez = g(z)\) locally about fixed points with real positive multipliers (it works with complex multipliers, but it's very tricky). Then the above "gamma interpolation" of \(g^{\circ n}\) just produces the standard Schröder iteration about that fixed point. This is apparent without doing any work at all: the function \(g^{\circ z}\), using the Schröder iteration and keeping the multiplier real positive, satisfies the above bounds I wrote and is holomorphic in a right half plane. So we just use Ramanujan's master theorem. Absolutely it satisfies the equation.
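As a concrete sketch of Schröder iteration (my own toy example, not from the thread): take the Möbius map \(g(x) = x/(2-x)\), which fixes 0 with multiplier \(g'(0) = 1/2\). Computing the Schröder coordinate \(\psi(x) = \lim_n \lambda^{-n} g^{\circ n}(x)\) numerically and conjugating gives fractional iterates \(g^{\circ t} = \psi^{-1}\left(\lambda^t \psi\right)\); the half-iterate composes with itself back to \(g\).

```python
def g(x):
    # toy map with fixed point 0 and multiplier 1/2
    return x / (2.0 - x)

LAM = 0.5  # g'(0)

def schroder(x, n=60):
    # psi(x) = lim lambda^(-n) g^n(x): the Schroder coordinate at the fixed point
    for _ in range(n):
        x = g(x)
    return x / LAM ** n

def schroder_inv(y, lo=0.0, hi=1.0 - 1e-12):
    # invert psi by bisection (psi is increasing on [0, 1))
    for _ in range(200):
        mid = (lo + hi) / 2
        if schroder(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def g_iter(t, x):
    # fractional iterate: g^{∘t}(x) = psi^{-1}( lambda^t * psi(x) )
    return schroder_inv(LAM ** t * schroder(x))

x = 0.7
half = g_iter(0.5, x)
assert abs(g_iter(0.5, half) - g(x)) < 1e-7   # half ∘ half = g
assert abs(g_iter(1.0, x) - g(x)) < 1e-7      # t = 1 recovers g itself
```

Keeping the multiplier real positive, as the post says, is what makes \(\lambda^t\) single-valued here.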

I really don't understand what you're talking about at all. And I would appreciate clarity before you call the work useless.
#53
(05/26/2022, 10:18 PM)JmsNxn Wrote: ...

$$
\alpha \uparrow^n z = \frac{d^{z-1}}{dw^{z-1}}\Big{|}_{w=0} \sum_{k=0}^\infty \alpha \uparrow^{n} (k+1) \frac{w^k}{k!}\\
$$

...

Ah yes, but now you used a totally different equation that only looks similar.

The issue in the equation you provided now (but not the equation I quoted) is that n (from the up-arrow) needs to be an integer; then it works, of course, by Taylor's theorem.

But if n is NOT an integer, and the whole function is supposed to satisfy the functional equation for both integer and noninteger n, now that is a whole different story.

**

In the example I actually quoted, your s from the arrow, and the related w, might be problematic when s is noninteger.
Sure, you get values, but there is no reason they satisfy the desired equations.

When s is a half-integer, it relates to gamma(half-integer) and thus sqrt(pi).

That pi might not belong to the ideas of tetration and the like, so the functional equations might not hold.
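The \(\sqrt{\pi}\) point can be made concrete with the standard Riemann–Liouville power rule \(D^a x^k = \frac{\Gamma(k+1)}{\Gamma(k+1-a)}x^{k-a}\) (a textbook fact, not specific to this thread): the half-derivative of \(x\) is \(2\sqrt{x/\pi}\), and applying it twice recovers \(D^1 x = 1\), with the \(\sqrt{\pi}\)'s cancelling.

```python
import math

# Gamma at half-integers brings in sqrt(pi):
assert abs(math.gamma(0.5) - math.sqrt(math.pi)) < 1e-12
assert abs(math.gamma(1.5) - math.sqrt(math.pi) / 2) < 1e-12

def frac_diff_power(k, a, x):
    # Riemann-Liouville power rule: D^a x^k = Gamma(k+1)/Gamma(k+1-a) * x^(k-a)
    return math.gamma(k + 1) / math.gamma(k + 1 - a) * x ** (k - a)

x = 2.0
# half-derivative of x is 2*sqrt(x/pi): sqrt(pi) appears explicitly
assert abs(frac_diff_power(1, 0.5, x) - 2 * math.sqrt(x / math.pi)) < 1e-12

# applying the half-derivative twice recovers D^1 x = 1:
# D^(1/2) x = (Gamma(2)/Gamma(1.5)) x^(1/2), and D^(1/2) of that gives the constant
twice = (math.gamma(2) / math.gamma(1.5)) * (math.gamma(1.5) / math.gamma(1))
assert abs(twice - 1.0) < 1e-12
```

Whether such \(\sqrt{\pi}\) factors are compatible with the tetration functional equations is exactly the open question here.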

I never talked about linear operators.

Most ideas are nonlinear.



regards

tommy1729
#54
Oh okay, yes, I understand your concern now.

I actually had a rough outline of how it could work. This is a little tricky, but I'll try my best.


Consider

$$
\alpha \uparrow^s (z+1)\\
$$

Assume it satisfies Ramanujan's bounds, so the "gamma interpolation" works. Now, here is the real kicker. We also need:

$$
\alpha \uparrow^{s-1} \left( \alpha \uparrow^s z\right)\\
$$

To be a holomorphic function for \(\Re(s) > A\), for some \(A\), and to satisfy Ramanujan's bounds, where both are defined similarly. IF (and that's a big IF) you can show this, then we're done.

This is because:

$$
\left(\alpha \uparrow^s (z+1)\right) - \left(\alpha \uparrow^{s-1} \left( \alpha \uparrow^s z\right)\right)\Big{|}_{s \in \mathbb{N}} = 0\\
$$

So Ramanujan pretty much takes care of everything, if you can show it's appropriately bounded. It's still a big if, but I made a good amount of headway.
#55
(05/26/2022, 10:49 PM)JmsNxn Wrote: ...

This is because:

$$
\left(\alpha \uparrow^s (z+1)\right) - \left(\alpha \uparrow^{s-1} \left( \alpha \uparrow^s z\right)\right)\Big{|}_{s \in \mathbb{N}} = 0\\
$$
I snipped the part I completely understood.

This statement seems intuitively correct to me.
But I'm still a bit troubled:
why is positive integer s sufficient, and why does it imply the result for positive real s?

Maybe I'm tired.
I can see both are in the same vector space, so that is good.

And we have countably many parameters, which seems good too (implying countably many s will be sufficient).

Maybe I'm lazy now.

But this matters a lot.

regards

tommy1729
#56
(05/26/2022, 09:04 PM)tommy1729 Wrote:
$$f(x , s-1 , f(x , s , y)) = f(x , s, f(y , s-1 , 1))$$
$$f(x , s , f(y , s-1 , z)) = f(f(x , s , y) , s-1 , f(x , s , z))$$

solved simultaneously.

Forgive me the off-topic in the off-topic, but I want to highlight how much we would profit conceptually from having a unifying framework/language to treat, express, and compare all questions of this kind.
Notice how asking whether a solution to the Bennett equation can be modified into something that satisfies Goodstein would need and use the same framework. The same goes for the question of how all the various hyperoperations (lower, offsets, Ackermann...) compare with each other.

MSE MphLee
Mother Law \((\sigma+1)0=\sigma (\sigma+1)\)
S Law \(\bigcirc_f^{\lambda}\square_f^{\lambda^+}(g)=\square_g^{\lambda}\bigcirc_g^{\lambda^+}(f)\)
#57
Hey, Tommy!

Not a problem. I'll explain again. I made an edit to the post above after you had already posted. I'll put it here.

To clarify.


The correct way to say it is this: if \(F\) is holomorphic in the right half plane and satisfies

$$
|F(z)| \le e^{\rho|\Re(z)| + \tau|\Im(z)|}\,\,\text{for}\,\,\tau,\rho>0,\,\tau < \pi/2\\
$$

Then what you call the gamma interpolation recovers \(F\); that is, \(F(z)\) is fully determined by its behaviour on the naturals.
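The uniqueness principle here is essentially Carlson's theorem, and the bound on \(\tau\) is what rules out the classical counterexample: \(\sin(\pi z)\) vanishes on all the naturals yet grows like \(e^{\pi|\Im(z)|}\), so it is excluded by any \(\tau < \pi\). A quick numeric check of that standard fact (my addition, not from the thread):

```python
import cmath, math

# sin(pi z) vanishes at every natural number...
for n in range(1, 6):
    assert abs(cmath.sin(cmath.pi * n)) < 1e-9

# ...but grows like e^(pi |Im z|) / 2 up the imaginary axis,
# so it violates the bound e^(tau |Im z|) for any tau < pi.
for y in (5.0, 10.0, 20.0):
    ratio = abs(cmath.sin(cmath.pi * 1j * y)) / (math.exp(math.pi * y) / 2)
    assert abs(ratio - 1) < 1e-6
```

So without the growth bound, one could add any multiple of \(\sin(\pi z)\) and still agree on the naturals; with it, the interpolation is unique.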

Suppose that:

$$
EF(n) = F(n+1)\\
$$

And let's also suppose that:

$$
EF(z)\,\,\text{has the same bounds as above}\\
$$

Then:

$$
EF(z) = F(z+1)\\
$$


This is because:

$$
EF(z) - F(z+1) = \frac{d^{z-1}}{dw^{z-1}}\Big{|}_{w=0} \sum_{k=0}^\infty \left(EF(k+1) - F(k+2)\right)\frac{w^k}{k!} = \frac{d^{z-1}}{dw^{z-1}} 0 = 0\\
$$

This works for composition, because composition is a linear operator. Write \(Ez = g(z)\) locally about fixed points with real positive multipliers (it works with complex multipliers, but it's very tricky). Then the above "gamma interpolation" of \(g^{\circ n}\) just produces the standard Schröder iteration about that fixed point. This is apparent without doing any work at all: the function \(g^{\circ z}\), using the Schröder iteration and keeping the multiplier real positive, satisfies the above bounds I wrote and is holomorphic in a right half plane. So we just use Ramanujan's master theorem. Absolutely it satisfies the equation.
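Ramanujan's master theorem itself can be sanity-checked numerically. In its classical signed form: if \(f(w)=\sum_{k\ge 0}\varphi(k)\frac{(-w)^k}{k!}\), then \(\int_0^\infty w^{s-1}f(w)\,dw = \Gamma(s)\varphi(-s)\). A sketch with the toy choice \(\varphi(k)=1/(k+1)\) (my example, not from the thread), for which \(f(w)=(1-e^{-w})/w\) and the right-hand side is \(\Gamma(s)/(1-s)\):

```python
import math

def f(w):
    # generating function for phi(k) = 1/(k+1):
    # sum_k phi(k) (-w)^k / k!  =  (1 - e^(-w)) / w
    return -math.expm1(-w) / w

def mellin(s, t_lo=-40.0, t_hi=40.0, steps=200000):
    # integral_0^inf w^(s-1) f(w) dw via the substitution w = e^t and the trapezoid rule
    h = (t_hi - t_lo) / steps
    total = 0.0
    for i in range(steps + 1):
        t = t_lo + i * h
        val = math.exp(s * t) * f(math.exp(t))
        total += val if 0 < i < steps else val / 2
    return total * h

s = 0.5
lhs = mellin(s)
rhs = math.gamma(s) / (1 - s)   # Gamma(s) * phi(-s), with phi(z) = 1/(z+1)
assert abs(lhs - rhs) < 1e-5
```

The thread's convention writes the series in \(+w\) and phrases the result as a fractional derivative at \(w=0\); the Mellin integral above is the classical equivalent form.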



Now, applying this logic to the above: if \(\alpha \uparrow^s z\) is in this Ramanujan space (across \(s\)), as well as \(\alpha \uparrow^{s-1}\left(\alpha \uparrow^s z\right)\), then their difference is certainly in this Ramanujan space. Then:

$$
\alpha \uparrow^s (z+1) - \alpha \uparrow^{s-1}\left(\alpha \uparrow^s z\right) = \frac{d^{s-2}}{dw^{s-2}}\Big{|}_{w=0}\sum_{k=0}^\infty \left(\alpha \uparrow^{k+2} (z+1) - \alpha \uparrow^{k+1}\left(\alpha \uparrow^{k+2} z\right)\right)\frac{w^k}{k!} = \frac{d^{s-2}}{dw^{s-2}} 0 = 0\\
$$


So since they agree on the naturals, they agree continuously. The downside is that we need to ensure that:

$$
\alpha \uparrow^{s-1}\left(\alpha \uparrow^s z\right)\\
$$

Is in this Ramanujan space. The closest I ever got to showing this was showing that:

$$
\alpha \uparrow^{s-1} \alpha = \alpha \uparrow^s 2\\
$$

This is actually manageable; showing this isn't too out there. But even then, it requires showing that the "gamma interpolation" works for hyper-operators. Last I looked at it, there was a lemma that escaped being proven which held it together... I'd have to go over my old notes, but I believe I reduced the problem to showing that the following converges:

$$
F(s) = \frac{d^{s-2}}{dw^{s-2}}\Big{|}_{w=0}\sum_{n=0}^\infty \alpha \uparrow^{n+2} \infty \frac{w^n}{n!}\\
$$

So if you can interpolate the fixed points of the bounded hyperoperators, you can interpolate the hyperoperators themselves. This is very doable, because as you increase \(s\) you get closer and closer to \(\alpha\).
#58
(05/26/2022, 11:33 PM)MphLee Wrote:
Forgive me the off-topic in the off-topic, but I want to highlight how much we would profit conceptually from having a unifying framework/language to treat, express, and compare all questions of this kind.
Notice how asking whether a solution to the Bennett equation can be modified into something that satisfies Goodstein would need and use the same framework. The same goes for the question of how all the various hyperoperations (lower, offsets, Ackermann...) compare with each other.

I agree entirely. That's why I use \(\langle s \rangle\) for the Bennett procedure. It would produce something very different from the \(\uparrow\) procedure, despite both satisfying Goodstein's equation.

Welcome to the wonderful world of advanced mathematics, Mphlee; where everyone calls everything something different, lol.
#59
(05/26/2022, 11:46 PM)JmsNxn Wrote: Welcome to the wonderful world of advanced mathematics, Mphlee; where everyone calls everything something different, lol.

Hahah, I know, I'm perfectly used to it... that's why I need definitions.

#60
Vittorio's limit formula:

Hey, everyone! So MphLee shared with me some of his work, and he explained a fixed-point formula for Goodstein's equation. I'd like to present it here as an algorithm which can be used to solve our problem.


Let's write:

$$
f(y,s) = x [s] y\\
$$

Where this \([s]\) is interpreted as the modified Bennett operations. Now, we are going to look at the inverse in \(y\), and call it \(f^{-1}(y,s)\). So for \(s=0\) this becomes \(f^{-1}(y,0) = y-x\), for \(s=1\) it becomes \(f^{-1}(y,1) = y/x\), and for \(s=2\) it becomes \(f^{-1}(y,2) = \log_x(y)\). Here, we have an analytic function \(f^{-1}(y,s)\) for \(s \in [0,2]\) and \(y>e\) (remember \(x>e\)).

What Vittorio's limit formula describes is a very, very weird way of identifying Goodstein's equation. In many ways, we are just searching for \(g\) such that:

$$
g(y,s) = g(g^{-1}(y,s+1) + 1,s+1)\\
$$

Now, this may seem obvious. But when you attach it to the fact that we can use Bennett to approximate these solutions, and that these solutions exist... well, why not just use Vittorio's limit formula (which, although MphLee didn't write it out perfectly, is exactly what he meant).

$$
g_n(y,s) = g_{n-1}(g^{-1}_{n-1}(y,s+1) +1,s+1)\\
$$

We are just solving this limit. And in solving this limit we have perfection...


Now, MphLee mostly described this as an algebraic/categorical property that any Goodstein-looking sequence satisfies. I want to say that this is a polynomial operation which converges. And it converges fast. And I'm naming it after Vittorio.

The main objects we have to consider here are binary operations, and we have to create an iterative procedure off of binary operations. This is something I never would've thought of. Although this appears, at face value, as something we've all seen, I believe it deserves to be called Vittorio's limit formula.

Which I'll describe precisely. Start with the modified Bennett operators:

$$
x\,[s]\,y = \exp^{\circ s}_{y^{1/y}}\left(\log^{\circ s}_{y^{1/y}}(x) + y\right)\\
$$
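At integer \(s\) the iterated exponentials reduce to ordinary iteration, and the formula can be checked to reproduce addition, multiplication, and exponentiation (note \(b^y = y\) when \(b = y^{1/y}\)). A sketch for integer \(s\) only; fractional \(s\) would need fractional iterates of \(\exp_b\) and is not attempted here:

```python
import math

def bennett(x, y, s):
    # x [s] y = exp_b^{∘s}( log_b^{∘s}(x) + y ),  with base b = y^(1/y); integer s only
    b = y ** (1.0 / y)
    t = x
    for _ in range(s):           # apply log_b, s times
        t = math.log(t, b)
    t += y
    for _ in range(s):           # apply exp_b, s times
        t = b ** t
    return t

x, y = 3.0, 4.0
assert abs(bennett(x, y, 0) - (x + y)) < 1e-9   # s = 0: addition
assert abs(bennett(x, y, 1) - x * y) < 1e-9     # s = 1: multiplication
assert abs(bennett(x, y, 2) - x ** y) < 1e-6    # s = 2: exponentiation
```

The choice of base \(y^{1/y}\) is exactly what makes all three classical operations come out on the nose at the integer levels.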

Call:

$$
x\,[s]^{-1}\, y\,\,\text{the inverse of}\,\,x\,[s]\,y\,\,\text{in}\,\, y\\
$$


Now, we are looking to solve on \([1,2]\) while we solve on \([0,1]\). This is done to solve the pasting issue. Now let's call:

$$
x\,[s]_1\,y = x\,[s+1]\,\left( (x\,[s+1]^{-1}\,y) +1\right)\\
$$

Now, we can continue this operation:

$$
x\,[s]_n\,y = x\,[s+1]_{n-1}\,\left( (x\,[s+1]_{n-1}^{-1}\,y) +1\right)\\
$$
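At integer \(s\), Goodstein's recursion \(x[s+1](y+1) = x\,[s]\,(x[s+1]\,y)\) says precisely that the classical operations are fixed points of this update, which is the sense in which iterating it should converge to a Goodstein solution. A quick check of that (my own illustration) with \(+,\times,\uparrow\):

```python
import math

# classical operations x <s> y and their inverses in y, for s = 0, 1, 2
ops     = {0: lambda x, y: x + y,  1: lambda x, y: x * y,  2: lambda x, y: x ** y}
ops_inv = {0: lambda x, y: y - x,  1: lambda x, y: y / x,  2: lambda x, y: math.log(y, x)}

def vittorio_step(op_next, op_next_inv, x, y):
    # one step of the update:  x [s]_1 y = x [s+1] ( (x [s+1]^{-1} y) + 1 )
    return op_next(x, op_next_inv(x, y) + 1)

x, y = 3.0, 5.0
for s in (0, 1):
    updated = vittorio_step(ops[s + 1], ops_inv[s + 1], x, y)
    # Goodstein's recursion makes the classical operations a fixed point of the update
    assert abs(updated - ops[s](x, y)) < 1e-9
```

For the modified Bennett operators at noninteger \(s\) the update genuinely moves, and the claim in this post is that iterating it converges toward a Goodstein solution.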


The really weird part now is that this solution doesn't work on its own. You have to solve for \([s+1]_n\) while you solve for \([s]_n\). The thing is... we can solve for:

$$
\begin{align}
x\,[s]_{n}\,y &= f(y)\\
x\,[s+1]_n\,y &= f^{\circ y}(q)\\
\end{align}
$$

You can actually do this pretty fucking fast... It just looks like iterating a linear function.

EDIT: Ack! I made a small mistake here; written like this, it is an idempotent iteration. The actual iteration is a little more difficult; I'll write it up when I can make sense of controlling its convergence...



Ladies and Gents, I present

Vittorio's Limit Formula

And how you turn Bennett into Goodstein


PS: To MphLee, the object \(x\,[s]\,y\) is in your attractive basin... You wrote it, but you didn't see it as a polynomial. \(x\,[s]\,y\) is so close to \(x\,\langle s\rangle\, y\) that iterations of your identity converge to it... God, I hope you get this...

