Holomorphic semi operators, using the beta method
#41
(05/10/2022, 11:38 AM)tommy1729 Wrote: Ok, I want to talk about the connection between the superfunction operator, left-distributivity, and analytic continuation.

First, the superfunction is not unique, but computing what a function is the superfunction of is almost unique; usually just a single parameter.

If we have a function f(x,y) that is analytic in x and y, and we take the superfunction F(z,x,y) (with the same method) with respect to x for every fixed y, where z is the number of iterations of f(x,y) with respect to x, then F(z,x,y) is usually analytic in both x and y!

Therefore the superfunction operator is an analytic operator.

This makes going from x <s> y to x <s+1> y (for sufficiently large s) preserve analyticity.

Secondly, we want

x <s> 1 = x for all s.

By doing that, we set up going from x <s> y to x <s+1> y as a superfunction operator.

This gives us an opportunity to get analytic hyperoperators.

Combining x <s> 1 = x, the superfunction method going from x <s> y to x <s+1> y, and the left-distributive property going from x <s> y to x <s-1> y, we get a nice structure for hyperoperators that connects to the ideas of iteration and superfunctions.

You see, we then get that x <s> y is EXACTLY the y-th iterate of x <s-1> y with respect to x, with starting value y. If we set y = 1 then x <s> 1 = x, showing that this is indeed taking superfunctions *starting at x* (for all s).

This implies that

x <0> y = x + y is WRONG.

We get, by the above:

x < 0 > y = x + y - 1

( x <0> 1 = x !! )

x < 1 > y = x y

( the super of +x+1-1, i.e. applying +x, y times )

x < 2 > y = x^y

( the super of x y; taking x * ... y times )

x < 3 > y = x^^ y 

( starting at x and repeating x^... )

This also allows us to compute x <n> y for any n, even negative n.
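A quick sketch of this ladder (Python; the helper name op and the spot check are mine, and only the integer rungs listed above are implemented):

Code:
def op(x, s, y):
    # integer rungs above, normalized so that op(x, s, 1) == x
    if s == 0:
        return x + y - 1              # x <0> y = x + y - 1
    if s == 1:
        return x * y                  # x <1> y = x y
    if s == 2:
        return x ** y                 # x <2> y = x^y
    if s == 3:                        # x <3> y = x^^y: start at x, apply x^(.) y-1 more times
        t = x
        for _ in range(y - 1):
            t = x ** t
        return t

for s in range(4):
    assert op(3, s, 1) == 3           # x <s> 1 = x on every rung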

That is a sketch of my idea.


Not sure how this relates to 2 < s > 2 = 4 ...

Now we only need to understand x <s> y for s between 0 and 1, while staying analytic at 0 and 1.

Gotta run.



Regards

tommy1729

Tom Marcel Raes

Hey, Tommy.

Well aware there are many different ways of approaching this problem; the way I am doing it is just one way. Quite frankly, this looks a lot like my original approach from a long time ago. There, I started with \(x \uparrow^0 y = x \cdot y\), did everything you just described, and tried to define \(x \uparrow^s y\), where notably I set \(x \uparrow^s 1 = x\). I'm trying something very, very different here. This does not equate to the old studies on this; that's why I shied away from the uparrow notation, because I wanted to insinuate an entirely different object.

That's why again, I'm only interested for \( 0\le s \le 2\) at the moment, because Bennet comes exceedingly close to satisfying the Goodstein recursion in this interval.

I'm well aware of the method you are describing, though; I'm still confident a solution for that is given by:

\[
\alpha \uparrow^s (z+1) = \frac{d^{s-1}}{dw^{s-1}}\frac{d^{z-1}}{du^{z-1}} \Big{|}_{w=0}\Big{|}_{u=0} \sum_{k=0}^\infty \sum_{n=0}^\infty \alpha \uparrow^{n+1} (k+2) \frac{w^n u^k}{n!k!},\qquad \Re(s) >0,\,\Re(z) > 0,\,\alpha \in(1,\eta)\\
\]

Largely because if you take the first differintegral and set \(z=0\), we get:

\[
\alpha e^w\\
\]

Then taking the second differintegral, we just get:

\[
\alpha\\
\]

Since the Mellin transform converges at these points, it's very likely it'll converge in a half plane for \(\Re(s),\Re(z) \ge 0\). I made some progress on this, but I could never plug one of the leaks. Additionally, the first differintegral always converges by the work done on bounded analytic hyperoperators, where:

\[
\alpha \uparrow^n (z+1) = \frac{d^{z-1}}{du^{z-1}}\Big{|}_{u=0} \sum_{k=0}^\infty \alpha \uparrow^n (k+2) \frac{u^k}{k!}\\
\]

Where we define the terms in the sum recursively as:

\[
\alpha \uparrow^n (k+2) = \alpha \uparrow^{n-1} \alpha \uparrow^{n-1} \cdots\,(k+2\,\,\text{times})\,\cdots \uparrow^{n-1} \alpha\\
\]
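A minimal sketch of this recursion (Python; the names uparrow and taylor_sum are mine, and I'm assuming \(\uparrow^1\) is plain exponentiation with everything associating to the right, as in the bounded-hyperoperator work):

Code:
import math

def uparrow(alpha, n, k):
    # alpha up^n k, right-associative: alpha up^n k = alpha up^{n-1} (alpha up^n (k-1))
    if n == 1:
        return alpha ** k
    if k == 1:
        return alpha
    return uparrow(alpha, n - 1, uparrow(alpha, n, k - 1))

def taylor_sum(alpha, n, u, K=40):
    # truncation of sum_k alpha up^n (k+2) u^k / k!, the series fed to the differintegral;
    # for alpha in (1, eta) the terms alpha up^n (k+2) stay bounded, so this is entire in u
    return sum(uparrow(alpha, n, k + 2) * u ** k / math.factorial(k) for k in range(K))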


But this, as far as I'm concerned, is irrelevant for this thread. I'm trying to do something very different here.


YES!

I've got it down to one equation. This is only a good first order approximation, but if this is working it's a very good sign. So, let's call:

\[
\varphi_3(y,s) = \log^{\circ s+1}_{(y+1)^{1/(y+1)}}\left(x\langle s\rangle_{\varphi_1}\left(x \langle s+1\rangle_{\varphi_2} y\right)\right) - y - 1 - \log^{\circ s+1}_{(y+1)^{1/(y+1)}}(x)\\
\]

We can treat this linearly up to about 3 digits in the interval \(\varphi_1,\varphi_2 \in [-1,1]\).  So call:

\[
\begin{align}
\rho_1(y,s) &= \frac{\partial \varphi_3}{\partial \varphi_1}\Big{|}_{\varphi_1 =0}\\
\rho_2(y,s) &= \frac{\partial \varphi_3}{\partial \varphi_2}\Big{|}_{\varphi_2 =0}\\
\end{align}
\]

And let \(C(y,s) = \varphi_3(y,s)\Big{|}_{\varphi_1,\varphi_2 = 0}\)

Then, the first order approximation, which is surprisingly accurate, looks like this:

\[
\varphi_3(y,s) = C(y,s) + \rho_1(y,s)\varphi_1 + \rho_2(y,s)\varphi_2\\
\]
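For concreteness, the tangent-plane data can be read off any computable stand-in for \(\varphi_3\) by central differences; a sketch (Python; phi3 here is a placeholder callable with signature phi3(y, s, p1, p2), which is my assumption, not the author's actual code):

Code:
def tangent_data(phi3, y, s, h=1e-4):
    # C, rho_1, rho_2 of the tangent plane at (phi_1, phi_2) = (0, 0)
    C    = phi3(y, s, 0.0, 0.0)
    rho1 = (phi3(y, s, +h, 0.0) - phi3(y, s, -h, 0.0)) / (2 * h)
    rho2 = (phi3(y, s, 0.0, +h) - phi3(y, s, 0.0, -h)) / (2 * h)
    return C, rho1, rho2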

Now, the first restriction we make on the plane is that \(\varphi_2 = \varphi_3(y-1,s)\), which I detailed above. So now we have a first order difference equation that happens to be linear:

\[
\varphi_3(y,s) = C(y,s)+ \rho_1(y,s)\varphi_1 + \rho_2(y,s)\varphi_3(y-1,s)\\
\]

Can you guess the solution.....? [Enter infinite compositions, stage right.]


\[
\varphi_3(y,s) = \Omega_{j=1}^\infty \frac{\rho_1(y+j-1,s)\varphi_1 - C(y+j-1,s) + z}{\rho_2(y+j-1,s)}\bullet z \Big{|}_{z=0}\\
\]
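For what it's worth, a truncation of that composition can be evaluated by unwinding the recurrence from a cutoff back toward \(y\), using \(\varphi_3 \to 0\) at infinity as the boundary condition. The coefficients below are only stand-ins built from the asymptotics quoted later in the thread (\(\rho_1 \sim y^{-3}\), \(\rho_2 \sim 1+1/y\)), with \(C\) taken as \(-1/\log^2(y)\) so the truncation actually settles at the \(\sum 1/(n\log^2 n)\) rate reported in post #49; the signs and indexing come from solving the difference equation backwards, and may differ from the \(\Omega\) expression by convention:

Code:
import math

def phi3(y, phi1, N=100000):
    rho1 = lambda t: t ** -3.0                 # stand-in asymptotics, not measured data
    rho2 = lambda t: 1.0 + 1.0 / t
    C    = lambda t: -1.0 / math.log(t) ** 2
    z = 0.0                                    # boundary condition: phi3 -> 0 at infinity
    for j in range(N, 0, -1):                  # innermost map first
        t = y + j
        z = (z - C(t) - rho1(t) * phi1) / rho2(t)
    return z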


Now, I was initially doubtful this converges, as I looked at it. But the coefficients are all decaying just fast enough. I'd estimate something like \(1/n^{3/2}\), maybe a bit faster. But this converges... The value \(\rho_1\) drops to zero pretty damn fast. The value \(\rho_2\) tends to 1 moderately fast. The only trouble value I see is going to be \(C\), but it still looks like it's converging. All hope is not lost if this diverges; we'd just have to use a different technique to solve the first order recurrence equation.

This gives the first order approximation, which is almost exact. You'd have to work harder for the actual solution, but if the linear version is converging, then the actual solution should converge too. I'm dreading writing that out though. It's going to be one helluva nasty infinite composition...

Then, all we have to do is make sure that \(\varphi_1\) satisfies its equation, and we'd have the solution. Jesus, that's going to be tough though. I doubt this will look like a linear equation.

But we definitely can use the infinite composition to get the correct answer if you guess about where \(\varphi_1\) is... But you also have to modify this equation a tad, and you have to let \(C\) depend on \(\varphi_1\) a tad, I'm not sure why. But that's what makes these equations converge.

Think of it like a Newtonian approximation. You have to guess about where \(\varphi_1\) is for \(x,y,s\), then you run this equation to get \(\varphi_3(y,s)\), and then you get \(\varphi_2 = \varphi_3(y-1,s)\).

JESUS! This is working out too well. We're going to have a hell of a time solving this \(\varphi_1\) anomaly though.
#42
Okay, I've figured out the solution. All we need is a second order difference equation.

\[
\varphi_3(y,s) = \left(1-\rho_1\right) C(y-2,s)+ \left(\rho_1 + \rho_2\right)\varphi_3(y-1,s)-\rho_1\rho_2 \varphi_3(y-2,s)\\
\]

Where:

\[
\varphi_3(y-1,s) = \varphi_2\\
\]

And \(\varphi_1\) is given by a different equation hidden in the one above. The values \(\rho_1,\rho_2\) are partial derivatives of the form:

\[
\begin{align}
\rho_1 &= \frac{\partial \varphi_3}{\partial \varphi_1}\\
\rho_2 &= \frac{\partial \varphi_3}{\partial \varphi_2}\\
\end{align}
\]

Where we're looking at \(\varphi_3\) as a tangent plane.

And again,

\[
C(y,s) = \varphi_3 \Big{|}_{\varphi_1,\varphi_2 = 0}
\]

This turns the really hard problem into an infinite composition problem. I'm going to refrain from posting for a while, until I have a well working theory, and at least a confirmatory program. So, give me a week or two. I see the math, but there's a lot of work to solve this.
#43
About the recent MO question of yours: what if you just set up the system of equations relating the various \(\varphi_i\) and their derivatives, and ask for solution sets and solvability conditions via differential-geometric means? Maybe this way you could attract experts in the field of solving such things, without them knowing the problem the equation originates from.
In order to do this you could make it look as if it were a general textbook exercise, totally self-contained and unrelated to hyper/Goodstein and things like that.

I tried to follow the equations and your post and come up with the system myself, independently, but it felt like trimming my brain cells (I have fewer neurons now xD). At the intuitive level, it seems to me that there are too many nested layers of functional dependency to extract implicit functions without A) becoming insane, B) doing something wrong, or C) hitting some serious obstruction to the existence of a solution.

MSE MphLee
Mother Law \((\sigma+1)0=\sigma (\sigma+1)\)
S Law \(\bigcirc_f^{\lambda}\square_f^{\lambda^+}(g)=\square_g^{\lambda}\bigcirc_g^{\lambda^+}(f)\)
#44
That's a good idea, MphLee, but I guess people just don't like my questions on MO. Maybe I'm asking wrong, but far more often than not I never get an answer. Or if I get an answer, it doesn't answer the question. Out of all the questions I've asked on MO, I can count on one hand the number of times it's actually helped me.



That being said, I've been trying to solve this problem from a more analytic perspective. The trouble I've been having now is the slow decay of quantities I was hoping would decay much faster.

If I write:

\[
F(x,y,s,\varphi_1,\varphi_2,\varphi_3) = x \langle s\rangle_{\varphi_1} \left(x \langle s+1\rangle_{\varphi_2} y\right) - x \langle s+1\rangle_{\varphi_3} (y+1) = 0\\
\]

Then:

\[
\begin{align}
\varphi_3(x,y,s) &= C(x,y,s) + \rho_1(x,y,s) \varphi_1(x,y,s) + \rho_2(x,y,s) \varphi_2(x,y,s)\\
\rho_1 &= \frac{\partial \varphi_3}{\partial \varphi_1} \Big{|}_{\varphi_1 = \varphi_2 = 0}\\
\rho_2 &= \frac{\partial \varphi_3}{\partial \varphi_2} \Big{|}_{\varphi_1 = \varphi_2 = 0}\\
\end{align}
\]

Serves as a fantastic first order approximation, as it's the tangent plane about zero. Now, I cannot solve this system of equations as well as I'd like, but I can set up a solution fairly well. Firstly, we can ignore \(x\): so long as \(x > e\), everything goes through the same way (so drop \(x\) from the picture). The first requirement we need is that:

\[
\varphi_3(y,s) = \varphi_2(y+1,s)\\
\]

So we can rewrite this equation as:

\[
\varphi_3(y,s) = C(y,s) + \rho_1(y,s) \varphi_1(y,s) + \rho_2(y,s) \varphi_3(y-1,s)\\
\]

Now, I can only somewhat prove the following; but even where I can't prove it, numerical experimentation gives us good asymptotics in \(y\). Here, we don't really need to talk about \(s\), so drop it from the picture too. \(s\) only makes an appearance when we're talking about \(\varphi_1\), and even then it only appears as a shift \(s \mapsto s-1\). So let's write this out again:

\[
\varphi_3(y) = C(y) + \rho_1(y) \varphi_1(y) + \rho_2(y) \varphi_3(y-1)\\
\]

The first term \(C(y)\) has some interesting asymptotics. My best guess is that it looks like \(1/\log(y)\). So it tends to zero, but does so very slowly. It's difficult to suss out an exact growth rate, because my code starts to get inaccurate around \(1E10\); it's calculating the super exponential of base \(1E10^{1/1E10} \approx 1\), and this is a natural artifact which occurs near \(1\).  Nonetheless, I am confident it tends to zero, and should tend somewhere like \(1/\log(y)\); if not that, maybe something a bit slower, but I doubt it.

The second term \(\rho_1(y)\) is our knight in shining armor. When I started feeling doubtful of this working, \(\rho_1\) brought me back in the game... The value \(\rho_1(y) \le 1/y^e\), and it decays at least that fast. If I were to wager a guess, it's something like \(\rho_1(y) \approx 1/y^{x-\delta}\); it's probably asymptotically about \(1/y^x\) though.

The third term is a bit trickier, and doesn't behave as nicely as one would like. But it's not as bad as the first term, which is our trouble value. The third term still has nice behaviour: the value \(\rho_2(y) \approx 1+\frac{1}{y}\).

What does this mean?

Well, I can't prove it, but we can expect \(\varphi_3(y) \to 0\) just like \(C(y)\). If \(C(y)\) had faster convergence, I could box everything in and it would be solved. But because it decays so slowly, it's really making things difficult. So I'm going to have to do some kind of change of variables, not sure where yet, but I need to somehow factor out this \(C(y)\).

What this means is that for very large \(y\), we can expect our difference equation to look like this:

\[
\varphi_3(y) = -\frac{1}{\log(y)} + \frac{\varphi_1(y)}{y^3} + \left(1+\frac{1}{y}\right) \varphi_3(y-1)\\
\]

The value \(\varphi_1\) shoots to zero so fast that for large values it becomes inconsequential. WHICH IS REALLY REALLY GOOD. We can think of this in terms of the tangent plane of \(F\) about \(\varphi_1,\varphi_2 = 0\). For large \(y\), this essentially just becomes \(\varphi_3 = \frac{-1}{\log(y)} + \varphi_2\); it looks like a run of the mill \(y=x\) graph with a small offset.

Another good thing about \(\varphi_1\) shooting to zero so fast is that it implies that in the region \(0 \le s \le 1\) the value of \(\varphi\) for \(x \langle s\rangle_\varphi y\) is very, very small, whereas for \(1 \le s \le 2\) the values are merely small (relative to the other interval). We only care about \(\varphi_1\) when \(s=1\) though, which allows us to glue \(0 \le s \le 2\) together. Not much progress on that part, but knowing that this value almost flatlines here is a very good sign.

Which means, for very large \(y\), \(\varphi_3\) does not depend on \(\varphi_1\), or its dependence is negligible. So for large values of \(y\), we are really just trying to solve the difference equation:

\[
f(y) = \frac{-1}{\log(y)} + \left(1+\frac{1}{y}\right) f(y-1)\\
\]

The solution of which, as \(y \to \infty\), is zero. So theoretically, the solution can be found, if we can massage this equation to be solvable despite the slow decay of \(\frac{1}{\log(y)}\). I've made a bit of progress, but I'm not sure yet.
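A sketch of that unwinding for the model equation (Python; the cutoff Y and the source hook are mine). One caveat worth flagging: since \(1+1/t\) damps a contribution from level \(t\) down to level \(y\) by exactly \((y+1)/(t+1)\), the source \(1/\log(t)\) contributes \(\sum_t (y+1)/((t+1)\log t)\), which creeps up like \(\log\log Y\) instead of settling; this is the same \(1/(n\log n)\) borderline that post #49 below narrowly escapes. A source decaying slightly faster (swap in 1/log(t)**2) does settle, just extremely slowly:

Code:
import math

def f(y, Y=10**6, source=lambda t: 1.0 / math.log(t)):
    # f(t) = -source(t) + (1 + 1/t) f(t-1), unwound via
    # f(t-1) = (f(t) + source(t)) * t / (t + 1), starting from f(Y) = 0
    val = 0.0
    for t in range(Y, y, -1):
        val = (val + source(t)) * t / (t + 1.0)
    return val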

But what I am sure of is that one can show:

\[
\begin{align}
\varphi_3(y) &\to 0\,\,\text{as}\,\,y\to\infty\\
\varphi_2(y) &\to 0\,\,\text{as}\,\,y\to\infty\\
\varphi_1(y) &\to 0\,\,\text{as}\,\,y\to\infty\\
\end{align}
\]

Their respective decays are something like:

\[
\begin{align}
\varphi_3(y) &\approx \frac{1}{\log(y)}\\
\varphi_2(y) &\approx \frac{1}{\log(y-1)}\\
\varphi_1(y) &\approx y^{-e}\\
\end{align}
\]

So ultimately, this means the convergence we'd have now looks like:

\[
\sum_{n=0}^\infty \frac{1}{\log(y+n+1)} - \frac{1}{\log(y+n)}\\
\]

Which, yes, it converges. But for fuck's sakes, that's the worst convergence possible. It's wayyyyyyyyyy too slow to be feasible. So I have to speed this up somehow.
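Spelling the telescoping out (an added step for clarity): the partial sums collapse to

\[
\sum_{n=0}^{N-1}\left(\frac{1}{\log(y+n+1)} - \frac{1}{\log(y+n)}\right) = \frac{1}{\log(y+N)} - \frac{1}{\log(y)},
\]

so the limit is \(-1/\log(y)\), but the tail after \(N\) terms is still of size \(1/\log(y+N)\): cutting the tail by a factor of ten means raising the number of terms to the tenth power.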



Why do we care about this?

Well, this means that if I write Bennet's operators:

\[
x [s] y = \exp^{\circ s}_{y^{1/y}}\left(\log^{\circ s}_{y^{1/y}}(x) + y\right)\\
\]

Then:

\[
x[s]\left(x[s+1] y\right) - x\langle s+1 \rangle_{C(y)}(y+1) =0\\
\]

And \(C(y) \to 0 \) like \(-1/\log(y)\).

I can't prove this yet, and the most I've been able to test is up to \(y \approx 1E10\), and there it is only \(0\) up to about \(20\) digits. Again, that's because we start to get too close to doing tetration for \(b \approx 1+\delta\), and that is insufferably hard using Schröder. I might have to switch to the beta method here; that's a whole other nest of problems though...
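Worth spelling out why Schröder applies here at all: the base \(b = y^{1/y}\) makes \(y\) itself a fixed point of \(\exp_b\) (since \(b^y = y\)), with multiplier \(\log(y) > 1\) for \(y > e\). So the real iterates \(\exp^{\circ s}_b\) can be computed through Koenigs coordinates at that repelling fixed point. A minimal double-precision sketch (Python; the function names, tolerances, and keeping the base in log form are my choices, and accuracy tops out around five or six digits, consistent with the precision complaints above):

Code:
import math

def bennet(x, y, s):
    # quasi-Bennet x[s]y = exp_b^{os}(log_b^{os}(x) + y), b = y^(1/y), for x, y > e
    lnb = math.log(y) / y                   # log of the base, kept in log form (b itself rounds to 1.0 for huge y)
    lam = math.log(y)                       # multiplier of exp_b at its fixed point y
    tol = 1e-7 * y                          # crude switch-over scale near the fixed point

    def psi(t):                             # Koenigs coordinate: log_b attracts t > 1 toward y
        n = 0
        while abs(t - y) > tol:
            t = math.log(t) / lnb
            n += 1
        return (t - y) * lam ** n

    def psi_inv(w):                         # inverse coordinate: push back out with exp_b
        n = 0
        while abs(w) > tol:
            w /= lam
            n += 1
        t = y + w
        for _ in range(n):
            t = math.exp(t * lnb)
        return t

    def exp_s(t, sig):                      # exp_b^{o sig} for real sig (log_b^{o -sig} when sig < 0)
        return psi_inv(lam ** sig * psi(t))

    return exp_s(exp_s(x, -s) + y, s)

# spot checks: bennet(5,7,0) ~ 12, bennet(5,7,1) ~ 35, bennet(5,7,2) ~ 78125,
# with fractional s interpolating in between

The Goodstein defect \(x[s]\left(x[s+1]y\right) - x[s+1](y+1)\) at fractional \(s\) can then be sampled directly, and per the above it is governed by the slowly decaying \(C(y)\).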

But all signs are pointing to how you put it, MphLee:

Bennet WANTS to become Goodstein.
#45
(05/10/2022, 11:38 AM)tommy1729 Wrote: Ok, I want to talk about the connection between the superfunction operator, left-distributivity, and analytic continuation.
[...]

However, this idea has issues as well. And with "as well" I mean to express my skepticism toward all known hyperoperators.

A typical issue, in fact:

Let <-s> denote negative hyperoperators.

From the three conditions above we get

x <-s> y = y + 1,

well, at least for integer s > 0.

This is of course problematic considering the superfunction operator, and the loss of information in the x parameter.

We cannot simply say without "shame" that the *super* of x <-2> y is x <-1> y = y + 1, and that the next super is not that fixed-point function but x + y - 1.

It is also weird to think about functions x <-1.5> y lying between y + 1 and, well, y + 1?!

Almost every hyperoperator proposed for generalization has similar issues:

for negative orders the equations are undefined or inconsistent (log(0), oo, or non-unique values), or we end up with identity or successor functions.

For me that is not a small issue.

The problem runs deep; I mean, this issue arises for almost every occurrence of a linear function.

And to fix the issue we could try stuff like x <1> y = x^2 + y^2 + 1, but this is not really what we want, is it??

We could also try stuff like x <s> y = a*( x <s-1> (x <s> (y-1)) ) + b*( (x <s> (y-1)) <s-1> x ) with a + b = 1.

But that also does not seem to be what we want; it lacks nice solutions and has similar problems...

So we are not stuck but super stuck. 

(ok that is a bit of a joke )

 
Regards 

tommy1729
#46
(05/20/2022, 12:14 PM)tommy1729 Wrote: So we are not stuck but super stuck. 

(ok that is a bit of a joke )

 
Regards 

tommy1729


Tommy, you are entirely avoiding the point of this construction.

I don't know how many times I have to tell you that \(\Re(s) < 0\) is beyond our purview. Similarly with \(\Re(s) > 2\). This is in many ways completely unrelated to hyperoperators, we are simply talking about between addition, multiplication and exponentiation. And there, we are not even trying to really solve anything to do with hyperoperators. It just so happens that \(0\) is addition, \(1\) is multiplication and \(2\) is exponentiation. They satisfy a similar equation, yes, but there's no initial value like \(x <s> 1 = x\), which you keep on trying to force into these equations.

In fact, I expect the entire issue you're talking about with successorship at \(-1\) is handled by the fact IT WON'T BE ANALYTIC HERE. It probably won't be analytic along the entire line \(s \in (-\infty,-1]\), probably on an even larger domain. What I can say is that \(x <s> y\) will be analytic for \(0 \le s \le 2\) and for \(x,y > e\). This is found solely from the implicit function theorem. There's zero question. The trouble is: how do we construct this solution aptly and quickly?

Without a shred of doubt, there's an analytic function \(\phi(x,y,s)\) such that:

\[
x \langle s\rangle y = \exp_{y^{1/y}}^{\circ s}\left(\log^{\circ s}_{y^{1/y}}(x) + y + \phi(x,y,s)\right)\\
\]

and that:

\[
x \langle s\rangle \left( x \langle s+1\rangle y\right) = x \langle s+1\rangle (y+1)
\]

You can actually make a pretty good guess of this function by letting

\[
\phi(x,y,s) \approx -A(s)/\log(y)\,\,\text{as}\,\,y\to\infty,\,\,1\le s \le 2\\
\]

Where \(A(s)\) can be approximated using computations.

Then a Newtonian root finder takes care of the strip \(1 \le s \le 2\). Then all you have to do is make sure the pasting is correct, which isn't very hard to do. All we need is that \(\phi(x,y,s)\) is analytic at \(s=1\), and since we still have one degree of freedom at this point, this is pretty simple to do. The main trouble I am having is how to program this efficiently and construct it effectively mathematically. At this point it's just an implicit solution.
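The Newton step itself is routine; a hedged sketch (Python; residual is a placeholder for whatever pinning of Goodstein's equation in \(\phi\) one uses at fixed \(x,y,s\), and the numeric derivative is my shortcut):

Code:
def solve_phi(residual, phi0=0.0, tol=1e-12, h=1e-6, itmax=50):
    # Newton iteration on a scalar residual(phi) = 0, derivative by central difference
    phi = phi0
    for _ in range(itmax):
        r = residual(phi)
        if abs(r) < tol:
            break
        drdphi = (residual(phi + h) - residual(phi - h)) / (2 * h)
        phi -= r / drdphi
    return phi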

This solution certainly exists, but programming it is very damn hard because we need to take very large \(y\), and at that point the code starts to deteriorate, because we're taking exponentials of things like \(y = 1E20\). And unfortunately at this point \(-1/\log(y)\) isn't small enough to induce an efficient convergence. Doesn't change the fact that this function will exist. The implicit function theorem guarantees it.

I don't care what happens when \(\Re(s) < 0\); we can get there when we get there by iterating the subfunction operator, but I don't care about that. I bet it'll be absolutely disastrous and will give little to no enlightenment. I'm also expecting this to crash and burn exactly at \(s = -1\), and that's okay by me. I don't care about that. I care about looking at the solution between \(0 \le s \le 2\). And even there, complex \(s\) is going to pose a difficult problem, because the domain in \(y\) is very unclear for complex \(s\). But when \(s\) is real and in this strip, we get the following benefits:


\[
\begin{align}
&\phi(x,y,s)\,\,\text{is small for}\,\,x,y>e,\,\,0 \le s \le 2\\
&\phi(x,y,s)\,\,\text{is real valued}\\
&x \langle s \rangle y > y\,\,\text{so there are no domain issues}\\
\end{align}
\]

It's actually possible to find these solutions pretty accurately for \(y > 1E20\); it looks increasingly like:

\[
\begin{align}
\phi(x,y,s) &\approx A(s)y^{-c}\,\,\text{for}\,\,0\le s \le 1\,\,\text{and some}\,\,c = c(s) > 0\\
\phi(x,y,s) &\approx -B(s)/\log(y)\,\,\text{for}\,\, 1\le s \le 2\\
\end{align}
\]

Once the solution exists for \(y > 1E20\), or somewhere very large, you can pull back using the relationship \(\varphi\) has to Bennet.

Again, I can't stress this enough. There is an implicit local solution. The monodromy theorem makes sure it's analytic. FULL STOP. I don't care about \(x<s>1 = x\). Yes, there are thousands of issues with that definition when going backwards. I'm trying to do something very different here.
#47
My apologies, James.

But there are reasons why I want more.

I want to find "the" hyperoperator.

Secondly, I was

1) trying to find a way to solve your equations, and I felt it was too general, which brought me to

2) wanting a uniqueness criterion, since I doubted your equation has a unique solution.

Which brings me to my questions for you:

How do you think about uniqueness ?

I think there is no uniqueness in your setting.

This might be problematic in the sense that the bundle of functions that satisfy this, and do not have closed forms, might be best described by the equations and nothing else??

That is maybe way too pessimistic, but I compare it with differential equations in many variables with a large number of solutions: no single solution has a closed form, it is hard to describe all solutions in terms of a given one, and we describe the set of all solutions just by the differential equation itself.

***

Another thing that bothers or confuses me is this:

x + y and x * y are commutative.

x^y is not.

Should operators between x + y and x*y be commutative or not?

I guess you say not.

But going from commutative to noncommutative, back to commutative, and then noncommutative again bothers me.

Maybe it is just me.

And maybe I'm too focused on superfunctions.

But that is all I know (superfunctions) when it comes to hyperoperators.

For me, asking 2 <s> 2 = 4 seems like a superfunction interpretation too.

And without x <s> 1 = x I feel I have no starting point.

I'm not sure if what you want is a half-superfunction idea; I think not.

***

What happens when we simply interpolate?

x <s> y = a*( x <A> y ) + b*( x <B> y ) + c*( x <C> y )

where a, b, c, A, B, C are functions of s? Preferably simple functions?



Could that work??

***

Finally, I'm not even sure you still want x <3> y = x^^y now, which brings up the question: how is this related to tetration or Ackermann? And if it is not, is it not supposed to be?

***

I'm not trying to sound hostile to your ideas, sorry.


  
 regards

tommy1729
#48
(05/22/2022, 12:17 AM)tommy1729 Wrote: My apologies, James.
[...]

Hey, Tommy. It's not a problem at all; this isn't a solution for analytic inbetween hyper-operators, and as long as you understand that, I'm okay. I'll try to answer your questions point by point.

Uniqueness only exists if you think of this with respect to Bennet's formula. This is not "unique" in a general sense. There are probably many ways to interpolate "hyper-operators" in a meaningful way. The manner in which this is unique is that there is ONE function \(\phi(x,y,s)\) for \(x,y > e\) and \(0 \le s \le 2\) such that:

\[
x \,\langle s \rangle y = \exp^{\circ s}_{y^{1/y}}\left(\log^{\circ s}_{y^{1/y}}(x) + y + \phi(x,y,s)\right)\\
\]

Satisfies Goodstein's equation. Remember though, that this is only a small correction which turns the quasi Bennet operators:

\[
x [s] y = \exp^{\circ s}_{y^{1/y}}\left(\log^{\circ s}_{y^{1/y}}(x) + y\right)\\
\]

Into satisfying Goodstein's equation. So in that sense, yes, it is unique: it's the only function we can put in here to satisfy Goodstein's equation. Additionally, we ask that \(\phi(x,y,s) \to 0\) as \(y\to \infty\); this does force uniqueness. This is covered by the implicit function theorem, and the restrictions we are making on the surface in question.



Okay, so the commutative stuff is a no-brainer. I mean, we can't have an analytic function \(a <s> b = b <s> a\) for \( 0\le \Re(s) <2\) that all of a sudden becomes noncommutative at \(s= 2\). Think of it this way: the values \(s=0\) and \(s=1\) are these magical values where it happens to be commutative. The operators are in general not commutative, but there are a couple of instances where they then are commutative. I mean, maybe there are beautiful values \(0 \le \Re s_0 \le 2 \) such that this expression is commutative again, but for the moment, we can assume that it is probably non-commutative. Until you find a value that is commutative, I'm going to assume non-commutative. You're trying to add an a priori assumption to the solution.



Okay, so a linear interpolation would definitely never work. I'm not sure why you're bringing that up. At least, it doesn't work and has nothing to do with this discussion.

Similarly, \(2 <s> 2 = 4\) was largely a test value for how to define these things. I'm not going to concern myself with that anymore. That's on me though; this thread is a mess. This thread is far too fucking disorganized. I apologize, because I've probably confused you. A lot of what I wrote in this thread was discovered on the fly. I have a much better understanding now.



Yes, so \(x\langle 3 \rangle y = x\uparrow\uparrow y\). Yes, I am expecting this to happen. But, the equation/computation/construction is secondary to this. When you let \(s > 2\), you get that \(\phi\) blows up wayyyyyyyyy too fast. Now, this doesn't mean it's incorrect. It just means it's a hopeless affair, and this whole effort becomes useless. I don't care about inbetween exponentiation and tetration, there's nothing to be gained from this avenue.

BUT!!!!

If \(f(y) = x \langle s+1\rangle y\), then yes, \(f^{\circ y}(u_0) = x \langle s+2 \rangle y\) for some value \(u_0\) and \(<s+2>\) "between exponentiation and tetration". The thing is, though, that this is kind of a meaningless statement, other than that \(s \mapsto s+1\) is the same as taking the superfunction. But we don't have initial values as before. And even further! The equations will begin to break down around here; at \(s=2.5\) we can expect \(\phi\) to be UNBELIEVABLY HUGE, and there's nothing we can do. At least from a computational aspect.

The reason I'm focusing on \(0 \le s \le 2\) is because the value \(\phi\) has very regular growth/is very small/it's feasible to program in.



All in all, think of what I'm trying to do as finding an efficient way of calculating and computing when \(0 \le s \le 2\). I know I haven't explained myself perfectly, but I promise you this is working. I wish I could post all of my code and my experimentations. It's going to be my summer project to explain all this. But again, we are keeping \(0 \le s \le 2\) because this is where quasi-Bennet operators look a lot like "inbetween addition, multiplication, exponentiation". Technically this would work around tetration too, but that avenue of pursuit would be fruitless. It's just that we're really lucky that in this strip \(\phi\) stays very small, so that we are okay.

As the first big result I have, we can write:

\[
x[s]\left(x[s+1] y\right) = x \langle s+1\rangle_{\phi} (y+1)\\
\]

And then I can show that \(\phi \sim A(s)/\log(y)\). This only works for \(0 \le s \le 1\). If you start talking about this equation for \(s>1\), it's fucking useless; this identity does not hold. That's because \(x[3]y\) DOES NOT LOOK LIKE TETRATION. Trying to find an error here is a fruitless affair. At least from a computational perspective.


So, in conclusion

For \(0 \le s \le 2\) we get that:

\[
x [s] y \approx x \langle s \rangle y\\
\]

And that \(\phi\) is SUPER well behaved here. But here and only here. As soon as we leave this domain, a bunch of errors and chaos happens. Which, I mean, is much like what you're describing with the iterated successorship problem, where \(-1,-2,-3...\) are all successorship, which is obviously a problem. My solution, though, doesn't touch these values, and if you try to, expect a branch cut/non-analyticity. Honestly, I don't know what's going to happen, but I know it'll be singular in some way once we start talking about \(s < 0\).



I apologize for how off the rails and disorganized this thread is. But I was transcribing these results in real time. I know what's happening now. And I know how it works now (largely because I've run a lot of computational trials). I don't want to post how all this works until I have a good working model, which I don't have yet. So I'm mostly just writing code, trying to find an efficient way of finding \(\phi\), and explaining how this happens.

I apologize if this is all over the place. But I'm confident (not even confident, absolutely sure) there's an implicit solution to this problem. The real problem is constructing the solution. And any attempt I have at constructing this solution is sidelined by poor code, and the inability to sample values for \(y > 1E20\) and at the same time only having decay like \(-1/\log(y)\).

Regards, James
#49
Alright!

For fucks sakes. I got by by the skin of my teeth. I have an analytic solution. It's going to take a while to get it coherent and write it up well enough. I got super fucking lucky though. The infinite composition needed to solve this first order difference equation converges like:

\[
\sum \frac{1}{n\log(n)^2}\\
\]

Which is about the slowest possible convergence. So don't expect an efficient algorithm as of yet. But I can derive an analytic solution, because thank fuck that it wasn't \(\frac{1}{n\log(n)}\), which diverges.
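For the record, the integral test is what separates the two (an added step, not in the original post):

\[
\int_2^\infty \frac{dt}{t\log(t)^2} = \left[\frac{-1}{\log t}\right]_2^\infty = \frac{1}{\log 2} < \infty, \qquad \int_2^\infty \frac{dt}{t\log(t)} = \Big[\log\log t\Big]_2^\infty = \infty.
\]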

So the solution I can construct analytically converges, but it's as slow as fucking possible -_-.
#50
(05/25/2022, 04:04 AM)JmsNxn Wrote: [...] The infinite composition needed to solve this first order difference equation converges like:

\[
\sum \frac{1}{n\log(n)^2}\\
\]

[...] So the solution I can construct analytically converges, but it's as slow as fucking possible -_-.

Well, apart from not knowing what you have discovered:

\[
\sum \frac{1}{n\log(n)\left(\log\log(n)\right)^2}\\
\]

is much slower.
And you can keep getting slower sums by adding iterated logs.
And after that, slog-type stuff.

But perhaps you meant slowest with one log.

Anyway,

Let f(x,s,y) be a hyperoperator, or at least an analytic (or continuous) function on some intervals in x, s, y.

Then how do you feel about the set of equations:

f(x, s-1, f(x, s, y)) = f(x, s, f(y, s-1, 1))
f(x, s, f(y, s-1, z)) = f(f(x, s, y), s-1, f(x, s, z))

or in TeX:

\[f(x , s-1 , f(x , s , y)) = f(x , s, f(y , s-1 , 1))\]
\[f(x , s , f(y , s-1 , z)) = f(f(x , s , y) , s-1 , f(x , s , z))\]

solved simultaneously.

Does this work out?

Uniqueness?
 

Sorry if I change things a bit again.

regards

tommy1729

