Posts: 1,645
Threads: 369
Joined: Feb 2009
(03/23/2022, 03:19 AM)JmsNxn Wrote: ...
Where we have the identity: \(x<k>(x<k+1>y) = x<k+1> y+1\). Good ol fashioned hyperoperators.
...
You should be able to construct an implicit solution to the equation:
$$
x <s> (x<s+1>y) = x <s+1> y+1\\
$$
For all \(x \in \mathbb{C}\setminus\mathcal{E}\) and \(y \in \mathcal{W} + \mathbb{Z}\), where \(\mathcal{E}\) is measure zero in \(\mathbb{R}^2\).
I mean, this problem is really solved if you think of it implicitly. We are just varying \(\mu,\lambda\) until we find a solution to the above equation while we freely move \(s\). This is very fucking difficult to do. I have not done it, as this would require a good 20 pages of work, but it is definitely possible. I may come back to this, but for the moment my brain is switching to PDE/ODE territory, and this type of research is secondary.
Regards, James
But now you tell me you do not want left distributive?
BUT that equation IS left distributive?
So you do not want to satisfy that equation??
Then why mention it, and what do you want to satisfy? And why??
I'm confused.
Posts: 1,645
Threads: 369
Joined: Feb 2009
(05/04/2022, 10:31 PM)JmsNxn Wrote: (05/03/2022, 12:16 PM)tommy1729 Wrote: (03/23/2022, 03:19 AM)JmsNxn Wrote: Hey everyone! Some more info dumps!
I haven't talked too much about holomorphic semioperators for a long time. For this brief exposition I'm going to denote the following:
$$
\begin{align}
x\,<0>\,y &= x+y\\
x\,<1>\,y &= x\cdot y\\
x\,<2>\,y &= x^y\\
\end{align}
$$
Where we have the identity: \(x<k>(x<k+1>y) = x<k+1> y+1\). Good ol fashioned hyperoperators.
...
First note :
In my notebook (and maybe posted here too) I found that the identity \((x<k+1>y) <k> x = x<k+1> (y+1)\) is consistent with
$$
\begin{align}
x\,<0>\,y &= x+y\\
x\,<1>\,y &= x\cdot y\\
x\,<2>\,y &= x^y\\
\end{align}
$$
This seems a nicer choice, no?
Why not this one? Because it is slower?
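Both recursions can be sanity-checked at the integer ranks with ordinary arithmetic (a quick check I'm adding; they agree at ranks 0 and 1 only because \(+\) and \(\cdot\) are commutative, and they start defining different rank-3 operations):

```python
# Check both interpolation identities at integer ranks, using
# x<0>y = x+y, x<1>y = x*y, x<2>y = x**y.
x, y = 3, 5

# Goodstein form: x <k> (x <k+1> y) = x <k+1> (y+1)
assert x + (x * y) == x * (y + 1)    # k = 0
assert x * (x ** y) == x ** (y + 1)  # k = 1

# Variant above: (x <k+1> y) <k> x = x <k+1> (y+1)
assert (x * y) + x == x * (y + 1)    # k = 0
assert (x ** y) * x == x ** (y + 1)  # k = 1

print("both identities hold at ranks 0 and 1")
```

At \(k = 2\) the two recursions part ways: Goodstein's form builds the tower \(x^{x^{\cdot^{\cdot}}}\), while the variant builds \(((x^x)^x)^{\cdots} = x^{x^{y-1}}\), a slower-growing operation.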
Second note: I'm going to ignore holomorphic for now, because I do not believe that. Might explain later (more time), and maybe I already did in the past.
Third note :
Which fractional iteration for exp and log?? There are many, and they do not agree on the 2 real fixpoints (or, in the case of base e^(1/e), a single real fixpoint that is not analytic!).
These problems and choices are not simultaneously addressed, picked and motivated.
Fourth note: why noncommutative?
Fifth note:
You are basically looking for a function f_1(a,b,s) and "finding" the solution f_2(a,b,s,f_3(a,b,s)), where f_3(a,b,s) is unknown, undefined and unproven analytic.
That feels like solving the quintic polynomial as exp(a_0) + f(a_0,a_2,a_3,a_4,a_5) for some unknown f ...
Forgive my parody.
I could continue, but I respect you.
regards
tommy1729
1) I don't want left associative, who wants left associative....
...
Like I commented, the original equation you mentioned IS left associative.
Posts: 902
Threads: 111
Joined: Dec 2010
05/06/2022, 09:07 PM
(This post was last modified: 05/07/2022, 12:05 AM by JmsNxn.)
(05/05/2022, 11:03 PM)tommy1729 Wrote: (05/04/2022, 10:31 PM)JmsNxn Wrote: 1) I don't want left associative, who wants left associative....
...
Like I commented, the original equation you mentioned IS left associative.
Haha, sorry late at night. Meant to say do not want right associative. I apologize, was really tired when I wrote that.
What I meant to say is: your equation will not generate tetration, it'll generate the lower tetration. I don't want the lower tetration. I still want \(x <3> y\) to be tetration, but I'm not going to go that far out. And in order for that to happen you must satisfy Goodstein's equation and not the equation you wrote.
Sorry, I apologize. Sometimes I stay up too late and mix up my words.
To clarify, YES! We are looking for solutions to this equation:
$$
x<s> \left(x <s+1> y\right) = x <s+1> (y+1)\\
$$
For natural numbers you'll note that this will generate tetration for \(x<3>y\) (up to a normalization constant) and \(x <4> y\) will be pentation, and so on. I don't want to go that far out though; I only care about \(0 \le \Re(s) \le 2\).
Also, since you don't like the idea of local holomorphy; we can instead refer to \(x,y > e\) and \(0 \le s \le 2\) and write:
$$
x <s>_\varphi y = \exp^{\circ s}_{y^{1/y}}\left(\log^{\circ s}_{y^{1/y}}(x) + y + \varphi\right)
$$
Which is analytic so long as \(\varphi > -e\) (at least). And the iteration we choose is the Schroder iteration about the repelling fixed point of \(b = y^{1/y}\) (so about \(y\), since \(y > e\)). Now all the discussion of surfaces is reduced to a discussion of surfaces in \(\mathbb{R}^3\), and we don't have to worry about Riemann surface stuff. This makes the problem much more manageable.
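To make this concrete, here is a rough numerical sketch of \(x <s>_\varphi y\) using the Koenigs/Schroder limit for the fractional iterates (all helper names are mine, and the truncation depth `n` is an ad-hoc choice; at \(s = 0, 1, 2\) with \(\varphi = 0\) it should reproduce \(x+y\), \(x\cdot y\), \(x^y\) up to numerical error):

```python
import math

def make_iter(y, n=60):
    """Fractional iterates exp_b^(s) for b = y^(1/y), via the Schroder
    (Koenigs) function about the repelling fixed point y (needs y > e)."""
    lnb = math.log(y) / y   # ln(b) for b = y^(1/y)
    lam = math.log(y)       # multiplier at the fixed point: ln(b)*y = ln(y) > 1
    def psi(z):             # Koenigs limit via inverse iterates (log_b contracts to y)
        for _ in range(n):
            z = math.log(z) / lnb
        return lam ** n * (z - y)
    def psi_inv(w):         # inverse: push y + w/lam^n forward n times
        z = y + w / lam ** n
        for _ in range(n):
            z = math.exp(lnb * z)   # b**z
        return z
    return lambda s, z: psi_inv(lam ** s * psi(z))  # exp_b^(s)(z)

def op(x, s, y, phi=0.0):
    """x <s>_phi y = exp_b^(s)( exp_b^(-s)(x) + y + phi ), with b = y^(1/y)."""
    it = make_iter(y)
    return it(s, it(-s, x) + y + phi)

print(op(3, 0, 4), op(3, 1, 4), op(3, 2, 4))
```

Note that \(3 <1> 4 = 12\) comes out exactly because \(b^{y} = y\) when \(b = y^{1/y}\), so \(\exp_b(\log_b(x) + y) = x\,b^{y} = x y\).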
Posts: 902
Threads: 111
Joined: Dec 2010
05/08/2022, 07:41 PM
(This post was last modified: 05/08/2022, 09:19 PM by JmsNxn.)
So, with some help from Ember, I figured out how to convert data from PariGP into Mathematica, so I can make 3D graphs now. Everything is behaving exactly as I expected.
Here is a graph of the surface \((\varphi_1,\varphi_2,\varphi_3)\) in the equation:
$$
3 <0.5>_{\varphi_1} \left(3<1.5>_{\varphi_2} 3\right) = 3 <1.5>_{\varphi_3} 4\\
$$
This is done over the box \(-0.5 \le \varphi_1,\varphi_2 \le 0.5\). The surface is almost planar; it's fascinating. So there is a single value on this surface that we want.
I'm hoping to make a widget so that we can observe the evolution of this surface as we move \(s\) in \([0,1]\).
Here is a graph of the surface:
$$
3 <0.3>_{\varphi_1} \left(3<1.3>_{\varphi_2} 3\right) = 3 <1.3>_{\varphi_3} 4\\
$$
Very little changes, and again, it's very planar:
You can expect the evolution to be fairly static and planar. This is very good news!!! It means we not only have locality, it means we'll have good regular global structure. I'm going to do more experimenting, I'll try bigger values and check the evolution more!
Regards, James
Also another interesting tidbit. If we only move \(y\), and write \(\varphi_3(y) = \varphi_2(y+1)\), then we actually get a First Order Difference Equation which looks pretty solvable:
$$
\varphi_2(y+1) = \log_{(y+1)^{1/(y+1)}}^{\circ s+1}\left(x <s>_{\varphi_1} (x <s+1>_{\varphi_2(y)} y)\right) - y - 1 - \log^{\circ s+1}_{(y+1)^{1/(y+1)}}(x)\\
$$
This is effectively the first restriction; which then lowers the dimension by \(1\); and we only have to worry about \(\varphi_1\). Funny how everything comes full circle, I love me some first order difference equations!
Posts: 321
Threads: 25
Joined: May 2013
The fact that it seems to be a plane is amazing... even if I don't know exactly what this means.
I skimmed your paper for the fourth time; it's starting to make more and more sense. In my poor understanding, by switching from controlling the period \(\lambda\) to controlling \(\varphi\), you switched from the beta method to something more mundane, just perturbing the exponent/fixed point, which has the same effect but better behaviour. Is this a good sketch of it?
Also the part where you "desynchronize" the three \(\varphi\)s, turning it into a surface and making the coordinates implicitly functions of different arguments subject to some relations... that's the trickiest part imho. I need to go back to your forum posts and read them more. But now I understand why I was slow at getting the full picture... your methods are really rich in details and layers.
What I don't get at this point is... for a triple \((b,y,s)\) you get a surface \({\bf \Phi}_{(b,y,s)}\subseteq \mathbb C^3\)... this amounts to having a family of surfaces. You are interested only in a single point of each surface? How do you select them? All of this reminds me of the fiber bundle/section business.
If all of them are homeomorphic to a (curved) complex plane, i.e. to \(\mathbb C^2\) via a parametrization, then do you get a fiber bundle $$\bigsqcup_{(b,y,s)\in \mathbb C^2\times (0,2)} {\bf \Phi}_{(b,y,s)} \simeq \mathbb C^4\times (0,2) \overset{\bar{\bf \Phi}}{\longrightarrow} \mathbb C^2\times (0,2)?$$
Each fiber at a point \(P=(b,y,s)\) is the desired surface \( \bar{\bf \Phi}^{-1}\{P\}={\bf \Phi}_{P} \). If it is a fiber bundle then you probably get some lifting properties... yeah, I'm just inventing things here, but... sections must have something to do with vector fields...
MSE MphLee
Mother Law \((\sigma+1)0=\sigma (\sigma+1)\)
S Law \(\bigcirc_f^{\lambda}\square_f^{\lambda^+}(g)=\square_g^{\lambda}\bigcirc_g^{\lambda^+}(f)\)
Posts: 902
Threads: 111
Joined: Dec 2010
05/08/2022, 11:18 PM
(This post was last modified: 05/08/2022, 11:23 PM by JmsNxn.)
Hey, Mphlee
I'm going to keep it simple here on out. We don't need the beta method at all, I thought you would, but that just overcomplicates things. I was mostly using beta because I didn't have an efficient programming method for the Schroder case, now I do. I'm working on writing a much clearer write up, but for the moment I'll be brief.
The fact that it's a plane is absolutely AMAZING. And I'll explain how as to the best of my ability. To begin, let's only focus on real values. Let's ignore everything complex (but we're still gonna be analytic, no infinite differentiable/continuous bs, everything is still analytic).
For \(x,y > e\) and \(0 \le s \le 2\), begin by defining the operator:
$$
x\,[s]\,y = \exp_{y^{1/y}}^{\circ s}\left(\log^{\circ s}_{y^{1/y}}(x) + y\right)\\
$$
Now, this only uses one type of iteration because of our restrictions on \(x\) and \(y\). It only uses the repelling iteration of the base value \(b = y^{1/y}\). So for example, if \(b = \sqrt{2}\), then we're only doing the iteration about \(4\). This helps us with the problem we may have with mixing iterations (sometimes repelling (about 4), sometimes attracting (about 2)). It speaks to the fact that the real trouble value is going to be around \(y=e\). Which, I mean, is kinda cool tbh. So we're only using the unbounded iteration.
To explain this difference, I suggest Trappmann's and Kouznetsov's paper on the different types of iteration of \(\sqrt{2}\).
Portrait_of_the_four_regular_superexponentials_to.pdf (Size: 1.01 MB / Downloads: 31)
They use \(\sqrt{2}\) as an example, but it works for all \(b = y^{1/y}\) with \(y > 1\), and we are choosing what they call the repelling iteration, or the unbounded iteration. It basically goes to \(4\) as \(s \to -\infty\), and to \(\infty\) as \(s \to +\infty\).
So this means, our definition of \(\exp^{\circ s}_{y^{1/y}}(z)\) is just the Schroder iteration:
$$
\Psi^{-1}(\log(y)^s\Psi(z))
$$
Where \(\Psi\) is the Schroder function about \(y\) (which is the repelling fixed point because \(y > e\)). So again, if \(b = \sqrt{2}\), this Schroder function would be about the fixed point \(4\).
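Concretely, this Schroder function can be written as a Koenigs limit built from inverse iterates; this is the standard construction for a repelling fixed point, stated here as a sketch:
$$
\Psi(z) = \lim_{n\to\infty} \ln(y)^{n}\left(\log_b^{\circ n}(z) - y\right),\qquad \Psi(b^{z}) = \ln(y)\,\Psi(z),\qquad \Psi(y)=0,\ \Psi'(y)=1
$$
where the multiplier at the fixed point is \(\tfrac{d}{dz}b^{z}\big|_{z=y} = \ln(b)\,b^{y} = \ln(b)\,y = \ln(y)\), which is \(>1\) exactly when \(y > e\).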
Now, we are making the intermediary operator:
$$
x \,\langle s\rangle_{\varphi}\, y = \exp_{y^{1/y}}^{\circ s}\left(\log^{\circ s}_{y^{1/y}}(x) + y + \varphi\right) = x\,[s]\,y + \mu(x,y,s) \varphi + \mathcal{O}(\varphi^2)\\
$$
The value \(\mu > 0\), but that's not too important at the moment. What's important is the surface.
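For what it's worth, differentiating the definition at \(\varphi = 0\) gives an explicit expression for \(\mu\) in terms of the Schroder function; this is my own computation from the formulas above, so treat it as a sketch:
$$
\mu(x,y,s) = \left(\exp^{\circ s}_{y^{1/y}}\right)'\left(\log^{\circ s}_{y^{1/y}}(x) + y\right) = \ln(y)^{s}\,\frac{\Psi'\left(\log^{\circ s}_{y^{1/y}}(x)+y\right)}{\Psi'\left(x\,[s]\,y\right)}
$$
using \(\exp^{\circ s}_{y^{1/y}}(z) = \Psi^{-1}(\ln(y)^{s}\Psi(z))\) and the chain rule.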
If we define a surface:
$$
F(\varphi_1,\varphi_2,\varphi_3) = x\,\langle s\rangle_{\varphi_1} \left(x \,\langle s+1\rangle_{\varphi_2}\, y\right) - \left(x\,\langle s+1\rangle_{\varphi_3}\,(y+1)\right) = 0
$$
Then you'd expect this to be some wacky looking surface, right? Well, no, it's literally a plane (up to a small error). So literally, the surface \(F = 0\) in \(\mathbb{R}^3\) looks something like this:
$$
F \approx a \varphi_1+b\varphi_2+c\varphi_3 = 0\\
$$
Not just locally either (which is always possible with tangent planes and yada yada), it looks like this GLOBALLY!
SO QUITE LITERALLY:
$$
0 = x\,\langle s\rangle_{\varphi_1} \left(x \,\langle s+1\rangle_{\varphi_2}\, y\right) - \left(x\,\langle s+1\rangle_{\varphi_3}\,(y+1)\right) \approx a \varphi_1+b\varphi_2+c\varphi_3 = 0
$$
This means, we're trying to solve very very difficult equations in \(\varphi\), yes... but everything is linear! This is going to be so much easier than I thought! We're solving linear equations upto a small error, holy hell.
The best part, is that they don't change much for varying \(x,y\) also. Here's a graph of the surface:
$$
F = 6\,\langle 0.5\rangle_{\varphi_1} \left(6 \,\langle 1.5 \rangle_{\varphi_2}\, 7\right) - \left(6 \,\langle 1.5 \rangle_{\varphi_3}\, 8\right) = 0
$$
IT LOOKS THE EXACT SAME!
So not only is it a plane, but the plane barely moves as we move \(x,y,s\)!
So this is the surface, which evolves over time as we move \((x,y,s)\), but stays pretty stable. Now we are asking for restrictions on \(\varphi_1,\varphi_2,\varphi_3\). We have to say that:
$$
\varphi_2(y+1,s) = \varphi_3(y,s)\\
$$
And something a bit more tricky for \(\varphi_1\), and this will single out a point for fixed \((x,y,s)\). Now this is still difficult. But we're basically just doing this in a linear setting! So it shaves off like 80% of the work I thought I'd have to do, lmaoo. Now I can just look at this like it's linear, and it'll at least be a great approximation!!!
I'm too excited, Mphlee! I'm writing up a new write up, which cuts all the fat from the discussion. It'll be quick. And I'm only going to focus on \(x,y > e\) and \(0 \le s \le 2\). This way there's no Riemann surfaces, it's just surfaces in \(\mathbb{R}^3\) which is inconceivably easier, lol.
As to fiber bundles, we have a saying in Toronto: miss me with that shit!
(which means, I don't want anything to do with that, get that shit away from me)
But you're probably right, this definitely has to do with vector bundles and tangent space bs, I ain't got the energy for that.
Posts: 321
Threads: 25
Joined: May 2013
0.0
I need time to elaborate on that. I'll let that sink in during this week.
But man... but when you say
Quote:So not only is it a plane, but the plane barely moves as we move...
This to me means that... there is something built into the Bennett operations suggesting they want to behave well in relation to the Goodstein operations... not like we are torturing them to satisfy the recursion...
Idk but, in my ignorance, this seems deep...
Posts: 902
Threads: 111
Joined: Dec 2010
(05/08/2022, 11:40 PM)MphLee Wrote: This to me means that... there is something built into the Bennett operations suggesting they want to behave well in relation to the Goodstein operations... not like we are torturing them to satisfy the recursion...
ITS EXACTLY THAT!
I was expecting to have to contort and make some deep Kneser kind of theta mapping. No, not at all. They're linearly related! Sure, I'll have to solve a first order difference equation, and a weird Taylor series equation; but it'll be linear in essence! Linear first order difference equations are just exponentials...
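For reference, the sense in which linear first order difference equations are just exponentials: if the linearization has constant coefficients \(A,B\) with \(A \neq 1\), then
$$
\varphi(y+1) = A\,\varphi(y) + B \quad\Longrightarrow\quad \varphi(y) = A^{y}\,\varphi(0) + B\,\frac{A^{y}-1}{A-1},
$$
an exponential \(A^{y}\) riding on the constant particular solution \(\tfrac{B}{1-A}\).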
Posts: 1,645
Threads: 369
Joined: Feb 2009
05/10/2022, 11:38 AM
(This post was last modified: 05/10/2022, 11:39 AM by tommy1729.)
Ok, I want to talk about the connection between the superfunction operator, left distributivity and analytic continuation.
First, the superfunction is not unique, but computing what a function is the superfunction of is almost unique; usually just a single parameter.
If we have a function f(x,y) that is analytic in x and y, and we take the superfunction F(z,x,y) (with the same method) WITH respect to x, for every fixed y, where z is the number of iterations of f(x,y) (with respect to x), then F(z,x,y) is usually analytic in both x and y!
Therefore the superfunction operator is an analytic operator.
This makes going from x <s> y to x <s+1> y, for sufficiently large s, preserve analyticity.
Secondly we want
x <s> 1 = x for all s.
by doing that we set going from x <s> y to x <s+1> y as a superfunction operator.
This gives us an opportunity to get analytic hyperoperators.
Combining x <s> 1 = x, the superfunction method going from x <s> y to x <s+1> y, and the left distributive property to go from x <s> y to x <s-1> y, we then get a nice structure for hyperoperators that connects to the ideas of iterations and superfunctions.
You see, we then get that x <s> y is EXACTLY the y-th iterate of x <s-1> y with respect to x and starting value y. If we set y = 1 then x <s> 1 = x, thereby proving that it is indeed taking superfunctions *we start with x* (for all s).
This implies that
x <0> y = x + y is WRONG.
We get by the above :
x < 0 > y = x + y - 1
( x <0> 1 = x !! )
x < 1 > y = x y
( the super of + x + 1 - 1 aka +x , y times )
x < 2 > y = x^y
( the super of x y ; taking x * ... y times )
x < 3 > y = x^^ y
( starting at x and repeating x^... )
This also allows us to compute x < n > y for any n , even negative.
That is a sketch of my idea.
Not sure how this relates to 2 < s > 2 = 4 ...
Now we only need to understand x < s > y for s between 0 and 1 but analytic at 0 and 1.
Gotta run.
Regards
tommy1729
Tom Marcel Raes
Posts: 321
Threads: 25
Joined: May 2013
05/10/2022, 12:26 PM
(This post was last modified: 05/10/2022, 05:35 PM by MphLee.)
Whoops, Tommy, you unintentionally made a double post with the same content.
Quote:[...] by doing that we set going from x <s> y to x <s+1> y as a superfunction operator.
This gives us an opportunity to get analytic hyperoperators.
[...]
Now we only need to understand x < s > y for s between 0 and 1 but analytic at 0 and 1.
This way of framing it is really interesting, Tommy; it's the same as Trappmann's 2008 proposal (see link) to define ranks as the iteration height of the "superfunction operator" on analytic functions (later reworked by James and Tommy as well).
This seems very fruitful, and I've been investigating this approach since then. The problem is... it is not technically an operator in the strict sense imho; we lack the right linear structure. So it is nonlinear. We need exotic and advanced methods to treat nonlinear operators, and I'm ignorant af. James did some research on this under his differintegral program. Basically, he tried to conjugate the superfunction operator to the differential operator, in order to turn fractional differentiation into fractional iteration of the superfunction operator.
So here James is trying to define a good interpolation for \(s\) between \(0\) and \(2\), and then do a piecewise extension, in order to avoid the iteration of nonlinear operators.
Regards.
