The \(\varphi\) method of semi operators, the first half of my research
#11
(07/09/2022, 10:23 PM)MphLee Wrote: Those are suggestive images. Idk if I'm saying something idiotic... but it looks like the white circle is an artifact that can be removed to reveal the smooth, nice surface underneath...

Too bad, I wanted to be strong at programming...


So far, they do appear to be artifacts, and that's the story I'm sticking to! I have every reason to believe that's the case, because any Schroder iteration will have artifacts exactly about there. So all's not lost. The problem is I'm not sure how to seamlessly program a transition from repelling to neutral. This has two problems: near the neutral fixed points everything starts to geek out, and in the neutral case I don't even have code to run, I just return(0) (which only accounts for a measure-zero area, so never expect to see it). I somehow have to account for the geeking out near the neutral case. I have a close-up of this happening:



[attached image: close-up of the dip near \(x = e\)]



This is over a \(0.4\) window in the \(x\) axis of \(3 [0.5] x\) centered at \(x = \exp(1)\).



This isn't necessarily a bad thing, but it's not really a good thing either. I think, though, that if this dip is happening it gives us a much more stringent manner of describing the actual semi-operators... they won't have this blip. I think the make-or-break will be to actually correct this dip. And honestly, it does seem possible; it is still continuous, it's just sharp is all... which could be due to the code (which is highly probable), or this could be exactly what it looks like. But when I run my code for \(x = e - 0.1\), it's already failing in the preliminary stages (constructing the Schroder function), before it even gets to the Bennet protocol. So it's very likely this is just a programming error. If not, all is still not lost; this is still just the modified Bennet. Big Grin
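For anyone following along, the Schroder iteration failure mode described here can be sketched minimally. The toy base \(b = 1.3\), the function names, and the iteration counts below are all illustrative, not the actual code; the point is only that Koenigs' limit, which defines the Schroder function, loses its scale separation as the multiplier approaches \(1\):

```python
import math

def koenigs(f, p, lam, z, n=20):
    # Koenigs' limit for the Schroder function about an attracting
    # fixed point p with multiplier lam = f'(p), 0 < |lam| < 1:
    #   psi(z) = lim_{n -> oo} (f^{on}(z) - p) / lam^n
    w = z
    for _ in range(n):
        w = f(w)
    return (w - p) / lam ** n

b = 1.3                    # toy base inside the Shell-Thron region
f = lambda z: b ** z       # exp_b

p = 1.0                    # locate the attracting fixed point b^p = p
for _ in range(500):
    p = f(p)
lam = math.log(b) * p      # multiplier f'(p) = ln(b) * b^p = ln(b) * p

# Schroder's functional equation psi(f(z)) = lam * psi(z) holds; but as
# b -> e^(1/e) the multiplier lam -> 1, lam^n stops separating scales,
# and the limit degenerates -- exactly where the artifacts appear.
lhs = koenigs(f, p, lam, f(1.0))
rhs = lam * koenigs(f, p, lam, 1.0)
```

At the neutral case \(\lambda = 1\) the quotient is \(0/0\) and the construction gives nothing at all, which matches the return(0) placeholder above.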



EDIT:


Also MphLee, I have mathematical evidence that this is a computing error.

[attached image: plot showing the Lambert-branch behavior]

If you look at the Lambert branch, there are points where it is always a curve, but it no longer looks like the graph of a function. This is what we are seeing near \(e\), and it's precisely a Lambert branching problem. I have to continue this curve, which isn't a function, while maintaining holomorphy.

I solved half the riddle mathematically, and I see much better why this shit code error is happening. Don't lose faith yet...
#12
HUZZAH!

I've managed to show that \(x [s] y\) is indeed analytic at \(y = e\), and the above artifacts are my code failing. When in doubt, return to basics. All I have to do is show that \(x [s] y\) is complex differentiable at \(y = e\). This can be done pretty simply, because \(\frac{d}{dy}y^{1/y} = 0\) when \(y = e\), so the derivative has a bunch of cancellations. From this, a closed-form expression for the derivative at \(e\) is given as:

$$
\frac{d}{dy}\Big{|}_{y=e} x[s]y = \frac{d}{du}\Big{|}_{u=e} \exp^{\circ s}_{\eta}\left( \log^{\circ s}_\eta(x) + u\right)\\
$$
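The key cancellation \(\frac{d}{dy}y^{1/y} = 0\) at \(y = e\) is easy to check numerically. A throwaway finite-difference check (nothing here is from the actual code):

```python
import math

def base(y):
    # the base y^(1/y) appearing in x [s] y
    return y ** (1.0 / y)

h = 1e-6
# central difference at the critical point y = e: this vanishes,
# which is what kills the base-derivative terms in the chain rule
d_at_e = (base(math.e + h) - base(math.e - h)) / (2 * h)
# away from e the derivative is genuinely nonzero, e.g. at y = 2
d_at_2 = (base(2.0 + h) - base(2.0 - h)) / (2 * h)
```

The exact derivative is \(y^{1/y}(1-\log y)/y^2\), which makes the zero at \(y = e\) obvious.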




EDIT: I should probably prove this claim to be thorough.

Recall:

$$
x[s]y = \exp^{\circ s}_{y^{1/y}}\left(\log^{\circ s}_{y^{1/y}}(x) + y\right)\\
$$
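As a sanity check on this formula, at integer \(s\) it collapses (using \(b^y = y\) for \(b = y^{1/y}\)) to the classical ladder: \(x[0]y = x + y\), \(x[1]y = xy\), \(x[2]y = x^y\). A throwaway check, not the actual code:

```python
import math

def bennet_int(x, s, y):
    # x [s] y for integer s >= 0, computed literally as
    # exp_b^{os}( log_b^{os}(x) + y ) with b = y^(1/y)
    b = y ** (1.0 / y)
    u = x
    for _ in range(s):
        u = math.log(u) / math.log(b)   # log_b
    u += y
    for _ in range(s):
        u = b ** u                      # exp_b
    return u
```

With \(x = 3, y = 2\) this recovers \(5\), \(6\), and \(9\) at \(s = 0, 1, 2\): addition, multiplication, exponentiation.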


First differentiate in the base of \(\exp_{y^{1/y}}^{\circ s}(U)\): that contribution vanishes at \(y = e\) because \(y^{1/y}\) has a critical point there. What remains is the derivative through \(U\):

$$
\frac{d}{dU} \exp^{\circ s}_{y^{1/y}}(U) \frac{dU}{dy}\\
$$

The derivative of \(U = \log^{\circ s}_{y^{1/y}}(x) + y\) at \(y = e\) is just \(1\), by the same critical-point argument applied to the \(\log^{\circ s}_{y^{1/y}}(x)\) term.
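Spelled out in full, writing \(b(y) = y^{1/y}\) and treating the base and the argument as separate slots, the chain rule reads:

$$
\frac{d}{dy}\, x[s]y = \underbrace{\frac{\partial}{\partial b}\exp^{\circ s}_{b}(U)\,\frac{db}{dy}}_{=\,0\ \text{at}\ y=e} + \frac{d}{dU}\exp^{\circ s}_{b}(U)\left(\underbrace{\frac{\partial}{\partial b}\log^{\circ s}_{b}(x)\,\frac{db}{dy}}_{=\,0\ \text{at}\ y=e} + 1\right)\\
$$

where both underbraced terms die with \(\frac{db}{dy}\big|_{y=e} = 0\).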

Therefore:

$$
\frac{d}{dy}\Big{|}_{y=e} x[s]y = \frac{d}{dU}\exp^{\circ s}_{y^{1/y}}(U)\Big{|}_{y=e}\\
$$




This is always a finite value for \(x > e\) and \(0 \le s \le 2\). OH YA!!!

So yes, the above dips are artifacts in the program because they aren't even approaching this limit, which is a must. YES!


So all that's left now is to make an adaptive algorithm which works as \(y \to e\). I just need to be able to run the standard Ecalle construction of the Abel function about a parabolic fixed point for the boundary of the Shell-Thron region, while simultaneously running Schroder iteration about the repelling fixed point for the interior. This is going to be tricky.
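For what it's worth, the fixed point of \(\exp_{y^{1/y}}\) is \(y\) itself, with multiplier \(\log(y)\), so the switch between the two regimes is just a test of \(|\log(y)|\) against \(1\). A hypothetical dispatcher (the name and tolerance are mine, not from the actual code):

```python
import cmath

def iteration_regime(y, tol=0.05):
    # exp_{y^(1/y)} fixes y itself, with multiplier log(y):
    #   |log(y)| < 1  -> attracting fixed point (Shell-Thron interior)
    #   |log(y)| = 1  -> neutral/parabolic: needs Ecalle's Abel function
    #   |log(y)| > 1  -> repelling: Schroder about the repelling point
    m = abs(cmath.log(y))
    if abs(m - 1.0) < tol:
        return "parabolic"
    return "schroder"
```

The hard part, of course, is making the two constructions agree smoothly across the tolerance band, not detecting which one to run.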


You can see the error more clearly here; this is:

$$
\exp^{\circ 0.5}_{y^{1/y}}(3)\\
$$

graphed across \(y \in (2,3)\).

[attached image: graph of \(\exp^{\circ 0.5}_{y^{1/y}}(3)\) for \(y \in (2,3)\)]

This dip is clearly artificial, because it's not a local maximum, which the math requires it to be. Also, it's happening in the second decimal place near \(e\), where Schroder becomes untenable. I'll try to see if I can make this more accurate without constructing an Abel function that is analytic as \(y \to e\), which is easier said than done. We might have to settle for a program that fails near the Shell-Thron boundary but works everywhere else. Luckily, the math needed to construct \(\varphi\) doesn't depend on being holomorphic near \(e\), so we should still be okay theory/computation-wise--we just have to avoid \(|\log(y)| = 1\).





So, as per the theory of this post: if I were to run a much, much higher precision/deeper recursion version of the above graph, the blip should be smaller. And voila! I had to fucking eat RAM like it was a fat kid with a bag of cheetos, but the blip is a tiny bit smaller here. Use the tickers to weigh the difference:

[attached image: higher-precision rerun of the previous graph]
#13
(07/08/2022, 12:05 PM)JmsNxn Wrote: So, I've shifted my research into calculating:

$$
x [s] y = \exp^{\circ s}_{y^{1/y}}\left(\log^{\circ s}_{y^{1/y}}(x) + y\right)\\
$$

on a more maximal domain. With this we encounter errors around \(|\log(y)|=1\), but otherwise the functions behave cleanly.

This is a large domain in \(y\), where the errors (white-outs and glitches) are precisely near \(|\log(y)|=1\). This goes hand in hand with the manner in which I have programmed these constructions, so many of the errors here are human error in the code, not mathematical error. Many more detailed tests have taught me that \(x [s] y\) is holomorphic for \(\Re(y) > 1\) when \(x > e\) and \(0 \le s \le 2\).

To exemplify this, I will use some graphs, they are not proof--just descriptions of the way it's working.



This is a graph over \(\Re(y) > 0\), taken over a large window.

Zooming in on that error, which represents the values \(|\log(y)| = 1\), we get a closer picture. Here \(\Re(y) > 1\), and the white in this graph is error code; it does not represent the analytic function, it represents an error on my part in the coding.




By this I want to say that \(x[s]y\) is holomorphic on much larger domains than originally thought.

What exactly are those pictures?

There are 3 complex variables \(x, s, y\), so I do not see how you can map those into 1 picture.

***

Another thing is to add a restriction to the idea of superfunctions.

That could really make the ultimate (extra) uniqueness criterion.

I mean the condition \(f^{[s+t]}(z) = f^{[s]}(f^{[t]}(z)) = f^{[t]}(f^{[s]}(z))\), as you might have guessed.
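This semigroup condition can at least be seen to hold for Schroder-based fractional iteration in a toy case where the Schroder function is exact: for \(f(z) = 2z/(1+z)\) (fixed point \(0\), multiplier \(2\)), \(\psi(z) = z/(z-1)\) conjugates \(f\) to \(w \mapsto 2w\), and \(\psi\) happens to be its own inverse. A sketch (a toy example of mine, not from the thread):

```python
def psi(z):
    # exact Schroder function for f(z) = 2z/(1+z): psi(f(z)) = 2*psi(z),
    # and psi is an involution, so it is also its own inverse
    return z / (z - 1.0)

def frac_iter(s, z):
    # fractional iterate f^{[s]}(z) = psi^{-1}( 2^s * psi(z) )
    return psi((2.0 ** s) * psi(z))

# the semigroup law f^{[s+t]} = f^{[s]} o f^{[t]} then holds by construction
a = frac_iter(0.7, 0.25)
b = frac_iter(0.3, frac_iter(0.4, 0.25))
```

Any iteration defined by conjugating to the linear map \(w \mapsto \lambda^s w\) satisfies the law automatically, so the condition bites only when comparing different constructions of the superfunction.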

regards

tommy1729
#14
Follow the wording of my posts, tommy; I give the description of each picture. We are holding certain values constant. So I've typically had \(x=3\) and \(s=0.5, 0.3, 1.3, 0.9, 1.9\) and such, with \(y\) graphed over large domains. And then there are graphs of just \(s\) while \(y\) and \(x\) are constant. I mean tommy, are you really being this dense?

