Some "Theorem" on the generalized superfunction
#21
You can compose THESE Riemann Surfaces as equivalence classes. And it's really no different than composing multivalued functions. At least, as I understand it.

Consider the Riemann Surface derived by the differential equation,



Then this thing just looks like,



Which is clearly a Riemann surface. (Classifying it would be hard; I don't know how.)

Now consider the Riemann surface derived by the differential equation,



This thing just looks like,



And their composition is just the Riemann surface,



The idea is that we can always project from the Riemann surface to a local coordinate; we're just composing in the local coordinate in z and pulling back to make a new surface. This is something I did frantically in that paper, but I always stuck to the idea of local behaviour. It's important to remember that we are treating one variable in the composition, and treating this as an equivalence class. And these are very specific Riemann surfaces.

If you gave me two Riemann surfaces, I do not think you can compose them in general. If you give me two Riemann surfaces generated by a first-order differential equation, sure, you can compose them, as long as we think about it locally. This stuff always hurt my head though, so I never bothered to go deep into it. But I believe it would work as such.
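To make the "compose locally, pull back" idea concrete, here is a minimal numerical sketch with toy surfaces of my own choosing (not the ones above): w1 = sqrt(z) cut out by dw1/dz = 1/(2 w1), and w2 = log(w) cut out by dw2/dw = 1/w. Continuing both along a loop around 0 shows the composite log(sqrt(z)) = (1/2) log(z) switching branches, which is exactly the "equivalence class of local continuations" picture.

    import cmath

    N = 20000
    loop = [cmath.exp(2j * cmath.pi * k / N) for k in range(N + 1)]  # z runs once around |z| = 1

    w1 = 1.0   # principal branch: sqrt(1) = 1
    w2 = 0.0   # principal branch: log(sqrt(1)) = 0
    for z0, z1 in zip(loop, loop[1:]):
        dw1 = (z1 - z0) / (2 * w1)   # dw1 = dz / (2*w1), the ODE for sqrt
        dw2 = dw1 / w1               # dw2 = dw1 / w1, the ODE for log, fed locally by w1
        w1 += dw1
        w2 += dw2

    print(w1)   # ~ -1     : sqrt(z) has continued onto its other branch
    print(w2)   # ~ 3.14j  : the composite (1/2)*log(z) picked up i*pi after one loop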
Reply
#22
Quote:This stuff always hurt my head though, so I never bothered to go deep into it. But I believe it would work as such.

My head is not hurting less xD

Anyways, this deserves some study. I'll put it on my infinite to-do list.

MathStackExchange account:MphLee

Fundamental Law
Reply
#23
(05/11/2021, 12:14 AM)JmsNxn Wrote: You can compose THESE Riemann Surfaces as equivalence classes. And it's really no different than composing multivalued functions. At least, as I understand it.

...

If you gave me two Riemann surfaces, I do not think you can compose them in general. If you give me two Riemann surfaces generated by a first-order differential equation, sure, you can compose them, as long as we think about it locally. This stuff always hurt my head though, so I never bothered to go deep into it. But I believe it would work as such.
Hi James

It took me about a month of revising and preparing for my final exam, and I finally made it today :) :) :)


I've also been concentrating on the idea of differential forms of an iteration, and it turned out that it is adequate to represent them through the iteration velocity, which is also the functional root of Julia's equation (I'll call it Julia's solution).
So the Riemann surface should be established on Julia's equation or its reciprocal. Ramanujan gave his result in his Notebook I by

where  is then proved to be Julia's solution.
We can see that calculating the derivative with respect to t must involve the Julia function, which is itself defined by a differential equation.
(But we have to admit that Julia's equation has infinitely many solutions, corresponding to the fact that there are infinitely many Abel solutions. So the set of all generated Julia solutions should be considered as a category.)
You can see that Julia's equation also gives a way to describe the infinitesimal iteration, and by reversing it you get the iteration integral.
It also proves, in some sense, the symmetric law: whenever fg=f, then f^t g=f^t (f and f^t treated as branch cuts), because the functional iterations of the same function share the same Julia solution (by the 3rd law, also provable by many means). For example,
with L denoting Julia's solution:
L(cos(z))=L(z)*-sin(z) and L(cos(-z))=L(-z)*sin(z), showing that for some branch cuts L(-z)=L(z);
then, since they share the same L, exchanging cos with arccos,
L(arccos(z))=L(z)*-arccos'(z) and L(arccos(-z))=L(-z)*-(arccos(-z))'(z)=-L(-arccos(z))=L(arccos(z));
taking the holomorphic version of the inverse of L, arccos(-z)=arccos(z),
which means that in some sense the set of all "-1th iterations of cos", i.e. arccos(z), has a branch cut arccos(-z) (but normally this is invalid and impractical).
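For what it's worth, here is a quick numerical sanity check of this (a toy case of my own, not the cos/arccos example above): take f(z) = z/(1+z), whose iterates are known exactly as f^t(z) = z/(1+tz), so the iteration velocity is L(z) = -z^2. Julia's equation L(f(z)) = f'(z)*L(z) then holds on the nose, and L also gives the velocity along the whole flow.

    f      = lambda z: z / (1 + z)
    fprime = lambda z: 1 / (1 + z) ** 2
    L      = lambda z: -z ** 2                       # iteration velocity d/dt f^t(z) at t = 0

    for z in [0.3, 1.7, 0.5 + 0.2j]:
        lhs = L(f(z))
        rhs = fprime(z) * L(z)
        print(z, lhs, rhs, abs(lhs - rhs) < 1e-12)   # Julia's equation L(f(z)) = f'(z)*L(z)

    ft = lambda t, z: z / (1 + t * z)                # the exact fractional iterates of f
    t, z, h = 0.7, 0.4, 1e-6
    numeric_velocity = (ft(t + h, z) - ft(t - h, z)) / (2 * h)
    print(numeric_velocity, L(ft(t, z)))             # d/dt f^t(z) = L(f^t(z)), to ~1e-9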


And I'm wondering whether the general composition of two Riemann surfaces can be represented just as simply as the chain rule of d/dx, like:

and since f, g, h are three Riemann surfaces, some form of this indeed describes a differential equation for h=g(f), as you may multiply both sides by 
following the idea that an inverse function's RS is defined this way
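Here is a tiny check of that last point as I read it (with f = exp standing in, since the formulas above are only sketched): if f'(z) = F(f(z)), then the inverse satisfies (f^-1)'(z) = 1/f'(f^-1(z)) = 1/F(z), i.e. the inverse's defining ODE is just the reciprocal of F.

    import cmath

    F = lambda w: w                       # f = exp satisfies f' = f, i.e. F(w) = w
    z, h = 2.3 + 0.7j, 1e-6
    numeric = (cmath.log(z + h) - cmath.log(z - h)) / (2 * h)   # (f^-1)'(z) numerically
    print(numeric, 1 / F(z))              # both ~ 1/z, the reciprocal ODE for the inverse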


Regards, Leo
Reply
#24
(06/08/2021, 06:44 PM)Leo.W Wrote: It took me about a month of revising and preparing for my final exam, and I finally made it today :) :) :)


I hope you did well on your exam :)


Quote:I've also been concentrating on the idea of differential forms of an iteration, and it turned out that it is adequate to represent them through the iteration velocity, which is also the functional root of Julia's equation (I'll call it Julia's solution).

...

Regards, Leo
That is very interesting Leo.

So you want to look at Julia's solution to solve the iterative logarithm problem?

I'm trying to understand exactly what you are trying to do.  Forgive me if I misunderstand you.

To understand the differential equation,



Where



We want to use Julia's equation,



Where,



If I'm interpreting you right; do you mind sourcing me this result? This is absolutely breathtaking.
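For reference, the relations I have in mind (writing \lambda for Julia's function and \alpha for the Abel function; this is just the standard statement as I understand it, not a transcription of your formulas):

\[
\lambda(f(z)) = f'(z)\,\lambda(z), \qquad
\lambda(z) = \frac{\partial}{\partial t} f^{t}(z)\Big|_{t=0}, \qquad
\lambda(z) = \frac{1}{\alpha'(z)} \ \text{ where } \ \alpha(f(z)) = \alpha(z) + 1.
\]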
Reply
#25
(06/09/2021, 12:40 AM)JmsNxn Wrote: ...
If I'm interpreting you right; do you mind sourcing me this result? This is absolutely breathtaking.

Thank you James Smile

A quick proof (no need for complex algebra):

Assume the iteration can be expanded as
 (1)
First note that  (2)
We then take the fundamental definition
Expand the LHS by (1), consider  to be the variable of the RHS, then expand by (1).
We have now:
Take the coefficient of , expand the LHS by the binomial theorem, and compare with (2);
we have (3)
Set
Use the chain rule to show
Or plug in, so we have
By (3), we can now show that  (4)
(Now substitute y with z.) This shows we can calculate all  by equating both sides' coefficients of
followed by
Then we only need to determine , and we can construct the iteration in a neighborhood of t=0, which can be considered as the infinitesimal iteration.
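As a sanity check of the "infinitesimal iteration" step (a toy example of my own, not part of Ramanujan's argument): once Julia's solution lambda is known, the fractional iterates are recovered as the flow of dw/dt = lambda(w). For f(z) = z/(1+z) the Julia solution is lambda(z) = -z^2, and integrating the flow for time 1 lands back on f.

    def flow(lam, z, t, steps=100000):
        # integrate dw/dt = lam(w) from w(0) = z up to time t (plain Euler, demo only)
        w, h = z, t / steps
        for _ in range(steps):
            w += h * lam(w)
        return w

    lam = lambda w: -w ** 2               # Julia's solution of f(z) = z/(1+z)
    f   = lambda z: z / (1 + z)

    z = 0.8
    print(flow(lam, z, 1.0), f(z))                  # both ~ 0.4444...
    print(flow(lam, z, 0.5), z / (1 + 0.5 * z))     # the half-iterate f^(1/2)(z) ~ 0.5714...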

We also need the basic fact that Julia's solution works for all iterations of the same function, as stated:
Denote  with Julia's solution
Then (5)
which can be proved very easily by law 3, or through Abel's solution.

Now note the integral (it's exactly Abel's solution, though: ), where q is arbitrary and z is not a fixed point, but can be very close to some fixed point L.



So we now have the more general integral, showing that  (6)

We should consider the integral here as a contour integral (wherever lambda is holomorphic, we can take it as a line integral); if lambda has a branch cut somewhere, we then know that this equation defines a RS.
By taking a limit we can analytically continue the definition, so that z can be some fixed point L; however, we should then reconsider f as a multivalued function, so that by Cauchy's residue theorem there are branch cuts near L and L is a singularity, so C is then not 0.
Now take the derivative of (6) with respect to k:



Combining this with (4) and (5), we have
In conclusion,  (7)
Recall law 3, which says that by multiplying by a constant one can modulate the absolute value of Abel's solution; then we can set C=1.

Finally, comparing (4) and (7) completes the proof, showing that , which is also the definition of the iteration velocity.
Tah-dah!
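A concrete worked example (my own toy case, not one of Ramanujan's): take f(z) = 2z, so the Abel function is alpha(z) = log2(z), Julia's solution is lambda(z) = 1/alpha'(z) = z*ln 2, and the iterates are f^t(z) = 2^t*z. Then d/dt f^t(z) = lambda(f^t(z)), and the iteration integral of 1/lambda from z to f^t(z) recovers t.

    import math

    ln2 = math.log(2)
    lam = lambda z: z * ln2               # Julia's solution of f(z) = 2z
    ft  = lambda t, z: 2 ** t * z         # the known fractional iterates

    t, z, h = 0.6, 1.3, 1e-6

    # velocity check:  d/dt f^t(z) = lambda(f^t(z))
    numeric_velocity = (ft(t + h, z) - ft(t - h, z)) / (2 * h)
    print(numeric_velocity, lam(ft(t, z)))           # both ~ 2^t * z * ln 2

    # iteration-integral check:  integral from z to f^t(z) of du/lambda(u) = t
    a, b, n = z, ft(t, z), 100000
    integral = sum(1 / lam(a + (k + 0.5) * (b - a) / n) for k in range(n)) * (b - a) / n
    print(integral)                                  # ~ 0.6 = t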

Ref: almost all of these results were concluded by Ramanujan; what I did was just add more details, since his paper was, like, filled with "Q.E.D."


A generalization of this, you can take in

Leo
Reply
#26
Quote:Ref: almost all of these results were concluded by Ramanujan; what I did was just add more details, since his paper was, like, filled with "Q.E.D."

Thank you for the details. I really needed it!

Quote:Do you mind sourcing me this result? This is absolutely breathtaking.


Btw... I'm sure it is not the oldest reference, but wasn't all of this known to Andydude and Trappmann back in 2008, when they were collecting old knowledge from Jabotinsky and Ecalle to compile the "FAQ" and "Tetration Reference"?

bo198214, Jabotinsky's iterative logarithm, 2008

MathStackExchange account:MphLee

Fundamental Law
Reply
#27
(06/09/2021, 03:00 PM)MphLee Wrote: Thank you for the details. I really needed it!
...
bo198214, Jabotinsky's iterative logarithm, 2008
Indeed. I hadn't been searching for the oldest ref, and wasn't aware that someone had done this great work so early.
You can also show that if an iteration has an RS model expression, you can't avoid a definition of Julia's equation: take any Zf=gZ, take the derivative, and there's only one way to make the equation contain only Z' rather than Z & Z' or Z' & Z'', which means the whole RS definition is exactly a derivation from Abel's equation.
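For example (just my own spelling-out of the claim), writing the conjugacy Zf = gZ as Z(f(z)) = g(Z(z)) and differentiating gives

\[
Z(f(z)) = g(Z(z)) \;\Longrightarrow\; Z'(f(z))\,f'(z) = g'(Z(z))\,Z'(z),
\]

and in the Abel case g(w) = w + 1 the right-hand side is just Z'(z), so the relation involves only Z'; with \lambda = 1/Z' this is exactly Julia's equation \lambda(f(z)) = f'(z)\,\lambda(z).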

Btw, I wonder if you could please introduce some refs about continuous hyperoperators; it would be a great favor,
if it's convenient for you.

I had this quick thought (maybe coinciding with earlier posts).
It's been 2 years since I started wondering whether there is a method to generalize an operator T to all of its powers.
For instance, an ODE can be written as H(y)=0 where H is an operator; solving this ODE is exactly doing y=H^-1(0). I'm already aware that the method sometimes works for ODEs, as shown in some essays.
Likewise, iteration is defined by the operators A and B, whose inverses
A^-1(f)=f(P(f^-1)) answers "whose superfunction is f"
B^-1(f)=f^-1(P(f)) answers "whose Abel function is f"
where P is the "plus one" operator.
If we find a proper definition of A+1 or (A^-1)+1, we can use binomial series to generalize its exponentiations.
I wonder if this method works, and what about other operators, like d/dx? I may test fractional calculus some day soon.
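As a very rough sketch of what I mean (a toy of my own, with a 2x2 matrix standing in for the operator): write T^s = (I + (T - I))^s and expand by the binomial series, truncating after a few terms; for s = 1/2 the truncated sum should square back to T when T - I is small.

    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

    def madd(A, B, c=1.0):
        return [[A[i][j] + c * B[i][j] for j in range(2)] for i in range(2)]

    I = [[1.0, 0.0], [0.0, 1.0]]
    T = [[1.2, 0.1], [0.0, 1.1]]          # "the operator", close to I so the series converges
    D = madd(T, I, -1.0)                  # D = T - I

    s, terms = 0.5, 40
    S, Dk, coeff = [[0.0, 0.0], [0.0, 0.0]], I, 1.0
    for k in range(terms):
        S = madd(S, Dk, coeff)            # add C(s,k) * D^k to the partial sum
        coeff *= (s - k) / (k + 1)        # C(s,k+1) = C(s,k) * (s-k) / (k+1)
        Dk = matmul(Dk, D)

    print(matmul(S, S))                   # ~ [[1.2, 0.1], [0.0, 1.1]], i.e. T^(1/2) squared is T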



Regards, Leo
Reply
#28
(06/09/2021, 06:00 PM)Leo.W Wrote: You can also show that if an iteration has an RS model expression, you can't avoid a definition of Julia's equation: take any Zf=gZ, take the derivative, and there's only one way to make the equation contain only Z' rather than Z & Z' or Z' & Z'', which means the whole RS definition is exactly a derivation from Abel's equation.

Can you expand on this?

Quote:Btw, I wonder if you could please introduce some refs about continuous hyperoperators; it would be a great favor,
if it's convenient for you.

Sure. I'll give you a LaTeX version of an old answer I gave on MSE. Here you can find probably every reference on non-real ranks up to 2015. I have a few more references, but they're recent stuff.


Attachment: (2015) MphLee - Introduction to the non-integer ranks problem.pdf (Size: 391.07 KB)

Quote:If we find a proper definition of A+1 or (A^-1)+1, we can use binomial series to generalize its exponentiations.
I wonder if this method works, and what about other operators, like d/dx? I may test fractional calculus some day soon.

Regards, Leo

Maybe. This approach was recently discussed in these threads: here (2021, JmsNxn's update of his 2015 approach) and here (2021, in the context of functors and operators).
In the past, the problem of the lack of linearity when iterating operators using fractional calculus was also observed here (2015), here (2016), and here, discussing the possible link with computability (2017).

MathStackExchange account:MphLee

Fundamental Law
Reply
#29
(06/09/2021, 06:00 PM)Leo.W Wrote: If we find a proper definition of A+1 or (A^-1)+1, we can use binomial series to generalize its exponentiations.
I wonder if this method works, and what about other operators, like d/dx? I may test fractional calculus some day soon.



Regards, Leo

Hey, Leo.

So the fractional calculus approach is equivalent to binomial series. This was discussed a long time ago on here when I first pointed out the fractional calculus approach.

If I take,



This iteration will be equivalent to the expansion,



And you can show that if one converges, so does the other.

This Newton series will be equivalent to a binomial expansion.

This method will work in a lot of scenarios; the difficulty tends to be slow convergence.
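For instance, here is a hedged sketch with a toy map of my own (f(z) = z/2, where the exact answer f^s(z) = z/2^s is available for comparison): the Newton series interpolates the integer iterates n -> f^n(z), and it is the same thing as the binomial expansion.

    from math import comb

    def newton_fractional_iterate(f, z, s, terms=40):
        # Newton-series interpolation of the integer iterates:
        #   f^s(z) ~ sum_n C(s,n) * sum_{k<=n} C(n,k) * (-1)^(n-k) * f^k(z)
        orbit = [z]                                # f^0(z), f^1(z), ..., f^(terms-1)(z)
        for _ in range(terms - 1):
            orbit.append(f(orbit[-1]))
        total, coeff = 0.0, 1.0                    # coeff tracks the binomial C(s,n)
        for n in range(terms):
            delta_n = sum(comb(n, k) * (-1) ** (n - k) * orbit[k] for k in range(n + 1))
            total += coeff * delta_n               # n-th finite difference at 0
            coeff *= (s - n) / (n + 1)             # C(s,n+1) = C(s,n) * (s-n) / (n+1)
        return total

    f = lambda z: z / 2
    for s in [0.5, 1.5, 2.25]:
        print(s, newton_fractional_iterate(f, 1.0, s), 1.0 / 2 ** s)   # the two columns agree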

Regards, James

I cannot believe I ever missed this application for Julia's equation. This is fantastic!

If  is Julia's function for ; then,

Reply
#30
Thank you both, James and MphLee, it's very kind of you!

Blush

I'm busy these days, sorry to be late in saying thank you; these refs and statements did help!

Sincerely,
Leo
Reply

