Hi everyone, I'm an amateur in iteration theory. I recently read one of @MphLee's posts, which reminded me of my previous work, so I started reorganizing it.
I've been working for many years on what @MphLee calls generalized superfunctions (which I prefer to call the "eigen-decompositive functional equation"). I have little knowledge of set theory, so my work may contain some logical flaws (please point them out, I'd really appreciate it).

I call it "eigen-decompositive" because the equation that generalized superfunctions satisfy is analogous to the eigenvalue decomposition of a matrix in linear algebra. Some may find it just as analogous to conjugacy, so the name doesn't matter much.

Here I'm posting the first part of my main work, in which I use the term "multivalued" and the concept of a "multivalued function" many times. Even though the term remains controversial to this day, it (or the concept) still helps a lot in iteration theory (the reason will be explained soon; updating). If anyone has trouble understanding it, I'd recommend thinking of it as a function having many branch cuts.

I hope my work could be contributive to the modern iteration theory.
Regards.

One of Sheldon's posts mentioned Peter Walker's 1991 paper:

https://math.eretrandre.org/tetrationfor...p?tid=1292 In it he described a function connecting the natural tetration to the generalized iteration of f(z)=exp(z)-1. The method is the same as the Cancel Law (f(z)=z+1, g(z)=exp(z)-1, h(z)=exp(z)).

For most generalized iterations of functions lacking a convenient finite fixed point, such as f(z)=2sinh(ln(z)) https://math.eretrandre.org/tetrationfor...p?tid=1305, f(z)=z+z^-1, f(z)=2z+z^-1, f(z)=z+exp(z), f(z)=z+ln(z), we can use a transformation of the fixed points (i.e. conjugacy):

f(z)=2sinh(ln(z)), q(z)=z^-1: F(z)=q(f(q^-1(z)))=-csch(ln(z))/2=z/(1-z^2), with parabolic fixed point 0
f(z)=z+z^-1, q(z)=z^-1: F(z)=q(f(q^-1(z)))=z/(z^2+1), with parabolic fixed point 0
f(z)=2z+z^-1, q(z)=z^-1: F(z)=q(f(q^-1(z)))=z/(z^2+2), with attracting fixed point 0
f(z)=z+exp(z), q(z)=exp(z): F(z)=q(f(q^-1(z)))=z*exp(z), with parabolic fixed point 0
f(z)=z+ln(z), q(z)=ln(z): F(z)=q(f(q^-1(z)))=ln(z+exp(z)), with fixed point 0 (multiplier F'(0)=2)

Then we generate F^t(z) and plug it into f^t(z)=q^-1(F^t(q(z))).
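As a small sanity check, these conjugations can be verified numerically; the following Python sketch (my own illustration, not from the post) confirms that F = q∘f∘q^(-1) matches the closed forms above and fixes 0:

```python
# Numeric check of fixed-point transformations F(z) = q(f(q^-1(z))).
# The closed forms for F are taken from the post; test points are arbitrary.
import cmath

cases = [
    # (f, q, q^-1, closed form of F)
    (lambda z: z + 1/z,          lambda z: 1/z,          lambda z: 1/z,
     lambda z: z / (z*z + 1)),
    (lambda z: 2*z + 1/z,        lambda z: 1/z,          lambda z: 1/z,
     lambda z: z / (z*z + 2)),
    (lambda z: z + cmath.exp(z), lambda z: cmath.exp(z), lambda z: cmath.log(z),
     lambda z: z * cmath.exp(z)),
]

for f, q, qinv, F in cases:
    for z in [0.3, 0.5 + 0.2j, 1.1]:       # away from singular points
        assert abs(F(z) - q(f(qinv(z)))) < 1e-9
    assert abs(F(1e-9)) < 1e-8             # 0 is (numerically) a fixed point
print("conjugacy transformations verified")
```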

Welcome to the forum! It's always nice to get fresh blood.

This is interesting; I'm quite the fan of the notation .

I'm wondering, do you have any general idea on how to define the character,

For general instances? This is something I've been stuck on; developing a general theory to handle arbitrary ; and I keep hitting dead ends. I know MphLee is a fan of black-boxing it. I'm curious to hear what he'll have to say about this. As for conjugating fixed points, this is pretty standard, and many authors have done it before. The trouble is doing it for exotic scenarios, and not just when it's convenient. I do believe that generally finding well-behaved solutions to,

is an open problem. And really, the famous examples are the Schröder/Abel/Böttcher equations. Doing this for, say, would be something else though, lol.

Anyway, welcome to the forum! I hope we can be of service, and we can all learn together.

05/05/2021, 11:43 AM (This post was last modified: 05/05/2021, 06:15 PM by Leo.W. Edit Reason: Grammar correction and small changes)

(05/05/2021, 02:59 AM)JmsNxn Wrote: Hey, Leo

[...]

Thank you, James! I'm really happy to join you all, and I'm a big fan of your elaborate previous work; so impressive!

The case seems really horrible.
However, you can use the "Cancel Law" I mentioned in Section II:
letting h(z)=tan(z), f(z)=exp(exp(z)), and simply adding g(z)=z+1,
the phi{h|f} function can be represented as phi{h|g}(phi{g|f}(z)), where phi{h|g}=phi{tan(z)|z+1} is the superfunction of tan(z) and phi{g|f}=phi{z+1|exp(exp(z))} is the Abel function of exp(exp(z)). The Cancel Law produces infinitely many ways of solving the same equation, by choosing different g(z).
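Since neither the superfunction of tan nor the Abel function of exp(exp(z)) is easy to compute directly, here is a toy check of the Cancel Law itself, using simple stand-ins of my own choosing (f(z)=z+1, g(z)=2z, h(z)=4z) whose solutions are known in closed form: phi{g|f}(z)=2^z and phi{h|g}(z)=z^2.

```python
# Cancel Law check: phi{h|f} = phi{h|g} o phi{g|f} for toy functions
# f(z)=z+1, g(z)=2z, h(z)=4z (illustrative stand-ins, not from the post).
f = lambda z: z + 1
g = lambda z: 2 * z
h = lambda z: 4 * z

phi_gf = lambda z: 2 ** z             # solves phi(f(z)) = g(phi(z))
phi_hg = lambda z: z ** 2             # solves phi(g(z)) = h(phi(z))
phi_hf = lambda z: phi_hg(phi_gf(z))  # Cancel Law composite, equals 4^z

for z in [0.3, 1.7, 2.5 + 0.4j]:
    assert abs(phi_hf(f(z)) - h(phi_hf(z))) < 1e-9
print("Cancel Law verified on the toy example")
```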

Another law I forgot to mention: for a function f and any iterate f^t with t nonzero,
we can always carry one function's superfunction family (Abel, Schröder, Böttcher, and so on) over to another's. To see this, suppose Z satisfies Z f=g Z, and notice that Z (f^t)=(Z f) f^(t-1)=g Z f^(t-1)=g (Z f) f^(t-2)=g (g Z) f^(t-2)=g^2 Z f^(t-2)=...=(g^t) Z, so Z also satisfies the identity Z (f^t)=(g^t) Z, unless t is 0. We then arrive at Z=Z{g|f}~=Z{g^t|f^t}, where A~=B denotes the relation that A and B satisfy the same equation, so possibly A=B (and possibly not). This is the "Invariant Law". Then we use the Cancel Law: Z{g|f^t}(z)=Z{g|g^t}(Z{g^t|f^t}(z))~=Z{g|g^t}(Z{g|f}(z)), where Z{g|g^t} is solvable: using the Cancel Law once more, Z{g|g^t} can be reduced to the case g(z)=s*z, and then it's easy to check that Z{g|g^t}=c*z^(1/t), where c is a nonzero constant. I prefer to call this the "Combination Law".

I've been working on the exotic cases for a long time. Say the function f has a fixed point L with f'(L)=s.
For the classic exotic case s=1, the Julia equation is always solvable in coefficients (if you assume a series for it), which gives an approach to the Abel function. Then assume the asymptotic expansion of the superfunction at infinity, solve the coefficients term by term, and the superfunction is solved. We should be careful with the direction of complex infinity, since we may generate different branch cuts of the superfunction. This method is already well known.

When I attempted to solve the "general exotic case"/parabolic case, where s=exp(2*pi*I*q) and q is a real rational number (and not an integer), it turned out this case has no solution that can be generated from L, which is also pretty well known.

However, let's look at the 2-periodic case f(z)=-z(1-z), the logistic recurrence with s=-1:

First I noticed that although the fixed point 0 is unsolvable, the function has another: f(2)=2 with f'(2)=3, hence we can generate the superfunction of f at L=2.

Then I calculated that every quadratic function F(z)=a z^2+b z+c is conjugate to the simplest case f(z)=z^2+v, where v=a*c-b(b-2)/4, and the "conjugator" is a linear function.
So I asked whether every general exotic case of f(z)=z^2+v has another, solvable fixed point, and it turns out this is true except for v=1/4.
(To check, simply solve f(z)=z, denoting the solutions z1 and z2; by Vieta's formulas about polynomial roots, f'(z1)+f'(z2)=2, so if f'(z1) lies on the complex unit circle, then f'(z2) lies outside it.)
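Both claims are easy to verify numerically. In the sketch below, the linear conjugator l(z)=a·z+b/2 is my own guess (the post only says it is linear); with it, l(F(z))=f(l(z)) holds with v=a·c-b(b-2)/4, and the Vieta identity f'(z1)+f'(z2)=2 follows since z1+z2=1.

```python
# Check: F(z)=a z^2 + b z + c is conjugate to f(z)=z^2+v, v = a*c - b(b-2)/4,
# via the linear map l(z) = a*z + b/2 (my guess for the conjugator).
import random, cmath

random.seed(1)
for _ in range(5):
    a = random.uniform(0.5, 2.0)
    b = random.uniform(-2.0, 2.0)
    c = random.uniform(-1.0, 1.0)
    v = a * c - b * (b - 2) / 4
    z = random.uniform(-1, 1)
    F = a * z * z + b * z + c
    assert abs((a * F + b / 2) - ((a * z + b / 2) ** 2 + v)) < 1e-9

# Vieta: fixed points of f(z)=z^2+v solve z^2 - z + v = 0, so z1+z2 = 1
# and f'(z1) + f'(z2) = 2(z1+z2) = 2, whatever v is.
v = 0.7
d = cmath.sqrt(1 - 4 * v)
z1, z2 = (1 + d) / 2, (1 - d) / 2
assert abs(2 * z1 + 2 * z2 - 2) < 1e-12
print("conjugacy formula and Vieta identity verified")
```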
Now I really wonder whether there exists a function whose multipliers at all of its fixed points are unsolvable. The case f(z)=z+(z-c)^n is obviously parabolic, with an nth-order multiple fixed point at c.

Second, I tried to calculate the asymptotic expansion of the superfunction T generated from L=0, f(z)=-z(1-z).
Applying the "Combination Law" above: since f(f(z))=z-2*z^3+z^4 is a classic parabolic case, whose superfunction has leading asymptotic term z^(-1/2)/2, the superfunction of f should have an asymptotic expansion with leading term (2 z)^(-1/2).
So I assumed T(z)~(2 z)^(-1/2)~f((2 z)^(-1/2)), and after a slight rearrangement I used the initial guess
T_0(z)=cos(pi z/2)^2/sqrt(2 z+6)+sin(pi z/2)^2*f(1/sqrt(2 z+4))
and applied T(z)=f^-1(T(z+1)) repeatedly to make it converge. Although the iterates tend to 0 rather than converging to a smooth function, I plotted the T function after 50 iterations, T_50(z)=f^-51(f(T_0(z+50))), and it satisfied the relation T(z+1)=f(T(z)) to about 7 decimal places. So I guess the superfunction can be generated, just by a rather laborious method?
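The refinement T_n(z)=f^(-n)(T_0(z+n)) can be sketched in a few lines. Note the inverse branch (1-sqrt(1+4y))/2 is my own choice (it is the branch fixing 0); I make no claim here about the convergence rate, only that the construction runs and stays on the scale of the parabolic orbit.

```python
# Sketch of the refinement T_n(z) = f^{-n}(T_0(z+n)) for f(z) = -z(1-z),
# with the initial guess T_0 from the post and an inverse branch fixing 0.
import math, cmath

f = lambda z: z * z - z                              # -z(1-z)
finv = lambda y: (1 - cmath.sqrt(1 + 4 * y)) / 2     # branch with finv(0)=0

def T0(z):
    c, s = math.cos(math.pi * z / 2) ** 2, math.sin(math.pi * z / 2) ** 2
    return c / math.sqrt(2 * z + 6) + s * f(1 / math.sqrt(2 * z + 4))

# By construction the guess satisfies T_0(1) = f(T_0(0)) exactly.
assert abs(T0(1) - f(T0(0))) < 1e-12

def T(z, n=50):
    w = T0(z + n)
    for _ in range(n):
        w = finv(w)
    return w

val = T(0)
assert abs(val) < 0.6   # stays on the scale of the parabolic orbit
print("T_50(0) ~", val)
```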

Lastly, here's the plot of the T_50 function.
The image was exported with the Wolfram Mathematica function AbsArgPlot.
(I'm sorry for my poor English; if I said anything offensive or impolite, I sincerely apologize.)
(PS: I wonder how you type a math formula in your replies? Could you please help me with that?)

Hey, Leo, very interesting stuff. Don't worry about your English, you speak very well; it's alright.

To add in Latex on this forum just write

Code:

[tex]
math goes here
[/tex]

As to what you do with ; that's very clever. That's particular to what MphLee and I were talking about. I guess my question was more so, how do you plan to choose which superfunction we produce? That was more what I was asking.

I get that,

I was wondering if you had any method where it chooses a particular super function and .

(05/05/2021, 07:53 PM)JmsNxn Wrote: Hey, Leo, very interesting stuff. Don't worry about your English, you speak very well; it's alright.

[...]

Thanks a lot, James!

(Maybe this is about how I choose a superfunction; I don't know if I've expressed it clearly.)
I suggest we denote by the specific Abel function of f(z), by the Schröder function, and so on.
Straightforwardly, the superfunction is exactly (or probably) equal to .

Choosing the appropriate branch cut of is kind of tricky. First, we should choose a branch cut that has the largest range: an entire cut whose range is the whole complex plane is better than a cut that is either not entire or whose range is only a subset of . The advantage of this choice is that the we want will be entire; otherwise it may contain branch cuts or be undefined for some values.

For instance, the superfunction of the sine function has infinitely many branch cuts; the most common, or most often generated, one is . By iterating we obtain this branch cut. However, it is not entire; in fact the range of this function contains no negative real numbers. The range stays approximately within the area .
Another branch cut that is quite approachable is , with the initial condition , which is connected to the superfunction of sinh(z): ; if this cut were an entire function, we'd choose it instead of the former one.

However, for some purposes we have to choose non-entire cuts: if you want to generate a real-to-real , using such functions is sometimes inevitable.

For a real-valued function, it's better to choose a real-to-real superfunction than a real-to-complex one, e.g. the , where we use the merged version of .

So the order of what kind of superfunctions I choose is,

if f is real-to-real, T:
real-to-real(entire,merged)>real-to-complex(entire,merged)>real-to-complex(entire,unmerged)
>real-to-real(not entire)>real-to-complex(not entire)

if f is real-to-complex, T: real-to-complex(entire, range is the whole plane, merged)>real-to-complex(entire, range is a subset of C)>real-to-complex(not entire)

And for a function whose superfunction is predicted to show oscillating behavior (such as 0.1^x), real-to-complex(entire, merged) is the best choice.

For the merged function, we use the two fixed points having the least absolute value and the larger real part.
For a real-to-real function, we'd better use a fixed point whose multiplier s>1, since 0<s<1 can cause the function to be non-entire, and s<0 causes the oscillating behavior.

The pics show the (real-to-real) superfunction of tan(z), an entire function (you can see many pole-like singularities and zeroes; there are no cuts, however: the real axis only behaves like a cut owing to the oscillation between singularities and zeroes). (Produced by Wolfram Mathematica 12.2 ComplexPlot.)

I'll need some time to think about this more. I like your examples, though; you're explaining yourself very well. And again, what I mean is: you, as a human, are choosing how to perform the black box. Which is to say,

doesn't have one single clean formula. I was wondering how you were classifying the choices for ; and yes, we can perform Abel iterations; it's reducing it all to a single algorithm that's the real trouble.

For example, on the real line: let's assume we have an analytic superfunction for in a neighborhood of the real line, where here . There always exists a perturbation, which is a 1-periodic function and ; where . Now is also a real-valued superfunction. Furthermore, it's pretty much indistinguishable from any other iteration.

So, as you've introduced it, is a set of functions. The really hard part is distinguishing between the members of that set.

But again, I'll read over more carefully what you've written tomorrow.

05/06/2021, 11:54 AM (This post was last modified: 05/06/2021, 07:30 PM by MphLee. Edit Reason: GREATLY IMPROVED)

BIG EDIT

Hi, I'm sad that I have to answer briefly. I'm taking notes and comments on the key parts of this thread and I'll come up with a complete and polished opinion ASAP.
Lately I've had virtually zero time to put into the forum.
Anyway, here are my two cents.

About the notation

I find the notation you propose over-complicated. Those are just sets, and we don't need to write a representative of a set in front of it; doing so conceptually invites you to see it as a function. You should write instead, hinting at the fact that it is a multi-valued function. If one finds a way to systematically pick a special solution from every set, we can introduce a new functional notation like or .

Since the beginning of my path (2012-2013) I have always regarded having to deal with a set of multiple possible solutions as a limitation. I desperately wanted to find THE special solution in every set, but as JmsNxn says, "The really hard part, is distinguishing between each member of the set". For that reason I started acting as if that set were a unique function, and I derived the same "laws" you derived. I used the notation , naming it the "Abel sum".(*)
After 10 years I can say that this stubbornness slowed me down! I came to understand that we have to embrace that multiplicity and see what happens. What happens is category theory!

One of my old dreams is to come up with an abstract classification of those sets, together with an inner classification of the solutions inside them... I guess the way to go is the theory of bundles and connections, but it is tremendously hard (it links us to mathematical physics).

Nowadays (v) I'd just write . When the meaning is not clear because our functions come from different monoids and of functions, I'd go for and . It turns out that those sets are nothing but Hom-sets, and they define a functor from the category of dynamical systems (or N-actions) to the category of sets and functions.

One more example of why the notation may not be comfortable enough: I find it gentler to the writer to denote by f(x) not the function f but the value of the function f at x. So when you write , we should rather have , where is the Abel function of f. The consequence of not doing so is that, on a bad day, you could end up writing things like

This may seem pedantic and useless. I don't know how things go in the analysis realm, but in set theory and algebra, finding the right notation opens the door to new intuition and abstraction (and old, seemingly unrelated theorems become available).

About the Laws

The sets you define do indeed satisfy the properties you describe in the opening post. Those properties have nothing to do with our functions being analytic, continuous, or multi-valued. In fact, those "laws" do not even derive from f, g being functions. Most of those propositions come for free from the fact that functions form a monoid with respect to composition. Those "laws" are low-level shadows of a vaster zoo of "elementary laws" that come from the structure of categories(iv)!

Some of them do not even require a monoid structure: those are a much more "primordial" kind of law, holding, for example, for solution sets of equations in far more general settings. Take the way every solution set of a non-homogeneous system of linear equations (the fiber/preimage of a linear transformation at a given vector) can be decomposed/factored into a translation of the solution set of the homogeneous system: you know all the solutions of Ax=b if you know a single solution and the solution set of Ax=0, i.e. the null space of the matrix. In the same way, the set of antiderivatives of a function decomposes into the sum of one known solution with the set of antiderivatives of zero (aka the constants +C).
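The linear-algebra example can be made concrete in a few lines of NumPy (an under-determined system of my own choosing): a particular solution of Ax=b plus any null-space vector is again a solution.

```python
# Solutions of Ax = b decompose as (particular solution) + (null space of A).
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
b = np.array([6.0, 15.0])            # consistent: x = (1,1,1) works

x_part = np.linalg.lstsq(A, b, rcond=None)[0]   # one particular solution

# Null space from the SVD: rows of Vt beyond the rank span ker(A).
_, sv, Vt = np.linalg.svd(A)
rank = int((sv > 1e-10).sum())
null_basis = Vt[rank:]

for t in [-2.0, 0.0, 3.5]:          # every translate is again a solution
    x = x_part + t * null_basis[0]
    assert np.allclose(A @ x, b)
print("Ax=b solution-set decomposition verified")
```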

Just some examples:

If every solution set had a unique element(**), then the "Inversion Law" would be called anticommutativity(***) and would be equivalent to conjugacy being symmetric. It holds exclusively for groups of functions, and it's basically a corollary of working with a groupoid.

The "Cancel Law" is basically composition in the category or, from another point of view, transitivity of conjugation.

Following the properties of equivalence relations, there is the last one, i.e. reflexivity, and we can exhibit it as the third natural property of those sets: the identity is a solution of fx=xf.

Law 5 comes from the solution set being a torsor (just as solution sets of non-homogeneous linear equations are affine spaces). But it is also a corollary of the Cancel Law.

The Invariant Law is more interesting and subtle. I still don't know exactly where it comes from; I'm currently testing how far it can be generalized.

The thing about fixed points is standard knowledge, of course. Fixed points are dynamical properties, preserved when two dynamics are conjugate. Every , i.e. solution to xf=gx, is a morphism of dynamical systems. Morphisms of dynamical systems preserve positive dynamical properties (e.g. having n-periodic points) and reflect negative ones. [See the book Conceptual Mathematics by Lawvere and Schanuel.]

The analogy with bases and eigendecomposition is not that clear to me. I'm working on at least three different views of it but still cannot come up with a unifying vision. For sure, finding the superfunction amounts to a change of basis in some sense: we take a dynamical system and change to a new basis in which the dynamics of the initial map become "linear" (see Schröder and Abel). I'd like to know if you have more insight into this.

Oh, btw, welcome to the forum!

(*) The additive notation, which I no longer support as strongly, was chosen to make evident that the operation has left and right inverse operations (conjugation), and it was pivotal for the intuition of the hyperoperation rank-as-iteration of the "Abel sum". I wanted so badly to make JmsNxn's diamond operator formal.

(**) Impossible if the monoid is non-trivial: every element is conjugate to itself by the identity AND by itself. It is impossible even under the most permissive interpretation of the condition, i.e. when f and g are NOT idempotent: if x solves xf=gx, then xf and xf^2 are new solutions.

(***) Anticommutativity-like properties showing up is interesting, because it hints at links with Lie brackets or metric spaces.

(iv) For a crash intro to categories (for algebraically discouraged teens and not-anymore-teens), see this post.

(v) With "nowadays" I could point you to my recent post about generalized superfunction tricks, but that version is old and I have worked on it a lot since. Instead, I give you the new section 1.1.2 on notation from that paper. It is still work in progress.

MSE MphLee
Mother Law \((\sigma+1)0=\sigma (\sigma+1)\)
S Law \(\bigcirc_f^{\lambda}\square_f^{\lambda^+}(g)=\square_g^{\lambda}\bigcirc_g^{\lambda^+}(f)\)

05/06/2021, 09:26 PM (This post was last modified: 05/07/2021, 11:51 AM by Leo.W. Edit Reason: Simplification)

(05/06/2021, 11:54 AM)MphLee Wrote: BIG EDIT

[...]

Thanks for reading, MphLee! Very honored to receive your reply!

I'm reading about category theory; it's really a masterpiece to me! But I'm not that good at handling the notation of set theory, so it may take more time to comprehend.
I see there are some conflicts between my old notation and the notation in your theory; frankly speaking, the notation doesn't matter, you can literally denote things however you want. As for this notation , I've considered whether to plug a "(z)" into it; in analytic integral transforms it's usually denoted with a functor or operator, and so do I. I merely used it here to make my explanation clearer, lol. But sometimes, if you're really in a rush with little spare time to define a new operator from another, as you've written: , or if you're really defining another operator like , having too many kinds of operators can also be a mess... And consider a function containing another variable treated as a constant, like f(z)=s^z; then we have to write... so many things! So I'm neutral and flexible about them. Again, it's just not a big deal. The part may be similar to an operator; though we can denote terms however we like, I think it necessary to add something beyond f and g ({f,g} or AbelSum is adequate); maybe , if you would prefer?
It's just that I'm more of an analytic-math lover. Also, these notations are what I used about 3 years ago, when my curiosity led me to generalize beyond Abel, Schröder, and Böttcher.

Law 5 is not merely a derivation from the "Cancel Law"; it contains more generalization and symmetry. It suggests that f's symmetry should work for every nonzero iterate of f; it's more like .
Sorry, I misunderstood the question about how to distinguish each element of the solution set. In my opinion, the big deal is finding "the most original solution", which I would describe as the one (or two?) solutions generated purely by the limits, sometimes the merged version. Although any 1-cyclic theta mapping leads to different solutions of the same Abel equation, the superfunctions generated from the specific fixed points, and the merged version of the two, should be unique and can be generated by various methods, while other solutions are "opaque", or indirect to generate, I guess. Then every single solution can be obtained by transforming the original one (or two, etc.). It's like solving a second-order ordinary differential equation: the first step is to derive two solutions that are "linearly independent" of each other, while requiring them to have particular values, identities, or properties. The specific ones come first; the general ones come next.
Let's get back to the original question. We want a function f satisfying the relation f(z+1)=s*f(z). According to the 1-cyclic theta mapping, once we have figured out a solution and made it computable, f(z+theta(z)) satisfies the same equation; so how did we choose the specific one f(z)=s^z? You can see that f(z)=c*s^z is still a solution, and c can be arbitrary. But in practice the most widely used function is still f(z)=s^z, as the basic definition of exponentiation. So there lies the variety of solutions, and yet we can still determine which solutions are the most "worthy". This happens often when solving ordinary differential equations, e.g. the Mathieu functions, whose formal definition involves the Wronskian, the number of zeroes, specific infinite sums, orthogonality properties, and particular definite integral values.
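The 1-cyclic theta perturbation is easy to check numerically; the sketch below (illustrative choices of my own) verifies that if theta is 1-periodic, then g(z)=s^(z+theta(z)) satisfies the same equation f(z+1)=s*f(z) as f(z)=s^z.

```python
# Both s^z and its theta-perturbed cousin solve f(z+1) = s*f(z).
import cmath

s = 2.0
theta = lambda z: 0.3 * cmath.sin(2 * cmath.pi * z)  # any 1-periodic function
f = lambda z: s ** z
g = lambda z: s ** (z + theta(z))

for z in [0.0, 0.7, 1.9, 3.2]:
    assert abs(f(z + 1) - s * f(z)) < 1e-9 * abs(f(z))
    assert abs(g(z + 1) - s * g(z)) < 1e-9 * abs(g(z))
print("theta-perturbed solutions verified")
```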
The one link between all possible solutions is that, if we assume a solution can be represented through the "original" one via , then we can see that . So if it commutes, a new solution is obtained. And ALL solutions should be connected in this way or an analogous one (you may use ). For f(z)=z+1, this rho is the theta mapping.
I did know these laws resemble more professional and general terminology from group theory and set theory; I just didn't know what they were called, so thanks! However, their compositions don't behave quite like group elements or category morphisms, owing to their multiplicity, variety, and perhaps the "multivalued" property: the laws sometimes describe a far more complicated "multivalue". You may derive two different functions from the same multiplication, since the multiplication here only describes a way to generate an unknown solution, and the laws only imply a connection between a fraction of all the solutions. So I guess these objects have some properties of categories, and a little of "multigraphs"?

The analogy between bases and eigendecomposition goes through the Carleman matrix. Consider the Schröder function; according to the definition:
, where C[s*z] is clearly a diagonal matrix. This is quite similar to eigendecomposition, in which you conjugate a matrix into a diagonal matrix containing all of its eigenvalues; thus every eigendecompositive equation is equivalent to solving a linear equation (which has multiple solutions, and the solutions can represent a multivalued function): , given C[F] and C[G].
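The Carleman-matrix analogy can be made concrete with a small truncation (my own construction, for illustration): for f(z)=s*z+z^2 with fixed point 0, row m of the Carleman matrix holds the coefficients of f(z)^m; the truncated matrix is then triangular, and its eigenvalues are exactly s^m, just as in an eigendecomposition.

```python
# Truncated Carleman matrix of f(z) = 2z + z^2: eigenvalues are s^m.
import numpy as np

N, s = 6, 2.0
fcoef = np.zeros(N)
fcoef[1], fcoef[2] = s, 1.0         # coefficients of f(z) = 2z + z^2

C = np.zeros((N, N))
row = np.zeros(N)
row[0] = 1.0                        # f(z)^0 = 1
for m in range(N):
    C[m] = row
    row = np.convolve(row, fcoef)[:N]   # next power f(z)^(m+1), truncated

eig = np.sort(np.linalg.eigvals(C).real)
assert np.allclose(eig, [s ** m for m in range(N)])
print("Carleman eigenvalues:", eig)   # 1, 2, 4, 8, 16, 32
```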
Likewise, when dealing with operators, there is quite a similarity between linear algebra (though mostly about linear transformations) and set theory/group theory (the most encyclopedic handler).
The laws of both have something in common, like anticommutativity in multiplication and composition. I suppose these laws are not completely equivalent to the general ones, though there's much similarity; maybe someday we can really combine multivalued iteration (which can be so sophisticated) with the general laws.