(05/06/2021, 11:54 AM)MphLee Wrote: [ -> ]BIG EDIT

Hi, I'm sad that I have to answer briefly. I'm taking some notes and comments about the key parts of this thread and I'll come up with a complete and polished opinion ASAP.

Lately, I have had virtually zero time to put into the forum.

Anyway, I can give my two cents.

[...]

(v) Regarding "nowadays", I can point you to my recent post about generalized superfunction tricks. But that version is old and I have worked on it a lot since then. Instead, I give you the new section 1.1.2 on notation from that paper. It is still a work in progress.

Thanks for reading, MphLee! Very honored to receive your reply!

I'm reading about the category theory; it's really a masterpiece to me! But I'm not that good at dealing with set-theoretic notation, so it may take me more time to comprehend.

I see that there are some conflicts between my old notation and the notation in your theory, and frankly speaking, the notation doesn't matter; you can literally denote things however you want. As for this notation

, I've considered whether to plug a "(z)" into it; in analytic integral transforms it's usually denoted with a functor or operator, and so do I. I merely used

here to make my explanation clearer lol.

But sometimes, if you're really in a rush and have little spare time to define a new operator from another one, like you've written:

, or if you're really defining another operator like

, then having too many kinds of operators can also be a mess... And consider a function that contains another variable treated as a constant, like f(z)=s^z: then we have to write

... so many things! So I'm neutral and flexible about them.

Again, it's ... just not a big deal

The part may be similar to an operator. Though we can denote any term however we like, I think it necessary to add something beyond f and g ({f,g} or AbelSum would be adequate), if you would prefer.
Just because I'm more of an analytic-math lover

And also, these notations are what I used, like, 3 years ago, when my curiosity opened up to generalizing the idea beyond Abel, Schröder, and Böttcher.

Law 5 is not quite a derivation from the "Cancel Law"; it contains more generalization and symmetry. It suggests that f's symmetry should work for every nonzero iterate of f; it's more like

.

Sorry, I previously took the question about how to distinguish each element of the solution set the wrong way. In my opinion, it really is a big deal to find "the most original solution", which I would describe as the one (or two?) solutions generated only by the limit constructions, and sometimes their merged version. Although any 1-cyclic theta mapping leads to different solutions of the same Abel equation, the superfunctions generated from the specific fixed points, and the merged version of the two, should be unique and can be generated by various methods, while other solutions are "opaque", or indirect to generate, I guess. Then every single solution can be transformed from the original one (or two, etc.). It's like solving a second-order ordinary differential equation: the first step is to derive two solutions that are linearly independent of each other, and meanwhile make those two solutions satisfy some particular values, identities, or properties. The specific ones come first; the general ones come next.

Let's get back to the original problem. We want a function satisfying the relation f(z+1) = s·f(z). By the 1-cyclic theta mapping, once we have figured out one solution and made it computable, f(z+θ(z)) satisfies the same equation. So how did we choose the very specific one f(z) = s^z? I mean, f(z) = c·s^z is still a solution, and c can be arbitrary. But in practice the most widely used function is still f(z) = s^z, as the basic definition of exponentiation. So I think there lies the variety of solutions, and yet we can still determine which solutions are the most "worthy". This happens a lot in the field of ordinary differential equations, for example the Mathieu functions, whose formal definition involves the Wronskian, the number of zeroes, a specific infinite sum, orthogonality properties, and certain definite integral values.
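As a quick numerical sanity check (a sketch in Python; the base s and the constant c below are arbitrary illustrative choices, nothing canonical), both f(z) = s^z and f(z) = c·s^z satisfy the same relation f(z+1) = s·f(z):

```python
import numpy as np

s = 2.0          # arbitrary base, just for illustration
c = 3.7          # arbitrary nonzero constant

z = np.linspace(-2.0, 2.0, 101)

f1 = lambda w: s ** w          # the "standard" solution
f2 = lambda w: c * s ** w      # equally valid: f(z+1) = s*f(z) still holds

print(np.allclose(f1(z + 1), s * f1(z)))   # True
print(np.allclose(f2(z + 1), s * f2(z)))   # True
```

So the functional equation alone cannot single out s^z; some extra normalization (like s^0 = 1) has to do that.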

The one link between all possible solutions is this: if we assume a solution g can be represented through the "original" one as g(z) = f(ρ(z)), then we can see that g(z+1) = f(ρ(z+1)). So if ρ and the successor map z ↦ z+1 are commutable, that is ρ(z+1) = ρ(z)+1, then g(z+1) = f(ρ(z)+1) = s·f(ρ(z)) = s·g(z), and a new solution is claimed. And ALL solutions should be connected in this way or in some analogous way (you may use a composition on the other side instead). For f(z) = z+1, this ρ is exactly the theta mapping.
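A minimal numerical sketch of this construction (Python; the particular θ below is just one arbitrary 1-periodic choice):

```python
import numpy as np

s = 2.0
f = lambda z: s ** z                            # "original" solution of f(z+1) = s*f(z)
theta = lambda z: 0.1 * np.cos(2 * np.pi * z)   # any 1-periodic function
rho = lambda z: z + theta(z)                    # hence rho(z+1) = rho(z) + 1
g = lambda z: f(rho(z))                         # candidate new solution

z = np.linspace(-3.0, 3.0, 201)
print(np.allclose(rho(z + 1), rho(z) + 1))      # rho commutes with z -> z+1: True
print(np.allclose(g(z + 1), s * g(z)))          # so g solves the same equation: True
```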

I did know that these laws bear a similarity to more professional and general terminology from group theory and set theory; eh... I just didn't know what they're called... so thanks! However, their compositions don't behave quite like group elements or category morphisms, due to their multiplicity, variety, and maybe their "multi-valued" nature, because the laws sometimes describe a far more complicated multivalue: you may derive two different functions through the same multiplication, since the multiplication here only describes a way to generate an unknown solution, and the laws only imply a connection between a portion of all the solutions. So I guess these constructions have some properties of categories, and a little of "multigraphs"?

The analogy between a basis and the eigendecomposition goes through the Carleman matrix. Consider the Schröder function Ψ of f: by definition Ψ(f(z)) = s·Ψ(z), which in Carleman form (with the convention C[f∘g] = C[f]·C[g]) reads

C[Ψ]·C[f] = C[s·z]·C[Ψ]

, where C[s·z] is clearly a diagonal matrix (its entries are the powers s^j), quite similar to the eigendecomposition, where you conjugate a matrix into a diagonal one containing all of its eigenvalues. Thus every such conjugacy equation is equivalent to solving a linear matrix equation (which has multiple solutions, and the solutions can refer to a multivalued function):

C[X]·C[F] = C[G]·C[X]

, given C[F] and C[G], with C[X] unknown.

Likewise, when dealing with operators, there is quite a similarity between linear algebra (though it is mostly about linear transforms) and set theory/group theory (the most encyclopedic handler).

The laws of both have something in common, like the anticommutativity between multiplication and composition. I suppose these laws are not completely equivalent to the general ones, though there is much similarity; maybe someday we can really combine the multivalued iteration (which can be so sophisticated) with the general laws.

Regards, Leo