This follows from the discussion held at: Some "Theorem" on the generalized superfunction, (May 07, 2021), Tetration Forum.

Quote: Such that \(f^{\circ s}\circ f^{\circ t}=f^{\circ s+t}\). A little-used theorem on this forum is classifying this superfunction as a flow.

I like to call this object,

\[\ell(x):=\frac{\partial}{\partial s}f^{\circ s}(x)\Big|_{s=0}\]

This object can be referred to as the logarithm of the superfunction; traditionally it is known as the generator of a flow map. Now, for every superfunction there is one logarithm, and for every logarithm there is one superfunction. There is no obvious connection between the logarithm and the initial function \(f\), but you can derive a formula for it.

Ok... I was reading this point of James' post in the thread linked above and something just clicked in my brain. The downside of working on the elementary building blocks of iteration theory is that I'm lagging light-years behind Sheldon's cutting-edge computations, Trappmann and Kouznetsov's method, and all of James' holomorphic witchcraft. But I can see some light. I'm now strongly convinced that the composition integral program is the right way.

The following is a preliminary meditation on the partial derivatives of flows and on how to rephrase the key ingredients in categorical terms. I'm excited because this is the first time I've been able to embed differential features into my algebraic framework.

This is not (only) an empty philosophical post about notations and definitions. I tried to generalize the factorization of the partial derivative of a flow in terms of the Jabotinsky iterative logarithm and the flow itself, derived from this the property that makes the partial derivative a natural transformation, and added some exotic corollaries.

SOME NOTATIONAL LUNA PARK

Remember the definition of a \(T\)-time dynamical system on X (or \(T\)-system): it is just a left monoid action on the set X. Here \(T\) is the monoid of time (better if commutative, even better if an abelian group), written in additive notation. When the monoid is the group of real numbers \((\mathbb{R},+)\) we call it a flow on X. When X is a vector space and the action is linear, we have a linear representation of \(T\).

def 1a. A T-action is a map \(f:T\times X\to X\) s.t. \(f(0,x)=x\) and \(f(s+t,x)=f(s,f(t,x))\) for all \(s,t\in T\) and \(x\in X\).

Observation. The definition seems abstract and arbitrary... but it really isn't. Remember the curry isomorphism, the isomorphism of "currying a variable". It shows that the set of functions \(X^{A\times B}\) is "essentially the same" as the set \((X^B)^A\) of function-valued functions. The isomorphism is given by the map "curry the B variable"

\[\Lambda:X^{A\times B}\longrightarrow (X^B)^A\]

where \(\Lambda(f)(a)(b)=f(a,b)\) and \(\Lambda^{-1}(g)(a,b)=g(a)(b)\).

def 2. It is now evident that \(f:T\times X\to X\) is a T-action on X if and only if \(\Lambda f:T\to X^X\) is a monoid homomorphism. Observe that \(X^X\) is also a monoid under composition. If f is a T-action we have \(\Lambda f(0)=\mathrm{id}_X\) and \(\Lambda f(s+t)=\Lambda f(s)\circ\Lambda f(t)\).
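Since everything so far is just equations, a tiny numerical sketch may help. Below I check the action axioms of def 1a and the homomorphism property of def 2 on the sample flow \(f(t,x)=x/(1-tx)\) with real time; the flow and the helper name `curry` are my own assumptions, not from the post.

```python
# Sketch of def 1a / def 2 on a sample flow (my assumption):
# f(t, x) = x / (1 - t*x), a genuine T-action for T = (R, +).

def f(t, x):
    """A T-action: f(0, x) = x and f(s + t, x) = f(s, f(t, x))."""
    return x / (1 - t * x)

def curry(f):
    """Curry the T variable: send f to the map t -> f^t = f(t, -)."""
    return lambda t: (lambda x: f(t, x))

ft = curry(f)

x = 0.3
# Action axioms (def 1a), checked at sample points.
assert abs(f(0.0, x) - x) < 1e-12
assert abs(f(0.2 + 0.1, x) - f(0.2, f(0.1, x))) < 1e-12

# Curried form is a monoid homomorphism (def 2):
# ft(s + t) = ft(s) ∘ ft(t), with ft(0) the identity.
assert abs(ft(0.3)(x) - ft(0.2)(ft(0.1)(x))) < 1e-12
```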

Slogan. Monoid actions are monoid homomorphisms.

Notation. Let \(F\) be a function, well behaved enough, over sets where all the differential structure is in the right place. I write \(\partial_i F\) for the partial derivative in the i-th variable; taking partial derivatives gives us new functions \(\partial_i F\) with the same domain as \(F\).

JOURNEY to CATEGORICAL HELL

Following JmsNxn. I guess it is clear that if our monoid of time has some topological/differential structure (as for \(\mathbb{R}\)- or \(\mathbb{C}\)-actions) we can consider the partial derivative of the monoid action/dynamical system. Let \(f:T\times X\to X\) be a (differentially well-behaved) T-action and consider the partial derivative with respect to the T variable, \(\partial_1 f:T\times X\to X\), as James does in the quote above.

At this point just notice that \(\ell(x):=\partial_1 f(0,x)\) (this \(\ell\) is called the Jabotinsky iterative logarithm of the dynamics), thus we have a

Quote: "Generator lemma" 1. For every \(t\in T\) and \(x\in X\): \(\partial_1 f(t,x)=\ell(f(t,x))\), i.e. \(\partial_1 f^t=\ell\circ f^t\). Proof: differentiate \(f(s+t,x)=f(s,f(t,x))\) in \(s\) at \(s=0\).
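The generator lemma can be sanity-checked numerically. For the sample flow \(f(t,x)=x/(1-tx)\) (again my assumption, not the post's), the iterative logarithm is \(\ell(x)=\partial_1 f(0,x)=x^2\), and a central-difference derivative confirms \(\partial_1 f(t,x)=\ell(f(t,x))\) at a few points:

```python
# Numerical check of the generator lemma ∂₁f(t,x) = ℓ(f(t,x)) on the
# sample flow f(t,x) = x/(1 - t*x), whose iterative logarithm is x².

def f(t, x):
    return x / (1 - t * x)

def d1(f, t, x, h=1e-6):
    """Central-difference approximation of the partial derivative in t."""
    return (f(t + h, x) - f(t - h, x)) / (2 * h)

def ell(x):
    """Iterative logarithm of this flow: ℓ(x) = ∂₁f(0,x) = x²."""
    return x * x

for t, x in [(0.0, 0.4), (0.3, 0.5), (0.7, 0.2)]:
    assert abs(d1(f, t, x) - ell(f(t, x))) < 1e-4
```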

Observe that fixing the variable \(x=x_0\) gives us James' differential equation \(\tfrac{d}{dt}f^t(x_0)=\ell(f^t(x_0))\).

If instead we unleash the curry isomorphism, we can render the generator lemma as a factorization of one-variable functions: \(\partial_1 f^t=\ell\circ f^t\).

Maybe this lemma is not important by itself, but it has interesting consequences if one wants to translate all of this into category theory. With much boldness and unfounded certainty, I declare the following harmless corollary a theorem.

Quote: Theorem 1. For all \(s,t\in T\) we have \[\partial_1 f(s+t,x)=\partial_1 f(s,f(t,x)),\quad\text{i.e.}\quad \partial_1 f^{s+t}=\partial_1 f^{s}\circ f^{t}.\] Proof: \(\partial_1 f^{s+t}=\ell\circ f^{s+t}=\ell\circ f^{s}\circ f^{t}=(\partial_1 f^{s})\circ f^{t}\).

Multiplication corollary. Let n be a natural number; then \(\partial_1 f^{ns}=\partial_1 f^{s}\circ f^{(n-1)s}\).

Weird metric-like corollary. If T is a group, then every \(f^t\) is invertible with \((f^t)^{-1}=f^{-t}\), and \[\partial_1 f^{s-t}=\partial_1 f^{s}\circ (f^{t})^{-1}.\]
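Both Theorem 1 and the metric-like corollary can be checked numerically on the sample flow \(f(t,x)=x/(1-tx)\) (my running assumption; real time is a group, so \(f^{-t}=(f^t)^{-1}\)):

```python
# Numerical check of Theorem 1 and the metric-like corollary on the
# sample flow f(t,x) = x/(1 - t*x) (an assumption of mine).

def f(t, x):
    return x / (1 - t * x)

def d1(f, t, x, h=1e-6):
    # central-difference partial derivative in the first (time) variable
    return (f(t + h, x) - f(t - h, x)) / (2 * h)

s, t, x = 0.4, 0.2, 0.3

# Theorem 1: ∂₁f(s + t, x) = ∂₁f(s, f(t, x)).
assert abs(d1(f, s + t, x) - d1(f, s, f(t, x))) < 1e-4

# Metric-like corollary: ∂₁f(s - t, x) = ∂₁f(s, f(-t, x)),
# since f(-t, -) is the inverse of f(t, -).
assert abs(d1(f, s - t, x) - d1(f, s, f(-t, x))) < 1e-4
```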

I know, all this seems harmless and empty, but it isn't. This equation shows that \(\partial_1 f\) defines a natural transformation from the T-action f, seen as a functor from the time monoid to the state-space category, to the identity T-action, a.k.a. the "still" dynamics on X.

To make this clear, I invite the reader to endure a little abstract madness.

Def 3. Given the monoid T we can build an interesting category \(B_T\) (the category of the right translations of T): begin by adding a bunch of points, one point \(\bullet_s\) for every element \(s\in T\).

Now the arrows: between the points \(\bullet_a\) and \(\bullet_b\) we draw an arrow if there is an element \(t\in T\) such that \(a=b+t\). Let's label that arrow with the name of the element t. We can depict the arrows in the two equivalent ways \(t:\bullet_{b+t}\to\bullet_b\) or \(\bullet_{b+t}\xrightarrow{\,t\,}\bullet_b\).

Composition is easily defined: the composite of \(u:\bullet_{s+t+u}\to\bullet_{s+t}\) followed by \(t:\bullet_{s+t}\to\bullet_s\) is the arrow \(t+u:\bullet_{s+t+u}\to\bullet_s\), and the identity at \(\bullet_s\) is the arrow labeled 0.
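The category \(B_T\) is concrete enough to model directly. Below is a minimal sketch assuming \(T\) is the integers under addition; an arrow labeled t with target s is encoded as the pair `(s, t)`, pointing \(\bullet_{s+t}\to\bullet_s\). The encoding is my own, not the post's.

```python
# A tiny concrete model of B_T for T = (Z, +). The arrow t with target s
# is the pair (s, t), i.e. the arrow s+t -> s.

def compose(a, b):
    """Compose arrow b followed by arrow a: (s, t) ∘ (s+t, u) = (s, t+u)."""
    (s, t), (s2, u) = a, b
    assert s2 == s + t, "arrows not composable"
    return (s, t + u)

def identity(s):
    """Identity arrow at the object s: the arrow labeled 0."""
    return (s, 0)

# Associativity and identity laws on a sample.
a, b, c = (0, 2), (2, 3), (5, 1)
assert compose(compose(a, b), c) == compose(a, compose(b, c))
assert compose(a, identity(2)) == a and compose(identity(0), a) == a
```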

What does \(B_T\) have to do with T-dynamical systems? Remember that a T-action f on X is, equivalently by currying, a monoid morphism \(\Lambda f:T\to X^X\) that assigns to every time t an endomorphism \(f^t\) of X (on the forum we like to see this endomap as the t-th iterate of the dynamics). We can see that f, as a T-action, automatically defines a corresponding functor from \(B_T\) to the category where X lives. Let's denote it, yes abusing the notation, as \([f]\).

Def 4. \([f]:B_T\to\mathbf{Set}\) is the functor sending every object of \(B_T\) to the same object X and every arrow \(t:\bullet_{s+t}\to\bullet_s\) to the function \(f^t:X\to X\). Functoriality is guaranteed by the definition of dynamical system (T-action).

Let's visualize that! On the left (in red) live the objects and arrows of \(B_T\) (the domain category), and on the right (in black) I display (some of) the objects (just one) living in the target category.

Now consider the trivial T-action on X: the dynamics of the identity map of X. Call it \(\mathrm{id}\), where for every t we get the identity map. As we have seen above, it defines a functor \([\mathrm{id}]:B_T\to\mathbf{Set}\).

Quote: The real theorem 1. Given a T-action f over X, seen as a functor \([f]:B_T\to\mathbf{Set}\), the partial derivative in the T variable of the T-action is a natural transformation \(\partial_1 f:[f]\Rightarrow[\mathrm{id}]\). Its component at the identity element of the monoid is the iterative logarithm of the dynamical system.

In symbols: for \(a,b\in T\) s.t. \(a=b+t\), i.e. for all the arrows \(t:\bullet_{b+t}\to\bullet_b\), we have

\[\partial_1 f^{b+t}=\partial_1 f^{b}\circ f^{t}\]

and \(\partial_1 f^{0}=\ell\).
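The components of this natural transformation can be checked in code. On the sample flow \(f(t,x)=x/(1-tx)\) (my assumption throughout), the component at b is \(\eta_b:=\partial_1 f(b,-)\); the naturality square and the component at 0 both verify numerically:

```python
# Components of ∂₁f as a natural transformation, on the sample flow
# f(t,x) = x/(1 - t*x), whose iterative logarithm is ℓ(x) = x².

def f(t, x):
    return x / (1 - t * x)

def eta(b):
    """Component at the object b: η_b(x) = ∂₁f(b, x), central difference."""
    h = 1e-6
    return lambda x: (f(b + h, x) - f(b - h, x)) / (2 * h)

b, t, x = 0.3, 0.2, 0.4

# Naturality square for the arrow t : b+t -> b, i.e. η_{b+t} = η_b ∘ f^t.
assert abs(eta(b + t)(x) - eta(b)(f(t, x))) < 1e-4

# Component at the monoid identity is the iterative logarithm ℓ.
assert abs(eta(0.0)(x) - x * x) < 1e-4
```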

The proof IS the proof of theorem 1, because that is exactly the condition natural transformations must satisfy by definition. We can visualize it as we did for the functor.

I know, this now seems scary and disturbing, but by natural transformation I mean the same kind of object I was defining in the post on generalized superfunction tricks, and that's why iterated composition fits so well into this. I will expand on this in another post, but let's just observe that the partial derivative satisfies the functional equation \(\partial_1 f^{s+t}=\partial_1 f^{s}\circ f^{t}\).

TROPHIES FROM HELL?

Let's ground this with concrete objects. The problem with considering general T-monoid actions (T-time dynamics) is that our time object T does not know what our initial function is, i.e. monoid actions don't know what process is being iterated. Monoids do not know the "quantum of time" because they have zeros but not units, unlike unital rings. We need rings, which have units, to recover the initial function as the time-one map \(f_1:=f(1,-)=\Lambda f(1)\).
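A minimal sketch of this "quantum of time" idea, assuming real time and the sample flow \(f(t,x)=x/(1-tx)\) (mine, not the post's): the ring unit 1 picks out the initial function \(f_1(x)=x/(1-x)\), and integer times recover discrete iteration.

```python
# Recovering the initial function as the time-one map f₁ = f(1, -)
# on the sample flow f(t,x) = x/(1 - t*x), where f₁(x) = x/(1-x).

def f(t, x):
    return x / (1 - t * x)

f1 = lambda x: f(1.0, x)  # the "quantum of time": the initial function

# Integer times recover discrete iteration: f(3, -) = f₁ ∘ f₁ ∘ f₁.
x = 0.1
assert abs(f(3.0, x) - f1(f1(f1(x)))) < 1e-12
```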

So let's take the field of real numbers \(\mathbb{R}\) as our time and restrict to real analysis. Theorem 1 gives us two corollaries.

Corollaries. i) Let n be a natural number; then \(\partial_1 f(n,x)=\ell(f_1^{\circ n}(x))\), where \(f_1^{\circ n}\) is the n-th iterate of the initial function.

ii) For every real number s, \(\partial_1 f(s+1,x)=\partial_1 f(s,f_1(x))\).

The proof is trivial.

The last one can be rewritten as \(\partial_1 f^{s+1}=\partial_1 f^{s}\circ f_1\).
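Corollary ii) can be checked numerically as well, once more on the sample flow \(f(t,x)=x/(1-tx)\) (an assumption of mine, with \(f_1(x)=x/(1-x)\)):

```python
# Checking ∂₁f^{s+1} = ∂₁f^s ∘ f₁ numerically on the sample flow
# f(t,x) = x/(1 - t*x), for which f₁(x) = x/(1-x).

def f(t, x):
    return x / (1 - t * x)

def d1(f, t, x, h=1e-6):
    # central-difference partial derivative in the time variable
    return (f(t + h, x) - f(t - h, x)) / (2 * h)

f1 = lambda x: f(1.0, x)

s, x = 0.5, 0.2
assert abs(d1(f, s + 1.0, x) - d1(f, s, f1(x))) < 1e-4
```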

In this simplified setting it is easier to appreciate the nature of this equation, because we can clearly see that it is a natural transformation between two diagrams that have a familiar shape (that of the real numbers' linear order \((\mathbb{R},\le)\)). Here the picture shows a bit of the infinite diagram of sets and functions.

We obtain \(\partial_1 f\in\{\Phi:\mathbb{R}\times X\to X\ \mid\ \Phi^{s+1}=\Phi^{s}\circ f_1\ \text{for all}\ s\in\mathbb{R}\}\), the set of all solutions of the functional equation above.

These are the same sets defined for the Generalized superfunction trick in this thread:

MphLee, Generalized Kneser superfunction trick (the iterated limit definition), (January 21, 2021), Tetration Forum

When this intuition solidifies, we could start to consider topological monoids: continuous paths as diagrams and, after I find the right categorical formulation of path partitions and limits, rebuild James' composition integral algebraically.

MSE MphLee

Mother Law \((\sigma+1)0=\sigma (\sigma+1)\)

S Law \(\bigcirc_f^{\lambda}\square_f^{\lambda^+}(g)=\square_g^{\lambda}\bigcirc_g^{\lambda^+}(f)\)