Matrix question for Gottfried
#1
Howdy Gottfried,
I'm looking at writing a paper dovetailing with and extending "Continuous Iteration of Dynamical Maps" by R. Aldrovandi and L.P. Freitas. For some reason I thought that Bell matrices only handled the case of Schroeder's equation. I know your work supports both Schroeder's and Abel's equations, so I hoped you might have some insight here.
Thanks
Daniel
#2
Hmm, last point first. The Carleman-matrix approach gives a simple representation of the Schroeder mechanism, true. But the Abel equation falls outside the elegance that comes with the basic machinery of those matrices. Note that the Abel equation can be understood as the logarithm of the Schroeder equation, but there is no Carleman matrix for \( \log() \) (only for \( \log(1+x) \)). So when I worked with the Abel equation I took the Carleman matrices only as basic food and then applied my general (low) college math to it. That Andydude's method can be understood as an Ansatz to model the Abel solution, and that I expressed it in my Carleman-matrix toolbox, is another thing: I simply introduced the (hypothetically applicable) Neumann series of a Carleman matrix and formally derived Andydude's Ansatz on this basis. But whether the (infinite) Neumann series of matrix powers (the analogue of the geometric series, with a matrix as argument) can formally be used - for that I have no definite theorem/proof and no sufficient answer to my question on MO.
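To make the basic Carleman/Schroeder mechanism concrete, here is a minimal sketch (Python/sympy; the row convention \( M[j,k] = [x^k]\, f(x)^j \), so that \( M\cdot(1,x,x^2,\dots)^T = (1,f(x),f(x)^2,\dots)^T \), and the example series are my own ad-hoc choices, not necessarily the convention used elsewhere in this thread):

Code:
import sympy as sp

x = sp.symbols('x')

def carleman(f, n):
    # truncated n x n Carleman matrix: entry (j, k) = coefficient of x^k in f(x)^j
    return sp.Matrix(n, n, lambda j, k: sp.series(f**j, x, 0, n).removeO().coeff(x, k))

f = 2*x + 3*x**2        # arbitrary example fixing 0, with multiplier f'(0) = 2
M = carleman(f, 5)

# the truncation is triangular, its eigenvalues are the powers f'(0)^j = 2^j,
# and diagonalizing it is one road to the Schroeder function of f
print(M)
print(M.eigenvals())    # {1: 1, 2: 1, 4: 1, 8: 1, 16: 1}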

After skimming the mentioned Aldrovandi article when I had downloaded it, I was discouraged by - at the least - imprecisions in their assumptions.
For instance, they write on the subject of diagonalization (and matrix functions):

Quote: --> (pg 16):
Bell matrices are not normal, that is, they do not commute with their transposes. Normality is the condition for diagonalizability.
This means that Bell matrices cannot be put into diagonal form by a similarity transformation.
I had never heard this "are not normal" in the context of diagonalization. Taking the keyword "commute" as the hint, I assume they mean that the eigenvector matrices must be orthogonal ("orthogonal" --> they are not only inverses, but even transposes of each other). But normality is only the condition for diagonalization by a *unitary/orthogonal* similarity; "diagonalizability" as such means merely that the eigenvector matrices are inverses of each other, and for instance any matrix with distinct eigenvalues is diagonalizable, normal or not.
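A \(2\times2\) toy example of exactly that point (sympy sketch; the matrix is an arbitrary choice): it is not normal, yet it is perfectly diagonalizable by a (non-orthogonal) similarity.

Code:
import sympy as sp

A = sp.Matrix([[1, 1],
               [0, 2]])

print(A * A.T == A.T * A)                 # False: A does not commute with its transpose
P, D = A.diagonalize()                    # succeeds nevertheless (distinct eigenvalues 1 and 2)
print(D)                                  # Matrix([[1, 0], [0, 2]])
print(sp.simplify(P * D * P.inv() - A))   # zero matrix, so A = P D P^{-1}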

I then left this as an open riddle for me, and did not involve myself much more with that article.
In Wikipedia one can find a proper definition of "diagonalizability":

Quote:In linear algebra, a square matrix \(A\) is called '''diagonalizable''' or '''non-defective''' if it is "similar" to a diagonal matrix, i.e., if there exists an invertible matrix \(P\) and a diagonal matrix \(D\) such that \(P^{-1}AP=D\), or equivalently \(A = PDP^{-1}\). (Such \(P\), \(D\) are not unique.)
For a finite-dimensional vector space \(V\), a linear map \(T:V\to V\) is called '''diagonalizable''' if there exists an ordered basis of \(V\) consisting of eigenvectors of \(T\).
These definitions are equivalent: if \(T\) has a matrix representation \(T = PDP^{-1}\) as above, then the column vectors of \(P\) form a basis consisting of eigenvectors of \(T\), and the diagonal entries of \(D\) are the corresponding eigenvalues of \(T\); with respect to this eigenvector basis, \(A\) is represented by \(D\).
'''Diagonalization''' is the process of finding the above \(P\) and \(D\).
(The Aldrovandi text continues with the following remark:
Quote: As it happens, this will not be a difficulty because we know their eigenvalues. That functions of matrices are completely determined by their spectra is justified on much more general grounds.
Ahh. At least we can still use the results of diagonalization... )

I don't remember whether they considered at all the difference between finite and infinite sized matrices (besides the rough remarks on this on the same page). While we have a nice Carleman matrix for the \( \exp() \)-function, we don't have one for the \( \log() \)-function, although it would simply be its inverse... Well, we could truncate the (infinite) Carleman matrix to a finite, software-handable size and find the inverse of that trimmed creature - but the inverse of the infinite Carleman matrix is not defined, as long as we cannot assign definite values to the harmonic series and its powers. In the case of infinite dimension we can even have *infinitely many* "inverses": for \( f(z)= \exp(z)-1 \) we do not only have the Carleman matrix of \( \log(1+z) \) as inverse, but as well that of \( \log(1+z) + 2k\pi i \) - and those matrices are even full square matrices (no longer triangular): not easy to handle in the Carleman toolbox, especially when it comes to "matrix functions" (which Aldrovandi also refers to as an easily available tool).
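A little numerical illustration of the triangular case (sympy sketch; same ad-hoc row convention \( M[j,k]=[x^k]f(x)^j \) as in the sketch above): the inverse of the truncated Carleman matrix of \( \exp(x)-1 \) reproduces the Carleman matrix of \( \log(1+x) \) exactly, while for \( \log(x) \) there is no power series at \(0\) and hence no such matrix at all.

Code:
import sympy as sp

x = sp.symbols('x')
N = 6

def carleman(f, n):
    # entry (j, k) = coefficient of x^k in f(x)^j, truncated to n x n
    return sp.Matrix(n, n, lambda j, k: sp.series(f**j, x, 0, n).removeO().coeff(x, k))

E = carleman(sp.exp(x) - 1, N)   # triangular, since exp(x) - 1 fixes 0
L = carleman(sp.log(1 + x), N)   # triangular as well

print(sp.simplify(E.inv() - L))  # zero matrix: the triangular truncations are exact inverses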

Because of your question I looked at the article again, but I still have the same old difficulties reading through that text, since they did not lay out the prerequisites for the use of (infinitely sized!) Bell matrices more carefully.

Ahh, and p.s.: I would always like to discuss this in terms of "Carleman" matrices instead of "Bell" matrices. While the "Carleman" matrix takes the plain coefficients, for instance for the exp()-function via \( \exp(x) = 1 + a_1 x + a_2 x^2 + \dots \), the "Bell" convention assumes the coefficients via \( \exp(x) = 1 + \frac{b_1}{1!} x + \frac{b_2}{2!} x^2 + \dots \). Converting between the "Carleman" and "Bell" conventions then needs a similarity scaling with the factorials, and to me it had always seemed easier to handle the "Carleman" route when diagonalizing, so I got used to it - but that might not be so significant and is up to personal preference.
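A tiny sketch of that conversion (sympy; I assume the "Bell" coefficients are \( b_k = k!\,a_k \), as in the two expansions above, and which of \( F^{-1}CF \) or \( FCF^{-1} \) one calls "Bell" depends on the orientation of the matrix): the two conventions are related by a similarity with the diagonal factorial matrix, so the eigenvalues are untouched.

Code:
import sympy as sp

x = sp.symbols('x')
N = 6

def carleman(f, n):
    return sp.Matrix(n, n, lambda j, k: sp.series(f**j, x, 0, n).removeO().coeff(x, k))

C = carleman(sp.exp(x) - 1, N)                      # plain Taylor coefficients ("Carleman")
F = sp.diag(*[sp.factorial(k) for k in range(N)])   # diag(0!, 1!, 2!, ...)
B = F.inv() * C * F                                 # factorial similarity scaling ("Bell")

print(C.eigenvals() == B.eigenvals())               # True: similar matrices share their spectrum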
Gottfried Helms, Kassel
#3
(12/06/2022, 02:59 PM)Gottfried Wrote: Hmm, last point first. The Carleman-matrix approach gives a simple representation of the Schroeder mechanism, true. [...]

Thank you for your exceedingly quick response, Gottfried. I agree with you about the imprecision of Aldrovandi's paper, but it is something I feel comfortable with and can write an article extending. I'm writing up the additional virtues of my paper in the introduction, but handling both Schroeder and Abel is not one of them. The main strength of my work is that it can answer the open question of what the combinatorial structure in the paper is. Also, I like the algebraic notation (linear operators?) I use; of course, more tools are always a good thing.

While we are here, you mentioned geometric series of matrices. I work with geometric series and want my work to handle matrices. Here is my totally stupid problem: I couldn't quickly find a closed form for the geometric series of matrices, while the geometric series of reals is a high-school problem.
Daniel
#4
(12/06/2022, 10:05 PM)Daniel Wrote: While we are here, you mentioned geometric series of matrices. I work with geometric series and want my work to handle matrices. Here is my totally stupid problem: I couldn't quickly find a closed form for the geometric series of matrices, while the geometric series of reals is a high-school problem.

Hi Daniel -

I got involved with geometric series of matrices when I experimented with divergent summation. In general one can handle such a geometric series (= series of powers of a matrix) by applying diagonalization. Having \( B = M \cdot D \cdot W \) (where \( W = M^{-1} \) for notational comfort) with \( D \) diagonal, the geometric series of \( B \) is \( M \cdot (D^0+D^{1}+D^{2}+ \dots ) \cdot W \), and the geometric series of the diagonal matrix \( D \) is simply the geometric series taken in each element of its diagonal; so formally \( \sum_{k\ge 0} B^k = M \cdot (I-D)^{-1} \cdot W = (I-B)^{-1} \) whenever all eigenvalues have modulus below \(1\). (For the alternating geometric series one might write \( (I + B)^{-1} = M \cdot (I+D)^{-1} \cdot W \), and the alternating geometric series of \( D \) can again be expressed by the alternating geometric series of the elements on its diagonal.)
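A quick numerical check of both statements (numpy sketch; the matrix is just a random one, scaled so that its spectral radius stays below \(1\)):

Code:
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = A / (2 * np.linalg.norm(A, 2))     # spectral norm 1/2, so the series converges

closed = np.linalg.inv(np.eye(4) - B)  # closed form: sum_{k>=0} B^k = (I - B)^{-1}
partial = sum(np.linalg.matrix_power(B, k) for k in range(200))
print(np.allclose(partial, closed))    # True

# via diagonalization B = M D M^{-1}: sum_k B^k = M diag(1/(1 - d_i)) M^{-1}
d, M = np.linalg.eig(B)
print(np.allclose(M @ np.diag(1 / (1 - d)) @ np.linalg.inv(M), closed))   # True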

With Carleman matrices it is in principle impossible to define this (non-alternating) geometric series, because (at least) one eigenvalue is \( 1 \) by construction, and we would get a \( 1/0 \) entry on the diagonal. Thus I mostly worked with the alternating geometric series in our (or my other) contexts.
There is one little, but I think significant, exception, and this is the geometric series of the Pascal matrix \( P \), which performs \( V(x) \to V(x+1) \). I found a workaround for the geometric series via diagonalization (we cannot define a proper diagonalization of \( P \), but I somehow found an improper, yet useful, one), and this led to the rediscovery of the Faulhaber matrix (the matrix of integrals of the Bernoulli polynomials), with which I can implement the Hurwitz zeta function. That has one aspect which made me curious whether perhaps we overlook something relevant in our common calculations. (See the index page https://go.helms-net.de/math/index.htm and the entries https://go.helms-net.de/math/binomial_ne...Powers.pdf and https://go.helms-net.de/math/binomial_ne...Laurin.pdf)
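Just to illustrate the statement \( V(x)\to V(x+1) \) (sympy sketch; the orientation of \(P\) relative to the column vector \( V(x)=(1,x,x^2,\dots)^T \) is my assumption here):

Code:
import sympy as sp

x = sp.symbols('x')
N = 5
P = sp.Matrix(N, N, lambda j, k: sp.binomial(j, k))   # lower-triangular Pascal matrix
V = sp.Matrix([x**k for k in range(N)])               # column vector (1, x, ..., x^(N-1))^T

# row j of P*V is sum_k binomial(j, k) x^k = (x + 1)^j, i.e. P maps V(x) to V(x+1)
print((P * V - V.subs(x, x + 1)).applyfunc(sp.expand))   # zero vector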

So far for the moment - it's late here and soon I'll invite the sandman ;-) If you need more input, I'll come back to this tomorrow.

Gottfried
Gottfried Helms, Kassel
#5
Hi Daniel -

I should add that R. Aldrovandi rewrote the paper you referred to in 2014 and called it "Tetration: an iterative approach". He took some inspiration from your work and mine and mentions us explicitly in the text. There is now a section on Carleman matrices and the conversion between Carleman and Bell matrices.
However, it is still a bit "lightweight"...

I should have noticed this before writing my previous answer, but I only just found the text in an unprocessed subdirectory meant for importing articles into my database.

The article is on arxiv:   https://arxiv.org/abs/1410.3896v1 

Gottfried
Gottfried Helms, Kassel
#6
Jesus, I love you guys. I highly suggest the book by Soviet mathematicians on quantum mechanics; the Soviets were far more of the "this is solved, let's move on" school, and worked entirely with integral transforms/actions on Hilbert spaces: Theory of Linear Operators in Hilbert Space by N.I. Akhiezer & I.M. Glazman.

Here, you'll see matrices as integral transforms; and I really think this will help you guys visualize these results as continuous actions. I know I'm being a broken record here, but all the terms and matrix reductions you guys are doing can entirely be done as integral transforms, and this book does an amazing job of walking you through everything. I wish I could solve all of your problems; but primarily, I can see that you guys are actually missing a whole staple in your diet. And I can answer some of your questions, but not all of them.

You mention "normal", but you do not mention the Hilbert space - and that's a red flag to me. We have to remember the "space" the infinite vector exists in: the infinite vector space this vector lives in has an inner product attached. And from there, we are talking about a matrix acting on a vector space, which is then normal/diagonalizable. I agree with everything you guys are doing, but you seem to be missing some key points.

BUT FOR FUCKS SAKES! NOT EVERYTHING IS EUCLIDEAN!!!!!!!!!!!!!!!!!!!!!!!!!!!

Sometimes Infinite vector spaces, that Infinite Matrices act on, don't look Euclidean. And you guys are really missing some things.
#7
Hi, I'm sorry I don't have the energy to follow all of this properly.
But I have a lil bit of time so I offer some thoughts.

About missing things... I believe it's a problem everyone has, and members of the forum in particular, since nobody here is paid to do research and we are forced to grapple with these problems with only a thin slice of our time/life force. Sad but true.

Another coin I want to offer is a terminological one. I notice that often in this forum some topics are approached from only a few viewpoints, and that the purely algebraic side is often underdeveloped.
It is increasingly evident to me that once the right framework is established, all the continuous, discrete, matrix and integral, topological and analytical phenomena we are separately dealing with will merge into a well-connected big picture. Obviously, to do this, much more culture, time and coordinated research effort is needed, or just the intervention of an erudite professional with an interest in this field.


Back to the point... much use of the terms "semigroup", "homomorphism" and "action" has been made, often in a superficial, imprecise manner. I'd like to offer my coin by making this more precise, again.

A semigroup is a quite abstract gadget: a semigroup \((T,*_T)\) is a set of things that is closed under an associative, not necessarily commutative, operation: stop, nothing more, nothing less... no functions, no way to evaluate elements. Let's define what an action is and what it means for a semigroup to act. A semigroup \(T\) can act on other mathematical objects, such as spaces - say \(X\) is a space-like object - by associating to each element \(t\in T\) a self-transformation \(\phi_t: X\to X\) that is compatible with the spatial structure of \(X\), in such a way that composing two consecutive transformations associated with two elements of \(T\) $$X\overset{\phi_s}{\longrightarrow}X \overset{\phi_t}{\longrightarrow} X$$ is the same as acting on \(X\) by the spatial transformation associated with \(t*s\) $$X\overset{\phi_{t*s}}{\longrightarrow}X.$$ This is a quite general scenario.

It is important to keep in mind that an action of a semigroup \(T\) on a space-like object \(X\) is the same as giving a semigroup homomorphism from that semigroup \((T,*_T)\) to the semigroup \(({\rm End}(X),\circ)\) of structure-preserving self-transformations of that space-like object. Since in our intuition \(T\) is the domain of time-like instants, that amounts to specifying a time evolution of the things that lie in \(X\), that is, a \(T\)-dynamics:
$$\lbrace \, T\text{-actions over } X\, \rbrace \cong \lbrace \, \text{semigroup homomorphisms from } T \text{ to } {\rm End}(X) \rbrace$$
i.e., in technical terms, the following sets are in bijective correspondence
$$\lbrace \, T\times X\overset{\varphi}{\longrightarrow} X\, |\, {\rm s.t.}\, \varphi(t*s,x)=\varphi(t,\varphi(s,x)) \rbrace \cong \lbrace  T \overset{\phi}{\longrightarrow} {\rm End}(X) \, |\, {\rm s.t.}\, \phi(t*s)=\phi(t)\circ \phi(s) \rbrace$$

When we say that a function \(f:X\to X\) or a matrix \(M:V\to V\) acts on something, we actually mean, precisely, that there is a semigroup \(T=(\mathbb N,+)\) acting on that space by a morphism of semigroups that satisfies \(\phi_1=f\) or \(\phi_1=M\): in that case we are justified in calling it integer iteration.
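A trivial sketch of that last sentence (Python; the map \(f\) and the space \(X=\mathbb R\) are arbitrary choices of mine): \(\phi_n\) is the \(n\)-fold composite of \(f\), and the homomorphism condition \(\phi_{m+n}=\phi_m\circ\phi_n\) is exactly what makes it an \((\mathbb N,+)\)-action.

Code:
def f(x):
    # an arbitrary self-map of X = the reals
    return x * x + 1

def phi(n, x):
    # phi_n = n-fold composition of f; phi_0 is the identity (the do-nothing action)
    for _ in range(n):
        x = f(x)
    return x

# the action / homomorphism condition phi_{m+n}(x) = phi_m(phi_n(x))
assert phi(2 + 3, 0.5) == phi(2, phi(3, 0.5))
assert phi(0, 0.5) == 0.5
print("ok")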


Concrete Examples

A (scaling in abelian groups) - A notable example of this is vector spaces: let \(k\) be a field and \(V\) a vector space with scalars living in the field \(k\). Let \(T=(k^\times,\cdot ) \) be the monoid structure of the field under multiplication. Take our spatial object to be the abelian group \((V,+)\) of vectors. Then \(k^\times\) acts on \(V\) by abelian group morphisms \(\mu_t:V\to V\), for \(t\in k\) a scalar in the field, i.e. transformations of vectors that respect vector addition. It is an action by scalar multiplication; compatibility of the action means $$(t\cdot s){\bf v}=t(s{\bf v}),$$ while respecting the vector addition gives distributivity \(t({\bf v}+{\bf u})=t{\bf v}+t{\bf u}\). We can interpret it as scaling by a factor, but also as linear motion of a point running away from the origin... i.e. drawing the linear subspace \(\langle \bf v\rangle\), so in a sense it's dynamical.

B1 (matrices) - A second, related example is the case of a group of matrices acting on vector spaces. Let \({\sf GL}_n(\mathbb R)\) be the group of \(n\)-th order invertible square matrices with real coefficients and let \(V\) be an \(n\)-dimensional vector space. From linear algebra we know that fixing a basis \(\mathcal B\) of \(V\) gives us a vector space isomorphism, i.e. a bijective linear map $$\varphi_{\mathcal B}:\mathbb R^n \overset{\cong}{\longrightarrow} V$$ That isomorphism, in turn, lets us describe every linear operator on \(V\) as an \(n\)-square matrix, i.e. the isomorphism lifts to an isomorphism of the respective monoids of operators $$ {\sf M}[\varphi]: {\sf M}_n(\mathbb R) \overset{\cong}{\longrightarrow}{\rm End}_{\bf Vec}(V)$$
This map sends each matrix \(A\) to the operator \( {\sf M}[\varphi]_A({\bf v}) = \varphi(A\cdot \varphi^{-1}({\bf v})) \). It also sends matrix multiplication to composition of linear operators.
Notice now that the group \({\sf GL}_n(\mathbb R)\) is a submonoid of the monoid \({\sf M}_n(\mathbb R)\) of all \(n\)-th order square matrices. This inclusion is a map that respects matrix multiplication, hence a monoid morphism: call it \(i\)  $${\sf GL}_n(\mathbb R) \overset{i}{\longrightarrow} {\sf M}_n(\mathbb R).$$
At this point it is enough to compose the monoid morphisms to obtain a morphism from the general linear group into the operators on the vector space \(V\):
$${\sf GL}_n(\mathbb R) \overset{i}{\longrightarrow} {\sf M}_n(\mathbb R) \overset{\cong}{\longrightarrow}{\rm End}_{\bf Vec}(V)$$
In this way we have an action of the group \({\sf GL}_n(\mathbb R)\) of invertible matrices on the vector space \(V\) by linear operators.

B2 (rotations) - Consider the group homomorphism sending sums of angles to products of unitary complex numbers, \(e: (\mathbb R,+)\to \mathbb T\): \(\theta\mapsto e^{i\theta}\). It is known that there is a morphism embedding the complex numbers into \(2\times 2\)-matrices. In particular, this map sends bijectively all the unitary complex numbers \(z\in\mathbb T\), the ones with \(|z|=1\), to the rotation matrices in \({\sf SO}(2)\). $$\mathbb T \overset{R}{\longrightarrow} {\sf SO}(2)\,\,\,\,\, R(e^{\theta i}):=
\begin{bmatrix}
\cos \theta & -\sin \theta  \\
\sin \theta & \cos\theta  \\
\end{bmatrix}$$ Post-composing both with the injective morphism \( {\sf SO}(2)\overset{j}{\longrightarrow} {\sf M}_2(\mathbb R)\), which exhibits the special orthogonal group as a submonoid of the \(2\times 2\) real-valued matrices, we obtain an action of the additive group of real numbers on the Euclidean plane, by rotations - an action that decomposes into several intermediate actions.
$$\mathbb R \overset{e^{i\cdot}}{\longrightarrow} \mathbb T \overset{R}{\longrightarrow} {\sf SO}(2)\overset{j}{\longrightarrow} {\sf M}_2(\mathbb R)\cong {\rm End}_{\bf Vec}(\mathbb R^2)$$
We can see this as time parametrizing continuous rotations of a vector \(R_{\theta+\tau}({\bf x})=R_{\theta}(R_{\tau}({\bf x}))\).
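A quick numeric check of that last identity (numpy sketch):

Code:
import numpy as np

def R(theta):
    # the rotation matrix, image of e^{i theta} under the morphism above
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s,  c]])

theta, tau = 0.7, 1.9
print(np.allclose(R(theta + tau), R(theta) @ R(tau)))   # True: the action property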

C (representation theory) - It is evident by now that, given any group \(G\) and a group homomorphism \(\phi: G\to {\sf GL}_n(\mathbb R) \), we obtain by default a linear action of \(G\) on the space \(\mathbb R^n\). This is exactly the topic of representation theory: considering all the ways to "linearize" a group, i.e., in a sense, all the ways of representing the multiplication of \(G\) as multiplication acting on vectors, \((gh){\bf v}=g(h{\bf v})\) and \(g({\bf v+u})=g{\bf v}+g{\bf u}\), or, equivalently, all the ways in which, given a sum-like operation, we can extend its iteration to \(G\).

A functorial quantum field theory (FQFT), under the Schrödinger picture of QM dual to the Heisenberg picture, whatever this means, can instead be thought of as representing worldlines (cobordisms of manifolds), rather than just time continua, as linearly acting on Hilbert spaces. In other words, a (functorial) quantum field theory is a linear representation of a space-time manifold acting on infinite-dimensional Hilbert spaces by evolution operators, or something along those lines.

D (matrix methods) - It seems to me that the trick here is done by taking the space of formal series as a monoid \(\mathbb R[[X]]\) and representing it as infinite matrices by the Bell and Carleman machines, i.e. a monoid morphism \({\sf C}:\mathbb R[[X]]\rightarrow {\sf M}_{\infty}(\mathbb R)\). Then for \(f\in \mathbb R[[X]]\) the associated matrix \({\sf C}[f]\) would give powers related to the iteration in the monoid \(\mathbb R[[X]]\). This process should be analyzed ALSO from this point of view imho...
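A small check of that morphism property on truncations (sympy sketch; assuming both series fix \(0\), so the truncated matrices are triangular and the identity \({\sf C}[f\circ g]={\sf C}[f]\cdot{\sf C}[g]\) holds exactly - the order of the factors depends on the chosen row/column convention):

Code:
import sympy as sp

x = sp.symbols('x')
N = 6

def carleman(f, n):
    # entry (j, k) = coefficient of x^k in f(x)^j
    return sp.Matrix(n, n, lambda j, k: sp.series(f**j, x, 0, n).removeO().coeff(x, k))

f = x + x**2
g = x / (1 - x)
fg = f.subs(x, g)                 # the composite f(g(x))

print(sp.simplify(carleman(fg, N) - carleman(f, N) * carleman(g, N)))   # zero matrix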

E (iteration in general) - But then, after all these examples: is it enough to have an action of a semigroup, e.g. \(T=(\mathbb R,+)\) or \(T=(\mathbb C,+)\), on a space of things \(X\), i.e. a map sending \(t\in T\) to \(f_t:X\to X\), to be able to think of \(f_{\cdot }:T\to {\rm End }(X) \) as having to do with the iteration of something? I think that the action, to have the right to be called iteration, should at least be a monoid action. So \(T\) must be a monoid and not only a semigroup. There must exist something \(0_T\in T\) that gives us the identity \(f_{0_T}={\rm id}_X\), i.e. the do-nothing action.
Is that enough? Of course... this could be called algebraic iteration... but in our intuition \(T\) must have some time-like features... like being a topological space... the action must be continuous... so is continuous iteration a continuous action of a topological monoid \(T\)? Does time have to be invertible? Commutative? Should we call an action an iteration only if \(T\) is a topological abelian group? Then iteration theory would be limited to actions of abelian Lie groups.
Notice that those actions define vector fields so we reconnect with classical dynamics.

If instead of semigroup actions we restrict to monoid actions, it turns out that spectral analysis and the generalized version of spectra, periodic points, eigenstates, eigenvalues and so on can be defined in the case of arbitrary monoid actions at the level of functor categories, as Lawvere explains in Taking Categories Seriously, 2005, TAC.

F (near-semiring geometry?) - What if the right way to think about iteration is not as an action on a space-like object but as an action on the transformations of a space-like object... i.e. as an action on a monoid? The set of endofunctions \(M^M\) of a monoid \(M\) is itself a monoid under composition, but it also inherits a monoid structure by pointwise \(M\)-multiplication: the two structures interact only through left distributivity \((\psi\chi) \circ \varphi=(\psi\circ\varphi)\,(\chi\circ\varphi)\), giving us the structure of a right near-semiring. Finding a way to truly exponentiate elements of \(M\), e.g. letting them be functions, amounts to defining a near-semiring morphism \({\mathfrak f}:\mathbb R\to M^M\). Let \(s,t\in \mathbb R\) and \(m\in M\); being a unitary near-semiring morphism translates into the following equations holding: $${\mathfrak f}_0(m)=0_M;\,\,\,{\mathfrak f}_{t+s}(m)=({\mathfrak f}_{t}{\mathfrak f}_{s})(m)={\mathfrak f}_{t}(m)\,{\mathfrak f}_{s}(m)$$
$${\mathfrak f}_1(m)=m;\,\,\,{\mathfrak f}_{ts}(m)={\mathfrak f}_{t}({\mathfrak f}_{s}(m))$$
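A toy check of that one-sided distributivity (Python sketch; my assumption: \(M\) = positive reals under multiplication, with the "addition" on \(M^M\) being the pointwise product and the "multiplication" being composition):

Code:
def pointwise(psi, chi):
    # "addition" in M^M: pointwise M-multiplication
    return lambda m: psi(m) * chi(m)

def compose(psi, phi):
    # "multiplication" in M^M: composition
    return lambda m: psi(phi(m))

psi = lambda m: m ** 2
chi = lambda m: 3.0 * m
phi = lambda m: m + 1.0

m = 2.0
lhs = compose(pointwise(psi, chi), phi)(m)                 # (psi chi) o phi
rhs = pointwise(compose(psi, phi), compose(chi, phi))(m)   # (psi o phi)(chi o phi)
print(lhs == rhs)   # True; the other-sided distributivity fails in general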

Maybe before asking how to naturally iterate some process, we should invest more in the question: what does it mean to iterate, and what does it mean to iterate naturally?

MSE MphLee
Mother Law \((\sigma+1)0=\sigma (\sigma+1)\)
S Law \(\bigcirc_f^{\lambda}\square_f^{\lambda^+}(g)=\square_g^{\lambda}\bigcirc_g^{\lambda^+}(f)\)