Composition, bullet notation and the general role of categories
#1
This follows from the discussion held at: MphLee, Generalized Kneser superfunction trick (the iterated limit definition), (January 21, 2021), Tetration Forum.



Quote:But if I did all this bullet stuff with \( \circ \) -- that's not really how \( \circ \) is usually used, so I'd be overriding the meaning of an existent symbol within this context. Better to use a new symbol and be fresh. This is especially beneficial when we talk about \( ds\bullet z \) which is almost like a differential form. Writing \( ds\circ z \) would be going a step too far I think.

It is clear to me where you are coming from. Your solution is pleasant, pretty, comfortable, and the notation, as usually happens with good notation, hints at new developments, e.g. the differential forms. Even if I don't get \( n \)-forms yet, and exterior algebra feels alien to me, I feel it's a similarity worth considering: for this and another reason, I like your choice. I feel like you are aiming, not secretly at all, at a general infinitesimal compositional calculus.

That said, you shouldn't be overly confident about the variable vs function distinction:

Quote:  \( f \circ g \circ z \)
Wtf is that nonsense? lol

What nonsense is this? It's abstract nonsense!

In category theory, a land where composition and arrows alone make up the whole ontology, writing \( f \circ g \circ z \) makes perfect sense; not only that, it means exactly what you expect it should.

More than that: I claim that the natural home for general iterated compositions is categories!
In the last part (mostly in the attached pdf, which also has a few pages of very gentle introduction to categories) I'll offer moral reasons for that, but first let's inspect your "nonsensical" composition.

About evaluation.
In general categories morphisms are just abstract arrows, not functions, and evaluating them doesn't generally make sense, because not every object can be conceived as a bag of something, e.g. points. The philosophy of category theory is exactly this: ignore what's inside, the inner structure of things, and solely observe how your things interact with each other.

There are some very special categories where objects are indeed made of points, e.g. the category of topological spaces, of vector spaces, of abelian groups, or the category of bare sets: by this I mean that some categories have, among all their objects, a special "point object" \( * \).

In these particular categories a morphism from this objectified abstract point to an arbitrary object \( X \) can be thought of as (a choice of) a point \( x \) in \( X \)

\( *\overset{x}{\longrightarrow}X \)

therefore defining the set of points of \( X \) to be the (hom-)set of arrows
\( {\rm Points}(X):={\rm Hom}(*,X)=\{x\,|\,*\overset{x}{\longrightarrow}X\} \)

For example, for a bare set \( X \) we have \( X^1\simeq X \), where \( 1 \) is a singleton; the set of group homomorphisms from the group of integers to \( G \) is in bijection with the set of elements of \( G \), i.e. \( {\rm Hom}_{\rm Grp}({\mathbb Z},G)\simeq G \); and the linear maps from a field \( \mathbb K \), seen as a vector space, to a \( \mathbb K \)-vector space \( V \) are in bijection with the vectors of \( V \), i.e. \( {\rm Hom}_{\rm Vec}({\mathbb K},V)\simeq V \).

[Image: points.png]
and all of this without actually being able to look inside our objects nor knowing what set membership is!

In the case you raise, given an abstract arrow \( X\overset{f}{\to}Y \) and a point \( *\overset{x}{\to}X \), we can evaluate \( f \) at \( x \) by composing the two, producing a new point \( *\overset{y}{\to}Y \)

[Image: bullet0-4.jpg]
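To make the "points as morphisms" idea concrete, here is a minimal Python sketch (my own illustration with made-up names, nothing standard): a point of \( X \) is modeled as a function out of a one-element set, and evaluation is literally composition.

```python
# Model the "point object" * as a one-element set containing only STAR.
STAR = object()

def compose(f, g):
    """Categorical composition: (f . g)(a) = f(g(a))."""
    return lambda a: f(g(a))

def point(value):
    """A point x of X as a morphism * -> X: a function that ignores
    its input (there is only STAR) and returns an element of X."""
    return lambda _star: value

f = lambda n: n * n          # an ordinary morphism f : X -> Y

x = point(3)                 # x : * -> X, "the point 3"
y = compose(f, x)            # f . x : * -> Y, a point of Y

# Evaluating f at 3 is the same as composing and then feeding STAR.
print(y(STAR))               # -> 9
```
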

Here is the TeXed post as a pdf, plus a mini guide on categories.

.pdf   (2021 02 02) Composition bullet notation and the general role of categories - the softest introduction ever made.pdf (Size: 585.27 KB / Downloads: 462)

MSE MphLee
Mother Law \((\sigma+1)0=\sigma (\sigma+1)\)
S Law \(\bigcirc_f^{\lambda}\square_f^{\lambda^+}(g)=\square_g^{\lambda}\bigcirc_g^{\lambda^+}(f)\)
#2
(02/02/2021, 01:44 AM)MphLee Wrote: This follows from the discussion held at: MphLee, Generalized Kneser superfunction trick (the iterated limit definition), (January 21, 2021), Tetration Forum.



Quote:  \( f \circ g \circ z \)
Wtf is that nonsense? lol

What nonsense is this? It's abstract nonsense!



Hey, Mphlee. I absolutely agree with you, but I think you misread me. I write,

\( f\bullet g \bullet z \)

To make it distinct. Which inherently also means \( f \circ g \circ z \), as you say. I was mostly just making an argument against using \( \circ \). Someone sees that with \( \circ \) and thinks it an abuse of notation; whereas with a bullet, it is given this meaning. On top of that, now we can look at a sequence of forms (don't worry, I don't use forms, it's mostly just a comparison of notation) \( f_j \bullet g_j \bullet z \) and we can attach an operator/functor to them, \( \Omega_j \). I just really don't like the use of \( \circ \) notationally/typographically. I believe it would warrant confusion. But I'm so happy that you see the categorical structure. Again, I had help from someone really smart in designing this notation...<_<

EDIT: Oh and I look forward to reading the PDF!

I was also suggested \( f \leftarrow g \leftarrow z \) but it was agreed bullet is much better. This one is a bit toooo categorical, if you know what I mean.

Edit2:

\(
ds \leftarrow z\\
\)

looks fucking awful, eh?
#3
Fucking Beautiful PDF MphLee. Fucking Beautiful. That's just like my notation for compositional integrals. Never even thought for a second it was THAT categorical. Just Fucking, fucking Gordon Ramsay needs to speak to the chef about how FUCKING beautiful it was. I need time to reallllly think about this. Thank you for the diagrams!
#4
(02/02/2021, 04:49 AM)JmsNxn Wrote: EDIT: Oh and I look forward to reading the PDF!

Oh but it is at the end of my first post!

(02/02/2021, 04:49 AM)JmsNxn Wrote: Hey, Mphlee. I absolutely agree with you, but I think you misread me. I write,

I'm sorry if I did misread you. Please tell me exactly what I'm missing.


I think I understood that you wanted to make a distinct notation; as you say, I agree that typographically it is better; I agree that writing \( f\circ g\circ z \) could confuse someone into believing it is an abuse of notation; and in the end I do believe that this is a very smart choice indeed: firstly for the similarity to forms, because it recalls the \( dz \), making it analogous to integration; and secondly, for the reason I omitted in the post: it is perfect when you deal with multivariable functions.

When I say

(02/02/2021, 04:49 AM)MphLee Wrote: for this and another reason, I like your choice.

the other reason is that the bullet seems to me to fit perfectly when you manipulate expressions with multiple variables that you don't want to hide, i.e. the standard in math, e.g.

\( f(z_0,...,z_j,...,z_n)\bullet g(z_0,...,z_j,...,z_n)\bullet z_j=f(z_0,...,g(z_0,...,z_j,...,z_n),...,z_n) \)
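A tiny Python sketch of this multivariable bullet (the helper name `bullet` and the slot-index convention are my own assumptions): composing into the \( j \)-th slot substitutes \( g \)'s output for \( z_j \) inside \( f \).

```python
# A hypothetical helper illustrating the bullet acting on the j-th
# variable: the composite feeds g's output into the j-th slot of f.
def bullet(f, g, j):
    def composed(*zs):
        zs = list(zs)
        zs[j] = g(*zs)      # replace z_j by g(z_0, ..., z_n)
        return f(*zs)
    return composed

f = lambda a, b, c: a + 10 * b + 100 * c
g = lambda a, b, c: a * b * c

h = bullet(f, g, j=1)       # compose along the middle variable
# h(2, 3, 4) = f(2, g(2, 3, 4), 4) = f(2, 24, 4) = 2 + 240 + 400
print(h(2, 3, 4))           # -> 642
```
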

I admit that for me there are some gray zones in its usage, but I'll ask you when I need to use your notation.

I'm very sorry but, reading back, I realize that I was not very clear and did not make explicit some of my thoughts. I did this post as a side effort: I'm tackling the multi-valued case for the superfunction trick, but I believed making this post would help make my stance more sensible.

To clarify a little: what I don't believe is that \( f\circ g\circ z \) is actually a real abuse of notation. It seems an abuse, but I say it is perfectly formal and legit. What I believe is that we are not leaving the world of composition. That's not a reason at all to abandon the bullet, but it is important to notice imho.

ps: the \leftarrow notation is horrible, even categorically...

#5
(02/02/2021, 08:57 AM)JmsNxn Wrote: Fucking Beautiful PDF MphLee. Fucking Beautiful. That's just like my notation for compositional integrals. Never even thought for a second it was THAT categorical. Just Fucking, fucking Gordon Ramsay needs to speak to the chef about how FUCKING beautiful it was. I need time to reallllly think about this. Thank you for the diagrams!

Oh ahahahah xD!
Omg!
I was in the writing mode when you posted this... I missed it... hahahah
ROTFLING
(02/02/2021, 08:57 AM)JmsNxn Wrote: I need time to reallllly think about this. Thank you for the diagrams!

Take all the time you need. Eventually, as you go down this rabbit hole and I ascend more toward the analytic holy heights, we will meet midway! haha!

(02/02/2021, 08:57 AM)JmsNxn Wrote: That's just like my notation for compositional integrals. Never even thought for a second it was THAT categorical.

It is even more than you think. Given a finite sequence of \( n \) consecutive arrows, we can think of it as a functor from the ordinal number \( n+1 \) to the context we are working in, e.g. complex numbers

[Image: comp0.jpg]

Given an infinite sequence of consecutive arrows, we can use your notation to write the \( n \)-truncations.

[Image: comp1.jpg]

I want to convince you that we have a functor assigning to every pair \( (m,n) \), s.t. \( m< n \), a function \( X_m\to X_{n+1} \)

[Image: comp2.jpg]

and this is functorial (I can prove it), i.e. the law that holds for the integral has a deep categorical origin.

[Image: comp3.jpg]
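A minimal sketch of what this functoriality says, assuming a uniform chain of arrows and my own indexing convention (the arrow with the smallest index is applied first): gluing two adjacent truncated compositions gives the larger truncation.

```python
# A chain of arrows f_j : X_j -> X_{j+1}; here simply f_j(x) = x + j.
fs = {j: (lambda x, j=j: x + j) for j in range(10)}

def comp(m, n):
    """The truncated composition X_m -> X_{n+1}, applying f_m first."""
    def arrow(x):
        for j in range(m, n + 1):
            x = fs[j](x)
        return x
    return arrow

# Functoriality: gluing adjacent truncations equals the big truncation,
# comp(m, n) = comp(k + 1, n) after comp(m, k).
assert comp(3, 8)(5.0) == comp(6, 8)(comp(3, 5)(5.0))
print(comp(3, 8)(5.0))   # 5 + (3 + 4 + 5 + 6 + 7 + 8) = 38.0
```
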

LET'S GO BACK ON THE POINTS AS MORPHISMS PARADIGM!

If we say that points of a space (elements of a set) are just morphisms from the point to the space we can obtain this



[Image: comp4.jpg]

where we get (correct me if I'm wrong)
[Image: image.png]

Now, this can be seen as a sterile change of notation. I believe it isn't; I feel and bet it isn't.
In all of this you see that the index is always a \( j\in{\mathbb N} \)... but what if we replace the naturals with the reals? Or with arbitrary monoids? That's what I want to discover and apply to the superfunction trick.


ps: as a side note that I'll expand soon: from a categorical point of view the outer composition is more natural, and the inner composition is contravariant, i.e. it inverts the orientation of arrows.

#6
Yes, when the index is real I've only ever looked at the Riemann-Stieltjes construction. Consider a partition \( \mathcal{P} \) of \( [a,b] \); let's write it \( s_{j+1} \le s_j^* \le s_j \) where \( b = s_0 > s_1 > s_2 >...>s_n = a \), with \( s_j^* \) our sample point. Note I've written the partition as descending.

\(
Y_{\mathcal{P}}(z) = \Omega_{j=0}^{n-1} z + \phi(s_j^*,z)(s_j - s_{j+1})\bullet z\\
\)

Where we assume \( \phi(s,z) \) is some nice function. So as we take the limit over partitions (as they get finer and finer)--namely \( s_j - s_{j+1} \to 0 \)--under good enough conditions (holomorphy suffices) we get,

\(
\lim_{||\mathcal{P}||\to 0} Y_{\mathcal{P}}(z) = \int_{a}^b \phi(s,z)\,ds\bullet z\\
\)
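Here is a small numerical sketch of \( Y_{\mathcal{P}}(z) \) in Python (my own code; the uniform partition and midpoint sample points are simplifying assumptions). The innermost factor, \( j = n-1 \), is applied first, so the composition marches from \( a \) up to \( b \).

```python
# Partial composition over a descending partition s_0 = b > ... > s_n = a,
# applying the innermost factor (j = n - 1) first.
def Y_partition(phi, a, b, z, n):
    s = [b - k * (b - a) / n for k in range(n + 1)]   # uniform partition
    x = z
    for j in reversed(range(n)):
        s_star = (s[j] + s[j + 1]) / 2                # midpoint sample s_j*
        x = x + phi(s_star, x) * (s[j] - s[j + 1])
    return x

# Sanity check: phi(s, z) = s has no z-dependence, so the composition
# just accumulates the ordinary integral of s over [0, 1].
approx = Y_partition(lambda s, z: s, 0.0, 1.0, 0.0, 100_000)
print(approx)   # close to 0.5
```
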

Now, what this notation means, because we get two things for the price of one, is if,

\(
y(x) = \int_a^x \phi(s,z)\,ds\bullet z\\
\)

Then,

\(
y(a) = z\,\,\,\text{and}\,\,\, y'(x) = \phi(x,y(x))\\
\)

Which is just a first order differential equation. These notions date allllllllllllll the way back to Euler (usually the bastardization we see is Euler's method, but this is far more advanced). And it's actually kind of funny how much compositional analysis Euler used, which seems to have been just, well, forgotten... This also helps us learn where the singularities arise--they are where the differential equation blows up.

For instance, take \( \phi(s,z) = z^2 \) then,

\(
\int_a^b z^2 \,ds\bullet z = \frac{1}{\frac{1}{z} + a -b}\\
\)
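A quick numerical check of this closed form (my own sketch: plain Euler steps for \( y' = y^2 \), \( y(a) = z \), which is what the partial compositions become for \( \phi(s,z) = z^2 \)):

```python
# Numerically checking the phi(s, z) = z^2 example: the compositional
# integral should converge to 1 / (1/z + a - b), provided we stay away
# from the singularity 1/z = b - a.
def comp_integral_z2(a, b, z, n=200_000):
    y, ds = z, (b - a) / n
    for _ in range(n):            # Euler steps for y' = y^2, y(a) = z
        y = y + y * y * ds
    return y

a, b, z = 0.0, 0.5, 1.0
exact = 1.0 / (1.0 / z + a - b)   # = 2.0 for these values
approx = comp_integral_z2(a, b, z)
print(approx, exact)              # approx is close to 2.0
```
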

So wherever \( b-a = \frac{1}{z} \) we know instantly that this infinite composition can't converge, because the differential equation blows up.  This is really the best I could do--but I did prove some general normality conditions on,

\(
\Omega_{j=0}^{n-1} h_{jn}(s,z)\,\bullet z\\
\)

As we let \( n\to\infty \), \( h_{jn}(s,z) \) needs to behave either discretely (the infinite composition way) or continuously (which looks something like the Riemann-Stieltjes composition, but not necessarily).


EDIT:

As to doing this with monoids. I can't even imagine. Maybe something like Tate's thesis with adeles and nonsense. You can count me out for that one, lmao.

EDIT:

Also, thank you for writing, I see what you are driving at now.

\(
f \circ g \circ x = f \bullet g \bullet z |_{z=x}\\
\)

Which is the evaluation morphism (?). This makes a lot of sense too. And is definitely a very important distinction. Yes I totally can see the argument for using \( \circ \) here. Correct me if I'm wrong, but can we think of this as,

\(
\text{eval}_{z=x} f \bullet g \bullet z = f\circ g \circ x\\
\)

Where this takes the space of analytic functions to the space of points. Or something of that nature? Really interesting though. I've never thought of writing it like that; but that's a great distinction notationally.
#7
I think I understand pretty well where this is going. I tried to apply that Riemann construction by analogy back in 2015 (from February to April). I remember that in that period I was reading your two Ramanujan-method papers on holomorphic hyperoperations (without fully grasping them). It could be that back then I somehow subconsciously absorbed some concept that was floating in the ether. So in some sense I was not surprised at all when I later discovered that you were working on the "composition integral."

Anyway, at that time I momentarily gave up on that idea because it was exceedingly complex for my mind.

The idea is that there has to be something that fills the following analogies:

Calculus of differences : Differential calculus = Conjugation : X

Discrete sums : Riemann integral = Superfunction : Y

Where X/Y stand in a relation analogous to that of Derivation/Integration.

Now, in 2021, I feel like after a deep study of your papers I will be in a position to discuss this in a sensible way. But at the moment there is something that feels off with the partition/definition you are giving... I don't grasp what it is yet. My strategy is to understand the discrete version properly first. But my priority is to present you with an extension of the general superfunction trick (SGT) proof to the multivalued case, i.e. the object of your last paper. If I'm successful, I guess we'll have a solid algebraic foundation on which I can understand your results.

That said, I'm not mad... I'm not going Tate-thesis mode anytime soon... I don't even understand what an adele is... or an L-function. But by general monoids I merely mean to apply it to (Q,+), (R,+) and (C,+). What I suggest is that the secret key lies, imho, in seeing a monoid like (R,+), which is an abelian group, as a set of points with an arrow \( r\to s \) iff \( r \lt s \), and from that we should rephrase the diagrams of iterated composition in a way that parallels the Riemann construction you are pointing to.

Attached you can see two brief notes I wrote down during a Eureka moment in 2015... I was not even convinced that it made any sense.
[Image: Delta-Sigma-Calculus-intro.png] [Image: generalized-Sigma-caculus.png]


EDIT: Thank you for these explanations, they will help me a lot in moving through your papers.

About your second edit: in the notation \( f\circ g\circ x \) we have that \( x:1\to X \) is just a function that takes as input the unique element of the singleton set; thus \( f\circ g\circ x \) is another function, i.e. \( x \) is not a variable/argument but A PARTICULAR "element" of \( X \): remember that I want to systematically identify points of \( X \) with functions from the singleton to \( X \).

\( 1 \overset{x}{\rightarrow}X \overset{g}{\rightarrow} Y \overset{f}{\rightarrow} Z \)

In \( f\bullet g\bullet z \) the z is meant to be an arbitrary \( z\in X \), i.e, an input argument.
For example in your [second iteration, page 4, first formula] you get the iterated composition evaluated at \( z=0 \), implying that \( \Omega_{j=1}^n h_j(s,z)\bullet z \) is actually a bi-indexed family of functions in \( z \), i.e.
\( \Omega_{j=1}^n h_j(s,{-{}})\bullet{-{}}=\phi_n(s,-):X\rightarrow X \)

\( \phi_n(s,{-{}}):z\mapsto\Omega_{j=1}^nh_j(s,z)\bullet z \)

but you just hid the variable \( z \)... or better, you fixed it to be \( z=0 \). To make it explicit,

\( \phi_n(s):=\phi_n(s,0) \)


About evaluation: evaluation is defined on every function space/set. An evaluation is just a binary operation that evaluates functions.

\( {\rm ev}:Y^X\times X\to Y \)

if we fix the second variable, we get \( {\rm ev}_x:Y^X \to Y \) but if we identify \( X\simeq X^1 \), where the bijection sends a point \( x\in X \) to the function \( \bar{x}:1\to X \), we get that inner composition by elements of \( X^1 \) coincides with evaluation:

\( \circ :Y^X\times X^1\to Y^1 \)

\( f \circ \bar{x}=\overline{{\rm ev}_x(f)} \)
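A Python sketch of this last identity (all names here are mine; `STAR` plays the role of the unique element of \( 1 \)): composing \( f \) with \( \bar{x} \) gives the same point as applying \( {\rm ev}_x \) and then taking the bar.

```python
STAR = object()                     # the unique element of the singleton 1

def ev(f, x):
    """ev : Y^X x X -> Y is just function application."""
    return f(x)

def bar(x):
    """The bijection X ~ X^1, sending x to the morphism 1 -> X picking x."""
    return lambda _star: x

def compose(f, g):
    return lambda a: f(g(a))

f = lambda n: n + 1
x = 41

lhs = compose(f, bar(x))            # f . bar(x) : 1 -> Y
rhs = bar(ev(f, x))                 # bar(ev_x(f)) : 1 -> Y
print(lhs(STAR), rhs(STAR))         # both 42: the two points coincide
```
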

#8
Hey, so I thought it might help you out to give you what I mean by "the real case looks something like the Riemann-Stieltjes construction."

First of all, if we take a sequence of functions \( h_{jn}(x) \) in \( j,n \in \mathbb{N} \) (where the idea is to take a limit toward a continuous space, through some kind of enumeration),

\(
Y_{n} = \Omega_{j=1}^n h_{jn}(x)\bullet x\\
\)

Then, for \( n\to\infty \), to make \( Y_n \) converge, we get one of two cases.

\(
1.\,\,h_{jn} \to h_j \neq x\,\,\text{as}\,\,n\to\infty\\
2.\,\, h_{jn} \to x\,\,\text{as}\,\,n\to\infty\\
\)

Now if we are in the first case, then we should expect some kind of summability criterion,

\(
\sum_{j=1}^\infty |h_j(x) - x| < \infty\\
\sum_{j=1}^\infty |h_j(x) - L| < \infty\\
\sum_{j=1}^\infty |h_j(x_j) - L| < \infty\\
\)

Where here \( L \) is a point, and \( x_j \to L \) in a summable manner. These are three possible summability criteria, which provide weaker results the further down you go. The first one converges to a function; the second to a point; the third to a point, but it depends on how we choose \( x_j \).  This is essentially the discrete case, where we are not really enumerating anything.
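A discrete-case sketch in Python, under the first criterion: I take the hypothetical \( h_j(x) = x + 2^{-j}\cos(x) \) (my own toy choice), so \( \sum_j |h_j(x) - x| \le \sum_j 2^{-j} < \infty \), and the partial compositions settle down.

```python
import math

# Hypothetical case-1 example: h_j(x) = x + 2^{-j} cos(x), whose
# deviations from the identity are summable, so the composition converges.
def compose_n(x, n):
    for j in reversed(range(1, n + 1)):   # innermost factor (largest j) first
        x = x + 2.0 ** (-j) * math.cos(x)
    return x

# Deepening the composition barely changes the value: a Cauchy sequence.
print(compose_n(0.0, 40), compose_n(0.0, 60))
```
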

The second way is much more complicated--but this is something John Gill wrote about a lot, which I simplified using the Riemann-Stieltjes construction. The main normality condition is that,

\( h_{jn}(x) = x + q_{jn}(x) \)

Then, we have to get that \( q_{jn}(x) = \mathcal{O}(1/n) \). If for instance \( q_{jn}(x) = \mathcal{O}(1/n^{1+\epsilon}) \) then \( Y_n \to x \), which is the trivial sequence. If \( q_{jn}(x) = \mathcal{O}(1/n^{1-\epsilon}) \) then \( Y_n \to \infty \); we get the divergent case. Now when \( q_{jn} = \mathcal{O}(1/n) \), it's the sweet spot, and \( Y_n \) will probably (though not necessarily) converge to a function,

\(
\lim_{n\to\infty} Y_n = \int_a^b f(s,x)\,ds\bullet x\\
\)

For some \( f(s,x) \). Now this is not my result; I used this result to reframe everything in the Riemann-Stieltjes language--where we start with \( f \) rather than derive it afterward. The credit for this result goes to John Gill. Now, I said it won't necessarily look like this, but as soon as we ask for normality of \( h_{jn}(x) \) then necessarily it'll look like this. So if we want to talk about holomorphic functions, then yes, it will look like this. The trouble is, it may not be entirely obvious what the function \( f \) is.
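The three regimes are easy to see numerically with the simplest possible choice \( q_{jn}(x) = n^{-p} \) (my own toy example): composing \( n \) such maps gives exactly \( x + n^{1-p} \), which vanishes, converges, or blows up according to \( p \).

```python
# Toy model of the three regimes: h_{jn}(x) = x + n^{-p}, composed n times.
# p > 1: Y_n -> x (trivial); p = 1: Y_n -> x + 1 (sweet spot); p < 1: divergent.
def Y_n(x, n, p):
    q = n ** (-p)
    for _ in range(n):
        x += q
    return x

for p in (1.5, 1.0, 0.5):
    print(p, Y_n(0.0, 1_000_000, p))
# p = 1.5 gives ~0.001, p = 1.0 gives ~1.0, p = 0.5 gives ~1000.0
```
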

Now of course there's a whole bunch of anomalous cases (if we added some Cantor arguments or something)--but I don't do that, because I want every sequence normal and every function holomorphic. In which case, the Riemann-Stieltjes construction suffices for most general purposes. The difference being, we care about \( f \) rather than \( h \)--and we start the construction with \( f \).




As to your point about the analogy between \( X,Y \) and derivation and integration. I'd like to throw in my own analogy.

\(
\sum \mapsto \Omega\\
\int...ds \mapsto \int...ds\bullet z\\
\)
Which is the analogy that discrete sums become discrete compositions. And continuous sums become continuous compositions. Where now, the beauty being, we still have the same difference relationship and differential relationship. Which I do see you seeing. Which can be expressed as,

\(
\Delta y = F(s) \mapsto \Delta y = F(s,y(s))\\
\frac{dy}{ds} = F(s) \mapsto \frac{dy}{ds} = F(s,y(s))\\
\)

So very much inherently we care about first order differential equations and first order difference equations.

The second analogy comes from something you're saying about superfunctions. Which I think might help. Correct me if I'm wrong, but I think it might be what you are driving at,

Suppose I write,

\(
F(x,z) = \int_0^x h(z)\,ds\bullet z\\
\)

Then,

\(
F'(x,z) = h(F(x,z))\\
F(x,F(y,z)) = F(x+y,z)\\
\)
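For \( h(z) = z^2 \) this flow has the closed form \( F(x,z) = \frac{z}{1-xz} \) (the solution of \( F' = F^2 \), \( F(0,z) = z \); my own worked example), and the semigroup law can be checked directly:

```python
# The flow of h(z) = z^2 in closed form: F(x, z) = z / (1 - x z),
# which satisfies dF/dx = F^2 and F(0, z) = z.
def F(x, z):
    return z / (1.0 - x * z)

x, y, z = 0.2, 0.3, 0.5
# The superfunction/flow property: F(x, F(y, z)) = F(x + y, z).
print(F(x, F(y, z)), F(x + y, z))   # the two values agree
```
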

So technically, for tetration, if I write,

\(
h(x) = (\frac{d}{dx} e\uparrow \uparrow x) \bullet \text{slog}(x)\bullet x\\
\)

Then trivially,

\(
\frac{d}{dx}e \uparrow \uparrow x= h(e \uparrow \uparrow x)\\
\)

And less trivially,

\(
e \uparrow \uparrow x = \int_0^x h(z)\,ds\bullet z |_{z=1}\\
\)

Which, although expressing the superfunction relation, is rather useless if we wanna try and solve using this. But we could definitely approximate using this method--I had a pretty rudimentary way of constructing \( \sqrt{\exp} \) using this, but I wasn't sure if it converged.

So yes, I definitely agree that the continuous case is deeply connected to superfunctions. Especially when we start taking contour integrations, and modding out by conjugations. Which was the central subject of my last paper--which allows you to construct things like Taylor series and the like.

EDIT:

And I was just joking about Tate's thesis, but what you are suggesting is very similar. If you look at Haar measures and the sort, it is very, very similar to what you are suggesting. Except we're making the transition from \( \sum \mapsto \Omega \)--and Haar measures are on groups rather than monoids, but the construction would be similar.

EDIT2 (BUT really EDIT 3 because I reedited the edit; sorry, I'm a fan of the edit functor):

Thought I'd explain this Haar measure a bit better. Suppose we have a group \( \{\mathcal{G}, \cdot\} \) and we define functions \( f : \mathcal{G} \times \mathbb{C} \to \mathbb{C} \). Then we write,

\(
+ \mapsto \cdot\\
\)

And we derive a notion of "measure" within this group. Call \( \mu \) the Haar measure.

Then instead of writing, as Tate would,

\(
\int f(x) d\mu\\
\)

We'll twist it by binding it to \( z \). We would write,

\(
\int f(x,z)d\mu\bullet z : \mathbb{C} \to \mathbb{C}\\
\)

Which would look similar to what you are describing. It would simply mean something along the lines of (NOT THIS BUT CLOSE),

\(
\Omega_{j=1} z + f_\mu(x,z)\bullet z\\
\)

Where \( f_\mu \) is horrible notation for the use of a measure across a function across groups, which is then measured in the complex plane.

The Haar measure is central to preserving the algebraic structure of the group (with monoids you could definitely do at least half of what you can do with groups; I'm curious if there are "monoid Haar measures"). We would get things like the following, for \( \mathcal{I},\mathcal{J} \subset \mathcal{G} \) measurable.

Product identity (which is an additive condition for Tate; for us it'll be non-abelian, so definitely more to unpack),

\(
\int_{\mathcal{I} \bullet \mathcal{J}} f(x,z)d\mu\bullet z = \int_{\mathcal{I}} f(x,z)d\mu\bullet \int_{\mathcal{J}} f(x,z)d\mu\bullet z\\
\)

And left/right invariance (because it's a Haar measure)

\(
\int_{a \cdot \mathcal{J}} f(x,z)d\mu\bullet z = \int_{\mathcal{J}} f(ax,z)d\mu\bullet z\\
\int_{\mathcal{J}\cdot a} f(x,z)d\mu\bullet z = \int_{\mathcal{J}} f(xa,z)d\mu\bullet z\\
\)

But it wouldn't look exactly like this. What you are driving at with monoid indices, though, is very much a beast not too different.
#9
Thank you very much for these last posts. Only now is it starting to make sense, and only now can I begin to appreciate the version of the analogy you were proposing (in light of the Jabotinsky connection of my last thread). One day I'll give you a worthy reply on this.

The answer is yes btw...

Quote:The second analogy comes from something you're saying about superfunctions. Which I think might help. Correct me if I'm wrong, but I think it might be what you are driving at

Yes... I was years ahead of myself and now you are able to read perfectly what I was driving at (mostly unconsciously) because you were already there.


