Teaser

What is the general argument behind the "superfunction trick"?

It seems to me that it is possible to study the general machinery lurking behind this superfunction trick. As JmsNxn correctly notices, he is using a more complex "mutation" of the mentioned trick, but if I can make some progress understanding the old trick, I guess it will be easy to expand the construction category-theoretically to include his iterated compositions. Let's begin by setting some notation and by stating a "fake theorem" that illustrates my sentiment about this matter better than words.

Disclaimer on notation.

From now on I will call "a function" only a binary relation that is everywhere defined and single-valued, i.e. for every element of the domain there exists a unique related element of the codomain, and so on. I'll denote functional composition by juxtaposition, \(fg:=f\circ g\), and integer iteration by \(f^n\). With \(s\) I vaguely mean the successor endomap \(x\mapsto x+1\) and with \(\mu_b\) the endomap that scales by \(b\), i.e. \(x\mapsto bx\).
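The notation is easy to mirror in code; here is a minimal sketch with hypothetical helper names (`compose`, `iterate`, `s`, `mu` are my own choices):

```python
def compose(f, g):
    """Composition written as juxtaposition: fg means x -> f(g(x))."""
    return lambda x: f(g(x))

def iterate(f, n):
    """Integer iteration f^n for n >= 0 (f^0 is the identity)."""
    def fn(x):
        for _ in range(n):
            x = f(x)
        return x
    return fn

def s(x):
    """The successor endomap x -> x + 1."""
    return x + 1

def mu(b):
    """The endomap that scales by b, i.e. x -> b * x."""
    return lambda x: b * x
```

For instance `iterate(mu(2), 3)(1)` computes \(\mu_2^3(1)=8\).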

For two given endofunctions \(f\) and \(g\) define the set \(C(f,g)\) as the solution set of the equation

\(\chi f=g\chi;\)

we define two subsets of sequences of functions as follows

\(\Sigma_f(f,g)=\{(\chi_n)_{n\in\mathbb{N}}\,:\,g\chi_n=\chi_{n+1}f\}\)

and

\(\Sigma_g(f,g)=\{(\chi_n)_{n\in\mathbb{N}}\,:\,\chi_n f=g\chi_{n+1}\}.\)

Ok! We are now ready for the...

SuperLazy Prototheorem. Given functions \(f\) and \(g\) and a sequence of functions \((\chi_n)_{n\in\mathbb{N}}\), if the conditions

- (1) for every natural number \(n\), \(g\chi_n=\chi_{n+1}f\) or \(\chi_n f=g\chi_{n+1}\);

- (2) the sequence \((\chi_n)\) is "appropriate";

hold, then the pointwise limit \(\chi_\infty=\lim_{n\to\infty}\chi_n\) exists and is a solution of \(\chi_\infty f=g\chi_\infty\).

This "fake theorem" depends fundamentally on the existence and on our ability to build sequences of maps satisfying (1). Luckily this is not a problem! Such kinds of sequences exist, are definable by recursion and are abundant: in fact we can prove these

Easy Lemmas. Given functions \(f\) and \(g\), for every function \(\chi\) we can prove that:

- \(f\) is split-mono (with left inverse \(f'\), \(f'f=\mathrm{id}\)) \(\Rightarrow\) there exists a sequence \((\chi_n)\) s.t. \(\chi_0=\chi\) and \(g\chi_n=\chi_{n+1}f\): take \(\chi_{n+1}:=g\chi_n f'\);

  \(f\) is split-epi (with right inverse \(f'\), \(ff'=\mathrm{id}\)) \(\Rightarrow\) given \((\chi_n)\), if \(g\chi_n=\chi_{n+1}f\) then \(\chi_{n+1}=g\chi_n f'\), i.e. the sequence is determined by \(\chi_0\);

  if \(f\) is iso both hold with \(f'=f^{-1}\);

- \(g\) is split-epi (with right inverse \(g'\), \(gg'=\mathrm{id}\)) \(\Rightarrow\) there exists a sequence \((\chi_n)\) s.t. \(\chi_0=\chi\) and \(\chi_n f=g\chi_{n+1}\): take \(\chi_{n+1}:=g'\chi_n f\);

  \(g\) is split-mono (with left inverse \(g'\), \(g'g=\mathrm{id}\)) \(\Rightarrow\) given \((\chi_n)\), if \(\chi_n f=g\chi_{n+1}\) then \(\chi_{n+1}=g'\chi_n f\), i.e. the sequence is determined by \(\chi_0\);

  if \(g\) is iso both hold with \(g'=g^{-1}\);

- if \(\chi f=g\chi\) then the constant sequence \(\chi_n=\chi\) satisfies both recurrences, so it lies in both of the sequence sets defined above;

- for every \((\chi_n)\) satisfying \(g\chi_n=\chi_{n+1}f\) and every \((\psi_n)\) satisfying \(h\psi_n=\psi_{n+1}g\), the composite sequence \((\psi_n\chi_n)\) satisfies \(h(\psi_n\chi_n)=(\psi_{n+1}\chi_{n+1})f\);
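The existence half of the split-mono lemma can be checked mechanically. A minimal numeric sketch, with my own toy choices (doubling as a split-mono on \(\mathbb{Z}\), integer halving as its left inverse):

```python
f  = lambda x: 2 * x        # split-mono on the integers
fp = lambda x: x // 2       # a left inverse: fp(f(x)) == x for every integer x
g  = lambda x: x + 3        # an arbitrary endomap

def next_chi(chi):
    """One step of the recursion chi_{n+1} = g . chi_n . fp."""
    return lambda x: g(chi(fp(x)))

chi = lambda x: x           # base of the recursion: chi_0 = id
for _ in range(5):
    new = next_chi(chi)
    # the identity guaranteed by the lemma: chi_{n+1}(f(x)) == g(chi_n(x))
    assert all(new(f(x)) == g(chi(x)) for x in range(-10, 10))
    chi = new
```

Because \(f'f=\mathrm{id}\) holds everywhere, the identity \(\chi_{n+1}f=g\chi_n\) holds on all of \(\mathbb{Z}\), even though \(f\) itself is not surjective.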

For the proof of the former I'll have to work by a sequence of, hopefully finite and convergent, approximate attempts: you can read an attempt in the pdf. The latter results are pretty trivial to prove and generalize because they are pure algebra (see Appendix A of the same pdf).

Some context

But let's start from scratch and provide some context. I'm sure I'm not providing the oldest references on this site, but in

[TF] 2008 jul, Trappmann, Robbins - Tetration Reference

the principal Abel and Schroeder functions are defined as follows: the principal Abel function of \(f\) is a solution \(\alpha\) of \(\alpha f=s\alpha\), and the principal Schroeder function is a solution \(\sigma\) of \(\sigma f=\mu_\lambda\sigma\), where \(s\) is the successor and \(\mu_\lambda\) is multiplication by the multiplier \(\lambda=f'(L)\) at a fixed point \(L\). While in the thread

[KSuLog] 2008 nov, bo198214 - Kneser's Super Logarithm:

bo198214 writes that one way to construct the principal Schroeder function is given by the limit

\(\sigma(z)=\lim_{n\to\infty}\lambda^{-n}(f^n(z)-L)\)

where \(L\) is a fixed point, which Kneser (1949) composes with an Abel function of multiplication, \(\log_\lambda\), to obtain an Abel function \(\alpha=\log_\lambda\sigma\) and solve the half-iterate of exp.
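The limit is easy to test drive numerically. A hedged sketch (the example \(f(x)=\sqrt2^{\,x}\), with attracting fixed point \(L=2\) and multiplier \(\lambda=\ln 2\), is my own choice, not Kneser's setting):

```python
import math

b, L = math.sqrt(2), 2.0
lam = math.log(2)        # multiplier: derivative of b**x at the fixed point 2

def f(x):
    return b ** x

def schroeder(z, n=40):
    """Approximate sigma(z) = lim_n lam**(-n) * (f^n(z) - L)."""
    for _ in range(n):
        z = f(z)
    return (z - L) / lam ** n

# Schroeder equation: sigma(f(z)) = lam * sigma(z), up to truncation error
z = 1.0
print(schroeder(f(z)), lam * schroeder(z))  # the two values agree closely
```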

A few posts later Sheldonison defines an Abel function of \(\exp_b\) (developed at the fixed point \(c\)) with a limit of the same kind.
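I don't have the exact formula at hand, so the following is my guessed reconstruction of an Abel-function limit of that shape, in the same toy setting as before (base \(\sqrt2\), attracting fixed point \(c=2\), multiplier \(\lambda=\ln 2\)); the Abel-equation check at the end is what matters:

```python
import math

b, c = math.sqrt(2), 2.0
lam = math.log(2)

def f(x):
    return b ** x

def abel(z, n=40):
    """Approximate alpha(z) = lim_n log_lam(c - f^n(z)) - n  (for z < c)."""
    for _ in range(n):
        z = f(z)
    return math.log(c - z) / math.log(lam) - n

# Abel equation: alpha(f(z)) = alpha(z) + 1
z = 1.0
print(abel(f(z)) - abel(z))  # ~ 1.0
```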

Question! What is the general pattern here?! Let's ignore the convergence issues of the limits for a moment; later I'll try to at least black-box them and return to them once the underlying algebraic argument is clear to me.

The general scheme here seems to comprise the following:

- They select a pair of functions \(f\) and \(g\) with some properties, i.e. continuous, analytic, linear, or no property at all: e.g. the pair \((f,s)\) in the case of the principal Abel function; \((f,\mu_\lambda)\) in the case of the principal Schroeder function; \((s,\exp_b)\) in the case of tetration; \((\exp_b,s)\) in the case of the super-logarithm;

- they go on drawing from their magician's hat a function \(\chi_0\), a kind of first approximation chosen so as to obtain some desired properties related to the fixed points, to the asymptotic behavior, or to the very success of the limiting construction: e.g. a suitable seed in the principal Abel function; the identity in the principal Schroeder function defined in [TF]; subtraction by the fixed point, \(z\mapsto z-L\), in [KSuLog] and, if I'm not mistaken, in the Tommy-method;

- in all the cases shown, by inverting \(f\) or \(g\), they define recursively from a "broken conjugation", \(\chi_{n+1}=g\chi_n f^{-1}\) (or \(\chi_{n+1}=g^{-1}\chi_n f\)), a sequence of functions with base of the recursion \(\chi_0\), so that \(g\chi_n=\chi_{n+1}f\) (resp. \(\chi_n f=g\chi_{n+1}\)). In other words, even if the definition of \(\chi_0\) wasn't very attractive, the sequence behaves almost, but imperfectly, like a solution of \(\chi f=g\chi\);

- they take the limit of the sequence and get the desired function. Well, what they probably mean is that for every point \(x\) they evaluate the sequence, defining for every \(x\) a sequence \((\chi_n(x))_{n\in\mathbb{N}}\) over the codomain, and then they evaluate the pointwise limit \(\chi_\infty(x)=\lim_{n\to\infty}\chi_n(x)\), studying the subset of points for which the sequence converges.

- what we obtain at the end should be one of the desired inaccessible elements of the solution set \(C(f,g)=\{\chi:\chi f=g\chi\}\): e.g. \(C(f,s)\) is the solution set of the Abel equation of \(f\); \(C(f,\mu_\lambda)\) is the solution set of the Schroeder equation of \(f\); \(C(\exp_b,s)\) contains the slog; \(C(s,\exp_b)\) contains tetration; in general \(C(s,f)\) is the set of superfunctions of \(f\).
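The whole scheme above can be run end to end. A sketch under my own assumptions: the pair \((f,g)=(s,\exp_b)\) with \(b=\sqrt2\), a seed \(\chi_0(x)=2-\lambda^x\) matching the behaviour at the attracting fixed point, and the broken conjugation \(\chi_{n+1}=g^{-1}\chi_n s\) unrolled a fixed number of times, giving an approximate superfunction of \(\exp_b\):

```python
import math

b = math.sqrt(2)
L = 2.0                  # attracting fixed point of g = exp_b
lam = math.log(2)        # multiplier of exp_b at L

def chi0(x):
    """First approximation, chosen to match the fixed-point behaviour."""
    return L - lam ** x

def chi(x, n=40):
    """Broken conjugation chi_{k+1} = g^{-1} chi_k s, unrolled n times:
    chi_n(x) = log_b applied n times to chi_0(x + n).  Valid for x >= 0 here."""
    y = chi0(x + n)
    for _ in range(n):
        y = math.log(y) / math.log(b)   # one application of g^{-1} = log_b
    return y

# chi approximately satisfies the superfunction equation chi(x+1) = b**chi(x)
x = 0.0
print(chi(x + 1), b ** chi(x))  # the two values agree closely
```

The defect \(|\chi(x+1)-b^{\chi(x)}|\) shrinks as \(n\) grows, which is exactly the sense in which the sequence "behaves imperfectly almost like a solution".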

(2021 01 16) Generalized_superfunction_trick.pdf (Size: 307.64 KB / Downloads: 326)

MSE MphLee

Mother Law \((\sigma+1)0=\sigma (\sigma+1)\)

S Law \(\bigcirc_f^{\lambda}\square_f^{\lambda^+}(g)=\square_g^{\lambda}\bigcirc_g^{\lambda^+}(f)\)