Maybe the solution at z=0 to f(f(z))=-z+z^2
Hey guys, it's been a long time, and things at uni are getting busier than ever.
Personally I expect to take the GRE and study abroad, most likely in the USA.
And I've postponed a lot of research, except for the conjecture I posted earlier: that in the sense of asymptotic expansions, every function and its iterates form a group under composition, and thus we can compute the asymptotic expansion of the iterates very conveniently.
I gave a try to our old friend, \(f(f(z))=-z+z^2\), at \(z=0\), where the multiplier is the nasty \(-1\). It's easily proved that there is no Taylor expansion for such an \(f\), and I got stuck here for a while: I used to regard the iterative Taylor expansions (as asymptotic expansions) as a group under composition in the sense above, and by this equation you can prove they cannot form a group; that only excludes the nonconstructible multipliers (amplifiers), though.
James was quite correct: the petals do generate iterates; where I got stuck was how to compute them, concretely.
If you do care about multivaluedness, one can indeed construct an Abel function for such functions, yet it's still arduous and very delicate to obtain a well-behaved, non-nonsensical inverse of that Abel function, and hence a superfunction.
But it occurs to me that the wider asymptotic expansions (still a conjecture though), in the multivalued sense, do form a group. So I tried to develop a computational method to approach any half iterate of a function with a nonconstructible multiplier, namely \(-1\).

If we assume \(\log(az)=\log(a)+\log(z)\) for small \(z\) and all \(a\), and use the principal branch of \(\log(z)\) in the evaluation, we have (I took only one branch):
$$f(z)=-iz-\frac{1-i}{2}z^2+z^3\big(\frac{1}{2}+\frac{2\log(z)}{\pi}\big)-iz^4\big(\frac{i-1}{2}-\frac{i}{\pi}+\frac{(2-3i)\log(z)}{\pi}\big)$$ $$-iz^5\frac{1}{8\pi^2}\big(\pi ((10i-8 )+(5-6i)\pi)+4i(4i+(3+8i)\pi)\log(z)-48\log(z)^2\big)+O(z^6\log(z)^3)$$

I couldn't be more excited: you can check how well \(f(f(z))\) approximates \(-z+z^2\), at least in a small region around \(z=0\), though accuracy is lost in some directions of the complex plane. Here's a rough Re-Im plot showing \(f(f(z))\) (orange, of course) and \(-z+z^2\) (blue). (Dotted line = Im, solid line = Re.)
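As a quick numerical sanity check (a sketch in Python: the series truncated at the \(z^5\) term, evaluated with the principal-branch `cmath.log` matching the branch assumption above; accuracy will be lost in some directions of the plane, as noted):

```python
import cmath

PI = cmath.pi

def f_half(z):
    """Truncated asymptotic series for f with f(f(z)) = -z + z^2 near 0."""
    L = cmath.log(z)  # principal branch
    return (-1j*z
            - (0.5 - 0.5j)*z**2
            - 1j*z**3*(0.5j + 2j*L/PI)
            - 1j*z**4*((-0.5 + 0.5j) - 1j/PI + (2 - 3j)*L/PI)
            - 1j*z**5*((0.625 - 0.75j) - (1 - 1.25j)/PI
                       - 2*L/PI**2 - (4 - 1.5j)*L/PI
                       - 6*L**2/PI**2))

z = 0.03
err = abs(f_half(f_half(z)) - (-z + z**2))
print(err)  # small, since the truncation error is O(z^6 log(z)^3)
```

On a small positive real \(z\) the two principal-branch evaluations stay on the intended branch, so the composition error is tiny.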
But "unfortunately" the existence of this expansion might spell the end of the conjectures that we can approximate it by limits, such as taking the half iterate of \(f_c(z)=cz+z^2\) as \(c\to -1\), and of the methods derived from them.

Regards, Leo
maybe a silly comment but

why not consider h(z) = - z + z^2 , g(z) = h(h(z))

and then consider the super and abel of g(z) ?


(01/19/2023, 11:20 PM)tommy1729 Wrote: maybe a silly comment but

why not consider h(z) = - z + z^2 , g(z) = h(h(z))

and then consider the super and abel of g(z) ?



Hey tommy, the reason was discussed way earlier at
I can repeat lol

Abel part:
It's infelicitous to compute the Abel function for a function of nonconstructible type; call such a function \(f\). Let \(q\) be the smallest positive integer making \(f^q(z)\) constructible, i.e. having \(1\) as its multiplier.
The Abel function \(\alpha_f(z)\) we iterate to convergence is actually a scaled version of different branches of the same Abel function of \(f^q\): \(\alpha_{f^q}(z)\). It has the property \(\alpha_{f^q}(f^q(z))=\alpha_{f^q}(z) + 1\), but never \(\alpha_{f^q}(f(z))=\alpha_{f^q}(z) + \tfrac{1}{q}\), since this Abel function works properly only for the regularized root of \(f^q\).

* We call \(g\) a regularized root of a function with multiplier \(1\) if \(g\) also has \(1\) as its multiplier.
Every such function has at least one regularized root; here we can write \(g =(f^q)^{\tfrac{1}{q}}\).

Let's look at the example \(f(z)=-z+z^2\).
We first calculate \(f^2(z)=z-2z^3+z^4\); that's good, we have \(q=2\) and a closed form for the inverse of \(f\), making the computation much easier. Then we calculate the Abel function \(\alpha_{f^2}(z)=\frac{1}{4z^2}+\frac{1}{4z}+\frac{11}{8}\log(z)+O(z)\) around \(z=0\), and we use iteration to make it converge: \(\alpha_{f^2}(z)=\lim_{k\to\infty}{\alpha_{f^2}(f^{2k}(z))-k}\). Whoa, it converges just as expected!
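A minimal numerical sketch of this limit (Python, real \(z\) on the attracting axis; it checks the defining property \(\alpha_{f^2}(f^2(z))=\alpha_{f^2}(z)+1\) using the truncated expansion above):

```python
import math

def F2(z):
    # f^2(z) = z - 2 z^3 + z^4 for f(z) = -z + z^2
    return z - 2*z**3 + z**4

def abel0(z):
    # truncated Abel expansion around z = 0, remainder O(z)
    return 1/(4*z**2) + 1/(4*z) + (11/8)*math.log(z)

def abel(z, k=5000):
    # regularized limit: alpha(z) = lim_k abel0(F2^k(z)) - k
    w = z
    for _ in range(k):
        w = F2(w)
    return abel0(w) - k

z = 0.2
print(abel(F2(z)) - abel(z))  # close to 1, the Abel equation
```

Starting from a small positive \(z\) the iterates decay to \(0\) parabolically, so the truncation error in `abel0` largely cancels in the difference.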
But when we do the computation and find the inverse Abel function, the results leave us cold:
a) it's heavily multivalued;
b) it oscillates among branches; for example we don't get \(\alpha_{f^2}(f(z))=\alpha_{f^2}(z)+\tfrac{1}{2}\), but two totally different branches instead;
c) its result is exactly the scaled Abel function of \((f^2)^{\tfrac{1}{2}}(z)=z-z^3+\tfrac{1}{2}z^4-\frac{3}{2}z^5+O(z^6)\); this is what we call the regularized root of \(f^2\).

* So, for b), we have \(\alpha_{f^2}((f^2)^{\tfrac{1}{2}}(z))=\alpha_{f^2}(z)+\tfrac{1}{2}\) exactly.
Any Abel function generalized in this way terminates in the regularized-root case, which is quite different from our initial \(f\).
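The regularized root in c) can be verified formally with exact truncated-series arithmetic; a small sketch (pure Python, composing the claimed root with itself mod \(z^6\)):

```python
from fractions import Fraction as Fr

N = 6  # work modulo z^N

def mul(a, b):
    # product of two truncated series (coefficient lists), mod z^N
    c = [Fr(0)] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < N:
                c[i + j] += ai * bj
    return c

def compose(outer, inner):
    # outer(inner(z)) mod z^N; requires inner[0] == 0
    res = [Fr(0)] * N
    power = [Fr(1)] + [Fr(0)] * (N - 1)  # inner^0
    for coeff in outer:
        res = [r + coeff * p for r, p in zip(res, power)]
        power = mul(power, inner)
    return res

# g(z) = z - z^3 + z^4/2 - (3/2) z^5: the claimed regularized root of f^2
g = [Fr(0), Fr(1), Fr(0), Fr(-1), Fr(1, 2), Fr(-3, 2)]
f2 = [Fr(0), Fr(1), Fr(0), Fr(-2), Fr(1), Fr(0)]  # f^2(z) = z - 2z^3 + z^4

print(compose(g, g) == f2)  # → True: g∘g = f^2 up to O(z^6)
```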

Superfunction part:
It can be calculated by the same procedure as in the Abel part, and we still fall into the black hole of the regularized root.
My "P method" was to combine, in the simplest way, the 2 (or \(N\)) different branches of a superfunction; it succeeds in many cases, but not in this nonconstructible case. Here the merged superfunction either diverges into total discontinuity or converges to 0 for all noninteger \(z\).
Besides, if you really did construct a superfunction, it must oscillate, since the multiplier is \(-1\) (or generally nonconstructible); this superfunction would contradict the Abel function you construct, so you can't combine the two to create iterates. If you really want to do so, computing a nice enough Abel function is an eternal nightmare.

And the point is that their combination gives only polynomials, or rather Taylor series, for the iterates; whereas, as I computed, \(f^{1/2}(z)\) simply has no such series, only a multivalued asymptotic expansion. You may notice the weird \(\pi\) appearing in the formula for \(f\); in fact it descends from \(\log(i)\), which is multivalued.
Regards, Leo
I want to supplement something that I missed.
About the phenomenon that all functions, whether generalized by the P method, simple iteration or limiting cases, eventually go to 0: I think I can give an explanation.
It's really weird, right? Why do all such functions go to 0 after so many iterations, when they should just converge to an at-least-nonzero (continuous?) function? Let's look at the case again: denote \(F(z)=-z+z^2\) and \(f(f(z))=F(z)\); we took it for granted that \(f(z)=F(f(F^{-1}(z)))\), and this formula leads us to the 0 function.
The thing is, after each iteration we jump from one branch of \(f\) to a totally different one, and this new branch acts perfectly holomorphic while having a smaller norm than the initial branch, so it's now closer to 0. More briefly: denote by \(f_1(z),f_2(z)\) two branches of the same function \(f\); we don't have \(f_1(z)=F(f_1(F^{-1}(z)))\), but we do have \(f_2(z)=F(f_1(F^{-1}(z)))\), where \(\|f_2(z)\|<\|f_1(z)\|\) most of the time.
Thus we always end up at 0.

And in fact we have tons of different branches, rather than just the \(f\) in the first post.

Now I'll show you the computation stuff.
The Mathematica code (I can't afford a license, I might just use a cloud or a cr*cked ver) is:
IF[z_] := 1/2 (1 - Sqrt[1 + 4 z]) (* branch of the inverse of F fixing 0 *)
F[z_] := -z + z^2
(* G: the truncated asymptotic series for the half iterate, from the first post *)
G[z_] := -I z - (1/2 - I/2) z^2 - I z^3 (I/2 + (2 I Log[z])/\[Pi]) -
  I z^4 ((-(1/2) + I/2) - I/\[Pi] + ((2 - 3 I) Log[z])/\[Pi]) -
  I z^5 ((5/8 - (3 I)/4) - (1 - (5 I)/4)/\[Pi] - (
     2 Log[z])/\[Pi]^2 - ((4 - (3 I)/2) Log[z])/\[Pi] - (
     6 Log[z]^2)/\[Pi]^2)
s = 100; (* number of regularizing iterations *)
M[z_] := Nest[F, G[Nest[IF, z, s]], s] (* M = F^s o G o F^(-s) *)
q = 2; (* half-width of the plotted square *)
L[z_] := M[M[z]] - F[z] (* error of the candidate half iterate *)
Plot3D[0, {a, -q, q}, {b, -q, q}, PlotPoints -> 100,
PlotStyle -> Directive[RGBColor[1, 1, 1], Opacity[1]],
ColorFunction ->
  Function[{a, b, z},
   Hue[(Arg[L[N[a + b I]]])/(2 Pi), 1, 1 - (1/E)^Abs[L[N[a + b I]]]]],
  ColorFunctionScaling -> False, Mesh -> False, PlotRange -> {-1, 1},
BoxRatios -> {100, 100, .1}, ViewPoint -> Above]
This shows how close M(M(z)) in the code is to \(F(z)=-z+z^2\) on a region about \(\{z:\|z\|<0.5,\ \arg(z)>-\tfrac{\pi}{2}\}\).

You can adjust the s = 100; in the code; it's the number of iterations, and s = 200 gives a better half iterate, etc.
Changing L[z_] := M[M[z]] - F[z] to L[z_] := M[z] shows a graph of M(z) itself, visualizing the descending absolute value of M(z) after each iteration; as s goes bigger, most of the graph must get darker.

We can claim we found a good enough branch in this sense. The half iterate must violate \(f(F(z))=F(f(z))\) in some manner to keep itself from being nonconstructible.
Also I want to emphasize again that the deep reason why we can't do a good calculation of this function is that \(1=e^{2\pi iz}\) doesn't always hold. This is the root of the crux.
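A two-line illustration of that subtlety (Python's principal-branch log; the additivity \(\log(ab)=\log(a)+\log(b)\) silently fails once arguments wrap past \(\pi\), and the defect is exactly the \(2\pi i\) in question):

```python
import cmath

a = cmath.exp(2j)   # arg(a) = 2
b = cmath.exp(2j)   # arg(a*b) = 4, which wraps to 4 - 2*pi on the principal branch
defect = cmath.log(a * b) - (cmath.log(a) + cmath.log(b))
print(defect)  # ≈ -2*pi*i: log(ab) != log(a) + log(b) here
```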
Regards, Leo
I'm not sure how these relate, but 3 ideas cross my mind.

(for every petal separately:)

1) the "realifier" mentioned by at minimum yourself and Bo.

2) how about the 1-periodic theta functions present in the superfunctions?

3) vector calculus, to "see" what is really going on and perhaps even solve it, together with series expansions that might make more sense.

I mentioned Fourier and L-series in the past.

Mittag-Leffler, perhaps.

It seems Taylor might not be the best here.

I assume you considered fractional calculus too. And the Julia equation.


Also I had the idea:

f^[t](x) = f_1(t) x + f_2(t) x^2 + f_3(t) x^3 + ...

Understanding these functions f_n seems key to me.

If those f_n agree with one of the iterates computed by the Carleman matrix methods (picking the correct roots), then we are close to having the semigroup isomorphism.

Notice f_1(t) also satisfies the semigroup iso both for f_1(t) = 1 and for f_1(t) = (-1)^t.
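For instance, a tiny numerical check (just a sketch) that the branch \((-1)^t=e^{i\pi t}\) does satisfy the semigroup identity \(f_1(s)f_1(t)=f_1(s+t)\):

```python
import cmath

def f1(t):
    # one branch of (-1)^t
    return cmath.exp(1j * cmath.pi * t)

s, t = 0.3, 0.7
print(abs(f1(s) * f1(t) - f1(s + t)))  # ≈ 0: the semigroup identity
print(f1(1))  # ≈ -1, consistent with the multiplier of -z + z^2
```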

In fact, around a fixpoint, f_1 bijectively defines a fitting theta function.

However the issues are not resolved: radius 0, divergence, bad series expansions? etc.

But the truncated Taylor function f_1(t) x + f_2(t) x^2 + f_3(t) x^3 should work well in the limit formulas, I assume.

Further ideas start to resemble the ideas around exp(x) - 1 ....

Ironically, I think the branching thing is resolved in a way similar to your own idea of that "realifier" picking branches... if we get the correct solution of course.

I want to point out that dynamics on a plane is a lot like liquid flow.

Not sure if that is of any help.

Just my 50 cents.


btw I believe in your conjecture


(01/20/2023, 08:33 PM)tommy1729 Wrote: I'm not sure how these relate, but 3 ideas cross my mind.



2) About the 1-periodic theta mapping: I don't think it'll help. If it would, then our mixed real-valued tetration for base \(0<b<e^{-e}\) would be easy to compute, but to this day I can't compute those tetrations. Maybe I'll give it a try. The key is to figure out how it behaves on an interval like [0,1], not at infinity; since the iterative method brings us to different branches, we must develop a technique to iteratively (but not via a limit from/at infinity) make an initial guess converge. This is our obstruction now.

3) Your idea is good, and it coincides with things in my last thread and with my procedure, but it's not enough, since we need a series containing terms like \(z^3\log(z)\). On the other hand, we do want it to come true as a good expression for our iteration, but the truth is we can't even settle the first term:
\(f^t(z)=(-1)^{t}z+O(z^2)\), \(f^t(z)=(-1)^{-t}z+O(z^2)\), \(f^t(z)=(-1)^{3t}z+O(z^2)\), and so on.
Which one is your answer?
In fact, the first term is enough to determine all the terms that follow, and they all fit the basic semigroup identity.
After that, my computation gives:
if we even assume a better series (asymptotic),
and we (A) force it to satisfy the semigroup identities,
and if we assume (B1) \(f_1(t)=(-1)^{-t}\), then
all terms containing \(\log(z)\) cancel, leaving a plain Taylor series, hence incorrect.
But we know that the series must be correct whenever \(t\) is an integer.
And if we assume (B2) \(f_1(t)=(-1)^t\), then the same thing happens.
This means our assumptions go wrong somewhere; there is a contradiction. And by logic, the glitch most likely lies in condition (A), the semigroup identities. That means we must deal with tons of branches, multivalued multipliers and such. It can be horrible.
The half iterate can be computed in the same way: you assume coefficients and you obtain a series.
Regards, Leo
(01/20/2023, 08:33 PM)tommy1729 Wrote: I'm not sure how these relate, but 3 ideas cross my mind.


What's stranger: assume (A) \(f(f(z))=-z+z^2\),
and (B) \(\log(az)=\log(a)+\log(z)\) for all \(a\) near \(z=0\), where \(\log\) denotes the principal value,
and (D) a series for \(f\): \(f(z)=a_1z+a_2z^2+(a_{3,0}+a_{3,1}\log(z))z^3+(a_{4,0}+a_{4,1}\log(z)+a_{4,2}\log(z)^2)z^4+...\)
This gives a series dependent on \(a_{3,0}\).
Denote \(a_{3,0} = c, \log(z)=L\).
Choose the branch (E1): \(a_1=i\).
Under [A,B,D,E1],
\(c\) appears infinitely many times, for example in the coefficients of \(z^5, z^5\log(z), z^6, z^6\log(z)\).
And choosing the branch (E2): \(a_1=-i\), the same thing happens. The posted version used the additional condition (F) \(a_{3,0}=0.5\).
This is what I meant about how iterating \(f(-z+z^2)=-f(z)+f(z)^2\) moves one branch to another; the function \(-z+z^2\) will affect the \(a_{3,0}\) term and so on.
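Concretely, matching just the \(z\) and \(z^2\) coefficients of \(f(f(z))=-z+z^2\) under (D) gives \(a_1^2=-1\) and \(a_1a_2+a_2a_1^2=1\), i.e. \(a_2=1/(a_1+a_1^2)\); the branch (E2) \(a_1=-i\) then reproduces the \(-\tfrac{1-i}{2}z^2\) term of the series in the first post. A quick check (sketch):

```python
# z^2 coefficient of f(f(z)) = a1*(a2 z^2) + a2*(a1 z)^2 = (a1*a2 + a2*a1^2) z^2,
# which must equal the +z^2 of -z + z^2.
for a1 in (1j, -1j):          # the two branches E1, E2
    assert a1**2 == -1        # z^1 coefficient: a1^2 = -1
    a2 = 1 / (a1 + a1**2)
    print(a1, a2)
# for a1 = -i this gives a2 = -0.5 + 0.5j, i.e. -(1 - i)/2 as posted
```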

Moreover, it's asymptotic, and will diverge.
Under [A,B,D,E2,F]:
and \(a_{9,0}\approx -55.048-45.058i\),
and the norm (abs) of the coefficients grows like \(\|a_{k,0}\|\gtrsim O(k^2)\).
My computer would explode computing the \(z^{10}\) terms.
SO many issues! Literally just kill me (lol)
Regards, Leo
Beautiful thread, Leo!

These are the same problems I am facing now, and I've only managed to solve them in the parabolic case. So as you use your example \(-z + z^2\), I use \(e^{-z}+1\) (I like dealing with non-polynomial constructions off the bat, in case you're just proving something special for polynomials). And you have \(q=2\), which is exactly what I write...

The only difference in this construction you've beautifully exposited: I'm trying to watch this reaction beneath the integral. This is truly fascinating, and I'm all ears for your experiments. My experiments are all with integrals; any different angle is a plus for me, so keep hitting it.

So, for example, let's take:

$$f_\pi(z) = e^{-z}+1$$

Then \(f_{\pi}^2(z) = e^{-e^{-z}-1}+1 = z + O(z^2) = h(z)\).

There are two petals for \(f_\pi\) (and for all \(f_\theta\)), but there is a transposition in each application of \(f_\pi\) (an order \(q=2\) permutation). So when we take \(h^{\circ k}(z)\): if \(k \equiv 0\), then \(h^{\circ k}(z) \approx 0\) when \(\Re(z) < 0\); and if \(k \equiv 1\), then \(h^{\circ k}(z) \approx 0\) when \(\Re(z) > 0\). The closeness to \(0\) is asymptotically \(\frac{1}{\sqrt{k z}}\) (or something close to this; it can vary a bit).

Now when I write:

$$\vartheta[h](x,z) = \sum_{n=0}^\infty h^{\circ n+1}(z) \frac{x^n}{n!}$$

We can prove that the differintegral at zero converges, and that we get:

$$h^{\circ s}(z)\, \Gamma(1-s) = \int_0^\infty \vartheta[h](x,z)\, x^{-s}\,dx$$

But how do we transform \(h^{\circ s} = (f_\pi^{\circ 2})^{\circ s}\) into \(f_\pi^{\circ s}\)? Something you've so clearly articulated. And I'm super excited for everything you can bring to the table!!!
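For a toy sanity check of this Mellin-type identity, take the linear map \(h(z)=z/2\), where \(h^{\circ s}(z)=2^{-s}z\) is known exactly. For this \(h\) the series sums in closed form once we use the alternating-sign (Ramanujan master theorem) convention \(\tilde\vartheta(x,z)=\sum_n h^{\circ n+1}(z)\tfrac{(-x)^n}{n!}=\tfrac{z}{2}e^{-x/2}\); the sign convention is my assumption here, so treat this as a sketch, not the exact form intended above:

```python
import math

def theta_tilde(x, z):
    # for h(z) = z/2:  sum_n h^{o(n+1)}(z) (-x)^n / n!  =  (z/2) exp(-x/2)
    return (z / 2) * math.exp(-x / 2)

def mellin(s, z, tmax=12.0, n=24000):
    # integral_0^inf theta_tilde(x,z) x^(-s) dx via x = t^2 (midpoint rule),
    # which tames the integrable singularity at x = 0 for 0 < s < 1
    dt = tmax / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * dt
        total += 2.0 * theta_tilde(t * t, z) * t ** (1.0 - 2.0 * s) * dt
    return total

s, z = 0.5, 1.0
lhs = mellin(s, z)
rhs = math.gamma(1 - s) * 2 ** (-s) * z  # Gamma(1-s) * h^{os}(z)
print(abs(lhs - rhs))  # small: the identity holds for this toy h
```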

My goal, and how I think, is that this problem can be solved by operations on \(\vartheta\) and "how we take the integral". For the parabolic case I am fairly confident, but I am not confident for \(f_{2\pi \xi}(z)\) where \(\xi\) is irrational. The general idea seems like it could work, but things don't seem to be working, at least not in as straightforward a manner.

So I have a question for you.

Let \(\xi = \sqrt{2} -1\). What are your comments on:

$$p(z) = e^{2\pi i\xi} z + z^2$$

Siegel disks tend to be needed here, and they develop fairly good Schröder-"similar" constructions. But Abel functions still seem super mysterious in the general sense. I'm very curious whether you can run your ideas on this!

Nice to have you back, Leo!
Dear Leo

It might make more sense to use series expansions derived from asymptotics.

This usually improves convergence and analyticity.

And adding some fractal structure:

f(f(z)) = -z + z^2

f(z) = G_1(G_2(z))

G_1(z) = sqrt(i) z + a z^(1/sqrt(2)) + b z^(sqrt(2)-1) + c z^((1/sqrt(2))+1) + d z^2 + ...

G_2(z) = sqrt(i) z + a_2 z^(1/sqrt(2)) + b_2 z^(sqrt(2)-1) + c_2 z^((1/sqrt(2))+1) + d_2 z^2 + ...

for some real a, b, c, d, a_2, b_2, c_2, d_2, or more if you want.


