Just a few late thoughts
#1
I. Distinguishing members within generalized superfunction families
We describe a generalized superfunction family by MphLee's notation \([f,g]\), or my previously used notation \(\zeta_{g|f}(z)\). By previous discussion we have \([g,g][f,g]=[f,g][f,f]=[f,g]\). For the specific \(f(z)=z+1\), we denote \(\theta=[f,f]\), so that \(\theta(z+1)=\theta(z)+1\), and \(\theta\) can be written in the form \(\theta(z)=z+c+T(z)\), where \(T(z)\) is a 1-periodic function of \(z\).
It's natural to try to distinguish the individual members; the key is to figure out which theta mapping a given function corresponds to. But we should also keep in mind that the theta mapping may not connect all members: in a theoretical frame it does, but in pragmatic computation it does not. It's easy to check, for example, that the two Schröder superfunctions (discussed explicitly by James and MphLee in a previous post) generated at ~0.3+1.3i and ~0.3-1.3i cannot be transformed into each other by a single computable theta mapping. So we can consider superfunction sub-families for different branch cuts at different fixed points; distinguishing one sub-family from another is fairly easy.
And then, within each sub-family, the way to distinguish members is the theta mapping, which is much harder. However, since we almost never use a fixed point with multiplier 1, after a theta mapping the superfunction would have (or at least appear to have) two different periods (without being an elliptic function), or one of them is a fake period. So we can test a function by some sort of Fourier transformation, to check whether it carries a nontrivial theta mapping, and distinguish members that way.
As we wrote, \(F(z+1)=f(F(z))\) with \(F(z)=\sigma^{-1}(L(c)s^z)\), where \(c\) refers to the constant in \(\theta(z)=z+c+T(z)\); a theta-mapped \(F\) is \(F_\theta(z)=\sigma^{-1}(L_\theta(c)s^{z+T(z)})\), and then we can write such a "Fourier transformation" to take \(T\) out. Even in the case where we don't know the original sigma function, we can still detect the 1-periodic behavior.
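As a toy illustration (not the actual sub-family computation): take \(f(z)=2z\), where \(\sigma\) is the identity, \(L=1\) and \(s=2\), so \(F(z)=2^z\); the constant \(c=0.3\) and the test function \(T\) below are made-up values. Sampling \(\log_s F_\theta(z)-z\) over one period and applying a DFT recovers \(c\) as the mean and exposes \(T\)'s harmonics:
Code:
(* toy case f(z) = 2 z: sigma = identity, L = 1, s = 2, so F(z) = 2^z;
   c and T are illustrative, made-up values *)
T[z_] := 0.05 Sin[2 Pi z] - 0.02 Cos[4 Pi z];
FTheta[z_] := 2^(z + 0.3 + T[z]);   (* theta-mapped superfunction, c = 0.3 *)
n = 64;
samples = Table[Log[2, FTheta[k/n]] - k/n, {k, 0, n - 1}];
Mean[samples]                 (* recovers c = 0.3, since T has mean zero *)
Chop[Abs[Fourier[samples]]]   (* DC spike from c, plus spikes at T's two harmonics *)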

II. A special functional equation and related phenomenon(?)
I personally got into some sorts of asymptotic solutions of iterative functions, one of which is \(\Delta f=g\circ f\) for a given function \(g\). This is exactly asking for the superfunction \(f(z+1)=G(f(z))\) where \(G(z)=z+g(z)\). However, I don't need an exact solution, just the first term of the asymptotic expansion, and by the superfunction method I can get an exact solution by iteration, like \(f_n(z)=G^{-n}(f_0(z+n))\).
And it's easy to check that if \(g(z)=O(z)\), only the biggest term of \(g(z)\) controls the biggest term of the asymptotics of \(f\). For example, if \(g(z)=1/z\), we can write \(G(z)=z+\frac{1}{z}\), and then both \(G^t(z)=z+\frac{t}{z}+O(t^2z^{-2})\) and \(G^t(z)=\sqrt{2t}+O(t^0h(z))\), for some \(h(z)\), are true.
The sqrt term comes directly from the equation.
Another example: for \(G(z)=z+\sqrt{z}\) we have \(G^t(z)=\frac{t^2}{4}+O(h(z)t\log(t)^2)\).
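As a quick numerical sanity check of both leading terms (the iteration counts and starting points are arbitrary choices):
Code:
g1[z_] := z + 1/z
g2[z_] := z + Sqrt[z]
(* G(z) = z + 1/z: G^t(z) ~ Sqrt[2 t] as t grows *)
{Nest[g1, 2., 10^6], Sqrt[2.*10^6]}   (* both about 1414.2 *)
(* G(z) = z + Sqrt[z]: G^t(z) ~ t^2/4 as t grows *)
Nest[g2, 2., 10^4]/((10.^4)^2/4)      (* ratio close to 1 *)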
I tried \(a^{-z}\), \(z^a\log(z)^b\) (\(a<1\)), etc.
For g(z)=O(z), can we always derive such terms for a specific asymptotic term O(g(z))?
Regards, Leo
#2
1.)



I, too, have looked at trying to identify the theta mapping in other ways, using "some kind of" Fourier transform. This would be done with the Mellin transform, or the Laplace transform. It turns out to look like the following.



If \(F(z)\) is a superfunction, satisfying \(f(F(z)) = F(z+1)\), then assume additionally that \(F\) is holomorphic in a half plane; WLOG we can take the half plane to be \(\Re(z) > -\delta\). This equates to real-valued multipliers, but complex-valued multipliers can be handled through a change of variables. Then we can show that:



\[

\frac{1}{2\pi i} \int_{k-i\infty}^{k+i\infty} F(-z)\Gamma(z)x^{-z}\,dz\\ = \sum_{n=0}^\infty f^{\circ n}(F(0))\frac{x^n}{n!}\\

\]


For \(0 < k < \delta\). You'll note this is a Fourier transform in disguise.
 


Well, it just so turns out that the only function this operation converges on is the one with \(\theta = \text{constant}\). I used this as a uniqueness condition in much of my earlier work. I've never quite phrased it as this, but it is a consequence of Ramanujan's master theorem, and the fact that \(f^{\circ n}(F(0)) = F(n)\) regardless of \(\theta\). To go against this would require that we are not holomorphic in a half plane.
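For reference, Ramanujan's master theorem says that, under suitable growth conditions, if \(g(x) = \sum_{n=0}^\infty \varphi(n)\frac{(-x)^n}{n!}\), then

\[
\int_0^\infty x^{s-1}g(x)\,dx = \Gamma(s)\varphi(-s)\\
\]

Mellin inversion of this is exactly the contour integral above, with \(\varphi(n) = f^{\circ n}(F(0)) = F(n)\) (up to the sign convention hidden in \((-x)^n\)).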



The other kind of "Fourier transform" I thought of, after this, was less restrictive--let's just use Laplace. Assume that \(F\) is bounded, and about an attracting fixed point, \(\lim_{t \to \infty} F(t) = A\). Then:



\[

\int_0^\infty e^{-st}F(t)\,dt = LF(s)\\

\]



I don't remember what I worked out here, but I found a way to describe \(LF\)'s dependence on \(\theta\).
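For the simplest case at least, a constant shift \(\theta = c\) (so \(F_c(t) = F(t+c)\)) shows the dependence explicitly, by the standard Laplace shift rule:

\[
\int_0^\infty e^{-st}F(t+c)\,dt = e^{cs}\left(LF(s) - \int_0^c e^{-st}F(t)\,dt\right)\\
\]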





2.)



Also, when it comes to first order difference equations, I've done a lot of work on them, which probably culminates in a paper I wrote back in 2019. I can link it if you're interested. It's primarily focused on solving the equation:



\[

\Delta f(s) = e^{sf(s)}\\

\]



But it allows for solutions of lots of wacky first order difference equations (and even higher order ones). I just chose that equation as a case study to describe how it's done more generally. Though it does take a deep, deep dive into infinite compositions and how they work, and that's largely the heavy lifting.



Could you further elaborate on how a first order difference equation can help solve a superfunction problem? I always got hung up on how these things seemed irreconcilable. The closest I got was the beta method, which solves the first order difference equation:



\[

\Delta \beta = \frac{e^{\beta(s)}}{1+e^{-s}} - \beta(s)\\

\]



Which is slightly different from your form. I'm very interested in this. Additionally, it was shown that in general these solutions are not holomorphic (barring Schröder solutions), so it's no help for base \(e\), for example (at least real-valued ones).
#3
(08/10/2022, 11:50 PM)JmsNxn Wrote: 1.)
...

Which is slightly different from your form. I'm very interested in this. Additionally, it was shown that in general these solutions are not holomorphic (barring Schröder solutions), so it's no help for base \(e\), for example (at least real-valued ones).

Hi James
Thx for your reply
I don't quite get your ideas about how your transformations work, could you show me more details?
Does it work by expanding F(z) as a power series in s^z where s is the multiplier? Just my intuition lol

Well, for the 2nd part, it's a more general phenomenon; it could even be in set theory. Pardon me, because I didn't explain why I jumped to this.
Here are the details (sorry, but I literally have so little time) (and maybe someone proposed similar things way before):
After I posted about the iteration research for \(z+\Gamma(z)\), I realized that the conjugator couldn't be any elementary function but a more transcendental one, most likely a function with the asymptotic behavior of the inverse function of \(\sqrt{\frac{C}{z\sinh(z)}}\). So I considered the computation of the inverse of \(z\sin(z)\), but it turned out that I should investigate an asymptotic again, for this function.
Looking into these inverse-asymptotic questions, after deriving the inverse asymptotic (mostly I refer to the one at infinity) of the function \(z+\frac{1}{z}+\log(z)\), I noticed, and found it pretty easy to prove, that:
For \(f(z)=z+\log(z)+O(\log(z))\), we must have \(f^t(z)=z+t\log(z)+O(\log(z))\)
And this can be extended further and further, such as:
For \(f(z)=z+G(z)+O(G(z))\) where \(G(z)=O(z)\), we have \(f^t(z)=z+tG(z)+O(G(z))\)
And other cases like \(f(z)=H(z)+G(z)+O(G(z))\) where \(G(z)=O(H(z))\)
Thus we have a method for generalizing more series;
for example, if we consider the function \(f(z)=z+\sqrt{z}\), how would you calculate \(f^t(z)\)?
We can write, by asymptotics, \(f^t(z)=z+t\sqrt{z}+\frac{t(t-1)}{4}+O(z^{-1/2})\), with as many more terms as desired.
And we can use the asymptotic expansion to get any iterate by conjugating with \(f\).
If we only took the fixed-point method we wouldn't get any \(f^t\).
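A quick Mathematica check of this expansion, at an arbitrary large sample point and small iteration depth:
Code:
f[z_] := z + Sqrt[z]
fAsym[z_, t_] := z + t Sqrt[z] + t (t - 1)/4
(* exact 5-fold iterate vs. the asymptotic expansion at z = 10^6 *)
Nest[f, 1.*^6, 5] - fAsym[1.*^6, 5]
(* about -0.00125, consistent with the O(z^(-1/2)) error term *)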
This asymptotic expansion has a group structure under composition, not necessarily at infinity; for example, the set
\(\{f(z)\mid \mathrm{asymp}[f](z)=\sum_{n\ge -k}{P_n(\log(z),\log^2(z),\cdots,\log^r(z))\,z^{-\frac{n}{T}}}\}\) forms a group under composition, where \(T\) is a rational number, \(r\) any positive integer, and \(P_n\) any multivariate rational (polynomial) function.
And every such group guarantees an expansion of the iterations \(f^t\), with \(t\), of any element inside it.
The very well-known example is the set of polynomials with 0 as constant term, or
\(\{f(z)\mid \mathrm{asymp}[f](z)=\sum_{n\ge1}{a_nz^n}\}\),
and the series for their iterations are well known too.
This is only the beginning, but I don't have much time.
What comes next is beyond the structure: we must conquer a method of computing an asymptotic from an asymptotic, to be clearer about the whole structure, such as asking for an asymptotic for a superfunction of these functions.
That is, \(F(z+1)=F(z)+G(F(z))+O(G(F(z)))\)
You may ask why we'd take this into account. For example, you can see at first glance that
\(f(z)=z+\sqrt{z}\to f^t(z)=z+t\sqrt{z}+O(t^2z^0)\)
But what about the asymptotics in \(t\) (e.g. \(t\to\infty\))?
\(f(z)=z+\sqrt{z}\to f^t(z)=\frac{t^2}{4}+O(t\log(t)h(z))\)
This can be a hard problem, bro.
Regards, Leo
#4
Remember all that talk about semi-group homomorphisms, 2sinh and the Koenigs function??

Seems very related hmm

In fact it makes the idea of a 1-periodic theta absolute, rather than "relative compared to other solutions".


Furthermore I advise thinking in terms of the Julia equation.

regards

tommy1729
#5
(08/12/2022, 01:51 AM)tommy1729 Wrote: ...

tommy1729

Thank u tommy

About the Julia equation: we know \(\lambda(z)=\frac{1}{\alpha'(z)}\),
and we know that for any 1-periodic function \(\theta(z)\), \(\alpha(z)+\theta(\alpha(z))\) is also an Abel function.
So we combine these two, and then \(\lambda(z)\frac{1}{1+\theta'(\alpha(z))}\) is also a Julia function.
And WLOG \(\theta'(z)\) is also an arbitrary 1-periodic function, so \(\lambda(z)\frac{1}{1+\theta(\alpha(z))}\) is also a Julia function, and so is \(\lambda(z)\theta(\alpha(z))\).
It seems that we can only discern two different Julia functions by a multiplicative factor \(\theta(\alpha(z))\); we still lack some Fourier technique.
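As a sanity check that \(\lambda_\theta(z)=\lambda(z)\frac{1}{1+\theta'(\alpha(z))}\) really satisfies Julia's equation \(\lambda(f(z))=f'(z)\lambda(z)\): differentiating \(\alpha(f(z))=\alpha(z)+1\) gives \(\alpha'(f(z))f'(z)=\alpha'(z)\), and since \(\theta'\) is 1-periodic,
\[
\lambda_\theta(f(z))=\frac{1}{\alpha'(f(z))}\cdot\frac{1}{1+\theta'(\alpha(z)+1)}=\frac{f'(z)}{\alpha'(z)}\cdot\frac{1}{1+\theta'(\alpha(z))}=f'(z)\lambda_\theta(z).
\]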
Regards, Leo
#6
Let me explain this more.
We used to take analyticity around a fixed point for granted and had almost no idea how to deal with non-analyticity, for example when it comes to the iterations of the function \[f(z)=z+\sqrt{z}\]
This function only has a fixed point at 0, where the derivative, or multiplier, is \(\infty\); our old methods are totally invalidated.
If you insist that, since its inverse function is multivalued, one branch must preserve a fixed point with analyticity in nature, there's still a chance: \(f_1^{-1}(z)=z+\frac{1-\sqrt{1+4z}}{2},\ f_2^{-1}(z)=z+\frac{1+\sqrt{1+4z}}{2}\)
Then reality comes and hits you, because after evaluation only \(f_1^{-1}(z)\) has a fixed point, also at 0. It's superattracting, and thus only the Böttcher equation would work.
If you do some substantial calculation you'll quickly realize that Böttcher's function will not extend beyond a branch cut; after so many iterations, the function can only be defined on a subset of \(\mathbb{C}\).
So you can never get an iteration that's entirely defined on \(\mathbb{C}\).
If you still try hard, you can build a superfunction that has infinitely many cuts and a discontinuity along an infinitely long curve, \(\mathrm{Im}(z)^2=\mathrm{Re}(z)\); such computing can be super annoying: (in Mathematica)
Code:
Clear[F, B, B1, B2, B3, BI, IF]
F[z_] := z + Sqrt[z]
B[z_] := 1/z + 1 - z/2 + 3 z^2/2 - 35 z^3/8 + 101 z^4/8 -
  599 z^5/16 + 1869/16 z^6 - 48733 z^7/128
B1[z_] :=
1.0640783082804308` - 0.02060483255029065` (-6.` + z) +
  0.004348716183968336` (-6.` + z)^2 -
  0.0007901451309530964` (-6.` + z)^3 +
  0.00013565293405788404` (-6.` + z)^4 -
  0.000022785630652999728` (-6.` + z)^5 +
  3.750548741345441`*^-6 (-6.` + z)^6 -
  6.07839369121214`*^-7 (-6.` + z)^7 +
  1.0408560489666969`*^-7 (-6.` + z)^8 -
  1.9589488351201014`*^-8 (-6.` + z)^9 +
  3.3923628509269487`*^-9 (-6.` + z)^10 -
  4.19175181489827`*^-10 (-6.` + z)^11 +
  3.885382935067092`*^-11 (-6.` + z)^12 -
  1.1779937148123413`*^-11 (-6.` + z)^13 +
  6.536083288896178`*^-12 (-6.` + z)^14 -
  2.9867841343622627`*^-12 (-6.` + z)^15 +
  1.50119091703261`*^-12 (-6.` + z)^16 -
  8.23562018916844`*^-13 (-6.` + z)^17 +
  4.662696976292319`*^-13 (-6.` + z)^18 -
  2.6108033074645186`*^-13 (-6.` + z)^19 +
  1.434506597568819`*^-13 (-6.` + z)^20
B2[z_ /; Abs[z - 6] <= 3] := B1[z]
B2[z_ /; Abs[z - 6] > 3 && Re[z] > 10] := B2[IF[z]]^(1/2)
B2[z_ /; Abs[z - 6] > 3 && Re[z] <= 10] := B2[F[z]]^2
B3[z_ /; Im[z]^2 <= Re[z]] := B2[z]
B3[z_ /; Im[z]^2 > Re[z]] := Undefined
IF[z_] := 1/2 (1 + 2 z - Sqrt[1 + 4 z])
IF2[z_] := -(1/2) (1 + 2 (-z) + Sqrt[1 + 4 (-z)])
BI[z_] :=
z - z^2 + 3/2 z^3 - 7/2 z^4 + 81/8 z^5 - 249/8 z^6 + 1569/16 z^7 -
  5077/16 z^8 + 135067/128 z^9
Clear[BI1, BI2, BI3, BI4, BI5, BI6, BI7, B4]
BI1[z_ /; Im[z]^2 >= -Re[z]] := Sqrt[BI[IF[z]]]
BI1[z_ /; Im[z]^2 < -Re[z]] := -Sqrt[BI[IF[z]]]
BI2[z_ /; Im[z]^2 >= -Re[z]] := Sqrt[BI1[IF[z]]]
BI2[z_ /; Im[z]^2 < -Re[z]] := -Sqrt[BI1[IF[z]]]
BI3[z_ /; Im[z]^2 >= -Re[z]] := Sqrt[BI2[IF[z]]]
BI3[z_ /; Im[z]^2 < -Re[z]] := -Sqrt[BI2[IF[z]]]
BI4[z_ /; Im[z]^2 >= -Re[z]] := Sqrt[BI3[IF[z]]]
BI4[z_ /; Im[z]^2 < -Re[z]] := -Sqrt[BI3[IF[z]]]
BI5[z_ /; Im[z]^2 >= -Re[z] && Abs[z] > 1] := Sqrt[BI4[IF[z]]]
BI5[z_ /; Im[z]^2 < -Re[z] && Abs[z] > 1] := -Sqrt[BI4[IF[z]]]
BI5[z_ /; Abs[z] <= 1] := BI4[z]
BI6[z_ /; Abs[z] <= 6/5] := BI5[z]
BI6[z_ /; Im[z]^2 >= -Re[z] && Abs[z] > 6/5] := Sqrt[BI6[IF[z]]]
BI6[z_ /; Im[z]^2 < -Re[z] && Abs[z] > 6/5] := -Sqrt[BI6[IF[z]]]
BI7[z_ /; 1/10 < Abs[z] < 2] := BI6[z]
BI7[z_ /; 1/10 >= Abs[z]] := BI6[F[z]]^2
BI7[z_ /; Im[z]^2 >= -Re[z] && Abs[z] >= 2] := Sqrt[BI7[IF[z]]]
BI7[z_ /; Im[z]^2 < -Re[z] && Abs[z] >= 2] := -Sqrt[BI7[IF[z]]]
B4[z_] := Exp[1/Log[BI7[z]]]
Clear[IBI, IBI1, IBI2, IBI3]
IBI[z_] :=
z + z^2 + z^3/2 + z^4 + z^5/8 + z^6/2 + (7 z^7)/16 + z^8 - (21 z^9)/
  128 + z^10/8 + (71 z^11)/256 + z^12/2 + (5 z^13)/1024 + (7 z^14)/
  16 + (1095 z^15)/2048 + z^16 - (15885 z^17)/32768 - (21 z^18)/
  128 + (18443 z^19)/65536 + z^20/8 - (55841 z^21)/262144 + (71 z^22)/
  256 + (324945 z^23)/524288 + z^24/2 - (2649857 z^25)/4194304 + (
  5 z^26)/1024 + (6109987 z^27)/8388608 + (7 z^28)/16 - (
  18206579 z^29)/33554432 + (1095 z^30)/2048 + (92290439 z^31)/
  67108864 + z^32
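(* note: F2 used below is not defined in this snippet; presumably it is the
   second square-root branch of F, i.e. F2[z_] := z - Sqrt[z] *)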
IBI1[z_ /; Re[z] >= 0] := F[IBI[z^2]]
IBI1[z_ /; Re[z] < 0] := F2[IBI[z^2]]
IBI2[z_ /; Re[z] >= 0] := F[IBI1[z^2]]
IBI2[z_ /; Re[z] < 0] := F2[IBI1[z^2]]
IBI3[z_ /; Re[z] >= 0] := F[IBI2[z^2]]
IBI3[z_ /; Re[z] < 0] := F2[IBI2[z^2]]
A[n_] := InverseFunction[B4][B4[1.]^(2^n)]
where A[n] is the superfunction; it computes slowly, with low precision and many branch-cut issues.


While using the asymptotic method, we have very briefly:\[f^t(z)=z+t \sqrt{z}+\frac{1}{4} (t-1) t-\frac{(t-1) t}{16 \sqrt{z}}+\frac{(t+1) \left(t^2-t\right)}{96 z}+\frac{-2 t^4-2 t^3+5 t^2-t}{768 z^{3/2}}+\frac{(t+1) \left(12 t^4+13 t^3-53 t^2+28 t\right)}{15360 z^2}-\frac{\left(16 t^5+52 t^4-55 t^3-110 t^2+69 t+28\right) t}{61440 z^{5/2}}+\frac{(t+1) \left(240 t^6+838 t^5-1447 t^4-2053 t^3+3208 t^2-786 t\right)}{2580480 z^3}-\frac{t \left(720 t^7+4176 t^6-280 t^5-18102 t^4-385 t^3+21714 t^2-1630 t-6213\right)}{20643840 z^{7/2}}+\frac{(t+1) \left(1120 t^8+6908 t^7-3292 t^6-39338 t^5+19626 t^4+55911 t^3-40287 t^2-648 t\right)}{82575360 z^4}+O(z^{-9/2})\] at infinity
And our calculation can be roughly abbreviated as:
Code:
Clear[L, A, \[Alpha], IA, I\[Alpha], F, IF, IF2]
F[z_] := z + z^(1/2)
IF[z_] := 1/2 (1 + 2 z - Sqrt[1 + 4 z])
IF2[z_] := 1/2 (1 + 2 z + Sqrt[1 + 4 z])
F[z_, t_] :=
z + t z^(1/2) + 1/4 (-1 + t) t - ((-1 + t) t)/(
  16 Sqrt[z]) + ((1 + t) (-t + t^2))/(
  96 z) + (-t + 5 t^2 - 2 t^3 - 2 t^4)/(
  768 z^(3/2)) + ((1 + t) (28 t - 53 t^2 + 13 t^3 + 12 t^4))/(
  15360 z^2) - (t (28 + 69 t - 110 t^2 - 55 t^3 + 52 t^4 + 16 t^5))/(
  61440 z^(5/2)) +
  1/(2580480 z^3) (1 + t) (-786 t + 3208 t^2 - 2053 t^3 - 1447 t^4 +
      838 t^5 + 240 t^6) -
  1/(20643840 z^(7/2))
    t (-6213 - 1630 t + 21714 t^2 - 385 t^3 - 18102 t^4 - 280 t^5 +
     4176 t^6 + 720 t^7) +
  1/(82575360 z^4) (1 + t) (-648 t - 40287 t^2 + 55911 t^3 +
     19626 t^4 - 39338 t^5 - 3292 t^6 + 6908 t^7 + 1120 t^8)
IA[z_] :=
z^2/4 + Log[z]/8 - 1/4 z Log[z] + Log[z]^2/16 - (
  1/96 + Log[z]/16 + Log[z]^2/32)/z
I\[Alpha][z_] :=
N[Nest[IF, IA[z + 10000 + 2.350354884909475`100], 10000], 10]
A[z_] := 2 Sqrt[z] + 1/4 Log[z] + Log[2]/2 +
  1/192/z + (-1/6144 Log[z]^3 -
     Log[2]/1024 Log[z]^2 + (5 - 3 Log[2]^2)/1536 Log[z] +
     1/768 (1 + 5 Log[2] - Log[2]^3))/z^(3/2)
\[Alpha][z_] :=
N[A[Nest[F, N[z, 100], 10000]] - 10000 - 2.350354884909475, 10]
Here I\[Alpha][z_] and \[Alpha][z_] are the superfunction and Abel function of f(z).

It can be computed quickly and has very high precision (for convenience I only saved 10 digits)


Such methods can be useful, and also permit us to compute a precise series for any iterate (within the compositional group)
Regards, Leo
#7
@james : nice summary of some of your key ideas.

I even see the connection with fractional derivatives.

Just one question: you say that in many cases the difference equation has no analytic solution ?

like in nowhere analytic ? 

Like which ones ?

And in the cases which are not , is it always due to or shown with infinite composition or are there other tools and ideas ?

Does that property relate or carry over to differential equations ??

***

@leo mainly :

about those asymptotics at infinity and difference equations ...

First , I have no clue how you got those series of functions you did.

Not that it is beyond me , but it is not clear how YOU exactly got them.

But for asymptotics at infinity and difference equations there are afaik only about four or five methods apart from fixpoint methods :

1)

approximate the difference by differentials.

such as replace difference by derivative.

or better approximations with higher derivatives and using taylor.

this could give asymptotics and boundaries that are potentially useful.


2)

 use infinite composition like the ideas from the beta method or gaussian method.

3)

use limits to get asymptotics 

As an example :

f(0) = 2
f(n+1) = f(n) + ln( f(n) )

to get asympt for f(n) ... 

( notice the inverse of Li(n) is a good asymptotic , thereby motivating the derivative idea in 1). idea 2) can also be used but may be harder )
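To spell that derivative idea out: replacing the difference by a derivative gives
\[
f'(n) \approx \log f(n) \implies \int\frac{df}{\log f} = \mathrm{li}(f(n)) \approx n \implies f(n) \approx \mathrm{li}^{-1}(n).
\]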

use this limit technique :

\[\lim_{n\to\infty}\left(\frac{f_n}n-\log n-\log\log n\right)=-1\]

To prove this, consider \(g_n=n\log n+n\log\log n-n\) and \(h_n=f_n-g_n\). One wants to prove that \(h_n=o(n)\). The identity \(f_{n+1}=f_n+\log f_n\) is equivalent to
\[
h_{n+1}=g_n+h_n+\log(g_n+h_n)-g_{n+1}.
\]
Using simple properties of the logarithm, one can show that this implies
\[
h_{n+1}=h_n+\log\left(1+\frac{\log\log n}{\log n}-\frac1{\log n}+\frac{h_n}{n\log n}\right)+O\left(\frac1{\log n}\right).
\]
In particular, if \(h_n=o(n\log n)\), the logarithm in the RHS goes to zero, hence \(h_{n+1}=h_n+o(1)\), which implies \(h_n=o(n)\). Thus, our task is to prove the easier statement that
\[
f_n=n\log n+o(n\log n).
\]
To do so, first note that \(f_n\geqslant2\) for every \(n\geqslant0\) yields \(f_{n+1}-f_n\geqslant\log2\), hence \(f_n\geqslant n\log2\) for every \(n\geqslant0\). Plugging this once again into the recursion \(f_{n+1}=f_n+\log f_n\) yields \(f_{n+1}-f_n\geqslant\log n+\log\log2\), hence, summing up, \(f_n\geqslant n\log n+o(n\log n)\).

In the other direction, \(f_{k+1}-f_k=\log f_k\leqslant\log f_n\) for every \(k\leqslant n\), hence \(f_n\leqslant f_0+n\log f_n\), which can be seen to imply \(f_n\leqslant n\log n+2n\log\log n\) for every \(n\) large enough. This completes the proof.


SECOND EXAMPLE :

https://math.stackexchange.com/questions...v-n-sqrt-7


4) truncated methods, based on series expansions and/or fixpoints. Not necessarily Taylor.

5) sometimes series multisection and/or " fake function theory ".




and combinations of those above of course.

other methods would be nice to see.



regards

tommy1729

Tom Marcel Raes
#8
see also 

https://math.stackexchange.com/questions...e-sequence

regards

tommy1729
#9
(08/13/2022, 07:54 AM)tommy1729 Wrote: 1)
approximate the difference by differentials.
such as replace difference by derivative.
or better approximations with higher derivatives and using taylor.
this could give asymptotics and boundaries that are potentially useful.


2)
 use infinite composition like the ideas from the beta method or gaussian method.

3)
use limits to get asymptotics 
However, it's not always applicable; for example, what if we're dealing with:
\(f(z+1)=f(z)+\frac{\exp_e^{1/2}(z)}{e^z}\)

4) truncated methods, based on series expansions and/or fixpoints. Not necessarily Taylor.

5) sometimes series multisection and/or " fake function theory ".
I don't know what this is by name; I've searched for this term many times before... Would you mind elaborating on it?

and combinations of those above of course.

Yes, tommy, at least the first four methods are ubiquitous in analysis.
Well, what I'm trying to convey is my idea:
1. \(f(z)=z+g(z)\), \(g(z)=o(z)\) should guarantee \(f^t(z)=z+tg(z)+o(g(z))\), without the need of asymptotic superfunctions
2. The generalization of 1.: for example, \(f(z)=h(z)+g(z)+o(g(z))\) where \(g(z)=o(h(z))\) also guarantees an asymptotic series for \(f^t(z)\)
3. Any functions that can be included in the same group under composition, when written in the form of asymptotic series, have (asymptotic) series for their iterations
I'll show you more examples generated from these ideas alone.

Example 1
\(f(z)=z+\frac{1}{z}+\log(z)\)
We have
\(f^t(z)=z+t\log(z)+\frac{\frac{1}{2} (t (t-1)) \log (z)+t}{z}+ \frac{-\frac{1}{6} (t (t-1) (2 t-1)) \log ^2(z)+\frac{1}{3} (t (t-1) (t-5)) \log (z)+(t-1) t}{2 z^2}+ \frac{\frac{1}{2} (t-1) t \left(t^2-t\right) \log ^3(z)+t \left(t^2-6 t+5\right)-\frac{1}{2} t \left(2 t^3-11 t^2+13 t-4\right) \log ^2(z)+\frac{1}{4} t \left(t^3-22 t^2+47 t-26\right) \log (z)}{6 z^3}+\frac{t \left(t^3-18 t^2+41 t-24\right)-\frac{1}{5} t \left(6 t^4-15 t^3+10 t^2-1\right) \log ^4(z)+\frac{1}{30} t \left(108 t^4-615 t^3+910 t^2-405 t+2\right) \log ^3(z)-\frac{1}{10} t \left(22 t^4-315 t^3+820 t^2-705 t+178\right) \log ^2(z)+\frac{1}{5} t \left(t^4-65 t^3+365 t^2-535 t+234\right) \log (z)}{24 z^4}+O(z^{-5})\)

Example 2
\(f(z)=z+\sqrt{z}\log(z)+\frac{\log(\log(z))}{z}\)
We have
\(f^t(z)=z+t\sqrt{z}\log(z)+\frac{t(t-1)}{4}\log(z)^2+\frac{t(t-1)}{2}\log(z)+o(\log(z))\)

Example 3 (though well-known)
\(f(z)=z^2+c\)
Denote \(F(z+1)=f(F(z))\), with \(F\) a normal Kneser-like real-to-real superfunction of \(f\) and \(F(0)=c\).
And write \[f_2(t)=2^t\prod_{n=0}^{t-1}{F(n)}\]\[f_4(t)=f_2(t)\sum_{n=0}^{t-1}{\frac{f_2(n)^2}{2F(n)f_2(n)}}\]
\[f_6(t)=f_2(t)\sum_{n=0}^{t-1}{\frac{2f_2(n)f_4(n)}{2F(n)f_2(n)}}\]
\[f_8(t)=f_2(t)\sum_{n=0}^{t-1}{\frac{f_4(n)^2+2f_2(n)f_6(n)}{2F(n)f_2(n)}}\]
\[f_{10}(t)=f_2(t)\sum_{n=0}^{t-1}{\frac{2f_4(n)f_6(n)+2f_2(n)f_8(n)}{2F(n)f_2(n)}}\]
with the pattern \(f_{2k}(t)=f_2(t)\sum_{n=0}^{t-1}{\frac{\textstyle \sum_{m=1}^{k-1}{f_{2m}(n)f_{2k-2m}(n)}}{2F(n)f_2(n)}}\)
\(f^t(z)=F(t)+f_2(t)z^2+f_4(t)z^4+f_6(t)z^6+f_8(t)z^8+f_{10}(t)z^{10}+O(z^{12})\)
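For the reasoning behind this pattern (a step not spelled out above): matching coefficients of \(z^{2k}\) in \(f^{t+1}(z)=(f^t(z))^2+c\) gives
\[
F(t+1)=F(t)^2+c,\qquad f_2(t+1)=2F(t)f_2(t),\qquad f_{2k}(t+1)=2F(t)f_{2k}(t)+\sum_{m=1}^{k-1}f_{2m}(t)f_{2k-2m}(t),
\]
so \(\frac{f_{2k}(t+1)}{f_2(t+1)}=\frac{f_{2k}(t)}{f_2(t)}+\frac{\sum_{m=1}^{k-1}f_{2m}(t)f_{2k-2m}(t)}{2F(t)f_2(t)}\), and summing this telescoping difference gives the displayed sums.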

Example 4
\(f(z)=ez+\log(z)+\frac{1}{z}\)
We have
\(f^t(z)=e^tz+\frac{\left(e^{t+1}-1\right) \log (z)}{e-1}+\frac{-e t+t+e^t-1}{(e-1)^2}+\frac{e^{-t} \left(e \left(e^2-1\right) \left(e^t-1\right) \left(e^{t+1}-1\right) \log (z)+(1+e)^2 \left(-e^t\right)+\left(e (1+e) (e-1)^2+1\right) e^{2 t}+e (-t+e (e (t-e+1)+2)+1)\right)}{(e-1)^3 (1+e)^2 z}+O(z^{-2})\)

Most are intractable, non-Taylor; you may notice it's similar to our old functional iteration where \(f(0)=0\), \(f'(0)=1\), and \(f\) has a Taylor series.
The series are precise, not approximations, so the asymptotic method may be better in some cases.
This is the group structure (though of infinite order).
Regards, Leo
#10
Quote:@james : nice summary of some of your key ideas.

I even see the connection with fractional derivatives.

Just one question: you say that in many cases the difference equation has no analytic solution ?

like in nowhere analytic ? 

Like which ones ?

And in the cases which are not , is it always due to or shown with infinite composition or are there other tools and ideas ?

Does that property relate or carry over to differential equations ??

***

Hey, Tommy. Just to clarify some things: yeah, that was basically a rephrasing of some of the stuff I started to do with fractional derivatives; but I did more work in analytic number theory, where these transforms are much more at home. So this was just a rough explanation of that.

I, also, didn't mean that the difference equation has no analytic solution; I meant that converting the difference equation into an analytic super function proved to have no analytic solution. For example, the \(\beta\) method was spawned from a lot of these observations, and the \(2 \pi i\) periodic tetration base \(b = e\) is nowhere analytic. So the \(\beta\) function itself is analytic, but it can't spawn an analytic superfunction in this instance (it becomes \(C^\infty\) on the real line and a tad more chaotic elsewhere (but an equivalent kind of \(C^\infty\))).

For the most part it was derived using infinite compositions, but I used a lot of tools I had gathered over the years from complex analysis.

When you mention differential equations; we kind of go off topic from this discussion. But much of this "difference equation" talk can be turned into "differential equation" talk.

For example, the general solution to a difference equation:

\[
\Delta f = q(s,f)\\
\]

Has the form:

\[
f(s) = \Omega_{j=1}^\infty z + q(s-j,z)\bullet z\\
\]
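(To see why this solves the equation: the outermost entry of the composition is \(z \mapsto z+q(s-1,z)\), while the remaining tail is exactly \(f(s-1)\); hence \(f(s)=f(s-1)+q(s-1,f(s-1))\), which is \(\Delta f = q(s,f)\).)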

Where \(z\) acts as an initial point parameter. Very similar to Picard-Lindelöf. This solution is unique if we ask that \(\lim_{s \to -\infty} f(s) = z\)--and then it only converges for certain \(z\). We can extend this to differential equations by solving arbitrary difference equations:

\[
\begin{align}
f(s+h) - f(s) &= h q(s,f(s))\\
f(s) &= \Omega_{j=1}^\infty z + q(s-jh,z)h\bullet z\\

\end{align}
\]

Limiting \(h \to 0\) produces the general form of a first order differential equation. This gets very very fucking complicated though, which led me to posit the compositional integral, which is designed after the Riemann-Stieltjes integral. I won't bother going into detail, but if you're interested I have a short overview on arXiv; and then I have what I consider my thesis on compositional integration--which describes all the methods and the ways these objects can converge. I'd suggest the overview though, as the full thesis is far far more in-depth and dealt almost exclusively with compositional integration as though it were Cauchy's contour integration.

This was intended to be a notice to some people at U of T, and I was planning on publishing it more professionally, but then covid happened and I can't be bothered anymore, lol.

https://arxiv.org/abs/2001.04248

For example, if you write:

\[
f(s,z) = \lim_{h\to 0} \Omega_{j=1}^\infty z + e^{(s-jh)z}h\bullet z\\
\]

Then this function satisfies:

\[
\begin{align}
f'(s,z) &= e^{sf(s,z)}\\
\lim_{s\to-\infty} f(s,z) &= z\\
\end{align}
\]

In the compositional integral notation, this would be written:

\[
f(s,z) = \int_{-\infty}^s e^{tz}\,dt\bullet z\\
\]

And this object converges everywhere:

\[
\int_{-\infty}^s ||e^{tz}||_{z \in K} \, |dt| < \infty
\]

Where \(K\) is compact. Incidentally it means the solution is holomorphic for \(s \in \mathbb{C}\) and \(\Re(z) > 0\).
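To make that concrete, here is a minimal numerical sketch: truncate the infinite composition at a finite depth with a small step \(h\), and check the differential equation by a finite difference. The step size, depth, and sample values below are arbitrary illustrative choices:
Code:
(* truncation of f(s,z) = lim Omega_{j=1}^Infinity z + e^((s - j h) z) h, bullet z *)
h = 0.01; nMax = 3000;
f[s_, z_] := Fold[#1 + h Exp[(s - #2 h) #1] &, z, Range[nMax, 1, -1]]
s0 = -1.0; z0 = 0.5;
(* check f'(s,z) = e^(s f(s,z)) with a central difference; the residual is O(h) *)
(f[s0 + h, z0] - f[s0 - h, z0])/(2 h) - Exp[s0 f[s0, z0]]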

The overview was basically a motivation for the notation, to describe how it works and where it comes from. It has nothing to do with difference equations. I only touched briefly on "difference equations become differential equations" in an adjacent paper, as it became pretty self-explanatory once you have strong normality theorems. The thesis I wrote dealt much more with this stuff, where I looked at developing Fourier transforms, where you have results like:

\[
\begin{align}
\int_{-\infty}^\infty z f(t)e^{-2 \pi i t\xi}\,dt \bullet z &= z e^{\int_{-\infty}^\infty f(t)e^{-2\pi i t\xi}\,dt}\\
\int_{-\infty}^\infty z^2 f(t)e^{-2 \pi i t\xi}\,dt \bullet z &= \frac{1}{\frac{1}{z} + \int_{-\infty}^\infty f(t)e^{-2\pi i t\xi}\,dt}\\
\end{align}
\]

And these are invertible Fourier transforms. This extends to general functions \(p(s,z)\), not just \(p(s,z) = g(z)f(s)\)--but doing so becomes a problem much like Tate's thesis on Fourier transforms in abstract algebra and the like. It's basically useless for tetration, but has its value elsewhere.

