Another interesting idea involves the Gaussian method.
Here is a vague sketch of the idea (ignoring some constants):
For a suitable function f with 2 fixpoints, let t(s) = (1 + erf(s))/2 and define
G(s+1) = f( t(s) G(s) ).
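As a quick numerical illustration (my own sketch, not part of the argument; the concrete choice f(x) = sqrt(x), which has the two fixpoints 0 and 1, is an assumption for demonstration), iterating G(s+1) = f( t(s) G(s) ) shows G being pulled toward the upper fixpoint as t(s) -> 1:

```python
import math

def t(s):
    # erf-based sigmoid weight: tends to 0 as s -> -oo and to 1 as s -> +oo
    return (1 + math.erf(s)) / 2

def f(x):
    # sample f with two fixpoints 0 and 1 (an assumption for illustration)
    return math.sqrt(x)

# iterate G(s+1) = f( t(s) * G(s) ) from a seed value
G = 0.5
s = -5.0
for _ in range(20):
    G = f(t(s) * G)
    s += 1.0
print(G)  # approaches the fixpoint 1 as t(s) -> 1
```

For very negative s the factor t(s) crushes G toward the lower fixpoint 0; once t(s) is essentially 1 the recursion reduces to plain iteration of f, which converges to the attracting fixpoint 1.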
Then the inverse of G (a "mock Schröder" function, so to speak) is obviously also interesting.
Let H(G(x)) = G(H(x)).
Then, instead of defining H(x) in the spirit of pure iteration, like the inverse of ln ln ln ... G(x+n) and such, we can also use integrals or sums for this.
Let c be as before.
Then consider
c_0 = 1
c_1 = c_0 * c^t(1) = exp( t(1) ln c )
and
c_n = c^t(n) * c_(n-1) = exp( t(n) ln c ) * c_(n-1)
or "simply"
c_n = product_{k=1..n} c^t(k),
which extends to real arguments as c(x) = [ c^t(x) ] * c(x-1).
A gamma- or exp-type recursion.
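A small numerical check of this recursion (the base value c = 0.5 is an assumption picked for illustration):

```python
import math

def t(s):
    # erf-based sigmoid weight from the sketch above
    return (1 + math.erf(s)) / 2

c = 0.5  # sample base with |c| < 1; an assumption for illustration

def c_seq(n):
    # c_n = product_{k=1..n} c^t(k); the empty product gives c_0 = 1
    return math.prod(c ** t(k) for k in range(1, n + 1))

# verify the recursion c_n = c^t(n) * c_(n-1)
for n in range(1, 10):
    assert abs(c_seq(n) - c ** t(n) * c_seq(n - 1)) < 1e-12
```

Since t(k) -> 1 rapidly, c_n behaves like c^n up to a bounded correction factor, which is what makes the later sums converge.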
Then we can consistently define the Gaussian mock Schröder function:
GMS(z) = sum_k [ c_k f^[k](z) ]
where the sum runs over all integers k.
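A sketch of the two-sided sum (again with the illustrative assumptions f(x) = sqrt(x), so f^[k](z) = z^(1/2^k), and c = 0.5; c_k is extended to negative k through the same recursion). For 0 < z < 1 the negative-k terms die off because f^[k](z) -> 0, and the positive-k terms die off because c_k ~ c^k, so the truncated sums stabilize:

```python
import math

def t(s):
    return (1 + math.erf(s)) / 2

c = 0.5  # sample base, an assumption

def f_iter(z, k):
    # f(x) = sqrt(x) gives f^[k](z) = z^(1/2^k) in closed form (assumption)
    return z ** (2.0 ** (-k))

def c_k(k):
    # c_k = c^( t(1) + ... + t(k) ) for k >= 0,
    # extended to negative k so that c_k = c^t(k) * c_(k-1) still holds
    if k >= 0:
        return c ** sum(t(j) for j in range(1, k + 1))
    return c ** (-sum(t(j) for j in range(k + 1, 1)))

def GMS(z, N):
    # truncated two-sided sum over k = -N .. N
    return sum(c_k(k) * f_iter(z, k) for k in range(-N, N + 1))

# the truncations agree once N is large: the two-sided sum converges
assert abs(GMS(0.5, 40) - GMS(0.5, 50)) < 1e-9
```

No claim of an exact functional equation is tested here; the point is only that the two-sided sum is well defined for 0 < z < 1 and |c| < 1.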
We then have, for suitable z_0:
GMS( f^[k](z_0) ) = c_k GMS(z_0)
and in the limit n to +oo:
SGMS(z_0) = lim f^[-n]( GMS(z_0 + n) )
(the "super Gaussian mock Schröder"), satisfying
SGMS(f(s)) = SGMS(s) * c.
So we have something in between a sum and an iteration limit.
The Gaussian part makes it analytic, as desired.
Now we can write it as
GMS = integral_[-oo,+oo] J(x) * c(x) dx
where the integral over x replaces the sum over k, J(x) should now be G(x), and c(x) satisfies c(x) = [ c^t(x) ] * c(x-1).
So we arrive at
GMS = integral_[-oo,+oo] G(x) * c(x) dx
and by applying f^[-n] as in the SGMS limit above,
f^[-n] integral_[-oo,+oo] G(x+n) * c(x+n) dx
gets closer and closer to
integral_[-oo,+oo] G_oo(x) * c^x dx
where G_oo(x) is the superfunction of f(x): G_oo(x+1) = f( G_oo(x) ).
and
integral_[-oo,+oo] G_oo(x) * c^x dx = sum_k f^[k](z) c^k = a Schröder-type sum for f(x).
The key is the continuum product and/or solving
c(x) = [ c^t(x) ] * c(x-1).
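One way to solve this functional equation (a sketch, with c = 0.5 again an assumption) uses the continuum product directly: since t(x-k) decays like a Gaussian tail as k -> +oo, the sum S(x) = sum_{k>=0} t(x-k) converges, and c(x) = c^S(x) then satisfies c(x) = c^t(x) * c(x-1) because S(x) = t(x) + S(x-1):

```python
import math

def t(s):
    return (1 + math.erf(s)) / 2

c = 0.5  # sample base, an assumption

def c_fun(x, N=50):
    # c(x) = c^S(x) with S(x) = sum_{k>=0} t(x-k); t(x-k) decays like a
    # Gaussian tail for large k, so truncating at N = 50 is far beyond
    # machine precision
    S = sum(t(x - k) for k in range(N))
    return c ** S

# verify the recursion c(x) = c^t(x) * c(x-1) at a few real points
for x in [0.0, 0.5, 1.3, 2.7]:
    assert abs(c_fun(x) - c ** t(x) * c_fun(x - 1)) < 1e-9
```

This is exactly the continuum-product analogue of c_n = product c^t(k): the discrete product over k = 1..n becomes a convergent sum in the exponent.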
regards
tommy1729