01/03/2013, 12:02 AM
Superfunctions in continuum sum equations
As the title says, I will express superfunctions in an equation involving the continuum sum.
First I should say this post is in the context of real-differentiable functions.
Also, we avoid fixpoint issues, or assume there are only 0 or 1 fixpoints on the real line and/or 2 conjugate fixpoints in the complex plane.
Second, we need to find a real analytic function g(x), and it is not totally clear or proven what g(x) is and how many g(x) exist; in other words, there are uniqueness issues. I assume many g(x) exist, and I think you will agree, because there are many superfunctions.
However, I think it makes sense to say there are as many g(x) as analytic solutions to the superfunction (in the sense of a continuous bijection).
Intuitively the equation seems logical to me.
Let's take the exponential as an example
(but this post/equation applies in a more general setting).
Let CS ... ds be the notation for the Continuum Sum with respect to s
(inspired by integrals... because we can express this in terms of integrals!).
Let x > 0
consider an invertible real-valued f(x) such that
f(x) = g(x) + g(exp(x)) + g(exp^[2](x)) + ... + g(exp^[oo](x))
Now assume the sum defining f(x) always converges. This implies that g(x) goes to zero for large values of x.
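For illustration, here is a quick numerical check of this convergence with an assumed sample g (my own choice, g(x) = exp(-x), picked only because it decays; nothing above forces it): the partial sums stabilize after just a few terms, because exp^[n](x) blows up so fast.

```python
import math

# Hypothetical sample g, decaying rapidly so the sum over exp-iterates converges.
def g(x):
    return math.exp(-x)

def f_truncated(x, n_terms):
    # f(x) ~ g(x) + g(exp(x)) + ... + g(exp^[n_terms - 1](x)),
    # with an overflow guard: once exp^[k](x) is huge, the remaining terms vanish.
    total, y = 0.0, x
    for _ in range(n_terms):
        total += g(y)
        if y > 700:          # math.exp(y) would overflow past ~709
            break
        y = math.exp(y)
    return total

# The partial sums stabilize almost immediately:
for n in (1, 2, 3, 4, 8):
    print(n, f_truncated(0.5, n))
```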
Let T(x) be the functional inverse of f(x).
Now notice
f(x) - f(exp(x)) = g(x)
Such an equation would look familiar if g(x) were GIVEN. But here it is not given yet. However, if we continue:
f(x) - f(exp^[2](x)) = g(x) + g(exp(x))
f(x) - f(exp^[s](x)) = CS g(exp^[s-1](x)) d(s-1)
- f(exp^[s](x)) = CS g(exp^[s-1](x)) d(s-1) - f(x)
f(exp^[s](x)) = - CS g(exp^[s-1](x)) d(s-1) + f(x)
exp^[s](x) = T ( - CS g(exp^[s-1](x)) d(s-1) + f(x) )
exp^[s](1) = T ( - CS g(exp^[s-1](1)) d(s-1) + f(1) )
This seems to completely express the superfunction in terms of familiar calculus, once we rewrite the CS in terms of integrals and try to solve for g(x).
It's almost like a differential equation.
I think it might even be solvable by brute-force iteration of truncations, for instance with Taylor series.
I was also intrigued by the idea of the derivative; we know the CP (continuum product) is the derivative of sexp, and this seems related yet different. The thing is, the CP is always(!) the derivative of ANY sexp, so setting up that equation seems useless. I hope to do better with this one. But it seems tempting to differentiate that last equation with respect to s.
Btw, the equation(s) to turn a CS into an integral can easily be found, for instance on wiki or this forum. (This forum also contains a limit form that is well explained (and equivalent) but imho harder to compute.)
Regards
tommy1729