# Tetration Forum

Full Version: Powerful way to perform continuum sum
Hello everyone. I've recently come across a way to perform continuum sums and I was wondering if anyone has any suggestions on how to formalize this or if they have any nice comments about how it works.

We start by defining an operator:

$\mathcal{J} f(s) = \frac{1}{\Gamma(s)}\int_0^\infty f(-X)X^{s-1}dX$

Thanks to Riemann and Liouville, if $\mathcal{J} f$ and $\mathcal{J} \frac{df}{ds}$ converge then:

$\mathcal{J} \frac{df}{ds} = (\mathcal{J} f)(s-1)$

And, neatly, $(\mathcal{J}\frac{df}{ds})(1) = f(0)$ (assuming $f(-X) \to 0$ as $X \to \infty$); therefore, by induction:

$(\mathcal{J} f)(N) = \frac{d^{-N}}{ds^{-N}}f(s)|_{s=0}$
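As a quick numerical sanity check of the shift property $\mathcal{J}\frac{df}{ds} = (\mathcal{J}f)(s-1)$, one can approximate the integral by a truncated Riemann sum. The choice $f(s) = e^{\lambda s}$ with $\lambda = 1/2$ (so that $\mathcal{J}f(s) = \lambda^{-s}$), the cutoff, and the step size below are illustrative choices, not from the original post:

```python
import math

def J(f, s, upper=60.0, n=200_000):
    # crude Riemann sum for J f(s) = (1/Gamma(s)) * int_0^inf f(-X) X^(s-1) dX,
    # truncated at X = upper (the exponential decay makes the tail negligible)
    h = upper / n
    total = sum(f(-k * h) * (k * h) ** (s - 1) for k in range(1, n + 1))
    return total * h / math.gamma(s)

lam = 0.5
f = lambda t: math.exp(lam * t)         # J f(s) = lam^(-s)
df = lambda t: lam * math.exp(lam * t)  # its derivative

s = 2.5
lhs = J(df, s)          # J f'
rhs = J(f, s - 1)       # (J f)(s-1)
exact = lam ** (1 - s)  # both should equal lam^(-(s-1))
print(lhs, rhs, exact)
```

Both numerical values agree with $\lambda^{-(s-1)}$ to a few decimal places.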

It is noted that for certain analytic functions this operator can be inverted, using Mellin inversion, Taylor series, or a continuum Taylor transformation. So it makes sense to talk about $\mathcal{J}^{-1}$

However, we must use Riemann-Liouville differintegration.

Define:

$\frac{d^{-s}f(t)}{dt^{-s}} = \frac{1}{\Gamma(s)}\int_{-\infty}^{t}f(u)(t-u)^{s-1}du$

And we find when $t=0$ the operator reduces to $\mathcal{J}$

We then define our continuum sum operator which works beautifully:

$\mathcal{Z}f(s) = \int_0^{\infty} e^{-t}\frac{d^{-s}f(t)}{dt^{-s}}dt$

Performing integration by parts (with $u = \frac{d^{-s}f(t)}{dt^{-s}}$, $dv = e^{-t}dt$), and using the fact that $\frac{d}{dt}\frac{d^{-s}f(t)}{dt^{-s}} = \frac{d^{1-s}f(t)}{dt^{1-s}}$:

$\mathcal{Z}f(s) = \left[-e^{-t}\frac{d^{-s}f(t)}{dt^{-s}}\right]_{t=0}^{\infty} + \int_0^{\infty} e^{-t}\frac{d^{1-s}f(t)}{dt^{1-s}}dt$

The boundary term is just $\frac{d^{-s}f(t)}{dt^{-s}}|_{t=0} = \mathcal{J}f(s)$, so we get:

$\mathcal{Z}f(s) = \mathcal{J}f(s) + (\mathcal{Z}f)(s-1)$

And if we take $f = \mathcal{J}^{-1}g$ and say:

$\phi(s) = \mathcal{Z}\mathcal{J}^{-1} g(s)$

$\phi(s) = g(s) + \phi(s-1)$

Which is the glory of the continuum sum!

The real problem now is determining what kinds of functions $g(s)$ this works on.

$\mathcal{J}^{-1}g(s) = \frac{1}{2\pi i}\int_{\sigma - i\infty}^{\sigma + i\infty}\Gamma(t)g(t) e^{- \pi i t}s^{-t} dt$
with $\sigma$ chosen appropriately for g(s).

We also have:
$\mathcal{J}^{-1}g(s) = \sum_{n=0}^{\infty} g(-n)\frac{s^n}{n!}$

for some functions. This works because, writing $g = \mathcal{J}f$, we have $g(-n) = \frac{d^n}{dt^n}f(t)|_{t=0}$, so the series simply rebuilds the Taylor expansion of $f$.
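For instance, taking $g(s) = \lambda^{-s}$ (so that $g(-n) = \lambda^n$), the series visibly rebuilds $e^{\lambda s}$; a minimal check, with illustrative values of $\lambda$ and $s$:

```python
import math

lam, s = 0.5, 1.3
g = lambda s: lam ** (-s)  # g = J applied to e^{lam s}

# sum_{n>=0} g(-n) s^n / n!  -- should reproduce e^{lam s}
series = sum(g(-n) * s ** n / math.factorial(n) for n in range(60))
print(series, math.exp(lam * s))
```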

For example, we can find a continuum sum of exponentials. For $\lambda > 0$ we have $\mathcal{J}e^{\lambda s} = \lambda^{-s}$;

therefore for $0 < \lambda < 1$ (so that the integrals below converge):

$\phi(s) = \int_0^{\infty} e^{-t} \frac{d^{-s}e^{\lambda t}}{dt^{-s}}dt$

$\phi(s) = \lambda^{-s} \int_0^{\infty}e^{(\lambda-1)t}dt = \frac{\lambda^{-s}}{1-\lambda} = \sum_{n=0}^{\infty} \lambda^{n-s}$
which satisfies the continuum sum rule.
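Checking both claims numerically, for an illustrative $\lambda$ and $s$ (the recurrence $\phi(s) = \lambda^{-s} + \phi(s-1)$, and the geometric-series form):

```python
lam, s = 0.4, 2.2
phi = lambda s: lam ** (-s) / (1 - lam)

rec = phi(s) - phi(s - 1)                      # should equal lam^(-s)
geo = sum(lam ** (n - s) for n in range(200))  # sum_{n>=0} lam^(n-s)
print(rec, lam ** (-s))
print(phi(s), geo)
```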

We can do the same trick for $e^{-s^2}$: we know that $\mathcal{J} e^{-s^2} = \frac{\Gamma(s/2)}{2\Gamma(s)}$

Therefore the continuum sum, i.e. the function that satisfies:
$\phi(s) = \frac{\Gamma(s/2)}{2\Gamma(s)} + \phi(s-1)$

Is:

$\phi(s) = \frac{1}{\Gamma(s)}\int_0^\infty e^{-t} \int_{-\infty}^t e^{-u^2}(t-u)^{s-1}dudt$

I thought I would just write out some examples that I have worked out.

$\phi(s) = -\sin(\frac{\pi s}{2}) + \phi(s-1)$

$\phi(s) = \int_{0}^{\infty} e^{-t}\sin(t - \frac{\pi s}{2})dt$

Similarly for cosine:

$\psi(s) = \int_{0}^{\infty} e^{-t}\cos(t - \frac{\pi s}{2})dt$

And miraculously: $\frac{d\phi}{ds} = -\frac{\pi}{2}\psi(s)$
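The derivative relation can be confirmed numerically; a sketch using a truncated Riemann sum for the integrals and a central difference for $\frac{d\phi}{ds}$ (the cutoff, step, and test point $s = 0.7$ are illustrative):

```python
import math

def quad(f, upper=40.0, n=100_000):
    # crude Riemann sum for \int_0^upper f(t) dt; the e^{-t} factor
    # makes the tail beyond `upper` negligible
    h = upper / n
    return sum(f(k * h) for k in range(1, n + 1)) * h

phi = lambda s: quad(lambda t: math.exp(-t) * math.sin(t - math.pi * s / 2))
psi = lambda s: quad(lambda t: math.exp(-t) * math.cos(t - math.pi * s / 2))

s, eps = 0.7, 1e-5
dphi = (phi(s + eps) - phi(s - eps)) / (2 * eps)  # central difference in s
print(dphi, -math.pi / 2 * psi(s))                # the two should agree
```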
What's truly amazing is that this is a linear operator. Therefore we have an operator $\mathcal{S}f(s) = \mathcal{Z}\mathcal{J}^{-1} f(s)$ which sends functions to their continuum sums.

What's truly even more amazing is that I have found an inverse expression for $\mathcal{Z}$. This was an extra trick.
(08/10/2013, 09:06 PM)JmsNxn Wrote: [ -> ]What's truly amazing is that this is a linear operator. Therefore we have an operator $\mathcal{S}f(s) = \mathcal{Z}\mathcal{J}^{-1} f(s)$ which sends functions to their continuum sums.

What's truly even more amazing is that I have found an inverse expression for $\mathcal{Z}$. This was an extra trick.

Since S = Z J^-1, the difference operator D satisfies D = J Z^-1.
Is this your inverse expression for Z or do you have an integral transform for Z^-1 ?

I assume you were influenced by Ramanujan ('s master theorem).

Maybe this too :
http://en.wikipedia.org/wiki/Mertens_function

You seem to like integrals

Regards

tommy1729
Actually I wasn't thinking about Ramanujan. I was thinking about Euler and the integral representation for the Gamma function. I wanted to exploit integration by parts as beautifully as he did.

Why yes, the difference operator is how I found the inverse, but this gives us an integral transform:

$\mathcal{Z}^{-1} f(s) = \frac{1}{2\pi i}\int_{\sigma - i\infty}^{\sigma + i\infty }e^{-\pi i t}\Gamma(t) (f(t) - f(t-1))s^{-t}dt$

Is an expression for one of the inverses. We also have:

$\mathcal{Z}^{-1} f(s) = \sum_{n=0}^{\infty} (f(-n) - f(-n-1))\frac{s^n}{n!} = \int_{-\infty}^{\infty} (f(-y) -f(-y-1))\frac{s^y}{y!}dy$

Each inverse operator works on different classes of functions. I'm having a little trouble finding the exact restrictions on the functions we can use.
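To see the series inverse in action on the exponential example from earlier: with $\phi(s) = \frac{\lambda^{-s}}{1-\lambda} = \mathcal{Z}e^{\lambda t}$ we have $\phi(-n)-\phi(-n-1) = \lambda^n$, and the series recovers $e^{\lambda s}$ (the values of $\lambda$ and $s$ are illustrative):

```python
import math

lam, s = 0.6, 0.9
phi = lambda s: lam ** (-s) / (1 - lam)  # phi = Z applied to e^{lam t}

# Z^{-1} phi via the series: sum_{n>=0} (phi(-n) - phi(-n-1)) s^n / n!
series = sum((phi(-n) - phi(-n - 1)) * s ** n / math.factorial(n)
             for n in range(80))
print(series, math.exp(lam * s))
```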

Being more general, we can change the Riemann-Liouville differintegral to work on different functions by changing the limits of integration in its integral expression. We can solve the following continuum sum using different limits:

$\frac{d^{-s}}{dt^{-s}_0} f(t) = \frac{1}{(s-1)!}\int_0^t f(u)(t-u)^{s-1}du$

$Rf(s) = \frac{d^{-s}}{dt^{-s}_0}f(t) |_{t=1}$

$R t^n = \frac{n!}{(s+n)!}$

Therefore:

$\phi(s) = \int_0^\infty e^{-t} \frac{d^{-s}}{dt^{-s}_0}(t+1)^ndt$

$\phi(s) = \frac{n!}{(s+n)!} + \phi(s-1)$
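The Beta-integral evaluation $R\,t^n = \frac{n!}{(s+n)!}$ behind this is easy to verify numerically (the midpoint rule and the values of $n$, $s$ below are illustrative choices):

```python
import math

def R_monomial(n, s, N=200_000):
    # (1/Gamma(s)) * \int_0^1 u^n (1-u)^(s-1) du via the midpoint rule
    h = 1.0 / N
    total = sum(((k - 0.5) * h) ** n * (1.0 - (k - 0.5) * h) ** (s - 1)
                for k in range(1, N + 1))
    return total * h / math.gamma(s)

n, s = 3, 1.5
approx = R_monomial(n, s)
exact = math.factorial(n) / math.gamma(s + n + 1)  # n!/(s+n)!
print(approx, exact)
```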

I'm wondering how to apply this to tetration or hyper operators. This performs a fair amount of mathematical work and solves a nice iteration problem--maybe it's related to hyper operators *fingers crossed*

Interesting idea.

Although I must say that - if I recall correctly - the continuum sum does not imply uniqueness for tetration.
Then again you might not be after uniqueness but just a nice solution.

This is probably one of your best posts imho.
Although I haven't checked it all. It seems a bit weird: your integrals seem to lack a variable by being very self-referential. This might not be a problem but it might be restrictive ... such as L-series or exponential sums ... maybe it can be solved by adding variables.

A problem might be that the n-th derivative is not just the formula of the n-th integral with a minus sign added. I assume you knew that already from reading/lectures/exercises about fractional calculus.

I'm curious about what others think of it.

regards

tommy1729
(08/11/2013, 07:31 PM)tommy1729 Wrote: [ -> ]This is probably one of your best posts imho.
Although I haven't checked it all. It seems a bit weird: your integrals seem to lack a variable by being very self-referential. This might not be a problem but it might be restrictive ... such as L-series or exponential sums ... maybe it can be solved by adding variables.
Thanks for the comment. I feel like this is finally a payoff after a good year of work in fractional calculus. It definitely feels like the first nice result. I've certainly come a long way since I first joined this forum. A nice three years of toning my mathematical muscles ^_^

It is quite restricted but if you fiddle with the lower limit you can mix up the types of functions that appear. Any fractional integral works so long as it satisfies: $\frac{d}{dt} I^{s} f = I^{s-1}f(t)$ and works better if it has an inverse over s at any t.

Quote:A problem might be that the n-th derivative is not just the formula of the nth integral with a minus sign added. I assume you knew that already from reading/lectures/exercises about fractional calculus.

Exactly! I've found that it works for exponentials and the closure of exponential functions, so double exponentials. We can take any exponential and infinite combinations of them.

The types of functions this works very beautifully on are functions I call distributed, where $f$ is distributed iff

$\int_{-\infty}^\infty f(-X) \frac{s^X}{X!} dX = \sum_{n=0}^{\infty} f(-n)\frac{s^n}{n!}$

These functions have lots of beautiful properties, insofar as their derivatives and integrals are very easy to calculate. And if $f$ is distributed then so is $f(s+1)$ and $sf(s)$ and $b^s f(s)$.

I have been working on conditions for a function to be distributed and I've found a few. $e^{-e^x}$ is distributed (note the minus sign)

$\int_{-\infty}^{\infty} e^{-e^{-X}}\frac{s^X}{X!}dX = \sum_{n=0}^{\infty} e^{-e^{-n}}\frac{s^n}{n!}$

We get that functions are distributed if $(\mathcal{J}f)(-n) = \frac{d^n}{dt^n} f(t) |_{t=0}$

This is because I have shown that if $\int_{-\infty}^{\infty} (\mathcal{J} f)(-y) \frac{s^y}{y!}dy$ converges it equals f.

I'm a little stuck on my proof: all I have to do is show that this operator (the continuum Taylor transformation) is continuous and the theorem is perfect. Once I have this I'll work on trying to prove the continuity of the continuum sum transformation and other little nitpicks.

Also, if a function is distributed, by the Fourier inversion theorem, we get:

$\mathcal{J} f(s) = \frac{(-s)!}{2\pi} \int_{-\infty}^{\infty} f(e^{-it})e^{-its}dt$

Which I'm sure you can see how I got. This is my own representation for the iterated derivative. It has the property of being non-convergent for integration values and convergent for derivative values: the exact opposite of the Mellin transform expression.
I was looking for posts about the continuum product and uniqueness related to tetration (and in general the construction of superfunctions).

But I was a bit disappointed. Not much has been said or concluded compared to other topics here (including by myself). I'm sure we know more than we have said.

But to settle this here - which I feel is about time after all those years! - I will reformulate the problem.

( base e is used here , analogues must exist for other bases )

Continuum product = exp( Continuum sum ln ) by def.
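The discrete analogue of this definition is just the familiar fact that a product is the exponential of the sum of logarithms:

```python
import math

# discrete analogue: prod a_k = exp( sum ln a_k )
a = [2.0, 3.0, 1.5, 4.0]
prod = 1.0
for x in a:
    prod *= x
print(prod, math.exp(sum(math.log(x) for x in a)))
```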

In analogue with integrals

To do a numeric integral we need to know a point where the integral of f(x) = 0. And without that point, the symbolic integral is only "generalized nonsense", in the sense that we do not understand the symbolic integral very well.

I think this clarifies what I mean : http://mathworld.wolfram.com/SoldnersConstant.html

Now back to the continuum sum.

we need a point where Continuum sum f(x) = 1. This is also very well known as the "empty product".

***
It is often problematic to take the Continuum product before this "empty product point", let alone solve equations involving the Continuum product and some derivatives (especially with the restriction of being COMPLEX ANALYTIC).

( this has already been discussed by mike3 )

***

It is also desired that (super)functions have at most 1 real fixpoint for x > 0.

Now tetration should satisfy

sexp ' (x) = Continuum product [sexp(x)]

Now comes something that is often done wrong, so pay attention.

When people consider the product equation they do
sexp ' (3.2) = sexp(3.2)*sexp(2.2)*sexp(1.2)*sexp ' (0.2).

However, by definition we take sexp(0)=0.

People think it's OK to go down to 0.2, but it's NOT OK IF they think sexp ' (0.2) has to be the Continuum product of sexp at 0.2.

Since sexp(1)=1 the CONTINUUM PRODUCT IS AT BEST VALID FOR x > 1.
( after the *empty product* as discussed above )
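The telescoping behind the product equation, sexp'(x+1) = sexp(x+1) sexp'(x), can be checked numerically with any crude stand-in for sexp; the seed F(x) = x on [0,1] below is purely illustrative (it is only piecewise-defined, not a genuine analytic tetration):

```python
import math

def F(x):
    # crude stand-in for sexp: F(x) = x on [0,1], F(x+1) = exp(F(x))
    n = 0
    while x > 1:
        x -= 1
        n += 1
    v = x
    for _ in range(n):
        v = math.exp(v)
    return v

eps = 1e-6
dF = lambda x: (F(x + eps) - F(x - eps)) / (2 * eps)  # central difference

lhs = dF(3.2)                             # F'(3.2)
rhs = F(3.2) * F(2.2) * F(1.2) * dF(0.2)  # telescoped chain rule
print(lhs, rhs)
```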

1) Now this ALSO requires ofcourse that sexp has only 2 fixpoints for x>=0 : {0,1}

2) It is also required that sexp ' (1) = 1 for the product to work. VERY convenient that this works well with the fixpoint condition 1).

Now a sexp(x) analytic for x>=1 and continuous for x>=0
should then satisfy :

for x >= 1 : sexp ' (x) = Continuum product [sexp(x)]

More specifically, we need to include the remarks about the "numerical computation" and hence arrive at

for x >= 1 :

sexp ' (x) - sexp ' (1) = Continuum product[sexp(x)] - Continuum product[sexp(1)]

We know that sexp '(1) = Continuum product[sexp(1)] = 1 =empty product hence :

for x >= 1 :

sexp ' (x) = Continuum product (from 1 to x) [sexp(x)]

( since x is also > 0 we can write by using the q-derivative ...)

(for x >=1)
=> [sexp(q x) - sexp(x)]/[(q-1)x] = exp(Continuum sum (from 1 to x) [sexp(x-1)] )

( Notice we could use l'Hopital here because we used the q-derivative , this would lead to adding more continuum products in the equations , but it is not certain that this would be helpful )

So far the equation and first step to computations.

It is not clear to me that this would yield a solution that is also analytic FOR 0 < x <= 1 ??

However we are not finished.
How about uniqueness ? We wonder about uniqueness and might even desire it for computation of a solution to the equation. ( or for a proof of the consistency of a method to solve it )

In order not to retype everything let's use f(x) and g(x).

f(x) = sexp(x) satisfying the equation (and properties) described above.

f(x) + g(x) is also "a sexp(x)" satisfying the equation and properties.

Now we can build the "complex" equation :

for x >= 1 and g(x) =/= 0

f ' (x) + g ' (x) = CP [ f(x) + g(x) ]

=> g ' (x) = CP [ f(x) + g(x) ] - CP [ f(x) ]

Notice if analytic continuation applies to g(x) we can show that from the above equation we must have g(0) = 0 however we already have that property. It might however play a role in non-tetration-dynamics.

Notice g(1) = 0 and g ' (1) = 0.

IF lim x -> 1 : g(x) / g ' (x) = 1

g ' (1) = CP[ f(1) + g(1) ] - CP [ f(1) ] = 0

As expected.

Note that IF Q > 1 : g ' (Q) = 0 then g ' (Q) = CP [ f(Q) + g(Q) ] - CP [ f(Q) ] = 0 HENCE

g ' (Q) = 0 => g(Q) = 0 WHICH contradicts the required properties !

Hence there is no such g(x) (apart from g(x) = 0) !!!

This means f(x) + g(x) is strictly between f(x) and x or strictly above f(x).

This restricts the " second sexp " that f(x) + g(x) can be !!

Lets call this " Lemma1729 " , " theorem1729 " or " property1729 "

( I'm narcissistic and did not use tex , forgive me. Naming this however might be useful for future talks about tetration )

NOW COMES THE FINAL KEY ARGUMENT :

f(1) + g(1) = 1
f(2) + g(2) = e
f(3) + g(3) = e^e

This must be true if g(x) is not identically 0 , BUT it contradicts "theorem1729" HENCE IT MUST BE THAT A C^1 g DOES NOT EXIST !!

**Tommy's continuum product theorem**

Q.E.D.

Now just a few simple integral transforms from James and we have a new method of tetration

Many thanks to the tetration forum and its members.

regards

tommy1729
I think I need to explain that last part again ...
Might have made a mistake. But with another interesting conclusion then.

regards

tommy1729
I'm a little confused about what you're doing but I understand your arguments about sexp as a continuum product.

I have a beautiful result I would like to show:

$\mathcal{J} f = \frac{d^{-s}f(t)}{dt^{-s}} |_{t=0}$

$\mathcal{J}( f \cdot g )= \mathcal{J}f * \mathcal{J}g= \sum_{n=0}^{\infty} \frac{\Gamma(1-s)}{\Gamma(n + s+1) n!}(\mathcal{J} f)(n) (\mathcal{J} g)(-s-n)$

And even more generally:

$\frac{d^{-s}f(t)g(t)}{dt^{-s}} = \frac{d^{-s}f(t)}{dt^{-s}} * \frac{d^{-s}g(t)}{dt^{-s}}$
where the convolution is done over $s$, and the values at $t$ are the same for both $f$ and $g$.

Therefore: if $\mathcal{Z} f = \phi$ and $\mathcal{Z} g = \psi$

$\mathcal{Z} (f \cdot g) = \int_0^\infty e^{-t} \frac{d^{-s}}{dt^{-s}} f(t) g(t) dt= \phi * \psi$

That means, even more remarkably:

$\mathcal{Z} \mathcal{J}^{-1} (f * g) = \mathcal{Z} ((\mathcal{J}^{-1} f) \cdot (\mathcal{J}^{-1} g)) = (\mathcal{Z} \mathcal{J}^{-1} f) * (\mathcal{Z}\mathcal{J^{-1}}g)$

That means $\mathcal{S}( f * g) = (\mathcal{S}f) * (\mathcal{S}g)$

This has so much value for continuum sums. This is remarkable!

I have to properly justify this using the continuity of these operators over some Hilbert space. That's the only way I can think of.

I would also like to standardize a notation that is very intuitive. If we take the continuum sum over the interval [a,b] we say:

$\sum_a^b f(y) \, \sigma y = S f(b) - Sf (a)$

This has all the linearity rules of the integral, and some unique rules of its own.
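For instance, with $g(s) = \lambda^{-s}$ and its continuum sum $\frac{\lambda^{-s}}{1-\lambda}$ worked out earlier in the thread, an interval of integer length telescopes into an ordinary finite sum of $g$-values (the endpoints below are illustrative):

```python
lam = 0.5
Sg = lambda s: lam ** (-s) / (1 - lam)  # continuum sum of g(s) = lam^(-s)

a = 1.25
b = a + 3  # integer-length interval
interval_sum = Sg(b) - Sg(a)
direct = sum(lam ** (-(a + k)) for k in (1, 2, 3))  # g(a+1)+g(a+2)+g(a+3)
print(interval_sum, direct)
```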