# Tetration Forum

Full Version: Continuum sum - a new hope
Hi, I wanted to share some things I found in the last few months that might interest you.

These are a few ideas involving the continuum sum, fixed points, Taylor series, and so on.

I discovered Ansus's formula independently in the past.

The disadvantage of that formula is that we must know the value of the derivative at 0, which we don't.

Instead, we can try a similar approach:

Define

$f_0(x) = x$
$f_{n}(x) = a^{f_{n - 1}(x)}$

Then

$f_{n}(x) = a^{f_{n - 1}(x)}$
$f'_{n}(x) = a^{f_{n - 1}(x)} \cdot \ln{a} \cdot f'_{n - 1}(x) = f_{n}(x) \cdot \ln{a} \cdot f'_{n - 1}(x)$

$\frac{f'_{n}(x)}{f'_{n - 1}(x)} = f_{n}(x) \cdot \ln{a}$

$\frac{f'_{n}(x)}{f'_{0}(x)} = \left( \ln{a} \right)^n \cdot \prod_{k=1}^{n}{f_{k}(x)}$

and since $f'_{0}(x) = 1$,

$f'_{n}(x) = \left( \ln{a} \right)^n \cdot \prod_{k=1}^{n}{f_{k}(x)}$
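As a quick numerical sanity check (my addition, not part of the original post), the product formula can be tested for integer $n$ against a finite-difference derivative; the base $a = \sqrt{2}$ and the point $x = 1$ are arbitrary illustrative choices:

```python
import math

def f(n, x, a):
    """Iterated exponential: f_0(x) = x, f_n(x) = a**f_{n-1}(x)."""
    for _ in range(n):
        x = a ** x
    return x

a, x, n = math.sqrt(2), 1.0, 5

# Left side: numerical derivative of f_n at x (central difference).
h = 1e-6
lhs = (f(n, x + h, a) - f(n, x - h, a)) / (2 * h)

# Right side: the closed form (ln a)^n * prod_{k=1}^{n} f_k(x).
prod = 1.0
for k in range(1, n + 1):
    prod *= f(k, x, a)
rhs = math.log(a) ** n * prod

print(lhs, rhs)  # the two values should agree to many decimal places
```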

This is similar to Ansus's formula, but it has its own advantages.

If we use the natural definition of the continuum product (which works if the limit exists, i.e. if $e^{-e} \le a \le e^{\frac{1}{e}}$):

Let $y = f_{\alpha}(x)$

Then:

$y' = f'_{\alpha}(x) = \left( \ln{a} \right)^\alpha \cdot \prod_{k=1}^{\alpha}{f_{k}(x)} = \left( \ln{a} \right)^\alpha \cdot b^{\alpha} \cdot \prod_{k=1}^{\infty}{\frac{f_{k}(x)}{f_{k+\alpha}(x)}}$

where $b = \lim_{m \to \infty} {f_m(x)}$ is the attracting fixed point of $a^x = x$.

We can rewrite it:

$y' = \left( \ln{a} \right)^\alpha \cdot b^{\alpha} \cdot \prod_{k=1}^{\infty}{\frac{f_{k}(x)}{f_{k+\alpha}(x)}} = \left( \ln{a} \right)^\alpha \cdot b^{\alpha} \cdot \prod_{k=1}^{\infty}{\frac{f_{k}(x)}{f_{k}(f_{\alpha}(x))}} = \left( \ln{a} \right)^\alpha \cdot b^{\alpha} \cdot \prod_{k=1}^{\infty}{\frac{f_{k}(x)}{f_{k}(y)}}$
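For integer $\alpha = n$ the fixed-point form of the continuum product can be checked numerically. This sketch (my addition; base $a = \sqrt{2}$, whose iterates converge to the attracting fixed point $b = 2$) truncates the infinite product:

```python
import math

a, x, n = math.sqrt(2), 1.0, 3

def f(m, t):
    """Iterated exponential f_m(t) for base a."""
    for _ in range(m):
        t = a ** t
    return t

# Fixed point b of a**x = x (attracting for this base), as limit of the tower.
b = 1.0
for _ in range(200):
    b = a ** b

# Left side: the ordinary finite product prod_{k=1}^{n} f_k(x).
lhs = 1.0
for k in range(1, n + 1):
    lhs *= f(k, x)

# Right side: b^n times the infinite product, truncated at K terms;
# the factors f_k/f_{k+n} tend to 1 geometrically, so K = 200 is plenty.
K = 200
rhs = b ** n
for k in range(1, K + 1):
    rhs *= f(k, x) / f(k + n, x)

print(lhs, rhs)
```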

This is a differential equation that we can solve, but we do not know the initial conditions. We can, however, assume that $y(b) = b$ (since $b$ is a fixed point) and solve the equation.

I tried to solve the truncated equation for some bases with Wolfram Alpha and got the following result.

It seems that the solution of that equation is the regular-iteration tetration, a.k.a. "natural" tetration, so it really does seem natural.
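One way to probe this claim numerically (a sketch under my own choices, not from the original post): construct the regular fractional iterate at the attracting fixed point via the Koenigs (Schröder) function, $f_\alpha(x) = \psi^{-1}(\lambda^\alpha \psi(x))$ with multiplier $\lambda = b \ln a$, and check that it satisfies the differential equation above, here for base $a = \sqrt{2}$:

```python
import math

a = math.sqrt(2)
b = 2.0                   # attracting fixed point of a**x = x for a = sqrt(2)
lam = math.log(a) * b     # multiplier lambda = b*ln(a) < 1 at the fixed point

def f(m, t):
    """Iterated exponential f_m(t)."""
    for _ in range(m):
        t = a ** t
    return t

M = 30  # iteration depth used to approximate the Koenigs function

def psi(t):
    """Koenigs function: psi(t) = lim_m (f_m(t) - b) / lam**m, truncated at M."""
    return (f(M, t) - b) / lam ** M

def psi_inv(w):
    """Approximate inverse: shrink w by lam**M, then pull back with M logs."""
    t = b + lam ** M * w
    for _ in range(M):
        t = math.log(t) / math.log(a)
    return t

def f_alpha(alpha, t):
    """Regular (fixed-point) fractional iterate of t -> a**t."""
    return psi_inv(lam ** alpha * psi(t))

alpha, x = 0.5, 1.0
y = f_alpha(alpha, x)

# Left side: numerical derivative of f_alpha with respect to x.
h = 1e-6
lhs = (f_alpha(alpha, x + h) - f_alpha(alpha, x - h)) / (2 * h)

# Right side of the differential equation, with the product truncated.
rhs = lam ** alpha
for k in range(1, 200):
    rhs *= f(k, x) / f(k, y)

print(lhs, rhs)
```

Note that this construction needs $\lambda \ne 1$, i.e. it degenerates exactly at $a = e^{1/e}$, where the fixed point becomes parabolic.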

We run into problems at $a = e^{\frac{1}{e}}$; there we get $f_{\alpha}(x) = x$.

This is the first idea I wanted to share.

The second one is the PDE for $f_{\alpha}(x)$, which you can see here. I will explain how I discovered it later.

(It is somewhat reminiscent of the first idea.)

The third idea is inspired by Ansus's last post about the derivative of tetration:

Let $g_{\alpha}(x) = f'_{\alpha}(x)$.

Then

$\ln\left({g_{n}(x)}\right) = \ln\left({g_{n - 1}(x)}\right) + \ln{\ln{a}} + \ln\left({f_{n}(x)}\right)$

$\frac{g'_{n}(x)}{g_{n}(x)} = \frac{g'_{n - 1}(x)}{g_{n - 1}(x)} + \frac{f'_{n}(x)}{f_{n}(x)}$

$\frac{f'_{n}(x)}{f_{n}(x)} = f'_{n - 1}(x) \cdot \ln{a}$

$\frac{g'_{n}(x)}{g_{n}(x)} - \frac{g'_{n - 1}(x)}{g_{n - 1}(x)} = f'_{n - 1}(x) \cdot \ln{a} = g_{n - 1}(x) \cdot \ln{a}$

Therefore, since $g_{0}(x) = 1$ and $g'_{0}(x) = 0$, the sum telescopes to

$\frac{g'_{n}(x)}{g_{n}(x)} = \ln{a} \cdot \sum_{k = 1}^{n} { g_{k - 1}(x) }$
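The telescoped recurrence can be verified numerically for integer $n$ (my addition); here $g_n$ is computed from the product formula for $f'_n$ above, and $g'_n$ by a central difference, with the arbitrary choices $a = \sqrt{2}$, $x = 1$:

```python
import math

a, x, n = math.sqrt(2), 1.0, 4
ln_a = math.log(a)

def f(m, t):
    """Iterated exponential f_m(t)."""
    for _ in range(m):
        t = a ** t
    return t

def g(m, t):
    """g_m = f_m' via the product formula (ln a)^m * prod f_k; g_0 = 1."""
    prod = 1.0
    for k in range(1, m + 1):
        prod *= f(k, t)
    return ln_a ** m * prod

# Left side: g_n'/g_n, with g_n' from a central difference.
h = 1e-6
g_prime = (g(n, x + h) - g(n, x - h)) / (2 * h)
lhs = g_prime / g(n, x)

# Right side: ln(a) times the sum of g_0, ..., g_{n-1}.
rhs = ln_a * sum(g(k - 1, x) for k in range(1, n + 1))

print(lhs, rhs)
```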

It may seem useless, but if we now develop a Taylor series around a fixed point $b$ of $a^x$, we get:

$f_{\alpha}(b) = b$
$f'_{\alpha}(b) = \left( \ln{a} \right)^\alpha \cdot \prod_{k=1}^{\alpha}{f_{k}(b)} = \left( \ln{a} \right)^\alpha \cdot \prod_{k=1}^{\alpha}{b} = \left( \ln{a} \right)^\alpha \cdot b^{\alpha}$

$\frac{g'_{n}(x)}{g_{n}(x)} = \ln{a} \cdot \sum_{k = 1}^{n} { g_{k - 1}(x) }$
$f''_{\alpha}(b) = g'_{\alpha}(b) = g_{\alpha}(b) \cdot \ln{a} \cdot \sum_{k = 1}^{\alpha} { g_{k - 1}(b) } = \left( \ln{a} \right)^\alpha \cdot b^{\alpha} \cdot \ln{a} \cdot \sum_{k = 1}^{\alpha} { \left( \ln{a} \right)^{k - 1} \cdot b^{k - 1} } = \left( \ln{a} \right)^\alpha \cdot b^{\alpha} \cdot \ln{a} \cdot \frac{\left( \ln{a} \right)^{\alpha} \cdot b^{\alpha} - 1}{ \ln{a} \cdot b - 1}$
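For integer $\alpha$ the closed form for $f''_{\alpha}(b)$ can be checked directly with a second-order central difference (my addition, again with the illustrative base $a = \sqrt{2}$, $b = 2$):

```python
import math

a, n = math.sqrt(2), 3
b = 2.0                # fixed point of a**x = x for a = sqrt(2)
ln_a = math.log(a)
lam = ln_a * b         # appears as (ln a)^k * b^k = lam^k in the geometric sum

def f(m, t):
    """Iterated exponential f_m(t)."""
    for _ in range(m):
        t = a ** t
    return t

# Second derivative of f_n at the fixed point, by a central second difference.
h = 1e-4
num = (f(n, b + h) - 2 * f(n, b) + f(n, b - h)) / h ** 2

# Closed form from the geometric sum (valid as long as lam != 1).
closed = lam ** n * ln_a * (lam ** n - 1) / (lam - 1)

print(num, closed)
```

As a cross-check, for $n = 1$ both sides reduce to $b (\ln a)^2$, since $f''_1(b) = a^b (\ln a)^2$ and $a^b = b$.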

I think we can continue in a similar fashion and find the values of higher-order derivatives, but I am not sure.

I hope you enjoy these ideas and maybe develop them into bigger things...
Ansus, you missed the whole point.

The derivative is with respect to $x$, so there are no constants.
Hey Kobi,

(04/24/2010, 09:01 PM)kobi_78 Wrote: [ -> ]$y' = f'_{\alpha}(x) = \left( \ln{a} \right)^\alpha \cdot \prod_{k=1}^{\alpha}{f_{k}(x)} = \left( \ln{a} \right)^\alpha \cdot b^{\alpha} \cdot \prod_{k=1}^{\infty}{\frac{f_{k}(x)}{f_{k+\alpha}(x)}}$

The formula looks very interesting; however, it seems useful only with the natural continuum product you describe, which limits the base range to $b \le e^{\frac{1}{e}}$. It does indeed seem to yield regular-iteration tetration, a nice finding.

For the case $b>e^{1/e}$, however, we lack an expression for $f_k$ continuous in $k$ (and if we had that, we would already have tetration). So we have no way to compute the continuum product in an alternative manner, which would be necessary for the case $b>e^{1/e}$.

Quote:The second one is the PDE of $f_{\alpha}(x)$ which you can see here. I will explain how I discovered it later.

Actually, I have no experience with partial differential equations either. So we have to wait until someone with more knowledge about them comes along and can make use of this equation.

Quote:The third idea,
....
$f'_{\alpha}(b) = \left( \ln{a} \right)^\alpha \cdot \prod_{k=1}^{\alpha}{f_{k}(b)} = \left( \ln{a} \right)^\alpha \cdot \prod_{k=1}^{\alpha}{b} = \left( \ln{a} \right)^\alpha \cdot b^{\alpha}$

$\frac{g'_{n}(x)}{g_{n}(x)} = \ln{a} \cdot \sum_{k = 1}^{n} { g_{k - 1}(x) }$
$f''_{\alpha}(b) = g'_{\alpha}(b) = g_{\alpha}(b) \cdot \ln{a} \cdot \sum_{k = 1}^{\alpha} { g_{k - 1}(b) } = \left( \ln{a} \right)^\alpha \cdot b^{\alpha} \cdot \ln{a} \cdot \sum_{k = 1}^{\alpha} { \left( \ln{a} \right)^{k - 1} \cdot b^{k - 1} } = \left( \ln{a} \right)^\alpha \cdot b^{\alpha} \cdot \ln{a} \cdot \frac{\left( \ln{a} \right)^{\alpha} \cdot b^{\alpha} - 1}{ \ln{a} \cdot b - 1}$

I think we can continue in a similar fashion and find values in higher derivatives order, but I am not sure.

I think this is more easily derived via the standard methods of regular iteration at the fixed point.

(04/25/2010, 09:34 AM)Ansus Wrote: [ -> ]Oh, in that case, if you obtain the derivatives $f'(x)$, $f''(x)$, etc., and build a Taylor series for $f_\alpha(x)$, this would not help to find $f'_x(1)$, $f''_x(1)$, etc. (derivatives with respect to $x$), which are necessary to build a Taylor series for tetration...

Ansus, Taylor series are not the only allowed tools for computing tetration! (E.g., would you object to Dmitrii's Cauchy-integral computation only because it doesn't contain Taylor series?)
(04/28/2010, 09:47 PM)Ansus Wrote: [ -> ]We can celebrate a new working method to compute tetration :-)

It's not a new method; indeed, it is the oldest one: regular iteration.
But it seems easier to rederive it than to get acquainted with the existing methods.

However it would be interesting if you could verify that the continuum sum method indeed yields regular iteration in the case of a (real) fixed point.
Why does what Ansus wrote fail for $a = e$, for instance?
(05/03/2010, 09:27 PM)kobi_78 Wrote: [ -> ]Why does what Ansus wrote fail for $a = e$, for instance?

For regular iteration you need a fixed point.
Real fixed points exist only for bases $b \le e^{\frac{1}{e}}$.
Of course, you can also do it at a complex fixed point of $e^x$; however, then the resulting tetration is not real on the real axis (actually, it is not even analytic).
(05/09/2010, 11:35 AM)bo198214 Wrote: [ -> ].... For regular iteration you need a fixed point.
Real fixed points exist only for bases $b \le e^{\frac{1}{e}}$.
Of course, you can also do it at a complex fixed point of $e^x$; however, then the resulting tetration is not real on the real axis (actually, it is not even analytic).
Henryk,
Could you clarify what function you are referring to that is not analytic? I thought that regular tetration developed from the complex fixed point for base e, with complex values on the real number line, was analytic and entire. It must have something to do with the rest of this post....
- Sheldon
(06/11/2010, 12:34 AM)sheldonison Wrote: [ -> ]Henryk,
Could you clarify what function you are referring to that is not analytic? I thought that regular tetration developed from the complex fixed point for base e, with complex values on the real number line, was analytic and entire. It must have something to do with the rest of this post....
- Sheldon

I am referring to the regular Abel function at the primary fixed point.
The inverse function (regular superfunction) is entire. But the Abel function has singularities on the real line at $\exp^{[n]}(0)$, $n\in \mathbb{N}$.
(06/12/2010, 04:42 AM)bo198214 Wrote: [ -> ]I am referring to the regular Abel function at the primary fixed point.
The inverse function (regular superfunction) is entire. But the Abel function has singularities on the real line at $\exp^{[n]}(0)$, $n\in \mathbb{N}$.

Does this mean that there are no values of $z$ such that $\mathrm{reg}_F\left[\exp^z\right](u) = {}^n e = \exp^n(1) = \exp^{n+1}(0)$ for $n = 0, 1, 2, ...$? (Here $F$ is the fixed point, $u$ is the starting point, and $\mathrm{reg}$ means regular iteration.) I.e., the regular-iteration superfunction does not contain the sequence of integer-height tetrations of $e$? That makes no sense, since an entire function must take on every complex value with at most one exception. Or are these singularities only on some "branches" of the Abel function, so there are other branches with no singularities there, thus yielding the values where the integer-height tetration sequence occurs? (Which would make more sense.)
(06/12/2010, 11:10 AM)mike3 Wrote: [ -> ]Or are these singularities only on some "branches" of the Abel function, so there are other branches with no singularities there, thus yielding the values where the integer-height tetration sequence occurs? (Which would make more sense.)

I guess so. But indeed I didn't have a closer look at the other branches of the Abel function; I only vaguely remember that JayD had a "spider-web" picture of it, with slightly overlapping ends of two branches. (But I couldn't find it again; the search function of the forum software is really weak.)