Grand Unity Conjecture
#1
After some numerical experiments I indeed come to the conclusion that
matrix power tetration and intuitive tetration are equal.
For base <= eta both are equal to regular tetration.
For base > eta both are equal to Cauchy tetration.

If they are equal to regular tetration how do they know how to choose the right (lower) fixed point?
This depends on the location of the development point of the method.
If the development point lies inside the basin of attraction of the fixed point, then it converges to the regular iteration of that fixed point.
E.g. the lower real fixed point is the only attracting fixed point of exp_b in the whole complex plane.
For b = sqrt(2) you can for example choose the development point to be the imaginary unit, and despite that the values on the real axis have vanishing imaginary part.
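A minimal numerical sketch of this (the starting point and iteration count are my own example choices, not from the thread):

```python
# Iterate exp_b for b = sqrt(2) starting from the imaginary unit:
# the orbit falls into the basin of the lower real fixed point 2,
# and the imaginary part dies out.
b = 2.0 ** 0.5
f = lambda z: b ** z      # exp_b(z) = b^z

z = 1j                    # development/starting point: the imaginary unit
for _ in range(200):
    z = f(z)
print(z)                  # close to 2+0j
```

The multiplier at the fixed point 2 is ln(2) ≈ 0.693 < 1, so the convergence is geometric.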

This would imply that the function iterated by some of the above methods does not depend continuously on the development point: it stays the same function for all development points in the basin of attraction, and suddenly jumps chaotically/fails to converge as long as the development point lies in the Julia set of the function.

This behavior is very similar to that of the limit formulas or the Newton & Lagrange formulas for regular iteration. Which fixed point the regular iteration belongs to is determined simply by which basin of attraction the initial value x0 lies in.
This is particularly visible in the parabolic case, where there are petals of attraction and repulsion at the *same* fixed point, which yield different regular iterations. For exp_eta there are two petals: the lower one is attracting, the upper one repelling.
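The two petals of exp_eta can be seen numerically; a small sketch (starting values and iteration counts are my own choices). exp_eta(x) = e^(x/e) fixes x = e with derivative 1; below e the orbit creeps up to e (attraction, only at rate ~1/n because the fixed point is parabolic), above e it escapes:

```python
import math

e = math.e
f = lambda x: math.exp(x / e)    # exp_eta(x) = eta^x with eta = e^(1/e)

x = 2.0                          # below the fixed point e: attracting petal
for _ in range(100000):
    x = f(x)                     # creeps up toward e roughly like e - x ~ 2e/n

y = 3.0                          # above e: repelling petal
steps = 0
while y < 10 and steps < 1000:
    y = f(y)
    steps += 1
print(x, y, steps)               # x near e, y has escaped past 10
```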

Perhaps one can imagine infinity also as such a fixed point; it comes into play for b > eta. Its basin of attraction is then the whole complex plane, so the development point should not matter.

However there are still some unresolved questions. If we - for b=sqrt(2) - develop at some point above the upper fixed point, do we then obtain the regular iteration at the upper (repelling) fixed point? And if so, how does the method know to choose this repelling fixed point even though there are so many other repelling fixed points in the complex plane? Why does a non-real development point not work (according to Jay) for b>eta?
#2
(08/30/2009, 04:32 AM)bo198214 Wrote: Why does a non-real development point not work (according to Jay) for b>eta?

It does work. But as in the (0 < b < 1) case, the result is a complex-valued tetrational along the real axis. This is a huge difference between regular iteration and intuitive iteration. I would like to refer to this thread: only in the purple region do regular iteration and intuitive iteration appear numerically equal. Outside the blue region, I believe intuitive iteration is undefined. Outside the red region, regular iteration still works, but its "justification" no longer holds.

The "justification" is this: to express the coefficients of regular iteration in terms of \( f_k \) (the Taylor coefficients) you must use finite geometric sums (as indicated in the thread above); however, to express the fixed points you must use infinite geometric sums, which have a different domain. The finite sums are valid on the whole complex plane, while the infinite sums converge only where \( |\log({}^{\infty}b)| < 1 \).
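The infinite tower \( {}^{\infty}b \) and this bound are easy to check numerically; here a sketch for the example base b = sqrt(2), using that the multiplier of exp_b at its fixed point L satisfies L ln b = ln L:

```python
import math

b = math.sqrt(2.0)

# The infinite tower b^b^b^... is the lower fixed point of exp_b.
t = 1.0
for _ in range(500):
    t = b ** t
# For b = sqrt(2) the tower converges to 2.

# Multiplier of exp_b there: (b^x)' = ln(b) * b^t, which equals ln(t)
# at the fixed point.
mult = math.log(b) * b ** t
print(t, mult, abs(math.log(t)) < 1)
```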
#3
By "development point" I don't mean the base. Here I only consider tetrations with real base > 1; the tetrational should be real on the real axis.

With "development point" I mean the following: our methods (intuitive and matrix power) take in a power series development and put out a power series development. The input power series is usually that of exp_b developed at 0, but the methods work for lots of other functions too.
So if we feed in the power series (at 0) of the shift-conjugated function \( z\mapsto \exp_b(z+z_0)-z_0 \) and obtain as output the Abel function \( \alpha \) and the superfunction \( \sigma \), then \( z\mapsto\alpha(z-z_0) \) and \( z\mapsto\sigma(z)+z_0 \) are an Abel function and a superfunction of the original exp_b, respectively. I call z_0 the development point of the method, or say for example that the islog_b is developed at z_0.
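Since \( \exp_b(z+z_0) = b^{z_0} b^z \), the input series for a development point z_0 can be written down explicitly; a small sketch (the base and z_0 are arbitrary example values):

```python
import math

b = math.sqrt(2.0)
z0 = 0.5                  # example development point
lnb = math.log(b)
N = 16                    # truncation order

# Taylor coefficients at 0 of g(z) = exp_b(z + z0) - z0 = b^z0 * b^z - z0:
# they are b^z0 * (ln b)^k / k!, with z0 subtracted from the constant term.
g = [b ** z0 * lnb ** k / math.factorial(k) for k in range(N)]
g[0] -= z0

# Sanity check: the truncated series matches the function directly.
z = 0.3
series = sum(g[k] * z ** k for k in range(N))
direct = b ** (z + z0) - z0
print(series, direct)
```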

For the particular case of \( z_0 \) being a fixed point we already know that the matrix power iteration is equal to regular iteration at that fixed point, and we also know that in this case the intuitive slog/iteration cannot be applied.

So my original expectation was that the matrix power method would be continuous in z_0. If I move z_0 from one fixed point to another, the iteration must then change (as the iterations at the two fixed points are different). If instead this change is not continuous, the iteration stays the same for a while and then jumps once z_0 comes near another fixed point.
These regions of unchanging iteration could be the basins of attraction of the fixed points. Perhaps there are also "repelling" regions (where the inverse function is attracting) which dictate the outcome of the iteration in the same way.
#4
(08/30/2009, 04:32 AM)bo198214 Wrote: If they are equal to regular tetration how do they know how to choose the right (lower) fixed point?

This is a really good question. For some reason, I haven't really understood the dependence on the fixed point. I think I remember that if two fixed points are complex conjugates of each other, then the resulting tetrations will also be conjugate, but this is just a consequence of assuming analytic tetration (all analytic functions have this property). In the real case (1 < b < eta), I don't see any reason a priori why the resulting tetrations would be different, and my intuition would even want to believe that they are the same, for example that 2 and 4 should produce the same tetrations for b=sqrt(2). Maybe this intuition is wrong, but if it is, then could you point me to the threads in which we talk about this?
#5
(08/30/2009, 04:29 PM)andydude Wrote: This is a really good question. For some reason, I haven't really understood the dependence on the fixed point. I think I remember that if two fixed points are complex conjugates of each other, then the resulting tetrations will also be conjugate, but this is just a consequence of assuming analytic tetration (all analytic functions have this property).
yes, just everything is mirrored.

Quote: In the real case (1 < b < eta), I don't see any reason a priori why the resulting tetrations would be different, and my intuition would even want to believe that they are the same, for example that 2 and 4 should produce the same tetrations for b=sqrt(2). Maybe this intuition is wrong, but if it is, then could you point me to the threads in which we talk about this?
But Andrew, the Bummer thread is about exactly this!
Also the thread on the upper superexponential features both regular iterations, at 2 and at 4. Or rather there are two at each fixed point, which you can see here.
There is even the recognition that the iteration of nearly *every* function is different at different fixed points, except for the linear fractional functions: a linear fractional function has two fixed points, and the regular iteration is the same at both.
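That exceptional behavior of linear fractional functions can be checked numerically. Below is a sketch (the map, the half-iterate, and the limit depths are my own choices): for f(x) = x/(2-x), with fixed points 0 (multiplier 1/2) and 1 (multiplier 2), the regular half-iterates obtained from the limit formulas at the two fixed points coincide, and both agree with the closed form from the t-th power of the 2x2 coefficient matrix:

```python
import math

# f is the linear fractional map f(x) = x / (2 - x), fixed points 0 and 1.
def f(x):    return x / (2.0 - x)
def finv(y): return 2.0 * y / (1.0 + y)

t = 0.5          # the half-iterate

# Regular iteration at the attracting fixed point 0 (multiplier f'(0) = 1/2):
# h0(x) = lim_n f^{-n}( (1/2)^t * f^n(x) )
def h0(x, n=40):
    for _ in range(n): x = f(x)
    x *= 0.5 ** t
    for _ in range(n): x = finv(x)
    return x

# Regular iteration at the repelling fixed point 1 (multiplier f'(1) = 2):
# h1(x) = lim_n f^{n}( 1 + 2^t * (f^{-n}(x) - 1) )
def h1(x, n=20):
    for _ in range(n): x = finv(x)
    x = 1.0 + 2.0 ** t * (x - 1.0)
    for _ in range(n): x = f(x)
    return x

# Closed form via the matrix power of [[1,0],[-1,2]]:
# f^(1/2)(x) = x / ((1 - sqrt(2)) x + sqrt(2))
print(h0(0.5), h1(0.5), 0.5 / ((1 - math.sqrt(2)) * 0.5 + math.sqrt(2)))
```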
#6
(08/30/2009, 04:32 AM)bo198214 Wrote: After some numerical experiments I indeed come to the conclusion that
matrix power tetration and intuitive tetration are equal.
For base <= eta both are equal to regular tetration.
For base > eta both are equal to Cauchy tetration.

wow euh.

matrix power tetration ?? = carleman matrix method ??

cauchy tetration ?? = andrews slog ?? = julia ?? = kouznetsov ?? = ??

i might be slow on terminology , but this is not in my math book.
#7
(08/31/2009, 09:26 PM)tommy1729 Wrote: matrix power tetration ?? = carleman matrix method ??

Take the Carleman matrix of a function, then take the t-th matrix power and extract the first line; these are the coefficients of f^t.
This was previously called the diagonalization method, which is not completely correct, because we can also take the matrix power of a non-diagonalizable matrix (e.g. the Carleman matrices of parabolic functions are only trigonalizable).
The method is equivalent to considering the Jordan decomposition of the Carleman matrix: C = P^{-1} J P, hence C^t = P^{-1} J^t P. If J is diagonal, then J^t is the diagonal matrix with the entries taken to the t-th power. In the non-diagonal case you can compute the t-th power of a Jordan block by a finite sum.
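A small numerical sketch of this (my own toy example, not from the thread): for f(x) = 2x + x^2 with fixed point 0, the truncated Carleman matrix is triangular, its square reproduces the coefficients of f∘f, and a fractional matrix power (here via SciPy) gives the coefficients of the half-iterate:

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

N = 8                                   # truncation size
f = np.zeros(N); f[1], f[2] = 2.0, 1.0  # f(x) = 2x + x^2, fixed point 0

# Carleman matrix: row i holds the coefficients of f(x)^i, truncated to N terms.
C = np.zeros((N, N))
row = np.zeros(N); row[0] = 1.0
C[0] = row
for i in range(1, N):
    row = np.convolve(row, f)[:N]       # multiply by f, truncate
    C[i] = row

# The row after the constant row of C @ C gives the coefficients of f(f(x)):
# f(f(x)) = 4x + 6x^2 + 4x^3 + x^4
ff = (C @ C)[1]

# Half-iterate via the fractional matrix power C^(1/2).
H = fractional_matrix_power(C, 0.5)
h = np.real(H[1])                       # coefficients of the half-iterate

# Compose h with itself (truncated) and compare with f.
hh = np.zeros(N); p = np.zeros(N); p[0] = 1.0
for k in range(N):
    hh += h[k] * p
    p = np.convolve(p, h)[:N]
print(ff[:5], h[:3], hh[:3])
```

Because f fixes 0, the truncated Carleman matrix is triangular with distinct diagonal entries 2^i, so the fractional power is well defined and reproduces the regular iteration coefficients up to the truncation order.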

Quote:cauchy tetration ?? = andrews slog ?? = julia ?? = kouznetsov ?? = ??

Cauchy tetration is the tetration described and calculated by Dmitrii Kouznetsov which involves the Cauchy-/Contour- integrals.

We call Andrew's slog the intuitive tetration, it was AFAIK first described by Walker. It can also be described with help of the Carleman matrix.

The Julia function is usually used in the regular iteration of parabolic functions.
#8
bo198214 Wrote:Cauchy tetration is the tetration described and calculated by Dmitrii Kouznetsov which involves the Cauchy-/Contour- integrals.

We call Andrew's slog the intuitive tetration, it was AFAIK first described by Walker. It can also be described with help of the Carleman matrix.

dear Bo , i probably have got a bad memory , could you post a good link to the ( simplest ) Cauchy tetration integrals ?

has Cauchy tetration been (dis)proven to be equivalent to kneser tetration ?

is Andrew's slog = carleman ?

sorry for those questions Blush
#9
(09/04/2009, 12:15 AM)tommy1729 Wrote: has Cauchy tetration been (dis)proven to be equivalent to kneser tetration ?

No proof. One of the reasons why we list these as separate methods is that we don't know for sure if they are equivalent. If we could prove that two methods are equivalent, then we would list them as a single method.

(09/04/2009, 12:15 AM)tommy1729 Wrote: is Andrew's slog = carleman ?

No proof. However, there is strong numerical evidence to indicate that they are equal. Procedurally, intuitive Abel iteration and Carleman matrix iteration are very different. Granted, they are both matrix methods, but the first uses matrix inverse, while the second uses matrix power, which is why they cannot be compared directly.
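To make the procedural difference concrete, here is a minimal sketch of the intuitive Abel method (base sqrt(2) and truncation order N are my own choices): truncate the coefficient equations of \( \alpha(b^x)-\alpha(x)=1 \), with \( \alpha \) an unknown power series, to an N x N linear system and solve it. This is the matrix inverse step; the matrix power method would instead take C^t of the Carleman matrix.

```python
import math
import numpy as np

b = math.sqrt(2.0)
N = 30                                  # truncation order
lnb = math.log(b)

# Taylor coefficients of exp_b(x) = b^x at 0: c_m = (ln b)^m / m!
c = np.array([lnb ** m / math.factorial(m) for m in range(N)])

# Column k-1 holds the series of (b^x)^k = b^(kx), truncated to N terms.
A = np.zeros((N, N))
p = np.zeros(N); p[0] = 1.0
for k in range(1, N + 1):
    p = np.convolve(p, c)[:N]
    A[:, k - 1] = p
# The Abel equation alpha(b^x) - alpha(x) = 1 with alpha(x) = sum_k a_k x^k
# becomes, coefficient by coefficient, a linear system: subtract the a_n term.
for k in range(1, N):
    A[k, k - 1] -= 1.0

rhs = np.zeros(N); rhs[0] = 1.0
a = np.linalg.solve(A, rhs)             # the matrix inverse step

alpha = lambda x: sum(a[k - 1] * x ** k for k in range(1, N + 1))
x = 0.1
print(alpha(b ** x) - alpha(x))         # should be close to 1
```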
#10
(09/04/2009, 12:15 AM)tommy1729 Wrote: dear Bo , i probably have got a bad memory , could you post a good link to the ( simplest ) Cauchy tetration integrals ?

I don't think it is properly explained on the forum. The main reference is Dmitrii's article, "Solution of \(F(z+1)=\exp(F(z))\) in complex \(z\)-plane".

Quote:is Andrew's slog = carleman ?

There is no Carleman method. The Carleman matrix is used by the matrix power method as well as by the intuitive Abel method. (As Andrew just described)

