Zeration
#31
Hey James,

that's really amazing how many people are out there investigating hyper (and hypo) operations, and ... how they independently arrive at the same results!

James Knight Wrote:Ok you guys are getting closer and closer to what I have defined as zeration....

Your approach also first establishes the law
a[n+1](b+1)=a[n](a[n+1]b)

Quote:Left to right is the Plus One Law

x[n](x[n+1](b)) = x[n+1](b+1)
VOILA!

then derives the necessary consequence that a[0]b=b+1:

Quote:x [n-1] (x[n](b-1)) = x [n] (b)
Substituting n = 1 and y = x+b -1
Definition 1
x [0] y = y + 1

However, I think we don't need extra names for the left or right inverse of such a simple operation. I would call the operation increment and the right inverse decrement. Successor/predecessor would also be acceptable names.

Quote:Ok that leaves the exciting right inverse!!!
Since last fall, I had my doubts over the commutativity of zeration as well as the discontinuity. I have spent numerous hours redoing laws and being frustrated. Ok now I would like to present to you
Knightation or Nitation (struggling on what to call it...)

Knightation is the Right Inverse of Zeration.
The Operator J is used to refer to Knightation (it's supposed to look like an ear lobe idea)

if x o y = z then
y = z J x

Definition 2
x J y = x - 1
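
In code, James's two definitions amount to something like this (a minimal Python sketch; the function names are mine, and the check just exercises the stated right-inverse property):

Code:
def zeration(x, y):
    # Definition 1: x [0] y = y + 1 (the left operand plays no role)
    return y + 1

def knightation(z, x):
    # Definition 2: z J x = z - 1, the right inverse of zeration
    return z - 1

# Right-inverse check: if x o y = z, then y = z J x
x, y = 5, 3
z = zeration(x, y)            # z = 4
assert knightation(z, x) == y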

Quote:Well I hope you have gained something from this or have been entertained by my random jokes.

So you are starting to compete with Gianfranco! I really like both his and your style of writing.

Quote:Also I am a computer programmer and I am going to soon start a program that will compute and graph hyperoperations. Anyway, I am sooooo happy right now because I got accepted to the University of Waterloo!! Soo tired!

I wish you all the best on your way forward.
#32
Here are some Laws that are true by this definition of zeration

Law 1 Iterative Property

Zeration (n operands):
x o x o ... o x = x + n - 1

Knightation (n operands):
x J x J ... J x = x - n + 1 = x - (n - 1)

Cancelation Property (cancels the +- 1), with n operands in the zeration chain and m in the knightation chain:
(x o x o ... o x) + (x J x J ... J x) = x + m - n
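
A small numeric check of Law 1 (a sketch; one assumption of mine is the grouping: the zeration chain is evaluated right to left and the knightation chain left to right, since with the opposite grouping each chain collapses after one step):

Code:
def zeration(x, y):
    return y + 1              # x o y = y + 1

def knightation(x, y):
    return x - 1              # x J y = x - 1

def chain_right(op, x, n):
    # x op (x op (... op x)) with n operands, grouped right to left
    result = x
    for _ in range(n - 1):
        result = op(x, result)
    return result

def chain_left(op, x, n):
    # ((x op x) op x) ... op x with n operands, grouped left to right
    result = x
    for _ in range(n - 1):
        result = op(result, x)
    return result

x, n = 7, 5
assert chain_right(zeration, x, n) == x + n - 1      # x o x o ... o x
assert chain_left(knightation, x, n) == x - n + 1    # x J x J ... J x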

Law 2 Addition / Distributive Law
Zeration
(x o b) + a = (x + a) o (b+a)
= (x) o (b+a)
= (a) o (b + a)
= b + a + 1

Knightation
(x J b) + a = (x+a) J (b+a)
= (x+a) J b
= (x+a) J a
= x + a - 1

Law 3 Relativity Law
Deltation
(x+a)Δ(b+a) = x Δ b
also
(x-a)Δ(b-a) = x Δ b
Multi-Lines
-----------

Something else that is interesting is producing multi-lines by using polynomial functions as the operands of Zeration, Deltation and Knightation.

One could write:
y = -1 +- 1 to show dual horizontal Lines
and similarly x = -1 +- 1 to show dual vertical lines

Using Zeration, the expression

y = x o (y^2 + 3y -1) means the same thing
and will produce two horizontal lines

The equation can also be written for y in terms of x:

y = -1 +- sqrt ( 2 J x) but simplifies to
y = -1 +- sqrt (2-1)
y = -1 +- sqrt (1)
y = -1 +- 1 which is what we started with.

Also two vertical lines can be made by remembering that in
y = a Δ b, y is all values when b = a -1

Therefore if x = a, and b = x^2 + 3x - 1
y = x Δ (x^2 + 3x - 1) gives x = -1 +- 1

Anyway, this can be extended to all polynomial functions, many of which generate multi-lines. This may be useful in determining roots of equations, but I highly doubt it.

The only problem I am having is with deltation.

I know it's not commutative. Suppose that
x Δ a = a Δ x,
and let I denote "all values" (a vertical line); then

x Δ a = I
whenever a = x -1

and a Δ x = I
whenever x = a-1
therefore a = x + 1
a cannot = x - 1 and x + 1
for then -1 = 1 and that is false

However, associativity is another question,
because it is difficult to evaluate deltation for hyperreal infinite or infinitesimal operands.

What would I Δ x be? What would +- infinity Δ x be?

However, I have more or less come to the conclusion that it is not associative.

If associativity were true then
(aΔb)Δc = aΔ(bΔc) for all values of a, b and c
Rearranging I could get
a Δ b = (aΔ(bΔc)) o c

by definition of zeration assuming of course that the definition extends to non real values

a Δ b = c + 1
therefore
a = (c+1) o b
a = b + 1

from a Δ b = c + 1
We could also say that since a Δ b gives non-real values, a Δ b is never equal to c+1 if a, b and c are real numbers; the equation is true only when b + 1 = a, i.e. at the vertical-line intersection.
Therefore, since such conditions exist, deltation is not associative, which parallels the left inverses of the non-commutative, non-associative regular operations (i.e. exponentiation and rooting, tetration and super-rooting, etc.).

Of course there is that assumption I made which may be wrong.
So things to look into:

Values of Zeration, Deltation and Knightation for non-real operands. It may be interesting...

Value of I
----------
'I' is the value of y when y = x Δ (x-1)
I = x Δ (x -1)
I have found a resemblance in the following equation

b = 0c

when b = 0, c is all values;
however, when b is not 0, c is undefined (see the similarity?)
In other words,

I = c, when 0c = b and b = 0
Therefore we can define c in terms of x and b as follows;
If y = x Δ a
when y = c = I
a = x -1
b = 0
Therefore a - b = x -1,
a = x + b -1
Therefore y = x Δ (x+b -1)
Therefore the solution of 0c = b where b is a constant is
c = x Δ (x+b -1)
we can also use the Relativity Law to simplify by adding 1 - x to both operands
(you can also see that it's virtually the same as substituting x =1)
Therefore c = 1 Δ b is the simplest solution to 0c = b

These values look to me to be hyperreal. They are technically part of the real numbers, but not "officially". Also, if one looks at the limit of 1/x as x -> 0, +- infinity occurs. Something to consider for deltation...
Well, enjoy! And remember to keep an open mind!

James
#33
Sorry I am posting so much! I am just so excited to share my ideas with other people who are researching what I am researching!

Exponent Laws
-----------------
Recall that

(x^a)^b = x^(ab)

(x^a)(x^b)= x^(a+b)

Exponent Law 3
ZERATION exponent LAW (you already know this; it's just a different way of looking at it...)

(x)(x^a) = x^(x o a)

Recall that
bth root (x^a) = x^(a/b)

(x^a)/(x^b) = x ^ (a-b)

Exponent Law 6
(x^a)/x = x^(a J x)

There!!! Notice how the pattern isn't only decreasing operations but decreasing combinations of operations. Notice how b is alone in the top and missing in the bottom. Notice how multiplication or division is present in each of the equations. Just thought it was neat!
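
Both laws are easy to confirm numerically with the earlier definitions (a small sketch; the test values are arbitrary):

Code:
def zeration(x, a):
    return a + 1              # x o a = a + 1

def knightation(a, x):
    return a - 1              # a J x = a - 1

x, a = 2.0, 5.0
assert x * x**a == x**zeration(x, a)        # Exponent Law 3: x * x^a = x^(x o a)
assert x**a / x == x**knightation(a, x)     # Exponent Law 6: x^a / x = x^(a J x)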

James
#34
Hello, James ... Knight! Nice to meet you!

I don't completely agree with what you say concerning zeration, but I like it very much. I am preparing a short 2-3 page report, together with Konstantin (KAR), and hope to be able to throw it into the ring. The "commutativity-and-not-associativity" business might still stand on its legs, because of the priority-to-the-right rule. We shall see. It's a question of ... traffic (!) Wink

The big problem is fitting "zeration" exactly into the hyperops hierarchy and granting, at the same time, its "uniqueness". Another problem is: "Do we have hypos ?". Bo says that we don't, but he is not sure ... Sad

Welcome aboard and stay with us. More will come soon, thanks also to your ... visions!

Gianfranco
#35
I think the Delta Numbers are elements of the hyperreal sets. In addition, I think that Deltation will revolutionize calculus once it has been well defined. I don't think anymore that deltation values are complex or undefined, but are either infinitely far or infinitesimally close to all real numbers. I think that it is rather interesting that the delta symbol was chosen to represent deltation as it has to do with calculus and infinitesimal quantities. I am beginning to wonder whether or not hyperreal positive vs. negative quantities exist (i.e. does positive and negative infinity mean the same thing?).

Deltation -> Hyper Real Infinite and Infinitesimal
Subtraction -> Integer Negative Numbers
Division -> Rational Fractions
Roots and Logarithms -> Irrational and Complex/Imaginary
etc.

Notice how you can't produce the "new" number type at a previous level without using that number type.
e.g. you can't get a non-integer rational number by subtracting two integers
e.g. you can't get an irrational number by dividing two rationals

I'm not saying you can't have negative infinity; what I am asking is whether negative infinity can result from deltation.
So can a hyperreal really be infinitely negative when negativity doesn't exist in the natural number set?
Possibly, because Knightation subtracts, so it might produce negative numbers... (also, a better name for Knightation might be Jeration...)

I am also pondering whether in Knightation there is a limit to how far back you can go, as with logarithms (e.g. you can't take the logarithm of zero). This would mean an asymptotic relationship for zeration and knightation. This might be something to look into.
Well, I think I've hit my post quota for today!

James
#36
James Knight Wrote:I am also pondering whether in Knightation there is a limit to how far back you can go, as with logarithms (e.g. you can't take the logarithm of zero). This would mean an asymptotic relationship for zeration and knightation. This might be something to look into.

James

If, when iterating back, you always take ln(|ln(x)|) (changing negative numbers into positive), you end up at -Omega (= -LambertW(1) = ln(LambertW(1))) for all x on average and for all iterations on average, but for each particular x it depends on whether it is (contains, is expressible by) e, e^(1/e), 1/e, -e etc. in some combination or not.


e.g. ((e^(-e))^e)^(1/e)))^e^(-e))^(-e)^(-e))) will definitely stop after a certain number of ln(ln(ln(...))) iterations, while e.g. x = 2 might be iterable infinitely.

I have a thread about it next door.

Iterating logarithms until ln(Omega)=-Omega is reached

Ivars
#37
James Knight Wrote:I think the Delta Numbers are elements of the hyperreal sets. In addition, I think that Deltation will revolutionize calculus once it has been well defined. I don't think anymore that deltation values are complex or undefined, but are either infinitely far or infinitesimally close to all real numbers.

Deltation (as you use it) has nothing to do with hyperreal numbers. If you have a constant function \( f(x)=c \), then this function is simply not injective and hence has no inverse function (moreover, this function is not injective on *any* open interval). This is also true when considering the constant function on the hyperreal numbers, so nothing is gained by escaping to the hyperreal numbers. By the way, the hyperreal calculus is translatable into classical calculus; it's just another view on the same thing, and no new properties are gained.

Another thing is that you can't extend the real numbers with infinities without destroying really fundamental properties. For example \( 1+\infty=\infty\Rightarrow 1=0 \) by subtraction.

Quote: I think that it is rather interesting that the delta symbol was chosen to represent deltation as it has to do with calculus and infinitesimal quantities.
I think you mean chosen by Rubtsov et al. However, take into account that he defines deltation differently from you. Anyway, I think it makes little sense to make such a fuss about the increment function \( x+1 \), or to give its inverse, which every school kid knows to be \( x-1 \), a new name, etc.

PS: Please don't double post. Every post should exist in only one thread, so please remove your symbol/notation post from this thread.
#38
Concerning:

bo198214 Wrote:
GFR Wrote:The problem, as you also correctly said, is to define an operation (ONE hyperop) that would be the unique operation, exactly fitting in the hyperops hierarchy, at rank 0.
Only to summarize and clarify the current situation:
If we agree on the law a[n+1](b+1)=a[n](a[n+1]b) for all hyperoperations [n] for integer n and agree that a[1]b=a+b then it stringently follows (without assumptions about initial values) that a[0]b=b+1 and it also stringently follows for all hypo operations that a[-n]b=b+1. (This was shown in this thread by Andrew and me.) I would call this "exactly fitting in the hyper operations hierarchy".
Actually, I usually write the above-mentioned "Mother Law" as:
a[n-1](a[n]b) = a[n](b+1), which gives:

a+(a*b) = a*(b+1)
a*(a^b) = a^(b+1)
a^(a#b) = a#(b+1)

and which, for n = 1, also gives:
a[0](a[1]b) = a[1](b+1), i.e.:

a ° (a+b) = a + (b+1) = a+b+1 = (a+b) + 1... so far, so good ... !
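
Numerically, the Mother Law can be checked at the first few ranks with a minimal Python sketch (the start values a[2]0 = 0 and a[n]0 = 1 for n >= 3 are the usual ones; the test values are arbitrary):

Code:
def hyperop(a, n, b):
    # a[1]b = a + b; higher ranks via a[n](b+1) = a[n-1](a[n]b),
    # with start values a[2]0 = 0 and a[n]0 = 1 for n >= 3
    if n == 1:
        return a + b
    if b == 0:
        return 0 if n == 2 else 1
    return hyperop(a, n - 1, hyperop(a, n, b - 1))

# Mother Law a[n-1](a[n]b) = a[n](b+1), checked for ranks 2..4
a, b = 2, 3
for n in (2, 3, 4):
    assert hyperop(a, n - 1, hyperop(a, n, b)) == hyperop(a, n, b + 1)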

Let me try now to take a new (multiple...) way, starting from this initial conclusion, where, by putting a + b = k and reasoning, for the moment, only with positive integers, we should have:
a ° k = k + 1, with, obviously (but not compulsorily), k > a.
This multiple way (quadruple, not ... octuple) is of the inductive and not of the deductive type. I hope that everybody would be patient enough to read it, without ... fainting, deciding to go to the Foreign Legion or (BO) organizing a metaphorical Strafspedition for the democratic elimination of ... somebody from this Forum. Sad , I mean Wink

Pillar 1 - The Mother Pillar. Supposing that the "Mother Law" means exactly fitting zeration into the hyper operations hierarchy, then we could assume that:

a ° b = b + 1, apparently only depending on the second operand.

The problem here is that, in my humble opinion, this fact would demonstrate that a zeration binary operation could not exist. Unless (there is always an ... unless) the Mother Law is not alone. I mean, it might be necessary, but not sufficient, or sufficient, but not necessary, or neither of them (but this would be too much!). In fact, it would be nonsense just to say that a ° b is the successor of b, for any a. If we are looking for a new binary operation, we should be prepared to find other additional conditions, accompanying and supporting the Mother Law.

Pillar 2 - The Ackermann Pillar. We know that the Ackermann Function (AF) can be defined as follows:

A(0, n) = n+1
A(s, 0) = A(s-1, 1)
A(s, n) = A(s-1, A(s, n-1))

The AF can be shown as an infinite matrix, starting from line s=0 and column n=0, extended to all the natural n's and s's. Terrific landscape! Nevertheless, strangely enough, and by using the hyperops formalism, any A(s, n) element of the AF matrix, for s>0, can also be written as follows:
A(s, n) = 2[s](n+3) - 3

For instance:
A(1, 1) = 2 + 4 - 3 = 3
A(2, 1) = 2 * 4 - 3 = 5
A(3, 1) = 2 ^ 4 - 3 = 13
A(4, 1) = 2 # 4 - 3 = 65533
etc..., usw...
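
These values can be reproduced with a short sketch (kept to small arguments, since A(s, n) explodes quickly; the hyperoperation start values are the same assumptions as in the sketch above):

Code:
def A(s, n):
    # The two-argument Ackermann Function as defined above
    if s == 0:
        return n + 1
    if n == 0:
        return A(s - 1, 1)
    return A(s - 1, A(s, n - 1))

def hyperop(a, s, b):
    # a[1]b = a + b; a[s](b+1) = a[s-1](a[s]b), with a[2]0 = 0, a[s]0 = 1 for s >= 3
    if s == 1:
        return a + b
    if b == 0:
        return 0 if s == 2 else 1
    return hyperop(a, s - 1, hyperop(a, s, b - 1))

# A(s, n) = 2[s](n+3) - 3, checked for s = 1..3 and small n
# (A(4, 1) = 2[4]4 - 3 = 65533 also fits, but is too deep for naive recursion)
for s in (1, 2, 3):
    for n in range(4):
        assert A(s, n) == hyperop(2, s, n + 3) - 3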

The first line of the matrix is a ... problem, because it is simply given by: A(0, n) = n + 1, while the general AF formula gives: A(0, n) = 2[0](n+3) - 3. In conclusion, for n >= 0, we should have:
A(0, n) = (2[0](n+3)) - 3 = n + 1, or (all bracketing is necessary), with k = n + 3:
(2[0]k) - 3 = n + 1, or:
2[0]k = n + 4 = k + 1, with n >= 0, i.e.: k > 2.

In conclusion, the Ackermann general formula A(s, n) = (2[s](n+3)) - 3 can also be made valid for line s=0 if we define a general rank-zero operation of the type:
a[0]b = b + 1, if b > a, coinciding, for a = 2, with:
2[0]b = b + 1, for b > 2.

Under these conditions, we should have, for zeration:

a ° b = b + 1, if b > a

Now, the problem (... again!) is that zeration seems not to be defined in the case b <= a, which, indeed ... and again, is not acceptable. We need more pillars.

Pillar 3 - The Hyper-roots. It has been known since the time of the Ancient Greeks that the square root of a number can be calculated by iterating the following functional equation:
y = sqrt x ---> (y + x/y) / 2 => y
The iteration (n + x/n) / 2 = m -> n, starting from an approximate solution n, rapidly converges to the square root of x. About 20 years ago, Konstantin Rubtsov thought of applying a similar formulation for calculating the square superroot, as well as the half of a number (!!), both left-inverse hyperops of the root type. The compact formulation of that can be generalized as follows:
y = x /[s]2 ---> y <= (y[s-1](y[s]\ x)) /[s-1]2.

This formula can be implemented as follows:
.....
y = ssqrt x ---> y <= sqrt (y * log_y(x))
y = sqrt x ----> y <= (y + x/y) / 2
y = x / 2 -----> y <= (y ° (x-y)) - 2

It can easily be verified that the square superroot (super square root) and the square root iterations rapidly converge to an acceptable value after a few iterations. Concerning the formula including zeration, the situation is that:
- "y" must be an even natural number, to allow us to find its half;
- zeration must be commutative, to allow us to calculate the approximate values for any initial "y".

The first condition is due to the fact that zeration has initially been defined only for integer numbers and that the formula needs an even number for calculating its half. The second condition has a deeper meaning, because the iteration just doesn't converge if zeration is not commutative. The conclusion is that the second condition is fulfilled only if the order of the operands can be commuted, i.e. if:

a ° b = b ° a = max(a, b) + 1, if a >< b.
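
Here is a small sketch of the three iterations in Python (the zeration used in the last one is the commutative one just stated, with the a = b case, a ° a = a + 2, anticipated from Pillar 4 below; the starting values and step counts are my own choices):

Code:
import math

def sqrt_iter(x, y=1.0, steps=20):
    # y <- (y + x/y) / 2, the classical iteration for sqrt(x)
    for _ in range(steps):
        y = (y + x / y) / 2
    return y

def ssqrt_iter(x, y=2.0, steps=50):
    # y <- sqrt(y * log_y(x)), Rubtsov's iteration for the super square root
    # (the y with y^y = x); needs x > 1 and a starting value y > 1 to stay real
    for _ in range(steps):
        y = math.sqrt(y * math.log(x, y))
    return y

def kar_zeration(a, b):
    # Commutative zeration: max(a, b) + 1 if a != b, and a + 2 if a = b
    return a + 2 if a == b else max(a, b) + 1

def half_iter(x, y, steps=20):
    # y <- (y o (x - y)) - 2, the zeration-based iteration for x / 2
    for _ in range(steps):
        y = kar_zeration(y, x - y) - 2
    return y

print(sqrt_iter(2.0))        # ~1.41421356
y = ssqrt_iter(16.0)
print(y, y ** y)             # y ~ 2.7453, y**y ~ 16.0
print(half_iter(10, 7))      # 5 (even integer x, integer starting y)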

Now, commutativity of zeration is one of its most important properties, if fully and surely demonstrated. Unfortunately, the above-mentioned "speech" is not a rigorous demonstration, but a cloudy (quick and ... dirty Wink) mathematical experiment. Konstantin Rubtsov (Rubcov) knows a complicated, but very "clean" demonstration of the commutativity of zeration, based on the consideration of the left and right neutral elements, homomorphism with addition and/or multiplication, category theory and ... other similar amenities. It takes several DIN A4 pages, like Goedel's Theorem, and any shorter presentation is just hermetical. We should convince him, when he has time, to present a "people's democratic" version of it, for simple-minded guys like me. For the moment, I keep the Faith, thinking that, after all, we could consider that statement as part of a postulated axiom (BO ipse, in a moment of ... weakness, dixit!).

No instructions are given if a = b. However, this condition doesn't contradict those of the other pillars, but completes them, despite the fact that the constraints under which it has been discussed are a little bit weak. This pillar is, nevertheless, reinforced by the following one.

Pillar 4 - The Hyper-means. Standard algebra has long defined important binary operations such as the arithmetic and the geometric means, strongly associated with two important classical hyperops, i.e. addition and multiplication, as follows:
am(a, b) = (a + b) / 2, with: am(a, a) = a (the arithmetic mean)
gm(a, b) = sqrt(a * b), with: gm(a, a) = a (the geometric mean).

In the hyperops hierarchy framework, we can also coherently define two other hyper-means, the power and the zeric mean, such as:
pm(a, b) = ssqrt(a ^ b), with pm(a, a) = a (the power mean)
zm(a, b) = (a ° b) - 2, with zm(a, a) = a (the zeric mean).

The problem with the power mean is that, on one hand, it operates on a non-commutative operation (exponentiation), so that pm(a, b) >< pm(b, a). On the other hand, ssqrt(x) has real values only for x > e^(-1/e). For these reasons, despite its importance for studying possible fractional hyperop ranks, its analysis requires further attention. On the contrary, the zeric mean also appears in Pillar 3, and this strongly justifies the fact that it must be:

a ° a = a + 2

This provisional and partial conclusion also justifies the following (almost initial) relations, which were at the origin of much of the research work done in the hyperops field:
.........
a ^ a = a # 2
a * a = a ^ 2
a + a = a * 2
a ° a = a + 2
........
as well as, more particularly (for a = 2), the almost "holy" tetragonal equality:
.... 2 ° 2 = 2 + 2 = 2 * 2 = 2 ^ 2 = 2 # 2 = ..... 2 [s] 2 .... = 4 !!!!!!
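
For ranks 1 through 4, both the ladder and the "tetragonal equality" are easy to confirm numerically (a sketch, reusing the same recursive hyperoperation as before; whether the chain can be extended down to rank 0 with a ° a = a + 2 is exactly the point under discussion):

Code:
def hyperop(a, s, b):
    # a[1]b = a + b; a[s](b+1) = a[s-1](a[s]b), with a[2]0 = 0, a[s]0 = 1 for s >= 3
    if s == 1:
        return a + b
    if b == 0:
        return 0 if s == 2 else 1
    return hyperop(a, s - 1, hyperop(a, s, b - 1))

# a[s]a = a[s+1]2 for s = 1, 2, 3 ...
a = 3
for s in (1, 2, 3):
    assert hyperop(a, s, a) == hyperop(a, s + 1, 2)

# ... and the "tetragonal equality" 2[s]2 = 4 for s = 1, ..., 4
assert all(hyperop(2, s, 2) == 4 for s in (1, 2, 3, 4))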

Conclusions based on the Four Pillars - Based on the Four Pillars, the existence of a zeration hyperop could be justified by the following definitions (considered as a postulate by BO and as a consequence of the overall existing mathematical environment, by KAR):

a ° b = max(a, b) + 1 if a >< b
a ° b = a + 2 = b + 2 if a = b


Personally, I think that this definition just satisfies the constraints put forward by the four Pillars and, for this reason, it justifies KAR's zeration. I have some doubts concerning its unicity, but this happens in the best families, such as Euler's Gamma function as an extension of the factorial.

For my intellectual equilibrium, I usually pronounce the "°" zeration infix operator as "over" (or "more" or "beyond"), like "plus" or "(multiplied) by" in the case of addition and multiplication. This gives, expressing it as a prefix operator (just for ... confusing the reader a little bit):
over(a, b) = max(a, b) + 1, if a >< b
over(a, b) = c + 2, if a = b = c.
Konstantin doesn't like that, because he thinks it would create additional confusion with the operator "+", but I feel very cool when I pronounce it so. Under these conditions, the zeric mean would be:
zm(a, b) = max(a, b) - 1, if a >< b
zm(a, b) = c, if a = b = c.

The Neutral Elements - What somebody calls the left/right unit elements. Let us consider the following general functional equation, including the definition of a fixpoint of the type x = f(x):
x = a[s]x <---> x = a[s+1]oo, which means:
a = x /[s]x <---> a = x /[s+1]oo (inverses, of the root types)

The implementation of that for ranks 3, 2, 1, 0 gives:
x = a^x <---> x = a#oo
a = x-rt x <---> a = oo-srt x = x^(1/x) (no neutral element for rank 3)

x = a*x <---> x = a^oo
a = x/x = 1 <---> a = oo-rt x = x^0 = 1

x = a+x <---> x = a*oo
a = x-x = 0 <---> a = x/oo = 0

x = a°x <---> x = a+oo
a = xçx <---> a = x-oo = -oo (ç stands here for the delta symbol, inverse of "°")

From the hyperops point of view, such a fixpoint can be used (if it is defined via a constant) for the definition of the left neutral element (of the root type). In fact, we have:
-- x = a^x: no left neutral element for exponentiation (a = x^(1/x));
-- x = a*x means a = 1;
-- x = a+x means a = 0;
-- x = a°x means a = -oo.

The -oo element is, therefore, the left neutral element of zeration and, due to its commutativity, also the right one, and we have:
(-oo)°a = a°(-oo) = a.

Final Remarks - Please find in the annex two plots of y = 2°x and of its inverse y = 2çx = 2-delta-x, according to the above-mentioned KAR definitions. Please note:
(1)- Zeration y = a ° n, with n natural, is defined for -oo <= n <= +oo and it can be considered as a (one-valued) "function".
(2)- Zeration can easily be extended to the real numbers, but it is absolutely not a continuous function and, therefore, it is not analytic (as somebody initially suggested); in particular, it contains a single "spot" (one separate point) function value and an "infinite discontinuity", both characterizing its clear and deep discontinuity. This fact should not impress people who know "functions" like the Dirac, Step and Ramp functions, as well as other strange discontinuous mathematical objects.
(3)- Zeration y = a ° x is invertible (Deltation), but this fact opens the "door" to new classes of numbers, such as the non-standard trans-finite numbers and also the rather new hypothetical "trans-infinite numbers" (Delta Numbers, according to KAR's terminology). This fact should not impress people who know all the secrets of the logarithms of negative and/or complex numbers (or of the logarithm "tout court"), which involve the "wild" complex multi-valued "numbers".
(4)- The down-up methodology used in this thread implies a sort of "experimental mathematics", which should not impress distinguished researchers familiar with these procedures. Take, for instance, the ghost-hunting programs launched to find complex (i.e. unreal and almost ... non-existing!) fixpoints.
(5)- The KAR definition (postulate or experimental finding) was published in 1987 and 1996 and presented at two World Congresses of Mathematicians (held in Zurich and Madrid .... I don't remember the dates), as well as in the WRI Forum, and no criticism was received concerning it ... until now.
(6)- We should avoid stating that an operation with multiple solutions is not defined. Think, for instance, of the number of solutions of the square root of 4, of the cube root of 8 or of the fourth root of 16, which are two, three and four, respectively. A "disequation" such as real x > 5 defines all the real numbers greater than 5. If we think of that formula as an "operation", the number of its solutions is an uncountable infinity. Try also, for instance, to draw the "plot" of y = x ^ e. Please don't tell me that these things are not in the ... "Manual".
(7)- This is the moment to re-analyze the bases of those Pillars and definitions, in the framework of the prospective developments in the field of tetration and of the overall hyperops hierarchy. Nevertheless, the analysis should be done carefully and precisely, trying to avoid easy, simple criticisms of the "strangeness" of the entire ... business. It would be interesting to find a new "pattern", functioning as an additional pillar for the entire hierarchy. Why not ... !

Please consider these notes as a personal comment, concerning only me and not KAR, who is very busy at this moment of his professional life. Thanks to all of you for your kind attention and for your useful cooperation in this enterprise.

I stop here, apologizing for my macaronic English and for all the repetitions and possible errors. I also do it to give the Administrator space and time for his usual and friendly "But Gianfranco ..." routine. Wink Wink Wink !!

Gianfranco


Attached Files
.pdf   Zeration and Deltation plots.pdf (Size: 18.35 KB / Downloads: 1,468)
#39
GFR Wrote:This multiple way (quadruple, not ... octuple) is of the inductive and not of the deductive type. I hope that everybody would be patient enough to read it, without ... fainting,

At least I do it! Smile

Quote:deciding to go to the Foreign Legion or (BO) organizing a metaphorical Strafspedition for the democratic elimination of ... somebody from this Forum.
We don't eliminate anyone from the forum. And regarding opinions, it's not so much a democratic principle, but rather a mathematical principle similar to Ockham's razor: unnecessary assumptions are eliminated.

Quote:Unless (there is always an ... unless) the Mother Law is not alone.
...
we should be prepared to find other additional conditions, accompanying and supporting the Mother Law.
You cannot supplement the mother law; it alone prescribes that a[0]b=b+1 for all b.

Quote:Pillar 2 - The Ackermann Pillar.
The Ackermann function as you describe it is a modification of the original Ackermann function with 3 arguments
[Ackermann, Zum Hilbertschen Aufbau der reellen Zahlen, Math. Ann. 99 (1928), 118-133]
which is exactly our hierarchy a[n]b, except that it starts with the 0th operation being addition instead of the first one.

The modification was used to more easily prove statements like the existence of recursive but not primitive recursive functions. And no wonder it also leads here to the conclusion that a[0]b=b+1.

Quote:Pillar 3 - The Hyper-roots.
...
y = x /[s]2 ---> y <= (y[s-1](y[s]\ x)) /[s-1]2.
I think it should read (y[s-2](y[s-1]\ x)) /[s-1]2 right?

So let us see why this formula actually computes the value of x/[s]2.
It is an iteration formula, so if it has a limit y then
y = (y[s-2](y[s-1]\ x)) /[s-1]2

If we further assume that all arguments are in their bijectivity domains, this formula is equivalent to:

y [s-1] 2 = y [s-2] ( y [s-1]\ x )
y [s-2] (y[s-1]1) = y [s-2] (y [s-1]\ x )
y[s-1]1=y [s-1]\ x
y [s-1] (y[s-1] 1) = x

And now under the assumption that y[s]0=1:

y [s] 2 = y [s] (1+1+0) = y[s-1](y[s-1](y[s]0)) = x
y = x [s]/ 2

However the assumption is wrong for s=2, y[2]0=0.

Quote:Konstantin Rubtsov (Rubcov) knows a complicated, but very "clean" demonstration of the commutativity of zeration,
However he did not cleanly state from what laws this commutativity follows. Surely not from the mother law.

Quote:For the moment, I keep the Faith, thinking that, after all, we could consider that statement as part of a postulated axiom (BO ipse, in a moment of ... weakness, dixit!).
Of course, if you could derive your solution from the assumption of commutativity, that would be fine; however, it still contradicts the mother law, and I think not even commutativity would make it unique.

Quote:Pillar 4 - The Hyper-means.
The hyper means (a[n]a)/[n+1] 2 = a
follow directly from the assumption that a[n]a=a[n+1]2 which is equivalent to a[n+1]1=a.

Quote:a ^ a = a # 2
a * a = a ^ 2
a + a = a * 2
........
as well as, more particularly (for a = 2), the almost "holy" tetragonal equality:
.... 2 + 2 = 2 * 2 = 2 ^ 2 = 2 # 2 = ..... 2 [s] 2 .... = 4 !!!!!!

Yes, but Gianfranco ( Wink ), all those laws follow from the mother law:
2[s+1]2 = 2[s](2[s+1]1) = 2[s]2 = 4, only for 2[s+1]1=2!
Not for s=0, as 2[1]1=3\( \neq \)2.

Asserting that a[0]a=a+2 is a bit like asserting that a[1]1=a.
Do you see the similarity?
Assume we had the hierarchy beginning at multiplication [2] and we try to find the operation below [2], i.e. the addition [1].
Then you see, oh, a[n]1=a for all n>1, hence you would be tempted to propose that a[1]1=a though this contradicts the mother law:
1\( \neq \)2=1[2]2=1[1](1[2]1)=1[1]1

In the same way you see, oh, a[n]a=a[n+1]2 for all n>0 and you are tempted to propose that a[0]a=a+2 though this contradicts the mother law (thanks for giving us this term!).

Quote:I have some doubts concerning its unicity, but this happens in the best families, such as the Euler's Gamma function as extension of the factorial.
Absolutely not, the Euler Gamma function is unique under the condition of logarithmic convexity.
However, I never saw any condition that would make KAR's zeration unique. I even showed that it is not unique under certain strong conditions and provided a counterexample (different from a[0]b=b+1).

Quote:The Neutral Elements - What somebody calls the left/right unit elements. Let us consider the following general functional equation, including the definition of a fixpoint of the type x = f(x):
x = a[s]x <---> x = a[s+1]oo, which means:
a = x /[s]x <---> a = x /[s+1]oo (inverses, of the root types)

The implementation of that for ranks 3, 2, 1, 0 gives:
x = a^x <---> x = a#oo
a = x-rt x <---> a = oo-srt x = x^(1/x) (no neutral element for rank 3)

x = a*x <---> x = a^oo
a = x/x = 1 <---> a = oo-rt x = x^0 = 1

x = a+x <---> x = a*oo
a = x-x = 0 <---> a = x/oo = 0

x = a°x <---> x = a+oo
a = xçx <---> a = x-oo = -oo (ç stands here for the delta symbol, inverse of "°")

But even the neutral elements don't fall from the sky; they can also be derived from the mother law, though then you see the real law:

If \( \lim_{t\to\infty} \) x /[n+1] t = a then a[n]x=x[n+1]1 (or equivalently (x[n+1]1) /[n] x = a).

And a=x/[n]x only for n>0, because there x[n+1]1=x! You see, even though I didn't intend it, that this is exactly the erroneous conclusion/induction to go from x[m]1=x for all m>1 to x[1]1=x, if you assert that also a[0]x=x instead of a[0]x=x[1]1=x+1 (setting n=0 in the above law).

Derivation of the above law:
a[n]x =\( \lim_{t\to\infty} \)(x /[n+1] t) [n] x

(x /[n+1] t) [n] x = (x /[n+1] t) [n] ((x /[n+1] t)[n+1]t) = (x /[n+1] t) [n+1] (t+1) = x [n+1] 1

then a[n]x=\( \lim_{t\to\infty} \) (x[n+1]1)=x[n+1]1.

Conclusion: each mentioned pillar (without the incorrectly made generalization to zeration) is a (mathematically strict) consequence of the mother law. The general forms of the pillars are:

Pillar 3 - The Hyper-roots.
If \( y \) is the limit of the sequence \( y_{n+1}=(y_n[s-2](y_n[s-1]\backslash x)) /[s-1]2 \) then \( y[s-1](y[s-1]1)=x \).
If (and only if) \( y[s]0=1 \) then \( y=x/[s]2 \).

So the conclusion that \( y_{n+1}=(y_n[0](y_n [1]\backslash x)) /[1] 2 \) tends to \( x/[2]2 \) is wrong, as \( y[2]0=0 \).

Pillar 4 - The Hyper-means.
The general law is a[s]2=a[s-1](a[s]1).
If (and only if) a[s]1=a then a = (a[s-1]a)/[s] 2.

This is not satisfied for s=1, a[1]1=a+1\( \neq \)a, and hence the conclusion a=(a[0]a)/[1] 2 is wrong.

Pillar 5 - The hyper root limits
If \( \lim_{t\to\infty} \) x /[n+1] t = a then a[n]x=x[n+1]1 (or equivalently (x[n+1]1) /[n] x = a).
If (and only if) x[n+1]1=x then a[n]x=x.

x[1]1\( \neq \)x hence the conclusion that a[0]x=x is wrong.

In turn, all the pillars you mentioned completely *support* the definition a[0]b=b+1.

Gianfranco, I know you and KAR have invested a lot in the development of your zeration. But sometimes it's just time to let go.
#40
Thank you for your strong destructive testing of my extremely long stipulations. Unfortunately, I was very busy and I didn't have time enough to make them shorter. In particular
bo198214 Wrote:
GFR Wrote:I hope that everybody would be patient enough to read it, without ... fainting,
At least I do it! Smile
Do you mean ... fainting? Oh, no! Try to bear with this strange situation. I told you that you would become nervous hearing about zeration. But you insisted so much. By the way, did you mean "at least" or "at last"?

bo198214 Wrote:You cannot supplement the mother law; it alone prescribes that a[0]b=b+1 for all b.
My problem is that "for all b" means also "for any a", which, as you have seen, disturbs me a lot. In fact, in my opinion, this would mean that zeration does not exist at all, as a binary operation. And this would be very sad, at least for me.

bo198214 Wrote:And no wonder (... that the Ackermann Function ... may) ... lead also here to the conclusion that a[0]b=b+1.
So dry? Only to that, for any "a"? Are you sure? Unfortunately, I am not. I always understood "b+1" as a starting point in the definition of AF and not as a conclusion. But, ... Wink Henryk, I am not a recursivist and, therefore, I might be wrong.

bo198214 Wrote:y = x /[s]2 ---> y <= (y[s-1](y[s]\ x)) /[s-1]2
... should read (y[s-2](y[s-1]\ x)) /[s-1]2 right?
Right! Sorry!
bo198214 Wrote:So let us see why this formula actually computes the value of x/[s]2.
It is an iteration formula, so if it has a limit y then
y = (y[s-2](y[s-1]\ x)) /[s-1]2

If we further assume that all arguments are in their bijectivity domains, this formula is equivalent to:
y [s-1] 2 = y [s-2] ( y [s-1]\ x )
y [s-2] (y[s-1]1) = y [s-2] (y [s-1]\ x )
y[s-1]1=y [s-1]\ x
y [s-1] (y[s-1] 1) = x

And now under the assumption that y[s]0=1:
y [s] 2 = y [s] (1+1+0) = y[s-1](y[s-1](y[s]0)) = x
y = x [s]/ 2
Which, of course, should read y = x/ [s]2. You see? In the best families! We just invented it and we still don't have enough experience with it. But, ... Wink ... Henryk! This is the Slash Algebra. I have to think about these very nice manipulations, sleep on them, and tell you later what I think. Nevertheless, for the moment, it seemed to me a very correct ... demonstration.

bo198214 Wrote:However the assumption is wrong for s=2, y[2]0=0.
Yes, it is wrong, but I don't see the point. Let me read the entire thing again and again; I shall answer you ... later. For the moment, I just wish to note, for our further discussions, that we indeed have:
y = b[4]1 = b .... and .... y = b[4]0 = 1
y = b[3]1 = b .... and .... y = b[3]0 = 1
y = b[2]1 = b .... and .... y = b[2]0 = 0
y = b[1]1 = b+1 . and .... y = b[1]0 = b

So, we should not be surprised reading:
y = b[0]1 = b+1 .... for b>1 ... !
y = b[0]0 = b+1 .... for b>0 ... !
y = b[0]b = b+2
y = 2[0]2 = 4 ....... for b=2 ... !!!
y = 0[0]0 = 2 ....... for b=0 ... !!!!!!

bo198214 Wrote:
Quote:Konstantin Rubtsov (Rubcov) knows a complicated, but very "clean" demonstration of the commutativity of zeration,
However he did not cleanly state from what laws this commutativity follows. Surely not from the mother law.
Surely not. It is based on metamathematical reasoning.

bo198214 Wrote:
Quote:Pillar 4 - The Hyper-means.
The hyper means (a[n]a)/[n+1] 2 = a
follow directly from the assumption that a[n]a=a[n+1]2 which is equivalent to a[n+1]1=a.
Stop a moment, please! I don't believe in a generalisation of a[n+1]1 = a. In fact, for n=0, a[1]1 = a+1, so a[1]1 = a is wrong!

bo198214 Wrote:Asserting that a[0]a=a+2 is a bit like asserting that a[1]1=a.
Do you see the similarity?
Not really! At level a[s]1, we have different behaviours. In fact, as we have seen:
a[2]1 = a*1 = a
a[1]1 = a+1 (and that's it!)
while:
a[2]a = a*a = a^2
a[1]a = a+a = a*2

bo198214 Wrote:.........
The general law is a[s]2=a[s-1](a[s]1).
If (and only if) a[s]1=a then a = (a[s-1]a)/[s] 2.
This is not satisfied for s=1, a[1]1=a+1\( \neq \)a, and hence the conclusion a=(a[0]a)/[1] 2 is wrong.
.........
If \( \lim_{t\to\infty} \) x /[n+1] t = a then a[n]x=x[n+1]1 (or equivalently (x[n+1]1) /[n] x = a).
If (and only if) x[n+1]1=x then a[n]x=x.
x[1]1\( \neq \)x hence the conclusion that a[0]x=x is wrong.

Gianfranco, I know you and KAR have invested a lot in the development of your zeration. But sometimes it's just time to let go.
These are two important points, based on the Mother Law, and friendly advice, which I shall seriously take into consideration. But, Henryk, the problem is that the formula mentioned in Pillar 4 may work, and the choice of -oo as the neutral element of zeration (as we defined it) is shared by other researchers in this field. Would that mean that the Mother Law is not completely right (or ... sufficient)? I don't dare to think of that, despite the fact that we (KAR & GFR) admitted this possible limited catastrophe as a working hypothesis! What a life! Wink

However, I agree with you that this controversial subject is outside the scope of the present Forum, and I suggest that we perhaps stop our discussion here. Perhaps, later, we shall have more arguments to finally destroy or ... resuscitate it. Thank you for your attention.

Gianfranco

