Notations and Opinions
#1
A remark Gottfried made about the matrix transpose notation got me thinking about notation in general. I really hate arrow notation for hyper-operators, but it is also the most common and well-known notation. I also don't like dot or prime notation (f') for derivatives; I prefer Leibniz notation (although some people confuse it with division).

So what is the relationship between notation and our feelings? When we want to express something and communicate an idea, the failure to do so can make us feel bad. On the other hand, the aesthetics of a particular notation can also make us feel bad. So if we were to express ideas in a notation that no one understands but that looks good, then we would feel good and other people would suffer (this seems selfish). Conversely, if we were to use common notations that make us feel bad but help other people understand, then we suffer, but others will be able to understand (this seems more selfless). In the grand scheme of things, I think it is better for one person to suffer now so that hundreds (or billions, I don't know) of people can understand what is written for a long time into the future.

A while back I wrote a questionnaire that I never sent. I was going to email it to everybody I knew studying tetration. Since this forum was started, many of the questions I had are no longer worth asking. Some of the questions remain, however, and I don't have a recommendation for them. Hyper-operator notation is most commonly expressed with Knuth's arrow notation (and the Bromer-Mueller arrow can help extend Knuth's arrow notation to mixed hyper-operators). The notation system developed by Barrow, Shell, Thron, and several others is already well suited to expressing nested exponentials, but there is no standard for which letter is used (although I prefer T). The notation for iteration usually involves some kind of superscript (with optional decorators). Tetration is usually written with either iterated exponential notation (a combination of exponential notation and iteration notation), nested exponential notation, hyper-operator notations, or the notation exclusively devoted to tetration: Mauerer's left-superscript notation.

With all of these things worked-out, I still would like to know:
  1. Opinions on Romerio's box notation.
  2. Opinions on combining (Knuth arrow) and (Bromer-Mueller arrow).
  3. Opinions on the terms hyper-operator / hyper-operation.
  4. Opinions on the notation of iterated powers.
  5. Opinions on the notation of auxiliary super-roots.
  6. Opinions on the notation of hyper-logarithms and hyper-roots.
  7. Opinions on the Greek letter used for nested exponentials (E or T).
  8. Opinions on the "decorators" used with iteration notation.
  9. Opinions on Mauerer's left-superscript notation.

In general I feel that notations should be extensible: for hyper-operator notations, you should be able to write an n (with a bracket) instead of repeating the symbol n times (both Arrow and Box notations provide this). There should also be provisions for commonly used inverse functions (only box notation provides for hyper-logs and hyper-roots). And there should not be any confusion with existing notations (this applies to Knuth/Bromer-Mueller confusion, as well as E/Sigma confusion), although it could be said that left-superscript notation is too confusing for dyslexic people. Lastly, whatever notations are used in the FAQ, the collection should be consistent; ideally we should use Occam's razor to decide notations, but history has a way of circumventing this.

One simplification that was something of an epiphany was Jay D. Fox's recommendation for the nth tetrate, which in my mind consolidated both iteration and tetration terminology. In this post, we came to the conclusion that for any binary operation called X-ation (just as an example), one of its one-variable functions can be called the nth X-ate and the other one-variable function can be called an X-ational. Although each term must normally be assigned on a case-by-case basis, in the case of hyper-operations we can assign all of them at the same time, since they are all defined by the same pattern in the first place. This not only simplifies tetration terms, but also the terms for pentation and beyond. It even makes it easier to talk about higher operations, like "iterated pentationals" and "nested hexationals".

I suppose I'm really thinking about the FAQ while I'm doing this. Anyone can use any notation, in any paper, in any journal (UXP is proof of this). So what I'd really like to know is not what I should be using (I will only use T, not E), but what the most common, accepted, unambiguous terms and notation would be for the FAQ, so that minimal confusion results. I personally think that the most confusing aspect of all of these notations is iteration notation.

I have seen several notations used for iteration:
  • \( f_n(x) \) -- Many older texts on iteration.
  • \( f^n(x) \) -- 90% of modern texts on iteration.
  • \( f^{(n)}(x) \) -- Galidakis, can be confused with derivatives.
  • \( f^{[n]}(x) \) -- Campagnolo et al., and myself.
  • \( f^{<n>}(x) \) -- Aldrovandi.
  • \( f^{\circ n}(x) \) -- Trappmann.
  • \( f(n, x) \) -- some people
  • \( f(x, n) \) -- some people
  • \( (f \circ)^{n}(x) \) -- no one, but it makes use of sections \( (a \times) \ :\ b \rightarrow a \times b \).
  • \( (f {\uparrow}{\circ} n)(x) \) -- no one, but uses the Bromer-Mueller arrow.
but most of these do not follow a rule. Ideally we should use the (Bromer-Mueller) arrow operation to repeat the composition operator (the last notation), but since I have never seen it used, I'm assuming it's a bad idea.
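To make the rule concrete, here is a minimal Haskell sketch (the name compN is invented) of what "repeat the composition operator n times" computes, namely \( f^{[n]} \) as a fold over composition:

```haskell
-- A small sketch of f^[n]: compose f with itself n times (compN 0 f is the identity).
compN :: Int -> (a -> a) -> (a -> a)
compN n f = foldr (.) id (replicate n f)

-- Example: compN 3 (2 ^) 1 == 2 ^ (2 ^ (2 ^ 1)) == 16, i.e. exp_2 iterated three times.
```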

The last example reminds me of Large numbers on Wikipedia. This article and related articles use section notation extensively (incomplete infix expressions are called sections in Haskell). Although this is quite intuitive for some people, there are not many places that talk about this kind of notation. Sections are the easiest way of writing powers and exponentials (if you don't want to use "pow" and "exp"), but there is an implicit convention that this is a shortcut for a function mapping, and as such, it should be treated as a notation rather than something that is assumed the reader knows. However, I would argue that section notation is far more extensible than using mnemonics like "pow", "exp", "spow", "sexp", since these can all be written \( ({\uparrow}n) \), \( (b{\uparrow}) \), \( ({\uparrow}{\uparrow}n) \), \( (b{\uparrow}{\uparrow}) \) respectively. What bothers me about this is that if we were to be consistent (which is something I am very concerned about), then we should also use this notation for exponentials, which up to this point have been written with "exp".
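To illustrate how far sections carry (a sketch, not standard library code; the names are invented, and Haskell's Prelude already uses ^^ for something else, so it is hidden here), the four mnemonics above really are just sections of an exponentiation operator and a tetration operator:

```haskell
import Prelude hiding ((^^))

-- Integer tetration, right-associated: b ^^ n = b ^ (b ^ (... ^ b)) with n copies of b, and b ^^ 0 = 1.
infixr 8 ^^
(^^) :: Integer -> Int -> Integer
b ^^ n = foldr (^) 1 (replicate n b)

-- The mnemonics as sections, in the spirit of (b↑), (↑n), (b↑↑), (↑↑n):
expBase2 :: Integer -> Integer
expBase2 = (2 ^)        -- (b↑)  : x -> 2^x,  i.e. "exp_2"

square :: Integer -> Integer
square = (^ 2)          -- (↑n)  : x -> x^2,  i.e. "pow_2"

sexpBase2 :: Int -> Integer
sexpBase2 = (2 ^^)      -- (b↑↑) : n -> 2^^n, i.e. "sexp_2"

thirdTetrate :: Integer -> Integer
thirdTetrate = (^^ 3)   -- (↑↑n) : b -> b^^3, i.e. "spow_3", the third tetrate
```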

One advantage of using section notation is that this is currently the only way to represent hyper-logarithms and hyper-roots with Knuth's arrow notation. Although you could represent them very easily in box notation, with Knuth's arrow notation this would be accomplished through \( (b{\uparrow}{\uparrow})^{-1}(x) = \text{slog}_b(x) \) and \( ({\uparrow}{\uparrow}n)^{-1}(x) = \text{srt}_n(x) \) respectively. With this notation we could even talk about hyper-logarithms and hyper-roots in the FAQ, since we would be using the standard arrow (rather than box notation or mnemonic notation, which would have to be introduced). I believe that this is very consistent and would cause the least confusion.
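To show that these inverses are also computable in the same sectional spirit, here is a rough numeric sketch (names and tolerances are invented; it assumes real b, x >= 1 and integer height n >= 1, where \( b{\uparrow}{\uparrow}n \) is increasing in b): the super-root \( \text{srt}_n(x) = ({\uparrow}{\uparrow}n)^{-1}(x) \), found by bisection on the base.

```haskell
-- tet b n ~ b ↑↑ n for real b >= 1 and integer n >= 0 (with b ↑↑ 0 = 1).
tet :: Double -> Int -> Double
tet b n = iterate (b **) 1 !! n

-- superRoot n x ~ srt_n(x): the b >= 1 with tet b n == x, by bisection (assumes x >= 1, n >= 1).
superRoot :: Int -> Double -> Double
superRoot n x = go (100 :: Int) 1 (max 1 x)
  where
    go 0 lo hi = (lo + hi) / 2
    go k lo hi
      | tet mid n < x = go (k - 1) mid hi
      | otherwise     = go (k - 1) lo mid
      where mid = (lo + hi) / 2

-- Example: superRoot 2 27.0 is the super square root of 27, about 3.0 (since 3 ** 3 == 27).
```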

To summarize, the notations available for expressing tetration-related ideas range from boxes, arrows, chained arrows (Conway-Guy), map-arrows ("\mapsto" looks different than "\rightarrow"), symbols (for ssqrt), mnemonics (srt/slog/spow/sexp), left superscripts, towers (nested exponentials), iteration, and sections, to the more obscure notations tied to specific authors, like uxp and Campagnolo's \( b^{[n]}(x) \) for iterated exponentials. But for the FAQ, I think we should limit the notations to Arrow, Iteration, Section, and Tower notations. These are enough.

Andrew Robbins

PS. Attached is the original questionnaire I never sent.


Attached Files
.txt   questionnaire.txt (Size: 13.37 KB)
#2
Pheww - that's a lot of jungle. Anyway: thanks for that collection of information.

I've not much to say: I second, concerning differentiation, that the Leibniz notation has its advantages over the f' notation. But this reminds me of another distinction: in difficult problems, for instance when I'm searching for a solution, the Leibniz notation sometimes helps, because you can even see possibilities for cancelling, while for a fluently written text the shorter one, f', is more convenient. So we have another demand, which possibly leads to different notations as well.

Also I would like to refer to one consideration of mine concerning the left-down subscript notation. Well - I didn't push it after I proposed it, but often, when I read articles now, I feel uncomfortable with the notation a1^a2^a3^...^an - it is a bit misleading, since we evaluate it beginning from an and *not* from a1, so in fact it should at least be rewritten as an^...^a3^a2^a1. The left-down subscript would prevent this inconsistency inherently.
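A tiny sketch (invented names, integer entries assumed) to make the evaluation order explicit; Haskell's ^ happens to be right-associative, matching the tower convention:

```haskell
-- Evaluate a list of exponents as a tower, read from the right, versus read left-to-right.
towerRight, towerLeft :: [Integer] -> Integer
towerRight = foldr1 (^)   -- a1 ^ (a2 ^ (... ^ an)): the usual tower convention
towerLeft  = foldl1 (^)   -- ((a1 ^ a2) ^ ...) ^ an: what a naive left-to-right reading suggests

-- towerRight [2,3,2] == 2 ^ (3 ^ 2) == 512, but towerLeft [2,3,2] == (2 ^ 3) ^ 2 == 64.
```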

Also, for ASCII notation, which is always needed, I first preferred the {base,x}^^iteration notation; but to make it fit better into the usual binary-operator scheme of most of our formulae, I tried
x {operator,base}^iterator , which can then easily be concatenated and which I feel is also a bit intuitive (possibly with ° instead of ^; although ^ already has a certain common understanding as iteration, ° has much more). So x {^^,b}°h can be concatenated to
x {^^,b}°h {^^,b}°k = x {^^,b}°(h+k)
and -in specialized texts- this may be extended to other operators
x {+,b}°h = x + b*h , x {*,b}°h = x * b^h , x{^,b}°h = x^b^h ,
x {^^,b}°h = b^b^b^...^x and for higher operators without specific symbols this can easily be extended to a general hierarchy
x {+,b}°h = x {o1,b}°h
x {*,b}°h = x {o2,b}°h
x {^,b}°h = x {o3,b}°h // if this is really needed
x {^^,b}°h = x {o4,b}°h
...
but - well, this adds again to the present jungle, and doesn't reflect the need for appropriate notation for inverse operations. So ...
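Read operationally (a small sketch with invented names, integer heights and non-negative exponents assumed), the scheme above says that x {op,b}°h applies the base-b step of the given operator h times to x, and the concatenation rule for h + k falls out of composing the same step:

```haskell
-- opIter rank b h x ~ x {o_rank, b}°h : apply the base-b step of the given rank h times to x.
opIter :: Int -> Integer -> Int -> Integer -> Integer
opIter rank b h x = iterate step x !! h
  where
    step y = case rank of
      1 -> y + b        -- x {+,b}°h  = x + b*h
      2 -> y * b        -- x {*,b}°h  = x * b^h
      3 -> y ^ b        -- x {^,b}°h  = x^(b^h)
      4 -> b ^ y        -- x {^^,b}°h = b^b^...^x  (h copies of b)
      _ -> error "only ranks 1..4 sketched"

-- Concatenation: opIter r b h (opIter r b k x) == opIter r b (h + k) x, matching {^^,b}°h {^^,b}°k above.
```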

The term "nested" is usually be taken for more complicated structures (like trees), and I would avoid it, if only "repeated" , "concatenated", "sequenced" is meant, like working through a linear list of sequential operations. Where I would use "repeated" if the same base is taken (as in tetration) and "concatenated" or "sequenced" or the like, if the bases are varying/undetermined.

So much in short - it's also a bit early in the morning...

Gottfried
Gottfried Helms, Kassel
#3
Dear Andrew!

Your idea of a forum survey on terminology and symbols is excellent. I should say: essential. Unfortunately, my TeX reader doesn't work any more. It will work again ... next week. Could you please, in the meantime, post a pdf version of it? Thanks in advance. For the moment, to show my interest in this matter, I am sending you (again ... !) the attachment that I posted in August 2007, I presume. That is my position on terminology.

I should like to add or reiterate:
- we need general logotypes for hyper-operations (boxes, for example), as well as for hyper-logs and hyper-roots and, more particularly, for the super-roots (at the 4th, tetra, level);
- I don't like the arrows notation;
- for iteration, I accept the 90% standard but, if there is a need: Campagnolo's brackets.

But, I wait for your kind pdf format, if you can do it. Thanks again!

GFR


Attached Files
.pdf   Hyperoperations Terminology.pdf (Size: 60.49 KB)
#4
Maybe I should clarify. First, the "questionnaire.txt" that I attached is not TeX, it is just plain text. Second, the questions in the attachment are simply there to show my thoughts at the time; many of those questions have been answered. So the questions that remain are items 1-9 of the first post. If you want, I can write these up as a PDF with multiple choice, but I felt that you all know what I'm talking about, so this did not seem necessary.

Also, now that I think about it, Gottfried brought up a good point. The term "nested" should be on the questionnaire as well, because I don't have any references for it (I might have made it up). However, I do remember reading about nested logarithms and nested radicals, so this might be where I got the term from. I don't remember. Anyway, other options for "nested" might be: "multiple", "recursive", "heterogeneous", or "N-ary". If you don't like the term "nested exponential", how do you feel about "N-ary exponential" or "N-ary tower"?

Andrew Robbins
#5
My humble opinion,

Wherever "nested" seems the right word, it should be preferred to any other, as it has a clear intuitive meaning.

Another question is about the whole concept of starting iteration from the end, from the right, as opposed to the usual direction. I think that is the most important thing that distinguishes tetration from other operations, and if analogues of such an approach could be constructed for any other operation or function, they would be very much of interest to study.

So that notion, iterating from the right (in my terminology, from the end - or perhaps it is actually the beginning?), is very important to clarify, to the extent that we can say e.g.

"Today we will look at convergent summations starting from the right" - and everyone knows what that means. Or, even better, divergent summation starting from the right. In the case of h(z) it is a clean case, but what about other divergent iterations? I think there is something to be discovered there as well.

Ivars

P.S. (of course): Have you ever wondered why some nations write from right to left, which seems so unnatural given that we are a mostly right-handed species?
#6
OK! Thanks. The "text" display of the ... text, on my computer, seemed rather fuzzy and difficult to read. Now I understand. It's not necessary to present it in pdf.

By the way, "nest" is successfully used in Mathematica and, perhaps, in other software packages. "Recursive" has an excellent and hiher level press. "N-ary" is modular and adaptable to various situations. Perhaps "N-th", like the "N-th" derivative ("1-st", "2-nd", "3-rd", ... "5-th, ...). "The n-th function of x", like "the n-th logarithm, base e, of x". Why not! But, my mother tong, as everybody can verify, is not English.

GFR

(@ IVARS: there are also texts written in both directions in the same display (I think some texts in Ancient Egyptian, Etruscan, Ancient Latin, Ancient Greek, etc ...), with a method called "boustrophedon" i.e. "to turn around like oxen". Perhaps the writers were just ... lazy!)
#7
GFR

Perhaps they were lazy - who can tell? But nevertheless this property of tetration has to be well described and clearly notated, otherwise you will always need an omnipresent page of explanations that the n=3 tetration of 3 means

3^(3^3), not (3^3)^3.

Ivars
#8
Right you are! Nevertheless, I always assumed that tetration was:
- 3-tetra-3 = 3#3 = 3^(3^3) = 3^27 = 7.62559748... x 10^12; and not:
- (3^3)^3 = 3^(3x3) = 3^(3^2) = 3^9 = 19683 [a ... collapsing tower].

This above-mentioned assumption, in my opinion, is to be considered valid for the whole hyper-operation hierarchy, because it defines an "elementary" operation. For level 4, in my opinion, right-priority towers are the correct definition of tetration. "Left-priority towers" ("left tetrates") are not elementary operations. In fact:
- (((a^a)^a)...)^a [n copies of a] = a^(a^(n-1)) [an inhomogeneous, collapsed tower]

But, of course, you are right. We must adopt a convention (that one, ... in my opinion), otherwise it would be impossible to talk of y = e # x = e-tetra-x = sexp x and/or of the superlog.
Various other bracketing conventions, like a^(a^(a^a))^a, can be dealt with by the fully flexible Reihenalgebra (see Henryk's methodology).
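A quick numeric check of that collapsing identity (a small sketch with invented names, assuming integer a and n >= 1):

```haskell
-- The left-associated tower of n copies of a, and its collapsed form a^(a^(n-1)).
leftTower :: Integer -> Int -> Integer
leftTower a n = foldl1 (^) (replicate n a)   -- (((a^a)^a)...)^a

collapsed :: Integer -> Int -> Integer
collapsed a n = a ^ (a ^ (n - 1))

-- leftTower 3 3 == collapsed 3 3 == 19683, while the right tower 3^(3^3) == 7625597484987.
```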

GFR
#9
Ivars Wrote:Another question is about the whole concept of starting iteration from the end, from the right, as opposed to the usual direction. I think that is the most important thing that distinguishes tetration from other operations, and if analogues of such an approach could be constructed for any other operation or function, they would be very much of interest to study.

So that notion, iterating from the right (in my terminology, from the end - or perhaps it is actually the beginning?), is very important to clarify, to the extent that we can say e.g.

What you are talking about only applies to binary operations. Integer iteration of one-variable functions is always unique, no matter what. However, when discussing binary operations (which are two-variable functions), there is more than one way to construct a one-variable function from a two-variable function. This is where all the possibilities come from: from binary operations, not from iteration. The iteration of a binary operation can be described as right-iteration or left-iteration, and each of these is well defined. With right-iteration of the binary operation B we get (xB(xB(xB...B(xBy)))), which actually has 3 parameters (x, y, and n -- the iterator). With left-iteration of the binary operation B we get ((((xBy)B...By)By)By), which also has 3 parameters (x, y, and n).
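As a minimal sketch of these two constructions (invented names; the binary operation is passed in as a function, and the three parameters x, y, n are made explicit):

```haskell
-- Right- and left-iteration of a binary operation b, with iterator n.
rightIter, leftIter :: (a -> a -> a) -> Int -> a -> a -> a
rightIter b n x y = foldr b y (replicate n x)   -- x B (x B (... B (x B y)))
leftIter  b n x y = foldl b x (replicate n y)   -- ((((x B y) B y) ...) B y)

-- rightIter (^) 3 2 1 == 2 ^ (2 ^ (2 ^ 1)) == 16    (right-iterated exponentiation: tetration-like)
-- leftIter  (^) 2 3 3 == (3 ^ 3) ^ 3       == 19683 (left-iterated exponentiation: iterated powers)
```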

What the Bromer-Mueller arrow allows (which Henryk mentions also) is that you can alternate between left-iteration and right-iteration however you want, and this is where you get things like iterated powers (left-iterated exponentiation or lower-tetration), and iterated iterated powers (left-iterated left-iterated exponentiation, or lower-pentation). This gives a binary tree of hyper-operators from exponentiation, which I call mixed hyper-operators. Each of the hyper-operators in this binary tree can be described by a rank (like GFR uses), but instead of being unique by rank, there is 1 rank-2 operator, 1 rank-3 operator, 2 rank-4 operators (tetration and lower-tetration), 4 rank-5 operators, 8 rank-6 operators, 16 rank-7 operators, and so on. Each time you have a choice of left-iteration or right-iteration, so the number of operators doubles. The terminology I prefer uses pure iteration rather than right/left-iteration, which means right-iterated exponentiation = iterated exponentials, and left-iterated exponentiation = iterated powers.

Now iteration aside, there are other ways to associate binary operations. If you consider expressions rather than left-iteration and right-iteration of a binary operation, then there is no implied associativity. With no implied associativity, you could have expressions like (aB((bB(cBd))Be)) or something like that. A special case of these when all the elements are the same is what Henryk considers, I think. What this reduces to is a set of hyper-operators on binary trees, which is mind-bending and very hard to cope with. These operators are much more general than mixed hyper-operators, and they should not be confused at all. One thing I noticed that I included in the huge-FAQ is that Henryk calls this "repeated exponentiation" which I believe is very appropriate.
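One way to see how much wider that class is (a tiny sketch with invented names): with no implied associativity, an expression in a single binary operation is just a binary tree of values, and right- or left-iteration are only two special shapes of it.

```haskell
-- An expression tree over one binary operation: leaves are values, nodes are applications of B.
data Expr a = Leaf a | Node (Expr a) (Expr a)

evalWith :: (a -> a -> a) -> Expr a -> a
evalWith _ (Leaf x)   = x
evalWith b (Node l r) = evalWith b l `b` evalWith b r

-- evalWith (^) (Node (Leaf 2) (Node (Leaf 3) (Leaf 2))) == 2 ^ (3 ^ 2) == 512
```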

Andrew Robbins
#10
andydude Wrote:What you are talking about only applies to binary operations. Integer iteration of one-variable functions is always unique, no matter what. However, when discussing binary operations (which are two-variable functions), there is more than one way to construct a one-variable function from a two-variable function.

Andrew Robbins

Well, I will have to read it many times; most likely you have once again given a good explanation - which includes much more than I could have imagined when asking the question.

Sometimes I feel that with pentation etc. we are making the same circle on a plane above some area, just 1 km higher. Just as scales are relative (if they are), you have the option to describe all scales by an algorithm (somehow manipulating infinities and infinitesimals logically consistently), or to describe all scales in all their complexity - and then you end up with an infinite expansion of names, branches, notations, etc. Just intuitively, despite Gödel's theorem, I cannot see math as not being possible to close logically while allowing it to develop, because of undetermined symbols like infinity, etc.



