Operator definition discrepancy?

cacolijn Junior Fellow Posts: 2 Threads: 1 Joined: Jan 2013
01/06/2013, 05:03 PM

When looking at the commonly used definition of zeration, there are three rules:

• a [0] b = a + 1 for a > b
• a [0] b = b + 1 for a < b
• a [0] b = a + 2 = b + 2 for a = b

The last rule is needed to get a [0] a = a + 2. It also makes the function y = a [0] c discontinuous at a = c. This third rule therefore seems a bit of a hack to me, needed to make zeration fall in line with the rest of the hyperoperation sequence.

However, if the definitions of the other operators are looked at more closely, they do not seem to be defined consistently with the + operator. The expression "a * b" is taken to mean "take b a's and put plus signs between them". This also leads to "a * 1" meaning "take just 1 a", which is of course just the value "a". In the same vein we would expect "a + 1" to mean "take 1 a without any zeration operators between them", which would again be just the value "a" - an uncomfortable set-up.

Another way to approach this discrepancy between the definition of + and the higher operators is to indeed take "a [0] a" to actually mean "a + 1", and to redefine the higher operators instead. A more consistent way would then be to define "a [N] b" as "take b [N-1] operators and put a's around them". This does lead to "a * 1" having the value 2a, so it gets weird fast from there. A benefit, however, is that the identity value for all operators of rank N > 0 is 0, and e.g. "c = a * b" means c will be < a if b < 0 and > a if b > 0, which also seems nicer to me.

I haven't really looked at the further implications of these proposed alterations to the standard operators - maybe working it all out will make the new set-up fail miserably. I lack the knowledge to get deeper into, e.g.,
deriving the correct inverse operators in the new situation (/, log, etc.) or deriving a continuous extension of an altered to-the-power-of operator, but this angle does look interesting with regard to setting up the correct rules for zeration.

All insights appreciated!
Carl Colijn

mike3 Long Time Fellow Posts: 368 Threads: 44 Joined: Sep 2009
01/06/2013, 11:28 PM

(01/06/2013, 05:03 PM)cacolijn Wrote: When looking at the commonly used definition of zeration, there are three rules: [...]

A much simpler definition of zeration is just $\mathrm{zer}_a(b) = b + 1$. Then we can write out the next operations in the sequence as

$a + b = \mathrm{add}_a(b) = \mathrm{zer}_a^b(a)$
$a * b = \mathrm{mul}_a(b) = \mathrm{add}_a^b(0)$
$a^b = \mathrm{exp}_a(b) = \mathrm{mul}_a^b(1)$
$^b a = \mathrm{tet}_a(b) = \mathrm{exp}_a^b(1)$
$a \uparrow \uparrow \uparrow b = \mathrm{pen}_a(b) = \mathrm{tet}_a^b(1)$
...

That is, you think of the nth operation as applying the previous operation b times, with a "base" of a, to some starting value - which for zeration to build addition is *a* and not a constant 0 or 1.

The big problem with the idea you mention is that we lose algebraic identities. Namely, addition and multiplication in the usual sense satisfy

$a + b = b + a$
$(a + b) + c = a + (b + c)$
$ab = ba$
$(ab)c = a(bc)$
$a(b + c) = ab + ac$
$(b + c)a = ba + ca$ (equivalent to the previous, given commutativity)

This alternate definition of multiplication, which we'll denote here by $a \otimes b$, is equivalent to $a \otimes b = a(b+1)$. It fails all three laws - commutativity, associativity, and distributivity - e.g. $b \otimes a = b(a+1) = ba + b \ne ab + a = a(b+1) = a \otimes b$.

cacolijn Junior Fellow Posts: 2 Threads: 1 Joined: Jan 2013
01/07/2013, 09:13 AM

(01/06/2013, 11:28 PM)mike3 Wrote: A much simpler definition of zeration is just $\mathrm{zer}_a(b) = b + 1$.

Hi Mike,

It's not that I'm on a crusade here or something, so take my reactions below only as a thought experiment.
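Mike's iterated-function construction can be checked numerically. A minimal sketch in Python, with `iterate`, `zer`, `add`, `mul`, and `power` as illustrative names of my own (restricted to non-negative integer iteration counts, which the construction assumes):

```python
def iterate(f, n, x):
    """Apply f to x, n times (n a non-negative integer)."""
    for _ in range(n):
        x = f(x)
    return x

def zer(a):
    """Zeration per mike3: zer_a(b) = b + 1 (the base a is ignored)."""
    return lambda b: b + 1

def add(a):
    """a + b = zer_a^b(a): start from a and increment b times."""
    return lambda b: iterate(zer(a), b, a)

def mul(a):
    """a * b = add_a^b(0): start from 0 and add a, b times."""
    return lambda b: iterate(add(a), b, 0)

def power(a):
    """a^b = mul_a^b(1): start from 1 and multiply by a, b times."""
    return lambda b: iterate(mul(a), b, 1)

print(add(3)(4), mul(3)(4), power(3)(4))  # 7 12 81
```

Note how each rank hands a different starting value to `iterate` (a, then 0, then 1), which is exactly the inconsistency the thread is discussing.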
What I find unsatisfying about this definition is that zeration then really is just a unary operator. Of course, increment can hardly be binary, but still...

(01/06/2013, 11:28 PM)mike3 Wrote: That is, you think of the nth operation as applying the previous operation b times, with a "base" of a, to some starting value [...]

You could also say that the different constants needed here indicate that the 'old' system is defined improperly. When taking $a * b$ to mean "take b plus signs and put a's around them", you always end up with 0 as the constant (apart from zeration, maybe). After all, $a * 1$ means "take just 1 a and skip the adding", while $a + 1$ would in the same vein mean "take just 1 a and skip the zerating", which would logically again be just "a" - this logical imbalance just rubbed me the wrong way. Of course, you could also say that $a [N] b = a [N-1] a [N-1] \dots [N-1] a [N-1] C_{N-1}$ (with the a's repeated b times), where the constant $C_{N-1}$ fixes up the imbalance, being 0 for "+" and 1 for everything higher up, but this seems more like a hack to me than a proper universal definition.

(01/06/2013, 11:28 PM)mike3 Wrote: The big problem with the idea you mention is that we lose algebraic identities. [...]

Indeed, this was also something I didn't like too much. You then have to give up e.g. commutativity, but you also gain something nice, e.g.:

$a [N] b > a$ for a > 0, b > 0
$a [N] b < a$ for a > 0, b < 0
$a [N] b = a$ for a > 0, b = 0

On the other hand, commutativity is something that is already lost at exponentiation and higher up, so only addition and multiplication benefit from it. If tossing it out allows you to get the framework straight, then it doesn't seem like too big a loss anymore.

Again, this was just a thought experiment and not a plea to redefine the basic building blocks of mathematics. The rules used to define the operators of rank 2 and up just seem illogical when trying to add zeration to the picture.

Kind regards,
Carl Colijn
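The alternate multiplication discussed in the thread ("b plus signs with a's around them", i.e. $a \otimes b = a(b+1)$) and its claimed properties can also be checked directly; `omul` is an illustrative name, not from the thread:

```python
def omul(a, b):
    """Alternate multiplication a [2] b: b plus signs with (b + 1)
    copies of a around them, i.e. omul(a, b) = a * (b + 1)."""
    return a * (b + 1)

# 0 acts as a right identity: "take just 1 a and skip the adding" gives a
print(omul(5, 0))               # 5

# "a * 1" gives 2a in this scheme, as cacolijn notes
print(omul(5, 1))               # 10

# Commutativity fails, matching mike3's counterexample
print(omul(2, 3), omul(3, 2))   # 8 9

# For a > 0: the result exceeds a when b > 0 and falls below a when b < 0
print(omul(5, 2), omul(5, -2))  # 15 -5
```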

