generalizing the problem of fractional analytic Ackermann functions

Gottfried
Ultimate Fellow, Posts: 767, Threads: 119, Joined: Aug 2007
11/18/2011, 06:13 AM

(11/18/2011, 12:41 AM) JmsNxn Wrote: That's a good way to approach the question. I'm not too familiar with carleman matrices but I think it goes something like this (...)

Hi James -

Unfortunately I can't follow the above. But concerning tetration of Carleman matrices themselves there is a simple example: the Pascal matrix is the Carleman matrix for the operation of incrementing by 1, and its powers are the Carleman matrices for the operation of addition. I've worked out an example of how to tetrate the Pascal matrix; perhaps this is useful (and possibly generalizable). See http://go.helms-net.de/math/tetdocs/index.htm , go to the short statement at "pascal matrix tetrated" and open http://go.helms-net.de/math/tetdocs/Pasc...trated.pdf

Nearly everything I do in tetration is based on this concept of Carleman matrices, and I may be able to answer if you have some concrete questions. I did not know that name when I stumbled on the concept, so my first expositions of my fiddlings are mostly in threads headed "matrix-method"; they might be informative/helpful, though they are extremely exploratory and not well structured at the beginning.

Gottfried

Gottfried Helms, Kassel

JmsNxn
Long Time Fellow, Posts: 291, Threads: 67, Joined: Dec 2010
11/19/2011, 07:50 PM

(11/18/2011, 06:13 AM) Gottfried Wrote: (...)

Thanks Gottfried, I'll take a look. It may be in my interest to research more into how the matrix method actually works.

It seems the matrix method may have a potential solution. I stress the may. So far, from what I can make of it, it would require taking a limit, and an exponential continuum sum. In the sense that if $M$ is a Carleman matrix,

$T[M]_{n=0}^{R}\, g(n) = ((\dots(M[f]^{g(0)})^{g(1)})^{g(2)}\dots)^{g(R)}$

This isn't really tetration, but rather a type of left-handed tetration. This would require a continuum-sum-type object for fractional $R$ iff the following identity isn't true:

$(M[f]^a)^b = M[f]^{a \cdot b}$

which I thought might fail, since matrix multiplication isn't commutative. However, that seems to contradict how Carleman matrices work, because evidently

$M[f^{\circ t}(x)] = M[(f^{\circ \frac{t}{2}})^{\circ 2}(x)] = M[f^{\circ \frac{t}{2}}(x)]^2 = (M[f(x)]^{\frac{t}{2}})^2$

but we already know $M[f^{\circ t}(x)] = M[f(x)]^t$, so this would imply $(M[f]^a)^b = M[f]^{a \cdot b}$. I'm a little bit wishy-washy now.
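Gottfried's Pascal-matrix remark can be checked numerically. Below is a minimal sketch (not from the posts), assuming the convention that the Carleman matrix $M[f]$ has entry $(j,k)$ equal to the coefficient of $x^k$ in $f(x)^j$, truncated to a finite size; the truncation is exact here because all the matrices involved are lower triangular.

```python
from math import comb

N = 6  # truncation size

def matmul(A, B):
    # plain list-of-lists matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

def carleman_shift(c):
    # Carleman matrix of f(x) = x + c:
    # (x + c)^j = sum_k C(j, k) * c^(j-k) * x^k
    return [[comb(j, k) * c ** (j - k) for k in range(N)] for j in range(N)]

pascal = carleman_shift(1)  # the lower-triangular Pascal matrix = M[x + 1]

# powers of the Pascal matrix are Carleman matrices of addition:
assert matmul(pascal, pascal) == carleman_shift(2)                  # P^2 <-> x + 2
assert matmul(matmul(pascal, pascal), pascal) == carleman_shift(3)  # P^3 <-> x + 3
```

This is the identity $M[f]\,M[g] = M[f \circ g]$ specialized to shifts, where composition is just addition of the shift constants.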
But I think in general I just need to better understand most fractional iteration methods; I doubt this type of reasoning is unique to Carleman matrices. To my knowledge the matrix method doesn't quite make the cut (I don't remember why, though; does it fail for $b < e^{\frac{1}{e}}$?). Maybe if I could make sense of Kouznetsov's method of finding superfunctions I might be able to iterate that.

And if you're confused, you shouldn't be; this is fairly simple. It follows this reasoning: if $F\{ f \}(x)$ is the superfunction of $f(x)$, and $F\{ F \{ f \} \} (x)$ is the superfunction of $F \{ f \}(x)$ and the second superfunction of $f(x)$, so that in general $F^{t}\{ f \}(x)$ is the t'th superfunction of $f(x)$, then what is the value of $F^{\frac{1}{2}} \{ f \} (x)$?

There are plenty of ways to find the superfunction of $f$ and to define $F \{f \} (x)$, so I'll try to experiment with as many as possible. There should be one that sticks out, though, or is simplest. Carleman matrices were just the first.

JmsNxn
Long Time Fellow, Posts: 291, Threads: 67, Joined: Dec 2010
11/20/2011, 09:26 PM

Alright, so I finally uncovered the recurrence relation for superfunctions and half-superfunctions! It can linguistically be put forth as: the half-superfunction of the half-superfunction of f is the superfunction of f. Which to me now is a "no duhhh"; can't believe I missed it.

This is written using the following notation (I've switched from the diamond notation because it is confusing to use; I'll adopt this transformation notation instead):

$F^t \{ f \} ( G^t \{ f \} (x)) = x$

$F^t \{ f \} (x) = F^{t + 1} \{ f \} (G^{t+1} \{ f \} (x) + 1)$

$G^t \{ f \} (x) = F^{t + 1} \{ f \} (G^{t+1} \{ f \} (x) - 1)$

$F^0 \{ f \} (x) = f(x)$

Therefore $F^t \{ f \}(x)$ is the t'th superfunction of $f$ and $G^t \{ f \} (x)$ is the t'th Abel function of $f$.
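The four relations above can be sanity-checked on a concrete pair. The choices below are illustrative assumptions, not taken from the posts: $f(x) = x + e$ with superfunction $F^1(x) = e x$ (since $F^1(x+1) = e x + e = f(F^1(x))$) and Abel function $G^1(x) = x/e$.

```python
import math

E = math.e

f  = lambda x: x + E        # base function, playing the role of F^0
F1 = lambda x: E * x        # an assumed superfunction of f
G1 = lambda x: x / E        # its inverse, the Abel function

for x in [0.3, 1.0, 2.7]:
    # F^t(G^t(x)) = x
    assert math.isclose(F1(G1(x)), x)
    # F^0(x) = f(x) = F^1(G^1(x) + 1)
    assert math.isclose(f(x), F1(G1(x) + 1))
    # G^0(x) = f^{-1}(x) = F^1(G^1(x) - 1)
    assert math.isclose(x - E, F1(G1(x) - 1))
```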
And now we have the recurrence relation:

$F^m \{ F^n \{ f \} \} (x) = F^{m + n} \{ f \} (x)$

giving us what I put forth earlier, the half-superfunction of the half-superfunction of f is the superfunction of f:

$F^{\frac{1}{2}} \{ F^{\frac{1}{2}} \{ f \} \} (x) = F \{ f \} (x)$

This definition cannot fall victim to the method of disproof Tommy gave forth, because the transformation notation takes three arguments whereas the diamond notation only takes two.

Gottfried
Ultimate Fellow, Posts: 767, Threads: 119, Joined: Aug 2007
11/21/2011, 10:33 AM

(11/20/2011, 09:26 PM) JmsNxn Wrote: (...)

Hmm, this looks as if it is, in terms of Carleman matrices, a continuation of the diagonalization. Assume we already have a diagonalization of our Carleman matrix B for (decremented) exponentiation dxp°h(x), $B = W * D * W^{-1}$, and for the h'th iteration use the h'th power of D, $B^h = W * D^h * W^{-1}$. Then your logic seems to me to be the idea of diagonalizing W and giving it fractional powers: $W = V * E * V^{-1}$ and $W^g = V * E^g * V^{-1}$, such that we have

$B_g^h = (V * E^g * V^{-1}) * D^h * (V * E^{-g} * V^{-1})$

where the g gives the "rate" of the superfunction.

We had a small discussion about this at one time (I don't remember the thread; perhaps I can provide the reference later). I'd observed that the three operations addition, multiplication, exponentiation could be listed by powers of W: $W^{-1}, W^0 \text{ and } W^1$ respectively. But I didn't proceed here because of some "unevenness" with this expression for addition.
But well: if this meets your idea at all, then why not try to find a better extrapolation/embedding now than that sketchy discussion which we didn't continue ...

Gottfried

[update] the link to the earlier thread: http://math.eretrandre.org/tetrationforu...hp?tid=364

Gottfried Helms, Kassel

tommy1729
Ultimate Fellow, Posts: 1,372, Threads: 336, Joined: Feb 2009
11/21/2011, 08:08 PM

(11/20/2011, 09:26 PM) JmsNxn Wrote: This definition cannot fall victim to the method of disproof Tommy gave forth because the transformation notation takes three arguments whereas the diamond notation only takes two.

Do not underestimate my powers. A simple example: f3 is the super of f2, f2 is the super of f1, f1 is the super of f0:

f0(x) = x + e
f1(x) = e*x
f2(x) = exp(x)
f3(x) = sexp(x)

Can you generate this sequence with your method? Every intuitive solution seems to be different, but only a few can exist. It is unclear to me how to use Carleman matrices (although I mentioned it myself) or anything else ... I don't even have the impression anyone came close.

And btw, general group theory is not necessarily limited by the number of variables or operations. We have to be careful with intuition in mathematics.

regards

tommy1729

JmsNxn
Long Time Fellow, Posts: 291, Threads: 67, Joined: Dec 2010
11/22/2011, 02:13 AM

(11/21/2011, 08:08 PM) tommy1729 Wrote: can you generate this sequence with your method ? (...)

Oh no, I definitely cannot generate this sequence yet. I just got a little overexcited with Carleman matrices, which I think now will not work. First of all, the law

$F^m \{ F^n \{ f \} \} (x) = F^{m + n} \{ f \} (x)$

holds pretty solidly; that's really all I'm going for now.
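For the computable part of tommy1729's chain above, the claim that each $f_{k+1}$ is the super of $f_k$ can be tested against the defining relation $F(x+1) = f(F(x))$. A quick numerical check (sexp is omitted, since no closed form is available to evaluate here):

```python
import math

E = math.e

f0 = lambda x: x + E   # addition step
f1 = lambda x: E * x   # multiplication step
f2 = math.exp          # exponentiation step

for x in [-1.0, 0.5, 2.0]:
    # f1 is a superfunction of f0: e*(x+1) = (e*x) + e
    assert math.isclose(f1(x + 1), f0(f1(x)))
    # f2 is a superfunction of f1: exp(x+1) = e * exp(x)
    assert math.isclose(f2(x + 1), f1(f2(x)))
```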
Just creating a rule by which superfunction composition behaves the same way ordinary composition does, namely

$(f^{\circ m} \circ f^{\circ n}) (x) = f^{\circ m + n}(x)$

Also, I'm quite aware that group theory isn't limited by the number of variables. I was just referring to the fact that where your previous proof method worked, it won't now. But I urge you to try to find a direct contradiction with the law of superfunction composition I've put forth; even a contradiction is a step in the right direction. Furthermore, I'm really not even working on the sequence of superfunctions themselves yet and how to generate them, just the criteria by which they need to be defined and satisfied.

(11/21/2011, 10:33 AM) Gottfried Wrote: ....

Hey Gottfried. I think I get your approach, and I'm going to speculate that it's a coincidence that the Schroeder functions for addition, multiplication and exponentiation are iterates of the exponential function. This will not produce the hyperoperation sequence when iterated; namely:

$\exp_b^{\circ 2}(b \cdot \log_b^{\circ 2}(x)) \neq {}^{x} b$

However, I do think that's a nifty result.

... Returning to this superfunction sequence, I have some more tiring results. It can be proven that if $F^t \{ f \} (x)$ is a t'th superfunction of $f$, then $F^t \{ f \} (x + \theta (x))$ is also a solution, where $\theta$ is a 1-periodic function that takes the value zero at $x \in \mathbb{Z}$.
We'll also find something very similar with our superfunction sequence: namely, if $F^t \{ f \} (x)$ is the t'th superfunction of $f$, continuous over the reals, then

$R^t \{ f \} (x) = F^{t + \theta(t)} \{ f \} (x)$

is also a t'th superfunction of $f$, continuous over the reals, satisfying the condition

$R \{ R^t \{ f \} \}(x) = R^{t + 1} \{ f \} (x)$

which is just like the iteration sequence we see with composition, $(f \circ f^{\circ t})(x) = f^{\circ t + 1}(x)$. However, we would have the drawback

$R^m \{ R^n \{ f \} \} (x) = F^{m + \theta(m)} \{ F^{n + \theta(n)} \{ f \}\} (x) = F^{m + n + \theta(m) + \theta(n)} \{ f \} (x) \neq R^{m + n} \{ f \} (x)$

unless of course we redefine superfunction composition as (here I'll use square brackets to accentuate the differences):

$R^m [ R^n [f] ] (x) = R^{m + n } [ f ] (x)$

so that

$F^{m + \theta(m)} [ F^{n + \theta(n)} [f ] ](x) = F^{m + n + \theta(m + n)} [f] (x)$

which again is perfectly consistent with $F [ F^t [f] ](x) = F^{t + 1} [ f ] (x)$. So we're going to have two layers of an infinitude of solutions for our superfunction sequence... but it continues: we can define $F \{ f \} (x)$ by Carleman matrices, or by Kouznetsov's method, or by regular iteration at a fixpoint, or by pretty much any form that works. So which one do we choose?

This is the question I'm trying to ask: which method of making superfunctions generalizes the easiest and the best, and has the least restricted domain for the base value in the hyperoperation sequence (we all know tetration has a knack for failing for $b < e^{1/e}$; we'll probably have a similar failure for $b < p$ involving pentation, and so on and so forth), so as to create an infinite sequence of superfunctions, and then from that sequence create fractional indices?
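The first layer of non-uniqueness (the 1-periodic $\theta$ wobble) can be seen concretely. A sketch under illustrative assumptions not taken from the posts: $f(x) = x + e$ with superfunction $F(x) = e x$, and $\theta(x) = \varepsilon \sin(2\pi x)$, which is 1-periodic and vanishes on the integers. The perturbed candidate $F(x + \theta(x))$ still satisfies the superfunction relation.

```python
import math

E, eps = math.e, 0.05

f = lambda x: x + E
F = lambda x: E * x                                  # one superfunction: F(x+1) = f(F(x))
theta = lambda x: eps * math.sin(2 * math.pi * x)    # 1-periodic, zero on integers
Ftilde = lambda x: F(x + theta(x))                   # perturbed candidate

for x in [0.1, 0.9, 3.4]:
    # Ftilde also satisfies Ftilde(x+1) = f(Ftilde(x)), because theta(x+1) = theta(x)
    assert math.isclose(Ftilde(x + 1), f(Ftilde(x)))
```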
Essentially it will be a very esoteric problem for iteration: iterating the method by which a superfunction is created (which most likely involves isolating a fixpoint), and then stopping halfway through that iteration to get a half-superfunction. Once we have that, we'll have to settle uniqueness, faced with the dilemma that $F^t \{ f \} (x)$ is a t'th superfunction and so is $F^t \{ f \} (x + \theta (x))$; and then settle another form of uniqueness given by the dilemma that $F^t \{ f \} (x)$ is a t'th superfunction of $f$ and so is $F^{t + \theta(t)} \{ f \} (x)$. It is all so very mind-boggling!

tommy1729
Ultimate Fellow, Posts: 1,372, Threads: 336, Joined: Feb 2009
11/23/2011, 06:00 PM

As the title says, I object to linear interpretations of the superfunction operator. Why? The first reason is that it isn't always true. The second reason is that I don't know when it applies and when it doesn't, or in other words how many exceptions there are.

The following example should clarify my objections. To compute the inverse super-operation once, we take: g(x) is the inverse super of f(x), where $g(x) = f(f^{-1}(x) + 1)$. So:

the inverse of exp(x) is e*x
the inverse of e*x is x + e
the inverse of x + e is x + 1
the inverse of x + 1 is x + 1
the inverse of x + 1 is x + 1
...

See, we have an annoying fixed-point-like function that is irreversible, and we have lost all information about the original function in the process. The super of x + 1 could be any x + c. If we call x + c the 0th super, then there are no NEGATIVE supers. So the kth integer super is in trouble for negative integers, and hence so is subtracting anything too large from any finite k.

Another remark is that most superfunctions are periodic or quasi-periodic. Also, most superfunctions have branches, which are hard to "halve anything with". And, as asked before, what does the converging $\infty$th super look like?

Like I said, I don't know how many exceptions there are ...
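tommy1729's descending chain and its x + 1 fixed point can be reproduced directly from the step $g(x) = f(f^{-1}(x) + 1)$. A sketch in which the inverses are supplied by hand:

```python
import math

E = math.e

def sub_of(f, f_inv):
    """One step of the inverse super-operation: g(x) = f(f_inv(x) + 1)."""
    return lambda x: f(f_inv(x) + 1)

g1 = sub_of(math.exp, math.log)                 # exp(x)  -> e*x
g2 = sub_of(lambda x: E * x, lambda x: x / E)   # e*x     -> x + e
g3 = sub_of(lambda x: x + E, lambda x: x - E)   # x + e   -> x + 1
g4 = sub_of(lambda x: x + 1, lambda x: x - 1)   # x + 1   -> x + 1 : the fixed point

for x in [0.5, 2.0, 7.3]:
    assert math.isclose(g1(x), E * x)
    assert math.isclose(g2(x), x + E)
    assert math.isclose(g3(x), x + 1)
    assert math.isclose(g4(x), x + 1)
```

The last two lines exhibit the information loss tommy describes: once the chain reaches x + 1, every further step returns x + 1 again.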
Are there other functions, apart from the non-negative kth supers of x + c, that also lead to a paradox or fixed points? What functions cycle under the superfunction operator?

Just some remarks.

regards

tommy1729

JmsNxn
Long Time Fellow, Posts: 291, Threads: 67, Joined: Dec 2010
11/24/2011, 01:18 AM

Oh, I know this is by no means concrete yet, and your concerns are viable and duly noted. I probably should've made it explicit that there will no doubt be restrictions on $f$ and $t$ when we talk about $F^t \{ f \} (x)$; what those restrictions are, though, is still up for debate.

For the superfunction fixed point at successorship, I thought of a way of accommodating it, but it only really works in the definition of the hyperoperation sequence. We define the superfunctions by the identity function $S(\sigma)$, so that

$S(1) = 0; \quad S(n) = 1 \text{ for integer } n \geq 2$

and then for negative integers we define it relative to the base value, i.e. $S(-n) = a - 1$. Then we get the result:

$a\,\,\bigtriangleup_\sigma\,\,S(\sigma) = a$

$a\,\,\bigtriangleup_{\sigma -1} \,\,(a\,\,\bigtriangleup_\sigma\,\,b) = a\,\,\bigtriangleup_\sigma\,\,(b+1)$

This defines $a\,\,\bigtriangleup_{-n}\,\,b = b+1$, $a\,\,\bigtriangleup_{0}\,\,b = b + 1$ and $a\,\,\bigtriangleup_{1}\,\,b = a + b$, and so on and so forth as the hyperoperation sequence continues. This is more easily expressed using the diamond notation:

$f^{\diamond \sigma}(x) = ((f^{\diamond \sigma-1})^{\circ x})(f^{\diamond \sigma}(0))$

and now setting the identity function $S(1) = 0$ gives us the result that

$f^{\diamond \sigma}(b) = a\,\,\bigtriangleup_\sigma\,\,b, \quad f^{\diamond 1}(0) = a$

whereas for all negative integers and zero we get

$f^{\diamond 0}(0) = 1;\; f^{\diamond -1}(0) = 1;\; f^{\diamond -2}(0) = 1\; \dots\; f^{\diamond -n}(0) = 1$

It's necessary that we make the requirement, for all integers except 1 and 2,

$k \in \mathbb{Z} \setminus \{ 1,2 \}: \; f^{\diamond k}(0) = 1$

where $f^{\diamond 1}(0) = a$ and $f^{\diamond 2}(0) = 0$. Now apply that to

$f^{\diamond n}(x) = ((f^{\diamond n-1})^{\circ x})(f^{\diamond n}(0))$

and we get successorship for negative integers and zero, addition at one, multiplication at two, etc. It just requires a little bit of algebraic manipulation and specification as to what we really mean when we say $F^t \{ f \} (x)$. I think this answers your question earlier about generating the sequence of superfunctions, though rather simply and not with much usefulness.

Also, $S(\sigma) = a\,\,\bigtriangleup_{\sigma + 1}\,\,(S(\sigma + 1) - 1)$; this is how I derive that all the positive integers greater than two produce one at zero, $f^{\diamond k}(0) = 1$, and two produces zero at zero, $f^{\diamond 2}(0) = 0$.

Furthermore, I've been figuring that it's perfectly possible to have

$a\,\,\bigtriangleup_{-0.5}\,\,b \neq a\,\,\bigtriangleup_{-1.5}\,\,b$

even though technically they both equal $F^{-0.5} \{ f \} (b)$ where $f(b) = b + 1$. The methods by which we would deduce this seem to me purely based on an "analytic solution". Quite obviously the hyperoperation sequence cannot start being periodic once it hits zero and the negative real numbers; and also, it cannot simply become constant successorship for all negative reals; so we'll have to create a formula that satisfies

$a\,\,\bigtriangleup_{-n-\frac{1}{2}}\,\,(a\,\,\bigtriangleup_{-n + \frac{1}{2}}\,\,b) = a\,\,\bigtriangleup_{-n + \frac{1}{2}}\,\,(b+1)$

and even just looking at this equation, we cannot have $\bigtriangleup_{-\frac{1}{2}}$ be successorship, because then $\bigtriangleup_{\frac{1}{2}}$ would be a variation of addition and would undoubtedly not be analytic across $\sigma$.

Partially, I believe, whereas you say "it isn't true" (which I agree with), I think it's more an "abuse of notation".
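JmsNxn's base values plug straight into the diamond recursion. A minimal sketch for integer ranks up to tetration and non-negative integer arguments; the name `hyper` and the rank variable `sigma` are implementation choices, not notation from the posts.

```python
def hyper(a, sigma, b):
    """a triangle_sigma b for integer sigma and integer b >= 0, using the
    base values f^{<>1}(0) = a, f^{<>2}(0) = 0, f^{<>k}(0) = 1 otherwise,
    and the recursion a tri_sigma (b+1) = a tri_{sigma-1} (a tri_sigma b)."""
    if sigma <= 0:                       # successorship at rank 0 and below
        return b + 1
    if b == 0:                           # the base values at zero
        return {1: a, 2: 0}.get(sigma, 1)
    return hyper(a, sigma - 1, hyper(a, sigma, b - 1))

assert hyper(2, 1, 3) == 5    # addition:        2 + 3
assert hyper(2, 2, 3) == 6    # multiplication:  2 * 3
assert hyper(2, 3, 3) == 8    # exponentiation:  2 ** 3
assert hyper(2, 4, 3) == 16   # tetration:       2 ** (2 ** 2)
```

The frozen successorship at every non-positive rank is exactly the fixed-point behaviour tommy objects to in the previous post.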
And the actual means by which a superfunction is created is far more complicated than a simple transformation, as my notation implies. BUT, and this is an important but, the notation very much adds to the visualization and interpretation of how such a superfunction sequence would be generated. Until we have more rigorous ties to how such a sequence would be generated, we're left with primitive formations. If you find errors and full-on problems with the notation, we can address them and carve out how to work around them, or, if necessary, scrap the notation and start all over again. Because, quite frankly, I don't think there is much research out there on "half-superfunctions", partly because superfunctions themselves are still very much unexplored.

And by all means, I see your argument: if superfunctions are created through fixpoints and can only be created from the previous "subfunction" (that's what I think people have been calling the inverse superfunction), how is it possible to iterate that process from only a single function with, presumably, only one fixpoint? It seems totally unfeasible!

And the question of periodic superfunctions: well, that would be interesting, and I think I wrote out a few notes on functions that satisfy identities of that sort:

$g(x) = f(f^{-1}(x) + 1)$

$f(x) = g(g^{-1}(x) + 1)$

This would give

$g(x) = g(g^{-1}(f^{-1}(x) + 1) + 1) = g(g^{-1}(g(g^{-1}(x) - 1) + 1) + 1)$

I'm not sure if I managed to prove any lemmas about it; it may have gotten too convoluted, and I generally lose these notes; they're more just done to keep my brain active. I think you can extend that to iteration as:

$g^{\circ t}(x) = f(f^{-1}(x) + t) = g(g^{-1}(g(g^{-1}(x) - 1) + t) + 1)$

I'm sure you can deduce tons of equations like that for integer-period superfunctions. I'm totally clueless as to what you would do for fractional or complex periods, let alone quasi-periods!
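The iteration identity $g^{\circ t}(x) = f(f^{-1}(x) + t)$ already yields fractional iterates whenever $f$ and its inverse are known in closed form. A sketch assuming $f = \exp$, so that $g(x) = f(f^{-1}(x) + 1) = e x$ and $g^{\circ t}(x) = e^t x$:

```python
import math

f, f_inv = math.exp, math.log

def g_iter(t, x):
    """Fractional iterate g^{o t}(x) = f(f^{-1}(x) + t), here for f = exp,
    where g(x) = f(f^{-1}(x) + 1) = e*x."""
    return f(f_inv(x) + t)

x = 3.0
assert math.isclose(g_iter(1, x), math.e * x)                    # g itself is e*x
assert math.isclose(g_iter(0.5, g_iter(0.5, x)), g_iter(1, x))   # half-iterates compose to g
assert math.isclose(g_iter(2, x), math.e ** 2 * x)               # g o g
```

This is just iteration by conjugation with the Abel-style coordinate $f^{-1}$, which is why the fractional steps compose cleanly.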
And the question of $\lim_{n \to \infty} F^{n} \{ f \} (x)$: I can't even begin to interpret the meaning of that question!

And someone in another thread said the Ackermann function will not be continued analytically in our lifetime. I can agree with that, considering that the Abel function (which is essentially the superfunction) wasn't investigated until the early 1800s, and only now are we really creating powerful methods for actually determining Abel functions of special functions. I'm just hoping to clear some of the path and prove a few lemmas about some of the requirements of half-superfunctions.

It's almost all speculative.

