generalizing the problem of fractional analytic Ackermann functions
#11
(11/18/2011, 12:41 AM)JmsNxn Wrote: That's a good way to approach the question. I'm not too familiar with Carleman matrices, but I think it goes something like this

(...)
Hi James -

Unfortunately I can't follow the above. But concerning tetration of Carleman matrices themselves, there is a simple example: the Pascal matrix is the Carleman matrix for the operation of incrementation by 1, and its powers are Carleman matrices for the operation of addition. I've worked out an example of how to tetrate the Pascal matrix; perhaps this is useful (and possibly generalizable). See http://go.helms-net.de/math/tetdocs/index.htm, go to the short statement at "pascal matrix tetrated", and open http://go.helms-net.de/math/tetdocs/Pasc...trated.pdf
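As a small illustration of the point above (my own sketch, not taken from Gottfried's PDF), one can build a truncated Carleman matrix for f(x) = x + 1 — the Pascal matrix — and check that its square is the Carleman matrix of x + 2:

```python
import numpy as np
from math import comb

N = 6  # truncation size: a small finite section of the infinite matrix

# Carleman matrix convention used here: row j holds the coefficients of f(x)**j.
# For f(x) = x + 1, row j expands (x+1)**j, giving Pascal's triangle.
P = np.array([[comb(j, k) for k in range(N)] for j in range(N)], dtype=float)

# The Carleman matrix of g(x) = x + 2 has entries C(j,k) * 2**(j-k),
# from expanding (x+2)**j.
C2 = np.array([[comb(j, k) * 2.0**(j - k) for k in range(N)] for j in range(N)])

# Composition of functions corresponds to multiplication of Carleman matrices,
# so P @ P should represent (x+1)+1 = x+2.
print(np.allclose(P @ P, C2))  # True
```

The same truncated-section idea underlies the "matrix method" threads Gottfried mentions.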

Nearly all of what I'm doing in tetration is based on this concept of Carleman matrices, so I might be able to answer if you have some concrete questions. I did not know that name when I stumbled upon the concept, so my first expositions of my fiddlings are mostly in threads headed "matrix-method"; they might be informative/helpful (though they are extremely exploratory and not well structured at the beginning).

Gottfried
Gottfried Helms, Kassel
#12
(11/18/2011, 06:13 AM)Gottfried Wrote:
(11/18/2011, 12:41 AM)JmsNxn Wrote: That's a good way to approach the question. I'm not too familiar with Carleman matrices, but I think it goes something like this

(...)
Hi James -

Unfortunately I can't follow the above. But concerning tetration of Carleman matrices themselves, there is a simple example: the Pascal matrix is the Carleman matrix for the operation of incrementation by 1, and its powers are Carleman matrices for the operation of addition. I've worked out an example of how to tetrate the Pascal matrix; perhaps this is useful (and possibly generalizable). See http://go.helms-net.de/math/tetdocs/index.htm, go to the short statement at "pascal matrix tetrated", and open http://go.helms-net.de/math/tetdocs/Pasc...trated.pdf

Nearly all of what I'm doing in tetration is based on this concept of Carleman matrices, so I might be able to answer if you have some concrete questions. I did not know that name when I stumbled upon the concept, so my first expositions of my fiddlings are mostly in threads headed "matrix-method"; they might be informative/helpful (though they are extremely exploratory and not well structured at the beginning).

Gottfried

Thanks Gottfried, I'll take a look. It may be in my interest to research more into how the matrix method actually works.

It seems like the matrix method may have a potential solution. I stress the may.


So far, from what I can make of it, it would require taking a limit, and an exponential continuum sum.

In the sense that is a Carleman matrix.



This isn't really tetration, but rather a type of left-handed tetration. This would require a continuum-sum-type object for fractional R iff the following identity isn't true:



which I think is given if matrix multiplication isn't commutative. However, this seems to contradict how Carleman matrices work, because evidently



but we already know



so this would imply





I'm a little bit wishy-washy now.
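To ground the commutativity point above, here is a small numerical check (my own sketch, using my own choice of functions): Carleman matrices multiply like compositions, and since 2(x+1) = 2x+2 differs from (2x)+1 = 2x+1, the corresponding matrices do not commute.

```python
import numpy as np
from math import comb

N = 5  # small truncated Carleman sections (enough to see the effect)

P = np.array([[comb(j, k) for k in range(N)] for j in range(N)], float)  # f(x) = x + 1
D = np.diag([2.0**j for j in range(N)])                                  # f(x) = 2x

# With rows holding the coefficients of f(x)**j, the product M_f @ M_g is the
# Carleman matrix of the composition f(g(x)).  Since 2(x+1) and (2x)+1 differ,
# the two products differ: Carleman matrices need not commute in general.
print(np.allclose(D @ P, P @ D))  # False
```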

But I think in general I just need to better understand most fractional-iteration methods. I doubt this type of reasoning is unique to Carleman matrices. And to my knowledge, the matrix method doesn't quite make the cut--I don't remember why, though; does it fail for ?


Maybe if I could make sense of Kouznetsov's method of finding superfunctions I might be able to iterate that.



And if you're confused, you shouldn't be; this is fairly simple.

It follows this reasoning: if

is the superfunction of f(x)

and
is the superfunction of and the second superfunction of f(x)

so that in general

is the t'th superfunction of f(x)

what is the value of



There are plenty of ways to find the superfunction of f and to define . So I'll try to experiment with as many as possible. There should be one that sticks out, though, or is simplest. Carleman matrices were just the first.
#13
Alright, so I finally uncovered the recurrence relation for superfunctions and half-superfunctions!

It can linguistically be put forth as:

the half superfunction of the half superfunction of f is the superfunction of f.

which, to me now, is a "no duh"; I can't believe I missed it.

This is written using the following notation (I've switched from the diamond notation because it is confusing to use; I'll adopt this transformation notation instead).









Therefore is the t'th superfunction of and is the t'th Abel function of .

and now we have the recurrence relation:



Giving us what I put forth earlier

the half superfunction of the half superfunction of f is the superfunction of f.



This definition cannot fall victim to the method of disproof Tommy put forth, because the transformation notation takes three arguments whereas the diamond notation takes only two.
#14
(11/20/2011, 09:26 PM)JmsNxn Wrote: Giving us what I put forth earlier

the half superfunction of the half superfunction of f is the superfunction of f.



This definition cannot fall victim to the method of disproof Tommy put forth, because the transformation notation takes three arguments whereas the diamond notation takes only two.
Hmm, this then looks as if it is, in terms of Carleman matrices, a continuation of the diagonalization. Assume we already have a diagonalization of our Carleman matrix B for (decremented) exponentiation dxp°h(x) as , and for the h'th iteration using the h'th power of D; then your logic seems to me to be the idea of diagonalizing W and giving it fractional powers: , and , such that we have , where the *g* gives the "rate" of the superfunction.

We once had a small discussion about this (I don't remember the thread; perhaps I can provide the reference later). I'd observed that the three operations addition, multiplication, and exponentiation could be listed by powers of W: respectively . But I didn't proceed because of some "unevenness" in this expression for addition. But well: if this meets your idea at all, then why not try to find a better extrapolation/embedding now than that sketchy discussion which we didn't continue ...

Gottfried

[update] the link to the earlier thread: http://math.eretrandre.org/tetrationforu...hp?tid=364
Gottfried Helms, Kassel
#15
(11/20/2011, 09:26 PM)JmsNxn Wrote: This definition cannot fall victim to the method of disproof Tommy put forth, because the transformation notation takes three arguments whereas the diamond notation takes only two.

do not underestimate my powers.

A simple example:

f3 is super of f2, f2 is super of f1, f1 is super of f0.

f0(x) = x + e
f1(x) = e*x
f2(x) = exp(x)
f3(x) = sexp(x)
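Each step in this chain (apart from sexp, which has no elementary closed form) can be checked numerically against the defining relation of a superfunction, F(x + 1) = f(F(x)); a quick sketch:

```python
import math

# Defining relation: F is a superfunction of f when F(x + 1) = f(F(x)).
f0 = lambda x: x + math.e   # addition of e
f1 = lambda x: math.e * x   # multiplication by e
f2 = math.exp               # exponentiation

for F, f in [(f1, f0), (f2, f1)]:
    for x in (0.3, 1.7, 2.5):
        assert math.isclose(F(x + 1), f(F(x)))
print("f1 is super of f0, and f2 is super of f1")
```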

Can you generate this sequence with your method?

Every intuitive solution seems to be different, but only a few can exist.

It is unclear to me how to use Carleman matrices (although I mentioned them myself) or anything else ...

I don't even have the impression anyone has come close.

And by the way, general group theory is not necessarily limited by the number of variables or operations.

We have to be careful with intuition in mathematics.

regards

tommy1729
#16
(11/21/2011, 08:08 PM)tommy1729 Wrote: Can you generate this sequence with your method?

Every intuitive solution seems to be different, but only a few can exist.

It is unclear to me how to use Carleman matrices (although I mentioned them myself) or anything else ...

I don't even have the impression anyone has come close.

Oh no, I definitely cannot generate this sequence yet. I just got a little overexcited with Carleman matrices, which I now think will not work.

First of all; the law:



holds pretty solidly; that's really all I'm going for now: creating a rule by which superfunction composition behaves the same way ordinary composition does, namely



Also, I'm quite aware that group theory isn't limited by the number of variables. I was just referring to the fact that where your previous proof method worked, it won't work now. But I urge you to try to find a direct contradiction with the law of superfunction composition I've put forth. Even a contradiction would be a step in the right direction.

Furthermore, I'm not really even working on the sequence of superfunctions themselves yet, or how to generate them; just the criteria by which they need to be defined and satisfied.


(11/21/2011, 10:33 AM)Gottfried Wrote: ....


Hey Gottfried. I think I get your approach, and I'm going to speculate that it's a coincidence that the Schröder functions for addition, multiplication, and exponentiation are iterates of the exponential function.

This will not produce the hyperoperation sequence when iterated, namely:



However, I do think that's a nifty result.

...

Returning to this superfunction sequence, I have some more tiring results.

It can be proven that:

if
is a t'th superfunction of f, then is also a solution, where theta is a 1-periodic function that takes the value zero at .

We'll also find something very similar with our superfunction sequence;

namely

if
is the t'th superfunction of f, continuous over the reals, then

is the t'th superfunction of f, continuous over the reals, satisfying the condition:

which is just like the iteration sequence we see with composition
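A toy check of this 1-periodic non-uniqueness (my own choice of example: f(x) = 2x with superfunction F(x) = 2^x, and a hypothetical theta):

```python
import math

# Toy base case: f(x) = 2x has superfunction F(x) = 2**x, since F(x+1) = 2*F(x) = f(F(x)).
f = lambda x: 2 * x
F = lambda x: 2.0**x

# A hypothetical 1-periodic theta with theta(0) = 0:
theta = lambda x: 0.1 * math.sin(2 * math.pi * x)
G = lambda x: F(x + theta(x))

# G satisfies the same superfunction equation, so the solution is not unique.
for x in (0.0, 0.4, 1.3):
    assert math.isclose(G(x + 1), f(G(x)))
print("F(x + theta(x)) is also a superfunction of f")
```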

However, we would have the drawback



unless of course we redefine superfunction composition (here I'll use square brackets to accentuate the difference):



so that



which again, is perfectly consistent with




So we're going to have two layers of infinitudes of solutions for our superfunction sequence... but there's more.


we can define:
by Carleman matrices, or by Kouznetsov's method, or by regular iteration at a fixpoint, or by pretty much any form that works. So which one do we choose?

This is the question I'm trying to ask: which method of constructing superfunctions generalizes the most easily and best, and has the least restricted domain for the base value in the hyperoperation sequence (we all know tetration has a knack for failing for b < e^(1/e); we'll probably have a similar failure for b < p involving pentation, and so on and so forth), so as to create an infinite sequence of superfunctions, and then, from that sequence, create fractional indices.

Essentially it will be a very esoteric iteration problem: iterating the method by which a superfunction is created (which most likely involves isolating a fixpoint), and then stopping halfway through that iteration to get a half-superfunction.
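For a toy version of regular iteration at a fixpoint, an affine map makes the "stop halfway" idea concrete (a sketch under simplifying assumptions, not the general method):

```python
import math

# Toy "regular iteration at a fixpoint": f(x) = a*x + b with 0 < a < 1 has the
# attracting fixed point p = b/(1-a), and its t'th regular iterate is
#     f^t(x) = a**t * (x - p) + p.
a, b = 0.5, 1.0
p = b / (1 - a)

def f_iter(t, x):
    return a**t * (x - p) + p

f = lambda x: a * x + b
half = lambda x: f_iter(0.5, x)  # "stop halfway": the half-iterate of f

for x in (0.0, 1.0, 3.0):
    assert math.isclose(half(half(x)), f(x))  # half-iterate twice = one full step
print("f^(1/2) o f^(1/2) = f")
```

The hard part, of course, is doing the analogous halfway stop on the superfunction-building process itself rather than on a single function's iteration.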

Once we have that, we'll have to establish uniqueness, faced with the dilemma that is the t'th superfunction and so is .

And then establish another form of uniqueness, given by the dilemma that is the t'th superfunction of f and so is .

It is all so very mind boggling!
#17
As the title says, I object to linear interpretations of the superfunction operator.

Why?

The first reason is that it isn't always true.

The second reason is that I don't know when it applies and when it doesn't; in other words, how many exceptions there are.

The following example should clarify my objections:

To compute the inverse super-operation once, we take:

g(x) is inverse super of f(x)

f( inv.f(x)+1) = g(x)

So: the inverse super of exp(x) is e*x.

The inverse super of e*x is x+e.

The inverse super of x+e is x+1.

The inverse super of x+1 is x+1.

The inverse super of x+1 is x+1.

The inverse super of x+1 is x+1.

...

See, we have an annoying fixed-point-like function that is irreversible, and we have lost all information about the original function in the process.
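The chain above can be checked numerically with the stated rule g(x) = f(f_inv(x) + 1); a quick sketch:

```python
import math

e = math.e

# Rule from the post: if g is the inverse super of f, then g(x) = f(f_inv(x) + 1).
def inverse_super(f, f_inv):
    return lambda x: f(f_inv(x) + 1)

g1 = inverse_super(math.exp, math.log)                 # start from exp(x)
g2 = inverse_super(lambda x: e * x, lambda x: x / e)   # then e*x
g3 = inverse_super(lambda x: x + e, lambda x: x - e)   # then x+e
g4 = inverse_super(lambda x: x + 1, lambda x: x - 1)   # then x+1

for x in (0.5, 2.0, 4.0):
    assert math.isclose(g1(x), e * x)   # inverse super of exp(x) is e*x
    assert math.isclose(g2(x), x + e)   # inverse super of e*x   is x+e
    assert math.isclose(g3(x), x + 1)   # inverse super of x+e   is x+1
    assert math.isclose(g4(x), x + 1)   # x+1 maps to itself: a fixed point
print("x + 1 is a fixed point of the inverse-super operation")
```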

The super of x+1 could be any x + c.

If we call x + c the 0th super, then there are no NEGATIVE supers.

So the k'th integer super is in trouble for negative integers, and hence so is subtracting anything too large from any finite k.

Another remark is that most superfunctions are periodic or quasi-periodic.

Also, most superfunctions have branches, which are hard to "half" anything with.

And as asked before, what does the converging oo'th super look like?

Like I said, I don't know how many exceptions there are ...

Are there other functions, apart from the non-negative k'th supers of x + c, that also lead to a paradox or to fixed points?

What functions cycle under the superfunction operator?

Just some remarks.

regards

tommy1729
#18
Oh, I know this is by no means concrete yet, and your concerns are valid and duly noted.

I probably should've made it explicit that there will no doubt be restrictions on and when we talk about ; what those restrictions are, though, is still up for debate.

As for the superfunction fixed point at successorship, I thought of a way of accommodating it, but it only really works in the definition of the hyperoperation sequence.

We define the superfunctions by the identity function: , so that

and then for negative integers we define it relative to the base value, i.e.:

then we get the result:





this defines




and


and so on and so forth as the hyperoperation sequence continues.
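A standard recursive sketch of the integer hyperoperation sequence makes these base cases concrete (my own illustration, not the author's exact notation): the values at x = 0 match the post, namely b for addition, 0 for multiplication, and 1 for every rank above two.

```python
# H(0) is successorship, H(1) addition, H(2) multiplication,
# H(3) exponentiation, H(4) tetration.
def hyper(n, b, x):
    if n == 0:
        return x + 1                                   # successorship
    if x == 0:
        return b if n == 1 else (0 if n == 2 else 1)   # rank-dependent base case
    return hyper(n - 1, b, hyper(n, b, x - 1))

print([hyper(n, 2, 3) for n in range(1, 5)])  # [5, 6, 8, 16]
```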

This is more easily expressed using the diamond notation:



and now the definition of setting the identity function for

gives us the result that




whereas for all negative integers and zero we get

It's necessary that we make the requirement for all integers except 1 and 2, where and

now apply that to


and we get successorship for negative integers and zero, addition at one, multiplication at two, etc. It just requires a little bit of algebraic manipulation and specification as to what we really mean when we say

I think this answers your earlier question about generating the sequence of superfunctions, though rather simply and not with much usefulness.

Also, ; this is how I derive that all the positive integers greater than two produce one at zero, and two produces zero at zero.

Furthermore,
I've been figuring that it's perfectly possible to have:


even though, technically they both equal:
where

The methods by which we would deduce this seem to me purely based on an "analytic solution". Quite obviously the hyperoperation sequence cannot start being periodic once it hits zero and the negative real numbers; also, it cannot simply become a constant successorship for all negative reals; so we'll have to create a formula that satisfies:



And even just looking at this equation, we cannot have be successorship, because then would be a variation of addition and would undoubtedly not be analytic across sigma.



Partially, I believe, where you say "it isn't true" (which I agree with), it's more an "abuse of notation": the actual means by which a superfunction is created is far more complicated than the simple transformation my notation implies. BUT, and this is an important but, the notation very much aids the visualization and interpretation of how such a superfunction sequence would be generated. And until we have more rigorous ties to how such a sequence would be generated, we're left with primitive formulations.
If you find errors and full-on problems with the notation, we can address them and carve out how to work around them, or, if necessary, scrap the notation and start all over again; because, quite frankly, I don't think there is much research out there on "half-superfunctions", partly because superfunctions themselves are still very much unexplored.

And by all means, I see your argument: if superfunctions are created through fixpoints and can only be created from the previous "subfunction" (that's what I think people have been calling the inverse superfunction), how is it possible to iterate that process from only a single function with, presumably, only one fixpoint? It seems totally unfeasible!

And the question of periodic superfunctions: well, that would be interesting, and I think I wrote out a few notes on functions that satisfy identities of that sort:




this would give



I'm not sure if I managed to prove any lemmas about it; it may have gotten too convoluted, and I generally lose these notes; they're more just done to keep my brain active.

I think you can extend that to iteration as:


I'm sure you can deduce tons of equations like that for integer-periodic superfunctions. I'm totally clueless as to what you would do for fractional or complex periods, let alone quasi-periods!


And the question of ; I can't even begin to interpret the meaning of that question!



And someone in another thread said the Ackermann function will not be continued analytically in our lifetime. I can agree with that, considering that the Abel function (which is essentially the inverse of the superfunction) wasn't investigated until the early 1800s, and only now are we really creating powerful methods for actually determining Abel functions of special functions. I'm just hoping to clear some of the path and prove a few lemmas about some of the requirements of half-superfunctions. It's almost all speculative.

