Would there be new insights if hyperops are extended to functions?
#1
Hi,

Just wondering - sometimes jumping forward and looking back gives new perspective.

If we use f(x), f(b) instead of x, b, we get:

f(x)+f(x)= 2f(x)
f(x)*f(x)= f(x)^2
f(x)^f(x)= f(x)[3]f(x)

f(x)[4]n= ?
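
A minimal sketch of what f(x)[4]n means computationally (assuming Python; the helper name tetrate and the convention a[4]0 = 1 are illustrative choices, not anything fixed in the thread):

def tetrate(a, n):
    # a[4]n: right-associative power tower a^(a^(...^a)) with n copies of a,
    # using the convention a[4]0 = 1
    result = 1.0
    for _ in range(n):
        result = a ** result
    return result

# f(x)[4]n is then just tetrate(f(x), n), e.g. tetrate(1.3, 4) for f(x) = 1.3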

h(f(z)^(1/f(z))) = f(z)?

d(h(f(z)^(1/f(z)))) = d(f(z))?

If d(e^z)/dz = e^z, then d(h((e^z)^(e^(-z))))/dz = d(e^z)/dz = e^z = h((e^z)^(e^(-z)))?

d(sin z)/dz = cos z; h(sin(z)^(1/sin(z))) = sin(z)? d(h(sin(z)^(1/sin(z))))/dz = d(sin(z))/dz = cos(z) = h(cos(z)^(1/cos(z)))?

d(W(z))/dz = W(z)/(z(1+W(z)))

d(h(W(z)^(1/W(z))))/dz = W(z)/(z(1+W(z)))?

solutions to f(x) = a^f(x)?
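
A quick numerical check of the h-identities above, as an illustrative sketch (assuming Python with scipy; valid only where the tower converges, i.e. for e^(-e) <= x <= e^(1/e), which requires f(z) in [1/e, e]): the infinite power tower h(x) = x^x^x^... can be computed as h(x) = W(-ln x)/(-ln x) with the Lambert W function, and then h(f(z)^(1/f(z))) should reproduce f(z).

import numpy as np
from scipy.special import lambertw

def h(x):
    # infinite power tower x^x^x^..., computed via Lambert W;
    # valid for e**(-e) <= x <= e**(1/e)
    return float(np.real(lambertw(-np.log(x)) / (-np.log(x))))

# check h(f(z)^(1/f(z))) == f(z) for the functions mentioned above
for label, fz in [("e^z at z=-0.5", np.exp(-0.5)),
                  ("sin z at z=1", np.sin(1.0)),
                  ("W(z) at z=2", float(np.real(lambertw(2.0))))]:
    print(label, h(fz ** (1.0 / fz)), fz)   # the two printed values should agree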

Then if f(x) = const = b we get b[4]n; if f(x) = x we get F(x) = x[4]n; if f(x) = x^2 we have (x^2)[4]n; and if f(x) = e^x we get (e^x)[4]n.

Many generalizations are probably possible. Do they lead anywhere?
I just got the idea while reading about time-scale calculus, which generalizes the notions of continuous differential equations and difference equations and unifies them by replacing the fixed difference with a graininess that can be a function of the point.

Interestingly, in time-scale calculus the first-order harmonic numbers (the partial sums of 1/n) form one of the most studied time scales, with certain provable properties.

Excuse me for jumping ahead of events...

Ivars
#2
Ivars Wrote: Just wondering - sometimes jumping forward and looking back gives new perspective.

Since many of these are simply making the substitution x -> f(x), then yes, these are true; the question you are asking, though, is whether it gives new insights... I don't know. A nice part about treating numbers as functions in this way is the flexibility that power series give you. Power series allow you to put things together and take them apart in ways that a plain number does not allow. But I think most insights on this level will end up being related to some other well-known theorem. For example, S. C. Woon's expansion (here and here) depends only on applying the binomial theorem, twice.

I think that the scope of iteration theory is limited only by our understanding of power series, and the more we understand power series, the more things we can solve with continuous iteration.

As an example of an understanding of power series that would extend the scope of iteration theory, take this. Many people (like GFR) on this forum have noticed that we have entirely restricted ourselves to functions of one variable, and this is because almost all useful results in iteration theory are in terms of a function of one variable. But what if we chose not to make this restriction?

We would find that the function would have to be an endofunction to be iterated, meaning a function of the form f: A -> A, and that the set A could consist of vectors if we really wanted it to. And if you ask: "What is a vector derivative?" I would say: the Jacobian matrix! And if you ask: "How do you square a vector?" I would say: tensor multiplication! Using these concepts, the idea of a power series can be generalized to vector functions quite easily. The hard part, though, is trying to apply regular iteration to these functions, and generalizing the fixed points to eigenvectors of the vector power series.

Using the Jacobian matrix and tensor multiplication, a vector function could be written as:

\[ \mathbf{f}(\mathbf{x}) = \sum_{k=0}^{\infty} \frac{1}{k!}\, D^{k}\mathbf{f}(\mathbf{a}) \cdot (\mathbf{x}-\mathbf{a})^{\otimes k} \]

where \( D^{1}\mathbf{f}(\mathbf{a}) \) is the Jacobian matrix of \( \mathbf{f} \) at \( \mathbf{a} \), \( D^{k}\mathbf{f}(\mathbf{a}) \) is the rank-\((k+1)\) tensor of \(k\)-th partial derivatives, and \( (\mathbf{x}-\mathbf{a})^{\otimes k} \) is the \(k\)-fold tensor power of \( (\mathbf{x}-\mathbf{a}) \).
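
To make this concrete, here is a small illustrative sketch (assuming Python with sympy; the example map F is an arbitrary choice) that extracts the Jacobian as the first-order coefficient of such a vector power series and the rank-3 tensor of second partial derivatives as the next coefficient:

import sympy as sp

x, y = sp.symbols('x y')
# an example endofunction F: R^2 -> R^2 (arbitrary choice, for illustration only)
F = sp.Matrix([sp.sin(x) + y**2, x*y])

J = F.jacobian(sp.Matrix([x, y]))   # first-order coefficient: the Jacobian matrix
J0 = J.subs({x: 0, y: 0})           # = Matrix([[1, 0], [0, 0]]) at the origin

# second-order coefficient: rank-3 tensor of second partial derivatives at the origin
T2 = [[[sp.diff(F[i], u, v).subs({x: 0, y: 0}) for v in (x, y)]
       for u in (x, y)] for i in range(2)]

print(J0)
print(T2)   # only F1_yy = 2 and F2_xy = F2_yx = 1 are nonzero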
Andrew Robbins
#3
andydude Wrote:
Ivars Wrote: Just wondering - sometimes jumping forward and looking back gives new perspective.

A nice part about treating numbers as functions in this way is the flexibility that power series give you. Power series allow you to put things together and take them apart in ways that a plain number does not allow.
Andrew Robbins

I follow Andrew's view, and indeed, as Ivars proposed: sometimes we get new insights.
For me the first time I thought differently was when I considered the sin and cos-function. More and more I got used to the fact, that these two are twins and should always be considered simultaneously. For instance I got to implement the sincos-function, giving a pair of results. Then - how to invert this? Would it add a bit more unqiueness, determination to the simple acos,asin-function? Well - you get more information by using this in the task of rotation and/or complex analysis. You begin to think in 2x2 rotation-matrices containing cos & sin as such "twins" (as they are) which perform (x,y)->(x'y') (rotation) and get aware, that this is also *uniquely* invertible, for instance for the general use in complex analysis. Since then I only think of such rotations in terms of matrices...

Next step - how can this basic idea be generalized to arbitrary functions (if they are, for instance, expressible as power series)? The inputs of the function are the (constant) coefficients and not only x, but all consecutive powers of x. Now the result of the computation should also be a set of consecutive powers, to make the operation iterable and also invertible.

This then led to the need for matrix operators, which work on formal power series. With these we do not only have
x -> f(x)
which then seemed to be an oversimplification - since the input is in fact the vector of all powers of x while the result is only a single scalar - but we have
V(x) -> V(f(x))
which denotes the used powers of x as a vector, and the resulting powers of f(x) as a vector as well.

Then there is only one more step to matrix operators, which express a certain function:
V(x) * M_f = V(f(x))

This is then a new view: completely naturally you arrive at iterations and, most importantly, at inversion. Also, it is completely natural to start thinking in terms of mappings: M_f provides the "map" from x to y (= f(x)) and its further abstractions.
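
Here is a small illustrative sketch of such a matrix operator (assuming Python with sympy; the truncation size and the example f are arbitrary choices, and the convention follows the row-vector form V(x) * M_f = V(f(x)) used above, so M_f[i,j] is the coefficient of x^i in f(x)^j):

import sympy as sp

x = sp.symbols('x')
N = 6   # truncation size (arbitrary)

def M(f, n=N):
    # truncated matrix operator: M[i, j] = coefficient of x^i in f(x)^j,
    # so that V(x) * M = V(f(x)) for the row vector V(x) = (1, x, ..., x^(n-1))
    A = sp.zeros(n, n)
    for j in range(n):
        s = sp.expand(sp.series(f**j, x, 0, n).removeO())
        for i in range(n):
            A[i, j] = s.coeff(x, i)
    return A

f = sp.exp(x) - 1    # example function (arbitrary choice)
V = sp.Matrix([[x**i for i in range(N)]])
lhs = V * M(f)       # j-th entry should be the truncated series of f(x)^j
for j in range(N):
    target = sp.expand(sp.series(f**j, x, 0, N).removeO())
    print(sp.expand(lhs[j] - target) == 0)   # True for every column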

For me, the described change of view brought a fundamentally new understanding of the properties of such functions (basically power series, but also, as a spin-off, Dirichlet series) - although I'm still exploring and learning. (Successful!) abstraction of some operation is often a *qualitative* step upwards to a new understanding (and then operationalization) of formerly unconnected and scattered individual observations/theorems etc.

Another example, from our current context: the new insight concerning such a map using power series (thus infinite matrices) is that of non-uniqueness, especially of the inversion (as it occurs, for instance, with the inverse of the exp matrix operator).
If for V(x) * M_f = V(y) we (may) have a unique matrix M_f, then there may still be multiple inverses Lf_k, which all perform V(y) * Lf_k = V(x).
But here we have that the input for the operation is just one vector, and so is the output. What if the input is an (infinite) set of vectors (= a matrix) and the output as well, say VX * M_f = VY - may this define a unique inverse? Possibly... let's see... There is an article by Helmut Hasse concerning the computation of Bernoulli numbers which in principle uses such a construct: a vector multiplied with a matrix, giving a vector containing the Bernoulli numbers / values of the zeta function. If one extends his approach so that all elements are matrices... then suddenly a relation pops up which describes the Stirling matrices of the 1st and 2nd kind as the eigensystem of the matrix of Bernoulli polynomials, which (a) is a generalization and (b) shows a *new* property/relation (and from this possibly also new insights into other known number-theoretical relations).
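
A small numerical sketch of that kind of vector-times-matrix construct (assuming Python; this uses the well-known Hasse-type explicit formula B_n = sum_k 1/(k+1) * sum_j (-1)^j * C(k,j) * j^n rather than Hasse's original setup, so it is only an illustration of the idea): the row vector (1, 1/2, 1/3, ...) times the finite-difference matrix yields the Bernoulli numbers.

from fractions import Fraction
from math import comb

N = 6
# row vector u = (1, 1/2, 1/3, ...) and finite-difference matrix
# A[k][n] = sum_j (-1)^j * C(k, j) * j^n
u = [Fraction(1, k + 1) for k in range(N)]
A = [[sum(Fraction((-1)**j * comb(k, j)) * Fraction(j)**n for j in range(k + 1))
      for n in range(N)] for k in range(N)]

# u * A gives the Bernoulli numbers (B_0, ..., B_5) = (1, -1/2, 1/6, 0, -1/30, 0)
bernoulli = [sum(u[k] * A[k][n] for k in range(N)) for n in range(N)]
print(bernoulli)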

A practical example from another area is the introduction of the paradigm of object-orientation into the teaching of software development/programming: that has been a qualitative (!) step up for software programming.

Gottfried
Gottfried Helms, Kassel

