f(x) = f(g(x))
#1
f(x) = f(g(x))

It's a simple equation.

I have talked about it before, as has e.g. Gottfried.

In particular, for a given g(x) the equation can be solved for f(x) by using the super-function of g(x) (more precisely its inverse, the Abel function) together with a 1-periodic function...
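To make this concrete, here is a minimal numeric sketch (my own illustration, not from the thread), assuming g(x) = 2x, whose Abel function is log2(x): composing any 1-periodic function with that Abel function gives an f with f(x) = f(g(x)).

Code:
import math

# minimal sketch of the "Abel function of g + 1-periodic function" construction,
# using g(x) = 2x, whose Abel function is S(x) = log2(x), so S(g(x)) = S(x) + 1.
def g(x):
    return 2.0 * x

def S(x):                       # Abel function of g
    return math.log2(x)

def P(u):                       # any 1-periodic function works here
    return math.sin(2 * math.pi * u)

def f(x):                       # f(x) = P(S(x)) solves f(x) = f(g(x))
    return P(S(x))

x = 1.7
print(f(x), f(g(x)))            # the two values agree up to rounding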

But that is not the end of the story.

What if f(x) is given and we want to find g(x)?

Well, for starters, g(x) might not be unique.

If f(x) = f(exp(x)) then we must also have f(x) = f(x + 2 pi i), since exp(x + 2 pi i) = exp(x) and hence f(x + 2 pi i) = f(exp(x + 2 pi i)) = f(exp(x)) = f(x).

The pattern is clear:

f(x) = f(g(x))

=> if g(x) = g(g1(x)) then f(x) = f(g1(x))

And we can now replace g(x) with g1(x), g1(x) with g2(x), and repeat...

until we arrive at a Moebius function... if we ever do...

I guess we could call the g_n(x) the invariants of f(x).

And I guess one could say: hey, this is just the construction of a Riemann surface in disguise.

But those are just names!

Another question is the following:

If we go up the chain, from g_{n+1} to g_n, indefinitely, then what f(x) satisfies that?

Thus: f(x) = f(g_{-n}(x)) for all n => f(x) = ??

If g_{-n} is not cyclic and f(x) is C^oo, do we have a uniqueness condition on f(x)?

Is f(x) then a fractal or a constant? (I assume that if we require f(x) to be C^oo and the orbit under the g_{-n} is dense (non-repelling point), then f(x) must be constant.)

Also related: is lim_{n->oo} g_{-n}(x) convergent or divergent?

Is Brouwer's fixed point theorem related?

I believe lim_{n->oo} g_{-n}(x) is divergent, because

g(x) = g(g(x)) is the associated equation, and this has no solution.

We do not accept g_{-n}(x) = x, of course...

And finally, of course, the following:

f(x) = f(a(x)) = f(b(x))

which belongs more to the "functional equation" category, perhaps...

That equation is trickier than it seems; at first sight it seems impossible, since each side already has its own solution built from its own Abel function (e.g. sin(2 pi abel_a(x)) for the first equation and sin(2 pi abel_b(x)) for the second).

But then again, e.g. g_2 and g_3 from the chain above both satisfy it.

I assume the limiting system, f(x) = f(a_{-n}(x)) = f(b_{-n}(x)) for all n (going down the chains of invariants of a and b),

has no solution.

Quite a brainstorm...


tommy1729
#2
f(x) = f(a(x)) = f(b(x))

When a(x) and b(x) are non-linear, it "seems" that a(x) and b(x) need to commute. (*)

In fact, take P(x) a doubly periodic function (periods a and b)

and S(x) such that S(a(x)) = S(x) + a , S(b(x)) = S(x) + b.

Hence S(a(b(x))) = S(b(a(x))) = S(x) + a + b. (*)

And

f(x) = P(S(x))

"seems" to be the only solution.

Note that we must have poles in this case, because of the doubly periodic function.
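A minimal numeric check of the identity (*), assuming for illustration the commuting non-linear pair a(x) = x^2, b(x) = x^3 with S(x) = ln(ln(x)), so the additive "periods" are ln 2 and ln 3 (this only checks the S-bookkeeping, not the existence of a suitable doubly periodic P).

Code:
import math

# check S(a(b(x))) = S(b(a(x))) = S(x) + a + b for the commuting pair
# a(x) = x^2, b(x) = x^3 (so a(b(x)) = b(a(x)) = x^6), with S(x) = ln(ln(x))
# and additive constants a = ln 2, b = ln 3.
def a(x): return x ** 2
def b(x): return x ** 3
def S(x): return math.log(math.log(x))

x = 4.2
print(S(a(b(x))), S(b(a(x))), S(x) + math.log(2) + math.log(3))  # all three agree up to rounding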

However, I wrote "seems", because I do not know many counterexamples.

But if

f(x) = f(exp(x))

then f(x) = f(x + 2 pi i),

and I excluded that by writing 'non-linear' above.

But that is just because I know the following:

f(x) = P1(slog(x)), with P1 a 1-periodic function,

slog(x) = slog(exp(x)) - 1,

slog(x + 2 pi i) = slog(exp(x + 2 pi i)) - 1 = slog(exp(x)) - 1 = slog(x),

hence f(x) = P1(slog(x)) = P1(slog(x + 2 pi i)),

thus f(x) = f(x + 2 pi i).

In the OP I wrote:

**

f(x) = f(g(x))

It's a simple equation.

I have talked about it before, as has e.g. Gottfried.

In particular, for a given g(x) the equation can be solved for f(x) by using the super-function of g(x) (more precisely its inverse, the Abel function) together with a 1-periodic function...

But that is not the end of the story.

What if f(x) is given and we want to find g(x)?

Well, for starters, g(x) might not be unique.

If f(x) = f(exp(x)) then we must also have f(x) = f(x + 2 pi i).

The pattern is clear:

f(x) = f(g(x))

=> if g(x) = g(g1(x)) then f(x) = f(g1(x))

And we can now replace g(x) with g1(x), g1(x) with g2(x), and repeat...

until we arrive at a Moebius function... if we ever do...

I guess we could call the g_n(x) the invariants of f(x).

**

So it seems the g_n are a family of invariants.

Now back to the modified (!) equation

f(x) = f(a_n(x)) = f(b_n(x))

where a_n and b_n are not iterations or invariants of each other.

Now it seems that if a_n and b_n are not cyclic iterations, then indeed

P(x) = a doubly periodic function (periods a and b),

S(x) such that S(a(x)) = S(x) + a , S(b(x)) = S(x) + b,

hence S(a(b(x))) = S(b(a(x))) = S(x) + a + b (*)

and

f(x) = P(S(x))

"seems" to be the only solution....

Or not?

Does this then rule out (under the similar conditions above)

f(x) = f(a_n(x)) = f(b_n(x)) = f(c_n(x)) ?

I think so.

But what about algebraic functions then?

Consider for instance

f(x) = f(alg(x)) and the related iterates alg^[n](x).

More specifically, if they have closed forms, such as the following (with thanks to "achille"):

For p, r, s with p > 0,
Let Q(x) be (p^2-1)*x^2 + 2*(p+1)*r*x + s,
then

f(x) = p*x + sqrt(Q(x)) + r;
=> f^[n](x) = T_n(p)*x + U_{n-1}(p)*sqrt(Q(x)) + r*(T_n(p)-1)/(p-1);

where T_n(p) and U_n(p) are the n-th Chebyshev polynomials
of the first and second kind.

In particular, for (p,r,s) = (3,1,20)

f(x) = 3*x+ 2*sqrt(2*x^2+2*x+5) + 1
=> f^[2](x) = 17*x + 12*sqrt(2*x^2+2*x+5) + 8
f^[3](x) = 99*x + 70*sqrt(2*x^2+2*x+5) + 49
...
f^[n](x) = T_n(3)*x + 2*U_{n-1}(3)*sqrt(2*x^2+2*x+5) + (T_n(3)-1)/2
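A quick numeric verification of achille's closed form for (p, r, s) = (3, 1, 20): the sketch below compares direct composition of f with the T_n / U_{n-1} formula, generating the Chebyshev values by the standard recurrence.

Code:
import math

# verify f^[n](x) = T_n(p)*x + U_{n-1}(p)*sqrt(Q(x)) + r*(T_n(p)-1)/(p-1)
# for (p, r, s) = (3, 1, 20), i.e. f(x) = 3x + 2*sqrt(2x^2 + 2x + 5) + 1.
p, r, s = 3, 1, 20

def Q(x): return (p*p - 1)*x*x + 2*(p + 1)*r*x + s
def f(x): return p*x + math.sqrt(Q(x)) + r

def cheb_T(n, y):               # Chebyshev polynomial of the first kind, T_n(y)
    t0, t1 = 1, y
    for _ in range(n):
        t0, t1 = t1, 2*y*t1 - t0
    return t0

def cheb_U(n, y):               # Chebyshev polynomial of the second kind, U_n(y)
    u0, u1 = 1, 2*y
    for _ in range(n):
        u0, u1 = u1, 2*y*u1 - u0
    return u0

def f_closed(n, x):             # the claimed closed form for the n-th iterate
    return cheb_T(n, p)*x + cheb_U(n - 1, p)*math.sqrt(Q(x)) + r*(cheb_T(n, p) - 1)/(p - 1)

x = 0.7
direct = x
for n in range(1, 5):
    direct = f(direct)          # n-fold composition of f
    print(n, direct, f_closed(n, x))   # the two columns agree up to rounding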

This makes one wonder about equations like

f(x) = f(3*x + 2*sqrt(2*x^2+2*x+5) + 1)

or more complicated ones!

Maybe I just need a good book on invariants or the like :p

Regards,

tommy1729
#3
Maybe I sound naive,

but I think the intuitive approach is itself naive.

In other words, intuitively it may seem simple:

f(x) = f(g(x))

We set q(g(x)) = q(x) + 1, take p(x) 1-periodic, and let t(x) be a (branch of the) inverse of p(x).

Then f(x) = p(q(x)).

Now, to find g(x) from f(x), we "believe" that reversing the above will help:

f(x) = p(q(x))

t(f(x)) = q(x) (*)

t(f(x)) + 1 = q(x) + 1

From (*) we know that the inverse of q is q^{-1}(x) = f^{-1}(p(x)). Let's call that F(x).

Then,

from q(g(x)) = q(x) + 1 = t(f(x)) + 1, we get

g(x) = F(t(f(x)) + 1)

(assuming the correct branches are taken).
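Before trying exp below, here is a minimal sketch of this recipe on a case where the principal branches cooperate: assume (for illustration) f(x) = sin(2 pi log2(x)), which satisfies f(x) = f(2x); near x = 1 the formula g(x) = F(t(f(x)) + 1) should indeed return 2x.

Code:
import math

# f(x) = sin(2*pi*log2(x)) satisfies f(x) = f(2x).  Pretend only f is known and
# recover g via g(x) = F(t(f(x)) + 1), with
#   p(u) = sin(2*pi*u)  (1-periodic),  t = a branch of the inverse of p,
#   q(x) = log2(x),     F = inverse of q, i.e. F(u) = 2**u.
def f(x): return math.sin(2 * math.pi * math.log2(x))
def t(y): return math.asin(y) / (2 * math.pi)   # principal branch of the inverse of p
def F(u): return 2.0 ** u                       # inverse of q(x) = log2(x)

def g(x): return F(t(f(x)) + 1)

for x in (0.9, 1.0, 1.1):       # near x = 1 the principal branches are the right ones
    print(x, g(x), 2 * x)       # g(x) agrees with 2x up to rounding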

This formula looks powerful... let's try

exp(x) = exp(g(x))

Here f = exp, and we take p(u) = sin(2 pi u), so q(x) = arcsin(exp(x))/(2 pi), t(y) = arcsin(y)/(2 pi) and F(u) = log(sin(2 pi u)). The formula gives

g(x) = log( sin( 2 pi * ( arcsin(exp(x))/(2 pi) + 1 ) ) )

We simplify:

g(x) = log( sin( arcsin(exp(x)) + 2 pi ) )

This doesn't look good... we simplify further:

g(x) = log( sin( arcsin(exp(x)) ) )

Now note that we cannot work with the branches of log here, since we need to know an invariant of exp in order to know the branches of log.

If we do not know that exp(x + 2 pi i) = exp(x), we do not know that log(x) + 2 pi i is another branch of log(x), and vice versa!

We could find that by investigating log and exp, but that is not a general method.

The Riemann surfaces of a function and its inverse reveal branches and invariants, but not easily in closed form; rather, only as numerical values.

So how do we reduce g(x) = log( sin( arcsin(exp(x)) ) )?

Sure, g(x) = log( sin( arcsin(exp(x)) ) ) reduces (on the principal branches) to g(x) = x.

But g(x) = x is a uselessly trivial result.
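A quick numeric confirmation of that collapse (a sketch using the principal branches of cmath throughout): evaluating the formula literally just reproduces x.

Code:
import cmath

# evaluating g(x) = log( sin( arcsin(exp(x)) + 2*pi ) ) with the principal
# branches: the 2*pi drops out of sin, arcsin undoes sin, log undoes exp,
# and we are left with g(x) = x.
def g(x):
    return cmath.log(cmath.sin(cmath.asin(cmath.exp(x)) + 2 * cmath.pi))

for x in (-1.0, 0.5 + 0.3j):
    print(x, g(x))              # g(x) comes back as x, up to rounding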

My example involved functions we understand well, but of course more complicated ones are not so easy to analyse.

I am thus tempted to conclude that no decent theory of invariants exists without investigating the Riemann surfaces.

More precisely, I assume that we must find a function mapping one branch to the next. That requires knowing the structure of the Riemann surface (shape and cuts) and expressing that function as a Taylor series, which we compute by taking n-th derivatives of the analytic continuations that take us to the next branch value of the input.

Although that may work in many cases, since we need to understand the structure of the surface quite well, it still might not be, or lead to, a complete theory of invariants.

I conclude, unless I am missing something important, that solving f(x) = f(g(x)) for g(x) is non-trivial.

However, my knowledge is limited; maybe someone can "open my eyes" with some theorems or theories.

Since functions like tetration and the like are more exotic than any standard function, I find it justified to look for invariants, even if they don't have closed forms.

Regards,

tommy1729
#4
To make a long story short:

for f(z) an entire function,

it seems that to find at least one function g(z) =/= z such that

f(z) = f(g(z))

we can *do* the following:

Consider a branch of f^[-1](z); call it branch A.

The branch below it is branch B.

The domain of branch A is nonempty and connected (in fact we take it to be simply connected and not all of C, so the Riemann Mapping Theorem applies), and likewise for the domain of branch B.

We can map dom A and dom B conformally and bijectively onto the unit disc:

RMT1(dom(A)) = unit disc, RMT2(dom(B)) = unit disc.

Hence, by the Riemann Mapping Theorem (RMT), we can map dom A to dom B and vice versa by:

RMT2^[-1] [RMT1(dom(A))] = dom B

RMT1^[-1] [RMT2(dom(B))] = dom A

Hence RMT2^[-1][RMT1(z)] or RMT1^[-1][RMT2(z)] is an invariant.

These two solutions, however, can equal id(z) when dom(A) = dom(B).
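A toy instance of this composition (a sketch, not the general construction): assume for illustration that the two regions are the right and left half-planes, i.e. the value regions of the two branches of the inverse of f(z) = z^2; explicit Moebius maps onto the unit disc then compose to z -> -z, which is indeed an invariant of z^2.

Code:
# toy instance of RMT2^[-1][RMT1(z)] for f(z) = z^2, taking the two regions to be
# the right and left half-planes (where the two square-root branches take their
# values) and using explicit Moebius maps onto the unit disc as the Riemann maps.
def rmt1(z):     return (z - 1) / (z + 1)    # right half-plane -> unit disc
def rmt2_inv(w): return (w + 1) / (w - 1)    # unit disc -> left half-plane

def g(z):        return rmt2_inv(rmt1(z))    # candidate invariant (equals -z here)

def f(z):        return z * z

for z in (2 + 1j, 0.5 - 3j):
    print(z, g(z), f(g(z)), f(z))            # g(z) = -z and f(g(z)) = f(z)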

Further, these invariants have branches too.

We need to find and express the domains A and B.

And we need to find the Riemann mappings.

And even when we have done that, we still don't have a single property of the invariant... we don't even know whether it is cyclic under iteration, whether all the other invariants are iterations of it, whether it is entire, whether it has singularities or branches, ...

But I thought I'd mention that, once again, the Riemann Mapping Theorem is involved, and how it works!

Note:

For instance, if one tried to find the period of a function by using this, it would not be that simple:

if the invariant turns out to be x + C, you have your period.

But to find x + C you (probably*) first need to know that dom B is just dom A +/- C!

This is circular!

(* to construct the Riemann mappings, at least by this method)

Well, maybe there is a way around that, but the general case f(z) = f(g(z)) probably doesn't have a simple way around such problems...

Nevertheless, we know g(z) relates to the superfunction, so we(?) are not giving up on this. :)

If I am missing something trivial about this subject, please inform me.

Regards,

tommy1729



