Tetration Forum

Full Version: Uniqueness of half-iterate of exp(x)?
Ahhh. Thank you. That makes a lot of sense. I assume b is a constant, then I solve for b as a function of x! I am pretty dense.

Anyways, here's something else I found that may be of use,

Let's say that \( g(g(x)) = \ln(x) \), i.e. \( g \) is a half-iterate of \( \ln(x) \).

Then by definition (inverting both sides, since \( e^x \) is the inverse of \( \ln(x) \))

\[ g^{-1}(g^{-1}(x)) = e^{x} . \]

Now substitute g(x) for x,

\[ g^{-1}\bigl(g^{-1}(g(x))\bigr) = e^{g(x)} \qquad\Longrightarrow\qquad g^{-1}(x) = e^{g(x)} . \]
So, in conclusion, the half-iterate of e^x is the exponential of the half-iterate of ln(x).

Not sure if that is useful but I think it's kinda neat. Hopefully I didn't make another mistake, but it's always possible.
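As a quick cross-check of that conclusion (assuming \( g \) is invertible where the compositions are defined), compose the proposed half-iterate with itself and use \( e^{g(x)} = g^{-1}(x) \) from the substitution above:

\[ \bigl(e^{g}\bigr)\circ\bigl(e^{g}\bigr) \;=\; g^{-1}\circ g^{-1} \;=\; \bigl(g\circ g\bigr)^{-1} \;=\; \ln^{-1} \;=\; \exp . \]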
Hello!
I found an interesting site about the Carleman matrix: https://en.m.wikipedia.org/wiki/Carleman_matrix

The Carleman matrix of a function f has entries \( M[f]_{i,j} = [x^{j}]\,f(x)^{i} \), i.e. row i holds the Taylor coefficients of \( f(x)^{i} \) (indices starting at 0).

And the most important property in this case:

\[ M[f\circ g] = M[f]\,M[g] \]
So these matrices convert composition to matrix multiplication.
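A minimal numerical sketch of that composition rule in Pari/GP (the example functions and the helper name carleman are my own choice, not from this thread; with f(0) = g(0) = 0 the identity even holds exactly for truncated matrices):

Code:
 \\ Carleman matrix in the convention above: entry (i,j), 1-based, is the coefficient of x^(j-1) in f(x)^(i-1)
 carleman(h, n) = { my(s = h('x + O('x^n))); matrix(n, n, i, j, polcoeff(s^(i-1), j-1)); }
 dim = 8
 f(x) = x / (1 - x)
 g(x) = x + x^2
 Cf  = carleman(f, dim);
 Cg  = carleman(g, dim);
 Cfg = carleman(x -> f(g(x)), dim);
 Cfg - Cf*Cg   \\ should be exactly the zero matrix here

For a function like exp(x), whose value at 0 is not 0, the truncated product only approximates M[f∘g] in the upper-left entries.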
Thus, if \( f(f(x)) = e^{x} \), then

\[ M[\exp] = M[f\circ f] = M[f]\,M[f] = M[f]^{2} . \]

Therefore

\[ M[f] = M[\exp]^{1/2} . \]
So we got M[exp(x)]; that was the easy part. Now we need to take the square root of this matrix, and I could find a program for it: http://calculator.vhex.net/calculator/li...quare-root
And I got another matrix, which satisfies that:
So the function is:


But it is not the half-iterate of exp(x). Could you help me understand why not, please? What was my mistake?
(01/07/2017, 11:00 PM)Xorter Wrote: Hello!
I found an interesting site about the Carleman matrix: https://en.m.wikipedia.org/wiki/Carleman_matrix
(...)
But it is not the half-iterate of exp(x). Could you help me understand why not, please? What was my mistake?

Well, using the truncated series of the exp(x) function up to 16 terms (the Carleman-matrix size) I get, with my own routine for the matrix square root in Pari/GP using arbitrary numerical precision (here 200 decimal digits for internal computation), a truncated series approximation.
This gives about eight correct digits when applying it twice (and should approximate Sheldon's Kneser implementation).
The reason why your function is badly misshaped might be that the matrix is too small (did you only take size 4x4?) and/or that the matrix-square-root computation is not optimal.


To cross-check: one simple approach to the matrix square root is the Newton iteration.

Let M be the original Carleman matrix and N denote its approximated square root.

Initialize

\[ N_0 = I \]

iterate

\[ N_{k+1} = \tfrac{1}{2}\left( M\,N_k^{-1} + N_k \right) \]

until convergence, i.e. until \( N_k \, N_k \approx M \).

Unfortunately, the matrix N will not be of Carleman type unless M were of infinite size; nitpicking, this means that the function with coefficients taken from the second row (or, in my version, column) is not really well suited for iteration. (But this problem has not yet been discussed systematically here in the forum, to the best of my knowledge.)
Gottfried
(01/08/2017, 01:31 AM)Gottfried Wrote: Well, using the truncated series of the exp(x) function up to 16 terms (the Carleman-matrix size) I get a truncated series approximation (...)
The reason why your function is badly misshaped might be that the matrix is too small (did you only take size 4x4?) and/or that the matrix-square-root computation is not optimal.
(...)

Did I take size 4x4? No, of course not; it was 20x20, later 84x84.
I recognised my mistake: I had generated a wrong Carleman matrix instead of M[exp(x)]_{i,j} = i^j/j!.
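For illustration, a small sketch of my own (the names Mexp and Mt are mine): this i^j/j! convention is just the transpose of the Carleman matrix that is generated in the script further down.

Code:
 dim = 8
 Mexp = matrix(dim, dim, i, j, (i-1)^(j-1)/(j-1)!)   \\ M[exp(x)]_{i,j} = i^j/j!, indices starting at 0
 Mt   = matrix(dim, dim, r, c, (c-1)^(r-1)/(r-1)!)   \\ transposed build used in the script below
 Mexp - mattranspose(Mt)   \\ should be exactly the zero matrix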
Now I have regenerated the matrix and I get approximately the same solution.
It works, yippee! :)
Thank you very much.
Could you tell me what you wrote in Pari to calculate it, please? I am not so good at Pari code.
Here is the Pari/GP code:
Code:
 default(realprecision,200)   \\ increase internal precision to 800 digits or higher when the matrix size is more than, say, 64
 default(format,"g0.12")      \\ only 12 digits for display of float numbers

 dim=16                       \\ increase later, when everything works, but stay less than, say, 128
 M = matrix(dim,dim,r,c,(c-1)^(r-1)/(r-1)!)   \\ Carleman matrix, transposed: series coefficients run along a column!

 M=1.0*M           \\it is better to have float-values in M otherwise the number-of-digits in N explodes over iterations
 N = matid(dim)
 N = (M * N^-1 + N) / 2
 N = (M * N^-1 + N) / 2
 N = (M * N^-1 + N) / 2
 /* ... do this a couple of times to get convergence; careful: not too often to avoid numerical errors/overflow*/
/* note, N's expected property of being Carleman-type shall be heavily distorted. */

M - N*N   \\ check for sanity, the difference should be near zero

/* define the function;   */
exp05(x) = sum(k=1,dim, x^(k-1) * N[k,2])  \\ coefficients from column 2; only for x in an interval with good convergence (0<=x<1)

/* try: 6 to 8 digits might be correct when dim is at least 32x32 */
x0 = 0
x05 = exp05(x0)   \\ this should be the half-iterate about   0.498692160537

x1 = exp05(x05)  \\ this should be the full iterate and equal exp(x0)=1 and is  about 1.00012482606
x1 - exp(x0)   \\ check error

Example: with dim=8, after 8 iterations I got for N:
Code:
N=            
  1.00000000000        0.498692160537     0.248258284527    0.123313067961  0.0613783517169  0.0309705773951  0.0161518156415  0.00900178983873
              0        0.876328584414     0.875668009082    0.651057846300   0.427494354197   0.262472285853   0.155983031832   0.0925625176728
              0        0.246718723415      1.01708995680     1.34271949385    1.26030722116   0.991600974872   0.698202964055    0.456331269934
              0       0.0248938874134     0.453724180460     1.35543926159    2.03890540259    2.20219490711    1.93455428276     1.45660923676
              0    -0.000559114024252     0.101207623371    0.716292108758    1.94548221867    3.15424069454    3.69849457008     3.40094743897
              0     0.000132927042876    0.0119615040464    0.219667721568    1.11333664588    2.97146364527    5.09318298067     6.20122030134
              0    0.0000114543791108   0.00113904404837   0.0431682924811   0.396012087313    1.78179566161    5.01623097009     9.19538628629
              0  -0.00000540376918712  0.000111767358395  0.00661697871499  0.0933915529553   0.665946396103    3.15521910471     11.3987254320
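To see that "not exactly Carleman" caveat numerically, here is a small sketch of my own, reusing N and dim from the script above: if N were an exact transposed Carleman matrix, its third column would have to contain the series coefficients of f(x)^2, where f is the series built from the second column.

Code:
 f_ser = sum(k=1, dim, N[k,2] * 'x^(k-1)) + O('x^dim);
 col3  = vector(dim, k, polcoeff(f_ser^2, k-1));
 col3 - vector(dim, k, N[k,3])   \\ expected to be visibly nonzero: truncation distorts the Carleman structure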


Of course, to make this more flexible for varying fractional powers of M you'll need diagonalization; but then the required "realprecision" becomes exorbitant for dim=32 and more. For reference, I call this method the "polynomial method", because with the matrix being of finite size this is a polynomial approximation, and no attempt is made to produce N in a way that maintains the structure of a Carleman matrix when fractional powers are computed. If that is wanted, the conjugacy using the complex fixpoint is needed before the diagonalization, together with the generation of a power series with complex coefficients, to obtain the famous Schröder function from the eigenvector matrices. After that, Sheldon has a method to proceed back to a real-to-real solution after H. Kneser (which seems to be, possibly, the limit of the above construction as the matrix size goes to infinity).
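For concreteness, here is a rough Pari/GP sketch of that diagonalization route (the helper name frac_pow is mine; it assumes M is diagonalizable and silently takes principal branches of the eigenvalue powers, so treat it as illustrative only):

Code:
 \\ fractional power of M via diagonalization: M^h = V * diag(L^h) * V^-1
 frac_pow(A, h) = {
   my(E, L, V);
   E = mateigen(1.0*A, 1);   \\ [eigenvalues, eigenvectors]
   L = E[1]; V = E[2];
   V * matdiagonal(vector(#L, i, L[i]^h)) * V^-1;
 }
 N05 = frac_pow(M, 1/2);   \\ should roughly reproduce the Newton-iteration square root
 M - N05*N05               \\ sanity check: near zero (possibly with tiny imaginary parts)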


Gottfried