03/25/2008, 10:36 PM (This post was last modified: 03/26/2008, 04:08 PM by Ivars.)
I was a little nervous that in Henryk's formula for tetration with real/complex heights (see here: Post), any time the logarithm takes a negative value the iteration has to be stopped, or complex values of the logarithm have to be used. In particular, for all numbers < 1 the iteration stops without even beginning.
So I constructed the following function to see what happens when it is iterated: f(x) = ln(mod(x)) = ln(|x|).
Since it is obvious that x = -W(1) ≈ -0.567143 is the only starting point that will not move, since every iteration returns -W(1) as the argument for f, and x = +W(1) will be the next one to converge to it, after the first iteration.
So -W(1) may play a special role in this and, as we will see later, it does. Firstly, all iterations from 1 to infinity pass via this point -W(1).
Then I found iterates in the region x = ]0.001:0.001:2.71[ (for a start; it can be done for negative x and larger positive x as well), via the formula x(n+1) = ln(mod(x(n))).
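Here is a minimal sketch of this iteration (my own Python rather than the Excel sheet; the grid step and the number of iterations shown are just choices for illustration):

```python
# Minimal sketch (not the original Excel workbook): iterate f(x) = ln(|x|)
# over the grid x = 0.001, 0.002, ..., 2.709 and track the average per iteration.
import math

def f(x):
    return math.log(abs(x))

values = [i / 1000 for i in range(1, 2710)]   # the region ]0.001 : 0.001 : 2.71[

for n in range(1, 201):
    # Drop a point if it ever lands exactly on 0 (ln(0) is undefined);
    # in floating point this practically never happens.
    values = [f(x) for x in values if x != 0.0]
    if n % 50 == 0:
        avg = sum(values) / len(values)
        print(f"iteration {n:3d}: average over all points = {avg:+.6f}")

# The printed averages hover around -0.5671..., i.e. minus the Lambert W value W(1).
```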
It is clear that starting with all the relatives of e, like e, 1/e, e^(1/e), e^(-e), e^e, ((e^(-e))^(-e))^(-e), etc., the iteration stops rather soon, via a sequence like f(f(e)) or f(f(f(1/e))).
The question remains: if the iterations are infinite, are there many numbers left on the real axis which are not in some way related to iterated e-powers of a smaller e-parent? In the case of infinite logarithmic iteration, some numbers would start at an infinitesimal seed of an e-sequence relative or parent, while some trees will stop at a finite number of steps. Excuse me for the obvious choice of names. Are there any numbers left? Probably yes.
Well, this is how the first 5 iterations look:
And iterations 6-10 obviously increase the frequency of the spikes.
I did 8522 iterations in Excel with ordinary precision and what happens is interesting:
The average over all points per iteration seems to converge, as n → infinity, to -W(1) ≈ -0.567143.
The same limit value is reached as the average of the iterates at each point x (the average of 8522 iterations in this case).
This means that the iterated sequences are more negative than positive; in fact, the maximum positive value at any iteration over all points is always smaller in magnitude than the maximum negative value, and their ratio (negative lower bound / positive upper bound) tends in the limit to something very close to a fixed constant.
This last value has to be checked with higher-precision iteration, but about the convergence of the averages mentioned above to -W(1) I have no doubt, as it is logical.
So the first conclusion from all this is very interesting: the limit of the average of the iterates of ln(mod(x)) is -W(1) ≈ -0.567143.
This was rather long; excuse me for mistakes, but I hope the TeX and pictures explain it.
03/25/2008, 10:45 PM (This post was last modified: 03/26/2008, 07:55 AM by Ivars.)
I would like to add that the total span between the average maximum value per iteration and the average minimum value per iteration is about 6.5. I have to check whether it depends on the number of points in the interval ]0:1[, and if it does, in what way.
03/26/2008, 11:51 AM (This post was last modified: 03/26/2008, 01:15 PM by Ivars.)
The character of the convergence of the mean to -W(1) ≈ -0.567143 can be seen from this graph, where the first 150 iterations of 100 points (x = 0.01 to 0.99 in steps of 0.01) are plotted on top of each other.
The limited spread (max − min) of each iteration is also visible with this linear choice and size of steps.
I guess this is somehow related to tree functions (which are related to the Lambert W function) and iterated logarithms. Just to remember always: studying this constant is studying the Lambert W value W0(1) = 0.567143..., so generalizations are possible if things are positioned correctly. So far I do not see them, but I see what to read next.
That seems indeed true. Basically it has to do with the fact that -W(1) is a (the only) fixed point of ln|x|. Proof: the fixed-point equation ln|x| = x cannot be satisfied for x > 0, as ln(x) = x has no solution there. So we consider the case x < 0, which gives ln(-x) = x, which is equivalent to (-x)e^(-x) = 1, i.e. -x = W(1), which is by your definition the constant 0.567143...
However, that fixed point is repelling (the derivative of ln|x| there has magnitude 1/W(1) ≈ 1.76 > 1), that's why the iteration does not converge. On the other hand, the iterated value is always thrown back into the negative domain by ln(x) for positive x < 1. And there it oscillates around the fixed point until it is again repelled into the positive domain. So most of the time it is oscillating around -W(1), that's why the average is just this.
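A small numerical check of this argument (my own Python sketch; the constant W(1) is obtained by iterating x ← e^(-x), which, unlike ln|x|, is contracting near the fixed point):

```python
# Verify that -W(1) is a fixed point of f(x) = ln|x| and that it is repelling.
import math

# Solve omega = e^(-omega) by fixed-point iteration; omega = W(1) ≈ 0.567143.
omega = 0.5
for _ in range(200):
    omega = math.exp(-omega)

fixed_point = -omega
print("W(1)            =", omega)
print("ln|fixed point| =", math.log(abs(fixed_point)))   # equals the fixed point itself
print("|f'(-W(1))|     =", 1.0 / omega)                  # ≈ 1.763 > 1, so repelling
```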
03/27/2008, 03:38 PM (This post was last modified: 03/28/2008, 07:45 AM by Ivars.)
Yes!
I learned how to iterate something.
THANKS. I wonder what I can do next with this. I have some ideas, like:
There must exist x for which the iteration will never reach infinity. The question is which numbers those are, and which are the ones that will have infinite oscillating iterations with the average -W(1). It seems these numbers will make holes in the real line, if it is considered as the input/output line of this iteration.
The more iterations are done, the fewer points are left from the starting input interval [0:1]. Guess how many points will reach infinite oscillations? (Or maybe, how many will NOT reach them.)
I think that the ratio of points finishing at infinity to starting points is again this same constant. I have no proof yet, nor even a numerical test.
I think that for any input interval ]-infinity < x0 : x1 < infinity[ the share of points reaching infinite iterations will be the same.
And most likely I am wrong, as such a function log_b(mod(x)) can be constructed for any base b, and each will have points with infinite oscillations as well as points that terminate at 1 and 0 after a finite number of iterations; so if my conjecture is true, all bases other than e will have only points left on the unit interval of the real axis that can be infinitely iterated.
E.g., for base 4 the similar fixed point is -1/2: f(f(f(... -1/2))) = -1/2; points that reach this fixed point exactly are 2, -2, 1/2, etc.
Just to note down an idea for further investigation about this average of log_b(mod(x)): maybe for b > 1, fixpoint = -1/ssroot(b);
for b < 1, b = (1/fixpoint)^(1/fixpoint), but fixpoint = -1/ssroot(b) = -h(b).
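A hedged numerical check of this note (my own Python sketch; I take ssroot(b) to mean the self-root, i.e. the solution u of u^u = b): for b > 1 the negative fixed point of log_b(|x|) does come out as -1/ssroot(b), giving -W(1) for b = e and -1/2 for b = 4.

```python
# For f_b(x) = log_b(|x|) with b > 1, check that -1/ssroot(b) is a fixed point,
# where ssroot(b) solves u^u = b.
import math

def ssroot(b, iterations=200):
    # Solve u * ln(u) = ln(b) by Newton's method (assumes b > 1).
    u = max(1.5, math.log(b) + 1.0)
    for _ in range(iterations):
        g = u * math.log(u) - math.log(b)
        dg = math.log(u) + 1.0
        u -= g / dg
    return u

for b in (math.e, 4.0, 10.0):
    u = ssroot(b)
    fp = -1.0 / u
    image = math.log(abs(fp)) / math.log(b)    # f_b(fp) = log_b(|fp|)
    print(f"b = {b:5.2f}: ssroot(b) = {u:.6f}, "
          f"fixpoint = {fp:+.6f}, f_b(fixpoint) = {image:+.6f}")
```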
04/06/2008, 11:45 AM (This post was last modified: 04/06/2008, 02:30 PM by Ivars.)
I obtained the distributions, around -W(1) and +W(1), of positive integer iterations of these functions, taking the modulus each time the argument gets negative.
The distributions are symmetric with respect to the ordinate axis (the point x = 0), but they overlap. I am not sure of the type of the distributions, nor how the standard accuracy of Excel influences them; obviously the values float away from where they should be, but since we obtain the mean values correctly, maybe the distribution is in principle the same regardless of accuracy, given that enough iterations are made. They seemed to be lognormal, but they are not. Weibull is too difficult for me to compare to the data set.
Anyway, here are the histograms (probability density functions) and the data parameters from Excel:
ln(mod(x)):
Mean -0.569047639
Standard Error 0.012051586
Median -0.372148702
Mode #N/A
Standard Deviation 1.206784434
Sample Variance 1.456562737
Kurtosis 2.838543218
Skewness -1.189194674
Range 11.75801951
Minimum -9.512327352
Maximum 2.245692157
ln(mod(1/x)):
Mean 0.568939972
Standard Error 0.011986562
Median 0.372020783
Mode #N/A
Standard Deviation 1.205767153
Sample Variance 1.454070305
Kurtosis 2.858638726
Skewness 1.190184673
Range 11.83018032
Minimum -2.251410718
Maximum 9.578769599
In principle, all the parameters are the same; just the ones that can, change sign. Obviously, a location parameter needs to be added (it is NOT 0), and the mean and the maximum do not coincide.
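For reference, a minimal Python sketch (my own, not the Excel sheet; the number of starting points and iterations per point are assumptions) of how such descriptive statistics can be reproduced by pooling the iterates of ln(mod(x)):

```python
# Sketch: pool iterates of ln(|x|) over many starting points and compute the
# descriptive statistics (Excel's SKEW/KURT apply small-sample corrections and
# report *excess* kurtosis, so the values will differ slightly from the table).
import math
import statistics

def iterates(x, n):
    out = []
    for _ in range(n):
        if x == 0.0:                      # ln(0) is undefined; the orbit terminates
            break
        x = math.log(abs(x))
        out.append(x)
    return out

sample = []
for i in range(1, 100):                   # starting points 0.01, 0.02, ..., 0.99
    sample.extend(iterates(i / 100, 1000))

mean = statistics.fmean(sample)
stdev = statistics.stdev(sample)
skew = sum((v - mean) ** 3 for v in sample) / (len(sample) * stdev ** 3)
kurt = sum((v - mean) ** 4 for v in sample) / (len(sample) * stdev ** 4)
print(f"mean {mean:+.6f}  median {statistics.median(sample):+.6f}  st.dev {stdev:.6f}")
print(f"skewness {skew:+.4f}  kurtosis {kurt:.4f}  (excess kurtosis {kurt - 3:.4f})")
```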
The value of the median is interesting. Could it be some recognizable constant? Probably not.
But... I have a feeling that this distribution has to be iterated (tetrated?) as well to obtain some limit distribution, since I have been applying ln many times to those variables x, which were quite regularly distributed at the beginning, and have obtained some peculiar chaos.
04/06/2008, 06:14 PM (This post was last modified: 04/06/2008, 06:17 PM by Ivars.)
With a little research, a reasonable fit was obtained by the Gumbel max distribution for ln(mod(1/x)) and by Gumbel min for ln(mod(x)); also by Fréchet, but the value of the parameters in the Fréchet fit is around 2*10^8.
Gumbel max and min, however, behave rather reasonably. They seem to be in the region indicated by the skewness/kurtosis analysis, while Fréchet is just close. See picture.
Of course, with the current accuracy these numbers may be false, but interestingly, using Gumbel distributions as a first approximation establishes a connection between the Euler-Mascheroni constant 0.5772156649 and the constant 0.567143, since in the Gumbel distribution the mean is μ + β·γ, where γ is the Euler-Mascheroni constant.
Within my limited accuracy possibilities, I estimated the Gumbel max parameters corresponding to the ln(mod(1/x)) iterations.
This distribution has an interesting variance, which depends only on the scale parameter β: variance = π²β²/6,
and median = μ − β·ln(ln 2).
Points of inflection:
And the fixed skewness = 12√6·ζ(3)/π³ ≈ 1.1395 and kurtosis = 3 + 12/5 = 5.4.
The last two I cannot match exactly from my data; I am a few percent off, but I guess that since it is a continuous distribution, more accurate calculations with many more points are needed to decide.
The other statistical parameters of this interesting distribution involve the Gamma and digamma functions, while the variance obviously involves ζ(2) = π²/6.
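A small sketch of these Gumbel-max relations (my own Python; it simply reuses the sample mean and variance quoted above for ln(mod(1/x)), so the fitted μ and β are only as good as those estimates):

```python
# Method-of-moments Gumbel-max fit from the quoted mean and variance,
# plus the fixed shape quantities of the Gumbel distribution.
import math

GAMMA = 0.5772156649015329      # Euler-Mascheroni constant
ZETA3 = 1.2020569031595943      # Apery's constant zeta(3)

sample_mean = 0.568939972        # from the ln(mod(1/x)) statistics above
sample_var = 1.454070305

beta = math.sqrt(6.0 * sample_var) / math.pi     # variance = pi^2 * beta^2 / 6
mu = sample_mean - beta * GAMMA                   # mean = mu + beta * gamma
print("mu     =", mu)
print("beta   =", beta)
print("median =", mu - beta * math.log(math.log(2.0)))   # ≈ 0.37, compare the sample median
print("skewness =", 12.0 * math.sqrt(6.0) * ZETA3 / math.pi ** 3)   # ≈ 1.1395
print("kurtosis =", 3.0 + 12.0 / 5.0)                               # = 5.4
```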
I find this exercise interesting, definitely worth iterating. I just wonder what continuous/negative iteration of it might mean.
04/06/2008, 06:55 PM (This post was last modified: 04/06/2008, 08:51 PM by Ivars.)
If we analyze the distribution of the average over 100 points per iteration (over 5000 iterations), then the distribution in both cases, ln(mod(x)) and ln(mod(1/x)), is a 3-parameter lognormal with parameters:
While if we analyze the distribution of the average per point x in ]0.01:0.01:0.99[ (10,000 iterations) at 100 points, we obtain the best fit with the Johnson SU distribution and the second best with the log-logistic distribution. This of course requires more points to be decisive, but both are interesting, especially the log-logistic because of its many connections and its relation to the Lambert W function used in solutions of some growth models.
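A hedged sketch of this fitting step (my own Python, assuming SciPy is available; scipy.stats.fisk is SciPy's name for the log-logistic distribution, and the grid and iteration counts are assumptions):

```python
# Fit Johnson SU and log-logistic distributions to the per-point averages of the
# ln(|x|) iteration and compare their log-likelihoods.
import math
import numpy as np
from scipy import stats

def per_point_average(x0, n_iter):
    # Average of the first n_iter iterates of ln|x| starting from x0.
    x, total = x0, 0.0
    for _ in range(n_iter):
        x = math.log(abs(x))
        total += x
    return total / n_iter

averages = np.array([per_point_average(i / 100, 10000) for i in range(1, 100)])

for name, dist in (("Johnson SU", stats.johnsonsu), ("log-logistic", stats.fisk)):
    params = dist.fit(averages)
    loglike = np.sum(dist.logpdf(averages, *params))
    print(f"{name:12s} params = {np.round(params, 4)}  log-likelihood = {loglike:.2f}")
```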
But these things I have to read up on... as with many others.
The distribution of the values, including the rare far jumps caused by the change of sign under the modulus inside the logarithm, has to be log-Poisson or negative log-Poisson, because the mean value of such an infinite iteration is the entropy of such a process.
Unfortunately, I could not find the parameters of the log-Poisson distribution (mean, variance, skewness, kurtosis, etc.) to compare with my numerical results.
The hidden means of the two such distributions, log-Poisson and negative log-Poisson, would be -W(1) and +W(1) respectively.