Improving convergence of Andrew's slog
#11
bo198214 Wrote:[...]
In the above formula let f(x) = b^x for some base b; then the equation is A(b^x) = A(x) + 1.
Now we can perturb A in the following way: let p be a 1-periodic analytic function (for example p(x) = eps*sin(2*pi*x)) and consider

B(x) = A(x) + p(A(x));

then B is another analytic function satisfying this equation (and every other solution is a perturbation by a periodic function).
If you choose p low in amplitude, the function even remains strictly monotonic increasing.
Interesting. This argument also works if applied to exponential functions, using the property that f(x+1) = b*f(x). In other words, this property is not sufficient to determine a unique (up to scalar multiple of the exponent) exponential function.

Now, the traditional way (or at least, one of the usual ways) of defining the exponential function is by its inverse: log(x) = ∫_1^x dt/t. This gives us a logarithm function unique up to change of base. This seems to suggest that perhaps we need to discover a property of the tetralog (tetrational inverse) function that will lead us to a unique (up to change of base) tetration. Unfortunately, the relationship of tetration to lower operations is non-trivial at best, and it's not obvious which property should be used as our basis.
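To see this non-uniqueness concretely for exponentials, here is a small Python sketch (the base b = 2, the amplitude eps, and the function names are my own choices for illustration): a 1-periodic perturbation of the exponent still satisfies f(x+1) = b*f(x), yet yields a different function.

```python
import math

b = 2.0     # example base
eps = 0.01  # small amplitude, so the perturbed function stays increasing

def f_plain(x):
    # the "standard" solution of f(x+1) = b*f(x)
    return b ** x

def f_perturbed(x):
    # perturb the exponent by the 1-periodic analytic function eps*sin(2*pi*x)
    return b ** (x + eps * math.sin(2 * math.pi * x))

# Both satisfy the functional equation f(x+1) = b*f(x)...
for x in [0.0, 0.3, 1.7, -2.5]:
    assert abs(f_plain(x + 1) - b * f_plain(x)) < 1e-9
    assert abs(f_perturbed(x + 1) - b * f_perturbed(x)) < 1e-9

# ...yet they are genuinely different functions.
assert abs(f_plain(0.25) - f_perturbed(0.25)) > 1e-6
```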
#12
quickfur Wrote:This argument also works if applied to exponential functions, using the property that f(x+1) = b*f(x). In other words, this property is not sufficient to determine a unique (up to scalar multiple of the exponent) exponential function.

Now, the traditional way (or at least, one of the usual ways) of defining the exponential function is by its inverse: log(x) = ∫_1^x dt/t. This gives us a logarithm function unique up to change of base. This seems to suggest that perhaps we need to discover a property of the tetralog (tetrational inverse) function that will lead us to a unique (up to change of base) tetration. Unfortunately, the relationship of tetration to lower operations is non-trivial at best, and it's not obvious which property should be used as our basis.

It does not matter so much whether you use the logarithm or the exponential itself; what makes exponentiation and multiplication unique are the "distributive laws":
a^(x+y) = a^x * a^y and a*(x+y) = a*x + a*y, which are an extension of
a^(x+1) = a^x * a and a*(x+1) = a*x + a. However, we know this doesn't work for tetration. That's why I decided to develop a new, more general number domain, arborescent numbers, in which we have this distributivity a[[n+1]](x+y) = (a[[n+1]]x) [[n]] (a[[n+1]]y) for arbitrarily high operations, as we just discussed here.

However, it would be good if you had a read through the Tetration FAQ, as the uniqueness of exponentiation and multiplication is already explained there.
#13
bo198214 Wrote:[...]
It does not matter so much whether you use the logarithm or the exponential itself; what makes exponentiation and multiplication unique are the "distributive laws":
a^(x+y) = a^x * a^y and a*(x+y) = a*x + a*y, which are an extension of
a^(x+1) = a^x * a and a*(x+1) = a*x + a. However, we know this doesn't work for tetration.
Hmm, I wasn't aware of this property being referred to as the "distributive law"... I am more familiar with distributive laws being of the form a*(b+c) = a*b + a*c (and/or vice versa for non-commutative operators). But anyway, it's good to clarify what you meant by "distributive law".

Quote:That's why I decided to develop a new, more general number domain, arborescent numbers, in which we have this distributivity a[[n+1]](x+y) = (a[[n+1]]x) [[n]] (a[[n+1]]y) for arbitrarily high operations, as we just discussed here.
Right. I'm not entirely convinced, though, that it's not possible to find some kind of "natural" uniqueness property that must hold for tetration, which would uniquely determine it on the reals. It may require operations (or rather, binary functions) other than addition, multiplication, or exponentiation to be expressed, but surely there must be some such property that would allow us to have uniqueness.

Quote:However, it would be good if you had a read through the Tetration FAQ, as the uniqueness of exponentiation and multiplication is already explained there.
Thanks, I just looked over it quickly. I think the uniqueness of exponentiation via what you call the "distributive laws" is quite a well-known property of exponentiation, and indeed, it's probably the reason exponentiation was so well studied in the past (before the advent of computers, converting multiplications to additions via logarithms was a very handy method of speeding up computations). It directly results from multiplication being associative.

I sorta referred to this in my thread on higher-order operations on ordinals... multiplication is the last associative operator in the Grzegorczyk hierarchy, so exponentiation (i.e., a multiplication chain) is the last operation where you can "append to the chain" at either end easily. I.e., since x*x*x*...*x can be bracketed in any order (multiplication is associative), we can easily add another instance of x to the end of the chain by computing x*x*...*x (n times), and then multiplying the result by x. This makes it easy to generalize exponentiation to ordinals: to define exponentiation by a successor ordinal α+1, you just compute x^α, and then right-multiply the result by x.

But for operators with n=4 and above (tetration and above), the previous operators are not associative, so this trick cannot work: there is no algebraic way to "reach the top of the tower" in the expression x^x^x^...^x (assuming right-to-left bracketing). So it's non-trivial to define what is meant by tetrating to a successor ordinal α+1. Non-associativity also causes the failure of the so-called "distributive laws", which in the case of exponentiation are essentially a consequence of the associativity of multiplication. There is no algebraic way for us to "reach into" the top exponent of the tower x^x^...^x and add more to it, even though we can right-multiply the tower. If we could reach into the top exponent somehow, then we would have a relation between adding to the tower from the bottom and adding to it from the top; that might give us some kind of property that could help determine uniqueness for tetration over the reals. But as far as we know, there is no way to do this, and so no straightforward property we can exploit to determine uniqueness.
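The contrast can be checked directly on the naturals. The sketch below uses a hypothetical `tetrate` helper (my name, not from the thread) for right-associated natural-number tetration: the law b^(x+y) = b^x * b^y holds for exponentiation, while the analogous candidate law for tetration already fails for small integers.

```python
# Hypothetical helper: natural-number tetration, b^^n, right-associated
def tetrate(b, n):
    result = 1
    for _ in range(n):
        result = b ** result
    return result

# Exponentiation satisfies the "distributive law" b^(x+y) = b^x * b^y:
b, x, y = 2, 3, 4
assert b ** (x + y) == (b ** x) * (b ** y)

# The analogous candidate law for tetration, b^^(x+y) = (b^^x)^(b^^y),
# already fails at b=2, x=y=2:
lhs = tetrate(2, 2 + 2)               # 2^^4 = 65536
rhs = tetrate(2, 2) ** tetrate(2, 2)  # 4^4  = 256
assert lhs != rhs
```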
#14
quickfur Wrote:
bo198214 Wrote:[...]
what makes exponentiation and multiplication unique are the "distributive laws":
a^(x+y) = a^x * a^y and a*(x+y) = a*x + a*y, which are an extension of
a^(x+1) = a^x * a and a*(x+1) = a*x + a. However, we know this doesn't work for tetration.
Hmm, I wasn't aware of this property being referred to as the "distributive law"... I am more familiar with distributive laws being of the form a*(b+c) = a*b + a*c (and/or vice versa for non-commutative operators).
Right, that's why I put it into quotation marks. These laws are generally not referred to as distributive laws; however, I also don't know any other term for them.

Quote:I'm not entirely convinced, though, that it's not possible to find some kind of "natural" uniqueness property that must hold for tetration, which would uniquely determine it on the reals. It may require operations (or rather, binary functions) other than addition, multiplication, or exponentiation to be expressed, but surely there must be some such property that would allow us to have uniqueness.
Nobody said that it is not *possible*. The situation is simply that there is none yet.

Quote:It directly results from multiplication being associative.
...
Non-associativity also causes the failure of the so-called "distributive laws", which in the case of exponentiation are essentially a consequence of the associativity of multiplication.

But there are also non-associative operations that satisfy the "distributive laws", though not on the natural numbers. Because addition on the natural numbers is associative,
a#(x+y) = a#x * a#y (* and # arbitrary operations) would force
a#(x+(y+z)) = a#x * (a#y * a#z) and a#((x+y)+z) = (a#x * a#y) * a#z to be equal, i.e., the operation * to also be associative.
However, if we start on a domain with an addition that is not associative, for example the domain of real functions together with exponentiation as addition, then we can easily define higher operations
f[1]g=f^g
f[2](g^h)=f[2]g ^ f[2]h, f[2]x=f
f[3](g^h)=(f[3]g) [2] (f[3]h), f[3]x=f
etc.

as long as you can decompose every function into exponentiations of x (the identity function), which is the 1-element of that domain. For example:

x^x [1] x^x = (x^x)^(x^x) = x^(x*x^x)
x [2] x^x = (x [2] x) ^ (x[2]x) = x^x
x^x [2] x^x = (x^x[2]x) ^ (x^x[2]x)= x^x ^ (x^x)
x^x [2] x^(x^x) = (x^x[2]x) ^ (x^x[2]x ^ x^x[2]x) = (x^x)^((x^x)^(x^x))

It is easy to prove that the operation [2], though not commutative, is associative, and that x [2] f = f [2] x = f, so x is its neutral element.

In that way you can start a hierarchy of operations with "distributive law" on the set of real functions that are made up of exponentiations of the identity function.
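The recursion for the [2] operation is easy to experiment with. Here is a minimal Python sketch (the tuple representation and the names `X`, `op2` are my own) that reproduces the examples above and spot-checks associativity:

```python
# Expressions built from the identity function 'x' by exponentiation,
# represented as nested tuples ('^', base, exponent). The recursion is
# the one defined in the post:
#   f[2]x = f;  f[2](g^h) = (f[2]g) ^ (f[2]h)
X = 'x'

def op2(f, g):
    if g == X:
        return f
    _, base, expo = g
    return ('^', op2(f, base), op2(f, expo))

xx = ('^', X, X)  # the function x^x

# x [2] x^x = x^x, and x^x [2] x^x = (x^x)^(x^x), as in the post's examples
assert op2(X, xx) == xx
assert op2(xx, xx) == ('^', xx, xx)

# op2(f, g) just substitutes f for every x-leaf of g, so associativity
# can be spot-checked by brute force:
exprs = [X, xx, ('^', X, xx), ('^', xx, xx)]
for a in exprs:
    for b in exprs:
        for c in exprs:
            assert op2(op2(a, b), c) == op2(a, op2(b, c))
```

Viewing f [2] g as "substitute f for x in g" also makes the associativity claim transparent: substitution of expressions is associative.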
#15
(09/28/2007, 03:31 AM)jaydfox Wrote: In order to improve convergence, what I've done is taken the coefficients for the two known singularities at 0.31813... + 1.3372357i and its conjugate, and put them in the B vector, rather than 0's. These coefficients aren't the true values from A, but they're much, much, much closer to the true values. This reduces the errors in the final solution by several orders of magnitude.

Does anybody know what JayD meant here with "coefficients for the two known singularities", and how he arranges them and their conjugates in the B vector?
#16
(06/26/2010, 11:54 AM)bo198214 Wrote:
(09/28/2007, 03:31 AM)jaydfox Wrote: In order to improve convergence, what I've done is taken the coefficients for the two known singularities at 0.31813... + 1.3372357i and its conjugate, and put them in the B vector, rather than 0's. These coefficients aren't the true values from A, but they're much, much, much closer to the true values. This reduces the errors in the final solution by several orders of magnitude.

Does anybody know what JayD meant here with "coefficients for the two known singularities", and how he arranges them and their conjugates in the B vector?
Add the power series for the two known singularities (i.e., the singularities assumed to be closest to the origin, and hence whose terms would limit the radius of convergence): log(x - a)/log(a) and log(x - conj(a))/log(conj(a)), where a = 0.31813... + 1.3372357i is the primary fixed point.

If my math isn't wrong, the power series for log(x - a)/a is as follows.

UPDATE: Yeah, my math was right, but I wrote it down wrong. Here's what I meant (and which I later confirmed):
The power series for log(x - a)/a is a_0 - sum_{k=1..oo} x^k / (k*a^(k+1)).

The first few terms of which are:

Code:
a_0 +
(0.472565386708 + 0.238338175183*I) x +
(0.124126845264 - 0.147164806491*I) x^2 +
(-0.0555043196549 - 0.075086914615*I) x^3 +
(-0.0468665210317 + 0.0199804139488*I) x^4 + ...

Ignoring the value of a_0 for now, we can add this series and its conjugate to get the following series. This is the combined Taylor series of the two logarithmic singularities at the primary fixed points (i.e., added together):

Code:
a_0 +
0.945130773416 x +
0.248253690529 x^2 +
-0.11100863931 x^3 +
-0.0937330420633 x^4 +
0.0100000104867 x^5 +
0.0358794547132 x^6 +
0.00657595348982 x^7 +
-0.0123046860018 x^8 +
-0.00639023591838 x^9 +
0.00327323081386 x^10 +
0.00376926734556 x^11 +
-0.000280141200757 x^12 + ...

The SAGE code I used to generate these (low-precision) numbers is:

Code:
var('a, x');
f(a) = taylor(log(x-a)/a, x, 0, 13);
A = f(0.3181315052047641353 + 1.337235701430689409*I) + f(0.3181315052047641353 - 1.337235701430689409*I);
print A.coefficients()[1:13];

Note that I have log(x-a)/a, rather than log(x-a)/log(a), since a=log(a) and the former computes (slightly) faster.
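For readers without SAGE, the combined real coefficients can be reproduced in plain Python. This sketch (variable names are mine) uses the closed form -1/(k*d^(k+1)) for the k-th Taylor coefficient of log(x - d)/d; adding the conjugate series doubles the real part:

```python
# d is the primary fixed point (a = log(a)). The k-th Taylor coefficient
# of log(x - d)/d is -1/(k*d^(k+1)); adding the conjugate series gives
# the real coefficient (-2/(k*d^(k+1))).real.
d = 0.3181315052047641353 + 1.337235701430689409j

coeffs = [(-2 / (k * d ** (k + 1))).real for k in range(1, 13)]
print(coeffs[0])  # ~0.945130773416, matching the combined series above
```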
~ Jay Daniel Fox
#17
(06/29/2010, 07:54 PM)jaydfox Wrote: If my math isn't wrong, the power series for log(x - a)/a is as follows.

UPDATE: Yeah, my math was right, but I wrote it down wrong. Here's what I meant (and which I later confirmed):
The power series for log(x - a)/a is a_0 - sum_{k=1..oo} x^k / (k*a^(k+1)).

The first few terms of which are:

Code:
a_0 +
(0.472565386708 + 0.238338175183*I) x +
(0.124126845264 - 0.147164806491*I) x^2 +
(-0.0555043196549 - 0.075086914615*I) x^3 +
(-0.0468665210317 + 0.0199804139488*I) x^4 + ...

Code for confirming this:
Code:
d = 0.3181315052047641353 + 1.337235701430689409*I;
C = [[-1/(k*d^(k+1)), k] for k in xrange(1, 13)];
print C;

Adding the two series corresponds to the following code:
Code:
d = 0.3181315052047641353 + 1.337235701430689409*I;
C = [[-1/(k*d^(k+1)) + -1/(k*d.conjugate()^(k+1)), k] for k in xrange(1, 13)];
print C;

A simple way to increase efficiency, noting that the imaginary parts cancel:

Code:
d = 0.3181315052047641353 + 1.337235701430689409*I;
C = [[(-2/(k*d^(k+1))).real(), k] for k in xrange(1, 13)];
print C;
~ Jay Daniel Fox
#18
(06/29/2010, 07:54 PM)jaydfox Wrote:
bo198214 Wrote:Does anybody know what JayD meant here with "coefficients for the two known singularities", and how he arranges them and their conjugates in the B vector?
Add the power series for the two known singularities (i.e., the singularities assumed to be closest to the origin, and hence whose terms would limit the radius of convergence): log(x - a)/log(a) and log(x - conj(a))/log(conj(a)).

The power series for log(x - a)/a is a_0 - sum_{k=1..oo} x^k / (k*a^(k+1)).

Yeah, in the meantime I looked into your code and guessed something like that.

So this is based on your observation that the slog behaves like log (which is like the rslog) at the fixed points (which you mentioned somewhere, if my memory is right)?! And based on your assumption that the slog should be constructed somehow from both fixed points (though not in the simple way I proposed, by just adding the two regular slogs at both fixed points, which anyway is not analytic at all points on the real axis).

Now that I know what you are doing, I am a bit skeptical that it indeed converges toward the same solution that the original system would. Do you have some justification that it is not biased (towards your dream solution) by choosing this initial guess of the coefficients, I mean other than numerical justification?

Don't get me wrong, I think that your accelerated solution is the best in the sense that it satisfies the uniqueness criterion in my (in the meantime published) article [1] (which I found out later matches some known criteria in holomorphic dynamics, see this thread), particularly that it is equal to Kneser's solution (which I think is in turn equal to Kouznetsov's solution); however, for me these are just different approaches, and I would like to know whether they are equal or not, and what the benefits and drawbacks of the methods are.

In this sense, I also would like to see a numerical comparison of the regular slog at base sqrt(2) with the islog for that base. I would think that your way of acceleration also works with that base; could you make a picture of accelerated islog vs rslog? (I surely don't want to steal your time, and I was anyway surprised to see a reply here by you; on the other hand, you would be most efficient at it, as you have already considered both methods in detail.)

[1] H. Trappmann, D. Kouznetsov. Uniqueness of Holomorphic Abel Functions at a Complex Fixed Point Pair. Aequationes Mathematicae, 2010.
#19
(07/01/2010, 03:37 AM)bo198214 Wrote: Now that I know what you are doing, I am a bit skeptical that it indeed converges toward the same solution that the original system would. Do you have some justification that it is not biased (towards your dream solution) by choosing this initial guess of the coefficients, I mean other than numerical justification?

Well, I started with the assumption that there are logarithmic singularities in the islog at the primary fixed points. I've posted graphs and such to show how this structure could exist (though it might not be the only valid structure).

If this assumption is used, then at worst, convergence to the same solution should not be hindered.

For example, if there are other singularities further from the origin than these two, their influence will eventually become negligible, due to the radius of convergence being limited by the (closer) fixed points.

On the other hand, if there are singularities that are closer to the origin than the two primary fixed points, the influence of these other singularities would be greater, eventually making the power series terms for the primary fixed points negligible. In this situation, those closer singularities would ultimately determine what the solution converges towards, and my negligible power series should not alter that.

The only situation where I can see my acceleration technique having any potential to cause the solution to converge to a different result is if there are other singularities at exactly the same distance from the origin (i.e., the same distance as the primary fixed points), which I have not accounted for. Or, if there are singularities at the primary fixed points, but they are not the exact type that I'm expecting (logarithmic, with a base of the fixed point itself).

However, I don't see any reason to expect such singularities to exist. Do I have a proof that they don't exist? No, not at the moment. I take it for granted that they don't, because I don't know of any way that such would be predicted to exist.

Now, if my original assumption is wrong, i.e., if the islog based on the unaccelerated matrix solution does not have singularities at the primary fixed points, then I would expect the solutions to be different (since my accelerated solution effectively forces the islog to have those singularities).

However, I'm not even sure what such an islog would look like. It would mean that a path from a primary fixed point back to itself in one iteration of exponentiation involves a path around another singularity (other than one of the two primary fixed points). The regular slog developed at one of the non-primary fixed points (and its conjugate) might have this feature.

Note: the islog based on the matrix solution has real coefficients, so if it is developed as a regular slog at a non-primary fixed point, it must actually be developed in such a way that it is regular at the complex conjugate of that fixed point as well. I've tried to develop such an islog, just on paper and in my head, without any success.

Quote:In this sense, I also would like to see a numerical comparison of the regular slog at base sqrt(2) with the islog for that base. I would think that your way of acceleration also works with that base; could you make a picture of accelerated islog vs rslog? (I surely don't want to steal your time, and I was anyway surprised to see a reply here by you; on the other hand, you would be most efficient at it, as you have already considered both methods in detail.)

I actually don't have SAGE installed anymore, and my source code is in disarray. I also don't really have much time. I haven't had the time to work on this stuff in almost a year. I didn't even finish my attempts at computing a kslog. I was getting promising results, though with far too little precision to say whether it was converging to the islog. I didn't see any indication that it wasn't, but we've seen with the rslog of base sqrt(2), developed at the upper and lower fixed points, that solutions can look very similar and be quite different nonetheless.

But this question looked important for you to be able to pick up where I left off, and anyway, it didn't take much time to answer.
~ Jay Daniel Fox
#20
(07/01/2010, 06:54 PM)jaydfox Wrote: But this question looked important for you to be able to pick up where I left off, and anyway, it didn't take much time to answer.

Ya, true. Thanks Jay.

