UPDATE:
Please see the last post for the most up-to-date code on the Abel function associated with the multiplier \( \lambda \). It works much better than the code here in this post.
Hey, everyone!
I've hit a crossroads in my proposed method of constructing tetration, which can be found in this thread. As far as I can see, there are no errors in the construction, though there may be some elements of the paper which leave things to be desired. There may be some opaque arguments which could use better phrasing--but I can't see what the rephrasing would be at the moment.
So, instead, I've decided to try and create more experimental evidence. But as for my capacity with a compiler, I'm nothing like I used to be. I haven't coded in a very long time, and even then I only coded a very small niche of problems. I've been trying to think of ways to compute this tetration function more effectively, but I can't quite get a grasp of MATLAB's in-house functions, and I'm having trouble even thinking of manners of approach. The ideal would be to construct a Taylor series, but I'm not sure how one could recover one. My approach so far has been entirely analytical: me fiddling with transforms, trying to see if I can pull out a convenient manner of determining Taylor coefficients... but every method I've thought of encounters the same problem as the code I have now.
To begin, I point the reader to my GitHub repository for this tetration.
I'll include here the README file I added there:
Code:
This is the code I used in the paper "The Limits of a Family; of Asymptotic Solutions to The Tetration Equation". The paper is available on arXiv at https://arxiv.org/abs/2104.01990. All code is written for MATLAB, but works fine in other languages once translated; nothing specific to MATLAB is especially called.
This repository consists of 5 functions:
beta_function(z,l,n)
Which is the asymptotic solution to tetration, written in a simple recursive manner. This is not the most efficient code, as it will overflow for large numbers. But that's precisely its purpose. The variable z is the main argument. The value l is the multiplier. The value n is the depth of the iteration.
tau_K(z,l,n,k)
This function is the error term between beta_function and the actual tetration function associated to the value l (the multiplier). The value n is the inherited depth of iteration of beta_function; the value k is the new depth of iteration we use to construct tau_K. As an important disclaimer, keeping n and k closer together keeps the iteration stable. If n is 100 and k is 5, we'll see a lot more anomalies and overflows; whereas n at 10 and k at 6 will produce the same results (where the former converges), but more correct results where there were anomalies and overflows.
beta2(z,n)
This function is the pasted together version of beta_function. This is when we combine all our multipliers into an implicit function. The variable z is the main argument, and n is the depth of iteration.
tau2(z,n,k)
This function is the error term between beta2 and the final tetration function. The value n is the inherited depth of iteration from beta2; and k is the new depth of iteration.
TET(z,n,k)
This function is an optimized version of beta2(z,n) + tau2(z,n,k); where it works more exactly as iterating the exponential.
Now, I can produce a Taylor series for the function \( \beta_\lambda(s) \), but it's really not needed. The function beta_function works very well and creates very accurate results. And as it's a backwards iteration, we never really have any overflow errors unless we increase the argument. We can do 100 iterations and everything still works, and it converges rapidly, so there's no problem there. The trouble is, when we increase the real part of the argument \( s \) in \( \beta_\lambda(s) \), this starts to look like the orbits \( \exp^{\lfloor \Re(s)\rfloor}(z) \), which grow very, very fast, especially near the real line \( z \approx \mathbb{R}^+ \).
So if you want to calculate, for instance, the value \( \beta_\lambda(10) \): how do you say... you're shit out of luck. It works fine up to about \( \Re(s) = 5 \), but after that we'll just short out and everything falls apart. This wouldn't be much of a problem if we only needed \( \beta_\lambda \)'s behaviour for small values; the trouble is, we need \( \beta_\lambda \)'s behaviour for very large values. Using a Taylor series won't help us here either, because \( \beta_\lambda(10) \) is ASTRONOMICALLY large: probably about \( \exp^{\circ 8}(1) \), give or take, depending on \( \lambda \). So, in this world of finite computing resources, there's really nothing we can do that's better.
Why this matters is pretty simple. If you've been paying attention to how I've introduced this function, or if you've read the paper, you'll know that the magic happens for large values. The function satisfies,
\(
\log(\beta_\lambda(s+1)) = \beta_\lambda(s) - \log(1+e^{-\lambda s})\\
\)
This, typographically, can look like,
\(
\log(\beta_\lambda(s+1)) = \beta_\lambda(s) - \mathcal{O}(e^{-\lambda s})\\
\)
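Since beta_function's source isn't reproduced in this excerpt, here is a minimal Python sketch of the backward iteration the functional equation above pins down, namely \( \beta_\lambda(s+1) = e^{\beta_\lambda(s)}/(1+e^{-\lambda s}) \). The name beta_approx is mine (a stand-in for beta_function; the MATLAB translation is direct). It checks the log identity numerically, and also shows the overflow wall at larger real arguments described below.

```python
import math

def beta_approx(s, lam, n):
    """n steps of the backward iteration for the functional equation
    beta(s+1) = exp(beta(s)) / (1 + exp(-lam*s)): start from 0 at s - n
    and step forward to s.  (A stand-in for beta_function(z,l,n).)"""
    x = 0.0
    for j in range(n):
        x = math.exp(x) / (1.0 + math.exp(-lam * (s - n + j)))
    return x

lam, s, n = math.log(2), 0.5, 50

# log(beta(s+1)) = beta(s) - log(1 + e^{-lam*s}); using n+1 steps on the
# left (one extra step covers the shift s -> s+1) makes the identity exact
# up to rounding.
lhs = math.log(beta_approx(s + 1, lam, n + 1))
rhs = beta_approx(s, lam, n) - math.log(1.0 + math.exp(-lam * s))
print(abs(lhs - rhs))   # tiny: the identity is exact up to rounding

# ...but push Re(s) up and the iterates blow through double precision:
try:
    beta_approx(10.0, lam, n)
except OverflowError:
    print("beta(10) overflows")
```

Nothing clever is happening here; it's only meant to make the functional equation, and the finite-precision wall, concrete.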
And the idea is to add a sequence of convergents \( \tau_\lambda^n(s) \) such that,
\(
\log(\beta_\lambda(s+1) + \tau_\lambda^n(s+1)) = \beta_\lambda(s) + \tau_\lambda^{n+1}(s)\\
\)
Much of the theory proving that \( \tau_\lambda^n \) converges follows, mathematically, Ecalle's construction of an Abel function about a neutral fixed point, as given in Milnor's book Dynamics in One Complex Variable. In that construction, there's an intermediary step where we solve,
\(
F(f(s)) = F(s) + 1 + \mathcal{O}(1/s)\\
\)
There's something similar here, where Ecalle then uses a specific limit process to construct the actual Abel function. I'm doing something similar, but discovering the inverse Abel function, and there's a bit more massaging that goes into it. I make a mapping argument similar to his, though it reduces to solving a Schröder equation (at least the way I do it).
So the function I call is tau_K(z,l,n,k), which follows the rule,
Code:
beta_function(z,l,n) + tau_K(z,l,n,k) = log(beta_function(z+1,l,n) + tau_K(z+1,l,n,k-1))
The trouble is, when we set \( k \) to large values, somewhere in the process we eventually produce values like \( \beta_\lambda(10) \), which are astronomical in size. You can see it clearly in the recursion, where we have to shift the variable forward in each iteration. Again, the facile way of viewing this is that,
\(
\tau_\lambda^n(s) = \log^{\circ n} \beta_\lambda(s+n) - \beta_\lambda(s)\\
\)
And obviously, we'll begin to overflow by the time we hit \( n=10 \). The absolute cosmic irony of all this though, is that,
\(
\tau_\lambda^n(s) = -\log(1+e^{-\lambda s}) + o(e^{-\lambda s})\\
\)
We can make a pretty good guess for large \( s \) by just using \( -\log(1+e^{-\lambda s}) \). But we need something very large to produce something very small. So, for that, we can rewrite our iteration (which I haven't uploaded onto GitHub yet, because I'm still working out the kinks). Instead, we'll focus on very small values.
So, with this in mind, we define a function,
Code:
function f = tau_K(z,l,n,k)
    % Base case: the first-order approximation to the error term.
    if k == 1
        f = -log(1+exp(-l*z));
        return
    end
    % Recursive step: dividing by beta_function keeps everything small,
    % so deep iterations underflow rather than overflow.
    f = log(1 + tau_K(z+1,l,n,k-1)./beta_function(z+1,l,n)) - log(1+exp(-l*z));
end
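In case a non-MATLAB check is useful, here is the same recursion in Python. Here beta_approx is my stand-in for beta_function, under the assumption that it implements the backward iteration for \( \beta_\lambda(s+1) = e^{\beta_\lambda(s)}/(1+e^{-\lambda s}) \); the comparison at the end recovers the defining rule for tau_K from earlier.

```python
import math

def beta_approx(z, l, n):
    # Stand-in for beta_function(z,l,n): backward iteration for
    # beta(s+1) = exp(beta(s)) / (1 + exp(-l*s)), started from 0 at z - n.
    x = 0.0
    for j in range(n):
        x = math.exp(x) / (1.0 + math.exp(-l * (z - n + j)))
    return x

def tau_K(z, l, n, k):
    # Error term between beta and the lambda-tetration, iterated from the
    # small end so that deep recursion underflows instead of overflowing.
    if k == 1:
        return -math.log(1.0 + math.exp(-l * z))
    return math.log(1.0 + tau_K(z + 1, l, n, k - 1) / beta_approx(z + 1, l, n)) \
        - math.log(1.0 + math.exp(-l * z))

# the defining rule: beta(z) + tau(z,k) = log(beta(z+1) + tau(z+1,k-1))
l, n, k, z = math.log(2), 50, 5, 0.25
lhs = beta_approx(z, l, n) + tau_K(z, l, n, k)
rhs = math.log(beta_approx(z + 1, l, n) + tau_K(z + 1, l, n, k - 1))
print(lhs, rhs)
```

The two printed numbers should agree to many digits, since the discrepancy is only the convergence error of the backward iteration itself.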
And this will work for large iteration counts \( k \); it's mathematically equivalent to our previous code, but at least we're underflowing rather than overflowing. This gives us the graphs for the function
Code:
beta_function(z,log(2),100) + tau_K(z,log(2),100,100)
where \( -1 \le \Re(z) \le 2 \) and \( |\Im(z)| \le 3 \). Which mathematically looks like,
\(
\log^{\circ 100} \beta_{\log(2)}(z+100)\\
\)
We can see that this algorithm works horribly near the real line. This is because our multiplier is \( \log(2) \), and so the beta function grows fastest on the real line. Away from the real line, it looks good in this graph. This window is about the size of the period of this tetration, which is \( 2\pi i / \log(2) \), so just expect this to repeat off to infinity in either direction. You can clearly see (we're 100 iterations deep) that this converges for imaginary arguments. The trouble comes when we start to overflow and short-circuit.
You can see perfectly, from the contour MATLAB draws, what this tetration looks like on the real line. But as you move away, it all short-circuits and just plain overflows.
And this is how it overflows, in a nice Leau petal looking thing.
So, what I especially need help with is understanding how to reduce this to computing Taylor coefficients. Now, I'm perfectly capable of providing the Taylor coefficients \( a_k = \beta_\lambda^{(k)}(s_0) \) such that,
\(
\beta_\lambda(s) = \sum_{k=0}^\infty a_k\frac{(s-s_0)^k}{k!}\\
\)
What I need is an effective way of computing the Taylor coefficients of,
\(
1/\beta_\lambda(s) = \sum_{k=0}^\infty b_k\frac{(s-s_0)^k}{k!}\\
\)
Since \( \beta_\lambda(s) \) is non-zero, these coefficients always exist. Again, though, I'm effectively naive about how the hell you program that. After this, we're talking about creating the Taylor series,
\(
\sum_{k=0}^\infty c_k^n \frac{(s-s_0)^k}{k!} = -\log(1+e^{-\lambda s}) + \log(1+\sum_{k=0}^\infty \sum_{j=0}^k \binom{k}{j}c_{j}^{n-1}b_{k-j} \frac{(s+1-s_0)^k}{k!})\\
\)
The trouble is, I don't have a clue how to program this. Not necessarily because it's difficult to program, but because I'm just that bad a programmer. How would one even begin to do this? Any clues, or pointers to how-tos or literature, would be welcome; or maybe it's just obvious. I prefer using MATLAB, largely because its GUI makes things that much easier, and raw coding in Pari-GP would be too much of a leap for me. I need baby steps.
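For what it's worth, one place to begin, sketched in Python (all names are mine, and everything translates to MATLAB loops directly). Three primitives cover the two displays above: a reciprocal-series recurrence for the \( b_k \) (from the Cauchy-product identity \( \sum_{j=0}^k \binom{k}{j} a_j b_{k-j} = 0 \) for \( k \ge 1 \)), a Taylor shift by 1 (for the \( s+1-s_0 \) recentering), and log-of-a-series via \( f\,(\log f)' = f' \). Note the convention: reciprocal works with derivative-style coefficients as in the post, while the other routines use ordinary coefficients \( u_k = c_k/k! \); convert by multiplying or dividing by \( k! \).

```python
import math

N = 16  # truncation order: keep powers x^0..x^N of x = s - s0

def reciprocal(a):
    """Derivative-style coefficients b_k of 1/f from a_k = f^(k)(s0),
    a[0] != 0, using sum_j C(k,j) a_j b_{k-j} = 0 for k >= 1."""
    b = [0.0] * (N + 1)
    b[0] = 1.0 / a[0]
    for k in range(1, N + 1):
        b[k] = -sum(math.comb(k, j) * a[j] * b[k - j]
                    for j in range(1, k + 1)) / a[0]
    return b

def mul(u, v):
    """Cauchy product of truncated series (ordinary coefficients)."""
    w = [0.0] * (N + 1)
    for i in range(N + 1):
        for j in range(N + 1 - i):
            w[i + j] += u[i] * v[j]
    return w

def shift1(u):
    """Coefficients of p(x+1) from those of p(x): the (s+1-s0) -> (s-s0)
    recentering, truncated at order N."""
    return [sum(math.comb(j, k) * u[j] for j in range(k, N + 1))
            for k in range(N + 1)]

def log_series(f):
    """Series of log(f(x)) for f[0] > 0, from f * (log f)' = f'."""
    g = [0.0] * (N + 1)
    g[0] = math.log(f[0])
    for k in range(N):
        s = sum(f[j] * (k + 1 - j) * g[k + 1 - j] for j in range(1, k + 1))
        g[k + 1] = ((k + 1) * f[k + 1] - s) / ((k + 1) * f[0])
    return g

def minus_log1p_exp(lam, s0):
    """Series of -log(1 + e^{-lam*s}) about s0 (ordinary coefficients)."""
    u = [math.exp(-lam * s0) * (-lam) ** k / math.factorial(k)
         for k in range(N + 1)]
    u[0] += 1.0                       # 1 + e^{-lam*s}
    return [-c for c in log_series(u)]

def tau_step(c_prev, b, lam, s0):
    """One pass c^{n-1} -> c^n of the recursion above; c_prev and b are
    ordinary coefficients about s0 (b being the series of 1/beta)."""
    inner = shift1(mul(c_prev, b))    # tau^{n-1}(s+1) / beta(s+1)
    inner[0] += 1.0
    return [x + y for x, y in zip(minus_log1p_exp(lam, s0),
                                  log_series(inner))]
```

Iterating tau_step from the k = 1 seed (the series of \( -\log(1+e^{-\lambda s}) \)) then produces successive convergents entirely at the expansion point \( s_0 \), with no astronomically large values anywhere. This is only a sketch of the bookkeeping, not tested against the actual beta series.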
So I'm asking, mostly, whether there are any resources you as a programmer can suggest for handling Taylor series, in a speak-to-me-like-I'm-in-kindergarten kind of manner. Also, if anyone has any suggestions or different methods of approach, they are greatly appreciated. I am at a loss on how to program this. But the more I fiddle, the more it's patently obvious this function is holomorphic, and that it solves tetration. And furthermore, it's not Kouznetsov's or Kneser's, on the basic principle that the math is telling me the final tetration diverges as \( |\Im(z)| \to \infty \), whereas both Kouznetsov's and Kneser's tend to a fixed point as \( \Im(z) \to \infty \).
Now, it's important to remember that this isn't the tetration we want. The tetration we want is,
\(
\lim_{k\to\infty} \log^{\circ k} \beta_{\sqrt{1+s+k}}(s+k)\\
\)
Solving this coding problem doesn't solve the problem for the actual tetration we want. But I need a much better grasp of how to code an efficient, let's say, Taylor-series-grabbing routine before I approach coding the actual tetration.
Again, this is to code the intermediary tetrations which we effectively paste together to get the right tetration.
Regards, James.
It's also important to remember that these graphs solve the equation \( e^{F_\lambda(s)} = F_\lambda(s+1) \). If we make a similar graph using the code:
Code:
function f = TET_L(z,l,n,k)
    % Base strip: -1 < Re(z) <= 0, where beta + tau converges well.
    if (-1 < real(z)) && (real(z) <= 0)
        f = beta_function(z,l,n) + tau_K(z,l,n,k);
        return
    end
    % Otherwise recurse back into the strip and exponentiate up.
    f = exp(TET_L(z-1,l,n,k));
end
Which calculates the value for, say,
\(
\log^{\circ 100} \beta_\lambda(s+100)\,\,\text{for}\,\,-1< \Re(s) \le 0\\
\)
And simply applies \( \exp \) to increase the real argument; we still get pretty much the same graph.
I don't know how the hell Sheldon does it!
Lmao!
Regards, James