I recommend reading James's paper first, in particular to see that everything is well defined and that the crucial things are proven.
Ok, so let's quickly evaluate what James Nixon has achieved and considered.
The main (auxiliary) function is

\( \phi(s) = e^{s-1 + e^{s-2 + e^{s-3 + \cdots}}} \)

or

\( \phi(s) = \lim_{n \to \infty} \phi_n(s), \quad \phi_n(s) = e^{s-1 + e^{s-2 + \cdots + e^{s-n}}} \)

So that

\( \phi(s+1) = e^{s + \phi(s)} \)

And then he uses it to define tetration like:

\( \mathrm{tet}(s) = \lim_{n \to \infty} \log^{\circ n}\!\big(\phi(s+n)\big) \)
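As a sanity check, here is a minimal numerical sketch (the function name `phi` and the nested-exponential truncation form \( e^{s-1+e^{s-2+\cdots+e^{s-n}}} \) are my reading of the construction) showing that the truncations satisfy the fundamental equation \( \phi(s+1) = e^{s+\phi(s)} \) up to truncation error:

```python
import math

def phi(s, n=40):
    """Truncation phi_n(s) = e^{s-1+e^{s-2+...+e^{s-n}}} (assumed form of
    the auxiliary function; built from the innermost term outward)."""
    acc = 0.0
    for j in range(n, 0, -1):
        acc = math.exp(s - j + acc)
    return acc

# fundamental equation: phi(s+1) = e^{s+phi(s)}
for s in [0.0, 0.5, 1.0]:
    print(s, phi(s + 1), math.exp(s + phi(s)))
```

The innermost terms are of size \( e^{s-n} \), so for real s the truncations converge very fast.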
First I would like to remark 2 things.
There is a simple second way to define or view his auxiliary function:

Let

\( \phi_1(s) = e^{-1} e^{s}, \quad \phi_n(s) = e^{-1} e^{s} \exp\!\big(\phi_{n-1}(s-1)\big) \)

and notice they are all positive!

Also note that exp(x) takes any real x to a positive real.
For some this might be easier to comprehend or to use in proofs, for others not.
But the main thing is that this shows:

1) every truncation can be written as a Maclaurin series (a Taylor series expanded at 0) with strictly positive coefficients.

2) Since this function has been proven to be entire (in the paper), this implies not only that every truncation can be written with nonnegative coefficients, but also that the limit can.
Thus

\( \phi(s) = \sum_{n=0}^{\infty} p_n s^n \)

where the p_n are all positive reals.
All the derivatives of phi are thus also strictly positive and increasing for positive real s, and they have the same Taylor property.
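Claim 1) can be checked directly with formal power series: the exponential of a series is given by the standard recurrence \( c_k = \frac{1}{k}\sum_{j=1}^{k} j\, b_j c_{k-j} \), and the tower is built from the inside out. A sketch (all names are mine, and the nested-exponential truncation form is assumed):

```python
import math

def exp_series(b, N):
    # c = exp(b) as a truncated power series:
    # c' = b'c  =>  c_k = (1/k) * sum_{j=1..k} j*b_j*c_{k-j}
    c = [0.0] * N
    c[0] = math.exp(b[0])
    for k in range(1, N):
        c[k] = sum(j * b[j] * c[k - j] for j in range(1, k + 1)) / k
    return c

def phi_truncation_series(n, N=12):
    """Maclaurin coefficients (in s) of phi_n(s) = e^{s-1+e^{s-2+...+e^{s-n}}}."""
    b = [0.0] * N
    b[0], b[1] = -float(n), 1.0        # innermost exponent: s - n
    for k in range(n, 0, -1):
        c = exp_series(b, N)           # exponentiate the current exponent
        if k == 1:
            return c                   # this is phi_n itself
        b = c[:]                       # next exponent: s - (k-1) + tower
        b[0] += -(k - 1)
        b[1] += 1.0

p = phi_truncation_series(6)
print(p[:5])
print(all(x > 0 for x in p))
```

Since every step is "add an affine term with positive s-coefficient, then exponentiate", positivity of all coefficients propagates through the recurrence, which is exactly what the check shows.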
***
For those interested: this implies James has also solved at least one problem from fake function theory, and fake function theory is somewhat involved.

We could say the same about Laplace transforms, Bernstein's theorems, etc.

Or in other words: this is good. We have some understanding of these types of functions, both from classical math and from ideas on this forum.

For numerical algorithms and convergence this is also nice.
***
Secondly, this function phi is periodic with the same period as exp(x), namely \( 2 \pi i \).

This was already noted in the paper of course, but together with the above and the property that phi is an entire function, this is quite powerful.
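That periodicity is easy to see numerically too: in the (assumed) nested-exponential truncation, s enters only through exp, so shifting s by \( 2\pi i \) changes nothing. A sketch:

```python
import cmath, math

def phi(s, n=40):
    # truncation of the auxiliary function (innermost term first)
    acc = 0j
    for j in range(n, 0, -1):
        acc = cmath.exp(s - j + acc)
    return acc

s = 0.3
print(phi(s), phi(s + 2j * math.pi))  # equal: phi has period 2*pi*i
```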
Ok.
When trying to "visualize" or "understand" an analytic function or analytic continuation, I always (also) think in terms of copies or multiplicities.

How many solutions does f(z) = y (for fixed y) have in a certain "modest" area? What are the local invariants? Symmetry?

The equation f(z) = y (for fixed y) has only a finite number of solutions within a disk of nonzero radius if f is analytic on the closed disk (and not identically equal to y).

(IF it does not, then f cannot be analytic everywhere on the closed disk!)
When we consider Riemann surfaces this matters.
We also know that the inverse of a locally analytic function is also locally analytic (wherever the derivative is nonzero).
Now let's consider what I call " relativity " in math.
Take for example the square root.
a^2 = b
Now a is (locally) an analytic function of b, and vice versa.
We can define or represent a (or b) as
1) a series expansion
2) an integral
3) an iterative algorithm
4) an equation
etc
They might not converge in the same region. But when they do converge, they should always give one of the correct solutions. And they do.

And the point is that the analytic continuation, the second solution (the minus square root) or another branch are equal and equivalent in meaning and value for each representation,

AND the same for ALL representations of them.
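For the square root this "relativity" can be made concrete; a minimal sketch comparing representations 1), 3) and 4) at a point where they all converge (the integral representation is skipped):

```python
import math

b = 1.3  # a point where all three representations converge

# 1) series expansion: sqrt(1+x) = sum_k C(1/2, k) x^k with x = b - 1
x = b - 1.0
coef, s1 = 1.0, 0.0
for k in range(60):
    s1 += coef * x ** k
    coef *= (0.5 - k) / (k + 1)   # C(1/2, k+1) from C(1/2, k)

# 3) iterative algorithm: Heron / Newton iteration a <- (a + b/a)/2
a = 1.0
for _ in range(40):
    a = 0.5 * (a + b / a)

# 4) the equation a^2 = b, solved by the library routine
s3 = math.sqrt(b)

print(s1, a, s3)  # all three agree on the principal branch
```

Starting the series at b = 1 and the iteration at a = 1 selects the positive branch in both cases; starting the iteration at a = -1 would select the minus square root instead, consistently with the other representations of that branch.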
This is very important, because "objections" often take the form: this iteration is chaotic or divergent or not well defined, this sum diverges, this is only defined for reals, etc.
Consequently this relativity is an important concept and a powerful one.
In case you wonder about divergent sums: there are summability methods that rely on algebraic equations, analytic continuations, etc.

If we have convergence in a nonzero radius, we can apply "relativity" and switch to other representations.
***
If we have no convergence, we might add a parameter and only later try to apply relativity / other representations / continuations. For instance with a sum that diverges everywhere. This is related to perturbation theory.

HOWEVER in this case the equivalence is no longer guaranteed.

(THIS IS A REMARK THOUGH, AND IRRELEVANT FOR THIS POST.)
***
Another way to put it and define "relativity" informally is this :
" analytic continuation = algebraic continuation "
***
I often say that.
I feel this connects all branches of math, but I guess that is perhaps philosophy.
***
Anyways it thus makes sense to consider this :
And then by recursion and induction we get
by using log(a + b) = log(a) + log(1 + b/a) we get
(Notice that up to here we could have done the analogue with the sinh method, and in fact we could try the things below too. But for phi things work out nicer.)
by using the fundamental equation of phi:

\( \phi(s+1) = e^{s + \phi(s)} \)
we can simplify without any problems ;
and finally
NOW apply the Banach fixed point theorem and relativity to notice that the recursion of these functions r_1(s), r_2(s), ... actually converges to a constant rather than a non-constant function.

The conditions for the Banach fixed point theorem are clearly met for positive real s, since we have an analytic contraction.
NOW we can therefore consistently define

\( V(s) = \lim_{n \to \infty} r_n(s) \)

We arrive at

\( V = s + \ln\!\left(1 + \frac{V}{\phi(s+1)}\right) \)
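Assuming the recursion has the form suggested by the fixed-point equation in the case analysis below, \( r_{n+1}(s) = s + \ln(1 + r_n(s)/\phi(s+1)) \) (my reading of the derivation), the contraction can be watched numerically:

```python
import math

def phi(s, n=40):
    # truncation of the auxiliary function (innermost term first)
    acc = 0.0
    for j in range(n, 0, -1):
        acc = math.exp(s - j + acc)
    return acc

# iterate r -> s + ln(1 + r/phi(s+1)); for positive real s the map is a
# contraction (derivative 1/(phi(s+1)+r) < 1), so the iterates settle
# on a fixed point V
s = 0.5
c = phi(s + 1)
r = 0.0
for _ in range(200):
    r = s + math.log(1.0 + r / c)
print(r)
```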
Notice this equation contains functions that are analytic almost everywhere. [1]

And remember: the inverse of a locally analytic function is also locally analytic. [2]

Now from the concept of relativity we know we only need to solve this equation for V, and by [1] and [2] the solution is defined and analytic for most complex numbers too.
We continue ;
We split the problem into 2 cases.
1) V = 0
if V = 0 then 0 = s + ln(1 + 0), since phi(s+1) is never 0.

Therefore V = 0 = s + ln(1) = s, i.e. s = 0.
2) if V and s are nonzero we get:

\( V = s + \ln\!\left(1 + \frac{V}{t s}\right) \)

where t = t(s) = \( \phi(s+1)/s \) (notice s is nonzero!).

t(s) is at worst a meromorphic function: it can only have a pole at s = 0, because \( \phi(s+1) \) is entire and never zero.

SO we are left to solve this:

\( t s \, e^{V - s} = t s + V \)

Using the Lambert-W function we get

\( V = -ts - W\!\left(-ts\, e^{-ts - s}\right) \)

But ts is just \( \phi(s+1) \), so

\( V = -\phi(s+1) - W\!\left(-\phi(s+1)\, e^{-\phi(s+1) - s}\right) \)

So we have a closed form using phi and Lambert-W, as claimed (using \( \phi(s+1)\, e^{-s} = e^{\phi(s)} \)):

\( V(s) = -\phi(s+1) - W\!\left(-e^{\phi(s) - \phi(s+1)}\right) \)
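A numerical check of the closed form (a sketch: `lambert_w` is a hand-rolled principal-branch Newton iteration, and the fixed-point equation \( V = s + \ln(1 + V/\phi(s+1)) \) is my reading of the case analysis above):

```python
import math

def phi(s, n=40):
    # truncation of the auxiliary function (innermost term first)
    acc = 0.0
    for j in range(n, 0, -1):
        acc = math.exp(s - j + acc)
    return acc

def lambert_w(z, w=0.0):
    # principal branch of W(z) for z > -1/e, via Newton on w*e^w = z
    for _ in range(100):
        ew = math.exp(w)
        w -= (w * ew - z) / (ew * (1.0 + w))
    return w

s = 0.5
c = phi(s + 1)
V = -c - lambert_w(-math.exp(phi(s) - c))   # proposed closed form
residual = V - (s + math.log(1.0 + V / c))  # plug back into the equation
print(V, residual)
```

Note that different branches of W give different solutions of the same equation; the principal branch is used here, so this V need not be the fixed point the iteration converges to.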
Notice this also gives us a way to compute the difference operator over phi , although I am uncertain how practical it is.
But let us continue
\( \phi(s+1) = e^{s + \phi(s)} \)

therefore

\( e^{-s} = e^{\phi(s)} / \phi(s+1) \)

and thus

\( -\phi(s+1)\, e^{-\phi(s+1) - s} = -e^{\phi(s) - \phi(s+1)} \)

So if we take

\( M(s) = -\phi(s+1) + \phi(s) \)

then we get

\( V = -\phi(s+1) - W\!\left(-e^{M(s)}\right) \)
Now let L = x - W( - exp(x) )
then
x = L - exp(L)
and

\( V(s) = M(s) - W\!\left(-e^{M(s)}\right) - \phi(s) \)
M(s) is also an entire function, so we also have a more or less practical way to compute the slog (the functional inverse of this kind of tetration) too!
notice the integer solution L = 0 and x = -1 for x = L - exp(L).
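A quick check that L = x - W(-exp(x)) and x = L - exp(L) really invert each other on the principal branch (a sketch; `lambert_w` is a hand-rolled Newton iteration, valid for arguments above -1/e, hence x < -1 here):

```python
import math

def lambert_w(z, w=0.0):
    # principal branch of W(z) for z > -1/e, via Newton on w*e^w = z
    for _ in range(100):
        ew = math.exp(w)
        w -= (w * ew - z) / (ew * (1.0 + w))
    return w

def roundtrip(x):
    # L = x - W(-e^x); the claim is that then x = L - e^L
    L = x - lambert_w(-math.exp(x))
    return L - math.exp(L)

for x in [-3.0, -2.0, -1.3]:
    print(x, roundtrip(x))   # second column reproduces x
```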
***
also

let

then

and \( e^{s} = \phi(s+1)\, e^{-\phi(s)} \)

so

\( = M(s) - W\!\left(-e^{M(s) - \phi(s)}\, \phi(s+1)\right) = M(s) - W\!\left(-e^{-\phi(s+1)}\, \phi(s+1)\right) \).

hence

\( = -W\!\left(-e^{-\phi(s+1)}\, \phi(s+1)\right) \).
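Note the right-hand side has the shape \( -W(-c\, e^{-c}) \) with \( c = \phi(s+1) \). On the principal branch \( -W(-c e^{-c}) = c \) whenever \( 0 < c \le 1 \), so in that regime the expression simply returns \( \phi(s+1) \). A quick check of that W identity (a sketch, staying away from the branch point at c = 1):

```python
import math

def lambert_w(z, w=0.0):
    # principal branch of W(z) for z > -1/e, via Newton on w*e^w = z
    for _ in range(100):
        ew = math.exp(w)
        w -= (w * ew - z) / (ew * (1.0 + w))
    return w

# -W(-c e^{-c}) recovers c on the principal branch when 0 < c <= 1
for c in [0.2, 0.5, 0.9]:
    print(c, -lambert_w(-c * math.exp(-c)))
```

For c > 1 the principal branch returns the conjugate point c* < 1 with c* e^{-c*} = c e^{-c} instead, which is exactly the kind of branch issue warned about below.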
***
Of course branches of logarithms, inverse functions of phi or M, and of course Lambert-W are not considered in this sketchy overview.

So be careful with those issues.

But I think if your branches are correct for the real case, they will not pose a problem for complex numbers close to the real line,

and by continuation a solid formal solution on the complex plane can be achieved.

None of these computations are super hard compared to many complicated solutions or Riemann mappings (like Kneser's tetration solution).

A deeper study of phi(s) would be desirable, in particular the Riemann surface of its functional inverse.
Pictures would be nice too.

Some day students might have a phi button on their calculators.

I would like that.
tommy1729
truth is what does not go away when you stop believing in it
Tom Marcel Raes
Sorry for the delay in posting this.

Feel free to ask or comment.