[edit 25.8, minor textual corrections]
Hmm, my thoughts may be amateurish, but I currently have time for a bit of discussion of them.
What I called "my matrix method" is nothing more than a concise notation for the manipulation of the exponential series which constitute the integer iterates x, s^x, s^s^x, ... .
It turns out that the coefficients of the series, which has to be summed to get the value of
\( s^{s^{\dots^{s^x}}} \) (with m-fold occurrence of s),
have a complicated structure, which is nevertheless iterative with a simple scheme.
It is a multisum, difficult to write out, so I indicate it for the first three iterates m=1, m=2, m=3:
m=1:
m=2:
m=3:
For m>2 the binomial term expands to a multinomial term.
When denoting

then the notation for the general case m>1 can be shortened:
m arbitrary, integer > 0:
(I hope I didn't make an index error.)
This formula can also be found in the literature (a wrong reference originally given here was removed).
Convergence:
Because we know that power towers of finite height are computable by
exponential series, we also know that for the finite case this sum
expression converges.
This is due to the weight which the factorials in the denominators accumulate.
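As a small numerical illustration (my own sketch, not part of the original derivation): a tower of finite height m can be evaluated by applying the truncated exponential series m times, and the factorials in the denominators make the truncated sums settle quickly. The base, argument, height and truncation order below are arbitrary example values.

```python
from math import factorial, log

N = 40                 # truncation order of the exponential series
s, x, m = 1.5, 0.5, 3  # example base, argument and height

ls = log(s)

def tower_direct(x, m):
    # s^s^...^s^x with m occurrences of s, evaluated directly
    for _ in range(m):
        x = s ** x
    return x

def tower_series(x, m):
    # the same value, but each step s^y = sum_n (y*ln s)^n / n!
    # is summed as a truncated exponential series
    for _ in range(m):
        x = sum((x * ls) ** n / factorial(n) for n in range(N))
    return x

print(tower_series(x, m), tower_direct(x, m))  # the two values agree closely
```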
Bound for base-parameter s:
However, in the case of infinite height we already know
that log(s) is limited to -e < log(s) < 1/e, i.e. e^(-e) < s < e^(1/e).
Differentiability:
In the general formula above the parameter s occurs as the n-th power
of its logarithm (divided by n!). This immediately provides
infinite differentiability with respect to s.
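This can be checked term by term (again a quick sketch of mine, not from the original post): differentiating the series for s^x = sum_n (x log s)^n / n! with respect to s and summing the differentiated terms reproduces the closed form x*s^(x-1). The sample values of s and x are arbitrary.

```python
from math import factorial, log

N = 40           # truncation order
s, x = 1.3, 0.7  # arbitrary sample point

def d_ds(s, x):
    # term-by-term derivative of sum_n (x*log(s))^n / n! with respect to s,
    # using d/ds (log s)^n = n*(log s)^(n-1)/s
    ls = log(s)
    return sum(x ** n * n * ls ** (n - 1) / (factorial(n) * s)
               for n in range(1, N))

print(d_ds(s, x), x * s ** (x - 1))  # both ~ 0.647
```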
Differentiation with respect to the height parameter m could be described
when the eigensystem decomposition is used (see below)
(but maybe I misunderstood the ongoing discussion about this topic
completely - I kept out of this discussion due to my lack
of experience with matrix differentiation :-( )
All in all, I think that this description is undoubtedly the unique definition
of tetration of integer heights in terms of power series in s and x.
------------------------------------------------
For the continuous case my idea was that it may be determined
either by eigenvalue decomposition of the involved matrices or,
where this is impossible due to inadmissible parameters s,
by the matrix logarithm.
The eigenvalue decomposition has the disadvantage that its
properties are not yet analytically known. The matrix logarithm
is sometimes an option when the eigenvalue decomposition
fails, since it seems that its resulting coefficients can be
determined more easily.
An example is the U-iteration U_s^(m)(x) = s^U_s^(m-1)(x) - 1
with s=e, which we discuss here as the "exp(x)-1" version. For this
case Daniel already provided an array of coefficients for a series
which allows one to determine the U-tetration U_s^(m)(x) for fractional,
real and even complex parameters m.
These coefficients seem to be analytically derived/derivable; only
the generating scheme was not documented. However, I provided a method
to produce these coefficients for an arbitrary (but finite) number of
terms of the series for U_s^(m)(x) by symbolic exponentiation of
the parametrized matrix logarithm, so at least that is a *basis* for
the analysis of convergence and summability of the occurring series.
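To illustrate the matrix-logarithm route for the exp(x)-1 case (my own sketch; the truncation order and the all-rational arithmetic are choices of mine, not taken from Daniel's coefficients): build the truncated Carleman-type matrix of f(x) = exp(x)-1, take its matrix logarithm, and exponentiate half of it. Since f has the fixpoint 0 with f'(0)=1, the matrix is triangular with unit diagonal, so log and exp reduce to finite sums in the truncated algebra, and row 1 of the half power yields the series coefficients of the half-iterate.

```python
from fractions import Fraction
from math import factorial

N = 6  # truncation order of the power series

def mul_series(a, b):
    # Cauchy product of two truncated power series
    c = [Fraction(0)] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < N:
                c[i + j] += ai * bj
    return c

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

# f(x) = exp(x) - 1, truncated to order N
f = [Fraction(0)] + [Fraction(1, factorial(k)) for k in range(1, N)]

# Carleman-type matrix: row k holds the series coefficients of f(x)^k,
# so B maps the column vector (1, x, x^2, ...) to (1, f, f^2, ...);
# B^m then encodes the m-th iterate of f
B, p = [], [Fraction(1)] + [Fraction(0)] * (N - 1)
for k in range(N):
    B.append(p[:])
    p = mul_series(p, f)

I = [[Fraction(1 if i == j else 0) for j in range(N)] for i in range(N)]
Nil = [[B[i][j] - I[i][j] for j in range(N)] for i in range(N)]  # nilpotent part

# L = log(B) = Nil - Nil^2/2 + Nil^3/3 - ... (a finite sum: Nil^N = 0)
L = [[Fraction(0)] * N for _ in range(N)]
P = [row[:] for row in Nil]
for i in range(1, N):
    for r in range(N):
        for c in range(N):
            L[r][c] += Fraction((-1) ** (i + 1), i) * P[r][c]
    P = matmul(P, Nil)

def mat_power(h):
    # B^h = exp(h * log(B)), again a finite sum since L is nilpotent
    R, T = [row[:] for row in I], [row[:] for row in I]
    for i in range(1, N):
        T = matmul(T, L)
        for r in range(N):
            for c in range(N):
                R[r][c] += h ** i / factorial(i) * T[r][c]
    return R

half = mat_power(Fraction(1, 2))
# row 1 holds the series coefficients of the half-iterate of exp(x)-1:
# x + x^2/4 + x^3/48 + ...
print(half[1][:4])
```

Composing the half power with itself reproduces B exactly within the truncation, which is the defining property of the fractional iterate.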
Except for such borderline cases I would prefer the definition of
continuous tetration based on the eigenanalysis of the involved
matrix Bs. From the convergent cases -e < log(s) < 1/e we have good
numerical evidence that the eigenvalues are the consecutive powers of log(h(s)),
where h(s) is Ioannis Galidakis' h-function.
If that hypothesis can be verified, then differentiation with respect
to the height parameter m reduces to the observation
that m occurs as a power in the exponents of the set of eigenvalues,
and that this is then also infinitely often differentiable.
The structure of the eigenvector matrices is also not yet analytically known,
although the numerical approximations are already usable, so that
approximations for real or complex parameters m (for s in the admissible
range) can simply be computed from the real or complex powers of the
set of eigenvalues.
Well, I already found the structure of two eigenvectors - they are
solutions of
V(x)~ * Bs = V(x)~ * 1 // where V(x) is of the type [1,x,x^2,x^3,...]
A(x)~ * Bs = A(x)~ * eigenvalue_2 // where A(x) is of the type [0,1*x,2*x^2,3*x^3,...]
A(x) may be written as A(x) = x * V(x)' = x * dV(x)/dx, so
V(x)'~ * Bs = V(x)'~ * eigenvalue_2 // differentiating the first relation w.r.t. x
X ~ * Bs = X ~ * eigenvalue_k // structure of X unknown yet
and as such the eigenvector to the eigenvalue eigenvalue_1 = 1 seems simply to reflect
the concept of "fixpoints".
For s = sqrt(2) we interestingly have 2 possible eigenvectors for the same eigenvalue 1
(which actually occurs only once in the spectrum):
V(2)~ * Bs = V(2)~ * 1
V(4)~ * Bs = V(4)~ * 1
so this is another point for further discussion...
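This can be checked numerically (my sketch; the truncation size and the number of compared entries are pragmatic choices of mine): with the truncated matrix Bs for s = sqrt(2), both V(2)~ and V(4)~ reproduce themselves under multiplication by Bs up to truncation error, since 2 and 4 are the two fixpoints of sqrt(2)^x.

```python
from math import factorial, log

N = 32         # truncation size of the Carleman matrix
s = 2 ** 0.5   # base sqrt(2); its fixpoints s^x = x are x = 2 and x = 4
ls = log(s)

# Truncated matrix Bs in the row-vector convention V(x)~ * Bs = V(s^x)~
# with V(x) = [1, x, x^2, ...]: column j holds the exponential series of
# s^(x*j), i.e. Bs[k][j] = (j*ln s)^k / k!
Bs = [[(j * ls) ** k / factorial(k) for j in range(N)] for k in range(N)]

def V(x):
    return [x ** k for k in range(N)]

def row_times_matrix(v, M):
    return [sum(v[k] * M[k][j] for k in range(N)) for j in range(N)]

# both fixpoints give (approximate) eigenvectors to the eigenvalue 1:
# V(x)~ * Bs = V(s^x)~ = V(x)~ whenever s^x = x
for x in (2.0, 4.0):
    w = row_times_matrix(V(x), Bs)
    # compare the first few entries, where the truncation error is tiny
    print([round(w[j] / x ** j, 6) for j in range(5)])
    # -> [1.0, 1.0, 1.0, 1.0, 1.0] for both fixpoints
```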
The eigenvector for the second eigenvalue has a simple structure,
but for higher indices I have not been successful yet.
So far my opinions/thoughts
Gottfried