andydude Wrote:Have you tried computing or plotting or yet? I would do this but I don't have any code yet for AS(x), and I'm lazy.

Andrew Robbins

Yes, I fiddled with this a bit.

First, it is obvious that in

Sb(x) = x - b^x + b^b^x - ... + ...

there is some "self-similarity" if x itself is a powertower of base b:
shifting the tower by one step gives the functional equation Sb(b^x) = x - Sb(x), so, writing s0 = Sb(0),

Sb(0) = s0
Sb(1) = -s0
Sb(b) = 1 + s0
Sb(b^b) = b - 1 - s0
and so on.

For the fixpoints, let t0,t1,t2,... be the fixpoints for base b,
t_k = h_k(b)

Then, since eta(0) = 1/2,

Sb(t) = t - t + t - t + ... - ... = t * eta(0) = t/2
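The fixpoint value t/2 and the tower-shift relation implicit in the table above (Sb(b^x) = x - Sb(x)) are easy to check numerically. The sketch below is my own illustration, not code from the post: it sums the divergent alternating tower series by repeatedly averaging adjacent partial sums, an Euler-type summation, for the test base b = sqrt(2) with fixpoint t0 = 2.

```python
import math
from itertools import accumulate

def euler_sum(terms):
    """Sum an (eventually) alternating series by repeatedly
    averaging adjacent partial sums (an Euler-type transform)."""
    s = list(accumulate(terms))
    while len(s) > 1:
        s = [(a + b) / 2 for a, b in zip(s, s[1:])]
    return s[0]

def S(b, x, n=40):
    """Sb(x) = x - b^x + b^b^x - ... + ..., Euler-summed."""
    terms, sign = [], 1.0
    for _ in range(n):
        terms.append(sign * x)
        x, sign = b ** x, -sign
    return euler_sum(terms)

b = math.sqrt(2.0)
print(S(b, 2.0))             # fixpoint t0 = 2 (since b^2 = 2): gives t0/2 = 1.0
print(S(b, b) + S(b, 1.0))   # tower-shift relation: Sb(b^1) + Sb(1) = 1
```

The first value is exact (all terms equal 2); the second shows the shift relation holds to high accuracy under this summation.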

I've added some graphs for different bases and varying x, marking x = t0, t1 and x = 0, x = 1, x = b.

It would be interesting to show graphs for negative heights too, i.e. iterated logarithms...

Gottfried Helms, Kassel
I'm extending a note which I posted to sci.math.research today.
I hope to find a solution of the problem by reconsidering the structure of the matrix of Stirling numbers of the 1st kind, which may contain "infinitesimals" that become significant when infinite sums of its powers are taken.

------------------------------- (text is a bit edited) -----------------
Due to a counterexample by Prof. Edgar (see sci.math) I have to withdraw this conjecture.

The error may essentially be due to a misconception about the matrix of Stirling numbers of the 1st kind and the infinite series of its powers.

It is perhaps similar to the problem with the infinite series of powers of the Pascal matrix P, which could be cured by assuming a non-negligible infinitesimal in the first upper subdiagonal.

Assume for an entry of the first upper subdiagonal in row r (r beginning at zero) of the Pascal matrix P

which is an infinitesimal quantity and appears as zero in all usual applications of P. But if it is used in an operation involving infinite series of consecutive powers, then in the matrix Z, containing this sum of all consecutive powers of P, we get the entries

By defining

assuming this leads to the non-negligible rational quantities in that subdiagonal of the sum-matrix

With that correction the infinite series of powers of the Pascal matrix then leads to a correct matrix,

including the above definitions for the first upper subdiagonal; it provides the coefficients of the (integrals of the) Bernoulli polynomials and can be used to express sums of like powers, as expected and described by J. Faulhaber and J. Bernoulli.
(for more details see powerseries of P, page 13 ff)

This suggests reconsidering the matrix of Stirling numbers of the 1st kind, with a focus on the existence of a similar structure in there.


Does this sound reasonable? It would require a description of the matrix of Stirling numbers of the 1st kind which allows such an infinitesimal quantity.
But there is one important remark: this matrix contains the coefficients of the series for the logarithm and the powers of the logarithm. A modification of this matrix would then introduce an additional term into the definition of these series. Something hazardous...
Gottfried Helms, Kassel
I've made a little progress with the problem of the deviance of the serially summed alternating series of powertowers of increasing heights ("tetra-series"). Since T- and U-tetration can be mutually converted by a shift of their parameter x, I can concentrate on the U-tetration here. It has the advantage that its operator is a triangular matrix, whose integer powers and eigensystem are easily and exactly computable (within the unavoidable size-truncation for the actual computation).
Denote, for a fixed base b,

lb(x) = log(1+x)/log(b)
ub(x) = b^x - 1

The iterates

ltb(x,h) = ltb(lb(x),h-1)     ltb(x,0)= x
utb(x,h) = utb(ub(x),h-1)     utb(x,0)= x

The infinite alternating sums

ASLb(x) = x - ltb(x,1) + ltb(x,2) - ltb(x,3) + ... - ...
ASUb(x) = x - utb(x,1) + utb(x,2) - utb(x,3) + ... - ...
The matrix-approach suggests computing the AS-series using the geometric series of the U-tetration-matrices S1b and S2b, which I shall call MLb and MUb here. From this the conjecture was derived that
ASLb(x) + ASUb(x) = x    // matrix-computation
MLb + MUb = I            // matrices

However, computing ASLb(x) along its partial sums, with Cesaro- or Euler-summation, gives a different result:
ASLb(x) + ASUb(x) = x + db(x)  // serial-computation

The difference between the matrix- and the serial computation may then be expressed by db(x) alone.


In my previous posts I already mentioned that the deviance between the two methods seems to be somehow periodic with respect to x as the variable parameter.

The first useful result was that I found the periodicity for db(x):
db(x) = - db(ltb(x,1)) = db(ltb(x,2)) = - db(ltb(x,3)) ...

for a few bases and the numerically accessible range of x. The plot also showed a sort of sine curve, but with a somewhat distorted frequency.

Today I could produce a very good approximation to a sine curve, using fractional tetrates for x.
I computed the k = 0..32 fractional U-tetrates of 1 with height-step 1/16,
x_k = ltb(1,k/16)
for base b = sqrt(2), and instead of x I used the index k as the x-axis for the plot.
This provided a very good approximation to a sine curve for db(x_k).

In the plot I display the two nearby lines for ASLb(x_k) and ASUb(x_k), the curve db(x_k), and an overlaid sine curve whose parameters I set manually by inspection.
The curves for db(x_k) and sin() match very well; I also added a plot of their difference.

If I can manage to make my procedures more handy, I'll check the same for more parameters. The near match of the curves apparently gives good hope that this line of investigation may be profitable...

[updated image]


Error curve (deviation of db(x) from the overlaid sine curve)

Gottfried Helms, Kassel
I started rethinking the inconsistency of the tetra-series with increasing negative heights.
First - I don't remember whether I've already linked my compilation concerning this problem. Here is the link: tetraseries-problem

Then I realized that, using a fixpoint, we'll have a special case.
Recall the definition
ALU (b,x) = x - log(1+x)/log(b) + log(1+log(1+x)/log(b))/log(b) - ... + ...
which means the infinite alternating sum of towers of negative heights.

If x = t is a fixpoint for base b, where b^t - 1 = t or b = (1+t)^(1/t), then clearly this simplifies to
ALU (b,t) = t - t + t - ... + ... = t * eta(0) = t/2   // by Euler- or Cesaro-summation
and the conjecture
ASU(b,t) + ALU(b,t) - t = 0
holds for this case.

I don't know yet how to make this helpful for the repair of my conjecture, but perhaps it gives an idea.


@Andrew (I just also reread your note in the other thread): the needed matrix is simple to compute. Just construct the triangular Bell-matrix U (for decremented iterated exponentiation with base b), and compute
MU = (I + U)^-1
ML = (I + U^-1)^-1
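A minimal numeric sketch of this recipe (my own illustration; the test base b = sqrt(2) and the truncation size N are arbitrary choices): build the truncated triangular Bell matrix U for ub(x) = b^x - 1, take row 1 of MU = (I + U)^-1 as the powerseries coefficients of the alternating sum of positive heights, and compare with direct serial summation.

```python
import math

u = math.log(math.sqrt(2.0))    # base b = sqrt(2), u = log b (test choice)
N = 24                          # truncation size (assumption)

# coefficients of ub(x) = b^x - 1 = exp(u*x) - 1
c = [0.0] + [u**j / math.factorial(j) for j in range(1, N)]

# triangular Bell matrix: U[i][j] = coefficient of x^j in ub(x)^i
U = [[0.0] * N for _ in range(N)]
U[0][0] = 1.0
U[1] = c[:]
for i in range(2, N):
    for j in range(N):
        if U[i-1][j]:
            for k in range(1, N - j):
                U[i][j + k] += U[i-1][j] * c[k]

# row 1 of MU = (I + U)^-1 by forward substitution, r*(I+U) = e_1;
# these r_j are the powerseries coefficients of the alternating sum
r = [0.0] * N
for j in range(N):
    s = (1.0 if j == 1 else 0.0) - sum(r[i] * U[i][j] for i in range(j))
    r[j] = s / (1.0 + U[j][j])

def ub(x): return math.exp(u * x) - 1.0

def asp_serial(x, n=60):
    s, sign = 0.0, 1.0
    for _ in range(n):
        s, x, sign = s + sign * x, ub(x), -sign
    return s

def asp_matrix(x):
    return sum(rj * x**j for j, rj in enumerate(r))

print(asp_matrix(0.25), asp_serial(0.25))   # the two methods agree closely
```

Note r[1] = 1/(1+u), exactly what the functional equation asp(x) = x - asp(ub(x)) forces for the linear coefficient.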
Gottfried Helms, Kassel
Assume for the following a fixed base t for the dexp- or "U"-tetration, so that dxp°h(x) := dxp_t°h(x); for shortness let me replace dxp by U in the following. Also I change some namings from the previous posts for consistency.

Denote the alternating series of U-powertowers of increasing positive heights

and of increasing negative heights

then my conjecture, based on diagonalization was

which was wrong with a certain systematic error

I have now a description for the error d(x), which fits very well.

First, let's formally write as(x) as the two-way-infinite series

Then it is obvious that as(x) is periodic in the height with period 2, if x is expressed as a powertower

where h is an integer and r is the fractional remainder of a number y (mod 2),

and define

Then we may discuss as_r = as(x) as

where r = y (mod 2)

We can then rewrite the formula


My observation was that d_r is sinusoidal in r, with a very good fit (I also checked various bases t).

I got now

where "a" indicates the amplitude and "w" a phase-shift (depending on base t).

The following fits the result very well:

so we could rewrite this as a functional equation for asp

and can also determine all as_r using asn_r for r = 0 and r = 0.5 (the integer and half-integer iterates U°0(1) and U°0.5(1)) only.

Note that the computation of asp_r is exact, using the appropriate matrix (I + Ut)^-1 of the diagonalization-method.
The diagonalization-method deviates only in the part asn_r; it gives asn_r - d_r instead of asn_r. Unfortunately, this does not (yet) allow determining d_r with the diagonalization-method alone.

The benefit of the diagonalization-method is here that its matrix (I + Ut)^-1 provides the coefficients of a powerseries for asp_r, which seems to be the analytic continuation for bases t where the series asp would diverge.
Gottfried Helms, Kassel
Hi -

A few days ago I collected my current results in a message to sci.math and sci.math.research; I think it fits well here, although I stated most of it in earlier posts already, so this may appear boring. However, since the readers of the newsgroups are not familiar with the matrix-/diagonalization concept, I explained the problem there in terms of sums of formal powerseries, which may be interesting here as well.
Additionally I append here some more notes, which are new and possibly focus the problem in a more fruitful way.
Because I'm a bit lazy today I don't convert it into LaTeX - just plain text.

I'd also like to put it into the "open problems" section, but I'm a bit unsure how I could shorten the exposition for that thread by appropriate referencing.


I have a new result for the Tetra-series here, which points
to a more fundamental - but general - effect in the summing of this
type of series.

I discuss the U-tetration instead of the usual T-tetration here,
because the effect under consideration is apparently the same for
T-tetration, and U-tetration is easier to implement using the
triangular matrix-operator.

I usually denote

    Tb  (x)  = b^x   // base-parameter b  
    Tb°0(x)  = x     // base b occurs 0 times
    Tb°1(x)  = b^x   // base b occurs 1 times
    Tb°h(x)  = Tb°(h-1)(Tb(x))
             = b^b^b^...^b^x   // base b occurs h times
    Tb°-1(x) = log(x)/log(b)

    Ut  (x) = t^x -1   // base-parameter t  
    Ut°0(x) = x        
    Ut°1(x) = t^x -1  
    Ut°h(x) = Ut°(h-1)(Ut(x))
    Ut°(-1)(x) = log(1+x)/log(t)

For the discussion of the series we assume a fixed base-parameter t
here, so I omit it in the notation of the U-tetration-function in
the following.
Also I restrict myself to bases t for which all of the following series
are conventionally summable using Cesaro/Euler-summation.

The series under discussion are

           (U-powertowers of increasing positive heights)
asup(x) = x - (t^x -1) + (t^(t^x - 1) -1) - ... +...
         = sum {h=0..inf} (-1)^h * U°h(x)

           (U-powertowers of increasing negative heights)
asun(x) = x - log(1+x)/log(t) + log(1+ log(1+x)/log(t))/log(t) -...+...
         = sum {h=0..inf} (-1)^h * U°-h(x)

           (all heights)
asu(x)  = asup(x) + asun(x) - x
         = sum {h=-inf..inf} (-1)^h * U°h(x)

The acronyms mean here:
  a(lternating)    s(ums of)   u(-tetration with increasing)
  p(ositive heights)  
  n(egative heights)  


Using u = log(t) = 1/2, t = exp(u) ~ 1.648721..., the series asup and asun
have bounded terms with alternating signs, so they can be Cesaro- or
Euler-summed. If they are summed this way, evaluated term by term, I call
this "serial summation", in contrast to my matrix-approach.

Values for asup(1) and asun(1), found by serial summation are

  asup(1) =  0.596672423492...        // serial summation
  asun(1) =  0.403327069976...        // serial summation
  asu(1)  = -0.000000506531563910...  // serial summation
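These serial values are reproducible in a few lines; the sketch below is my own quick implementation (so the last digits are not authoritative), Euler-summing both series by repeated averaging of partial sums.

```python
import math
from itertools import accumulate

u = 0.5                          # t = exp(1/2) = 1.648721...

def euler_sum(terms):
    """Repeatedly average adjacent partial sums (Euler-type transform)."""
    s = list(accumulate(terms))
    while len(s) > 1:
        s = [(a + b) / 2 for a, b in zip(s, s[1:])]
    return s[0]

def alt_series(step, x, n=60):
    """x - step(x) + step(step(x)) - ... + ..., Euler-summed."""
    terms, sign = [], 1.0
    for _ in range(n):
        terms.append(sign * x)
        x, sign = step(x), -sign
    return euler_sum(terms)

asup1 = alt_series(lambda x: math.exp(u * x) - 1.0, 1.0)  # positive heights
asun1 = alt_series(lambda x: math.log(1.0 + x) / u, 1.0)  # negative heights
print(asup1)                 # -> 0.5966724...
print(asun1)                 # -> 0.4033270...
print(asup1 + asun1 - 1.0)   # -> about -5.07E-7, the deviation asu(1)
```

The asup series is classically convergent (terms decrease to 0), so no acceleration is strictly needed there; for asun the terms tend to the fixpoint 2.5128..., so the transform is essential.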

My earlier conjecture, based on consideration of the matrix-method,
was that asu(1) = 0 for each base t; this turned out to be wrong.

The matrix-computations give these results:

  asup(1) =  0.596672423492...  // matrix method
  asun(1) =  0.403327576508...  // matrix method
  asu(1)  =  0                  // matrix method

where in all checked cases asup(x) came out identical for both
methods and only asun(x) differed (the first differing digits are
marked by a vertical line):

  asun(1) =  0.403327 | 069976...  // serial summation
  asun(1) =  0.403327 | 576508...  // matrix method

The difference between the two methods occurs systematically, so there is
reason to study this difference systematically as well.

Since most readers here are unfamiliar with the matrix-method,
I'll give the examples below in a more conventional description, using
the explicit powerseries representation of the problem.


But we need some prerequisites.

First note that, if x is seen as a U-powertower to base t itself,
then the results of asu(x) are periodic in the integer part of its
height; so if

  x = U°h(1)

then the results for asu(x) repeat for the heights h + 2k, i.e. for

  x_k = U°(h + 2*k)(1) = U°(floor(h) + 2*k)(x_r)

where x_r carries the remaining fractional part of the height; so we
may standardize our notation to

  x_r = U°r(1)

where r means the fractional part of h, and reduce our notations
for asup, asun and asu to

  asup_r = asup(x) = asup(U°r(1))
  asun_r = asun(x) = asun(U°r(1))
  asu_r  = asu (x) = asu (U°r(1))

Second: what we also need is the half-iterate U°0.5(1), such that

    U°0.5(U°0.5(1)) = U°1(1) = t - 1

The powerseries for U°1(x) = t^x - 1 is simple; it is just the
exponential series in ux reduced by its constant term (with u = log(t)):

   U°1(x) = ux + (ux)^2/2! + (ux)^3/3! + ...

Using the matrix-/diagonalization-method one can find the
coefficients a,b,c,... for the U°0.5-function as well:

   U°0.5(x) = a x + b x^2 + c x^3 + ...
            = 0.707107...*x + 0.103553...*x^2 + 0.00534412...*x^3
                - 0.000124330...*x^4 + 0.0000201543...*x^5 + O(x^6)

If I use this function (actually with 96 terms and higher precision)
then I get

  U°0.5(1)           = 0.815903...
  U°0.5(0.815903...) = 0.648721...

which agrees very well with the direct evaluation

  U°1(1) = t^1 - 1 = exp(1/2) - 1
                   = 0.648721...

using the 96 so-determined terms of the powerseries of U°0.5(x).

So we may assume that U°0.5(1) = 0.815903... is determined with
sufficient (and in principle with arbitrary) precision.
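The half-iterate powerseries can also be re-derived without the diagonalization machinery (my own sketch, truncated at N = 24 terms, not the code used in the post): solving U°0.5(U°0.5(x)) = t^x - 1 coefficient by coefficient determines each g_n linearly, since g_n enters the degree-n coefficient of the composition only through (g_1 + g_1^n)*g_n.

```python
import math

u = 0.5                                  # u = log t, t = exp(1/2)
N = 24                                   # truncation (assumption)
f = [0.0] + [u**n / math.factorial(n) for n in range(1, N)]   # t^x - 1

def compose(a, b):
    """coefficients of a(b(x)) for series with zero constant term."""
    out = [0.0] * N
    bp = [0.0] * N
    bp[0] = 1.0                          # b(x)^0
    for k in range(1, N):
        new = [0.0] * N                  # bp <- bp * b, truncated
        for i in range(N):
            if bp[i]:
                for j in range(1, N - i):
                    new[i + j] += bp[i] * b[j]
        bp = new
        for n in range(N):
            out[n] += a[k] * bp[n]
    return out

g = [0.0, math.sqrt(u)] + [0.0] * (N - 2)    # g_1 = sqrt(u) = 0.707107...
for n in range(2, N):
    c_n = compose(g, g)[n]                   # [x^n] g(g(x)) with g_n still 0
    g[n] = (f[n] - c_n) / (g[1] + g[1] ** n)

half1 = sum(g)                               # g(1) = U°0.5(1)
print(half1)                                 # close to 0.815903...
gg1 = sum(gk * half1**k for k, gk in enumerate(g))
print(gg1)                                   # close to exp(1/2) - 1 = 0.648721...
```

The first two computed coefficients, 0.707107... and 0.103553..., match the series quoted above; with only 24 terms the evaluation at x = 1 is a little less precise than the 96-term computation in the post.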

Now we compute asup_0.5 and the other series by serial summation

  asup_0.5 = asup(U°0.5(1)) = asup(0.815903...) = 0.497542...       // serial
  asun_0.5 = asun(U°0.5(1)) = asun(0.815903...) = 0.318354...       // serial
  asu_0.5  = asu (U°0.5(1)) = asu (0.815903...) = -0.00000690039... // serial

  asu_0.5 = asup_0.5    + asun_0.5    - x_0.5
          = 0.497542... + 0.318354... - 0.815903...

while by the matrix-method we get

  asup_0.5 = asup(U°0.5(1)) = asup(0.815903...) = 0.497542...     // matrix
  asun_0.5 = asun(U°0.5(1)) = asun(0.815903...) = 0.318361...     // matrix
  asu_0.5  = asu (U°0.5(1)) = asu (0.815903...) = 0               // matrix


The first result is now that, for any r, we may apparently describe
the difference between the matrix-computed results and the serial
results using

  d_r = asu_r (//serial) - asu_r (//matrix)
      = asu_r (//serial)

computable by

  d_r = ampl * sin(pi*r + w)

(with phase pi*r rather than 2*pi*r, since r runs mod 2: the period
in r is 2, matching d_(r+1) = -d_r from the earlier posts)

where the amplitude is

     ampl = sqrt(d_0^2 + d_0.5^2)

and the constant phase-shift w

     w = arg(d_0.5 + d_0*I)

Here we find
     d_0   =  -0.00000050653156391...
     d_0.5 =  -0.00000690038760124...

           d_0^2+d_0.5^2    = 4.78719232725 E-11
     ampl  =   0.00000691895391461...

     w = arg(d_0*I + d_0.5)    = -3.06831783019...
       = atan(d_0/d_0.5) - Pi  =  0.0732748233988... - Pi
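As a quick cross-check of these numbers (my own sketch; the two deviation values are copied from above), recomputing ampl and w and evaluating the fit with phase pi*r (period 2 in r) reproduces both samples:

```python
import math, cmath

d0  = -0.00000050653156391    # d_r at r = 0   (serial minus matrix value)
d05 = -0.00000690038760124    # d_r at r = 0.5

ampl = math.hypot(d0, d05)              # sqrt(d_0^2 + d_0.5^2)
w    = cmath.phase(complex(d05, d0))    # arg(d_0.5 + d_0*I)
print(ampl)   # agrees with 0.00000691895391461...
print(w)      # agrees with -3.06831783019...

def d(r):
    # sinusoidal fit; phase pi*r because r runs mod 2 (so d_(r+1) = -d_r)
    return ampl * math.sin(math.pi * r + w)

print(d(0.0) - d0, d(0.5) - d05)        # both differences vanish up to rounding
```

With this parametrization d(0) = ampl*sin(w) and d(0.5) = ampl*cos(w), which is exactly what the amplitude and phase formulas above encode.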

so we may as well say that the error in computing asn(x) = asn_r by the
matrix-method is the sinusoidal function d_r.

So the matrix-method must be reconsidered for the case of infinite
series of negative heights.


As I promised above, we need not go into the details of the
matrix-method itself; it can be shown that the coefficients for
the powerseries of asn(x) determined by the matrix-method and by the
following conventional method are the same.

Consider the sequence of powerseries for U°0(x), U°-1(x), U°-2(x), ...,
which must be summed with alternating signs to give the powerseries for asn(x):

U°0(x) =   0    1 x            
-U°-1(x)=   0   -2 x   +2/2! x^2     -4/3! x^3      +12/4! x^4        -48/5! x^5 +...
+U°-2(x)=   0   +4 x  -12/2! x^2    +64/3! x^3     -496/4! x^4      +5072/5! x^5 -...
-U°-3(x)=   0   -8 x  +56/2! x^2   -672/3! x^3   +11584/4! x^4    -262176/5! x^5 +...
+U°-4(x)=   0  +16 x -240/2! x^2  +6080/3! x^3  -220160/4! x^4  +10442816/5! x^5 -...
-U°-5(x)=   0  -32 x +992/2! x^2 -51584/3! x^3 +3825152/4! x^4 -371146880/5! x^5 +...
           ...   ...     ...          ...             ...                  ...
  asn(x)=   0   a1 x     +a2 x^2       +a3 x^3         +a4 x^4           +a5 x^5 +...

then, when we collect like powers of x, we get divergent sums of
coefficients at each power of x.

However, the second column indicates that these sums may be
computed by the known analytic continuation of the geometric
series - unfortunately, the composition of the following columns
from geometric series is not obvious.

But if we want to resort to - for instance - Euler-summation, which gives
regular results if certain conditions on the growthrate of the terms
of an infinite sum/series are met, we may assign values to all a_k.

    One of these conditions is that the growthrate is eventually
    geometric, i.e. the quotient of the absolute values of two subsequent
    terms must converge to a constant.
    I checked this condition and it is satisfied (also backed by
    inspection of the general description of the terms as given in [1])

    Quotients of absolute values of subsequent row-entries for the
    leading five columns:

    2.     6.00000    16.0000    41.3333    105.667
    2.     4.66667    10.5000    23.3548    51.6909
    2.     4.28571    9.04762    19.0055    39.8313
    2.     4.13333    8.48421    17.3744    35.5409
    2.     4.06452    8.23325    16.6588    33.6885
    2.     4.03175    8.11453    16.3227    32.8250
    2.     4.01575    8.05675    16.1597    32.4078
    2.     4.00784    8.02825    16.0795    32.2028
    2.     4.00391    8.01409    16.0396    32.1011
    2.     4.00196    8.00704    16.0198    32.0505
    2.     4.00098    8.00352    16.0099    32.0252
    2.     4.00049    8.00176    16.0049    32.0126
    2.     4.00024    8.00088    16.0025    32.0063
    2.     4.00012    8.00044    16.0012    32.0032
    ...    ...    ...    ...    ...

    We see empirically that the quotients converge to the powers of u^-1
    (where u = 1/2 in all computations of this example).

    So the column-wise summation of coefficients using Euler-summation
    should give valid results for the final powerseries asn(x)
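The quotient table can be regenerated by composing the series of U°-1(x) = log(1+x)/u with itself; the sketch below (my own code, exact rational arithmetic, u = 1/2) prints the ratios of the absolute coefficients of successive iterates, which approach u^-c = 2^c column by column.

```python
from fractions import Fraction as F

u = F(1, 2)
N = 8                                    # series terms x^1 .. x^(N-1)

# U°-1(x) = log(1+x)/u = (1/u) * (x - x^2/2 + x^3/3 - ...)
l = [F(0)] + [F((-1) ** (n + 1), n) / u for n in range(1, N)]

def compose(a, b):
    """series of a(b(x)), both with zero constant term, truncated."""
    out = [F(0)] * N
    bp = [F(0)] * N
    bp[0] = F(1)
    for k in range(1, N):
        new = [F(0)] * N                 # bp <- bp * b, truncated
        for i in range(N):
            if bp[i]:
                for j in range(1, N - i):
                    new[i + j] += bp[i] * b[j]
        bp = new
        for n in range(N):
            out[n] += a[k] * bp[n]
    return out

it, prev, ratios = l[:], None, None
for h in range(1, 16):
    if prev:
        ratios = [abs(it[c]) / abs(prev[c]) for c in range(1, 6)]
        print([float(r) for r in ratios])   # columns tend to 2, 4, 8, 16, 32
    prev = it[:]
    it = compose(l, it)                     # next iterate U°-(h+1)
```

The first printed row (2, 6, 16, ...) and the approach of the later rows to 2, 4, 8, 16, 32 reproduce the table above.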

What I get for

  asn(x) = a1 x + a2 x^2 + a3 x^3 + a4 x^4 + a5 x^5 + ...

are the explicit values for the coefficients a_k:

    asn(x) = 1/3 x + 1/15 x^2 + 2/405 x^3 - 0.0010893246 x^4 - 0.000457736 x^5 + ...

      //   by matrix-method (= collecting coefficients at like powers of x;
      //   Euler-sums). The coefficients are rational multiples of integer
      //   polynomials in u.

So, by comparison of the results, we know that this powerseries
is *false* and needs correction by a component d_r, which follows a
sine curve in the fractional height r of x, where x is
expressed as the U-powertower

  x = x_r = U°r(1)


This effect of a sinusoidal component in the determination of
asn(x), when computed by collecting like powers of x of all involved
powerseries, seems somehow fundamental to me, and I would like
to find the source of this effect.

Maybe it is due to the required *increasing* order of Euler-summation,
where a column c needs order u^-c, and the implicit binomial-
transform in Euler-summation with infinitely increasing order must
be reflected by special considerations.

Gottfried Helms

    [1] see page 21



I'm adding some more checks here which deal with the special problem
in asn(x) only.

First note, that if x is a fixpoint such that

   U°h(x) = x

then the series asn(x) changes to the alternating series

   x - x + x - x + ... - ... = x * eta(0) = 1/2 x  

and it is interesting what serial and matrix-summation do if the
fixpoint is given as the parameter.

One fixpoint is x0 = 0, since t^0 - 1 = 0; however, this is not of
interest here.
The second fixpoint, which can be computed as limit

   xoo = lim{h->inf} U°(-h)(1)  

is, using u=0.5, t = exp(u)~ 1.648721..., h=400

   xoo = ut(1,-400) = 2.51286241725233935396547523322...


direct check: (is xoo a fixpoint?)

   (t^xoo - 1)  - xoo = -4.10903766646035512593597628782 E-113
   log(1+xoo)/u - xoo =  2.33942419508380691068587979906 E-113

   so we have a very well approximated fixpoint (I used float-precision
   of 1200 decimal digits)

Serial summation: ----------------------------------------------------
check: is asn(xoo) = xoo/2   ?

   asn(xoo)           =  1.25643120862616967698273761661
   asn(xoo)- xoo/2    = -7.45354655251552020322193384272 E-114

so indeed, the serial summation behaves as expected.

Matrix: (dim=96) -------------------------------------------------

direct check: (is xoo a fixpoint?)
%box ESum1(1.2485)*dV(xoo)*Mat(UtI[,2]) - xoo*Mat(V(1))
      V(xoo)~*UtI[,2] - xoo =  -3.78989 E-28

   Well, I have to check whether the precision increases with an
   increasing number of terms.

check: is asn(xoo) = xoo/2 ?

      UtMI = I - UtI + UtI^2 - UtI^3 + ... - ...
           = (I + UtI)^-1 // geometric series

      asn(xoo) =  V(xoo)~ * UtMI[,2]

   The coefficients in the second column of UtMI, together with the powers
   of xoo, form a divergent series, so I have to apply Euler-summation; but
   since Euler-summation may be too weak for this series, I also append
   a check with a stronger (however still experimental) method, PkPow:

      %box ESum1(1.32)*dV(xoo)*Mat(UtMI[,2])
      %box PkPowSum(1.7,1.1)*dV(xoo)*Mat(UtMI[,2])
      asn(xoo) =  V(xoo)~* UtMI[,2]  =  1.26414...  // ESum  1.32  
      asn(xoo) =  V(xoo)~* UtMI[,2]  =  1.26487...  // PkPow 1.7,1.1  

      result - xoo/2     =  0.0077142...  // ESum  1.32
      result - xoo/2     =  0.0084459...  // PkPow 1.7,1.1

So, as expected, we get the error also when xoo is used.


Well, so I get *some* numbers here. What would be interesting
is how these numbers can be related to a correction-component d_r
of the asn-matrix/asn-powerseries coefficients, to give correct
results for any x and base t in asn_t(x).

It is clear that the powerseries (including the correction component)

    asn(x) =    a1 x + a2 x^2  + a3 x^3 + a4 x^4 + a5 x^5 +...
               + d_r

cannot be corrected by a constant a0 = d_r, since d_r is a scaled
sine-function depending on x (and also on t). But I have no idea
how to proceed here.

Gottfried Helms, Kassel
A curious result from study of the tetra-series. (text updated)

I considered the "reverse" of the tetra-series problem.

Instead of asking for the a(lternating) s(um) of powertowers of increasing p(ositive) heights (asp)
asp(x,dxp) = dxp°0(x) - dxp°1(x) + dxp°2(x) - ... + ...
where dxp(x) = exp(x)-1 and dxp°h(x) is its h'th integer iterate,

I asked for a function tf(x) where
asp(x,tf) = (e^x-1)/2 = tf°0(x) - tf°1(x) + tf°2(x) - ... + ...
So I ask: can (e^x - 1)/2 be represented as a tetra-series of a function tf(x), and what would that function look like?

Using the matrix-operator-approach I got the result
tf(x) = x - x^2 + 2/3*x^3 - 3/4*x^4 + 11/15*x^5 - 59/72*x^6 + 379/420*x^7 - 331/320*x^8
       + 1805/1512*x^9 - 282379/201600*x^10 + 3307019/1995840*x^11 - 6152789/3110400*x^12
       + 616774003/259459200*x^13 - 3212381993/1117670400*x^14 + 54372093481/15567552000*x^15
       - 594671543783/139485265920*x^16 + 58070127447587/11115232128000*x^17
       - 1209735800444267/188305108992000*x^18 + 26776614379573099/3379030566912000*x^19
       - 209181772596680209/21341245685760000*x^20 + 1034961114326994557/85151570286182400*x^21
       - 80852235077445729119/5352384417988608000*x^22
       + 2210690796475549862239/117509166994931712000*x^23
       - 18624665294361841906483/793412278431252480000*x^24
       + 379264261780067802109819/12926008369442488320000*x^25
       - 6584114267874407529534167/179240649389602504704000*x^26
       + 5046681464320089079803469/109576837040799744000000*x^27
       - 326480035696597942691643978259/5646080455772478898176000000*x^28
       + 327920863401689931801359966641/4511103058030460180889600000*x^29
       - 418419411682443365665393881223739/4573325169175707907522560000000*x^30
       + 15798888070625404329026746075454779/137047310902965380295426048000000*x^31
       + O(x^32)
which can be determined to arbitrary many coefficients by a recursive process on rational numbers.

The float-representation is
tf(x) = 1.00000000000*x - 1.00000000000*x^2 + 0.666666666667*x^3 - 0.750000000000*x^4
      + 0.733333333333*x^5 - 0.819444444444*x^6 + 0.902380952381*x^7 - 1.03437500000*x^8
      + 1.19378306878*x^9 - 1.40068948413*x^10 + 1.65695596841*x^11 - 1.97813432356*x^12
      + 2.37715218038*x^13 - 2.87417649515*x^14 + 3.49265533084*x^15 - 4.26332874559*x^16
      + 5.22437379435*x^17 - 6.42433870711*x^18 + 7.92434807834*x^19 - 9.80176020073*x^20
      + 12.1543397362*x^21 - 15.1058348510*x^22 + 18.8129220299*x^23 - 23.4741329327*x^24
      + 29.3411740841*x^25 - 36.7333765543*x^26 + 46.0560972611*x^27 - 57.8241911808*x^28
      + 72.6919467774*x^29 - 91.4912883306*x^30 + 115.280540468*x^31 + O(x^32)

so I assume that this series tf(x) has a radius of convergence limited to about |x| < 0.7

Moreover, the iterates of this function seem always to be of a similar form, so the alternating sum of the coefficients of the iterated functions at like powers of x is divergent for each coefficient (but may be Euler-summed). So this result must next be considered in more detail, since I had inconsistencies between the matrix-method and serial summation either for increasing positive or for increasing negative heights.

However, for x = 1/2 or x = 1/3 or smaller we can accelerate the convergence of asp() by Euler-summation, such that I get a good (?) approximation to the sixth digit for x = 1/3 using the truncated series with only 31 terms.

The process for generating these coefficients is still a bit tedious; so I don't have - for instance - the function whose iterates must be summed without alternating signs to give exp(x)-1 (or (exp(x)-1)/2 or some other scalar multiple), which - as I guess - could have a better range of convergence.

I'll post the result, if I got it.

Gottfried Helms, Kassel
An answer in the newsgroup sci.math:

Dear Gottfried,

the function tf(x) verifies

exp(tf(x)) + exp(x) = 2*x + 2

i.e.

tf(x) = ln(-exp(x) + 2*x + 2)

with series near 0:

x - x^2 + 2/3 x^3 - 3/4 x^4 + 11/15 x^5 ... [I corrected a missing term]


So I took the long way... Smile
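This closed form is easy to verify with exact rational arithmetic (a quick independent check of mine, not Prof. Edgar's derivation): expanding ln(1 + y) with y = 2x + 1 - e^x reproduces the coefficients found by the matrix-operator approach.

```python
from fractions import Fraction as F
import math

N = 10

# y(x) = 2x + 1 - e^x = x - x^2/2! - x^3/3! - ...   (no constant term)
y = [F(0), F(1)] + [F(-1, math.factorial(n)) for n in range(2, N)]

def mul(a, b):
    """truncated product of two coefficient lists."""
    out = [F(0)] * N
    for i in range(N):
        if a[i]:
            for j in range(N - i):
                out[i + j] += a[i] * b[j]
    return out

# tf(x) = ln(1 + y) = y - y^2/2 + y^3/3 - ...
tf = [F(0)] * N
yp = [F(0)] * N
yp[0] = F(1)
for k in range(1, N):
    yp = mul(yp, y)                      # yp = y^k
    for n in range(N):
        tf[n] += F((-1) ** (k + 1), k) * yp[n]

print(tf[1:9])
# matches 1, -1, 2/3, -3/4, 11/15, -59/72, 379/420, -331/320 from the long series
```

The exact fractions agree term by term with the recursively generated series in the previous post, which confirms tf(x) = ln(2x + 2 - e^x).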

Gottfried Helms, Kassel
A new result, which I just posted in sci.math; I'll improve the formatting later (a bit lazy...)


On 30.06.2008 17:00 [...] wrote:
>> >> Gottfried
> >
> > Well, inverse function of tf(x) using lambertW()
> > is  -Lambert(-1/2exp(1/2e^x - 1)) +1/2e^x - 1
> > series near 0 , x +x^2 +4/3x^3 +29/12x^4 +51/10x^5 ...
> >
> >
Yepp, I got the same series - good!

But now: iterations, and especially the sum of iterations
of this function in the Lambert-representation, should be
intractable, so I don't assume this can be helpful to get
more insight into the source of the inconsistency-problem;

the formal application of the matrix-approach gives
      asn(x)+asp(x)-x = 0  // expected
      asn(x,f(x)) = x - asp(x,f(x)) // expected
      asp(x,f°(-1)(x)) = x - asp(x,f(x)) // expected

which is not true, at least for the function f(x) = exp(x)-1.

Well - it was a try...


Meanwhile I refined my computation-process, so I now even have
the function fz(x) with the condition

  e^x - 1 = fz(x) + fz(fz(x)) + fz(fz(fz(x))) + ..
          = sum{h=1..inf} fz°h(x)

I got
fz(x) = 2*(x/4)/1! + 6*(x/4)^2/2! + 10*(x/4)^3/3! - 46*(x/4)^4/4! - 554*(x/4)^5/5!
       - 1690*(x/4)^6/6! + 27882*(x/4)^7/7! + 505986*(x/4)^8/8! + 2529590*(x/4)^9/9!
       - 61918794*(x/4)^10/10! - 1726391798*(x/4)^11/11! - 14268435022*(x/4)^12/12!
       + 352044609814*(x/4)^13/13! + O(x^14)

where the integer parts of the coefficients (2, 6, 10, -46, -554, ...)
have to be divided by powers of 4 and by factorials to give the
coefficients of the function.

The float representation of this function is
fz(x) =   0.500000000000*x^1 + 0.187500000000*x^2 + 0.0260416666667*x^3 - 0.00748697916667*x^4
         - 0.00450846354167*x^5 - 0.000573052300347*x^6 + 0.000337655203683*x^7 + 0.000191486449469*x^8
         + 0.0000265917660278*x^9 - 0.0000162726971838*x^10 - 0.0000103115449999*x^11
         - 0.00000177549511523*x^12 + 0.000000842437121499*x^13 + 0.000000632647393830*x^14
         + O(x^15)

Can we give a range for x where this converges ?
The quotients of subsequent coefficients give the following sequence



Using 32 coefficients for the function and 60 iterates for the sum,
I could approximate e^1 - 1 rather well. I got

    sum(h=1,60,fz°h(1.0)) - ( exp(1)-1 ) =  -3.24385306514 E-13

where the quality of the approximation increases when the number of
terms of the function and the number of iterates are increased.
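Telescoping the defining sum, F(x) = fz(x) + F(fz(x)) with F(x) = e^x - 1, suggests that fz satisfies the implicit equation fz(x) + e^(fz(x)) = e^x (this inference is mine; it is not stated above). A numerical cross-check against the float coefficients:

```python
import math

def fz(x, iters=60):
    """solve y + exp(y) = exp(x) for y by Newton's method."""
    y = 0.5 * x
    target = math.exp(x)
    for _ in range(iters):
        y -= (y + math.exp(y) - target) / (1.0 + math.exp(y))
    return y

# float coefficients of fz from the post (x^1 .. x^14)
coeffs = [0.500000000000, 0.187500000000, 0.0260416666667, -0.00748697916667,
          -0.00450846354167, -0.000573052300347, 0.000337655203683,
          0.000191486449469, 0.0000265917660278, -0.0000162726971838,
          -0.0000103115449999, -0.00000177549511523, 0.000000842437121499,
          0.000000632647393830]

def fz_series(x):
    return sum(c * x ** (k + 1) for k, c in enumerate(coeffs))

print(fz(0.5) - fz_series(0.5))   # tiny: the series solves the implicit equation

# and the telescoped sum of iterates reproduces e^x - 1:
s, yv = 0.0, 1.0
for _ in range(80):
    yv = fz(yv)
    s += yv
print(s - (math.e - 1.0))         # close to 0, at machine precision
```

If the implicit equation is right, the iterate-sum identity holds exactly in the limit, since the partial sums telescope to (e^x - 1) - (e^(fz°n(x)) - 1) and fz°n(x) shrinks roughly by half each step.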

Fun... :-)

Gottfried Helms
Gottfried Helms, Kassel
This is probably an old result. For the function

I found using Carleman matrices that

Is this related to the series above?
