Here is another view of a polynomial interpolation for the logarithm, one which does not reproduce the true Mercator series for the logarithm. It is just a quick-and-dirty writeup of some analysis I did recently but have not yet documented properly. It deals much with the (b-1)^m and (b^m-1) terms - so maybe you can see a relation to your own procedure...

We want, for some base b, a quasi-logarithmic solution given by a vector of coefficients X; here V(x)~ denotes the transposed Vandermonde vector [1, x, x^2, x^3, ...]:

V(b^m)~ * X = m

We generate a set of interpolation-points for the b^m-parameters

V(b^0)~

V(b^1)~

V(b^2)~

...

Make this a matrix, call it VV. Note that this matrix is symmetric!

VV = matrix{r,c=0..inf}(b^(r*c))

We generate a vector for the m-results

Z=[0,1,2,3,...]

Then the classical ansatz to find the coefficient vector X by Vandermonde interpolation is:

VV * X = Z

X = VV^-1 * Z
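In a truncated setting this classical ansatz can be carried out directly. A minimal sketch, assuming an arbitrary base b=2 and truncation size n=5, using exact rational arithmetic:

```python
from fractions import Fraction

b, n = 2, 5  # base and truncation size (arbitrary choices for this sketch)

# truncated Vandermonde matrix VV[r][c] = b^(r*c) and result vector Z
VV = [[Fraction(b)**(r * c) for c in range(n)] for r in range(n)]
Z = [Fraction(m) for m in range(n)]

# solve VV * X = Z by Gauss-Jordan elimination (pivots stay nonzero here,
# since all leading principal minors of this Vandermonde matrix are nonzero)
A = [row[:] + [z] for row, z in zip(VV, Z)]
for i in range(n):
    piv = A[i][i]
    A[i] = [a / piv for a in A[i]]
    for j in range(n):
        if j != i and A[j][i]:
            A[j] = [a - A[j][i] * ai for a, ai in zip(A[j], A[i])]
X = [A[r][n] for r in range(n)]

# check the interpolation: V(b^m)~ * X = m at the nodes m = 0..n-1
for m in range(n):
    assert sum(Fraction(b)**(m * k) * X[k] for k in range(n)) == m
```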

But VV cannot be inverted in the case of infinite size. So we factorize VV into triangular and diagonal factors and invert those factors separately:

[L,D,U] = LU(VV)

Here VV is symmetric, thus U is the transpose of L. So actually

[L,D,L~] = LU(VV)

Moreover, we have the remarkable property that L is simply the q-analogue of the binomial matrix to base b, and D contains q-factorials. Thus we actually need to compute neither the LU-factorization nor the inversion - the entries of the inverted factors can be written down directly.

So we have LI = L^-1 and DI = D^-1 just by inserting the known values for the inverse q-binomial matrix (see the description of the entries at the end).
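Using the explicit entries quoted at the end of this post, one can check in a truncated setting that these closed forms really are the LDU-factors of VV. A sketch (base b=3 and size n=6 are arbitrary choices; the helper names qbin/qfac are mine):

```python
from fractions import Fraction
from math import comb

b, n = 3, 6  # base and truncation size (arbitrary choices)

def qbin(r, c):
    # q-binomial (r:c)_b = prod_{i=0}^{c-1} (b^(r-i)-1)/(b^(i+1)-1)
    num = den = 1
    for i in range(c):
        num *= b**(r - i) - 1
        den *= b**(i + 1) - 1
    return Fraction(num, den)

def qfac(r):
    # q-factorial r!_b = prod_{i=1}^{r} (b^i-1)/(b-1)
    out = Fraction(1)
    for i in range(1, r + 1):
        out *= Fraction(b**i - 1, b - 1)
    return out

L = [[qbin(r, c) if c <= r else Fraction(0) for c in range(n)] for r in range(n)]
D = [qfac(r) * (b - 1)**r * b**comb(r, 2) for r in range(n)]
LI = [[(-1)**(r - c) * b**comb(r - c, 2) * qbin(r, c) if c <= r else Fraction(0)
       for c in range(n)] for r in range(n)]

# L * D * L~ should reproduce VV[r][c] = b^(r*c)
for r in range(n):
    for c in range(n):
        assert sum(L[r][k] * D[k] * L[c][k] for k in range(n)) == b**(r * c)

# LI should be the inverse of L
for r in range(n):
    for c in range(n):
        assert sum(LI[r][k] * L[k][c] for k in range(n)) == int(r == c)
```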

Then, formally, the coefficient vector X could be computed by the product

X = (LI~ * DI * LI) * Z

But LI~ * DI * LI could involve divergent dot-products (I didn't actually test this here), so we leave it as two separate factors:

W = LI~ * DI // upper triangular

WI = LI * Z // column vector, see explicit description of entries at end
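In a truncated setting, W and WI can be written out and compared against the closed form for WI quoted at the end of the post. A sketch (b=2, n=6 are arbitrary; qbin/qfac as before are my helper names):

```python
from fractions import Fraction
from math import comb

b, n = 2, 6  # base and truncation size (arbitrary choices)

def qbin(r, c):  # q-binomial (r:c)_b
    num = den = 1
    for i in range(c):
        num *= b**(r - i) - 1
        den *= b**(i + 1) - 1
    return Fraction(num, den)

def qfac(r):  # q-factorial r!_b
    out = Fraction(1)
    for i in range(1, r + 1):
        out *= Fraction(b**i - 1, b - 1)
    return out

LI = [[(-1)**(r - c) * b**comb(r - c, 2) * qbin(r, c) if c <= r else Fraction(0)
       for c in range(n)] for r in range(n)]
DI = [Fraction(1) / (qfac(r) * (b - 1)**r * b**comb(r, 2)) for r in range(n)]

# W = LI~ * DI  (upper triangular),  WI = LI * Z
W = [[LI[c][r] * DI[c] for c in range(n)] for r in range(n)]
Z = [Fraction(m) for m in range(n)]
WI = [sum(LI[r][c] * Z[c] for c in range(n)) for r in range(n)]

# W is indeed upper triangular
assert all(W[r][c] == 0 for r in range(n) for c in range(r))

# WI matches the closed form quoted at the end of the post
assert WI[0] == 0
for r in range(1, n):
    assert WI[r] == (-1)**(r - 1) * qfac(r - 1) * (b - 1)**(r - 1)
```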

At this point - since we know the entries of W and WI explicitly - we could dismiss all the matrix-stuff and proceed with the usual summation notation and the known coefficients, and get very simple formulae...

But well, since we are just here, let's proceed that way... :-)

I'll denote a formal matrix-product which cannot be evaluated by the operator <*>. Then we expect (at least for *integer m*) this to be a correct formula:

V(b^m)~ * W <*> WI = m

We compute the leftmost product first; the result vector Y~ in

V(b^m)~ * W = Y~

becomes row-finite - it just contains the q-binomials (m:k)_b for k=0..m

So in the formula

( V(b^m)~ * W ) <*> WI = m

we have actually

( [1,(m:1)_b, (m:2)_b,...,1,0,0,0,...]) * WI = m

thus the product with WI can be carried out, and we get an exact (and correct) solution for integer m.
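This can be checked numerically in a truncated setting: for integer m the intermediate vector Y~ consists of the q-binomials (m:k)_b (zero beyond k=m), and the final dot product recovers m exactly. A sketch (b=2, n=7 arbitrary; qbin/qfac are my helpers):

```python
from fractions import Fraction
from math import comb

b, n = 2, 7  # base and truncation size; keep m below n

def qbin(r, c):  # q-binomial (r:c)_b
    num = den = 1
    for i in range(c):
        num *= b**(r - i) - 1
        den *= b**(i + 1) - 1
    return Fraction(num, den)

def qfac(r):  # q-factorial r!_b
    out = Fraction(1)
    for i in range(1, r + 1):
        out *= Fraction(b**i - 1, b - 1)
    return out

LI = [[(-1)**(r - c) * b**comb(r - c, 2) * qbin(r, c) if c <= r else Fraction(0)
       for c in range(n)] for r in range(n)]
DI = [Fraction(1) / (qfac(r) * (b - 1)**r * b**comb(r, 2)) for r in range(n)]
WI = [Fraction(0)] + [(-1)**(r - 1) * qfac(r - 1) * (b - 1)**(r - 1)
                      for r in range(1, n)]

for m in range(n):
    V = [Fraction(b)**(m * c) for c in range(n)]
    # Y~ = V(b^m)~ * W  with  W = LI~ * DI
    Y = [sum(V[k] * LI[r][k] for k in range(n)) * DI[r] for r in range(n)]
    # Y contains the q-binomials (m:k)_b for k=0..m, then zeros
    assert all(Y[k] == qbin(m, k) for k in range(m + 1))
    assert all(Y[k] == 0 for k in range(m + 1, n))
    # ... and the final dot product recovers m
    assert sum(y * w for y, w in zip(Y, WI)) == m
```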

So far, so good.

However, this does not hold for fractional m. The vector Y is no longer finite, and approximations suggest that for all fractional values the formula is false.

Additional remark: because via the factor DI we get the same denominators as shown in the article on Euler's false logarithms, and the overall structure is very similar, I assume that this procedure simply provides the Taylor coefficients of that Eulerian series.

Gottfried

Code:

`// description of entries in LI,DI and WI `

A quick inspection of an actual example gives the following (please crosscheck this!):

the symbol (r:c) means the binomial coefficient r over c

the symbols x!_b and (r:c)_b denote the corresponding q-analogues to base b

LI = matrix {r=0..inf,c=0..r} ( (-1)^(r-c)*b^(r-c:2)*(r:c)_b)

DI = diagonal( vector{r=0..inf}( 1 / ( r!_b * (b-1)^r * b^(r:2) ) ) )

WI = vector{r=0..inf}( if(r==0): 0 ;
                       if(r>0) : (-1)^(r-1) * (r-1)!_b * (b-1)^(r-1) )
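As a crosscheck of these entries (since the post asks for one): compute an actual LU factorization of a truncated VV and compare L and diag(U) against the claimed closed forms. A sketch with b=3, n=5 (arbitrary choices; qbin/qfac are my helper names):

```python
from fractions import Fraction
from math import comb

b, n = 3, 5  # base and truncation size (arbitrary choices)

def qbin(r, c):  # q-binomial (r:c)_b
    num = den = 1
    for i in range(c):
        num *= b**(r - i) - 1
        den *= b**(i + 1) - 1
    return Fraction(num, den)

def qfac(r):  # q-factorial r!_b
    out = Fraction(1)
    for i in range(1, r + 1):
        out *= Fraction(b**i - 1, b - 1)
    return out

# Doolittle LU of the truncated VV (no pivoting needed: pivots are nonzero)
VV = [[Fraction(b)**(r * c) for c in range(n)] for r in range(n)]
L = [[Fraction(int(r == c)) for c in range(n)] for r in range(n)]
U = [row[:] for row in VV]
for i in range(n):
    for j in range(i + 1, n):
        f = U[j][i] / U[i][i]
        L[j][i] = f
        U[j] = [uj - f * ui for uj, ui in zip(U[j], U[i])]

# L should be the q-binomial matrix, diag(U) the claimed entries of D
for r in range(n):
    assert U[r][r] == qfac(r) * (b - 1)**r * b**comb(r, 2)
    for c in range(r + 1):
        assert L[r][c] == qbin(r, c)
```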