08/16/2007, 08:28 PM

Change of base, a view from the matrix approach


There is already a lot of discussion about change of base here.

Without having read all of it, I thought I'd try my matrix approach and post the results.

With the

a) eigensystem approach, and the

b) assumption about the set of eigenvalues (a set of powers of a parameter, see below),

I can apparently approximate a solution to the base-change problem.

Formulae:

Assume the constant operator-matrix for tetration is, as usual, B; the parametrized version is

Code:

`B(s) = dV(s) * B`

Assume s in the range 1/e^e < s < 1 or 1 < s < e^(1/e), and for convenience simply take

Code:

`s = t^(1/t), 1 < t < e`

Then, for nonnegative integer y,

Code:

`V(1)~ * B(s)^y [,1] = s^^y`

or

Code:

`sum(r=0..inf) B(s)^y [r,1] = s^^y`
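As a numeric illustration (a sketch of mine, not from the original post): assuming the concrete form dV(s) = diag(1, log(s), log(s)^2, ...) and B[r,c] = c^r / r!, so that B(s)[r,c] = (c*log s)^r / r!, a truncated matrix reproduces the integer tower values quite well:

```python
import numpy as np
from math import factorial, log

N = 32                    # truncation size (an arbitrary choice)
t = 2.0
s = t ** (1 / t)          # s = sqrt(2), inside 1 < s < e^(1/e)
ls = log(s)

# Assumed concrete form: B(s)[r,c] = (c*log s)^r / r!, so that
# V(x)~ * B(s) ~= V(s^x)~ with V(x) = [1, x, x^2, ...]~.
B = np.array([[(ls * c) ** r / factorial(r) for c in range(N)]
              for r in range(N)])

V1 = np.ones(N)                           # V(1)~ = [1, 1, 1, ...]
row = V1 @ np.linalg.matrix_power(B, 3)   # V(1)~ * B(s)^3 = V(s^^3)~
print(row[1])                             # ~ 1.76083955588 = sqrt(2)^^3
```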

Denote the eigensystem-decomposition of B(s) by

Code:

`B(s) = Qs * Ds * Qs^-1`

Assume assumption b) holds; then the set of eigenvalues of B(s) is

Code:

`Ds = diag([1, log(t), log(t)^2, log(t)^3, ... ])`

and also

Code:

`B(s)^y = Qs * Ds^y * Qs^-1`

thus

Code:

`V(1)~ * Qs * Ds^y * Qs^-1 [,1] = s^^y`

also for noninteger y
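For what it's worth, this noninteger case can be imitated with a standard numeric eigendecomposition (a rough sketch of mine, again assuming the concrete form B(s)[r,c] = (c*log s)^r / r!; the dimension and tolerances are arbitrary choices):

```python
import numpy as np
from math import factorial, log

N = 12                    # deliberately small: larger truncations get numerically unstable
t = 2.0
s = t ** (1 / t)
ls = log(s)

# Assumed concrete form of the truncated operator matrix: B(s)[r,c] = (c*log s)^r / r!
B = np.array([[(ls * c) ** r / factorial(r) for c in range(N)]
              for r in range(N)])

d, Q = np.linalg.eig(B)           # B(s) = Qs * Ds * Qs^-1
Qinv = np.linalg.inv(Q)
d = d.astype(complex)             # tiny eigenvalues may come out complex; keep it uniform

def tower(y):
    """Second entry of V(1)~ * Qs * Ds^y * Qs^-1, an approximation of s^^y."""
    return (np.ones(N) @ Q @ np.diag(d ** y) @ Qinv)[1].real

print(tower(1.0))   # ~ s = 1.41421...
print(tower(3.0))   # ~ s^^3 = 1.76084...
print(tower(0.5))   # a noninteger tower height
```

With this naive float-precision eigendecomposition the deviations grow quickly with the dimension, which fits the remarks about relative errors further down.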

----------------------------------------------------------

Now the problem is how to compute

Code:

`s1^^y1 = z`
`s2^^y2 = z`

where s1, y1, s2 are given, z is computed, and y2 is sought.

We take an admissible

Code:

`t1, then s1 = t1^(1/t1), L1 = log(t1)`
`t2, then s2 = t2^(1/t2), L2 = log(t2)`

With the above apparatus we can write the following:

Code:

`1) V(1)~ * B(s1)^y1 = V(z)~ // the second entry of the result is z`
`2) V(1)~ * B(s2)^y2 = V(z)~ // where V(z) is computed by the previous`

and y2 is sought.

Decomposing 2) into its eigensystem:

Code:

`2.1) V(1)~ * Qs2 * Ds2^y2 * Qs2^-1 = V(z)~`
`--> V(1)~ * Qs2 * Ds2^y2 = V(z)~ * Qs2`

Determine the first part:

Code:

`2.2) V(1)~ * Qs2 = R1~ = V(1)~ * diag(R1)`

so

Code:

`2.3) V(1)~ * diag(R1) * Ds2^y2 = V(z)~ * Qs2`

Determine the second part:

Code:

`2.4) V(z)~ * Qs2 = R2~ = V(1)~ * diag(R2)`

Combine:

Code:

`2.5) V(1)~ * diag(R1) * Ds2^y2 = V(1)~ * diag(R2)`

Since diagonal matrices commute, we may reorder diag(R1):

Code:

`2.6) V(1)~ * Ds2^y2 = V(1)~ * diag(R2) * diag(R1)^-1`

Since these are all diagonal matrices, we can omit the V(1)~ summing-vectors, and the relation must hold for all diagonal entries:

Code:

`2.7) Ds2[r,r]^y2 = R2[r]/R1[r]`

Now the assumption is that the entries of Ds2 are powers of L2, so from the second entries alone we have the scalar equation

Code:

`2.8) L2^y2 = R2[1]/R1[1]`

from where

Code:

`2.9) y2 = log(R2[1]/R1[1]) / log(L2)`
`        = log(R2[1]/R1[1]) / log(log(t2))`

Using Ioannis' notation of the h()-function, where

Code:

`t^(1/t) = s ==> t = h(s)`
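As a side note, h(s) is easy to evaluate numerically in this parameter range (a small sketch of mine, not from the post): for 1 < s < e^(1/e) the lower fixed point of x -> s^x is attracting, so plain iteration converges.

```python
from math import sqrt

def h(s, iters=500):
    """Solve t^(1/t) = s, i.e. t = s^t, by iterating x -> s^x from x = 1.
    For 1 < s < e^(1/e) the lower fixed point is attracting, so this converges."""
    x = 1.0
    for _ in range(iters):
        x = s ** x
    return x

print(h(sqrt(2)))      # ~ 2.0, since 2^(1/2) = sqrt(2)
print(h(2.5 ** 0.4))   # ~ 2.5
```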

Code:

`2.9.1) y2 = log(R2[1]/R1[1]) / log(log(h(s2)))`
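Putting 2.1–2.9 together, the whole base-change computation can be sketched numerically (my own rough float-precision sketch: it assumes the concrete form B(s)[r,c] = (c*log s)^r / r! for the truncated matrix, and picks the second-largest eigenvalue of the truncated B(s2) as the empirical L2):

```python
import numpy as np
from math import factorial, log

def Bmat(s, N):
    # assumed concrete form of the operator matrix: B(s)[r,c] = (c*log s)^r / r!
    ls = log(s)
    return np.array([[(ls * c) ** r / factorial(r) for c in range(N)]
                     for r in range(N)])

N = 24
t1, t2 = 2.0, 2.5
s1, s2 = t1 ** (1 / t1), t2 ** (1 / t2)
V1 = np.ones(N)

Vz = V1 @ np.linalg.matrix_power(Bmat(s1, N), 3)   # 1) V(1)~ * B(s1)^3 = V(z)~
z = Vz[1]                                          # z ~ 1.76083955588

d2, Q2 = np.linalg.eig(Bmat(s2, N))                # B(s2) = Qs2 * Ds2 * Qs2^-1
R1 = V1 @ Q2                                       # 2.2) R1~ = V(1)~ * Qs2
R2 = Vz @ Q2                                       # 2.4) R2~ = V(z)~ * Qs2

k = np.argsort(-np.abs(d2))[1]     # index of the second-largest eigenvalue = empirical L2
L2emp = d2[k].real
y2 = (np.log(complex(R2[k] / R1[k])) / np.log(complex(d2[k]))).real   # 2.9)

print(z, L2emp, y2)   # L2emp should sit somewhat below log(2.5) ~ 0.9163
```

The exact values of L2emp and y2 depend on the truncation and on floating-point effects; only their rough magnitudes should be expected to match the numbers below.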

============================================

I tried this numerically, with a surprising result. ;-)

I got

Code:

`Using t1 = 2 ==> s1 = sqrt(2) ~ 1.414...`
`y1 = 3 ==> z = 1.414...^^3`
`t2 = 2.5 ==> s2 = 2.5^(2/5) ~ 1.44269990591`
`y2 = ?? (unknown, sought)`
`L2 = log(t2) ~ 0.916290731874`

Code:

`V(1)~ * B(s1)^3 = V(z)~`

where

Code:

`V(z) = [1.00000000000, 1.76083955588, 3.10055594155, 5.45958154710, ... ]`

and thus

Code:

`z = V(z)[1] = 1.76083955588`

Having V(z) I can compute according to the previous formulae

Code:

`R1~ = [1.00000000000, -675168330336., 107316405043., -16669350847.3, ... ]`
`R2~ = [1.00000000000, -858097840871., 204571124661., -60399411879.7, ... ]`
`log(R2[1]/R1[1]) = log(-858097840871. / -675168330336.) = log(1.27093911594) ~ 0.239756088578`

from where finally y2 can be determined:

Code:

`y2 = log(R2[1]/R1[1]) / log(L2) = -0.239756088578 / -0.0874215717908 = 2.74252777280`

So, theoretically it should be

Code:

`a) (2^(1/2))^^3 = z = 1.76083955588`
`b) (2.5^(2/5))^^2.74252777280 = z = 1.76083955588`

But with my small matrices of dim = 24 (reduced because of the eigenanalysis), the assumption about the eigenvalues (and thus about the diagonal matrix Ds2) is approximated only with high relative error. Errors occur also in the eigenvector matrices (but in sum they seem to mutually cancel out in many cases).

If I don't use the theoretical eigenvalue L2, but the "empirical" one, L2emp, taken from the empirical diagonal matrix Ds2,

Code:

`L2 = 0.916290731874`
`L2emp = Ds2[1,1] = 0.902698529882, log(L2emp) = -0.102366635258`

I get

Code:

`y2 = log(R2[1]/R1[1]) / log(L2emp) = -0.239756088578 / -0.102366635258 = 2.34213118340`

and actually, using this value, empirically

Code:

`V(1)~ * B(s2)^2.34213118340 = V(z)~`

where

Code:

`V(z) = [ 1.00000000000, 1.76084065618, 3.10055981646, 5.45959178175, ... ]`

which agrees with the version V(1)~ * B(s1)^3 = V(z) as shown above.

------------------------------

My eigensystem analyses are unfortunately restricted to small dimensions, so deviations from the theoretical expectations can be relatively high without an obvious contradiction to the assumptions. The range of admissible parameters is unfortunately rather small, 1/e^e < s < e^(1/e), and additionally there are relatively wide epsilon-regions at the limits and around 1 that must be avoided. So the methods should be improved.

Some convenience comes from Euler summation, which accelerates the convergence of oscillating series even with the small number of accessible terms (which is needed in many of the sum formulae) and gives acceptable approximations.
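To illustrate the kind of acceleration meant here (a generic Euler-transform sketch of mine, not the exact routine used for the matrix computations): the Euler transformation turns the slowly converging alternating series 1 - 1/2 + 1/3 - ... = log 2 into a rapidly converging one.

```python
from math import log

def euler_sum(a):
    """Euler transformation of the alternating series sum_k (-1)^k * a[k]:
    S = sum_n (-1)^n * (Delta^n a)[0] / 2^(n+1), built from forward differences."""
    total, sign = 0.0, 1.0
    b = list(a)
    for n in range(len(a)):
        total += sign * b[0] / 2 ** (n + 1)
        b = [b[i + 1] - b[i] for i in range(len(b) - 1)]
        sign = -sign
    return total

a = [1.0 / (k + 1) for k in range(40)]   # terms of 1 - 1/2 + 1/3 - ...
print(euler_sum(a), log(2))              # both ~ 0.6931..., from only 40 terms
```

The direct partial sum of 40 terms is still off by about 1/80, while the transformed sum is accurate to many digits, which is exactly the effect needed when only a few series terms are accessible.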

A further interesting question is what happens if one base is s1 < 1 and the other is s2 > 1. Then I expect logarithms of negative numbers and tetration with complex exponents (but I haven't tried this yet, due to the numerical instability so far).

Gottfried

Gottfried Helms, Kassel