the extent of generalization
#1
I may have missed something, or watched it go over my head, but I can't find what I'm looking for, so I'm going to make it unambiguous right here.

Call tetration a 3-tuple (a,b,c) meaning take a to the bth power c times.

I want (i,i,i).

Can anyone please guide me through this?
#2
Hello Matt! Welcome to the Forum!

Well, if I got your thoughts correctly, you are thinking of:
"i" to the "i-th" power "i" times, with "i" a positive integer.
With the adopted priority rules, this means:
1 = 1#1; 2^2 = 2#2; 3^(3^3) = 3#3; 4^(4^(4^4)) = 4#4, and ... so on! Is it so? In this case, I suppose that you were thinking of:
(i,i,i) = i-tower-i = i-penta-2 !!!! Terrifically ... explosive matter. In fact, (1,1,1) = 1; (2,2,2) = 4; (3,3,3) = 3^27 = 7.62559 x 10^12, ... and so on.
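These integer cases can be checked with a short loop; here is a minimal Python sketch (the name `tower` is mine, not standard notation):

```python
def tower(a, n):
    """a#n = a^(a^(...^a)), a tower of n copies of a (integer n >= 1)."""
    v = a
    for _ in range(n - 1):
        v = a ** v  # put one more 'a' underneath the tower built so far
    return v

print(tower(2, 2))  # 4
print(tower(3, 3))  # 7625597484987, i.e. about 7.62559 x 10^12
```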
Let us try and solve the tetration problems, before those concerning "pentation". However, it is a very interesting ... future dream. Unless you were thinking of "i" as the imaginary unit. In that case, I must drink a good glass of wine, before answering.

All the best.
GFR
#3
Thanks GFR -
It's true that I'm referring to the imaginary unit with the i's. I think this could also be expressed as ack(i,i,3). I'm sure the way to do this is out there, but the related papers I've found so far are too difficult for me.

for example,
Continuous Iteration of Dynamical Maps
Aldrovandi, R.; Freitas, L. P.
http://arxiv.org/abs/physics/9712026

I have a feeling that the Bell matrix approach will work to find (i,i,i) but I have to go very slowly (I'm an undergrad)
#4
Matt D Wrote:I may have missed something, or watched it go over my head, but I can't find what I'm looking for, so I'm going to make it unambiguous right here.

Call tetration a 3-tuple (a,b,c) meaning take a to the bth power c times.

I want (i,i,i).

Can anyone please guide me through this?

Matt,

do you mean (a^b)^((a^b)^(a^b)...) (c-times repetition)
or ((a^b)^b)^b... (c-times repetition)
or (a^b)^c ?

Gottfried
Gottfried Helms, Kassel
#5
Gottfried Wrote:Matt,

do you mean (a^b)^((a^b)^(a^b)...) (c-times repetition)
or ((a^b)^b)^b... (c-times repetition)
or (a^b)^c ?

Gottfried

Gottfried,
I mean (a^(b^(b^...c "times"...))) with a,b,c complex, but specifically (i^(i^... i "times"...)).
Thanks for your interest,

Matt
#6
Dear Matt! I got you!

If you mean (i,i,3), the outcome is rather ... civilized (but, nevertheless, complex):
In fact, (i,i,3) = i^(i^i) = i#3 = i-tower-3 and we may proceed as follows:
i#1 = i;
i#2 = i^(i#1) = i^i = e^(Pi*i*i/2) = e^(-Pi/2) = 0.207879576..;
i#3 = i^0.207879576.. = e^(i*0.326536474..) = 0.947158998.. + i*0.320764449..;
Then: (i,i,3) = 0.947158998.. + i*0.320764449.. . So far, so good!
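These steps can be verified numerically; here is a minimal Python sketch (Python's `**` on complex numbers uses the principal branch of the logarithm, matching the choice above):

```python
t = 1j       # i#1 = i
t = 1j ** t  # i#2 = i^i = e^(-Pi/2), a real number ~ 0.2078795763
t = 1j ** t  # i#3 = i^(i^i)
print(t)     # ~ 0.947158998 + 0.320764449j
```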
On the contrary, if you actually mean:
(i,i,i) = i ^ .... (i^i)[i times] = i-tower-i = i-penta-2, then the problem is "really complicated", instead of simply being only ... complex.
Nevertheless, we are not afraid of anything, since we even found an "infinite tower" with the height equal to the imaginary unit. In fact, we have:
i = e^(Pi*i/2), which is self-explanatory as far as its value is concerned. However, we can write it as: i = (e^(Pi/2))^i = k^i, which defines an infinite tower (i = k#oo), with "base" k = 4.810477381.. (real), and with a height equal to ... "i".
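The base k can be checked in one line (a sketch; `cmath` is Python's standard complex-math module):

```python
import cmath

k = cmath.exp(cmath.pi / 2)  # k = e^(Pi/2) = 4.810477381...
print(k ** 1j)               # k^i = e^(i*Pi/2) = i, up to rounding
```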
I go and drink another one. See you soon!

GFR
#7
Matt D Wrote:
Gottfried Wrote:Matt,

do you mean (a^b)^((a^b)^(a^b)...) (c-times repetition)
or ((a^b)^b)^b... (c-times repetition)
or (a^b)^c ?

Gottfried

Gottfried,
I mean (a^(b^(b^...c "times"...))) with a,b,c complex, but specifically (i^(i^... i "times"...)).
Thanks for your interest,

Matt
Hi Matt -

it seems I can only be of partial help here (if at all).
The matrix method I employ implements the opposite view of things. Let me say it this way:
your definition asks for an operator, which
* assumes a start value v0= a
* applies an operation which makes it v1 = v0^b
* and then repeatedly v_{k+1} = (v_{k})^b, repeated c times (for the integer case of c). Of course, the question of fractional or complex iteration then arises as well.

However, my method works differently. It
* assumes a start value w0 = b (of your example)
* applies an operation which makes it w1 = a^w0
* and then repeatedly w_{k+1} = a^w_k

So we may say the difference is that
with your idea the same exponent is appended on top of a newly computed base,
and
with my idea the same base is appended beneath a newly computed exponent.
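For integer c, the two directions of iteration can be stated side by side in code (a sketch; the function names are mine):

```python
def iterate_same_exponent(a, b, c):
    """Matt's direction: start with v0 = a, repeatedly raise to the power b."""
    v = a
    for _ in range(c):
        v = v ** b  # same exponent applied to a newly computed base
    return v

def iterate_same_base(a, b, c):
    """Gottfried's direction: start with w0 = b, repeatedly exponentiate a."""
    w = b
    for _ in range(c):
        w = a ** w  # same base put under a newly computed exponent
    return w

print(iterate_same_exponent(2, 3, 2))  # ((2^3)^3) = 512
print(iterate_same_base(2, 3, 2))      # 2^(2^3) = 256
```

Already for these small integer parameters the two processes give different values, which illustrates why the interpretations must be pinned down first.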

I don't know which matrix operator (in my favourite view of things) would implement your idea, but maybe we can find one.

So in this respect I cannot be of help with your general question.



But at least for the detail aspect of "what is a fractional iteration" or even "what is a complex iteration" (indicated by the parameter c), I can add some remarks.

Since I think in terms of a matrix operator Bs or Ba (where the index s or a indicates the used parameter), which implements the c'th iteration by its matrix power Bs^c in my method, there is a "canonical" way to implement fractional or complex powers of Bs: just use the matrix logarithm, or compute the c'th power by eigensystem decomposition ("diagonalization").

Your actual question, with all three parameters identical, seems to be one-to-one reproducible with my matrix operator, so I could compute my solution

V({i,i}^^i)~ = V(i)~ * (dV(log(i))*B)^i = V(i)~ * B_i ^i    // see Andrew Robbins' definition pages for interpreting the {a,x}^^h notation

by the following Eigensystem-decomposition:

Let W and D be the components (where D is diagonal), such that
B_i = (dV(log(i))*B) = W^-1 * D * W
then
V({i,i}^^i)~ = V(i)~ * W^-1 * D^i * W
and
D^i = diag ( d0^i, d1^i, d2^i, .... )
where these entries can be computed by simple scalar complex exponentiation.
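As a sketch in Python/numpy (note that numpy's `eig` returns eigenvector columns, so the factorization reads M = W * D * W^-1 rather than the W^-1 * D * W written above; the entrywise powers d_k^c use the principal branch):

```python
import numpy as np

def matrix_power_complex(M, c):
    """Fractional/complex matrix power via eigendecomposition."""
    d, W = np.linalg.eig(M)               # M @ W = W @ np.diag(d)
    Dc = np.diag(d.astype(complex) ** c)  # d_k^c, principal branch
    return W @ Dc @ np.linalg.inv(W)

M = np.array([[2.0, 0.0], [1.0, 3.0]])
R = matrix_power_complex(M, 0.5)          # a matrix square root of M
print(np.allclose(R @ R, M))              # True
```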

But I'm not sure whether the expected values for our different interpretations of tetration should be equal as well, since they simply define different "processes".

Well - to compute numerical approximations (for my method), one can simply use any eigensystem solver applied to the matrix B_i, which is parametrized with the base variable a=i (see my overview articles concerning this in this forum).

In fact, I have computed some examples for a complex iteration parameter c; I have some crude/sketchy graphs and may upload them if this is of interest.

The problem is that the approximation quality of the numerical approach by naive use of an eigensystem solver (or matrix logarithm) is unknown outside of some "safe" ranges of the parameters. One has, for instance, at least to show not only that the computed values converge with larger sizes of the used matrix, but also that the entries of the matrix itself stabilize as the dimension is increased.

Then the parameters of a problem {a,x}^^h (compare Andrew Robbins' notation-definition here) occur in the matrix equation
V(x)~ * B_a ^h = V(y)~

a -> in the construction of the matrix B_a
h -> in the exponents for the diagonal-matrix D^h of eigenvalues
x -> in the first column of V(x)~
the result {a,x}^^h in the first column of V(y)~
and can be computed by a power series in x, where the coefficients of the second column of B_a^h are used:
y = {a,x}^^h = sum{k=0,inf} x^k * b_k
where the b_k are the entries of the 2'nd column of B_a^h
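In code, this last step is just a power-series evaluation (a sketch; `Bh` stands for an already-computed truncation of B_a^h):

```python
import numpy as np

def eval_series(Bh, x):
    """y = sum_k x^k * b_k, with b_k taken from the second column of Bh."""
    b = np.asarray(Bh)[:, 1]         # second column (index 1)
    return (x ** np.arange(len(b))) @ b

# Trivial check: for Bh = identity, the series collapses to y = x.
print(eval_series(np.eye(5), 0.75))  # 0.75
```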

Again: this is computable, but to answer your question definitively (or at least as a reasonable approximation) one should first find a matrix operator which implements your idea as a model for general parameters.

Gottfried
Gottfried Helms, Kassel
#8
It sounds to me like you refer to i-tetra-i, or \( {}^{i}i \). In your notation, I think you mean \( (a, b, c) = \exp^c_a(b) \), which would actually make \( (i, i, i) = \exp^i_i(i) = \exp^{i+1}_i(1) = {}^{i+1}i \), which is probably not what you were trying to say.

I have never tried to calculate i-tetra-i before. All of the bases that I've tried have been real-valued, but I'll see what I can do for a complex base. I cannot guarantee any results, but I can see what our methods can do for base i.

Andrew Robbins
#9
Well, here it is, given my model of tetration.
Remember that I doubt it is reasonable to assume that the results of my definition of tetration and yours should be equal, although by the sheer notation with equal parameters the formulae look identical. But it may be instructive anyway.

I just hacked a dimension-32 approximation according to the general formula {a,x}^^h, with the parameters of your problem: a=I, x=1, h=I.

The first three columns of Bs = dV(log(I)) * B are

\( \hspace{24pt}
\begin{matrix}
1.00000000000 & 1.00000000000 & 1.00000000000 \\
0 & 1.57079632679*I & 3.14159265359*I \\
0 & -1.23370055014 & -4.93480220054 \\
0 & -0.645964097506*I & -5.16771278005*I \\
0 & 0.253669507901 & 4.05871212642 \\
0 & 0.0796926262462*I & 2.55016403988*I \\
0 & -0.0208634807634 & -1.33526276885 \\
0 & -0.00468175413532*I & -0.599264529321*I \\
0 & 0.000919260274839 & 0.235330630359 \\
0 & 0.000160441184787*I & 0.0821458866111*I \\
0 & -0.0000252020423731 & -0.0258068913900 \\
0 & -0.00000359884323521*I & -0.00737043094571*I \\
0 & 0.000000471087477882 & 0.00192957430940 \\
0 & 0.0000000569217292197*I & 0.000466302805768*I \\
0 & -0.00000000638660308379 & -0.000104638104925 \\
0 & -0.000000000668803510981*I & -0.0000219153534478*I \\
0 & 6.56596311498E-11 & 0.00000430306958703 \\
0 & 6.06693573110E-12*I & 0.000000795205400148*I \\
0 & -1.00000000000E-12 & -0.000000138789524622 \\
0 & 0 & -0.0000000229484289973*I \\
0 & 0 & 0.00000000360473079746 \\
0 & 0 & 0.000000000539266466261*I \\
0 & 0 & -7.70070713060E-11 \\
0 & 0 & -1.05184717169E-11*I
\end{matrix} \)

To get the (trivial) value of I = {I,1}^^1, the terms of the second column must be summed as a power series in x with x=1. Here I show the partial sums, which nicely converge to the expected result y=I:

\( \hspace{24pt}
\begin{matrix}
0.909090909091 \\
0.991735537190+1.29817878248*I \\
0.0723512020014+1.53421128838*I \\
-0.179756007234+1.12519536891*I \\
-0.0681469673065+0.968659579810*I \\
-0.00351032851781+0.977624337253*I \\
0.00436018003194+0.995580017724*I \\
0.00156814228411+1.00015357146*I \\
0.000218934491618+1.00030493211*I \\
-0.0000162526513523+1.00008325134*I \\
-0.0000151845153453+1.00001025056*I \\
-0.00000375533374982+0.999999385186*I \\
-0.000000479000915892+0.999999405083*I \\
0.00000000440878968775+0.999999851342*I \\
0.0000000185010573405+0.999999978737*I \\
0.00000000512110205692+0.999999999027*I \\
0.000000000844337715795+1.00000000042*I \\
7.67095502124E-11+1.00000000015*I \\
-4.07641970796E-12+1.00000000003*I \\
-3.36962348090E-12+1.00000000000*I \\
-1.00000000000E-12+1.00000000000*I \\
1.00000000000*I \\
1.00000000000*I \\
1.00000000000*I
\end{matrix} \)

OK, no obvious error up to here.



Now, to construct the I'th power of Bs, I numerically perform an eigendecomposition. The computed eigenvalues are

\( \hspace{24pt}
\begin{matrix}
-5.47974865102E17-2.37151639248E18*I \\
-1.40947398634E16+1.11407483052E16*I \\
2.46479084446E14+1.28514676152E14*I \\
558111745700.-6.74160002940E12*I \\
-220611525960.+65636453372.0*I \\
6058523291.41+8279445712.41*I \\
337747205.316-464618483.215*I \\
-36794031.6180-13535297.2580*I \\
-358363.072707+3168208.25176*I \\
302209.998163-27260.0213372*I \\
-8395.97431462-32222.7753783*I \\
-3862.15905348+1553.24166668*I \\
267.408153438+522.463483061*I \\
79.9487788351-46.6197593935*I \\
-8.44497709779-13.8323341991*I \\
-2.69468450558+1.59521959157*I \\
-0.153139121702-0.779903738266*I \\
-0.434207067913-0.0300570687829*I \\
-0.566417336767+0.688453222928*I \\
-0.584898170382+0.238979921688*I \\
0.166722386790-0.538414193535*I \\
0.253868227388-0.289262673366*I \\
0.0621911678425+0.0678908098416*I \\
0.327850649567+0.623039459463*I \\
0.271875740531+0.406308595179*I \\
0.623667274970+0.336312783860*I \\
1.00000000000 \\
0.000214033167476+0.00585473360451*I \\
-0.000171842783002+0.000232593700346*I \\
-0.00000000174878768422-0.00000000220727624373*I \\
-0.000000233830386869-0.0000000768729721930*I \\
-0.0000100176583851+0.00000302704107609*I
\end{matrix} \)

Note that these eigenvalues are *not* constant with higher dimensions. They are just constructed from the empirical (truncated with dim=32) matrix operator for base I.

Anyway, this reproduces the basic matrix operator with h=1 perfectly.

Taken to the I'th power (by exponentiating the eigenvalues), Pari/GP gives this matrix as Bs^I (only the finally relevant second column is given here):

\( \hspace{24pt}
\begin{matrix}
2.06145217150-1.02926728869*I \\
5.81300594285+19.5645999807*I \\
-126.124737459-30.4110746405*I \\
470.620324532-351.161359046*I \\
-224.452590018+1917.76776945*I \\
-3037.77318290-3791.12832155*I \\
9716.74257623+1145.91813696*I \\
-13067.3202330+9858.26388713*I \\
4092.40268289-23076.9591360*I \\
15598.5659208+24912.6967922*I \\
-31488.4281518-9341.69517397*I \\
30138.7445858-13658.7995071*I \\
-12822.8086718+27435.3139921*I \\
-6715.89914642-24385.0097082*I \\
16020.8287524+10805.4109019*I \\
-13423.5109290+1685.41825964*I \\
5663.37890805-6617.80298683*I \\
328.580050705+5202.17407548*I \\
-2283.97893527-1905.01648008*I \\
1663.72812201-239.296020348*I \\
-554.785800953+780.887552013*I \\
-58.9789199840-530.120309375*I \\
185.617911945+203.010458380*I \\
-120.661742137-35.9711040163*I \\
48.9129882830-8.69936241104*I \\
-13.8034038806+9.04489016501*I \\
2.68261460172-3.61800295858*I \\
-0.316133256749+0.928004747822*I \\
0.00895779796132-0.163077449477*I \\
0.00364944523306+0.0191872578491*I \\
-0.000587487763491-0.00137114979404*I \\
0.0000308282641305+0.0000451105887704*I
\end{matrix} \)

These terms, used as coefficients of a power series in x, have to be evaluated at x=1 to get y={I,1}^^I. I approximate this by the partial sums, using Euler summation (order 1.3 suffices):

\( \hspace{24pt}
\begin{matrix}
1.58573243962-0.791744068222*I \\
5.39131918181+10.6022321413*I \\
-50.3444128154+2.06106295233*I \\
75.2580631596-128.633476610*I \\
148.738829992+270.527508926*I \\
-469.621461225+14.0551655027*I \\
197.268710310-509.011209276*I \\
382.544924169+325.914251348*I \\
-319.220819985+210.623903212*I \\
-86.6444368232-236.149488090*I \\
147.094103735-24.8153704764*I \\
1.87967578010+83.3028710164*I \\
-45.5253002777-3.29677555442*I \\
2.07384793001-24.3565866239*I \\
12.1754155038+0.365227776672*I \\
-0.0797957239470+5.77003813168*I \\
-3.21904307851-0.0361632958617*I \\
-0.617106650046-1.91201801927*I \\
0.489205540885-0.837213872318*I \\
0.102947554342-0.210707955957*I \\
-0.231195060161-0.311521742399*I \\
-0.232100049827-0.476259731040*I \\
-0.159094876028-0.502092239406*I \\
-0.135412161873-0.474546607718*I \\
-0.142918083910-0.459322945360*I \\
-0.150850150122-0.459521341773*I \\
-0.152427969793-0.462866323691*I \\
-0.151401051174-0.464246402460*I \\
-0.150603568878-0.464150373228*I \\
-0.150448471688-0.463811394143*I \\
-0.150542606888-0.463666152744*I \\
-0.150623503560-0.463664450950*I
\end{matrix} \)

So the result given here is the last row, with

y = {I,1}^^I ~ -0.1506... - 0.4636...*I

But I don't trust this solution, since the third column (not documented here) should give the square y^2, which doesn't appear: the third column yields
0.873379283287-2.89383467237*I
but y^2 = -0.192297283250 + 0.139677528157*I.

So, before one could say that y is a reasonable approximation, one would have to show

a) that the solutions y for values h -> I are consistent and continuously approximate the given value, and

b) that the result makes sense and is consistent with further algebraic operations.

The remaining problem for my matrix method is simply the missing hypothesis of how to deal with the multivalued logarithms of complex eigenvalues and their complex powers. (Note that it may be interesting to read Jay's recent post, which addresses the analogous problem when applying his method; I think it is essentially the same.)

Gottfried
Gottfried Helms, Kassel
#10
Just another hack: I applied my analytical description to the problem definition.

I searched for a fixpoint t for the base-parameter b=I, such that t^(1/t)=b=I

I found:

t= 0.438282936727 + 0.360592471871*I

Check:
t^(1/t) = -4.98388377612 E-45 + 1.00000000000*I ~ 0.0 + 1.0*I

is good.
Then u= log(t):
u = log(t) = -0.566417330285 + 0.688453227108*I
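This fixpoint can be reproduced by naively iterating z -> I^z, which converges here because the multiplier |log(I)*t| is about 0.89 < 1 at the fixpoint (a Python sketch using the principal branch):

```python
import cmath

t = 0.5 + 0.5j
for _ in range(300):
    t = 1j ** t      # iterate z -> I^z toward the attracting fixpoint

print(t)             # ~ 0.438282936727 + 0.360592471871j
print(t ** (1 / t))  # ~ 1j, i.e. t^(1/t) = I as checked above
print(cmath.log(t))  # u ~ -0.566417330285 + 0.688453227108j
```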

Hypothesis: the eigenvalues are consecutive powers of u, beginning at u^0.
We find such powers in the empirical set of eigenvalues, with more or less good approximations:
Code:
                                    1.00000000000   (u^0)
                  -0.566417336767+0.688453222928*I   (u^1)
                  -0.153139121702-0.779903738266*I   (u^2)
                   0.623667274970+0.336312783860*I   (u^3)
                  -0.584898170382+0.238979921688*I   (u^4)
here approximations become worse:
                   0.166722386790-0.538414193535*I
                   0.271875740531+0.406308595179*I
                 -0.434207067913-0.0300570687829*I
                   0.253868227388-0.289262673366*I
                 0.0621911678425+0.0678908098416*I


The full set of eigenvalues, as reported before, but reordered are

\( \hspace{24pt}
\begin{matrix}
1.00000000000 \\
-0.566417336767+0.688453222928*I \\
-0.153139121702-0.779903738266*I \\
0.623667274970+0.336312783860*I \\
-0.584898170382+0.238979921688*I \\
0.166722386790-0.538414193535*I \\
0.271875740531+0.406308595179*I \\
-0.434207067913-0.0300570687829*I \\
0.253868227388-0.289262673366*I \\
0.0621911678425+0.0678908098416*I \\
---- \\
-5.47974865102E17-2.37151639248E18*I \\
-1.40947398634E16+1.11407483052E16*I \\
2.46479084446E14+1.28514676152E14*I \\
558111745700.-6.74160002940E12*I \\
-220611525960.+65636453372.0*I \\
6058523291.41+8279445712.41*I \\
337747205.316-464618483.215*I \\
-36794031.6180-13535297.2580*I \\
-358363.072707+3168208.25176*I \\
302209.998163-27260.0213372*I \\
-8395.97431462-32222.7753783*I \\
-3862.15905348+1553.24166668*I \\
267.408153438+522.463483061*I \\
79.9487788351-46.6197593935*I \\
-8.44497709779-13.8323341991*I \\
-2.69468450558+1.59521959157*I \\
0.327850649567+0.623039459463*I \\
---\\

0.000214033167476+0.00585473360451*I \\
-0.000171842783002+0.000232593700346*I \\
-0.00000000174878768422-0.00000000220727624373*I \\
-0.000000233830386869-0.0000000768729721930*I \\
-0.0000100176583851+0.00000302704107609*I
\end{matrix} \)

The eigenvalues according to my hypothesis (still not adapted to the problem of complex values) should be:
\( \hspace{24pt}
\begin{matrix}
1 \\
-0.566417330285+0.688453227108*I \\
-0.153139253867-0.779903677850*I \\
0.623667931186+0.336321745566*I \\
-0.584798115648+0.238867734628*I \\
0.166790524665-0.537904974464*I \\
0.275849371850+0.419506174539*I \\
-0.445056244417-0.0477061771754*I \\
0.284931041419-0.279378802200*I \\
0.0309493581638+0.354406690248*I \\
-0.261522682435-0.179434905821*I \\
0.271663519562-0.0784110943693*I \\
-0.0998925545269+0.231441029468*I \\
-0.102755449572-0.199863561558*I \\
0.195799181354+0.0424638640984*I \\
-0.140138433849+0.110746309732*I \\
0.00313318324561-0.159207386122*I \\
0.107832149466+0.0923348727257*I \\
-0.124646239322+0.0219373191843*I \\
0.0554989719204-0.0982387834742*I \\
0.0361972280012+0.0938525957857*I \\
-0.0851158596893-0.0282396383155*I \\
0.0676527681408-0.0426028677382*I \\
-0.00898961853835+0.0707067691561*I
\end{matrix}
\)

Gottfried
Gottfried Helms, Kassel

