Tetration Forum

Full Version: How to force precision in SAGE?
I've been trying to remove some inaccuracy from my tetration library, and I keep getting precision limited to about 10-15 digits or so. I've finally figured out that despite using a RealField(256), which should provide roughly 77 decimal digits of precision, I'm getting truncated to double precision at some point. The net result is that I can apply a reversible series of transformations to 0 and get back numbers like -0.00000000718...

Needless to say, it's very frustrating. I haven't found a global setting in SAGE for the precision of real arithmetic, so I'm stuck having to push all my reversible transformations (like e^z-1 and ln(z+1)) through a RealField variable. Even then, I'm still losing precision somewhere. What's the "right" way to use arbitrary precision math in SAGE?
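For what it's worth, the round trip does stay clean as long as every intermediate value lives in the high-precision type. Here's a minimal sketch using Python's stdlib decimal module as a stand-in for Sage's RealField (the variable names and the precision target are mine, not from the thread):

```python
from decimal import Decimal, getcontext

# ~77 significant digits, roughly what RealField(256) provides
# (256 * log10(2) is about 77). decimal stands in for Sage's
# RealField here; the principle is the same: never let a value
# round-trip through a double along the way.
getcontext().prec = 77

z = Decimal("0.5")
w = z.exp() - 1          # the e^z - 1 step, at full working precision
back = (w + 1).ln()      # the ln(z + 1) step inverts it

# back agrees with z to well over 70 digits, instead of drifting
# around the 9th decimal place the way the double-precision path does
print(back - z)
```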

Please Help!
Never mind, I figured it out. Apparently, e^z and exp(z) are not equivalent. The former uses double precision, as if it were doing the following internally:

e^z = b^z, with b = 2.7182818284590458..., where that final 8 should be a 2.

In other words, it treats e like any other base, and evaluates e^z as exp(z·ln(e)), where e is only stored to double precision. Now I know not to use e^z.
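This failure mode is easy to reproduce outside Sage, too. A sketch in plain Python, again with decimal standing in for RealField (b, z, and the helper names are illustrative, not Sage internals):

```python
import math
from decimal import Decimal, getcontext

getcontext().prec = 50

# b is e captured from a double: only the first ~16 digits are correct,
# which mirrors the base that e^z apparently falls back to.
b = Decimal(math.e)

z = Decimal("2.5")
via_base = b ** z   # b's double-precision error propagates into the result
via_exp = z.exp()   # e effectively recomputed at full working precision

# the two answers agree to only ~16 significant digits
print(abs(via_base - via_exp))
```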
Yep, I just verified it: if I type e.base_ring(), I get:
Real Field with 53 bits of precision
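That 53-bit figure lines up with the 10-15 digit ceiling from the start of the thread; a quick sanity check (my arithmetic, not from the post):

```python
import math

# 53 mantissa bits correspond to 53 * log10(2) ~= 15.95 decimal digits,
# so anything routed through that field tops out around 15-16 digits.
digits = 53 * math.log10(2)
print(digits)
```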

Well, whatever, I'm using exp(z) now, so onwards and upwards.