On Thu, Mar 12, 2009 at 6:45 AM, John Tapsell <johnflux@xxxxxxxxx> wrote:
> 2009/3/11 Ealdwulf Wuffinga <ealdwulf@xxxxxxxxxxxxxx>:
>> On Wed, Mar 11, 2009 at 9:35 AM, John Tapsell <johnflux@xxxxxxxxx> wrote:
>> What I use is the multiprecision floating point number class. Doubles
>> don't seem to be long enough.
>
> Hmm, really really? Sometimes this sort of thing can be fixed by just
> readjusting the formulas. What formulas are you using that require
> more precision than doubles?

I'll have to reply to this later when I have more time. However, there is
a (rather verbose) file in the doc directory which describes them - in
TeXmacs format, but I've just uploaded a PDF version as well. It is
BayesianSearch_Debugging.pdf. The description of this code starts in
section 2.2 (since I wrote that, I have generalised it to the DAG case,
as in git).

> A little bit of math trickery helps here :-)
>
> y = x^b
>
> log(y) = log(x^b) = b * log(x)
> e^log(y) = e^(b * log(x))
>
> y = exp(b * log(x))
>
> So as long as you have 'exp' and 'log' functions, you can raise x to
> the power of b, even if b is fractional.

Sadly, gmp does not have log or exp. mpfr does, but it does not have a
Python interface.

Alex
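
For illustration, here is John's exp/log identity written out in plain
Python - a minimal sketch using ordinary doubles from the standard math
module, not the multiprecision arithmetic the thread is about; the helper
name fpow is purely illustrative:

    import math

    def fpow(x, b):
        # Raise x to a (possibly fractional) power b using the identity
        # x^b == exp(b * log(x)), which holds for x > 0.
        return math.exp(b * math.log(x))

    # Example: 2^0.5 computed via exp/log matches math.sqrt(2)
    # to double precision.
    print(fpow(2.0, 0.5))    # ~1.4142135623730951
    print(math.sqrt(2.0))    #  1.4142135623730951

The same identity carries over to any arbitrary-precision library that
provides exp and log, which is why the lack of those functions in gmp's
Python bindings is the sticking point here.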