On 2011-12-20 17:20:32 +0100, Segher Boessenkool wrote:
> *Any* specific rounding mode is a bad idea *in general*.

No, rounding to nearest gives the best accuracy on average (which is
preferable for most algorithms). That's why it is the default rounding
mode.

> It all depends on your algorithm.

Algorithms that use directed rounding generally need to mix different
modes, so setting one particular mode won't help in such a case.

> The same is true for flush-to-zero: for some algorithms it is great,
> for others it is disastrous.

I would say that, in general, algorithms that work with FTZ also work
with the usual rounding to nearest. The main advantage of FTZ is that
it is faster.

> For the OP's example, rounding towards zero does not give less
> precise results.

I don't think this is true (or can you explain why?). Also note that,
in general, with rounding to nearest the errors tend to compensate
partly. This is not true for directed rounding, except for particular
algorithms.

> To get accurate results requires (much) more work.

If one wants results accurate to around 1 ulp, yes. But even without
this work, rounding to nearest is a bit better than directed rounding.

> In either case, the OP has his answer: the big slowdown he is
> seeing is because his CPU handles calculations with denormals
> much more slowly than it handles normal numbers. And in this
> thread various ways to avoid denormals have been pointed out.
> Which of those is best for his particular actual problem is not
> something we can answer.

Trapping the underflow exception may be another solution, e.g. to
branch to specific code when the first underflow occurs, but this
requires some work.

-- 
Vincent Lefèvre <vincent@xxxxxxxxxx> - Web: <http://www.vinc17.net/>
100% accessible validated (X)HTML - Blog: <http://www.vinc17.net/blog/>
Work: CR INRIA - computer arithmetic / Arénaire project (LIP, ENS-Lyon)