Re: Getting greatest decimal accuracy out of G_PI

On Sun, Feb 04, 2007 at 12:17:18PM +0200, Tor Lillqvist wrote:
>  > Well I'm wondering why the header defines G_PI to 49 places
>  > (if I counted right), if the biggest Gtk2 data type only holds precision
>  > to 15? ...
> 
> The only reason why G_PI is defined in a GLib (not GTK+) header in the
> first place is to aid portability, as there is no standard macro for
> pi. The macro M_PI, although common on Unix, is not mandated by the C
> standard, and does not exist in the Microsoft C headers, for
> instance.
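
For illustration, the portability workaround alluded to above
usually looks something like the following sketch; the fallback
definition shown is the common hand-rolled idiom, not GLib's
actual code:

    /* M_PI is common on Unix (an XSI extension) but not mandated
     * by ISO C; MSVC only provides it when _USE_MATH_DEFINES is
     * defined before including <math.h>.  A common fallback: */
    #include <math.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif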

This was explained in the very first reply in this thread.
However, the questions raised are:
1. Why is it defined with 166-bit precision, which is far too
   much not only for an IEEE double (52-bit mantissa) but even
   for a Cray (96-bit mantissa)?  (The 50 significant digits
   correspond to roughly 50*log2(10), i.e. about 166 bits.)
2. Does any trick exist to get the extra precision out of G_PI
   when one works with floating point types more precise than
   double?  (See the sketch after this list.)
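
As a side note on question 2, here is a minimal sketch (plain C,
no GLib needed; the G_PI value is copied from the 49-place
definition discussed above) of why the answer is no, assuming the
macro is the plain unsuffixed constant shown: an unsuffixed
floating constant has type double in C, so the compiler rounds
G_PI to double precision before any long double ever sees it.

    #include <stdio.h>

    /* The 49-place value from the header; note it carries no L
     * suffix, so it has type double and is rounded to double
     * precision at compile time. */
    #define G_PI 3.1415926535897932384626433832795028841971693993751

    int main (void)
    {
      long double from_macro = G_PI;  /* already rounded to double */
      long double suffixed =
        3.1415926535897932384626433832795028841971693993751L;

      printf ("G_PI as long double: %.21Lf\n", from_macro);
      printf ("L-suffixed literal:  %.21Lf\n", suffixed);
      return 0;
    }

On x86, where long double has a 64-bit mantissa, the two printed
lines diverge around the 17th significant digit; only an
L-suffixed literal of one's own yields full long double precision.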

If the answers are
1. Why not, it does not hurt.
2. No.
then I'm fine with it; I just expected some deeper purpose
that we had overlooked.

Yeti


--
Whatever.