On 07/30/2013 07:59 AM, hemant wrote:
I have written standard C code for a 32-bit ARM platform using the math.h library and the float powf(float, float) function. When I give my system the input 100 ^ 4.4, it returns 630957632.0000 (as a float), whereas the calculator in Windows XP gives 630957344.48019324943436013662234.
I just want to know which one is more accurate, why there is a difference, and on what things accuracy depends.
Also, what do we mean by "arbitrary" precision? I have read that term on the MSDN help forums.
How accurate is my system, and how can I improve its accuracy?
Thanks in advance!
--
View this message in context: http://gcc.1065356.n5.nabble.com/powf-float-float-function-from-math-h-on-ARM32-bit-platform-tp956727.html
Sent from the gcc - Dev mailing list archive at Nabble.com.
The question is not really appropriate for the gcc list; switching to gcc-help.
In standard C, you have the data type double, supporting nearly 16 decimal
digits of precision, and float, supporting nearly 7 decimal digits in
32 bits of data, as your result indicates: your powf result agrees with the
calculator to about 7 significant digits, which is all a float can hold.
<float.h> contains the relevant parameters for your implementation.
gcc uses the multiple-precision packages gmp, mpfr, and mpc in its own build.
A very traditional basic multiple-precision package (with C source code)
is included with the bc interpreter, which is closely related to the old
command-line utility dc (early Unix bc was in fact a front end to dc).
--
Tim Prince