Part of your confusion is over the difference between "precision" and
"accuracy".
A floating point format has a FINITE set of specific values that it can
store with infinite accuracy, i.e. exactly. You happen to have chosen
values that are in that finite set.
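For instance (a quick C sketch, assuming a C compiler is to hand; I don't
know what language your original test used), printing a few values with far
more digits than a double really carries shows which ones are in that set:

    #include <stdio.h>

    /* 0.5 and 0.75 are sums of negative powers of two, so they are in the
     * finite set a double can store exactly; 0.1 is not, and the extra
     * digits show the nearest representable value that actually got stored. */
    int main(void)
    {
        printf("%.30f\n", 0.5);   /* exactly representable */
        printf("%.30f\n", 0.75);  /* exactly representable */
        printf("%.30f\n", 0.1);   /* stored as the nearest double */
        return 0;
    }

The first two come back with nothing but zeros after the digits you typed;
0.1 comes back with trailing digits that belong to its nearest representable
neighbour.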
You can set the precision of your output to whatever (finite) value you
like. It may be much more than the accuracy of your number, or much less
(especially if the accuracy of your number is infinite).
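Here is that idea as another small sketch (again plain C, purely for
illustration): 2.0/3.0 has infinitely many decimal digits, but a double
only carries about 17 significant decimal digits of it, so any precision
you request beyond that just exposes where the stored value diverges from
the mathematical one:

    #include <stdio.h>

    /* Output precision is independent of how accurately the value is stored. */
    int main(void)
    {
        double d = 2.0 / 3.0;
        printf("%.5g\n", d);    /* fewer digits than the double holds */
        printf("%.17g\n", d);   /* about all the digits it really holds */
        printf("%.30g\n", d);   /* more digits than it holds */
        return 0;
    }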
Even if the accuracy of a stored floating point number is infinite, the
algorithm that converts it from binary to decimal for printing could
introduce some inaccuracy of its own. I'm not sure of those details.
Your results seem to indicate that the conversion is done surprisingly well.
To help you understand, instead of outputting just d each time, output
both d and d+1. At some point d will still be 100% accurate but d+1
will not be; in fact it will come out exactly equal to d, because once d
is large enough the gap between adjacent representable values exceeds 1,
so d+1 rounds back to d.
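Something like the following rough C sketch (not your original code, just
an illustration of that experiment) shows it happening:

    #include <stdio.h>

    /* Keep doubling d and print both d and d+1 with more digits than a
     * double actually carries, flagging the point where d+1 collapses
     * onto d. */
    int main(void)
    {
        double d = 1.0;
        while (d < 1.0e17) {
            double d1 = d + 1.0;
            printf("d   = %.20g\n", d);
            printf("d+1 = %.20g%s\n", d1, (d1 == d) ? "   <- equal to d" : "");
            d *= 2.0;
        }
        return 0;
    }

With IEEE 754 doubles, the d+1 lines should start collapsing onto d once d
reaches about 2^53 (roughly 9.0e15).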
Jonathan wrote:
Please have a look at my query posted at the following 3 forums.
It has not received any explanations yet.
On the one hand it appears not to be a problem, but on the other hand it
could be a spectacular bug.
http://forums.debian.net/viewtopic.php?f=8&t=49627
http://www.linux.com/community/forums?func=view&catid=17&id=4589
http://www.linuxformat.com/forums/viewtopic.php?t=11688