On 5/21/2019 9:48 PM, Paul Dale wrote:
> Double makes sense. Entropy is often estimated as a real value.
Having a human-readable calculation that uses floating point doesn't (to me)
mean that an API argument has to be a double.
From what I see in the code, the 'double entropy' parameter is used
to increment a counter that must eventually reach ENTROPY_NEEDED
(#define ENTROPY_NEEDED 32).
Couldn't the number have been an unsigned long? If more precision were
needed, make the units 1/64k and make the threshold 32 * 64k. It's a
bit more work for the caller, but it removes the (perhaps only) place
floating point is needed.
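
To make the idea concrete, here is a rough sketch of the fixed-point
version. The names ENTROPY_SCALE, rand_add_fixed, entropy_count and the
main() at the end are mine, invented for illustration; only
ENTROPY_NEEDED mirrors the existing #define.

    #include <limits.h>

    #define ENTROPY_NEEDED        32           /* bytes of entropy required   */
    #define ENTROPY_SCALE         65536UL      /* fixed-point units per byte  */
    #define ENTROPY_NEEDED_FIXED  (ENTROPY_NEEDED * ENTROPY_SCALE)

    static unsigned long entropy_count;        /* accumulated, in 1/64k units */

    /*
     * The caller passes entropy pre-scaled by ENTROPY_SCALE instead of a
     * double, e.g. 0.5 bytes of entropy becomes ENTROPY_SCALE / 2.
     */
    static void rand_add_fixed(unsigned long entropy_fixed)
    {
        /* Saturate rather than wrap on overflow. */
        if (entropy_count > ULONG_MAX - entropy_fixed)
            entropy_count = ULONG_MAX;
        else
            entropy_count += entropy_fixed;
    }

    static int rand_status_fixed(void)
    {
        return entropy_count >= ENTROPY_NEEDED_FIXED;
    }

    int main(void)
    {
        rand_add_fixed(ENTROPY_SCALE / 2);     /* 0.5 bytes of entropy */
        rand_add_fixed(40 * ENTROPY_SCALE);    /* 40 whole bytes       */
        return rand_status_fixed() ? 0 : 1;    /* 0 once seeded        */
    }

The arithmetic is the same as the double version for any realistic
entropy estimate, just expressed in 1/64k-of-a-byte units, and it keeps
the hot path entirely in integer code.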