Re: About Algorithms

David Adler wrote:

> AFAIK everything Jack (including Ardour) uses single precision 32 bit
> floating point samples. (Not 64 bit double precision as Erik suggests
> - or am I wrong here?)

The actual data values are 32 bit, but they are converted to 64 bit
before the arithmetic is done.

For instance in Secret Rabbit Code (my code), all data entering
and leaving the converter plus the actual filter coefficients are
stored as 32 bit floats. However, the inner loop which does the
multiply accumulate (similar to what is done when mixing) does:

    double sum = 0.0 ;

    for ( ..... )
        sum += coeff [k] * data [k] ;

Specifically all the inputs are 32 bit floats, but all intermediate
results are 64 bit.
 
> 32 bit floating point gives a dynamic range of ~192dB, well above the
> dynamic range of our hearing or any analog audio hardware, leaving
> ample headroom for rounding errors to disappear.

Floating point calculations have problems. Specifically, if you sum
a long list of numbers containing both very large and very small
values, you will get different results depending on whether you
add them from smallest to largest or from largest to smallest. For
the most accurate results, add from smallest to largest.

This is probably the best known paper on the issues surrounding
floating point:

    http://download.oracle.com/docs/cd/E19422-01/819-3693/ncg_goldberg.html


However, the problems of floating point are almost non-existent
in comparison to the problems of fixed point.
 

> I would not speak of inferiority or superiority when comparing this
> and 48 bit integer calculations of pro tools. Single precision floats
> as jack uses them will not be the bottleneck of SN ratio or any other

I would be almost certain that Jack works on single precision float
data, but does all the intermediate calculations in double precision.

Assume that the 48 bit arithmetic only represents values in the
range (-1.0, 1.0), as is usually the case when doing audio
processing on DSP processors.

Consider two values that are to be stored in a 48 bit fixed point
register:

     va = 1.0 / pi
     vb = 1.0 / (pi * 0x10000000000)

In the case of the value va, nearly all of the 48 register bits
will be used and we will get close to 48 bits of precision.

For the case of vb, a number very much smaller than 1.0, about
40 of the most significant bits will be zeros, leaving only about
8 bits of precision.

Now compare the above fixed point representation with the floating
point representation, where the mantissa would have the same number
of bits for both numbers and only the exponents would differ.

> Giving this[1] paper a quick look, they use the term "double
> precision" for 48 bit integer, probably relating it to the 24bits of
> the DA/AD converters.

No, this is much more likely the double precision mode of the Motorola
56000 family of 24 bit fixed point DSP chips.

> All that
> bit-shifting/truncation/extra-headroom-bits-stuff mentioned there is
> related to the integer format and does not apply to floats.
> 
> [1] http://akmedia.digidesign.com/support/docs/48_Bit_Mixer_26688.pdf

Exactly. Floating point, especially double precision floating point,
makes the code easier to write, because much less of this faffing
about is required.

Erik
-- 
----------------------------------------------------------------------
Erik de Castro Lopo
http://www.mega-nerd.com/
_______________________________________________
Linux-audio-user mailing list
Linux-audio-user@xxxxxxxxxxxxxxxxxxxx
http://lists.linuxaudio.org/listinfo/linux-audio-user

