Re: jack/oversampling

On Sun, Mar 16, 2014 at 05:45:22PM +0100, tim wrote:
 
> a) any non-linearity introduces harmonics, some non-linearities
> introduce an infinite number of harmonics, which will cause foldover
> distortion. the larger the sampling rate, the lower the foldover.

You should not have any non-linearities, except those introduced
on purpose, e.g. by distortion plugins and the like. And then it
all depends on how these are designed. If done well, they will
not add any aliased components. One way to avoid that is using
higher sample rates internally, but it's not the only one.
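A quick numpy sketch of that idea (illustrative only, not anyone's actual plugin code): hard-clip a 5 kHz sine at 48 kHz and the 7th harmonic (35 kHz) folds down to 13 kHz; apply the same non-linearity 4x oversampled, with windowed-sinc resampling around it, and that aliased component largely disappears.

```python
import numpy as np

fs = 48000        # base sample rate
L = 4             # oversampling factor
n = 1 << 14

# Windowed-sinc lowpass for the oversampled rate, cutoff at fs/2
taps = 255
m = np.arange(taps) - (taps - 1) / 2
h = (1.0 / L) * np.sinc(m / L) * np.hanning(taps)

def clip(v):                        # memoryless non-linearity
    return np.clip(v, -0.5, 0.5)

t = np.arange(n) / fs
x = 0.9 * np.sin(2 * np.pi * 5000.0 * t)    # 5 kHz sine

# Naive: clip at 48 kHz; the 7th harmonic (35 kHz) folds to 13 kHz
y_naive = clip(x)

# Oversampled: zero-stuff to 192 kHz, interpolate, clip,
# band-limit to 24 kHz again, then decimate back to 48 kHz
up = np.zeros(n * L)
up[::L] = x
up = np.convolve(up, L * h, mode='same')    # interpolation filter (gain L)
y_os = np.convolve(clip(up), h, mode='same')[::L]
```

Comparing the spectra of y_naive and y_os around 13 kHz shows the aliased harmonic dropping by the stopband attenuation of the resampling filters (about 40 dB with this simple Hann-windowed sinc; a serious implementation would use a better filter).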
 
> b) delay-lines have a higher precision at higher sampling-rates

Fractional delays are possible at any rate, to any precision.
The only limit is that you can't have very short ones (as the
output would depend on future samples).
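For example, a 3rd-order Lagrange interpolator (one standard choice among many) gives an accurate fractional delay at any rate; note it needs one 'future' sample, which is exactly why delays much below one sample are not realizable causally.

```python
import numpy as np

def frac_delay_lagrange3(x, d):
    """Delay x by d samples (d >= 1) with a 3rd-order Lagrange
    interpolator using taps at offsets -1, 0, +1, +2 around the
    integer part of the delay."""
    n0 = int(np.floor(d))       # integer part
    f = d - n0                  # fractional part, 0 <= f < 1
    # Lagrange coefficients for the four surrounding samples
    h = np.array([
        -f * (f - 1) * (f - 2) / 6,
        (f + 1) * (f - 1) * (f - 2) / 2,
        -(f + 1) * f * (f - 2) / 2,
        (f + 1) * f * (f - 1) / 6,
    ])
    y = np.zeros_like(x)
    for k, c in enumerate(h):   # tap k sits at integer delay n0 - 1 + k
        shift = n0 - 1 + k
        if shift >= 0:
            y[shift:] += c * x[:len(x) - shift]
    return y
```

Delaying a 1 kHz sine at 48 kHz by 1.37 samples this way matches the analytically delayed sine to well below audible error; higher-order interpolators push the precision as far as you like, at any sample rate.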
 
> c) the tuning of digital filters is more precise at higher
> sampling-rates due to the frequency warping in the blt

Assuming the filter is _tuned_ correctly (e.g. the centre
frequency for a parametric is corrected for warping), there
will be a difference in the actual shape of the frequency
response. But there is _no_ reason to assume that the original
'analog' shape is any better (or worse) than the warped one.
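To make the 'corrected for warping' part concrete, here is a minimal sketch (a one-pole RC lowpass, chosen only for brevity): prewarping the cutoff, wa = 2*fs*tan(pi*fc/fs), before applying the bilinear transform puts the digital -3 dB point exactly at fc.

```python
import numpy as np

def onepole_lowpass_blt(fc, fs):
    """One-pole lowpass H(s) = wa / (s + wa), discretized with the
    bilinear transform. Prewarping the cutoff makes the digital
    -3 dB point land exactly at fc, not at the warped frequency."""
    c = np.tan(np.pi * fc / fs)             # = wa / (2 * fs), prewarped
    b = np.array([c, c]) / (1 + c)          # numerator:   b0 + b1 z^-1
    a = np.array([1.0, (c - 1) / (1 + c)])  # denominator: 1  + a1 z^-1
    return b, a

def mag_at(b, a, f, fs):
    """Magnitude of H(z) evaluated on the unit circle at frequency f."""
    z = np.exp(-2j * np.pi * f / fs)
    return abs((b[0] + b[1] * z) / (a[0] + a[1] * z))
```

Without the tan() (i.e. taking c = pi*fc/fs), the -3 dB point of a 10 kHz filter at 48 kHz lands noticeably below 10 kHz; with the prewarp it is exact, only the shape of the response away from fc differs from the analog prototype.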

> iir filters may have a higher quantization noise, but that is the
> reason why a good filter implementation is done in double precision.

No. If a filter requires double precision to avoid problems
then you made a bad choice of filter architecture. Lots of
plugins (usually using 'textbook' biquads) fail in this way.
It's perfectly possible to create audio filters that work
perfectly even in 16-bit fixed point format (with a higher
precision multiply). A lot of research went into this in
the early years of digital audio - just look up the AES
journals from the 1970s. The solution is to understand the
problem and use the correct filter architecture, not the
brute force method of using doubles blindly.
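A toy illustration of the architecture point (made-up word lengths, numpy instead of real fixed-point arithmetic): quantize the coefficients of a 50 Hz high-Q resonator at 48 kHz to 16 bits. In the textbook direct form the pole angle is wrecked (the poles can even collapse onto the real axis), while the coupled (Gold & Rader) form, whose coefficients are r*cos and r*sin of the pole angle, keeps the tuning to a fraction of a percent with the same word length.

```python
import numpy as np

fs = 48000.0
fc = 50.0                       # low-frequency resonance: the hard case
theta = 2 * np.pi * fc / fs     # desired pole angle
r = 0.999                       # pole radius (high Q)

def quant(x, step):
    """Round to a fixed-point grid with the given step size."""
    return np.round(x / step) * step

# Textbook direct-form denominator 1 + a1 z^-1 + a2 z^-2,
# coefficients quantized to 16 bits over a +-2 range
a1, a2 = -2 * r * np.cos(theta), r * r
a1q, a2q = quant(a1, 2 ** -14), quant(a2, 2 ** -14)
poles = np.roots([1.0, a1q, a2q])
theta_df = np.abs(np.angle(poles)).max()

# Coupled (Gold & Rader) form: coefficients r*cos(theta), r*sin(theta),
# both within +-1, quantized with the same 16-bit word
c1q = quant(r * np.cos(theta), 2 ** -15)
c2q = quant(r * np.sin(theta), 2 ** -15)
theta_cf = np.arctan2(c2q, c1q)

err_df = abs(theta_df - theta) / theta    # relative tuning error
err_cf = abs(theta_cf - theta) / theta
```

The coupled form wins because its coefficient grid is uniform in the pole's rectangular coordinates, so quantization error does not blow up as the pole angle approaches zero, which is exactly where the direct form fails.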
  
> frankly, 48k may be good enough for distribution, but it is
> sub-optimal for production ... and it is horrible for digital
> synthesis.

Only if you use 'primitive' algorithms. Unfortunately there's
a lot of those around.

In summary, 96 or 192 kHz will allow you to use simpler algorithms.
That may be a good reason for higher sample rates, but it doesn't
mean you can't have the same performance at 48 kHz.

Another good reason for higher sampling rates is that the
anti-aliasing filters in the converters can have a much wider
transition band (assuming you don't actually use the higher
bandwidth), leading to much reduced latency. It's the reason
why 'digital snakes' used in PA systems usually work at 96 kHz.
By starting the transition band at 24 kHz or so they can use
very short filters, a fraction of a millisecond for some.
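Back-of-the-envelope numbers, using the standard Kaiser estimate for linear-phase FIR length (the band edges are illustrative, not any particular converter's spec):

```python
import math

def fir_latency_ms(f_pass, f_stop, fs, atten_db=90.0):
    """Estimate FIR length with the Kaiser formula
    N ~ (A - 8) / (2.285 * d_omega), then return the group delay
    (N - 1) / 2 samples expressed in milliseconds."""
    d_omega = 2 * math.pi * (f_stop - f_pass) / fs
    n_taps = math.ceil((atten_db - 8) / (2.285 * d_omega))
    return (n_taps - 1) / 2 / fs * 1000.0

# 44.1 kHz: the transition band is squeezed between 20 and 22.05 kHz
lat_441 = fir_latency_ms(20000.0, 22050.0, 44100.0)
# 48 kHz: same audio band, but roughly twice the transition width
lat_48 = fir_latency_ms(20000.0, 24000.0, 48000.0)
# 96 kHz: the transition can run all the way from 24 to 48 kHz
lat_96 = fir_latency_ms(24000.0, 48000.0, 96000.0)
```

With 90 dB of stopband attenuation this gives roughly 1.4 ms at 44.1 kHz, 0.7 ms at 48 kHz, and around 0.1 ms at 96 kHz, which matches both the 'fraction of a millisecond' claim above and the 44.1 vs 48 kHz remark below.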

The same effect makes all the difference between 44.1 and 48 kHz.

Ciao,

-- 
FA

A world of exhaustive, reliable metadata would be an utopia.
It's also a pipe-dream, founded on self-delusion, nerd hubris
and hysterically inflated market opportunities. (Cory Doctorow)

_______________________________________________
Linux-audio-user mailing list
Linux-audio-user@xxxxxxxxxxxxxxxxxxxx
http://lists.linuxaudio.org/listinfo/linux-audio-user



