Obsolete frames being played at the end of a sample

Hello,

I am currently working with a PCI audio card using the mainline Xilinx
XDMA controller driver through the PCM DMAengine abstraction. The
driver for the audio card is out-of-tree but does not matter much in
this case, as the card is always sampling the data.

During playback, the DMA engine is configured in cyclic mode and
raises an interrupt at the end of each period. The core is notified of
each elapsed period through the usual vchan_cyclic_callback(). This is
all very standard, I believe.
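
For reference, this is roughly what the generic glue in
sound/core/pcm_dmaengine.c does for playback (heavily simplified,
error handling omitted, function names illustrative): a single cyclic
descriptor covers the whole ring buffer, and the per-period completion
callback just notifies ALSA.

#include <linux/dmaengine.h>
#include <sound/pcm.h>
#include <sound/dmaengine_pcm.h>

/* Called once per elapsed period (scheduled by the DMA engine driver,
 * typically through vchan_cyclic_callback()). */
static void pcm_dma_complete(void *arg)
{
        struct snd_pcm_substream *substream = arg;

        snd_pcm_period_elapsed(substream);
}

static int pcm_prepare_and_submit(struct snd_pcm_substream *substream,
                                  struct dma_chan *chan)
{
        struct snd_pcm_runtime *runtime = substream->runtime;
        struct dma_async_tx_descriptor *desc;

        /* One cyclic descriptor spanning the whole ring buffer,
         * interrupting at every period boundary. */
        desc = dmaengine_prep_dma_cyclic(chan, runtime->dma_addr,
                                snd_pcm_lib_buffer_bytes(substream),
                                snd_pcm_lib_period_bytes(substream),
                                snd_pcm_substream_to_dma_direction(substream),
                                DMA_PREP_INTERRUPT);
        if (!desc)
                return -ENOMEM;

        desc->callback = pcm_dma_complete;
        desc->callback_param = substream;
        dmaengine_submit(desc);
        dma_async_issue_pending(chan);

        return 0;
}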

The problem I see is related to the end of playback. While the very
last period is being played, the following periods in the buffer are
no longer being written. When the last interrupt fires, the core is
made aware that playback is done and consequently stops the DMA
engine. But in practice, there is always a small delay between the
interrupt firing and the DMA engine actually being stopped. If this
delay is longer than the time between two frames (roughly 20µs at
48kHz), the very next (obsolete) frame will be played.

As long as it is a single frame, it is probably not really
noticeable, but the Xilinx DMA engine behaves slightly differently.
The controller can only stop between transfers, so once a transfer (a
period in this case) has started, it will be processed until the end.
A full period being played with stale audio samples is really bad and
can be heard very distinctly as a repetition of the end of the sound
file.

Maybe this DMA engine is a bit too greedy? But in any case, the
latency of the system alone (in particular on server installations)
should lead to regular glitches on many setups. Maybe the fact that
the audio card cannot really "stop" is also a root cause? But I would
definitely expect a way to tell the upper layer that the DMA
controller needs some more time to stop. For instance, knowing that a
file would last 17 periods (the sample lasts 16.5 periods, and 0.5
period of silence is automatically added), I've tried the following
hack: when the 16th period has been played, I actually tell the core
that 17 periods have elapsed. The core then requests the termination
of the DMA transfer while the DMA engine is still processing the 17th
period. When the 17th period has been fully processed, the DMA engine
knows it has to stop, and no parasitic sound can be heard.
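
Roughly, such a hack can be expressed in the card driver's .pointer
callback: report the hardware pointer one full period ahead of what
the DMA engine has really consumed, so the core decides to stop one
period earlier. A sketch assuming the generic PCM dmaengine pointer
helper (my_pcm_pointer is just an illustrative name):

#include <sound/pcm.h>
#include <sound/dmaengine_pcm.h>

static snd_pcm_uframes_t my_pcm_pointer(struct snd_pcm_substream *substream)
{
        struct snd_pcm_runtime *runtime = substream->runtime;
        snd_pcm_uframes_t pos = snd_dmaengine_pcm_pointer(substream);

        /* Pretend we are one full period further than the DMA engine
         * reports, so the final (obsolete) period is never started. */
        pos += runtime->period_size;
        if (pos >= runtime->buffer_size)
                pos -= runtime->buffer_size;

        return pos;
}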

I've looked in the sound subsystem: there are some "silencing" and
"draining" mechanisms, but none of them seems to be linked to the DMA
engine layer. I would expect the DMA layer to be able to expose either
one of these two capabilities to the upper layers:
- "I cannot stop immediately, please fill the whole buffer with
  silence after the last useful period, because some more samples will
  be played." (see the userspace silencing sketch below)
- "I cannot stop immediately, please tell me earlier when to stop."

Any hints on how to properly handle situations like this and avoid
underruns?

Thanks a lot!
Miquèl




