Re: Need help fixing pop/click artifacts in an ASoC driver

On 11/6/18 10:37 AM, Dimitris Papavasiliou wrote:
Hi,

A (commercial) DAC board I use with a Raspberry Pi is plagued by clicks and pops, and I'm trying to fix them at the driver level. I have mostly succeeded, but I could use some guidance on the problem described below. I apologize in advance for the long read, but the situation is a bit convoluted (or my understanding of it is).

The board (a HifiBerry DAC+ Pro) uses a PCM5122 chip, which is driven by the pcm512x codec driver, but the Pro version also includes a set of external clocks on the board (one each for 48 kHz and 44.1 kHz sample rates and multiples thereof).

There are two kinds of clicks and pops. On the one hand, the board clicks when the device is opened or closed, mostly because the pcm512x codec driver doesn't implement the digital_mute operation. These are relatively mild clicks, an annoyance at most, and I have a separate patch for pcm512x that addresses them, which I'm currently testing and will (hopefully) submit soon.
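
For context, a minimal digital_mute along these lines seems to be enough (a sketch only, not the actual patch; PCM512x_MUTE is page 0, register 3 of the PCM5122, with the left/right mute request bits at positions 4 and 0 per the datasheet):

/*
 * Sketch of a digital_mute callback for pcm512x, hooked up through
 * the .digital_mute member of snd_soc_dai_ops.  Not the actual patch.
 */
static int pcm512x_digital_mute(struct snd_soc_dai *dai, int mute)
{
	struct snd_soc_component *component = dai->component;
	unsigned int bits = (1 << 4) | (1 << 0); /* left/right mute requests */
	int ret;

	ret = snd_soc_component_update_bits(component, PCM512x_MUTE,
					    bits, mute ? bits : 0);
	return ret < 0 ? ret : 0;
}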

A separate case is a pop that's generated when the clock source is changed.  This pop is independent of the digital volume setting and very loud.  As far as I could determine, when the hw_params callback specified in the snd_soc_dai_link struct is called, it figures out which clock is needed for the given sample rate and powers up the appropriate clock through a GPIO on the PCM5122.  This essentially means that the external clock fed into the 5122 is stopped and restarted at a different rate, potentially while the DAC chip is operating.  Later, the hw_params callback defined in the pcm512x driver is called, and it reconfigures the clock dividers for the new rate.  It is at this point that the pop seems to be generated, provided the card is not powered down or muted.  If I comment out either the clock power-up/down in the dai_link hw_params, or the clock-divider reconfiguration in the codec's hw_params, no pop is generated.
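
To make this concrete, the dai_link hw_params in the machine driver does roughly the following (a simplified sketch; the 0x24/0x20/0x04 masks for PCM512x_GPIO_CONTROL_1 are my reading of the downstream driver, which gates one of the two on-board oscillators through PCM5122 GPIOs):

static int snd_rpi_hifiberry_dacplus_hw_params(
	struct snd_pcm_substream *substream,
	struct snd_pcm_hw_params *params)
{
	struct snd_soc_pcm_runtime *rtd = substream->private_data;
	struct snd_soc_component *codec = rtd->codec_dai->component;
	bool base_44k1 = (params_rate(params) % 11025) == 0;

	/* Power up the 22.5792 MHz or 24.576 MHz oscillator through a
	 * codec GPIO; this is where the running clock gets yanked. */
	snd_soc_component_update_bits(codec, PCM512x_GPIO_CONTROL_1,
				      0x24, base_44k1 ? 0x20 : 0x04);

	/* The codec's own hw_params runs later and reprograms the
	 * dividers for the new rate -- at which point the pop fires. */
	return 0;
}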

Now, the digital_mute patch described above also happens to fix this loud pop.  I'm not sure whether that's a happy coincidence that might cease to hold in the future, though, so I was considering fixing the pop separately, at its source: the clock change.  One way to do that is to switch the 5122 into standby before changing the clock source in the dai_link hw_params callback and switch it back into normal operation afterwards.  This seems to work regardless of whether digital_mute is implemented, but has the following problem: the register used to put the chip into and out of standby is also used by the pcm512x driver, in its set_bias_level operation.  Since the hw_params operation is, as far as I can tell, not atomic, I'm concerned that race conditions may arise, leaving the card powered up when it should be off, and vice versa.
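
In code, the workaround amounts to wrapping the clock switch like this (a sketch; PCM512x_POWER is page 0, register 2, with the standby request bit RQST at bit 4, and it is exactly the register that set_bias_level also writes):

static void hifiberry_switch_clk_in_standby(struct snd_soc_component *codec,
					    bool base_44k1)
{
	/* Park the DAC before pulling the clock out from under it. */
	snd_soc_component_update_bits(codec, PCM512x_POWER, 1 << 4, 1 << 4);

	/* Gate/ungate the oscillators while the DAC is in standby. */
	snd_soc_component_update_bits(codec, PCM512x_GPIO_CONTROL_1,
				      0x24, base_44k1 ? 0x20 : 0x04);

	/* Back to normal operation -- racing set_bias_level, which
	 * does read-modify-write cycles on the same register. */
	snd_soc_component_update_bits(codec, PCM512x_POWER, 1 << 4, 0);
}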

Is there some way to synchronize the use of shared chip registers between the DAI/machine and codec drivers?  Is digital_mute somehow guaranteed to fix a pop originating at the machine-driver level (whichever driver the hw_params operation in the dai_link struct is supposed to belong to)?  Am I perhaps missing something entirely?

Any help or guidance is welcome.

Thanks for starting this thread. You are not the only one fighting to make those HifiBerry DAC+ Pro boards work without convoluted hacks. Our team uses them for validating Sound Open Firmware on UP^2 boards, and we are not quite happy with the clock definitions and don't really have a line of sight to upstreaming our changes.

The current implementation we have is lifted from the Raspberry Pi kernel:

The pcm512x codec driver uses the clock framework to request an "sclk" and, if it's not available, falls back to slave mode (SoC master). The codec driver uses this clock handle to query the current rate with clk_get_rate() and program the various dividers.
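
The lookup in pcm512x_probe() is the standard clk consumer pattern, roughly (abridged sketch):

	/* If no "sclk" is wired up in the firmware description, the
	 * driver falls back to being clocked by the SoC. */
	pcm512x->sclk = devm_clk_get(dev, NULL);
	if (PTR_ERR(pcm512x->sclk) == -EPROBE_DEFER)
		return -EPROBE_DEFER;
	if (!IS_ERR(pcm512x->sclk)) {
		ret = clk_prepare_enable(pcm512x->sclk);
		if (ret)
			return ret;
	}

	/* The divider setup later trusts whatever the clock reports: */
	sck_rate = clk_get_rate(pcm512x->sclk);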

In the Raspberry Pi kernel there is a pretend clock which does nothing but store a value on clk_set_rate() and provide the stored value from clk_get_rate().
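
Condensed, the fake clock is just this (a sketch of the downstream clk-hifiberry-dacpro driver; names approximate):

#include <linux/clk-provider.h>

struct clk_hifiberry_hw {
	struct clk_hw hw;
	unsigned long rate;	/* the only state there is */
};

static unsigned long clk_hifiberry_recalc_rate(struct clk_hw *hw,
					       unsigned long parent_rate)
{
	return container_of(hw, struct clk_hifiberry_hw, hw)->rate;
}

static long clk_hifiberry_round_rate(struct clk_hw *hw, unsigned long rate,
				     unsigned long *parent_rate)
{
	return rate;		/* anything goes */
}

static int clk_hifiberry_set_rate(struct clk_hw *hw, unsigned long rate,
				  unsigned long parent_rate)
{
	/* No hardware is touched; the machine driver does that
	 * separately by poking the codec GPIO registers. */
	container_of(hw, struct clk_hifiberry_hw, hw)->rate = rate;
	return 0;
}

static const struct clk_ops clk_hifiberry_rate_ops = {
	.recalc_rate	= clk_hifiberry_recalc_rate,
	.round_rate	= clk_hifiberry_round_rate,
	.set_rate	= clk_hifiberry_set_rate,
};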

The machine driver's hw_params calls clk_set_rate(), but the silly part is that it then directly pokes the codec GPIO registers to change the clock source, without any sort of stabilization time or muting.

My view is that this is a broken model, and your point about the clock transitions not taking the codec state into account reinforces my diagnosis. Ideally we'd need a clk API implementation closely tied to the codec driver implementation for the cases where codec GPIOs are used to select clock sources.
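
To sketch what I mean (purely hypothetical, nothing like this exists today): if the codec driver registered the sclk itself, the clk_ops could sequence the whole transition with full knowledge of the codec state, e.g.

/* Hypothetical set_rate for a codec-registered sclk; assumes an
 * embedded clk_hw (sclk_hw) in pcm512x_priv, which does not exist. */
static int pcm512x_sclk_set_rate(struct clk_hw *hw, unsigned long rate,
				 unsigned long parent_rate)
{
	struct pcm512x_priv *pcm512x =
		container_of(hw, struct pcm512x_priv, sclk_hw);
	bool base_44k1 = (rate == 22579200); /* fixed 22.5792/24.576 MHz */

	/* Standby, switch oscillator, settle, resume -- all in one
	 * place, serialized against the rest of the codec driver. */
	regmap_update_bits(pcm512x->regmap, PCM512x_POWER, 1 << 4, 1 << 4);
	regmap_update_bits(pcm512x->regmap, PCM512x_GPIO_CONTROL_1,
			   0x24, base_44k1 ? 0x20 : 0x04);
	usleep_range(2000, 2500);
	regmap_update_bits(pcm512x->regmap, PCM512x_POWER, 1 << 4, 0);
	return 0;
}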

For now our code uses a hack which bypasses the clock API completely. I shared it on this list [1] but was told the clock API was the way to go. I don't have a clear vision, though, of how to deal with a case where the codec driver is both consumer and provider of a clock.

I am told that SUSE supports the Raspberry Pi now, so maybe Takashi will chime in? :-)

-Pierre

[1] https://patchwork.kernel.org/patch/10444387/

