Re: [PATCH 0/3] ASoC: SOF: pcm/Intel: Handle IPC dependent sequencing correctly
- To: peter.ujfalusi@xxxxxxxxxxxxxxx
- Subject: Re: [PATCH 0/3] ASoC: SOF: pcm/Intel: Handle IPC dependent sequencing correctly
- From: Sam Edwards <cfsworks@xxxxxxxxx>
- Date: Fri, 30 Jun 2023 00:33:38 -0600
- Cc: alsa-devel@xxxxxxxxxxxxxxxx, broonie@xxxxxxxxxx, kai.vehmanen@xxxxxxxxxxxxxxx, lgirdwood@xxxxxxxxx, pierre-louis.bossart@xxxxxxxxxxxxxxx, ranjani.sridharan@xxxxxxxxxxxxxxx, yung-chuan.liao@xxxxxxxxxxxxxxx
- In-reply-to: <20230322094346.6019-1-peter.ujfalusi@linux.intel.com>
- References: <20230322094346.6019-1-peter.ujfalusi@linux.intel.com>
- User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Thunderbird/102.12.0
Hi folks,
When I upgraded my system to 6.4.0, I encountered a regression in audio
output. Regression testing pointed to patch 1/3 of this series as the
culprit, and the regression goes away entirely (on 6.4.0 final) when I
apply a patch reverting the whole series. The problem is still
unresolved even in current broonie/sound.git.
The regression is an intermittent (a few minutes on, a few minutes off)
distortion in audio output on my Tigerlake->ALC298 path. When playing a
440 Hz test tone, the output spectrum contains components at 440 Hz,
560 Hz, 1440 Hz, 1560 Hz, 2440 Hz, 2560 Hz, and so on. Since this is
exactly the spectrum one would get if the output were modulated by a
1000 Hz Dirac comb, I interpret it to mean that the audio subsystem is
dropping (zeroing) one sample every 1 ms.
The problem seems to come and go spontaneously under certain conditions
-- in particular, it does not occur while my nvidia driver is unloaded.
However, I can trigger it (even with no out-of-tree modules loaded) by
sending several SIGSTOP->10ms->SIGCONT sequences to my pipewire daemon
while it is playing audio. The distortion then persists until I send
several more of the same sequences.
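For completeness, this is roughly what I do to trigger it. It is only a
rough sketch; the process name, pulse count, and delays are simply what
I happened to use:

    import os, signal, subprocess, time

    # Pulse the pipewire daemon with SIGSTOP -> ~10 ms -> SIGCONT a few times.
    pid = int(subprocess.check_output(["pgrep", "-x", "pipewire"]).split()[0])
    for _ in range(5):               # "several" pulses; count chosen arbitrarily
        os.kill(pid, signal.SIGSTOP)
        time.sleep(0.010)            # ~10 ms stopped
        os.kill(pid, signal.SIGCONT)
        time.sleep(0.5)              # arbitrary gap before the next pulse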
Now, aside from having some DSP background, I'm a total outsider to the
ALSA and SOF world, so what follows is mere speculation on my part: I
believe the problem can be "toggled" by a buffer underrun, which happens
either deliberately when I briefly interrupt pipewire, or accidentally
due to bus contention when my GPU is active. Something (userspace?
ALSA?) tries to restart the stream in response to that underrun, but
this patchset turns stream stop+start into more of a "warm reset," in
that it no longer cleans up DMA. As a result, an off-by-one error
somehow creeps into the DMA size, omitting the final sample of every
1 ms transfer.
I am not sure if this is a regression introduced with this patchset, or
merely a different bug that became apparent now that DMA isn't being
reset when underruns happen. If it's the latter case, I'm happy to open
an issue on Bugzilla instead. In either case, let me know if I can
provide any additional troubleshooting information.
Cheers,
Sam