On Wed, 05 Apr 2023 22:12:19 +0200,
Oswald Buddenhagen wrote:
>
> ... in wait_for_avail() and snd_pcm_drain().
>
> t was calculated in seconds, so it would be pretty much always zero, to
> be subsequently de-facto ignored due to being max(t, 10)'d. And then it
> (i.e., 10) would be treated as secs, which doesn't seem right.
>
> However, fixing it to properly calculate msecs would potentially cause
> timeouts when using twice the period size for the default timeout
> (which seems reasonable to me), so instead use the buffer size plus 10
> percent to be on the safe side ... but that still seems insufficient,
> presumably because the hardware typically needs a moment to fire up. To
> compensate for this, we up the minimal timeout to 100ms, which is still
> two orders of magnitude less than the bogus minimum.
>
> substream->wait_time was also misinterpreted as jiffies, despite being
> documented as being in msecs. Only the soc/sof driver sets it - to 500,
> which looks very much like msecs were intended.
>
> Speaking of which, shouldn't snd_pcm_drain() also use
> substream->wait_time?

Yes, and unifying the code might make more sense.

> As a drive-by, make the debug messages on timeout less confusing.
>
> Signed-off-by: Oswald Buddenhagen <oswald.buddenhagen@xxxxxx>

I applied this patch as-is now to the for-next branch.


thanks,

Takashi
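
For readers following along, below is a minimal, illustrative sketch of
the timeout calculation the commit message describes (buffer length plus
10 percent in msecs, floored at 100ms, and substream->wait_time treated
as msecs rather than jiffies). The helper name pcm_wait_timeout is made
up for illustration; this is a sketch, not the literal upstream diff.

    #include <linux/jiffies.h>
    #include <sound/pcm.h>

    /* hypothetical helper sketching the rules from the commit message */
    static long pcm_wait_timeout(struct snd_pcm_substream *substream)
    {
            struct snd_pcm_runtime *runtime = substream->runtime;
            long wait_time_ms = 100; /* floor: hardware needs a moment to start */

            if (substream->wait_time) {
                    /* wait_time is documented as msecs, so take it as msecs */
                    wait_time_ms = substream->wait_time;
            } else if (runtime->rate) {
                    /* full buffer length in msecs, plus 10 percent headroom */
                    long t = runtime->buffer_size * 1100 / runtime->rate;

                    if (t > wait_time_ms)
                            wait_time_ms = t;
            }
            /* convert to jiffies only at the point where we actually sleep */
            return msecs_to_jiffies(wait_time_ms);
    }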