On Fri, 04 Mar 2022 09:35:28 +0100, S.J. Wang wrote:
>
> > > Hi Takashi Iwai, Jaroslav Kysela,
> > >
> > > We encountered an issue in the pcm_dsnoop use case; could you
> > > please help to take a look?
> > >
> > > *Issue description:*
> > > With two instances of a dsnoop-type device running in parallel,
> > > after suspend/resume one of the instances hangs in memcpy because
> > > a very large copy size is obtained:
> > >
> > > #3 0x0000ffffa78d5098 in snd_pcm_dsnoop_sync_ptr (pcm=0xaaab06563da0)
> > >    at pcm_dsnoop.c:158
> > >    dsnoop = 0xaaab06563c20  slave_hw_ptr = 64
> > >    old_slave_hw_ptr = 533120  avail = *187651522444320*
> > >
> > > *Reason analysis:*
> > > The root cause, as far as I can tell, is that after suspend/resume
> > > one instance gets the SND_PCM_STATE_SUSPENDED state from the slave
> > > PCM device and then calls snd_pcm_prepare() and snd_pcm_start(),
> > > which reset both dsnoop->slave_hw_ptr and the hw_ptr of the slave
> > > PCM device, so the state of this instance is correct. But the other
> > > instance may never see SND_PCM_STATE_SUSPENDED from the slave PCM
> > > device, because the slave may already have been recovered by the
> > > first instance, so its dsnoop->slave_hw_ptr is not reset. Since the
> > > hw_ptr of the slave PCM device has been reset, a very large "avail"
> > > value results.
> > >
> > > *Solution:*
> > > I didn't come up with a fix for this issue; there seems to be no
> > > easy way for the other instance to detect this case and reset its
> > > dsnoop->slave_hw_ptr. Could you please help?
> >
> > Could you try the topic/pcm-direct-resume branch on
> > https://github.com/tiwai/alsa-lib ?
>
> Thanks, I pushed my test result to
> https://github.com/alsa-project/alsa-lib/issues/213
> Could you please review?

Please keep the discussion on ML.

Takashi