Re: [PATCH 1/1] ASoC: soc-dai: export some symbols

On 2022/9/26 23:33, Mark Brown wrote:
On Mon, Sep 26, 2022 at 09:52:34AM +0200, Pierre-Louis Bossart wrote:
On 9/26/22 03:34, Jason Zhu wrote:
On 2022/9/23 20:55, Mark Brown wrote:
The data cannot be lost in this process. So we attach VAD & PDM
to the same card, then close the card and wake up VAD & PDM again
when the system goes to sleep. Like this code:
This sounds like a very normal thing with a standard audio stream -
other devices have similar VAD stuff without needing to open code access
to the PCM operations?
At present, only VAD is handled in this way by Rockchip.
The point here is that other non-Rockchip devices do similar sounding
things?

No. Usually the VAD is integrated in the codec, like the rt5677, and is linked
with a DSP that handles its data. If the DSP detects useful sound, it sends an IRQ
to the system to wake it up and record the sound. Other devices detect and analyse
sound with the VAD itself, like the K32W041A.

Generally things just continue to stream the voice data through the same
VAD stream IIRC - switching just adds complexity here, you don't have to
deal with joining the VAD and regular streams up for one thing.
Yes, this looks complicated. But the SRAM assigned to the VAD on our chip
may be used by other devices when the system is awake. So we have to copy
the sound data out of SRAM first, then use DDR (SDRAM) to record the sound data.
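
For reference, a rough sketch of that copy step (the function name and the
mapping details are hypothetical, not the actual Rockchip driver): on wake-up
the history captured into on-chip SRAM while the system slept is drained into
the stream's DMA buffer in DDR before normal capture continues:

#include <linux/io.h>
#include <linux/minmax.h>
#include <linux/string.h>

/*
 * Copy the VAD history out of on-chip SRAM (device memory, hence
 * memcpy_fromio) into the capture stream's DMA area in DDR.
 */
static void vad_drain_sram(void __iomem *sram, size_t valid_bytes,
			   unsigned char *dma_area, size_t buf_bytes)
{
	size_t n = min(valid_bytes, buf_bytes);

	memcpy_fromio(dma_area, sram, n);
}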
There are other devices that require a copy of the history buffer from
one PCM device and software stitching with the real-time data coming
from another PCM device. It's not ideal but not uncommon either; even
for upcoming SDCA devices, combining data from 2 PCM devices will be an
allowed option (with additional control information to help with the
stitching).
If this is something that's not uncommon that sounds like an even
stronger reason for not just randomly exporting the symbols and open
coding things in individual drivers outside of framework control.  What
are these other use cases, or is it other instances of the same thing?

Maybe in this case: one PDM is used to record sound, and there are two ways
to move its data. The VAD moves data to SRAM while the system is asleep, and
DMA moves data while the system is awake. If we separate this into two audio
streams, we would have to close the "PDM + VAD" audio stream first when the
system wakes up and then open the "PDM + DMA" audio stream. This process may
take so long that the PDM FIFO fills up and some data is lost. But we hope
that no data is lost in the whole process. So all of this must be done in one
audio stream.
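
To put a rough number on that timing concern (illustrative values only; the
real Rockchip FIFO depth and rate may differ): with 16 kHz mono capture and a
128-sample PDM FIFO, the FIFO fills in 128 / 16000 ≈ 8 ms, so tearing down one
stream and bringing up the other would have to finish within that window to
avoid dropping samples.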

TBH this sounds like at least partly a userspace problem rather than a
kernel one, as with other things that tie multiple audio streams
together.

Yes, userspace can tie multiple audio streams together to avoid doing
complicated things in the kernel. This is a good method!
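
As a rough userspace sketch of that idea (the device names "hw:vad" / "hw:pdm",
the frame counts and the simple back-to-back concatenation are all assumptions;
real stitching would need the extra alignment information mentioned above):
drain the buffered history from one capture PCM, then keep recording from the
live one, appending both to the same file:

#include <alsa/asoundlib.h>
#include <stdio.h>

/* Read up to 'frames' frames of S16_LE mono from 'dev' and append to 'out'. */
static void capture(const char *dev, FILE *out, long frames)
{
	snd_pcm_t *pcm;
	short buf[256];
	snd_pcm_sframes_t n;

	if (snd_pcm_open(&pcm, dev, SND_PCM_STREAM_CAPTURE, 0) < 0)
		return;
	snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
			   SND_PCM_ACCESS_RW_INTERLEAVED, 1, 16000, 1, 500000);
	while (frames > 0 &&
	       (n = snd_pcm_readi(pcm, buf, frames < 256 ? frames : 256)) > 0) {
		fwrite(buf, sizeof(short), n, out);
		frames -= n;
	}
	snd_pcm_close(pcm);
}

int main(void)
{
	FILE *out = fopen("capture.raw", "wb");

	if (!out)
		return 1;
	capture("hw:vad", out, 16000);      /* ~1 s of history buffered during sleep */
	capture("hw:pdm", out, 5 * 16000);  /* ~5 s of live data after wake-up */
	fclose(out);
	return 0;
}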



