Re: [PATCH v15 00/16] Add audio support in v4l2 framework

On 15. 05. 24 11:17, Hans Verkuil wrote:
Hi Jaroslav,

On 5/13/24 13:56, Jaroslav Kysela wrote:
On 09. 05. 24 13:13, Jaroslav Kysela wrote:
On 09. 05. 24 12:44, Shengjiu Wang wrote:
mem2mem is just like the decoder in the compress pipeline; it is
one of the components in the pipeline.

I was thinking of a loopback with endpoints using compress streams,
without a physical endpoint, something like:

compress playback (to feed data from userspace) -> DSP (processing) ->
compress capture (send data back to userspace)

Unless I'm missing something, you should be able to process data as fast
as you can feed it and consume it in such a case.
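
(Purely as an illustration, a rough user-space sketch of such a loopback
through a compress playback/capture pair; the device nodes, rates, the PCM
codec id and the lockstep write/read loop are all assumptions, and how the
DSP would learn the conversion parameters is left open:)

#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sound/compress_params.h>
#include <sound/compress_offload.h>

#define FRAG_SIZE 4096
#define FRAGS     8

static int setup(const char *node, int flags, unsigned int rate)
{
	struct snd_compr_params params;
	int fd = open(node, flags);

	if (fd < 0)
		return -1;

	memset(&params, 0, sizeof(params));
	params.buffer.fragment_size = FRAG_SIZE;
	params.buffer.fragments = FRAGS;
	params.codec.id = SND_AUDIOCODEC_PCM;	/* plain PCM through the DSP */
	params.codec.ch_in = 2;
	params.codec.ch_out = 2;
	params.codec.sample_rate = rate;

	if (ioctl(fd, SNDRV_COMPRESS_SET_PARAMS, &params) < 0) {
		close(fd);
		return -1;
	}
	return fd;
}

int main(void)
{
	/* made-up loopback pair: D0 feeds the DSP, D1 returns its output */
	int play = setup("/dev/snd/comprC0D0", O_WRONLY, 44100);
	int cap  = setup("/dev/snd/comprC0D1", O_RDONLY, 48000);
	char buf[FRAG_SIZE];
	ssize_t n;
	int started = 0;

	if (play < 0 || cap < 0)
		return 1;

	ioctl(cap, SNDRV_COMPRESS_START);

	while ((n = read(STDIN_FILENO, buf, sizeof(buf))) > 0) {
		write(play, buf, n);		/* feed source samples */
		if (!started) {
			ioctl(play, SNDRV_COMPRESS_START);
			started = 1;
		}
		n = read(cap, buf, sizeof(buf));	/* pull converted samples */
		if (n > 0)
			write(STDOUT_FILENO, buf, n);
	}

	ioctl(play, SNDRV_COMPRESS_DRAIN);
	close(play);
	close(cap);
	return 0;
}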


Actually, I tried this in the beginning, but it did not work well.
ALSA needs time control for playback and capture, and the two need to be
synchronized. Usually the playback and capture pipelines are independent
in the ALSA design, but in this case playback and capture must be
synchronized; they are not independent.

The compress API core has no strict timing constraints. You could eventually
have two half-duplex compress devices, if you want a really independent
mechanism. If something is missing in the API, it can be extended (for
example, to inform user space that it is producer/consumer processing with
no relation to real time). I like this idea.

I was thinking more about this. If I am right, the mentioned gstreamer use
case is supposed to run the conversion (DSP) job in "one shot" (it can be
handled with one system call, such as a blocking ioctl). The goal is just to
offload the CPU work to the DSP (co-processor). If there are no queuing
requirements, we can implement this ioctl in the compress ALSA API easily,
managing the data through the dma-buf API. We could eventually define a new
direction (enum snd_compr_direction) like SND_COMPRESS_CONVERT or so to
handle this new data scheme. The API may be extended later based on real
demand, of course.
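
(Purely hypothetical, nothing below exists in the kernel; it only sketches
the shape such an extension could take, with a made-up struct layout and
ioctl number: one blocking call that hands the DSP a source and a
destination dma-buf and returns when the conversion is finished.)

#include <linux/types.h>
#include <linux/ioctl.h>

/* hypothetical third value appended to enum snd_compr_direction */
#define SND_COMPRESS_CONVERT	2

struct snd_compr_convert {		/* made-up name and layout */
	__s32 src_fd;			/* dma-buf fd holding the input data */
	__s32 dst_fd;			/* dma-buf fd receiving the output */
	__u64 src_bytes;		/* valid bytes in the source buffer */
	__u64 dst_bytes;		/* out: bytes produced by the DSP */
};

/* blocking "one shot" job; 'C' is the compress ioctl magic, 0x60 is made up */
#define SNDRV_COMPRESS_CONVERT	_IOWR('C', 0x60, struct snd_compr_convert)

User space would keep using SNDRV_COMPRESS_SET_PARAMS to describe the input
and output formats and then issue one SNDRV_COMPRESS_CONVERT call per buffer.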

Otherwise, all the pieces are already in the current ALSA compress API
(capabilities, params, enumeration). The realtime controls may be created
using the ALSA control API.
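
(For instance, the enumeration and capability pieces are already reachable
through the existing ioctls; a minimal query, with the device node chosen
arbitrarily:)

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sound/compress_offload.h>

int main(void)
{
	struct snd_compr_caps caps = { 0 };
	int fd = open("/dev/snd/comprC0D0", O_WRONLY);	/* arbitrary node */
	unsigned int i;

	if (fd < 0 || ioctl(fd, SNDRV_COMPRESS_GET_CAPS, &caps) < 0)
		return 1;

	printf("fragments: %u-%u, fragment size: %u-%u bytes\n",
	       caps.min_fragments, caps.max_fragments,
	       caps.min_fragment_size, caps.max_fragment_size);
	for (i = 0; i < caps.num_codecs; i++)
		printf("codec id %u\n", caps.codecs[i]);

	close(fd);
	return 0;
}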

So does this mean that Shengjiu should attempt to use this ALSA approach first?

I've not seen any compelling argument for using the v4l2 mem2mem buffer scheme for this data conversion. It looks like a simple job, and the ALSA APIs may be extended for this simple purpose.

Shengjiu, what are your requirements for gstreamer support? Would a new blocking ioctl be enough for the initial support in the compress ALSA API?

						Jaroslav

--
Jaroslav Kysela <perex@xxxxxxxx>
Linux Sound Maintainer; ALSA Project; Red Hat, Inc.




