Re: [PATCH v15 00/16] Add audio support in v4l2 framework

On 09. 05. 24 12:44, Shengjiu Wang wrote:
mem2mem is just like the decoder in the compress pipeline, which is
one of the components in the pipeline.

I was thinking of a loopback with endpoints using compress streams,
without a physical endpoint, something like:

compress playback (to feed data from userspace) -> DSP (processing) ->
compress capture (send data back to userspace)

Unless I'm missing something, you should be able to process data as fast
as you can feed it and consume it in such a case.
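
For concreteness, a minimal userspace sketch of such a loopback using
the tinycompress library; the card/device numbers, the PCM codec
parameters and the COMPRESS_IN/COMPRESS_OUT direction mapping are
assumptions here, not something defined by this series:

#include <stdio.h>
#include <stdlib.h>
#include <tinycompress/tinycompress.h>
#include <sound/compress_params.h>

/* Open one compress endpoint; codec and fragment setup are assumptions. */
static struct compress *open_endpoint(unsigned int card, unsigned int device,
				      unsigned int flags)
{
	static struct snd_codec codec = {
		.id = SND_AUDIOCODEC_PCM,	/* plain PCM through the DSP */
		.ch_in = 2,
		.ch_out = 2,
		.sample_rate = 48000,
	};
	struct compr_config config = {
		.fragment_size = 4096,
		.fragments = 4,
		.codec = &codec,
	};
	struct compress *c = compress_open(card, device, flags, &config);

	if (!is_compress_ready(c)) {
		fprintf(stderr, "compress_open: %s\n", compress_get_error(c));
		exit(1);
	}
	return c;
}

int main(void)
{
	/* Hypothetical nodes: comprC0D0 feeds the DSP, comprC0D1 returns
	 * the processed stream; COMPRESS_IN/COMPRESS_OUT follow the
	 * cplay/crecord convention. */
	struct compress *play = open_endpoint(0, 0, COMPRESS_IN);
	struct compress *cap  = open_endpoint(0, 1, COMPRESS_OUT);

	/* feed/read loop would go here, see the sketch further below */

	compress_close(cap);
	compress_close(play);
	return 0;
}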


Actually, I tried this in the beginning, but it did not work well.
ALSA needs timing control for playback and capture, and here playback
and capture need to be synchronized. Usually the playback and capture
pipelines are independent in the ALSA design, but in this case playback
and capture have to be synchronized; they are not independent.

The compress API core has no strict timing constraints. You can
eventually have two half-duplex compress devices, if you would like to
have a really independent mechanism. If something is missing in the
API, you can extend it (for example, to inform user space that it is a
producer/consumer kind of processing without any relation to real
time). I like this idea.
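
Continuing the open_endpoint() sketch above, one way to drive two
half-duplex compress devices as a plain producer/consumer pair, with no
real-time relation, is to alternate one fragment of writing with one of
reading; fragment size and blocking behaviour are assumptions:

#include <stdio.h>
#include <tinycompress/tinycompress.h>

/* Push data through the DSP in lock-step, one fragment at a time, so the
 * read side never asks for more than the write side has fed in. */
static void process_loop(struct compress *play, struct compress *cap,
			 FILE *in, FILE *out)
{
	char buf[4096];		/* matches the assumed fragment_size */
	size_t n;
	int r;

	compress_start(play);
	compress_start(cap);

	while ((n = fread(buf, 1, sizeof(buf), in)) > 0) {
		if (compress_write(play, buf, n) < 0)
			break;
		r = compress_read(cap, buf, sizeof(buf));
		if (r <= 0)
			break;
		fwrite(buf, 1, r, out);
	}

	compress_stop(cap);
	compress_stop(play);
}

Whether compress_read() may return a short fragment at the end of the
stream, or whether a drain/partial-read notification is needed, is
exactly the kind of thing such an API extension could spell out.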

					Jaroslav

--
Jaroslav Kysela <perex@xxxxxxxx>
Linux Sound Maintainer; ALSA Project; Red Hat, Inc.



