Re: [PATCH v3] ALSA: compress_offload: introduce passthrough operation mode

>> I honestly find the state machine confusing; it looks like in the
>> SETUP stage tasks can be added/removed dynamically, but I am not sure
>> that's a real use case? Most pipeline management adds a bunch of
>> processing, then goes into 'run' mode. Adding/removing stuff on a
>> running pipeline is really painful and not super useful, is it?
> 
> This I/O mechanism tries to be "universal". As opposed to the standard
> streaming APIs, the tasks may be individual (without any state shared
> among multiple tasks). In that case, a "stop" in the middle makes
> sense. It may also make sense for real-time operation (remove
> altered/old data and feed new).
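
To make sure we are talking about the same thing, here's how I picture
a single "individual" task from userspace. This is only a sketch of my
reading of the series; the SNDRV_COMPRESS_TASK_* ioctl names and the
snd_compr_task fields are my assumptions and may not match your v3
exactly:

#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sound/compress_offload.h>

/*
 * One independent memory-to-memory task on an already-open and
 * configured compress device 'cfd'. All ioctl and field names below
 * are assumptions from my reading of this series, not confirmed uAPI.
 */
static int run_one_task(int cfd, const void *in, size_t in_bytes)
{
	struct snd_compr_task task = { 0 };
	void *buf;

	/* assumed: the kernel allocates input/output dma-bufs and
	 * returns their fds in task.input_fd / task.output_fd */
	if (ioctl(cfd, SNDRV_COMPRESS_TASK_CREATE, &task) < 0)
		return -1;

	buf = mmap(NULL, in_bytes, PROT_WRITE, MAP_SHARED,
		   task.input_fd, 0);
	if (buf == MAP_FAILED)
		return -1;
	memcpy(buf, in, in_bytes);
	munmap(buf, in_bytes);

	task.input_size = in_bytes;
	if (ioctl(cfd, SNDRV_COMPRESS_TASK_START, &task) < 0)
		return -1;

	/* poll(cfd, ...) for completion; a status ioctl (assumed)
	 * then reports how many bytes landed in the output dma-buf,
	 * which is mmap()ed from task.output_fd. No state is shared
	 * with any other task -- the "individual" property above. */
	return 0;
}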

I must be missing something about the data flow then. I was assuming
that the data generated in the output buffer of one task was used as
the input buffer of the next task. If that were true, stopping a task
in the middle would essentially starve the tasks downstream, no?
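
In other words, I assumed the wiring between tasks was roughly this
(pseudo-code, same assumed snd_compr_task fields as above):

	/* assumed chaining: the output dma-buf of task i feeds
	 * task i + 1, so stopping task i starves every task
	 * downstream of it */
	for (i = 1; i < ntasks; i++)
		task[i].input_fd = task[i - 1].output_fd;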

If the tasks are handled as completely independent entities, what use
cases would this design allow for?

Also, I don't fully get the initial/final stages of processing. It
seems that the host needs to feed data to the first task in the chain,
then start it. That's fine for playback, but how would this be used if
we wanted to, e.g., enable an ASRC on captured data coming from an
audio interface?

It's similar for the final stages of playback: the memory model is
fine, but at some point the audio data will have to be fed to a regular
audio interface, and that point seems to have been overlooked, or I
missed it entirely.

In the existing "compress" framework, that connection to audio
interfaces is typically left as an exercise for the DSP engineers, and
usually requires a sample-rate converter and a mixer. But for a
memory-to-memory model, what is the recommended way to tie the input
or output buffers to the rest of the audio subsystem?

I forgot, btw, that some processing consumes audio data but does not
generate anything in the output buffer, for example when analyzing
captured data to signal specific patterns or triggers (VAD, hot-word
detection, presence detection, etc.).
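
In the memory-to-memory model I'd expect such an analysis-only task to
look roughly like this (again with hypothetical field usage):

	/* hypothetical analysis-only task: captured audio goes in,
	 * no audio comes out, so the output buffer stays empty by
	 * design */
	task.input_size = captured_bytes; /* via the input dma-buf */
	ioctl(cfd, SNDRV_COMPRESS_TASK_START, &task);
	/* after completion the output size would be 0 -- where would
	 * the detection result or trigger event be reported in this
	 * API? */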