On 6/25/24 13:48, Jaroslav Kysela wrote:
> On 25. 06. 24 8:06, Pierre-Louis Bossart wrote:
>
>>>> I honestly find the state machine confusing: it looks like tasks can
>>>> be added/removed dynamically in the SETUP stage, but I am not sure
>>>> that is a real use case. Most pipeline management adds a bunch of
>>>> processing, then goes into 'run' mode. Adding/removing stuff on a
>>>> running pipeline is really painful and not super useful, is it?
>>>
>>> This I/O mechanism tries to be "universal". As opposed to the standard
>>> streaming APIs, those tasks may be individual (without any state
>>> shared among multiple tasks). In that case, a "stop" in the middle
>>> makes sense. It may also make sense for real-time operation (remove
>>> altered/old data and feed new data).
>>
>> I must be missing something about the data flow then. I was assuming
>> that the data generated in the output buffer of one task is used as
>> the input buffer of the next task. If that is true, stopping a task in
>> the middle would essentially starve the tasks downstream, no?
>>
>> If the tasks are handled as completely independent entities, what use
>> cases would this design allow for?
>
> The usage is for user space. It allows audio data processing to be
> accelerated in hardware, but the input comes from user space and the
> output is exported back to user space in this simple API. The purpose
> of this API is just "chaining", to reduce user space context switches
> (latency).

I am still very confused about the difference between "chaining" and
adding/removing tasks dynamically at run-time. The former is fine; the
latter is very hard to enable in a glitch-free manner, since most
filters keep an internal history buffer. Inserting, stopping or removing
a filter in a running chain is likely to add audible discontinuities.
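
As a toy illustration of the history-buffer point (nothing to do with
the proposed uAPI itself): a one-pole low-pass filter in C, where the
single state variable is exactly the history that a freshly inserted
task does not have, so its first output samples jump away from the
signal.

#include <stdio.h>

/*
 * One-pole low-pass: y[n] = y[n-1] + a * (x[n] - y[n-1]).
 * 'state' (y[n-1]) is the internal history every IIR filter carries;
 * a task inserted into a running chain starts with it at zero, and
 * that mismatch shows up as a step/click in the output.
 */
struct lowpass {
	float state;	/* y[n-1], the internal history */
	float a;	/* smoothing coefficient, 0 < a <= 1 */
};

static void lowpass_process(struct lowpass *lp, const float *in,
			    float *out, int frames)
{
	for (int i = 0; i < frames; i++) {
		lp->state += lp->a * (in[i] - lp->state);
		out[i] = lp->state;
	}
}

int main(void)
{
	float buf[8] = { 1, 1, 1, 1, 1, 1, 1, 1 };	/* steady signal */
	struct lowpass warm = { .state = 1.0f, .a = 0.1f };	/* converged filter */
	struct lowpass fresh = { .state = 0.0f, .a = 0.1f };	/* just inserted */

	lowpass_process(&warm, buf, buf, 8);	/* output stays at 1.0 */
	lowpass_process(&fresh, buf, buf, 8);	/* output drops to ~0.1: the glitch */

	for (int i = 0; i < 8; i++)
		printf("%.3f\n", buf[i]);
	return 0;
}
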
>> Also I don't fully get the initial/final stages of processing. It seems
>> that the host needs to feed data to the first task in the chain, then
>> start it. That's fine for playback, but how would this be used if we
>> wanted to e.g. enable an ASRC on captured data coming from an audio
>> interface?
>
> There are no stream endpoints in the kernel (no playback, no capture).
> It's just: we have some audio data, we do something with them, and we
> return them.
>
> For a universal media stream router, another API should be designed. I
> believe that using dma-buf buffers for I/O is nice and ready to be
> reused in such an API.

Hmm, how would this work with the initial ask to enable the ASRC from
FSL/NXP? If we leave the ends of the processing chain completely
undefined, who is going to use it? Shouldn't there be at least one
example of how existing userspace (alsa-lib, pipewire, wireplumber,
etc.) might use the API? It's been a while now, but when we introduced
the compress API there was a companion 'tinycompress' utility - largely
inspired by 'tinyplay' - to showcase how the API was meant to be used.
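
For concreteness, the kind of sketch I have in mind is below. Every
name in it (device node, ioctl, struct) is a placeholder made up for
illustration and not the uAPI proposed in this thread, and the memfd
buffers only stand in for the real dma-buf fds. What matters is the
shape of the loop: fill an input buffer, submit the whole chain in one
syscall, read the output buffer back.

#define _GNU_SOURCE		/* for memfd_create() */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

/* Placeholder uAPI, invented for this sketch only. */
struct accel_task_run {
	int input_fd;		/* dma-buf with input samples */
	int output_fd;		/* dma-buf receiving the result */
	uint32_t input_size;	/* bytes of valid input */
};
#define ACCEL_TASK_RUN	_IOWR('A', 0x10, struct accel_task_run)

#define BUF_SIZE	4096

int main(void)
{
	int dev = open("/dev/snd/accelC0D0", O_RDWR);	/* placeholder node */
	if (dev < 0) {
		perror("open");
		return 1;
	}

	/*
	 * In the real API the buffers would be dma-buf fds obtained from
	 * the driver (or another exporter); memfd is used here only so the
	 * sketch has something mappable.
	 */
	int in_fd = memfd_create("accel-in", 0);
	int out_fd = memfd_create("accel-out", 0);
	ftruncate(in_fd, BUF_SIZE);
	ftruncate(out_fd, BUF_SIZE);
	void *in = mmap(NULL, BUF_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED,
			in_fd, 0);
	void *out = mmap(NULL, BUF_SIZE, PROT_READ, MAP_SHARED, out_fd, 0);

	/* 1. produce one block of audio into the input buffer */
	memset(in, 0, BUF_SIZE);

	/* 2. one submit drives the whole chain, no per-task round-trip */
	struct accel_task_run run = {
		.input_fd = in_fd,
		.output_fd = out_fd,
		.input_size = BUF_SIZE,
	};
	if (ioctl(dev, ACCEL_TASK_RUN, &run) < 0)
		perror("ACCEL_TASK_RUN");

	/* 3. consume the processed block from the output buffer */
	fwrite(out, 1, BUF_SIZE, stdout);

	close(dev);
	return 0;
}
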

To be clear: I am not against this API at all. The direction of having
userspace orchestrate a buffer-based processing chain with minimal
latency is a good one. I am just concerned that we are leaving too many
points open in terms of integration with other audio components.