Re: [PATCH v3] ALSA: compress_offload: introduce passthrough operation mode

> The internal state requirement for multiple tasks is mostly given by
> the stream structure in use, so user space will handle this correctly
> (restart the stream on demand). You can imagine a situation where too
> much data is queued and user space receives a signal to do something
> different, so it makes sense to support dequeuing of tasks. The stream
> state should be reset when a task is stopped (removed from the queue),
> even if there are other active tasks after the stopped one.
We are in agreement that the 'drop' (stop now) and 'drain' (keep going
until all data is consumed) capabilities are very much needed. I don't
think controlling the states of intermediate tasks is possible or even
desirable, though.
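
Just to be sure we mean the same thing, here is a rough user space
sketch of that drop vs drain split as I read it. The task ioctl name
and number below are placeholders loosely modeled on this series (they
may well not match the actual patch), so please take it as pseudo-code
rather than the final API:

#include <stdint.h>
#include <sys/ioctl.h>
#include <sound/compress_offload.h>	/* SNDRV_COMPRESS_DRAIN etc. */

/* Placeholder: per-task stop, keyed by the task sequence number.  The
 * real name and number would come from the patched compress_offload.h. */
#ifndef SNDRV_COMPRESS_TASK_STOP
#define SNDRV_COMPRESS_TASK_STOP	_IOW('C', 0x63, uint64_t)
#endif

/* Drain: let the device finish everything that is already queued.
 * Reusing the existing DRAIN ioctl here is an assumption on my side. */
static int tasks_drain(int cfd)
{
	return ioctl(cfd, SNDRV_COMPRESS_DRAIN);
}

/* Drop one queued task immediately.  Per the discussion above, the
 * stream state is reset for this task even if later tasks stay queued,
 * so user space must be ready to restart the stream on demand. */
static int task_drop(int cfd, uint64_t seqno)
{
	return ioctl(cfd, SNDRV_COMPRESS_TASK_STOP, &seqno);
}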

> I may also propose a kernel API extension to inform user space that
> all active tasks must be canceled in one shot (an ioctl).

Did you mean "all active tasks in the same context", with the context
defined by the open step?
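
If such an extension is added, I would picture it as a single ioctl on
the open device, applying to every task queued through that file
descriptor. Purely hypothetical sketch; the name and number below are
invented and nothing like this exists in the series yet:

#include <sys/ioctl.h>

/* Hypothetical extension, not part of this series: cancel every task
 * queued on this open device context in one call. */
#ifndef SNDRV_COMPRESS_TASK_STOP_ALL
#define SNDRV_COMPRESS_TASK_STOP_ALL	_IO('C', 0x6f)	/* invented */
#endif

static int tasks_cancel_all(int cfd)
{
	return ioctl(cfd, SNDRV_COMPRESS_TASK_STOP_ALL);
}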

>>>> Also I don't fully get the initial/final stages of processing. It seems
>>>> that the host needs to feed data to the first task in the chain, then
>>>> start it. That's fine for playback, but how would this be used if we
>>>> wanted to e.g. enable an ASRC on captured data coming from an audio
>>>> interface?
>>>
>>> There are no stream endpoints in the kernel (no playback, no
>>> capture). It's simply: we have some audio data, we do something with
>>> it, and we return it.
>>>
>>> For a universal media stream router, another API should be designed.
>>> I believe that using dma-buf buffers for I/O is a good fit and ready
>>> to be reused in another API.
>>
>> Humm, how would this work with the initial ask to enable the ASRC from
>> FSL/NXP? If we leave the ends of the processing chain completely
>> undefined, who's going to use this processing chain? Shouldn't there be
>> at least one example of how existing userspace (alsa-lib, pipewire,
>> wireplumber, etc) might use the API? It's been a while now, but when we
>> introduced the compress API there was a companion 'tinycompress' utility
>> - largely inspired by 'tinyplay' - to showcase how the API was meant to
>> be used.
> 
> I replied to this in another answer. The expected users are media
> frameworks like gstreamer or ffmpeg (using this directly as a plugin
> in the processing chain). Maybe audio servers can use this hardware
> acceleration, too.
> 
> I would like to define the basic kernel API (ioctls) in the first
> stage and then continue with a test kernel module, a user space
> library (maybe adding support to tinycompress), and a user space test
> utility.

Incremental development sounds fine, but at some point we'll need some
sort of development hardware to check how well things work and what's
missing. In the case of the compress API, some 12+ years ago we
completely missed the gapless playback requirement, which led to the
ugly partial-drain solution. We also underestimated the inertia and
effort needed to change userspace, so much so that the main users of
the compress API are in the Android world. I am not aware of any users
of the compress API in the traditional Gnome/KDE environments.
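
To make the dma-buf point from the quoted part above a bit more
concrete, this is how I picture a user space consumer (a
gstreamer/ffmpeg plugin, or a tinycompress helper) driving a single
task. The structure layout and the ioctl name are again placeholders
based on my reading of the series, not the final API:

#include <stdint.h>
#include <sys/ioctl.h>

/* Assumed shape of the task descriptor -- the real one would come from
 * the patched <sound/compress_offload.h>. */
struct accel_task_sketch {
	uint64_t seqno;		/* kernel-assigned task id */
	int	 input_fd;	/* dma-buf holding the source samples */
	int	 output_fd;	/* dma-buf the device writes into */
	uint64_t input_size;	/* valid bytes in the input buffer */
};

#ifndef SNDRV_COMPRESS_TASK_START
#define SNDRV_COMPRESS_TASK_START _IOWR('C', 0x62, struct accel_task_sketch)
#endif /* placeholder name and number */

/* One block in, one block out: no playback or capture endpoint, the
 * device just transforms the contents of the input dma-buf (e.g. ASRC)
 * and fills the output dma-buf. */
static int process_block(int cfd, struct accel_task_sketch *t,
			 uint64_t bytes)
{
	t->input_size = bytes;
	if (ioctl(cfd, SNDRV_COMPRESS_TASK_START, t) < 0)
		return -1;
	/* ...wait for completion (poll() on cfd or a status ioctl), then
	 * mmap()/read the output dma-buf and pass it down the pipeline... */
	return 0;
}

Whether completion ends up being signalled via poll() on the device fd
or via a dedicated status ioctl is exactly the kind of detail that a
test module plus a tinycompress-based utility would flush out.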



