Re: [RFC PATCH] ALSA: compress_offload: introduce passthrough operation mode




Thanks Jaroslav, this is very interesting indeed.
I added a set of comments to clarify the design.

> +There is a requirement to expose the audio hardware that accelerates various
> +tasks for user space such as sample rate converters, compressed
> +stream decoders, etc.

"passthrough" usually means 'no change to data, filter coefficients not
applied' in the audio world.
> +This is a description of the API extension to the compress ALSA API which
> +is able to handle "tasks" that are not bound to real-time operations
> +and allows for the serialization of operations.

not sure what "not bound to real-time operations" means. sample-rate
conversion is probably the most dependent on accurate timing :-)

> +Requirements
> +============
> +
> +The main requirements are:
> +
> +- serialization of multiple tasks for user space to allow multiple
> +  operations without user space intervention
> +
> +- separate buffers (input + output) for each operation
> +
> +- expose buffers using mmap to user space

If every buffer is mmap'ed to userspace, what prevents userspace from
interfering?

I think userspace would only be involved at the source and sink of the
processing chain, no?

> +- signal user space when the task is finished (standard poll mechanism)
> +
> +Design
> +======
> +
> +A new direction SND_COMPRESS_PASSTHROUGH is introduced to identify
> +the passthrough API.

not sure what you meant by 'direction', is this a new concept in
addition to PLAYBACK and CAPTURE?

edit: this is indeed what the code does, probably the documentation can
be clarified to explain why this is needed.

> +The API extension shares device enumeration and parameters handling from
> +the main compressed API. All other realtime streaming ioctls are deactivated
> +and a new set of task related ioctls are introduced. The standard
> +read/write/mmap I/O operations are not supported in the passthrough device.

The compress API was geared to encoders/decoders. I am not sure how we
would e.g. expose parameters for transcoders (decode-reencode) or even SRCs?

> +Device ("stream") state handling is reduced to OPEN/SETUP. All other
> +states are not available for the passthrough mode.
> +
> +The data I/O mechanism uses the standard dma-buf interface with all its
> +advantages like mmap, standard I/O, buffer sharing etc. One buffer is used
> +for the input data and a second (separate) buffer is used for the output
> +data. Each task has separate I/O buffers.
> +
> +For the buffering parameters, fragments means the limit of allocated tasks
> +for the given device. The fragment_size limits the input buffer size for the
> +given device. The output buffer size is determined by the driver (may be
> +different from the input buffer size).
> +
> +State Machine
> +=============
> +
> +The passthrough audio stream state machine is described below:
> +
> +                                       +----------+
> +                                       |          |
> +                                       |   OPEN   |
> +                                       |          |
> +                                       +----------+
> +                                             |
> +                                             |
> +                                             | compr_set_params()
> +                                             |
> +                                             v
> +         all passthrough task ops      +----------+
> +  +------------------------------------|          |
> +  |                                    |   SETUP  |
> +  |                                    |   SETUP  |
> +  |                                    |          |
> +  |                                          |
> +  +------------------------------------------+
> +
> +
> +Passthrough operations (ioctls)
> +===============================
> +
> +CREATE
> +------
> +Creates a set of input/output buffers. The input buffer size is
> +fragment_size. A unique seqno is allocated.
> +
> +The hardware drivers allocate internal 'struct dma_buf' for both input and

for each input and output buffers?

> +output buffers (using 'dma_buf_export()' function). The anonymous
> +file descriptors for those buffers are passed to user space.
> +
> +FREE
> +----
> +Free a set of input/output buffers. If a task is active, the stop
> +operation is executed beforehand. If seqno is zero, the operation is
> +executed for all tasks.
> +
> +START
> +-----
> +Starts (queues) a task. There are two cases of task start. The first is
> +right after the task is created; in this case, origin_seqno must be zero.
> +The second case is the reuse of an already finished task; origin_seqno
> +must identify the task to be reused. In both cases, a new seqno value
> +is allocated and returned to user space.
> +
> +The prerequisite is that the application has filled the input dma buffer
> +with new source data and set input_size to pass the real data size to the
> +driver.
> +
> +The order of data processing is preserved (the first started task must be
> +finished first).
> +
> +STOP
> +----
> +Stops (dequeues) a task. If seqno is zero, the operation is executed for
> +all tasks.

Don't you need a DRAIN?

for a co-processor API, you would want all the input data to be consumed
and the stop happens when all the resulting data is provided in output
buffers.

And presumably when the input task is stopped, the state changes are
propagated to the next task by the framework? Or is userspace supposed
to track each and every task and change their state?

I also wonder if the state for a task should reflect that it's waiting
on data on its input, or conversely is blocked because the output
buffers were not consumed? Dealing with SRC, encoders or decoders mean
that the buffers are going to be used at vastly different rates on input
and outputs.

> +STATUS
> +------
> +Obtain the task status (active, finished). Also, the driver will set
> +the real output data size (valid area in the output buffer).

Is this assuming that the entire input buffer has valid data?
There could be cases where the buffers are made of variable-length
'frames', it would be interesting to send such partial buffers to
hardware. That's always been a problem with the existing compressed API;
we couldn't deal with buffers that were partially filled.





