Re: Audio mem2mem devices aka asymmetric sample rate converters

On Thu, Jun 02, 2022 at 01:21:06PM +0200, Sascha Hauer wrote:

> How would such units be integrated into ASoC? I can think of two ways. First
> would be to create a separate audio card from them which records on one end
> and plays back with a different sample rate / format on the other end, in the
> v4l2 world that would be a classical mem2mem device. Is ALSA/ASoC prepared for
> something like this? Would it be feasible to go into such a direction? I
> haven't found any examples for this in the tree.

You could certainly do that, though I'd expect userspace wouldn't
know what to do with it without specific configuration.  It also
feels like it's probably not what users really want: generally
the use case is rewriting an audio stream without going back to
memory.  Going back to memory means chopping things up into
periods, which tends to introduce additional latency and/or
fragility, and that's undesirable even if the devices were DMAing
directly to memory.

> The other way is to attach the ASRC to an existing audio card. That is done
> with the existing in-tree sound/soc/fsl/fsl_asrc.c and
> sound/soc/fsl/fsl_easrc.c drivers.  This approach feels somehow limited as it's
> not possible to just do conversions without playing/recording something. OTOH
> userspace is unaffected which might be an advantage. What nags me with that
> approach is that it's currently not integrated into the simple-audio-card or
> audio-graph-card bindings. Currently the driver can only be used in conjunction
> with the fsl,imx-audio-* card driver. It seems backward to integrate such a
> generic ASRC unit into a special purpose audio card driver. The ASoC core is
> fully unaware of the ASRC with this approach currently which also doesn't look
> very appealing. OTOH I don't know if ASoC could handle this. Can ASoC handle
> for example a chain of DAIs when there are different sample rates and formats
> in that chain?

This is essentially the general problem of DPCM not really
scaling at all well; we need to rework the core so that it
tracks information about the digital parameters of signals
through the system in the same way it tracks simple analog on/off
information.  At the minute the core doesn't really understand
what's going on with the digital routing within the SoC at all;
it's all done with manual fixups.

If you search for talks from Lars-Peter Clausen at ELC-E you
should find some good overviews of the general direction.  This
is broadly what all the work on converting everything to
components is going towards: we're removing the distinction
between CPU and CODEC components so that everything is
interchangeable.  The problem is that someone (ideally people
with systems with this sort of hardware!) needs to do a bunch of
heavy lifting in the framework, and nobody's had the time to work
on the main part of the problem yet.  Once that's done, things
like the audio-graph-card should be able to handle this easily.

In theory, right now you should implement the ASRC as a component
driver.  You can then set it up as a standalone card if you want
to, or integrate it into a custom card as you do now.
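For reference, a minimal sketch of what registering such a
component might look like.  All the my_asrc_* names here are
hypothetical and the stream parameters are placeholders; the
in-tree sound/soc/fsl/fsl_asrc.c driver is the real example to
follow:

```c
/* Hypothetical sketch of registering an ASRC as an ASoC component.
 * All my_asrc_* names are invented for illustration; see
 * sound/soc/fsl/fsl_asrc.c for a real in-tree implementation.
 */
#include <linux/module.h>
#include <linux/platform_device.h>
#include <sound/soc.h>

static const struct snd_soc_component_driver my_asrc_component = {
	.name = "my-asrc",
	/* hw_params/trigger ops on the DAI below would program the
	 * hardware's input and output sample rates and formats. */
};

static struct snd_soc_dai_driver my_asrc_dai = {
	.name = "my-asrc-dai",
	.playback = {
		.stream_name	= "ASRC Playback",
		.channels_min	= 1,
		.channels_max	= 2,
		.rates		= SNDRV_PCM_RATE_8000_192000,
		.formats	= SNDRV_PCM_FMTBIT_S16_LE |
				  SNDRV_PCM_FMTBIT_S24_LE,
	},
	.capture = {
		.stream_name	= "ASRC Capture",
		.channels_min	= 1,
		.channels_max	= 2,
		.rates		= SNDRV_PCM_RATE_8000_192000,
		.formats	= SNDRV_PCM_FMTBIT_S16_LE |
				  SNDRV_PCM_FMTBIT_S24_LE,
	},
};

static int my_asrc_probe(struct platform_device *pdev)
{
	/* Registering as a component lets a card description link
	 * this DAI either as a standalone card or as part of a
	 * larger custom card. */
	return devm_snd_soc_register_component(&pdev->dev,
					       &my_asrc_component,
					       &my_asrc_dai, 1);
}
```

Once registered like this, the DAI can be referenced from a card
driver the same way the existing fsl,imx-audio-* cards pick up
the fsl_asrc DAIs today.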
