Re: More Generic Audio Graph Sound Card idea

On 9/25/20 2:22 PM, Mark Brown wrote:
> On Fri, Sep 25, 2020 at 10:43:59AM +0900, Kuninori Morimoto wrote:

>> But the multi-Codec side is difficult,
>> because it is selected via "endpoint" from the CPU side.
>> There is no way to select it via "port" and/or "ports".

> Indeed.

>> We might want to select multi-CPU/Codec by using multiple devices?
>> In that case, the "ports" idea is not enough.
>>
>> Would using an extra device like a DSP be more generic?

>> 	<--- multi-CPU --->
>> 	            *******
>> 	CPU0-1 <--> *     * <--> Codec0
>> 	CPU0-2 <--> *     *
>> 	CPU0-3 <--> *     *
>> 	            *******

> I think this is what we want for SoCs: represent the DSPs explicitly and
> then have the FEs and BEs all be ports on the DSP.  I think a similar
> thing would also work for legacy (I2S and so on) DAIs where we've got more
> endpoints on the DAI - if we define slots on the DAI then, from the point
> of view of the DT bindings, it's just a very, very inflexible DSP:

>          CPU1 <--> DAI slot A <--> Codec1-1
>                \-> DAI slot B <--> Codec1-2
>          CPU2 <--> DAI slot C <--> Codec1-3

> or whatever.  This doesn't allow for really complex setups that change
> the slot mapping at runtime (TBH those probably need custom cards
> anyway), but I think it should support most cases where TDM causes
> difficulties today.  I'm not sure if we need this for more modern buses
> like SoundWire; I'd hope we can dynamically assign slots at runtime more
> easily, but ICBW.
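
Before getting to SoundWire: in DT form, the "explicit DSP node" idea
above might look roughly like the sketch below. This is only an
illustration - the node names and labels are made up, and only the
generic OF-graph properties (ports/port/endpoint/remote-endpoint) are
standard bindings:

	dsp {
		ports {
			#address-cells = <1>;
			#size-cells = <0>;

			/* FE side: one port per CPU DAI */
			port@0 {
				reg = <0>;
				dsp_fe0: endpoint { remote-endpoint = <&cpu0_1_ep>; };
			};
			port@1 {
				reg = <1>;
				dsp_fe1: endpoint { remote-endpoint = <&cpu0_2_ep>; };
			};
			port@2 {
				reg = <2>;
				dsp_fe2: endpoint { remote-endpoint = <&cpu0_3_ep>; };
			};

			/* BE side: link to the codec */
			port@3 {
				reg = <3>;
				dsp_be0: endpoint { remote-endpoint = <&codec0_ep>; };
			};
		};
	};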

SoundWire doesn't have a notion of 'slot'. Instead, you program the data ports according to the type of audio data to be transmitted or received.

See some pointers at https://mipi.org/sites/default/files/MIPI-SoundWire-webinar-20150121-final.pdf
Pages 42-47 describe the main concepts.

The actual bit allocation can be done in different ways. On the Intel side, we use dynamic allocation; it's my understanding that Qualcomm uses a static allocation for their amplifier links.

In most cases a sink port receives exactly what it needs, but for playback we have cases where all amplifiers receive the same data (we call this 'mirror mode'), and each amplifier is configured to render a specific channel from the data received. This is useful to deal with orientation/posture changes, where the data transmitted on the wires doesn't need to change. It avoids dynamic reconfiguration on the DSP + bus sides; only the amplifier settings need to be modified, typically via controls.
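
Schematically (the channel assignment here is just an example):

	              <--- same stereo data --->
	 DSP/bus ----+----> AmpA (control selects 'render left')
	             +----> AmpB (control selects 'render right')

On a flip from landscape to portrait, only the two amplifier controls change; the bus payload and the DSP pipeline stay as they are.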

That said, the mapping of data ports between the CPU and codec sides is rather static, mostly because devices typically dedicate specific data ports to specific functionality. SDCA will not change this; quite the opposite: the mapping between ports and the audio functionality behind each port will be defined in platform firmware.

It's a bit of a stretch, but conceptually there is some overlap between SoundWire data ports and TDM slots: e.g., if on a TDM link you used slots 4 and 5 for headset playback, you might use data port 2 on a SoundWire link. It is, however, a 'logical' mapping; the actual position of the bits in the frame is handled by the bit allocation.
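
To make the analogy concrete, the TDM side of that example could be pinned down in DT with the standard TDM slot properties parsed by simple-card/audio-graph (the values and labels here are illustrative):

	cpu_ep: endpoint {
		remote-endpoint = <&headset_codec_ep>;
		dai-tdm-slot-num = <8>;		/* 8 slots per frame */
		dai-tdm-slot-width = <32>;	/* 32 bits per slot */
		/* one cell per slot, 1 = slot in use: slots 4 and 5 */
		dai-tdm-slot-tx-mask = <0 0 0 0 1 1 0 0>;
	};

There is no equivalent 'slot' property on the SoundWire side: the stream is simply routed to a given data port, and the bit allocation decides where those bits actually land in the frame.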

Hope this helps!
-Pierre



