Re: [RFC PATCH 00/14] Introduce QC USB SND audio offloading support

On 12/23/22 17:31, Wesley Cheng wrote:
> Several Qualcomm based chipsets can support USB audio offloading to a
> dedicated audio DSP, which can take over issuing transfers to the USB
> host controller.  The intention is to reduce the load on the main
> processors in the SoC, and allow them to be placed into lower power modes.

It would be nice to clarify what you want to offload:
a) audio data transfers for the isoc endpoints
b) control, e.g. volume settings (those go to endpoint 0 IIRC)
c) Both?

This has a lot of implications for the design. ASoC/DPCM is mainly
intended for audio data transfers; control is a separate problem, with
configuration handled through register settings or bus-specific commands.

> There are several parts to this design:
>   1. Adding ASoC binding layer
>   2. Create a USB backend for Q6DSP
>   3. Introduce XHCI interrupter support
>   4. Create vendor ops for the USB SND driver
> 
> Adding ASoC binding layer:
> soc-usb: Intention is to treat a USB port similar to a headphone jack.
> The port is always present on the device, but cable/pin status can be
> enabled/disabled.  Expose mechanisms for USB backend ASoC drivers to
> communicate with USB SND.
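
So IIUC the backend would get something similar to the existing jack
reporting, roughly along these lines? (a sketch only, using the
4-argument snd_soc_card_jack_new() from recent kernels; the soc-usb API
proposed in this series is not shown, and the names below are mine, not
taken from the patches)

#include <sound/soc.h>
#include <sound/jack.h>

/* hypothetical jack object, created once at card probe time */
static struct snd_soc_jack usb_jack;

static int usb_jack_init(struct snd_soc_card *card)
{
	/* report the USB port as a line-out style "jack" */
	return snd_soc_card_jack_new(card, "USB Audio", SND_JACK_LINEOUT,
				     &usb_jack);
}

/* called whenever a USB audio device appears or disappears */
static void usb_jack_update(bool connected)
{
	snd_soc_jack_report(&usb_jack, connected ? SND_JACK_LINEOUT : 0,
			    SND_JACK_LINEOUT);
}
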
> 
> Create a USB backend for Q6DSP:
> q6usb: Basic backend driver that will be responsible for maintaining the
> resources needed to initiate a playback stream using the Q6DSP.  Will
> be the entity that checks to make sure the connected USB audio device
> supports the requested PCM format.  If it does not, the PCM open call will
> fail, and userspace ALSA can take action accordingly.
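
To make sure I read this right: the check would live in something like
the backend DAI's hw_params, along these lines? (rough sketch only; the
struct and field names below are made up, not taken from the series)

#include <sound/soc.h>
#include <sound/pcm_params.h>

/* hypothetical snapshot of what the connected USB device supports */
struct q6usb_dev_caps {
	unsigned int max_rate;
	unsigned int max_channels;
	u64 formats;			/* SNDRV_PCM_FMTBIT_* mask */
};

static int q6usb_hw_params(struct snd_pcm_substream *substream,
			   struct snd_pcm_hw_params *params,
			   struct snd_soc_dai *dai)
{
	struct q6usb_dev_caps *caps = snd_soc_dai_get_drvdata(dai);

	/* reject the stream if the device cannot do what was asked;
	 * userspace ALSA then falls back to the legacy card */
	if (params_rate(params) > caps->max_rate ||
	    params_channels(params) > caps->max_channels ||
	    !(pcm_format_to_bits(params_format(params)) & caps->formats))
		return -EINVAL;

	return 0;
}
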
> 
> Introduce XHCI interrupter support:
> XHCI HCD supports multiple interrupters, which allows for events to be routed
> to different event rings.  This is determined by the "Interrupter Target"
> field specified in Section "6.4.1.1 Normal TRB" of the XHCI specification.
> 
> Events in the offloading case will be routed to an event ring that is assigned
> to the audio DSP.
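
For reference, the Interrupter Target is bits 31:22 of the Normal TRB
status dword, i.e. the same layout as the existing TRB_INTR_TARGET()
macro in drivers/usb/host/xhci.h. A small sketch of how a transfer TRB
gets steered to the DSP-owned event ring (the helper name is mine):

#include <linux/types.h>

/* Same field layout as TRB_LEN()/TRB_INTR_TARGET() in
 * drivers/usb/host/xhci.h: the Interrupter Target occupies bits 31:22
 * of the Normal TRB status dword (xHCI spec section 6.4.1.1). */
#define TRB_LEN(p)		((p) & 0x1ffff)
#define TRB_INTR_TARGET(p)	(((p) & 0x3ff) << 22)

/* Build the status dword so the completion event for this TRB is
 * posted to the secondary event ring assigned to the audio DSP rather
 * than to interrupter 0 used by the kernel. */
static u32 normal_trb_status(u32 transfer_len, u32 dsp_intr_target)
{
	/* the real ring code stores this as __le32 via cpu_to_le32() */
	return TRB_LEN(transfer_len) | TRB_INTR_TARGET(dsp_intr_target);
}
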
> 
> Create vendor ops for the USB SND driver:
> qc_audio_offload: This particular driver has several components associated
> with it:
> - QMI stream request handler
> - XHCI interrupter and resource management
> - audio DSP memory management
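
IIUC the vendor ops end up being a table of callbacks roughly along
these lines? (purely illustrative; this is not the ops structure from
the series)

#include <linux/usb.h>

/* illustrative shape of a vendor-ops table hooked into USB SND */
struct snd_usb_offload_ops {
	/* called when a USB audio device is probed / removed */
	int  (*connect)(struct usb_interface *intf);
	void (*disconnect)(struct usb_interface *intf);
	/* QMI-driven stream enable/disable requests from the DSP */
	int  (*enable_stream)(struct usb_interface *intf, int pcm_index);
	void (*disable_stream)(struct usb_interface *intf, int pcm_index);
};
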
> 
> When the audio DSP wants to enable a playback stream, the request is first
> received by the ASoC platform sound card.  Depending on the selected route,
> ASoC will bring up the individual DAIs in the path.  The Q6USB backend DAI
> will send an AFE port start command (enabling the USB playback path), and
> the audio DSP will handle the request accordingly.
> 
> Part of the AFE USB port start handling involves an exchange of control
> messages using the QMI protocol.  The qc_audio_offload driver will populate the
> buffer information:
> - Event ring base address
> - EP transfer ring base address
> 
> and pass it along to the audio DSP.  All endpoint management will now be handed
> over to the DSP, and the main processor is not involved in transfers.
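
So the QMI payload carries something like the following? (again a
made-up sketch of the kind of information described above, not the
actual QMI message definition from the patches)

#include <linux/types.h>

/* hypothetical view of the buffer information handed to the DSP */
struct usb_offload_xfer_info {
	__le64 event_ring_base;		/* secondary event ring for the DSP */
	__le32 event_ring_size;
	__le64 xfer_ring_base;		/* endpoint transfer ring */
	__le32 xfer_ring_size;
	__le32 interrupter_num;		/* xHCI interrupter assigned to the DSP */
};
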
> 
> Overall, implementing this feature will still expose separate sound card and PCM
> devices for both the platform card and the USB audio device:
>  0 [SM8250MTPWCD938]: sm8250 - SM8250-MTP-WCD9380-WSA8810-VA-D
>                       SM8250-MTP-WCD9380-WSA8810-VA-DMIC
>  1 [Audio          ]: USB-Audio - USB Audio
>                       Generic USB Audio at usb-xhci-hcd.1.auto-1.4, high speed
> 
> This is to ensure that userspace ALSA entities can decide which route to take
> when executing the audio playback.  In the above, if card#1 is selected, then
> USB audio data will take the legacy path over the USB PCM drivers, etc...

You would still need some sort of mutual exclusion to make sure the isoc
endpoints are not used concurrently by the two cards. Relying on
userspace intelligence to enforce that exclusion is not safe IMHO.
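
Even something as simple as a shared claim on the audio interface,
taken by whichever path opens a stream first, would be better than
trusting userspace, e.g. (illustrative sketch only, names are mine):

#include <linux/atomic.h>
#include <linux/errno.h>

/* one claim shared by the legacy USB SND PCM path and the offload
 * path, so only one of them drives the isoc endpoints at a time */
static atomic_t usb_iface_in_use = ATOMIC_INIT(0);

static int claim_usb_audio_iface(void)
{
	/* the 0 -> 1 transition wins; otherwise the other path owns it */
	if (atomic_cmpxchg(&usb_iface_in_use, 0, 1) != 0)
		return -EBUSY;
	return 0;
}

static void release_usb_audio_iface(void)
{
	atomic_set(&usb_iface_in_use, 0);
}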

Intel looked at this sort of offload support a while ago and our
directions were very different - for a variety of reasons USB offload is
enabled on Windows platforms but remains a TODO for Linux. Rather than
having two cards, you could have a single card and additional subdevices
that expose the paths through the DSP. The benefits were that there was
a single set of controls that userspace needed to know about, and volume
settings were the same no matter which path you used (legacy or
DSP-optimized paths). That's consistent with the direction to use 'Deep
Buffer' PCM paths for local playback; it's the same idea of reducing
power consumption with optimized routing.

Another point is that there may be cases where the DSP paths are not
available if the DSP memory and MCPS budget is exceeded. In those cases,
the DSP parts need the ability to notify userspace that the legacy path
should be used.

Another case to handle is that some USB devices can handle way more data
than DSPs can chew; for example, pro audio boxes that can deal with 8ch
192kHz will typically use the legacy paths. Some also handle specific
formats such as DSD over PCM. So it's quite likely that the PCM devices for
card0 and card1 above do NOT expose support for the same formats, or put
differently that only a subset of the USB device capabilities are
handled through the DSP.

And last, power optimizations with DSPs typically come from the additional
latency that helps put the SoC in low-power modes. That's not necessarily
ideal for all usages, e.g. for music recording and mixing I am not
convinced the DSP path would help at all.

> This feature was validated using:
> - tinymix: set/enable the multimedia path to route to USB backend
> - tinyplay: issue playback on platform card


