Re: [PATCH v30 00/30] Introduce QC USB SND audio offloading support




On 12/10/2024 7:18 AM, Cezary Rojewski wrote:
> On 2024-12-06 1:28 AM, Wesley Cheng wrote:
>>
>> On 12/4/2024 2:01 PM, Cezary Rojewski wrote:
>>> On 2024-12-03 9:38 PM, Wesley Cheng wrote:
>>>> Hi Cezary,
>>>>
>>>> On 12/3/2024 8:17 AM, Cezary Rojewski wrote:
>
> ...
>
>>>>> UAOL is one of our priorities right now and some (e.g.: me) prefer not to pollute their mind with other approaches until what they have in mind is crystallized. In short, I'd vote for an approach where the USB device has an ASoC representative in sound/soc/codecs/ just as is the case for HDAudio. Either that or at least an ASoC-component representative, a dependency for a UAOL-capable card to enumerate.
>>>>>
>>>>
>>>> Just to clarify, "struct snd_soc_usb" does have some correlation with our "codec" entity within the QCOM ASoC design.  This would be the q6usb driver.
>>>>
>>>>
>>>>> Currently struct snd_soc_usb does not represent any component at all. The lack of a codec representative or a component representative and, given my current understanding, the mixed dependency of sound/usb on sound/soc/soc-usb lead to a hard-to-understand ASoC solution.
>>>>
>>>>
>>>> IMO the dependency on USB SND is necessary, so that we can leverage all the pre-existing sequences used to identify USB audio devices, and have some ability to utilize USB HCD APIs as well within the offload driver.
>>>
>>> So, while I do not have patches in good enough shape to be shared, what we have in mind is closer to the existing HDAudio solution and how it is covered in both ALSA and ASoC.
>>>
>>> An ASoC sound card is effectively a combination of instances of struct snd_soc_component. Think of it as an MFD device. Typically at least two components are needed:
>>>
>>> - platform component, e.g.: for representing DSP-capable device
>>> - codec component, e.g.: for representing the codec device
>>>
>>> USB could be represented by such a component, listed as a dependency of a sound card. By component I literally mean extending the base struct:
>>>
>>> struct snd_soc_usb {
>>>      struct snd_soc_component base;
>>>      (...)
>>> };
>>>
>>> In my opinion HDAudio is a good example of how to mesh an existing ALSA-based implementation with ASoC. The full, well-implemented behaviour of HDAudio codec device drivers is present in sound/pci/hda/patch_*.c and friends. That part is devoid of any ASoC members. At the same time, an ASoC wrapper is present in sound/soc/codecs/hda.c. It represents each and every HDAudio codec device on the HDAudio bus as an ASoC component. This follows the ASoC design and thus is easy to understand for any daily ASoC user, at least in my opinion.
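
As a rough illustration of the wrapper pattern described above (a sketch only, not code from this series nor from sound/soc/codecs/hda.c), a USB codec component could be registered through the standard ASoC component API. All of the usb_codec_* names and the DAI capabilities below are hypothetical placeholders:

#include <linux/module.h>
#include <linux/platform_device.h>
#include <sound/soc.h>

/* one DAI could be exposed per offload-capable endpoint; capabilities are placeholders */
static struct snd_soc_dai_driver usb_codec_dais[] = {
	{
		.name = "usb-offload-dai",
		.playback = {
			.stream_name	= "USB Offload Playback",
			.channels_min	= 1,
			.channels_max	= 2,
			.rates		= SNDRV_PCM_RATE_48000,
			.formats	= SNDRV_PCM_FMTBIT_S16_LE,
		},
	},
};

static int usb_codec_comp_probe(struct snd_soc_component *component)
{
	/* bind the component to the already-enumerated USB SND device here */
	return 0;
}

static const struct snd_soc_component_driver snd_soc_usb_component = {
	.name	= "usb-audio-codec",
	.probe	= usb_codec_comp_probe,
};

static int usb_codec_platform_probe(struct platform_device *pdev)
{
	return devm_snd_soc_register_component(&pdev->dev,
					       &snd_soc_usb_component,
					       usb_codec_dais,
					       ARRAY_SIZE(usb_codec_dais));
}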
>>>
>>> Next, the USB Audio Offload streams are a limited resource, but I do not see a reason not to treat them as a pool. Again, HDAudio comes into the picture. The HDAudio streams are assigned and released with the help of the HDAudio library, code found in sound/hda/hdac_stream.c. In essence, as long as UAOL-capable streaming is allowed, a pcm->open() could approach a UAOL lib (a component, perhaps?) and perform ->assign(). If no resources are available, fall back to the non-offloaded case.
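
A minimal sketch of what such a pool could look like, loosely following the assign/release pattern of sound/hda/hdac_stream.c; the uaol_bus and uaol_stream structures and the helpers below are hypothetical names, not part of this series:

#include <linux/list.h>
#include <linux/spinlock.h>

/* hypothetical pool of offload-capable streams, sized at probe time */
struct uaol_bus {
	spinlock_t lock;
	struct list_head stream_list;
};

struct uaol_stream {
	struct list_head list;
	bool opened;
	/* xHCI/DSP resources backing this offload path would live here */
};

/* called from pcm->open(); NULL means fall back to the non-offloaded path */
static struct uaol_stream *uaol_stream_assign(struct uaol_bus *bus)
{
	struct uaol_stream *s, *res = NULL;

	spin_lock_irq(&bus->lock);
	list_for_each_entry(s, &bus->stream_list, list) {
		if (!s->opened) {
			s->opened = true;
			res = s;
			break;
		}
	}
	spin_unlock_irq(&bus->lock);
	return res;
}

static void uaol_stream_release(struct uaol_bus *bus, struct uaol_stream *s)
{
	spin_lock_irq(&bus->lock);
	s->opened = false;
	spin_unlock_irq(&bus->lock);
}

A pcm->open() would then call uaol_stream_assign() and, on a NULL return, simply continue down the regular non-offloaded USB audio path.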
>>>
>>> While I have not commented on the kcontrol part, the above means that our current design goes in a different direction. We'd like to avoid stream-assignment hardcoding, i.e., predefining who owns a UAOL-capable stream, if possible.
>>>
>>>
>>
>> Thanks for sharing the implementation for HDA.  I did take a look, to the best of my ability, at how the HDAudio library was built, and I see the differences between it and the current proposal.  However, I think modifying the current design to something like that would also require the QCOM ASoC side to change a bit too.  As mentioned by Pierre, I think it's worthwhile to see if we can get the initial changes in, which is the major part of the challenge.  For the most part, I think we could eventually refactor soc-usb to behave similarly to what hda_bind.c is doing.  Both entities are the ones that handle linking (or creation, in the case of HDA) of ASoC components.  The one major difference I can see between the HDA implementation and USB SND is that, for USB, hot plugging is common practice, and that's a scenario that will probably need more discussion if we do make that shift.
>>
>>
>> Anyway, I just wanted to acknowledge the technical details that are utilized by HDAudio, and that we could potentially get there with USB SoC as well.
>
> Hello,
>
>
> After analyzing USB for some time to get an even better understanding of what's present in this series, I arrived at the conclusion that the approach presented here indeed clearly differs from what I would call the _by the book_ approach for hardware-based USB Audio offloading.
>
> All sections below refer to the public xHCI spec [1].
> High-level bullets for the probing procedure:
>
> 1. xHCI root and resources probe() as they do today
> 2. xHCI reads HCCPARAMS2 (section 5.3.9) and checks GSC bit
> 2a. If GSC==0, the UAOL enumeration halts
>
> 3. xHCI sends GET_EXTPROP_TRB with ECI=1 to retrieve capabilities supported (section 4.6.17 and Table 4-3)
> 3a. If AUDIO_SIDEBAND bit is not set, the UAOL enumeration halts
>
> 4. Create a platform_device instance. This instance will act as a bridge between the USB and ASoC worlds. For simplicity, let's call it usb-component, a representative of USB in the form of a struct snd_soc_component.
>
> 5. On the platform_device->probe() the device requests information about resources available from xHCI (section 7.9.1.1), ECI=1, SubType=001
> 6. Allocate a list of streams per device, or a list per endpoint supported, based on the data retrieved with the follow-up TRB of SubType=010.
>
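
As a rough sketch of how steps 2 through 4 above could hang together on the xHCI side: xhci_gsc_supported(), xhci_query_sideband_caps(), AUDIO_SIDEBAND and the uaol_pdev field are all made up for illustration, while platform_device_register_data() and xhci_to_hcd() are existing kernel APIs:

/* hypothetical capability check plus bridge-device creation in the xHCI driver */
static int xhci_register_uaol_component(struct xhci_hcd *xhci)
{
	u32 caps;

	/* step 2/2a: HCCPARAMS2 GSC bit, section 5.3.9 */
	if (!xhci_gsc_supported(xhci))
		return -ENODEV;

	/* step 3/3a: GET_EXTPROP TRB with ECI=1, section 4.6.17 */
	caps = xhci_query_sideband_caps(xhci);
	if (!(caps & AUDIO_SIDEBAND))
		return -ENODEV;

	/* step 4: bridge platform_device the ASoC usb-component would bind against */
	xhci->uaol_pdev = platform_device_register_data(xhci_to_hcd(xhci)->self.controller,
							"xhci-uaol", PLATFORM_DEVID_AUTO,
							NULL, 0);
	return PTR_ERR_OR_ZERO(xhci->uaol_pdev);
}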

Hi Cezary,

Ah... this is why I asked earlier whether what you were referring to was the xHCI Audio Sideband feature mentioned in the xHCI spec, which is not what this series implements.  What you are describing is a full HW offload of audio transfers, where system memory is not utilized for those transfers.  In this case, we're just offloading the work, i.e. the handling of data transfers, to an audio DSP.  This is what Mathias and I clarified in the discussion below:
https://lore.kernel.org/linux-usb/17890837-f74f-483f-bbfe-658b3e8176d6@xxxxxxxxxxxxxxx/


> (things get more complicated here, stopping)
>
> Now, any time a sound card with a bound usb-component begins PCM operation, starting with substream->open(), the component would first check whether the device and/or the endpoint has the resources necessary to support offloading. If not, it would fall back to the non-offloaded case.
>
>
> I do not see an implementation here for any of the TRBs I mentioned above. HCCPARAMS2 seems to be ignored too. At the same time, I'm unsure about the "interrupters" piece. I believe they are what makes the approach presented here work, yet they may not be required by the _by the book_ approach at all.
>
>

IMO, the xHCI spec doesn't really go over the audio sideband implementation in detail, so it's hard to evaluate what a proper design to accommodate it would look like.  I've heard that there was work done on Windows to support this, but other than a brief mention of it, there were no implementation details either.  In this series, the proposal is that the apps/main core is still responsible for handling the control interfaces and power management, and only the handling and completion of transfers is offloaded.


Thanks

Wesley Cheng




