>>> Specifically, the QC ADSP can support all potential endpoints that are
>>> exposed by the audio data interface. This includes feedback endpoints
>>> (both implicit and explicit) as well as the isochronous (data)
>>> endpoints.
>>
>> implicit feedback means support for capture. This is confusing...
>>
>
> I mean, a USB device can expose a capture path, but as of now, we won't
> enable the offloading to the audio DSP for it. However, if we're
> executing playback, and the device does support implicit feedback, we
> will pass that along to the audio DSP to utilize.

Not following. Implicit feedback means a capture stream *SHALL* be
started. Are you saying this capture stream is hidden and handled at the
DSP level only? If yes, what prevents you from exposing the capture
stream to userspace as well? I must be missing something.

>>> +static const struct snd_soc_dai_ops q6usb_ops = {
>>> +	.probe		= msm_dai_q6_dai_probe,
>>> +	.prepare	= q6afe_dai_prepare,
>>> +	.hw_params	= q6usb_hw_params,
>>
>> this is rather confusing with two different layers used for hw_params
>> and prepare? Additional comments or explanations wouldn't hurt.
>>
>
> I thought this was how the ASoC design was. Each DAI defined for a
> particular path has its own set of callbacks implemented to bring up
> any required resources for that entity. So in this case, it initializes
> the "cpu" DAI, which is the main component that handles communication
> with the audio DSP.

Usually prepare and hw_params rely on the type of DAI callbacks, but
here you are mixing "q6afe" and "q6usb", which are shown in your Patch0
diagram as the "cpu" and "codec" DAIs respectively.

I don't think it's correct to tie the two; it's a clear layering
violation IMHO. The codec DAI .prepare should not invoke something that
modifies the state of the CPU DAI, which should have its own .prepare
callback.
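
To make that concrete, here is a minimal sketch of the split I have in
mind. It reuses the names from your hunk (msm_dai_q6_dai_probe,
q6afe_dai_prepare, q6usb_hw_params), and adds a hypothetical
q6usb_prepare() plus a made-up q6afe_usb_ops table name; the stub bodies
are placeholders only, not a claim about what the real driver should do:

/*
 * Sketch: keep CPU-side (q6afe) and codec-side (q6usb) callbacks in
 * separate snd_soc_dai_ops tables, so neither layer reaches into the
 * other. Stubs stand in for the real implementations.
 */
#include <sound/pcm.h>
#include <sound/soc.h>

/* stand-ins for the existing helpers referenced in the hunk above */
static int msm_dai_q6_dai_probe(struct snd_soc_dai *dai)
{
	return 0;
}

static int q6afe_dai_prepare(struct snd_pcm_substream *substream,
			     struct snd_soc_dai *dai)
{
	/* AFE port configuration/start belongs to the CPU DAI layer */
	return 0;
}

/* hypothetical codec-side prepare: touches q6usb state only */
static int q6usb_prepare(struct snd_pcm_substream *substream,
			 struct snd_soc_dai *dai)
{
	return 0;
}

static int q6usb_hw_params(struct snd_pcm_substream *substream,
			   struct snd_pcm_hw_params *params,
			   struct snd_soc_dai *dai)
{
	/* record rate/channels/format for the offload session */
	return 0;
}

/* CPU ("q6afe") DAI keeps its own probe/prepare */
static const struct snd_soc_dai_ops q6afe_usb_ops = {
	.probe		= msm_dai_q6_dai_probe,
	.prepare	= q6afe_dai_prepare,
};

/* codec ("q6usb") DAI: no direct calls into the q6afe layer */
static const struct snd_soc_dai_ops q6usb_ops = {
	.prepare	= q6usb_prepare,
	.hw_params	= q6usb_hw_params,
};

With that arrangement the ASoC core does the sequencing for you: it
already walks both the CPU and codec DAIs on hw_params/prepare, so the
codec driver never needs to call into q6afe directly.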