I'd like to gather some opinions on the best way to expose dual interface
codecs to alsa-lib and userspace in general.

Consider a codec with 2 digital audio interfaces:

1. I2S interface to the host CPU
2. PCM interface to a BT codec or GSM modem

Interface 1 is used for traditional playback and record. Interface 2 is
used to send PCM audio to and receive it from a BT codec (for Tx/Rx),
routed to/from an onboard mic/speaker. e.g.

              ____________    DAI 2    ______
Mic -------->|            |<=========>|      |--<
             |   Codec    |           |  BT  |
Spk <--------|____________|           |______|
                  /\
                  || DAI 1
                  ||
              ____\/____
             |   CPU    |
             |__________|

This allows BT/GSM audio to work without the CPU DMAing any PCM data, as
the audio is now routed directly through the audio codec (the CPU can then
sleep).

Currently, we have both interfaces exposed to userspace as PCMs. This
allows hw params to be configured for interface 2 just as they would be
for any other device; the only difference is that we never start the PCM
and always keep it in the prepared state (a rough userspace sketch is
appended below). There is the obvious problem that applications may try
to start this PCM, so I'm wondering if we need a new class of PCM device
that doesn't support a host buffer (e.g. bufferless) and can't be started,
or whether some other approach would be better.

Cheers

Liam
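
For reference, here is a rough, untested sketch of what userspace
currently does with interface 2: open it, apply hw params, and then just
leave it in the prepared state without ever starting it. The device name
"hw:0,1" and the 8kHz/mono/S16 parameters are only placeholders for this
example, not anything the driver mandates.

#include <stdio.h>
#include <alsa/asoundlib.h>

int main(void)
{
	snd_pcm_t *pcm;
	snd_pcm_hw_params_t *params;
	unsigned int rate = 8000;	/* e.g. narrowband BT/GSM voice */
	int err;

	/* interface 2 exposed as a normal PCM device (placeholder name) */
	err = snd_pcm_open(&pcm, "hw:0,1", SND_PCM_STREAM_PLAYBACK, 0);
	if (err < 0) {
		fprintf(stderr, "open: %s\n", snd_strerror(err));
		return 1;
	}

	snd_pcm_hw_params_alloca(&params);
	snd_pcm_hw_params_any(pcm, params);
	snd_pcm_hw_params_set_access(pcm, params, SND_PCM_ACCESS_RW_INTERLEAVED);
	snd_pcm_hw_params_set_format(pcm, params, SND_PCM_FORMAT_S16_LE);
	snd_pcm_hw_params_set_channels(pcm, params, 1);
	snd_pcm_hw_params_set_rate_near(pcm, params, &rate, 0);

	/* applying hw params also leaves the PCM in the PREPARED state */
	err = snd_pcm_hw_params(pcm, params);
	if (err < 0) {
		fprintf(stderr, "hw_params: %s\n", snd_strerror(err));
		return 1;
	}

	/*
	 * We never snd_pcm_start() or write to this PCM - no host buffer
	 * is involved, the data flows codec <-> BT over DAI 2.  The
	 * problem is that nothing stops an application from opening this
	 * device and trying to start it anyway.
	 */

	/* ... keep the handle open while the BT/GSM call is active ... */

	snd_pcm_close(pcm);
	return 0;
}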