My use-case is as follows: I have a headless PC (a Raspberry Pi) that I want to use as a generic sound server. I want to run three audio applications on it; two of those apps will talk to ALSA directly, and one needs to use PulseAudio. I have only one actual hardware audio interface, a USB DAC, so all three apps need to share the same hardware. The USB DAC also provides hardware-based volume control; I would like all apps (whether ALSA- or PulseAudio-based) to share that control as well.

As far as I can tell, there are two ways to do this:

(1) Configure the dmix plugin for ALSA, and have all applications, including PulseAudio, use dmix (rather than any one app taking exclusive control of the hardware device).

(2) Configure a "pulse" virtual ALSA device, so that ALSA apps use the ALSA API as a passthrough to PulseAudio.

And maybe there are other ways?

My question is: is one way better than the other? Or, perhaps a better question: what are the pros and cons of each approach? The second option seems to be the one more typically recommended, but I find that counter-intuitive: if an application is written against the ALSA API, why does PulseAudio need to be involved at all? The app pushes data to ALSA, which forwards it to PulseAudio, which ultimately pushes it back to ALSA.

A related question: is there any way to maintain "bit-perfect" playback with either of these device-sharing schemes? That is, if only one application is using the device, the PCM data should be passed directly to the DAC without any resampling; only if the hardware does not support the sample rate, or if multiple applications want to output sound simultaneously, should the PCM data be altered from the source.

Thanks!
Matt

_______________________________________________
Alsa-user mailing list
Alsa-user@xxxxxxxxxxxxxxxxxxxxx
https://lists.sourceforge.net/lists/listinfo/alsa-user
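For concreteness, the two arrangements described above might be sketched roughly as follows in ~/.asoundrc or /etc/asound.conf (untested; "hw:0,0" and the ipc_key value are assumptions about the USB DAC's card/device numbering on the Pi):

    # Option 1: share the card through dmix. ALSA apps and PulseAudio
    # all open "default", which software-mixes onto the USB DAC.
    pcm.dmixed {
        type dmix
        ipc_key 1025          # any integer unique on this system
        slave.pcm "hw:0,0"    # assumed card/device of the USB DAC
    }
    pcm.!default {
        type plug
        slave.pcm "dmixed"    # plug converts formats/rates as needed
    }

    # Option 2: route ALSA apps into PulseAudio via the "pulse" plugin
    # (from the alsa-plugins package); PulseAudio then owns the card.
    pcm.!default {
        type pulse
    }
    ctl.!default {
        type pulse            # mixer controls go through PulseAudio too
    }

Note that the two options each redefine pcm.!default, so only one can be active at a time; they are shown together purely for comparison.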