Petrus de Calguarium wrote:
> I notice that a lot of MIDI programs require jack-audio-connection-kit in
> order to emit sound.

If you need only MIDI, you can set up JACK to use the dummy input/output drivers and continue to use PulseAudio for regular sound. JACK MIDI is handled separately and (usually) forwarded to the ALSA sequencer interface (AFAIK, some sequencers like FluidSynth can also plug in directly at the JACK level, but that's not the usual way). At that point it becomes the software sequencer's problem where to output the actual sound (and if you have hardware MIDI support, the data doesn't re-enter the system at all and should be transparently mixed in by the sound card).

But if you use regular JACK sound, there's no reliable way to have JACK and PulseAudio use the same sound device at the same time. The best JACK can do is suspend PulseAudio's use of the device when it needs it, through a "device reservation" mechanism designed for this purpose. The JACK and PulseAudio developers think that's the best solution for the common use case and aren't interested in solutions like running JACK on top of PulseAudio (which would come with high latency costs) or emulating the JACK protocol in PulseAudio (quite complicated).

It's possible to run PulseAudio on top of JACK, but that also comes with high latency, needs manual setup and is considered permanently experimental; the developers aren't interested in making that setup work out of the box. You also lose some PulseAudio features with that setup.

I'm not convinced that having 2 competing sound servers which can't fully interoperate is a good thing, but it's the current situation. Audio production stuff uses JACK; the rest of the applications have mostly standardized on PulseAudio (or some API which PulseAudio provides compatibility support for, like ALSA or ESD).

Kevin Kofler
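P.S.: The MIDI-only, dummy-driver setup mentioned above looks roughly like this. This is a minimal sketch, assuming the jackd and a2jmidid packages are installed; exact flags can differ between JACK versions and distributions:

```shell
# Start JACK with the dummy backend: no real audio device is claimed,
# so PulseAudio keeps exclusive use of the sound card.
jackd -d dummy &

# Bridge JACK MIDI ports to the ALSA sequencer (from the a2jmidid
# package); -e also exports hardware MIDI ports.
a2jmidid -e &
```

With this running, MIDI applications see JACK, while a software sequencer (e.g. FluidSynth) can still output its audio through PulseAudio.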
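P.P.S.: For the experimental PulseAudio-on-JACK setup, the manual steps look roughly like this. This is a sketch, assuming your PulseAudio build ships the JACK modules and that JACK is already running; "jack_out" is the module's default sink name:

```shell
# Create a PulseAudio sink and source that route through JACK.
pactl load-module module-jack-sink
pactl load-module module-jack-source

# Make the JACK sink the default so new streams go through JACK.
pactl set-default-sink jack_out
```

Note the caveats above still apply: higher latency, and some PulseAudio features stop working in this configuration.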