Hi,

> I am new to the list and new to Linux audio.

welcome :) .

> I wrote a simple Qt program that uses sound, and it

Mhhh. A simple program to record audio 8-P ?

> requires the NAS sound server.

Is it really called NAS, or MAS (the Media Application Server)?

> I installed NAS from Mandrake rpm's, rebooted the box, and
> the program works great.
>
> The problem is that most of my other software, for example
> Amarok, stopped producing sound.
>
> Prior to installing NAS, I used KDE and its sound server
> aRts. When I start Amarok I get a message from aRts that
> it can't open the device for playback.

NAS now blocks the audio device. Making artsd play into NAS instead of
directly into the audio device could be a solution.

I'd like to recommend the following 15 MB video:
http://sysexxer.sourceforge.net/files/LinuxAudioBasics.avi

> I thus have two questions:
>
> 1. Is there a way to resolve such a situation, in a way
> that allows two programs that use different sound servers
> to play sounds at the same time?

At any one time there can only be one layer that owns the audio device
and mixes the streams of all applications. On Linux we are in the free
software world, so we have choices: there are many such audio layers,
but not every audio application can talk to each of them.

> 2. A more generic question: Why do we have to use sound
> servers in Linux? shouldn't it be the job of the operating
> system to allow sound multiplexing through a set of
> standard API's?

Well, I'm not allowed to comment on this issue, otherwise I risk being
blacklisted, at the very least, by the members of this list ;-)))) .

> The situation is especially bad, if there
> is no comfortable solution to the problem in my first
> question.

It depends on what your application is for.

aRts will most probably be dropped by the KDE project, and Gnome may
drop esound. So your application should not use either of them.
Instead, MAS and GStreamer are the two candidates for future Gnome and
KDE versions.

If your application should be independent of Gnome and KDE, you could
make it a direct ALSA application (a minimal sketch is in the P.S.
below). But this is a dirty design, because it would then block the
audio device, or fail to get access to the device if another
application is already using it. The same goes for the older OSS
device (usually known as /dev/dsp).

You could use dmix, an ALSA-based software mixer (a sample ~/.asoundrc
is also in the P.S.). But dmix isn't set up on many systems yet. I
expect it to become more common, but currently you won't find it on
the average user's machine.

If your application is intended for professional audio work, you
should make it use the JACK sound server (a minimal JACK client is
sketched in the P.S. as well). Most Linux musicians use it for their
daily work, and there is already a bunch of applications (synths and
so on) that will only run with JACK.

Best regards
ce
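
P.S. Here are the sketches mentioned above. They are quickly written,
untested illustrations, not finished code, so take them with a grain
of salt.

First, a direct ALSA application in its simplest form: open a PCM
device and play one second of silence. Note that the "default" device
goes through your ~/.asoundrc, so it can be rerouted to dmix, while
opening "hw:0,0" would grab the card exclusively. Build it with
"gcc -o play play.c -lasound" (assuming the alsa-lib headers are
installed).

  #include <stdio.h>
  #include <alsa/asoundlib.h>

  int main(void)
  {
      snd_pcm_t *pcm;
      snd_pcm_hw_params_t *hw;
      static short buf[2 * 4410];   /* 0.1 s of 16-bit stereo silence */
      unsigned int rate = 44100;
      int i, err;

      /* "default" can be routed to dmix via ~/.asoundrc;
       * "hw:0,0" instead would block the card for everybody else */
      if ((err = snd_pcm_open(&pcm, "default",
                              SND_PCM_STREAM_PLAYBACK, 0)) < 0) {
          fprintf(stderr, "open: %s\n", snd_strerror(err));
          return 1;
      }

      /* negotiate format: 16-bit little-endian stereo at ~44.1 kHz */
      snd_pcm_hw_params_alloca(&hw);
      snd_pcm_hw_params_any(pcm, hw);
      snd_pcm_hw_params_set_access(pcm, hw, SND_PCM_ACCESS_RW_INTERLEAVED);
      snd_pcm_hw_params_set_format(pcm, hw, SND_PCM_FORMAT_S16_LE);
      snd_pcm_hw_params_set_channels(pcm, hw, 2);
      snd_pcm_hw_params_set_rate_near(pcm, hw, &rate, 0);
      if ((err = snd_pcm_hw_params(pcm, hw)) < 0) {
          fprintf(stderr, "hw_params: %s\n", snd_strerror(err));
          return 1;
      }

      /* ten 0.1 s blocks of silence = one second */
      for (i = 0; i < 10; i++)
          snd_pcm_writei(pcm, buf, 4410);   /* 4410 frames per block */
      snd_pcm_drain(pcm);
      snd_pcm_close(pcm);
      return 0;
  }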
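
Second, the classic ~/.asoundrc that routes the "default" device
through dmix, so that plain ALSA applications get mixed in software.
The card name "hw:0,0", the ipc_key and the buffer sizes are only
assumptions; adjust them for your hardware.

  # ~/.asoundrc: route "default" through the dmix plugin
  pcm.dmixer {
      type dmix
      ipc_key 1024          # any number unique on this machine
      slave {
          pcm "hw:0,0"      # your real sound card
          period_size 1024
          buffer_size 4096
          rate 44100
      }
  }

  pcm.!default {
      type plug             # converts formats as needed
      slave.pcm "dmixer"
  }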
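
Third, a minimal JACK client that registers one output port and plays
silence through it. The client name "sketch" and the destination port
"system:playback_1" are assumptions; list the real port names with
jack_lsp. A jackd must already be running. Build with
"gcc -o sketch sketch.c -ljack".

  #include <string.h>
  #include <unistd.h>
  #include <jack/jack.h>

  static jack_port_t *out_port;

  /* JACK calls this for every period; it must be realtime-safe */
  static int process(jack_nframes_t nframes, void *arg)
  {
      jack_default_audio_sample_t *out =
          jack_port_get_buffer(out_port, nframes);
      memset(out, 0, nframes * sizeof(*out));   /* play silence */
      return 0;
  }

  int main(void)
  {
      jack_client_t *client;

      client = jack_client_open("sketch", JackNullOption, NULL);
      if (client == NULL)
          return 1;

      jack_set_process_callback(client, process, NULL);
      out_port = jack_port_register(client, "out",
                                    JACK_DEFAULT_AUDIO_TYPE,
                                    JackPortIsOutput, 0);
      jack_activate(client);

      /* wire our port to the first hardware playback channel;
       * the name is an assumption, check yours with jack_lsp */
      jack_connect(client, "sketch:out", "system:playback_1");

      sleep(10);              /* let it run for ten seconds */
      jack_client_close(client);
      return 0;
  }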