Hi!

It has been a while since I last posted on this mailing list. A quick update on the current situation of PulseAudio: I am still working on my thesis and will need another month before I can focus on PulseAudio full-time again. In the meantime Pierre Ossman has agreed to maintain PulseAudio. Please note, however, that he is neither working full-time on it nor integrating new features.

In January I attended the FOMS and linux.conf.au conferences. At LCA I gave a presentation about PulseAudio, which you can watch online:

http://mirror.linux.org.au/pub/linux.conf.au/2007/video/talks/211.ogg (Theora)

The slides:

http://0pointer.de/public/pulseaudio-presentation-lca2007.pdf

During FOMS we discussed the current state of Linux audio. After much arguing we agreed that introducing a new abstracted cross-platform audio API (dubbed "libsydney" for now) is a good idea, to work against the current mess of Linux audio systems and APIs. Yepp, trying to fix the problem of too many incompatible, competing audio APIs and systems by introducing yet another one seems paradoxical, but we think this is the best way forward.

As many of you might know, I am not happy with the current PulseAudio audio streaming API. While it is very powerful, it is also very complicated due to its asynchronous nature. It is my intention to adopt this new abstracted audio API as the only official audio playback API for PulseAudio. That way, if you program against PulseAudio you get cross-platform support for free.

You might ask why we decided to design our own new API instead of adopting an existing one such as PortAudio. The most important reason is that none of the current APIs were designed with sound servers in mind. The new libsydney API, in contrast, exposes a buffering model that is much more suitable for networked sound servers. Besides working better in these setups, the API makes certain things much easier to implement. On ordinary hardware sound cards the buffering model is emulated.
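To make the complaint about asynchronicity a bit more concrete: with an asynchronous streaming API the application hands control to a mainloop and supplies a callback that is invoked whenever the server can accept more data, so the playback state has to be carried across callback invocations; with a synchronous API, playback is a plain blocking write loop. A toy sketch of the two styles (DummySink and both play functions are invented for this illustration; this is neither PulseAudio nor libsydney code):

```python
# Toy contrast between a blocking playback loop and a callback-driven one.
# DummySink stands in for a sound-server connection with limited buffer space.

class DummySink:
    def __init__(self, capacity=4):
        self.buffer = []          # chunks queued for "playback"
        self.capacity = capacity  # how many chunks fit at once

    def writable(self):
        return self.capacity - len(self.buffer)

    def write(self, chunk):
        self.buffer.append(chunk)

def play_blocking(sink, chunks):
    # Synchronous style: the application just loops; waiting for buffer
    # space is implicit in the write call (simulated here by draining).
    for chunk in chunks:
        while sink.writable() == 0:
            sink.buffer.pop(0)    # pretend the server consumed a chunk
        sink.write(chunk)

def play_async(sink, chunks):
    # Asynchronous style: control is inverted. The application registers a
    # callback and must keep its remaining-data state alive between calls.
    pending = list(chunks)

    def on_writable():
        while pending and sink.writable() > 0:
            sink.write(pending.pop(0))

    # A real mainloop would invoke on_writable whenever space frees up;
    # here we drive it by hand.
    while pending:
        on_writable()
        if pending:
            sink.buffer.pop(0)    # simulate server-side drain while waiting
```

Both functions end up with the same data in the sink; the difference is that the asynchronous variant forces the application to split its logic across callbacks and manage the leftover state itself, which is exactly what makes the current API hard to use for simple players.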
Basically, the new API is based on the PulseAudio API, but without the asynchronicity and cleaned up for non-PulseAudio backends. The API design was done by Jean-Marc Valin (Xiph.org/Speex), Mikko Leppanen (Nokia) and me. Besides a draft header file, no code has been written yet. Hence, enough of these fluffy promises for now. I will post more about this on my blog eventually.

Another outcome of FOMS is that we now have - thanks to Jean-Marc Valin - an LGPL fixed-point resampler implementation (part of libspeex SVN), which will hopefully speed up resampling in PulseAudio (such as that done by module-combine) quite a bit.

Lennart

--
Lennart Poettering; lennart [at] poettering [dot] net
ICQ# 11060553; GPG 0x1A015CC4; http://0pointer.net/lennart/