In article <Pine.LNX.4.62.0609221023490.31361 at localhost.localdomain>,
Willem van der Walt <wvdwalt at csir.co.za> wrote:

> Orca uses gnome-speech which has a driver for speech-dispatcher, which
> can be configured to use espeak.  If espeak is the default voice of
> speech-dispatcher, it will work.
> Regards, Willem

I asked about this on the Ubuntu Accessibility list, and got this answer
from Luke Yelavich:

On Fri, Sep 22, 2006 at 08:40:08PM EST, Jonathan Duddington wrote:
> I've been asked whether Orca can use the eSpeak software synthesizer.
>
> I believe that Gnome Speech can be set up to use Speech Dispatcher,
> and Speech Dispatcher can use eSpeak.  So in theory the answer seems
> to be "yes".  Has anyone done this, and does it work in practice?
> Does responsiveness suffer as a result of the additional intermediary?

I have used Orca with speech-dispatcher via gnome-speech, but not with
espeak.  It is, to say the least, very clunky.  There is a speech output
module under development for Orca that interfaces directly with
speech-dispatcher, but it is still very much in the early stages.

> Or would it be better to write a Gnome Speech driver for eSpeak?  The
> drivers seem fairly small, but I don't understand the environment in
> which they are written (concepts such as bonobo, oaf, corba etc.).
> But if someone wants to write a driver then I'll be happy to assist.

What would actually be better is to have a module written for
speech-dispatcher that interfaces with the espeak shared library, rather
than calling the command-line utility all the time.  Personally, I feel
gnome-speech needs to be removed, and speech-dispatcher should be the
speech back-end of choice.

Espeak will very likely be a synthesizer used for various tasks in the
next release of Ubuntu, such as spoken boot
(https://wiki.ubuntu.com/SpokenBoot).  I intend to get the newest
version of espeak packaged in the next few days.
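
[Editor's note, not part of the original thread: regarding the suggestion
above to call the espeak shared library instead of the command-line
utility, a minimal C sketch of what that looks like follows, assuming the
speak_lib.h API shipped with espeak (the header may install as
speak_lib.h or espeak/speak_lib.h depending on packaging).  A real
speech-dispatcher output module would additionally need to speak the
module protocol, retrieve audio via a callback, and handle index marks;
this only shows the basic synthesis calls.]

    /* Minimal sketch: synthesize a string through the espeak shared
       library rather than spawning the espeak command-line utility.
       Link with -lespeak.  Error handling omitted for brevity. */
    #include <string.h>
    #include <espeak/speak_lib.h>

    int main(void)
    {
        /* Initialize the library.  AUDIO_OUTPUT_PLAYBACK sends audio
           straight to the sound device; the return value is the sample
           rate.  A speech-dispatcher module would more likely use
           AUDIO_OUTPUT_RETRIEVAL with a synthesis callback. */
        espeak_Initialize(AUDIO_OUTPUT_PLAYBACK, 0, NULL, 0);

        const char *text = "Speaking via the espeak shared library.";

        /* Queue the text for synthesis; size includes the trailing NUL. */
        espeak_Synth(text, strlen(text) + 1, 0, POS_CHARACTER, 0,
                     espeakCHARS_AUTO, NULL, NULL);

        /* Block until speaking has finished, then release resources. */
        espeak_Synchronize();
        espeak_Terminate();
        return 0;
    }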