> I'm not visually impaired and I've not used a hardware synth myself,
> but I'm curious. What is the advantage of a hardware synth over a
> software synth?
>
> I can think of a few possibilities, but I'm curious which are true and
> are important for those who use or prefer hardware synths:
>
> 1. It doesn't affect the computer's sound system, which can therefore
> play other sounds unaffected by the TTS. This could probably be
> achieved for a software synth by using two sound cards.

Not that easily, since you'd need two sets of speakers or a mixer, which could be done but would be a bit awkward. Using headphones for speech and sending other audio through the speakers would be an option, but a hardware synth just takes care of all that for you.

> 2. System startup messages can be spoken before the point when the
> sound system and synth software is initialized and working. This would
> be overcome by the proposed "Spoken Boot" feature.

Not entirely, unless you propose to have the spoken boot feature speak every kernel message.

> 3. Problems with installing and setting up a software synth.

Yep, this is a big one. If you're using software speech only and the software has a bug or something that causes it not to work, you can end up in a situation where you have no speech at all. Having the kernel handle the speech has the advantage that you know what's going on, unless the kernel itself is hosed, in which case you'd know that too. In the case of GNOME there are several things that could potentially cause you to lose speech, and if you're relying on software speech you can be in a pickle. That said, speechd-up and speech-dispatcher seem to be quite stable with eSpeak on my laptop, which I can't hook a hardware synth to, so I can usually drop to a text console to fix things if GNOME screws up.
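For anyone setting up the same combination, here's a minimal sketch of keeping eSpeak as the default engine, assuming a stock speech-dispatcher install (the file path and per-user override locations vary by distro and speech-dispatcher version, so treat them as examples):

```shell
# /etc/speech-dispatcher/speechd.conf  (system-wide config; per-user
# override locations vary by speech-dispatcher version)

# Use the espeak output module as the default synthesizer.
DefaultModule espeak

# Quick sanity check from a text console once the daemon is running:
#   spd-say "speech dispatcher is alive"
```

The point of pinning the module in the config is that if GNOME's speech stack falls over, a plain text console plus spd-say still gives you a working voice to repair things with.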
There's also the issue of every live CD on the planet detecting the wrong sound card at boot, so I have to crawl behind the desk and temporarily use the onboard sound card to run things like the Ubuntu live CD. There should be a boot option to force the module of your choice to be used for sound. We also run into the issue that GNOME doesn't seem to let you use multiple sound sources for different things.

> 4. Prefer the sound of the hardware synth voice to those currently
> available with software synths.

Yep. I really like the DoubleTalk voices! I'm sure others like their favourite synth voices as well. I like the eSpeak voice too, but it doesn't speak as fast as my DoubleTalk LT.

> 5. Limitations of computer processor power or memory, although I doubt
> this is an issue now.

Not much of an issue with something like eSpeak, but if you try to run Festival on older hardware you'll hit huge processing issues.

> 6. The hardware synth offers some feature not available in the
> software synths.

Yep: speed! Software synths don't speak or respond as fast as something like a DoubleTalk. If your system is under load, your speech will be adversely affected, since it's just another process. Also, the kernel preempt code doesn't play well with software speech, so you add latency to the system as a whole, since you need to disable it for software speech to work correctly.

This is about all I can think of right now, but I may come up with more later.
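P.S. On the wrong-card-at-boot problem above: on an installed system (not a live CD, unfortunately) you can approximate the module-forcing boot option with ALSA's modprobe options. A sketch, where the module names and file name are stand-ins; check `lspci -k` or `lsmod` for the drivers your machine actually uses:

```shell
# /etc/modprobe.d/sound.conf  (filename is arbitrary; any file under
# /etc/modprobe.d/ is read by modprobe)

# Pin the card you want speech on as card 0, and push the onboard
# chip down to card 1.  snd-emu10k1 / snd-hda-intel are examples only.
options snd-emu10k1 index=0
options snd-hda-intel index=1
```

With the indices pinned, whichever card ALSA enumerates first no longer depends on probe order, so the default device stays stable across boots.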