I had always assumed early speech synthesizers were obviously non-human because the technology simply didn't exist to make more human-sounding voices viable...

As for responsiveness, I think two different things are being conflated here. It is indeed erroneous to equate a TTS engine with the frontend tool using it, whether that frontend is a screen reader, a mechanical voicebox for the mute, or what have you. And yes, the screen reader, if that's what's using the TTS, should be controlling most of what the TTS is doing. But it's still important that the TTS be able to render speech quickly enough for the screen reader to actually use it.

A TTS that can take an eBook as input and spit out an audiobook indistinguishable from one recorded by a professional reader in a sound booth would be great for generating audiobooks, even if it took twice as long to render the audio as to play it back. It would be kind of lousy to use with a screen reader, though, if it took 5 seconds to speak every time Orca sent it a sentence. I could be wrong, but I suspect this is the kind of thing whoever originally asked about responsiveness was talking about.
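
To put rough numbers on that distinction: what matters for screen reader use is the latency before audio starts and the real-time factor (render time divided by the duration of the audio produced). Here's a quick Python sketch of what I mean; the synthesize() call is just a made-up stand-in for whatever your engine actually exposes, not any particular API:

import time

def profile_tts(synthesize, text, sample_rate=22050):
    """Time a single synthesis call and report its real-time factor.

    synthesize() is hypothetical -- assume it returns raw mono samples
    at sample_rate for the given text.
    """
    start = time.monotonic()
    samples = synthesize(text)                 # hypothetical engine call
    render_time = time.monotonic() - start

    audio_duration = len(samples) / sample_rate
    rtf = render_time / audio_duration         # < 1.0 means it keeps up with playback

    print(f"render {render_time:.2f}s, audio {audio_duration:.2f}s, RTF {rtf:.2f}")
    return rtf

An engine with an RTF of 2.0 (twice as long to render as to play) could still be fine for batch audiobook generation, but for something like Orca you'd want an RTF well under 1.0, and just as importantly, only a short delay before the first audio comes out.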