I don't know how any speech synth, hardware or software, works at the nuts-and-bolts level, but I suspect the people who make the DECtalk synths have put far more effort into making their devices work with Windows than with Linux, and I wouldn't be surprised if they've never published any documentation that would help someone build a DECtalk Speech Dispatcher module, or whatever else would be needed to get them working with Orca. There's hardware far more mainstream than hardware speech synths that has shoddy or non-existent Linux support because the vendor left Linux users no choice but to reverse-engineer how to talk to their devices.

And even if the DECtalks are fully documented and it's just a matter of someone with the right skills writing the needed bit of code, there aren't that many people working on Linux accessibility, and I get the impression that hardware synth users are a small percentage of an already small demographic, so it isn't surprising that nobody has made supporting hardware synths a priority. Orca has only a single active developer, as best I can tell; Debian's Accessibility Team is one person; the Slint distribution is maintained by one person; Vinux collapsed for lack of manpower; and those are just the examples I can name off the top of my head.

I don't pay attention to what's going on over in Windows land, but I'd be surprised if NVDA doesn't eclipse Orca in developer hours simply by virtue of Windows's larger user base, and for all I know, the people behind JAWS might pay someone just to maintain hardware synth support. Admittedly, a missing feature really sucks for those who need it most, but there's not much that can be done if there isn't someone with the time, capability, and willingness to implement it.
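
For what it's worth, Speech Dispatcher does ship a "generic" output module that just shells out to whatever command you configure, so in principle a serial-attached DECtalk could be driven without anyone writing a proper module in C. The sketch below is nothing more than a guess at what such a config might look like; the device path, baud rate, rate numbers, and voice name are all assumptions on my part, and I've never tried any of this against real hardware:

    # /etc/speech-dispatcher/modules/dectalk-generic.conf  (hypothetical)
    # Assumes a DECtalk Express hanging off /dev/ttyUSB0 at 9600 baud.

    # $RATE and $DATA are substituted by the sd_generic module before the
    # command runs; [:ra ...] is DECtalk's own inline rate command.
    GenericExecuteSynth \
    "stty -F /dev/ttyUSB0 9600 raw && echo \'[:ra $RATE] $DATA\' > /dev/ttyUSB0"

    # Rough mapping of Speech Dispatcher's -100..100 rate scale onto
    # DECtalk's words-per-minute range; the numbers are placeholders.
    GenericRateAdd 200
    GenericRateMultiply 2

    # At least one voice has to be declared for the module to be selectable.
    AddVoice "en" "MALE1" "Paul"

You'd also have to register the module in speechd.conf with a line along the lines of

    AddModule "dectalk-generic" "sd_generic" "dectalk-generic.conf"

after which Orca would see it as just another Speech Dispatcher synthesizer. Whether that would be responsive enough for day-to-day screen reader use is another question, and it wouldn't help anyone whose synth expects something fancier than plain text over serial.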