speechd-up status and ideas

On Tue, Jan 18, 2011 at 11:48 PM, William Hubbs <w.d.hubbs at gmail.com> wrote:
> If I can make things work with speechd-up the way I want them to, you
> won't need to run sd in system wide mode. What I want to work toward is
> having sd drop privs to the user you give it with the -u option, open
> the /dev/softsynth device, then connect to sd, which will autospawn a
> copy of sd running as that user.

I support this, but I'm just curious about the implementation details.

Will this work on the consoles before a user has logged in?  What if
I'm logged into my gnome desktop with Orca running, which spawns
speech-dispatcher, switch to a console and login, and then log out on
my gnome desktop?  Currently, /dev/softsynth only allows read/write by
root.  Will making it world writable cause any security issues?
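For concreteness, the open-then-drop ordering William describes might look roughly like the sketch below. The function name and flow are my own illustration, not sd's actual code; /dev/null stands in for /dev/softsynth so the sketch runs unprivileged, and the self-drop to the current user is only there so it works without root.

```python
import os
import pwd

def drop_privileges(username):
    """Drop to the given user, as sd might with its -u option (illustrative)."""
    pw = pwd.getpwnam(username)
    if os.geteuid() == 0:
        # Only root may change group membership; skipped when run unprivileged.
        os.setgroups([])
        os.setgid(pw.pw_gid)
    os.setuid(pw.pw_uid)  # irreversible for a non-setuid process

# The daemon would open /dev/softsynth *before* dropping privileges, while it
# still has root's read/write access to the device; /dev/null stands in here.
fd = os.open("/dev/null", os.O_RDWR)

# Drop to the current user (a no-op self-drop, so the sketch runs unprivileged).
me = pwd.getpwuid(os.getuid()).pw_name
drop_privileges(me)
print("running as uid", os.getuid(), "holding device fd", fd)
```

The point of the ordering is that the device stays root-only: the file descriptor is inherited across the privilege drop, so /dev/softsynth never needs to be world-writable.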

I think it is possible to get all this right, but it would be easy to
get it wrong.  The approved way would add a lot of complexity and
potential instability: dealing with session management, creating a
session for non-logged-in users on consoles, and so on (which may have
already happened), plus attaching to d-bus for session information,
and SD is considering moving to d-bus anyway.

However, speechd-up is currently a very simple single source file
project.  Even so, it was very buggy up until the ospeakup upgrades.
Before then, no one used speechd-up because it sucked.  I'd hate to
see it get very complicated.

I believe speech-dispatcher is already there, much to the detriment of
the project.  Note, for example, the continuing instability issues.
Note that important SD features like returning sound samples to the
application are so hard to implement, they never will be.  Voiceman,
an alternative to SD, only exists because SD will likely never support
auto-detection of language.  Integration of new TTS engines into SD
requires typically 1,000 lines of multi-threaded C code.  Integration
should be super easy, but the process isn't even well documented.  You
can't even attach to SD TTS clients with a debugger, and have to
develop with printf, as if we were writing kernel drivers.  Important
real functionality, like dealing with punctuation consistently, isn't
handled in SD.  Bugs like saying "capital" all the time rather than
increasing the pitch have persisted for years.

Anyway, I'd just be careful to remember the KISS rule (keep it simple,
stupid), and try not to make things more complicated than need be.  If
SD had done this, we'd have one universal speech switch running on
everything from Android to Windows.  TTS engines could be integrated
once, and run everywhere.  We'd have one speech back end, rather than
SD, Voiceman, and the emacspeak speech server (among others).  As it
is, it's hard as hell to even get speakup talking to SD reliably, with
just one copy running.  It's no wonder that most people just use
espeakup.  Yasr support in Vinux broke recently because of changes in
SD.  What's left is SD being used to connect Orca to espeak, and even
that crashes every now and then.  And so... what's considered a high
priority for SD?  I hope to God it's not freaking d-bus integration.

I honestly think about throwing away SD and starting over with a clone
that could act as a drop-in replacement.  Then, all that other cool
stuff could happen.

Bill


