Re: How to mix (naturally speaking) win32 and native (python) li

vitamin wrote:
> Eric S. Johansson wrote:
>> Is it possible to make this code (natlink) talk to NaturallySpeaking in
>> Wine using win32 Python, but speak to Python in Linux so we can do all our
>> fun command-and-control stuff there?
> 
> No, Linux's Python has no notion of COM (which is obviously a win32-only
> thing). And Windows Python doesn't work all that well on Wine.

No surprise.  So if I write a Linux Python C extension that uses Wine libs to
make the COM connection, it won't work?  I guess this means getting
natspeak/wine to see the taskbar/window ID of the active app will be damned
near impossible?

(see what twisted trouble Wine can get you into)

You probably deserve a bit of back story at this point. I'm trying to build a
speech recognition friendly dictation target that can cut or paste between the
dictation target and the application target. Our shorthand for it is "persistent
dictation box". This derives from the feature in NaturallySpeaking called
"dictation box". You can use it to dictate text and be able to edit it using
speech recognition. On completion of dictation/editing, you can then paste it
into any Windows application using standard Windows cut-and-paste sequences. The
downside is that the dictation box goes away when you paste the text.  I figure
that if you can leave a dictation box up and associated with an application,
it'll be an easier environment to work with, especially for short-burst
dictation like instant messaging.

To do this the "easy way", I need something like Notepad built with the
NaturallySpeaking-friendly edit controls. Once I give the command to transfer, I
need to move the data into a cut-and-paste buffer, shift focus to the right
Linux application, and paste.
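
Here's a rough sketch of what the Linux side of that transfer could look like in
Python. It's only an illustration: it assumes xclip and xdotool are installed
(neither is part of NatLink), and the window name "Pidgin" is a made-up example
of a target application.

#!/usr/bin/env python
# Sketch: push dictated text into the X clipboard, focus the target
# window, then send a paste keystroke.  xclip and xdotool are assumptions,
# just one way to do the focus/paste step from Linux-side Python.

import subprocess

def set_clipboard(text):
    """Load text into the X clipboard selection via xclip."""
    proc = subprocess.Popen(["xclip", "-selection", "clipboard"],
                            stdin=subprocess.PIPE)
    proc.communicate(text.encode("utf-8"))

def paste_into(window_name):
    """Activate the first window whose title matches, then send Ctrl+V."""
    out = subprocess.check_output(["xdotool", "search", "--name", window_name])
    window_ids = out.split()
    if not window_ids:
        raise RuntimeError("no window matching %r" % window_name)
    subprocess.check_call(["xdotool", "windowactivate", "--sync", window_ids[0]])
    subprocess.check_call(["xdotool", "key", "--clearmodifiers", "ctrl+v"])

if __name__ == "__main__":
    # Hypothetical target: an instant-messenger window called "Pidgin".
    set_clipboard("text dictated into the persistent dictation box")
    paste_into("Pidgin")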

I would like to do as much of this as I can in Python because my hands are
broken and Python is probably the only language I can dictate using speech
recognition unassisted. I also have a couple of people who have expressed
interest in helping building a speech recognition friendly editing environment
but, I need to finish the UI design before we can move forward.

The next example comes from this speech-friendly editing environment. At the
very least, to make an editor usable with speech recognition, you need to be
able to generate keystrokes that activate keyboard shortcuts for various
functions. Emacs is wonderful because if there isn't a keystroke already
defined, you can define one and get the functionality you need. Vi is
particularly hideous because, well, if a command is misrecognized and generates
text, God knows what's going to happen to your work.
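
As a rough illustration of the keystroke side, the Python below maps spoken
phrases to editor shortcuts and injects them with xdotool. The phrases and
bindings are made up for the example; the real grammar would live in the
NatLink command tables, and xdotool is just one way to fire the keys.

import subprocess

# Hypothetical spoken-phrase -> keystroke table for an Emacs-style editor.
COMMANDS = {
    "save file":  ["ctrl+x", "ctrl+s"],
    "cut region": ["ctrl+w"],
    "paste that": ["ctrl+y"],
    "undo that":  ["ctrl+underscore"],
}

def send_keys(chords):
    """Send each key chord to the currently focused window via xdotool."""
    for chord in chords:
        subprocess.check_call(["xdotool", "key", "--clearmodifiers", chord])

def run_command(phrase):
    """Look up a spoken phrase and fire its keystrokes."""
    chords = COMMANDS.get(phrase)
    if chords is None:
        raise KeyError("no binding for %r" % phrase)
    send_keys(chords)

if __name__ == "__main__":
    run_command("save file")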

Hopefully this back story helps you understand what I'm up against. I'm really
hoping to make Linux more friendly to disabled programmers like myself, but it's
going to take help from developers whose hands still work; otherwise the job
won't get done, because it's too big for someone with broken hands.


