Eric S. Johansson wrote:
> vitamin wrote:
> > Eric S. Johansson wrote:
> > > Is it possible to make this code (natlink) talk to NaturallySpeaking in
> > > Wine using Win32, but speak to Python in Linux, so we can do all our fun
> > > command-and-control stuff there?
> >
> > No, Linux's Python has no notion of COM (which is obviously a Win32-only
> > thing). And Windows Python doesn't work all that well on Wine.
>
> No surprise. So if I write a Linux Python C extension that uses Wine libs to
> make the COM connection, it won't work? I guess this means getting
> natspeak/Wine to see the taskbar/window ID of the active app will be damned
> near impossible?
>
> (See what twisted trouble Wine can get you into.)
>
> You probably deserve a bit of back story at this point. I'm trying to build a
> speech-recognition-friendly dictation target that can cut or paste between
> the dictation target and the application target. Our shorthand for it is
> "persistent dictation box". This derives from the NaturallySpeaking feature
> called "dictation box": you can use it to dictate text and edit it using
> speech recognition, and on completion of dictation/editing you can paste the
> result into any Windows application using the standard Windows cut-and-paste
> sequences. The downside is that the dictation box goes away when you paste
> the text. I figure that if you can leave a dictation box up and associated
> with an application, it'll be an easier environment to work with, especially
> for short-burst dictation like instant messaging.
>
> To do this the "easy way", I need something like Notepad built with the
> NaturallySpeaking-friendly edit controls. Once I give the command to
> transfer, I need to move the data into a cut-and-paste buffer, shift focus
> to the right Linux application, and paste.
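The transfer step described above (fill the paste buffer, refocus the target, paste) can be sketched in plain Python. This is only a hedged illustration: `set_clipboard`, `focus_window`, and `send_paste` are hypothetical stand-ins for whatever clipboard and window-focus mechanism the real implementation would use; none of these names come from NatLink, Wine, or X.

```python
# Hypothetical sketch of the "transfer" command: push the dictation-box text
# to a paste buffer, refocus the target application, and paste. The three
# callables are placeholders, not real NatLink/Wine/X APIs.

def make_transfer_command(set_clipboard, focus_window, send_paste):
    """Build a transfer function bound to concrete clipboard/focus helpers."""
    def transfer(box_text, target_window_id):
        set_clipboard(box_text)          # move the dictated text to the buffer
        focus_window(target_window_id)   # shift focus to the Linux application
        send_paste(target_window_id)     # e.g. synthesize a paste keystroke
    return transfer

# Tiny in-memory demonstration with fake helpers:
log = []
transfer = make_transfer_command(
    set_clipboard=lambda text: log.append(("clipboard", text)),
    focus_window=lambda wid: log.append(("focus", wid)),
    send_paste=lambda wid: log.append(("paste", wid)),
)
transfer("hello from the dictation box", 0x2A0001)
```

Keeping the platform-specific helpers injected like this would let the same transfer logic run against Win32 calls inside Wine or X calls on the Linux side.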
> I would like to do as much of this as I can in Python, because my hands are
> broken and Python is probably the only language I can dictate using speech
> recognition unassisted. I also have a couple of people who have expressed
> interest in helping build a speech-recognition-friendly editing environment,
> but I need to finish the UI design before we can move forward.
>
> The next example comes from this speech-friendly editing environment. At the
> very least, to make an editor usable with speech recognition, you need to be
> able to generate keystrokes that activate keyboard shortcuts to various
> functions. Emacs is wonderful because if there isn't a keystroke already
> defined, you can define one and get the functionality you need. Vi is
> particularly hideous because, well, if a command is misrecognized and
> generates text instead, God knows what's going to happen to your work.
>
> Hopefully this back story helps you understand what I'm up against. I'm
> really hoping to make Linux friendlier to disabled programmers like myself,
> but it's going to take help from developers whose hands still work;
> otherwise the job won't get done, because it's too big for someone with
> broken hands.

Hi,

If I understand it correctly, your idea is closely related to (but not the
same as) this one:

http://forum.winehq.org/viewtopic.php?t=5048

My lack of experience with Python doesn't let me judge how well you can
handle events there; I suppose about as well as in C. So, to get it right:

> I figure that if you can leave a dictation box up and associated with an
> application, it'll be an easier environment to work with, especially for
> short-burst dictation

You want to have a dictation box associated with a native app once you select
it, is that right? For starters, I see that this box would have to be made
using the Win32 API, so that you can dictate into it with NatSpeak. Then
you'd need some kind of thread that monitored it for changes and flushed them
to the native window. I'm thinking C here.
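The keystroke-generation idea above (spoken commands triggering editor shortcuts rather than literal text) can be sketched as a simple lookup table. Everything here is made up for illustration: the command phrases, the chord notation, and the `keys_for` helper are assumptions, not part of any recognizer's real API.

```python
# Hypothetical sketch: map recognized command phrases to editor shortcut
# chords, so the recognizer emits a key sequence instead of dictated text.
# The phrases and chord strings below are invented for illustration.

SHORTCUTS = {
    "save file": "ctrl+x ctrl+s",   # Emacs-style chord
    "kill line": "ctrl+k",
    "undo that": "ctrl+slash",
}

def keys_for(utterance):
    """Return the chord for a recognized command, or None for plain text."""
    return SHORTCUTS.get(utterance.strip().lower())
```

Anything not found in the table would fall through and be inserted as ordinary dictated text; that fallback is exactly why a modal editor like vi is risky, since unmatched text becomes commands.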
(This is all theory, which may well be wrong.)

So I guess that if you used native code to get info on the window (as well as
box positioning), you could use it to create the box and the associated
thread, which would keep monitoring and flushing the data. More than that,
you'd also have to forward individual keypresses (so that you could have
Ctrl, Alt, etc. working). Plus, this would all *probably* have to run as a
separate process that waits until it is 'called' (when you decide to
copy-and-paste-create-box).

I don't know much about X, but I suppose there's a way to determine the
currently selected widget inside a window. If it has a representable ID (I
don't know whether X provides one), you could store it as the target ID. Then
you could also store a handle to the dictation box, and that would probably
get you halfway there. Again, thinking C here... and Winelib.

Sorry if this didn't help, but I figured that storing all these ideas
somewhere might be useful, either for now or later.

Jorl17
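The monitor-and-flush thread sketched in the discussion above could look something like the following. This is a hedged, platform-free sketch: `read_box` and `flush_to_target` are hypothetical stand-ins for the real Win32 (read the edit control) and X (write to the target window) calls, and the polling approach is just one possible design; a real implementation might instead react to change notifications from the edit control.

```python
import threading
import time

# Hypothetical sketch of the monitor/flush thread: poll the dictation box
# for changes and forward each new snapshot to the native window.
# read_box and flush_to_target stand in for the real Win32/X calls.

def monitor_box(read_box, flush_to_target, stop_event, interval=0.05):
    last = read_box()
    while not stop_event.is_set():
        current = read_box()
        if current != last:           # the box contents changed since last poll
            flush_to_target(current)  # push the new text to the target window
            last = current
        time.sleep(interval)

# Demonstration with an in-memory "box" instead of a real edit control:
box = {"text": ""}
flushed = []
stop = threading.Event()
worker = threading.Thread(
    target=monitor_box,
    args=(lambda: box["text"], flushed.append, stop, 0.01),
)
worker.start()
box["text"] = "first burst"
time.sleep(0.1)
box["text"] = "first burst, then more"
time.sleep(0.1)
stop.set()
worker.join()
```

Running this in a separate process that waits to be 'called', as suggested above, would just mean wrapping the same loop behind some IPC trigger instead of starting the thread directly.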