On 14/10/2015 21:39, Eric S. Johansson wrote:
> update from the NaturallySpeaking in a VM project.
>
> don't remember what I told you before but, yes, I can now send keystroke
> events generated by speech recognition in the Windows guest into the
> Linux input queue. I can also extract information from the Linux side,
> and have it modify the grammar on the Windows side. The result of
> activating that grammar is that I can execute code on either side in
> response to speech recognition commands. It's fragile as all hell, but
> I'm the only one using it so far. :-)

That's awesome! What was the problem?

> Latency is a bit longer than I'd like. USB and network connections break
> every time I come out of suspend, but at least I don't have to use
> Windows all the time.
>
> One thing is puzzling though. Windows, when idle, consumes something like
> 15 to 20% CPU according to top. I turn on NaturallySpeaking, and
> utilization climbs to roughly 30 to 40%. I turn on the microphone,
> and utilization jumps to 80-110%. In other words, it takes up a
> whole core.

USB is really expensive because it's all done through polling. Do that in
hardware, and your computer runs a bit hotter; do that in software (which
is what VMs have to do) and your computer doubles as a frying pan.

If you have USB3 drivers in Windows, you can try using a USB3 controller.
It's probably still going to waste a lot of processing power, though,
because USB audio uses a lot of small packets, which is basically the
worst case for emulation.

Paolo
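For what it's worth, the USB3-controller suggestion above can be sketched
with QEMU's standard command-line options (`-device qemu-xhci` for the
emulated xHCI controller, `-device usb-host` for host passthrough). The
vendor and product IDs below are placeholders, not the actual headset's;
they would come from lsusb on the host:

```shell
# Sketch: attach the host microphone through an emulated xHCI (USB3)
# controller instead of the default UHCI/EHCI pair.
# vendorid/productid are placeholders; run `lsusb` on the Linux host to
# find the real IDs for the headset.
qemu-system-x86_64 \
  -device qemu-xhci,id=xhci \
  -device usb-host,bus=xhci.0,vendorid=0x1234,productid=0x5678
```

xHCI moves most of the per-transfer bookkeeping into event rings rather
than the constant frame-by-frame polling the older controllers need, which
is why it tends to cost the emulator less CPU per transfer; the isochronous
audio stream itself still has to be serviced at a steady rate, which is why
it remains expensive either way.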