On Thu, Sep 19, 2013 at 11:59:45AM -0400, Jonathan Lebon wrote:
> I tried to dig a bit deeper in this. From my limited understanding,
> it seems like stream events are implemented as enabled/disabled timers.
> The issue is that if there's no data from the guest app pending, the
> timeout in virEventPollRunOnce will be calculated as -1. So then we
> block on the poll() and only come out once stdin is ready for reading.
>
> This means that if data is received from the guest during the blocking
> poll(), there will be no dispatching until something happens on stdin
> and poll() returns (hence why you have to <Enter> twice).

poll() will be listening for I/O on the libvirt socket as well as stdin,
so it'll see incoming I/O from the guest.

> I'm sure there's a better solution, but is there any way to force the
> timer created for streams to always be 0? Or even to use ppoll()
> instead of poll() and arrange for a benign signal upon stream events?
> Hopefully my analysis wasn't too far off.

I don't think that is the case. The streams/events code is already used
for the 'virsh console' command implementation, which doesn't suffer from
the problem you describe.

One relevant thing is that stdio is line buffered by default and you
aren't putting it into raw mode like virsh console does. This will delay
I/O on the stdio streams.

Daniel

-- 
|: http://berrange.com  -o-  http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o-  http://virt-manager.org :|
|: http://autobuild.org  -o-  http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org  -o-  http://live.gnome.org/gtk-vnc :|
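
For illustration, a minimal, self-contained sketch (not libvirt's actual
event loop code) of poll() watching stdin and a second descriptor at the
same time; a pipe stands in for the libvirt connection socket here. Even
with a -1 timeout, poll() returns as soon as either descriptor becomes
readable, which is why data arriving from the guest can wake the loop
without any keypress on stdin.

/* Sketch only: poll() on stdin plus a pipe that stands in for the
 * libvirt socket.  The -1 timeout blocks until EITHER fd is readable. */
#include <poll.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int pipefd[2];                      /* stand-in for the libvirt socket */

    if (pipe(pipefd) < 0) {
        perror("pipe");
        return 1;
    }
    write(pipefd[1], "hi", 2);          /* pretend the guest sent data */

    struct pollfd fds[2] = {
        { .fd = STDIN_FILENO, .events = POLLIN },
        { .fd = pipefd[0],    .events = POLLIN },
    };

    /* -1 timeout: block until at least one descriptor is ready */
    if (poll(fds, 2, -1) < 0) {
        perror("poll");
        return 1;
    }
    if (fds[0].revents & POLLIN)
        printf("stdin readable\n");
    if (fds[1].revents & POLLIN)
        printf("socket (pipe stand-in) readable\n");  /* fires right away */
    return 0;
}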
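
And a minimal sketch of putting the terminal into raw mode with termios,
along the lines of what virsh console does (this is not virsh's actual
code): cfmakeraw() disables canonical line editing and echo, so each
keystroke is delivered immediately instead of being held back until
Enter is pressed.

/* Sketch only: switch stdin to raw mode and restore it on exit. */
#include <stdio.h>
#include <stdlib.h>
#include <termios.h>
#include <unistd.h>

static struct termios saved;

static void restore_tty(void)
{
    tcsetattr(STDIN_FILENO, TCSAFLUSH, &saved);
}

int main(void)
{
    struct termios raw;

    if (tcgetattr(STDIN_FILENO, &saved) < 0) {
        perror("tcgetattr");
        return 1;
    }
    atexit(restore_tty);                /* put the terminal back on exit */

    raw = saved;
    cfmakeraw(&raw);                    /* no line buffering, no echo */
    if (tcsetattr(STDIN_FILENO, TCSAFLUSH, &raw) < 0) {
        perror("tcsetattr");
        return 1;
    }

    /* ... run the stream/event loop here; reads on stdin now return
     * per keystroke rather than per line ... */

    return 0;
}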