On Thu, Jan 23, 2020 at 10:22:33AM +0100, Michal Privoznik wrote:
> On 1/22/20 1:18 PM, Daniel P. Berrangé wrote:
> > On Wed, Jan 22, 2020 at 01:01:42PM +0100, Michal Privoznik wrote:
> > > For anybody with libvirt insight: virNetClientIOHandleInput() ->
> > > virNetClientCallDispatch() -> virNetClientCallDispatchStream() ->
> > > virNetClientStreamQueuePacket().
> > >
> > >
> > > The obvious fix would be to stop processing incoming packets if stream has
> > > "too much" data cached (define "too much"). But this may lead to an
> > > unresponsive client event loop - if the client doesn't pull data from
> > > the incoming stream fast enough they won't be able to make any other RPC.
> >
> > IMHO if they're not pulling stream data and still expecting to make
> > other RPC calls in a timely manner, then their code is broken.
>
> This is virsh that we are talking about. It's not some random application.

Right, so the problem doesn't exist in virsh, as it isn't trying to run
other RPC calls while streaming data. So we ought to be able to just
stop reading from the socket.

>
> And I am able to limit virsh mem usage to "just" 100MiB with one well placed
> usleep() - to slow down putting incoming stream packets onto the queue:
>
> diff --git i/src/rpc/virnetclientstream.c w/src/rpc/virnetclientstream.c
> index f904eaba31..cfb3f225f2 100644
> --- i/src/rpc/virnetclientstream.c
> +++ w/src/rpc/virnetclientstream.c
> @@ -358,6 +358,7 @@ int virNetClientStreamQueuePacket(virNetClientStreamPtr st,
>      virNetClientStreamEventTimerUpdate(st);
>
>      virObjectUnlock(st);
> +    usleep(1000);
>      return 0;
>  }
>
>
> But any attempt I've made to ignore POLLIN if stream queue is longer than
> say 8 packets was unsuccessful (the code still read incoming packets and
> placed them into the queue). I blame the passing-the-buck algorithm for that
> (rather than my poor skills :-P).

I think we need some calls to virNetClientIOUpdateCallback() added
around the places where we are dealing with the stream data queue.

The method will want to be updated so that it avoids setting the
READABLE bit if the queue is too large, I guess.

> >
> > Having said that, in retrospect I rather regret ever implementing our
> > stream APIs as we did. We really should have just exposed an API which
> > lets you spawn an NBD server associated with a storage volume, or
> > tunnelled NBD over libvirtd. The former is probably our best strategy
> > these days, now that NBD has native TLS support.
>
> Yeah, but IIRC NBD wasn't a thing back then, was it?

NBD predates libvirtd in fact, though I can't remember when the qemu-nbd
tool arrived vs libvirt's streams!

Regards,
Daniel
--
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org       -o-            https://fstop138.berrange.com   :|
|: https://entangle-photo.org -o-   https://www.instagram.com/dberrange     :|
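For illustration, here is a minimal, self-contained sketch of the backpressure idea discussed above: only ask the event loop to watch the socket for POLLIN while the stream's receive queue is below some threshold, so the kernel socket buffer and TCP flow control throttle the sender instead of the client buffering without bound. The struct, helper names and the threshold are hypothetical and invented for this sketch; they are not libvirt's actual virNetClientStream / virNetClientIOUpdateCallback implementation.

    /* Hypothetical sketch of queue-length-based backpressure for a
     * streaming RPC client; not real libvirt code. */
    #include <poll.h>
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    #define STREAM_RX_QUEUE_MAX 8   /* arbitrary "too much" threshold */

    typedef struct {
        size_t rx_queued;            /* packets queued but not yet consumed */
    } stream;

    /* Decide which poll events the client loop should ask for. */
    static short client_poll_events(const stream *st, bool have_pending_writes)
    {
        short events = 0;

        if (have_pending_writes)
            events |= POLLOUT;

        /* The key change: stop watching for readable data while the
         * stream queue is at or above the threshold. */
        if (st->rx_queued < STREAM_RX_QUEUE_MAX)
            events |= POLLIN;

        return events;
    }

    /* Called whenever a stream packet is queued or consumed; roughly where
     * an update-callback style function would be invoked. */
    static void stream_queue_changed(const stream *st, bool have_pending_writes)
    {
        short events = client_poll_events(st, have_pending_writes);
        printf("rx_queued=%zu -> watch %s%s\n", st->rx_queued,
               (events & POLLIN) ? "POLLIN " : "",
               (events & POLLOUT) ? "POLLOUT" : "");
    }

    int main(void)
    {
        stream st = { 0 };

        /* Simulate the queue growing; once it reaches the threshold the
         * client would stop polling for POLLIN until the app drains it. */
        for (st.rx_queued = 0; st.rx_queued <= STREAM_RX_QUEUE_MAX; st.rx_queued++)
            stream_queue_changed(&st, false);

        return 0;
    }

The point of the design is that no data is dropped: packets already read stay queued for the application, and once the application drains the queue below the threshold the POLLIN watch is re-enabled and reading resumes.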