Re: Raising limits for our RPC messages

On Thu, May 04, 2017 at 01:39:17PM +0200, Michal Privoznik wrote:
> Dear list,
> 
> maybe you've seen us raising the limits for various parts of RPC messages
> (for example: d15b29be, 66bfc7cc61ca0, e914dcfd, etc.). It usually
> happens when we receive a report that the current limits are not enough.
> Well, we just did:
> 
> https://bugzilla.redhat.com/show_bug.cgi?id=1440683

[snip]

> So in the end, we will effectively have the message size limit
> increased to $limit_of_split_messages * $current_limit_of_message_size.
> Instead of introducing complex code that splits data into messages and
> reconstructs it back, we might as well increase the current limit on
> message size.

I guess it is helpful to consider the reason why we have a limit in
the first place.

Originally the RPC message buffer was statically allocated on the
stack :-) We fixed that, but then the buffer was still fully allocated
inline in the struct containing it. We fixed that too, and the
RPC message struct now dynamically grows the buffer as needed.

At this point, the limit really just exists to avoid accidental
/ deliberate DoS attack on server / client by making unreasonably
large requests.
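
To make that concrete, here is a minimal sketch of the pattern (the
names are hypothetical, this is not the actual libvirt code): the
buffer grows on demand, and a hard cap rejects unreasonably large
messages before any allocation happens:

  #include <stdlib.h>
  #include <errno.h>

  #define MSG_MAX (32 * 1024 * 1024)   /* hypothetical hard cap */

  struct msg {
      char *buf;
      size_t len;       /* bytes currently used */
      size_t capacity;  /* bytes currently allocated */
  };

  static int
  msg_ensure_space(struct msg *m, size_t needed)
  {
      size_t want = m->len + needed;
      char *tmp;

      if (want > MSG_MAX)
          return -E2BIG;       /* reject oversized messages up front */

      if (want <= m->capacity)
          return 0;

      if (!(tmp = realloc(m->buf, want)))
          return -ENOMEM;

      m->buf = tmp;
      m->capacity = want;
      return 0;
  }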

Whether we have 1 large message or 3 fragmented messages, we're
going to end up serializing the RPC data into the messages at the same
time for the sake of simplicity, so our peak memory usage is going to
be the same in both cases (e.g. three 32 MB fragments held at once is
no cheaper than one 96 MB message). So I agree that fragmenting
messages is just adding complexity without a real win.

So I think it is reasonable to simply increase the buffer size as
we have done before.
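
(For reference, the kind of change involved is essentially a one-line
bump of a limit constant; the name and values below are made up for
illustration, the real limits are constants in the RPC protocol
definitions:)

  /* hypothetical name and values; raised from 4 MiB to 8 MiB */
  #define MSG_STRING_MAX (8 * 1024 * 1024)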

That said, we should bear this problem in mind before adding more
"bulk query" APIs, as it isn't sensible to carry on increasing the
RPC message size forever. At some point we have to accept that
serializing 100's of MB of data into an RPC message is an
inherently inefficient design, and consider alternative API
designs that avoid it.

For bulk stats queries we could do something totally radical and
have the facility for the client to send us a shared memory
region which we asynchronously populate with stats, avoiding the
RPC layer entirely. Obviously this only works for local connections,
but I get the impression that most mgmt apps have a node-local
agent talking to libvirtd anyway.
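
A rough sketch of how the client side of that could look, purely
hypothetical (none of these names exist in libvirt today): create an
anonymous shared memory fd, pass it to the daemon over the local UNIX
socket with SCM_RIGHTS, and have both sides mmap() it:

  #define _GNU_SOURCE
  #include <sys/mman.h>
  #include <unistd.h>

  /* client: create an anonymous shared memory region; memfd_create()
   * needs Linux >= 3.17 / glibc >= 2.27, and the returned fd can be
   * handed to the daemon over a UNIX socket via SCM_RIGHTS */
  static int
  create_stats_region(size_t size)
  {
      int fd = memfd_create("stats-region", MFD_CLOEXEC);
      if (fd < 0)
          return -1;
      if (ftruncate(fd, size) < 0) {
          close(fd);
          return -1;
      }
      return fd;
  }

  /* both sides: map the region; the daemon fills in stats
   * asynchronously, the client reads them back with no RPC
   * serialization in between */
  static void *
  map_stats_region(int fd, size_t size)
  {
      void *ptr = mmap(NULL, size, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
      return ptr == MAP_FAILED ? NULL : ptr;
  }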

Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
