On Fri, 2020-04-24 at 10:20 -0500, Eric Blake wrote:
> On 4/24/20 7:37 AM, Daniel P. Berrangé wrote:
> > On Fri, Apr 24, 2020 at 02:33:13PM +0200, Michal Privoznik wrote:
> > > On 4/24/20 6:38 AM, Vincent Wu wrote:
> > >
> > > The save format is fragile. At the beginning there is a header
> > > which describes the file, then there is a libvirt section (which
> > > contains the domain XML and a cookie) and then there is a QEMU
> > > section (where QEMU saved the guest memory). Because of this, we
> > > have to have the check you are hitting in place so that we don't
> > > accidentally overwrite the QEMU section.
> >
> > BTW, does anyone recall why we were so restrictive on the XML length
> > in the first place? I looked at the history and didn't see why we
> > did it this way.
> >
> > It occurs to me that, given typical guest RAM sizes measuring many
> > hundreds of MB, we could easily make the header section have 1 MB
> > of padding, and thus allow essentially arbitrary XML updates without
> > worrying about hitting a size limit.
>
> We've had guest XML reaching 1M before, but I agree that the initial
> saved image creation should include padding to a nice boundary to make
> future edits less likely to overflow the reserved header.
>
> On new enough Linux, some file systems support
> fallocate(FALLOC_FL_INSERT_RANGE), which can splice in a hole (all
> later file contents are shifted to higher offsets); maybe our save
> code could take advantage of that to repair existing saved images
> with insufficient header size in a more efficient manner than manually
> shifting the rest of the file contents ourselves.

There's a bug filed for this:

  https://bugzilla.redhat.com/show_bug.cgi?id=1229255

Both you and Dan commented on it at some point, but I thought I'd bring
it up in case you forgot - it was a while ago :)

-- 
Andrea Bolognani / Red Hat / Virtualization
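
For illustration, here is a minimal sketch (not libvirt's actual save
code; the helper name and values are hypothetical) of how
fallocate(FALLOC_FL_INSERT_RANGE) could splice extra padding in right
after the header of an existing save file. The offset and length have
to be multiples of the filesystem block size, and only some filesystems
(e.g. ext4, XFS) support the operation, so a caller would still need
the manual-rewrite fallback when it fails with EOPNOTSUPP:

  #define _GNU_SOURCE
  #include <fcntl.h>          /* fallocate() */
  #include <linux/falloc.h>   /* FALLOC_FL_INSERT_RANGE */
  #include <errno.h>
  #include <stdio.h>
  #include <string.h>

  /* Hypothetical helper: insert 'pad' bytes of zeroes at 'header_end',
   * shifting everything that follows (the QEMU memory section) to
   * higher offsets.  Both arguments must be multiples of the
   * filesystem block size. */
  static int
  expand_saved_image(int fd, off_t header_end, off_t pad)
  {
      if (fallocate(fd, FALLOC_FL_INSERT_RANGE, header_end, pad) < 0) {
          fprintf(stderr, "insert range failed: %s\n", strerror(errno));
          return -1;  /* e.g. EOPNOTSUPP on unsupported filesystems */
      }

      /* The inserted range reads back as zeroes; the caller would then
       * rewrite the header so the recorded XML length covers the newly
       * enlarged area. */
      return 0;
  }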