Re: [PATCH] qemu: process: Fix automatic setting of locked memory limit for VFIO

On Wed, Nov 04, 2015 at 17:16:53 -0700, Alex Williamson wrote:
> On Wed, 2015-11-04 at 16:54 +0100, Peter Krempa wrote:
> > On Wed, Nov 04, 2015 at 08:43:34 -0700, Alex Williamson wrote:
> > > On Wed, 2015-11-04 at 16:14 +0100, Peter Krempa wrote:

[...]

> > Additionally if users wish to impose a limit on this they still might
> > want to use the <hard_limit> setting.
> 
> What's wrong with the current algorithm?  The 1G fudge factor certainly

The wrong thing is that it doesn't work all the time. See below...

> isn't ideal, but the 2nd bug you reference above is clearly a result
> that more memory is being added to the VM but the locked memory limit is
> not adjusted to account for it.  That's just an implementation

Indeed, that is a separate bug, and if we figure out how to make this
work all the time I'll fix that separately.

> oversight.  I'm not sure what's going on in the first bug, but why does
> using hard_limit to override the locked limit to something smaller than
> we think it should be set to automatically solve the problem?  Is it not
> getting set as we expect on power?  Do we simply need to set the limit
> using max memory rather than current memory?  It seems like there's a

Setting it to max memory won't actually fix the first referenced bug.
The bug can be reproduced on PowerPC even if you don't use max memory at
all (I haven't managed to update the BZ with this information yet,
though). Using max memory for that would basically just add yet another
workaround for setting the mlock size large enough.

The bug happens if you set up a guest with 1GiB of RAM and pass an AMD
FirePro 2270 graphics card through to it. Libvirt sets the memory lock
limit to 1+1GiB and starts qemu. qemu then aborts because the VFIO code
cannot lock the memory.
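
For reference, the heuristic being discussed boils down to roughly the
following (a simplified sketch in C; the function and parameter names
are made up for illustration, this is not the actual libvirt code):

    #include <stdbool.h>

    /* Sketch of the current heuristic: lock limit = current guest
     * memory + a 1GiB guess for IO space when a VFIO hostdev is
     * present.  Names are illustrative, not real libvirt symbols. */
    static unsigned long long
    get_mlock_limit_bytes(unsigned long long cur_mem_kib, bool has_vfio)
    {
        if (!has_vfio)
            return 0;                  /* no extra locking needed */

        /* 1GiB == 1048576 KiB fudge factor on top of guest RAM */
        return (cur_mem_kib + 1048576ULL) * 1024;
    }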

This does not happen on larger guests (2GiB of RAM or more), though,
which leads to the suspicion that the limit doesn't take some kind of
overhead into account. As the original comment in the code hinted that
this is just a guess, and the guess has proved unreliable, we shouldn't
special-case such configurations.

Setting it to max memory + 1GiB would, compared to the current state,
only work around the second mentioned bug.
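
In other words, that variant would merely change the base of the same
guess while keeping the unreliable 1GiB overhead (again an illustrative
sketch, not actual libvirt code):

    /* Same 1GiB guess, just based on the maximum instead of the
     * current memory -- papers over the memory hotplug case but
     * does not help the small ppc64 guest described above. */
    static unsigned long long
    get_mlock_limit_bytes_maxmem(unsigned long long max_mem_kib)
    {
        return (max_mem_kib + 1048576ULL) * 1024;
    }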

> whole lot of things we could do that are better than allowing the VM
> unlimited locked memory.  Thanks,

I'm happy to set something large enough, but the value we set must be
the absolute upper bound of anything that might be necessary so that it
will 'just work'. We decided to be nice enough to the users to set the
limit to something that works, so we shouldn't special-case any
configuration.
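
For completeness, raising the limit before exec'ing qemu comes down to a
plain setrlimit() call, something like the sketch below (standard POSIX
interface, not the helper libvirt actually uses). RLIM_INFINITY would
correspond to not special-casing anything; a finite value would have to
be a guaranteed upper bound:

    #include <stdbool.h>
    #include <sys/resource.h>

    /* Sketch: set RLIMIT_MEMLOCK for the qemu process.  A finite
     * limit must cover guest RAM plus any VFIO mapping overhead,
     * otherwise qemu aborts when VFIO fails to lock memory. */
    static int
    set_memlock_limit(unsigned long long bytes, bool unlimited)
    {
        struct rlimit rl;

        rl.rlim_cur = rl.rlim_max = unlimited ? RLIM_INFINITY : bytes;
        return setrlimit(RLIMIT_MEMLOCK, &rl);
    }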

Peter


