Re: [PATCH 0/3] Fix memlock limit during hotplug of mdev devices

On 9/6/19 10:29 AM, Daniel Henrique Barboza wrote:
> Hi,
> 
> 
> I've thought about the issue you're fixing. Have you considered/tried
> handling the MemLockLimit higher up in the call hierarchy with your
> new qemuDomainAdjustMaxMemLockHostdev() function? Both
> qemuDomainAttachMediatedDevice() and qemuDomainAttachHostPCIDevice()
> are called from qemuDomainAttachHostDevice(). In theory, a call to

That's a good point.  I did start down that path, but that was before
I created qemuDomainAdjustMaxMemLockHostdev(), and I didn't like the
way things were turning out.  I will revisit it, since it would
certainly be better to adjust the limit in one place rather than in
each of the qemuDomainAttach* routines that are affected now (or may
be in the future).
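
Something along these lines is what I have in mind; an untested
sketch, assuming qemuDomainAdjustMaxMemLockHostdev() keeps the
(vm, hostdev) signature from this series, and glossing over the other
subsystem types and the 'goto error' structure of the real function:

    static int
    qemuDomainAttachHostDevice(virQEMUDriverPtr driver,
                               virDomainObjPtr vm,
                               virDomainHostdevDefPtr hostdev)
    {
        int ret = -1;

        /* Raise the limit once, before any subsystem-specific work. */
        if (qemuDomainAdjustMaxMemLockHostdev(vm, hostdev) < 0)
            return -1;

        switch (hostdev->source.subsys.type) {
        case VIR_DOMAIN_HOSTDEV_SUBSYS_TYPE_PCI:
            ret = qemuDomainAttachHostPCIDevice(driver, vm, hostdev);
            break;
        case VIR_DOMAIN_HOSTDEV_SUBSYS_TYPE_MDEV:
            ret = qemuDomainAttachMediatedDevice(driver, vm, hostdev);
            break;
        /* ... other subsystem types ... */
        }

        if (ret < 0) {
            /* Attach failed: recompute the limit without this device. */
            ignore_value(qemuDomainAdjustMaxMemLock(vm));
        }

        return ret;
    }

The per-subsystem attach routines would then no longer need to adjust
the limit themselves.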

> qemuDomainAdjustMaxMemLockHostdev() at the start of AttachHostDevice()
> would cover all the hostdev types that function handles. Then you can
> undo the memLock change in the 'error' label there if something goes
> wrong.
> 
> It is even possible to argue for calling
> qemuDomainAdjustMaxMemLock() at the start of the generic hotplug
> function, qemuDomainAttachDeviceLive(), using the same principle:
> temporarily add the device to the vmdef, set the new memory limit
> assuming that everything will be OK, and undo the adjustment if
> ret != 0. This would cover the case of memory hotplug, which needs
> RLIMIT changes as well and uses the same qemuDomainAdjustMaxMemLock()
> function to do it.

That would be excellent.
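I.e., roughly the following (a hypothetical sketch; the real function
dispatches on many more device types, and the vmdef bookkeeping is
only hinted at in the comments):

    static int
    qemuDomainAttachDeviceLive(virDomainObjPtr vm,
                               virDomainDeviceDefPtr dev,
                               virQEMUDriverPtr driver)
    {
        int ret = -1;

        /* Temporarily add 'dev' to vm->def so the computation sees
         * it, then raise the limit assuming the attach will succeed. */
        if (qemuDomainAdjustMaxMemLock(vm) < 0)
            return -1;

        /* ... existing switch over dev->type sets ret ... */

        if (ret != 0) {
            /* Attach failed: drop 'dev' from vm->def again and
             * recompute the limit without it. */
            ignore_value(qemuDomainAdjustMaxMemLock(vm));
        }

        return ret;
    }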

> 
> This is more future work, though. For now I believe it's appropriate
> to fix vfio-ccw hotplug ASAP.
> 

I agree.  I have a few other things on my plate at the moment, but I
will look into these ideas once those are sorted out.
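
For reference, the shape of the fix in this series boils down to the
following (simplified from patches 2 and 3; 'teardownmemlock' is the
flag remembering whether the limit was changed):

    /* Raise the limit before attaching, and remember that we did. */
    if (qemuDomainAdjustMaxMemLockHostdev(vm, hostdev) < 0)
        goto cleanup;
    teardownmemlock = true;

    /* ... attach the device, setting ret ... */

 cleanup:
    if (ret < 0 && teardownmemlock &&
        qemuDomainAdjustMaxMemLock(vm) < 0)
        VIR_WARN("Unable to reset maximum locked memory on hotplug fail");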

Thanks for the reviews, and the suggestions!

 - Eric

> 
> Thanks,
> 
> 
> DHB
> 
> 
> 
> On 9/3/19 5:09 PM, Eric Farman wrote:
>> The routine qemuDomainGetMemLockLimitBytes() has a couple of tests
>> to determine what the maximum limit of locked memory should be.
>> If I start a domain without any vfio stuff, /proc/$PID/limits says
>> the limit is 64KiB.  If I start a guest with a vfio-ccw hostdev,
>> the limit is now $GUEST_MEMORY + 1GiB.  It doesn't matter if I
>> have one hostdev or a thousand; it's always 1GiB more than the
>> configured amount of guest memory.
>>
>> If I start a guest without any vfio devices, and hotplug that same
>> vfio-ccw hostdev, the limit remains at 64KiB and I start getting
>> I/O errors in the guest.  This makes sense, since some of the I/O
>> chains are long enough to exceed that page limit, and the host
>> starts throwing errors.
>>
>> There is already code that adjusts this limit during the hotplug
>> of vfio-pci devices, so patch 1 refactors that code so that it
>> can be re-used on the mdev hotplug path in patch 3.
>>
>> Patch 2, meanwhile, adds some cleanup that I think is missing in
>> the vfio-pci path, based on my read of the mdev path.
>>
>>    $ grep locked /proc/83543/limits
>>    Max locked memory         65536                65536                bytes
>>
>>    $ virsh attach-device guest scratch-ca8b.xml
>>    Device attached successfully
>>
>>    $ grep locked /proc/83543/limits
>>    Max locked memory         3221225472           3221225472           bytes
>>
>>
>> Eric Farman (3):
>>    qemu: Refactor the max memlock routine
>>    qemu: Reset the maximum locked memory on hotplug fail
>>    qemu: Adjust max memlock on mdev hotplug
>>
>>   src/qemu/qemu_domain.c  | 30 ++++++++++++++++++++++++++++++
>>   src/qemu/qemu_domain.h  |  2 ++
>>   src/qemu/qemu_hotplug.c | 22 ++++++++++++----------
>>   3 files changed, 44 insertions(+), 10 deletions(-)
>>
> 
