On 9/13/18 3:49 AM, Mike Kravetz wrote:
On 09/12/2018 12:47 PM, Andrew Morton wrote:
(switched to email. Please respond via emailed reply-to-all, not via the
bugzilla web interface).
On Tue, 11 Sep 2018 03:59:11 +0000 bugzilla-daemon@xxxxxxxxxxxxxxxxxxx wrote:
https://bugzilla.kernel.org/show_bug.cgi?id=201085
Bug ID: 201085
Summary: Kernel allows mlock() on pages in CMA without
migrating pages out of CMA first
Product: Memory Management
Version: 2.5
Kernel Version: 4.18
Hardware: All
OS: Linux
Tree: Mainline
Status: NEW
Severity: normal
Priority: P1
Component: Page Allocator
Assignee: akpm@xxxxxxxxxxxxxxxxxxxx
Reporter: tpearson@xxxxxxxxxxxxxxxxxxxxx
Regression: No
Pages allocated in CMA are not migrated out of CMA when non-CMA memory is
available and locking is attempted via mlock(). This can result in rapid
exhaustion of the CMA pool if memory locking is used by an application with
large memory requirements such as QEMU.
To reproduce, on a dual-CPU (NUMA) POWER9 host try to launch a VM with mlock=on
and 1/2 or more of physical memory allocated to the guest. Observe that the CMA
pool is fully depleted despite plenty of normal free RAM being available.
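For example, something along these lines should reproduce it (illustrative
command line only; the disk image is a placeholder and -m should be half or
more of host RAM):

  qemu-system-ppc64 -machine pseries,accel=kvm \
      -smp 16 -m 256G \
      -realtime mlock=on \
      -drive file=guest.qcow2,if=virtio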
IIRC, Aneesh is working on some powerpc IOMMU patches for a similar issue
(long term pinning of cma pages). Added him on Cc:
https://lkml.kernel.org/r/20180906054342.25094-2-aneesh.kumar@xxxxxxxxxxxxx
This report seems to be suggesting a more general solution/change. Wondering
if there is any overlap between this and Aneesh's work.
This is a related issue. I am looking at doing something similar to what
I did with the IOMMU patches, that is, migrate pages out of the CMA region
before mlock.
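Roughly along these lines (completely untested sketch, just to illustrate the
direction; it mirrors what the powerpc IOMMU code does today, and the helper
names here are made up):

/*
 * Untested sketch: move a page out of a CMA pageblock before it gets
 * pinned/mlocked for the lifetime of the guest.
 */
#include <linux/mm.h>
#include <linux/swap.h>
#include <linux/migrate.h>
#include <linux/gfp.h>

static struct page *new_non_cma_page(struct page *page, unsigned long private)
{
        /*
         * GFP_HIGHUSER (without __GFP_MOVABLE) cannot be satisfied from
         * MIGRATE_CMA pageblocks, so the replacement page lives outside CMA.
         */
        return alloc_page(GFP_HIGHUSER);
}

static int migrate_page_out_of_cma(struct page *page)
{
        LIST_HEAD(pagelist);
        int ret;

        if (!is_migrate_cma_page(page))
                return 0;

        /* Ignore huge pages for now, as the IOMMU code does */
        if (PageCompound(page))
                return -EBUSY;

        lru_add_drain();
        ret = isolate_lru_page(page);
        if (ret)
                return ret;

        list_add(&page->lru, &pagelist);
        put_page(page);         /* drop the caller's get_user_pages() reference */

        ret = migrate_pages(&pagelist, new_non_cma_page, NULL, 0,
                            MIGRATE_SYNC, MR_CONTIG_RANGE);
        if (ret) {
                if (!list_empty(&pagelist))
                        putback_movable_pages(&pagelist);
                return -EBUSY;
        }

        return 0;
}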
The problem mentioned is similar to VFIO. With VFIO we do pin the guest
pages, which is similar to what the -realtime mlock=on option of QEMU does.
We can end up backing guest RAM with pages from the CMA area, and these are
different QEMU options that end up pinning the guest pages for the lifetime
of the guest.
-aneesh