The patch titled
     Subject: huegtlbfs: fix page leak during migration of file pages
has been added to the -mm tree.  Its filename is
     huegtlbfs-fix-page-leak-during-migration-of-file-pages.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/huegtlbfs-fix-page-leak-during-migration-of-file-pages.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/huegtlbfs-fix-page-leak-during-migration-of-file-pages.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
Subject: huegtlbfs: fix page leak during migration of file pages

Files can be created and mapped in an explicitly mounted hugetlbfs
filesystem.  If pages in such files are migrated, the filesystem usage
will not be decremented for the associated pages.  This can result in
mmap or page allocation failures as it appears there are fewer pages in
the filesystem than there should be.

For example, a test program which hole punches, faults and migrates pages
in such a file (1G in size) will eventually fail because it cannot
allocate a page.  Reported counts and usage at time of failure:

node0
537     free_hugepages
1024    nr_hugepages
0       surplus_hugepages
node1
1000    free_hugepages
1024    nr_hugepages
0       surplus_hugepages

Filesystem            Size  Used Avail Use% Mounted on
nodev                 4.0G  4.0G     0 100% /var/opt/hugepool

Note that the filesystem shows 4G of pages used, while actual usage is
511 pages (just under 1G).  Failed trying to allocate page 512.

If a hugetlb page is associated with an explicitly mounted filesystem,
this information is contained in the page_private field.  At migration
time, this information is not preserved.  To fix, simply transfer
page_private from old to new page at migration time if necessary.

Also, migrate_page_states() unconditionally clears page_private and
PagePrivate of the old page.  It is unlikely, but possible that these
fields could be non-NULL and are needed at hugetlb free page time.  So,
do not touch these fields for hugetlb pages.

Link: http://lkml.kernel.org/r/20190130211443.16678-1-mike.kravetz@xxxxxxxxxx
Fixes: 290408d4a250 ("hugetlb: hugepage migration core")
Signed-off-by: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxxxx>
Cc: Naoya Horiguchi <n-horiguchi@xxxxxxxxxxxxx>
Cc: Andrea Arcangeli <aarcange@xxxxxxxxxx>
Cc: "Kirill A . Shutemov" <kirill.shutemov@xxxxxxxxxxxxxxx>
Cc: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
Cc: Davidlohr Bueso <dave@xxxxxxxxxxxx>
Cc: <stable@xxxxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 fs/hugetlbfs/inode.c |   10 ++++++++++
 mm/migrate.c         |   10 ++++++++--
 2 files changed, 18 insertions(+), 2 deletions(-)

--- a/fs/hugetlbfs/inode.c~huegtlbfs-fix-page-leak-during-migration-of-file-pages
+++ a/fs/hugetlbfs/inode.c
@@ -859,6 +859,16 @@ static int hugetlbfs_migrate_page(struct
 	rc = migrate_huge_page_move_mapping(mapping, newpage, page);
 	if (rc != MIGRATEPAGE_SUCCESS)
 		return rc;
+
+	/*
+	 * page_private is subpool pointer in hugetlb pages, transfer
+	 * if needed.
+	 */
+	if (page_private(page) && !page_private(newpage)) {
+		set_page_private(newpage, page_private(page));
+		set_page_private(page, 0);
+	}
+
 	if (mode != MIGRATE_SYNC_NO_COPY)
 		migrate_page_copy(newpage, page);
 	else
--- a/mm/migrate.c~huegtlbfs-fix-page-leak-during-migration-of-file-pages
+++ a/mm/migrate.c
@@ -641,8 +641,14 @@ void migrate_page_states(struct page *ne
 	 */
 	if (PageSwapCache(page))
 		ClearPageSwapCache(page);
-	ClearPagePrivate(page);
-	set_page_private(page, 0);
+	/*
+	 * Unlikely, but PagePrivate and page_private could potentially
+	 * contain information needed at hugetlb free page time.
+	 */
+	if (!PageHuge(page)) {
+		ClearPagePrivate(page);
+		set_page_private(page, 0);
+	}
 
 	/*
 	 * If any waiters have accumulated on the new page then
_

Patches currently in -mm which might be from mike.kravetz@xxxxxxxxxx are

huegtlbfs-fix-page-leak-during-migration-of-file-pages.patch
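
For reference, the failure scenario described above (hole punch, fault, migrate
on a file in an explicitly mounted hugetlbfs) can be exercised from userspace
along the following lines.  This is an illustrative sketch only, not the test
program from the report: it assumes a hugetlbfs mount at /var/opt/hugepool, a
hypothetical file name "testfile", 2MB huge pages, two NUMA nodes (0 and 1),
and libnuma's migrate_pages() wrapper (link with -lnuma).

/*
 * Illustrative sketch only -- not the original test program.
 * Assumes /var/opt/hugepool is an explicitly mounted hugetlbfs,
 * 2MB huge pages, nodes 0 and 1 present, and libnuma (-lnuma).
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <numaif.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define HPAGE_SIZE	(2UL << 20)
#define NR_PAGES	512UL		/* 1G file, as in the report */

int main(void)
{
	size_t len = NR_PAGES * HPAGE_SIZE;
	unsigned long from = 1UL << 0, to = 1UL << 1;
	unsigned long off;
	char *addr;
	int fd, i;

	fd = open("/var/opt/hugepool/testfile", O_CREAT | O_RDWR, 0644);
	if (fd < 0 || ftruncate(fd, len) < 0)
		return 1;

	addr = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (addr == MAP_FAILED)
		return 1;

	for (i = 0; i < 100; i++) {
		/* punch out all pages, then fault them back in */
		fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
			  0, len);
		for (off = 0; off < len; off += HPAGE_SIZE)
			addr[off] = 1;

		/* migrate this task's pages from node 0 to node 1 */
		if (migrate_pages(getpid(), 64, &from, &to) < 0)
			perror("migrate_pages");
	}
	return 0;
}

Without the page_private transfer in hugetlbfs_migrate_page(), each migrated
page leaves the filesystem's reserved-page accounting charged, so repeated
iterations of a loop like the one above eventually exhaust the mount even
though the huge pages themselves have been freed.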