+ replace-free-hugepage-folios-after-migration-fix-3.patch added to mm-unstable branch

The patch titled
     Subject: mm/hugetlb: prevent reuse of isolated free hugepages
has been added to the -mm mm-unstable branch.  Its filename is
     replace-free-hugepage-folios-after-migration-fix-3.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/replace-free-hugepage-folios-after-migration-fix-3.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: yangge <yangge1116@xxxxxxx>
Subject: mm/hugetlb: prevent reuse of isolated free hugepages
Date: Fri, 10 Jan 2025 10:56:06 +0800

When there are free hugetlb folios in the hugetlb pool, new folios are
allocated from that pool during the migration of in-use hugetlb folios.
After the migration is completed, the old folios are released back to
the free hugetlb pool.  However, once back in the pool, these old folios
may be reallocated.  When replace_free_hugepage_folios() is executed
later, it then cannot release these old folios back to the buddy system.

As discussed with David in [1], when alloc_contig_range() is used to
migrate multiple in-use hugetlb pages, it can lead to the issue described
above.  For example:

[huge 0] [huge 1]

To migrate huge 0, we obtain huge x from the pool.  After the migration is
completed, we return the now-freed huge 0 back to the pool.  When it's
time to migrate huge 1, we can simply reuse the now-freed huge 0 from the
pool.  As a result, when replace_free_hugepage_folios() is executed, it
cannot release huge 0 back to the buddy system.

To solve the problem above, we should prevent reuse of isolated free
hugepages.

Link: https://lore.kernel.org/lkml/1734503588-16254-1-git-send-email-yangge1116@xxxxxxx/
Link: https://lkml.kernel.org/r/1736477766-23525-1-git-send-email-yangge1116@xxxxxxx
Fixes: 08d312ee4c0a ("mm: replace free hugepage folios after migration")
Signed-off-by: yangge <yangge1116@xxxxxxx>
Cc: Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx>
Cc: David Hildenbrand <david@xxxxxxxxxx>
Cc: Muchun Song <muchun.song@xxxxxxxxx>
Cc: SeongJae Park <sj@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/hugetlb.c |    4 ++++
 1 file changed, 4 insertions(+)

--- a/mm/hugetlb.c~replace-free-hugepage-folios-after-migration-fix-3
+++ a/mm/hugetlb.c
@@ -48,6 +48,7 @@
 #include <linux/page_owner.h>
 #include "internal.h"
 #include "hugetlb_vmemmap.h"
+#include <linux/page-isolation.h>
 
 int hugetlb_max_hstate __read_mostly;
 unsigned int default_hstate_idx;
@@ -1336,6 +1337,9 @@ static struct folio *dequeue_hugetlb_fol
 		if (folio_test_hwpoison(folio))
 			continue;
 
+		if (is_migrate_isolate_page(&folio->page))
+			continue;
+
 		list_move(&folio->lru, &h->hugepage_activelist);
 		folio_ref_unfreeze(folio, 1);
 		folio_clear_hugetlb_freed(folio);
_

Patches currently in -mm which might be from yangge1116@xxxxxxx are

replace-free-hugepage-folios-after-migration.patch
replace-free-hugepage-folios-after-migration-fix-3.patch
mm-compaction-skip-memory-compaction-when-there-are-not-enough-migratable-pages.patch




