+ hugetlb-memcg-account-hugetlb-backed-memory-in-memory-controller-fix.patch added to mm-unstable branch

The patch titled
     Subject: memcontrol: only transfer the memcg data for migration
has been added to the -mm mm-unstable branch.  Its filename is
     hugetlb-memcg-account-hugetlb-backed-memory-in-memory-controller-fix.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/hugetlb-memcg-account-hugetlb-backed-memory-in-memory-controller-fix.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Nhat Pham <nphamcs@xxxxxxxxx>
Subject: memcontrol: only transfer the memcg data for migration
Date: Tue, 3 Oct 2023 16:14:22 -0700

For most migration use cases, transfer only the memcg data from the old
folio to the new folio, and clear the old folio's memcg data; no charging
or uncharging is done.  These use cases include the new hugetlb memcg
accounting behavior (which was not handled previously).

This shaves some work off the migration path and avoids temporarily
double-charging a folio during its migration.
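
Illustratively (a toy userspace model with made-up names such as
toy_folio and pages_charged, not the kernel implementation), the
transfer-only path simply moves the memcg association from the old folio
to the new one and never touches the counters, so the group's charge
stays constant throughout:

/*
 * Toy model of the transfer-only migration path.  "memcg_data" and
 * "pages_charged" are simplified stand-ins for folio->memcg_data and
 * the memcg page counters; this sketches the semantics only.
 */
#include <stdio.h>

struct toy_memcg { long pages_charged; };
struct toy_folio { struct toy_memcg *memcg_data; long nr_pages; };

/* Model of mem_cgroup_migrate(): move the association, leave counters alone. */
static void toy_migrate(struct toy_folio *old, struct toy_folio *new)
{
	new->memcg_data = old->memcg_data;
	old->memcg_data = NULL;
	/* no change to pages_charged: no charge, no uncharge */
}

int main(void)
{
	struct toy_memcg cg = { .pages_charged = 512 };
	struct toy_folio old = { .memcg_data = &cg, .nr_pages = 512 };
	struct toy_folio new = { .memcg_data = NULL, .nr_pages = 512 };

	toy_migrate(&old, &new);
	/* The charge stays at 512 pages for the whole migration. */
	printf("charged: %ld, new owns memcg: %d, old cleared: %d\n",
	       cg.pages_charged, new.memcg_data == &cg, old.memcg_data == NULL);
	return 0;
}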

The only exception is replace_page_cache_folio(), which keeps using the
old mem_cgroup_migrate() (now renamed to mem_cgroup_replace_folio()).  In
that context, the old page is not isolated as thoroughly as it is during
migration, so the new implementation cannot be used there directly.
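
For contrast, a sketch of the replace-path semantics in the same toy
model (again with made-up names, not kernel code): the new folio is
charged up front and the old folio only gives its charge back when it is
freed, so both copies are briefly charged -- acceptable for
replace_page_cache_folio(), but exactly the transient double charge the
migration path now avoids:

#include <stdio.h>

struct toy_memcg { long pages_charged; };
struct toy_folio { struct toy_memcg *memcg_data; long nr_pages; };

/* Model of mem_cgroup_replace_folio(): charge @new immediately;
 * @old keeps its charge until it is freed. */
static void toy_replace_folio(struct toy_folio *old, struct toy_folio *new)
{
	struct toy_memcg *cg = old->memcg_data;

	cg->pages_charged += new->nr_pages;
	new->memcg_data = cg;
}

/* Model of the later free path, which uncharges the old folio. */
static void toy_uncharge_on_free(struct toy_folio *folio)
{
	folio->memcg_data->pages_charged -= folio->nr_pages;
	folio->memcg_data = NULL;
}

int main(void)
{
	struct toy_memcg cg = { .pages_charged = 1 };
	struct toy_folio old = { .memcg_data = &cg, .nr_pages = 1 };
	struct toy_folio new = { .memcg_data = NULL, .nr_pages = 1 };

	toy_replace_folio(&old, &new);
	printf("after replace: %ld page(s) charged\n", cg.pages_charged);   /* 2 */
	toy_uncharge_on_free(&old);
	printf("after old freed: %ld page(s) charged\n", cg.pages_charged); /* 1 */
	return 0;
}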

This patch is the result of the following discussion on the new hugetlb
memcg accounting behavior:

https://lore.kernel.org/lkml/20231003171329.GB314430@monkey/

Link: https://lkml.kernel.org/r/20231003231422.4046187-1-nphamcs@xxxxxxxxx
Signed-off-by: Nhat Pham <nphamcs@xxxxxxxxx>
Suggested-by: Johannes Weiner <hannes@xxxxxxxxxxx>
Reported-by: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
Closes: https://lore.kernel.org/lkml/20231003171329.GB314430@monkey/
Cc: Frank van der Linden <fvdl@xxxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxx>
Cc: Muchun Song <muchun.song@xxxxxxxxx>
Cc: Rik van Riel <riel@xxxxxxxxxxx>
Cc: Roman Gushchin <roman.gushchin@xxxxxxxxx>
Cc: Shakeel Butt <shakeelb@xxxxxxxxxx>
Cc: Shuah Khan <shuah@xxxxxxxxxx>
Cc: Tejun Heo <tj@xxxxxxxxxx>
Cc: Yosry Ahmed <yosryahmed@xxxxxxxxxx>
Cc: Zefan Li <lizefan.x@xxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 include/linux/memcontrol.h |    7 +++++
 mm/filemap.c               |    2 -
 mm/memcontrol.c            |   45 ++++++++++++++++++++++++++++++++---
 mm/migrate.c               |    3 --
 4 files changed, 51 insertions(+), 6 deletions(-)

--- a/include/linux/memcontrol.h~hugetlb-memcg-account-hugetlb-backed-memory-in-memory-controller-fix
+++ a/include/linux/memcontrol.h
@@ -727,6 +727,8 @@ static inline void mem_cgroup_uncharge_l
 
 void mem_cgroup_cancel_charge(struct mem_cgroup *memcg, unsigned int nr_pages);
 
+void mem_cgroup_replace_folio(struct folio *old, struct folio *new);
+
 void mem_cgroup_migrate(struct folio *old, struct folio *new);
 
 /**
@@ -1310,6 +1312,11 @@ static inline void mem_cgroup_cancel_cha
 {
 }
 
+static inline void mem_cgroup_replace_folio(struct folio *old,
+		struct folio *new)
+{
+}
+
 static inline void mem_cgroup_migrate(struct folio *old, struct folio *new)
 {
 }
--- a/mm/filemap.c~hugetlb-memcg-account-hugetlb-backed-memory-in-memory-controller-fix
+++ a/mm/filemap.c
@@ -816,7 +816,7 @@ void replace_page_cache_folio(struct fol
 	new->mapping = mapping;
 	new->index = offset;
 
-	mem_cgroup_migrate(old, new);
+	mem_cgroup_replace_folio(old, new);
 
 	xas_lock_irq(&xas);
 	xas_store(&xas, new);
--- a/mm/memcontrol.c~hugetlb-memcg-account-hugetlb-backed-memory-in-memory-controller-fix
+++ a/mm/memcontrol.c
@@ -7471,16 +7471,17 @@ void __mem_cgroup_uncharge_list(struct l
 }
 
 /**
- * mem_cgroup_migrate - Charge a folio's replacement.
+ * mem_cgroup_replace_folio - Charge a folio's replacement.
  * @old: Currently circulating folio.
  * @new: Replacement folio.
  *
  * Charge @new as a replacement folio for @old. @old will
- * be uncharged upon free.
+ * be uncharged upon free. This is only used by the page cache
+ * (in replace_page_cache_folio()).
  *
  * Both folios must be locked, @new->mapping must be set up.
  */
-void mem_cgroup_migrate(struct folio *old, struct folio *new)
+void mem_cgroup_replace_folio(struct folio *old, struct folio *new)
 {
 	struct mem_cgroup *memcg;
 	long nr_pages = folio_nr_pages(new);
@@ -7519,6 +7520,44 @@ void mem_cgroup_migrate(struct folio *ol
 	local_irq_restore(flags);
 }
 
+/**
+ * mem_cgroup_migrate - Transfer the memcg data from the old to the new folio.
+ * @old: Currently circulating folio.
+ * @new: Replacement folio.
+ *
+ * Transfer the memcg data from the old folio to the new folio for migration.
+ * The old folio's data info will be cleared. Note that the memory counters
+ * will remain unchanged throughout the process.
+ *
+ * Both folios must be locked, @new->mapping must be set up.
+ */
+void mem_cgroup_migrate(struct folio *old, struct folio *new)
+{
+	struct mem_cgroup *memcg;
+
+	VM_BUG_ON_FOLIO(!folio_test_locked(old), old);
+	VM_BUG_ON_FOLIO(!folio_test_locked(new), new);
+	VM_BUG_ON_FOLIO(folio_test_anon(old) != folio_test_anon(new), new);
+	VM_BUG_ON_FOLIO(folio_nr_pages(old) != folio_nr_pages(new), new);
+
+	if (mem_cgroup_disabled())
+		return;
+
+	memcg = folio_memcg(old);
+	/*
+	 * Note that it is normal to see !memcg for a hugetlb folio.
+	 * It could have been allocated when memory_hugetlb_accounting was not
+	 * selected, for e.g.
+	 */
+	VM_WARN_ON_ONCE_FOLIO(!memcg, old);
+	if (!memcg)
+		return;
+
+	/* Transfer the charge and the css ref */
+	commit_charge(new, memcg);
+	old->memcg_data = 0;
+}
+
 DEFINE_STATIC_KEY_FALSE(memcg_sockets_enabled_key);
 EXPORT_SYMBOL(memcg_sockets_enabled_key);
 
--- a/mm/migrate.c~hugetlb-memcg-account-hugetlb-backed-memory-in-memory-controller-fix
+++ a/mm/migrate.c
@@ -633,8 +633,7 @@ void folio_migrate_flags(struct folio *n
 
 	folio_copy_owner(newfolio, folio);
 
-	if (!folio_test_hugetlb(folio))
-		mem_cgroup_migrate(folio, newfolio);
+	mem_cgroup_migrate(folio, newfolio);
 }
 EXPORT_SYMBOL(folio_migrate_flags);
 
_

Patches currently in -mm which might be from nphamcs@xxxxxxxxx are

zswap-change-zswaps-default-allocator-to-zsmalloc.patch
zswap-shrinks-zswap-pool-based-on-memory-pressure.patch
memcontrol-add-helpers-for-hugetlb-memcg-accounting.patch
hugetlb-memcg-account-hugetlb-backed-memory-in-memory-controller.patch
hugetlb-memcg-account-hugetlb-backed-memory-in-memory-controller-fix.patch
selftests-add-a-selftest-to-verify-hugetlb-usage-in-memcg.patch
