+ mm-hugetlb_cgroup-introduce-peak-and-rsvdpeak-to-v2.patch added to mm-unstable branch

The patch titled
     Subject: mm/hugetlb_cgroup: introduce peak and rsvd.peak to v2
has been added to the -mm mm-unstable branch.  Its filename is
     mm-hugetlb_cgroup-introduce-peak-and-rsvdpeak-to-v2.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-hugetlb_cgroup-introduce-peak-and-rsvdpeak-to-v2.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Xiu Jianfeng <xiujianfeng@xxxxxxxxxx>
Subject: mm/hugetlb_cgroup: introduce peak and rsvd.peak to v2
Date: Tue, 2 Jul 2024 12:57:28 +0000

Introduce peak and rsvd.peak to the cgroup v2 hugetlb controller to show
the historical maximum usage of resources, since in some scenarios the
value of max/rsvd.max needs to be configured based on peak usage.

Since HugeTLB doesn't support page reclaim, enforcing the limit at page
fault time means the application receives a SIGBUS signal if it tries to
fault in HugeTLB pages beyond its limit.  Therefore the application needs
to know exactly how many HugeTLB pages it will use beforehand, and the
sysadmin needs to make sure that there are enough available on the machine
for all users so that no process gets SIGBUS.
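
As a purely illustrative sketch (not part of this patch), the hypothetical
program below shows that failure mode: assuming 2MB huge pages and a
hugetlb.2MB.max limit smaller than four pages already applied to the
calling task's cgroup, the fault taken in memset() kills the process with
SIGBUS instead of triggering reclaim.

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
        size_t len = 4 * 2 * 1024 * 1024;       /* four 2MB huge pages */
        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);

        if (p == MAP_FAILED) {
                perror("mmap");
                return 1;
        }

        /* Charging happens at fault time; exceeding hugetlb.2MB.max here
         * results in SIGBUS because HugeTLB pages cannot be reclaimed. */
        memset(p, 0, len);

        munmap(p, len);
        return 0;
}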

When running some open-source software, it may not be possible to know in
advance exactly how much hugetlb it consumes, so the max value cannot be
configured correctly.  If there is a peak metric, we can run the software
first and then configure max based on the observed peak value.  In cgroup
v1, the hugetlb controller provides the max_usage_in_bytes and
rsvd.max_usage_in_bytes interfaces to display the historical maximum
usage, so introduce peak and rsvd.peak to v2 to address this issue.
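
A minimal sketch of that workflow, assuming a v2 hierarchy mounted at
/sys/fs/cgroup, a cgroup named "test" and 2MB huge pages (all assumptions
made for illustration, not part of this patch): after a trial run of the
workload, copy the recorded peak into max.

#include <stdio.h>

int main(void)
{
        /* Hypothetical paths; adjust the mount point, cgroup name and
         * hugepage size to the actual configuration. */
        const char *peak = "/sys/fs/cgroup/test/hugetlb.2MB.peak";
        const char *max = "/sys/fs/cgroup/test/hugetlb.2MB.max";
        unsigned long long bytes;
        FILE *f;

        f = fopen(peak, "r");
        if (!f || fscanf(f, "%llu", &bytes) != 1) {
                perror(peak);
                return 1;
        }
        fclose(f);

        f = fopen(max, "w");
        if (!f || fprintf(f, "%llu\n", bytes) < 0) {
                perror(max);
                return 1;
        }
        fclose(f);

        printf("configured %s to %llu bytes\n", max, bytes);
        return 0;
}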

Link: https://lkml.kernel.org/r/20240702125728.2743143-1-xiujianfeng@xxxxxxxxxx
Signed-off-by: Xiu Jianfeng <xiujianfeng@xxxxxxxxxx>
Cc: Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx>
Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
Cc: Jonathan Corbet <corbet@xxxxxxx>
Cc: Miaohe Lin <linmiaohe@xxxxxxxxxx>
Cc: Sidhartha Kumar <sidhartha.kumar@xxxxxxxxxx>
Cc: Tejun Heo <tj@xxxxxxxxxx>
Cc: Zefan Li <lizefan.x@xxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 Documentation/admin-guide/cgroup-v2.rst |    8 ++++++++
 mm/hugetlb_cgroup.c                     |   19 +++++++++++++++++++
 2 files changed, 27 insertions(+)

--- a/Documentation/admin-guide/cgroup-v2.rst~mm-hugetlb_cgroup-introduce-peak-and-rsvdpeak-to-v2
+++ a/Documentation/admin-guide/cgroup-v2.rst
@@ -2590,6 +2590,14 @@ HugeTLB Interface Files
         hugetlb pages of <hugepagesize> in this cgroup.  Only active in
         use hugetlb pages are included.  The per-node values are in bytes.
 
+  hugetlb.<hugepagesize>.peak
+	Show historical maximum usage for "hugepagesize" hugetlb.  It exists
+	for all cgroups except root.
+
+  hugetlb.<hugepagesize>.rsvd.peak
+	Show historical maximum usage for "hugepagesize" hugetlb reservations.
+	It exists for all cgroups except root.
+
 Misc
 ----
 
--- a/mm/hugetlb_cgroup.c~mm-hugetlb_cgroup-introduce-peak-and-rsvdpeak-to-v2
+++ a/mm/hugetlb_cgroup.c
@@ -583,6 +583,13 @@ static int hugetlb_cgroup_read_u64_max(s
 		else
 			seq_printf(seq, "%llu\n", val * PAGE_SIZE);
 		break;
+	case RES_RSVD_MAX_USAGE:
+		counter = &h_cg->rsvd_hugepage[idx];
+		fallthrough;
+	case RES_MAX_USAGE:
+		val = (u64)counter->watermark;
+		seq_printf(seq, "%llu\n", val * PAGE_SIZE);
+		break;
 	default:
 		BUG();
 	}
@@ -739,6 +746,18 @@ static struct cftype hugetlb_dfl_tmpl[]
 		.seq_show = hugetlb_cgroup_read_u64_max,
 		.flags = CFTYPE_NOT_ON_ROOT,
 	},
+	{
+		.name = "peak",
+		.private = RES_MAX_USAGE,
+		.seq_show = hugetlb_cgroup_read_u64_max,
+		.flags = CFTYPE_NOT_ON_ROOT,
+	},
+	{
+		.name = "rsvd.peak",
+		.private = RES_RSVD_MAX_USAGE,
+		.seq_show = hugetlb_cgroup_read_u64_max,
+		.flags = CFTYPE_NOT_ON_ROOT,
+	},
 	{
 		.name = "events",
 		.seq_show = hugetlb_events_show,
_
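
As background for the hunks above, the following is an illustrative
userspace model (not kernel code) of the bookkeeping the new files expose:
the hugetlb cgroup's page counter tracks current usage together with a
watermark that only moves up, and peak/rsvd.peak report that watermark
multiplied by PAGE_SIZE.

#include <stdio.h>

struct counter {
        unsigned long usage;            /* pages currently charged */
        unsigned long watermark;        /* historical maximum of usage */
};

static void charge(struct counter *c, unsigned long pages)
{
        c->usage += pages;
        if (c->usage > c->watermark)
                c->watermark = c->usage;
}

static void uncharge(struct counter *c, unsigned long pages)
{
        c->usage -= pages;              /* the watermark never goes down */
}

int main(void)
{
        struct counter c = { 0, 0 };

        charge(&c, 8);
        uncharge(&c, 5);
        charge(&c, 2);

        /* usage is now 5 pages, but peak still reports the high of 8 */
        printf("current=%lu peak=%lu\n", c.usage, c.watermark);
        return 0;
}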

Patches currently in -mm which might be from xiujianfeng@xxxxxxxxxx are

mm-memcg-remove-redundant-seq_buf_has_overflowed.patch
mm-memcg-adjust-the-warning-when-seq_buf-overflows.patch
mm-hugetlb_cgroup-introduce-peak-and-rsvdpeak-to-v2.patch




