[merged mm-stable] revert-mm-damon-lru_sort-adjust-local-variable-to-dynamic-allocation.patch removed from -mm tree

The quilt patch titled
     Subject: Revert "mm/damon/lru_sort: adjust local variable to dynamic allocation"
has been removed from the -mm tree.  Its filename was
     revert-mm-damon-lru_sort-adjust-local-variable-to-dynamic-allocation.patch

This patch was dropped because it was merged into the mm-stable branch
of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

------------------------------------------------------
From: SeongJae Park <sj@xxxxxxxxxx>
Subject: Revert "mm/damon/lru_sort: adjust local variable to dynamic allocation"
Date: Sun, 25 Aug 2024 21:23:23 -0700

This reverts commit 0742cadf5e4c ("mm/damon/lru_sort: adjust local
variable to dynamic allocation").

The commit was introduced to avoid unnecessary use of stack memory for the
per-scheme region priorities histogram buffer.  The fix itself is fine, but
its purpose is unclear unless the commit message is read alongside the code.
That is mainly because the buffer is a private field, hidden from DAMON API
users; the problem lies in the underlying data structure, not in the fix.

Now that the per-scheme histogram buffer is gone, the problem the commit was
fixing no longer exists.  The kmemdup() serves no purpose anymore and only
makes the code harder to understand.  Revert the fix.

Link: https://lkml.kernel.org/r/20240826042323.87025-5-sj@xxxxxxxxxx
Signed-off-by: SeongJae Park <sj@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/damon/lru_sort.c |   15 ++++-----------
 1 file changed, 4 insertions(+), 11 deletions(-)

--- a/mm/damon/lru_sort.c~revert-mm-damon-lru_sort-adjust-local-variable-to-dynamic-allocation
+++ a/mm/damon/lru_sort.c
@@ -148,17 +148,12 @@ static struct damon_target *target;
 static struct damos *damon_lru_sort_new_scheme(
 		struct damos_access_pattern *pattern, enum damos_action action)
 {
-	struct damos *damos;
-	struct damos_quota *quota = kmemdup(&damon_lru_sort_quota,
-				    sizeof(damon_lru_sort_quota), GFP_KERNEL);
-
-	if (!quota)
-		return NULL;
+	struct damos_quota quota = damon_lru_sort_quota;
 
 	/* Use half of total quota for hot/cold pages sorting */
-	quota->ms = quota->ms / 2;
+	quota.ms = quota.ms / 2;
 
-	damos = damon_new_scheme(
+	return damon_new_scheme(
 			/* find the pattern, and */
 			pattern,
 			/* (de)prioritize on LRU-lists */
@@ -166,12 +161,10 @@ static struct damos *damon_lru_sort_new_
 			/* for each aggregation interval */
 			0,
 			/* under the quota. */
-			quota,
+			&quota,
 			/* (De)activate this according to the watermarks. */
 			&damon_lru_sort_wmarks,
 			NUMA_NO_NODE);
-	kfree(quota);
-	return damos;
 }
 
 /* Create a DAMON-based operation scheme for hot memory regions */
_

Patches currently in -mm which might be from sj@xxxxxxxxxx are





