The patch titled
     Subject: mm/damon: improve damon_new_region strategy
has been added to the -mm mm-unstable branch.  Its filename is
     mm-damon-improve-damon_new_region-strategy.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-damon-improve-damon_new_region-strategy.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Dawei Li <set_pte_at@xxxxxxxxxxx>
Subject: mm/damon: improve damon_new_region strategy
Date: Mon, 12 Sep 2022 22:39:03 +0800

Kdamond is implemented as a periodic split-merge pattern, which may create
and destroy regions at high frequency (hundreds or even thousands of times
per second), depending on the number of regions and the aggregation
period.  In that case, kmalloc and kfree can bring speed and space
overheads, which can be reduced by using a private kmem cache.

Link: https://lkml.kernel.org/r/TYCP286MB2323DA1894FA55BB9CF90978CA449@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Signed-off-by: Dawei Li <set_pte_at@xxxxxxxxxxx>
Cc: SeongJae Park <sj@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/damon/core.c |   20 ++++++++++++++++++--
 1 file changed, 18 insertions(+), 2 deletions(-)

--- a/mm/damon/core.c~mm-damon-improve-damon_new_region-strategy
+++ a/mm/damon/core.c
@@ -29,6 +29,8 @@ static bool running_exclusive_ctxs;
 static DEFINE_MUTEX(damon_ops_lock);
 static struct damon_operations damon_registered_ops[NR_DAMON_OPS];
 
+static struct kmem_cache *damon_region_cache __ro_after_init;
+
 /* Should be called under damon_ops_lock with id smaller than NR_DAMON_OPS */
 static bool __damon_is_registered_ops(enum damon_ops_id id)
 {
@@ -119,7 +121,7 @@ struct damon_region *damon_new_region(un
 {
 	struct damon_region *region;
 
-	region = kmalloc(sizeof(*region), GFP_KERNEL);
+	region = kmem_cache_alloc(damon_region_cache, GFP_KERNEL);
 	if (!region)
 		return NULL;
 
@@ -148,7 +150,7 @@ static void damon_del_region(struct damo
 
 static void damon_free_region(struct damon_region *r)
 {
-	kfree(r);
+	kmem_cache_free(damon_region_cache, r);
 }
 
 void damon_destroy_region(struct damon_region *r, struct damon_target *t)
@@ -1279,4 +1281,18 @@ bool damon_find_biggest_system_ram(unsig
 	return true;
 }
 
+static int __init damon_init(void)
+{
+	damon_region_cache = kmem_cache_create("damon_region_cache", sizeof(struct damon_region),
+			0, 0, NULL);
+	if (unlikely(!damon_region_cache)) {
+		pr_err("creating damon_region_cache fails\n");
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+subsys_initcall(damon_init);
+
 #include "core-test.h"
_

Patches currently in -mm which might be from set_pte_at@xxxxxxxxxxx are

mm-damon-improve-damon_new_region-strategy.patch
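
For readers less familiar with the pattern the patch switches to, here is a
minimal, self-contained sketch of the private-kmem-cache idiom in a
hypothetical module.  Everything in it (example_obj, example_obj_cache, the
module itself) is illustrative only and is not part of the patch above; it
simply shows the kmem_cache_create()/kmem_cache_alloc()/kmem_cache_free()
lifecycle that the patch applies to struct damon_region.

/*
 * Illustrative sketch only -- not part of the patch above.  It demonstrates
 * the generic pattern of backing a small, frequently allocated object with
 * a dedicated kmem cache instead of kmalloc()/kfree().  All names here are
 * hypothetical.
 */
#include <linux/init.h>
#include <linux/module.h>
#include <linux/slab.h>

struct example_obj {
	unsigned long start;
	unsigned long end;
};

static struct kmem_cache *example_obj_cache;

static struct example_obj *example_obj_new(void)
{
	/* every object comes from the dedicated slab cache */
	return kmem_cache_alloc(example_obj_cache, GFP_KERNEL);
}

static void example_obj_free(struct example_obj *obj)
{
	kmem_cache_free(example_obj_cache, obj);
}

static int __init example_init(void)
{
	struct example_obj *obj;

	example_obj_cache = kmem_cache_create("example_obj_cache",
			sizeof(struct example_obj), 0, 0, NULL);
	if (!example_obj_cache)
		return -ENOMEM;

	/* smoke test: one allocate/free round trip through the new cache */
	obj = example_obj_new();
	if (obj)
		example_obj_free(obj);

	return 0;
}

static void __exit example_exit(void)
{
	kmem_cache_destroy(example_obj_cache);
}

module_init(example_init);
module_exit(example_exit);
MODULE_LICENSE("GPL");

Compared with kmalloc(), a dedicated cache sizes its slabs for the object
itself (kmalloc buckets typically round the size up), and it usually shows
up as its own entry in /proc/slabinfo, which makes the allocation churn of
hot, fixed-size objects easier to measure.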