This is a note to let you know that I've just added the patch titled

    mm/damon/core: make damon_start() waits until kdamond_fn() starts

to the 6.1-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     mm-damon-core-make-damon_start-waits-until-kdamond_fn-starts.patch
and it can be found in the queue-6.1 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.


>From 6376a824595607e99d032a39ba3394988b4fce96 Mon Sep 17 00:00:00 2001
From: SeongJae Park <sj@xxxxxxxxxx>
Date: Fri, 8 Dec 2023 17:50:18 +0000
Subject: mm/damon/core: make damon_start() waits until kdamond_fn() starts

From: SeongJae Park <sj@xxxxxxxxxx>

commit 6376a824595607e99d032a39ba3394988b4fce96 upstream.

The cleanup tasks of kdamond threads including reset of corresponding
DAMON context's ->kdamond field and decrease of global nr_running_ctxs
counter is supposed to be executed by kdamond_fn().  However, commit
0f91d13366a4 ("mm/damon: simplify stop mechanism") made neither
damon_start() nor damon_stop() ensure the corresponding kdamond has
started the execution of kdamond_fn().

As a result, the cleanup can be skipped if damon_stop() is called fast
enough after the previous damon_start().  Especially the skipped reset
of ->kdamond could cause a use-after-free.

Fix it by waiting for start of kdamond_fn() execution from
damon_start().

Link: https://lkml.kernel.org/r/20231208175018.63880-1-sj@xxxxxxxxxx
Fixes: 0f91d13366a4 ("mm/damon: simplify stop mechanism")
Signed-off-by: SeongJae Park <sj@xxxxxxxxxx>
Reported-by: Jakub Acs <acsjakub@xxxxxxxxx>
Cc: Changbin Du <changbin.du@xxxxxxxxx>
Cc: Jakub Acs <acsjakub@xxxxxxxxx>
Cc: <stable@xxxxxxxxxxxxxxx> # 5.15.x
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Signed-off-by: SeongJae Park <sj@xxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
 include/linux/damon.h |    3 +++
 mm/damon/core.c       |    7 +++++++
 2 files changed, 10 insertions(+)

--- a/include/linux/damon.h
+++ b/include/linux/damon.h
@@ -8,6 +8,7 @@
 #ifndef _DAMON_H_
 #define _DAMON_H_
 
+#include <linux/completion.h>
 #include <linux/mutex.h>
 #include <linux/time64.h>
 #include <linux/types.h>
@@ -452,6 +453,8 @@ struct damon_ctx {
 	/* private: internal use only */
 	struct timespec64 last_aggregation;
 	struct timespec64 last_ops_update;
+	/* for waiting until the execution of the kdamond_fn is started */
+	struct completion kdamond_started;
 
 	/* public: */
 	struct task_struct *kdamond;
--- a/mm/damon/core.c
+++ b/mm/damon/core.c
@@ -383,6 +383,8 @@ struct damon_ctx *damon_new_ctx(void)
 	if (!ctx)
 		return NULL;
 
+	init_completion(&ctx->kdamond_started);
+
 	ctx->attrs.sample_interval = 5 * 1000;
 	ctx->attrs.aggr_interval = 100 * 1000;
 	ctx->attrs.ops_update_interval = 60 * 1000 * 1000;
@@ -519,11 +521,14 @@ static int __damon_start(struct damon_ct
 	mutex_lock(&ctx->kdamond_lock);
 	if (!ctx->kdamond) {
 		err = 0;
+		reinit_completion(&ctx->kdamond_started);
 		ctx->kdamond = kthread_run(kdamond_fn, ctx, "kdamond.%d",
 				nr_running_ctxs);
 		if (IS_ERR(ctx->kdamond)) {
 			err = PTR_ERR(ctx->kdamond);
 			ctx->kdamond = NULL;
+		} else {
+			wait_for_completion(&ctx->kdamond_started);
 		}
 	}
 	mutex_unlock(&ctx->kdamond_lock);
@@ -1147,6 +1152,8 @@ static int kdamond_fn(void *data)
 
 	pr_debug("kdamond (%d) starts\n", current->pid);
 
+	complete(&ctx->kdamond_started);
+
 	if (ctx->ops.init)
 		ctx->ops.init(ctx);
 	if (ctx->callback.before_start && ctx->callback.before_start(ctx))


Patches currently in stable-queue which might be from sj@xxxxxxxxxx are

queue-6.1/mm-damon-core-make-damon_start-waits-until-kdamond_fn-starts.patch
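
For readers who want to see the shape of the handshake the patch adds, here is a
minimal userspace sketch.  A pthread mutex/condvar pair stands in for the
kernel's struct completion, and every identifier below (worker_fn, start_worker,
worker_started) is invented for the illustration; none of it is DAMON code.

/*
 * Illustrative userspace sketch only -- not DAMON code.  A pthread
 * mutex/condvar pair stands in for the kernel's struct completion, and
 * the identifiers (worker_fn, start_worker, worker_started) are made up
 * for the example.  Build with: cc -pthread sketch.c
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t worker_started = PTHREAD_COND_INITIALIZER;
static bool started;	/* plays the role of ctx->kdamond_started */

static void *worker_fn(void *arg)
{
	/* Counterpart of complete(&ctx->kdamond_started) at the top of
	 * kdamond_fn(): announce that the worker is really running. */
	pthread_mutex_lock(&lock);
	started = true;
	pthread_cond_signal(&worker_started);
	pthread_mutex_unlock(&lock);

	/* The worker's real job, and its own cleanup, would follow here. */
	return NULL;
}

static int start_worker(pthread_t *worker)
{
	int err;

	/* Counterpart of reinit_completion() before kthread_run(). */
	pthread_mutex_lock(&lock);
	started = false;
	pthread_mutex_unlock(&lock);

	err = pthread_create(worker, NULL, worker_fn, NULL);
	if (err)
		return err;

	/* Counterpart of wait_for_completion(&ctx->kdamond_started): do not
	 * return to the caller until worker_fn() has actually begun, so a
	 * stop issued right after this cannot race with a worker that has
	 * not yet taken responsibility for its own cleanup. */
	pthread_mutex_lock(&lock);
	while (!started)
		pthread_cond_wait(&worker_started, &lock);
	pthread_mutex_unlock(&lock);
	return 0;
}

int main(void)
{
	pthread_t worker;

	if (start_worker(&worker) == 0) {
		printf("worker observed as started\n");
		pthread_join(worker, NULL);
	}
	return 0;
}

The point mirrors the patch: start_worker() does not return until the worker has
started, which is exactly what __damon_start() gains by waiting on
ctx->kdamond_started before releasing kdamond_lock.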