This is a note to let you know that I've just added the patch titled

    mm/damon/core: make damon_start() waits until kdamond_fn() starts

to the 6.6-stable tree which can be found at:

    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     mm-damon-core-make-damon_start-waits-until-kdamond_f.patch
and it can be found in the queue-6.6 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.


commit 3e849a5b8acdcdfc99f1925220b31ab299747d5a
Author: SeongJae Park <sj@xxxxxxxxxx>
Date:   Fri Dec 8 17:50:18 2023 +0000

    mm/damon/core: make damon_start() waits until kdamond_fn() starts

    [ Upstream commit 6376a824595607e99d032a39ba3394988b4fce96 ]

    The cleanup tasks of kdamond threads including reset of corresponding
    DAMON context's ->kdamond field and decrease of global nr_running_ctxs
    counter is supposed to be executed by kdamond_fn().  However, commit
    0f91d13366a4 ("mm/damon: simplify stop mechanism") made neither
    damon_start() nor damon_stop() ensure the corresponding kdamond has
    started the execution of kdamond_fn().

    As a result, the cleanup can be skipped if damon_stop() is called fast
    enough after the previous damon_start().  Especially the skipped reset
    of ->kdamond could cause a use-after-free.

    Fix it by waiting for start of kdamond_fn() execution from
    damon_start().

    Link: https://lkml.kernel.org/r/20231208175018.63880-1-sj@xxxxxxxxxx
    Fixes: 0f91d13366a4 ("mm/damon: simplify stop mechanism")
    Signed-off-by: SeongJae Park <sj@xxxxxxxxxx>
    Reported-by: Jakub Acs <acsjakub@xxxxxxxxx>
    Cc: Changbin Du <changbin.du@xxxxxxxxx>
    Cc: Jakub Acs <acsjakub@xxxxxxxxx>
    Cc: <stable@xxxxxxxxxxxxxxx> # 5.15.x
    Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/include/linux/damon.h b/include/linux/damon.h
index 506118916378b..a953d7083cd59 100644
--- a/include/linux/damon.h
+++ b/include/linux/damon.h
@@ -534,6 +534,8 @@ struct damon_ctx {
	 * update
	 */
	unsigned long next_ops_update_sis;
+	/* for waiting until the execution of the kdamond_fn is started */
+	struct completion kdamond_started;

/* public: */
	struct task_struct *kdamond;
diff --git a/mm/damon/core.c b/mm/damon/core.c
index 30c93de59475f..aff611b6eafe1 100644
--- a/mm/damon/core.c
+++ b/mm/damon/core.c
@@ -423,6 +423,8 @@ struct damon_ctx *damon_new_ctx(void)
	if (!ctx)
		return NULL;

+	init_completion(&ctx->kdamond_started);
+
	ctx->attrs.sample_interval = 5 * 1000;
	ctx->attrs.aggr_interval = 100 * 1000;
	ctx->attrs.ops_update_interval = 60 * 1000 * 1000;
@@ -636,11 +638,14 @@ static int __damon_start(struct damon_ctx *ctx)
	mutex_lock(&ctx->kdamond_lock);
	if (!ctx->kdamond) {
		err = 0;
+		reinit_completion(&ctx->kdamond_started);
		ctx->kdamond = kthread_run(kdamond_fn, ctx, "kdamond.%d",
				nr_running_ctxs);
		if (IS_ERR(ctx->kdamond)) {
			err = PTR_ERR(ctx->kdamond);
			ctx->kdamond = NULL;
+		} else {
+			wait_for_completion(&ctx->kdamond_started);
		}
	}
	mutex_unlock(&ctx->kdamond_lock);
@@ -1347,6 +1352,7 @@ static int kdamond_fn(void *data)

	pr_debug("kdamond (%d) starts\n", current->pid);

+	complete(&ctx->kdamond_started);
	kdamond_init_intervals_sis(ctx);

	if (ctx->ops.init)
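
For readers unfamiliar with the completion API, the stand-alone sketch below
shows the same handshake the fix relies on: the kthread signals a completion
as soon as it begins running, and the code that created it blocks until that
signal arrives, so a racing stop can never observe a thread that has not yet
started.  This is only an illustrative module sketch (all demo_* names are
hypothetical), not part of the patch above.

#include <linux/completion.h>
#include <linux/delay.h>
#include <linux/err.h>
#include <linux/kthread.h>
#include <linux/module.h>

static struct completion demo_started;
static struct task_struct *demo_task;

static int demo_thread_fn(void *data)
{
	/* Signal the starter before doing any real work. */
	complete(&demo_started);

	while (!kthread_should_stop())
		msleep(100);	/* placeholder for the real work loop */

	return 0;
}

static int __init demo_init(void)
{
	init_completion(&demo_started);

	demo_task = kthread_run(demo_thread_fn, NULL, "demo_kthread");
	if (IS_ERR(demo_task))
		return PTR_ERR(demo_task);

	/* Do not return until the thread has actually started running;
	 * this mirrors what __damon_start() now does for kdamond_fn(). */
	wait_for_completion(&demo_started);
	return 0;
}

static void __exit demo_exit(void)
{
	kthread_stop(demo_task);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");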