When getting the mm_struct of a monitoring target process fails, there
will be no need to increase the access rate counter (nr_accesses) of
the regions for the process.  Hence, damon_va_check_accesses() skips
calling damon_update_region_access_rate() in that case.  This breaks
the assumption that damon_update_region_access_rate() is called for
every region, for every sampling interval.  Call the function for every
region even in that case.  This might increase the overhead in some
cases, but such cases would not be frequent, so no significant impact
is expected.

Signed-off-by: SeongJae Park <sj@xxxxxxxxxx>
---
 mm/damon/vaddr.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/mm/damon/vaddr.c b/mm/damon/vaddr.c
index 7fc0bda73b4c..e36303271f9d 100644
--- a/mm/damon/vaddr.c
+++ b/mm/damon/vaddr.c
@@ -563,6 +563,11 @@ static void __damon_va_check_access(struct mm_struct *mm,
 	static unsigned long last_folio_sz = PAGE_SIZE;
 	static bool last_accessed;
 
+	if (!mm) {
+		damon_update_region_access_rate(r, false);
+		return;
+	}
+
 	/* If the region is in the last checked page, reuse the result */
 	if (same_target && (ALIGN_DOWN(last_addr, last_folio_sz) ==
 				ALIGN_DOWN(r->sampling_addr, last_folio_sz))) {
@@ -586,15 +591,14 @@ static unsigned int damon_va_check_accesses(struct damon_ctx *ctx)
 
 	damon_for_each_target(t, ctx) {
 		mm = damon_get_mm(t);
-		if (!mm)
-			continue;
 		same_target = false;
 		damon_for_each_region(r, t) {
 			__damon_va_check_access(mm, r, same_target);
 			max_nr_accesses = max(r->nr_accesses, max_nr_accesses);
 			same_target = true;
 		}
-		mmput(mm);
+		if (mm)
+			mmput(mm);
 	}
 
 	return max_nr_accesses;
-- 
2.25.1
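
For readers unfamiliar with the assumption the commit message refers to, the
standalone userspace sketch below illustrates why a counter that is maintained
as a moving value must be updated for every region at every sampling interval,
even when the sample itself could not be taken.  The struct, the
toy_update_access_rate() helper, the WINDOW constant, and the basis-point
formula are all illustrative assumptions for this sketch, not DAMON's actual
implementation.

/*
 * Minimal userspace model (not kernel code) of a per-region access-rate
 * value that is assumed to be refreshed once per sampling interval.
 */
#include <stdio.h>

#define WINDOW	20	/* sampling intervals covered by the moving value */

struct toy_region {
	unsigned int rate_bp;	/* access rate in basis points (10000 = 100%) */
};

/* Assumed to run once per region, once per sampling interval. */
static void toy_update_access_rate(struct toy_region *r, int accessed)
{
	/* decay the old value, then add the contribution of this sample */
	r->rate_bp -= r->rate_bp / WINDOW;
	if (accessed)
		r->rate_bp += 10000 / WINDOW;
}

int main(void)
{
	struct toy_region skipped = { .rate_bp = 10000 };
	struct toy_region updated = { .rate_bp = 10000 };
	int interval;

	/*
	 * Simulate 20 intervals in which the sample cannot be taken (e.g.
	 * the target's mm_struct is unavailable).  Skipping the update
	 * freezes the stale 100% rate; calling the update with "not
	 * accessed" lets the rate decay as the consumer expects.
	 */
	for (interval = 0; interval < 20; interval++)
		toy_update_access_rate(&updated, 0);

	printf("skipped updates: %u bp, updated every interval: %u bp\n",
	       skipped.rate_bp, updated.rate_bp);
	return 0;
}

Compiled and run, the region whose updates were skipped still reports the old
(high) rate, while the region updated every interval decays toward zero, which
is the behavior the patched code preserves by passing "false" when no
mm_struct is available.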