Proactive compaction reuses the per-node kcompactd threads, so we should
also count the KCOMPACTD_MIGRATE_SCANNED and KCOMPACTD_FREE_SCANNED
events for proactive compaction.

Signed-off-by: Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx>
---
 mm/compaction.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/mm/compaction.c b/mm/compaction.c
index f8e8addc8664..62f6bb68c9cb 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -2659,6 +2659,11 @@ static void proactive_compact_node(pg_data_t *pgdat)
 
 		cc.zone = zone;
 		compact_zone(&cc, NULL);
+
+		count_compact_events(KCOMPACTD_MIGRATE_SCANNED,
+				     cc.total_migrate_scanned);
+		count_compact_events(KCOMPACTD_FREE_SCANNED,
+				     cc.total_free_scanned);
 	}
 }

-- 
2.27.0
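
For reference, these events fold into the same counters kcompactd
already updates, exported through /proc/vmstat as
compact_daemon_migrate_scanned and compact_daemon_free_scanned. Below is
a minimal userspace sketch (not part of the patch) that watches those
two counters; the vmstat field names are the upstream ones, the program
itself is only an illustration:

/* Print the kcompactd scan counters from /proc/vmstat. */
#include <stdio.h>
#include <string.h>

int main(void)
{
	char line[128];
	FILE *fp = fopen("/proc/vmstat", "r");

	if (!fp) {
		perror("fopen /proc/vmstat");
		return 1;
	}

	while (fgets(line, sizeof(line), fp)) {
		/* Match the two counters this patch now also updates
		 * from proactive compaction. */
		if (!strncmp(line, "compact_daemon_migrate_scanned", 30) ||
		    !strncmp(line, "compact_daemon_free_scanned", 27))
			fputs(line, stdout);
	}

	fclose(fp);
	return 0;
}

With this patch applied, raising /proc/sys/vm/compaction_proactiveness
to trigger proactive compaction should make these counters advance as
well, instead of only reflecting kcompactd_do_work() activity.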