The quilt patch titled
     Subject: delayacct: add memory reclaim delay in get_page_from_freelist
has been removed from the -mm tree.  Its filename was
     delayacct-add-memory-reclaim-delay-in-get_page_from_freelist.patch

This patch was dropped because it was merged into the mm-stable branch
of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

------------------------------------------------------
From: liwenyu <wenyuli@xxxxxxxxxxxxxxx>
Subject: delayacct: add memory reclaim delay in get_page_from_freelist
Date: Wed, 20 Sep 2023 17:38:49 +0800

The current memory reclaim delay statistics only count the direct memory
reclaim a task performs in do_try_to_free_pages().  On systems with NUMA
enabled, some tasks occasionally experience slower response times, yet the
total reclaim count does not increase; ftrace shows that node_reclaim has
occurred.  The memory reclaim that happens in get_page_from_freelist() is
likewise caused by heavy memory load.  To capture the impact of memory
reclaim on these tasks, this patch adds memory reclaim delay statistics
for __node_reclaim().

Link: https://lkml.kernel.org/r/181C946095F0252B+7cc60eca-1abf-4502-aad3-ffd8ef89d910@xxxxxxxxxxxxxxx
Signed-off-by: Wen Yu Li <wenyuli@xxxxxxxxxxxxxxx>
Cc: Balbir Singh <bsingharora@xxxxxxxxx>
Cc: <wangyun@xxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/vmscan.c |    2 ++
 1 file changed, 2 insertions(+)

--- a/mm/vmscan.c~delayacct-add-memory-reclaim-delay-in-get_page_from_freelist
+++ a/mm/vmscan.c
@@ -7326,6 +7326,7 @@ static int __node_reclaim(struct pglist_
 	cond_resched();
 	psi_memstall_enter(&pflags);
+	delayacct_freepages_start();
 	fs_reclaim_acquire(sc.gfp_mask);
 	/*
 	 * We need to be able to allocate from the reserves for RECLAIM_UNMAP
@@ -7348,6 +7349,7 @@ static int __node_reclaim(struct pglist_
 	memalloc_noreclaim_restore(noreclaim_flag);
 	fs_reclaim_release(sc.gfp_mask);
 	psi_memstall_leave(&pflags);
+	delayacct_freepages_end();
 	trace_mm_vmscan_node_reclaim_end(sc.nr_reclaimed);
_

Patches currently in -mm which might be from wenyuli@xxxxxxxxxxxxxxx are
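
[Editor's note] For readers less familiar with delay accounting, the sketch
below models in plain userspace C what the two hooks added by this patch
conceptually do: record a timestamp when reclaim starts and, when it ends,
accumulate the elapsed time and bump an episode counter for the task.  The
struct and function names here are illustrative only, not the kernel's; the
real implementation lives in include/linux/delayacct.h and kernel/delayacct.c.

	/*
	 * Simplified model of the bookkeeping bracketed by
	 * delayacct_freepages_start()/delayacct_freepages_end().
	 * Illustration only, not the kernel implementation.
	 */
	#include <stdint.h>
	#include <stdio.h>
	#include <time.h>

	struct task_delays {
		uint64_t freepages_start;  /* ns timestamp taken at reclaim entry */
		uint64_t freepages_delay;  /* total ns spent in reclaim */
		uint32_t freepages_count;  /* number of reclaim episodes */
	};

	static uint64_t now_ns(void)
	{
		struct timespec ts;

		clock_gettime(CLOCK_MONOTONIC, &ts);
		return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
	}

	/* Corresponds to where the patch places delayacct_freepages_start() */
	static void freepages_start(struct task_delays *d)
	{
		d->freepages_start = now_ns();
	}

	/* Corresponds to where the patch places delayacct_freepages_end() */
	static void freepages_end(struct task_delays *d)
	{
		d->freepages_delay += now_ns() - d->freepages_start;
		d->freepages_count++;
	}

	int main(void)
	{
		struct task_delays d = { 0 };

		freepages_start(&d);
		/* ... reclaim work would happen here (__node_reclaim) ... */
		freepages_end(&d);

		printf("reclaim episodes: %u, total delay: %llu ns\n",
		       d.freepages_count, (unsigned long long)d.freepages_delay);
		return 0;
	}

In the kernel, the accumulated freepages delay is kept per task and, when
delay accounting is built in (CONFIG_TASK_DELAY_ACCT) and enabled (for
example via the delayacct boot parameter or the kernel.task_delayacct
sysctl), it can be read over the taskstats interface, e.g. with the
getdelays tool under tools/accounting/ in the kernel source tree.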