The patch titled
     Subject: mm/page_alloc: fix tracepoint mm_page_alloc_zone_locked()
has been added to the -mm mm-unstable branch.  Its filename is
     mm-page_alloc-fix-tracepoint-mm_page_alloc_zone_locked.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-page_alloc-fix-tracepoint-mm_page_alloc_zone_locked.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Wonhyuk Yang <vvghjk1234@xxxxxxxxx>
Subject: mm/page_alloc: fix tracepoint mm_page_alloc_zone_locked()

Currently, the tracepoint mm_page_alloc_zone_locked() does not report
correct information.

First, when alloc_flags contains ALLOC_HARDER/ALLOC_CMA, a page can be
allocated from MIGRATE_HIGHATOMIC/MIGRATE_CMA.  Nevertheless, the
tracepoint reports the requested migratetype, not MIGRATE_HIGHATOMIC or
MIGRATE_CMA.

Second, since commit 44042b4498728 ("mm/page_alloc: allow high-order
pages to be stored on the per-cpu lists") the per-cpu lists can store
high-order pages, but the tracepoint decides whether an allocation is a
refill of the per-cpu list by comparing the requested order against 0.

To fix these problems, make mm_page_alloc_zone_locked() be called only
from __rmqueue_smallest(), where the correct migratetype is known.  A new
argument, percpu_refill, roughly indicates whether the allocation is a
refill of the per-cpu list.

Link: https://lkml.kernel.org/r/20220512025307.57924-1-vvghjk1234@xxxxxxxxx
Signed-off-by: Wonhyuk Yang <vvghjk1234@xxxxxxxxx>
Acked-by: Mel Gorman <mgorman@xxxxxxx>
Cc: Baik Song An <bsahn@xxxxxxxxxx>
Cc: Hong Yeon Kim <kimhy@xxxxxxxxxx>
Cc: Taeung Song <taeung@xxxxxxxxxxxxxxx>
Cc: <linuxgeek@xxxxxxxxxxxx>
Cc: Steven Rostedt <rostedt@xxxxxxxxxxx>
Cc: Ingo Molnar <mingo@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 include/trace/events/kmem.h |   14 +++++++++-----
 mm/page_alloc.c             |   13 +++++--------
 2 files changed, 14 insertions(+), 13 deletions(-)

--- a/include/trace/events/kmem.h~mm-page_alloc-fix-tracepoint-mm_page_alloc_zone_locked
+++ a/include/trace/events/kmem.h
@@ -229,20 +229,23 @@ TRACE_EVENT(mm_page_alloc,

 DECLARE_EVENT_CLASS(mm_page,

-	TP_PROTO(struct page *page, unsigned int order, int migratetype),
+	TP_PROTO(struct page *page, unsigned int order, int migratetype,
+		 int percpu_refill),

-	TP_ARGS(page, order, migratetype),
+	TP_ARGS(page, order, migratetype, percpu_refill),

 	TP_STRUCT__entry(
 		__field(	unsigned long,	pfn		)
 		__field(	unsigned int,	order		)
 		__field(	int,		migratetype	)
+		__field(	int,		percpu_refill	)
 	),

 	TP_fast_assign(
 		__entry->pfn		= page ? page_to_pfn(page) : -1UL;
 		__entry->order		= order;
 		__entry->migratetype	= migratetype;
+		__entry->percpu_refill	= percpu_refill;
 	),

 	TP_printk("page=%p pfn=0x%lx order=%u migratetype=%d percpu_refill=%d",
@@ -250,14 +253,15 @@ DECLARE_EVENT_CLASS(mm_page,
 		__entry->pfn != -1UL ? __entry->pfn : 0,
 		__entry->order,
 		__entry->migratetype,
-		__entry->order == 0)
+		__entry->percpu_refill)
 );

 DEFINE_EVENT(mm_page, mm_page_alloc_zone_locked,

-	TP_PROTO(struct page *page, unsigned int order, int migratetype),
+	TP_PROTO(struct page *page, unsigned int order, int migratetype,
+		 int percpu_refill),

-	TP_ARGS(page, order, migratetype)
+	TP_ARGS(page, order, migratetype, percpu_refill)
 );

 TRACE_EVENT(mm_page_pcpu_drain,

--- a/mm/page_alloc.c~mm-page_alloc-fix-tracepoint-mm_page_alloc_zone_locked
+++ a/mm/page_alloc.c
@@ -2474,6 +2474,9 @@ struct page *__rmqueue_smallest(struct z
 		del_page_from_free_list(page, zone, current_order);
 		expand(zone, page, order, current_order, migratetype);
 		set_pcppage_migratetype(page, migratetype);
+		trace_mm_page_alloc_zone_locked(page, order, migratetype,
+				pcp_allowed_order(order) &&
+				migratetype < MIGRATE_PCPTYPES);
 		return page;
 	}

@@ -2997,7 +3000,7 @@ __rmqueue(struct zone *zone, unsigned in
 		    zone_page_state(zone, NR_FREE_PAGES) / 2) {
 			page = __rmqueue_cma_fallback(zone, order);
 			if (page)
-				goto out;
+				return page;
 		}
 	}
 retry:
@@ -3010,9 +3013,6 @@ retry:
 				alloc_flags))
 			goto retry;
 	}
-out:
-	if (page)
-		trace_mm_page_alloc_zone_locked(page, order, migratetype);
 	return page;
 }

@@ -3673,11 +3673,8 @@ struct page *rmqueue_buddy(struct zone *
 	 * reserved for high-order atomic allocation, so order-0
 	 * request should skip it.
 	 */
-	if (order > 0 && alloc_flags & ALLOC_HARDER) {
+	if (order > 0 && alloc_flags & ALLOC_HARDER)
 		page = __rmqueue_smallest(zone, order, MIGRATE_HIGHATOMIC);
-		if (page)
-			trace_mm_page_alloc_zone_locked(page, order, migratetype);
-	}
 	if (!page) {
 		page = __rmqueue(zone, order, migratetype, alloc_flags);
 		if (!page) {
_

Patches currently in -mm which might be from vvghjk1234@xxxxxxxxx are

mm-page_alloc-cache-the-result-of-node_dirty_ok.patch
mm-page_alloc-fix-tracepoint-mm_page_alloc_zone_locked.patch
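
(Usage note, not part of the patch: once it is applied, the corrected
tracepoint can be observed through tracefs in the usual way.  The sketch
below assumes a kernel built with tracing support and tracefs mounted at
/sys/kernel/tracing; the sample line merely follows the TP_printk() format
shown above, and its values are made up.)

	# enable the tracepoint, exercise the page allocator, read the log
	echo 1 > /sys/kernel/tracing/events/kmem/mm_page_alloc_zone_locked/enable
	cat /sys/kernel/tracing/trace
	# expected record format (values illustrative only):
	#   mm_page_alloc_zone_locked: page=000000001a2b3c4d pfn=0x12345 order=0 migratetype=0 percpu_refill=1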