Sorry for the late reply, but I finally have time to catch up on my inbox.

On Thu, 12 May 2022 11:53:07 +0900
Wonhyuk Yang <vvghjk1234@xxxxxxxxx> wrote:

> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2476,6 +2476,9 @@ struct page *__rmqueue_smallest(struct zone *zone, unsigned int order,
>  		del_page_from_free_list(page, zone, current_order);
>  		expand(zone, page, order, current_order, migratetype);
>  		set_pcppage_migratetype(page, migratetype);
> +		trace_mm_page_alloc_zone_locked(page, order, migratetype,
> +				pcp_allowed_order(order) &&
> +				migratetype < MIGRATE_PCPTYPES);
>  		return page;
>  	}

It would make more sense to put this logic into the TP_fast_assign() if
possible. This code is added at the location of execution, and even though
it may not run while tracing is disabled, it does affect the icache.

Could you pass in the order (you already have migratetype) and then in the
trace event have:

	TP_fast_assign(
		__entry->pfn		= page ? page_to_pfn(page) : -1UL;
		__entry->order		= order;
		__entry->migratetype	= migratetype;
+		__entry->percpu_refill	= pcp_allowed_order(order) &&
+					  migratetype < MIGRATE_PCPTYPES;
	),

??

-- Steve
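
In case it helps, here is a rough, untested sketch of what the whole event
could then look like. This is only an illustration, not the final form: it
assumes the event becomes a standalone TRACE_EVENT, that the field name
percpu_refill is kept, and that pcp_allowed_order() and MIGRATE_PCPTYPES are
usable from include/trace/events/kmem.h (if not, they would need to move to
a shared header):

	TRACE_EVENT(mm_page_alloc_zone_locked,

		/* Only page, order and migratetype are passed in; the
		 * pcp flag is derived inside the tracepoint itself.
		 */
		TP_PROTO(struct page *page, unsigned int order, int migratetype),

		TP_ARGS(page, order, migratetype),

		TP_STRUCT__entry(
			__field(unsigned long,	pfn)
			__field(unsigned int,	order)
			__field(int,		migratetype)
			__field(int,		percpu_refill)
		),

		TP_fast_assign(
			__entry->pfn		= page ? page_to_pfn(page) : -1UL;
			__entry->order		= order;
			__entry->migratetype	= migratetype;
			/* Computed here, so it only runs when the event fires */
			__entry->percpu_refill	= pcp_allowed_order(order) &&
						  migratetype < MIGRATE_PCPTYPES;
		),

		TP_printk("page=%p pfn=0x%lx order=%u migratetype=%d percpu_refill=%d",
			  __entry->pfn != -1UL ? pfn_to_page(__entry->pfn) : NULL,
			  __entry->pfn != -1UL ? __entry->pfn : 0,
			  __entry->order,
			  __entry->migratetype,
			  __entry->percpu_refill)
	);

The call site in __rmqueue_smallest() then stays at
trace_mm_page_alloc_zone_locked(page, order, migratetype), and the extra
test only runs inside the generated probe when the event is enabled,
instead of being inlined into the allocation path.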