On Mon, 28 Dec 2020 21:29:01 +0800 Hailong liu <carver4lio@xxxxxxx> wrote:

> The trace point *trace_mm_page_alloc_zone_locked()* in __rmqueue() does not
> currently cover all branches.  Add the missing tracepoint and check the page
> before doing that.
>
> ...
>
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2871,7 +2871,7 @@ __rmqueue(struct zone *zone, unsigned int order, int migratetype,
>  			zone_page_state(zone, NR_FREE_PAGES) / 2) {
>  		page = __rmqueue_cma_fallback(zone, order);
>  		if (page)
> -			return page;
> +			goto out;
>  	}
>  #endif
>  retry:
> @@ -2884,8 +2884,9 @@ __rmqueue(struct zone *zone, unsigned int order, int migratetype,
>  						alloc_flags))
>  			goto retry;
>  	}
> -
> -	trace_mm_page_alloc_zone_locked(page, order, migratetype);
> +out:
> +	if (page)
> +		trace_mm_page_alloc_zone_locked(page, order, migratetype);
>  	return page;
>  }

Looks right to me, but it generates a warning.  Using IS_ENABLED() works
around it.


From: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Subject: mm-page_alloc-add-a-missing-mm_page_alloc_zone_locked-tracepoint-fix

use IS_ENABLED() to suppress warning

mm/page_alloc.c: In function ‘__rmqueue’:
mm/page_alloc.c:2889:1: warning: label ‘out’ defined but not used [-Wunused-label]
 out:
 ^~~

Cc: Hailong liu <liu.hailong6@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/page_alloc.c |   26 +++++++++++++-------------
 1 file changed, 13 insertions(+), 13 deletions(-)

--- a/mm/page_alloc.c~mm-page_alloc-add-a-missing-mm_page_alloc_zone_locked-tracepoint-fix
+++ a/mm/page_alloc.c
@@ -2862,20 +2862,20 @@ __rmqueue(struct zone *zone, unsigned in
 {
 	struct page *page;

-#ifdef CONFIG_CMA
-	/*
-	 * Balance movable allocations between regular and CMA areas by
-	 * allocating from CMA when over half of the zone's free memory
-	 * is in the CMA area.
-	 */
-	if (alloc_flags & ALLOC_CMA &&
-	    zone_page_state(zone, NR_FREE_CMA_PAGES) >
-	    zone_page_state(zone, NR_FREE_PAGES) / 2) {
-		page = __rmqueue_cma_fallback(zone, order);
-		if (page)
-			goto out;
+	if (IS_ENABLED(CONFIG_CMA)) {
+		/*
+		 * Balance movable allocations between regular and CMA areas by
+		 * allocating from CMA when over half of the zone's free memory
+		 * is in the CMA area.
+		 */
+		if (alloc_flags & ALLOC_CMA &&
+		    zone_page_state(zone, NR_FREE_CMA_PAGES) >
+		    zone_page_state(zone, NR_FREE_PAGES) / 2) {
+			page = __rmqueue_cma_fallback(zone, order);
+			if (page)
+				goto out;
+		}
 	}
-#endif
 retry:
 	page = __rmqueue_smallest(zone, order, migratetype);
 	if (unlikely(!page)) {
_