On Fri, 11 Jun 2021 14:38:34 +0800 chengkaitao <pilgrimtao@xxxxxxxxx> wrote:

> From: chengkaitao <pilgrimtao@xxxxxxxxx>
>
> 1. Already has (order >= pageblock_order / 2) here, we don't neet
> (order >= pageblock_order)
> 2. set function can_steal_fallback to inline
>
> ...
>
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2619,18 +2619,8 @@ static void change_pageblock_range(struct page *pageblock_page,
>   * is worse than movable allocations stealing from unmovable and reclaimable
>   * pageblocks.
>   */
> -static bool can_steal_fallback(unsigned int order, int start_mt)
> +static inline bool can_steal_fallback(unsigned int order, int start_mt)
>  {
> -	/*
> -	 * Leaving this order check is intended, although there is
> -	 * relaxed order check in next check. The reason is that
> -	 * we can actually steal whole pageblock if this condition met,
> -	 * but, below check doesn't guarantee it and that is just heuristic
> -	 * so could be changed anytime.
> -	 */
> -	if (order >= pageblock_order)
> -		return true;
> -
> 	if (order >= pageblock_order / 2 ||
> 	    start_mt == MIGRATE_RECLAIMABLE ||
> 	    start_mt == MIGRATE_UNMOVABLE ||

Well, that redundant check was put there deliberately, as the comment
explains.  The reasoning is perhaps a little dubious, but it seems that
the compiler has optimized away the redundant check anyway (your patch
doesn't alter code size).
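
For what it's worth, here is a minimal standalone sketch of why the removed
test is logically redundant (this is not the kernel function: PB is a
made-up stand-in for pageblock_order, and the start_mt /
page_group_by_mobility_disabled terms are dropped so only the order logic
is exercised).  For any unsigned order, order >= PB implies order >= PB / 2,
so both variants return the same value and the compiler can fold the first
branch into the second.  The removed comment's point was only that the first
check documents the "can steal the whole pageblock" case, not that it
changes the result.

/*
 * Standalone sketch, not mm/page_alloc.c: PB is a stand-in for
 * pageblock_order (9 is merely a plausible value).
 */
#include <assert.h>
#include <stdbool.h>

#define PB 9

static bool with_early_check(unsigned int order)
{
	if (order >= PB)		/* the check the patch removes */
		return true;
	return order >= PB / 2;		/* already true whenever order >= PB */
}

static bool without_early_check(unsigned int order)
{
	return order >= PB / 2;
}

int main(void)
{
	unsigned int order;

	/*
	 * Both variants agree for every order, so removing the early
	 * check cannot change behaviour; a compiler can spot the same
	 * implication and fold the two tests, which matches the
	 * unchanged code size noted above.
	 */
	for (order = 0; order <= 2 * PB; order++)
		assert(with_early_check(order) == without_early_check(order));
	return 0;
}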