On 3/1/19 1:16 PM, Andrey Ryabinin wrote:
> A slightly better version of __split_huge_page();
>
> Signed-off-by: Andrey Ryabinin <aryabinin@xxxxxxxxxxxxx>

Ack.

> Cc: Vlastimil Babka <vbabka@xxxxxxx>
> Cc: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
> Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
> Cc: Michal Hocko <mhocko@xxxxxxxxxx>
> Cc: Rik van Riel <riel@xxxxxxxxxxx>
> Cc: William Kucharski <william.kucharski@xxxxxxxxxx>
> Cc: John Hubbard <jhubbard@xxxxxxxxxx>
> ---
>  mm/huge_memory.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 4ccac6b32d49..fcf657886b4b 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -2440,11 +2440,11 @@ static void __split_huge_page(struct page *page, struct list_head *list,
>  		pgoff_t end, unsigned long flags)
>  {
>  	struct page *head = compound_head(page);
> -	struct zone *zone = page_zone(head);
> +	pg_data_t *pgdat = page_pgdat(head);
>  	struct lruvec *lruvec;
>  	int i;
>
> -	lruvec = mem_cgroup_page_lruvec(head, zone->zone_pgdat);
> +	lruvec = mem_cgroup_page_lruvec(head, pgdat);
>
>  	/* complete memcg works before add pages to LRU */
>  	mem_cgroup_split_huge_fixup(head);
> @@ -2475,7 +2475,7 @@ static void __split_huge_page(struct page *page, struct list_head *list,
>  		xa_unlock(&head->mapping->i_pages);
>  	}
>
> -	spin_unlock_irqrestore(&page_pgdat(head)->lru_lock, flags);
> +	spin_unlock_irqrestore(&pgdat->lru_lock, flags);
>
>  	remap_page(head);
>
>