On Mon 17-12-18 16:06:51, Oscar Salvador wrote:
[...]
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index a6e7bfd18cde..18d41e85f672 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -8038,11 +8038,12 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
>  		 * handle each tail page individually in migration.
>  		 */
>  		if (PageHuge(page)) {
> +			struct page *head = compound_head(page);
> 
> -			if (!hugepage_migration_supported(page_hstate(page)))
> +			if (!hugepage_migration_supported(page_hstate(head)))
>  				goto unmovable;

OK, this makes sense.

> 
> -			iter = round_up(iter + 1, 1<<compound_order(page)) - 1;
> +			iter = round_up(iter + 1, 1<<compound_order(head)) - 1;

but this less so. You surely do not want to move by the full hugetlb
page when you hit a tail page, right? You could skip too much. You have
to factor page - head into the equation.

Btw. the reason we haven't seen this before is that a) giga pages are
rarely used and b) normal hugepages should be properly aligned and do
not span multiple mem sections. Maybe there is some obscure path to
trigger this for CMA but I do not see it.

>  			continue;
>  		}
> 
> -- 
> 2.13.7

-- 
Michal Hocko
SUSE Labs
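
To make the "skip too much" point above concrete, here is a minimal
standalone userspace sketch of the arithmetic (not kernel code; the
order/offset values and the round_up_pow2() helper are illustrative
assumptions). It compares the round_up()-style jump with one that
factors in page - head, for a scan that lands halfway into a 1GB giga
page:

/*
 * Userspace demonstration only, NOT the kernel fix: a scan that starts
 * on a tail page halfway into a 2^18-page (1GB, 4K pages) compound page.
 */
#include <stdio.h>

/* power-of-two round up, same semantics as the kernel's round_up() */
static unsigned long round_up_pow2(unsigned long x, unsigned long align)
{
	return (x + align - 1) & ~(align - 1);
}

int main(void)
{
	unsigned int order = 18;		/* hypothetical giga page order */
	unsigned long nr = 1UL << order;	/* pages in the compound page */
	unsigned long offset = nr / 2;		/* page - head: scan starts mid-page */
	unsigned long iter = 0;

	/* current code: jump by the full compound page size */
	unsigned long next_old = round_up_pow2(iter + 1, nr) - 1;

	/* page - head aware: jump only by what is left of the compound page */
	unsigned long next_new = iter + (nr - offset) - 1;

	printf("pages skipped by round_up:        %lu\n", next_old - iter + 1);
	printf("pages actually left in huge page: %lu\n", next_new - iter + 1);
	return 0;
}

Compiled and run, it prints 262144 vs 131072: the round_up() variant
advances by the full compound size even though only half of the huge
page lies ahead of the scan position, overshooting by exactly
page - head pages.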