On 03.12.20 18:48, David Hildenbrand wrote:
> On 03.12.20 18:15, Vlastimil Babka wrote:
>> On 12/3/20 5:26 PM, David Hildenbrand wrote:
>>> On 03.12.20 01:03, Vlastimil Babka wrote:
>>>> On 12/2/20 1:21 PM, Muchun Song wrote:
>>>>> The max order page has no buddy page and never merges to another
>>>>> order. So isolating and then freeing it is pointless.
>>>>>
>>>>> Signed-off-by: Muchun Song <songmuchun@xxxxxxxxxxxxx>
>>>>
>>>> Acked-by: Vlastimil Babka <vbabka@xxxxxxx>
>>>>
>>>>> ---
>>>>>  mm/page_isolation.c | 2 +-
>>>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>>>
>>>>> diff --git a/mm/page_isolation.c b/mm/page_isolation.c
>>>>> index a254e1f370a3..bddf788f45bf 100644
>>>>> --- a/mm/page_isolation.c
>>>>> +++ b/mm/page_isolation.c
>>>>> @@ -88,7 +88,7 @@ static void unset_migratetype_isolate(struct page *page, unsigned migratetype)
>>>>>  	 */
>>>>>  	if (PageBuddy(page)) {
>>>>>  		order = buddy_order(page);
>>>>> -		if (order >= pageblock_order) {
>>>>> +		if (order >= pageblock_order && order < MAX_ORDER - 1) {
>>>>>  			pfn = page_to_pfn(page);
>>>>>  			buddy_pfn = __find_buddy_pfn(pfn, order);
>>>>>  			buddy = page + (buddy_pfn - pfn);
>>>>
>>>> Hm, I wonder: if order == MAX_ORDER - 1, can the buddy actually be
>>>> !pfn_valid() in some corner case? The pfn_valid_within(buddy_pfn)
>>>> that follows would only catch it on archs with holes in the zone.
>>>> Then is_migrate_isolate_page(buddy) might access an invalid buddy.
>>>> So this might actually be a bug fix and not just an optimization;
>>>> the bug just hasn't been observed in practice.
>>>
>>> I think we have no users that isolate/unisolate close to holes.
>>>
>>> CMA regions are properly aligned (to the max of page_order /
>>> max_order_nr_pages) and don't contain holes.
>>
>> The problem as I see it is that buddy_order(page) might already be
>> MAX_ORDER - 1 (e.g. two pageblocks on x86), and then finding the buddy
>> of that one is beyond the guaranteed alignment (if they merged, which
>> they can't, it would be four pageblocks).
>
> Oh, I see. I would have assumed that __find_buddy_pfn() would not hand
> out invalid buddies. But you're right, it's generic:
>
> pfn = 1024 (4M)
> order = MAX_ORDER - 1 = 10
> buddy_pfn = __find_buddy_pfn(pfn, order)
>
> -> pfn ^ (1 << order) = 0
>
>
> If that page has no struct page (!pfn_valid), we're doomed, I agree. It
> would be problematic if we had alloc_contig_range() users with ranges
> not aligned to / not multiples of 8 MB (MAX_ORDER), I guess. virtio-mem
> and gigantic pages should be fine. CMA might be problematic, though? Do
> we have such small CMA ranges or with such alignment? Could be, I guess.
>
> cma_init_reserved_mem() only checks
>
> alignment = PAGE_SIZE << max_t(unsigned long, MAX_ORDER - 1,
> 			       pageblock_order);
>

Thinking again (SPARSE), we always end up in a single memory section.
Usually, all pfns within a single section are valid. The only exception
is with HAVE_ARCH_PFN_VALID - arm and arm64. arm64 also has
HOLES_IN_ZONE - so we always check for pfn_valid() in this code.

arm only has HAVE_ARCH_PFN_VALID with SPARSE on ARCH_OMAP1. So only in
that combination might we run into that issue, if I am not wrong.

Not sure about !SPARSE and mips.

--
Thanks,

David / dhildenb
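
[For reference, a minimal userspace sketch of the buddy arithmetic
discussed above. find_buddy_pfn() below mirrors the kernel's
__find_buddy_pfn() (pfn ^ (1 << order), as quoted in the thread);
MAX_ORDER = 11 and the example pfn of 1024 are assumed values for a
typical x86 configuration, matching David's example, not a definitive
reproduction of kernel behavior.]

#include <stdio.h>

#define MAX_ORDER 11	/* assumed: typical x86 value */

/* Mirrors the kernel's __find_buddy_pfn(): the buddy of a 2^order
 * block is found by flipping bit 'order' of the pfn. */
static unsigned long find_buddy_pfn(unsigned long pfn, unsigned int order)
{
	return pfn ^ (1UL << order);
}

int main(void)
{
	unsigned long pfn = 1024;		/* 4M with 4K pages */
	unsigned int order = MAX_ORDER - 1;	/* 10 */

	/* 1024 ^ (1 << 10) = 0: the computed buddy lies in front of the
	 * range. If pfn 0 has no memmap (!pfn_valid()), inspecting its
	 * struct page, as unset_migratetype_isolate() would, is bogus. */
	printf("buddy of pfn %lu at order %u is pfn %lu\n",
	       pfn, order, find_buddy_pfn(pfn, order));
	return 0;
}

With the patch's "order < MAX_ORDER - 1" check, this case is never
reached: a MAX_ORDER - 1 page cannot merge any further anyway, so its
buddy never needs to be looked up.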