On Tue, Dec 14, 2021 at 05:03:26PM -0800, syzbot wrote:
> commit 3ebffc96befbaf9de9297b00d67091bb702fad8e
> Author: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
> Date:   Sun Jun 28 02:19:08 2020 +0000
>
>     mm: Use multi-index entries in the page cache
>
> bisection log:  https://syzkaller.appspot.com/x/bisect.txt?x=1276e4bab00000
> final oops:     https://syzkaller.appspot.com/x/report.txt?x=1176e4bab00000
> console output: https://syzkaller.appspot.com/x/log.txt?x=1676e4bab00000

Well, this is all entirely plausible:

+		xas_split_alloc(&xas, head, compound_order(head),
+				mapping_gfp_mask(mapping) & GFP_RECLAIM_MASK);

It looks like I can fix this by moving the memory allocation before
the acquisition of the i_mmap_lock.  Any objections to this:

+++ b/mm/huge_memory.c
@@ -2653,6 +2653,13 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 			goto out;
 		}
 
+		xas_split_alloc(&xas, head, compound_order(head),
+				mapping_gfp_mask(mapping) & GFP_RECLAIM_MASK);
+		if (xas_error(&xas)) {
+			ret = xas_error(&xas);
+			goto out;
+		}
+
 		anon_vma = NULL;
 		i_mmap_lock_read(mapping);
 
@@ -2679,15 +2686,6 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 
 	unmap_page(head);
 
-	if (mapping) {
-		xas_split_alloc(&xas, head, compound_order(head),
-				mapping_gfp_mask(mapping) & GFP_RECLAIM_MASK);
-		if (xas_error(&xas)) {
-			ret = xas_error(&xas);
-			goto out_unlock;
-		}
-	}
-
	/* block interrupt reentry in xa_lock and spinlock */
	local_irq_disable();
	if (mapping) {

(relative to the above patch)
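
For anyone following along, here is a minimal userspace sketch of the
ordering the patch above enforces (plain C with pthreads; every name in
it is hypothetical and is only an analogy, not the kernel code): do the
allocation that can fail or sleep before taking the lock, check the
result, and only then enter the locked region, so nothing is allocated
while the lock is held.

	#include <pthread.h>
	#include <stdio.h>
	#include <stdlib.h>

	static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

	/* Hypothetical work item; stands in for the split xarray nodes. */
	struct work {
		int payload;
	};

	static int do_split(void)
	{
		/* Allocate up front, before taking the lock. */
		struct work *w = malloc(sizeof(*w));

		if (!w)
			return -1;	/* bail out early, like the xas_error() check */

		pthread_mutex_lock(&lock);
		/* The critical section only touches memory that already exists. */
		w->payload = 42;
		printf("payload %d\n", w->payload);
		pthread_mutex_unlock(&lock);

		free(w);
		return 0;
	}

	int main(void)
	{
		return do_split() ? EXIT_FAILURE : EXIT_SUCCESS;
	}

The point of the reordering is the same in both cases: the failure path
is handled while nothing is locked, and the locked region never has to
allocate.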