On Sun, Sep 15, 2024 at 01:08:27PM GMT, Dan Carpenter wrote:
> Hi Linus,
>
> Commit 79a61cc3fc04 ("mm: avoid leaving partial pfn mappings around in
> error case") from Sep 11, 2024 (linux-next), leads to the following
> Smatch static checker warning:
>
> 	mm/memory.c:2709 remap_pfn_range_notrack()
> 	warn: sleeping in atomic context
>
> mm/memory.c
>     2696 int remap_pfn_range_notrack(struct vm_area_struct *vma, unsigned long addr,
>     2697 		unsigned long pfn, unsigned long size, pgprot_t prot)
>     2698 {
>     2699 	int error = remap_pfn_range_internal(vma, addr, pfn, size, prot);
>     2700
>     2701 	if (!error)
>     2702 		return 0;
>     2703
>     2704 	/*
>     2705 	 * A partial pfn range mapping is dangerous: it does not
>     2706 	 * maintain page reference counts, and callers may free
>     2707 	 * pages due to the error. So zap it early.
>     2708 	 */
> --> 2709 	zap_page_range_single(vma, addr, size, NULL);
>
> The lru_add_drain() function at the start of zap_page_range_single() takes a
> mutex.

Hm, does it? I see a local lock there, and the folio batch locking uses
local locks too.

Unless this is hugetlb, in which case I see:

-> hugetlb_zap_begin()
   -> __hugetlb_zap_begin()
      -> hugetlb_vma_lock_write()
         -> down_write()
            -> might_sleep()

(Also __hugetlb_zap_begin() -> i_mmap_lock_write() -> down_write().)

I see only spin locks in the page table allocation paths (unless I'm
missing something). I may be missing something, however!

>     2710 	return error;
>     2711 }
>
> It's the preempt_disable() in gru_fault() which is the issue. The call
> tree is:
>
> gru_fault() <- disables preempt
>     -> remap_pfn_range()
>        -> remap_pfn_range_notrack()
>
> regards,
> dan carpenter
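
For reference, here is a minimal sketch of the pattern being flagged.
This is purely illustrative, not the actual sgi-gru driver code
(example_fault() and its placeholder pfn are invented); it just shows
why a fault handler that holds preemption off across remap_pfn_range()
can now sleep in the error path:

#include <linux/mm.h>
#include <linux/preempt.h>

/* Hypothetical fault handler, for illustration only. */
static vm_fault_t example_fault(struct vm_fault *vmf)
{
	unsigned long pfn = 0;	/* placeholder; a real driver computes this */
	int ret;

	preempt_disable();	/* atomic context from here on */

	/*
	 * Since commit 79a61cc3fc04, a partial failure inside
	 * remap_pfn_range() is cleaned up via zap_page_range_single(),
	 * which starts with lru_add_drain() and may sleep -- so this
	 * call is no longer safe with preemption disabled.
	 */
	ret = remap_pfn_range(vmf->vma, vmf->address, pfn, PAGE_SIZE,
			      vmf->vma->vm_page_prot);

	preempt_enable();

	return ret ? VM_FAULT_SIGBUS : VM_FAULT_NOPAGE;
}

So presumably the fix belongs on the gru_fault() side: either drop the
preempt_disable() around the remap, or set up the mapping outside the
atomic section.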