On Mon 11-01-16 14:05:28, Kirill A. Shutemov wrote:
> Dmitry Vyukov has reported[1] a possible deadlock (triggered by his
> syzkaller fuzzer):
>
>  Possible unsafe locking scenario:
>
>        CPU0                    CPU1
>        ----                    ----
>   lock(&hugetlbfs_i_mmap_rwsem_key);
>                                lock(&mapping->i_mmap_rwsem);
>                                lock(&hugetlbfs_i_mmap_rwsem_key);
>   lock(&mapping->i_mmap_rwsem);
>
> Both traces point to mm_take_all_locks() as the source of the problem.
> It doesn't take care of the ordering of hugetlbfs_i_mmap_rwsem_key (aka
> mapping->i_mmap_rwsem for hugetlb mappings) vs. i_mmap_rwsem.

Hmm, but huge_pmd_share is called with mmap_sem held, no? At least my
current cscope claims that huge_pte_alloc is called from
copy_hugetlb_page_range and hugetlb_fault, both of which should be called
with mmap_sem held for write (via dup_mmap) resp. for read (via page
fault resp. gup), while mm_take_all_locks expects mmap_sem for write as
well.

> huge_pmd_share() does memory allocation under hugetlbfs_i_mmap_rwsem_key,
> and the allocator can take i_mmap_rwsem if it hits reclaim. So we need
> to take i_mmap_rwsem from all hugetlb VMAs before taking i_mmap_rwsem
> from the rest of the VMAs.
>
> The patch also documents the locking order for hugetlbfs_i_mmap_rwsem_key.

The documentation part alone makes sense, but I fail to see how this can
solve any deadlock in the current code.

> [1] http://lkml.kernel.org/r/CACT4Y+Zu95tBs-0EvdiAKzUOsb4tczRRfCRTpLr4bg_OP9HuVg@xxxxxxxxxxxxxx
>
> Signed-off-by: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
> Reported-by: Dmitry Vyukov <dvyukov@xxxxxxxxxx>
> Cc: Michal Hocko <mhocko@xxxxxxx>
> Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
> Cc: Andrea Arcangeli <aarcange@xxxxxxxxxx>

--
Michal Hocko
SUSE Labs
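
For context, the allocation-under-lock pattern the quoted changelog refers
to looks roughly like this. A simplified sketch of huge_pmd_share() from
mm/hugetlb.c of this era, abridged and not verbatim:

/*
 * Sketch of huge_pmd_share() (mm/hugetlb.c), abridged. The point the
 * changelog makes: pmd_alloc() runs under the hugetlbfs i_mmap_rwsem,
 * and a page table allocation may enter direct reclaim, which can take
 * a regular mapping's i_mmap_rwsem. That establishes the ordering
 * "hugetlbfs i_mmap_rwsem before regular i_mmap_rwsem".
 */
pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud)
{
	struct vm_area_struct *vma = find_vma(mm, addr);
	struct address_space *mapping = vma->vm_file->f_mapping;
	pte_t *pte;

	i_mmap_lock_write(mapping);	/* hugetlbfs_i_mmap_rwsem_key */

	/* ... walk mapping->i_mmap looking for a shareable pmd page ... */

	/* allocation under the hugetlbfs i_mmap_rwsem */
	pte = (pte_t *)pmd_alloc(mm, pud, addr);

	i_mmap_unlock_write(mapping);
	return pte;
}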
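
The fix the changelog describes amounts to splitting the file-mapping loop
in mm_take_all_locks() (mm/mmap.c) into two passes, hugetlb mappings
first. A sketch under that assumption; the actual patch may differ in
detail:

	/*
	 * Take every hugetlb i_mmap_rwsem before any regular one, so
	 * mm_take_all_locks() follows the same global order that
	 * huge_pmd_share() plus reclaim can establish.
	 */
	for (vma = mm->mmap; vma; vma = vma->vm_next) {
		if (signal_pending(current))
			goto out_unlock;
		if (vma->vm_file && vma->vm_file->f_mapping &&
				is_vm_hugetlb_page(vma))
			vm_lock_mapping(mm, vma->vm_file->f_mapping);
	}

	for (vma = mm->mmap; vma; vma = vma->vm_next) {
		if (signal_pending(current))
			goto out_unlock;
		if (vma->vm_file && vma->vm_file->f_mapping &&
				!is_vm_hugetlb_page(vma))
			vm_lock_mapping(mm, vma->vm_file->f_mapping);
	}

With that split, lockdep sees one consistent ordering between the two
lock classes instead of the inversion in the report above.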