By making maybe_unlock_mmap_for_io() handle the VMA lock correctly, we
make fault_dirty_shared_page() safe to be called without the mmap lock
held.

Signed-off-by: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
Reported-by: David Hildenbrand <david@xxxxxxxxxx>
Tested-by: Suren Baghdasaryan <surenb@xxxxxxxxxx>
---
Andrew, can you insert this before "mm: handle faults that merely
update the accessed bit under the VMA lock", please? It could be
handled as a fix patch, but it actually stands on its own as a
separate patch. No big deal if it has to go in after that patch.

 mm/internal.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/internal.h b/mm/internal.h
index 8611f7c5bd16..c7720e83cb3c 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -706,7 +706,7 @@ static inline struct file *maybe_unlock_mmap_for_io(struct vm_fault *vmf,
 	if (fault_flag_allow_retry_first(flags) &&
 	    !(flags & FAULT_FLAG_RETRY_NOWAIT)) {
 		fpin = get_file(vmf->vma->vm_file);
-		mmap_read_unlock(vmf->vma->vm_mm);
+		release_fault_lock(vmf);
 	}
 	return fpin;
 }
-- 
2.40.1