On 27/03/2018 23:30, David Rientjes wrote:
> On Tue, 13 Mar 2018, Laurent Dufour wrote:
>
>> diff --git a/mm/mmap.c b/mm/mmap.c
>> index faf85699f1a1..5898255d0aeb 100644
>> --- a/mm/mmap.c
>> +++ b/mm/mmap.c
>> @@ -558,6 +558,10 @@ void __vma_link_rb(struct mm_struct *mm, struct vm_area_struct *vma,
>>  	else
>>  		mm->highest_vm_end = vm_end_gap(vma);
>>
>> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
>> +	seqcount_init(&vma->vm_sequence);
>> +#endif
>> +
>>  	/*
>>  	 * vma->vm_prev wasn't known when we followed the rbtree to find the
>>  	 * correct insertion point for that vma. As a result, we could not
>> @@ -692,6 +696,30 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
>>  	long adjust_next = 0;
>>  	int remove_next = 0;
>>
>> +	/*
>> +	 * Why use the vm_raw_write*() functions here to avoid lockdep's warning?
>> +	 *
>> +	 * Lockdep is complaining about a theoretical lock dependency involving
>> +	 * 3 locks:
>> +	 *   mapping->i_mmap_rwsem --> vma->vm_sequence --> fs_reclaim
>> +	 *
>> +	 * Here are the major paths leading to this dependency:
>> +	 *  1. __vma_adjust()            mmap_sem -> vm_sequence -> i_mmap_rwsem
>> +	 *  2. move_vmap()               mmap_sem -> vm_sequence -> fs_reclaim
>> +	 *  3. __alloc_pages_nodemask()  fs_reclaim -> i_mmap_rwsem
>> +	 *  4. unmap_mapping_range()     i_mmap_rwsem -> vm_sequence
>> +	 *
>> +	 * So there is no way to solve this easily, especially because in
>> +	 * unmap_mapping_range() the i_mmap_rwsem is grabbed while the impacted
>> +	 * VMAs are not yet known.
>> +	 * However, the way vm_sequence is used guarantees that we will never
>> +	 * block on it, since we only check its value and never wait for it to
>> +	 * move; see vma_has_changed() and handle_speculative_fault().
>> +	 */
>> +	vm_raw_write_begin(vma);
>> +	if (next)
>> +		vm_raw_write_begin(next);
>> +
>>  	if (next && !insert) {
>>  		struct vm_area_struct *exporter = NULL, *importer = NULL;
>>
>
> Eek, what about later on:
>
> 		/*
> 		 * Easily overlooked: when mprotect shifts the boundary,
> 		 * make sure the expanding vma has anon_vma set if the
> 		 * shrinking vma had, to cover any anon pages imported.
> 		 */
> 		if (exporter && exporter->anon_vma && !importer->anon_vma) {
> 			int error;
>
> 			importer->anon_vma = exporter->anon_vma;
> 			error = anon_vma_clone(importer, exporter);
> 			if (error)
> 				return error;
> 		}
>
> This needs
>
> 		if (error) {
> 			if (next && next != vma)
> 				vm_raw_write_end(next);
> 			vm_raw_write_end(vma);
> 			return error;
> 		}

Nice catch! Thanks,
Laurent.
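For anyone reading along, here is a rough sketch of the error handling David is asking for. The helper name and its factoring are hypothetical (the real fix would be applied inline in __vma_adjust(), not as a separate function); it only assumes the vm_raw_write_begin()/vm_raw_write_end() wrappers introduced earlier in this series and the existing anon_vma_clone() API.

	/*
	 * Hypothetical helper, for illustration only -- the actual fix lives
	 * inline in __vma_adjust().  It shows the pattern David suggests: if
	 * anon_vma_clone() fails, drop the sequence write sides taken on @vma
	 * and @next at the top of __vma_adjust() before returning the error,
	 * just as the normal exit path does.
	 */
	static int vma_adjust_import_anon_vma(struct vm_area_struct *vma,
					      struct vm_area_struct *next,
					      struct vm_area_struct *importer,
					      struct vm_area_struct *exporter)
	{
		int error;

		/* Nothing to import, so nothing can fail here. */
		if (!exporter || !exporter->anon_vma || importer->anon_vma)
			return 0;

		importer->anon_vma = exporter->anon_vma;
		error = anon_vma_clone(importer, exporter);
		if (error) {
			/* Mirror the vm_raw_write_begin() calls done on entry. */
			if (next && next != vma)
				vm_raw_write_end(next);
			vm_raw_write_end(vma);
		}
		return error;
	}

The "next && next != vma" check simply mirrors David's suggested hunk, so the write side is not ended twice if both pointers happen to refer to the same VMA.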