On Tue, 19 May 2020 22:29:08 -0700 Michel Lespinasse <walken@xxxxxxxxxx> wrote:

> Convert comments that reference mmap_sem to reference mmap_lock instead.

This may not be complete..

From: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Subject: mmap-locking-api-convert-mmap_sem-comments-fix

fix up linux-next leftovers

Cc: Daniel Jordan <daniel.m.jordan@xxxxxxxxxx>
Cc: Davidlohr Bueso <dbueso@xxxxxxx>
Cc: David Rientjes <rientjes@xxxxxxxxxx>
Cc: Hugh Dickins <hughd@xxxxxxxxxx>
Cc: Jason Gunthorpe <jgg@xxxxxxxx>
Cc: Jerome Glisse <jglisse@xxxxxxxxxx>
Cc: John Hubbard <jhubbard@xxxxxxxxxx>
Cc: Laurent Dufour <ldufour@xxxxxxxxxxxxx>
Cc: Liam Howlett <Liam.Howlett@xxxxxxxxxx>
Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Cc: Michel Lespinasse <walken@xxxxxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Vlastimil Babka <vbabka@xxxxxxx>
Cc: Ying Han <yinghan@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 arch/powerpc/mm/fault.c |    2 +-
 include/linux/pgtable.h |    6 +++---
 2 files changed, 4 insertions(+), 4 deletions(-)

--- a/arch/powerpc/mm/fault.c~mmap-locking-api-convert-mmap_sem-comments-fix
+++ a/arch/powerpc/mm/fault.c
@@ -138,7 +138,7 @@ static noinline int bad_access_pkey(stru
	 * 2. T1   : set AMR to deny access to pkey=4, touches, page
	 * 3. T1   : faults...
	 * 4. T2: mprotect_key(foo, PAGE_SIZE, pkey=5);
-	 * 5. T1   : enters fault handler, takes mmap_sem, etc...
+	 * 5. T1   : enters fault handler, takes mmap_lock, etc...
	 * 6. T1   : reaches here, sees vma_pkey(vma)=5, when we really
	 *	     faulted on a pte with its pkey=4.
	 */
--- a/include/linux/pgtable.h~mmap-locking-api-convert-mmap_sem-comments-fix
+++ a/include/linux/pgtable.h
@@ -1101,11 +1101,11 @@ static inline pmd_t pmd_read_atomic(pmd_
 #endif
 /*
  * This function is meant to be used by sites walking pagetables with
- * the mmap_sem hold in read mode to protect against MADV_DONTNEED and
+ * the mmap_lock held in read mode to protect against MADV_DONTNEED and
  * transhuge page faults. MADV_DONTNEED can convert a transhuge pmd
  * into a null pmd and the transhuge page fault can convert a null pmd
  * into an hugepmd or into a regular pmd (if the hugepage allocation
- * fails). While holding the mmap_sem in read mode the pmd becomes
+ * fails). While holding the mmap_lock in read mode the pmd becomes
  * stable and stops changing under us only if it's not null and not a
  * transhuge pmd. When those races occurs and this function makes a
  * difference vs the standard pmd_none_or_clear_bad, the result is
@@ -1115,7 +1115,7 @@ static inline pmd_t pmd_read_atomic(pmd_
  *
  * For 32bit kernels with a 64bit large pmd_t this automatically takes
  * care of reading the pmd atomically to avoid SMP race conditions
- * against pmd_populate() when the mmap_sem is hold for reading by the
+ * against pmd_populate() when the mmap_lock is hold for reading by the
  * caller (a special atomic read not done by "gcc" as in the generic
  * version above, is also needed when THP is disabled because the page
  * fault can populate the pmd from under us).
_
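
Not part of the patch, but for anyone reading that pgtable.h comment without
context, here is a minimal sketch of the usage pattern it documents: walk the
user page tables with only mmap_lock held for read and let
pmd_none_or_trans_huge_or_clear_bad() decide whether the pmd is stable enough
to descend into.  The walker itself (name, return convention) is invented for
illustration; the locking and page-table helpers are the real APIs.

#include <linux/mm.h>
#include <linux/pgtable.h>

/* Hypothetical walker: returns 1 if 'addr' sits under a stable, regular pmd. */
static int example_walk_pmd(struct mm_struct *mm, unsigned long addr)
{
	pgd_t *pgd;
	p4d_t *p4d;
	pud_t *pud;
	pmd_t *pmd;
	int ret = 0;

	/* Read mode is enough; we only need the tables to not go away. */
	mmap_read_lock(mm);

	pgd = pgd_offset(mm, addr);
	if (pgd_none_or_clear_bad(pgd))
		goto out;
	p4d = p4d_offset(pgd, addr);
	if (p4d_none_or_clear_bad(p4d))
		goto out;
	pud = pud_offset(p4d, addr);
	if (pud_none_or_clear_bad(pud))
		goto out;
	pmd = pmd_offset(pud, addr);

	/*
	 * With mmap_lock held only for read, MADV_DONTNEED can still turn a
	 * transhuge pmd into a null pmd, and a THP fault can turn a null pmd
	 * into a huge (or regular) pmd.  This helper reads the pmd atomically
	 * and reports "stable" only for a present, non-huge, non-bad pmd,
	 * which is exactly the case the comment above is talking about.
	 */
	if (pmd_none_or_trans_huge_or_clear_bad(pmd))
		goto out;

	/* Safe to map and walk the pte level of this pmd here. */
	ret = 1;
out:
	mmap_read_unlock(mm);
	return ret;
}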