The patch titled
     Subject: mm: pgtable: remove pte_offset_map_nolock()
has been added to the -mm mm-unstable branch.  Its filename is
     mm-pgtable-remove-pte_offset_map_nolock.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-pgtable-remove-pte_offset_map_nolock.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Qi Zheng <zhengqi.arch@xxxxxxxxxxxxx>
Subject: mm: pgtable: remove pte_offset_map_nolock()
Date: Thu, 26 Sep 2024 14:46:26 +0800

Now that there are no users of pte_offset_map_nolock(), remove it.

Link: https://lkml.kernel.org/r/d04f9bbbcde048fb6ffa6f2bdbc6f9b22d5286f9.1727332572.git.zhengqi.arch@xxxxxxxxxxxxx
Signed-off-by: Qi Zheng <zhengqi.arch@xxxxxxxxxxxxx>
Reviewed-by: Muchun Song <muchun.song@xxxxxxxxx>
Acked-by: David Hildenbrand <david@xxxxxxxxxx>
Cc: Hugh Dickins <hughd@xxxxxxxxxx>
Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Cc: Mike Rapoport (Microsoft) <rppt@xxxxxxxxxx>
Cc: Peter Xu <peterx@xxxxxxxxxx>
Cc: Ryan Roberts <ryan.roberts@xxxxxxx>
Cc: Vishal Moola (Oracle) <vishal.moola@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 Documentation/mm/split_page_table_lock.rst |    3 --
 include/linux/mm.h                         |    2 -
 mm/pgtable-generic.c                       |   21 -------------------
 3 files changed, 26 deletions(-)

--- a/Documentation/mm/split_page_table_lock.rst~mm-pgtable-remove-pte_offset_map_nolock
+++ a/Documentation/mm/split_page_table_lock.rst
@@ -16,9 +16,6 @@ There are helpers to lock/unlock a table
  - pte_offset_map_lock()
 	maps PTE and takes PTE table lock, returns pointer to PTE with
 	pointer to its PTE table lock, or returns NULL if no PTE table;
- - pte_offset_map_nolock()
-	maps PTE, returns pointer to PTE with pointer to its PTE table
-	lock (not taken), or returns NULL if no PTE table;
  - pte_offset_map_ro_nolock()
 	maps PTE, returns pointer to PTE with pointer to its PTE table
 	lock (not taken), or returns NULL if no PTE table;
--- a/include/linux/mm.h~mm-pgtable-remove-pte_offset_map_nolock
+++ a/include/linux/mm.h
@@ -3015,8 +3015,6 @@ static inline pte_t *pte_offset_map_lock
 	return pte;
 }
 
-pte_t *pte_offset_map_nolock(struct mm_struct *mm, pmd_t *pmd,
-			unsigned long addr, spinlock_t **ptlp);
 pte_t *pte_offset_map_ro_nolock(struct mm_struct *mm, pmd_t *pmd,
 				unsigned long addr, spinlock_t **ptlp);
 pte_t *pte_offset_map_rw_nolock(struct mm_struct *mm, pmd_t *pmd,
--- a/mm/pgtable-generic.c~mm-pgtable-remove-pte_offset_map_nolock
+++ a/mm/pgtable-generic.c
@@ -305,18 +305,6 @@ nomap:
 	return NULL;
 }
 
-pte_t *pte_offset_map_nolock(struct mm_struct *mm, pmd_t *pmd,
-			     unsigned long addr, spinlock_t **ptlp)
-{
-	pmd_t pmdval;
-	pte_t *pte;
-
-	pte = __pte_offset_map(pmd, addr, &pmdval);
-	if (likely(pte))
-		*ptlp = pte_lockptr(mm, &pmdval);
-	return pte;
-}
-
 pte_t *pte_offset_map_ro_nolock(struct mm_struct *mm, pmd_t *pmd,
 				unsigned long addr, spinlock_t **ptlp)
 {
@@ -372,15 +360,6 @@ pte_t *pte_offset_map_rw_nolock(struct m
  * and disconnected table.  Until pte_unmap(pte) unmaps and rcu_read_unlock()s
  * afterwards.
  *
- * pte_offset_map_nolock(mm, pmd, addr, ptlp), above, is like pte_offset_map();
- * but when successful, it also outputs a pointer to the spinlock in ptlp - as
- * pte_offset_map_lock() does, but in this case without locking it.  This helps
- * the caller to avoid a later pte_lockptr(mm, *pmd), which might by that time
- * act on a changed *pmd: pte_offset_map_nolock() provides the correct spinlock
- * pointer for the page table that it returns.  In principle, the caller should
- * recheck *pmd once the lock is taken; in practice, no callsite needs that -
- * either the mmap_lock for write, or pte_same() check on contents, is enough.
- *
  * pte_offset_map_ro_nolock(mm, pmd, addr, ptlp), above, is like pte_offset_map();
  * but when successful, it also outputs a pointer to the spinlock in ptlp - as
  * pte_offset_map_lock() does, but in this case without locking it.  This helps
_

Patches currently in -mm which might be from zhengqi.arch@xxxxxxxxxxxxx are

mm-pgtable-introduce-pte_offset_map_rorw_nolock.patch
powerpc-assert_pte_locked-use-pte_offset_map_ro_nolock.patch
mm-filemap-filemap_fault_recheck_pte_none-use-pte_offset_map_ro_nolock.patch
mm-khugepaged-__collapse_huge_page_swapin-use-pte_offset_map_ro_nolock.patch
arm-adjust_pte-use-pte_offset_map_rw_nolock.patch
mm-handle_pte_fault-use-pte_offset_map_rw_nolock.patch
mm-khugepaged-collapse_pte_mapped_thp-use-pte_offset_map_rw_nolock.patch
mm-copy_pte_range-use-pte_offset_map_rw_nolock.patch
mm-mremap-move_ptes-use-pte_offset_map_rw_nolock.patch
mm-page_vma_mapped_walk-map_pte-use-pte_offset_map_rw_nolock.patch
mm-userfaultfd-move_pages_pte-use-pte_offset_map_rw_nolock.patch
mm-multi-gen-lru-walk_pte_range-use-pte_offset_map_rw_nolock.patch
mm-pgtable-remove-pte_offset_map_nolock.patch
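
For reference alongside the conversion series above, here is a minimal sketch,
not part of this patch, of the pattern a former pte_offset_map_nolock() caller
can follow with the read-only replacement pte_offset_map_ro_nolock(): map the
PTE and obtain the (untaken) lock pointer, take the lock, and recheck the entry
with pte_same() before relying on it.  The function name example_pte_probe()
is hypothetical.

/*
 * Illustrative sketch only, not part of this patch: a read-only
 * lookup using pte_offset_map_ro_nolock().
 */
static bool example_pte_probe(struct mm_struct *mm, pmd_t *pmd,
			      unsigned long addr)
{
	spinlock_t *ptl;
	pte_t *pte, entry;

	/* Map the PTE table and get its lock pointer without taking it. */
	pte = pte_offset_map_ro_nolock(mm, pmd, addr, &ptl);
	if (!pte)
		return false;		/* no PTE table */

	entry = ptep_get(pte);		/* lockless snapshot */
	spin_lock(ptl);
	/* Recheck under the lock before trusting the snapshot. */
	if (unlikely(!pte_same(ptep_get(pte), entry))) {
		spin_unlock(ptl);
		pte_unmap(pte);
		return false;
	}

	/* ... read-only use of the PTE entry under ptl ... */

	spin_unlock(ptl);
	pte_unmap(pte);
	return true;
}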