The patch titled
     Subject: mm/memory.c: refactor insert_page to prepare for batched-lock insert
has been added to the -mm tree.  Its filename is
     mm-refactor-insert_page-to-prepare-for-batched-lock-insert.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/mm-refactor-insert_page-to-prepare-for-batched-lock-insert.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/mm-refactor-insert_page-to-prepare-for-batched-lock-insert.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when
    testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Arjun Roy <arjunroy@xxxxxxxxxx>
Subject: mm/memory.c: refactor insert_page to prepare for batched-lock insert

Add helper methods for vm_insert_page()/insert_page() to prepare for
vm_insert_pages(), which batch-inserts pages to reduce spinlock
operations when inserting multiple consecutive pages into the user page
table.

The intention of this patch-set is to reduce atomic ops for tcp zerocopy
receives, which normally hit the same spinlock multiple times
consecutively.
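As a rough illustration of where this refactor is headed (this is not
the actual vm_insert_pages() added by the follow-up patch, and the
helper name and its preconditions below are hypothetical), a batched
caller built on insert_page_into_pte_locked() could look something
like:

/*
 * Sketch only -- not the vm_insert_pages() that the follow-up patch
 * adds.  Inserts 'num' consecutive pages starting at 'addr', taking
 * the page table spinlock once for the whole run instead of once per
 * page.  Assumes the caller has ensured the PTE table for this range
 * exists (e.g. via pte_alloc()) and that the run does not cross a
 * PMD boundary.
 */
static int insert_pages_batched_sketch(struct mm_struct *mm, pmd_t *pmd,
		unsigned long addr, struct page **pages, unsigned int num,
		pgprot_t prot)
{
	pte_t *start_pte, *pte;
	spinlock_t *ptl;
	unsigned int i;
	int err = 0;

	start_pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
	for (i = 0, pte = start_pte; i < num; i++, pte++, addr += PAGE_SIZE) {
		err = validate_page_before_insert(pages[i]);
		if (!err)
			err = insert_page_into_pte_locked(mm, pte, addr,
							  pages[i], prot);
		if (err)
			break;
	}
	pte_unmap_unlock(start_pte, ptl);
	return err;
}

Because the per-page pte_none() check lives in
insert_page_into_pte_locked(), a batched caller like this keeps the
same per-page -EBUSY semantics while amortizing the spin_lock/
spin_unlock pair across the whole run.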
Link: http://lkml.kernel.org/r/20200128025958.43490-1-arjunroy.kdev@xxxxxxxxx
Signed-off-by: Arjun Roy <arjunroy@xxxxxxxxxx>
Signed-off-by: Eric Dumazet <edumazet@xxxxxxxxxx>
Signed-off-by: Soheil Hassas Yeganeh <soheil@xxxxxxxxxx>
Cc: David Miller <davem@xxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/memory.c |   39 ++++++++++++++++++++++++---------------
 1 file changed, 24 insertions(+), 15 deletions(-)

--- a/mm/memory.c~mm-refactor-insert_page-to-prepare-for-batched-lock-insert
+++ a/mm/memory.c
@@ -1430,6 +1430,27 @@ pte_t *__get_locked_pte(struct mm_struct
 	return pte_alloc_map_lock(mm, pmd, addr, ptl);
 }
 
+static int validate_page_before_insert(struct page *page)
+{
+	if (PageAnon(page) || PageSlab(page) || page_has_type(page))
+		return -EINVAL;
+	flush_dcache_page(page);
+	return 0;
+}
+
+static int insert_page_into_pte_locked(struct mm_struct *mm, pte_t *pte,
+			unsigned long addr, struct page *page, pgprot_t prot)
+{
+	if (!pte_none(*pte))
+		return -EBUSY;
+	/* Ok, finally just insert the thing.. */
+	get_page(page);
+	inc_mm_counter_fast(mm, mm_counter_file(page));
+	page_add_file_rmap(page, false);
+	set_pte_at(mm, addr, pte, mk_pte(page, prot));
+	return 0;
+}
+
 /*
  * This is the old fallback for page remapping.
  *
@@ -1445,26 +1466,14 @@ static int insert_page(struct vm_area_st
 	pte_t *pte;
 	spinlock_t *ptl;
 
-	retval = -EINVAL;
-	if (PageAnon(page) || PageSlab(page) || page_has_type(page))
+	retval = validate_page_before_insert(page);
+	if (retval)
 		goto out;
 	retval = -ENOMEM;
-	flush_dcache_page(page);
 	pte = get_locked_pte(mm, addr, &ptl);
 	if (!pte)
 		goto out;
-	retval = -EBUSY;
-	if (!pte_none(*pte))
-		goto out_unlock;
-
-	/* Ok, finally just insert the thing.. */
-	get_page(page);
-	inc_mm_counter_fast(mm, mm_counter_file(page));
-	page_add_file_rmap(page, false);
-	set_pte_at(mm, addr, pte, mk_pte(page, prot));
-
-	retval = 0;
-out_unlock:
+	retval = insert_page_into_pte_locked(mm, pte, addr, page, prot);
 	pte_unmap_unlock(pte, ptl);
 out:
 	return retval;
_

Patches currently in -mm which might be from arjunroy@xxxxxxxxxx are

mm-refactor-insert_page-to-prepare-for-batched-lock-insert.patch
mm-add-vm_insert_pages.patch
net-zerocopy-use-vm_insert_pages-for-tcp-rcv-zerocopy.patch