+ mm-add-vm_insert_pages-2.patch added to -mm tree

The patch titled
     Subject: add missing page_count() check to vm_insert_pages().
has been added to the -mm tree.  Its filename is
     mm-add-vm_insert_pages-2.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/mm-add-vm_insert_pages-2.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/mm-add-vm_insert_pages-2.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Arjun Roy <arjunroy@xxxxxxxxxx>
Subject: add missing page_count() check to vm_insert_pages().

Add missing page_count() check to vm_insert_pages(), specifically inside
insert_page_in_batch_locked().  This was accidentally forgotten in the
original patchset.

See: https://marc.info/?l=linux-mm&m=158156166403807&w=2

The intention of this patchset is to reduce atomic ops for TCP zerocopy
receives, which normally hit the same spinlock multiple times
consecutively.

Link: http://lkml.kernel.org/r/20200214005929.104481-1-arjunroy.kdev@xxxxxxxxx
Signed-off-by: Arjun Roy <arjunroy@xxxxxxxxxx>
Cc: Arjun Roy <arjunroy@xxxxxxxxxx>
Cc: Eric Dumazet <edumazet@xxxxxxxxxx>
Cc: Soheil Hassas Yeganeh <soheil@xxxxxxxxxx>
Cc: David Miller <davem@xxxxxxxxxxxxx>
Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/memory.c |    5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

--- a/mm/memory.c~mm-add-vm_insert_pages-2
+++ a/mm/memory.c
@@ -1463,8 +1463,11 @@ static int insert_page_into_pte_locked(s
 static int insert_page_in_batch_locked(struct mm_struct *mm, pmd_t *pmd,
 			unsigned long addr, struct page *page, pgprot_t prot)
 {
-	const int err = validate_page_before_insert(page);
+	int err;
 
+	if (!page_count(page))
+		return -EINVAL;
+	err = validate_page_before_insert(page);
 	return err ? err : insert_page_into_pte_locked(
 		mm, pte_offset_map(pmd, addr), addr, page, prot);
 }
_

Patches currently in -mm which might be from arjunroy@xxxxxxxxxx are

mm-refactor-insert_page-to-prepare-for-batched-lock-insert.patch
mm-add-vm_insert_pages.patch
mm-add-vm_insert_pages-2.patch
net-zerocopy-use-vm_insert_pages-for-tcp-rcv-zerocopy.patch



