+ mm-introduce-page_shift.patch added to -mm tree

The patch titled
     Subject: mm: Introduce page_shift()
has been added to the -mm tree.  Its filename is
     mm-introduce-page_shift.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/mm-introduce-page_shift.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/mm-introduce-page_shift.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included in linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: "Matthew Wilcox (Oracle)" <willy@xxxxxxxxxxxxx>
Subject: mm: Introduce page_shift()

Replace PAGE_SHIFT + compound_order(page) with the new page_shift()
function.  Minor improvements in readability.
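For instance, with 4 KiB base pages (PAGE_SHIFT == 12), an order-9
compound page such as a 2 MiB hugetlb page gives page_shift() == 21,
matching page_size() == 2 MiB.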

Link: http://lkml.kernel.org/r/20190721104612.19120-3-willy@xxxxxxxxxxxxx
Signed-off-by: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
Reviewed-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Reviewed-by: Ira Weiny <ira.weiny@xxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 arch/powerpc/mm/book3s64/iommu_api.c |    7 ++-----
 drivers/vfio/vfio_iommu_spapr_tce.c  |    2 +-
 include/linux/mm.h                   |    6 ++++++
 3 files changed, 9 insertions(+), 6 deletions(-)

--- a/arch/powerpc/mm/book3s64/iommu_api.c~mm-introduce-page_shift
+++ a/arch/powerpc/mm/book3s64/iommu_api.c
@@ -129,11 +129,8 @@ static long mm_iommu_do_alloc(struct mm_
 		 * Allow to use larger than 64k IOMMU pages. Only do that
 		 * if we are backed by hugetlb.
 		 */
-		if ((mem->pageshift > PAGE_SHIFT) && PageHuge(page)) {
-			struct page *head = compound_head(page);
-
-			pageshift = compound_order(head) + PAGE_SHIFT;
-		}
+		if ((mem->pageshift > PAGE_SHIFT) && PageHuge(page))
+			pageshift = page_shift(compound_head(page));
 		mem->pageshift = min(mem->pageshift, pageshift);
 		/*
 		 * We don't need struct page reference any more, switch
--- a/drivers/vfio/vfio_iommu_spapr_tce.c~mm-introduce-page_shift
+++ a/drivers/vfio/vfio_iommu_spapr_tce.c
@@ -190,7 +190,7 @@ static bool tce_page_is_contained(struct
 	 * a page we just found. Otherwise the hardware can get access to
 	 * a bigger memory chunk that it should.
 	 */
-	return (PAGE_SHIFT + compound_order(compound_head(page))) >= page_shift;
+	return page_shift(compound_head(page)) >= page_shift;
 }
 
 static inline bool tce_groups_attached(struct tce_container *container)
--- a/include/linux/mm.h~mm-introduce-page_shift
+++ a/include/linux/mm.h
@@ -811,6 +811,12 @@ static inline unsigned long page_size(st
 	return PAGE_SIZE << compound_order(page);
 }
 
+/* Returns the number of bits needed for the number of bytes in a page */
+static inline unsigned int page_shift(struct page *page)
+{
+	return PAGE_SHIFT + compound_order(page);
+}
+
 void free_compound_page(struct page *page);
 
 #ifdef CONFIG_MMU
_
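
Editor's note: the sketch below is a standalone userspace illustration of
the identity the new helper captures, namely that page_size() equals
1UL << page_shift() for any compound-page order.  It is not kernel code;
PAGE_SHIFT == 12 is an assumed configuration, and the helpers taking an
order instead of a struct page * (page_size_for_order, page_shift_for_order)
are hypothetical names used only for this example.

/* Standalone illustration (not kernel code).  Assumes 4 KiB base pages. */
#include <assert.h>
#include <stdio.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

/* Mirrors page_size(page) from include/linux/mm.h, keyed by order. */
static unsigned long page_size_for_order(unsigned int order)
{
	return PAGE_SIZE << order;
}

/* Mirrors the new page_shift(page) helper, keyed by order. */
static unsigned int page_shift_for_order(unsigned int order)
{
	return PAGE_SHIFT + order;
}

int main(void)
{
	unsigned int order;

	for (order = 0; order <= 9; order++) {
		unsigned long size = page_size_for_order(order);
		unsigned int shift = page_shift_for_order(order);

		/* The size of an order-N page is always 1 << its shift. */
		assert(size == 1UL << shift);
		printf("order %u: size %lu bytes, shift %u\n",
		       order, size, shift);
	}
	return 0;
}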

Patches currently in -mm which might be from willy@xxxxxxxxxxxxx are

mm-introduce-page_size.patch
mm-introduce-page_shift.patch
mm-introduce-compound_nr.patch



