+ swap-get_swap_pages-use-entry_size-instead-of-cluster-in-parameter.patch added to -mm tree

The patch titled
     Subject: mm, swap, get_swap_pages: use entry_size instead of cluster in parameter
has been added to the -mm tree.  Its filename is
     swap-get_swap_pages-use-entry_size-instead-of-cluster-in-parameter.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/swap-get_swap_pages-use-entry_size-instead-of-cluster-in-parameter.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/swap-get_swap_pages-use-entry_size-instead-of-cluster-in-parameter.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Huang Ying <ying.huang@xxxxxxxxx>
Subject: mm, swap, get_swap_pages: use entry_size instead of cluster in parameter

As suggested by Matthew Wilcox, it is better to use "int entry_size"
instead of "bool cluster" as the parameter that specifies whether to
operate on huge or normal swap entries, because this improves the
flexibility to support other swap entry sizes.  Dave Hansen thinks that
this improves code readability too.

So in this patch, the "bool cluster" parameter of get_swap_pages() is
replaced by "int entry_size".

The nr_swap_entries() trick is used to reduce the binary size when
!CONFIG_TRANSPARENT_HUGEPAGE:

       text	   data	    bss	    dec	    hex	filename
base  24215	   2028	    340	  26583	   67d7	mm/swapfile.o
head  24123	   2004	    340	  26467	   6763	mm/swapfile.o
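
The binary size reduction relies on the size argument collapsing to a
compile-time constant when THP swap is not configured, so the compiler
can discard the huge-entry branches in get_swap_pages().  The following
is an illustrative sketch only, not part of this patch; it assumes the
swap_entry_size() helper name that appears in the hunks below:

/* Sketch: when CONFIG_THP_SWAP is not set, the helper evaluates to the
 * constant 1, so tests such as "size == SWAPFILE_CLUSTER" become
 * constant-false and the corresponding code paths are eliminated at
 * compile time.
 */
#ifdef CONFIG_THP_SWAP
#define swap_entry_size(size)	(size)
#else
#define swap_entry_size(size)	1
#endif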

Link: http://lkml.kernel.org/r/20180720071845.17920-7-ying.huang@xxxxxxxxx
Signed-off-by: "Huang, Ying" <ying.huang@xxxxxxxxx>
Suggested-by: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Acked-by: Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>
Cc: Daniel Jordan <daniel.m.jordan@xxxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxx>
Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
Cc: Shaohua Li <shli@xxxxxxxxxx>
Cc: Hugh Dickins <hughd@xxxxxxxxxx>
Cc: Minchan Kim <minchan@xxxxxxxxxx>
Cc: Rik van Riel <riel@xxxxxxxxxx>
Cc: Dan Williams <dan.j.williams@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 include/linux/swap.h |    2 +-
 mm/swap_slots.c      |    8 ++++----
 mm/swapfile.c        |   16 ++++++++--------
 3 files changed, 13 insertions(+), 13 deletions(-)

diff -puN include/linux/swap.h~swap-get_swap_pages-use-entry_size-instead-of-cluster-in-parameter include/linux/swap.h
--- a/include/linux/swap.h~swap-get_swap_pages-use-entry_size-instead-of-cluster-in-parameter
+++ a/include/linux/swap.h
@@ -447,7 +447,7 @@ extern void si_swapinfo(struct sysinfo *
 extern swp_entry_t get_swap_page(struct page *page);
 extern void put_swap_page(struct page *page, swp_entry_t entry);
 extern swp_entry_t get_swap_page_of_type(int);
-extern int get_swap_pages(int n, bool cluster, swp_entry_t swp_entries[]);
+extern int get_swap_pages(int n, swp_entry_t swp_entries[], int entry_size);
 extern int add_swap_count_continuation(swp_entry_t, gfp_t);
 extern void swap_shmem_alloc(swp_entry_t);
 extern int swap_duplicate(swp_entry_t);
diff -puN mm/swapfile.c~swap-get_swap_pages-use-entry_size-instead-of-cluster-in-parameter mm/swapfile.c
--- a/mm/swapfile.c~swap-get_swap_pages-use-entry_size-instead-of-cluster-in-parameter
+++ a/mm/swapfile.c
@@ -940,18 +940,18 @@ static unsigned long scan_swap_map(struc
 
 }
 
-int get_swap_pages(int n_goal, bool cluster, swp_entry_t swp_entries[])
+int get_swap_pages(int n_goal, swp_entry_t swp_entries[], int entry_size)
 {
-	unsigned long nr_pages = cluster ? SWAPFILE_CLUSTER : 1;
+	unsigned long size = swap_entry_size(entry_size);
 	struct swap_info_struct *si, *next;
 	long avail_pgs;
 	int n_ret = 0;
 	int node;
 
 	/* Only single cluster request supported */
-	WARN_ON_ONCE(n_goal > 1 && cluster);
+	WARN_ON_ONCE(n_goal > 1 && size == SWAPFILE_CLUSTER);
 
-	avail_pgs = atomic_long_read(&nr_swap_pages) / nr_pages;
+	avail_pgs = atomic_long_read(&nr_swap_pages) / size;
 	if (avail_pgs <= 0)
 		goto noswap;
 
@@ -961,7 +961,7 @@ int get_swap_pages(int n_goal, bool clus
 	if (n_goal > avail_pgs)
 		n_goal = avail_pgs;
 
-	atomic_long_sub(n_goal * nr_pages, &nr_swap_pages);
+	atomic_long_sub(n_goal * size, &nr_swap_pages);
 
 	spin_lock(&swap_avail_lock);
 
@@ -988,14 +988,14 @@ start_over:
 			spin_unlock(&si->lock);
 			goto nextsi;
 		}
-		if (cluster) {
+		if (size == SWAPFILE_CLUSTER) {
 			if (!(si->flags & SWP_FILE))
 				n_ret = swap_alloc_cluster(si, swp_entries);
 		} else
 			n_ret = scan_swap_map_slots(si, SWAP_HAS_CACHE,
 						    n_goal, swp_entries);
 		spin_unlock(&si->lock);
-		if (n_ret || cluster)
+		if (n_ret || size == SWAPFILE_CLUSTER)
 			goto check_out;
 		pr_debug("scan_swap_map of si %d failed to find offset\n",
 			si->type);
@@ -1021,7 +1021,7 @@ nextsi:
 
 check_out:
 	if (n_ret < n_goal)
-		atomic_long_add((long)(n_goal - n_ret) * nr_pages,
+		atomic_long_add((long)(n_goal - n_ret) * size,
 				&nr_swap_pages);
 noswap:
 	return n_ret;
diff -puN mm/swap_slots.c~swap-get_swap_pages-use-entry_size-instead-of-cluster-in-parameter mm/swap_slots.c
--- a/mm/swap_slots.c~swap-get_swap_pages-use-entry_size-instead-of-cluster-in-parameter
+++ a/mm/swap_slots.c
@@ -269,8 +269,8 @@ static int refill_swap_slots_cache(struc
 
 	cache->cur = 0;
 	if (swap_slot_cache_active)
-		cache->nr = get_swap_pages(SWAP_SLOTS_CACHE_SIZE, false,
-					   cache->slots);
+		cache->nr = get_swap_pages(SWAP_SLOTS_CACHE_SIZE,
+					   cache->slots, 1);
 
 	return cache->nr;
 }
@@ -316,7 +316,7 @@ swp_entry_t get_swap_page(struct page *p
 
 	if (PageTransHuge(page)) {
 		if (IS_ENABLED(CONFIG_THP_SWAP))
-			get_swap_pages(1, true, &entry);
+			get_swap_pages(1, &entry, HPAGE_PMD_NR);
 		goto out;
 	}
 
@@ -350,7 +350,7 @@ repeat:
 			goto out;
 	}
 
-	get_swap_pages(1, false, &entry);
+	get_swap_pages(1, &entry, 1);
 out:
 	if (mem_cgroup_try_charge_swap(page, entry)) {
 		put_swap_page(page, entry);
_

Patches currently in -mm which might be from ying.huang@xxxxxxxxx are

mm-clear_huge_page-move-order-algorithm-into-a-separate-function.patch
mm-huge-page-copy-target-sub-page-last-when-copy-huge-page.patch
mm-hugetlbfs-rename-address-to-haddr-in-hugetlb_cow.patch
mm-hugetlbfs-pass-fault-address-to-cow-handler.patch
mm-swap-make-config_thp_swap-depends-on-config_swap.patch
swap-add-comments-to-lock_cluster_or_swap_info.patch
mm-swapfilec-replace-some-ifdef-with-is_enabled.patch
swap-use-swap_count-in-swap_page_trans_huge_swapped.patch
swap-unify-normal-huge-code-path-in-swap_page_trans_huge_swapped.patch
swap-unify-normal-huge-code-path-in-put_swap_page.patch
swap-get_swap_pages-use-entry_size-instead-of-cluster-in-parameter.patch
swap-add-__swap_entry_free_locked.patch
swap-put_swap_page-share-more-between-huge-normal-code-path.patch
mm-swap-fix-race-between-swapoff-and-some-swap-operations.patch
mm-swap-fix-race-between-swapoff-and-some-swap-operations-v6.patch
mm-fix-race-between-swapoff-and-mincore.patch

--
To unsubscribe from this list: send the line "unsubscribe mm-commits" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html


