- swap-prefetch-fix-lru_cache_add_tail.patch removed from -mm tree

The patch titled

     swap-prefetch: fix lru_cache_add_tail()

has been removed from the -mm tree.  Its filename is

     swap-prefetch-fix-lru_cache_add_tail.patch

This patch was dropped because it was folded into another patch

------------------------------------------------------
Subject: swap-prefetch: fix lru_cache_add_tail()
From: Peter Zijlstra <a.p.zijlstra@xxxxxxxxx>


lru_cache_add_tail() uses the inactive per-cpu pagevec.  This causes normal
inactive and inactive-tail inserts to end up on the wrong end of the list.

When the pagevec is filled up by lru_cache_add_tail() but still contains
normal inactive pages, all of those pages are added to the inactive tail,
and vice versa.

Also, *add_drain*() will always drain to the inactive head.

Add a third per-cpu pagevec to alleviate this problem.
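
To make the mixing concrete, here is a minimal userspace sketch -- not the
kernel code; PAGEVEC_SIZE, the page numbers and the flush helpers are
invented for illustration -- of how head-bound and tail-bound pages sharing
one batching buffer end up flushed to the same end of the list:

/*
 * Userspace model of the bug: head-bound and tail-bound pages share one
 * batching buffer, so whichever flush routine runs when the buffer fills
 * sends *all* batched pages to the same end of the list.
 */
#include <stdio.h>

#define PAGEVEC_SIZE 4

struct pagevec { int nr; int pages[PAGEVEC_SIZE]; };

static void flush_to_head(struct pagevec *pv)
{
	for (int i = 0; i < pv->nr; i++)
		printf("page %d -> list HEAD\n", pv->pages[i]);
	pv->nr = 0;
}

static void flush_to_tail(struct pagevec *pv)
{
	for (int i = 0; i < pv->nr; i++)
		printf("page %d -> list TAIL\n", pv->pages[i]);
	pv->nr = 0;
}

/* returns 0 once the pagevec is full, mirroring pagevec_add() */
static int pagevec_add(struct pagevec *pv, int page)
{
	pv->pages[pv->nr++] = page;
	return PAGEVEC_SIZE - pv->nr;
}

int main(void)
{
	struct pagevec shared = { 0 };

	/* three normal (head-bound) adds batch up without flushing ... */
	for (int page = 1; page <= 3; page++)
		if (!pagevec_add(&shared, page))
			flush_to_head(&shared);

	/*
	 * ... then one tail-bound add fills the shared pagevec: all four
	 * pages, including the head-bound ones, are flushed to the tail.
	 * Giving tail adds their own per-cpu pagevec avoids this.
	 */
	if (!pagevec_add(&shared, 4))
		flush_to_tail(&shared);

	return 0;
}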

Signed-off-by: Peter Zijlstra <a.p.zijlstra@xxxxxxxxx>
Acked-by: Con Kolivas <kernel@xxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxx>
---

 mm/swap.c |    8 +++++++-
 1 files changed, 7 insertions(+), 1 deletion(-)

diff -puN mm/swap.c~swap-prefetch-fix-lru_cache_add_tail mm/swap.c
--- devel/mm/swap.c~swap-prefetch-fix-lru_cache_add_tail	2006-05-18 00:51:57.000000000 -0700
+++ devel-akpm/mm/swap.c	2006-05-18 00:57:03.000000000 -0700
@@ -138,6 +138,7 @@ EXPORT_SYMBOL(mark_page_accessed);
  */
 static DEFINE_PER_CPU(struct pagevec, lru_add_pvecs) = { 0, };
 static DEFINE_PER_CPU(struct pagevec, lru_add_active_pvecs) = { 0, };
+static DEFINE_PER_CPU(struct pagevec, lru_add_tail_pvecs) = { 0, };
 
 void fastcall lru_cache_add(struct page *page)
 {
@@ -159,6 +160,8 @@ void fastcall lru_cache_add_active(struc
 	put_cpu_var(lru_add_active_pvecs);
 }
 
+static inline void __pagevec_lru_add_tail(struct pagevec *pvec);
+
 static void __lru_add_drain(int cpu)
 {
 	struct pagevec *pvec = &per_cpu(lru_add_pvecs, cpu);
@@ -169,6 +172,9 @@ static void __lru_add_drain(int cpu)
 	pvec = &per_cpu(lru_add_active_pvecs, cpu);
 	if (pagevec_count(pvec))
 		__pagevec_lru_add_active(pvec);
+	pvec = &per_cpu(lru_add_tail_pvecs, cpu);
+	if (pagevec_count(pvec))
+		__pagevec_lru_add_tail(pvec);
 }
 
 void lru_add_drain(void)
@@ -416,7 +422,7 @@ static inline void __pagevec_lru_add_tai
  */
 void fastcall lru_cache_add_tail(struct page *page)
 {
-	struct pagevec *pvec = &get_cpu_var(lru_add_pvecs);
+	struct pagevec *pvec = &get_cpu_var(lru_add_tail_pvecs);
 
 	page_cache_get(page);
 	if (!pagevec_add(pvec, page))
_

Patches currently in -mm which might be from a.p.zijlstra@xxxxxxxxx are

buglet-in-radix_tree_tag_set.patch
mm-implement-swap-prefetching.patch
swap-prefetch-fix-lru_cache_add_tail.patch
swap-prefetch-fix-lru_cache_add_tail-tidy.patch

