(Feedback welcome if there is a different/better way to do this without using a page flag!)

Since about 2.6.27, the page replacement algorithm maintains an "active" bit to help decide which pages are most eligible to reclaim; see http://linux-mm.org/PageReplacementDesign

This "active" information is also useful to cleancache, but it is lost by the time cleancache has the opportunity to preserve the pageful of data. This patch adds a new page flag, "WasActive", to retain that state. The flag may possibly be useful elsewhere; it is up to each cleancache backend to utilize the bit as it desires.

The matching patch for zcache is included here for clarification/discussion purposes, though it will need to go through GregKH and the staging tree.

The patch resolves issues reported with cleancache, which occur especially during streaming workloads on older processors; see https://lkml.org/lkml/2011/8/17/351

Signed-off-by: Dan Magenheimer <dan.magenheimer@xxxxxxxxxx>

diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index e90a673..0f5e86a 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -51,6 +51,9 @@
  * PG_hwpoison indicates that a page got corrupted in hardware and contains
  * data with incorrect ECC bits that triggered a machine check. Accessing is
  * not safe since it may cause another machine check. Don't touch!
+ *
+ * PG_wasactive reflects that a page previously was promoted to active status.
+ * Such pages should be considered higher priority for cleancache backends.
  */

 /*
@@ -107,6 +110,9 @@ enum pageflags {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	PG_compound_lock,
 #endif
+#ifdef CONFIG_CLEANCACHE
+	PG_was_active,
+#endif
 	__NR_PAGEFLAGS,

 	/* Filesystems */
@@ -209,6 +215,10 @@ PAGEFLAG(SwapBacked, swapbacked) __CLEARPAGEFLAG(SwapBacked, swapbacked)
 __PAGEFLAG(SlobFree, slob_free)

+#ifdef CONFIG_CLEANCACHE
+PAGEFLAG(WasActive, was_active)
+#endif
+
 /*
  * Private page markings that may be used by the filesystem that owns the page
  * for its own purposes.
diff --git a/mm/vmscan.c b/mm/vmscan.c
index c52b235..fdd9e88 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -636,6 +636,8 @@ void putback_lru_page(struct page *page)
 	int was_unevictable = PageUnevictable(page);

 	VM_BUG_ON(PageLRU(page));
+	if (active)
+		SetPageWasActive(page);

 redo:
 	ClearPageUnevictable(page);
@@ -1429,6 +1431,7 @@ update_isolated_counts(struct mem_cgroup_zone *mz,
 		if (PageActive(page)) {
 			lru += LRU_ACTIVE;
 			ClearPageActive(page);
+			SetPageWasActive(page);
 			nr_active += numpages;
 		}
 		count[lru] += numpages;
@@ -1755,6 +1758,7 @@ static void shrink_active_list(unsigned long nr_to_scan,
 		}

 		ClearPageActive(page);	/* we are de-activating */
+		SetPageWasActive(page);
 		list_add(&page->lru, &l_inactive);
 	}
diff --git a/drivers/staging/zcache/zcache-main.c b/drivers/staging/zcache/zcache-main.c
index 642840c..8c81ec2 100644
--- a/drivers/staging/zcache/zcache-main.c
+++ b/drivers/staging/zcache/zcache-main.c
@@ -1696,6 +1696,8 @@ static void zcache_cleancache_put_page(int pool_id,
 	u32 ind = (u32) index;
 	struct tmem_oid oid = *(struct tmem_oid *)&key;

+	if (!PageWasActive(page))
+		return;
 	if (likely(ind == index))
 		(void)zcache_put_page(LOCAL_CLIENT, pool_id, &oid, index, page);
 }
@@ -1710,6 +1712,8 @@

 	if (likely(ind == index))
 		ret = zcache_get_page(LOCAL_CLIENT, pool_id, &oid, index, page);
+	if (ret == 0)
+		SetPageWasActive(page);
 	return ret;
 }