Re: [RFC 0/3] soft reclaim rework

On Tue 09-04-13 14:13:12, Michal Hocko wrote:
[...]
> 2) kbuild test showed more or less the same results
> usage_in_bytes
> Base
> 		Group A		Group B
> Median		394817536	395634688
> 
> Patches applied
> median		483481600	302131200
> 
> A is kept closer to the soft limit again. There is some fluctuation
> around the limit because kbuild creates a lot of short lived processes.
> Base: 	 pgscan_kswapd_dma32 1648718	pgsteal_kswapd_dma32 1510749
> Patched: pgscan_kswapd_dma32 2042065	pgsteal_kswapd_dma32 1667745

OK, so I have patched the base version with the patch below, which makes
the soft reclaim scanning and reclaim visible in the vmstat counters,
and guess what:
Base:	 pgscan_kswapd_dma32 3710092	pgsteal_kswapd_dma32 3225191
Patched: pgscan_kswapd_dma32 1846700	pgsteal_kswapd_dma32 1442232
Base:	 pgscan_direct_dma32 2417683	pgsteal_direct_dma32 459702
Patched: pgscan_direct_dma32 1839331	pgsteal_direct_dma32 244338

The numbers are obviously timing dependent (~10% variation for the
patched kernel wrt. the previous run), but the roughly halved scan
counts wrt. the base kernel seem real; we just haven't seen the
difference previously because the soft reclaim activity wasn't
accounted. I guess this can be attributed to the prio-0 soft reclaim
behavior and a lot of dirty pages on the LRU.

> The differences are much bigger now so it would be interesting how much
> has been scanned/reclaimed during soft reclaim in the base kernel.
---
From 82761298527333eeecdf134b7426f95254b3e78c Mon Sep 17 00:00:00 2001
From: Michal Hocko <mhocko@xxxxxxx>
Date: Tue, 9 Apr 2013 15:46:53 +0200
Subject: [PATCH] account soft limit reclaim as it is part of the global
 reclaim currently

---
 mm/vmscan.c |   10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index df78d17..4dcc2ea 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -92,6 +92,7 @@ struct scan_control {
 	 * are scanned.
 	 */
 	nodemask_t	*nodemask;
+	bool soft_reclaim;
 };
 
 #define lru_to_page(_head) (list_entry((_head)->prev, struct page, lru))
@@ -138,6 +139,10 @@ static bool global_reclaim(struct scan_control *sc)
 {
 	return !sc->target_mem_cgroup;
 }
+static bool soft_reclaim(struct scan_control *sc)
+{
+	return sc->soft_reclaim;
+}
 #else
 static bool global_reclaim(struct scan_control *sc)
 {
@@ -1309,7 +1314,7 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
 	__mod_zone_page_state(zone, NR_LRU_BASE + lru, -nr_taken);
 	__mod_zone_page_state(zone, NR_ISOLATED_ANON + file, nr_taken);
 
-	if (global_reclaim(sc)) {
+	if (global_reclaim(sc) || soft_reclaim(sc)) {
 		zone->pages_scanned += nr_scanned;
 		if (current_is_kswapd())
 			__count_zone_vm_events(PGSCAN_KSWAPD, zone, nr_scanned);
@@ -1328,7 +1333,7 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
 
 	reclaim_stat->recent_scanned[file] += nr_taken;
 
-	if (global_reclaim(sc)) {
+	if (global_reclaim(sc) || soft_reclaim(sc)) {
 		if (current_is_kswapd())
 			__count_zone_vm_events(PGSTEAL_KSWAPD, zone,
 					       nr_reclaimed);
@@ -2401,6 +2406,7 @@ unsigned long mem_cgroup_shrink_node_zone(struct mem_cgroup *memcg,
 		.order = 0,
 		.priority = 0,
 		.target_mem_cgroup = memcg,
+		.soft_reclaim = true,
 	};
 	struct lruvec *lruvec = mem_cgroup_zone_lruvec(zone, memcg);
 
-- 
1.7.10.4

-- 
Michal Hocko
SUSE Labs

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@xxxxxxxxx.  For more info on Linux MM,
see: http://www.linux-mm.org/ .