Low reclaim efficiency occurs when many pages are scanned that cannot
be reclaimed, for example because they are dirty or under writeback.
Node-based LRU reclaim introduces a new source: reclaim on behalf of
allocation requests that require lower zones will skip pages belonging
to higher zones. This patch adds vmstat counters for pages that were
skipped because the calling context could not use pages from their
zone. It will help distinguish one source of low reclaim efficiency.

Signed-off-by: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
---
 include/linux/vm_event_item.h | 1 +
 mm/vmscan.c                   | 1 +
 mm/vmstat.c                   | 2 ++
 3 files changed, 4 insertions(+)

diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
index 8dcb5a813163..cadaa0f05f67 100644
--- a/include/linux/vm_event_item.h
+++ b/include/linux/vm_event_item.h
@@ -26,6 +26,7 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
 		PGFREE, PGACTIVATE, PGDEACTIVATE,
 		PGFAULT, PGMAJFAULT,
 		PGLAZYFREED,
+		FOR_ALL_ZONES(PGSCAN_SKIP),
 		PGREFILL,
 		PGSTEAL_KSWAPD,
 		PGSTEAL_DIRECT,
diff --git a/mm/vmscan.c b/mm/vmscan.c
index e92765eb0a1e..a5302b86c032 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1386,6 +1386,7 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
 
 		if (page_zonenum(page) > sc->reclaim_idx) {
 			list_move(&page->lru, &pages_skipped);
+			__count_zone_vm_events(PGSCAN_SKIP, page_zone(page), 1);
 			continue;
 		}
 
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 8562ebe2d311..4d8617b02032 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1007,6 +1007,8 @@ const char * const vmstat_text[] = {
 	"pgmajfault",
 	"pglazyfreed",
 
+	TEXTS_FOR_ZONES("pgskip")
+
 	"pgrefill",
 	"pgsteal_kswapd",
 	"pgsteal_direct",
-- 
2.6.4