The patch titled
     Subject: mm, vmscan: release/reacquire lru_lock on pgdat change
has been added to the -mm tree.  Its filename is
     mm-vmscan-release-reacquire-lru_lock-on-pgdat-change.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/mm-vmscan-release-reacquire-lru_lock-on-pgdat-change.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/mm-vmscan-release-reacquire-lru_lock-on-pgdat-change.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/SubmitChecklist when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
Subject: mm, vmscan: release/reacquire lru_lock on pgdat change

With node-lru, the locking is based on the pgdat.  As Minchan pointed
out, there is an opportunity to reduce LRU lock release/acquire in
check_move_unevictable_pages by only changing the lock on a pgdat change.

Link: http://lkml.kernel.org/r/1468853426-12858-3-git-send-email-mgorman@xxxxxxxxxxxxxxxxxxx
Signed-off-by: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
Acked-by: Johannes Weiner <hannes@xxxxxxxxxxx>
Cc: Minchan Kim <minchan@xxxxxxxxxx>
Cc: Vlastimil Babka <vbabka@xxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/vmscan.c |   22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff -puN mm/vmscan.c~mm-vmscan-release-reacquire-lru_lock-on-pgdat-change mm/vmscan.c
--- a/mm/vmscan.c~mm-vmscan-release-reacquire-lru_lock-on-pgdat-change
+++ a/mm/vmscan.c
@@ -3774,24 +3774,24 @@ int page_evictable(struct page *page)
 void check_move_unevictable_pages(struct page **pages, int nr_pages)
 {
 	struct lruvec *lruvec;
-	struct zone *zone = NULL;
+	struct pglist_data *pgdat = NULL;
 	int pgscanned = 0;
 	int pgrescued = 0;
 	int i;
 
 	for (i = 0; i < nr_pages; i++) {
 		struct page *page = pages[i];
-		struct zone *pagezone;
+		struct pglist_data *pagepgdat = page_pgdat(page);
 
 		pgscanned++;
-		pagezone = page_zone(page);
-		if (pagezone != zone) {
-			if (zone)
-				spin_unlock_irq(zone_lru_lock(zone));
-			zone = pagezone;
-			spin_lock_irq(zone_lru_lock(zone));
+		pagepgdat = page_pgdat(page);
+		if (pagepgdat != pgdat) {
+			if (pgdat)
+				spin_unlock_irq(&pgdat->lru_lock);
+			pgdat = pagepgdat;
+			spin_lock_irq(&pgdat->lru_lock);
 		}
-		lruvec = mem_cgroup_page_lruvec(page, zone->zone_pgdat);
+		lruvec = mem_cgroup_page_lruvec(page, pgdat);
 
 		if (!PageLRU(page) || !PageUnevictable(page))
 			continue;
@@ -3807,10 +3807,10 @@ void check_move_unevictable_pages(struct
 		}
 	}
 
-	if (zone) {
+	if (pgdat) {
 		__count_vm_events(UNEVICTABLE_PGRESCUED, pgrescued);
 		__count_vm_events(UNEVICTABLE_PGSCANNED, pgscanned);
-		spin_unlock_irq(zone_lru_lock(zone));
+		spin_unlock_irq(&pgdat->lru_lock);
 	}
 }
 #endif /* CONFIG_SHMEM */
_

Patches currently in -mm which might be from mgorman@xxxxxxxxxxxxxxxxxxx are

mm-meminit-remove-early_page_nid_uninitialised.patch
mm-vmstat-add-infrastructure-for-per-node-vmstats.patch
mm-vmscan-move-lru_lock-to-the-node.patch
mm-vmscan-move-lru-lists-to-node.patch
mm-mmzone-clarify-the-usage-of-zone-padding.patch
mm-vmscan-begin-reclaiming-pages-on-a-per-node-basis.patch
mm-vmscan-have-kswapd-only-scan-based-on-the-highest-requested-zone.patch
mm-vmscan-make-kswapd-reclaim-in-terms-of-nodes.patch
mm-vmscan-remove-balance-gap.patch
mm-vmscan-simplify-the-logic-deciding-whether-kswapd-sleeps.patch
mm-vmscan-by-default-have-direct-reclaim-only-shrink-once-per-node.patch
mm-vmscan-remove-duplicate-logic-clearing-node-congestion-and-dirty-state.patch
mm-vmscan-do-not-reclaim-from-kswapd-if-there-is-any-eligible-zone.patch
mm-vmscan-make-shrink_node-decisions-more-node-centric.patch
mm-vmscan-make-shrink_node-decisions-more-node-centric-fix.patch
mm-memcg-move-memcg-limit-enforcement-from-zones-to-nodes.patch
mm-workingset-make-working-set-detection-node-aware.patch
mm-page_alloc-consider-dirtyable-memory-in-terms-of-nodes.patch
mm-move-page-mapped-accounting-to-the-node.patch
mm-rename-nr_anon_pages-to-nr_anon_mapped.patch
mm-move-most-file-based-accounting-to-the-node.patch
mm-move-most-file-based-accounting-to-the-node-fix.patch
mm-move-vmscan-writes-and-file-write-accounting-to-the-node.patch
mm-vmscan-only-wakeup-kswapd-once-per-node-for-the-requested-classzone.patch
mm-page_alloc-wake-kswapd-based-on-the-highest-eligible-zone.patch
mm-convert-zone_reclaim-to-node_reclaim.patch
mm-vmscan-avoid-passing-in-classzone_idx-unnecessarily-to-shrink_node.patch
mm-vmscan-avoid-passing-in-classzone_idx-unnecessarily-to-compaction_ready.patch
mm-vmscan-avoid-passing-in-classzone_idx-unnecessarily-to-compaction_ready-fix.patch
mm-vmscan-avoid-passing-in-remaining-unnecessarily-to-prepare_kswapd_sleep.patch
mm-vmscan-have-kswapd-reclaim-from-all-zones-if-reclaiming-and-buffer_heads_over_limit.patch
mm-vmscan-have-kswapd-reclaim-from-all-zones-if-reclaiming-and-buffer_heads_over_limit-fix.patch
mm-vmscan-add-classzone-information-to-tracepoints.patch
mm-page_alloc-remove-fair-zone-allocation-policy.patch
mm-page_alloc-cache-the-last-node-whose-dirty-limit-is-reached.patch
mm-vmstat-replace-__count_zone_vm_events-with-a-zone-id-equivalent.patch
mm-vmstat-account-per-zone-stalls-and-pages-skipped-during-reclaim.patch
mm-vmstat-account-per-zone-stalls-and-pages-skipped-during-reclaim-fix.patch
mm-vmstat-print-node-based-stats-in-zoneinfo-file.patch
mm-vmstat-remove-zone-and-node-double-accounting-by-approximating-retries.patch
mm-vmstat-remove-zone-and-node-double-accounting-by-approximating-retries-fix.patch
mm-pagevec-release-reacquire-lru_lock-on-pgdat-change.patch
mm-vmscan-update-all-zone-lru-sizes-before-updating-memcg.patch
mm-vmscan-remove-redundant-check-in-shrink_zones.patch
mm-vmscan-release-reacquire-lru_lock-on-pgdat-change.patch

--
To unsubscribe from this list: send the line "unsubscribe mm-commits" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
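
For readers who want the locking pattern in isolation: the optimization in the
patch above is the generic "hold the lock across runs of items that share an
owner" idiom.  Below is a minimal userspace sketch of that idiom, with
hypothetical `owner`/`item` types and `process_locked()` standing in for
`struct pglist_data`, `struct page`, and the LRU work, and a pthread mutex in
place of the kernel's `spin_lock_irq(&pgdat->lru_lock)`.  It is an
illustration of the technique, not kernel code.

/*
 * Sketch of the lock-batching idiom: cycle the lock only when the
 * owning structure changes, instead of once per item.
 */
#include <pthread.h>
#include <stdio.h>

struct owner {				/* plays the role of struct pglist_data */
	pthread_mutex_t lock;		/* plays the role of pgdat->lru_lock */
};

struct item {				/* plays the role of struct page */
	struct owner *owner;		/* plays the role of page_pgdat(page) */
	int id;
};

static void process_locked(struct item *it)
{
	/* work that requires it->owner->lock to be held */
	printf("processing item %d\n", it->id);
}

static void process_items(struct item **items, int nr_items)
{
	struct owner *owner = NULL;	/* owner whose lock is currently held */
	int i;

	for (i = 0; i < nr_items; i++) {
		struct owner *itemowner = items[i]->owner;

		/* Release/reacquire only on an owner change. */
		if (itemowner != owner) {
			if (owner)
				pthread_mutex_unlock(&owner->lock);
			owner = itemowner;
			pthread_mutex_lock(&owner->lock);
		}
		process_locked(items[i]);
	}

	if (owner)
		pthread_mutex_unlock(&owner->lock);
}

int main(void)
{
	static struct owner a = { PTHREAD_MUTEX_INITIALIZER };
	static struct owner b = { PTHREAD_MUTEX_INITIALIZER };
	struct item i0 = { &a, 0 }, i1 = { &a, 1 }, i2 = { &b, 2 };
	struct item *batch[] = { &i0, &i1, &i2 };

	/* Takes a.lock once for items 0 and 1, then b.lock once for item 2. */
	process_items(batch, 3);
	return 0;
}

The win scales with how often consecutive pages in the batch share a pgdat; in
the worst case (pages alternating between nodes) the loop degenerates to one
unlock/lock pair per page, which is what the old per-page code paid anyway.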