The patch titled
     Subject: mm/list_lru.c: move locking from __list_lru_walk_one() to its caller
has been added to the -mm tree.  Its filename is
     mm-list_lru-move-locking-from-__list_lru_walk_one-to-its-caller.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/mm-list_lru-move-locking-from-__list_lru_walk_one-to-its-caller.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/mm-list_lru-move-locking-from-__list_lru_walk_one-to-its-caller.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>
Subject: mm/list_lru.c: move locking from __list_lru_walk_one() to its caller

Move the locking inside __list_lru_walk_one() to its caller.  This is a
preparation step for introducing list_lru_walk_one_irq(), which does
spin_lock_irq() instead of spin_lock() for the locking.

Link: http://lkml.kernel.org/r/20180716111921.5365-3-bigeasy@xxxxxxxxxxxxx
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>
Reviewed-by: Vladimir Davydov <vdavydov.dev@xxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/list_lru.c |   18 +++++++++++++-----
 1 file changed, 13 insertions(+), 5 deletions(-)

diff -puN mm/list_lru.c~mm-list_lru-move-locking-from-__list_lru_walk_one-to-its-caller mm/list_lru.c
--- a/mm/list_lru.c~mm-list_lru-move-locking-from-__list_lru_walk_one-to-its-caller
+++ a/mm/list_lru.c
@@ -219,7 +219,6 @@ __list_lru_walk_one(struct list_lru *lru
 	struct list_head *item, *n;
 	unsigned long isolated = 0;
 
-	spin_lock(&nlru->lock);
 	l = list_lru_from_memcg_idx(nlru, memcg_idx);
 restart:
 	list_for_each_safe(item, n, &l->list) {
@@ -265,8 +264,6 @@ restart:
 			BUG();
 		}
 	}
-
-	spin_unlock(&nlru->lock);
 	return isolated;
 }
 
@@ -275,8 +272,14 @@ list_lru_walk_one(struct list_lru *lru,
 		  list_lru_walk_cb isolate, void *cb_arg,
 		  unsigned long *nr_to_walk)
 {
-	return __list_lru_walk_one(lru, nid, memcg_cache_id(memcg),
-				   isolate, cb_arg, nr_to_walk);
+	struct list_lru_node *nlru = &lru->node[nid];
+	unsigned long ret;
+
+	spin_lock(&nlru->lock);
+	ret = __list_lru_walk_one(lru, nid, memcg_cache_id(memcg),
+				  isolate, cb_arg, nr_to_walk);
+	spin_unlock(&nlru->lock);
+	return ret;
 }
 EXPORT_SYMBOL_GPL(list_lru_walk_one);
 
@@ -291,8 +294,13 @@ unsigned long list_lru_walk_node(struct
 						nr_to_walk);
 	if (*nr_to_walk > 0 && list_lru_memcg_aware(lru)) {
 		for_each_memcg_cache_index(memcg_idx) {
+			struct list_lru_node *nlru = &lru->node[nid];
+
+			spin_lock(&nlru->lock);
 			isolated += __list_lru_walk_one(lru, nid, memcg_idx,
 						isolate, cb_arg, nr_to_walk);
+			spin_unlock(&nlru->lock);
+
 			if (*nr_to_walk <= 0)
 				break;
 		}
_
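
For reference, the changelog above motivates this change with a planned
list_lru_walk_one_irq().  The sketch below is only an illustration of how
such a wrapper could look once the lock is taken by the caller; it is not
part of this patch, and its signature is assumed to mirror
list_lru_walk_one():

unsigned long
list_lru_walk_one_irq(struct list_lru *lru, int nid, struct mem_cgroup *memcg,
		      list_lru_walk_cb isolate, void *cb_arg,
		      unsigned long *nr_to_walk)
{
	struct list_lru_node *nlru = &lru->node[nid];
	unsigned long ret;

	/*
	 * Same walk as list_lru_walk_one(), but the per-node lock is taken
	 * with spin_lock_irq() instead of spin_lock(), as described in the
	 * changelog.  (Illustrative sketch, not taken from the series.)
	 */
	spin_lock_irq(&nlru->lock);
	ret = __list_lru_walk_one(lru, nid, memcg_cache_id(memcg),
				  isolate, cb_arg, nr_to_walk);
	spin_unlock_irq(&nlru->lock);
	return ret;
}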

Patches currently in -mm which might be from bigeasy@xxxxxxxxxxxxx are

ntfs-dont-disable-interrupts-during-kmap_atomic.patch
mm-workingset-remove-local_irq_disable-from-count_shadow_nodes.patch
mm-workingset-make-shadow_lru_isolate-use-locking-suffix.patch
mm-list_lru-use-list_lru_walk_one-in-list_lru_walk_node.patch
mm-list_lru-move-locking-from-__list_lru_walk_one-to-its-caller.patch
mm-list_lru-pass-struct-list_lru_node-as-an-argument-__list_lru_walk_one.patch
mm-list_lru-introduce-list_lru_shrink_walk_irq.patch
bdi-use-refcount_t-for-reference-counting-instead-atomic_t.patch
userns-use-refcount_t-for-reference-counting-instead-atomic_t.patch

--
To unsubscribe from this list: send the line "unsubscribe mm-commits" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html