The patch titled
     Subject: mm/list_lru.c: use list_lru_walk_one() in list_lru_walk_node()
has been removed from the -mm tree.  Its filename was
     mm-list_lru-use-list_lru_walk_one-in-list_lru_walk_node.patch

This patch was dropped because it was merged into mainline or a subsystem tree

------------------------------------------------------
From: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>
Subject: mm/list_lru.c: use list_lru_walk_one() in list_lru_walk_node()

Patch series "mm/list_lru: Add list_lru_shrink_walk_irq() and a user".

This series removes the local_irq_disable() around list_lru_shrink_walk()
(as used by mm/workingset) by adding list_lru_shrink_walk_irq().  Vladimir
Davydov preferred this over an `irq' argument which I had added to struct
list_lru.

The initial post of this series received a Reviewed-by tag from Vladimir
Davydov, which I added to each patch of the series.  The series applies on
top of akpm's tree, which has Kirill's shrink_slab series, and does not
clash with it (akpm asked me to wait a week or so and then repost it).

I tested the code paths by triggering the OOM-killer via memory
overcommit, and lockdep did not complain (nor did I see any warnings).

This patch (of 4):

list_lru_walk_node() invokes __list_lru_walk_one() with -1 as the
memcg_idx parameter.  The same can be achieved with list_lru_walk_one() by
passing NULL as the memcg argument, which then gets converted into -1.
This is a preparation step for when the spin_lock() is lifted to the
caller of __list_lru_walk_one().

Invoke list_lru_walk_one() instead of __list_lru_walk_one() where possible.

Link: http://lkml.kernel.org/r/20180716111921.5365-2-bigeasy@xxxxxxxxxxxxx
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>
Reviewed-by: Vladimir Davydov <vdavydov.dev@xxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/list_lru.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

--- a/mm/list_lru.c~mm-list_lru-use-list_lru_walk_one-in-list_lru_walk_node
+++ a/mm/list_lru.c
@@ -287,8 +287,8 @@ unsigned long list_lru_walk_node(struct
 	long isolated = 0;
 	int memcg_idx;
 
-	isolated += __list_lru_walk_one(lru, nid, -1, isolate, cb_arg,
-					nr_to_walk);
+	isolated += list_lru_walk_one(lru, nid, NULL, isolate, cb_arg,
+				      nr_to_walk);
 	if (*nr_to_walk > 0 && list_lru_memcg_aware(lru)) {
 		for_each_memcg_cache_index(memcg_idx) {
 			isolated += __list_lru_walk_one(lru, nid, memcg_idx,
_

Patches currently in -mm which might be from bigeasy@xxxxxxxxxxxxx are

bdi-use-refcount_t-for-reference-counting-instead-atomic_t.patch
userns-use-refcount_t-for-reference-counting-instead-atomic_t.patch
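
For readers unfamiliar with the conversion described in the changelog, the
stand-alone C sketch below models how a NULL memcg argument to
list_lru_walk_one() ends up as memcg index -1, i.e. the same value
list_lru_walk_node() used to pass to __list_lru_walk_one() directly.  Only
the NULL -> -1 mapping is taken from the changelog above; the function
bodies are simplified stubs written for illustration and are not the
kernel implementation.

/*
 * Stand-alone sketch (not kernel code): names mirror the kernel's, but the
 * bodies are simplified stubs that only show the argument conversion.
 */
#include <stdio.h>
#include <stddef.h>

struct mem_cgroup;			/* opaque, as in the kernel */

/* Assumption: a NULL memcg maps to index -1, a real memcg to an index >= 0. */
static int memcg_cache_id(const struct mem_cgroup *memcg)
{
	return memcg ? 0 : -1;
}

static void __list_lru_walk_one(int nid, int memcg_idx)
{
	printf("walk nid=%d memcg_idx=%d\n", nid, memcg_idx);
}

static void list_lru_walk_one(int nid, const struct mem_cgroup *memcg)
{
	/* The NULL memcg is converted into -1 here ... */
	__list_lru_walk_one(nid, memcg_cache_id(memcg));
}

int main(void)
{
	/* ... so this call is equivalent to the old __list_lru_walk_one(lru, nid, -1, ...). */
	list_lru_walk_one(0, NULL);
	return 0;
}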