On Thu, Oct 26, 2017 at 04:48:54PM -0700, Andi Kleen wrote:
>  static unsigned long scan_shadow_nodes(struct shrinker *shrinker,
>  				        struct shrink_control *sc)
>  {
> +	struct list_head *tmp, *pos;
>  	unsigned long ret;
> +	LIST_HEAD(nodes);
> +	spinlock_t *lock = NULL;
>
> -	/* list_lru lock nests inside IRQ-safe mapping->tree_lock */
> +	ret = list_lru_shrink_walk(&shadow_nodes, sc, shadow_lru_isolate, &nodes);
>  	local_irq_disable();
> -	ret = list_lru_shrink_walk(&shadow_nodes, sc, shadow_lru_isolate, NULL);
> +	list_for_each_safe (pos, tmp, &nodes)
> +		free_shadow_node(pos, &lock);

The nlru->lock in list_lru_shrink_walk() is the only thing that keeps
truncation blocked on workingset_update_node() -> list_lru_del(), and so
ultimately keeps it from freeing the radix tree node. It's not safe to
access the nodes on the private list after that.

Batching mapping->tree_lock is possible, but you have to keep the
lock-handoff scheme. Pass a &mapping to list_lru_shrink_walk() and only
unlock and spin_trylock(&mapping->tree_lock) if it changes?