On Tue, Jun 14, 2022 at 06:15:14PM +0100, Catalin Marinas wrote:
> On Sun, Jun 12, 2022 at 02:33:01PM -0400, Waiman Long wrote:
> > @@ -1437,10 +1440,25 @@ static void kmemleak_scan(void)
> >  #endif
> >  		/* reset the reference count (whiten the object) */
> >  		object->count = 0;
> > -		if (color_gray(object) && get_object(object))
> > +		if (color_gray(object) && get_object(object)) {
> >  			list_add_tail(&object->gray_list, &gray_list);
> > +			gray_list_cnt++;
> > +			object_pinned = true;
> > +		}
> >  
> >  		raw_spin_unlock_irq(&object->lock);
> > +
> > +		/*
> > +		 * With object pinned by a positive reference count, it
> > +		 * won't go away and we can safely release the RCU read
> > +		 * lock and do a cond_resched() to avoid soft lockup every
> > +		 * 64k objects.
> > +		 */
> > +		if (object_pinned && !(gray_list_cnt & 0xffff)) {
> > +			rcu_read_unlock();
> > +			cond_resched();
> > +			rcu_read_lock();
> > +		}
> 
> I'm not sure this gains much. There should be very few gray objects
> initially (those passed to kmemleak_not_leak() for example). The
> majority should be white objects.
> 
> If we drop the fine-grained object->lock, we could instead take
> kmemleak_lock outside the loop with a cond_resched_lock(&kmemleak_lock)
> within the loop. I think we can get away with not having an
> rcu_read_lock() at all for list traversal with the big lock outside the
> loop.

Actually this doesn't work if the current object in the iteration is
freed. Does list_for_each_rcu_safe() help?

-- 
Catalin
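
To make the big-lock idea above concrete, here is only a rough sketch, not a
tested patch: it reuses the existing kmemleak_lock, object_list and
object->lock names from mm/kmemleak.c, open-codes the unlock/resched/lock
step (kmemleak_lock is a raw spinlock, so cond_resched_lock() would not apply
directly), and, as just noted, is still unsafe as written because the object
the iterator points at may be freed once the lock is dropped:

	/*
	 * Illustrative sketch only: walk object_list under kmemleak_lock
	 * instead of rcu_read_lock(), dropping the lock around
	 * cond_resched() to avoid soft lockups on huge object lists.
	 */
	raw_spin_lock_irq(&kmemleak_lock);
	list_for_each_entry(object, &object_list, object_list) {
		raw_spin_lock_nested(&object->lock, SINGLE_DEPTH_NESTING);
		/* whiten the object and collect gray ones, as before */
		object->count = 0;
		if (color_gray(object) && get_object(object))
			list_add_tail(&object->gray_list, &gray_list);
		raw_spin_unlock(&object->lock);

		if (need_resched()) {
			raw_spin_unlock_irq(&kmemleak_lock);
			cond_resched();
			/* object may have been freed while unlocked */
			raw_spin_lock_irq(&kmemleak_lock);
		}
	}
	raw_spin_unlock_irq(&kmemleak_lock);

Making this safe would still need either a way to pin the current object
across the resched (as in the patch quoted above) or an iterator that
tolerates the current entry disappearing while the lock is dropped.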