On Sun, Jun 12, 2022 at 02:33:01PM -0400, Waiman Long wrote:
> @@ -1437,10 +1440,25 @@ static void kmemleak_scan(void)
>  #endif
>  		/* reset the reference count (whiten the object) */
>  		object->count = 0;
> -		if (color_gray(object) && get_object(object))
> +		if (color_gray(object) && get_object(object)) {
>  			list_add_tail(&object->gray_list, &gray_list);
> +			gray_list_cnt++;
> +			object_pinned = true;
> +		}
> 
>  		raw_spin_unlock_irq(&object->lock);
> +
> +		/*
> +		 * With object pinned by a positive reference count, it
> +		 * won't go away and we can safely release the RCU read
> +		 * lock and do a cond_resched() to avoid soft lockup every
> +		 * 64k objects.
> +		 */
> +		if (object_pinned && !(gray_list_cnt & 0xffff)) {
> +			rcu_read_unlock();
> +			cond_resched();
> +			rcu_read_lock();
> +		}

I'm not sure this gains much. There should be very few gray objects
initially (those passed to kmemleak_not_leak() for example). The
majority should be white objects.

If we drop the fine-grained object->lock, we could instead take
kmemleak_lock outside the loop with a cond_resched_lock(&kmemleak_lock)
within the loop.

I think we can get away with not having an rcu_read_lock() at all for
list traversal with the big lock outside the loop. The reason I added it
in the first kmemleak incarnation was to defer kmemleak_object freeing
as it was causing a re-entrant call into the slab allocator. I later
went for fine-grained locking and RCU list traversal but I may have
overdone it ;).

-- 
Catalin
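
A minimal sketch of the kmemleak_lock + cond_resched_lock() shape
suggested above, for illustration only (untested). It glosses over
details the real code would have to settle: kmemleak_lock is currently a
raw spinlock taken with IRQs disabled, while cond_resched_lock() expects
a plain spinlock_t held without disabling IRQs, and the walk has to be
able to resume safely across the points where the lock is dropped (for
example by pinning the current object with get_object(), as Waiman's
patch does around its resched point).

	/*
	 * Sketch only: walk object_list under kmemleak_lock instead of
	 * RCU + object->lock, letting cond_resched_lock() drop and
	 * re-take the lock when a reschedule is due or the lock is
	 * contended.
	 *
	 * Assumes kmemleak_lock is a spinlock_t that can be held here
	 * without disabling interrupts (cond_resched_lock() may
	 * schedule), and that the traversal can safely continue after
	 * the lock has been dropped and re-taken.
	 */
	spin_lock(&kmemleak_lock);
	list_for_each_entry(object, &object_list, object_list) {
		/* reset the reference count (whiten the object) */
		object->count = 0;
		if (color_gray(object) && get_object(object))
			list_add_tail(&object->gray_list, &gray_list);

		/* drops/re-takes the lock only when rescheduling is needed */
		cond_resched_lock(&kmemleak_lock);
	}
	spin_unlock(&kmemleak_lock);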