The patch titled
     Subject: mm/kmemleak: prevent soft lockup in first object iteration loop of kmemleak_scan()
has been added to the -mm mm-unstable branch.  Its filename is
     mm-kmemleak-prevent-soft-lockup-in-first-object-iteration-loop-of-kmemleak_scan.patch

This patch will shortly appear at
  https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-kmemleak-prevent-soft-lockup-in-first-object-iteration-loop-of-kmemleak_scan.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Waiman Long <longman@xxxxxxxxxx>
Subject: mm/kmemleak: prevent soft lockup in first object iteration loop of kmemleak_scan()
Date: Sun, 12 Jun 2022 14:33:01 -0400

The first RCU-based object iteration loop has to put almost all the
objects into the gray list, so it cannot skip taking the object lock.
One way to avoid a soft lockup is to insert occasional cond_resched()
calls into the loop.  This cannot be done while holding the RCU read
lock, which protects objects from removal.  However, putting an object
into the gray list means taking a reference to the object.  That
reference prevents the object from being removed as well, without the
need to hold the RCU read lock.  So insert a cond_resched() call after
every 64k objects are put into the gray list.
Link: https://lkml.kernel.org/r/20220612183301.981616-4-longman@xxxxxxxxxx
Signed-off-by: Waiman Long <longman@xxxxxxxxxx>
Cc: Catalin Marinas <catalin.marinas@xxxxxxx>
Cc: Muchun Song <songmuchun@xxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/kmemleak.c |   20 +++++++++++++++++++-
 1 file changed, 19 insertions(+), 1 deletion(-)

--- a/mm/kmemleak.c~mm-kmemleak-prevent-soft-lockup-in-first-object-iteration-loop-of-kmemleak_scan
+++ a/mm/kmemleak.c
@@ -1474,12 +1474,15 @@ static void kmemleak_scan(void)
 	struct zone *zone;
 	int __maybe_unused i;
 	int new_leaks = 0;
+	int gray_list_cnt = 0;
 
 	jiffies_last_scan = jiffies;
 
 	/* prepare the kmemleak_object's */
 	rcu_read_lock();
 	list_for_each_entry_rcu(object, &object_list, object_list) {
+		bool object_pinned = false;
+
 		raw_spin_lock_irq(&object->lock);
 #ifdef DEBUG
 		/*
@@ -1505,10 +1508,25 @@ static void kmemleak_scan(void)
 		/* reset the reference count (whiten the object) */
 		object->count = 0;
 
-		if (color_gray(object) && get_object(object))
+		if (color_gray(object) && get_object(object)) {
 			list_add_tail(&object->gray_list, &gray_list);
+			gray_list_cnt++;
+			object_pinned = true;
+		}
 
 		raw_spin_unlock_irq(&object->lock);
+
+		/*
+		 * With object pinned by a positive reference count, it
+		 * won't go away and we can safely release the RCU read
+		 * lock and do a cond_resched() to avoid soft lockup every
+		 * 64k objects.
+		 */
+		if (object_pinned && !(gray_list_cnt & 0xffff)) {
+			rcu_read_unlock();
+			cond_resched();
+			rcu_read_lock();
+		}
 	}
 	rcu_read_unlock();
_

Patches currently in -mm which might be from longman@xxxxxxxxxx are

mm-kmemleak-use-_irq-lock-unlock-variants-in-kmemleak_scan-_clear.patch
mm-kmemleak-skip-unlikely-objects-in-kmemleak_scan-without-taking-lock.patch
mm-kmemleak-prevent-soft-lockup-in-first-object-iteration-loop-of-kmemleak_scan.patch