On 2023/9/28 1:06, Catalin Marinas wrote:
> On Wed, Sep 27, 2023 at 11:59:22AM +0800, Liu Shixin wrote:
>> diff --git a/mm/kmemleak.c b/mm/kmemleak.c
>> index 54c2c90d3abc..5a2bbd85df57 100644
>> --- a/mm/kmemleak.c
>> +++ b/mm/kmemleak.c
>> @@ -208,6 +208,8 @@ static struct rb_root object_tree_root = RB_ROOT;
>>  static struct rb_root object_phys_tree_root = RB_ROOT;
>>  /* protecting the access to object_list, object_tree_root (or object_phys_tree_root) */
>>  static DEFINE_RAW_SPINLOCK(kmemleak_lock);
>> +/* Serial delete_object_part() to ensure all objects is deleted correctly */
>> +static DEFINE_RAW_SPINLOCK(delete_object_part_mutex);
> Don't call this mutex, it implies sleeping.

Sorry, I originally defined it as a mutex and forgot to change the name.

>>
>>  /* allocation caches for kmemleak internal data */
>>  static struct kmem_cache *object_cache;
>> @@ -784,13 +786,16 @@ static void delete_object_part(unsigned long ptr, size_t size, bool is_phys)
>>  {
>>          struct kmemleak_object *object;
>>          unsigned long start, end;
>> +        unsigned long flags;
>>
>> +        raw_spin_lock_irqsave(&delete_object_part_mutex, flags);
>>          object = find_and_remove_object(ptr, 1, is_phys);
>>          if (!object) {
>>  #ifdef DEBUG
>>                  kmemleak_warn("Partially freeing unknown object at 0x%08lx (size %zu)\n",
>>                                ptr, size);
>>  #endif
>> +                raw_spin_unlock_irqrestore(&delete_object_part_mutex, flags);
> I prefer a goto out and a single place for unlocking.
>
> However, we already take the kmemleak_lock in find_and_remove_object().
> So better to open-code that function here and avoid introducing a new
> lock. __create_object() may need a new bool argument, no_lock or
> something. Or just split it into separate functions for allocating the
> kmemleak structure and adding it to the corresponding trees/lists under
> a lock.
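
For illustration only, a rough sketch of the second option Catalin describes (pre-allocating the kmemleak structures outside the lock, then linking them while kmemleak_lock is held) could look something like the following. The helper names __alloc_object(), __link_object() and __find_and_remove_object() are hypothetical stand-ins for the proposed split of __create_object() and for a variant of find_and_remove_object() that expects kmemleak_lock to be held; they are not existing kmemleak functions:

static void delete_object_part(unsigned long ptr, size_t size, bool is_phys)
{
	struct kmemleak_object *object, *object_l, *object_r;
	unsigned long start, end, flags;

	/*
	 * Allocate the (at most two) replacement objects up front so that
	 * nothing sleeps while kmemleak_lock is held. __alloc_object() is
	 * the hypothetical allocation half of __create_object().
	 */
	object_l = __alloc_object(GFP_KERNEL);
	object_r = __alloc_object(GFP_KERNEL);
	if (!object_l || !object_r)
		goto out;

	raw_spin_lock_irqsave(&kmemleak_lock, flags);

	/*
	 * Open-coded lookup/removal: a variant of find_and_remove_object()
	 * that does not take kmemleak_lock again.
	 */
	object = __find_and_remove_object(ptr, 1, is_phys);
	if (!object) {
#ifdef DEBUG
		kmemleak_warn("Partially freeing unknown object at 0x%08lx (size %zu)\n",
			      ptr, size);
#endif
		goto unlock;
	}

	/* insert the object(s) resulting from the split, still under the lock */
	start = object->pointer;
	end = object->pointer + object->size;
	if (ptr > start) {
		__link_object(object_l, start, ptr - start,
			      object->min_count, is_phys);
		object_l = NULL;
	}
	if (ptr + size < end) {
		__link_object(object_r, ptr + size, end - ptr - size,
			      object->min_count, is_phys);
		object_r = NULL;
	}

unlock:
	raw_spin_unlock_irqrestore(&kmemleak_lock, flags);
	if (object)
		__delete_object(object);

out:
	/* free any pre-allocated object that was not linked */
	if (object_l)
		mem_pool_free(object_l);
	if (object_r)
		mem_pool_free(object_r);
}

Pre-allocating before taking the lock is what makes the split attractive compared to a no_lock flag alone: GFP_KERNEL allocations may sleep and cannot be done under a raw spinlock with interrupts disabled.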