On Wed, Aug 10, 2022 at 12:49:46PM -0400, Waiman Long wrote:
> A circular locking problem is reported by lockdep due to the following
> circular locking dependency.
>
>   +--> cpu_hotplug_lock --> slab_mutex --> kn->active --+
>   |                                                     |
>   +-----------------------------------------------------+
>
> The forward cpu_hotplug_lock ==> slab_mutex ==> kn->active dependency
> happens in
>
>   kmem_cache_destroy(): cpus_read_lock(); mutex_lock(&slab_mutex);
>   ==> sysfs_slab_unlink()
>       ==> kobject_del()
>           ==> kernfs_remove()
>               ==> __kernfs_remove()
>                   ==> kernfs_drain(): rwsem_acquire(&kn->dep_map, ...);
>
> The backward kn->active ==> cpu_hotplug_lock dependency happens in
>
>   kernfs_fop_write_iter(): kernfs_get_active();
>   ==> slab_attr_store()
>       ==> cpu_partial_store()
>           ==> flush_all(): cpus_read_lock()
>
> One way to break this circular locking chain is to avoid holding
> cpu_hotplug_lock and slab_mutex while deleting the kobject in
> sysfs_slab_unlink() which should be equivalent to doing a write_lock
> and write_unlock pair of the kn->active virtual lock.
>
> Since the kobject structures are not protected by slab_mutex or the
> cpu_hotplug_lock, we can certainly release those locks before doing
> the delete operation.
>
> Move sysfs_slab_unlink() and sysfs_slab_release() to the newly
> created kmem_cache_release() and call it outside the slab_mutex &
> cpu_hotplug_lock critical sections.
>
> Signed-off-by: Waiman Long <longman@xxxxxxxxxx>
> ---
> [v2] Break kmem_cache_release() helper into 2 separate ones.
>
>  mm/slab_common.c | 54 +++++++++++++++++++++++++++++++++++++-----------------
>  1 file changed, 37 insertions(+), 17 deletions(-)
>
> diff --git a/mm/slab_common.c b/mm/slab_common.c
> index 17996649cfe3..7742d0446d8b 100644
> --- a/mm/slab_common.c
> +++ b/mm/slab_common.c
> @@ -392,6 +392,36 @@ kmem_cache_create(const char *name, unsigned int size, unsigned int align,
>  }
>  EXPORT_SYMBOL(kmem_cache_create);
>  
> +#ifdef SLAB_SUPPORTS_SYSFS
> +static void kmem_cache_workfn_release(struct kmem_cache *s)
> +{
> +	sysfs_slab_release(s);
> +}
> +#else
> +static void kmem_cache_workfn_release(struct kmem_cache *s)
> +{
> +	slab_kmem_cache_release(s);
> +}
> +#endif
> +
> +/*
> + * For a given kmem_cache, kmem_cache_destroy() should only be called
> + * once or there will be a use-after-free problem. The actual deletion
> + * and release of the kobject does not need slab_mutex or cpu_hotplug_lock
> + * protection. So they are now done without holding those locks.
> + */
> +static void kmem_cache_release(struct kmem_cache *s)
> +{
> +#ifdef SLAB_SUPPORTS_SYSFS
> +	sysfs_slab_unlink(s);
> +#endif
> +
> +	if (s->flags & SLAB_TYPESAFE_BY_RCU)
> +		schedule_work(&slab_caches_to_rcu_destroy_work);
> +	else
> +		kmem_cache_workfn_release(s);
> +}
> +
>  static void slab_caches_to_rcu_destroy_workfn(struct work_struct *work)
>  {
>  	LIST_HEAD(to_destroy);
> @@ -418,11 +448,7 @@ static void slab_caches_to_rcu_destroy_workfn(struct work_struct *work)
>  	list_for_each_entry_safe(s, s2, &to_destroy, list) {
>  		debugfs_slab_release(s);
>  		kfence_shutdown_cache(s);
> -#ifdef SLAB_SUPPORTS_SYSFS
> -		sysfs_slab_release(s);
> -#else
> -		slab_kmem_cache_release(s);
> -#endif
> +		kmem_cache_workfn_release(s);
>  	}
>  }
>  
> @@ -437,20 +463,10 @@ static int shutdown_cache(struct kmem_cache *s)
>  	list_del(&s->list);
>  
>  	if (s->flags & SLAB_TYPESAFE_BY_RCU) {
> -#ifdef SLAB_SUPPORTS_SYSFS
> -		sysfs_slab_unlink(s);
> -#endif
>  		list_add_tail(&s->list, &slab_caches_to_rcu_destroy);
> -		schedule_work(&slab_caches_to_rcu_destroy_work);

Hi Waiman!

This version is much more readable, thank you!

But can we please leave the schedule_work(&slab_caches_to_rcu_destroy_work)
call here? I don't see a good reason to move it, or am I missing something?

It's nice to have the list_add_tail() and schedule_work() calls next to
each other, so it's obvious we can't miss the latter.
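I.e. keep shutdown_cache() doing something like this (just an untested
sketch of what I mean, using the identifiers from your patch):

	if (s->flags & SLAB_TYPESAFE_BY_RCU) {
		/* queue the cache for deferred destruction and kick the
		 * work right away, so the two steps can't get separated
		 */
		list_add_tail(&s->list, &slab_caches_to_rcu_destroy);
		schedule_work(&slab_caches_to_rcu_destroy_work);
	}

and then kmem_cache_release() wouldn't have to schedule the work itself.

Thanks!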