On Thu, Jun 13, 2024 at 11:13:52AM -0700, Paul E. McKenney wrote:
> On Thu, Jun 13, 2024 at 07:58:17PM +0200, Uladzislau Rezki wrote:
> > On Thu, Jun 13, 2024 at 10:45:59AM -0700, Paul E. McKenney wrote:
> > > On Thu, Jun 13, 2024 at 07:38:59PM +0200, Uladzislau Rezki wrote:
> > > > On Thu, Jun 13, 2024 at 08:06:30AM -0700, Paul E. McKenney wrote:
> > > > > On Thu, Jun 13, 2024 at 03:06:54PM +0200, Uladzislau Rezki wrote:
> > > > > > On Thu, Jun 13, 2024 at 05:47:08AM -0700, Paul E. McKenney wrote:
> > > > > > > On Thu, Jun 13, 2024 at 01:58:59PM +0200, Jason A. Donenfeld wrote:
> > > > > > > > On Wed, Jun 12, 2024 at 03:37:55PM -0700, Paul E. McKenney wrote:
> > > > > > > > > On Wed, Jun 12, 2024 at 02:33:05PM -0700, Jakub Kicinski wrote:
> > > > > > > > > > On Sun, 9 Jun 2024 10:27:12 +0200 Julia Lawall wrote:
> > > > > > > > > > > Since SLOB was removed, it is not necessary to use call_rcu
> > > > > > > > > > > when the callback only performs kmem_cache_free. Use
> > > > > > > > > > > kfree_rcu() directly.
> > > > > > > > > > >
> > > > > > > > > > > The changes were done using the following Coccinelle semantic patch.
> > > > > > > > > > > This semantic patch is designed to ignore cases where the callback
> > > > > > > > > > > function is used in another way.
> > > > > > > > > >
> > > > > > > > > > How does the discussion on:
> > > > > > > > > >   [PATCH] Revert "batman-adv: prefer kfree_rcu() over call_rcu() with free-only callbacks"
> > > > > > > > > >   https://lore.kernel.org/all/20240612133357.2596-1-linus.luessing@xxxxxxxxx/
> > > > > > > > > > reflect on this series? IIUC we should hold off..
> > > > > > > > >
> > > > > > > > > We do need to hold off for the ones in kernel modules (such as 07/14)
> > > > > > > > > where the kmem_cache is destroyed during module unload.
> > > > > > > > >
> > > > > > > > > OK, I might as well go through them...
> > > > > > > > >
> > > > > > > > > [PATCH 01/14] wireguard: allowedips: replace call_rcu by kfree_rcu for simple kmem_cache_free callback
> > > > > > > > > 	Needs to wait, see wg_allowedips_slab_uninit().
> > > > > > > >
> > > > > > > > Also, notably, this patch needs additionally:
> > > > > > > >
> > > > > > > > diff --git a/drivers/net/wireguard/allowedips.c b/drivers/net/wireguard/allowedips.c
> > > > > > > > index e4e1638fce1b..c95f6937c3f1 100644
> > > > > > > > --- a/drivers/net/wireguard/allowedips.c
> > > > > > > > +++ b/drivers/net/wireguard/allowedips.c
> > > > > > > > @@ -377,7 +377,6 @@ int __init wg_allowedips_slab_init(void)
> > > > > > > >
> > > > > > > >  void wg_allowedips_slab_uninit(void)
> > > > > > > >  {
> > > > > > > > -	rcu_barrier();
> > > > > > > >  	kmem_cache_destroy(node_cache);
> > > > > > > >  }
> > > > > > > >
> > > > > > > > Once kmem_cache_destroy has been fixed to be deferrable.
> > > > > > > >
> > > > > > > > I assume the other patches are similar -- an rcu_barrier() can be
> > > > > > > > removed. So some manual meddling of these might be in order.
> > > > > > >
> > > > > > > Assuming that the deferrable kmem_cache_destroy() is the option chosen,
> > > > > > > agreed.
> > > > > > >
> > > > > >
> > > > > > <snip>
> > > > > > void kmem_cache_destroy(struct kmem_cache *s)
> > > > > > {
> > > > > > 	int err = -EBUSY;
> > > > > > 	bool rcu_set;
> > > > > >
> > > > > > 	if (unlikely(!s) || !kasan_check_byte(s))
> > > > > > 		return;
> > > > > >
> > > > > > 	cpus_read_lock();
> > > > > > 	mutex_lock(&slab_mutex);
> > > > > >
> > > > > > 	rcu_set = s->flags & SLAB_TYPESAFE_BY_RCU;
> > > > > >
> > > > > > 	s->refcount--;
> > > > > > 	if (s->refcount)
> > > > > > 		goto out_unlock;
> > > > > >
> > > > > > 	err = shutdown_cache(s);
> > > > > > 	WARN(err, "%s %s: Slab cache still has objects when called from %pS",
> > > > > > 	     __func__, s->name, (void *)_RET_IP_);
> > > > > > ...
> > > > > > 	cpus_read_unlock();
> > > > > > 	if (!err && !rcu_set)
> > > > > > 		kmem_cache_release(s);
> > > > > > }
> > > > > > <snip>
> > > > > >
> > > > > > so we have the SLAB_TYPESAFE_BY_RCU flag that defers freeing of slab pages
> > > > > > and of the cache itself by a grace period. A similar flag could be added,
> > > > > > say SLAB_DESTROY_ONCE_FULLY_FREED; in that case a worker rearms itself
> > > > > > as long as there are still objects which should be freed.
> > > > > >
> > > > > > Any thoughts here?
> > > > >
> > > > > Wouldn't we also need some additional code to later check for all objects
> > > > > being freed to the slab, whether or not that code is initiated from
> > > > > kmem_cache_destroy()?
> > > > >
> > > > The same way SLAB_TYPESAFE_BY_RCU is handled from the kmem_cache_destroy() function.
> > > > It checks that flag, and if it is set, an extra worker is scheduled to perform a
> > > > deferred (instead of immediate) destroy after rcu_barrier() finishes.
> > >
> > > Like this?
> > >
> > > 	SLAB_DESTROY_ONCE_FULLY_FREED
> > >
> > > 		Instead of adding a new kmem_cache_destroy_rcu()
> > > 		or kmem_cache_destroy_wait() API member, instead add a
> > > 		SLAB_DESTROY_ONCE_FULLY_FREED flag that can be passed to the
> > > 		existing kmem_cache_destroy() function.  Use of this flag would
> > > 		suppress any warnings that would otherwise be issued if there
> > > 		was still slab memory yet to be freed, and it would also spawn
> > > 		workqueues (or timers or whatever) to do any needed cleanup work.
> > >
> > The flag is passed like all the others when creating a cache:
> >
> > slab = kmem_cache_create(name, size, ..., SLAB_DESTROY_ONCE_FULLY_FREED | OTHER_FLAGS, NULL);
> >
> > The rest of the description looks correct to me.
>
> Good catch, fixed, thank you!
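To make the intended usage concrete first, here is a sketch only (not part of the prototype below, and assuming the SLAB_DEFER_DESTROY name used there together with wireguard's existing node_cache/KMEM_CACHE() setup): a module such as wireguard's allowedips would opt in when creating the cache and could then drop the rcu_barrier() on unload, as in the 01/14 discussion above.

<snip>
int __init wg_allowedips_slab_init(void)
{
	/* Opt in to deferred destruction when the cache is created. */
	node_cache = KMEM_CACHE(allowedips_node, SLAB_DEFER_DESTROY);
	return node_cache ? 0 : -ENOMEM;
}

void wg_allowedips_slab_uninit(void)
{
	/*
	 * No rcu_barrier() needed here: with the flag set,
	 * kmem_cache_destroy() only queues the cache for deferred
	 * release, and the delayed work frees it once all
	 * kfree_rcu()-pending objects have come back to the slab.
	 */
	kmem_cache_destroy(node_cache);
}
<snip>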
And here we go with a prototype (untested):

<snip>
diff --git a/include/linux/slab.h b/include/linux/slab.h
index 7247e217e21b..700b8a909f8a 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -59,6 +59,7 @@ enum _slab_flag_bits {
 #ifdef CONFIG_SLAB_OBJ_EXT
 	_SLAB_NO_OBJ_EXT,
 #endif
+	_SLAB_DEFER_DESTROY,
 	_SLAB_FLAGS_LAST_BIT
 };
 
@@ -139,6 +140,7 @@ enum _slab_flag_bits {
  */
 /* Defer freeing slabs to RCU */
 #define SLAB_TYPESAFE_BY_RCU	__SLAB_FLAG_BIT(_SLAB_TYPESAFE_BY_RCU)
+#define SLAB_DEFER_DESTROY	__SLAB_FLAG_BIT(_SLAB_DEFER_DESTROY)
 /* Trace allocations and frees */
 #define SLAB_TRACE		__SLAB_FLAG_BIT(_SLAB_TRACE)
 
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 1560a1546bb1..99458a0197b5 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -45,6 +45,11 @@ static void slab_caches_to_rcu_destroy_workfn(struct work_struct *work);
 static DECLARE_WORK(slab_caches_to_rcu_destroy_work,
 		    slab_caches_to_rcu_destroy_workfn);
 
+static LIST_HEAD(slab_caches_defer_destroy);
+static void slab_caches_defer_destroy_workfn(struct work_struct *work);
+static DECLARE_DELAYED_WORK(slab_caches_defer_destroy_work,
+		    slab_caches_defer_destroy_workfn);
+
 /*
  * Set of flags that will prevent slab merging
  */
@@ -448,6 +453,31 @@ static void slab_caches_to_rcu_destroy_workfn(struct work_struct *work)
 	}
 }
 
+static void
+slab_caches_defer_destroy_workfn(struct work_struct *work)
+{
+	struct kmem_cache *s, *s2;
+
+	mutex_lock(&slab_mutex);
+	list_for_each_entry_safe(s, s2, &slab_caches_defer_destroy, list) {
+		if (__kmem_cache_empty(s)) {
+			/* free asan quarantined objects */
+			kasan_cache_shutdown(s);
+			(void) __kmem_cache_shutdown(s);
+
+			list_del(&s->list);
+
+			debugfs_slab_release(s);
+			kfence_shutdown_cache(s);
+			kmem_cache_release(s);
+		}
+	}
+	mutex_unlock(&slab_mutex);
+
+	if (!list_empty(&slab_caches_defer_destroy))
+		schedule_delayed_work(&slab_caches_defer_destroy_work, HZ);
+}
+
 static int shutdown_cache(struct kmem_cache *s)
 {
 	/* free asan quarantined objects */
@@ -493,6 +523,13 @@ void kmem_cache_destroy(struct kmem_cache *s)
 	if (s->refcount)
 		goto out_unlock;
 
+	/* Should a destroy process be deferred? */
+	if (s->flags & SLAB_DEFER_DESTROY) {
+		list_move_tail(&s->list, &slab_caches_defer_destroy);
+		schedule_delayed_work(&slab_caches_defer_destroy_work, HZ);
+		goto out_unlock;
+	}
+
 	err = shutdown_cache(s);
 	WARN(err, "%s %s: Slab cache still has objects when called from %pS",
 	     __func__, s->name, (void *)_RET_IP_);
<snip>

Thanks!

--
Uladzislau Rezki