On Thu, Jun 13, 2024 at 05:47:08AM -0700, Paul E. McKenney wrote:
> On Thu, Jun 13, 2024 at 01:58:59PM +0200, Jason A. Donenfeld wrote:
> > On Wed, Jun 12, 2024 at 03:37:55PM -0700, Paul E. McKenney wrote:
> > > On Wed, Jun 12, 2024 at 02:33:05PM -0700, Jakub Kicinski wrote:
> > > > On Sun, 9 Jun 2024 10:27:12 +0200 Julia Lawall wrote:
> > > > > Since SLOB was removed, it is not necessary to use call_rcu
> > > > > when the callback only performs kmem_cache_free. Use
> > > > > kfree_rcu() directly.
> > > > >
> > > > > The changes were done using the following Coccinelle semantic patch.
> > > > > This semantic patch is designed to ignore cases where the callback
> > > > > function is used in another way.
> > > > How does the discussion on:
> > > > [PATCH] Revert "batman-adv: prefer kfree_rcu() over call_rcu() with free-only callbacks"
> > > > https://lore.kernel.org/all/20240612133357.2596-1-linus.luessing@xxxxxxxxx/
> > > > reflect on this series? IIUC we should hold off..
> > >
> > > We do need to hold off for the ones in kernel modules (such as 07/14)
> > > where the kmem_cache is destroyed during module unload.
> > >
> > > OK, I might as well go through them...
> > >
> > > [PATCH 01/14] wireguard: allowedips: replace call_rcu by kfree_rcu for simple kmem_cache_free callback
> > > 	Needs to wait, see wg_allowedips_slab_uninit().
> >
> > Also, notably, this patch needs additionally:
> >
> > diff --git a/drivers/net/wireguard/allowedips.c b/drivers/net/wireguard/allowedips.c
> > index e4e1638fce1b..c95f6937c3f1 100644
> > --- a/drivers/net/wireguard/allowedips.c
> > +++ b/drivers/net/wireguard/allowedips.c
> > @@ -377,7 +377,6 @@ int __init wg_allowedips_slab_init(void)
> >  
> >  void wg_allowedips_slab_uninit(void)
> >  {
> > -	rcu_barrier();
> >  	kmem_cache_destroy(node_cache);
> >  }
> >
> > Once kmem_cache_destroy has been fixed to be deferrable.
> >
> > I assume the other patches are similar -- an rcu_barrier() can be
> > removed. So some manual meddling of these might be in order.
>
> Assuming that the deferrable kmem_cache_destroy() is the option chosen,
> agreed.
>
<snip>
void kmem_cache_destroy(struct kmem_cache *s)
{
	int err = -EBUSY;
	bool rcu_set;

	if (unlikely(!s) || !kasan_check_byte(s))
		return;

	cpus_read_lock();
	mutex_lock(&slab_mutex);

	rcu_set = s->flags & SLAB_TYPESAFE_BY_RCU;

	s->refcount--;
	if (s->refcount)
		goto out_unlock;

	err = shutdown_cache(s);
	WARN(err, "%s %s: Slab cache still has objects when called from %pS",
	     __func__, s->name, (void *)_RET_IP_);
...
	cpus_read_unlock();
	if (!err && !rcu_set)
		kmem_cache_release(s);
}
<snip>

so we have the SLAB_TYPESAFE_BY_RCU flag that defers freeing of slab pages
and of the cache itself by a grace period. A similar flag could be added,
say SLAB_DESTROY_ONCE_FULLY_FREED; in that case a worker rearms itself if
there are still objects which should be freed.

Any thoughts here?

--
Uladzislau Rezki
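
For illustration, a rough sketch of what such a rearming destroy worker
could look like. The SLAB_DESTROY_ONCE_FULLY_FREED flag, the destroy_work
member and the cache_has_outstanding_objects() helper are made-up names
for the purpose of this sketch; shutdown_cache() and kmem_cache_release()
are the existing helpers quoted above, and the slab_mutex/cpus_read_lock()
handling is omitted for brevity:

<snip>
/* Sketch only: none of the new names below exist in the slab code today. */
static void kmem_cache_destroy_workfn(struct work_struct *work)
{
	struct kmem_cache *s = container_of(to_delayed_work(work),
					    struct kmem_cache, destroy_work);

	/*
	 * Objects (or in-flight RCU callbacks) are still outstanding:
	 * rearm and check again later.
	 */
	if (cache_has_outstanding_objects(s)) {
		schedule_delayed_work(&s->destroy_work, HZ);
		return;
	}

	/* Fully drained: tear the cache down for real. */
	if (!shutdown_cache(s))
		kmem_cache_release(s);
}

void kmem_cache_destroy(struct kmem_cache *s)
{
	...
	if (s->flags & SLAB_DESTROY_ONCE_FULLY_FREED) {
		/* Defer the actual destruction to the rearming worker. */
		INIT_DELAYED_WORK(&s->destroy_work, kmem_cache_destroy_workfn);
		schedule_delayed_work(&s->destroy_work, 0);
		return;
	}
	...
}
<snip>

With something along those lines, module exit paths such as
wg_allowedips_slab_uninit() could drop their rcu_barrier() and simply
call kmem_cache_destroy(), as in the diff above.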