On Wed, Aug 7, 2024 at 12:31 PM Vlastimil Babka <vbabka@xxxxxxx> wrote:
> There used to be a rcu_barrier() for SLAB_TYPESAFE_BY_RCU caches in
> kmem_cache_destroy() until commit 657dc2f97220 ("slab: remove
> synchronous rcu_barrier() call in memcg cache release path") moved it to
> an asynchronous work that finishes the destroying of such caches.
>
> The motivation for that commit was the MEMCG_KMEM integration that at
> the time created and removed clones of the global slab caches together
> with their cgroups, and blocking cgroups removal was unwelcome. The
> implementation later changed to per-object memcg tracking using a single
> cache, so there should be no more need for a fast non-blocking
> kmem_cache_destroy(), which is typically only done when a module is
> unloaded etc.
>
> Going back to a synchronous barrier has the following advantages:
>
> - simpler implementation
> - it's easier to test the result of kmem_cache_destroy() in a kunit test
>
> Thus effectively revert commit 657dc2f97220. It is not a 1:1 revert as
> the code has changed since. The main part is that kmem_cache_release(s)
> is always called from kmem_cache_destroy(), but for SLAB_TYPESAFE_BY_RCU
> caches there's a rcu_barrier() first.
>
> Suggested-by: Mateusz Guzik <mjguzik@xxxxxxxxx>
> Signed-off-by: Vlastimil Babka <vbabka@xxxxxxx>

Reviewed-by: Jann Horn <jannh@xxxxxxxxxx>
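
(For anyone skimming the thread: below is a rough sketch of how I read the
resulting destroy path. It is an illustration, not the literal
mm/slab_common.c code after this patch; refcounting, locking, error
handling and the sysfs/debugfs teardown are all elided.)

	/*
	 * Simplified sketch of kmem_cache_destroy() with the barrier made
	 * synchronous again. The key point of the patch: for
	 * SLAB_TYPESAFE_BY_RCU caches we block in rcu_barrier() so that all
	 * pending RCU-delayed slab frees have finished before the struct
	 * kmem_cache itself is released, instead of deferring that release
	 * to an asynchronous work item.
	 */
	void kmem_cache_destroy(struct kmem_cache *s)
	{
		if (unlikely(!s))
			return;

		/* unlink the cache and tear down its per-cpu/per-node state */
		__kmem_cache_shutdown(s);

		/*
		 * Wait for in-flight RCU callbacks (and thus the delayed
		 * freeing of the cache's slabs) to complete before freeing
		 * the cache structure itself.
		 */
		if (s->flags & SLAB_TYPESAFE_BY_RCU)
			rcu_barrier();

		/* now always called directly, no async work needed */
		kmem_cache_release(s);
	}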