On Mon, 15 Jul 2024, Vlastimil Babka wrote:

> Currently SLUB counts per-node slabs and total objects only with
> CONFIG_SLUB_DEBUG, in order to minimize overhead. However, the detection
> in __kmem_cache_shutdown() of whether there are any outstanding objects
> relies on the per-node slab count (node_nr_slabs()), so it may be
> unreliable without CONFIG_SLUB_DEBUG. Thus we might fail to warn
> about such situations, and instead destroy a cache while leaving its
> slab(s) around (due to a buggy slab user creating such a scenario, not
> in normal operation).
>
> We will also need node_nr_slabs() to be reliable in the following work
> to gracefully handle kmem_cache_destroy() with kfree_rcu() objects in
> flight. Thus make the counting of per-node slabs and objects
> unconditional.
>
> Note that CONFIG_SLUB_DEBUG is the default anyway, and the counting is
> done only when allocating or freeing a slab page, so even in
> !CONFIG_SLUB_DEBUG configs the overhead should be negligible.
>
> Signed-off-by: Vlastimil Babka <vbabka@xxxxxxx>

Acked-by: David Rientjes <rientjes@xxxxxxxxxx>
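
For reference, the check this makes reliable looks roughly like the
sketch below (written from memory rather than copied out of mm/slub.c,
so the details may differ slightly from upstream):

	int __kmem_cache_shutdown(struct kmem_cache *s)
	{
		int node;
		struct kmem_cache_node *n;

		flush_all_cpus_locked(s);
		/* free empty partial slabs, then look for leftovers */
		for_each_kmem_cache_node(s, node, n) {
			free_partial(s, n);
			/* any remaining slab means outstanding objects */
			if (n->nr_partial || node_nr_slabs(n))
				return 1;
		}
		return 0;
	}

If I'm reading the !CONFIG_SLUB_DEBUG stub right, node_nr_slabs() has so
far just returned 0 there, so a fully-allocated slab with live objects
could slip past the check above; making the per-node counters
unconditional closes that hole.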