On Thu, Jun 11, 2020 at 11:56:53AM +0200, Vlastimil Babka wrote:
> On 6/11/20 12:46 AM, Roman Gushchin wrote:
> > On Wed, Jun 10, 2020 at 06:31:35PM +0200, Vlastimil Babka wrote:
> >> @@ -3672,6 +3672,14 @@ void *__kmalloc_track_caller(size_t size, gfp_t flags, unsigned long caller)
> >>  }
> >>  EXPORT_SYMBOL(__kmalloc_track_caller);
> >>  
> >> +static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
> >> +{
> >> +	if (memcg_kmem_enabled())
> >> +		return virt_to_cache(x);
> >> +	else
> >> +		return s;
> >> +}
> > 
> > Hm, it looks like the SLAB version doesn't perform any sanity checks anymore.
> > Is this intentional?
> 
> Yes, it was the same before commit b9ce5ef49f00. The commit could have been more
> precise - kmemcg needs virt_to_cache(), but not the sanity check. The SLUB
> version also shouldn't really be doing the sanity check if only
> memcg_kmem_enabled() is true (and not the debugging/hardening), but the code
> then looks ugly and I hope this will just fix itself with your kmemcg slab rework.

Got it.

> 
> > Also, is it ever possible that s != virt_to_cache(x) if there are no bugs?
> 
> Well, it should only be possible in the kmemcg case.
> 
> > kmem_cache_free_bulk() in slab.c does contain the following:
> >
> > 	if (!orig_s) /* called via kfree_bulk */
> > 		s = virt_to_cache(objp);
> > 	else
> > 		s = cache_from_obj(orig_s, objp);
> >
> > which looks a bit strange with the version above.
> 
> Looks fine to me. If we are called with a non-NULL s, and kmemcg is not enabled,
> we can just trust s. If we are called with a NULL s (via kfree_bulk()) we need
> to get the cache from the object, even if kmemcg is not enabled, so we do
> virt_to_cache() unconditionally.
> Once your series is fully accepted, we can remove SLAB's cache_from_obj() and
> the whole 'else' part in the hunk above. Or am I missing something?

Right. I guess even more cleanups will be possible; let's see where we end up.

It looks like nothing prevents it from being queued for 5.9 after 5.8-rc1
is out, right?

> > 
> >> @@ -3175,6 +3179,23 @@ void ___cache_free(struct kmem_cache *cache, void *x, unsigned long addr)
> >>  }
> >>  #endif
> >>  
> >> +static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
> >> +{
> >> +	struct kmem_cache *cachep;
> >> +
> >> +	if (!IS_ENABLED(CONFIG_SLAB_FREELIST_HARDENED) &&
> >> +	    !memcg_kmem_enabled() &&
> >> +	    !kmem_cache_debug_flags(s, SLAB_CONSISTENCY_CHECKS))
> >> +		return s;
> >> +
> >> +	cachep = virt_to_cache(x);
> >> +	if (WARN(cachep && !slab_equal_or_root(cachep, s),
> >> +		 "%s: Wrong slab cache. %s but object is from %s\n",
> >> +		 __func__, s->name, cachep->name))
> >> +		print_tracking(cachep, x);
> >> +	return cachep;
> >> +}
> > 
> > Maybe we can define a trivial SLAB version of kmem_cache_debug_flags()
> > and keep a single version of cache_from_obj()?
> 
> I think the result would be more obfuscated than just making it plain that SLAB
> doesn't have those SLUB features. And I still hope SLAB's version will go away
> completely. If your series is accepted first, then this patch, rebased on it,
> will not introduce cache_from_obj() in slab.c at all.

Ok, makes sense to me.

Thank you!
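
P.S. For anyone following along: the reason the !orig_s branch has to do
virt_to_cache() unconditionally is that kfree_bulk() is just a thin wrapper
passing a NULL cache pointer. If I remember correctly (worth double-checking
against include/linux/slab.h), it is defined roughly as:

	/* Free an array of kmalloc'ed objects; no cache pointer is available. */
	static __always_inline void kfree_bulk(size_t size, void **p)
	{
		kmem_cache_free_bulk(NULL, size, p);
	}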
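
P.P.S. To spell out the "trivial SLAB version" of kmem_cache_debug_flags() I
suggested: SLAB has no equivalent of SLUB's runtime debug flags, so it could
be a stub that always reports them clear, letting the compiler drop the
sanity-check path in a unified cache_from_obj(). An untested sketch, assuming
the signature mirrors the SLUB helper:

	/* SLAB never sets SLUB-style debug flags, so report none as set. */
	static inline bool kmem_cache_debug_flags(struct kmem_cache *s,
						  slab_flags_t flags)
	{
		return false;
	}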