On 8/26/22 10:35, Vlastimil Babka wrote:
> On 8/25/22 21:51, Joel Fernandes wrote:
>>> But we need to check the slab freeing functions themselves.
>>> It looks like SLAB is fine, slab_destroy() there takes care to copy the
>>> freelist pointer away before call_rcu() already and then it only works with
>>> the copy. Also nothing should need s_mem either, in case you need more bytes
>>> for the rcu_head debug.
>>>
>>> SLUB seems also fine as the freeing doesn't try to look at the freelist even
>>> in case of debugging consistency checks being enabled. It could be a problem
>>> in case of a double-free, but that's already a problem anyway.
>>> In case you need more than the 8 bytes of freelist pointer, then overwriting
>>> the next 'counters' union would be a problem for the consistency checks as
>>> it reads slab->objects from there.
>>> But I guess we could do those checks before call_rcu, so the callback would
>>> then only free the slab page. The state expected by those checks doesn't
>>> depend on the grace period expiring.
>>
>> Ok, sounds good, I can run it for a few days on real hardware and see if
>> something breaks or such. The most I can get with your patch is 12 bytes
>> before the FOLIO checks start complaining. I can stick to 8 bytes, and
>> perhaps we can allocate a separate debug object if we need more or something.
>
> Weird, it works for me with 16 bytes, both SLAB and SLUB.

Oh, I see, it breaks in some 32-bit vdso object, in the struct page's asserts,
not slab's. Likely some false positive due to header creep; I doubt the vdso
really works with 32-bit struct pages, as those don't exist on a 64-bit kernel.
Maybe we should move the asserts from the header to e.g. page_alloc.c, willy?
> 
>>> >> @@ -66,9 +74,13 @@ struct slab {
>>> >>  #define SLAB_MATCH(pg, sl)						\
>>> >>  	static_assert(offsetof(struct page, pg) == offsetof(struct slab, sl))
>>> >>  SLAB_MATCH(flags, __page_flags);
>>> >> +#ifdef CONFIG_SLUB
>>> >> +SLAB_MATCH(compound_head, slab_cache);	/* Ensure bit 0 is clear */
>>> >> +#else
>>> >>  SLAB_MATCH(compound_head, slab_list);	/* Ensure bit 0 is clear */
>>> >> +#endif
>>> >>  #ifndef CONFIG_SLOB
>>> >> -SLAB_MATCH(rcu_head, rcu_head);
>>> >> +// SLAB_MATCH(rcu_head, rcu_head);	/* not necessary, hopefully? */
>>> >
>>> > I think if it helps, I can make this check a dependency on a default-off
>>> > debug config option.
>>>
>>> It wouldn't help, as with this patch the rcu_head for SLUB would start at a
>>> different offset regardless of whether you enable the option to increase its
>>> size.
>>
>> True, unless we maintained 2 versions of the structs based on CONFIGs, but
>> that's really ugly.
>>
>> Thanks so much for this patch, it's surely helpful for debugging purposes and
>> I look forward to testing it more and providing you any feedback!
>
> I'll send a formal RFC soon.