On Fri, May 24, 2024 at 10:54:58AM -0400, Kent Overstreet wrote:
> On Wed, Apr 24, 2024 at 02:40:57PM -0700, Kees Cook wrote:
> > Hi,
> >
> > Series change history:
> >
> > v3:
> > - clarify rationale and purpose in commit log
> > - rebase to -next (CONFIG_CODE_TAGGING)
> > - simplify calling styles and split out bucket plumbing more cleanly
> > - consolidate kmem_buckets_*() family introduction patches
> > v2: https://lore.kernel.org/lkml/20240305100933.it.923-kees@xxxxxxxxxx/
> > v1: https://lore.kernel.org/lkml/20240304184252.work.496-kees@xxxxxxxxxx/
> >
> > For the cover letter, I'm repeating the commit log for patch 4 here,
> > which has additional clarifications and rationale since v2:
> >
> > Dedicated caches are available for fixed size allocations via
> > kmem_cache_alloc(), but for dynamically sized allocations there is only
> > the global kmalloc API's set of buckets available. This means it isn't
> > possible to separate specific sets of dynamically sized allocations into
> > a separate collection of caches.
> >
> > This leads to a use-after-free exploitation weakness in the Linux
> > kernel, since many heap memory spraying/grooming attacks depend on using
> > userspace-controllable dynamically sized allocations to collide with
> > fixed size allocations that end up in the same cache.
>
> This is going to increase internal fragmentation in the slab allocator,
> so we're going to need better, more visible numbers on the amount of
> memory stranded thusly, so users can easily see the effect this has.

Yes, but not significantly. It's less than the 16-buckets randomized
kmalloc implementation. The numbers will be visible in /proc/slabinfo
just like any other.

> Please also document this effect and point users in the documentation
> where to check, so that we devs can get feedback on this.

Okay, sure. In the commit log, or did you have somewhere else in mind?

-- 
Kees Cook
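
[For readers following along: a rough sketch of what the kmem_buckets_*()
family described above enables. The function names and signatures below
(kmem_buckets_create(), kmem_buckets_alloc()) are taken from the series'
naming convention; exact prototypes may differ from the posted patches,
so treat this as illustrative, not authoritative.]

	/*
	 * Hypothetical caller of the kmem_buckets API: give one
	 * subsystem's variably-sized, userspace-controllable
	 * allocations their own set of buckets, separate from the
	 * global kmalloc caches, so they cannot be used to collide
	 * with fixed-size objects during heap spraying.
	 */
	static kmem_buckets *msg_buckets;

	static int __init msg_buckets_init(void)
	{
		/* Create a dedicated collection of caches covering the
		 * usual kmalloc size range. */
		msg_buckets = kmem_buckets_create("msg_msg", SLAB_ACCOUNT,
						  0, 0, NULL);
		if (!msg_buckets)
			return -ENOMEM;
		return 0;
	}

	static void *msg_alloc(size_t len)
	{
		/* Dynamically sized allocation served from the dedicated
		 * buckets rather than the shared kmalloc ones. */
		return kmem_buckets_alloc(msg_buckets, len, GFP_KERNEL);
	}

Each such bucket set would then show up as its own rows in /proc/slabinfo
(as noted above), which is where any added internal fragmentation would be
visible.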