On 6/25/20 11:55 PM, Kees Cook wrote:
> Include SLAB caches when performing kmem_cache pointer verification. A
> defense against such corruption[1] should be applied to all the
> allocators. With this added, the "SLAB_FREE_CROSS" and "SLAB_FREE_PAGE"
> LKDTM tests now pass on SLAB:
>
> lkdtm: Performing direct entry SLAB_FREE_CROSS
> lkdtm: Attempting cross-cache slab free ...
> ------------[ cut here ]------------
> cache_from_obj: Wrong slab cache. lkdtm-heap-b but object is from lkdtm-heap-a
> WARNING: CPU: 2 PID: 2195 at mm/slab.h:530 kmem_cache_free+0x8d/0x1d0
> ...
> lkdtm: Performing direct entry SLAB_FREE_PAGE
> lkdtm: Attempting non-Slab slab free ...
> ------------[ cut here ]------------
> virt_to_cache: Object is not a Slab page!
> WARNING: CPU: 1 PID: 2202 at mm/slab.h:489 kmem_cache_free+0x196/0x1d0
>
> Additionally clean up neighboring Kconfig entries for clarity,
> readability, and redundant option removal.
>
> [1] https://github.com/ThomasKing2014/slides/raw/master/Building%20universal%20Android%20rooting%20with%20a%20type%20confusion%20vulnerability.pdf
>
> Fixes: 598a0717a816 ("mm/slab: validate cache membership under freelist hardening")
> Signed-off-by: Kees Cook <keescook@xxxxxxxxxxxx>

Acked-by: Vlastimil Babka <vbabka@xxxxxxx>

> ---
>  init/Kconfig | 9 +++++----
>  1 file changed, 5 insertions(+), 4 deletions(-)
>
> diff --git a/init/Kconfig b/init/Kconfig
> index a46aa8f3174d..7542d28c6f61 100644
> --- a/init/Kconfig
> +++ b/init/Kconfig
> @@ -1885,9 +1885,8 @@ config SLAB_MERGE_DEFAULT
>  	  command line.
>  
>  config SLAB_FREELIST_RANDOM
> -	default n
> +	bool "Randomize slab freelist"
>  	depends on SLAB || SLUB
> -	bool "SLAB freelist randomization"
>  	help
>  	  Randomizes the freelist order used on creating new pages. This
>  	  security feature reduces the predictability of the kernel slab
> @@ -1895,12 +1894,14 @@ config SLAB_FREELIST_RANDOM
>  
>  config SLAB_FREELIST_HARDENED
>  	bool "Harden slab freelist metadata"
> -	depends on SLUB
> +	depends on SLAB || SLUB
>  	help
>  	  Many kernel heap attacks try to target slab cache metadata and
>  	  other infrastructure. This options makes minor performance
>  	  sacrifices to harden the kernel slab allocator against common
> -	  freelist exploit methods.
> +	  freelist exploit methods. Some slab implementations have more
> +	  sanity-checking than others. This option is most effective with
> +	  CONFIG_SLUB.
>  
>  config SHUFFLE_PAGE_ALLOCATOR
>  	bool "Page allocator randomization"
>
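
For readers following along: the two warnings in the log above come from the
pointer-verification path that kmem_cache_free() now also takes with SLAB.
Roughly, the freed object's page is looked up, checked to actually be a slab
page, and the kmem_cache recorded in that page is compared against the cache
the caller passed in. The sketch below paraphrases the shape of the mm/slab.h
helpers named in the warnings (virt_to_cache() and cache_from_obj()); it is an
illustration only, and the exact in-tree code carries additional conditions
(memcg handling, consistency-check debug flags, and so on):

static inline struct kmem_cache *virt_to_cache(const void *obj)
{
	struct page *page = virt_to_head_page(obj);

	/* Freeing something that never came from a slab page? */
	if (WARN_ONCE(!PageSlab(page), "%s: Object is not a Slab page!\n",
		      __func__))
		return NULL;
	return page->slab_cache;
}

static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
{
	struct kmem_cache *cachep;

	/* Sketch: only pay for the lookup when hardening is enabled. */
	if (!IS_ENABLED(CONFIG_SLAB_FREELIST_HARDENED))
		return s;

	/* Cross-cache free: the object belongs to a different kmem_cache. */
	cachep = virt_to_cache(x);
	if (WARN(cachep && cachep != s,
		 "%s: Wrong slab cache. %s but object is from %s\n",
		 __func__, s->name, cachep->name))
		return cachep;
	return s;
}

The LKDTM SLAB_FREE_PAGE and SLAB_FREE_CROSS tests exercise exactly these two
WARN paths, which is why both now trigger on SLAB with the patch applied.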