The patch titled
     Subject: mm: kmsan: call KMSAN hooks from SLUB code
has been added to the -mm mm-unstable branch.  Its filename is
     mm-kmsan-call-kmsan-hooks-from-slub-code.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-kmsan-call-kmsan-hooks-from-slub-code.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Alexander Potapenko <glider@xxxxxxxxxx>
Subject: mm: kmsan: call KMSAN hooks from SLUB code
Date: Mon, 5 Sep 2022 14:24:23 +0200

In order to report uninitialized memory coming from heap allocations,
KMSAN has to poison them unless they're created with __GFP_ZERO.

Conveniently, the places where KMSAN hooks are needed are the same
places where init_on_alloc/init_on_free initialization is already
performed.

In addition, we apply __no_kmsan_checks to get_freepointer_safe() to
suppress reports when accessing freelist pointers that reside in freed
objects.

Link: https://lkml.kernel.org/r/20220905122452.2258262-16-glider@xxxxxxxxxx
Signed-off-by: Alexander Potapenko <glider@xxxxxxxxxx>
Reviewed-by: Marco Elver <elver@xxxxxxxxxx>
Cc: Alexander Viro <viro@xxxxxxxxxxxxxxxxxx>
Cc: Alexei Starovoitov <ast@xxxxxxxxxx>
Cc: Andrey Konovalov <andreyknvl@xxxxxxxxx>
Cc: Andrey Konovalov <andreyknvl@xxxxxxxxxx>
Cc: Andy Lutomirski <luto@xxxxxxxxxx>
Cc: Arnd Bergmann <arnd@xxxxxxxx>
Cc: Borislav Petkov <bp@xxxxxxxxx>
Cc: Christoph Hellwig <hch@xxxxxx>
Cc: Christoph Lameter <cl@xxxxxxxxx>
Cc: David Rientjes <rientjes@xxxxxxxxxx>
Cc: Dmitry Vyukov <dvyukov@xxxxxxxxxx>
Cc: Eric Biggers <ebiggers@xxxxxxxxxx>
Cc: Eric Dumazet <edumazet@xxxxxxxxxx>
Cc: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
Cc: Herbert Xu <herbert@xxxxxxxxxxxxxxxxxxx>
Cc: Ilya Leoshkevich <iii@xxxxxxxxxxxxx>
Cc: Ingo Molnar <mingo@xxxxxxxxxx>
Cc: Jens Axboe <axboe@xxxxxxxxx>
Cc: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
Cc: Kees Cook <keescook@xxxxxxxxxxxx>
Cc: Liu Shixin <liushixin2@xxxxxxxxxx>
Cc: Mark Rutland <mark.rutland@xxxxxxx>
Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Cc: Michael S. Tsirkin <mst@xxxxxxxxxx>
Cc: Pekka Enberg <penberg@xxxxxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Petr Mladek <pmladek@xxxxxxxx>
Cc: Steven Rostedt <rostedt@xxxxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Cc: Vasily Gorbik <gor@xxxxxxxxxxxxx>
Cc: Vegard Nossum <vegard.nossum@xxxxxxxxxx>
Cc: Vlastimil Babka <vbabka@xxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 include/linux/kmsan.h |   57 ++++++++++++++++++++++++++++++
 mm/kmsan/hooks.c      |   76 ++++++++++++++++++++++++++++++++++++++++
 mm/slab.h             |    1 
 mm/slub.c             |   17 ++++++++
 4 files changed, 151 insertions(+)

--- a/include/linux/kmsan.h~mm-kmsan-call-kmsan-hooks-from-slub-code
+++ a/include/linux/kmsan.h
@@ -14,6 +14,7 @@
 #include <linux/types.h>
 
 struct page;
+struct kmem_cache;
 
 #ifdef CONFIG_KMSAN
@@ -49,6 +50,44 @@ void kmsan_free_page(struct page *page,
 void kmsan_copy_page_meta(struct page *dst, struct page *src);
 
 /**
+ * kmsan_slab_alloc() - Notify KMSAN about a slab allocation.
+ * @s: slab cache the object belongs to.
+ * @object: object pointer.
+ * @flags: GFP flags passed to the allocator.
+ *
+ * Depending on cache flags and GFP flags, KMSAN sets up the metadata of the
+ * newly created object, marking it as initialized or uninitialized.
+ */
+void kmsan_slab_alloc(struct kmem_cache *s, void *object, gfp_t flags);
+
+/**
+ * kmsan_slab_free() - Notify KMSAN about a slab deallocation.
+ * @s: slab cache the object belongs to.
+ * @object: object pointer.
+ *
+ * KMSAN marks the freed object as uninitialized.
+ */
+void kmsan_slab_free(struct kmem_cache *s, void *object);
+
+/**
+ * kmsan_kmalloc_large() - Notify KMSAN about a large slab allocation.
+ * @ptr: object pointer.
+ * @size: object size.
+ * @flags: GFP flags passed to the allocator.
+ *
+ * Similar to kmsan_slab_alloc(), but for large allocations.
+ */
+void kmsan_kmalloc_large(const void *ptr, size_t size, gfp_t flags);
+
+/**
+ * kmsan_kfree_large() - Notify KMSAN about a large slab deallocation.
+ * @ptr: object pointer.
+ *
+ * Similar to kmsan_slab_free(), but for large allocations.
+ */
+void kmsan_kfree_large(const void *ptr);
+
+/**
  * kmsan_map_kernel_range_noflush() - Notify KMSAN about a vmap.
  * @start: start of vmapped range.
  * @end: end of vmapped range.
@@ -114,6 +153,24 @@ static inline void kmsan_copy_page_meta(
 {
 }
 
+static inline void kmsan_slab_alloc(struct kmem_cache *s, void *object,
+				    gfp_t flags)
+{
+}
+
+static inline void kmsan_slab_free(struct kmem_cache *s, void *object)
+{
+}
+
+static inline void kmsan_kmalloc_large(const void *ptr, size_t size,
+				       gfp_t flags)
+{
+}
+
+static inline void kmsan_kfree_large(const void *ptr)
+{
+}
+
 static inline void kmsan_vmap_pages_range_noflush(unsigned long start,
 						  unsigned long end,
 						  pgprot_t prot,
--- a/mm/kmsan/hooks.c~mm-kmsan-call-kmsan-hooks-from-slub-code
+++ a/mm/kmsan/hooks.c
@@ -27,6 +27,82 @@
  * skipping effects of functions like memset() inside instrumented code.
  */
 
+void kmsan_slab_alloc(struct kmem_cache *s, void *object, gfp_t flags)
+{
+	if (unlikely(object == NULL))
+		return;
+	if (!kmsan_enabled || kmsan_in_runtime())
+		return;
+	/*
+	 * There's a ctor or this is an RCU cache - do nothing. The memory
+	 * status hasn't changed since last use.
+	 */
+	if (s->ctor || (s->flags & SLAB_TYPESAFE_BY_RCU))
+		return;
+
+	kmsan_enter_runtime();
+	if (flags & __GFP_ZERO)
+		kmsan_internal_unpoison_memory(object, s->object_size,
+					       KMSAN_POISON_CHECK);
+	else
+		kmsan_internal_poison_memory(object, s->object_size, flags,
+					     KMSAN_POISON_CHECK);
+	kmsan_leave_runtime();
+}
+
+void kmsan_slab_free(struct kmem_cache *s, void *object)
+{
+	if (!kmsan_enabled || kmsan_in_runtime())
+		return;
+
+	/* RCU slabs could be legally used after free within the RCU period */
+	if (unlikely(s->flags & (SLAB_TYPESAFE_BY_RCU | SLAB_POISON)))
+		return;
+	/*
+	 * If there's a constructor, freed memory must remain in the same state
+	 * until the next allocation. We cannot save its state to detect
+	 * use-after-free bugs, instead we just keep it unpoisoned.
+	 */
+	if (s->ctor)
+		return;
+	kmsan_enter_runtime();
+	kmsan_internal_poison_memory(object, s->object_size, GFP_KERNEL,
+				     KMSAN_POISON_CHECK | KMSAN_POISON_FREE);
+	kmsan_leave_runtime();
+}
+
+void kmsan_kmalloc_large(const void *ptr, size_t size, gfp_t flags)
+{
+	if (unlikely(ptr == NULL))
+		return;
+	if (!kmsan_enabled || kmsan_in_runtime())
+		return;
+	kmsan_enter_runtime();
+	if (flags & __GFP_ZERO)
+		kmsan_internal_unpoison_memory((void *)ptr, size,
+					       /*checked*/ true);
+	else
+		kmsan_internal_poison_memory((void *)ptr, size, flags,
+					     KMSAN_POISON_CHECK);
+	kmsan_leave_runtime();
+}
+
+void kmsan_kfree_large(const void *ptr)
+{
+	struct page *page;
+
+	if (!kmsan_enabled || kmsan_in_runtime())
+		return;
+	kmsan_enter_runtime();
+	page = virt_to_head_page((void *)ptr);
+	KMSAN_WARN_ON(ptr != page_address(page));
+	kmsan_internal_poison_memory((void *)ptr,
+				     PAGE_SIZE << compound_order(page),
+				     GFP_KERNEL,
+				     KMSAN_POISON_CHECK | KMSAN_POISON_FREE);
+	kmsan_leave_runtime();
+}
+
 static unsigned long vmalloc_shadow(unsigned long addr)
 {
 	return (unsigned long)kmsan_get_metadata((void *)addr,
--- a/mm/slab.h~mm-kmsan-call-kmsan-hooks-from-slub-code
+++ a/mm/slab.h
@@ -729,6 +729,7 @@ static inline void slab_post_alloc_hook(
 			memset(p[i], 0, s->object_size);
 		kmemleak_alloc_recursive(p[i], s->object_size, 1,
 					 s->flags, flags);
+		kmsan_slab_alloc(s, p[i], flags);
 	}
 
 	memcg_slab_post_alloc_hook(s, objcg, flags, size, p);
--- a/mm/slub.c~mm-kmsan-call-kmsan-hooks-from-slub-code
+++ a/mm/slub.c
@@ -22,6 +22,7 @@
 #include <linux/proc_fs.h>
 #include <linux/seq_file.h>
 #include <linux/kasan.h>
+#include <linux/kmsan.h>
 #include <linux/cpu.h>
 #include <linux/cpuset.h>
 #include <linux/mempolicy.h>
@@ -359,6 +360,17 @@ static void prefetch_freepointer(const s
 	prefetchw(object + s->offset);
 }
 
+/*
+ * When running under KMSAN, get_freepointer_safe() may return an uninitialized
+ * pointer value in the case the current thread loses the race for the next
+ * memory chunk in the freelist. In that case this_cpu_cmpxchg_double() in
+ * slab_alloc_node() will fail, so the uninitialized value won't be used, but
+ * KMSAN will still check all arguments of cmpxchg because of imperfect
+ * handling of inline assembly.
+ * To work around this problem, we apply __no_kmsan_checks to ensure that
+ * get_freepointer_safe() returns initialized memory.
+ */
+__no_kmsan_checks
 static inline void *get_freepointer_safe(struct kmem_cache *s, void *object)
 {
 	unsigned long freepointer_addr;
@@ -1709,6 +1721,7 @@ static inline void *kmalloc_large_node_h
 	ptr = kasan_kmalloc_large(ptr, size, flags);
 	/* As ptr might get tagged, call kmemleak hook after KASAN. */
 	kmemleak_alloc(ptr, size, 1, flags);
+	kmsan_kmalloc_large(ptr, size, flags);
 	return ptr;
 }
 
@@ -1716,12 +1729,14 @@ static __always_inline void kfree_hook(v
 {
 	kmemleak_free(x);
 	kasan_kfree_large(x);
+	kmsan_kfree_large(x);
 }
 
 static __always_inline bool slab_free_hook(struct kmem_cache *s,
 						void *x, bool init)
 {
 	kmemleak_free_recursive(x, s->flags);
+	kmsan_slab_free(s, x);
 
 	debug_check_no_locks_freed(x, s->object_size);
 
@@ -5941,6 +5956,7 @@ static char *create_unique_id(struct kme
 	p += sprintf(p, "%07u", s->size);
 
 	BUG_ON(p > name + ID_STR_LENGTH - 1);
+	kmsan_unpoison_memory(name, p - name);
 	return name;
 }
 
@@ -6042,6 +6058,7 @@ static int sysfs_slab_alias(struct kmem_
 	al->name = name;
 	al->next = alias_list;
 	alias_list = al;
+	kmsan_unpoison_memory(al, sizeof(*al));
 	return 0;
 }
 
_

Patches currently in -mm which might be from glider@xxxxxxxxxx are

stackdepot-reserve-5-extra-bits-in-depot_stack_handle_t.patch
instrumentedh-allow-instrumenting-both-sides-of-copy_from_user.patch
x86-asm-instrument-usercopy-in-get_user-and-put_user.patch
asm-generic-instrument-usercopy-in-cacheflushh.patch
kmsan-add-rest-documentation.patch
kmsan-introduce-__no_sanitize_memory-and-__no_kmsan_checks.patch
kmsan-mark-noinstr-as-__no_sanitize_memory.patch
x86-kmsan-pgtable-reduce-vmalloc-space.patch
libnvdimm-pfn_dev-increase-max_struct_page_size.patch
kmsan-add-kmsan-runtime-core.patch
kmsan-disable-instrumentation-of-unsupported-common-kernel-code.patch
maintainers-add-entry-for-kmsan.patch
mm-kmsan-maintain-kmsan-metadata-for-page-operations.patch
mm-kmsan-call-kmsan-hooks-from-slub-code.patch
kmsan-handle-task-creation-and-exiting.patch
init-kmsan-call-kmsan-initialization-routines.patch
instrumentedh-add-kmsan-support.patch
kmsan-unpoison-tlb-in-arch_tlb_gather_mmu.patch
kmsan-add-iomap-support.patch
input-libps2-mark-data-received-in-__ps2_command-as-initialized.patch
dma-kmsan-unpoison-dma-mappings.patch
virtio-kmsan-check-unpoison-scatterlist-in-vring_map_one_sg.patch
kmsan-handle-memory-sent-to-from-usb.patch
kmsan-add-tests-for-kmsan.patch
kmsan-disable-strscpy-optimization-under-kmsan.patch
crypto-kmsan-disable-accelerated-configs-under-kmsan.patch
kmsan-disable-physical-page-merging-in-biovec.patch
block-kmsan-skip-bio-block-merging-logic-for-kmsan.patch
kcov-kmsan-unpoison-area-list-in-kcov_remote_area_put.patch
security-kmsan-fix-interoperability-with-auto-initialization.patch
objtool-kmsan-list-kmsan-api-functions-as-uaccess-safe.patch
x86-kmsan-disable-instrumentation-of-unsupported-code.patch
x86-kmsan-skip-shadow-checks-in-__switch_to.patch
x86-kmsan-handle-open-coded-assembly-in-lib-iomemc.patch
x86-kmsan-use-__msan_-string-functions-where-possible.patch
x86-kmsan-sync-metadata-pages-on-page-fault.patch
x86-kasan-kmsan-support-config_generic_csum-on-x86-enable-it-for-kasan-kmsan.patch
x86-fs-kmsan-disable-config_dcache_word_access.patch
entry-kmsan-introduce-kmsan_unpoison_entry_regs.patch
bpf-kmsan-initialize-bpf-registers-with-zeroes.patch
mm-fs-initialize-fsdata-passed-to-write_begin-write_end-interface.patch
x86-kmsan-enable-kmsan-builds-for-x86.patch
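
For context, a minimal sketch of the allocation pattern the new hooks cover
(hypothetical code, not part of the patch; the struct and function names are
invented for illustration): an object allocated with plain kmalloc() is left
poisoned by kmsan_slab_alloc(), so branching on a never-written field becomes
reportable, while __GFP_ZERO (or kzalloc()) unpoisons it up front, and
kmsan_slab_free() re-poisons the memory when it is freed.

#include <linux/slab.h>
#include <linux/errno.h>

/* Hypothetical structure, for illustration only. */
struct kmsan_demo {
	int flag;
	int value;
};

static int kmsan_demo_read(void)
{
	/* No __GFP_ZERO: kmsan_slab_alloc() leaves the object poisoned. */
	struct kmsan_demo *d = kmalloc(sizeof(*d), GFP_KERNEL);
	int ret = 0;

	if (!d)
		return -ENOMEM;
	/* d->flag was never written, so KMSAN reports an uninit-value use. */
	if (d->flag)
		ret = d->value;
	/* kmsan_slab_free() re-poisons the object on free. */
	kfree(d);
	return ret;
}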